Job-chaining in Compressor

At my job, I commonly have to encode files for the web. Recently, I’ve added a new deliverable specification to each project I finish: three versions of the file for a streaming video library that will dynamically choose which version to play based on the viewer’s internet connection. I typically edit on a ProRes 422 timeline and export a self-contained QuickTime file as my master, then produce whatever else I need from that. This is a pretty standard 720p/29.97 fps file.

The three versions have to be converted to H.264 at 15 fps, at 200, 400, and 600 kbps, in various frame sizes at 360p and below. Doing the conversion from ProRes to H.264 with Frame Controls on takes for-ev-er – over four hours for a two-minute video.
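Done the slow way, that's one monolithic conversion per deliverable. Sketched in ffmpeg terms (I use Compressor, not ffmpeg, so the file names, the 640x360 frame size, and the filter choices below are my assumptions), it's a single pass that deinterlaces, resizes, retimes, and compresses all at once:

```shell
# Rough single-pass equivalent of one deliverable, sketched with ffmpeg.
# (The actual tool is Compressor; file names, the 640x360 size, and the
# filter choices here are assumptions.) The command is echoed rather than
# run, since it's only meant to show the shape of the job.
SRC="master_prores.mov"

# Deinterlace, scale to 360p, drop to 15 fps, and encode H.264 at 600 kbps,
# all in one expensive ProRes-to-H.264 pass:
CMD="ffmpeg -i $SRC -vf yadif,scale=640:360,fps=15 -c:v libx264 -b:v 600k out_600k.mp4"
echo "$CMD"
```

Repeat that three times, once per bitrate, and every run redoes the expensive filtering work.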

I’d been thinking about how to make this process faster. I set up a virtual cluster, which works pretty well on some videos, but ProRes to H.264 with all those changes just burns everything up.

So today I tried out job chaining in Compressor, and it did exactly what I needed. Yay!

Job chaining basically takes the output of one job and uses it as the source for another. It streamlines the encoding process and lets different jobs within a batch handle different tasks for quality or speed.

I’m still investigating it, but I think in my case, running the steps as separate batches would produce exactly the same result – in fact, that’s what I was going to try before I realized chaining could do it. I’m not sure whether chaining shares any processing between jobs that would speed things up within one batch, but not having to manually take the output, make it the source of a new batch, and submit it separately is very handy. You could also set up long chains with tons of different outputs if you wanted to use it that way. I’m very interested in how chaining can hand off different tasks between jobs for quality or speed, and I’ll be looking into that further.

What I did today was set up a ProRes job in a new batch and drop my master file into it. I made all the changes I needed – the resizing, progressive frames, the frame rate change – everything except the actual change of bitrate. Then I did “New Target with Source Output” and set up an output that would crunch it to 600 kbps. I duplicated this for 200 and 400 kbps, made the bitrate adjustments, and let it run.
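In ffmpeg terms, the chain above would look something like the sketch below: one heavy ProRes-to-ProRes pass that does the deinterlace/resize/retime once, then three cheap bitrate-only H.264 passes off the intermediate. (Again, Compressor is the actual tool; the file names, the 640x360 size, and the prores_ks encoder choice are my assumptions. The commands are echoed as a dry run.)

```shell
# Two-stage version of the same job: do the heavy lifting once in ProRes,
# then fan out to three cheap H.264 encodes. Echoed as a dry run; names,
# frame size, and filter choices are assumptions, not from the post.
SRC="master_prores.mov"
MID="intermediate_prores.mov"

# Stage 1: deinterlace, resize, retime -- ProRes in, ProRes out
STAGE1="ffmpeg -i $SRC -vf yadif,scale=640:360,fps=15 -c:v prores_ks $MID"
echo "$STAGE1"

# Stage 2: one bitrate-only H.264 pass per target, reading the intermediate
STAGE2=""
for RATE in 200 400 600; do
  STAGE2="$STAGE2
ffmpeg -i $MID -c:v libx264 -b:v ${RATE}k out_${RATE}k.mp4"
done
echo "$STAGE2"
```

Compressor’s chaining does that handoff automatically; the point of the sketch is just that the expensive filtering runs once instead of three times.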

The first time it errored, and I discovered you can’t use chaining and clusters at the same time. I ran it through again without clusters.

The second time, I forgot I needed the output to be progressive. It finished in under ten minutes, since it didn’t have to make anything progressive.

The third time, I got all the settings correct, including the change to progressive. It finished in under 30 minutes.

Previously, one file would take an entire workday; now it’s done in half an hour. I’m glad that instead of just wasting all that time, I thought about how to improve my workflow. ProRes to ProRes is a lot easier for Compressor to handle than ProRes to H.264. Minimizing the processing in the latter to JUST the compression, and leaving the heavy lifting of Frame Controls to the former, really improved the process.

I now need to investigate my editing process a little further. If I could edit and output a progressive master, I could cut my encoding time down even more. I currently shoot 1080i60, and I basically always output for the web, so some workflow tweaking is definitely in order.

So basically, to set up a job chain in a batch: add a video to your batch and apply settings to it. Then click on those settings and select “New Target with Source Output.” I’m not sure if there’s a limit to the number of links in a chain, but you can mix codecs between links.

tl;dr: I solved a problem with a possibly obvious solution, using a process that doesn’t seem to be widely used but is very efficient. Please inform me if I’m doing anything dumb!

This entry was posted in compression, compressor, final cut pro, final cut studio, job-chaining, learning, post-production.