2016 SMPTE Tech Conference Overview
If you’ve never been to SMPTE’s Annual Technology Conference, I’ll set the scene for you. It takes place in central Hollywood, perched above the most touristy stretch of the Walk of Fame and the Chinese Theatre, littered with visitors and people dressed as superheroes and Transformers hustling for money. But don’t let the glamorous location fool you: once you ascend the escalator from a parking garage where you definitely wouldn’t want to be when the next big earthquake hits (which is apparently sooner rather than later) and help out a guy dressed as Optimus Prime who happens to be stuck in a door (yes, that actually happened), you find yourself where all great presentations in our industry are made: in hotel ballrooms arranged classroom-style.
SMPTE, whose lengthy acronym stands for the Society of Motion Picture and Television Engineers, puts on this annual conference so people can present papers to their peers on challenges and discoveries in the post production industry, proposing actions that need to be taken or new standards that need to be implemented. Although the presentations range from very engaging to especially math-oriented, the overall agenda gives a sense of the shape of the industry, and of what is to come. This year, presentations covered everything from quality in virtual reality to automation of file exchanges, wrappers, and storage challenges and solutions. The big-picture challenge across them all: What do we do with all this data?
I attended the conference for a couple of days and spent a few more reading over the published papers, and found that the topics most relevant and interesting to our little part of the industry happened to be HFR, HDR, and resolution. The papers touch on many other issues and concerns, but these were certainly the most interesting for the near future of dailies and workflow. I’ve summarized the highlights of these, as well as the highlights of another aspect of the conference: the future of the industry. If you’ve ever attended any conference in our industry, you might notice one thing above all: it’s not very diverse, is it? SMPTE’s Tech Conference might be even less diverse on the surface, especially by nature of being engineering-oriented, unless you count the variety of Hawaiian shirts and suits in the crowd. But SMPTE is recognizing this issue and focusing on it more and more, holding panels on reaching young people and creating programs to allow them to thrive as entertainment engineers. So the overarching question is not only what do we do with all this data, but also: who is going to do it?
High Frame Rate (HFR)
The discussion of HFR has been around longer than you might expect, and much of it isn’t even about how HFR can be used creatively. One case for HFR acquisition, made by RealD, was not about HFR distribution but about greater control over frame rate conversions for delivery. Frame rate conversions are difficult because they require complex algorithms or else result in really ugly motion distortion. By shooting at high frame rates, you gain more control over the temporal aliasing (squidgy-looking playback) introduced by cameras. It’s a very similar approach to acquiring in 4K and delivering in HD: you aren’t necessarily acquiring 4K in order to have a lovely 4K image, but because it gives you more data and more image to use in post production. Starting with a higher resolution (as opposed to, say, HDV) will result in a better-looking compressed image, but not without a lot of careful consideration of the overall post production workflow.
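To make the conversion math concrete, here’s a toy sketch (my own illustration, not RealD’s pipeline, and ignoring the motion-compensated interpolation real converters use): a 120 fps master drops to 24 fps by simply keeping every fifth frame, while a 30 fps source has no clean relationship to 24 and forces you to blend or synthesize frames, which is where the ugly motion artifacts come from.

```python
# Toy illustration (hypothetical, not a production converter): an even
# multiple of the target rate decimates cleanly, a non-multiple does not.

def convert_120_to_24(frames):
    """120 -> 24 fps: keep every 5th frame; every output is an original image."""
    return frames[::5]

def convert_30_to_24_naive(frames):
    """30 -> 24 fps: output frames land between source frames, so this naive
    version blends neighbors, which smears motion (real converters use
    motion estimation instead, at much greater cost)."""
    out = []
    for i in range(int(len(frames) * 24 / 30)):
        src = i * 30 / 24                       # fractional source position
        a = int(src)
        b = min(a + 1, len(frames) - 1)
        weight = src - a
        out.append(("blend", frames[a], frames[b], round(weight, 2)))
    return out

if __name__ == "__main__":
    one_second_hfr = list(range(120))            # stand-ins for 120 fps frames
    print(len(convert_120_to_24(one_second_hfr)))        # 24 untouched frames
    one_second_30 = list(range(30))
    print(convert_30_to_24_naive(one_second_30)[:3])     # blended frames
```

The point isn’t the code itself; it’s that the extra temporal samples let you choose how to get to any delivery rate instead of being stuck with whatever blend the conversion forces on you.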
This is even more important with HFR, where lots of software may have trouble supporting high frame rates, and some cameras can’t acquire the target resolution at a high frame rate and must drop resolution to accommodate the additional data. With the release of Ang Lee’s HFR 4K 3D film “Billy Lynn’s Long Halftime Walk” (which many industry peers say actually works incredibly well to serve the story, even if reviews of the story itself are mixed), I would expect more mad-scientist filmmakers to dabble in HFR pretty soon.
High Dynamic Range (HDR)
HDR displays have a larger range of tone and color reproduction than standard dynamic range displays, with a wider color palette — and a wider range of issues and concerns. One of the biggest challenges is that HDR images won’t necessarily look good on other displays the way SDR images will look passable across different displays. It’s expensive and time-consuming to deliver two versions of a product because so much time is spent manually fixing problems that emerge when going between SDR and HDR, so FilmLight has built a new color correction system that no longer relies on SDR tools, making it possible to produce both SDR and HDR deliverables.
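The papers didn’t hinge on any particular transfer function, but for a rough sense of what “a larger range of tone reproduction” means in numbers: most HDR displays use the SMPTE ST 2084 (PQ) curve, which maps absolute luminance up to 10,000 nits into a normalized signal, versus SDR’s roughly 100-nit reference white. Below is a minimal sketch of the PQ encode; the constants come straight from the standard, everything else is just illustrative.

```python
# Sketch of the SMPTE ST 2084 (PQ) encode: absolute luminance in nits
# -> normalized 0..1 signal. The constants are defined by the standard.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(nits: float) -> float:
    """Map display luminance (0..10,000 nits) to a PQ signal value."""
    y = max(0.0, min(nits / 10000.0, 1.0)) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

if __name__ == "__main__":
    for level in (0.1, 100, 1000, 4000, 10000):   # shadows, SDR white, HDR highlights
        print(f"{level:>7} nits -> PQ signal {pq_encode(level):.3f}")
```

Running it shows that SDR’s 100-nit white lands only about halfway up the PQ signal range; everything above that is headroom an SDR grade never had to account for, which is a big part of why the two deliverables can’t just be copies of each other.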
Whichever tools are available, learning to work with HDR at all is a challenge for colorists and will be for some time. Obviously, there are many technical challenges for workflow nerds like those at Bling to sort out, but there’s also an unlearning and re-learning of the storytelling process when you have a lot more dynamic range at your fingertips. You have so much more range — but what is really right to serve the creative intent of the story? And as the tools get better and a colorist works on content created by someone else, is it okay to take advantage of advances in technology that weren’t available when that look was established?
There’s also the matter of reality — we have the ability to make something more “real,” which is rarely what we actually want from our moving images. Besides, what is really real when our reference point has always been a perfect image created on a carefully lit set? One colorist from Dolby Labs, Shane Mario Ruggieri, discussed his own research into what’s really real and what skin (and people in general) looks like in various real-world circumstances (bright sun, dark shade), and how his eyes physically reacted to looking at those scenes. He took that research into his color bay to try to simulate real images and real physical reactions in his HDR grading, and he discovered that the most interesting aspect of HDR was dynamic range across a whole scene, rather than within a shot. In other words, HDR’s emotional effect on the viewer, through the physical response to changes across the scene, was more engaging than simply having the headroom to lift different parts of the image without clipping. There’s much to understand not only about how to tell a story with color, but about how we should be reacting to these new dynamic ranges as creative professionals.
4K and Beyond
Many people say that we’re reaching a point of diminishing returns when it comes to resolution. Is all the extra work and processing worth it for little perceived benefit, or is the benefit only perceived as minor because we’re doing it all wrong? As Drs. Cooper and Farrell stated in their paper, “Historically, most arguments about what is technically good enough have proved to be false, as technology improves and we are able to do better.” When we’re figuring out what resolutions we really need, we have to drill down to what that means: are we looking for something natural or realistic, and is what we perceive to be realistic actually realistic, or just a result of what we’re used to viewing on screens and hearing from speakers?
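For a sense of the standard “good enough” arithmetic (my own back-of-the-envelope, not from the conference papers): a viewer with roughly 20/20 acuity resolves about one arcminute, or around 60 pixels per degree, so whether extra pixels are visible at all depends on screen size and seating distance.

```python
# Back-of-the-envelope check (illustrative numbers, not from the papers):
# angular pixel density for a given screen width and viewing distance,
# compared against ~60 pixels/degree for roughly 20/20 acuity.
import math

def pixels_per_degree(horizontal_pixels: int, screen_width_m: float,
                      viewing_distance_m: float) -> float:
    """Pixels per degree of visual angle at the center of the screen."""
    fov_deg = 2 * math.degrees(math.atan(screen_width_m / (2 * viewing_distance_m)))
    return horizontal_pixels / fov_deg

if __name__ == "__main__":
    width_m, distance_m = 1.43, 2.5      # a ~65" 16:9 screen, couch ~2.5 m away
    for name, pixels in (("HD", 1920), ("UHD/4K", 3840), ("8K", 7680)):
        ppd = pixels_per_degree(pixels, width_m, distance_m)
        print(f"{name:>6}: {ppd:.0f} px/deg")
```

At those assumed living-room numbers, HD already sits near the 60 px/deg threshold, which is exactly the kind of “good enough” argument the Cooper and Farrell quote warns has historically turned out to be wrong.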
Perception is difficult to estimate and understand — not even neuroscientists really grasp our audio-visual experience of the world! There’s a lot of natural variation in these experiences, and it’s very subjective and contextual. You might think compression would be a huge problem for high resolutions, but in fact it may be what enables those high resolutions to increase perceived quality. There’s a lot we don’t yet understand about how we’ll interact with 4K, 8K and beyond, but it certainly seems true that we can play a lot more tricks on the mind with more bits to work with. Plus, there are deeper aspects of the image in a frame — like sharpness — that we don’t usually count as part of visual acuity.
The Future of the Industry
If you’re reading this and you’re under 30 or so, and you haven’t heard of SMPTE or don’t really understand its importance in the industry, you’re not alone. It turns out the organization has had some major trouble attracting and retaining young people, and in helping other companies do the same. The name itself doesn’t really sound like anything a young creative professional should be interested in — I mean, it has “engineers” right in the name. But the truth is that so many people in the industry who attend these conferences (or should attend, and should be members) are not traditional engineers. Many of them, like the workflow supervisors at Bling, side-stepped into technology from more traditionally “creative” roles. There are many ways to contribute to the industry, and to an organization like SMPTE, that don’t require being a mathematician or a data scientist.
Part of attracting younger people into these jobs is making them aware the jobs exist in the first place. SMPTE held a session with 20-somethings working at Dolby, Netflix and NBC discussing their work and lives so far, in front of an audience full of participants in a new Young Entertainment Professionals program that seeks to connect emerging talent in post to opportunities and education. This outreach is still new for SMPTE, and they’ve got a lot to learn. But they’re listening, and they’re trying to make the case for a more accessible and inclusive environment to ensure the organization’s 100-year history continues long into the future.
Whether it’s central Hollywood or the middle of Saskatchewan, the presentations made at SMPTE’s annual conference ripple down through the entire industry. Some of the standards presented or proposed will die on the vine. Others will grow into a vital part of our daily lives. Either way, it’s interesting to see where they begin and to take the pulse of where the industry is heading.