NOTE: This article was first published in my blog on Monday, March 31, 2014.
This morning, I was invited to a technology briefing at Dolby Labs to look at their latest system for improving image quality: Dolby Vision.
In a nutshell, it expands the dynamic range of an image. The benefit, based on the demo I saw today, is that highlights just explode off the screen.
REALITY IS PRETTY COMPLEX
While the buzz these days is all about higher-resolution images and faster frame rates (what Dolby calls “more pixels” and “faster pixels”), the real issue is that the gray-scale range of video today is too limited and doesn’t accurately represent reality.
Come to find out, “reality” is a pretty complex thing. The sun generates about 1.6 billion nits (a nit is a measure of luminance: one candela per square meter). Starlight generates about 0.00001 nits. Your eye can easily see both, though not at the same time. That’s a pretty incredible range, spanning roughly fourteen orders of magnitude, with many thousands of incremental differences in brightness along the way.
The problem is that, today, most video formats compress images into an 8-bit gray-scale straitjacket, which means they contain only 256 shades of gray. AVCHD, DV, and H.264 are all 8-bit formats. 10-bit video formats, such as ProRes, provide up to 1,024 gray-scale values. Better, but nowhere close to what the human eye can see.
Expanding to 12-bit video, which some higher-end cameras shoot, yields 4,096 gray-scale values. At the high end, 16-bit video provides 65,536 gray-scale values, which comes much closer to emulating what the eye can see. In fact, based on what I saw in this demo, even standard HD looks spectacular when using the extended dynamic range provided by Dolby Vision.
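If you want to see where those numbers come from, here is a quick back-of-envelope calculation in Python. The nit figures are simply the ones quoted above, and the gray-scale counts are just 2 raised to the bit depth; nothing here is specific to Dolby's implementation.

```python
# Back-of-envelope: real-world brightness range vs. gray-scale values per bit depth.
sun_nits = 1.6e9        # approximate luminance of the sun (quoted above)
starlight_nits = 1e-5   # approximate luminance of starlight (quoted above)

print(f"Real-world range: about {sun_nits / starlight_nits:.1e} to 1")

for bits in (8, 10, 12, 16):
    print(f"{bits:2d}-bit video: {2 ** bits:,} gray-scale values")
```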
NOTE: The ability to improve image quality by expanding dynamic range is a very good reason to shoot video formats that record at higher bit depths; image quality improves as bit depth increases. It is also a good reason to record to an external recorder, rather than relying on the compression chip inside your camera. “Future-proofing” your images involves some combination of shooting higher resolutions, faster frame rates, and/or higher bit depths. You may not be able to afford all three, but when starting a project, consider which of them you can do.
CHANGING THE PROCESS
Today’s cameras capture FAR more dynamic range than any of our current codecs support. So much so that Dolby was able to go back to older film footage, rescan the negatives, and color grade them using their new system (shown running on a FilmLight BaseLight system), and the old footage looked amazing.
The problem is that our entire workflow, from production through post to compression and distribution, is locked into 8-bit video, which throws away much of the dynamic range in an image. And the existing infrastructure is too complex to simply replace with all-new gear.
This, then, leads us to Dolby Vision: a second compression stream that piggy-backs on top of an existing H.264 file. Systems enabled with Dolby technology play the media with extended dynamic range, while sets that are not Dolby-enabled simply play the original encode. An added attraction is that while the dynamic range improves by close to five orders of magnitude, piggy-backing the second stream adds only about 25% to the size of the original H.264 file.
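I don't know the details of Dolby's bitstream, but conceptually the backwards compatibility works something like the sketch below. The names, structure, and file size are mine, purely for illustration; only the base-plus-enhancement idea and the 25% figure come from the briefing.

```python
# Conceptual sketch of a backwards-compatible two-layer stream.
# This is NOT Dolby's actual format or API; names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class LayeredStream:
    base_layer: bytes          # standard 8-bit H.264 encode, playable on any set
    enhancement_layer: bytes   # extra data a Dolby-enabled decoder uses for HDR

def play(stream: LayeredStream, dolby_enabled: bool) -> str:
    # Every player decodes the base layer; only enabled sets apply the enhancement.
    if dolby_enabled and stream.enhancement_layer:
        return "extended-dynamic-range picture (base + enhancement)"
    return "standard-dynamic-range picture (base layer only)"

# The file-size claim from the briefing: the second stream adds roughly 25%.
base_size_gb = 4.0                 # hypothetical H.264 file
print(base_size_gb * 1.25)         # 5.0 GB with the enhancement layer attached
```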
This is VERY similar to what Dolby did years ago with audio: those who had Dolby decoders heard much higher-quality audio than those who didn’t.
HERE’S WHY WE CARE
Broadcast and cinema will ultimately adopt this new format, if for no other reason than that the improvement in image quality is striking, and supporting Dolby Vision requires far fewer resources than, say, 4K images. But the conversion will take years.
A few weeks ago, I was talking with Michael Cioni, CEO of Light Iron, who said that the fastest and easiest way to bring 4K to the home is via Amazon, Netflix, and Google, where high-res images are piped directly to viewers over the Internet, bypassing traditional distribution.
Today, I learned that Dolby has relationships with Amazon Instant Video, VUDU, Netflix, and Xbox Video to support the new technology, plus monitor manufacturers Sharp and TCL Multimedia and a variety of unannounced chip developers that create the firmware necessary to turn the enhanced video stream into high-dynamic-range images. It is Dolby’s hope to have this technology on the market by early 2015.
The missing piece right now is support for the technology in editing and compression software. Scanning old movies is fairly easy; editing current projects is not. At a minimum, we need support for 12-bit uncompressed video files, which means we are looking at huge media files that push our storage even harder than editing compressed 4K images.
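To get a feel for why storage becomes the bottleneck, here is a rough calculation. The frame size, frame rate, and three-samples-per-pixel assumption are mine, chosen only to illustrate the order of magnitude; real recording formats will vary.

```python
# Rough data rate for 12-bit uncompressed HD (illustrative assumptions, not a spec).
width, height = 1920, 1080
samples_per_pixel = 3       # e.g., full RGB or 4:4:4
bits_per_sample = 12
fps = 24

bytes_per_frame = width * height * samples_per_pixel * bits_per_sample / 8
mb_per_second = bytes_per_frame * fps / 1e6

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")        # ~9.3 MB
print(f"{mb_per_second:.0f} MB per second")               # ~224 MB/s
print(f"{mb_per_second * 3600 / 1000:.0f} GB per hour")   # ~806 GB per hour
```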
If you are going to NAB next week, make a point to stop by the Dolby booth (SU1702) and take a look at their demo. Even standard HD images benefit from this technology.
Dolby Vision is not yet ready for the mass market. But it gives us, as media creators, a major new tool for making our images look stunning, with distribution through a channel we already know: the web. And, once you see these new images, you’ll have a hard time looking at traditional HD again.
Dolby Vision is something I’ll be watching closely in the future.
Learn more about Dolby Vision here: http://www.dolby.com/us/en/professional/technology/home-theater/dolby-vision.html