I was reading a thread this morning on LinkedIn where a video editor was lamenting that DVDs and Blu-rays are rapidly dying, with nothing similar to take their place – except posting online and delivering media files on thumb drives. At which point, he asked the question: “What codecs deliver the best results for files placed on thumb drives?”
NOTE: I use the phrase “thumb drive” to mean a small, non-powered thingy that plugs into the USB port of a computer.
I’ve written a lot about codecs over the years because they are central to our ability to shoot, edit and distribute media. Just a few of these articles include:
But, our industry continues to evolve, so it’s time to revisit this topic. (Oh, and you’ll find the answer to this question in the Summary, at the end.)
DEFINITIONS
A Codec (Compressor/Decompressor) is a mathematical algorithm that converts reality (sound and light) into binary numbers that can be stored in a computer; that’s the “Compressor” part. Then, it converts all those binary numbers back into images and sound that we can see and hear; that’s the “Decompressor” part.
Codecs exist for still images, audio and video files. Popular still image codecs include:

* JPEG
* PNG
* TIFF

Popular audio codecs include:

* AAC
* MP3
* Linear PCM (the uncompressed audio stored in AIF and WAV files)

Popular video codecs include:

* H.264
* H.265 (HEVC)
* Apple ProRes
* Avid DNxHD
* GoPro Cineform
NOTE: Some formats, like QuickTime, MXF and MPEG-4, are not actual codecs, but containers that hold a variety of different codecs. For example, a QuickTime movie can hold a video file using the H.264 codec and an audio file using the AIF format.
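You can see the container/codec distinction for yourself with ffprobe, a tool that ships with the free ffmpeg package: it reports the container format and the codec of each stream inside it. Here’s a rough sketch in Python; the file name is a placeholder:

```python
# Sketch: ask ffprobe what's inside a movie file.
# Assumes ffmpeg/ffprobe is installed; "movie.mov" is a placeholder.
import subprocess

report = subprocess.run(
    ["ffprobe", "-v", "error",
     "-show_entries", "format=format_name:stream=codec_type,codec_name",
     "-of", "default=noprint_wrappers=1", "movie.mov"],
    capture_output=True, text=True, check=True,
).stdout
print(report)

# Typical output for a QuickTime movie holding H.264 video and AAC audio:
#   codec_name=h264
#   codec_type=video
#   codec_name=aac
#   codec_type=audio
#   format_name=mov,mp4,m4a,3gp,3g2,mj2
```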
I used to enjoy keeping track of how many audio and video codecs there were. But, frankly, I gave up when the number shot past 400. While the industry is not generating as many codecs as it did in the heyday of the SD-to-HD transition, hardly a month goes by without some manufacturer or developer announcing a new codec.
It is safe to say that there are “a lot!”
NOTE: As a sidelight, codecs are often divided into “lossy” and “lossless.” A lossless codec preserves all the original image quality, so that when an image is restored it is indistinguishable from the original. High-bit-depth RAW files are examples of mostly “lossless” recording. (I don’t think there is a perfectly lossless video codec; the files would just be too big.)

A lossy codec “throws out” visual information as part of the compression process, which means the compressed image does not have the original quality of the source. Virtually all video codecs are lossy to some degree; however, the amount of loss varies by codec.
ONE MORE CORE CONCEPT
This one is really important: all media, in order to be stored on a digital device, must use a codec. Even more important, virtually all media stored on a digital device is compressed.
Some video, like that shot on an iPhone, is significantly compressed. Other video, like that shot on a RED or high-end Arri camera, is compressed, but not to the same degree.
The reason for all this compression is that video files are HUGE, and engineers are always looking for ways to make them smaller without sacrificing too much quality. For example, a single uncompressed 1080p frame can, depending upon bit depth, exceed 12 MB. That works out to more than 370 MB PER SECOND to play back a 30 fps sequence! As resolutions expand, these numbers only get worse.
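If you want to check the math, here’s a quick back-of-the-envelope sketch in Python. It assumes full RGB frames with no chroma subsampling, so real-world formats will differ somewhat:

```python
# Back-of-the-envelope math for uncompressed video data rates.
# Assumes 3 channels (RGB) and no chroma subsampling.

def frame_size_mb(width, height, bits_per_channel, channels=3):
    """Size of one uncompressed frame, in megabytes."""
    bits = width * height * channels * bits_per_channel
    return bits / 8 / 1_000_000

for label, w, h in [("1080p", 1920, 1080), ("UHD 4K", 3840, 2160)]:
    for depth in (8, 10, 16):
        frame = frame_size_mb(w, h, depth)
        print(f"{label} @ {depth}-bit: {frame:5.1f} MB per frame, "
              f"{frame * 30:6.0f} MB per second at 30 fps")
```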
Without compression, we could not edit video on our computers.
THE PROBLEM
The problem is that there is no free lunch. The perfect codec would provide “infinitely” high quality, “infinitely” small files with “infinitely” efficient editing performance.
Sigh… Ain’t gonna happen.
Instead, in the real world, we get to pick two, as the triangle above illustrates. We can choose codecs that create small files with reasonably high quality, but they are not efficient to edit, requiring serious computer horsepower. H.264 and H.265 are examples of this category.
Or, we can create very efficient files with great quality, but these files are not small. ProRes and DNxHD are examples here.
Or, we can create very small files that are efficient to edit, but the overall quality is poor. DV could be an example here.
There are four principal goals to consider when choosing a codec:

* The speed of compression (how long it takes to create the file)
* The speed of decompression (how easily the file plays back)
* The size of the compressed file
* The quality of the final image
For instance, if you are posting a file to the Internet, the size of the file and the speed of decompression are more important than how long it takes to compress the file in the first place or the quality of the final image. That is not to say these last two are unimportant, just less important.
On the other hand, if you are streaming a live event, the speed of compression is most important, because if you can’t compress faster than real-time, no one will be able to watch the event.
As a third example, for a network television program, the speed of decompression and the quality of the final image are of paramount importance.
WHAT’S CHANGED
In the past, back in the days of SD (standard-definition video), we would shoot, edit and distribute our media using a single codec. At the professional level, that codec would be DigiBetacam. Or, one level down, DV.
For all its problems with low resolution, interlacing and converting between three different frame rates, shooting standard-def video was a walk in the park compared to the mess we find ourselves in today.
What has evolved today is what I call the “Three Codec Workflow.” We shoot one codec, then transcode (convert) it to a second codec for editing, then transcode it to a third codec for distribution. That middle codec is called an “intermediate” or “mezzanine” codec that exists solely to provide a high-quality, very efficient video format that is optimized for editing.
For example, an iPhone shoots H.264 in an MPEG-4 container. That gets converted to ProRes 422 for editing in Final Cut Pro X, then uploaded to YouTube/Vimeo/Facebook as a high-bit-rate H.264 in a QuickTime container.
Or, an Arri Alexa shoots using the ARRIRAW codec. This is transcoded to GoPro Cineform for editing in Premiere, then transcoded to a DCP for distribution to a digital cinema projector in a theater.
These are only two examples; there are dozens and dozens of variations. The point is that we need to use different codecs for different parts of the production and post-production process.
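For those comfortable with the command line, here is a minimal sketch of the three-codec workflow using the free ffmpeg tool, called from Python. The file names are placeholders, and the bit rates are illustrative, not gospel:

```python
# Sketch of the "three codec workflow" with ffmpeg.
# Assumes ffmpeg is installed; file names and bit rates are examples.
import subprocess

# Step 2: transcode camera-native H.264 into ProRes 422 for editing.
subprocess.run([
    "ffmpeg", "-i", "camera_clip.mp4",
    "-c:v", "prores_ks", "-profile:v", "2",   # profile 2 = ProRes 422
    "-c:a", "pcm_s16le",                      # uncompressed 16-bit audio
    "edit_master.mov",
], check=True)

# Step 3: after the edit, compress the master to high-bit-rate H.264
# for upload to YouTube/Vimeo/Facebook.
subprocess.run([
    "ffmpeg", "-i", "edit_master.mov",
    "-c:v", "libx264", "-b:v", "20M",         # generous bit rate for upload
    "-c:a", "aac", "-b:a", "256k",
    "delivery.mp4",
], check=True)
```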
ADDING TO THE CONFUSION
In the past, we could count on codecs running on both Mac and Windows systems. Those days seem to be ending.
Apple has never made ProRes writeable (recordable) on Windows, though ProRes files play back there without trouble. Also, Apple recently announced that it will discontinue support for QuickTime on Windows.
This lack of QuickTime support is huge, because many video editing applications depend upon it; Adobe Premiere comes instantly to mind.
GoPro Cineform and DNxHD are both solid codec alternatives on Windows, but the coming loss of QuickTime support does not yet have an immediate solution. I’m expecting to hear more about this at the annual NAB Show later this month.
MAKING THINGS STILL MORE COMPLEX
At one level, the move to higher resolutions, such as 4K and beyond, is really no different than shooting and editing HD – except that we need MUCH more storage, with much faster bandwidth.
But, the move to full support for Rec. 2020 (sometimes called HDR) is about more than resolution. It includes:

* A wider color gamut (many more colors than Rec. 709)
* A greater dynamic range (a larger spread between the darkest and brightest parts of the image)
* Smoother gradients, with more steps between adjacent shades
And achieving these three goals requires video files that use codecs with 10-bit depth or greater throughout the entire production/post/distribution process. (Currently, the highest available bit depth is 16 bits per color channel.)
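To see why bit depth matters so much here, consider how quickly the number of steps per color channel grows. A short Python sketch:

```python
# Why HDR wants 10-bit or better: more steps per channel means
# smoother gradients across a wider brightness and color range.
for bits in (8, 10, 12, 16):
    steps = 2 ** bits        # levels per color channel
    colors = steps ** 3      # total RGB combinations
    print(f"{bits}-bit: {steps:,} steps per channel, {colors:,} possible colors")

# 8-bit  ->   256 steps, ~16.8 million colors
# 10-bit -> 1,024 steps, ~1.07 billion colors
```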
Shooting high bit-depth video is now commonplace. Most current cameras now offer a recording option that supports 10-bit video in some form.
Editing high bit-depth video is also straightforward. All ProRes formats support 10-bit video, with the two 4444 variants supporting 12-bit. Some, but not all, DNxHD formats support 10-bit video; the rest are 8-bit only. GoPro Cineform supports 10-bit, with the RGB variants supporting 12-bit. So, while we need to pick the right codec, editing at higher bit depths is possible.
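If you aren’t sure what a given clip actually contains, ffprobe can report the pixel format, which reveals the bit depth. A hedged sketch, assuming ffmpeg is installed and using a placeholder file name:

```python
# Check the pixel format (and thus the bit depth) of a clip.
import subprocess

info = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=codec_name,pix_fmt,bits_per_raw_sample",
     "-of", "default=noprint_wrappers=1", "some_clip.mov"],
    capture_output=True, text=True, check=True,
).stdout
print(info)
# e.g. "pix_fmt=yuv422p10le" indicates a 10-bit 4:2:2 file.
```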
The problem is that many, in fact most, popular video codecs for distribution support only 8-bit images. This is one of the big benefits of the VERY-slow-to-roll-out H.265 codec: it supports 10-bit or greater video more easily than the current implementation of H.264.
With distribution, we need to carefully pick a codec that supports the bit-depth we need. And this varies by distribution outlet. There is no current standard codec that works for most situations.
A QUICK THOUGHT ON AUDIO CODECS
Fortunately, audio codecs are much easier to work with. AIF/AIFF and WAV files are two different containers for the same audio data. Working with either one during production and post is an excellent choice as both are considered high-quality, “uncompressed” formats.
Most video cameras and audio gear record at a 48 kHz sample rate with 16-bit depth. This is a good choice for recording on set.
High-end audio production will often work with 96 kHz sample rates, or higher, for the same reason that video producers shoot high-resolution 4K or 6K video and downsample to HD for editing: it gives them more data to work with when creating effects and mixes.
When compressing, AAC is a better choice than MP3. And sample rates of either 44.1 kHz or 48kHz at 16-bit depth will provide audio quality that exceeds normal human hearing.
NOTE: For distribution, I use 44.1 kHz for audio-only files and 48 kHz sample rates for audio which gets synced to video; both at 16-bit-depth.
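As a concrete example, here’s how those settings might look as an ffmpeg compression step, called from Python. The file names are placeholders and the bit rate is a reasonable, not mandatory, choice:

```python
# Compress a WAV master to AAC for delivery with video.
# Assumes ffmpeg is installed; file names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "mix_master.wav",
    "-c:a", "aac", "-b:a", "256k",   # AAC beats MP3 at the same bit rate
    "-ar", "48000",                  # 48 kHz for audio that syncs to video
    "mix_for_video.m4a",
], check=True)
```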
SUMMARY
Codecs are like the engine in a car. We need one to get anywhere, but most of us would rather think about something else. The problem is that, currently, NOT thinking about the codecs we are using can slow our editing, degrade our images or make our final edit undeliverable.
Making matters even more complex is that there is no single “best practices” codec. But, here are some thoughts.
And the answer to the thumb drive question at the top? Well, it depends. As long as you aren’t interested in supporting some form of HDR, compress the video using the H.264 codec, the audio as AAC, and store them both in an MPEG-4 container. Those formats will play on just about everything.
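As a sketch of what that encode might look like with the free ffmpeg tool (file names and quality settings are illustrative, not the only right answer):

```python
# A "plays almost everywhere" thumb-drive encode:
# H.264 video + AAC audio in an MPEG-4 container.
# Assumes ffmpeg is installed; names and settings are examples.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "final_master.mov",
    "-c:v", "libx264", "-crf", "18",   # visually very high quality
    "-pix_fmt", "yuv420p",             # 8-bit 4:2:0 for maximum compatibility
    "-c:a", "aac", "-b:a", "256k",
    "-movflags", "+faststart",         # lets playback start quickly
    "thumb_drive_copy.mp4",
], check=True)
```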
NOTE: Here’s a video that explains basic compression concepts.
However, it will surprise no one when I say that this will probably change by next year. Sigh…
17 Responses to Understanding Codecs – And Why They Are Important
Thank you, Larry – as always – for a timely update on what will continue to be a moving target!
Stu Aull
Alaska
Larry, thank you so much for this article on codecs – it is so clear and it helps so much.
Sister Anne
Thank you, Larry. This is helpful for confirming that we’re not crazy dealing with multiple codecs.
I’ve wondered about the “3-codec” workflow. I know we don’t need to experience a quality loss in a digital copy, but I wonder about the effects of serial transcoding.
Any thoughts?
Cheers,
Robert
Robert:
In general, you don’t want to compress already compressed media.
However, transcoding from camera native to ProRes 422 is similar to dumping a five gallon bucket of water into a bathtub. The bathtub is big enough to hold all of it, without losing anything.
Generally, transcoding to a mezzanine format codec does not lose quality, which means that you can then safely re-encode back to H.264 for ultimate distribution without injecting significant artifacts.
larry
Hello Larry
Thanks for your post. It’s awesome.
Question.
I’m editing 4K GoPro 4 Black footage.
The first step is to convert my compressed footage to the GoPro Cineform format and defish it with the GoPro Studio app.
Then I edit those files in Final Cut Pro X.
Then I send the final edit to Compressor.
And then I want Compressor to compress my files again to the GoPro Cineform format (basically, to keep the same codec I edited with, CFHD).
But at the end it gives me video that looks corrupted, with green lines.
Why is that?
I could also export to ProRes 422 at the end to keep quality, but I would like to keep everything as Cineform and upload that to YouTube instead of uploading the ProRes file. (I want to keep the maximum quality possible; upload time is not a problem for me. I believe every change of codec hurts quality, correct?)
Is there a way to do it?
All my software is up to date.
I’m using a Mac Pro 5,1:
3.46 GHz 6-Core Intel Xeon
64 GB 1333 MHz DDR3
AMD RX 480
Here’s a sample of how the video looks:
https://villasmanzanillo.com/wp-content/uploads/2017/07/Screen-Shot-2017-08-01-at-5.24.12-PM.jpg
Gamaliel:
Yup, that image is pretty ugly.
You are compressing this too much. Change your workflow:
* Shoot using the highest bit rate / quality that GoPro supports for that camera.
* Convert to GoPro Cineform, again selecting a high bit rate.
* Import into FCP X but do NOT optimize your footage; Cineform is already optimized. If you need proxies, feel free to create them.
* Export the final project using File > Share > Master File – do NOT compress. (This creates a master file using the Cineform codec.)
* Then, using Compressor, create as many compressed versions as you need. For social media, I use the appropriate size setting in Video Sharing Services.
This will save you two compression steps and create much higher quality images.
Larry
I click on Share > Master File (default) > Settings > Video codec, and it only gives me the ProRes options (it says Source – Apple ProRes 422), all the other ProRes flavors, H.264, and Uncompressed 8-bit and 10-bit. But it doesn’t give me the option for the codec that is actually being used, which is GoPro-CineForm HD/4K/3D, Linear PCM.
In Compressor it does give me that option, but the output is the green-line image I attached.
Here is an image of my Final Cut Pro X, which says CODECS: GoPro-CineForm HD/4K/3D, Linear PCM
http://imgur.com/a/vXWrN
I’m not optimizing at all; I’m using original media. No proxies, nothing. Original.
Here’s a picture of the Cineform codec I’m using to edit in Final Cut:
https://villasmanzanillo.com/wp-content/uploads/2017/07/Screen-Shot-2017-08-01-at-7.23.26-PM.jpg
Gamaliel:
It took me a while to investigate this. Here’s my recommendation. While FCP X supports importing GoPro Cineform, it does not support exporting it.
Because we need to minimize the number of times you compress your file – both to maintain image quality and to decrease the amount of time you spend waiting – my suggestion is to convert your GoPro Cineform file created by the GoPro utility into ProRes 422.
“Pro Res is designed specifically to minimize generational loss when transcoding and creating multiple generations. It is without a doubt a better choice for transcoding and generation loss than any other codec in common use today. Whenever you render in FCP X, you are going to ProRes anyway. Why transcode again to CineForm if you don’t have to? There are no deliverables that require CineForm, right? If you are uploading to a video website, you will be transcoding to their codec. If you are placing it on your own website, you will have playback issues and are better off converting to h.264.” (Apple website)
So, shoot GoPro, convert using the GoPro utility to Cineform, then optimize your media upon import into FCP X. From there, edit and output using ProRes.
Larry
Too bad. I really wanted to upload straight to YouTube with Cineform so there would be fewer conversions and quality would be kept at its maximum, since when I upload, YouTube converts it again to whatever YouTube uses. That way, less transcoding. I believe Adobe Media Encoder is capable of transcoding to Cineform. Too bad Apple can’t.