[ This article was first published in the July, 2009, issue of
Larry's Final Cut Pro Newsletter. ]
Bob Sloan writes:
I have two camcorders: one has 12-bit camera quantization, the other has 10-bit camera quantization. Both have 8-bit video quantization. (I'm referring to Panasonic DVCPRO HD VariCams and HDX900s.)
What is the technical difference?
What is the technical difference in picture quality?
What is the perceived difference in picture quality?
Larry replies: For me, the higher the bit depth, the more accurately the digital file can represent the original image.
But this is a GREAT question for Philip Hodgetts, and he graciously sent me the following response:
I’ll certainly pretend to have an answer 🙂
My understanding is that this is essentially the same argument I've propounded for a while: "oversampling at the source is good throughout the process." Usually that refers to pixel resolution, and it's the reason any HD camera (from a Flip mini to HDV to a Viper) will create excellent SD content, because the source is oversampled relative to the result. Likewise, a RED One will produce nice HD because its source is oversampled.
I think that's what's going on here: there is a final conversion to 8 bit that happens at the codec/encoding stage. If the source information is more compromised (10-bit rather than 12-bit quantization), then the quality of the signal going into the encoder is lower. In this case, gradients and smooth level transitions will be marginally more compromised with a 10-bit source than with a 12-bit source. Any encoder can only work with what it's given, so a higher-quality source will let the encoder produce a better result (all else being equal).
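To make that concrete, here's a minimal sketch in Python with NumPy (a synthetic simulation, not anything pulled from a real camera or codec) of the pipeline Philip describes: quantize a smooth gradient at the camera's bit depth, apply a hypothetical gamma-style adjustment standing in for in-camera processing, then store the result in 8 bits. Counting how many samples land on a different 8-bit level than an "ideal" encode of the un-quantized signal shows the 12-bit source tracking the ideal more closely than the 10-bit one.

```python
import numpy as np

def quantize(signal, bits):
    """Round a [0.0, 1.0] signal to the nearest of 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

def grade(x):
    # Hypothetical gamma-style grade; 1/2.2 is just an illustrative
    # stand-in for whatever processing sits between sensor and codec.
    return x ** (1 / 2.2)

# A smooth ramp, like a sky or an evenly lit wall.
gradient = np.linspace(0.0, 1.0, 10_000)

# The "ideal" result: grade the un-quantized signal, then encode to 8 bits.
ideal = quantize(grade(gradient), 8)

for source_bits in (10, 12):
    captured = quantize(gradient, source_bits)  # camera quantization
    encoded = quantize(grade(captured), 8)      # grade, then 8-bit encode
    misses = np.count_nonzero(encoded != ideal)
    print(f"{source_bits}-bit source: {misses} of {gradient.size} "
          f"samples land on the wrong 8-bit level")
```

The exact numbers don't matter; the point is that the coarser 10-bit source crosses 8-bit level boundaries in the wrong place more often, which is exactly where gradient banding comes from.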
However, let's be real about this. About the only way you're ever going to see the difference is in side-by-side, difference-mode testing. The 8-bit compromise on the final encode will have far more effect on the result than the difference between 10- and 12-bit quantization. But in theory, 12 bit would be better because it's "less compromised" than the 10-bit source.
But seriously, we're talking the difference between 256 levels per channel in 8 bit, 1,024 levels per channel in 10 bit, and 4,096 levels per channel in 12 bit. 8 bit definitely leads to banding on smooth gradients, but I've never seen banding in 10 bit, and I think the difference between 10 and 12 bit will be indistinguishable.
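For the arithmetic behind those numbers (levels per channel is just 2 raised to the bit depth), a quick sketch:

```python
# Levels per channel at each bit depth, and how many camera code
# values collapse into each level of a final 8-bit encode.
for bits in (8, 10, 12):
    levels = 2 ** bits
    print(f"{bits:2d} bit: {levels:4d} levels per channel "
          f"({levels // 2 ** 8} source level(s) per 8-bit level)")
```

So every 8-bit level in the encode swallows 4 code values from a 10-bit camera and 16 from a 12-bit one, which is why the final 8-bit step dominates the result.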
But it sure looks good in marketing! 🙂
Larry adds: Thanks, Philip and Bob.