How Much Video Resolution is Enough?

The first rule in any business is to make a profit so it can stay in business. In the technology industry, that means continually releasing new products. In the media industry, it means continually releasing products that create or support ever-higher-resolution video.

But… that being said, just how much resolution is enough?

According to research done by Panavision, the differences between 2K and 4K images cannot be perceived at normal viewing distances (say, 6 feet in the living room or 20 feet in a theater with a 40-foot screen). To say nothing of 5K, 6K or beyond…

There is a marketing benefit for camera companies in providing cameras that shoot at higher resolutions: it gives them something new to promote. But what’s the benefit to us?

NOTE: On a 54-inch TV, pixels for a 1080 HD image sit roughly 0.025 inches (about 0.6 millimeters) apart, while pixels for a 4K image sit roughly 0.012 inches (about 0.3 millimeters) apart. These are tiny, TINY distances requiring even tinier pixels!
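
If you want to check the math behind that note, here is a minimal sketch. It assumes a flat 16:9 panel and treats “4K” as UHD’s 3840-pixel-wide grid; those assumptions, and the helper name, are mine, not anything from a spec sheet.

```python
import math

def pixel_pitch_mm(diagonal_inches, horizontal_pixels, aspect=(16, 9)):
    """Center-to-center pixel spacing, in millimeters, for a flat 16:9 panel."""
    w, h = aspect
    # Screen width from the diagonal: width = diagonal * w / sqrt(w^2 + h^2)
    width_in = diagonal_inches * w / math.hypot(w, h)
    return width_in / horizontal_pixels * 25.4  # 1 inch = 25.4 mm

# A 54-inch panel, comparing 1080 HD against UHD "4K"
for label, pixels in [("1080 HD", 1920), ("UHD 4K", 3840)]:
    print(f"{label}: {pixel_pitch_mm(54, pixels):.2f} mm per pixel")

# Prints roughly 0.62 mm for 1080 HD and 0.31 mm for UHD 4K.
```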

I don’t know a single actor who likes being shot at higher resolution. Actors wear makeup for a reason, after all. The greater the resolution, the more we must work to soften skin tones and blur backgrounds using depth of field. Yet blurring is the exact opposite of what shooting at higher resolution is supposed to buy us.

NOTE: If you are doing effects, higher frame rates will yield better results (meaning sharper edge detail) than higher resolutions, especially for keys and rotoscoping, because they minimize motion blur.
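
To make that motion-blur point concrete, here is a small sketch. It assumes a 180-degree shutter (the shutter is open for half of each frame interval) and an object crossing the frame at a constant speed; the speed figure is purely illustrative.

```python
def blur_in_pixels(speed_px_per_sec, fps, shutter_angle_deg=180):
    """Blur streak length: distance the object travels while the shutter is open."""
    exposure_time = (shutter_angle_deg / 360) / fps  # seconds the shutter stays open
    return speed_px_per_sec * exposure_time

# An object moving 1,000 pixels per second across the frame:
for fps in (24, 30, 60, 120):
    print(f"{fps:>3} fps: {blur_in_pixels(1000, fps):.1f} px of motion blur")

# 24 fps yields about 20.8 px of smear, 120 fps only about 4.2 px:
# higher frame rates leave crisper edges for keying and rotoscoping.
```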

Yes, higher resolutions give us options for changing framing later, during editing. But, for that to work, we need more of the image in focus, which works against softer, more “painterly,” cinematic depth of field and pull-focus shots.

It is a conundrum.

I’m not advocating a return to standard-def video. HD is lovely and I enjoy watching it. And I’m not opposed to anyone shooting higher resolution for the benefits it can provide, the largest of which is the ability to reframe later.

(Though it could be argued that shooting to reframe later is a sign that the director has not done their homework on the story they want to tell, or on who the focus of a scene is.)

There are experiments being done in which an 8K frame is divided into 1080-sized “regions.” The camera is locked down (on a football field, for example) while the region, like a window, is panned as the action changes. We don’t pan the camera; we pan the picture within the camera.
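
As a sketch of that idea, here is how a 1920x1080 “window” might be slid across a locked-down 8K frame in software. It assumes the 8K frame has already been decoded into a NumPy array; the function name and the five-position pan are made up for illustration.

```python
import numpy as np

FRAME_W, FRAME_H = 7680, 4320   # 8K UHD frame
WIN_W, WIN_H = 1920, 1080       # HD-sized "region" we pan within it

def crop_region(frame_8k, x, y):
    """Extract an HD window from the 8K frame, clamped to the frame edges."""
    x = max(0, min(x, FRAME_W - WIN_W))
    y = max(0, min(y, FRAME_H - WIN_H))
    return frame_8k[y:y + WIN_H, x:x + WIN_W]

# Stand-in 8K frame (real code would decode one frame of camera footage here).
frame = np.zeros((FRAME_H, FRAME_W, 3), dtype=np.uint8)

# "Pan" the window left to right across the field in 5 steps.
for x in np.linspace(0, FRAME_W - WIN_W, 5, dtype=int):
    hd_frame = crop_region(frame, x, (FRAME_H - WIN_H) // 2)
    print(hd_frame.shape)  # always (1080, 1920, 3): a full-quality HD output
```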

But I think we are going to reach a point of diminishing returns on higher resolution. Yes, we will be able to create higher and higher quality sensors, which capture more and more pixels. But, if our eyes can’t see them, what’s the benefit?

Compounding the confusion is the codec the camera uses. Shooting 4K images only to compress them in camera using H.264 is an exercise in robbing Peter to pay Paul. What we gain in extra pixels is lost to compression.
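
One way to see the trade-off is to compare how many compressed bits each pixel actually gets at a fixed in-camera bitrate. The 100 Mbps figure below is an assumed example for illustration, not the spec of any particular camera.

```python
def bits_per_pixel(bitrate_mbps, width, height, fps):
    """Average compressed bits available per pixel, per frame."""
    return (bitrate_mbps * 1_000_000) / (width * height * fps)

# Same assumed 100 Mbps H.264 recording, two frame sizes, both at 30 fps:
print(f"1080p : {bits_per_pixel(100, 1920, 1080, 30):.2f} bits per pixel")
print(f"UHD 4K: {bits_per_pixel(100, 3840, 2160, 30):.2f} bits per pixel")

# Roughly 1.61 vs 0.40 bits per pixel: four times the pixels,
# but each one gets about a quarter of the data.
```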

If you are looking for ways to invest your money for the future, I don’t think the pursuit of piling on pixels is a wise decision. Your clients will not be able to perceive the difference, which means they won’t pay extra for it.

However, there’s a different direction that both Adobe and Apple are pursuing: HDR. While the implementation of the spec is still a work in progress (for instance, Apple supports wide color, but not yet brighter pixels), the results are more than worth the effort.

There is a significant perceptual improvement when viewing images with brighter pixels or richer colors. As our sister site, DoddleNEWS, covered this week, YouTube now supports HDR video posted to its site. (This means that, internally, YouTube is compressing video using a 10-bit codec, rather than the 8-bit depth traditionally used for H.264.)
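
Bit depth is simply how many distinct code values each channel gets, which is where banding comes from. Here is a minimal sketch, quantizing a smooth brightness ramp at both depths; the helper name and ramp length are arbitrary choices for the demo.

```python
import numpy as np

def distinct_levels(bit_depth, ramp_steps=100_000):
    """Quantize a smooth 0..1 brightness ramp and count the distinct values that survive."""
    ramp = np.linspace(0.0, 1.0, ramp_steps)
    max_code = 2 ** bit_depth - 1
    quantized = np.round(ramp * max_code).astype(int)
    return len(np.unique(quantized))

print(f" 8-bit: {distinct_levels(8)} steps per channel")    # 256
print(f"10-bit: {distinct_levels(10)} steps per channel")   # 1024

# Four times the gradations per channel is what lets 10-bit video describe
# the wider brightness and color range HDR calls for without visible banding.
```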

The cool thing about HDR is that it makes HD video look better. This new format means shooting or recording different codecs, not different image sizes. We need to shift away from capturing H.264 and toward recording RAW, Log or ProRes in order to capture the additional bit depth this new format needs. That doesn’t necessarily mean a new camera, just one that supports recording to an external hard disk that can handle these much larger data sets.
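
To give a rough sense of “much larger data sets,” here is a back-of-the-envelope sketch. The data rates are approximate, commonly quoted figures (about 220 Mbps for 1080p30 ProRes 422 HQ versus roughly 50 Mbps for a typical in-camera H.264 recording), assumed for illustration rather than taken from any specific camera.

```python
def gb_per_hour(bitrate_mbps):
    """Convert a video data rate in megabits per second to gigabytes per hour."""
    return bitrate_mbps * 3600 / 8 / 1000  # Mb/s -> MB/s -> MB per hour -> GB per hour

# Approximate data rates (assumptions for illustration, not camera specs):
rates = {
    "In-camera H.264, 1080p (~50 Mbps)": 50,
    "ProRes 422 HQ, 1080p30 (~220 Mbps)": 220,
}
for label, mbps in rates.items():
    print(f"{label}: about {gb_per_hour(mbps):.0f} GB per hour")

# Roughly 22 GB per hour versus 99 GB per hour: plan storage (and backups) accordingly.
```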

But the benefits of HDR are easily perceived, even by inexperienced clients, and just as easily billable: they can see the difference, and the difference is dramatic. And, now that YouTube supports this format, it becomes even easier to convince clients to spend the money.

This is not yet as easy as throwing a switch. You will need relatively new computers (within the last year or two). You’ll need more storage for all the extra data. You’ll need a way to monitor the new video formats. And many hardware vendors are still scrambling to support this format, so hardware will change quickly this year and next.

However, this is an opportunity that our audiences can see, that clients will pay for, and that lets us keep making money to tell the stories we want to tell. While broadcast is still years away from supporting HDR, we can distribute our HDR work today via the web.

I think we’ve reached the point where we have enough pixels. Now, we need to make the pixels we have look better.

Just something to think about. As always, I welcome your comments.


14 Responses to How Much Video Resolution is Enough?

  1. latvis says:

    I absolutely agree with your resolution logic and I think the real world backs it up… but my experience is that ‘downrezzing’ is a very effective way to improve the ‘perception quality’ of 1080p content. We use Red cameras and have recently received the first of our two Helium-sensored Epics. They are 8K!… and, yes, resolution-wise it’s over the top, but it sure makes some nice stills (who cares?). What sets that sensor/camera WILDLY apart is the color saturation and incredible low-light performance. Taken to the limit it exhibits almost NO noise and produces 16-stop dynamic-range shots that are incredible!

    You somewhat indicated in your article that we need better-performing pixels, but not necessarily more of them… I believe that is very difficult to argue with.

  2. Vaibhav says:

    Hey Larry, can you tell me more about how to convert video that is 8-bit H.264 (1080p) to 10-bit H.264, which is the process YouTube normally uses?

    • Larry says:

      Vaibhav:

      There is absolutely no reason to convert 8-bit H.264 to 10-bit. YouTube supports both versions of H.264.

      You won’t gain any quality in your image and you won’t automatically support HDR display. All you’ll do is waste time and increase the size of your file.

      Larry

