Audio Checkerboarding and Setting Audio Levels

Posted by Larry

[ This article was first published in the November, 2008, issue of
Larry’s Final Cut Pro Newsletter.
Updated Jan. 2009 and Jan. 2012. ]

I love audio. I love recording audio, mixing audio, and talking to engineers about audio. But if there’s one thing I’ve learned, it’s that if I get five audio engineers in the same room at the same time, I’ll get seven opinions on what qualifies as good audio levels.

Yet, good audio levels are essential to any video project. So, in this technique, I want to talk about two things: where to put your audio in the Timeline (checkerboarding), and how to set your audio levels.

NOTE: This article was written for Final Cut Pro 7. But the concepts described here apply to setting audio levels on all NLEs on both Mac and Windows.


Whenever you edit audio into your Timeline, you have to put it somewhere. My philosophy is that since you have to store it, why not store it in a place where you can easily find it?

Even better, why not store it in the same place for each project? That way, when you open up an old project, you don’t need to wonder where everything has been put.

When I first started editing with Final Cut, I took it as a personal challenge to make sure there were NO empty holes in my audio tracks. I edited audio like it was a jigsaw puzzle. No two clips went in the same place, or even on the same track.

I was feeling pretty proud of myself until I needed to re-edit a project I had finished about six months earlier. It took me almost four days to figure out where I put all that audio!

Organizationally, it was a complete shambles, and it cost me days of time to figure it out well enough to fix it.

I realized then that there had to be a better way. There is — it’s called checkerboarding — and I stole it from the movies.

Here’s the basic idea: put the same kind of audio on the same track. Final Cut Pro 7, Soundtrack Pro, and ProTools are all track-based. This means that if you ever send your audio out for professional mixing without checkerboarding it first, the mixers will take the time to checkerboard it for you — which costs you both money and time.

Plus, when you decide to start mixing your own projects, checkerboarding will save you time.

To summarize the benefits of checkerboarding: you always know where each type of audio lives, old projects are easy to revise, and you avoid paying a mix house to reorganize your tracks before they can start work.

Audio engineers have been checkerboarding their projects for years. In fact, my system is modeled after many that I’ve seen used in professional audio suites, where a major motion picture mix can run over 100 tracks of audio!

While I don’t expect you to be mixing an audio project that big, we can still follow the same procedures. So, here are the rules I follow as to what audio goes on which track. Feel free to borrow this for your own projects:


Audio Track Allocations
1 and 2 — Sync sound — what news calls Sound-on-Tape (SOT)
Your main speaker, the audio from the image on V1. I now put all the male voices on one track and the female voices on a second track. This allows me to add EQ effects by track.

3 and 4 — Sync sound — what news calls natural sound-on-tape (NATSOT). The sync audio from B-roll images on V2.

5 — Narration. One voice needs only one mike, which needs only one track. If I have male and female narrators, I put them on separate tracks.

6, 7, 8 — Sound effects. Sound effects (SFX) that you buy are mono. SFX that you record yourself are most often stereo. Assigning three tracks means that you have room for all of these. Add more tracks as necessary.

9 and 10 — Music cue 1

11 and 12 — Music cue 2. Putting music cues on separate tracks allows you to slowly fade out one piece of music, while rapidly fading in a new piece of music.

I always follow this track allocation during editing. If, for instance, I don’t have a narrator, then I just pull up the lower tracks. However, I make a point to never combine different audio on the same track. Sound effects are never placed on the same track as sync sound.



The most important rule to remember with audio is that your audio MUST NOT EVER go over zero. Not once. Not even a little bit. Not even when no one is looking. NEVER!

The red clip lights must remain dark for your entire project.

Unlike analog audio, digital audio starts distorting as soon as your audio levels exceed zero and those red clip lights flash on. While you are mixing, these red lights tell you that your audio levels need adjusting.

However, during final output, those red lights indicate that your audio has been permanently damaged and there is not a technology on the planet that can fix it. You’ll need to go back to your source materials and remix.

IMPORTANT POINT: Final Cut only displays audio levels as peaks. Peak measurements read far higher than RMS, or “average,” level measurements. FCP uses an audio scale called “dBFS,” or “decibels full scale.”
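To see why a peak meter reads hotter than an RMS meter, here is a short Python sketch (purely illustrative math, nothing to do with FCP itself) that measures the same full-scale sine wave both ways:

```python
import math

def peak_dbfs(samples):
    """Peak level: the single loudest sample, relative to full scale (1.0)."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    """RMS ('average') level of the same samples."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 20 * math.log10(math.sqrt(mean_square))

# One full cycle of a full-scale sine wave:
sine = [math.sin(2 * math.pi * n / 100) for n in range(100)]
print(round(peak_dbfs(sine), 1))  # 0.0  -> the peak meter reads full scale
print(round(rms_dbfs(sine), 1))   # -3.0 -> the RMS reading is noticeably lower
```

Even for this simple test tone the two readings differ by 3 dB; for real program material, with its quiet stretches between peaks, the gap is far larger.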

There are only three technical offenses that will get an editor fired:

1. Audio levels that go over 0 dB.
2. White levels that exceed 100%.
3. Chroma levels that over-saturate.

Of these, the easiest offense to prevent is audio levels that go over 0 dB. (Keep in mind that no one really cares how soft your audio goes — that is an aesthetic decision you get to make. Technical problems only exist when your audio gets too loud.)

However, audio sounds better when it is loud than when it is soft. So our goal is to get as close to 0 dB as we can, without going over.

Note: Here’s another little-known audio factoid — audio levels are logarithmic. Every 6 dB increase doubles the signal’s amplitude (perceived loudness roughly doubles at about a 10 dB increase). This means that when your levels are hovering around -6 dB, you have as much signal amplitude between -6 and 0 as you do between -96 and -6. -6 dB is the 50% point for your audio levels!
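The dB arithmetic in that factoid is easy to verify with a couple of lines of Python (again, just illustrative math):

```python
import math

def db_for_amplitude_ratio(ratio):
    """How many dB correspond to multiplying amplitude by `ratio`."""
    return 20 * math.log10(ratio)

def amplitude_for_db(db):
    """Linear amplitude, relative to full scale, for a dBFS value."""
    return 10 ** (db / 20)

print(round(db_for_amplitude_ratio(2.0), 1))  # 6.0 -> doubling amplitude adds ~6 dB
print(round(amplitude_for_db(-6.0), 2))       # 0.5 -> -6 dBFS is half of full scale
```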

So, here are levels that I recommend for all general mixing — creating audio for DVDs, the Internet, broadcast, weddings — anywhere that there is not a specific technical specification that you need to meet.

Audio level we must not exceed: 0 dB

Total audio mix level: -3 dB to -6 dB

Principal speaker (SOT) audio: -6 dB to -12 dB

Sound effects audio: -12 dB to -18 dB

Music when it’s an underscore: -18 dB

Use these levels as a guide to get your audio levels close. Then, use your good speakers, your good ears, and your good common sense to make your audio sound perfect.
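If you like scripting sanity checks, the suggested ranges above can be sketched in a few lines of Python. The role names, and treating the single -18 dB music figure as a small range, are my own shorthand, not part of the article:

```python
# Suggested peak ranges (dBFS) from the list above; keys and the
# underscore range are my own assumptions for illustration.
SUGGESTED_RANGES = {
    "mix":        (-6.0, -3.0),
    "dialogue":   (-12.0, -6.0),
    "sfx":        (-18.0, -12.0),
    "underscore": (-21.0, -18.0),
}

def check_peak(role, peak_dbfs):
    """Compare a measured peak against the suggested range for its role."""
    low, high = SUGGESTED_RANGES[role]
    if peak_dbfs > 0:
        return "over 0 dB: clipped, go back and remix"
    if peak_dbfs > high:
        return "hotter than suggested for " + role
    if peak_dbfs < low:
        return "quieter than suggested for " + role
    return "within the suggested range"

print(check_peak("dialogue", -8.0))  # within the suggested range
print(check_peak("mix", 0.5))        # over 0 dB: clipped, go back and remix
```

Remember this is only a starting point — as noted above, your ears make the final call.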

Remember, the only level that counts is the level of the total mix.

Note: Don’t confuse reference tones with audio levels. Reference tones are used to make sure audio levels match between various pieces of gear. They do NOT determine the maximum point of your audio levels.

I’ve found that these levels sound great when used for DVDs, web video, and general purpose broadcast.

Note: Many broadcast commercials are mixed to a level of -10 dB, with no dynamic range. Again, if you are given specifications for your mix, please follow them.


You sometimes see references to dBFS. This refers to how audio levels are measured. Final Cut uses a dBFS scale. However, that is not the only way we have to measure audio levels… that would be too easy.

In the UK, they use an entirely different audio measuring system called PPM. There’s a very useful Final Cut utility called PPMulator+ that displays PPM levels inside Final Cut projects. Generally, PPM levels between 3 and 4 are considered appropriate.

In Europe, they use dBu.

Older, analog systems use VU (Volume Unit) measurements.

As I was researching this article, I got a very helpful email from Woody Woodhall, of Allied Post Audio in Santa Monica, and head of the Los Angeles Post-Production Group. Woody writes:

These systems are all measuring the same thing. What’s confusing is the dBU and dBFS. The dBU looks to me to be modeled after analog metering which goes into the “+” range and dBFS is absolute and stops at zero. The formula to convert from one to the other is:

0 dBU = 4 PPM = -18 dBFS = -4 VU

[Depending upon project] levels for broadcast delivery are:
+8 dB = 6 PPM = -10 dBFS = +4 VU
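Assuming the alignment Woody quotes above (0 dBu sitting at -18 dBFS), converting between the two scales is just a fixed offset, sketched here in Python:

```python
# From the alignment quoted above: 0 dBu corresponds to -18 dBFS,
# so dBu <-> dBFS conversion is a constant -18 dB offset.
ALIGNMENT_OFFSET = -18.0  # dBFS value of 0 dBu

def dbu_to_dbfs(dbu):
    return dbu + ALIGNMENT_OFFSET

def dbfs_to_dbu(dbfs):
    return dbfs - ALIGNMENT_OFFSET

print(dbu_to_dbfs(0.0))  # -18.0, the alignment point itself
print(dbu_to_dbfs(8.0))  # -10.0, matching the broadcast delivery line above
```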

Like video editing, audio mixing is both art and science. But it need not be a mystery. Using these settings can help your projects sound great — while keeping you on safe ground technically.

UPDATE – Jan. 7, 2009

Bob Merrill adds:

I wanted to add one thing regarding audio levels. If you are using Compressor to make MPEG-2s for your DVDs, make sure to set the “Dialog Normalization” to -31. Failing to do so will result in a final audio level lower than expected.

Larry replies: Thanks, I’ve mentioned this before in other training, but am happy to add it here.

UPDATE – Jan. 7, 2009

Woody Woodhall sent me an illustration of the mess we’ve made of audio levels. He writes:

If [you are creating a project] for broadcast my experience is that the “peak” levels delivery specs are either all over the place or non-existent. In December I delivered a commercial to [a large] cable [system], I asked for their audio specs. They sent me a very nice sheet that had a lot of information regarding the video and a sentence that said “Program Left should be Channel 1 and Program Right should be Channel 2.”

I then called, found a master control operator who told me to “do whatever you usually do.” So much for standards…

Larry replies: Thanks. I’ve also gotten a couple of emails expressing confusion over being told to set program levels at -20 dBFS. Because audio is logarithmic, setting program levels this low means that you are using less than 12.5% of your total audio dynamic range. This makes no sense to me! However, I’m doing some more research and will supplement this report if I find anything that needs revision.


UPDATE – June, 2010

Recently, I was talking with the tech support folks at Aphex, who make high-end audio gear for network and broadcast work. They told me that MP3 files are optimized for a maximum audio level of -6 dB, while AAC files are optimized for a maximum audio level of 0 dB.

Since much of the audio work that I do gets compressed into both formats, I’ve lowered my audio levels a bit so that they are peaking between -6 and -4.5 dB.

My audio still sounds good and I get nice clean results when compressing to MP3 files.

(For the record, almost all my encoding is the spoken word, as opposed to music, and I use a data rate of 56 kbps for MP3 compression of mono audio files.)


Based on this, when I normalize clips, I’m now normalizing to -4.5 dB. I also use the same level settings when using the Limiter filter in Soundtrack Pro.
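Peak normalization like this is simple enough to sketch in a few lines of Python. This is a toy example operating on a bare list of samples — not an FCP or Soundtrack Pro API:

```python
import math

def normalize_peak(samples, target_dbfs=-4.5):
    """Scale a list of samples so the loudest one lands at target_dbfs."""
    peak = max(abs(s) for s in samples)
    target_linear = 10 ** (target_dbfs / 20)  # -4.5 dBFS ~= 0.596 of full scale
    gain = target_linear / peak
    return [s * gain for s in samples]

quiet = [0.10, -0.25, 0.20]           # peaks at 0.25, about -12 dBFS
normalized = normalize_peak(quiet)
new_peak_db = 20 * math.log10(max(abs(s) for s in normalized))
print(round(new_peak_db, 1))          # -4.5
```

Note this scales every sample by the same gain, so the mix balance is untouched — only the overall level changes.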


I’m sure I mentioned this somewhere, but, if not, just a quick note that the audio meters inside Final Cut measure audio peaks on a 0 dBFS scale. There are other audio measures, such as PPM in the UK and Europe, and average level meters, which can be added to FCP.

So, if you are confused, as I was a while back when talking with an audio engineer who wanted to mix all the audio to -20 dB while I wanted to mix it a LOT higher, remember that average audio levels and peak levels are not the same thing.

Regardless, remember that your audio levels still can’t go over 0 dB. Not even a little.


16 Responses to Audio Checkerboarding and Setting Audio Levels

  1. Micha McLain says:

    While your organizational thoughts are good, and I use a very similar system every day, most of your comments on mixing would be laughed at by any full-time professional audio engineer. I’m only going to point out inaccuracies.

    1- Never tell someone what level, or even approximate level, to set anything. Your outline of levels for nat sound, efx, music, and dialog is a joke. You need to know that more than PEAK levels, which is what you are referring to, matter. More on this in a second. Frequency also has a big impact on levels of tracks, as certain frequencies are perceived as louder than others. For this reason alone, don’t ever take advice from someone telling you with numbers where to put your mix. The only valid point you have on this subject is not peaking over 0 dBFS. That’s it.

    2- Keeping your mix at -6 is likely to destroy your dynamics. Dynamics are the changes in levels. Keep your mix here and you will destroy it. Movies typically only use the highest range of the scale for big impacts like explosions and such. Rarely will it ever get this high outside of these kinds of things.

    3- While an increase of 6 dB is twice as much “POWER,” it is not twice as loud. Here’s the catch: LOUDNESS is a perception. Perception is both a physical and an emotional response. The only part here that is measurable is the physical part of perception, but the rest matters too. In many studies done in the past, it has been pretty much universally accepted among the audio community that a 10 dB increase sounds twice as loud to most people. So, 6 dB is twice as much power (amplification), but 10 dB is the approximate increase needed to make something sound twice as loud. The inverse of this works the same.

    Back to the beginning… Along the same lines as the power thing I mentioned above, mentioning peak audio levels for anything other than not going above 0 dBFS is pointless nonsense. The reason is the average level of a waveform. This is a very complicated subject, so I won’t get into it too much here. However, what’s important here is the average level between the peaks and the lowest parts of the wave. To further complicate matters, there is more than one way to measure this with meters. One is RMS meters, the other is Dorrough. If anyone has ever worked in a broadcast network or a film studio, you will always see Dorrough meters. They are considered by many to have the best algorithm for weighting this average closest to what humans PERCEIVE. After all, perception is what’s important.

    Here’s a trick: take an audio track from a commercial CD, preferably rock, then take a piece of dialog or some nat sound you’ve recorded. Place them on two separate tracks. Solo the first, and set its level so it’s hitting -6 dBFS as suggested above. Now mute this track. Solo the dialog track and adjust it to the same level. Now play and switch back and forth between the two. I can almost guarantee that you will think the music is louder. This is because the difference between the peaks and the quiet parts on current rock albums can be as little as a few dB, while the dialog will have a much bigger range, pushing its AVERAGE way down. In turn, this will make the dialog sound quieter, even though, according to the above, you were setting things appropriately.

    This subject has mass confusion in the industry, but most video editors are not nearly qualified enough to answer these questions.

  2. Micha McLain says:

    Also, PPM meters are not average meters; they are better peak meters. And -20 dBFS is a perfectly acceptable place for a mix to be centered. However, it should be the averages that are here, not the peaks. Peaks would be quite a bit higher. You cannot make these judgments without Dorrough meters, which is why they are a must for mixing audio for video. They come in three different scales: -20, the AES standard; -18, the European standard; and -14, which is used more for broadcast, both radio and TV.

    The AES standard is what the film industry uses. This is why almost all movies, except the real low-budget ones, have a very similar dynamic range and overall level. They all use Dorroughs at -20 dBFS. This is where their normal level is. Unfortunately, there are no standards for this for anything but movies. It’s a shame, but true.

    Again, much more on this, but I don’t want to go into too much detail here.

    • Larry Jordan says:


      These are all EXCELLENT comments – however, I would disagree in several areas. First, the purpose of this article was not to teach a professional audio engineer how to set levels. This would be like telling a professional colorist how to set a look – with experience they will always say “by feel”… which is true. But in order to get experience, you need some boundaries to work within.

      I’ve taught enough new video editors who have no clue where to even start in setting levels. Giving them a range of numbers is a very useful place to begin.

      You are also correct in that many audio levels are set based upon average levels, but FCP does not, out of the box, display average levels, only peaks. It also doesn’t display PPM levels natively, either. This means that for someone trying to do a mix for a video to be posted to the web, they need to make all their decisions based upon peak levels as they are the only measures FCP provides.

      For anyone doing a serious mix for broadcast, FCP is the wrong tool – ProTools, or, perhaps, Adobe Audition is a much better choice. I also agree that many video editors don’t have the audio chops to do a solid mix. However, that doesn’t stop their clients from requiring them to do so and not providing a budget to hire a sound engineer.

      The purpose of this article was to give them some basic tools to get them started in the right direction.

      I really appreciate your taking the time to respond. Your answers make this article much more valuable.


      • Buckie says:

        If your articles are always this helpful, “I’ll be back.”

      • katty vandenberghe says:

        … I agree, Micha. I am one of those people who is more of an artist playing with video than a professional anything. I do not have the kind of equipment that allows me to accurately judge audio levels, so I’ve always been ‘shooting in the dark,’ as it were – constantly frustrated that nobody could give me a clear set of guidelines. The guidelines set out here are an overview of boundaries and ranges, nothing more. And Larry does advise us to use our ears, speakers, and common sense to do the rest. Thanks for your details as well, Micha, but you could have done it without the insults.

  3. […] Note: If you have purposely checkerboarded your audio, leave it checkerboarded, as this data will be translated to Roles in FCPX. If you haven’t checkerboarded your audio, you may want to consider doing so. […]

  4. Derek Boyes says:

    For me as a writer / director who dabbles in all areas of filmmaking, Larry’s article has been extremely useful for my level of understanding. Subsequently the comments that followed have also been very informative particularly for when I do larger feature film work with dedicated sound designers and mixers. ALL information offered here is valid and useful at some level, so please remember who this website is aimed for and maintain a professional level of respect in your comments.

  5. Nathan says:

    Hi Larry,

    Very helpful article. However, I am a bit confused when you say the following:

    Audio level we must not exceed: 0 dB
    Total audio mix level: -3 dB to -6 dB
    Principal speaker (SOT) audio: -6 dB to -12 dB
    Sound effects audio: -12 dB to -18 dB
    Music when it’s an underscore: -18 dB

    You say the total audio mix should range at about -3 dB to -6 dB, but how can this be possible when you list the other levels to be no higher than -6 dB? By your calculations, and using your levels, the total mix of the SOT, sound effects, and music underscore would range from -6 dB to -12 dB, not -3 dB to -6 dB.


    • Brittany says:

      When you add the music to the dialogue, the total audio mix level increases. Try it. Typically I stay within these ranges and the total audio mix level does end up within the range he suggests.

  6. Oscar says:

    Hi Larry,

    We recently received a title sequence from a GFX company, and the audio for this title sequence peaks at around 0 dB, with the red lights flashing every time it does so. It does not sound distorted.

    I spoke to the producer and he says that it’s nothing to worry about, as long as it just hovers at 0 dB and doesn’t stay there too long…??

  7. Mitch Doty says:

    Most audio for radio and television is defined by a standards organization. There are ATSC standards used in North America, EBU standards in Europe and the Middle East, etc. Our audio broadcast levels are determined by EBU Recommendation 128, or R128. Regardless of whether you are using a PPM meter (there are several: BBC PPM1, BBC PPM2, or VU) or you are reading the scale in dBFS (dB full scale), it is necessary to know how these different metering methods, as determined by rise time and fall time, compare to each other. EBU R128 specifically defines alignment levels for studio audio. From the perspective of broadcast engineering, all levels are defined by whichever standard has been adopted. None of these things are arbitrary.

    With regards to the movie industry (and I am guessing) it’s likely that SMPTE standards (Society of Motion Picture and Television Engineers) define levels to be used for audio.

    The general consensus across the industry is that if one thing holds true it is the fact that there is more bad information being passed around than good information.

    Most people I have met do not know the difference between level and loudness, or how they are related. They have no idea what the scale of an audio meter means and most of them are talking out of their asses when they give advice.

    If you really want to know what is right, read the standards. Study. Learn the difference between the audio scales. What is BBC PPM1? What is BBC PPM2? What is VU? What is alignment level? What is QPPM (quasi-PPM)? Stop giving bad advice, and more importantly, stop listening to bad advice. Every engineer you meet worth their salt will agree with Larry. If you are using a PPM meter to measure audio levels, NEVER, EVER, EVER go over 0 dB. It is the clipping level and it will result in distortion. I personally let levels peak at -9 dBFS, which leaves 9 dB of headroom before hitting the clipping level. That is our house standard. We have adopted EBU R128.

    Standards are industry specific and geographically specific.

    The following link might be of interest to some readers.

  8. […] recommendations are a little on the quiet side when compared to some industry experts, but I find it much better for an audience to turn up the volume than run the risk of peaking or […]

  9. […] When you’re editing for television, there are much stricter guidelines and rules, but editing a video for web or even an independent film allows you more freedom in the process. Larry Jordan has an excellent piece on organizing your audio timeline and setting levels. […]

  10. Chad Philp says:

    If Music when it’s an underscore is set to -18 dB, what do you EQ music to when it’s the only audio source (dialog stops, during B-roll, etc.)?

    • Larry Jordan says:


      EQ means to adjust the frequency range of audio – setting levels means to adjust the volume.

      When there’s no other sound except the music, you can bring the audio level up to whatever range you want – provided that at no time do the audio meters exceed 0 dB. The exact level depends upon what role the music is performing and how it interacts with the story.

      I generally don’t EQ music unless I need it to sound like it is coming from a small speaker or telephone.

