The Meniscus Effect – Blending Digital Audio Analogue-Style


Many engineers and producers love the sound of analogue, despite (or perhaps because of) the superb quality of digital audio products. Analogue is felt to be more musical, and songs recorded in analogue formats seem easier to mix. Why is this?

Most people, when asked why they like the sound of analogue, focus on things like “warmth” and “roundness”: the slight bass hump you get off analogue tape, the subtle musical harmonic components, the naturally better average-to-peak ratios that make your music sound louder and fuller, and the inherent infinite-band compression/limiting of analogue tape that acts as a de-esser. Actually, all of these do sound great!

Of course there’s also the downsides that many forget: Noise. Distortion that you don’t want. Noise. Expense. Maintenance. Non-linearities in the sound that you don’t want. Noise. Crosstalk. Track limitations. NOISE!

What most engineers don’t seem to talk about much is WHY it’s easier to mix analogue sources. Those who do tend to point to things like crosstalk between the channels or across tape tracks, or maybe the individual analogue channel’s non-linearity making the music sound richer. Many younger engineers who haven’t had much experience with tape probably don’t even appreciate the difference.
Whatever it is (and I think it’s definitely a complex blend of a few things including harmonic distortion), I think the result is that each sound is easier to blend or “gel” with another sound. It’s like each sound has a meniscus on it, that grabs other sounds when you get close enough. Analog also has a strange ability to have things really loud in the mix, without the detail overshadowing the other instruments.

Of course, if it’s perfect clarity you want, then you don’t want this happening, but in most cases you want a song to feel homogeneous and musically prosodic – with all the elements gelling cohesively into something that feels larger than the sum of its parts. It becomes a pleasing landscape rather than just numerous close-ups of tree bark.

This also depends on the type of music/sound you want to make. If you were an artist – are you going for photo-realism, or for a more impressionist look? Or surrealism? If it’s a photo, are you going for a snapshot with all elements in focus (please no!), or a nice well-lit shot with the foreground in focus and the background out of focus (hell yes!)?

My idea of the Meniscus Effect – and let me be clear that this isn’t something new or something I’ve created, it’s simply a method I use to consciously perceive a mix in a useful manner, and it might be useful for others as well – is the art of blending each part into another. The gentle blurring of the edges, while making sure the main part of each sound retains its distinct shape and character.

One of the problems with mixing in digital is that the cleanliness of the recording allows each sound to retain its focus from the peak level to the dark limits of lower bit encoding. It’s like a perfect digital snapshot where everything’s in focus, all at once. The information’s all there, but is it aesthetically pleasing, and does it blend?

In practice, what’s happening today is more of the same sort of thing that’s always been used – judicious use of EQ, reverb, distortion and compression to blend the parts.

So let’s look more closely at what we need to help gel the sounds together.

The key thing is blending into the digital silence. Solutions? Subtle reverb; compression to bring up the noise floor of the track, preferably in a rhythmical manner; and distortion for blending harmonics.

Reverb
The general rule of thumb with reverb is that the longer the decay, the quieter it should be, otherwise it just washes everything out.
I recommend tweaking the pre-delay so it adds some rhythmical attribute (and adds clarity on vocals), and also filtering out some of the bottom-end to clean it up. Use a nice plate or hall for the “long” reverb, blended so you almost can’t even hear it, but you definitely notice it’s gone when it’s muted. Don’t make the reverb really toppy/bright – keep it “warm” sounding.
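For the rhythmical pre-delay idea, a quick way to get starting values is to derive them from the song’s tempo. This is only a sketch – the note values and BPM below are illustrative assumptions, and your ears make the final call:

```python
# Sketch: deriving reverb pre-delay starting points from the song tempo.
# The note-value choices are assumptions to taste, not fixed rules.

def predelay_ms(bpm: float, note_fraction: float = 1 / 16) -> float:
    """Pre-delay in milliseconds for a given note value at a given tempo."""
    beat_ms = 60_000 / bpm                      # one quarter-note beat in ms
    return beat_ms * (note_fraction / 0.25)     # scale relative to a quarter note

# e.g. at 120 BPM: 1/16 note = 125 ms, 1/32 note = 62.5 ms, 1/64 note = 31.25 ms
for name, frac in [("1/16", 1 / 16), ("1/32", 1 / 32), ("1/64", 1 / 64)]:
    print(f"{name} note at 120 BPM: {predelay_ms(120, frac):.2f} ms")
```

Shorter note values (1/32, 1/64) land in the typical pre-delay range; longer ones start to behave more like a slap delay.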

Compression
Compression is basically reducing dynamic range – turning down the loud bits, then turning the overall level back up so you can still hear the track at the same sort of level. The key use here is very similar to the reverb – make the compressor release match the tempo. In this case your “meniscus” is the musical pumping of the compressed track’s background around the beat. Again this is pretty subtle – and it’s probably better to start on the slow-release side than fast. In this scenario you’re after “character” rather than sheer loudness!
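One way to approach a tempo-matched release is to start from a note value and, if you’re building the processing yourself, convert that to the one-pole smoothing coefficient many digital compressor designs use internally. A rough sketch – the eighth-note choice and 48kHz sample rate are assumptions:

```python
import math

# Sketch: a tempo-matched compressor release. Assumption: a release of
# roughly one eighth-note lets the gain "breathe" in time with the beat.

def release_ms(bpm: float) -> float:
    return (60_000 / bpm) / 2                   # one eighth-note in ms

def release_coeff(rel_ms: float, fs: int = 48_000) -> float:
    # one-pole smoothing coefficient, as used in many digital compressor designs
    return math.exp(-1.0 / (fs * rel_ms / 1000.0))

print(release_ms(100))      # 300.0 ms at 100 BPM
```

If your compressor just has a release knob, the first function alone gives you a musically sensible starting number to dial in.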
Also – sticking a gentle compressor over the master (stereo) bus works wonders. Note that it should be almost not even doing anything – 1 or 2 dB of reduction is a lot. It’s there to help gel the mix, not add overall level. Save that for the mastering engineer.

Distortion
Often dialed-up to add a level of aggressiveness to a track, distortion has a lot more uses than this simple task, but it means being more careful about the “type” of distortion you use. You may have already noticed that adding distortion to an instrument during mixing doesn’t necessarily make it jump out any more than it did. If anything, the extra harmonics you generated can create masking frequencies against the other instruments in the mix, and it can actually become harder to hear and add more “mud”.
The secret is to use this same masking effect as a blending tool. Make sure you’re not adding brittle harmonics to the high frequencies as these will simply make your mix sound harsh and abrasive (unless that’s what you want). What you’re looking for is more warm mid-range distortion – ideally in a range that’s favorable to the instrument or vocal. Use only small amounts.
Feel free to duplicate a track and run it in parallel with distortion added – and maybe those two versions could have different EQs so you’re not distorting the entire frequency range in the track.
Don’t be scared to reduce the treble at the same time – see the next section.
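As a sketch of the parallel approach: band-limit a copy of the track to the mids, soft-clip it, and tuck it quietly under the clean signal. The band edges, drive and mix amounts below are assumptions to tune by ear:

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Sketch: parallel mid-range distortion. A copy of the track is band-passed
# (assumed 200 Hz - 2 kHz here), soft-clipped, and blended quietly underneath.

def parallel_midrange_drive(x, fs=48_000, drive=4.0, mix=0.15):
    """x: mono float array in [-1, 1]."""
    sos = butter(2, [200, 2000], btype="bandpass", fs=fs, output="sos")
    mids = sosfilt(sos, x)                          # the band we'll distort
    dirty = np.tanh(drive * mids) / np.tanh(drive)  # warm-ish soft clipping
    return x + mix * dirty                          # clean + quiet dirty copy

t = np.arange(48_000) / 48_000
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
blended = parallel_midrange_drive(clean)
```

Because only the band-passed copy is driven, the highs stay free of brittle harmonics while the mids gain a bit of blending texture.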

EQ
Without resorting to extensive tutorials on how to effectively EQ your mix – let me just point out one useful tip. Don’t be scared to make some of your tracks “darker”. This means reducing the high-frequencies on certain tracks, instead of boosting the highs on absolutely everything. In general, don’t EQ while you’re soloing tracks – do it while listening to the full mix.
I generally find notching out somewhere in the 180-250Hz region on each track cleans things up a bit in the normally-dense low mids, but make sure you don’t strip out the warmth of this region completely as it is important to the body and power of your mix.
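Any stock EQ will do this cut directly, but to show the maths behind it, here’s a sketch of a gentle cut in that region as an RBJ “Audio EQ Cookbook” peaking biquad. The centre frequency, gain and Q are assumptions to taste – note it’s a broad few-dB dip, not a surgical notch, so the warmth survives:

```python
import numpy as np

# Sketch: a broad -3 dB peaking cut around 220 Hz (inside the 180-250 Hz
# region), built from the well-known RBJ "Audio EQ Cookbook" formulas.

def peaking_cut(f0=220.0, gain_db=-3.0, q=1.0, fs=48_000):
    """Return normalized (b, a) biquad coefficients for a peaking EQ."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]
```

At the centre frequency the filter’s gain works out to exactly `gain_db`, and it returns smoothly to unity away from the cut.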

Although the standard strategy nowadays is to cut the lows on each of your tracks (with a high-pass filter) to get rid of any sub-sonic power-sucking rumble, sometimes you may need a track to actually contain some low-frequency depth – just choose the right track! I highly recommend the VOG plug-in from UAD for adding some smooth fatness to a track (especially kick drums!) – it’s basically a resonant high-pass filter, with the resonant peak positioned at the turnover frequency you want boosted.
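I don’t know VOG’s internals beyond that description, but a generic resonant high-pass in that spirit can be sketched with the RBJ cookbook high-pass biquad, where the Q sets the size of the peak at the turnover frequency. The 60 Hz / Q=4 values are illustrative assumptions (roughly a kick-drum fundamental):

```python
import numpy as np

# Sketch: a "VOG-style" resonant high-pass - an RBJ high-pass biquad whose
# Q puts a peak right at the turnover frequency. With Q=4 the gain at f0
# is a factor of 4 (about +12 dB), while everything below rolls away.

def resonant_highpass(f0=60.0, q=4.0, fs=48_000):
    """Return normalized (b, a) biquad coefficients."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    c = np.cos(w0)
    b = np.array([(1 + c) / 2, -(1 + c), (1 + c) / 2])
    a = np.array([1 + alpha, -2 * c, 1 - alpha])
    return b / a[0], a / a[0]
```

So you get the rumble-cleaning benefit of the high-pass plus a “free” boost exactly at the frequency you want fattened.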


Other experiments to try:

Avoid having all the attack transients on the same spot. Offset kick drum, bass, and rhythm guitar tracks slightly to “stagger” the attacks – ie the opposite of quantising. Try delaying the bass for starters. This can blur and expand the energy on each beat. Make sure your tracks are super-tight to start off with.
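A sketch of the offset itself – converting a millisecond nudge into samples and padding the start of a track. The 4 ms figure is just an assumption to start experimenting from:

```python
import numpy as np

# Sketch: nudging a track later by a few milliseconds to stagger its attack
# transients against the other rhythm tracks. Tune the offset by ear.

def delay_track(x: np.ndarray, ms: float, fs: int = 48_000) -> np.ndarray:
    n = int(round(fs * ms / 1000.0))                   # offset in whole samples
    return np.concatenate([np.zeros(n), x])[: len(x)]  # pad start, keep length

bass = np.ones(48_000)
nudged = delay_track(bass, 4.0)    # 4 ms later = 192 samples at 48 kHz
```

Most DAWs let you do the same thing with a per-track delay or by sliding the region, but it’s worth knowing the ms-to-samples conversion so your nudges stay deliberate rather than accidental.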

Record the room ambience with a track. Rather than fight it, let it in. Not too much though, unless you have a million-dollar room, otherwise it will quickly dominate and can sound pretty awful. Try adding distortion to the room ambience – it helps if it’s on a separate track. One thing I like to do is capture my own impulse reverbs of the recording room and then add them back in to the dry recording. This gives you full control at mix-time over “dry vs room ambience”.

Leave noise on tracks. (But clean up lead vocals and any featured instruments). All those weird little background noises and people talking in the background when you were putting down what was originally supposed to be your guide acoustic guitar – keep it. In other words – don’t clean all your tracks up too much – just the ones that need it. This doesn’t apply to the start/end of songs – it’s better to keep these really tidy as they frame your song and show that you actually meant that other stuff in the middle to sound loose.

Use “dirty” or highly-compressed loops. Bringing up the textural background in a single loop ripped from eg vinyl can add a certain magical blending factor and way more character to your song. Make sure to clear the sample if you do this.

Add extra ambience tracks. Party sounds, outdoor ambience, car drone, room tone, or other subtle sound effects. It’s like having subtle blending background noise, but it’s a lot more interesting and with the extra possibility of occasional random coincidences that add value.

Use the free “Paulstretch” or something similar to slow a copy of your entire song down glacially (eg 8x) and use it to add background textural ambience that conveniently works tonally with your song. Bonus: makes a nice intro and is also good for creating atmospheric film soundtracks.

 

Over the Top and Back – Avoiding the Uncanny Valley in Music Production

One of the dangers of nibbling away at mixing songs – commonly with your mouse rather than a dedicated audio control surface or mixing desk – is that it’s easy to be far too conservative when adding effects and the like.

What typically happens is you slowly push the level of an effect up until it starts to sound like it’s too much – then you back it down slightly to get a nice balance of “wet” effect vs the “dry” sound source. Ahhhh. Nice.

This is fine – but effects often work within quite different contexts as the balance of effect vs dry sound changes radically. By using this conservative method, you’re always remaining inside the one safe context of the sound balance without ever realising any of the other creative possibilities available.

If you keep pushing the level of the effect past the point where it sounds bad or too much, then you can sometimes get beyond the audio version of the “Uncanny Valley” into a different range of possible sounds.

A simple example is reverb. The first “conservative” range remains within the context of adding a nice subtle tail to a sound to make it blend, or perhaps to give a subtle halo of space around the sound.

As you keep pushing the reverb level up – the sound becomes muddy and cluttered as the dry sound and the reverb fight each other. This is the audio version of the “Uncanny Valley”.

If you keep pushing the reverb level even further, you will change the reverb’s context completely. The room environment is now dominant, with your instrument or voice existing within it. Of course at this point it will probably also become apparent that there will need to be some tweaking of the reverb to clean it up a bit – adjusting predelay, reverb time and perhaps applying some low-cut EQ to take out some mush.

Reverb is not the only thing that this works with – try it with any applied effect like chorus, flange, distortion, echo etc. Or try going to the extreme and remove the dry sound completely. (Try pre-fade effects sends with the fader pulled right back).

It certainly opens up many more creative possibilities and can help you discover fresh sounds for your mix to make it a little more exciting. Plus it doesn’t take much more effort or time to do this and it has zero risk! So make sure you go way over-the-top when applying effects, then just bring it back to where it works best for the song.

3 Common Mistakes of Lyric Writing

As a producer, one of the things most apparent to me is the difference between an amateur and a professional songwriter – even if that amateur is talented and doing well in their career. Many bands and artists come into the studio with what initially seems to be a great song, but in the process of putting down the vocals it can become increasingly apparent that the lyrics haven’t had the same level of development (or writing expertise) as the rest of the song – often with basic mistakes that can leave an otherwise excellent song fundamentally flawed.

Lyric-writing is a craft as well as an art – words have more or less power and meaning depending on the order and context in which they are conveyed, and knowing some tricks to getting the maximum impact (and least amount of song self-destruction) from your lyrics should really be of high priority. Of course, there are no “rules” in writing, but there are observable effects on the listener depending on how you construct the lyric, and you can simply choose to use these tools or not.

Here are my three worst contenders for shooting yourself in the lyrical foot.

1) Don’t use perfect rhymes

This is probably the most amateur mistake of all.
Try to use other types of rhymes instead – eg family rhymes, internal rhyme, additive, subtractive, assonance or consonance rhymes.
Although a part of our brain always desires perfect rhyme, we have come a long way since the early days of songwriting, and all those obvious perfect rhymes have been so well-used that they are now totally clichéd and too predictable.

Get yourself a rhyming dictionary (there are online versions too, although I prefer the MasterWriter app) and choose rhymes that are less obvious and maybe pleasantly surprising. Instead of using the perfect rhyme “Bread” and “Head”, maybe use a family rhyme – eg “Bread” and “Web” or “Tear”. In singing, we generally tend to rhyme vowel sounds, and the consonants matter less. Check out books/articles/workshops by lyric guru Pat Pattison for more details on rhyme types.

Note that sung rhymes are not usually the same as written rhymes, so make sure you sing them as you write to make sure that they ARE singable.

2) Use “spotlighting” effectively

There are natural accents within a musical bar that will automatically highlight or spotlight to the listener any word or syllable placed upon it. These spotlights tend to be on the downbeats of the bar, plus a big one at the end of a line, and even bigger at the end of a verse.

Ignorance of this behaviour means that you may end up with “nothing” words like “the”, “and” or “but” placed on these prize positions in the bar rather than your cool meaningful words.
This risks weakening your lyric and can even undermine the meaning of it by placing importance on the wrong word.

Back in 2007 I wrote a song, just before going to a Pat Pattison workshop, that included this lyric:
“The tide is slowly rising, Blood red sun on the horizon”
Spotlighting these words:
Tide, slow, rise, blood red, sun, the, horizon.

Notice how “the” has a spotlight that it really doesn’t deserve?
I fixed it in this example by removing it from the spotlighted position:
“The tide is slowly rising, Blood red sun on ….the horizon”
Here’s the link if you want to listen (warning – ultra-demo quality!): Tied up in Knots
Note also that “rising” and “horizon” rhyme when sung. 

And in relation to syllable position:

3) Don’t put the “emPHAsis on the wrong sylLAble”*.

As much as possible, try to sing as you would normally speak in conversation. If you don’t, you risk breaking the meaning of what you are trying to get across, and it can sound contrived, amateurish, or just like you haven’t taken the time to make the lyric fit the music properly.

You should be able to read your song lyrics out, spoken-word fashion, and the phrasing shouldn’t be too far away from how you sing it. Or vice-versa. This is most noticeable when you’re going for an “authentic”-style delivery (rock/blues/indie) rather than stylised (r’n’b, soul, pop) – accenting the wrong syllable can instantly break authenticity. The listener will go “huh?” and the flow and belief is broken.

There are many more lyrical tips than this, of course, and some equally or more important, but the best idea is to do a proper workshop or short course on it, or at least get a decent book or two about how to structure your lyrics.

For those of you who balk at being told what to do – I remind you that these are not rules as such – they are simply based on observable effects on a listener, and you can still go ahead and do whatever you want.
Sometimes you might need to make a call between including a word that adds the perfect meaning to your lyric, and having to jam it in there a bit more clunkily since it doesn’t quite fit. But you should definitely be aware of the risks on how the listener will receive and decode your meaning when you decide to do things like this.

And finally – you should ALWAYS use some kind of rhyming dictionary – otherwise you are relying on a choice of only the rhymes that you can currently remember. Which is often only a small fraction of the huge amount of available rhymes – many of which are probably more interesting than the one you can currently think of. 

*As spoken by Mike Myers in “View from the Top”.

References:

Pat Pattison: Essential Guide to Rhyming (formerly titled Rhyming Techniques and Strategies), Berklee Press, distributed by Hal Leonard, January 1992.
Also by Pat Pattison: Writing Better Lyrics (second edition) and Essential Guide to Lyric Form and Structure.

Jason Blume: Writing Hit Lyrics with Jason Blume.

The Lucky 13 Song Mixing Tips

Before I get started I just want to reinforce something I’ve mentioned in earlier posts – sometimes a reduction in parameters actually generates more creativity. Being aware of a set of limitations, or guidelines, can actually allow you much more creative control over your final mix. This could mean limiting the number of effects that you allow yourself to use, or – more obviously – only using a particular set of effects that suits the genre or style. If you have permission, perhaps editing tracks or even removing “surplus” instrumentation or vocals is the first step.

Approach-wise, you ideally want all aspects of a song to reinforce each other and create a stronger impact. If you aren’t aware of what you’re doing, it’s very possible (in fact more common than you think) to get a generally nice balance of instruments that somehow doesn’t “gel”. You can hear everything, but it lacks emotional impact.

So here’s a bunch of ideas to think about next time you’re mixing a song – there are many more ideas and concepts to experiment with than these, but I stopped myself before the post became a novel.

1  Know what the song’s about. Clues are in the lyrics. Knowing what it’s about gives you the opportunity to amplify the concept rather than inadvertently fighting it. That doesn’t mean you have to “follow” the lyrics with the mix in a literal sense – you might do nothing at all in that regard, but at least you won’t be fighting the meaning of the song without even realising it, and when it comes to trying to think of creative mix directions, it’s yet another clue to help you.

2  Know the context of the music. What’s the genre or style of the artist. How does it relate to the artist’s identity? Being aware of this really makes it much more likely that you’ll promote that artist’s identity and overall concept, plus the artist will be more likely to appreciate what you do with the mix. For example does the artist exemplify “authenticity” where a raw, “character” sound with any intonation problems remaining unfixed is most desirable? Or is it about slick and smooth production?

3  Be adventurous. A mix is not a simple balance of instrument levels – it’s about featuring the various aspects that you think the listener would like to hear, or more accurately needs to hear, at any given section of the song. Pretend it’s a movie – how do you present each section of the song? Don’t be scared to go “over the top” with effects, fader moves and featured mix aspects – you can always tone it back if need be. Don’t be scared to turn vocals up loud – trying to hide weak vocals makes it even worse. Even ugly actors have to have close-ups in a movie to make it all work.

4  Think about texture and tone. It’s partly tone, partly level, partly how dominant something is in the mix. If you compress something – its texture changes. Listen out for it tonally as a sound rather than just checking its variation in level. How pervasive is it compared to everything else, despite its volume in the mix?
How does it link into the overall texture of the song? Textures are like a tonal colour palette – you probably don’t want to mix a neon green element in with some nice earth tones (remember there are no rules!), but then again you don’t want everything the same shade of beige.

5  It’s about melody. In even the most distortion-fest mixes, our human nature will use its built-in pattern-detecting algorithms to extract a melody from somewhere – whether it be in the movement of the harmonics in the wall of guitar noise or in the groovy bassline. Make sure there’s one dominant melody at any given instant, or if there’s more than one, that they aren’t fighting each other and cancelling out.

6 The pocket. It’s more than something to put your wallet in. It’s that magic interaction of instruments when it all suddenly locks into a groove. Spend some time adjusting relative timing of instruments to see if you can help the groove “gel”. You’ll know when it happens because it’s magic and you’ll start moving with the music whether you want to or not. Note that Beat Detective and other forms of quantization can fight this effect – it’s “felt” rather than being on an exact grid. Saying that, if the playing is too loose then a timing grid is definitely a step up.

7  Keep it simple stupid. Less is more. These things are fundamental truths, despite our over-familiarity with them often leaving them as meaningless statements in our minds. Think about the mix as a photo – the more people you want to appear in the photo, the smaller they’ll have to be. Don’t be scared to bring the main things to the foreground, and push other things back to the point of blurriness or being hidden behind the main elements. A good mix is not about individual band members’ egos, it’s about the overall blend. When you think about it, the individual band members have the least idea about what the mix should sound like – they all hear completely different versions of a mix depending on where they stand/sit when they perform.

8  Three “Tracks”. Back in the olden days, after mono and stereo, there were three tracks. One was for “Rhythm” (and could include drums, bass, percussion and rhythm guitar for example), one for Vocals and one for “Sweetening” which might be things like brass, strings, lead instruments etc. This strategy is still a great one to keep in mind for mixing. It forces you to think about your rhythm section as one single thing, and you need to make it all gel. Bass needs to lock in the pocket with the kick drum. Sweetening nowadays is whatever else you need outside rhythm and vocals. Think carefully about which mix elements fit into each of these three roles, and if all three are already populated – maybe it’s time to do some cutting. Note that some instruments such as guitars might switch between modes depending on what they’re playing at the time – rhythm, fills or lead.

9 One thing at a time. Rather than thinking of one of the aforementioned three tracks as just “Vocals” perhaps it’s better to look on it as “Melody”. The melody line often chops and changes between vocal, instrumental fills and solos. If you think of these three elements as playing a similar role at different times in the song, it makes it easy when trying to decide on levels/sounds between the three. It also highlights that you shouldn’t have any of those melodies crossing over each other and fighting at any point – keep ’em separated!

10 Getting the bass sitting right is tricky – especially when it needs to work on both large and small speaker systems. Try mixing the bass while listening on the smallest speakers that you have, to get it sitting at the right level. Then adjust the tonal balance while listening on bigger speakers to rein any extreme frequencies back in. Sometimes you might need to layer the bass sound to get this to happen effectively.

11 Don’t over-compress everything. Listen to the TONE while compressing each instrument and keep it sounding natural if possible. Pay close attention to the start and end (attack and release) of the notes of each instrument you compress. Your final mix should be sitting at an average RMS level of about -12 to -18dBFS with peaks no higher than around -3dBFS. Leave the mastering engineer to do the final compression and limiting. Remember to leave dynamic range in the mix – contrast! Our ears need some sort of contrast to determine what’s loud and soft. If you hammer all the levels to the max you may as well just record the vacuum cleaner at close range and overdrive the mic/preamp. Hmmm. Might have to try that.

12 Easier than Automation. In these days of automation, it’s easy to spend inordinate amounts of time tweaking automation changes on instruments or vocals between different sections of a song (eg adding more reverb to the vocals in the chorus or adjusting rhythm gtr levels in the bridge). With today’s digital audio workstations, extra tracks are usually in ready supply, so rather than fluffing about with automation for a specific section of the song, why not just move that part over onto another duplicated track instead, then just make whatever changes you need to suit that section. Much quicker than continually mucking around with automation on the same track. By the way – make sure your mix is dynamic. A mix is a performance in itself, not a static set of levels.

13 Use submix busses for each element of the mix. Eg drum subgroup,  guitar subgroup, vocal subgroup etc. Rather than send all your drums straight to the L/R or Stereo mix, first send them all to an Aux return channel instead. Then send that Aux to the LR/Stereo mix. (Tip: disable solo on the Auxes) This makes it simple to do overall tweaks to your mix even after you’ve automated levels on individual tracks.
You need to be careful about aux effects returns and where they come back though, as their balance might change slightly if you adjust the instrument subgroups.
And hey, what about creating just three subgroups – Rhythm, Melody, Sweetening? Let me know if it works ;o)

Sources: Stephen Webber, Bob Katz, Mixerman, Mike Senior.

Digital Recording Levels – a rule of thumb

Okay, I mentioned this as one of my tips in a previous post, but there’s confusion and many heated debates out there about the ideal level to record into your digital audio workstation.

I’m just summing up the information readily available elsewhere (if you are willing to wade through endless online debates and the numerous in-depth articles), for people who just want to know right here and now what the best level is to record into their digital audio systems.

So I’m going to start with just a quick easy rule of thumb for these people, followed with a little bit more detail after that to explain why I’m recommending these numbers.

I apologize for simplifying some of the math – but if you’re really interested there are plenty of texts and in-depth articles available with a bit of searching. I’ve included a few references and links at the end of the article.

The rule of digital thumb

  1. Record at 24-bit rather than 16-bit.
  2. Aim to get your recording levels on a track averaging about -18dBFS. It doesn’t really matter if this average floats down as low as, for example -21dBFS or up to -15dBFS.
  3. Avoid any peaks going higher than -6dBFS.

That’s it. Your mixes will sound fuller, fatter, more dynamic, and punchier than if you follow the “as loud as possible without clipping” rule.
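These targets are easy to sanity-check in code. A sketch with NumPy – note that a sine wave reads about 3 dB lower in RMS than at its peak, so a sine peaking at -15 dBFS averages right around the -18 dBFS target:

```python
import numpy as np

# Sketch: measuring average (RMS) and peak levels in dBFS, to check the
# "average around -18 dBFS, peaks under -6 dBFS" rule of thumb.

def rms_dbfs(x: np.ndarray) -> float:
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def peak_dbfs(x: np.ndarray) -> float:
    return 20 * np.log10(np.max(np.abs(x)))

t = np.arange(48_000) / 48_000
sine = 10 ** (-15 / 20) * np.sin(2 * np.pi * 440 * t)  # sine peaking at -15 dBFS

print(round(peak_dbfs(sine), 1))   # -15.0 dBFS peak
print(round(rms_dbfs(sine), 1))    # -18.0 dBFS average: right on target
```

Real programme material has a much bigger peak-to-average gap than a sine, which is exactly why the peaks need the extra -6 dBFS safety margin.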

For newbies – dBFS means “decibels Full Scale”. The maximum digital level is 0dBFS, above which you get nasty digital clipping, and levels are stated as the number of dB below that maximum.

Average level is very important – people hear volume based on the average level rather than peak. Use a level meter that shows both peak and average/RMS levels. Even better if you can find a meter that uses the K-system scale.

Some common questions:

Q: Why do we avoid going higher than -6dB on peaks? Surely we can go right up to 0dBFS?

Answer 1 – the analogue side.
Part of the problem is getting a clean signal out of your analogue-to-digital converter. Unless you have a very expensive professional audio interface, or you like the sound of the distortion that it makes when you drive it hard, then you’re going to get some non-linearities (ie distortion) happening at higher levels, often relating to power supply limitations and slew rates.

Most interfaces are calibrated to give around -18dBFS/-20dBFS when you send 0VU from a mixing desk to their line-ins. This is the optimum level!
-18dBFS is the standard European (EBU) reference level for 24-bit audio and it’s -20dBFS in the States (SMPTE).

Answer 2 – the digital side.
Inter-sample and conversion errors. If all we were ever doing was mixing the levels of digital signals, we would probably be fine most of the time going up close to 0dBFS, as most DAWs can easily and cleanly mix umpteen tracks at 0dBFS.

EXCEPT there are some odd things that happen:

  • Inter-sample errors can create a “phantom” peak that exceeds 0dBFS on analogue playback.
  • When plug-ins are inserted they can potentially cause internal bus overloads. These can add unpleasant artifacts to the audio, building up as you insert more plug-ins while your mix progresses. They can also potentially generate internal peaks of up to 6dB higher – even if you’re CUTTING frequencies with an EQ, for example.
  • Digital level meters on channel strips seldom show the true level – they don’t usually look at every single sample that comes through. It’s possible to have levels up to 3dB higher than are displayed on the meters.

Keeping your individual track levels a bit lower avoids most of these issues. If your track levels are high, inserting trim or gain plug-ins at the start of the plug-in chain can help remove or reduce these problems. Use your ears!
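The “phantom” inter-sample peak is easy to demonstrate. A sine at a quarter of the sample rate, phase-shifted so the samples straddle the waveform crests, meters about 3 dB lower than the waveform actually swings; crude 4x oversampling (here via SciPy’s FFT resampler) recovers the real peak:

```python
import numpy as np
from scipy.signal import resample

# Sketch: sample-peak vs (approximate) true peak. Every sample of this sine
# lands at +/-0.707 of its real amplitude, so a plain sample-peak meter
# reads ~3 dB low; 4x oversampling reveals the inter-sample peak.

fs = 48_000
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)  # 12 kHz, offset phase

sample_peak = np.max(np.abs(x))                  # ~0.636 (about -3.9 dBFS)
true_peak = np.max(np.abs(resample(x, 4 * fs)))  # ~0.9   (about -0.9 dBFS)
```

This is essentially what a true-peak (oversampling) meter does internally, and why a mix that “never clipped” in the DAW can still clip the converter on playback.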

Q: Aren’t we losing some of our dynamic range if we record lower? Aren’t we getting more digital quantization distortion because we’re closer to the noise floor?

Short answer. No.

Really, both of these questions sort of miss the point, as we shouldn’t be boosting our audio up to higher levels and then turning it down again. So there’s nothing to be “lost”.

It’s the equivalent of boosting the gain right up on a mixing desk while having the fader down really low, giving you extra noise and distortion that you didn’t even need. You should leave the fader at its reference point and add just enough gain to give you the correct audio level. This is what we’re trying to do when recording our digital audio as well – nicely optimizing our “gain chain”.

The best way to illustrate this is to throw a few numbers up:

Each bit in digital audio equates to approximately 6dB.
So 16-bit audio has a dynamic range of 96dB.
24-bit audio has a range of 144dB.

With me so far? Probably doesn’t mean a lot just yet.

Now, let’s look at the analogue side where it becomes slightly more interesting.

The theoretical maximum signal-to-noise ratio in an analogue system is around 130dB.
Being awesomely observant, you picked up immediately that this is a lot less than 24-bit’s 144dB range!

In fact, the best analogue-to-digital converters you can buy are lucky to even approach 118dB signal-to-noise ratio never mind 144dB.

So – let’s think about this.
If we aim to record at -18dBFS, how many bits does that give us?

24 bits minus 3 (each bit is 6dB remember). That’s 21 bits left.
What’s the dynamic range of 21 bits? 126dB
What’s the dynamic range of your analogue-to-digital converter again? 120dB-ish.
That’s less than 20 bits.
At least one bit less than the 21 bits we get at -18dBFS.
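That arithmetic condenses into a few lines (using the common ~6 dB-per-bit approximation; the exact figure is 20·log10(2) ≈ 6.02 dB):

```python
# Sketch of the bit-depth arithmetic, at ~6.02 dB per bit.

DB_PER_BIT = 6.02

def dynamic_range_db(bits: float) -> float:
    return bits * DB_PER_BIT

print(round(dynamic_range_db(16)))          # 16-bit: ~96 dB
print(round(dynamic_range_db(24)))          # 24-bit: ~144 dB

bits_left = 24 - 18 / DB_PER_BIT            # recording 18 dB down costs ~3 bits
print(round(dynamic_range_db(bits_left)))   # ~21 bits left: ~126 dB
print(round(118 / DB_PER_BIT, 1))           # a ~118 dB converter: <20 bits
```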

The conclusion is that when recording at -18dBFS you are already recording at least one bit’s worth of the noise floor/quantization error, and if you actually turn your recording levels up towards 0dBFS, all you’re really doing is turning up the noise with your signal.

And most likely getting unnecessary distortion and quantisation artifacts.

Apart from liking the sound of your converter clipping, there’s NO technical or aesthetic advantage to recording any louder than about -18 or -20dBFS. Ta-Da!

Mix Levels

If you’ve been good and recorded all your tracks at the levels I recommended, you probably won’t have any issues at all with mix levels.

The main thing is to make sure your mix bus isn’t clipping when you bounce it down.

Most DAWs can easily handle the summing of all the levels involved, even if channels are peaking above 0dBFS. In fact, even if the master fader is going over 0dBFS, there’s generally not a problem until the signal reaches the analogue world again, or when the mix is being bounced down.

Most DAWs have headroom in the order of 1500-2500dB “inside the box”. You can usually just pull the master fader down to stop the master bus clipping.

Saying that, it’s still safer if you keep your levels under control.
Like I mentioned before – a key problem is overloads before and between plug-ins. If your channel or master level is running hot and you insert a plug-in, it could instantly be overloading the input of that plug-in, depending on whether it sits pre- or post-fader. So use your ears and make sure you’re not getting distortion or weird things happening on a track when you insert and tweak plug-ins.

Try to use some sort of average/RMS metering, and try to keep your average mix level (ie on your Master fader) between about -12 to -18dBFS, with peaks under -3dBFS.

Mastering will easily take care of the final level tweaks.

To conclude – when recording at 24-bit, there is a much higher possibility of ruining a mix through running levels too high than having your levels too low and noisy.

As Bob Katz says, if your mix isn’t loud enough – just turn the monitor level up!

PS – say “no” to normalizing. That’s almost as bad as recording too loud.

References:
Bob Katz’ web site.
Plus Bob’s excellent book “Mastering Audio: The Art and the Science”.
Paul Frindle et al on GearSlutz.com
A nice paper on inter-sample errors

Download a free SSL inter-sample meter (includes a nice diagram of inter-sample error )