Zed’s Digital Audio Top 10 of 2011

This list is made up of the plug-ins and digital apps I used the most this year and/or that excited me the most. Most are available for both PC and Mac, and only a few are still stuck in 32-bit mode – hopefully that will change soon.

10) iZotope Ozone 5. This is the mastering app of choice for most semi-pro engineer/producers and a great update to the ubiquitous Ozone 4 multi-processor – albeit slightly expensive for the “Pro” version without the introductory discount. The Pro version allows you to insert each module as a separate plug-in if you so desire, and has awesome audio visualisation options (plus a few extra features per module). Within this plug-in you get all the essential tools to repair a finished mix – a mastering limiter (Maximizer), multi-band compression, a multi-band exciter, multi-band width controls (Imager), mastering reverb, and an awesome EQ with a handy frequency “solo” for locating those crazy out-of-control frequencies. Oh, and you can go stereo or mid-side depending on your needs, and you also get all sorts of dithering and metering options.

9) Soundiron Emotional Piano 2. It’s amazing how often you need a piano in a mix, and because I don’t have access to a real one, I’m always struggling to find one that sits nicely in a song – especially as the ubiquitous grand pianos that come with various packages don’t always work with the track. This piano is meant to be more “soundtrack-ey”: it’s warm, has character, and seems to sit much better than any of the others. If I want a clean sound I use Modartt’s Pianoteq – a very nice modelled piano.

8) Avid Pro Tools 10. Just so you know, although I’m a Logic Pro aficionado, I’m also a trained Pro Tools user, and it’s good to see Pro Tools coming along so well. Avid have had the studio-recording side totally nailed for a long time (it’s the industry standard for recording in the studio), but they’re still catching up somewhat with everyone else in the compositional-features stakes. Also, now that you don’t need a piece of Avid hardware to run it, your setup gets simpler (and cheaper). Still a bit overpriced (especially as you have to pay quite a bit extra for much of the really cool stuff), but if you want to work with a variety of studios, you will probably need to use it at some point.

7) ArtsAcoustic Reverb. This algorithmic reverb is not only easy on the CPU, it sounds fantastic. I think our love affair with impulse-response reverbs is fading, because as good as they initially sound, they are inherently linear – the sound doesn’t change based on the level going in – so they can end up being a bit sterile. I think of them as “precision reverbs”. The ArtsAcoustic can still sound clean, but you can get some pretty twisted sounds out of it if you need to, or some gorgeous Lexicon-like warmth. I use it a lot for dark and twisted drum reverbs, and for clean and open vocal reverb.

6) Logic Pro 9. Notice it’s not at number 1: as much as it’s my main tool in the studio, and it IS pretty damn awesome (and ridiculously good value for money, BTW – especially as you can now buy it on the App Store for $200 USD), Apple have let it sit in the background for a while now, with very few updates and some bugs that have been there for several years. I’m hoping they release version 10 soon, without destroying what makes it so good – like they almost did with Final Cut Pro X. Runners-up: Ableton Live – you’d have to have your head in the sand not to notice this DAW (Digital Audio Workstation) increasingly dominating the market, although mostly in the DJ, electronica and live-performance realms – and Reaper, an inexpensive and increasingly fully-featured application that’s really taking off.

5) SoundToys Devil-Loc Deluxe. Actually the entire SoundToys native bundle is fantastic (and pretty well priced if you’re a student and can get the academic pricing), but I’ve found this particular compression-with-distortion plug-in essential for big fat drum sounds. You can get it pumping in a really good way, and it sure adds instant excitement to a drum mix.

4) D16 Group Toraverb. This plug-in reverb can get the biggest, widest, lushest chorused reverb sounds ever. It’s very impressive, and once you hear it, you’ll want it. I use it every time I need a huge sense of space and distance on something in my mix. Actually, D16 Group do some fantastic plug-ins – I have the rest of the Silverline collection and also make heavy use of Decimort (which can emulate the colouration of various older samplers) and Devastor (a multi-band distortion unit).

3) Slate Digital VCC (Virtual Console Collection). The idea with this plug-in is that you put it on every channel strip and/or the busses to simulate one of four (now five!) analogue mixing consoles. It’s very subtle per channel strip, but it somehow adds up to making a mix sound great and just “gel”. Runner-up to this is the very affordable Sonimus Satson Buss.

2) Celemony Melodyne Editor 2. When it comes time to transparently fix poor intonation in vocals – without the obvious side-effects that you might actually want for some styles of music – Melodyne is the one. It retains the nuances of phrasing and vibrato, and lets you fix just the gross pitch errors if you like, or you can go more extreme if you really want to. It’s also great for matching and creating backing-vocal lines and for repairing guitar tracks (got one string out of tune?), since it can now handle polyphonic material. My favourite use: fixing poorly-played bass lines, because you can quantize to a time grid as well as fix poor intonation on cheap basses. Celemony put out a bunch of products, including the multi-track Melodyne Studio, but I like this one as it’s a pretty full-featured plug-in that can also do ReWire. An absolute essential!

1) Anything by UAD. This was my big “Eureka” moment this year. I decided to buy the UAD-2 Solo Laptop card to get some more processing into my overstressed MacBook Pro. Here’s what I found: the UAD plug-ins sound so much better than any other versions of the same plug-in, and so very close to the real hardware units they’re modelling. You don’t need “golden ears” to tell the difference, either. It might have something to do with the way the plug-ins are up-sampled for processing, or it might be the ridiculously huge amount of detailed modelling they’ve done to recreate the vintage equipment so realistically. My favourites so far are the good old Pultec EQ – it really does just make things sound better, even without adding any EQ (although you probably will) – the Ampex ATR-102 reel-to-reel, the Fatso Jr/Sr for, well, fatness, and the SPL Vitalizer for adding character to synths. My credit card is still hurting from going a wee bit crazy on these plug-ins this year, but I don’t regret it.

Notable mentions: the free Michael Norris effects collection, for some quite radical granular processing options – especially useful for sound design. Some of the cool Waves plug-ins – for example the Kramer MPX reel-to-reel tape recorder and the Vocal Rider. Not cheap, but good. Xfer Records’ LFOTool – adds tweakable sync’ed modulation to just about anything; great for locking in, enhancing or creating grooves in any track. iZotope’s Stutter Edit – awesome for adding those extra crazy head-sounds to your mix and for creating some extra action when things get too boring – and you can play it in from a MIDI keyboard. The Sonnox collection – every single plug-in is useful and just sounds awesome, and they’ve dropped the prices so real people can now almost afford them. Cytomic’s “The Glue” – a really excellent analogue-modelled master-bus processor that you just set and forget.

The Lucky 13 Song Mixing Tips

Before I get started I just want to reinforce something I’ve mentioned in earlier posts – sometimes a reduction in parameters actually generates more creativity. Being aware of a set of limitations, or guidelines, can actually allow you much more creative control over your final mix. This could mean limiting the number of effects that you allow yourself to use, or, more obviously, only using a particular set of effects that suits the genre or style. If you have permission to do it, perhaps editing tracks or even removing “surplus” instrumentation or vocals is the first step.

Approach-wise, you ideally want all aspects of a song to reinforce each other and create a stronger impact. If you aren’t aware of what you’re doing, it’s very possible (in fact more common than you’d think) to get a generally nice balance of instruments that somehow doesn’t “gel” – you can hear everything, but it lacks emotional impact.

So here’s a bunch of ideas to think about next time you’re mixing a song – there are many more ideas and concepts to experiment with than these, but I stopped myself before the post became a novel.

1  Know what the song’s about. Clues are in the lyrics. Knowing what it’s about gives you the opportunity to amplify the concept rather than inadvertently fighting it. That doesn’t mean you have to “follow” the lyrics with the mix in a literal sense – you might do nothing at all in that regard, but at least you won’t be fighting the meaning of the song without even realising it, and when it comes to trying to think of creative mix directions, it’s yet another clue to help you.

2  Know the context of the music. What’s the genre or style of the artist? How does it relate to the artist’s identity? Being aware of this makes it much more likely that you’ll promote the artist’s identity and overall concept, and the artist will be more likely to appreciate what you do with the mix. For example, does the artist exemplify “authenticity”, where a raw, “character” sound with any intonation problems left unfixed is most desirable? Or is it about slick and smooth production?

3  Be adventurous. A mix is not a simple balance of instrument levels – it’s about featuring the various aspects that you think the listener would like to hear, or more accurately needs to hear, at any given section of the song. Pretend it’s a movie – how do you present each section of the song? Don’t be scared to go “over the top” with effects, fader moves and featured mix aspects – you can always tone it back if need be. Don’t be scared to turn the vocal up loud – trying to hide a weak vocal makes it even worse. Even ugly actors have to have close-ups in a movie to make it all work.

4  Think about texture and tone. It’s partly tone, partly level, partly how dominant something is in the mix. If you compress something, its texture changes. Listen out for it tonally, as a sound, rather than just checking its variation in level. How pervasive is it compared to everything else, regardless of its volume in the mix?
How does it link into the overall texture of the song? Textures are like a tonal colour palette – you probably don’t want to mix a neon green element in with some nice earth tones (remember there are no rules!), but then again you don’t want everything the same shade of beige.

5  It’s about melody. In even the most distortion-fest mixes, our human nature will use our built-in pattern-detecting algorithms to extract a melody from somewhere, whether it’s in the movement of the harmonics in a wall of guitar noise or in the groovy bassline. Make sure there’s one dominant melody at any given instant, or if there’s more than one, that they aren’t fighting each other and cancelling out.

6  The pocket. It’s more than something to put your wallet in. It’s that magic interaction of instruments when it all suddenly locks into a groove. Spend some time adjusting the relative timing of instruments to see if you can help the groove “gel”. You’ll know when it happens, because it’s magic and you’ll start moving with the music whether you want to or not. Note that Beat Detective and other forms of quantization can fight this effect – the pocket is “felt” rather than being on an exact grid. Saying that, if the playing is too loose then a timing grid is definitely a step up.

7  Keep it simple, stupid. Less is more. These are fundamental truths, even if over-familiarity often leaves them as meaningless statements in our minds. Think of the mix as a photo – the more people you want in the photo, the smaller they’ll all have to be. Don’t be scared to bring the main things to the foreground and push other things back to the point of blurriness, or even hide them behind the main elements. A good mix is not about individual band members’ egos, it’s about the overall blend. When you think about it, the individual band members have the least idea of what the mix should sound like – they each hear a completely different version of the mix depending on where they stand or sit when they perform.

8  Three “Tracks”. Back in the olden days, after mono and stereo, came three tracks. One was for “Rhythm” (which could include drums, bass, percussion and rhythm guitar, for example), one for Vocals, and one for “Sweetening” – things like brass, strings, lead instruments etc. This strategy is still a great one to keep in mind when mixing. It forces you to think about your rhythm section as one single thing that has to gel: the bass needs to lock into the pocket with the kick drum. Sweetening nowadays is whatever else you need outside rhythm and vocals. Think carefully about which mix elements fit into each of these three roles, and if all three are already populated, maybe it’s time to do some cutting. Note that some instruments, such as guitars, might switch between roles depending on what they’re playing at the time – rhythm, fills or lead.

9  One thing at a time. Rather than thinking of one of the aforementioned three tracks as just “Vocals”, it’s perhaps better to look on it as “Melody”. The melody line often chops and changes between vocals, instrumental fills and solos. If you think of these three elements as playing a similar role at different times in the song, it becomes easy to decide on levels and sounds between them. It also highlights that you shouldn’t have any of those melodies crossing over and fighting each other at any point – keep ’em separated!

10 Getting the bass sitting right is tricky – especially when it needs to work on both large and small speaker systems. Try mixing the bass while listening on the smallest speakers you have, to get it sitting at the right level. Then adjust the tonal balance while listening on bigger speakers, to rein any extreme frequencies back in. Sometimes you might need to layer the bass sound to get this to happen effectively.

11 Don’t over-compress everything. Listen to the TONE while compressing each instrument and keep it sounding natural if possible. Pay close attention to the start and end (attack and release) of the notes of each instrument you compress. Your final mix should sit at an average (RMS) level of about -12 to -18dBFS, with peaks no higher than around -3dBFS. Leave the final compression and limiting to the mastering engineer. Remember to leave dynamic range in the mix – contrast! Our ears need some sort of contrast to determine what’s loud and soft. If you hammer all the levels to the max, you may as well just record the vacuum cleaner at close range and overdrive the mic/preamp. Hmmm. Might have to try that.

12 Easier than automation. These days it’s easy to spend inordinate amounts of time tweaking automation on instruments or vocals between different sections of a song (eg adding more reverb to the vocals in the chorus, or adjusting rhythm guitar levels in the bridge). With today’s digital audio workstations, extra tracks are usually in ready supply, so rather than fluffing about with automation for a specific section of the song, why not just move that part onto a duplicated track instead, then make whatever changes you need to suit that section? Much quicker than continually mucking around with automation on the same track. By the way – make sure your mix is dynamic. A mix is a performance in itself, not a static set of levels.

13 Use submix busses for each element of the mix – eg a drum subgroup, guitar subgroup, vocal subgroup etc. Rather than sending all your drums straight to the L/R or stereo mix, first send them all to an Aux channel, then send that Aux to the stereo mix. (Tip: disable solo on the Auxes.) This makes it simple to do overall tweaks to your mix even after you’ve automated levels on individual tracks.
You do need to be careful about where your aux effects returns come back, though, as their balance might change slightly when you adjust the instrument subgroups.
And hey, what about creating just three subgroups – Rhythm, Melody, Sweetening? Let me know if it works ;o)

Sources: Stephen Webber, Bob Katz, Mixerman, Mike Senior.

Relieving Threshold Shift (Temporary Hearing Loss) with Acupressure

This is a handy tip for those moments when you’ve gone to see a loud band and forgotten to take earplugs – one that I’ve used numerous times to “reset” my ears after a gig. I was shown this trick about 20 years ago by a friend and have been using it ever since, and in preparing this blog I’ve also found lots of supporting evidence on the web that reinforces the basic concept. It has definitely and audibly worked for me and for others I’ve shown it to, and it really can’t hurt to try. Actually, it does hurt a bit when you find the right spot to press, and I have to admit it looks a bit stupid when you’re doing it, so best not to do it while actually walking out of the gig – at least wait till you’re in the car where nobody can see you.

Press and hold the area shown in the diagram – it’s in the hollow just in front of the ear lobe. If you press the right spot it will feel tender, and after a few minutes you should feel the “cotton-wool” feeling diminish and your hearing begin to return.

Threshold shift, for those who don’t know, is the muffled high end, pressure or ringing in your ears that you notice as you walk out of a loud gig. In the long term this is extremely dangerous, and it has even more significance nowadays with long-term headphone or ear-bud use.

Long-term exposure to loud sounds:

What happens is that when loud noise is perceived by the brain, it attempts to protect your hearing by tightening the muscles inside the ear in order to reduce the amount of noise passing through the ear mechanism. A fantastic system really, but not designed for a lifetime of loud music or industrial noise.

This muscle constriction can also restrict blood flow to the inner ear, and if it happens repeatedly it can cause long-term damage to the nerve cells in the inner ear, which eventually end up dying. Fatally. As Motorhead almost said – “Killed by Deaf”.

Seriously – long-term noise exposure can cause permanent hearing damage.

This acupressure trick relieves the constriction of the muscles around and in your ear, and hence allows full blood flow again to the nerves in the ear, hopefully extending the life of your hearing a bit longer. Obviously it won’t suddenly reincarnate the dead nerve cells in your inner ear, but if used early and often enough it will hopefully at least minimise the damage somewhat. No guarantees of course.

* Threshold shift and the associated long-term hearing damage aren’t the only ways to hurt your hearing. I have met people who have lost hearing from a single exposure to a loud impulse sound (someone pressing the wrong button on the mixing console and blasting maximum volume through the headphones, or a massively loud click through a PA system at a gig), as well as others who have ended up with tinnitus (ringing in the ears), which can last FOR THE REST OF YOUR LIFE. Apart from these problems, other odd things can happen related to your inner ear – for example, an upset sense of balance. Not much fun – I had continuous vertigo for a few days when I had a nasty flu last year. No laughing matter when you ride a motorcycle to work.

Noise vs Music – I’ve often pondered this while being assaulted by a band that sounds like crap: as long as you perceive the music as, well, music, your brain isn’t trying to shut your ears down, but if the band sucks and it sounds like obnoxious noise, they’re effectively killing your ears! Obvious solutions – drink more alcohol to thin the blood and keep that oxygen getting to the ear cells, or try to psych yourself into believing the band is awesome, thereby fooling your own brain.
My wife says “why don’t you just leave?”, but I view that as defeatist.

Factoid 1: Research disputes what I just said. Studies have shown that musicians suffer as much hearing damage as those exposed to industrial noise of an equivalent level. I’d argue that musicians aren’t ever exposed only to music they like (we usually all have to share gigs with other bands), or only to music at all, so it’s hard to prove this either way without adequate methods or controls.

Factoid 2: Published Acceptable Exposure Time vs Sound level graphs are based on industrial noise, not music. At 110dBA your acceptable daily exposure time is 1 min 30 seconds!
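
For the curious, those published charts generally follow an exchange-rate formula. Here's a minimal sketch assuming the NIOSH-style model (85dBA reference, 3dB exchange rate, 8-hour day) – the model is my assumption, but it lands right on that 110dBA figure:

```python
# Sketch of the exposure-time maths, assuming the NIOSH-style model
# (85 dBA reference level, 3 dB exchange rate, 8-hour working day).
def max_daily_exposure_minutes(level_dba: float) -> float:
    """Permissible daily exposure in minutes for a given A-weighted level."""
    return 480 / 2 ** ((level_dba - 85) / 3)

for level in (85, 94, 100, 110):
    print(f"{level} dBA -> {max_daily_exposure_minutes(level):.1f} minutes")
# 110 dBA comes out at roughly 1.5 minutes -- the 1 min 30 s quoted above.
```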

Other (more serious) solutions:
Obviously, considering all this, the best solution is to avoid loud sound or wear appropriate hearing protection. Go get some proper earplugs – the custom-moulded “musician’s” earplugs are pretty darn good: they’re relatively “flat” and uncoloured, but there are other slightly cheaper options as well (custom-fitted plugs can be quite expensive, though they last for years with careful use). The problem I’ve found with them is that you can truly hear how out-of-tune the singer is when watching a live band, which might slightly ruin your enjoyment or “perception of talent”. And I have to say, I’ve been to gigs wearing -15dB custom plugs and my eardrums have still been distorting painfully at times. You can get plugs with -25dB or more reduction, and some come with both inserts as options so you can swap them.

And finally, an observation – isn’t it weird how society is au fait with people wearing glasses to correct their vision, but wearing a hearing aid has a stigma attached to it? You see graphic artists, photographers, directors and numerous other industry professionals (who rely on visual acuity!) wearing glasses, but would you trust an audio engineer with a hearing aid? Hmmmmm.
Not that I need one YET – just paving the way for the future.

References:
Acupressure Points

The Apogee GIO and Mainstage Experiment Part 2

Well, I got through my solo gig in one piece and with reasonable success, but some things became immediately apparent that I will definitely change for next time.


My Setup:

17″ Apple MacBook Pro (mine’s an older one) running MainStage 2

PreSonus Firestudio audio interface. It uses a FireWire connection, so has lower latency than most USB-based interfaces.

Fender guitar and for vocals a Shure Green Bullet microphone plugged into the PreSonus
(I usually have another Shure Beta58 mic set up for percussion loops, but I didn’t bother for this gig).

Novation 49SLII keyboard controller connected (and powered) via USB to the laptop – for playing the occasional keyboard line and controlling levels etc

The Apogee GIO connected (and powered) via USB to the laptop for playing backing and loops, with my expression pedal connected to it for guitar bits.

Come performance time, the laptop conformed to Murphy’s law of gigs and played up, despite being solid at every rehearsal. I had to boot it three times before it played nice – including one forced shutdown when it froze up.

The Novation keyboard comes with its own Automap software, which runs automatically when you start a MIDI-compatible application so it can act as an intermediary between the application and the keyboard. In this case, though, it locked up searching for the Novation (which was plugged in, with all its lights going) – forcing a restart.
It goes without saying that this felt agonizingly long while standing on stage with my guitar, waiting to play.

Also, for some reason the GIO didn’t recognise my expression pedal – a bit of a major, since I need it to cross-fade between some of my guitar tones. I have it set up so it either cross-fades between two separate channel strips holding, for example, verse and chorus guitar patches (rather than doing a complete patch switch, I often like to mix a bit of “clean” guitar in with the “distorted” guitar, as it adds clarity), or turns up a second “layering” channel strip with some pad-like or weird character-guitar effects at appropriate times in the song.
I suspect the GIO likes to see the expression pedal plugged in as it fires up, and on the third laptop reboot it finally discovered it (after I had decided it must be the cable!). The GIO doesn’t have a power switch – it just turns on when you plug it in.

The Novation and the GIO both get their power from their USB connections, and although that normally doesn’t seem to make any difference, I made sure to turn on the Novation well after the laptop booted on that third attempt. At home I usually also have a computer keyboard, a wireless Bluetooth mouse dongle and an external hard drive all running happily off USB power, so the lappie should be able to run just the Novation and GIO.

Mix Issues

Once it was all up and going, the issues were mainly mix-based.

The trick, of course, is getting something that works out front as well as in the foldback monitors, and although it actually sounded fine in the foldback, the vocals were apparently too quiet out front.
Turning them up brought the mic a bit too close to feedback, which meant turning the backing down instead, which made some of the backing just a bit TOO quiet to hear. One song had a triangle rhythm intro that ended up way too quiet; I got out of sync and needed to restart the song. A wee bit embarrassing.

So, before the next gig, the main thing I will do is:

Create separate audio outputs to the PA system for the different mix elements.

Or at the very least create a separate physical output for the vocals, since they’re one of the most critical things to get happening properly in both monitors and out front.


For the gig I did actually create separate subgroups for each type of sound (Vocals, Guitars, Drums, Backing, Keys, FX) so I could use the nifty little faders on the Novation to balance the overall mix, but it wasn’t enough. It has to be a separate output from the audio interface into its own channel on the PA mixer.

Backing Tracks

Apart from that, the only other niggles I had were with the backing tracks – their start times were a little inconsistent due to the too-quiet monitoring.

I have it set up so I can switch between sections of a song with the GIO using the “wait for next bar” setting – meaning you have to hit the foot-switch within the last bar before you want the next section to start. If you’re a fraction too early or late, the whole backing is out by a bar, as the toy calculation below shows.
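
Here's a little sketch of that logic (my own illustration, nothing to do with MainStage's actual code) showing why a fraction of a second matters so much:

```python
import math

def section_change_time(press_time_s: float, tempo_bpm: float,
                        beats_per_bar: int = 4) -> float:
    """Toy model of 'wait for next bar': the section change lands on the
    first bar line at or after the moment you hit the foot-switch."""
    bar_s = beats_per_bar * 60.0 / tempo_bpm  # length of one bar in seconds
    return math.ceil(press_time_s / bar_s) * bar_s

# At 120 BPM a bar lasts 2 s. Press at 7.9 s -> the change fires at 8 s,
# but press a fraction late, at 8.1 s, and it fires at 10 s: a whole bar out.
print(section_change_time(7.9, 120), section_change_time(8.1, 120))
```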

I’m still not sure of the ideal way to set these backing tracks up. I’ve tried having the entire backing for a song as one track, but that leaves no flexibility for jamming out on sections, or for padding things out a bit if you stuff up.

I’ve also tried having just the one backing track with some song section markers that you can cycle within when necessary, and to be honest that wasn’t too bad, so I may go back to that method.

The beauty of the way I was doing it this time though is that you can jump to any section of the song if you feel like it, but that flexibility comes with its own risks and problems.

The thing is to try to keep it all as simple as possible for the performance itself, so I’ll need to experiment a bit more with the ideal method.


Finally, I’d like to come up with a better system for using Ultrabeat drum machines in my setup and find a way to switch between patterns simply – I might map the bottom few keys on the Novation for that purpose, or perhaps assign some of its many buttons.

Overall, I’m pretty pleased with the whole setup apart from those few tweaks I’ll need to make.
I really like MainStage 2 – it’s an incredibly powerful live performance program with only a few minor bugs that will hopefully be sorted soon.

The Apogee Gio and Mainstage Experiment

I have a solo gig coming up and have decided that being yet another singer-songwriter is boring as hell. Especially as I haven’t been blessed with one of those voices that could make singing the shopping list sound awesome.

So I need to use everything in my power to add value and variety to the gig – hence the MainStage experiment.

I wanted to be able to go from simple vocal and guitar to full-on backing based on my recorded songs, while keeping it all “live” and interactive so I can jam it out a bit if the opportunity arises.

The beauty of MainStage 2 is that it’s basically the guts of Logic Pro bundled into an application for performing live. That means you get the same instruments and effects, plus any of your third-party plug-ins as well.


It means you can also add bounced backing tracks for your songs – with markers that you can loop around or jump to. The markers allow you to see what song section’s coming up next in case you forgot.

And there’s a cool Looper plug-in that lets you recreate the current trend of building up your own musical or percussive layers during a live set, the way those dinky guitar loop pedals do. You just play something in, hit the pedal and it loops around while you play something over the top – or you can keep recording more layers, undo the last one, or clear it all and start fresh.
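
The layer logic itself is simple enough to sketch in a few lines. This is just a toy model of the idea described above (my own illustration, not MainStage's implementation):

```python
import numpy as np

class Looper:
    """Toy model of a loop pedal: record layers, undo the last, clear all."""

    def __init__(self, loop_len: int):
        self.loop_len = loop_len            # loop length in samples
        self.layers: list[np.ndarray] = []  # one array per recorded pass

    def record(self, take: np.ndarray) -> None:
        layer = np.zeros(self.loop_len)
        n = min(len(take), self.loop_len)
        layer[:n] = take[:n]                # overdub a new layer
        self.layers.append(layer)

    def undo(self) -> None:
        if self.layers:
            self.layers.pop()               # drop the most recent layer

    def clear(self) -> None:
        self.layers.clear()                 # wipe everything, start fresh

    def render(self) -> np.ndarray:
        # Everything recorded so far plays back summed together.
        return np.sum(self.layers, axis=0) if self.layers else np.zeros(self.loop_len)
```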


MainStage allows you to create your own user-interface – you can customise what you are looking at on the computer screen, and also create objects that will be controlled by whatever pedals, buttons, knobs, faders or keyboards you have connected to it in the real world.

Hence me also getting an Apogee Gio – it gives me 12 buttons on the foot controller that I can assign to whatever I need per song, and I can also plug in my expression pedal to do my chucka-chucka-wah-wah thing.


The Gio also has a built-in audio input for guitar or bass, which actually sounds great. Apogee are renowned for their great-sounding converters and it’s nice to find even their cheap-ish ones are good. Definitely a good way of getting your instrument into MainStage.

The only hassle I had was when I wanted to plug in a microphone as well as my guitar – meaning I had to use a second audio interface, in this case an M-Box Pro.

Apple’s OS X allows you to combine two separate interfaces into an aggregate device so they appear as one source to the audio application, but no matter which way I did it, they didn’t play nice with each other, eventually degrading the audio quality.

So I had to ditch the awesome sound of the Apogee for the more average M-Box one.
Oh well – at least the Gio buttons still worked and looked pretty.
The little LED indicators change color to suit what the pedals are mapped to in MainStage – ooooh aaaaah….

When you use the Gio with Logic, and apparently GarageBand as well, the foot controls are automatically mapped to Record, Play, Rewind, Fast Forward etc for hands-free recording which is a bonus.

Build quality of the Gio is great by the way – it’s a solid little unit – quite heavy in fact, so it’s going to stay put on stage, and feels fairly indestructible.

So, for the moment I’m still wrestling my way through customizing MainStage for the upcoming gig – there’s still a trick or two I need to learn. There’s a Concert/Set/Patch hierarchy that is important to get your head around (otherwise, for example, the backing stops when you change guitar patches), and the synchronisation options for backing tracks and loops have some quirks.

But I’m getting there bit by bit, so I’ll let you know how it goes…

Links:
The Gio
MainStage

Digital Recording Levels – a rule of thumb

Okay, I mentioned this as one of my tips in a previous post, but there’s confusion and many heated debates out there about the ideal level to record into your digital audio workstation.

I’m just summing up the information readily available elsewhere (if you are willing to wade through endless online debates and the numerous in-depth articles), for people who just want to know right here and now what the best level is to record into their digital audio systems.

So I’m going to start with a quick, easy rule of thumb for those people, followed by a little more detail explaining why I recommend these numbers.

I apologize for simplifying some of the math – but if you’re really interested there are plenty of texts and in-depth articles available with a bit of searching. I’ve included a few references and links at the end of the article.

The rule of digital thumb

  1. Record at 24-bit rather than 16-bit.
  2. Aim to get your recording levels on a track averaging about -18dBFS. It doesn’t really matter if this average floats down as low as, for example -21dBFS or up to -15dBFS.
  3. Avoid any peaks going higher than -6dBFS.

That’s it. Your mixes will sound fuller, fatter, more dynamic, and punchier than if you follow the “as loud as possible without clipping” rule.

For newbies – dBFS means “decibels Full Scale”. The maximum digital level is 0dBFS, above which you get nasty digital clipping, and levels are stated as how many dB below that maximum you are.

Average level is very important – people hear volume based on the average level rather than the peak. Use a level meter that shows both peak and average/RMS levels. Even better if you can find a meter that uses the K-System scale.
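
If you're wondering what peak and average levels actually are, numerically, here's a minimal numpy sketch (my own illustration, with full scale taken as 1.0):

```python
import numpy as np

def peak_dbfs(x: np.ndarray) -> float:
    """Highest sample level, in dB relative to full scale (1.0)."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x: np.ndarray) -> float:
    """Average (RMS) level in dBFS -- closer to how loud we perceive things."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# A full-scale sine wave peaks at 0 dBFS but only averages about -3 dBFS:
t = np.arange(48000) / 48000.0
sine = np.sin(2 * np.pi * 440 * t)
print(peak_dbfs(sine), rms_dbfs(sine))  # ~0.0 and ~-3.01
```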

Some common questions:

Q: Why do we avoid going higher than -6dB on peaks? Surely we can go right up to 0dBFS?

Answer 1 – the analogue side.
Part of the problem is getting a clean signal out of your analogue-to-digital converter. Unless you have a very expensive professional audio interface (or you like the sound of the distortion it makes when you drive it hard), you’re going to get some non-linearities (ie distortion) at higher levels, often relating to power-supply limitations and slew rates.

Most interfaces are calibrated to give around -18dBFS/-20dBFS when you send 0VU from a mixing desk to their line-ins. This is the optimum level!
-18dBFS is the standard European (EBU) reference level for 24-bit audio and it’s -20dBFS in the States (SMPTE).

Answer 2 – the digital side.
Inter-sample and conversion errors. If all we were ever doing was mixing the levels of digital signals, we would probably be fine most of the time going up close to 0dBFS, as most DAWs can easily and cleanly mix umpteen tracks at 0dBFS.

EXCEPT there are some odd things that happen:

  • Inter-sample errors can create a “phantom” peak that exceeds 0dBFS on analogue playback.
  • Inserted plug-ins can cause internal bus overloads, which can add unpleasant artifacts as your mix progresses and more plug-ins pile up. Plug-ins can also generate internal peaks of up to 6dB – even if you’re CUTTING frequencies with an EQ, for example.
  • Digital level meters on channel strips seldom show the true level – they don’t usually look at every single sample that comes through, so it’s possible for levels to be up to 3dB higher than the meters display.
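
To make that first point concrete: a band-limited signal can swing higher between samples than any individual sample value. Here's a rough numpy/scipy sketch of how a true-peak meter estimates this by oversampling (my own illustration, not any particular product's algorithm):

```python
import numpy as np
from scipy.signal import resample

def true_peak_dbfs(x: np.ndarray, oversample: int = 4) -> float:
    """Estimate the inter-sample ('true') peak via band-limited oversampling,
    approximating what a DAC's reconstruction filter will produce."""
    upsampled = resample(x, len(x) * oversample)
    return 20 * np.log10(np.max(np.abs(upsampled)))

# A sine at a quarter of the sample rate, phased so every sample lands at
# +/-0.707: after normalising, the sample peak reads 0 dBFS, but the
# reconstructed waveform actually peaks about 3 dB over full scale.
n = np.arange(4800)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)
x /= np.max(np.abs(x))
print(true_peak_dbfs(x))  # ~ +3 dBFS
```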

Keeping your individual track levels a bit lower avoids most of these issues. If your track levels are high, inserting trim or gain plug-ins at the start of the plug-in chain can help remove or reduce these problems. Use your ears!

Q: Aren’t we losing some of our dynamic range if we record lower? Aren’t we getting more digital quantization distortion because we’re closer to the noise floor?

Short answer. No.

Really, both of these questions sort of miss the point, as we shouldn’t be boosting our audio up to higher levels and then turning it down again. So there’s nothing to be “lost”.

It’s the equivalent of cranking the gain right up on a mixing desk while having the fader down really low, giving you extra noise and distortion that you didn’t even need. You should leave the fader at its reference point and add just enough gain to give you the correct audio level. That’s what we’re trying to do when recording digital audio as well – nicely optimising our “gain chain”.

The best way to illustrate this is to throw a few numbers up:

Each bit in digital audio equates to approximately 6dB.
So 16-bit audio has a dynamic range of 96dB.
24-bit audio has a range of 144dB.

With me so far? Probably doesn’t mean a lot just yet.

Now, let’s look at the analogue side where it becomes slightly more interesting.

The theoretical maximum signal-to-noise ratio in an analogue system is around 130dB.
Being awesomely observant, you picked up immediately that this is a lot less than 24-bit’s 144dB range!

In fact, the best analogue-to-digital converters you can buy are lucky to even approach 118dB signal-to-noise ratio never mind 144dB.

So – let’s think about this.
If we aim to record at -18dBFS, how many bits does that give us?

24 bits minus 3 (each bit is 6dB remember). That’s 21 bits left.
What’s the dynamic range of 21 bits? 126dB
What’s the dynamic range of your analogue-to-digital converter again? 118–120dB at best.
Less than 20 bits.
One bit less than our 21-bit -18dBFS level.
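
Here's the same back-of-envelope arithmetic as a few lines of Python, using the rounded 6dB-per-bit figure from above:

```python
DB_PER_BIT = 6  # ~6.02 dB per bit, rounded as in the text above

print(16 * DB_PER_BIT)  # 96 dB  -> dynamic range of 16-bit audio
print(24 * DB_PER_BIT)  # 144 dB -> dynamic range of 24-bit audio

bits_at_minus_18 = 24 - (18 // DB_PER_BIT)  # recording 18 dB down costs 3 bits
print(bits_at_minus_18)                     # 21 bits still in use
print(bits_at_minus_18 * DB_PER_BIT)        # 126 dB -- beyond a ~120 dB converter
```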

The conclusion is that when recording at -18dBFS you are already capturing at least one bit’s worth of the converter’s noise floor/quantization error, and if you turn your recording levels up towards 0dBFS, all you’re really doing is turning up the noise along with your signal.

And most likely getting unnecessary distortion and quantisation artifacts.

Unless you actually like the sound of your converter clipping, there’s NO technical or aesthetic advantage to recording any louder than about -18 to -20dBFS. Ta-da!

Mix Levels

If you’ve been good and recorded all your tracks at the levels I recommended, you probably won’t have any issues at all with mix levels.

The main thing is to make sure your mix bus isn’t clipping when you bounce it down.

Most DAWs can easily handle the summing of all the levels involved, even if individual channels are peaking above 0dBFS. In fact, even if the master fader is going over 0dBFS, there’s generally no problem until the signal reaches the analogue world again, or the mix is bounced down.

Most DAWs mix in floating point and have headroom in the order of 1500-2500dB “inside the box”. You can usually just pull the master fader down to stop the master bus clipping.
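
That enormous headroom comes from the floating-point maths. Here's a quick numpy sketch of why a float mix bus running "over" doesn't actually destroy anything until it hits the real world (my own demonstration, using 32-bit floats for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
track = rng.standard_normal(48000).astype(np.float32)  # a test "track"

bus = track * np.float32(64.0)             # drive the mix bus absurdly hot
print(20 * np.log10(np.max(np.abs(bus))))  # far above 0 dBFS "inside the box"

master = bus * np.float32(2.0 ** -7)       # pull the master down (64 / 128 = 0.5)
# Power-of-two scaling only shifts the float exponent, so the audio comes
# back under full scale completely intact -- nothing was clipped or lost:
print(np.array_equal(master, track * np.float32(0.5)))  # True
```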

Saying that, it’s still safer if you keep your levels under control.
Like I mentioned before, a key problem is overloads before and between plug-ins. If your channel or master level is running hot and you insert a plug-in, you could be instantly overloading that plug-in’s input, depending on whether it sits pre- or post-fader. So use your ears, and make sure you’re not getting distortion or weird things happening on a track when you insert and tweak plug-ins.

Try to use some sort of average/RMS metering, and try to keep your average mix level (ie on your Master fader) between about -12 to -18dBFS, with peaks under -3dBFS.

Mastering will easily take care of the final level tweaks.

To conclude – when recording at 24-bit, there is a much higher possibility of ruining a mix through running levels too high than having your levels too low and noisy.

As Bob Katz says, if your mix isn’t loud enough – just turn the monitor level up!

PS – say “no” to normalizing. That’s almost as bad as recording too loud.

References:
Bob Katz’s website, plus his excellent book “Mastering Audio – the Art and the Science”.
Paul Frindle et al. on GearSlutz.com.
A nice paper on inter-sample errors.
A free SSL inter-sample meter (includes a nice diagram of inter-sample error).

Transferring MIDI and Audio sessions from Logic to Pro Tools in about 5 minutes.


It’s pretty common to have to transfer a song written in Logic into Pro Tools for a client to mix (or remix). Here’s how to do it as fast as possible with the least amount of hassle.

Audio Files Only

If all you need to supply is audio files for transferring to Pro Tools (usually the most common requirement), it’s a very easy 5 steps (MIDI files are trickier – we’ll get to those later).
All files will start at the same point and be as long as they need to be.
Files won’t include any Bus/Aux effects – only what’s on each channel strip.
Files are PRE-fade (ie the equivalent of the fader sitting at 0.0), so they may be quite loud.

1. Name your Logic tracks intelligently (double click on the track header to give it a useful name – this is what your file will be named)

2. Make sure the length of your song is set to about the right length – ie not 200 bars if it’s only 20 bars long. It’s no biggie if you forget this one, but you’ll be waiting longer than you need to for the files to bounce.

3. Delete any unused tracks and/or mute unwanted regions.

4. Select menu File-Export-“All Tracks as Audio Files”.


5. Select Wave and 24-bit (unless something else is desired). Set Normalize to “Overload Protection Only” (this is not your typical “normalize” function – it just makes sure your channel-strip level will never overload). Make sure you know where you’re bouncing to – the default is the “bounce” folder within the same session folder. (You don’t have to enter any file names.) Hit “Save”. All done.


Easy huh?

MIDI File Export

Exporting MIDI tracks as MIDI files is a bit fiddlier than creating audio bounces, as many of the processes in Logic, such as region Quantize and Transpose, are “real-time” and need to be rendered into the MIDI track itself before exporting as a Standard MIDI File.

Do this (assumes standard Logic key commands):

1. Select all MIDI regions you’re going to export as a file.

2. Press “Control N” (normalises any region parameters for the selected regions – eg Transpose).

3. Press “Control Q” (normalises any Quantize parameters for the selected regions).

4. Press “Control L” (turns any loops into copies).

5. Press “Shift =” (merges the copies and other regions into a single region on each track).

6. Name each region with the text tool (you’ll thank me later).

7. Select menu File-Export-“Selection as MIDI file”. Name your file (eg blah.mid), hit Save, and you’re done.


Importing into Pro Tools

Now to bring these shiny new audio or MIDI files into Pro Tools.

The easiest way is to create a new, empty Pro Tools session, then drag your files directly from the “bounce” folder in Finder and drop them into the empty Edit window in Pro Tools. PT will now import the files and automatically create the appropriate track for each file.

Logic 9 – using Pedalboard in parallel mode for fat Bass and Guitar sounds

A little while back I wrote a blog article about cool things to do with multi-band compressors.

One of the things I discussed was how to use the crossovers built into one of these plug-ins to separate the low and high frequencies of, for example, a bass track, so that distortion can be added to the top end of the bass without robbing the fat bottom end.

Well now with Logic 9’s new Pedalboard, you can easily add some grainy distortion to the Bass track without thinning the sound by using the distortion pedals inserted in parallel mode.

Pedalboard is a great new plug-in added in the latest version of Logic, and it includes some great-sounding pedals that can be custom-assembled into complete pedalboards. (You can even map individual pedals to controllers with built-in macros, but we won’t cover that in this article.)

By dragging, for example, a Distortion pedal from the selection box on the right into the main pedalboard, then adding a Splitter pedal, you can then click on the name above the Distortion pedal to toggle it between series and parallel modes.

Series means the whole Bass sound goes through the distortion pedal, parallel means the distortion pedal is blended with the original dry Bass sound.

What’s even better is that you can switch the Splitter pedal into “Freq” (Frequency) mode. This allows you to select what range of frequencies goes into the parallel chain. In my example, I’ve set it to send from 1.5kHz upwards. (Hint: to see this exact value, I temporarily switched the plug-in “View” from “Editor” to “Controls”.)

When you insert a Splitter pedal, it automatically inserts a Mixer pedal at the end of the chain so you can blend the two parallel paths back together again, in whatever proportion you desire.

Here’s another tip – if you’ve recorded your electric guitar straight into Logic via your audio interface and are then adding effects in Logic – try using the parallel mode to blend your clean electric guitar with the distorted version on the other side of the parallel chain. This can give your wall of distorted guitars some extra clarity.

12 Tips for improving the quality of your recordings


1. When recording to digital – keep your levels a bit more conservative. Aim for -18dBFS when recording at 24-bit. And at 16-bit? Best to just stick to 24-bit. Don’t worry about levels looking low on the meters, and don’t worry about “having less bits available”. You’re still getting 21 bits, which is about the maximum you can actually encode from the analogue side anyway. You’re not losing anything, and you’re getting decent digital headroom and much bigger/more dynamic sound. Try it!

2. The best EQ you’ll ever get is on the end of the microphone. Spend time getting an awesome sound from the microphone itself, and your mixing will be much easier. Get the mic/instrument position nailed and try different mics if the sound’s not working for you. Omnis are awesome. Don’t think the most expensive mic is always the best, either – the humble Shure 57 and Sennheiser 421 are more than just drum mics.

3. Don’t over-compress everything. Be judicious when you compress – be aware of what you’re trying to achieve. Are you evening out the performance of a bass track? Or compressing the drums to get a particular texture? Don’t just do it to “turn it up” – that’s what the faders are for. If you want your overall mix to sound louder, get the mastering engineer to do it. Over-compressing will rob your song of punch and fatness.

4. Set the compressor release time so it works with the rhythm of the track. Set it as long as possible, but short enough that the gain reduction recovers to unity before the next beat or phrase. Then fine-tune it so it adds to the groove. It’s tempo-based.
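
If you want a starting point rather than pure trial and error, you can derive one from the tempo. A hedged rule-of-thumb sketch (the numbers are my own assumptions – tune by ear from here):

```python
# Rough starting points for a compressor's release time, derived from tempo.
# The 0.9 factor (my assumption) backs the time off slightly so the gain
# reduction fully recovers just before the next hit.
def release_ms(bpm: float, beats: float = 1.0) -> float:
    return (60_000.0 / bpm) * beats * 0.9

for beats in (1.0, 0.5, 0.25):
    print(f"{beats} beat(s) at 120 BPM -> {release_ms(120, beats):.0f} ms")
# Roughly 450 ms, 225 ms and 112 ms respectively -- then fine-tune by ear.
```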

5. Work with the song arrangement. The maximum volume in any given song is divided into however many sounds/instruments you have playing at the same time. 20 small guitars do not usually sound as impressive as one big guitar. (They might have an interesting texture though). The instruments in a 3-piece band will sound bigger than those in a 12-piece band UNLESS you deliberately leave space for each instrument at different parts of the song. Don’t be afraid to cut things out, or to have musicians not play at various points – which leads into…

6. Create contrast. On the subject of arrangement – take a leaf out of Nirvana’s songbook – create big contrasts between, for example, verses and choruses. “Loud” only sounds loud if it’s got some “quiet” to compare against. Another reason to watch your compression, too. Try subtly easing down the rhythm guitar level as you go through the verse, and then suddenly bring it back up to the original level for the chorus. Sounds loud again, doesn’t it?

7. Commit. Don’t record 70 takes of a vocal track and then edit it later. Why didn’t you just keep doing punch-ins until it was right? Now you’re going to have to spend 6 hours trying to edit vocals when you could have got a decent take in probably an extra half-hour. Murphy’s law will also make sure that NONE of those 70 takes contains a good first line of the third verse.

And if you think that the rhythm guitar sounds perfect with that grottelflange pedal on it – record it like that! If you’re paranoid – capture both versions – and keep the clean guitar track in a backup session.
In other words – don’t defer all your decisions till the mix – make a call and go with it.

8. Be daring. Bands don’t usually become famous for sounding just like other bands (maybe in the short-term). They become famous for being unique. If the band sounds like everyone else, you’d better be trying hard to find something unique in there and be highlighting it. Or find a unique way to present them in the recording by your approach. Don’t be scared to go “over-the-top” with effects – you can always make them more conservative if you have to, but it’s almost impossible to go the other way once you’re used to the sound you have.

9. Err on the side of performance. There’s magic in a good performance. Does it give you goose-bumps? Better to have a piece of music that moves you than something that’s technically perfect but “cold”. This is where an experienced band can nail it – they can give a good performance early-on, before they get bored. By the way – don’t run-through the whole song when sound-checking otherwise the performers get stale before you’re ready. And why weren’t you recording already anyway!?!?!

10. Highlight character. Often it’s the imperfections that make our ears prick up – ideally, though, the imperfections shouldn’t be big enough to ruin the song. Have you ever thought the demo of a song was better than the final recording? What made the demo unique? Don’t try to make every instrument “perfect”. Don’t EQ instruments while they’re soloed – you’ll end up trying to make everything sound fat and full, which adds up to “bland”. Try to make at least one sound unique in the mix.

11. The mixdown is a performance too. If the levels are static in your mix, it’s going to sound boring. The human brain is wired to detect change, so you’d better have some things changing through the song to keep the listener’s brain stimulated. If you have an interesting arrangement, you probably don’t need to worry so much about, eg, levels changing through the mix, but if your mix lacks contrast, you’d better be riding those controls. Think of the song like a movie – what’s the camera looking at now?

12. Use your ears – not your eyes. One of the dangers of digital recording is that we can see what the waveform looks like. And what the levels look like. And what the EQ curve on the plug-in looks like. Turn off the display when you’re doing your critical listening. Don’t move all the drum beats and bass and guitar perfectly in time – they’ll sound tighter but thinner. Don’t tweak your EQ until it “looks” better. Have you noticed how you notice things differently while you’re bouncing the final mix?

Songwriting – can it be taught?

I just spent the weekend at a songwriting workshop by Jason Blume. It was awesome.

This is the second one I’ve participated in, and to be honest, because I’d helped organise for my workplace to host the workshop, I got to go for free.

I’d always been a bit hesitant about going along to workshops and training seminars about songwriting, because I always figured “I don’t want somebody to give me rules that I have to stick to – I want to make my own ORIGINAL music, maaaan” (that last bit because I’m kind of whining as I think it).

This is also why I resisted learning music theory ;o)

Anyhoo – now that I’ve been to a few of these things, I realise that they DON’T rob you of your unique voice and creative centre – in fact it’s more liberating if anything, because one of the main things that Jason expounds is that there are no rules. You can make whatever music you want, and it’s all great.

However – he is a storehouse of astute observations about songwriting (as well as the music and song publishing industries). So rather than saying what is right and what is wrong, he will point out that most of the popular songs have certain things in common – for example a chorus that has a memorable melody and lyric, and that can deliver an appropriate emotional reaction.

Jason will not tell you how to write your chorus, but he might certainly observe that it doesn’t really sound different from the verse, or that the song drops rather than lifts at that point, or that the words or phrasing don’t make sense, or something along those lines.

He also makes a distinction between songs written for the writer’s own pleasure and songs written for the public: if you’re writing for yourself, feel free to do whatever you want; if you’re writing for others, it’s probably good to make the song easy for them to engage with and, hopefully, remember.

One of the most interesting things Jason does is to critique songs that people bring along to the workshop (either on CD/iPod or playing them live).

This is a real eye-opener, as you can see and hear for yourself all the flaws in other people’s (and your own!) submissions, especially by the end of the second day, when you’re more aware of the aspects to look for.

It becomes obvious that a good song not only has to be a unique, creative and detailed viewpoint on something, but also needs to be well-crafted to highlight its own good points rather than destroy them.
After so many meandering singer-songwriter introspectives (I’ve been guilty of writing these for many years as well), it’s actually refreshing to hear simplicity and repetition. Half the problem is that everybody wants to be “clever”, and instead they end up cumbersome, meandering and forgettable.

Last year my own submission – which of course I was sooooo proud of (my latest and greatest at the time!) – was exposed as having three verses that all had different structures, and lyrics that mostly failed to say what I wanted to say in a unique way. It was true – it wasn’t a bad song by any means, but all you songwriters out there must know what it feels like to play your song to somebody and hear it through their ears as you listen. I was cringing.

Over the last year I took on board a lot of what Jason taught me at the last workshop, and I wrote what I felt was a much better song – simpler verses, more repetition, stronger melodies, a great chorus line. But I was anxious about the verse lyrics, having thrown them all away a couple of times and gone back almost to the original idea. They still needed a lot of work to create a solid setup for the choruses.
A couple of the lines were even the original scratch lines I jammed along to when I was first writing the song. One line was a complete throwaway and a bit of a joke: “Check one-two”. Whaaaat?

I had hoped to fix a couple of these lines before the second day of the workshop, but a problem with an Apple OS X update corrupted my Logic song session, so I had to submit it as it was.

I actually began to regret putting my song in the submission pile – as the stack grew shorter towards my own disc, I grew more and more nervous. “Check one-two!?!?!!” Oh my god, what a stupid line!

Finally, Jason worked his way down to my submission. My heart was racing. He put my lyrics up on the projector. There were some snickers and giggles – oh, the humiliation!
He flipped the disc into the player and hit play.

Doesn’t sound toooo bad – nice hooky intro rhythm, clean verse lines (argh, those lyrics! Argh, I hate the sound of my own voice). Kicks up into the pre-chorus, then bang into the chorus – phew, relatively safe. Then suddenly – STOP!!!!!!

Jason cuts the song. He says, “There are two major problems with this song”. My blood pressure has skyrocketed, my heart rate is so high it’s like I’ve sniffed amyl nitrite, and the blood has drained from my face. I’m now sure I’m going into cardiac arrest, and I’m almost welcoming the unconsciousness that will soon release me from this embarrassment. I’ve failed again!

“There are two major problems with this song – it’s not on the radio, and my name’s not on it.” The room breaks into applause. My friends laugh at me. I manage a feeble “woo-hoo” and a shaky, unconvincing smile. I feel a sense of relief – almost like passing my driving test or an exam.
It’s not until later, when we have a break and people come up to congratulate me and teenagers ask for my email address, that I feel like I’ve achieved something special.

I guess for myself, songwriting workshops have been a relatively positive thing so far.

Edit: Oh if you want to have a listen: FallingMix3 by Mr Zeberdee