Zed Brookes – O Sweet Cacophony Album Released

I finally released my album in 2016 after changing the name to “O Sweet Cacophony”, as it seemed more intriguing and distinctive than “Deus Ex Machina” (which also reminded me too much of a recent film).

Luckily a good friend of mine stepped in and helped me curate the final track listing, and even booked the venue for the release listening party at Brothers Beer in Auckland.

Feel free to check it out:

Zed Brookes – O Sweet Cacophony

iTunes, Spotify, Tidal, Amazon

Zed’s Bandcamp page.




Mixing Metaphors for Music – the Picket Fence and Shrubbery Metaphor

Picket Fence

Photo by Martin Kennedy via Creative Commons Non-commercial Share-Alike license

Mixing engineers and producers generally have a bunch of internal metaphors for visualising or handling the various aspects of a song mix, depending on what they’re working on and their own personal preferences. Some stick with the same ones all the time, but most dynamically shift between a selection of metaphors as they go. It can be a way of keeping certain mix “rules” or approaches in play when building up a song mix.

For example, some “see” a mix as a picture or painting, with each part made up of different colours and shapes. “Make the bass more blue”. Whatever that means. Others think in food metaphors – “we need more spice on the organ” (sounds painful to be honest), the “vocal isn’t sweet enough”, or “it’s still too sickly – it needs some salt in there” etc. It even works the other way around – while beer-tasting I can’t help comparing the flavour to the spectrum of frequencies in a mix. “There’s not enough low-mids in this IPA”.

Personally, when mixing I switch between the colours and shapes, the flavours, and others as well. For example, I’ve always felt the mix as being composed of horizontal and vertical aspects, especially when dealing with anything primarily groove-based. One day as I was trying to explain it to someone, I jokingly referred to it as the “picket fence and shrubbery” metaphor.

It does make it very easy to visualise and compare the vertical, transient aspects (pickets) – for example drums and fast, intricate glitchy stuff – with the horizontal aspects (shrubbery) – eg pads, basslines, vocals etc. It becomes more about timing and note length.

Is the picket fence too large compared with the shrubbery behind it, or is the shrubbery overgrown and concealing the picket fence? Obviously it depends on the song, but it’s easy to “feel” the balance between the two elements and hopefully help achieve an appropriate balance.

If there’s too much shrubbery, the mix could be washy and rhythmically undefined, whereas if there’s too much picket fence, the drummer was probably helping mix. Boom boom.


Some people seem to have trouble with the simplicity of this metaphor, so here is a more academic translation:

Music could be considered to be made up of parts that often have both transitory and sustained components to the sound. Transitory parts (ie shorter than approx 50 ms) could be considered to be “vertical” whilst sustained elements (longer than 100 ms) could be viewed as “horizontal”. Most instruments have a combination of both – for example a piano or acoustic guitar could be mixed to feature either a predominance of the attack or sustain portion of the envelope, contributing differently to the mix in either case.

Visualising a mix with these horizontal and vertical attributes in mind could help a mix engineer ascertain whether there is an appropriate balance of the two aspects, depending on the context of the music. For example, in dance music where the rhythm is important, it might be beneficial to feature more of the attack portion of the sound, while in something more musically ambient, more horizontal features could be highlighted, perhaps even adding reverberation to enhance the effect. (Note that in dance music it is common to apply a side-chain to impart more vertical impetus to a horizontal sound – using the kick to side-chain a string pad, for example.)
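To make that side-chain idea concrete, here’s a toy Python sketch (the function and numbers are purely illustrative – not from any particular plug-in): the kick’s envelope pushes the pad’s gain down, stamping a “vertical” rhythmic shape onto an otherwise “horizontal” sustained sound.

```python
# Sketch of kick-keyed side-chain ducking (assumed parameters, toy signals).
# The kick's envelope drives the pad's gain down, imposing a "vertical"
# rhythmic shape onto an otherwise "horizontal" sustained sound.

def sidechain_duck(pad, kick_env, depth=0.8):
    """Reduce pad level wherever the kick envelope is high.

    pad      -- list of pad samples (floats)
    kick_env -- list of 0..1 envelope values derived from the kick
    depth    -- how far the pad is pushed down (0 = no duck, 1 = full mute)
    """
    return [p * (1.0 - depth * k) for p, k in zip(pad, kick_env)]

# A sustained pad (constant level) and a kick hitting on beats 1 and 3:
pad = [1.0] * 8
kick_env = [1.0, 0.5, 0.0, 0.0, 1.0, 0.5, 0.0, 0.0]
ducked = sidechain_duck(pad, kick_env)
print(ducked)  # the pad now pumps in time with the kick
```

The `depth` control here stands in for the threshold/ratio settings you’d tweak on a real side-chained compressor.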

The key point is that almost any sound can be treated to expose more of the attack or sustain portion, and the collective treatment of various parts in the mix with this approach in mind could see completely different mixes created from the same material, even with similar peak levels for each part. In this light, it is simply the controlling of the balance between RMS or average level compared with the peak levels for each part.
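That peak-versus-average balance is easy to see in a few lines of Python (a toy sketch with made-up sample values): two parts can share the same peak level while having completely different RMS levels.

```python
# Illustrative sketch (made-up sample values): comparing two parts with the
# same peak level but a very different peak-to-RMS balance.
import math

def peak(xs):
    return max(abs(x) for x in xs)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# A "vertical" click (one big transient) vs a "horizontal" sustained tone,
# both peaking at 1.0:
click = [1.0, 0.2, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0]
tone  = [1.0, 0.9, 1.0, 0.9, 1.0, 0.9, 1.0, 0.9]

print(peak(click), peak(tone))  # identical peaks...
print(rms(click), rms(tone))    # ...very different average energy
```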

In mixing drums, for example, this vertical-vs-horizontal viewpoint can be effective in helping decide on an optimum balance between close-mic’ed drums (eg the snare) and the often-well-compressed overheads. It can also make things simpler while adjusting the attack on a compressor, in order to provide an effective balance between the attack and sustain components of a sound.

The Meniscus Effect – Blending Digital Audio Analogue-Style


Many engineers and producers love the sound of analogue, despite (or perhaps because of) the superb quality of digital audio products. Analogue is felt to be more musical, and songs recorded in analogue formats seem easier to mix. Why is this?

Most people, when asked why they like the sound of analogue, focus on things like “warmth” and “roundness”, the slight bass hump you get off analogue tape, the subtle musical harmonic components, the naturally better average-to-peak ratios that make your music sound louder and fuller. The inherent infinite-band compression/limiting of analog tape that acts as a de-esser. Actually all of these do sound great!

Of course there’s also the downsides that many forget: Noise. Distortion that you don’t want. Noise. Expense. Maintenance. Non-linearities in the sound that you don’t want. Noise. Crosstalk. Track limitations. NOISE!

What most engineers don’t seem to talk about much is WHY it’s easier to mix analog sources. Those that do tend to look at things like crosstalk between the channels or across tape tracks, or maybe the individual analog channel non-linearity making the music sound richer as the main reasons. Many younger engineers who haven’t had much experience with tape probably don’t even appreciate the difference.
Whatever it is (and I think it’s definitely a complex blend of a few things including harmonic distortion), I think the result is that each sound is easier to blend or “gel” with another sound. It’s like each sound has a meniscus on it, that grabs other sounds when you get close enough. Analog also has a strange ability to have things really loud in the mix, without the detail overshadowing the other instruments.

Of course, if it’s perfect clarity you want, then you don’t want this happening, but in most cases you want a song to feel homogeneous and musically prosodic – with all the elements gelling cohesively together into something that feels larger than the sum of its parts. It becomes a pleasing landscape rather than just numerous close-ups of tree bark.

This also depends on the type of music/sound you want to make. If you were an artist – are you going for photo-realism, or for a more impressionist look? Or Surrealism? If it’s a photo, are you going for a snapshot with all elements in focus (please no!), or a nice well-lit shot with foreground in focus and background out of focus (hell yes!).

Let me be clear that the Meniscus Effect isn’t something new that I’ve invented – it’s simply a method I use to consciously perceive a mix in a useful manner, and it might be useful for others as well. It’s the art of blending each part into another – the gentle blurring of the edges – while making sure the main part of each sound retains its distinct shape and character.

One of the problems with mixing in digital is that the cleanliness of the recording allows each sound to retain its focus from the peak level to the dark limits of lower bit encoding. It’s like a perfect digital snapshot where everything’s in focus, all at once. The information’s all there, but is it aesthetically pleasing, and does it blend?

In practice – what’s currently happening today is more of the same sort of thing that’s always been used – judicious use of EQ, reverb, distortion and compression to blend the parts.

So let’s look more closely at what we need to help gel the sounds together.

The key thing is blending into the digital silence. Solutions? Subtle reverb; compression to bring up the noise floor in the track (preferably in a rhythmical manner); and adding distortion for blending harmonics.

The general rule of thumb with reverb is that the longer the decay, the quieter it should be, otherwise it just washes everything out.
I recommend tweaking the pre-delay so it adds some rhythmical attribute (and adds clarity on vocals), and also filtering out some of the bottom-end to clean it up. Use a nice plate or hall for the “long” reverb, blended so you almost can’t even hear it, but you definitely notice it’s gone when it’s muted. Don’t make the reverb really toppy/bright – keep it “warm” sounding.
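If you want the pre-delay to feel rhythmical, one approach (just a hypothetical starting point, not a rule) is to tie it to the song’s tempo – say a 1/16 note:

```python
# Hypothetical helper (my own illustration): deriving a tempo-related
# reverb pre-delay so the reverb onset lands on a musical subdivision.

def predelay_ms(bpm, fraction_of_beat=0.25):
    """Pre-delay in milliseconds; 0.25 of a beat = a 1/16 note in 4/4."""
    beat_ms = 60000.0 / bpm
    return beat_ms * fraction_of_beat

print(predelay_ms(120))        # 125.0 ms at 120 BPM (a 1/16 note)
print(predelay_ms(96, 0.125))  # 78.125 ms -- a 1/32-note pre-delay
```

From there, nudge the value by ear – the computed figure is only a musical anchor to start from.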

Compression is basically reducing dynamic range – turning down the loud bits, then you turn it up again so you can still hear the track at the same sort of level. The key use here is very similar to the reverb – to make the compressor release match the tempo. In this case your “meniscus” is the musical pumping up of the compressed track’s background around the beat. Again this is pretty subtle – and it’s probably better to start on the slow-release side than fast. In this scenario you’re after “character” rather than sheer loudness!
Also – sticking a gentle compressor over the master (stereo) bus works wonders. Note that it should be almost not even doing anything – 1 or 2 dB of reduction is a lot. It’s there to help gel the mix, not add overall level. Save that for the mastering engineer.
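As a rough illustration of tempo-matched release, here’s a minimal compressor sketch in Python (instant attack, exponential release, release time expressed in beats – nothing like a real plug-in’s detector, just my own toy model of the pumping idea):

```python
# Minimal compressor sketch (illustrative only, not a production design):
# instant attack, exponential release, with the release time expressed in
# beats so the gain recovery "pumps" in time with the track.
import math

def compress(samples, sr, bpm, threshold=0.5, ratio=4.0, release_beats=1.0):
    release_s = 60.0 / bpm * release_beats
    # per-sample coefficient for the exponential gain recovery
    coeff = math.exp(-1.0 / (release_s * sr))
    gain = 1.0
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            # target gain that maps the overshoot through the ratio
            target = (threshold + (level - threshold) / ratio) / level
            gain = min(gain, target)            # attack: clamp down immediately
        else:
            gain = 1.0 - (1.0 - gain) * coeff   # release back toward unity
        out.append(x * gain)
    return out

# A loud burst followed by quiet: watch the gain recover over roughly a beat.
loud = [0.9] * 4 + [0.1] * 4
print(compress(loud, sr=4, bpm=120))  # sr=4 keeps the toy example tiny
```

Setting `release_beats` to 0.5 or 0.25 would make the pumping breathe on eighth or sixteenth notes instead.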

Often dialed-up to add a level of aggressiveness to a track, distortion has a lot more uses than this simple task, but it means being more careful about the “type” of distortion you use. You may have already noticed that adding distortion to an instrument during mixing doesn’t necessarily make it jump out any more than it did. If anything, the extra harmonics you generated can create masking frequencies against the other instruments in the mix, and it can actually become harder to hear and add more “mud”.
The secret is to use this same masking effect as a blending tool. Make sure you’re not adding brittle harmonics to the high frequencies as these will simply make your mix sound harsh and abrasive (unless that’s what you want). What you’re looking for is more warm mid-range distortion – ideally in a range that’s favorable to the instrument or vocal. Use only small amounts.
Feel free to duplicate a track and run it in parallel with distortion added – and maybe those two versions could have different EQs so you’re not distorting the entire frequency range in the track.
Don’t be scared to reduce the treble at the same time – see the next section.

Without resorting to extensive tutorials on how to effectively EQ your mix – let me just point out one useful tip. Don’t be scared to make some of your tracks “darker”. This means reducing the high-frequencies on certain tracks, instead of boosting the highs on absolutely everything. In general, don’t EQ while you’re soloing tracks – do it while listening to the full mix.
I generally find notching out somewhere in the 180-250Hz region on each track cleans things up a bit in the normally-dense low mids, but make sure you don’t strip out the warmth of this region completely as it is important to the body and power of your mix.

Although the common strategy nowadays is to cut the lows on each of your tracks (with a high-pass filter) to get rid of any sub-sonic power-sucking rumble, sometimes you may need a track to actually contain some low-frequency depth – just choose the right track! I highly recommend the VOG plug-in from UAD for adding some smooth fatness to a track (especially kick drums!) – it’s basically a resonant high-pass filter, with a resonant peak positioned at the frequency you want boosted.
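For the curious, here’s a rough sketch of the resonant high-pass idea in Python, using the standard RBJ audio-EQ-cookbook biquad formulas (this is just my illustration of the general principle – the actual VOG design isn’t public, and these parameter values are arbitrary):

```python
# Rough sketch of a resonant high-pass filter: an RBJ-cookbook biquad
# high-pass where a high Q puts a boosting peak right at the turnover
# frequency. Illustrative only -- not the actual UAD VOG algorithm.
import math

def resonant_highpass(samples, fs, f0, q):
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    # RBJ cookbook high-pass coefficients
    b0, b1, b2 = (1 + cosw) / 2, -(1 + cosw), (1 + cosw) / 2
    a0, a1, a2 = 1 + alpha, -2 * cosw, 1 - alpha
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Sub-sonic rumble (DC here, as the extreme case) is removed, while the
# region around f0 -- eg the fundamental of a kick drum -- gets a boost.
rumble = [1.0] * 9600  # 200 ms of DC at 48 kHz
print(abs(resonant_highpass(rumble, fs=48000, f0=60, q=4.0)[-1]))  # near 0
```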

Other experiments to try:

Avoid having all the attack transients on the same spot. Offset kick drum, bass, and rhythm guitar tracks slightly to “stagger” the attacks – ie the opposite of quantising. Try delaying the bass for starters. This can blur and expand the energy on each beat. Make sure your tracks are super-tight to start off with.
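As a tiny illustration (the numbers are arbitrary examples, not recommendations), nudging a track late is just padding it with a few samples of silence:

```python
# Sketch of nudging a track a few milliseconds late to stagger its attack
# against the kick. Values and sample rate are arbitrary toy choices.

def delay_track(samples, ms, sr=48000):
    offset = int(sr * ms / 1000.0)  # milliseconds -> samples
    # pad with silence at the front, trimming the tail to keep the length
    return [0.0] * offset + samples[:len(samples) - offset]

bass = [1.0, 0.5, 0.25, 0.0, 0.0, 0.0]
print(delay_track(bass, ms=3, sr=1000))  # sr=1000 just to keep it tiny
```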

Recording the room ambience with a track. Rather than fight it, let it in. Not too much though, unless you have a million-dollar room, otherwise it will quickly dominate and can sound pretty awful. Try adding distortion to the room ambience. It helps if it’s on a separate track. One thing I like to do is capture my own impulse reverbs of the recording room and then add it back in to the dry recording. This gives you full control at mix-time over “dry vs room ambience”.
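The impulse-response trick boils down to convolution: the dry track convolved with the captured room IR gives you a separate “room” signal you can blend to taste. A toy sketch (with tiny made-up signals – a real IR would be thousands of samples, and you’d use an FFT-based convolver):

```python
# Toy direct convolution of a dry signal with a captured room impulse
# response. Signals are tiny made-up examples for illustration.

def convolve(dry, ir):
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, r in enumerate(ir):
            out[i + j] += d * r
    return out

dry = [1.0, 0.0, 0.5]      # the dry recording (toy signal)
room_ir = [1.0, 0.3, 0.1]  # the measured room impulse response (toy)
wet = convolve(dry, room_ir)

# Full control at mix time over "dry vs room ambience":
mix = [d + 0.5 * w for d, w in zip(dry + [0.0, 0.0], wet)]
print(wet)
```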

Leave noise on tracks. (But clean up lead vocals and any featured instruments). All those weird little background noises and people talking in the background when you were putting down what was originally supposed to be your guide acoustic guitar – keep it. In other words – don’t clean all your tracks up too much – just the ones that need it. This doesn’t apply to the start/end of songs – it’s better to keep these really tidy as they frame your song and show that you actually meant that other stuff in the middle to sound loose.

Use “dirty” or highly-compressed loops. Bringing up the textural background in a single loop ripped from eg vinyl can add a certain magical blending factor and way more character to your song. Make sure to clear the sample if you do this.

Add extra ambience tracks. Party sounds, outdoor ambience, car drone, room tone, or other subtle sound effects. It’s like having subtle blending background noise, but it’s a lot more interesting and with the extra possibility of occasional random coincidences that add value.

Use the free “Paulstretch” or something similar to slow a copy of your entire song down glacially (eg 8x) and use it to add background textural ambience that conveniently works tonally with your song. Bonus: makes a nice intro and is also good for creating atmospheric film soundtracks.


The Art of Conversation in Song

(Some thoughts on the idea of conversation as a way to visualise the interaction of parts within a song).

Both music and its production involve various transactions, or a dialogue, or a conversation, between interested parties.

This might be between the artist and the audience at a gig, as the audience responds to what the artist is playing, which inspires the performer to even greater heights. Or it might be between the artist and the producer (acting as an informed audience) while in the recording studio.

Or it might be the artist talking to themselves (in a musical sense I hope) across a period of time – sort of like those emails you can send to your future self. How often does an artist make a musical decision on an ongoing piece of work, only to change things later when they have lived a bit longer and learned or experienced more? This is just a conversation over time.

Or it might be a type of conversation within the music itself. Melody and counterpoint. Call and response. The statements uttered by a brass section or backing singers. The drummer and bass player locking in together for certain fills.

When you look at it in a certain way, a good song is a continuous conversation between all of its component parts. Some bands seem to have this nailed – everything they do just locks together like a big fat intricate interesting machine. Jazz artists do it as part of the way the whole genre works – taking turns to solo for example.

So – does this mean a, shall we say “less effective” song, might be not so much a conversation, as a room full of people speaking at the same time, over the top of each other? Maybe so, and maybe the problem isn’t so much about all the voices speaking, but more about the lack of listening before talking.

Let’s look at the value of live performance. There’s not much doubt that a magic performance will grab us in a way that a more technical rendition of the same song doesn’t. In fact, in the studio, early takes of songs seem to exhibit more of this magic than later takes. The first take, despite much more chance of mistakes, generally has the best overall “vibe”. (And if you’re recording stuff while you’re still writing the song, it supposedly grabs it even before the first official take).

Why are these early takes so good? I suggest that it’s because all the participants are listening to each other. Whether it be for clues as to when the next change is coming up, or to see if they are playing the right notes or are in the correct key, or are locking in to the groove, or whatever it may be.

And listening is not just a component of good conversation – it’s arguably much more important than the talking bit. Because each spoken part of the conversation is a continuous transaction between every other part that is heard, and the conversation can adapt and change as needed. More breathing space is left between each component. By the way – don’t you hate it when people are just waiting for a gap to say something that they’ve been holding on to for ages? Even if it doesn’t fit anymore because the conversation has moved on? That’s not good conversation. Just saying.

Good conversation helps the music move towards that area of “flow” where everyone is unselfconsciously involved and “in the zone”. Where the conversation becomes the thing that everyone wants to keep going – like playing tennis or badminton (or maybe beach volleyball), where instead of trying to win the competition, you really just want to keep that ball in the air for as long as possible. That’s where the fun is, that’s where the magic happens.

Conversation can also have its part in the more sundry technical aspects of song arrangement. For example, how can the sections of a song have a conversation, and what sort of voice would they “speak” in? When you think about it in this way (and really all these sorts of concepts are just handles for manipulating musical ideas) it opens up a world of possibilities for ways to look at an arrangement.

Does your chorus shout in a happy voice while your verse is more of a grinding tortured whisper? Are the drums angry or subdued? Is there a buzz of an annoying bassy mosquito whirring around your head on a song that sounds otherwise like a murmuring summer’s day? Maybe that mosquito is a good thing otherwise you’d fall asleep. Unless that’s what you wanted to happen. Okay I’m stopping there…




I just had one more thought on a related note.

The problem with working solo, or being the only “voice” in your song, is this: it’s like a room full of only one “voice”. Yours. It can be a good idea to do full or mini-collaborations with others – even if it’s just to add another voice or two in there somewhere. Of course I don’t mean voice literally – it could be guitar or bass. Or banjo if that’s your thing.



Over the Top and Back – Avoiding the Uncanny Valley in Music Production

One of the dangers of nibbling away at mixing songs – commonly with your mouse rather than a dedicated audio control surface or mixing desk – is that it’s easy to be far too conservative when adding effects and the like.

What typically happens is you slowly push the level of an effect up until it starts to sound like it’s too much – then you back it down slightly to get a nice balance of “wet” effect vs the “dry” sound source. Ahhhh. Nice.

This is fine – but there are often different contexts that effects work within when the balance of effect vs dry sound change radically – so by using this conservative method, you’re always remaining inside the one safe context of the sound balance without ever realising any of the other creative possibilities available.

If you keep pushing the level of the effect past the point where it sounds bad or too much, then you can sometimes get beyond the audio version of the “Uncanny Valley” into a different range of possible sounds.

A simple example is reverb. The first “conservative” range remains within the context of adding a nice subtle tail to a sound to make it blend, or perhaps to give a subtle halo of space around the sound.

As you keep pushing the reverb level up – the sound becomes muddy and cluttered as the dry sound and the reverb fight each other. This is the audio version of the “Uncanny Valley”.

If you keep pushing the reverb level even further, you will change the reverb’s context completely. The room environment is now dominant, with your instrument or voice existing within it. Of course at this point it will probably also become apparent that there will need to be some tweaking of the reverb to clean it up a bit – adjusting predelay, reverb time and perhaps applying some low-cut EQ to take out some mush.

Reverb is not the only thing that this works with – try it with any applied effect like chorus, flange, distortion, echo etc. Or try going to the extreme and remove the dry sound completely. (Try pre-fade effects sends with the fader pulled right back).

It certainly opens up many more creative possibilities and can help you discover fresh sounds for your mix to make it a little more exciting. Plus it doesn’t take much more effort or time to do this and it has zero risk! So make sure you go way over-the-top when applying effects, then just bring it back to where it works best for the song.

3 Common Mistakes of Lyric Writing

As a producer, one of the things that is most apparent to me is the difference between an amateur and a professional songwriter – even if that amateur is talented and doing well in their career. Many bands and artists come into the studio with what initially seems to be a great song, but in the process of putting down the vocals, it can become increasingly apparent that the lyrics haven’t had the same level of development (or writing expertise) as the rest of the song. Often there are basic mistakes that can leave an otherwise excellent song fundamentally flawed.

Lyric-writing is a craft as well as an art – words have more or less power and meaning depending on the order and context in which they are conveyed, and knowing some tricks to getting the maximum impact (and least amount of song self-destruction) from your lyrics should really be of high priority. Of course, there are no “rules” in writing, but there are observable effects on the listener depending on how you construct the lyric, and you can simply choose to use these tools or not.

Here are my three worst contenders for shooting yourself in the lyrical foot.

1) Don’t use perfect rhymes

This is probably the most amateur mistake of all.
Try to use other types of rhymes instead – eg family rhymes, internal rhyme, additive, subtractive, assonance or consonance rhymes.
Although a part of our brain always desires perfect rhyme, we have come a long way since the early days of songwriting, and all those obvious perfect rhymes have been so well-used that they are now totally clichéd and too predictable.

Get yourself a rhyming dictionary (there are online versions too, although I prefer the MasterWriter app) and choose rhymes that are less obvious and maybe pleasantly surprising. Instead of using the perfect rhyme “Bread” and “Head”, maybe use a family rhyme – eg “Bread” and “Web” or “Tear”. In singing, we generally tend to rhyme vowel sounds, and the consonants matter less. Check out books/articles/workshops by lyric guru Pat Pattison for more details on rhyme types.

Note that sung rhymes are not usually the same as written rhymes, so make sure you sing them as you write to make sure that they ARE singable.

2) Use “spotlighting” effectively

There are natural accents within a musical bar that will automatically highlight or spotlight to the listener any word or syllable placed upon it. These spotlights tend to be on the downbeats of the bar, plus a big one at the end of a line, and even bigger at the end of a verse.

Ignorance of this behaviour means that you may end up with “nothing” words like “the”, “and” or “but” placed on these prize positions in the bar rather than your cool meaningful words.
This risks weakening your lyric and can even undermine the meaning of it by placing importance on the wrong word.

Back in 2007 I wrote a song, just before going to a Pat Pattison workshop, that included this lyric:
“The tide is slowly rising, Blood red sun on the horizon”
Spotlighting these words:
Tide, slow, rise, bloodred, sun, the, horizon.

Notice how “the” has a spotlight that it really doesn’t deserve?
I fixed it in this example by removing it from the spotlighted position:
“The tide is slowly rising, Blood red sun on ….the horizon”
Here’s the link if you want to listen (warning – ultra-demo quality!): Tied up in Knots
Note also that “rising” and “horizon” rhyme when sung.

And in relation to syllable position:

3) Don’t put the “emPHAsis on the wrong sylLAble”*.

As much as possible, try to sing as you would normally speak in conversation. If you don’t, you risk breaking the meaning of what you are trying to get across, and it can sound contrived, amateurish, or just like you haven’t taken the time to make the lyric fit the music properly.

You should be able to read your song lyrics out, spoken-word fashion, and the phrasing shouldn’t be too far away from how you sing it. Or vice-versa. This is most noticeable when you’re going for an “authentic”-style delivery (rock/blues/indie) rather than stylised (r’n’b, soul, pop) – accenting the wrong syllable can instantly break authenticity. The listener will go “huh?” and the flow and belief is broken.

There are many more lyrical tips than this, of course, and some equally or more important, but the best idea is to do a proper workshop or short course on it, or at least get a decent book or two about how to structure your lyrics.

For those of you who balk at being told what to do – I remind you that these are not rules as such – they are simply based on observable effects on a listener, and you can still go ahead and do whatever you want.
Sometimes you might need to make a call between including a word that adds the perfect meaning to your lyric, and having to jam it in there a bit more clunkily since it doesn’t quite fit. But you should definitely be aware of the risks on how the listener will receive and decode your meaning when you decide to do things like this.

And finally – you should ALWAYS use some kind of rhyming dictionary – otherwise you are relying on a choice of only the rhymes that you can currently remember. Which is often only a small fraction of the huge amount of available rhymes – many of which are probably more interesting than the one you can currently think of.

*As spoken by Mike Myers in “View from the Top”.


Pat Pattison: Essential Guide to Rhyming (formerly titled Rhyming Techniques and Strategies), Berklee Press, distributed by Hal Leonard, January 1992
You can order all three books – Writing Better Lyrics (second edition), Essential Guide to Rhyming and Essential Guide to Lyric Form and Structure – for a special price here

Jason Blume: Writing Hit Lyrics with Jason Blume – get the book here

Review – the UAD Apollo Duo

UAD Apollo under laptop with heaps of UAD plug-ins showing

It’s no secret I’ve been a fan of UAD since I purchased the little UAD-2 Solo/Laptop card. Since the Solo has only one of the SHARC plug-in processing chips in it, my plug-in demands quickly overloaded it. This was usually accompanied by the fiddly and annoying juggling of said plug-ins and bouncing or freezing of tracks in Logic.

So when UAD’s Apollo emerged as their new flagship product, I was excited (and after 25-odd years in the music industry, I don’t get excited by much anymore). Not only did the Apollo come with some reputedly rather nice preamps, it contained either a Duo or Quad SHARC chipset for running UAD’s rather tasty plug-ins as well. Even better, you could have some of those UAD plug-ins in between those preamps and your DAW. This means you could have the virtual equivalent of a very expensive vintage tracking chain of effects with a usefully-low recording latency. That’s pretty damn cool.

So I managed to get my hands on an Apollo Duo for a try-out (Thanks Leon at NZ Rockshop!) since all the Quads sold out instantly in NZ on hitting the shores.

A big potential selling point for me was that the Apollo is supposed to be able to be fitted with Intel’s new Thunderbolt interface. (More here) This involves purchasing an extra add-on card, which hasn’t been released at the time of publishing, but I really look forward to checking out the performance when these become available. The Thunderbolt transfer speeds are supposed to be blisteringly fast – actually similar to connecting directly to the PCI-express port on your computer (and there are two channels of that per Thunderbolt port). This means even lower latencies between the interface and the computer, with none of the Firewire-bus wrangling. This is really only an issue anyway if you have external Firewire drives daisy-chained to the Apollo.

UAD have a solid reputation in the audio industry – they make great “vintage”-quality hardware, and have also pretty much nailed the “accurately-modelled vintage studio hardware” DAW plug-in market. I have found the UAD versions of classic vintage units such as the Pultec and LA-2A to be several steps above other versions I have tried. They model each component of prized representative units of vintage hardware to capture all the non-linearity that made the originals so musical and desirable. Then they add any extra handy functions to make them slightly more usable in modern DAW production.

As you can imagine, all this extremely detailed modelling takes a ridiculous amount of extra processing power – hence needing some heavy-duty plug-in engines to do all the hard work. This is why UAD plug-ins only run in UAD hardware (and no doubt also for copy-protection reasons). Although you can buy duo or quad-chipped UAD Firewire and internal PCIe units, the Apollo conveniently includes the chips in this tasty preamp/audio interface unit.

Apollo Front Panel

The build on these units is solid. They are only one rack unit high, but are quite long. They are well-perforated for good ventilation, and the finish is impeccable. Simple front-panel controls make operation easy, and probably help keep the cost down. Large gain and output volume knobs have an LED ring around them to indicate the current level, and each has a push-switch included. For the gain knob, this selects between the four microphone preamps; for the master volume knob, it acts as a mute control.
Power is supplied through a fairly chunky external power pack that handles the various international voltages.

Apollo Rear Panel

Despite all the digital gear packed around it, the analogue side remains completely silent and clear.

The Burr-Brown preamps are very nice – high gain, quiet and very transparent. Luckily you can insert some UAD plug-ins between the preamps and the DAW inputs to colour this sound should you wish.

In fact, as I hinted at earlier, this is one of the other major selling points of this unit – the ability to put a chain of UAD plug-ins in between your microphone, guitar or bass and the input of your DAW. For those of us who like to commit immediately to a particular sound, and can’t afford the luxury of racks of vintage analogue gear, this is a godsend. Of course this is only useful if the recording latency is low enough to be usable, and in this case it is – only a couple of milliseconds in total. That said, there was still a little wrangling of monitor sources in the DAW to avoid the ol’ comb-filtering effect of two sources with different latencies.

I look forward to seeing what the recording latency is like in the Thunderbolt version of the unit when it becomes available. It should bring latencies down very close to the old Pro Tools HD hardware – less than 1 millisecond. That's pretty darn good.
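For a rough sense of where sub-millisecond figures come from: the buffer portion of the latency is simply buffer size divided by sample rate. The numbers below are illustrative only (real round-trip figures also include converter delay, which this ignores):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    # One-way buffer latency only; A/D-D/A converter delay
    # adds a little more on top in the real world.
    return 1000.0 * buffer_samples / sample_rate_hz

# e.g. a 32-sample buffer at 96 kHz is about a third of a millisecond
print(buffer_latency_ms(32, 96000))
```

This also shows why higher sample rates give "correspondingly lower latency" for the same buffer size, as noted later in the review.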

Okay, so far everything looks pretty good – there’s not much to fault in this unit really.

I would say that the only niggles are in the software/firmware.

My first issue was with the combination of the Apollo and my old UAD-2 Solo/Laptop card. Since my trial Apollo unit was only a Duo, I thought, “ah, I’ve still got my little Solo/Laptop card as well, so that’s three processors – not too far off four”.
While this was true in theory, it didn't immediately work in practice. I discovered there are some "quirks" with laptop cards. Card slots in general have always been a little quirky – although in theory it's fine to hot-plug certain devices into the guts of a computer while it's running (and the Solo/Laptop by itself usually seemed to handle it okay), it's really pushing the technology, software and OS towards their limits. I found that with the combination of the two devices, I was getting some UAD plug-in overload messages when opening old Logic sessions with UAD plugs inserted.

Anyway – the friendly and patient support guy at Universal Audio helped me troubleshoot it on the phone, and it turns out that as long as the Solo/Laptop card is plugged-in before booting up the laptop (so it shows up first in the hardware list in the UAD Control Panel), and then the Apollo is plugged in/turned on afterwards, everything works just dandy. Any loaded UAD plug-ins in the session are now distributed over all the available SHARC chips. Sorted.

Another minor annoyance I found was with the implementation of the plug-ins within the Console app – you can’t drag and drop the plug-ins into a different order – you have to cut and paste etc. A bit fiddly when you’re wearing a guitar and trying to quickly get something tracked while the idea’s hot.

The Console app is modelled on an analog console, with its own built-in effects sends and returns and headphone sends. It allows some nice low-latency patching of monitoring effects like reverb and delay, but it doesn't include the more obvious ability to patch any input to any output like most audio interface mixer apps. This is offset somewhat by each of the various ins/outs within the Console app showing up in your DAW as ins/outs, but it's not what you would call "conventional" or transparent operation – especially if you're trying to set up multiple headphone sends, each with a different mix.

There’s no beat-clock for UAD effects that depend on session tempo either – I can see why this is difficult to include as the Console app sits external to your DAW session – but perhaps having the facility to accept MIDI clock via a virtual MIDI port would be valuable. (UAD boffins – maybe this could be implemented via the Console plug-in?) Having beat-synced delays is important whilst tracking to a song with delay-based effects.
I’d also like to see the ability to map MIDI controllers to various Console parameters/UAD plug-in parameters.

It’s not all niggles though – there’s a handy plug-in version of the Console for remembering the settings as part of your project – nice! And the Console app allows the easy copy/pasting of your channel mix settings to your effects/headphone sends – very handy!

Review of features

Sound quality: I couldn't fault it. This is a very nice unit. Very low noise and no obvious colouration in the mic preamps. Plenty of gain. Full 48V phantom power for mics. Pad, low-cut and phase switches. All controlled by a fancy rotating knob with a surrounding lit ring showing current gain, and some selector buttons.

I tried recording and playback at various sample rates up to 192kHz. I didn’t find that much difference between them really. Just correspondingly lower latency and perhaps a touch more “silkiness” on some instruments in the higher frequencies. I’m guessing the Apollo oversamples for the lower sample rates anyway, so there’s not much in it. (And some of the UAD plug-ins themselves are upsampled as well).

I tried recording some acoustic guitar – first without any UAD plug-ins inserted – and the preamps sounded open and clean. Very transparent and very low noise. About what I expected from UAD. Very nice.

I then tried inserting some plug-ins into the Console app that comes with the Apollo.

UAD Apollo Console App

Some gentle compression with the 1176SE, a Pultec adding a small amount of top, and an LA3A as peak limiter. Beautiful – now the guitar was sounding shiny and firm, with no nasty artifacts and no noticeable noise problems despite the compression.
I had to pull the fader down in Logic and just monitor through the UAD console app to avoid the latency-induced comb-filtering.

Next I thought I'd try one of the high-impedance instrument inputs on the Apollo's front panel. The input automatically switches from Mic/Line to Instrument when you plug in. Handy. Oh – and it sounds really good. I'm not surprised – UA has an excellent reputation for not only audio quality, but also pro-level usability design.
As I’m guilty of pretty much always just plugging my Fender Jag-Stang straight into my interface to capture ideas as fast as possible – I could tell that this immediately sounded better than my old interface ever did. A LOT better. Winding the gain knob up gave a really nice creamy analogue distortion too. I’m really liking this unit so far. Clean on mic inputs, nicely coloured analogue on the Hi-Z inputs.

Let’s try the dodgy acoustic guitar piezo bridge pickup. Plugging straight into the Hi-Z jack. Wow – same thing again. A huge improvement – in fact the best I’ve heard it sound from the pickup. It’s normally quite clicky and overly percussive. This sounds much smoother and fatter.

And my Jaguar bass sounds great plugged straight into the Hi-Z input as well – as much as I love my old Drawmer 1960 tube preamp on bass, I have much more control with the Apollo.

So – it seems like we’re getting 4 very good quality mic preamps with this unit. The mic inputs can be switched to the line inputs on the back, with the first two inputs automatically switching to Instrument when something is plugged into the Hi-Z jacks in the front.
There’s also the usual S/PDIF I/O (with auto sample rate conversion if you like), and an S-Muxed dual ADAT I/O that can do either 8 channels of regular 44.1 or 48kHz sample rates or handle up to 4 channels of 192kHz audio.
There’s two headphone outputs that can be fed from various parts of the Console app.

The controls are simple and clear – each of the four preamp inputs can be selected by pressing the gain knob, and rotating to set amount of gain. A circular light ring surrounding the gain pot shows the amount of gain added. Switches apply 48v Phantom Power, Pad, high-pass filter etc to the selected channel.

The Console app shows much more detail, reading out in decibels and showing the state of everything all at once. It has its own built-in headphone busses and effects monitoring setup.
Because this is a digitally-controlled preamp, all settings may be saved and recalled – as I mentioned, there is a Console plug-in (in all the typical plug-in formats) that can be inserted into the session that can recall all these settings automatically if you like.


A great unit overall. Sounds fantastic and is, without doubt, value for money – especially the Quad version.
Not quite perfect yet – there’s still a few very minor things to iron out with the interface software, and the Thunderbolt adaptor is still absent (as at July 2012).

For this price (about USD$2,500 or just under NZD$4,000), the question is why wouldn't you buy one?
I did.

PS: I forgot to mention, for those of you who know little about UAD yet: like most of their plug-in based devices, the Apollo comes standard with the UAD-2 Analog Classics plug-in collection (the 1176LN, LA-2A and Pultec EQP-1A) and a voucher for $100 worth of plug-ins.
That’s pretty cool value – especially if you wait until one of their fairly regular online sales to use it.

Plus – when you install your plug-ins, it installs every UAD plug-in, and the others (that you haven’t purchased yet) can be demoed for two weeks from whenever you click the little “demo” button on the plug-in. I’ve also noticed that all the demos seem to be reset every time you purchase a new plug-in. These guys are very canny yet professional with their marketing, and they certainly know how to look after their customers.

8 Top Logic Pro 9 Features

Bounce Track in Place

Here’s 8 of my top features in Logic pro 9. If you have any others – feel free to comment!
Click on the pictures for larger versions.

8 Bounce-in-place, for either track or region. Bounces down either your audio instrument MIDI regions or your audio regions – with or without plugins – to new audio regions and then mutes the original track or region. Great for rapidly “printing” any special processing or pitch-fixing plug-ins like Melodyne into a solid file. Note you can also do this for EVERY track simultaneously should you wish to export all your tracks into a different DAW program for mixing or something. Closely related to…

Freeze Track – the Two Modes

7 Freeze Track – you can freeze a track either just after the instrument (or less-usefully after an audio file I suppose), or after all the plug-ins and the fader, depending on how much load you want to take off the processor and how much control you want over tweaking plug-ins. Basically it does a cunning invisible 32-bit float bounce of the track (which you can go and copy from its folder if necessary) and then disables plug-ins/instruments on that track – in other words swapping processor power for hard-disk speed. It still sounds exactly the same but now you have more CPU power to do stuff. You can still un-tick the freeze button at any time to do some editing etc. Essential when you inevitably have so many plug-ins and instruments in your project that it won’t play!

Replacing or Layering Drum Track

6 Replace or layer individual drum tracks – need to fix up or bolster those poorly-recorded kicks, snares or toms? Logic can automatically detect the drum hits on an audio track (you can adjust sensitivity), then you can choose a replacement (or layered) drum sample which is then automatically imported into a new EXS24 sampler which is placed underneath the original track with a handy MIDI trigger region to play it.

Convert Audio Region to Sampler

5 Convert to sampler – this is cool and ridiculously easy. You can either select a strip-silenced bunch of audio regions on a track, or you can let Logic identify the transients in the audio region/s, then it will automatically pack all those audio chunks into a sampler assigned to MIDI notes ready to play. And then mute the original audio regions. Oh, and it conveniently creates a MIDI region that plays those samples exactly as if they were still the original audio files – so it recreates the source audio exactly. You can delete this if you want to do something different.

Select Sampler Options

Final Converted Sampler Track with MIDI region

Making Groove Template from MIDI Region

4 Create your own grooves, or match your MIDI stuff to live drums. You can do this in a few ways – using the "Audio-to-MIDI" groove template under "Factory" in the Sample Editor window, recording a MIDI track while you tap along with every 8th or 16th note on a MIDI keyboard, or using the drum replacement/doubling trick above. Once you have a MIDI file, select it and go to the region inspector pane, click on the "Quantize" drop-down menu and select "Make Groove Template". This can then be applied to any other MIDI or Audio regions.

Audio to MIDI Groove Template


Region Gain

3 Region Gain – many users of Logic still don’t know you can use the little region inspector panel top left of the Arrange window to adjust the individual gain for each audio region.  It can replace the use of automation in some cases – just cut up your regions and set the gain for each one. It handily does this before it goes through any plug-ins in the channel strip (although this can be a problem if you have compressors inserted and you’re trying to use it instead of automation – the compressor may “fight” any gain changes).
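That "fighting" is easy to see from a compressor's static transfer curve – above the threshold, any gain change you feed in comes out divided by the ratio. A minimal sketch (the threshold and ratio here are just example values, not anything from Logic):

```python
def comp_out_db(in_db, threshold_db=-20.0, ratio=4.0):
    """Static transfer curve of a simple downward compressor.
    Below threshold the signal passes unchanged; above it,
    level changes are divided by the ratio."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# A +6 dB region-gain boost on a signal sitting above threshold
# survives as only +1.5 dB after a 4:1 compressor.
print(comp_out_db(-10.0) - comp_out_db(-16.0))
```

So if the channel strip has a compressor inserted, a region-gain ride gets squashed to a fraction of what you dialled in – whereas fader automation after the compressor lands at full strength.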


Capture Record Button

2 Capture Record. This mysterious little button is like magic. Say you've just been jamming along on your MIDI keyboard or controller in Logic, then you realise you've just played something absolutely amazing, but "oh no!" – you haven't been recording!

Fear not, just press this little button and what you just played will magically appear as recorded. If the button doesn’t show up for you – Ctrl-Click on your Transport bar and add it.

Tip – use “Take Folder” mode for your MIDI cycle recording – even more useful.

NB: This doesn’t work with audio unfortunately – unless you use the sneaky “Punch-on-the-fly” mode trick. When “Punch on the Fly” mode is selected under the Audio menu, Logic is ALWAYS recording any armed audio tracks whilst playing-back. You just need to actually hit “record” for a second while it’s still playing to let Logic know you want to capture and hence retrospectively grab what you just played.

Take Folder

1 Take Folders. Most people know about this one but it still bears mentioning. Record while in cycle mode, then just swipe the bits you want to keep from each take. It’s a fantastic way to get great vocal takes and Apple really has this one nailed better than the competitors. Many people aren’t aware of all of its cool features though.

Some additional tricks are; creating alternate versions of your comp (good for backing vocals or doubling lead vocals), exporting particular comps to duplicate tracks, editing the audio itself (eg trimming or moving parts) while comp’ing a good take, tagging the best bits as you go, or editing the size of the looped section (and folder) if it chops off the beginning of the loop each time.
You can also manually create a take folder out of various selected audio regions for creative purposes and you can also cut the take folder into chunks if needed.

What are your favourites?

The Lucky 13 Song Mixing Tips

Before I get started I just want to reinforce something I've mentioned in earlier posts – sometimes a reduction in parameters actually generates more creativity. Being aware of a set of limitations, or guidelines, can actually allow you much more creative control over your final mix. This could mean limiting the number of effects that you allow yourself to use, or – a more obvious one – only using a particular set of effects that suits the genre or style. If you have permission, perhaps editing tracks or even removing "surplus" instrumentation or vocals is the first step.

Approach-wise, ideally you want all aspects of a song to reinforce each other and create a stronger impact, and if you aren't aware of what you're doing, it's very possible (in fact more common than you think) to get a generally nice balance of instruments that somehow doesn't "gel". You can hear everything, but it lacks emotional impact.

So here’s a bunch of ideas to think about next time you’re mixing a song – there are many more ideas and concepts to experiment with than these, but I stopped myself before the post became a novel.

1  Know what the song’s about. Clues are in the lyrics. Knowing what it’s about gives you the opportunity to amplify the concept rather than inadvertently fighting it. That doesn’t mean you have to “follow” the lyrics with the mix in a literal sense – you might do nothing at all in that regard, but at least you won’t be fighting the meaning of the song without even realising it, and when it comes to trying to think of creative mix directions, it’s yet another clue to help you.

2  Know the context of the music. What's the genre or style of the artist, and how does it relate to the artist's identity? Being aware of this makes it much more likely that you'll promote that artist's identity and overall concept, plus the artist will be more likely to appreciate what you do with the mix. For example, does the artist exemplify "authenticity", where a raw, "character" sound with any intonation problems left unfixed is most desirable? Or is it about slick and smooth production?

3  Be adventurous. A mix is not a simple balance of the levels of the instruments; it's about featuring the various aspects that you think the listener would like to hear – or, more accurately, needs to hear – at any given section of the song. Pretend it's a movie – how do you present each section of the song? Don't be scared to go "over the top" with effects, fader moves and featuring of mix aspects – you can always tone it back if need be. Don't be scared to turn the vocal up loud – trying to hide weak vocals makes it even worse. Even ugly actors have to have close-ups in a movie to make it all work.

4  Think about texture and tone. It's partly tone, partly level, partly how dominant something is in the mix. If you compress something, its texture changes. Listen out for it tonally as a sound rather than just checking its variation in level. How pervasive is it compared to everything else, despite its volume in the mix?
How does it link into the overall texture of the song? Textures are like a tonal colour palette – you probably don’t want to mix a neon green element in with some nice earth tones (remember there are no rules!), but then again you don’t want everything the same shade of beige.

5 It’s about melody In even the most distortion-fest mixes, our human nature will use our built-in pattern-detecting algorithms to extract a melody out of it somewhere, whether it be in the movement of the harmonics in the wall of guitar noise or in the groovy bassline. Make sure there’s one dominant melody at any given instant, or if there’s more than one, that they aren’t fighting each other and canceling out.

6 The pocket. It’s more than something to put your wallet in. It’s that magic interaction of instruments when it all suddenly locks into a groove. Spend some time adjusting relative timing of instruments to see if you can help the groove “gel”. You’ll know when it happens because it’s magic and you’ll start moving with the music whether you want to or not. Note that Beat Detective and other forms of quantization can fight this effect – it’s “felt” rather than being on an exact grid. Saying that, if the playing is too loose then a timing grid is definitely a step up.

7  Keep it simple stupid. Less is more. These things are fundamental truths, despite our over-familiarity with them often leaving them as meaningless statements in our minds. Think about the mix as a photo – the more people you want to appear in the photo, the smaller they’ll have to be. Don’t be scared to bring the main things to the foreground, and push other things back to the point of blurriness or being hidden behind the main elements. A good mix is not about individual band members’ egos, it’s about the overall blend. When you think about it, the individual band members have the least idea about what the mix should sound like – they all hear completely different versions of a mix depending on where they stand/sit when they perform.

8  Three “Tracks”. Back in the olden days, after mono and stereo, there were three tracks. One was for “Rhythm” (and could include drums, bass, percussion and rhythm guitar for example), one for Vocals and one for “Sweetening” which might be things like brass, strings, lead instruments etc. This strategy is still a great one to keep in mind for mixing. It forces you to think about your rhythm section as one single thing, and you need to make it all gel. Bass needs to lock in the pocket with the kick drum. Sweetening nowadays is whatever else you need outside rhythm and vocals. Think carefully about which mix elements fit into each of these three roles, and if all three are already populated – maybe it’s time to do some cutting. Note that some instruments such as guitars might switch between modes depending on what they’re playing at the time – rhythm, fills or lead.

9 One thing at a time. Rather than thinking of one of the aforementioned three tracks as just “Vocals” perhaps it’s better to look on it as “Melody”. The melody line often chops and changes between vocal, instrumental fills and solos. If you think of these three elements as playing a similar role at different times in the song, it makes it easy when trying to decide on levels/sounds between the three. It also highlights that you shouldn’t have any of those melodies crossing over each other and fighting at any point – keep ’em separated!

10 Getting the bass sitting right is tricky – especially when it needs to work on both large and small speaker systems. Try mixing the bass while listening on the smallest speakers that you have, to get it sitting at the right level. Then adjust the tonal balance while listening on bigger speakers to rein any extreme frequencies back in. Sometimes you might need to layer the bass sound to get this to happen effectively.

11 Don’t over-compress everything. Listen to the TONE while compressing each instrument and keep it sounding natural if possible. Pay close attention to the start and end (attack and release) of the notes of each instrument you compress. Your final mix should be sitting at an average RMS level of about -12 to -18dBFS with peaks no higher than around -3dBFS. Leave the mastering engineer to do the final compression and limiting. Remember to leave dynamic range in the mix – contrast! Our ears need some sort of contrast to determine what’s loud and soft. If you hammer all the levels to the max you may as well just record the vacuum cleaner at close range and overdrive the mic/preamp. Hmmm. Might have to try that.

12 Easier than Automation. In these days of automation, it’s easy to spend inordinate amounts of time tweaking automation changes on instruments or vocals between different sections of a song (eg adding more reverb to the vocals in the chorus or adjusting rhythm gtr levels in the bridge). With today’s digital audio workstations, extra tracks are usually in ready supply, so rather than fluffing about with automation for a specific section of the song, why not just move that part over onto another duplicated track instead, then just make whatever changes you need to suit that section. Much quicker than continually mucking around with automation on the same track. By the way – make sure your mix is dynamic. A mix is a performance in itself, not a static set of levels.

13 Use submix busses for each element of the mix. Eg drum subgroup,  guitar subgroup, vocal subgroup etc. Rather than send all your drums straight to the L/R or Stereo mix, first send them all to an Aux return channel instead. Then send that Aux to the LR/Stereo mix. (Tip: disable solo on the Auxes) This makes it simple to do overall tweaks to your mix even after you’ve automated levels on individual tracks.
You need to be careful about aux effects returns and where they come back though, as their balance might change slightly if you adjust the instrument subgroups.
And hey, what about creating just three subgroups – Rhythm, Melody, Sweetening? Let me know if it works ;o)

Sources: Stephen Webber, Bob Katz, Mixerman, Mike Senior.

Relieving Threshold Shift (Temporary Hearing Loss) with Acupressure

This is a handy tip for those moments when you’ve gone to see a loud band and forgotten to take earplugs, and one that I’ve used numerous times to “reset” my ears after a gig. I was shown this trick about 20 years ago by a friend and have been using it since then, but in preparing this blog I’ve also found lots of supporting evidence on the web that reinforces the basic concept. It has definitely and audibly worked for me and others that I’ve shown it to and it really can’t hurt to try it. Actually it does hurt a bit when you find the right spot to press, and I also have to admit it looks a bit stupid when you’re doing it, so best not do it when actually walking out of the gig – at least wait till you’re in the car when nobody can see you.

Press and hold the area shown in the diagram – it’s in the hollow just in front of the ear lobe. If you press the right spot it will feel tender, and after a few minutes you should feel the “cotton-wool” feeling diminish and your hearing begin to return.

Threshold shift, for those who don't know, is the muffled high frequencies, pressure or ringing in your ears that you can feel as you're walking out of a loud gig. Repeated threshold shift is extremely dangerous in the long term, and has even more significance nowadays with long-term headphone or ear-bud use.

Long-term exposure to loud sounds:

What happens is that when loud noise is perceived by the brain, it attempts to protect your hearing by tightening the muscles inside the ear in order to reduce the amount of noise passing through the ear mechanism. A fantastic system really, but not designed for a lifetime of loud music or industrial noise.

This muscle constriction can also restrict blood flow to the inner ear, and if it happens repeatedly it can cause long-term damage to the nerve cells in the inner ear, which eventually end up dying. Fatally. As Motörhead almost said – "Killed by Deaf".

Seriously – long-term noise exposure can cause permanent hearing damage.

This acupressure trick relieves the constriction of the muscles around and in your ear, and hence allows full blood flow again to the nerves in the ear, hopefully extending the life of your hearing a bit longer. Obviously it won’t suddenly reincarnate the dead nerve cells in your inner ear, but if used early and often enough it will hopefully at least minimise the damage somewhat. No guarantees of course.

* Threshold shift and its associated long-term damage are not the only ways to hurt your hearing. I have met people who have lost hearing from a single exposure to a loud impulse sound (someone pressing the wrong button on the mixing console and blasting maximum volume through the headphones, or a massively loud click through a PA system at a gig), as well as others who have ended up with tinnitus (ringing in the ears), which can last FOR THE REST OF YOUR LIFE. Apart from these problems there are other odd things that can happen related to your inner ear – for example upsetting your sense of balance. Not much fun – I had continuous vertigo for a few days when I had a nasty flu last year – no laughing matter as I ride a motorcycle to work.

Noise vs Music – I've often pondered this as I've been assaulted by a band that sounds like crap – as long as you perceive the music as, well, music, your brain isn't trying to shut your ears down, but if the band sucks and it sounds like obnoxious noise, they're effectively killing your ears! Obvious solutions – drink more alcohol to thin the blood and keep that oxygen getting to the ear cells, or try to psych yourself into believing the band is awesome, thereby fooling your own brain.
My wife says “why don’t you just leave?”, but I view that as defeatist.

Factoid 1: Research disputes what I just said before. Studies have shown that musicians suffer as much hearing damage as those exposed to industrial noise of equivalent level. I argue that musicians aren’t ever just exposed to music they like (we usually all have to share gigs with other bands), or other loud noise, so it’s hard to prove this either way without adequate methods or controls.

Factoid 2: Published Acceptable Exposure Time vs Sound level graphs are based on industrial noise, not music. At 110dBA your acceptable daily exposure time is 1 min 30 seconds!
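That figure follows from the standard damage-risk formula: a NIOSH-style limit of 8 hours at 85 dBA, halved for every 3 dB above that. (85 dBA and a 3 dB exchange rate are the common criteria; some older standards use 90 dBA and 5 dB instead.) A quick sketch:

```python
def safe_exposure_minutes(level_dba, criterion_db=85.0, exchange_db=3.0):
    """NIOSH-style daily exposure limit: 8 hours at the criterion level,
    halved for every `exchange_db` decibels above it."""
    return 8 * 60 / (2 ** ((level_dba - criterion_db) / exchange_db))

# 110 dBA works out to roughly a minute and a half per day.
print(round(safe_exposure_minutes(110), 1))
```

Plugging in 110 dBA gives about 1.5 minutes – matching the 1 min 30 s figure above, and a sobering number for anyone standing in front of a PA all night.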

Other (more serious) solutions:
Obviously, considering all this, the best solution is to avoid loud sound or wear appropriate hearing protection. Go and get some proper earplugs – the custom-moulded "musician's" earplugs are pretty darn good: they're relatively "flat" and uncoloured, though there are slightly cheaper options as well (custom-fitted plugs can be quite expensive, but they last for years with careful use). The problem I've found with them is that you can truly hear how out-of-tune the singer is when watching a live band, which might ruin your enjoyment or "perception of talent" slightly – but I have to say I've been to gigs wearing -15dB custom plugs and my eardrums have still been distorting painfully at times. You can get plugs with -25dB or more reduction, and some come with both inserts as options so you can swap them.

And finally an observation – isn't it weird how society is au fait with people wearing glasses to correct their vision, but wearing a hearing aid has a stigma attached to it? You see graphic artists, photographers, directors and numerous other industry professionals (who rely on visual acuity!) wearing glasses, but would you trust an audio engineer with a hearing aid? Hmmmmm.
Not that I need one YET, just paving the way for the future.