Value and Success in the Music Industry

A humorous Cracked article, “6 Harsh Truths That Will Make You a Better Person” by David Wong, was doing the rounds not long ago, about the need to define your value to others in order to become a more worthwhile person. I recommend you read it – it’s a timely reminder even if you already know this sort of thing.

The article immediately flashed me back a couple of years, to when I brought an old friend of mine – with years of experience in the upper echelons of the NZ music industry – into one of my classes as a guest speaker. It was nominally supposed to be a communications lecture, but what he focused on instead was defining your own value for others: basically the same message as the Cracked article, but targeted more towards success in the music industry.

If you want to succeed in the music industry you need to provide value to others – either in the service sense if you’re going into audio engineering, or by producing content that people want to hear. This is exactly what slightly delusional wannabe rock stars need to hear.

Zed Brookes – O Sweet Cacophony Album Released

I finally released my album in 2016, after changing the name to “O Sweet Cacophony” as it seemed more intriguing and distinctive than “Deus Ex Machina” (which also reminded me too much of a recent film).

Luckily a good friend of mine stepped in and helped me curate the final track listing, and even booked the venue for the release listening party at Brothers Beer in Auckland.

Feel free to check it out:

Zed Brookes – O Sweet Cacophony

iTunes, Spotify, Tidal, Amazon

Zed’s Bandcamp page.

 

 

 

Mixing Metaphors for Music – the Picket Fence and Shrubbery Metaphor

Picket fence – photo by Martin Kennedy, via Creative Commons Non-Commercial Share-Alike licence

Mixing engineers and producers generally have a bunch of internal metaphors for visualising or handling the various aspects of a song mix, depending on what they’re working on and their own personal preferences. Some stick with the same ones all the time, but most dynamically shift between a selection of metaphors as they go. It can be a way of keeping certain mix “rules” or approaches in play when building up a song mix.

For example, some “see” a mix as a picture or painting, with each part made up of different colours and shapes. “Make the bass more blue”. Whatever that means. Others think in food metaphors – “we need more spice on the organ” (sounds painful to be honest), the “vocal isn’t sweet enough”, or “it’s still too sickly – it needs some salt in there” etc. It even works the other way around – while beer-tasting I can’t help comparing the flavour to the spectrum of frequencies in a mix. “There’s not enough low-mids in this IPA”.

Personally, when mixing I switch between the colours and shapes, the flavours, and others as well. For example, I’ve always felt the mix to be composed of horizontal and vertical aspects, especially when dealing with anything primarily groove-based. One day as I was trying to explain it to someone, I jokingly referred to it as the “picket fence and shrubbery” metaphor.

It does make it very easy to visualise and compare the vertical, transient aspects (pickets) – for example drums and fast, intricate, glitchy stuff – with the horizontal aspects (shrubbery) – eg pads, basslines, vocals etc. It becomes more about timing and note length.

Is the picket fence too large compared with the shrubbery behind it, or is the shrubbery overgrown and concealing the picket fence? Obviously it depends on the song, but it’s easy to “feel” the balance between the two elements, which hopefully helps you achieve an appropriate blend.

If there’s too much shrubbery, the mix could be washy and rhythmically undefined, whereas if there’s too much picket fence, the drummer was probably helping mix. Boom boom.

Edit:

Some people seem to have trouble with the simplicity of this metaphor, so here is a more academic translation:

Music could be considered to be made up of parts that often have both transient and sustained components. Transient parts (ie shorter than approximately 50 ms) could be considered “vertical”, whilst sustained elements (longer than approximately 100 ms) could be viewed as “horizontal”. Most instruments have a combination of both – for example, a piano or acoustic guitar could be mixed to feature a predominance of either the attack or the sustain portion of the envelope, contributing differently to the mix in either case.

Visualisation of a mix with these horizontal and vertical attributes in mind could help a mix engineer ascertain whether there is an appropriate balance of the two aspects, depending on the context of the music. For example, in dance music, where the rhythm is important, it might be beneficial to feature more of the attack portion of the sound, while in something more musically ambient, more horizontal features could be highlighted, perhaps even adding reverberation to enhance the effect. (Note that in dance music it is common to apply a side-chain to impart more vertical impetus to a horizontal sound – using the kick to duck a string pad, for example; see the sketch below.)
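If you’d like to see that side-chain ducking idea in code form, here’s a minimal sketch – my own illustration in Python/NumPy, with made-up tempo and pad values rather than a recipe from any particular DAW – of a kick-triggered gain envelope pumping a sustained pad:

```python
import numpy as np

SR = 44100                     # sample rate (Hz)
BPM = 120
BEAT = int(SR * 60 / BPM)      # samples per beat

# A sustained, "horizontal" pad: two slightly detuned sines, four beats long.
t = np.arange(4 * BEAT) / SR
pad = 0.3 * (np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 221.5 * t))

# A ducking envelope triggered on each beat (where the kick would hit):
# gain drops to 0.3 instantly, then recovers exponentially over ~200 ms.
recovery = int(0.2 * SR)
curve = 1.0 - 0.7 * np.exp(-np.arange(recovery) / (recovery / 5.0))
env = np.ones_like(pad)
for beat in range(0, len(pad), BEAT):
    seg = min(recovery, len(pad) - beat)
    env[beat:beat + seg] = curve[:seg]

ducked_pad = pad * env   # the "horizontal" pad now pumps with a "vertical" pulse
```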

The key point is that almost any sound can be treated to expose more of the attack or sustain portion, and the collective treatment of various parts in the mix with this approach in mind could see completely different mixes created from the same material, even with similar peak levels for each part. In this light, it is simply a matter of controlling the balance between the RMS (average) level and the peak level of each part.
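That peak-vs-RMS balance is easy to measure. Here’s a small Python/NumPy sketch of the crest factor (peak-to-RMS ratio) that makes the picket-fence/shrubbery distinction numerical – the test signals are just illustrations:

```python
import numpy as np

def crest_factor_db(x):
    """Peak level relative to RMS level, in dB.
    High values = transient-heavy ("vertical"); low values = sustained ("horizontal")."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(peak / rms)

clicks = np.zeros(44100)
clicks[::4410] = 1.0                                        # a click train: picket fence
pad = np.sin(2 * np.pi * 110 * np.arange(44100) / 44100.0)  # a sine pad: shrubbery

print(crest_factor_db(clicks))  # ~36 dB: almost all peak, no body
print(crest_factor_db(pad))     # ~3 dB: almost all body, no transient
```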

In mixing drums, for example, this vertical-vs-horizontal viewpoint can be effective in helping to decide on an optimum balance between close-mic’ed drums (eg the snare) and the often-well-compressed overheads. It can also make things simpler when adjusting the attack on a compressor in order to provide an effective balance between the attack and sustain portions of a sound.

The Meniscus Effect – Blending Digital Audio Analogue-Style


Many engineers and producers love the sound of analogue, despite (or perhaps because of) the superb quality of digital audio products. Analogue is felt to be more musical, and songs recorded in analogue formats seem easier to mix. Why is this?

Most people, when asked why they like the sound of analogue, focus on things like “warmth” and “roundness”: the slight bass hump you get off analogue tape, the subtle musical harmonic components, the naturally better average-to-peak ratios that make your music sound louder and fuller, and the inherent infinite-band compression/limiting of analogue tape that acts as a de-esser. Actually, all of these do sound great!

Of course there’s also the downsides that many forget: Noise. Distortion that you don’t want. Noise. Expense. Maintenance. Non-linearities in the sound that you don’t want. Noise. Crosstalk. Track limitations. NOISE!

What most engineers don’t seem to talk about much is WHY it’s easier to mix analogue sources. Those that do tend to point to things like crosstalk between the channels or across tape tracks, or the non-linearity of each analogue channel making the music sound richer, as the main reasons. Many younger engineers who haven’t had much experience with tape probably don’t even appreciate the difference.
Whatever it is (and it’s definitely a complex blend of a few things, including harmonic distortion), I think the result is that each sound is easier to blend or “gel” with another sound. It’s like each sound has a meniscus on it that grabs other sounds when you get close enough. Analogue also has a strange ability to keep things really loud in the mix without the detail overshadowing the other instruments.

Of course, if it’s perfect clarity you want, then you don’t want this happening, but in most cases you want a song to feel homogeneous and musically prosodic – with all the elements gelling cohesively into something that feels larger than the sum of its parts. It becomes a pleasing landscape rather than just numerous close-ups of tree bark.

This also depends on the type of music/sound you want to make. If you were an artist, would you be going for photo-realism, a more impressionist look, or surrealism? If it’s a photo, are you going for a snapshot with all elements in focus (please no!), or a nice well-lit shot with the foreground in focus and the background out of focus (hell yes!)?

My idea of the Meniscus Effect – and let me be clear that this isn’t something new or something I’ve created; it’s simply a method I use to consciously perceive a mix in a useful manner, and it might be useful for others as well – is the art of blending each part into the next: gently blurring the edges while making sure the main part of each sound retains its distinct shape and character.

One of the problems with mixing in digital is that the cleanliness of the recording allows each sound to retain its focus from the peak level to the dark limits of lower bit encoding. It’s like a perfect digital snapshot where everything’s in focus, all at once. The information’s all there, but is it aesthetically pleasing, and does it blend?

In practice, what happens today is more of the same sort of thing that’s always been used – judicious use of EQ, reverb, distortion and compression to blend the parts.

So let’s look more closely at what we need to help gel the sounds together.

The key thing is blending each sound into the digital silence. Solutions? Subtle reverb; compression to bring up the noise floor of the track, preferably in a rhythmical manner; and distortion to add blending harmonics.

Reverb
The general rule of thumb with reverb is that the longer the decay, the quieter it should be, otherwise it just washes everything out.
I recommend tweaking the pre-delay so that it adds a rhythmical attribute (and clarity to vocals), and also filtering out some of the bottom end to clean it up. Use a nice plate or hall for the “long” reverb, blended so you almost can’t even hear it, but you definitely notice it’s gone when it’s muted. Don’t make the reverb really toppy/bright – keep it “warm” sounding.
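For the rhythmical pre-delay, the arithmetic is simple enough to sketch (my own example figures – and the same maths works for the tempo-matched compressor release times in the next section):

```python
def note_ms(bpm, division=16):
    """Length of one 1/division note in milliseconds."""
    quarter_ms = 60000.0 / bpm          # one beat (quarter note)
    return quarter_ms * 4.0 / division

print(note_ms(120, 16))   # 125.0 ms -> a 1/16-note pre-delay at 120 BPM
print(note_ms(120, 32))   # 62.5 ms  -> a subtler 1/32-note option
```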

Compression
Compression basically reduces dynamic range – turning down the loud bits – after which you turn the whole track up again so you can still hear it at the same sort of level. The key use here is very similar to the reverb: make the compressor release match the tempo. In this case your “meniscus” is the musical pumping of the compressed track’s background around the beat. Again, this is pretty subtle – and it’s probably better to start on the slow-release side than the fast. In this scenario you’re after “character” rather than sheer loudness!
Also – sticking a gentle compressor over the master (stereo) bus works wonders. Note that it should be almost not even doing anything – 1 or 2 dB of reduction is a lot. It’s there to help gel the mix, not add overall level. Save that for the mastering engineer.
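To make the “1 or 2 dB is a lot” point concrete, here’s the static gain maths for a simple hard-knee downward compressor – a generic textbook formula, not any particular plug-in:

```python
def gain_reduction_db(input_db, threshold_db=-14.0, ratio=2.0):
    """Gain reduction (dB) applied by an ideal hard-knee downward compressor."""
    if input_db <= threshold_db:
        return 0.0
    return (input_db - threshold_db) * (1.0 - 1.0 / ratio)

# A mix bus peaking 4 dB over a -14 dB threshold at a gentle 2:1:
print(gain_reduction_db(-10.0))   # 2.0 dB of reduction - already plenty for "gel"
```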

Distortion
Often dialled up to add a level of aggressiveness to a track, distortion has a lot more uses than this simple task, but those uses mean being more careful about the “type” of distortion you choose. You may have already noticed that adding distortion to an instrument during mixing doesn’t necessarily make it jump out any more than it did. If anything, the extra harmonics you generate can mask the other instruments in the mix, and the distorted part can actually become harder to hear, adding more “mud”.
The secret is to use this same masking effect as a blending tool. Make sure you’re not adding brittle harmonics to the high frequencies as these will simply make your mix sound harsh and abrasive (unless that’s what you want). What you’re looking for is more warm mid-range distortion – ideally in a range that’s favorable to the instrument or vocal. Use only small amounts.
Feel free to duplicate a track and run it in parallel with distortion added – and maybe those two versions could have different EQs so you’re not distorting the entire frequency range in the track.
Don’t be scared to reduce the treble at the same time – see the next section.
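Here’s a minimal sketch of that parallel chain in Python/NumPy (my own illustration – the drive, cutoff and blend amounts are placeholders to adjust by ear): the duplicate is darkened first so the added harmonics stay warm rather than brittle, soft-clipped, then tucked quietly under the dry track:

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sr=44100):
    """Crude one-pole low-pass to darken the copy before distorting it."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    y = np.empty_like(x)
    state = 0.0
    for i, sample in enumerate(x):
        state = (1.0 - a) * sample + a * state
        y[i] = state
    return y

def parallel_distortion(dry, drive=4.0, cutoff_hz=2000.0, blend=0.15, sr=44100):
    dark = one_pole_lowpass(dry, cutoff_hz, sr)       # distort only the warm range
    dirty = np.tanh(drive * dark) / np.tanh(drive)    # soft clip, roughly unity peak
    return dry + blend * dirty                        # just a little, underneath
```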

EQ
Without resorting to extensive tutorials on how to effectively EQ your mix – let me just point out one useful tip. Don’t be scared to make some of your tracks “darker”. This means reducing the high-frequencies on certain tracks, instead of boosting the highs on absolutely everything. In general, don’t EQ while you’re soloing tracks – do it while listening to the full mix.
I generally find notching out somewhere in the 180–250 Hz region on each track cleans things up a bit in the normally dense low mids, but make sure you don’t strip out the warmth of this region completely, as it is important to the body and power of your mix.

Although the current fashion is to cut the lows on each of your tracks (with a high-pass filter) to get rid of any subsonic power-sucking rumble, sometimes you may need a track to actually contain some low-frequency depth – just choose the right track! I highly recommend the VOG plug-in from UAD for adding some smooth fatness to a track (especially kick drums!) – it’s basically a resonant high-pass filter, with the resonant peak positioned at the frequency you want boosted.
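I obviously can’t show the VOG’s internals, but the basic idea – a high-pass filter with a resonant peak sitting at the cutoff – can be approximated with a standard RBJ “cookbook” biquad. This is a generic sketch, not the UAD algorithm, and the 50 Hz/Q values are just example settings for a kick:

```python
import numpy as np
from scipy.signal import lfilter

def resonant_highpass(x, f0=50.0, q=4.0, sr=44100):
    """RBJ cookbook high-pass biquad; a high Q puts a boosting
    resonant peak right at the cutoff frequency f0."""
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    cosw = np.cos(w0)
    b = np.array([(1 + cosw) / 2.0, -(1 + cosw), (1 + cosw) / 2.0])
    a = np.array([1 + alpha, -2.0 * cosw, 1 - alpha])
    return lfilter(b / a[0], a / a[0], x)   # rumble below f0 cut, f0 itself boosted
```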


Other experiments to try:

Avoid having all the attack transients in the same spot. Offset kick drum, bass, and rhythm guitar tracks slightly to “stagger” the attacks – ie the opposite of quantising. Try delaying the bass for starters. This can blur and expand the energy on each beat. Make sure your tracks are super-tight to start with.
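A nudge like this is just a few milliseconds of delay. Here’s a tiny Python/NumPy sketch (the 3 ms and 1.5 ms offsets are purely illustrative):

```python
import numpy as np

def nudge_ms(track, ms, sr=44100):
    """Delay a track by a few milliseconds (positive ms = played later)."""
    n = int(sr * ms / 1000.0)
    return np.concatenate([np.zeros(n), track])[:len(track)]

# eg: kick stays on the grid, bass lands 3 ms late, rhythm guitar 1.5 ms late:
# bass = nudge_ms(bass, 3.0)
# guitar = nudge_ms(guitar, 1.5)
```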

Recording the room ambience with a track. Rather than fight it, let it in. Not too much though, unless you have a million-dollar room, otherwise it will quickly dominate and can sound pretty awful. Try adding distortion to the room ambience. It helps if it’s on a separate track. One thing I like to do is capture my own impulse reverbs of the recording room and then add it back in to the dry recording. This gives you full control at mix-time over “dry vs room ambience”.

Leave noise on tracks. (But clean up lead vocals and any featured instruments). All those weird little background noises and people talking in the background when you were putting down what was originally supposed to be your guide acoustic guitar – keep it. In other words – don’t clean all your tracks up too much – just the ones that need it. This doesn’t apply to the start/end of songs – it’s better to keep these really tidy as they frame your song and show that you actually meant that other stuff in the middle to sound loose.

Use “dirty” or highly-compressed loops. Bringing up the textural background in a single loop ripped from, say, vinyl can add a certain magical blending factor and way more character to your song. Make sure to clear the sample if you do this.

Add extra ambience tracks. Party sounds, outdoor ambience, car drone, room tone, or other subtle sound effects. It’s like having subtle blending background noise, but it’s a lot more interesting and with the extra possibility of occasional random coincidences that add value.

Use the free “Paulstretch” or something similar to slow a copy of your entire song down glacially (eg 8x) and use it to add background textural ambience that conveniently works tonally with your song. Bonus: makes a nice intro and is also good for creating atmospheric film soundtracks.
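If you don’t have Paulstretch handy, a rough stand-in is librosa’s phase-vocoder stretch – it won’t sound identical to the Paulstretch algorithm, but an 8x slowdown gives a similar textural drone. A minimal sketch (the filenames are placeholders):

```python
import librosa
import soundfile as sf

y, sr = librosa.load("my_song.wav", sr=None)           # keep the original sample rate
stretched = librosa.effects.time_stretch(y, rate=1/8)  # 8x slower
sf.write("my_song_drone.wav", stretched, sr)
```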

 

The Art of Conversation in Song

(Some thoughts on the idea of conversation as a way to visualise the interaction of parts within a song).

Both music and its production involve various transactions, or a dialogue, or a conversation, between interested parties.

This might be between the artist and the audience at a gig, as the audience responds to what the artist is playing, which inspires the performer to even greater heights. Or it might be between the artist and the producer (acting as an informed audience) while in the recording studio.

Or it might be the artist talking to themselves (in a musical sense I hope) across a period of time – sort of like those emails you can send to your future self. How often does an artist make a musical decision on an ongoing piece of work, only to change things later when they have lived a bit longer and learned or experienced more? This is just a conversation over time.

Or it might be a type of conversation within the music itself. Melody and counterpoint. Call and response. The statements uttered by a brass section or backing singers. The drummer and bass player locking in together for certain fills.

When you look at it in a certain way, a good song is a continuous conversation between all of its component parts. Some bands seem to have this nailed – everything they do just locks together like a big fat intricate interesting machine. Jazz artists do it as part of the way the whole genre works – taking turns to solo for example.

So – does this mean a, shall we say “less effective” song, might be not so much a conversation, as a room full of people speaking at the same time, over the top of each other? Maybe so, and maybe the problem isn’t so much about all the voices speaking, but more about the lack of listening before talking.

Let’s look at the value of live performance. There’s not much doubt that a magic performance will grab us in a way that a more technical rendition of the same song doesn’t. In fact, in the studio, early takes of songs seem to exhibit more of this magic than later takes. The first take, despite a much higher chance of mistakes, generally has the best overall “vibe”. (And if you’re recording while you’re still writing the song, you supposedly capture that magic even before the first official take.)

Why are these early takes so good? I suggest that it’s because all the participants are listening to each other. Whether it be for clues as to when the next change is coming up, or to see if they are playing the right notes or are in the correct key, or are locking in to the groove, or whatever it may be.

And listening is not just a component of good conversation – it’s arguably much more important than the talking bit, because each spoken part of the conversation is a continuous transaction with every other part that is heard, and the conversation can adapt and change as needed. More breathing space is left between each component. By the way – don’t you hate it when people are just waiting for a gap to say something that they’ve been holding on to for ages? Even if it doesn’t fit anymore because the conversation has moved on? That’s not good conversation. Just saying.

Good conversation helps the music move towards that area of “flow” where everyone is unselfconsciously involved and “in the zone”. Where the conversation becomes the thing that everyone wants to keep going – like playing tennis or badminton (or maybe beach volleyball), where instead of trying to win the competition, you really just want to keep that ball in the air for as long as possible. That’s where the fun is, that’s where the magic happens.

Conversation can also have its part in the more sundry technical aspects of song arrangement. For example, how can the sections of a song have a conversation, and what sort of voice would they “speak” in? When you think about it in this way (and really all these sorts of concepts are just handles for manipulating musical ideas) it opens up a world of possibilities for ways to look at an arrangement.

Does your chorus shout in a happy voice while your verse is more of a grinding tortured whisper? Are the drums angry or subdued? Is there a buzz of an annoying bassy mosquito whirring around your head on a song that sounds otherwise like a murmuring summer’s day? Maybe that mosquito is a good thing otherwise you’d fall asleep. Unless that’s what you wanted to happen. Okay I’m stopping there…

I just had one more thought on a related note.

The problem with working solo, or being the only “voice” in your song, is this: it’s like a room containing only one voice. Yours. It can be a good idea to do full or mini collaborations with others – even if it’s just to add another voice or two in there somewhere. Of course, I don’t mean voice literally – it could be guitar or bass. Or banjo, if that’s your thing.

Over the Top and Back – Avoiding the Uncanny Valley in Music Production

One of the dangers of nibbling away at mixing songs – commonly with your mouse rather than a dedicated audio control surface or mixing desk – is that it’s easy to be far too conservative when adding effects and the like.

What typically happens is you slowly push the level of an effect up until it starts to sound like it’s too much – then you back it down slightly to get a nice balance of “wet” effect vs the “dry” sound source. Ahhhh. Nice.

This is fine – but effects often work within quite different contexts as the balance of effect vs dry sound changes radically, so by using this conservative method you’re always remaining inside the one safe context of the sound balance, without ever realising any of the other creative possibilities available.

If you keep pushing the level of the effect past the point where it sounds bad or too much, then you can sometimes get beyond the audio version of the “Uncanny Valley” into a different range of possible sounds.

A simple example is reverb. The first “conservative” range remains within the context of adding a nice subtle tail to a sound to make it blend, or perhaps to give a subtle halo of space around the sound.

As you keep pushing the reverb level up – the sound becomes muddy and cluttered as the dry sound and the reverb fight each other. This is the audio version of the “Uncanny Valley”.

If you keep pushing the reverb level even further, you will change the reverb’s context completely. The room environment is now dominant, with your instrument or voice existing within it. Of course, at this point it will probably become apparent that the reverb needs some tweaking to clean it up a bit – adjusting pre-delay and reverb time, and perhaps applying some low-cut EQ to take out some mush.

Reverb is not the only thing that this works with – try it with any applied effect like chorus, flange, distortion, echo etc. Or try going to the extreme and remove the dry sound completely. (Try pre-fade effects sends with the fader pulled right back).
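One easy way to audition the whole range, valley included, is to sweep an equal-power wet/dry crossfade from fully dry to fully wet – a generic sketch of the idea, not any DAW’s actual mix law:

```python
import numpy as np

def wet_dry_mix(dry, wet, mix):
    """Equal-power crossfade: mix=0.0 is fully dry, mix=1.0 is fully wet
    (the fully-wet end is the pre-fade-send-with-fader-down case)."""
    theta = mix * np.pi / 2.0
    return np.cos(theta) * dry + np.sin(theta) * wet

# Sweep in big steps and listen for where the context flips:
# for m in (0.2, 0.4, 0.6, 0.8, 1.0):
#     out = wet_dry_mix(dry, wet, m)   # then play 'out' however you audition audio
```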

It certainly opens up many more creative possibilities and can help you discover fresh sounds for your mix to make it a little more exciting. Plus it doesn’t take much more effort or time to do this and it has zero risk! So make sure you go way over-the-top when applying effects, then just bring it back to where it works best for the song.

About Zed

Zed has had 30-odd years in the music industry, mostly as a musician, songwriter, studio engineer and producer, with only slightly less time spent teaching others about audio.

His day job is at MAINZ (Music and Audio Institute of New Zealand) in Auckland, where he is head of the Audio Department. To retain his professional practice, he spends time each year recording, mixing, producing and mastering other people’s projects, and has his own professional audio company – Brookes Audio Design.

He is a recognised expert in Logic Pro, and also has experience with Pro Tools (lots) and Ableton Live (a little). He runs Logic Pro short courses, and “Introduction to Production for Songwriters” at MAINZ.

He has recently completed his Master of Arts in Music, based around enhancing creativity in the home studio, and has his associated “Deus Ex Machina” Masters album release coming up shortly.

His main interest is in songwriting and song production, and he has a number of collaborative and band projects in progress. Zed sings, plays bass, guitar and keys, and is a dab hand on the tambourine.

 

Deus Ex Machina Album

Update: The album was released as Zed Brookes – O Sweet Cacophony in September 2016 and can be found on the various music sites iTunes, Spotify etc, and also on Bandcamp.

I’m currently trying to complete my album project “Deus Ex Machina”. It revolves around the idea of the organic and the machine converging.

It began as my Masters project, but as these Masters projects tend to be based around experimentation with various concepts, the songs are not generally as “listenable” as they could be. My experimentation was around the idea of enhancing creativity in the home studio, primarily when working solo – something that’s becoming more and more common in our technology-rich musical environment.

So although the album was submitted early in the year (and got a pretty good mark, by the way), I am going through each song, fixing the list of items that I flagged as needing more work before general consumption.

Here’s the title track (unmastered as yet) “Deus Ex Machina” – an instrumental.

  • The universe is a big relentless machine – also links in with the song “Getaway”.
  • Experimenting around the tension between machine-like characteristics and more human instruments – eg guitars.
  • How many instrumental layers can play the same riff before it loses its definition? Main riff has 3-5 synths layered with a live-played bassline.
  • Overall feel: instrumental, dense, driving, a relentless machine
  • Contrast ideas: Legato bends in second “B” section vs rigid pumping defined riff elsewhere. Move to minor chord in 2nd half of B-section, guitars change roles per section, the real bass switches to more legato slides in the B section. Legato vs clean changes.
  • Main real bass riff is clacky and textural, while the synth basses add the body and substance.
  • Guitars add conversational elements – there is a kind of dialogue between panned guitar parts, one side hopeful and preachy, the other a cynical “oh really?”
  • Some synth sounds are deliberately “laggy” to simulate reverb.
  • Tempo was adjusted slightly per section to amplify groove and overall feel of song (this turned out to be a real pain in retrospect – I can see why most people prefer to drop the click completely, or just stick to one tempo).

 

Here’s another track called “Getaway”.

“Pressure chamber’s running red, the stones are bled

dull ache between the scrutineyes, and our sense of humour’s fled, so lets…

Getaway now, start the motor, get inside

Getaway now, turn it over, and drive

 Hammer horror office lives, us zombies never fed

put a spanner in the gears my friend, put the monster in its bed

and lets…

Let’s drive forever, never slow, on roads that will not end

drop the top and mind the bats, wave goodbye to daily grind and lets just…”

  • Brain-dead and burned out? You need a solipsist road trip. Get in the car and drive.
  • Introspective dark verses vs a big dreamy hopeful chorus.
  • Mirrors a sense of isolation and alienation you sometimes feel when on a long car drive without stopping.
  • Detuned glissando keyboard parts to create tension, and simulate cars passing
  • Lots of vocal harmony parts with tension created by melodic/harmonic movement between the parts.
  • Lush, floating, almost the hypnotic drone of the car interior – pulsating. Actually added some car drone SFX as well.

Thunderbolt arrives for the UAD Apollo – huzzah!

A large box for a small card. That image of the card is pretty close to “actual size”.

Further to my recent review of the UAD Apollo, I finally received my Apollo Thunderbolt interface card today. (More on what Thunderbolt is on Intel’s site and on Apple’s site)

There’s a lot of chippage going on in this little card.

That means I can now use the fancy little Thunderbolt port on my MacBook Pro instead of the Firewire 800 port.

So much simpler – only three cables needed now.

So what are the benefits?

Thunderbolt has blisteringly fast bandwidth compared to Firewire
That means there are virtually no bandwidth limitations on the number of Apollo-hosted UAD plugins you can use. This wasn’t too much of a problem with Firewire anyway – unless you wanted to share your Firewire port with, say, an external hard drive. Which you tend to do quite a bit when you’re recording/mixing music.

Thunderbolt gives way lower latency than Firewire – I can now run Logic quite happily with a 32-sample buffer (as long as my project isn’t too huge). It used to choke a bit before.
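For context, the buffer maths behind those latency figures is simple (this ignores converter and driver overhead, so real round-trip numbers will be a bit higher):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    return 1000.0 * buffer_samples / sample_rate

print(buffer_latency_ms(32))    # ~0.73 ms per buffer at 44.1 kHz
print(buffer_latency_ms(256))   # ~5.8 ms - a more typical "safe" setting
```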

I’ve also noticed Logic feels a little snappier and more responsive when starting/stopping playback, and the songs load and close slightly faster – perhaps due to the UAD plugins loading and unloading faster. Perhaps it’s just my imagination – further testing will reveal whether this is the case.
I also haven’t taken the time to check out the higher sample rates yet.

Convenience
A cool thing about Thunderbolt is that it’s daisy-chainable (as is Firewire, by the way).
But what I particularly like about it is that I can plug my laptop into the Apollo’s new Thunderbolt port, and then there’s still a second Thunderbolt port on the card which I can then plug into my studio’s LCD monitor – just using my usual laptop DVI-VGA adaptor. Magic.

Even better – the two Firewire 800 ports on the back of the Apollo now become a Firewire hub, so one of the ports can now be plugged into my external Firewire hard drive – and it all goes through the Thunderbolt cable into my laptop.

So that skinny little Thunderbolt cable is handling audio going to and from the Apollo, UAD plugin data going to and from the Apollo, video being sent to my studio LCD monitor, and data to and from my external hard drive. That’s a lot of stuff going on.

For me, convenience is a big thing. I want a simple, tidy connection setup – especially as I use my laptop as my main studio computer and want to be able to come home and plug it into my studio setup with a minimum of fuss. As you can see, Thunderbolt does this really well.

Installation
This was easy – unplug the Apollo’s power, use the included allen key to take the little slot cover off the back of the Apollo (make sure to remove any static from yourself), then slide in the Thunderbolt card. Put the screws back in. Plug the power back in, connect Thunderbolt cable and turn it on. Took about 2 minutes.
It showed up in the computer exactly the same as before, but without the Firewire bandwidth meter and settings in the Apollo Control Panel.

Note that you will have to update your UAD software and the firmware in the Apollo before fitting the card if you haven’t already (I had). You also can’t connect to your computer with Firewire AND Thunderbolt at the same time. It doesn’t like it, apparently.

Downsides
Expense. It’s currently about $700NZ for the interface card – I think that’s still a little over-priced, but perhaps the price will drop as they produce more of them.

Thunderbolt cables are also a little expensive at the moment – $78NZ for a 2-metre cable.
Ouch. Still – not as bad as the first Firewire cable I ever bought – that was $300NZ!
Also – for those that didn’t go and check out the Thunderbolt links at the top – part of the reason the cables are so expensive is that they have circuitry in each of the connectors to combine (and then separate at the other end) the video and the data.

Conclusion
Most people will probably find that the Apollo is just fine with the existing Firewire 800 option.
Those who are pushing the limits of their setup all the time (like I am!), or want ultra-low latencies for tracking, or who prefer a simple and tidy studio cabling setup, would definitely benefit from this Thunderbolt card.
And as more and more Thunderbolt-compatible devices become available, the convenience of being able to daisy-chain them all together will become even more valuable.

3 Common Mistakes of Lyric Writing

As a producer, one of the things that is most apparent to me is the difference between an amateur and a professional songwriter – even if that amateur is talented and doing well in their career. Many bands and artists come into the studio with what initially seems to be a great song, but in the process of putting down the vocals it can become increasingly apparent that the lyrics have not had the same level of development (or writing expertise) as the rest of the song, often with basic mistakes that leave an otherwise excellent song fundamentally flawed.

Lyric-writing is a craft as well as an art – words have more or less power and meaning depending on the order and context in which they are conveyed, and knowing some tricks to getting the maximum impact (and least amount of song self-destruction) from your lyrics should really be of high priority. Of course, there are no “rules” in writing, but there are observable effects on the listener depending on how you construct the lyric, and you can simply choose to use these tools or not.

Here are my three worst contenders for shooting yourself in the lyrical foot.

1) Don’t use perfect rhymes

This is probably the most amateur mistake of all.
Try to use other types of rhymes instead – eg family rhymes, internal rhyme, additive, subtractive, assonance or consonance rhymes.
Although a part of our brain always desires perfect rhyme, we have come a long way since the early days of songwriting, and all those obvious perfect rhymes have been so well used that they are now totally clichéd and too predictable.

Get yourself a rhyming dictionary (there are online versions too, although I prefer the MasterWriter app) and choose rhymes that are less obvious and maybe pleasantly surprising. Instead of using the perfect rhyme “Bread” and “Head”, maybe use a family rhyme – eg “Bread” and “Web” or “Tear”. In singing, we generally tend to rhyme vowel sounds, and the consonants matter less. Check out books/articles/workshops by lyric guru Pat Pattison for more details on rhyme types.

Note that sung rhymes are not usually the same as written rhymes, so make sure you sing them as you write to make sure that they ARE singable.

2) Use “spotlighting” effectively

There are natural accents within a musical bar that will automatically highlight or spotlight to the listener any word or syllable placed upon it. These spotlights tend to be on the downbeats of the bar, plus a big one at the end of a line, and even bigger at the end of a verse.

Ignorance of this behaviour means that you may end up with “nothing” words like “the”, “and” or “but” placed on these prize positions in the bar rather than your cool meaningful words.
This risks weakening your lyric and can even undermine the meaning of it by placing importance on the wrong word.

Back in 2007 I wrote a song, just before going to a Pat Pattison workshop, that included this lyric:
“The tide is slowly rising, Blood red sun on the horizon”
Spotlighting these words:
Tide, slow, rise, bloodred, sun, the, horizon.

Notice how “the” has a spotlight that it really doesn’t deserve?
I fixed it in this example by removing it from the spotlighted position:
“The tide is slowly rising, Blood red sun on ….the horizon”
Here’s the link if you want to listen (warning – ultra-demo quality!): Tied up in Knots
Note also that “rising” and “horizon” rhyme when sung. 

And in relation to syllable position:

3) Don’t put the “emPHAsis on the wrong sylLAble”*.

As much as possible, try to sing as you would normally speak in conversation. If you don’t, you risk breaking the meaning of what you are trying to get across, and it can sound contrived, amateurish, or just like you haven’t taken the time to make the lyric fit the music properly.

You should be able to read your song lyrics out, spoken-word fashion, and the phrasing shouldn’t be too far away from how you sing it. Or vice versa. This is most noticeable when you’re going for an “authentic”-style delivery (rock/blues/indie) rather than a stylised one (r’n’b, soul, pop) – accenting the wrong syllable can instantly break the authenticity. The listener will go “huh?” and the flow and belief are broken.

There are many more lyrical tips than this, of course, and some equally or more important, but the best idea is to do a proper workshop or short course on it, or at least get a decent book or two about how to structure your lyrics.

For those of you who balk at being told what to do – I remind you that these are not rules as such – they are simply based on observable effects on a listener, and you can still go ahead and do whatever you want.
Sometimes you might need to make a call between including a word that adds the perfect meaning to your lyric, and having to jam it in there a bit clunkily because it doesn’t quite fit. But you should definitely be aware of the risks to how the listener will receive and decode your meaning when you decide to do things like this.

And finally – you should ALWAYS use some kind of rhyming dictionary – otherwise you are relying only on the rhymes you can currently remember, which is often just a small fraction of the huge number available – many of which are probably more interesting than the one you can currently think of.

*As spoken by Mike Myers in “View from the Top”.

References:

Pat Pattison: Essential Guide to Rhyming (formerly titled Rhyming Techniques and Strategies), Berklee Press, distributed by Hal Leonard, January 1992.
You can order all three books – Writing Better Lyrics (second edition), Essential Guide to Rhyming and Essential Guide to Lyric Form and Structure – for a special price here.

Jason Blume: Writing Hit Lyrics with Jason Blume – get the book here.