Let's begin this month's post with a quick recap. Your guide is down, drums and bass are recorded, and your next job is building the track up with guitars and keyboards.
First thing to be aware of – the vocal melody. EVERYTHING you do is in support of this. You need to know the notes, the rhythmic placement, the frequencies, the placement in the stereo spectrum and the reverb levels. So you really need to ensure that you're working with a guide track.
Consider chord voicings that “frame” the melody rather than encroach on its space – for example, classic rock tunes tend to have higher-pitched vocals, so chugging power chords will sit nicely underneath them, but the Smiths' king of jangle, Johnny Marr, has often talked about capoing his guitar parts so they sit above Morrissey's distinctive lower-pitched vocals.
Consider dynamics – how to build a rhythm part for a verse that develops. A good trick is to keep the first half of the verse sparse and gently weave in another layer behind the vocals in the second half. A great example of this is in the classic “Sweet Child O' Mine” – whilst Izzy strums the D, Cadd9 and G chords, Slash spends the first half hitting a higher chord voicing at the 10th, 8th and 3rd frets, holding it for a bar, before doubling Izzy's open chord voicings in the second half but picking through them as arpeggios.
Consider tones and texture – do you want an “in your face”, hard-distorted sound for something like grunge, punk or thrash metal? Or are you looking for something gentler, perhaps blending acoustic sounds highlighted by clean (or clean-ish) electric single notes and chord fragments, as exemplified by bands like the Rolling Stones?
Once you've made your choices and worked up the guitar parts, it's time to track them. Every studio I've ever recorded at has doubled the basic electric rhythm track, one take panned hard left, one hard right. This gives a nice fat sound that leaves plenty of room for the vocals in the centre. Additional texture parts can be brought in slightly towards the centre, but always ensure that the centre is reserved for the vocal line. Just as the singer always gets centre stage, the vocals always get the centre of the stereo spectrum.
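If you're curious what that looks like outside a DAW, here's a minimal Python sketch of the idea – two mono rhythm takes combined into one stereo file, first take hard left, second hard right. The file names are placeholders, and in practice you'd just do this with your DAW's pan pots:

```python
# Hard left / hard right doubling: two mono takes become one stereo file.
# 'rhythm_take1.wav' and 'rhythm_take2.wav' are placeholder names.
import numpy as np
import soundfile as sf

left, fs = sf.read("rhythm_take1.wav")
right, _ = sf.read("rhythm_take2.wav")

n = min(len(left), len(right))                    # trim to the shorter take
stereo = np.column_stack([left[:n], right[:n]])   # column 0 = left, column 1 = right

sf.write("rhythm_doubled.wav", stereo, fs)        # the centre stays free for the vocal
```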
EQ is a powerful tool here, and one of the key reasons for having a guide track – although most audio software has EQ plugins for “Male Rock Vocals”, “Female R&B Vocals” and the like, the simple fact is that every voice is unique and you're going to need to tailor the EQ to your singer. Solo the vocal track (i.e. mute all the other stuff), apply a boost and sweep the frequency until you find a sweet spot that lifts the vocal in the way you like, and cut anything that sounds too tinny or too boomy. Then, make sure you're cutting
that frequency (not necessarily completely, but at least slightly)
from any guitar parts that are playing alongside the vocals. You
might lose something from the guitar sound, but the overall mix will
be better – sometimes a really good, full, rich guitar sound
actually works against you in a mix by covering frequencies needed by
the vocals. Don't worry. When the guitar solo comes in, you won't
have to share those frequencies.
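For the tinkerers, here's a rough Python sketch of that kind of gentle cut, using the well-known “Audio EQ Cookbook” peaking filter – the file name, the 2.5 kHz centre frequency and the 3 dB cut are purely illustrative values, not a recipe:

```python
# Gentle peaking cut on a guitar track at the vocal's "sweet spot" frequency.
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q=1.0):
    """RBJ 'Audio EQ Cookbook' peaking filter coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

# 'guitar_L.wav', the 2.5 kHz centre and the -3 dB cut are example values only.
guitar, fs = sf.read("guitar_L.wav")
b, a = peaking_eq(fs, f0=2500, gain_db=-3.0, q=1.4)
sf.write("guitar_L_eq.wav", lfilter(b, a, guitar, axis=0), fs)
```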
Speaking of solos (assuming you have one), I tend to put them in the centre where the vocals would be, with ambience levels (reverb, delay) to taste. If you have a particular song in mind as a model for production values, this is a good place to reference it – something like AC/DC will have solos very up front with little reverb and no delay, whereas a Pink Floyd solo will often have a hefty chunk of reverb and delay synchronised to the song's tempo.
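The maths for tempo-synced delay is simple: 60,000 ms divided by the BPM gives you one beat, and everything else is a multiple of that. A quick sketch (120 BPM is just an example tempo):

```python
# Tempo-synced delay times: 60,000 ms per minute divided by beats per minute.
bpm = 120
quarter_ms = 60_000 / bpm                         # one beat

print(f"1/4 note:   {quarter_ms:.0f} ms")         # 500 ms at 120 BPM
print(f"1/8 note:   {quarter_ms / 2:.0f} ms")     # 250 ms
print(f"dotted 1/8: {quarter_ms * 0.75:.0f} ms")  # 375 ms
```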
Turning our attention to keyboards... oh boy. You see, the problem here is
that keyboards can mean damn near anything. Splodgy synths, piano,
strings, brass... each one of these presents a different problem. So
we'll limit our focus to what are probably the two most common sounds
– piano and strings.
Now, far and away the easiest and most flexible way to go is not to record the sound of the keyboard at all, but to use the MIDI output to trigger the samples stored in your computer's audio software (if that doesn't make sense, then Google “MIDI” as this is a topic far too big to be addressed in a single blog post... I'll get to it in time!). If your audio software has the capability (I use Cubase, but popular packages like Logic, Ableton and Pro Tools all have this function), designate your track as “stereo”.
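To show just how small MIDI data really is, here's a tiny Python sketch using the mido library (the file name is a placeholder) that writes a single middle C you could drag into your DAW and point at any piano or string sample:

```python
# MIDI is note data, not audio: one note on, one note off, and that's the lot.
# Requires the third-party 'mido' library; 'one_note.mid' is just an example name.
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

# Middle C (note 60) at moderate velocity, held for one beat (480 ticks by default).
track.append(mido.Message('note_on', note=60, velocity=80, time=0))
track.append(mido.Message('note_off', note=60, velocity=0, time=480))

mid.save('one_note.mid')
```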
For most piano-led songs, I find that emphasising the low end, the no-man's land between guitar and bass, works a treat in filling out a song's production without stepping on the toes of the guitar and vocals, and the now-familiar EQ approach can be applied too – find which frequencies are emphasised in the vocals and cut them from the piano. If it's a guitar-based song with piano, cut the emphasised guitar frequencies too – if it's piano-led, cut the piano's emphasised frequencies from the guitar instead.
For strings and pads it's the same approach, although for actual melodic string parts I find that placing them above the vocals in terms of pitch works very well to help a chorus ring out, and a healthy dump of reverb can provide that Phil Spector-style ambience. A trick I use when I want a pad to be present but barely noticeable – felt, rather than heard, so to speak – is to set the reverb level to 100%, meaning that only the reverberated signal is heard, none of the “dry” signal. This provides a ghostly background wash of sound which can be very effective.
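If you fancy trying the 100% wet trick outside your DAW, here's a rough Python sketch using convolution reverb – “pad.wav” and the impulse response “hall_ir.wav” are placeholder names, and mono files keep the example simple:

```python
# 100% wet pad: convolve the pad with a reverb impulse response and keep
# only the reverberated signal - no dry signal mixed back in.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

pad, fs = sf.read("pad.wav")        # mono pad part (placeholder name)
ir, _ = sf.read("hall_ir.wav")      # mono impulse response (placeholder name)

wet = fftconvolve(pad, ir)          # reverb only: the "felt, not heard" wash
wet /= np.max(np.abs(wet))          # normalise so the wash doesn't clip

sf.write("pad_wet_only.wav", wet, fs)
```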
So
ends this month's info dump! I hope this is of use to some of you out
there – remember, TUNEICEF 2019 is alive and kicking and any
contributed tracks will be welcome, and we'll be rocking the Cask Bah
on Sunday December 15th to launch this year's album!