If you’ve read Music VS Sound, you should be thinking of music in the paradigm of sound. When you make a new lick, you shouldn’t only ask yourself musical questions, “Are these the right notes and rhythm?” but sonic questions too, “Does the tone of this instrument just sound awesome?” Both your musical and sonic choices should be guided by one question, “Is it good?” The answer comes from your experience, taste, and gut instinct.
Let’s get practical by looking at all the ways sound needs to be nurtured for you to end up with the best quality track possible. Here are the 10 biggest ways the quality of your sound will be altered in the music making/recording process. If you don’t record anything and only make music with digital samples and software, skip right to number 6. If you are recording a guitar, bass, or synth direct in, skip numbers 1 and 3. Numbers 7–10 affect your ability to monitor the sound, thus altering the choices you will make in changing the sound, but they do not change the sound directly.
What affects the sound?
1. Your room
2. Your performer and their instrument
3. Your mic/mic placement
4. Your preamp/other outboard gear (sound desk or mixer)
5. Your A/D conversion
6. Your manipulation of digital audio/sample choices/plugin choices
7. Your D/A conversion
8. Amplification after D/A conversion
9. Studio monitors/headphones
10. Your room
There is a story I read in some reputable magazine about a group of engineers tracking the Rolling Stones in the studio. Keith Richards was KILLING IT on guitar that day. The sound they were getting in the control room was unreal. When the band left to eat, Keith put his guitar on the ground and didn’t touch anything on his amp. Being curious and a little naughty, the engineers went into the live room to play Keith’s guitar and see how he was making such a great noise, but try as they might, they couldn’t come close to the spectacular sound Keith Richards was making. Whether consciously or not, a great player like Keith is making micro adjustments in how hard he picks the strings, where he picks them, the angle, etc. There are a million tiny choices that happen because Keith is listening so closely, not only to the music, but to the sound as well. He has done it that way for years, so by now his brain is a supercomputer of calculations, changing details of his playing to get the best sound possible. All these tiny adjustments for the benefit of sound are magnified exponentially under the microscope of recording.
A bass player who has never recorded before might sound great live but crappy when recorded. Why? He never learned to properly mute the strings he wasn’t playing. Those ringing strings might not be audible over a live band but in the studio they muddy up the whole mix. It is much easier to make a great sounding recording with a performer who is conscious of all sounds they are making.
Sound Design and Sample Selection
For those of you making sounds in the box, there are tons and tons of software synths and patches out there, but a lot of them sound like doodoo. Something is usually uneven with the sound: it’s too harsh, too dull, it just doesn’t give you that feeling. As a producer or sound designer it is not enough to find sounds that fit the music you are making; you have to find sounds that are truly awesome. If you are making a soul song and pulling up a software Rhodes patch, you could potentially have 50 different virtual versions of the same instrument made by a cornucopia of different companies. Listen to them carefully and find the one with the magic. The one that has the detailed highs, the body, the warmth, that sounds great coming out of the speakers already.
If you are designing a patch in Massive or Sylenth or whatever, don’t stop designing if something doesn’t feel right. Design the sound to be awesome right when it comes out of the speakers, so when you draw a note or press a key on your MIDI controller the sonic quality moves you already. Strive to make it the best you can as soon as possible.
Do this for everything in the box and out! Remember, planning on fixing it in the mix is a recipe for pain.
If you start out making music with a crappy $100 interface, a crappy $50 microphone, and listening on crappy little earbuds, the first time you go to a professional grade studio with a treated room, quality monitors, and a $10,000 mic, you will need to bring a change of pants. Quality gear is soooo much easier to use. You should pay some dues and figure out exactly where to put your crappy microphone to make it sound the best it can, so that in the magical moment when you get your hands on something excellent, you know how to really use it…you will understand the ways it beats the snot out of your crappy mic in ways you could not even hear when you first started. When you are really experienced you will think of scenarios where your crappy mic might be better than the $10,000 one. There’s a lot more to it than just the mics though.
Let’s follow some sound from a singer’s mouth on its path to playing back out of the monitors to see how sound quality is affected by different gear all along the way. Keep in mind, this whole process happens almost instantaneously.
The singer sings; let’s say it’s Adele, and therefore we know the singing is awesome. She is vibrating the air, which then hits the microphone capsule, vibrating the diaphragm inside. Depending on how close or far she is to the microphone, that will affect the quality of the sound. You want to find the sweet spot for the microphone in relation to Adele depending on the way she’s singing and the microphone you’re using. Where you place the mic is really, really important to the sound quality you can get out of it. Then there is the microphone itself. What the microphone does is change vibrations in the air (sound waves) into vibrations in electricity (analog signal). Different microphones make this conversion from air waves to “electric waves” in different ways, thus giving microphones their different sound qualities.
After the sound has been changed into an analog signal by the mic, it is not very strong. It travels down your mic cable, riding electricity, into your preamp. If you don’t have an outboard preamp then there is probably one built into your interface. The preamp takes the signal from the mic and amplifies it. By amplifying the signal the preamp changes the shape of the waves, thus changing the sound again. In my experience, preamps are where some of the real magic is in getting a great sound when recording. The Shure SM57 is a $100 microphone used in every studio. Part of the reason this mic can sound so awesome in one place and crappy in another is the quality of the preamp. Ok…after the signal leaves the preamp, in this scenario, we are going straight into your interface’s converters.
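Since we’re getting technical anyway, here’s a rough sketch (in Python, not from any real preamp spec, with made-up voltage numbers for illustration) of what “amplifying the signal” means: a preamp’s gain knob, marked in decibels, is really just a voltage multiplier.

```python
# Sketch (illustrative numbers only): how preamp gain in dB maps to a
# voltage multiplier, and why 60 dB of gain is a x1000 boost.

def db_to_gain(db: float) -> float:
    """Convert a gain setting in decibels to a voltage multiplier."""
    return 10 ** (db / 20)

mic_level_volts = 0.002    # a quiet mic signal, roughly 2 mV (hypothetical)
preamp_gain_db = 60        # a typical amount of gain for a quiet source

line_level_volts = mic_level_volts * db_to_gain(preamp_gain_db)
print(round(db_to_gain(preamp_gain_db)))   # 1000
print(line_level_volts)                    # 2.0
```

The point is only that the preamp multiplies a tiny voltage into a usable one, and that real preamps do this imperfectly in pleasing or displeasing ways.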
The part of the interface this analog signal hits next is called the Analog to Digital converter, or A/D converter. The A/D converter takes the analog signal from the preamp and converts it to code (digital audio). Your A/D converter says, “Ok, I see at this moment in time the electrical signal is up here, so I’m going to make a note of that,” and then the next moment it says, “Ok, I see the electrical signal is down here, so I’m going to make a note of that.” The notes are like points on a graph that, when you zoom out, look like a waveform. It can do this 44,100 times a second, or 48,000 times a second, all the way up to 192,000 times a second; this is called your sample rate.

Not all A/D converters are created equal. Some are not as accurate at noting the electrical signal they receive, and this affects the quality of the sound. You have to have pretty solid ears to notice the effect an A/D converter has on sound quality, but it is there. This isn’t something someone brand new to making music will generally hear, especially if they have crappy headphones or monitors. As a rule though, better converters = better sound.

If you never record anything and work exclusively with samples or patches, then you’ve read all that stuff for no reason; this is where you start from, and your A/D doesn’t really matter. But let’s recap it all so far anyway, because most of us want to record some audio at some point. We are following Adele’s voice…remember? She sings, air moves, the mic changes air vibrations into analog signal (waves riding electricity), the preamp amplifies the analog signal, and the A/D converter changes the analog signal into code (digital audio). Now we are in the computer, where the code is interpreted by your software, called a DAW, the digital audio workstation. Here’s where you can really scramble the sound around.
Adele is in the computer.
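If you want to see the converter’s “note taking” in miniature, here is a toy sketch in Python. The pure math sine wave is a stand-in for the real analog voltage coming out of the preamp, and the 1 kHz test tone is just an arbitrary example.

```python
# Sketch: an A/D converter "taking notes" on an analog signal.
# We fake the analog world with a math function and sample it
# 44,100 times per second (the CD-quality sample rate).
import math

SAMPLE_RATE = 44_100   # notes per second
FREQ = 1_000           # a 1 kHz test tone (arbitrary choice)

def analog_signal(t: float) -> float:
    """Stand-in for the voltage coming out of the preamp at time t."""
    return math.sin(2 * math.pi * FREQ * t)

# One second of audio becomes 44,100 numbers: the "code" (digital audio).
samples = [analog_signal(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]

print(len(samples))   # 44100 numbers for one second of sound
```

A real converter measures a physical voltage instead of evaluating a function, and its accuracy at each measurement is exactly the quality difference the paragraph above is talking about.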
Digital Audio Manipulation
When you are working with digital audio, what is actually happening when you manipulate the sound in any way (like adding a plugin effect) is an algorithm takes the code (digital audio) and says, “Ok, this bit of code tells me the sound wave looks like this; I’m going to do some math to change the code and make the sound wave look like that.” It is generating new code (manipulating the digital audio), thus generating a different sound. I can understand if this is confusing or boring…all I’m trying to say is the quality of these algorithms, i.e. the quality of your plugins, greatly affects the sound. Better plugins have better algorithms and better programming (adjustment of algorithms/usability) to make better sound.
Good plugins (algorithms) = Good Sound
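To make the “algorithm does math on the code” idea concrete, here is about the simplest possible toy “plugin” in Python. A real plugin’s DSP is vastly more sophisticated, but the shape is the same: samples in, math, new samples out.

```python
# Sketch: the world's simplest "plugin" -- a gain algorithm.
# It reads the code (a list of samples), does math on each number,
# and writes new code describing a louder or quieter waveform.

def gain_plugin(samples: list[float], gain: float) -> list[float]:
    """Multiply every sample by a gain factor, clipping at full scale."""
    return [max(-1.0, min(1.0, s * gain)) for s in samples]

quiet = [0.1, -0.2, 0.3, -0.4]
louder = gain_plugin(quiet, 2.0)
print(louder)   # [0.2, -0.4, 0.6, -0.8]
```

Even in this toy, quality decisions are hiding in the math: the clipping here is the harsh digital kind, and a better algorithm would round off that edge more musically. That is the sort of detail that separates good plugins from bad ones.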
Monitoring Your Work
You’ve affected the digital audio with your DAW and now you want to hear Adele through the monitors; you want to get her out of the computer and back into the air. We now do the whole process more or less in reverse. Take special note, though: from here on out, this is only for listening back to what you are making. What you are doing when you make recordings is creating new code (digital audio) from a bunch of other code. That code will sound different played back on your phone, earbuds, car, stereo, whatever, because of the steps we are about to look at. What we want now is to ensure the sound you are creating in the computer translates accurately to other systems. Ok, so where does Adele go?
She came into the interface through the Analog to Digital converter (A/D), remember? So now we go out of the interface through the Digital to Analog converter (D/A). The digital audio is changed back into an analog signal. It is basically in the same form as when it was changed from riding air waves to riding electric waves by the mic, right before we hit the preamp and did all the computer stuff.
Now that we are back to an analog signal, it needs to be amplified again. This is done by your interface, an external amplifier, and/or your powered monitors. As you may have guessed, the amplifier can also change the sound, but our objective here is different from the prior steps. We don’t necessarily want to sweeten the sound; we are trying to hear the most accurate representation of the digital audio possible.
So after the digital audio is changed back into an analog signal, the signal is boosted, at which point the waves travel by electricity to vibrate the tweeters and woofers of your monitors or headphones. Obviously the monitors or headphones have a huge effect on how well you can hear what the sound is actually doing at every single point up to now. If your monitors accurately vibrate the air, then you have a more accurate idea of how the D/A conversion did, what the algorithms in your plugins are doing, what the A/D conversion did, how the preamp changed the sound, and how the microphone is picking up Adele’s voice. Monitors are important. They can also change the sound you are hearing, but when you are mixing and producing music, the monitors’ purpose is to accurately let you know every way you have affected the sound, so you can make something that works on everything from hi-fi systems (which sweeten sound) to radio, car systems, laptop speakers, etc.
There’s one more thing to take care of…
You have Adele, she’s great, you have a great mic, a dope preamp, quality A/D conversion on your interface, sweet plugins in your DAW to manipulate the sound that you’ve used with great skill, the digital audio is accurately changed back to analog signal with your D/A conversion, that signal is amplified cleanly and you know your monitors are good because they were expensive. You mix the song, listen to the mix in your car and it sounds like crap. What?
The room! The sound is bouncing around your room, and that is misrepresenting what the digital audio actually sounds like when you play it on other systems. Oh no. The room is both the first and last link in the chain for sound quality. If you record Adele in a bathroom, you are going to get all those reflections along with Adele’s voice on your recording. If you mix in a bathroom, you are going to hear frequencies amplified and combined that aren’t actually in the digital audio. You might alter the digital audio to get rid of frequencies so it sounds good in your bathroom but sounds like crap everywhere else. This is why studios have so much acoustic treatment. If you can’t work in a decent room, always check on headphones, where the room is not a variable.
What affects sound?
1. Your room - Bad reflections during recording make bad sound.
2. Your performer/instrument - Better performers make better sounds from the same instruments. Better sounding instruments obviously sound better.
3. Your mic/mic placement - Quality mics and where they are = vital.
4. Your preamp/other outboard gear (sound desk or mixer) - Mic pres can color the sound very pleasantly.
5. Your A/D conversion - Better conversion = better sound.
6. Your plugins/manipulation of digital audio/sample choices - Find fidelity.
7. Your D/A conversion - At this point we are only affecting our listening experience in the studio.
8. Amplification after D/A conversion - We want accuracy.
9. Studio monitors/headphones - We want accuracy...and ideally no ear fatigue.
10. Your room - No bad reflections to distort our perception.
Again to recap, if you are not recording audio but only using samples, start at number 6. If you are recording direct in (like a guitar) skip numbers 1 and 3. And if you listen back on headphones, skip number 10.
Everything in this article can be summed up as this: “Pay close attention to sound quality.” The quality of your productions, mixes, and music depends on you listening to both the music and the sonic quality. I’m glad you read this article, but also note, at the root of it, the music is more important than the sound. If you do a crappy job recording a song that is awesome, there is a chance it will be a hit despite the crappy sonic quality, and if it does become a hit you can always pay (or your label will pay) an experienced mixing engineer to get the quality as good as possible. On the flip side, there are really crappy songs that become hits all the time because they meet a certain level of professional sonic quality and marketers do a great job of shoving them down the throats of tweens who don’t know any better…that’s how it is.
Developing an ear for sound takes years. You learn by listening at different studios, actively judging the music and sonic world around you, and by making your own music. Making music is always filled with compromises; do the best you can with what you have and you will get better.