
Alright people, here we are for the second part (click here for the first part) of this guide to a better sound! This issue is about actual mixing and I think it’s going to be more interesting for most of us. Here we go!

PT. 2 – Mixing, levels and processing tips.

Last time, we set up our room to faithfully reproduce the frequencies that are actually in the mix, and not the ones your walls add to it; we are now ready to sit down and start clicking the brains out of our DAW… figuratively speaking, don’t get shocked!

Mixing is about finding the right processing and the best sound for each track, but most of all it is about setting the right levels relative to each other. Levels are sometimes overlooked, or at least some of them are: we’ve all been told, or read somewhere, that to avoid digital clipping we mustn’t reach 0dBFS on the master bus (FS stands for Full Scale, the digital dB; every dB reference from now on is of this kind). That is correct: 0dB represents the maximum level your computer can represent, anything above that value simply doesn’t exist for it, so the waveform gets truncated (clipping). We should set the output to peak anywhere between -18dB and -6dB, to leave headroom for future mastering.

What many people might not know, or tell, is that although DAW architectures nowadays work internally with a much larger dB range (made possible by 32-bit floating-point audio processing), lots and lots of plug-ins aren’t well equipped to process signals exceeding 0dB. To simplify, here’s an example: you set up a single audio track with an EQ on it, playing a looped sample; the track’s fader is positioned half-way to the top and the master fader is at 0dB. The meters tell you the output is peaking at -12dB, so it’s all fine and you avoided clipping, right? Well, maybe not: what’s coming out of the track (and then out of the master) has an acceptable level, but what about what’s going into the EQ? Suppose you didn’t notice that the sample itself is too hot, so the audio entering the plug-in is clipping. In this scenario the EQ won’t perform well: it will distort internally and degrade the output quality, even though what you actually hear won’t clip, because your master output is at -12dB.

When starting a new project you should always check the levels at every point of the FX chain inside the tracks, as well as at their outputs: as a rule of thumb, digital audio should never exceed 0dB at any stage, not only on the master bus. Some plug-ins handle clipping inputs better than others, but they’re a small minority and they won’t eliminate the problem.
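
To make the idea concrete, here’s a minimal Python sketch (using numpy, with the usual floating-point convention that ±1.0 equals 0dBFS); the signal and the stage names are invented for the example:

```python
import numpy as np

def peak_dbfs(x):
    """Peak level in dBFS, assuming float samples where |1.0| is full scale."""
    peak = np.max(np.abs(x))
    return -np.inf if peak == 0 else 20 * np.log10(peak)

# Hypothetical stages of one track: the plug-in input clips even though
# the track output, after the fader, looks perfectly safe on the meter.
t = np.arange(44100) / 44100
sample = 1.8 * np.sin(2 * np.pi * 440 * t)   # the sample itself is too hot
post_fader = 0.14 * sample                   # fader pulled down to about -12dB

for name, stage in [("EQ input", sample), ("track output", post_fader)]:
    level = peak_dbfs(stage)
    flag = "  <-- clipping!" if level > 0 else ""
    print(f"{name}: {level:+.1f} dBFS{flag}")
```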

Another issue regarding levels comes up when comparing tracks and master buses. I must admit I’ve always found opposing opinions on this, but my way of thinking is “better safe than sorry”. As said, your master output should never hit 0dB; but if it does, should you lower the master fader or each track’s fader? On analogue mixers it is best to leave the master fader at 0VU (Volume Units, a standardized operating level across pro devices) and lower the tracks, in order to use all of the dynamic range the machine can offer and get the best signal-to-noise ratio. But what about digital? Digital audio has practically no audible noise floor, so there is no real signal-to-noise ratio to protect; in principle it should make no difference whether you turn down the volume on the tracks or at the master output. On the other hand, depending on how the summing algorithms work, if digital clipping is generated when all the signals entering the master bus are summed, turning the master fader down might not eliminate the distortion at the output, but merely turn it down along with everything else (because you’ve already exceeded the “thinking” capacity of your DAW). So, while this point is heavily debated, I’d say you’re probably better off turning down each track separately even in the digital world, especially since software like Ableton Live lets you move every fader at once while maintaining relative levels, so you won’t lose your mix.
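
Here’s a toy illustration of that argument in Python: it assumes a worst-case summing bus that hard-clips at full scale (a real 32-bit float bus is more forgiving, which is exactly why this is debated); the tracks and levels are made up:

```python
import numpy as np

t = np.arange(44100) / 44100
tracks = [0.8 * np.sin(2 * np.pi * f * t) for f in (110, 220, 330)]  # three hot tracks

def bus(x):
    """A summing bus that hard-clips at full scale (the worst case)."""
    return np.clip(x, -1.0, 1.0)

clean = 0.4 * sum(tracks)                           # the mix we actually wanted

# Option A: sum first, then pull the master down: the clipping is baked in.
master_after = 0.4 * bus(sum(tracks))

# Option B: pull every track down first, keeping relative levels intact.
master_before = bus(sum(0.4 * tr for tr in tracks))

print("A still distorted:", not np.allclose(master_after, clean))   # True
print("B matches the mix:", np.allclose(master_before, clean))      # True
```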

Equalization, compression, side chaining and FX.


Equalization has both a technical and a creative side, and I’m going to talk a little about both. Artistically speaking, EQs let you enhance the sound they process according to what you think are its main features. So, if you want the harmonics of an acoustic guitar to stand out, you boost the mid-highs and recess the mid-lows; if you want a kick to sound bigger and fatter, you pump up the low end. This is a very personal and decisive practice when composing your own music, but an EQ is also a useful tool for polishing the mix by assigning the various instruments to their own regions of the spectrum.

I always find it fascinating how a mix can sound amazing as a whole while each track, heard separately, may appear unnatural or even cut off in the highs or the lows. Most of the time this is the sign of a good mix, especially when there’s a lot going on. When you add audio streams together, each with its own full spectrum, you get gradually closer to noise with every signal you throw in. This is bad, because noise is one of the most masking sounds of all: in a few words, auditory masking is a psychoacoustic phenomenon that makes you hear some sounds over others according to precise laws. In practice, it means you will hardly be able to tell instruments apart, and the whole thing will sound “muddy” and recessed.

The good news is that not all tracks need their full spectrum in a mix: a bass, for example (be it synth or guitar), usually makes no use of frequencies above 2-5kHz; in fact, up there it only contributes harmful noise. Attenuating that region with an EQ, or even cutting it off hard with a 4-pole low-pass filter, will save precious room for the sounds that actually need it, and in the final mix you won’t even notice. Another good tip is to do just the opposite: apart from basses, kick drums and maybe some particular synth patches, most instruments and samples won’t need anything below 100-300Hz, or even higher depending on their timbre. You can easily get away with high-passing nearly all other tracks in that region, to make room for a punchy and precise bass. Take this advice with a grain of salt, though: while the low end is a tricky region that usually benefits from being as exclusive a club as possible (as may have shown through in part 1 of this article), some sounds generally considered high-pitched actually reach down quite a bit (the attack of most acoustic hi-hats, for example, sits around 200Hz). So give Caesar what’s his, but nothing more: always watch what you’re cutting off!
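
As a sketch of the two moves just described, assuming scipy is available and picking cutoff values from the ranges above:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
rng = np.random.default_rng(0)
pad_track = rng.standard_normal(fs)   # stand-in for a pad hogging the low end
bass_track = rng.standard_normal(fs)  # stand-in for a bass with noisy highs

# High-pass the pad around 150 Hz (2-pole) to clear room for the bass...
hp = butter(2, 150, btype="highpass", fs=fs, output="sos")
pad_cleaned = sosfilt(hp, pad_track)

# ...and low-pass the bass hard (4-pole, 24 dB/oct) around 3 kHz.
lp = butter(4, 3000, btype="lowpass", fs=fs, output="sos")
bass_cleaned = sosfilt(lp, bass_track)
```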

Another way to enhance the intelligibility of a sound is compression. Human hearing is much more receptive to sounds whose peaks may be lower but are more uniform in loudness over time, as opposed to peaks loud as hell that only last a couple of milliseconds. A compressor is a dynamics processor that does a fairly simple job: it lowers the level of those peaks that surpass a set threshold, so that the whole signal can then be raised and perceived as louder while keeping the same top peak level. To get a good compression, first set the threshold around the level of the sound component you want to bring up, then set how much it should be reduced relative to the max peaks with the ratio control. Only now should you turn up the make-up gain to recover the volume lost in compression: the job of this gain is to match the original peak level, not to shoot the volume to the Moon! The attack time setting tells the processor how long to take before applying full gain reduction, and the release how long to take to return to an uncompressed state, to put it simply. It’s up to you to experiment with these settings to achieve the sound you want; extreme values will result in cool (or not so cool) distortion effects.
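
Here’s a bare-bones, per-sample compressor sketch in Python, just to show how threshold, ratio, attack/release and make-up gain interact; real plug-ins are far more refined:

```python
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0, makeup_db=0.0):
    """Minimal feed-forward compressor: peaks above the threshold are
    reduced by `ratio`, with the gain smoothed over attack/release times."""
    # Per-sample smoothing coefficients derived from the attack/release times.
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    thr = 10 ** (threshold_db / 20.0)
    env = 0.0
    gain = np.ones_like(x)
    for i, s in enumerate(np.abs(x)):
        # Envelope follower: fast when the level rises, slow when it falls.
        a = a_att if s > env else a_rel
        env = a * env + (1 - a) * s
        if env > thr:
            # Above threshold: scale the overshoot down by the ratio.
            gain[i] = (thr * (env / thr) ** (1.0 / ratio)) / env
    return x * gain * 10 ** (makeup_db / 20.0)
```
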
Side chaining (or, more correctly, key inputting) is a method of compression extremely popular in EDM, but actually used in many other genres. It is basically a compressor whose detector listens to an external signal (the key input), varying the output level of the processed track according to the ratio you set. It’s usually applied to sounds that would otherwise tend to mask each other: a bass that ducks whenever the kick plays is a classic, but it works in many similar cases. Like equalization, this method can be both technical and creative: it can create many effects and, at the same time, it makes both sounds emerge, a very useful trick to clean up your mix.
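
A sidechain version is the same gain computer with the detector moved to the key input; a minimal sketch, with made-up parameter values:

```python
import numpy as np

def sidechain_duck(signal, key, fs, threshold_db=-24.0, ratio=8.0,
                   attack_ms=1.0, release_ms=120.0):
    """Duck `signal` whenever `key` crosses the threshold: same maths as a
    compressor, but the detector listens to the key input, not the signal."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    thr = 10 ** (threshold_db / 20.0)
    env, out = 0.0, np.copy(signal)
    for i, k in enumerate(np.abs(key)):
        a = a_att if k > env else a_rel
        env = a * env + (1 - a) * k
        if env > thr:
            out[i] *= (thr * (env / thr) ** (1.0 / ratio)) / env
    return out

# Classic use: the bass ducks under the kick so both stay intelligible.
# bass_ducked = sidechain_duck(bass, kick, fs=44100)
```
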
Last but not least, to spice up individual tracks or entire mixes you can use an exciter: a pretty cool effect that adds a saturated copy of a specified frequency band to the original signal, generating extra harmonics that are generally perceived as pleasant, with an adjustable amount control to dose them more or less subtly. This will also shift the timbre towards higher, sharper traits, so use it wisely.
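
A toy exciter can be sketched as band-pass, saturate, blend; the band, drive and mix values below are arbitrary:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def excite(x, fs, band=(3000, 8000), drive=4.0, mix=0.1):
    """Toy exciter: isolate a band, saturate it to generate new harmonics,
    and blend a little of the result back under the dry signal."""
    sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
    harmonics = np.tanh(drive * sosfilt(sos, x))  # soft clipping adds harmonics
    return x + mix * harmonics
```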

Overcrowded mixes, reverbs and stereo imaging.

I personally experience the tendency to put lots and lots of instruments and new parts into my music. It feels like there’s always something missing, this melody or that arpeggio, and I never want to throw out parts I’ve already written. While this can deceive those of us who do it into thinking we’re great musicians, and give us our 15 minutes of self-proclaimed glory, it is a bad habit. Because of the same auditory masking effect already mentioned, adding layer upon layer of different sounds gets us closer and closer to noise: not only will you be unable to tell the parts apart, it will degrade the audio quality so badly that there will be practically nothing left to do about it other than remove stuff. As usual, this is a key aspect that involves both the creative process and technical concerns and limitations; it is up to you to find the right balance in the mix: maybe slightly change some part to better fill the gaps you perceive, or just revise the entire thing. Keep it as small and clean as possible, and don’t count on being able to add things indefinitely.
A similar concept applies to reverbs, which involve a basic contradiction: their presence is necessary to convey naturalness and air to single signals and mixes alike, but they will screw up your songs (and badly) if you let your hand slip and let them take over. Reverberation effects are an emulation of the natural phenomenon, and digital ones are built from complex networks of reiterated delays (the only exception being convolution reverbs). It’s the same old story: when you reverberate a sound, you’re adding layer upon layer of the same signal, slightly varied over time, thereby raising that noise component which, as seen before, can be extremely dangerous. Generally speaking, you can either keep the reverb level low enough, or set a fairly short duration if you really want it loud. It’s also very useful to filter out regions of the reverb signal you won’t notice in the mix, especially the low ones.
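
The “reiterated delays” idea boils down to building blocks like this feedback comb filter (real reverbs run many of them in parallel, plus all-pass stages); a minimal sketch:

```python
import numpy as np

def comb_reverb(x, fs, delay_ms=50.0, feedback=0.6, mix=0.2):
    """One feedback comb filter: the simplest building block of the
    delay-network reverbs described above. `x` is a float array."""
    d = int(fs * delay_ms / 1000.0)
    out = np.copy(x)
    for i in range(d, len(out)):
        out[i] += feedback * out[i - d]   # each repeat feeds the next one
    return (1 - mix) * x + mix * out      # keep the wet level under control
```
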
A very underestimated way of cleaning up your mix is to arrange your tracks across the stereo image. Having everything in the middle forces both speakers to reproduce largely the same waveform, wasting precious individual headroom. In modern genres, bass tracks are best kept in the middle, but any other instrument can be placed anywhere from left to right. In purely theoretical terms, dividing the load between the speakers can free up to 50% of the headroom on each one; this translates to a much less “crowded mix” effect compared to having everything in the middle (NB: different from dual mono), and it also produces a much more interesting scene to listen to, unveiling details otherwise lost in the mix.
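
For reference, here’s what a constant-power pan law looks like; the sine/cosine mapping is one common choice, not the only one:

```python
import numpy as np

def pan(x, position):
    """Constant-power pan: position -1.0 (hard left) to +1.0 (hard right).
    A sine/cosine law keeps perceived loudness steady across the image."""
    theta = (position + 1.0) * np.pi / 4.0                    # [-1, 1] -> [0, pi/2]
    return np.vstack([np.cos(theta) * x, np.sin(theta) * x])  # (2, n) = L/R

# Spread two parts apart instead of stacking both in the centre:
# guitar_lr = pan(guitar, -0.4)
# keys_lr   = pan(keys,   +0.4)
```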

Export settings

Your mix is finally done! And now, to export: what settings should you choose for file format, sample rate, bit depth and dithering? It depends (of course)! I advise exporting Wav or Aiff files at a sample rate of either 44.1kHz or 88.2kHz, preferably the former, since resampling can produce audible artefacts in the final mastered file. Why not 48kHz, 96kHz or even 192kHz? Well, if your music is going to end up on audio DVDs then, by all means, export at 48kHz! But if you are going to CD, or to mp3 and aac files (more likely), keep it simple: the CD standard is 44.1kHz, and compressed formats are based on this sample rate; exporting at 96kHz can produce audible artefacts when downsampling to 44.1kHz, more so than downsampling from 88.2kHz, because it’s not a multiple. And 192kHz? That’s just overkill. It goes without saying that if you want your final mastered file to be 96kHz / 24-bit, you should stick with these settings right from the mix export. But keep in mind that sample rate doesn’t have that huge an impact on final quality these days (at least not from 44.1kHz up!).

Far more important is the bit depth, or resolution: it sets the dynamic range of the file, and it’s fundamental to export 24-bit files in order to edit them at their best during the mastering stage. Even though the CD standard is 16-bit and the conversion from 24-bit will degrade audio quality a little, the benefits you gain in mastering greatly outweigh the losses. This brings me to dithering: dithering is basically adding noise to the file on a tiny scale in order to mask the artefacts that the bit depth conversion would otherwise create. Guess what? Adding noise to the mix is bad (surprise!): a necessary evil that should only be committed once, when exporting the mastered file to 16-bit from any higher bit depth. So, as far as mixes are concerned, stay away from dithering of any form! Or don’t, whatever, it’s your choice.
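
For the curious, a TPDF dither stage (roughly the kind most DAWs apply when going down to 16-bit) can be sketched like this; the function name is mine:

```python
import numpy as np

def to_16bit_tpdf(x, rng=np.random.default_rng()):
    """Quantize float samples (full scale +/-1.0) to 16-bit integers with
    TPDF dither: two uniform noise sources summed give a triangular
    distribution of +/-1 LSB, which decorrelates the quantization error."""
    tpdf = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    q = np.round(x * 32767.0 + tpdf)        # dither in LSB units, then round
    return np.clip(q, -32768, 32767).astype(np.int16)
```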

If you stayed with me to this point, congratulations: you deserve a round of applause! You survived this long article and hopefully learnt some useful basics and tips about acoustics and mixing! Alas, everything I just said, although true, takes years of experience to master, or sometimes even to get just right. And if you’re asking me about the secrets, you’re asking the wrong guy: there’s a key difference between knowing the theory and having the actual know-how, and I’m still way offshore, swimming my way towards BestAudioEngineers island like you. Now go and enrich the world’s most beautiful form of art.

But, hey, let’s do this again!

LiteFlow

Lorenzo Furlanetto

Freelance Artist & Editor at Liquid Audio Network
Lorenzo, aka LiteFlow, is an Italian producer based in Rome. Born in 1993 in the north-east of Italy, he is currently refining his skills as a producer at the “Saint Louis College Of Music” in Rome, studying music theory, composition, electronic production and sound engineering.
Stay tuned!