Setting Levels In A Mix

If you're serious about your bass sound, you need speakers that tell you what's going on below 100Hz, as well as acoustic treatment to prevent the room from skewing that information. But even without these, you can improve your LF decision-making. Make a habit of judging the bass balance from a few different points in the room.

Set mixer levels to unity and adjust the speaker output level. Turn the main mix fader up to 0 (unity). Play audio through a channel (microphone, guitar, or phone/computer playback), and be sure to set the channel's gain knob while its level fader is at 0 (unity).

The room's resonance modes affect each location differently, so comparing several positions makes them easier to factor out mentally. High-resolution spectrum analysis can also help you assess the sub-100Hz region. Some people suggest resting a finger on your woofer cone to gauge sub-bass levels from the drive excursions, but I don't recommend it, as a bass note's woofer excursions are heavily dependent on its pitch and can often seem counter-intuitive. Most importantly, compare your mixes with commercial work you admire.

Questions of bass frequency balance, dynamic range, mix level and effects use are highly era- and genre-dependent, and commercial tracks are your best guide to your audience's expectations, whether they be Radio 1's millions of listeners or the other member of the Chris De Burgh fan club! Broadband hiss in bass recordings is usually easy to handle unless the arrangement is very sparse, because what isn't masked by other instruments can normally be low-pass filtered without any loss of tone. Where long note-decays reveal the noise unduly, try using automation to close down the low-pass filter further as the overall level reduces.
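The level-dependent filter automation just described can be sketched as a simple mapping from envelope level to low-pass cutoff, so the filter closes down further as a note decays and the noise becomes exposed. The breakpoint levels and cutoff range below are illustrative assumptions, not figures from the text:

```python
def cutoff_for_level(level_db: float,
                     open_hz: float = 8000.0,
                     closed_hz: float = 1500.0,
                     loud_db: float = -12.0,
                     quiet_db: float = -36.0) -> float:
    """Return a low-pass cutoff (Hz): fully open above loud_db,
    fully closed below quiet_db, interpolated linearly in dB between.
    All breakpoint values are hypothetical starting points."""
    if level_db >= loud_db:
        return open_hz
    if level_db <= quiet_db:
        return closed_hz
    frac = (level_db - quiet_db) / (loud_db - quiet_db)
    return closed_hz + frac * (open_hz - closed_hz)
```

In practice you'd drive this from the track's level meter or an envelope follower, then write the result as filter-cutoff automation.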

Although specialist plug-ins such as ToneBoosters TBHumRemover can zap mains hum in an instant, you can't just 'set and forget' them on bass, otherwise you'll also remove any bass pitches that correspond to your local AC frequency! Again, automating the strength of the plug-in's processing offers a workaround.

Low-frequency thuds (perhaps from the musician tapping their foot, jogging the mic stand, or hitting/slapping the instrument's body/strings) can't easily be removed with high-pass filters, and I favour patching over each note using copy/paste audio edits where possible.

Where that's unfeasibly tedious, a multi-band dynamics processor swiftly limiting the sub-200Hz region can bring some improvement. Pick noise and fret buzz/squeak can be a pain too, and if low-pass filtering doesn't yield a solution, I normally turn to multi-band limiting again, this time over the upper half of the spectrum, hammering down the undesirable HF surges and spikes. Detailed fader automation can dip out isolated fret squeaks, but can also punch holes in your low end if used during sustained passages.

Where a synth bass's upper spectrum is garnished with high-resonance filter sweeping, it can be difficult to maximise the bass's sense of power, warmth and textural thickness without the filter peaks slicing your ears to ribbons. Normal compression and EQ are no help whatsoever, because the filter peaks are always present yet constantly shifting in frequency. Saturating the sound can help, by increasing the synth's general 'background' level of harmonics in relation to the filter peaks, but sometimes that's not enough.

In extremis, I'll split the synth's upper frequency response into half a dozen bands using a multi-band dynamics engine, and set each band to skim the top off the itinerant filter peak whenever it's in range. This way, I've always got one of the compression bands ducking a small section of the frequency response, but the bands are all fairly narrow, so the cure usually sounds better than the disease.

  • Check the polarity/phase relationships of mic and DI tracks.
  • Cut back over-eager sub-100Hz harmonics with EQ, using Q values as high as possible.
  • Treat further sub-100Hz inconsistencies with multi-band dynamics processing, or replace those frequencies with a sub-bass synth line.
  • Heavy compression isn't unusual, but take care with attack and release times to avoid unwanted distortion or lifeless dynamics.
  • Compare the mix with relevant commercial records.
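The 'skim the top off' behaviour of each band limiter can be sketched as a simple static gain rule. `limiter_gain` is a hypothetical helper showing only the maths; real limiters add attack/release smoothing that isn't modelled here:

```python
def limiter_gain(band_level_db: float, threshold_db: float) -> float:
    """Gain (dB) a band limiter applies: nothing while the band sits
    below its threshold, otherwise just enough reduction to pull the
    band back down to the threshold."""
    return min(0.0, threshold_db - band_level_db)
```

With half a dozen narrow bands, only the band currently containing the filter peak gets pushed over its threshold, so only a small slice of the spectrum is ducked at any moment.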

  • Use your main monitors to focus on the bass's low end and warmth/mud frequencies, but switch to smaller speakers to assess mid-range audibility.
  • Mute the bass while you tweak the low mid-range balance of other instruments.
  • To conserve mix headroom, try briefly ducking the bass 2-3dB in response to each kick hit.
  • Boost at 1kHz for better mid-range cut-through, but add a low-pass filter if HF noises become obtrusive.
  • Parallel distortion can be even more effective, but be careful of phase cancellation.
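The kick-triggered ducking idea can be sketched as a gain envelope that dips the bass at each kick hit and recovers over a short release. The 2.5dB depth sits in the suggested 2-3dB range, but the release time and linear-recovery shape are assumptions for illustration:

```python
def duck_gain_db(t: float, kick_times: list[float],
                 depth_db: float = 2.5, release_s: float = 0.12) -> float:
    """Gain (dB) applied to the bass channel at time t: an instant dip
    of depth_db on each kick hit, recovering linearly over release_s."""
    g = 0.0
    for k in kick_times:
        if 0.0 <= t - k < release_s:
            g = min(g, -depth_db * (1.0 - (t - k) / release_s))
    return g
```

In a DAW you'd get the same effect from a sidechain compressor keyed from the kick, or by drawing the dips as fader automation.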

  • Limiting above 1kHz with multi-band dynamics can reduce distracting picking or fretting noises.
  • Multing allows the bass sound to adapt to dramatic arrangement changes, and can also combat any unwanted bass-ducking side-effects of your mix-bus compression.
  • A touch of stereo chorus can connect the bass with wide-panned guitars, but be wary of sub-100Hz energy from the effects return.
  • Use fader automation to draw attention to nice fills or licks, so the listener doesn't miss them. This is easier if listening to single-speaker mono playback.
  • If level rides overload the mix with low end, automate a wide 1kHz EQ boost instead.
  • Check the polarity/phase relationships between separate mic and DI tracks.

  • Cut back over-eager sub-100Hz harmonics with EQ, but keep Q values as high as possible.
  • Tackle remaining sub-100Hz inconsistencies with multi-band dynamics processing, or patch up individual notes using copy/paste editing.

  • Try not to push beyond 9dB of gain reduction when compressing; beyond that, fader automation will sound more natural.
  • Set the attack time short enough to usefully control the dynamic range, but long enough to leave some life in the note onsets.
  • Parallel compression can exaggerate note sustains more naturally, if necessary.
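The parallel-compression idea is simple to sketch: sum the untouched signal with a heavily compressed copy, so the dry path keeps the transients while the compressed path lifts the sustains. The static curve, threshold, ratio and blend amount below are illustrative assumptions; real compressors add attack/release behaviour not modelled here:

```python
def compress(sample: float, threshold: float = 0.25, ratio: float = 4.0) -> float:
    """Static compression curve: linear below the threshold, with the
    excess level divided by the ratio above it."""
    mag = abs(sample)
    if mag <= threshold:
        return sample
    out_mag = threshold + (mag - threshold) / ratio
    return out_mag if sample > 0 else -out_mag

def parallel(dry: list[float], blend: float = 0.6) -> list[float]:
    """Blend the untouched signal with its compressed copy."""
    return [d + blend * compress(d) for d in dry]
```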

  • Compare the mix with some relevant commercial records.
  • Use your main monitors to focus on the bass's low end and warmth/mud frequencies, but switch to smaller speakers to assess mid-range audibility.
  • Kick drum will naturally tend to dominate over acoustic bass in the bottom octave, so try high-pass filtering the latter from around 35Hz.
  • Mute the bass while you tweak the low mid-range balance of other instruments.
  • Boost at 1kHz for better mid-range cut-through, but be mindful of HF noises or spill.

  • Subtle parallel distortion can be effective too, if well matched for phase.
  • Limiting above 1kHz with multi-band dynamics can reduce string slap transients.
  • The global send effects you use to blend together your drums and other instruments should also work fine for the bass.
  • Use fader automation to draw attention to nice fills or licks so that the listener doesn't miss them. It's easier to do this while listening to single-speaker mono.
  • If level rides overload the mix with low end, try automating a wide 1kHz EQ boost instead.

  • If there are multiple synth layers, avoid LF phase-cancellation difficulties by choosing only one layer to carry the sub-100Hz energy, and high-pass filter the rest.
  • Check stereo synth patches for mono compatibility at the low end.
  • Adjust MIDI/synth programming to tackle dynamics concerns.
  • If sub-100Hz inconsistencies remain, address them with multi-band dynamics processing, or replace the frequencies with a sub-bass synth.
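As a minimal sketch of high-pass filtering the non-sub layers, here's a one-pole filter with the cutoff set around 100Hz as suggested above. A real mix would normally use a steeper filter; this just shows the principle of stripping LF energy from all but the chosen layer:

```python
import math

def high_pass(samples: list[float], cutoff_hz: float = 100.0,
              fs: float = 44100.0) -> list[float]:
    """Simple one-pole high-pass: rolls off energy below cutoff_hz,
    so only one synth layer carries the sub-100Hz content."""
    dt = 1.0 / fs
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    a = rc / (rc + dt)
    out = []
    y, x_prev = 0.0, 0.0
    for x in samples:
        y = a * (y + x - x_prev)  # differentiate-and-leak recurrence
        x_prev = x
        out.append(y)
    return out
```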

  • For layered synth parts, solo all layers together and listen through the whole track carefully. If you discover any LF loss from phase-cancellation, bounce the MIDI parts as audio and adjust the inter-layer timing for offending notes.
  • Compare the mix with relevant commercial records.
  • Use your main monitors to focus on the bass's low end and warmth/mud frequencies, but switch to smaller speakers to assess mid-range audibility.
  • If your bass hogs the low end, your kick may need more energy than you expect at 100-200Hz.
  • To conserve mix headroom, try briefly ducking the bass 2-3dB in response to each kick hit.

  • Where upper-spectrum filter sweeps are too abrasive, saturation can make them less obvious; multi-band limiting can go further, but works best with lots of narrow bands.
  • Use fader automation to draw attention to nice fills or licks, so that the listener doesn't miss them. If possible, do this while listening to single-speaker mono playback.

If you make sure that bass instruments are tuned before recording, bass pitching problems aren't usually a huge issue at mixdown. That's partly because synths and (to a certain extent) fretted basses have pre-quantised pitches, but also because tuning is a relative judgement: even an out-of-tune bass can sound fine if the other parts have been recorded to fit around it!

If you do detect some sour notes at the mixing stage, the monophonic nature of most bass parts usually makes it easy to correct them adequately, even with a DAW's built-in pitch-processing. The only time I've bothered to get something specialised like Auto-Tune or Melodyne involved is where the performer of a fretless electric or acoustic upright seems to have been on the bevvies!

Bear in mind that your pitch-processing judgements can be biased according to the way you listen.

For example, if a bass note's harmonics are slightly out of tune with its fundamental and you adjust the tuning while working on headphones, you might end up with something that sounds more out of tune on a full-range system. Listening level also has an effect on pitch perception, such that you may perceive bass instruments to be shifting subtly flatter the louder you listen.

Timing is usually a more pressing concern with home-brew bass tracks.

The bass carries so much of the audio power in a track, and is often mixed so loud in modern styles, that it constitutes a powerful driver of the song's groove. It's thus rarely a good idea for its timing to disagree with other important rhythmic elements in the track. It's amazing how much tighter a mix can feel if you just ensure that the bass and kick drum are fairly closely aligned, for instance. This doesn't mean just lining up the waveforms by eye (which can get you to a good 'starter' position for each note), as things that 'look' in time can sound out of time.

There's also a good chance that the groove might sound better with the bass notes slightly trailing or anticipating the drum hits — so, as with all things mix-related, your ears should always be the final arbiters. Don't just concentrate on note onsets, either, as the end-point of a bass note can also make a big difference to the groove.

I've never felt the need for special software for doing bass edits, because crossfaded audio edits always seem fine for the job.

Periodically I've tried tangling with time-stretching for bass timing corrections, but I've always ended up feeling that the digital chorusing and 'gargling' artifacts induced in the mid-range were detrimental to the mix tone, so I've always reverted to simple edits. Most of the time you can just snip in a gap between bass notes, or at a point just before one of the kick-drum beats, and no-one will notice a thing if you apply a few milliseconds of crossfading.

On occasion, though, you need to edit in a more exposed location in the middle of a bass note, in which case the trick is to match the waveform as closely as possible across the edit point, because any big discontinuity will result in a click. But won't a crossfade just smooth that over? Nope, it'll turn it into a thud, which may well interfere with your rhythmic groove even if it isn't clearly audible in its own right. Even when you've matched the waveform across the edit, it's still wise to put in a short crossfade (over a single waveform cycle or so), but try to select an 'equal gain' crossfade if you can, rather than an 'equal power' one, or else you'll get an unwanted level bump at its centre.
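The difference between the two crossfade laws is easy to see numerically. With identical (fully correlated) material either side of the edit, equal-gain fades always sum back to unity, whereas equal-power gains sum to about +3dB at the midpoint, which is the level bump mentioned above:

```python
import math

def crossfade_gains(frac: float, law: str = "equal_gain") -> tuple[float, float]:
    """Return the (fade-out, fade-in) gain pair at position frac (0..1)
    through the crossfade, for the two common fade laws."""
    if law == "equal_gain":   # linear: gains always sum to 1
        return 1.0 - frac, frac
    if law == "equal_power":  # squared gains sum to 1; gains sum to >1 mid-fade
        return math.cos(frac * math.pi / 2), math.sin(frac * math.pi / 2)
    raise ValueError(law)
```

Equal-power fades exist for *uncorrelated* material (two different takes), where summing powers rather than amplitudes keeps the perceived level constant; across an edit in one continuous bass note, the material is correlated, hence the equal-gain advice.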

You’ve probably heard it a thousand times before: You need to pay attention to your gain staging! You need to leave headroom when you’re mixing music!

But what are they exactly? Is there a specific amount of headroom you need to have? How should you configure your gain structure in order to optimize the gain staging throughout your mix?

This quick explanation will help you understand exactly what headroom and gain staging are and why they’re necessary—even in the brave new digital world.

What is Gain Staging?

Back in the good old days of analog recording, you had 2 main things to consider when it came to recording a healthy yet clean signal:


  1. Noise floor
  2. Headroom

Good gain staging was an engineer’s way of navigating safely between them. It meant making sure that the gain structure between devices was set up properly.

It ensured that any device in the signal path would receive an optimal signal level to its input, and output an optimal signal level to the next device in the chain.

Noise floor is the inherent noise of the signal path, including the recording medium (in those days it was magnetic tape).


The goal was to keep your signal as high above the noise floor as possible to maximize your signal-to-noise ratio.


This meant that quieter passages wouldn’t be obscured by a bunch of hisses and other undesirable noise.

The only problem with trying to keep your signal high above the noise floor is that you ran into the other issue—headroom.

What is Headroom?

Headroom is how much room your audio signal has above its nominal level before it starts to get compressed and distorted.

Every recording medium has a finite amount of headroom. If you try to record a signal that’s louder than what the medium is capable of handling, it will clip the tops of the waveform and you’ll hear that as distortion.

Headroom in analog circuitry and tape recorders was a gradual thing. When pushed above a certain limit you’d get a soft compression/saturation effect at the beginning and the louder you pushed the input signal the more overt the distortion became.
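That gradual analog onset can be sketched with a soft-saturation curve: roughly linear for small signals, with compression setting in progressively as the level rises. The tanh shape here is a common illustrative stand-in for tape/circuit saturation, not a model of any specific device:

```python
import math

def soft_clip(sample: float, drive: float = 1.0) -> float:
    """Analog-style soft saturation: nearly linear at low levels, with
    gain reduction increasing smoothly as the input level (or drive)
    rises, normalised so full scale maps to full scale."""
    return math.tanh(drive * sample) / math.tanh(drive)
```

Raising `drive` mimics pushing the input harder: the curve bends earlier and the distortion becomes more overt, just as the text describes.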

Engineers would try and find the best balance point between noise at the bottom and distortion at the top, which is ultimately what gain staging is all about.

Digital Gain Staging: Perfectly Linear

Digital audio gets rid of a lot of these gain structure issues. The noise floor is no longer really a concern in most modern DAWs as the level of system noise is so low that it doesn’t really impact the signal at all.

Headroom is less of a concern too. But it still matters!


In digital we have an absolute limit (0dBFS, or decibels relative to full scale). Any signal above it will get clipped. But up until we reach that point digital is a perfectly linear medium, so we don’t have that gradual onset of compression and distortion that analog recording exhibits.
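A minimal sketch of that absolute limit: below full scale (1.0 in linear amplitude), samples pass through untouched, and anything beyond it is flattened at the ceiling:

```python
import math

def to_dbfs(linear: float) -> float:
    """Linear amplitude (1.0 = full scale) to dBFS."""
    return 20.0 * math.log10(linear)

def hard_clip(sample: float) -> float:
    """Digital clipping: perfectly linear up to full scale, then flat."""
    return max(-1.0, min(1.0, sample))
```

Unlike the analog soft-saturation case, there is no gradual bend: a sample at 0.99 is untouched and a sample at 1.01 is chopped, which is why digital clipping sounds abrupt and harsh.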

Don’t Stop The Staging

Great. So why should you care about gain staging if it looks like digital recording has fixed all the problems?

Well, the primary reason is that any digital chain still contains at least one (and usually 2) analog stages (hint—it’s the A in your AD and DA converters).

When you record, your signal has to go through an analog stage prior to being converted into digital. It also has to be converted back to analog on the way out to your monitors.

These analog stages are subject to the same gain structure problems we mentioned earlier:

  • Record too low and you’re fighting the noise floor.
  • Record too high and you push things into distortion and clipping.

So when you’re recording it’s best to set your levels conservatively. A good rule of thumb is to equate -18dBFS with the analog standard of 0dBVU.

If you keep your peaks hitting not much above -10dBFS, and keep the average level around -18dBFS you should have a signal that’s right in that sweet spot.
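Those guideline figures are easy to check against a recorded take. This hypothetical helper just reports peak and RMS levels in dBFS, for comparison with the -10dBFS peak and roughly -18dBFS average targets above (RMS is used here as a simple stand-in for "average level"):

```python
import math

def levels_dbfs(samples: list[float]) -> tuple[float, float]:
    """Return (peak, RMS) level of a take in dBFS (1.0 = full scale)."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(peak), 20.0 * math.log10(rms)
```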

Just keep in mind that more dynamic instruments like drums or percussion might need more space as their signals can have very large peaks.

Gain Staging for Plugins

The need for proper gain staging doesn’t end when your tracks are recorded…

Even though you’re in the digital domain take a look at all your plugins. How many are modeled after old analog gear like compressors, EQs, console channels, tape machines, etc.?

If the modeling was done properly, most of them will exhibit the same “non-linear” behavior as their analog originals. So the same rules apply: the harder you push them the more they’ll start to compress, saturate and distort.

This isn’t always a bad thing. It can be used for creative tone-shaping purposes. But in general, if you’re pushing all your plugins with high levels it may start to make your mix sound brittle, harsh, and 2-dimensional.

So maintaining the same concept of optimal gain staging that you use during recording is your best bet: -18dBFS is a good average level to aim for.

Keeping it conservative will help you maintain proper gain structure throughout your mix.

Better Gain Staging Means a Better Mix

If you did it right, you should find that your master bus levels are low enough that you don’t have to worry about clipping.

So give headroom and gain staging the time they deserve. And get on with the task of mixing without fearing your faders!