Musings on Mastering (and Robots)
Mastering is the final stage of the recording process. It’s also the first step in manufacturing and distribution. Old-school mastering, by the traditional definition, is the process of physically putting the music onto a master ready for copying. So technically the only real mastering engineers are the people cutting lacquers or DMMs (Direct Metal Masters) for vinyl production. However, we’ve evolved past that definition. Mastering, as we know it today, is the balancing of the overall tonality of the music. The mastering engineer achieves this with frequency-based amplitude adjustment (EQ) and selective-band or wide-band dynamic amplitude adjustment (compression). Any processing beyond that is not mastering. I won’t die on that hill of a statement. I’m not a gatekeeper of what is and what is not mastering. I will say that if you can’t regularly master with just an EQ and a compressor/limiter, you’re not really a mastering engineer. You’re just an audio engineer doing mastering. Fight me.
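If you want to see how thin that toolkit really is, here’s a minimal sketch of the EQ-plus-compressor idea in Python (numpy/scipy). It’s illustrative only: the shelf frequency, gain, threshold, and ratio are placeholder settings, not anyone’s mastering recipe.

```python
# A minimal EQ-plus-compressor chain. Illustrative only: all the
# settings below are placeholders, not a mastering recipe.
import numpy as np
from scipy.signal import lfilter

def high_shelf(x, sr, freq=8000.0, gain_db=1.5):
    """Gentle 'air' lift above `freq` (RBJ cookbook high-shelf biquad)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * freq / sr
    alpha = np.sin(w0) / np.sqrt(2)          # Q = 1/sqrt(2)
    cosw, sqA = np.cos(w0), np.sqrt(A)
    b0 = A * ((A + 1) + (A - 1) * cosw + 2 * sqA * alpha)
    b1 = -2 * A * ((A - 1) + (A + 1) * cosw)
    b2 = A * ((A + 1) + (A - 1) * cosw - 2 * sqA * alpha)
    a0 = (A + 1) - (A - 1) * cosw + 2 * sqA * alpha
    a1 = 2 * ((A - 1) - (A + 1) * cosw)
    a2 = (A + 1) - (A - 1) * cosw - 2 * sqA * alpha
    return lfilter([b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0], x)

def compress(x, sr, threshold_db=-12.0, ratio=2.0,
             attack_s=0.005, release_s=0.100):
    """Wide-band feed-forward compressor with a one-pole envelope follower."""
    atk = np.exp(-1.0 / (attack_s * sr))
    rel = np.exp(-1.0 / (release_s * sr))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        coeff = atk if level > env else rel   # fast attack, slow release
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        gain_db = -over_db * (1.0 - 1.0 / ratio)  # ratio 1.0 => no reduction
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out

# usage: mastered = compress(high_shelf(mix, sr), sr)  (mix: float audio array)
```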
Before you hit that all-caps reply, let me explain what I think creative mastering is in the modern age. It’s a best guess. There is no right way to master a song. There are a lot of wrong ways. Mastering, as an engineering art, has become so diluted over time that what would typically have been part of the recording or mixing process (or even sound design) is now routinely practiced in mastering studios.
Also, there are now “AI” and actual fucking robots doing mastering. Literally, robot arms turning analog EQ knobs. Soon Skynet will be eliminating all the human mastering engineers in a global plot to take over the mastering racket.
A well-mastered song should sound good on every playback device, from phone speaker to Bluetooth to a Mack-Truck-sized sound system. Often, after the mixing process, the mix engineer will provide a ‘reference master’: a version of the mix approximating the sound of the master so the artist can hear what the mix might sound like after mastering. Sometimes the artist will say they like ‘the ref’ better than the master. Which makes sense. It’s usually just the mix made louder, and both the artist and the mix engineer are used to its sound. More often than not, though, if that ref master is played on multiple playback systems it won’t sound consistent. A well-mastered track may not sound as hype as the ref master, maybe not even as loud, but the mastering engineer will have made tonality decisions based on a wide range of playback scenarios. We’re not going for the master that sounds best in our studio; we’re going for the master that sounds best overall.
Loudness is important in mastering. The mastering engineer will set the overall loudness of the song and of the album as a whole. There is no standard for this; it’s still the Wild West of loudness when it comes to music. There is no one answer to how loud it should be. Every song and album has a loudness sweet spot. A mastering engineer should be able to find it, and a lot of time will go into achieving it. Can a robot understand ‘the sweet spot’? That’s purely an ear thing, a ‘feeling’ that plays an important role in mastering.
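To be fair to the robots, the measuring part is solved math: integrated loudness in LUFS (ITU-R BS.1770) is the number streaming platforms normalize to. Here’s a sketch using the third-party pyloudnorm library; the filename and the -14 LUFS target are placeholders, one common streaming reference, not “the” sweet spot, because there isn’t one.

```python
# Measuring integrated loudness (LUFS, ITU-R BS.1770) with the
# third-party pyloudnorm library. "mix.wav" and -14 LUFS are
# placeholders, not a universal target.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")            # placeholder filename
meter = pyln.Meter(rate)                   # K-weighted BS.1770 meter
loudness = meter.integrated_loudness(data)
print(f"integrated loudness: {loudness:.1f} LUFS")

target = -14.0                             # assumption: one possible target
gain_db = target - loudness
louder = data * 10 ** (gain_db / 20)       # plain gain only; no limiter here
```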
Loudness wars are still a thing, but mastering engineers are not frontline fighters anymore. The mix engineers often set the loudness before their mixes ever hit a mastering studio. Some mixes come in louder than a mastering engineer would make them after mastering. A mix with dynamic range allows me to do my job better. That’s the rule, but there are always exceptions. Sometimes it’s the mix engineer’s vision for the song to be loud, and I have to respect that and work within it. I can ask them to turn it down, but only if I think it’s been smashed to shit by mistake or if I know without a doubt that I can’t make it sound better (or the same) after mastering because of the loudness. Sometimes, if I ask for a quieter mix, I lose the job, and that always sucks. So I have to be mindful of this. The bottom line is: if it comes in loud, it stays loud. A mastering engineer can always turn a master down, but the dynamic range crushed out of a loud mix can’t be restored.
Many mastering engineers will try to impart a sound of their own via tape, tubes, transformers, and various forms of saturation processing. I understand that. Mastering can be boring and uncreative, especially if the mix is already near perfect. Mastering engineers should put their egos aside. We’re there to be seen, not heard, more or less. Do what’s best to see the vision of the artist, producer, recording engineer, and mix engineer through. We don’t add a sound; we enhance the one that’s there by doing as little as possible. Lemme repeat that for those in the back: mastering engineers are to do as little as possible, every time. We are not creatives. That’s not to say mastering doesn’t require creative thinking and engineering; of course it does, but we don’t add something new to the creative process. Again, there are always exceptions, but as a mastering engineer, ‘your sound’ is the sound of the audio you’re working on. Start with a little EQ. Then add some compression. Does it help? If not, take it out. Is the EQ enough? If so, you’re done; just adjust the loudness and then check the EQ again. If not, start adding all those tubes and such, but only if they improve the sound, not just to make your mark.
So if it’s that simple, why can’t robots master? Well, the answer is they can. The basics of mastering are super simple. Robots can do a pretty good job at basic mastering, and automated mastering is a great tool to get an approximation of what your mix might sound like after mastering, or for a mastering engineer to compare against. What robots can’t do is ‘feel’. Feeling in audio engineering sounds like hocus-pocus, but your emotions and your ability to connect with the music have a direct psychological connection to the frequencies you choose to attenuate or accentuate. Robots won’t use feelings; they’ll just use math. Math will always tell you to fuck your feelings. Sometimes I add energy at 16 kHz and above, at the edge of (or past) human hearing, not because I can hear what the boost itself is doing but because of how it affects the feel of the music through changes heard in the lower frequencies. Sometimes I run a mix through a compressor at a 1:1 ratio (no compression) because of the ‘sound’ of the compressor. That sound is hard to measure and quantify. It’s especially present in analog devices. Each analog device sounds different. Why, exactly, is hard to explain. It’s a ‘feel’ thing, especially in devices that offer non-linearities, like tubes. That feel is ephemeral and important in mastering. No robot or machine-learning model can express it. We don’t need to use those ‘feel’ devices every time, but when we do, they are invaluable.
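You can at least measure what a non-linearity does, even if you can’t quantify why one feels right. The toy sketch below isn’t a model of any real tube or compressor; it pushes a clean 1 kHz tone through a tanh saturation curve, a generic stand-in non-linearity, and prints the odd harmonics the curve adds even though nothing is being ‘compressed’.

```python
# Not a model of any real tube or compressor, just a toy non-linearity:
# tanh soft saturation applied to a clean 1 kHz sine. Even with no gain
# reduction happening, the curve adds odd harmonics you can measure.
import numpy as np

sr = 48000
t = np.arange(sr) / sr                       # one second of audio
x = 0.5 * np.sin(2 * np.pi * 1000 * t)       # clean 1 kHz test tone

drive = 2.0                                  # assumption: arbitrary drive
y = np.tanh(drive * x) / np.tanh(drive)      # normalize so x=1 maps to y=1

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / sr)      # 1 Hz bins for a 1 s signal
for h in (1000, 3000, 5000):                 # fundamental + odd harmonics
    i = np.argmin(np.abs(freqs - h))
    print(f"{h} Hz: {20 * np.log10(spectrum[i] + 1e-12):.1f} dB")
```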
Despite some engineers’ and manufacturers’ claims, analog is not better than digital. Digital is not better than analog. They sound different. Even the best digitally modeled analog doesn’t sound like analog. It’s great, but it’s something else. Every analog device is different every time you use it: its components are slightly older, the power is different every day, the temperature of the room is slightly different. Plug-ins are the same thing every day in the hands of every engineer. Analog has a je ne sais quoi. Digital has perfect repeatability. Both of those traits are equally a bug and a feature. I’m biased, but when choosing a mastering studio I feel it’s super important to use one that can offer both digital and analog. If a studio is digital-only, while they can get great results, they can only ever offer digital. From experience, I can tell you that some mixes benefit greatly from going through an analog mastering stage. Every tube, transformer, EQ filter, and compression circuit sounds different, and beyond that, each sounds different depending on how hard you drive it. Analog gain staging will change the sound just as much as choosing different frequencies on an EQ. Digital doesn’t quite have that trait, but new DSP is being designed all the time to approximate the sound of analog. It’s amazing what digital can do, but as good as digital is getting, I can’t see myself ever fully giving up analog.
A well-designed room and full-range speakers are probably the most important tools in mastering. Often, more than 50% of a mastering studio’s budget will go into the room’s design and construction alone. That said, we’re in a new age of headphone design. My headphones were expensive and are paired with an equally expensive headphone amp, challenging the need for a proper studio. I could easily, and sometimes do, master just on headphones. Still, some jobs must be done in a proper room, and I couldn’t imagine mastering without one. If a mastering engineer is only using headphones, they can only ever use headphones. Some music you just need to feel with your whole body to make the right mastering decisions. Good headphones are really great at playing back bass, but nothing can replace the moving air of large speakers in a well-designed room. My mastering room is flat down to 27 Hz, and you can feel those sub frequencies as much as you can hear them. That connection of ears, mind, and body is not the full experience in headphones.
Lastly, it’s a mastering engineer’s job to stay up on all the sound technology, distribution formats, musical genres, trends in audio production, and all things music-related. This is the new QC/QA. We are the last people to manipulate the music. It’s not enough just to do equalization. We have to be aware of where the music will go: not just the physical playback systems, but who will be listening and on what platforms. From audiophile to TikToker, these are the people who will be hearing our masters, and each ear matters. The music is the message, and the tonality will help the listener hear and connect with it. Knowing where the music will go between the mastering studio and the final listener is just as important as what frequency or amplitude we manipulate.