So if you’re an audio engineer or just an artist mixing your own songs, we’ve got something in store for you today: we’ll be talking about the ideal volume levels for your music when submitting it to streaming services such as Spotify, Apple Music and YouTube (the only streaming services that matter LOL).
How do we even measure the volume level of our songs?
The first thing you’re going to want to do is figure out how loud your song currently is. You can do this pretty quickly with a VST plugin such as the Youlean Loudness Meter (oh yeah, did I mention it’s free?).
So now you’re probably wondering: how the heck do I use this thing? First, download and install it, then insert it as an FX on the stereo out of your DAW’s mixer.
Now that you’ve done this, all you need to do is play your song, open up Youlean and take a look at the short-term and integrated LUFS readings. The short-term LUFS will dip during the quiet parts of the song and rise during the louder parts, while the integrated LUFS averages over everything it has heard so far. So you’ll want to let the whole song play through (loud parts included) for the integrated LUFS to be calculated properly.
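If you’re curious what a meter like this is doing under the hood, here’s a heavily simplified sketch. A real LUFS meter (per ITU-R BS.1770) K-weights the signal and gates out silence first; this toy `rms_dbfs` helper (a name made up for this example) skips both and just reports an RMS level in dBFS, which is close enough to build intuition.

```python
import math

def rms_dbfs(samples):
    """Rough loudness estimate: RMS level of the samples in dBFS.

    Real LUFS meters (ITU-R BS.1770) apply a K-weighting filter and
    gate silent passages before averaging; this sketch skips both,
    so treat the result as a ballpark figure only.
    """
    mean_square = sum(s * s for s in samples) / len(samples)
    if mean_square == 0:
        return float("-inf")  # digital silence
    return 10 * math.log10(mean_square)

# One second of a full-scale 440 Hz sine sits at about -3 dBFS RMS.
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(rms_dbfs(sine), 1))  # ≈ -3.0
```

The meter does exactly this kind of averaging continuously as your song plays, which is why the reading keeps changing until the full song has gone by.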
What are LUFS?
Now, if you’re wondering what the heck LUFS are: you don’t really need to know, but if you want to completely nerd out, it stands for Loudness Units relative to Full Scale, or LUFS for short. I know this sounds like a complicated term, but all it means is the average level of a song’s perceived loudness. If you want to learn more about LUFS, check out this sweet article from Sweetwater (see what I did there?).
We also used to use RMS, which stands for Root Mean Square, as a way of figuring out a song’s average volume, but found it wouldn’t produce consistent results: two songs with the exact same RMS could still have one sounding louder than the other. That’s not a problem when using LUFS to analyze average loudness, as two songs with the same LUFS will pretty much sound like they’re at the same volume level.
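You can see why RMS falls short with two pure tones. This little sketch (the `rms` helper is just for illustration) builds a 100 Hz sine and a 1 kHz sine at the same amplitude: RMS calls them identical, but our ears are far more sensitive around 1 kHz, so the second tone sounds noticeably louder. LUFS runs the audio through a K-weighting filter before averaging, which is why it tracks perception better.

```python
import math

RATE = 44100

def rms(samples):
    """Plain root-mean-square of the samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One second each of a 100 Hz and a 1 kHz sine at the same amplitude.
low  = [math.sin(2 * math.pi * 100  * n / RATE) for n in range(RATE)]
high = [math.sin(2 * math.pi * 1000 * n / RATE) for n in range(RATE)]

# RMS says they're dead equal...
print(round(rms(low), 4), round(rms(high), 4))  # 0.7071 0.7071
# ...but play them back and the 1 kHz tone clearly sounds louder.
```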
Why is this good? Because then streaming services, and even radio and television broadcasters, can set their own LUFS standard for media creators to follow, thereby avoiding any kind of loudness war between songs, and even between TV shows and commercials. For example, if you’re watching TV (especially at night), sometimes the commercials come on really loud, scaring the crap out of you as you scurry for the remote.
What is the mastering level for Spotify?
Let’s take it back to music: say you’re on Spotify listening to whatever crap you’re listening to, and another song comes on that’s much quieter than the previous one. This is going to force you to turn the volume up, right? Or even worse, if the next song is much louder, you’re forced to turn the volume down.
So by setting broadcast standards for LUFS we can avoid these kinds of “pleasurable” listening situations. Right?
Not exactly, because as we’ll see, different streaming services have different guidelines for normalization (LUFS). So now you’re probably like, “wait, what is normalization?” It’s basically when we bring the audio signal up (or down) to a certain level, and it’s exactly what all streaming services do to prevent some songs being louder or quieter than others. In short, normalization puts all songs at the same volume level.
But there is a consequence to this, and it depends on whether your song is too loud or too quiet. For example, if you submit your song to Spotify and its LUFS is at -5, then Spotify will normalize it by lowering it to between -13 and -15 (we’ll just say -14 from now on to keep it simple).
| Streaming Platform | Normalization (LUFS) | True Peak (dBTP) |
| --- | --- | --- |
| Spotify | -13 to -15 | -1.0 to -2.0 |
| YouTube | -13 to -15 | -1.0 |
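The math behind normalization is dead simple: the platform measures your song’s loudness, subtracts it from the target, and applies that many dB of gain. Here’s a bare-bones sketch of that idea; the function names are made up for illustration, and real platforms also factor in things like true peak limits, which this skips.

```python
TARGET_LUFS = -14.0  # roughly where Spotify and YouTube normalize to

def normalization_gain_db(measured_lufs, target_lufs=TARGET_LUFS):
    """dB of gain the platform would apply to hit the target."""
    return target_lufs - measured_lufs

def apply_gain(samples, gain_db):
    """Scale the samples by the equivalent linear factor."""
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]

# A hot master at -5 LUFS gets pulled DOWN 9 dB...
print(normalization_gain_db(-5.0))   # -9.0
# ...while a quiet -20 LUFS master gets pushed UP 6 dB.
print(normalization_gain_db(-20.0))  # 6.0
```

So nobody escapes normalization; the only question is how much gain gets applied and what your master sounds like afterwards.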
Now you may be thinking, “oh great, I don’t need to do anything on my end”. But you’d be wrong, because when you go hear your own song on Spotify it’ll sound very squashed (as if someone forced it through a funnel). In fact, if you look at the waveform of your song, it’ll look like a block, with very little difference between the loud and quiet parts of the song.
This difference between loud and quiet parts is called the dynamic range. So when you hear audio engineers or audiophiles or just plain snobs talking about how the “dynamic range is squashed” this is what they are referring to.
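One crude way to put a number on this is the crest factor: the gap between a signal’s peak and its RMS level, in dB. It isn’t a full dynamic-range measurement (the helper below is just a sketch for this article), but it shows the effect: hard-clip a sine wave, i.e. “squash” it, and the gap between peak and average collapses.

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: a crude stand-in for dynamic range."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# An untouched sine keeps about 3 dB between its peak and its RMS...
sine = [math.sin(2 * math.pi * n / 1000) for n in range(1000)]
# ...while a hard-clipped ("squashed") copy has much less headroom.
squashed = [max(-0.5, min(0.5, s)) for s in sine]

print(round(crest_factor_db(sine), 1))      # ≈ 3.0
print(round(crest_factor_db(squashed), 1))  # noticeably smaller
```

The squashed copy’s waveform is exactly that block shape described above: the peaks are shaved off, so peak and average end up almost on top of each other.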
So now you may be thinking, “Okay fine, then I’ll submit my songs at -14, Spotify won’t apply any normalization to them, they’ll keep their dynamic range and I’ll be happy”. Yes, you can do this, but it depends on a few things.
From experience, we’ve found different genres have a sweet spot for what LUFS we like to have them at. For example, our mixing and mastering service takes into account not only the song’s genre, but also its style, its sonics and the actual recording. So it’s not possible to say “keep your song at this exact level”, as it takes a lot of trial and error to figure out what works and what doesn’t. But since I hate to give you a non-answer, I’ll just say keep it around -10 LUFS.
Now it’s your turn: in the comments, let me know where you set your levels when submitting to streaming services, and let me know what genre you make!