Difference Between 120 Kbps And 320 Kbps Songs

Melissa Hassel
Jul 25, 2024, 4:53:03 AM, to liascapouteb

I have friends who are very picky when it comes to MP3 bitrate and will always look for the 320 kbps version of a file. However, I have never noticed any difference; they sound the same to me. I remember reading somewhere (I can't remember where) that the human ear is simply incapable of sensing the difference, even when one is present.

I myself took a similar test and failed as often as I succeeded in identifying which track was which (160 vs. 320), a result no better than random guessing. I can hear a very slight difference most of the time between LAME-encoded (--alt-preset standard) MP3 files and CD audio, but only on an expensive system with terrific speakers in a quiet room. For earbuds and car listening it doesn't really seem to matter.
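For anyone curious whether their own run of blind trials actually beats chance, a one-sided binomial test answers it. A minimal sketch in Python (the trial counts below are hypothetical, just for illustration):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial test: the probability of getting at least
    `correct` answers right out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 10 correct out of 16 trials (made-up numbers):
p = abx_p_value(10, 16)
# p is about 0.23, well above the usual 0.05 threshold --
# statistically indistinguishable from guessing
```

A run like "failed as much as I succeeded" (8 of 16) gives p = 0.60, which is exactly what random guessing looks like.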

MP3 compression is commonly used to reduce the size of digital music files, but it introduces a number of potentially audible artifacts, especially at low bitrates. We investigated whether listeners prefer CD quality to MP3 files at various bitrates (96 kb/s to 320 kb/s), and whether this preference is affected by musical genre. Thirteen trained listeners completed an A/B comparison task judging CD-quality and compressed files. Listeners significantly preferred CD quality to MP3 files up to 192 kb/s for all musical genres. In addition, we observed a significant effect of expertise (sound engineers vs. musicians) and musical genre (electric vs. acoustic music).

The impact of using loudspeaker versus headphone playback on the subjective quality of compressed audio is investigated. It is shown that reverberation and to a lesser extent cross-talk, which both are introduced naturally in loudspeaker playback, can effectively hide coding artifacts.

This other paper describes the differences between bitrates and between different testing methods. In all cases it shows a very minor difference between 192 kbit/s and 256 kbit/s and essentially no difference between 256 kbit/s and 320 kbit/s.

Okay, so the latter makes it sound like there is plenty of data available, so you can rip at 256 kbps or 320 kbps without interpolating (let alone when downloading a song from an e-store that has access to the original sources at even higher than CD fidelity).

However, I've had a few CDs with a few songs in the industrial genre (a remix by Nine Inch Nails) which have very rapid, rhythmic sounds overlaid with very chaotic sounds like an electric guitar. There is one particular section of a song which features this type of music. Unless the encoder is set to a very high bitrate, the music will skip beats or stretch them out, both of which are jarring to the listener. This may be due more to the sample rate being too low than to the encoding bitrate, but I found that the passage would only come through well at very high lossy bitrates or with lossless encoding. That said, this particular song is fairly unusual, and most people would probably consider it indistinguishable from noise.

The outro to this song beginning shortly after 5:10 is a good indication of the type of sound which codecs seem to handle poorly. It's not exactly what I'm thinking of, but I can't remember the name of the other song. Even this YouTube video seems off to my ears, though, and it's a copy of the studio album.

For example, imagine a file that's been compressed using a lossy algorithm such as MP3. You then use this file to, say, record a DJ mix, which is then compressed again to MP3. You email this mix file to a friend of yours who wants to transmit it over satellite or internet radio.

You're now looking at three compression-decompression steps. Each time you're going to lose different data. Even if they're all, say, 192kbps, it's going to sound a LOT worse than an original 192kbps compression.
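The cumulative damage can be illustrated with a toy model. This is not how MP3 loss actually works (real artifacts are frequency-domain), but the key property is the same: each encode pass discards different information, so errors accumulate rather than cancel. A sketch under that assumption:

```python
import math
import random

def lossy_pass(samples, noise_rms):
    """Toy stand-in for one lossy encode/decode cycle: each pass
    throws away different information, modelled here as adding
    fresh, independent noise of a fixed RMS level."""
    return [s + random.gauss(0, noise_rms) for s in samples]

def rms_error(reference, signal):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(reference, signal)) / len(reference))

random.seed(1)
original = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(5000)]

signal, errors = original, []
for _ in range(3):                    # the three generations described above
    signal = lossy_pass(signal, 0.01)
    errors.append(rms_error(original, signal))

# error keeps growing with each generation, roughly as sqrt(n) * 0.01:
# about 0.010, then 0.014, then 0.017
```

Because each pass's error is independent, three generations at the same bitrate end up audibly worse than a single encode, which is exactly the point above.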

Disks are cheap and your music collection will, in theory, last you your lifetime. Who knows if one day you'll want to use your 192kbps files (which sound fine today) as the background music for a DVD or Blu-ray Disc sometime down the road? (These compress their audio using lossy codecs as well.)

Huge hard disks are incredibly cheap these days, so storage isn't really a concern. You should archive your audio in the highest possible quality so that it's viable for uses other than direct listening in the future, by you or others.

You also point out that when purchasing music online you download the highest quality, and that you transcode accordingly for mobile use, but only rip your CDs as 192kbps. This presumes that you still keep your CDs around as the "master" copy, should you need the original data. For many people, getting rid of the CDs is the whole idea - optical media is slow and bulky compared to modern storage systems like 2.5" hard drives or flash. The representation on the disk becomes the master -- and it sounds like you already recognize the value of having the highest quality master available (versus a lower quality "working" copy).

PS: You are misusing the term "sampled" when you say "sampled at 192kbps". The sampling rate and the data rate of the compression algorithm are entirely different things. The sampling rate (the number of audio samples per second) does not change regardless of how the data is compressed.
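The distinction is easy to see from the arithmetic: CD audio's uncompressed data rate is fixed by the sample rate, bit depth, and channel count, and an MP3's bitrate only describes how far that stream has been squeezed, not how often it was sampled.

```python
# Uncompressed CD audio -- the sample rate is a property of the format
# and is untouched by MP3 compression; only the stored bits per second change.
sample_rate = 44100          # samples per second, per channel
bit_depth = 16               # bits per sample
channels = 2

uncompressed_kbps = sample_rate * bit_depth * channels / 1000   # 1411.2 kbps

mp3_kbps = 192
compression_ratio = uncompressed_kbps / mp3_kbps
# about 7.35 : 1 -- yet the decoded audio is still 44100 samples/second
```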

Along the same lines, if exiftool reports the values below for the original recording, it means the file is 96 kbps across both channels, so one could convert it to mono and export as MP3 at 48 kbps and get the same per-channel quality, minus the deterioration from one extra MP3 compression.

(Joint stereo might also mean that both channels are the same and thus the stereo is redundant and one might be better off recording as mono, but that is another story and I may not be able to change that)

If it is a stereo recording with two channels and I export as MP3 at 96 kbps, I could say that each channel gets 48 kbps. If I convert the file to mono first, I only need to export at 48 kbps to get the same quality. Is that so?

The recorder puts Joint Stereo into the metadata (it seems to use MS mid/side coding rather than intensity stereo, which is good for quality according to your links). Whether it uses variable or constant bit rate, the metadata do not say, but I assume the bit rate is constant, because when I just open the file, apply Tracks > Stereo Tracks to Mono, and export at 48 kbps constant bit rate, I get roughly half the size. If the bit rate were variable, the difference would not be that large, from what I understand.
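That size check is sound for constant bit rate: a CBR file's size is simply bitrate times duration, so halving the bitrate halves the file. A sketch (the 10-minute duration is just an example):

```python
def cbr_size_bytes(bitrate_kbps, duration_s):
    """Approximate size of a constant-bit-rate audio file; ignores
    headers and tags, which are negligible for long recordings."""
    return bitrate_kbps * 1000 // 8 * duration_s

stereo_96 = cbr_size_bytes(96, 600)   # a 10-minute recording at 96 kbps
mono_48 = cbr_size_bytes(48, 600)     # the same recording, mono at 48 kbps

# mono_48 is exactly half of stereo_96 -- the "roughly half the size"
# observed above; a VBR file's size would not scale this predictably
```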

Where the question came from was not only the confusion over mono/stereo kilobits per second per channel but also the quality of the recording. From what I find here, 96 kbps stereo should not sound half as good as these recordings do. So I wondered: what if the recordings are really 96 kbps per channel? Maybe they sound good because the mic is quite good (an AKG Perception series with the spider-web rubber-band suspension).

Keep in mind that most MP3s on the Internet are really badly encoded, or, very often have been re-encoded several times. Even at quite a low bit rate, MP3s can sound quite reasonable - it depends a lot on how demanding the audio is and how well it has been encoded.

If you intend to do any editing or processing of the file, it should be in the best quality possible, ideally in WAV format. Audacity can only work with uncompressed audio, so when an MP3 is imported, Audacity must decode it. If you then export from Audacity in MP3 format, Audacity (or LAME, to be precise) re-encodes it and in doing so adds a bit more damage.

On the other hand, if you have a stereo track and export it as stereo or joint stereo at a given bit rate or quality, then make the track mono and export that at the same bit rate or quality, the mono version will have better quality than the stereo or joint stereo export.

VBR will reduce the bit-rate for very quiet audio and will increase the bit-rate for loud complex audio. The assumption is that very quiet audio is less audible and so slight deterioration of the sound quality should not be very noticeable, whereas deterioration of loud audio will be more noticeable.

I then re-imported the 0 dB sample and checked the peak level. The Amplify effect reported the peak level to be 0.0 dB. Nyquist reported the peak level as -0.039 dB. Zooming in and carefully inspecting the highest peaks showed no sign of clipping.
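The two readings are consistent once you convert between linear peak and dBFS (20·log10 of the linear peak, with 1.0 as full scale). A quick sketch:

```python
import math

def to_dbfs(linear_peak):
    """Linear peak amplitude (1.0 = full scale) to dBFS."""
    return 20 * math.log10(linear_peak)

def from_dbfs(db):
    """dBFS back to a linear peak amplitude."""
    return 10 ** (db / 20)

# The Nyquist reading of -0.039 dB corresponds to a linear peak of
# about 0.9955 -- just under full scale, consistent with no clipping --
# while the Amplify effect evidently rounds that to 0.0 dB.
peak = from_dbfs(-0.039)
```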

The issue with LAME-encoded files having a higher peak level than the original is most evident when dealing with audio that has been heavily compressed or limited. For audio that has a large dynamic range (i.e. not compressed or limited), I see little evidence of the issue.

The only point I was making about ReplayGain concerned actually normalizing the audio to the analyzed ReplayGain level, rather than using a ReplayGain tool to write the gain to the metadata. You might do that if you still have a player that is not ReplayGain capable.
