Concluding the Great MP3 Bitrate Experiment

Interestingly, I failed the test, even though I felt pretty certain as I took it. I’d figured I’d get a better result, at least. Oh well. I made sure not to bias myself by listening in a particular order: I downloaded all the files, listened to them in random order, and sorted them over the course of five listening passes.

Thanks for making the test Jeff!

We’re talking about a low-quality item with (relatively) high-quality encodings. It’s like comparing the conductivity of silver, copper, and gold for wiring up your cheap RadioShack light dimmer. Sure, they make a difference, but can you really tell when it’s just a light?

Most can’t; some experts probably can. A lot more people would notice the difference if you were transmitting a tremendous amount of electricity over a long distance to run a huge spotlight, power a car, or (ironically) to run audio equipment. The variances would be noticeable: the light would be brighter with the higher-quality wire, the car would be faster.

But here, there is nothing that needs perfect tonal quality. Good test material would be a perfectly balanced choir, or a full orchestra with various solos jumping out throughout the piece.

But yeah, if you’re just running a light in your house, cheap wires work fine. Give us something that has quality and we might be able to tell you if it degrades.

If what you are listening to is garbage to begin with, poor encoding isn’t going to hurt it much.

Here’s how I voted:
5 - Gouda (raw CD)
4 - Limburger (160)
3 - Cheddar (320)
2 - Brie (192)
1 - Feta (128)
So, excluding Limburger, they were in the right order. I’ll think about what conclusions to draw. This was an interesting experiment.
I don’t have a special audio card and I didn’t use a headset for the test. I listened to the music through the speakers integrated in my Asus 24T1E monitor (it also works as a TV), connected directly to my ASRock motherboard.

The 160kbps VBR sample scored best overall. Why is this?
Some hinted this, but nobody really pursued this logic:
If you compress something, you lose information.
If you lose information, the information that remains becomes more prominent.
What if the information remaining in the 160kbps VBR sample just happened to capture the essentials of the song?
In other words, another song encoded at the same collection of bitrates might produce a different “best” sample, because the information that remains may happen to suit that particular song.
If this is true, experiments like this are futile, because they cannot show a best bitrate in general, only for a particular song. I scored the 160kbps VBR better than the 320kbps CBR and about equal to the uncompressed CD.
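One way to probe this song-dependence hypothesis would be to repeat the blind test over several tracks and compare the per-track rankings against the averaged ones. A minimal sketch, with entirely made-up listener scores (the track names and numbers here are hypothetical, not data from the actual experiment):

```python
from statistics import mean

# Hypothetical mean listener scores (1-5) per encoding, for three songs.
# Real data would come from rerunning the blind test with each track.
scores = {
    "song_a": {"128": 2.1, "160_vbr": 3.8, "192": 3.1, "320": 3.3, "cd": 3.5},
    "song_b": {"128": 2.4, "160_vbr": 3.0, "192": 3.4, "320": 3.6, "cd": 3.7},
    "song_c": {"128": 2.0, "160_vbr": 3.2, "192": 3.3, "320": 3.4, "cd": 3.6},
}

# Rank encodings within each song: if the top-ranked encoding changes from
# song to song, the "best bitrate" is song-dependent, as argued above.
for song, by_rate in scores.items():
    ranking = sorted(by_rate, key=by_rate.get, reverse=True)
    print(song, ranking)

# Average across songs for an overall, song-independent picture.
overall = {rate: mean(s[rate] for s in scores.values())
           for rate in scores["song_a"]}
print(max(overall, key=overall.get))  # encoding with the best mean score
</antml>```

With numbers like these, one song could crown the 160kbps VBR file while the average still favors the CD, which is exactly the kind of disagreement a multi-song test would expose.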

The Internet makes people think they are suddenly experts in every field they can imagine. Jeff Atwood falls into this category too. He is an official smartass :).

The computer boots up and starts Windows. The first snap/click of the amp sounds off. Let there be light. The second snap/click of the Denon AVR tells me that there is sound waiting to be heard. Heard at very high levels compared to what the average troll listens to audio at.

Silly people with their ½" plug and 3" desktop speakers. Oh, that nasty hollow crackle as you turn your tiny knob to adjust the volume. Or the user with the three- or four-plug ½" PC surround-sound kit, with a hand-sized “powered subwoofer” (but can it hit 30Hz and below? No!). Better yet, the headphones guy. They can hear eeeevvvvverything, right? The headphones are “surround sound” (WTF? Two-speaker surround? Next you’ll tell me Jesus made them just for you).

Then there are the men, the men who live with sound. Creators, producers, masters, audiophiles. Those who do not live with mother and father. Those who do not rent where they stay. Those of us who can play songs so loud you can feel it in the fibers of your clothing. So loud you can hear when the song you are playing was ruined by compression by some space-saving chump with no sense of hearing. You can hear someone’s iPod crap mix. Crackle crackle, fizzle fizzle. And don’t let there be any cymbals. Oh my, did anyone hear that tweeter fry? I’m sure those who can’t/couldn’t hear the difference really could not, because their equipment could not reproduce the entire “experience.”

My system automatically changes its settings for the best listening experience (if you want it to) depending on the source. It will display the original file/source info on the face of the unit, on the remote, or on whatever monitor you are viewing. Now obviously, a 16-bit, 128kbps song will not utilize anywhere near the full spectrum of sound my AVR is capable of performing. Not even close to, say, half, which is 7.2 surround. It would simply click and route the song to the front two channels.
At 32-bit, 356kbps, now we might get somewhere around three channels with a subwoofer, if the song is properly coded. Then there are the audio DVDs that I own. Oh, what a treat when you have properly encoded blast beats, double kicks, and insane hammer-ons disturbing the airwaves and watering eyes.

I have hated the MP3 codec since forever. FLAC, AIFF, even the bloated .WAV is acceptable. But user beware: go to your restroom, grab a Q-tip, and clean your ears out. Not too far in, you might lose something in there. Go buy a component of caliber, and finally bask in the embrace of sound.

@David Hayes

In addition to headphones sounding different: my pair of Sony earbuds (I want to say EX-51s, but I’m not sure; they’re cheaper in-ears) have virtually no bass response if you’re listening in mono with only one earbud in.

On top of that, my home stereo (a fairly old receiver, a Yamaha Natural Sound RX-7) sounds very different depending on the setting of its “variable loudness” control. Basically you set the loudness to its maximum, set the volume to your maximum desired listening level, and then adjust the loudness.

Adjusting the maximum volume adds bass very quickly, whereas increasing the loudness adds more treble than bass.

So really, imho, to get accurate results I’d need to use tone bypass (no treble/bass equalizer adjustments) and avoid the variable loudness on my receiver.

tl;dr: It’s incredibly complex to account for listening equipment! It’s as unique as a fingerprint.

2 things:

I used to sell high-end audio gear. While most of the people who worked there were convinced by the audiophile nonsense (now expressed via gold-plated USB and Cat 5 cables, because bits can tell how they got transmitted), I was skeptical.

So I screwed with things. I went to the back panel and swapped the connections on the demo boards so that the wrong gear was being demonstrated. Among those who claimed to be “audiophiles” (the kind of guys who bragged about spending $3000 on each component of their system), I never had anyone figure out that they were glowing with praise for mid-range Sony gear, even though our demo room was soundproofed from the outside and had a chair precisely located at the sweet spot for the speakers (which were, admittedly, really good surround speakers).

I don’t dispute that you can hear differences in audio quality to a point, but the reality is that as long as you avoided absolute junk that caused amp clipping and other obvious artifacts, the difference was negligible, even when listening to classical music with a massive set of spectral and volume ranges over time.

Personally, I’m hanging onto my CDs: they make an excellent backup for my computer’s music collection, and eventually I will rip them uncompressed, because my drive will simply be large enough that I won’t care. In the meantime, 192kbps VBR is my choice for “good enough.”

Disregard previous comment…

2 Things:

 - I tried this experiment twice on different days. The first day, I used my high-quality Sennheiser HD515s. On the second day I used my Apple iPod earbuds. Using the Sennheisers, I ordered the tracks almost perfectly (I swapped the 192 VBR for the 320 CBR). With the iPod earbuds, however, all of the tracks sounded roughly identical (I actually ordered the 128kbps track SECOND best). So I wonder if the respondents’ listening setups caused some of the anomalies in the results.

 - Also, I think the reason the 160kbps track scored highest could be the specific qualities of the song chosen. As others mentioned, this song contains a lot of compressed, synthesized sounds. It is possible that the compression artifacts were somewhat “masked” in the 160kbps track, or that the resulting sound was the most pleasing, while the artifacts could be clearly heard at higher bitrates, confounding the results. It would be very interesting to know the compression characteristics of the synthesized instruments used in the original.

“I’m comfortable calling this one as I originally saw it.”

Well, it is hardly conclusive when the methodology does not eliminate all confounding variables that would likely work in favour of the initial hypothesis.

Still, the whole “no one can tell the difference” meme makes some people feel superior to those fancy-schmancy engineers with their edumacation, and to those rich bastards with expensive stereos (with obvious parallels to the psychology of conspiracy theories about 9/11 and the moon landing). Yes, we are expected to believe highly trained engineers went ahead and spent time and money creating high-resolution audio while deliberately knowing (or being too stupid to realise) that it makes no difference. Please.

I suspect the average person has never heard real music and has no idea how it should sound and how much more musically involving it does sound when reproduced well.

“The first principle is that you must not fool yourself and you are the easiest person to fool.” - Richard Feynman

Your posts used to be interesting and useful for programmers, but you have clearly lost it now. Unsubscribing; sorry, Jeff, we had good times. It is still a good blog for general readers and parents now, I guess.

People are more used to 160-192kbps than to 320kbps. Give everyone 320kbps MP3s and they’ll start saying how much 192kbps sucks. Quality matters a lot, but habit matters most.

Just my .02

“Well, first off, it’s incredibly strange that the first sample – encoded at a mere 160kbps – does better on average than everything else. I think it’s got to be bias from appearing first in the list of audio samples. It’s kind of an outlier here for no good reason, so we have to almost throw it out.”

You probably know this now, but it’s fairly common in this sort of “blind taste test” not to always offer the same product first, and to “shuffle the deck” between survey takers. Even in in-person tests, there’s a bias if you always serve the Coke first and then the Pepsi, or vice versa.
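Shuffling the presentation order per respondent is straightforward to automate. Here is a minimal sketch; the sample codenames come from the test itself, but the function name and the idea of seeding by respondent ID are my own assumptions, not anything the actual survey did:

```python
import random

# The five samples from the test, identified only by codename so the
# listener cannot infer the bitrate from the label.
SAMPLES = ["Gouda", "Limburger", "Cheddar", "Brie", "Feta"]

def presentation_order(respondent_id: int) -> list[str]:
    """Return a shuffled sample order for one respondent, so no sample is
    systematically heard first and position bias averages out."""
    rng = random.Random(respondent_id)  # seeded: reproducible per respondent
    order = SAMPLES.copy()
    rng.shuffle(order)
    return order

# Each respondent gets a different, but reproducible, ordering.
print(presentation_order(1))
print(presentation_order(2))
</antml>```

Seeding the generator with the respondent ID means the same person always sees the same order (useful for repeat listens), while the orders still vary across respondents.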

It must be a coincidence that the samples were rated in the order of the length of their respective codenames; as if such benign things as a name could alter the results. It’s nonsense, right?
Or was it the order of their appearance in the article? It’s the same as the order of the results.
No flamewar intended.

I think MP3 has serious limitations beyond 192kbps. It never sounds as full as an OGG or WMA file to me.

A flawed test shows the expected results! Classic.

But you’ll never admit it, because of the pervasive confirmation bias.

Honestly don’t really care about the results, but as a scientist, I have to lambast you for your procedure and your bias. Really atrocious. Sorry.

Jesus, I can’t even pick the 128kbit recording out. Either I need a new set of headphones, or a new set of ears. I’m not ruling out a new auditory cortex, either.

I guess that means I’ll be able to save a bit of space on my iPhone.

I stopped taking seriously all that talk about audio quality, compression, digital vs. analog and so on after I read an article in the authoritative Russian magazine “Stereo” reviewing and comparing five optical TOSLINK cables. They used so many words to describe the sonic character of each cable: one was warmer, another was metallic… you know the stuff.

I understand that there are people who can notice the difference, but they are not the people who talk about it.

To the people saying “Yes, but what if I need to transcode my collection some time in the future?”: do you really believe there will come a time when portable audio players don’t play MP3 files? I really, really doubt that will ever happen.