View Poll Results: Are Blind Audio Comparisons Worthwhile?
Yes, blind audio comparisons are worthwhile: 121 (86.43%)
No, blind audio comparisons are not worthwhile: 19 (13.57%)
Voters: 140. You may not vote on this poll
Steve Guttenberg, author of CNET's Audiophiliac blog, maintains that blind comparisons of audio products are meaningless. He might be right—as you can see in the graph below, listening tests conducted by Floyd Toole and Sean Olive reveal that blind comparisons of four speakers resulted in much more equal preference ratings than the same comparisons in which the listeners knew what they were listening to. Guttenberg also argues that the tester's ears are psychophysiologically biased by the sound of one product while listening to the next product. And the conditions under which the test is conducted are rarely the same as those in any given consumer's room, so the results mean nothing in terms of deciding what to buy.
On the other hand, many audiophiles believe that blind listening is the only way to remove expectation bias and honestly evaluate the performance of audio products. The differences might be more subtle than expected, but they are still evident. Take another look at the graph above—even though the range of preference ratings is much narrower with blind listening, the ratings follow a similar pattern except for the last data point.
What do you think? Are blind comparisons of audio products worthwhile? On what do you base your position?
But I think I agree that variable conditions do tend to render the point moot. At least for speakers.
Every single time I see something like this I laugh. Maybe once we get some decent sample sizes that are not picked via convenience, then we'll have something to consider. Until that happens, I get the impression that such articles/threads are nothing more than clickbait.
Far from it; I believe this to be a valid debate, and I'm genuinely interested in seeing what the AVS community has to say about it.
From a statistical perspective it could be worthwhile if the conditions are the same and the sample size is adequate to achieve at least a 95% confidence level.
From a practical perspective: No - it's not worthwhile.
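On the statistical point above: for an individual forced-choice ABX run (as opposed to a panel preference test), the usual yardstick is a one-sided binomial test against chance guessing. The sketch below is illustrative only, with made-up trial counts chosen by me, not figures from this thread; it shows why a short run can't reach the 95% level even with a decent hit rate.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial test: probability of getting at least
    `correct` answers out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct: p ≈ 0.038, below 0.05, so guessing is rejected
print(abx_p_value(12, 16))

# 7 of 10 correct: p ≈ 0.172, not significant despite a 70% hit rate
print(abx_p_value(7, 10))
```

This is also why "I picked the right one most of the time" over a handful of trials doesn't settle anything: with only 10 trials, even 8 correct barely clears the 5% threshold.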
What I want to see is the results of a panel of actual blind people, who go through the same listening tests. I wonder if the fact that the blind become more attuned to what they hear would impact the results.
To really focus on the audio, you have to turn off all other senses - not just the visual.
I always close my eyes if I want to listen critically, but it is not necessary for me to evaluate whether I like what I'm hearing or not. It's about focus—"remove" all other distractions and senses. Blind testing only partly achieves this, and is therefore rarely worthwhile for audio comparisons. There are too many other factors involved in audio comparisons.
Here we go again with yet another ABX thread. You won't find anything different here that hasn't been argued about a million times.
I agree with the clickbait comment.
With all of that being said, humans prefer whatever they prefer for whatever reason. For example, punk kids prefer bass that rattles the world when they drive around in their cars, even though it sounds nothing like the original. So, if you are someone who wants perfection, do something like what I mentioned above. If you want something that sounds good to you, then just compare everything in your listening environment and tweak it to your tastes and see which one sounds best to you.
One may as well ask "Is The Scientific Method Worthwhile?"
The point of blind testing is to try to be more responsible (epistemologically speaking) in coming to a conclusion, in the same way you try for in science.
I agree with Art. It's of course possible that certain blind tests have not been ideally designed. But that sure as hell doesn't ratify the even more preposterously loose conditions under which so many audiophiles make their judgements—conditions primed to admit pretty much every known type of bias into the evaluation. The reason you do blind testing has to do with very well known, well studied problems in human biasing. You don't do better by "ignoring" these problems and appealing to conclusions derived under weaker controls.
I was involved in high end audio, feverishly for many years, and the amount of mysticism and appeal to subjectivity in otherwise tech-headed people was just astonishing. It really was like mixing religion with science. (And it was starting to do my own blind testing that helped disabuse me of some of my own biased conclusions. E.g. "obvious" sonic differences between super high end AC cables vs a $15 AC cable that completely disappeared when I didn't know which cable I was listening to).
Louder is NOT better!
It's when objective claims are made about "real" sonic differences that are produced by a product, based on dubious technical claims, that you want to be more careful in testing, if you want a more accurate, "responsible" conclusion about the claims.
I can tell someone how I feel about the sound and experience when I fire up my tube-based system and spin some vinyl. But if I'm going to start making more objective claims, that the sound is really being altered in the way I subjectively perceive it to be, or making claims of some technical advantage to the equipment in this respect, then I owe it to myself and others to back up those claims in a much more careful manner of investigation, blind testing being one possible tool.
(On a similar note: Not that I am able to magically remove my bias, but nonetheless I also find it amusing the way the visual presentation of a system, and its technical claims, can play into my perception. Sometimes I play with this a bit. I've been in front of systems that are very expensive, supposed to be super transparent sounding or whatever, and thought while looking at the system, "Yeah, I guess I can see what they mean." But then I close my eyes and try to "forget" the claims and looks of the system and just concentrate on what it really sounds like. Not a few times it's been sort of shocking to realize just how unimpressive the sound is on the whole—big suck-outs in the lower midrange, lack of coherence—and I realize it's putting out a sound that is actually more reminiscent of a cheap Bose satellite system than, say, the tall floor-standing speakers they actually are.)
I'll tell you exactly where double-blind testing is valuable: in a manufacturer's demo room.
A loudspeaker company I worked for would often have non-blind comparisons of newly designed products. These tests were often conducted by a popular, ambitious engineer to "convince" the sales department that his latest-and-greatest project would best the competitors' products. Needless to say, the sales guys' "True Believer" psychosis (a necessary state of mind for any effective salesperson) would nearly always return a positive verdict.
For myself, whenever I participated in new product evaluation, I would always deploy the most competitive product I could find as the "B" sample, and would go to great lengths to disguise the appearance and even the location of the sources. (We had an ABX box too for switching.) I would seek not only industry types but office and factory personnel to participate and offer opinions.
The results from these tests proved more valuable in developing successful products than the somewhat loaded "beauty contest" method. Those results also allowed me to, over time, come to appreciate how my hearing and sensory acuities differed from the "norm" to such an extent that we could usually hit the design target on the first couple of attempts.
When 90% of the cost to build speakers is spent on R&D, and most designs have been practically unchanged for decades, it shouldn't be a surprise that value-priced speakers can still sound great.
What is the objective? If you're simply trying to differentiate A versus B then you had better be dead certain that the target room is where you're going to deposit one set of speakers versus another, and that you've optimized each set for the correct listening position in all the things that will affect the overall sound (and there are many).
Reasons ABX is pointless: the human element.
1. A person needs to get used to the acoustic environment.
2. Barometric pressure affects how sound is perceived.
3. Familiarity with the recorded material varies.
4. Every person perceives things differently and has different hearing acuity.
5. In the end it's still a subjective comparison.
If all else fails, go outdoors. It's pretty uniform everywhere.