
Are Blind Audio Comparisons Worthwhile?

Poll Results: Are Blind Audio Comparisons Worthwhile?

 
  • 86% (118)
    Yes, blind audio comparisons are worthwhile
  • 13% (19)
    No, blind audio comparisons are not worthwhile
137 Total Votes  
post #1 of 119
Thread Starter 

Steve Guttenberg, author of CNET's Audiophiliac blog, maintains that blind comparisons of audio products are meaningless. He might be right; as you can see in the graph below, listening tests conducted by Floyd Toole and Sean Olive reveal that blind comparisons of four speakers produced much more equal preference ratings than the same comparisons in which the listeners knew what they were listening to. Guttenberg also argues that the tester's ears are psychophysiologically biased by the sound of one product while listening to the next, and that the conditions under which the test is conducted are rarely the same as those in any given consumer's room, so the results mean nothing in terms of deciding what to buy.

[Graph: Toole/Olive preference ratings for four speakers, sighted vs. blind listening]

On the other hand, many audiophiles believe that blind listening is the only way to remove expectation bias and honestly evaluate the performance of audio products. The differences might be more subtle than expected, but they are still evident. Take another look at the graph above—even though the range of preference ratings is much narrower with blind listening, the ratings follow a similar pattern except for the last data point.

 

What do you think? Are blind comparisons of audio products worthwhile? On what do you base your position?

 


post #2 of 119
Seems to me that the Olive/Toole listening test results as described make an argument for blind testing rather than against it.

But I think I agree that variable conditions do tend to render the point moot. At least for speakers.
post #3 of 119
So, you really need a blind test in your own room. I would still say blind testing is the best way to prioritize what you want to take home and try out.
post #4 of 119
Every single time I see something like this, I laugh. Maybe once we get some decent sample sizes that are not picked via convenience, then we can have something to consider. Until that happens, I get the impression that such articles/threads are nothing more than click bait.
post #5 of 119
Thread Starter 
Quote:
Originally Posted by FilmReverie View Post

Every single time I see something like this, I laugh. Maybe once we get some decent sample sizes that are not picked via convenience, then we can have something to consider. Until that happens, I get the impression that such articles/threads are nothing more than click bait.


Far from it; I believe this to be a valid debate, and I'm genuinely interested in seeing what the AVS community has to say about it.

post #6 of 119
I've voted No.

From a statistical perspective, it could be worthwhile if the conditions are the same and the sample size is adequate to achieve at least a 95% confidence level.

From a practical perspective: No - it's not worthwhile.
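
To put a rough number on that statistical point, here is a minimal Python sketch (scipy assumed; the forced-choice ABX format is my own illustration, not something from the post above) showing how many correct answers out of a given number of trials are needed before pure guessing can be rejected at the 95% level.

```python
# Minimal sketch: how many correct ABX answers out of n trials are needed
# before "guessing" (p = 0.5) can be rejected at the 95% confidence level.
# Assumes a simple forced-choice ABX format; not taken from the post above.
from scipy.stats import binom

def required_correct(n_trials, alpha=0.05):
    """Smallest number of correct answers whose chance probability is below alpha."""
    for k in range(n_trials + 1):
        p_value = binom.sf(k - 1, n_trials, 0.5)  # P(X >= k) under pure guessing
        if p_value <= alpha:
            return k, p_value
    return None, None

for n in (10, 16, 20, 30):
    k, p = required_correct(n)
    print(f"{n} trials: need {k} correct (p = {p:.3f} under guessing)")
```

With only 10 trials, for example, you need 9 correct before chance is ruled out at that level, which is one reason a few quick switches in a demo room rarely settle anything.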
post #7 of 119

What I want to see are the results of a panel of actual blind people who go through the same listening tests. I wonder whether the fact that blind people become more attuned to what they hear would affect the results.

post #8 of 119
@imagic: It would likely have an impact.

To really focus on the audio, you have to turn off all other senses - not just the visual.

I always close my eyes if I want to listen critically, but it is not necessary for me to evaluate whether I like what I'm hearing or not. It's about focus - "remove" all other distractions and senses. Blind Testing only partly achieves this, and therefore is rarely worthwhile for audio comparisons. There are too many other factors involved in audio comparisons.
post #9 of 119
Why do we believe quality scientific research in all aspects of our lives... no, make that depend on quality scientific research, yet in audio so many have turned it into a religious debate instead? Yes, well-designed testing has a place, but unfortunately it diminishes the mysticism as it goes. Obviously, for those in the church it will be attacked, branded as heresy, and dismissed. The one with the expensive cables vs. lamp cord was my favorite.

Art
post #10 of 119
I don't think it's any more worthwhile than using test patterns to evaluate real-world display content... useful to a certain extent, but no one sits down with a bowl of popcorn to watch a marathon of test patterns (or maybe they do?). Either way, there are too many other physical variables that affect sound much more than sight does.
post #11 of 119
Quote:
Originally Posted by pottscb View Post

...too many other physical variables that affect sound much more than sight.
It's not about sight, per se, but about cognitive biases.
post #12 of 119
Quote:
Originally Posted by Scott Wilkinson View Post


Far from it; I believe this to be a valid debate, and I'm genuinely interested in seeing what the AVS community has to say about it.

Here we go again with yet another ABX thread. You won't find anything here that hasn't already been argued a million times.

I agree with the click-bait comment.
post #13 of 119
I think it brings to light just how much audiophile nonsense is out there. When you blind-test a component, you can judge it strictly by the way it sounds and not be biased by looks or brand name.
post #14 of 119
I voted yes, but the value varies with each individual.

If you're interested in seeing a funny take on cognitive bias, watch the water episode of "Penn & Teller: Bulls#!t".
post #15 of 119
IMO blind testing would be relevant but I'm still not buying any ugly audio biggrin.gif
post #16 of 119
ABX is the only way to avoid bias, but it might take prolonged tests at home to really notice differences - i.e., not just a few songs at a demo.
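
For anyone who hasn't run one at home, the bookkeeping of an ABX session is simple; here is a minimal sketch, assuming two already level-matched sources and a hypothetical playback step (only the hidden randomization and scoring are shown).

```python
# Minimal ABX bookkeeping sketch. Playback is left as a placeholder;
# real tests also require careful level matching and seamless switching.
import random

def abx_session(n_trials=16):
    correct = 0
    for trial in range(1, n_trials + 1):
        x_is_a = random.choice([True, False])   # hidden assignment of X
        # play("A"); play("B"); play("X")       # hypothetical playback step
        answer = input(f"Trial {trial}: is X the same as A or B? [a/b] ").strip().lower()
        if (answer == "a") == x_is_a:
            correct += 1
    print(f"{correct}/{n_trials} correct")
    return correct

if __name__ == "__main__":
    abx_session()
```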
post #17 of 119
In my opinion, if you want to scientifically know exactly which product is the best when comparing them, the best way would be to use a microphone to record the sound in the final listening environment and compare it to the original source. The problem is there are so many variables that can be tweaked with each product such as room correction, speaker placement, volume and accuracy at different frequencies, etc. To combat that, every product being compared should be tweaked until it is as close to the original as possible in most situations. Then after every product is set up the best it can be, see how far off each is from the original source, and choose the one that is closest in most cases.

With all of that being said, humans prefer whatever they prefer for whatever reason. For example, punk kids prefer bass that rattles the world when they drive around in their cars, even though it sounds nothing like the original. So, if you are someone who wants perfection, do something like what I mentioned above. If you want something that sounds good to you, then just compare everything in your listening environment and tweak it to your tastes and see which one sounds best to you.
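
As a very rough sketch of the measure-the-result-in-the-room idea above: one way to score how far a mic capture deviates from the source track is to compare averaged magnitude spectra. The RMS-in-dB metric below is my own simplification, and it assumes both signals are mono arrays at the same sample rate and already time-aligned; a real comparison would need far more care.

```python
# Rough sketch: score a mic capture of the playback against the source track.
# Assumes mono float arrays, identical sample rates, and prior time alignment.
import numpy as np

def spectrum_deviation(source, capture, sr, n_fft=8192):
    def avg_mag(x):
        # average magnitude spectrum over non-overlapping windowed frames
        frames = [x[i:i + n_fft] for i in range(0, len(x) - n_fft, n_fft)]
        return np.mean([np.abs(np.fft.rfft(f * np.hanning(n_fft))) for f in frames], axis=0)

    s, c = avg_mag(source), avg_mag(capture)
    s_db = 20 * np.log10(s / s.max() + 1e-12)   # normalize level, compare in dB
    c_db = 20 * np.log10(c / c.max() + 1e-12)
    freqs = np.fft.rfftfreq(n_fft, 1 / sr)
    band = (freqs > 40) & (freqs < 16000)        # ignore the extremes
    return np.sqrt(np.mean((s_db[band] - c_db[band]) ** 2))  # RMS deviation in dB

# Lower score = the capture's tonal balance is closer to the source.
```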
post #18 of 119
"Are Blind Audio Comparisons Worthwhile? "

One may as well ask "Is The Scientific Method Worthwhile?"

The point of blind testing is to try to be more responsible (epistemologically speaking) in coming to a conclusion, in the same way you try for in science.

I agree with Art. It's of course possible that certain blind tests have not been ideally designed. But that sure as hell doesn't ratify the even more preposterously loose conditions under which so many audiophiles make their judgments, conditions that invite pretty much every known type of bias into the evaluation. The reason you do blind testing has to do with very well known, well studied problems in human bias. You don't do better by "ignoring" these problems and appealing to conclusions derived under weaker controls.

I was involved in high-end audio, feverishly, for many years, and the amount of mysticism and appeal to subjectivity in otherwise tech-headed people was just astonishing. It really was like mixing religion with science. (And it was starting to do my own blind testing that helped disabuse me of some of my own biased conclusions. E.g., "obvious" sonic differences between super-high-end AC cables and a $15 AC cable completely disappeared when I didn't know which cable I was listening to.)
post #19 of 119
I would love to participate in a double-blind test of various types of audio equipment to listen for any differences. As a longtime audiophile, I agree that our hobby is filled with so much BS and near-religious fervor about various topics. I just listened to a very expensive system at a demo where the owner hated solid state and digital, yet the sound was so bright and shrill, the exact opposite of the sound qualities he claims to prefer. Yet the system sounded great to him. It would be great to participate in a test where as much expectation bias as possible is removed, as there is so much of it in this hobby of ours. If anyone in the Philly area would like to put one together, count me in.
post #20 of 119
What something looks like can clearly influence how we perceive its sound. I don't mind using this to my advantage to some degree. I've always loved the look of a speaker with a beautiful wood finish; it sends the message of a certain "warmth" or "organic" feeling. And it can influence how I feel about listening to the speaker, and its sound. I love the look of my tube amps, and that too can play into how I react to, and perceive, the sound of my system. There's nothing wrong per se with availing ourselves of the real-world effects, even pleasures, of how our perception can be biased.

It's when objective claims are made about "real" sonic differences that are produced by a product, based on dubious technical claims, that you want to be more careful in testing, if you want a more accurate, "responsible" conclusion about the claims.

I can tell someone how I feel about the sound and experience when I fire up my tube-based system and spin some vinyl. But if I'm going to start making more objective claims, that the sound is really being altered in the way I subjectively perceive it to be, or making claims of some technical advantage to the equipment in this respect, then I owe it to myself and others to back up those claims in a much more careful manner of investigation, blind testing being one possible tool.

(On a similar note: not that I am able to magically remove my bias, but nonetheless I also find it amusing the way the visual presentation of a system, and its technical claims, can play into my perception. Sometimes I play with this a bit. I've been in front of systems that are very expensive and supposed to be super transparent sounding or whatever, and thought while looking at the system, "Yeah, I guess I can see what they mean." But then I close my eyes, try to "forget" the claims and looks of the system, and just concentrate on what it really sounds like. Not a few times it's been sort of shocking to realize just how unimpressive the sound is on the whole, with big suck-outs in the lower midrange or a lack of coherence, and I realize it's putting out a sound that is actually more reminiscent of a cheap Bose satellite system than of, say, the tall floor-standing speakers they actually are.)
post #21 of 119
"...nonetheless I also find it amusing the way the visual presentation of a system, and it's technical claims, can play into my perception."

I'll tell you exactly where double-blind testing is valuable: in a manufacturer's demo room.

A loudspeaker company I worked for would often have non-blind comparisons of newly designed products. These tests were often conducted by a popular, ambitious engineer to "convince" the sales department that his latest-and-greatest project would best the competitors' products. Needless to say, the sales guys' "True Believer" psychosis (a necessary state of mind for any effective salesperson) would nearly always return a positive verdict.

For myself, whenever I participated in new product evaluation, I would always deploy the most competitive product I could find as the "B" sample, and would go to great lengths to disguise the appearance and even the location of the sources. (We had an ABX box too for switching.) I would seek not only industry types but office and factory personnel to participate and offer opinions.

The results from these tests proved more valuable in developing successful products than the somewhat loaded "beauty contest" method. Those results also allowed me to, over time, come to appreciate how my hearing and sensory acuities differed from the "norm" to such an extent that we could usually hit the design target on the first couple of attempts.
post #22 of 119
What is the objective? If you're simply trying to differentiate A versus B then you had better be dead certain that the target room is where you're going to deposit one set of speakers versus another, and that you've optimized each set for the correct listening position in all the things that will affect the overall sound (and there are many). On the other hand, if you want to add in a 3rd variable, musical realism, then you'd better bring in a non-speaker setup (real musicians). But here you run into some real world problems. Some speakers can sound great on small combo jazz, for instance, or maybe a string quartet, but fail miserably with large orchestras or bombastic rock. And I suppose we shouldn't always let reality get in the way of personal taste or enjoyment. Some might want a set of speakers that simply sound the way they want, rather than the way some middling majority of people think it should sound. And with every person having a different background, and perspective, on music, we may all have a valid set of taste/ear buds (same as preferences for spicy food versus the discriminating palate that avoids the hot stuff).
post #23 of 119
Click-bait thread. Nothing more. rolleyes.gif
post #24 of 119
Blind tests are less profoundly useless than sighted ones, but even with an ABX test, there's plenty to lead one astray. If I'm really used to an overly bassy, muddy system, I might think a more balanced reproduction sounds thin and tinny. Personally, I'm interested in objective performance parameters, and human senses cannot evaluate those, period.
post #25 of 119
yes, definitely! if blind testing shows no clear winner, then there's no clear winner. it shouldn't be a surprise either; ppl prefer different things, and that's ok. if you want to find out which speakers are 'most accurate' then there shouldn't be any listening necessary, only precise lab measurements, but frankly most ppl don't need or want 'accurate', and when you talk about brand bias that becomes obvious.

when 90% of the cost to build speakers is spent on R&D, and most designs have been practically unchanged for decades, it shouldn't be a surprise that value-priced speakers can still sound great.
post #26 of 119
"Click Bait"? Isn't that the raison d'etre of AVS Forums?

Quote:
Originally Posted by sjschaff View Post

What is the objective? If you're simply trying to differentiate A versus B then you had better be dead certain that the target room is where you're going to deposit one set of speakers versus another, and that you've optimized each set for the correct listening position in all the things that will affect the overall sound (and there are many).
Professional listening rooms usually are treated to have a fairly uniform (i.e., somewhat dead) acoustic above, say, 150 Hz. Bass reproduction is another thing, but these days that means subwoofers (or "woofers," as they used to be known before being sold as a separate component). So, most systems being evaluated use common subwoofers to eliminate bass eigentones as a variable.
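
As a quick aside on the eigentone point: the lowest axial room modes follow directly from the room dimensions via f = n*c/(2L), which is why bass has to be handled separately from the treated midrange and treble. A minimal sketch with illustrative dimensions (not taken from any room described in this thread):

```python
# Minimal sketch: axial room modes (eigentones) below 150 Hz for a
# rectangular room, using f = n * c / (2 * L). Dimensions are illustrative.
C = 343.0  # speed of sound in air, m/s

def axial_modes(length_m, limit_hz=150):
    modes, n = [], 1
    while (f := n * C / (2 * length_m)) <= limit_hz:
        modes.append(round(f, 1))
        n += 1
    return modes

for name, dim in (("length", 6.0), ("width", 4.5), ("height", 2.7)):
    print(f"{name} {dim} m: {axial_modes(dim)} Hz")
```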
Quote:
On the other hand, if you want to add in a 3rd variable, musical realism, then you'd better bring in a non-speaker setup (real musicians).
Relying on musicians can be tricky. A friend of mine is an accomplished pianist on the chamber music circuit and invited me to a get-together at her group's house. We all gathered after dinner to listen to a recent recording of a Dvorak quartet one of them had just made. Everyone was listening intently to the piece, but acoustically it sounded quite strange to me. Afterwards I checked out the speaker in question, a well-regarded small two-way system, and determined that the tweeter was blown out or disconnected. These were string players, fer crissakes! I mentioned it to the owner/violist as casually as I could and, although surprised, he shrugged, "It's not important -- we're concentrating on the bowing and playing."
Quote:
But here you run into some real world problems. Some speakers can sound great on small combo jazz, for instance, or maybe a string quartet, but fail miserably with large orchestras or bombastic rock.
Shortcomings with "larger" music like rock usually come down to inefficient and/or underpowered speakers (or, with rock music, not enough subwoofers).
Quote:
And I suppose we shouldn't always let reality get in the way of personal taste or enjoyment. Some might want a set of speakers that simply sound the way they want, rather than the way some middling majority of people think it should sound. And with every person having a different background, and perspective, on music, we may all have a valid set of taste/ear buds (same as preferences for spicy food versus the discriminating palate that avoids the hot stuff).
Of course that's true -- we all have our own tastes. And High-End Audio relies on appealing to a small niche of customers that perceive themselves to be more discriminating than the "middling majority" (who, by the way, are pretty good at noticing blown-out tweeters). But if you are a manufacturer, do you want a customer base of the two Absolute Sound guys or the other ninety-eight who walk in the door at Best Buy? And, if so, do you really want to get into the rather snarky infighting among the Absolute Sound sponsors for those two guys?
post #27 of 119
I 100% agree on click bait. It's been discussed to DEATH on all of the forums. But..... it did get me to click tongue.gif
post #28 of 119
One way to do the test would be to do it over several different days, because sometimes what seems to sound "best" can change from day to day for whatever reason, especially if the sounds being compared are very close to one another.
post #29 of 119
Quote:
Originally Posted by N8DOGG View Post

I 100% agree on click bait. It's been discussed to DEATH on all of the forums. But..... it did get me to click tongue.gif

Reasons ABX is pointless: the human element.

1. A person needs to get used to the acoustic environment.
2. Barometric pressure affects how sound is perceived.
3. Familiarity of the recorded materials.
4. Every person perceives things differently and has different hearing acuity.
5. In the end it's still a subjective comparison.
post #30 of 119
Quote:
Originally Posted by David Susilo View Post

Reasons ABX is pointless: the human element.

1. A person needs to get used to the acoustic environment.
Only if it's a poor environment. I still haven't gotten used to the toilet-bowl acoustics of Disney Hall L.A.

If all else fails, go outdoors. It's pretty uniform everywhere.
Quote:
2. Barometric pressure affects how sound is perceived.
It affects how sound is propagated. Perceived? Well....that's a stretch.
Quote:
3. Familiarity of the recorded materials.
It's better with unfamiliar material -- eliminates the sentimental attachment, if a somewhat objective conclusion is your goal.
Quote:
4. Every person perceives things differently and has different hearing acuity.
Of course. So what?
Quote:
5. In the end it's still a subjective comparison.
If you're not staring at an instrument, be it SPL meter or speedometer, yes -- everything in life is subjective.