1 - 20 of 21 Posts

·
Premium Member
Joined
·
3,258 Posts
Discussion Starter · #1 ·


Jon Iverson, audiophile and web monkey for Stereophile.com, SoundandVision.com, InnerFidelity.com, AudioStream.com, and AnalogPlanet.com, talks about his interpretation of blind audio testing and what it does—and does not—reveal about the performance of audio equipment, recordings, and the participants in the test. He also discusses his own experiences listening for differences in DACs, the importance of listener training, the music-recording process, the AVS/AIX high-resolution audio experiment, answers to chat room questions, and more.


 

·
Registered
Joined
·
1,011 Posts
Thanks again Scott, a very informative show by someone who understands the practical science of the issues under discussion.
 

·
Registered
Joined
·
2,358 Posts
Thanks Scott. I agree 100% with the assessment that we are testing people's ability to discern differences.

He made a great point about statistical significance. If you have a room of 1,000 people and 2 of them ace the SBT/DBT/ABX or other discrimination test, you have a hard number: 0.2% of that population can reliably discriminate. It's not buried in the data; there is no scrum. Repeat that 1,000-person trial often enough and you can extrapolate to the wider population.
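That "hard number" idea can be made concrete with an exact binomial calculation. As a sketch (the function name and the 14-of-16 example score are mine, not from the show), here is the one-sided p-value for a single listener's forced-choice ABX run, i.e. the chance of doing at least that well by pure guessing:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value for a forced-choice ABX run: the probability
    of getting at least `correct` of `trials` right by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A listener who scores 14 of 16 is very unlikely to be guessing:
print(abx_p_value(14, 16))  # ~0.0021
```

With a p-value in hand, "aced the test" stops being a judgment call: you pick a significance threshold before the test and let the arithmetic decide.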

When comparing amps or any other gear: Manufacturers can make items purposefully sound different and then market that difference.

The other thing to consider, regarding the two people at the 1989 event: Did they have a preference? Did that preference follow the price of the amp even though they didn't know which DUT was which? Would they use the effusive term "night and day difference"?
 

·
Registered
Joined
·
1,439 Posts
Very cool. Is there a transcript available?
 

·
Registered
Joined
·
6 Posts
Where do I start?

Jon Iverson, audiophile and web monkey for Stereophile.com, SoundandVision.com, InnerFidelity.com, AudioStream.com, and AnalogPlanet.com, talks about his interpretation of blind audio testing and what it does—and does not—reveal about the performance of audio equipment, recordings, and the participants in the test. He also discusses his own experiences listening for differences in DACs, the importance of listener training, the music-recording process, the AVS/AIX high-resolution audio experiment, answers to chat room questions, and more.

https://www.youtube.com/watch?v=p6kC5vck2JA

Jon, I sense your pain, but there is no need for it. You have a fundamental misunderstanding of statistics, experimental design, and sampling theory (I used to teach these). Please don't take offence. Why study a fairly dry discipline (experimental design) if it is not needed for your day-to-day work? An audio reviewer need not understand statistical methodology or the nature of a sample before doing subjective evaluation of components or music. In subjective evaluation, everyone knows you are making statements about what YOU experience. To the extent that you have proven yourself to be a good judge of quality (lots of people agree with you), your statements have relevance to the state of the world in general - what we call external validity. Hence, there can be a demonstrable link between what one skilled observer reports and the state of the real world. That is what underlies the entire field of audiophile reviewery.


However, the present discussion goes beyond subjective observations. Here you are making statements about objective testing and experiments. You suggest that a limited sample (the people in the room at the moment) corrupts your belief in any observation that amp A is "really" different from amp B, for example. Take heart. It is possible to make such a statement and to give a formal definition of your confidence in that statement. I would suggest contacting a friendly audio engineer or someone with a background in psychophysics and having a chat. Failing that, take an online course in experimental design. It would give you everything you need to sort out this foolishness. Basically, if your sample is well constructed and if your methods are appropriate to the experiment or survey in hand, the issues you raise are non-issues. Hence, back to my point about there being no need for pain regarding the validity of blind testing. Whether it is valid or not depends upon how it is done, not upon some fundamental flaw in the concept. Of course you are testing only the people in your sample - there is no one else there to test. The trick is to make sure it is a representative sample (most simply by random sampling, but there are lots of other ways) so that it tells you something about the real world.


Rant over.
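The "formal definition of your confidence" described above is standard first-year material. As an illustrative sketch (the function name and the 2-in-1,000 numbers are assumptions for the example, echoing the scenario earlier in the thread), a Wilson score interval bounds the true population proportion from a sample result:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval (z = 1.96) for the true population
    proportion, given `successes` out of `n` in a random sample."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 2 of 1,000 sampled listeners pass: plausible range for the whole population
lo, hi = wilson_interval(2, 1000)
print(f"{lo:.4%} to {hi:.4%}")
```

The interval is exactly the formal statement the post describes: not "2 people heard it, therefore everyone can", but "given this sample, the population rate plausibly lies in this range" — and a larger, well-constructed sample tightens the range.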
 

·
Premium Member
Joined
·
12,591 Posts
I stopped listening after the conversation wandered away from blind listening. Scott asked the question: "If only one person can consistently tell the difference, does that not mean there is a difference?" Yet Mr. Iverson kept going back to his position that it was more about the testers than the equipment. I believe the answer to Scott's question has to be "yes".

Mr. Iverson's comments on DAC differences sort of sum up my position on virtually all electronics. If DAC A has very, very subtle differences over DAC B, AND it takes multiple hours to eventually hear the difference, AND the next time you listen you have to start all over again to hear it, then I have gotten to the position that the differences must be too small for me to spend time (and money) worrying about. Or, said another way: if product A sounds different from product B but I can't easily hear the difference, there is no difference.

This session notwithstanding, I will continue to use blind testing as I evaluate electronic products. I am not trying to demonstrate that there are or are not differences - only if I can or can not hear them.
 

·
Registered
Joined
·
26 Posts
However, the present discussion goes beyond subjective observations. Here you are making statements about objective testing and experiments. You suggest that a limited sample (the people in the room at the moment) corrupts your belief in any observation that amp A is "really" different from amp B, for example. Take heart. It is possible to make such a statement and to give a formal definition of your confidence in that statement. I would suggest contacting a friendly audio engineer or someone with a background in psychophysics and having a chat. Failing that, take an online course in experimental design. It would give you everything you need to sort out this foolishness. Basically, if your sample is well constructed and if your methods are appropriate to the experiment or survey in hand, the issues you raise are non-issues. Hence, back to my point about there being no need for pain regarding the validity of blind testing. Whether it is valid or not depends upon how it is done, not upon some fundamental flaw in the concept. Of course you are testing only the people in your sample - there is no one else there to test. The trick is to make sure it is a representative sample (most simply by random sampling, but there are lots of other ways) so that it tells you something about the real world.


Rant over.
I guess I'm missing something here. You're saying that you can take the subjectivity out of the test, by choosing the right people?
 

·
Registered
Joined
·
6 Posts
I guess I'm missing something here. You're saying that you can take the subjectivity out of the test, by choosing the right people?

Actually, that is close to correct if you substitute the term "error" for the term "subjectivity". We can minimize the chance that our test gives us an erroneous result (fails to detect a real effect or detects a false effect), even if we are recording subjective data.


My point is that there are methods for doing all this. There is nothing remotely mysterious about it - it is first year undergraduate material in psychology, sociology, many aspects of biology, marketing, political polling etc.


Yes, there is a whole library of methods that allow you to ask whether people can hear a difference between components - and even how big the difference has to be before it can be heard. Sadly, doing quality tests is much harder than doing informal ones. So much easier to use measurements (which we use to "operationally define" the good) or reviewers or our own listening. In fact, a combination of these works pretty well because people keep buying new gear and liking what they buy. Clearly, these people hear a difference.


So it does come down to picking the right people. In making inferences about a population, we need to use formal methods to define the population (people in general, audiophiles, experienced audiophiles, whatever), select the samples, and present the test. Then the uncertainties that Jon raises are minimized.
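One concrete formal method of this kind is a power calculation: deciding, before anyone listens, how many trials are needed so that genuine sensitivity is unlikely to be mistaken for guessing. A minimal exact-binomial sketch (the function name and the default alpha/power values are my own illustrative choices, not from the thread):

```python
from math import comb

def trials_needed(p_true: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Smallest number of ABX trials such that a listener who answers
    correctly with probability `p_true` passes a one-sided binomial
    test at level `alpha` with probability at least `power`."""
    for n in range(1, 500):
        # Critical score: smallest k where P(X >= k | guessing) <= alpha
        k = next((k for k in range(n + 1)
                  if sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n <= alpha),
                 None)
        if k is None:
            continue
        # Power: chance a genuine listener reaches that critical score
        pw = sum(comb(n, j) * p_true ** j * (1 - p_true) ** (n - j)
                 for j in range(k, n + 1))
        if pw >= power:
            return n
    raise ValueError("no n found up to 500")

# A near-perfect listener needs only a handful of trials;
# a subtle 60%-right listener needs far more.
print(trials_needed(0.9), trials_needed(0.6))
```

This is why "how big the difference has to be before it can be heard" matters: the subtler the real effect, the more trials a quality test needs, which is exactly why doing quality tests is harder than doing informal ones.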


Or not. We can just realize this is a hobby and have fun with it. Maybe you believe you hear a difference between two brands of speaker cable. Believing that makes you happy. Maybe you could demonstrate the cable sensitivity on a proper blind test or maybe you couldn't. For example, I can't hear cables though I hear amps and preamps pretty reliably on informal tests. So, I use low cost cables and great amps/preamps. Works for me, and I agree with Jon when he implies that the differences that are real are the ones that matter to you. Just don't bash formal testing methods (such as blind testing) unless you understand the theory behind them.
 

·
Registered
Joined
·
87 Posts
I thoroughly enjoyed another TWiT Home Theater Geeks netcast. Jon Iverson is special, and I agree with most of what he stated here. I've been involved with a number of A/B/X tests. The group certainly consisted of "Golden Ears" and pros from the audio recording industry, and we pretty much came up with the same conclusions Jon articulated so well here. A/B/X does NOT tell you which item is better than the other; it simply tells you whether there is a perceptible difference between the two items under test. We did the classic 16 ga zip cord vs. $10k speaker wire A/B/X testing, and under normal conditions it was difficult to tell the difference. I also agree with Jon's observation that discrimination is a learned process that has to be relearned after time goes by.
 

·
Registered
Joined
·
859 Posts
I thoroughly enjoyed another TWiT Home Theater Geeks netcast. Jon Iverson is special, and I agree with most of what he stated here. I've been involved with a number of A/B/X tests. The group certainly consisted of "Golden Ears" and pros from the audio recording industry, and we pretty much came up with the same conclusions Jon articulated so well here. A/B/X does NOT tell you which item is better than the other; it simply tells you whether there is a perceptible difference between the two items under test. We did the classic 16 ga zip cord vs. $10k speaker wire A/B/X testing, and under normal conditions it was difficult to tell the difference. I also agree with Jon's observation that discrimination is a learned process that has to be relearned after time goes by.
Jon's most important point, in my opinion, is that blind testing is primarily to test the person, not the product.

There's a bit of a disconnect occurring when audio buffs assert that blind/double-blind/AB-X testing is the only way to properly assess differences in audio products. The subtext is that they fundamentally don't believe there is a discernible difference between a sampling of amps of reasonable quality, or a sampling of audio cables, etc. The desire for such testing is more of an assertion that since there is presumably no difference between such products, then reviewers who lavish flowery praise on certain (expensive) products must be willingly or unwittingly deceiving themselves and their reader base. Such testing in this case is really more about the desire to show the reviewer is either a liar or a fool.

Whether or not this happens is another matter entirely, but I'm glad Jon at least recognizes the subterfuge. I personally tire of hearing reviewers go into ecstatic prose over exorbitantly expensive products, especially cabling. But I'm not so callous as to claim that there is no discernible difference; I just couldn't care less, because it has no real impact on me or what my budget allows.
 

·
Banned
Joined
·
16,643 Posts
Jon's most important point, in my opinion, is that blind testing is primarily to test the person, not the product.

There's a bit of a disconnect occurring when audio buffs assert that blind/double-blind/AB-X testing is the only way to properly assess differences in audio products. The subtext is that they fundamentally don't believe there is a discernible difference between a sampling of amps of reasonable quality, or a sampling of audio cables, etc. The desire for such testing is more of an assertion that since there is presumably no difference between such products, then reviewers who lavish flowery praise on certain (expensive) products must be willingly or unwittingly deceiving themselves and their reader base. Such testing in this case is really more about the desire to show the reviewer is either a liar or a fool.

Whether or not this happens is another matter entirely, but I'm glad Jon at least recognizes the subterfuge. I personally tire of hearing reviewers go into ecstatic prose over exorbitantly expensive products, especially cabling. But I'm not so callous as to claim that there is no discernible difference; I just couldn't care less, because it has no real impact on me or what my budget allows.
I really like your reply; it sounds/resonates right.
 

·
Banned
Joined
·
10,026 Posts
This reminds me of something. An audio salesman once confided in me that if he really wanted to sell one amp over another model, he'd simply raise its volume ever so slightly. He said this worked "every time".
 