Originally Posted by jim19611961
For me, this muddies the results of blind A/B testing because we would have to know how trained the listeners are. In one test, you could have a room full of untrained listeners that would skew the results in the direction of there being no differences (or smaller ones) between A & B.
And I always say, this depends on what you are trying to test.
If you are testing some audiophile blowhard's, er, I mean, self-proclaimed discriminating listener's claim that *HE HIMSELF* hears a difference between a particular A and B, then consider him already 'trained' in that dimension (fine aural discrimination of this A and this B), and just subject him to a blind comparison of A and B (after training him in the methodology of ABX). Use his gear and his music, if possible. Use sufficient trials, avoid fatigue, and employ the right statistics to analyze the results. (Although typically these blowhards, er, golden ears, claim it's *easy* for them to hear A vs. B; they do it routinely... it shouldn't really tax them if it's true.)
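For concreteness, the 'right statistics' step above can be sketched with an exact one-sided binomial test: under the null hypothesis that the listener is guessing, each ABX trial is a coin flip (p = 0.5), and we ask how likely a score at least as good as his is by chance alone. This is a minimal stdlib-only sketch; the function name and the 12-of-16 example are my own illustration, not from any particular test protocol.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the probability of scoring at
    least `correct` out of `trials` ABX trials by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 12 correct out of 16 trials:
print(round(abx_p_value(12, 16), 4))  # ≈ 0.0384, below the usual 0.05 threshold
```

Note that with too few trials even a perfect score isn't persuasive (5 of 5 still has about a 3% guessing probability), which is why 'sufficient trials' matters as much as the test itself.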
Obviously this is not a scientific test; scientists aren't that interested in proving or disproving individual claims of prowess (though James Randi is ;> ). They prefer to get a sense of what exists in a population, and there the question is how *likely* or how *easy* it is for A and B to be told apart in general. For that, for academic publication, you want well-trained subjects, excellent controls (both positive and negative), a sufficient sampling of subjects to give the test statistical 'power', the whole nine yards. But for the sort of 'even I can do this' claims you sometimes see here on AVSF, and much more often at places like Stereophile, or the 'like a light switch' claim referred to in Harley's screed, I hold that the simple 'OK, let's just see if you can do the same thing, with one control in place' test suffices to dispose of them one by one.
Recommendations for formal (scientific) studies are laid out, for example, here