Originally Posted by amirm
Originally Posted by arnyk
The simple answer is that unlike some people, I don't write a magazine article every time I blow my nose. ;-)
But you post on forums before and after you blow your nose.
Surely a prolific poster like you, with such a maniacal focus on double-blind tests, would have shared tests that you created and ran. In the three or so years I have debated you, you have not described any.
Amir, I provided you with an undeniable answer to your question.
And it was an embarrassing answer: it was the work of three people, not just you. I am looking for backup for the claim that you have run more double-blind tests than anyone but JJ.
Before I accede to any more of your threats and demands, underscored as they are with undertones of incredulity and apparent attacks on my credibility, please honor my humble offering by providing a piece of evidence of equal or superior stature: one that shows you have ever personally organized and participated in even one DBT, and that similarly specifies the procedures you used.
I am not attacking your credibility. I want to examine your test protocols to see whether they are consistent with how the science says we should administer them. As for me, it is a strange question, because I have documented a number of them. The first one was in post #16: http://www.avsforum.com/t/1515576/validty-of-blind-testing#post_24302549. I explained the same in our first debate thread, where I quoted Chu.
Here is another one: when I was at Microsoft we acquired Pacific Microsonics (PMI), not for the HDCD technology but for their speaker correction system (we wanted to make cheap computer speakers sound better). Being an audiophile, I wanted to determine how good HDCD was, so I asked for their reference tracks before and after conversion to HDCD (i.e., 16-bit vs. HDCD-encoded 20 bits). I was pleasantly surprised that I could easily hear the extra "air" and resolution. One of the PMI testers sat a few offices down from me, so I went over and said, "Hey, I just listened to HDCD and boy, it sounds good." To my surprise he said, "You don't really buy into that pseudoscience, do you?" I was like, what? You don't think it works? I explained that I had just tested a bunch and the improvement was real. He said, "OK, let's test you again." He gave me his headphones and asked me to turn around. He played one sample and then the other. I heard exactly the improvement I had heard before; it didn't even require concentrating. I turned confidently to him and said which one was the HDCD. The son of a you-know-what told me, "I was playing the same 16-bit CD track over and over again!" So there I was, telling him two identical files sounded different. You want to know why I believe in double blind testing? This is one of the reasons.
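The trick the PMI tester played here is what experimenters call a null (A/A) control: both presentations are the same file, so any difference the listener reports is imagined. As a minimal sketch of the idea, not anyone's actual test software, here is how one might hide such controls among real trials (file names and helper names are hypothetical):

```python
import random

def make_session(n_real: int, n_null: int, seed: int = 0):
    """Build a randomized trial list mixing real A/B trials with A/A null controls.

    Each trial is a tuple (stimulus_1, stimulus_2, is_null). In a null trial
    both stimuli are the same file, so a reported difference is a false positive.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_real):
        pair = ["16bit.wav", "hdcd.wav"]   # hypothetical file names
        rng.shuffle(pair)                   # randomize presentation order
        trials.append((pair[0], pair[1], False))
    for _ in range(n_null):
        same = rng.choice(["16bit.wav", "hdcd.wav"])
        trials.append((same, same, True))   # same file twice: null control
    rng.shuffle(trials)                     # hide the controls among real trials
    return trials

session = make_session(n_real=8, n_null=4)
print(sum(t[2] for t in session))  # → 4 null controls hidden in 12 trials
```

A listener who reliably "hears" differences on the null trials is responding to expectation, not sound, which is exactly what the story above illustrates.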
As you can see, Arny, I am very transparent with my experiences, whether they support a given point of view or not. To wit, I have documented the above on forums and then some. So I ask again: why is there such a paucity of your double-blind tests and protocols online?
I see none of the evidence that I asked for, and I do see more insults, such as the claim that the magazine article I cited was "embarrassing". It is quite obvious that my procedures for DBTs were published in Clark's landmark article about ABX, since it mentions me as a contributor. Ask him what I contributed. There is a second set of documentation of my procedures for doing DBTs on the PCABX web site, as archived in the Wayback Machine. Since you may lack the wherewithal to access it, I will post a major component of its documentation at the bottom of this reply.
I see no documentation of your claims. Therefore I have every reason to doubt the responsiveness and good faith of this reply. No deal.
Ten (10) Requirements For Sensitive and Reliable Listening Tests
(1) Program material must include critical passages that enable audible differences to be most easily heard.
(2) Listeners must be sensitized to audible differences, so that if an audible difference is generated by the equipment, the listener will notice it and have a useful reaction to it.
(3) Listeners must be trained to listen systematically so that audible problems are heard.
(4) Procedures should be "open" to detecting problems that aren't necessarily technically well understood, or even expected, at this time. A classic problem with measurements and some listening tests is that each one focuses on only one or a few problems, allowing others to escape notice.
(5) We must have confidence that the Unit Under Test (UUT) is a representative example of its kind of equipment. In other words, the UUT must not be broken, must not be appreciably modified in some secret way, and must not be the wrong make or model, among other things.
(6) A suitable listening environment must be provided. It can't be too dull, too bright, too noisy, too reverberant, or too harsh. The speakers and other components have to be sufficiently free from distortion, the room must be noise-free, etc.
(7) Listeners need to be in a good mood for listening, in good physical condition (no blocked-up ears!), and well trained to hear deficiencies in reproduced sound.
(8) Sample volume levels need to be matched to each other or else the listeners will perceive differences that are simply due to volume differences.
(9) Non-audible influences need to be controlled so that the listener reaches his conclusions by just listening.
(10) Listeners should control as many of the aspects of the listening test as possible. Self-controlled tests usually facilitate this. Most importantly, they should be able to switch among the alternatives at times of their choosing. The switchover should be as instantaneous and non-disruptive as possible.
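The requirements above govern how trials are run; the other half of a credible ABX result is scoring. As a hedged sketch, not the PCABX software or anyone's published protocol, here is the standard binomial arithmetic behind an ABX run: a listener who hears nothing still guesses half the trials right, so a result only means something when the score would be improbable by chance:

```python
import random
from math import comb

def binomial_p_value(correct: int, trials: int, p: float = 0.5) -> float:
    """One-sided probability of scoring `correct` or better by pure guessing."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

def simulate_guesser(trials: int = 16, seed: int = 0) -> int:
    """Simulate a listener who cannot hear a difference and guesses every trial."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        x_is_a = rng.random() < 0.5    # hidden random assignment of X to A or B
        guess_a = rng.random() < 0.5   # the listener's pure guess
        correct += (x_is_a == guess_a)
    return correct

# 12 of 16 correct still happens by chance roughly 3.8% of the time:
print(round(binomial_p_value(12, 16), 4))  # → 0.0384
```

This is why short ABX sessions with near-chance scores prove little either way: with only a handful of trials, even a fairly high hit rate is consistent with guessing.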