Originally Posted by terry j
very quickly…can we draw a line between 'normal audiophile preference testing' and research? Where I am going is that they are two different activities are they not? We can assume there is a completely different mindset at least.
It is more than mindset. In the type of blind testing talked about in forums all the time, the person is always challenged to identify something. The goal, and there is only one, is to determine whether he can perform that identification. To the extent that we tell the listener which is which, the whole experiment loses its value: "X" will no longer be "X" in ABX; it will be "A" or "B," since the listener knows its identity. So even a little bit of information biases the experiment completely. Although not quite as bad, we also don't want to test speakers by showing people the units under test and then asking, "which one sounds better?" By revealing the size, brand, etc. of the speaker, we may have handed them the wrong answer.
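To make the point concrete, here is a minimal sketch in Python of an ABX session, with a hypothetical `listener` callback standing in for the human subject (the function and its parameters are my own illustration, not any standard test software). The key is that X is randomly re-assigned every trial and its identity never leaves the loop:

```python
import random
from math import comb

def run_abx(stim_a, stim_b, listener, n_trials=16, seed=None):
    """Minimal ABX session. Each trial, X is secretly assigned to A or B at
    random; `listener(a, b, x)` stands in for the human subject and must
    answer "A" or "B". Only the count of correct identifications comes out."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        x_is_a = rng.random() < 0.5          # identity of X is hidden per trial
        x = stim_a if x_is_a else stim_b
        if listener(stim_a, stim_b, x) == ("A" if x_is_a else "B"):
            correct += 1
    # Probability of scoring this well or better by pure guessing:
    p_value = sum(comb(n_trials, k) for k in range(correct, n_trials + 1)) / 2 ** n_trials
    return correct, p_value
```

If the subject were told X's identity in advance, he could score 16/16 without hearing a thing, which is exactly why leaking that information destroys the experiment.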
Now look back at the Ando test, where the listener is given one clip, then another with a reflection added, and asked which sounds better. The fact that he knows he is evaluating two sounds does not hand him the answer to the question we are asking.
Can the wrong vote still be cast? Yes. For that reason, we don't rely on one person or one test. We look to objective measurements and see if we can correlate the results. We look to how we hear and see if we can correlate the results. We look to other tests done in similar areas and see if we can correlate the results. If they all point in the direction of the same conclusions, then we run with them. Such is the research I have been putting forward, and how I became convinced of its validity. It was never a matter of a single DBT from Harman.
Here is an example from the Journal of ASA, "The Active Listening Room: A Novel Approach to Early Reflection Manipulation in Critical Listening Rooms." There, Naqvi and Rumsey actually performed double-blind ABX tests: "The listening test was based on the A/B/X methodology, which is summarized in .... In this experiment X was assigned randomly either to have artificial reflections or not, whereas A and B were the “with reflections” or “without reflections” comparisons, assigned randomly."
Here is the final statement in the conclusion of the paper: "The findings of the pilot experiment were found to be in close agreement with the findings of Olive and Toole’s and Bech’s experiments in terms of the threshold of image shifts in IEC standard listening rooms."
So while the Toole and Olive tests were not "blind," their results were confirmed just the same, some 20 years later, using ABX tests. Image shift, by the way, is one of the things we like about reflections: it broadens the point-source image of the speaker.
It is important to remember that the bulk of the research we are talking about is accepted as proper in the peer-reviewed journals of the Audio Engineering Society and the Acoustical Society of America. So clearly, if there are major issues, no one has told them! If we take the position here that what is good for the ASA and AES journals is not good for us, that is a lot of pain to endure when we go and try to make other arguments in the future. The vast majority of our proof points would go out the window.
Speaking of which, look at this peer-reviewed paper from Zwicker and Zwicker, who are quoted on psychoacoustic matters more often than almost anyone: ”Using the loudness exceeded in 10% of the time as an indication of the perceived loudness, it can be expected that the speech is 1.2 times louder in the room with 0.6-s reverberation time and about two times louder in the room with 2.5-s reverberation compared with the loudness produced in the free-field condition. This increment in loudness is often very helpful for the intelligibility of speech in rooms as long as the reverberation time does not produce temporal masking, which reduces the audibility of faint consonants appearing in sequence to loud vowels.”
So now we see objective data backing us, both with respect to measurements and to how our auditory system works: we do benefit from reflections up to a point. Early reflections increase sound power and therefore help with intelligibility. Later reflections, which linger on, cover up softer sounds, so we don’t want our rooms to be “too live.”
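The statistic Zwicker and Zwicker use, the loudness exceeded 10% of the time (often written N10), is straightforward to compute from a loudness-versus-time series. A rough Python sketch, assuming the series is already in loudness units such as sones (the function name and inputs are my illustration):

```python
def loudness_exceeded(series, fraction=0.10):
    """Return the loudness level exceeded `fraction` of the time
    (the N10 percentile statistic when fraction = 0.10)."""
    s = sorted(series, reverse=True)
    k = int(fraction * len(s))      # samples allowed to lie above the statistic
    return s[min(k, len(s) - 1)]
```

For a series of 10 equally spaced loudness samples from 1 to 10, this returns 9: exactly one sample (10% of the time) exceeds it.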
Check out this *introduction* section of the Journal of ASA paper, ”The influence of spectral characteristics of early reflections on speech intelligibility,”
by Arweiler and Buchholz, dated 2011: ”Early reflections (ERs) of a sound in a given environment are characterized by arriving at the listener’s ears shortly (approximately within 50 ms) after the direct sound (DS). They are integrated with the DS in the auditory system, i.e., within a certain time window their energy is added to the energy of the DS. With regards to speech intelligibility the DS and the ERs form the useful part of the speech signal, whereas late reflections are considered detrimental for speech intelligibility. Thus, the effective level of a speech signal depends on the energy of the DS and the energy of the ERs at the listener’s ears. ER [early reflection] energy increases the effective speech level and has been demonstrated to improve speech intelligibility (Lochner and Burger, 1964; Nábělek and Robinette, 1978; Soulodre et al., 1989; Parizet and Polack, 1992; Bradley et al., 2003).”
For someone to say the same thing I did, in such strong terms, in the introduction of a paper in a peer-reviewed journal, you have to assume there is considerable weight behind it. Dismiss it at your own peril, as they say.
Understanding speech is critical in movie content, where other sounds in the track can mask it (i.e., they act like noise in the research sense). Likewise, when we listen to music, aren’t we looking for clarity and intelligibility of the singer? Why, then, would we cast a negative light on reflections in post after post here? What do we believe, if not the views of the real experts quoted above?
Is it not possible, then, that we can still get valid academic results in the research case even if it is sighted? At least we can assume the researchers are disinterested, whereas we can safely bet the audiophile is not. I am thinking of *all* the commonly accepted bits of research we rely on, say Fletcher and Munson, the Haas effect, and so on. A whole body of them collected over the years.
The list is even longer than that: think of masking, which covers up distortion when it falls close in frequency and time to a louder tone. That was determined using the very methodology we are talking about. Without masking, audio compression would not work remotely as well as it does. And what would we do if we took masking out of our vocabulary in this forum as we argue about other distortions in audio? Folks had better think hard about what they are saying here.
Keep in mind that it is not as if we are comparing sighted tests to double-blind tests. The folks objecting have no listening tests to back their point of view whatsoever! The two examples given in this thread both came out the opposite of what people thought, and at any rate they were not “DBTs.”
We have a choice: we can continue to believe something is right because we keep reading it on forums, or we can follow the path of research and science. The latter data is not perfect. It never is, even when it comes from “DBTs.” But it is a heck of a lot closer to perfect than believing what one reads on the Internet.
It is remarkable to me that we would never accept that someone can become a doctor or a lawyer by reading online posts, yet somehow we think we can become master acousticians, giving advice to others on how that science works, without ever setting foot in the journals of the ASA or AES. We think all we have to do is read what is posted here. Here is a great example from a post just yesterday:
Originally Posted by jevansoh
If you will kindly read the first page (second post) of this thread, you'll see that I started this thread to help people set goals and work towards achieving them regarding the acoustics of their listening rooms. I've offered my time and participation in this thread to give back to this community for all the invaluable information and knowledge I've received from this forum and forums like it over the last several years.
I know what it's like to spend a lot of time reading, learning, and then through further research and way down the line find out I was given so much misinformation that I have to try and unlearn and then start all over relearning things the right way and I want to try to teach people the "right" way from the beginning.
Bolding mine. The poster is disagreeing, honestly I am sure, with someone in the industry (Nyal) who has actually read some of the research I am talking about, on the basis of what he has read on forums? I know he means well, but surely you want to consider for a moment that you can’t trust everything you read on the Internet, and certainly not to the point of disagreeing with people on that basis alone. For his proof points, he offers a PowerPoint by Dr. D’Antonio and an article in EDN magazine. No listening tests, no real papers like the ones I am quoting here, nothing.
He makes statements like, ” The problem is, "right" isn't necessarily fully defined unless your goal includes an acoustical model in which you are trying to achieve.”
Acoustic model? Exactly how does he propose someone determine that? Folks in that thread are struggling just to get REW to measure the sound in their rooms. Now they are expected to go and determine their desired “acoustic model”? What is going to happen is that folks will be told what the acoustic model is: they will be told to hate reflections, and to try to measure them with tools that, no less, display them improperly. Such is life on the Internet.
To be clear, folks there have done a phenomenal job documenting the mechanics of REW. But they are about to throw all of it out, and then some, in favor of the non-scientific folklore they have read online.