Originally Posted by PhilharmonicDennis
Thanks. Now I remember seeing that, but I'm hoping Dr. Toole can add a little detail concerning how those measurements were weighted and what the major misconceptions were concerning human perception of sound.
How Consumer Reports came to their ratings remains something of a mystery. I got access to some, but not all, of the process many years ago. It was incomplete, but enough to see why their ratings could not be right. They published one or two articles in audio magazines decades ago, with no useful details; I have the references buried in files somewhere.
They had an anechoic chamber - I don't know how good it was, because when we showed how wrong they were they said they needed a new one. In it they measured 1/3-octave-resolution frequency responses at points on a sphere, enabling an estimate of total radiated sound power. This, they assumed, was the key factor in what we hear in rooms - wrong, because domestic rooms are not reverberation chambers, and 1/3-octave resolution is not adequate (I was using 1/20-octave resolution then, and still do). This was an idea promoted at the time by a few east coast audio people.
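The resolution point is easy to demonstrate numerically. Below is a minimal sketch - not CU's actual processing; the resonance frequency, the Q, and the simple moving-average smoother are all assumptions of mine - showing how a narrow but audible resonance that largely survives 1/20-octave smoothing is mostly flattened away at 1/3-octave resolution:

```python
import numpy as np

def fractional_octave_smooth(freqs, mag_db, fraction):
    """Moving average over a 1/fraction-octave window -- a crude stand-in
    for standard fractional-octave band analysis."""
    half_ratio = 2.0 ** (1.0 / (2.0 * fraction))  # half the window, as a frequency ratio
    out = np.empty_like(mag_db)
    for i, f0 in enumerate(freqs):
        band = (freqs >= f0 / half_ratio) & (freqs <= f0 * half_ratio)
        out[i] = mag_db[band].mean()
    return out

# Hypothetical on-axis response: flat except for a narrow +6 dB resonance
# (Q = 20) at 2 kHz -- the kind of defect that is clearly audible.
freqs = np.geomspace(100.0, 10000.0, 4000)
q = 20.0
resonance = 6.0 / (1.0 + q**2 * (freqs / 2000.0 - 2000.0 / freqs) ** 2)

print(f"raw peak:          {resonance.max():.1f} dB")
print(f"1/20-octave peak:  {fractional_octave_smooth(freqs, resonance, 20).max():.1f} dB")
print(f"1/3-octave peak:   {fractional_octave_smooth(freqs, resonance, 3).max():.1f} dB")
```

The 1/3-octave curve reports roughly a third of the actual peak, because the averaging window is several times wider than the resonance itself.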
Sound power is one of the processed curves in a spinorama presentation, but it is not one that correlates well with sound quality.
Then they processed the data above some frequency - 150 Hz comes to mind - to generate a 1/3-octave band curve expressed in sones, the metric of subjective loudness. This cannot work for broadband sounds evaluated for timbral accuracy. There was more to it, having to do with bass I think, but I have not tried to remember bad science, only the good stuff. I visited them before joining Harman and saw where they did the subjective tests in support of their method. It was a large, high-ceilinged concrete room with what seemed like canvas drapes on some of the walls and dozens of folding chairs. It was quite live. As it was described to me, staff would assemble there at lunchtime and, while munching sandwiches, listen and make ratings.
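To make the sones point concrete: per-band loudness summation throws away exactly the spectral detail that timbre judgments depend on. A minimal sketch, assuming Stevens' standard phon-to-sone mapping, naively treating band SPL as phons, and ignoring masking; the band levels are invented:

```python
import numpy as np

def phon_to_sone(phon):
    # Stevens' relation: 40 phon = 1 sone, and +10 phon doubles loudness.
    # (Below 40 phon a different form applies; ignored in this sketch.)
    return 2.0 ** ((np.asarray(phon, dtype=float) - 40.0) / 10.0)

# Invented 1/3-octave band levels in dB SPL, naively treated as phons.
# The two speakers have the same set of levels, so the summed loudness
# is identical -- but the +6 dB bump sits in the midrange for A and in
# the lowest band for B, a timbral difference the total cannot see.
speaker_a = [70.0, 70.0, 76.0, 70.0, 70.0]
speaker_b = [76.0, 70.0, 70.0, 70.0, 70.0]

print(f"A: {phon_to_sone(speaker_a).sum():.1f} sones")
print(f"B: {phon_to_sone(speaker_b).sum():.1f} sones")
```

The totals are identical, yet one speaker has a midrange coloration and the other a bass emphasis - a difference any listener would hear.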
The first time I encountered the people at CU was at a meeting of loudspeaker designers and researchers that I organized at the NRCC in the mid-'80s. Knowing they were coming, I put together an overhead transparency showing our double-blind subjective evaluations of four loudspeakers that they had also rated. The correlation was -0.7 (that's minus, meaning that to interpret their ratings one must invert the page). Many years later, Sean Olive did his benchmark correlations and found a correlation coefficient of -0.22. That needs to be compared with Olive's own correlation of 0.995, at very high statistical significance. The best loudspeaker in our evaluations ranked lowest in theirs. This is all described in Olive's papers and summarized in my book.
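For readers unfamiliar with what a negative coefficient implies, here is a minimal sketch with invented ratings for four loudspeakers - not the actual data, just numbers constructed so that the two rankings invert:

```python
import numpy as np

# Invented preference ratings for four loudspeakers (0-10 scale); not the
# real data, only constructed so that the two panels rank them oppositely.
double_blind = np.array([8.0, 6.5, 5.0, 3.5])  # controlled listening tests
cu_ratings   = np.array([4.0, 5.5, 6.0, 8.5])  # the other method's ratings

r = np.corrcoef(double_blind, cu_ratings)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative: invert one page to read the other
```

A coefficient near -1 means the two methods agree almost perfectly about which speakers differ, but in opposite directions.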
The people at CU were nice and earnest, and they truly believed in their method. I never found out who was responsible for creating it, but I have some suspicions. It was a long time ago . . . best forgotten.