Originally Posted by craig john
They used a single speaker, placed to the left of the LP for the "bias controlled listening tests." Who listens to mono? Who listens to a single, left speaker? How is that POSSIBLY contrived to be a valid test of room correction products that were designed and intended to be "multichannel" room correction systems???
Not sure why you say this, because we all listen to "mono" all the time. When vocals play in a movie, they come out of the center channel only. OK, some amount may bleed into other channels, but for the most part it is a single channel. Yes, if you let the other channels play music or effects, that can mask fidelity errors that may exist in the center channel. It is for exactly that reason that we want to test in mono: we want the truth, not a masked version of it. Look at how America's Test Kitchen tests individual ingredients like rice. They don't put curry on the rice and ask people what it tastes like. They serve the rice plain, with just a bit of salt, so that the essential flavor of each rice can be compared.
Likewise, published research, which I am happy to quote, shows that when it comes to aberrations introduced by room correction products, listeners become more critical of system errors as fewer speakers are used. Ask yourself whether you can evaluate a speaker better in a quiet room or a noisy one; I think everyone would agree the former. By the same token, if you want to test an auto-EQ system that corrects each channel independently, as the systems we are talking about here do, then it is absolutely proper to test it on one channel. You don't want to add extraneous elements that would reduce the precision of the evaluation in such tests.
They provided NO INFORMATION on the levels used to listen to the various RC products.
Of course there is. This is what it says: "The relative loudness was adjusted to within 0.1 dB according to the CRC (Communication Research Center of Canada) ITU loudness meter. The absolute playback level was a controlled variable throughout the test and was approximately 78 dB (B-weighted, slow) averaged across all three programs."
I don't know how you could have read this and still say there is "no information" about the playback level used. It is a very clear and simple stipulation that a) levels were matched, and b) the average level across all the tracks tested was 78 dB.
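To make concrete what "matched to within 0.1 dB" entails: the paper used the CRC's ITU loudness meter, which applies perceptual weighting; as a rough illustration of the idea only, here is a minimal level-matching sketch using plain RMS instead of the full ITU measurement (the function names are mine, not from the paper):

```python
import numpy as np

def rms_db(x):
    """RMS level of a signal, in dB."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def matching_gain(reference, candidate):
    """Linear gain that brings `candidate` to the same RMS level as `reference`."""
    return 10 ** ((rms_db(reference) - rms_db(candidate)) / 20)

# Two synthetic "programs" at different levels
rng = np.random.default_rng(0)
ref = rng.standard_normal(48000) * 0.10    # reference program
cand = rng.standard_normal(48000) * 0.25   # louder candidate program

g = matching_gain(ref, cand)
mismatch = abs(rms_db(ref) - rms_db(cand * g))  # residual level difference, dB
```

A real loudness meter weights the spectrum before integrating, but the principle is the same: measure both programs, then trim the gain until the residual difference is far below the 0.1 dB tolerance the paper states.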
If you take a product that is designed to provide flat response at Reference Level, and then you listen to that response at levels well below Reference Level, the "equal loudness curves" will come into play. You're not likely to "prefer" that response to a response that has been optimized for the levels you're actually listening at, and has the equal loudness curves factored in at those levels. Audyssey Dynamic EQ was developed to address exactly this problem, but it hadn't been developed when this test was performed.
This is a common talking point regarding the poor performance of Audyssey, but alas, it is without merit. Loudness compensation is an orthogonal enhancement that can be applied to any system. As such, it is not a feature of the room EQ itself and not what is under test. You can't justify Audyssey actually degrading the performance of the system relative to doing nothing, as shown in this graph from the research in my article: http://www.**************.com/Librar...alization.html
And then say, "oh, but you can turn on this other thing to make it sound better." No sir. If I am using a room EQ system, I don't expect it to sharply reduce the performance of the system compared to doing nothing, as Audyssey did in the right graph (#6).
Now, if Audyssey had made their loudness compensation mandatory, then sure, we could talk about that. But that is not the case. You are also going by a theory that having such a feature would have made things better, yet you have no research data to present.
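For readers unfamiliar with what loudness compensation does, here is a toy model of the general idea; this is emphatically not Audyssey Dynamic EQ's actual algorithm, just an illustration of level-dependent compensation: as the playback level drops below a reference level, low frequencies get boosted, mimicking how the equal-loudness contours bunch together in the bass (the 0.5 dB-per-dB slope and 1 kHz corner are invented numbers for the sketch):

```python
import math

def loudness_compensation_db(freq_hz, playback_db, reference_db=85.0):
    """Toy model of level-dependent loudness compensation (illustration
    only): boost low frequencies progressively as the playback level
    falls below the reference level; no boost at or above reference."""
    deficit = max(0.0, reference_db - playback_db)
    # Weight ramps from 1.0 at 20 Hz down to 0.0 at 1 kHz and above
    bass_weight = max(0.0, 1.0 - math.log10(freq_hz / 20.0) / math.log10(1000.0 / 20.0))
    return 0.5 * deficit * bass_weight  # assumed 0.5 dB of boost per dB of deficit at 20 Hz
```

At the test's 78 dB playback level the model applies a modest bass boost and leaves the midrange alone; at the reference level it does nothing at all. That last property is the point: compensation is a function of playback level layered on top of the correction, not part of the correction filter being evaluated.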
Audyssey also has other ailments that I explain in the article, shown in this graph:
Notice the big trough at 2 kHz. Every bit of research data we have says this is wrong: you should not notch out the response that way. The listeners heard this and indeed punished Audyssey for it, hence my recommendation in the article that if you are going to use Audyssey, get the Pro kit and defeat this misguided approach.
While on this graph, note that another system, the Lyngdorf DPA-1, ranked #3, well above doing nothing and above both Anthem and Audyssey. So your theory that Harman listeners are somehow biased to only pick the sound of Harman products is invalidated. In blind tests, without knowing which was which, they rated the Lyngdorf almost as highly as Harman's own system. We can objectively tell why: if you look at the Lyngdorf target response (navy blue), it closely mimics that of the Harman products. The two that deviated were the Anthem and Audyssey (pink and teal), and both were rated worse than doing nothing. So we have objective results matching our subjective results. This is powerful.
They provided NO INFORMATION about how the Audyssey setup was performed. All they showed was a magnitude response graph. It clearly showed the setup was less than optimal. I can achieve MUCH better measured results than that! Furthermore, I don't know how you can even run Audyssey with only one speaker. It won't run. It will give a "speaker error" and not continue. So how the hell did they even perform the Audyssey calibration/EQ with just one speaker in the system?
Once again, there is plenty of information in the paper, and you would get even more if you talked to the authors as I have done. Here is the paper:
"The instructions and manufacturer's recommended settings in the user manuals were strictly adhered to in setting up the different room correction products. Each room correction product generally involved a similar set of steps:
(1) A measurement of the main loudspeaker and the subwoofer was performed to determine the frequency at which each should be crossed over. In some cases, the crossover settings could be manually entered.
(2) The measurement was repeated at several points in the room as recommended by the manufacturer: either at the listening seats or somewhere out in the room to better assess the sound power response of the loudspeaker. From this, the optimal time delay and level between the subwoofer and main loudspeaker are determined, as well as the corrective equalization.
The authors found that several of the room correction products had difficulty automatically determining the ideal crossover frequency in step 1. One product chose a subwoofer crossover at 800 Hz, while another thought the ideal crossover should be at 40 Hz. In the end, it required some expert judgment and manual intervention to force the products to use reasonable settings and set the crossover to 80 Hz. Several of the products had instructions that were ambiguous, and could easily be misinterpreted. Overall, the usability of these room correction products could be improved so that the user experience and results are more consistently positive."
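The crossover mis-picks described above (800 Hz, 40 Hz) are easy to understand once you see how naive an automatic crossover pick can be. Here is a minimal sketch of one such approach, my own illustration, not the algorithm any of these products actually uses: find where the main speaker's measured response has rolled off from its midband level and hand everything below that to the subwoofer.

```python
import numpy as np

def pick_crossover_hz(freqs, mag_db, rolloff_db=3.0):
    """Naive automatic crossover pick (illustration only): return the
    lowest frequency at which the main speaker is still within
    `rolloff_db` of its midband level; below that, use the subwoofer."""
    midband = np.median(mag_db[(freqs >= 200) & (freqs <= 2000)])
    within = mag_db >= midband - rolloff_db
    return float(freqs[np.argmax(within)])  # first frequency meeting the criterion

# Synthetic main speaker: flat above 80 Hz, 12 dB/octave rolloff below
freqs = np.geomspace(20, 20000, 400)
mag = np.where(freqs < 80, -12 * np.log2(80 / freqs), 0.0)
fc = pick_crossover_hz(freqs, mag)  # lands at the -3 dB point, ~67 Hz here
```

Even on this perfectly clean synthetic response, the naive pick lands below the 80 Hz the experts settled on; on a real in-room measurement full of modal peaks and dips, the "midband level" estimate can be badly skewed, which is exactly how a product ends up proposing 800 Hz or 40 Hz and why manual intervention was needed.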
Going back to the start of this discussion, the advantage claimed for Audyssey was its ease of use. Now you are saying that correct application requires having someone like you come over and run it? And what do you mean, "apply it to one speaker"? The correction was applied to all the channels on all the systems; it is just that during the listening phase, only one channel was played.
They provided NO INFORMATION about the skill level of the person running the Audyssey cal/EQ. They stated: "Calibrations for each room correction product performed based on manufacturer's users manual." I can just hear that conversation: Sean O: "Hey Frankie, take this thing to the lab and set it up." Frankie: "Sure thing boss... then muttering, "How the hell does THIS THING work?" Sean O: "Here's the manual. See if you can figure it out." Frankie: "Gotcha boss... then muttering, "Yeah, like I'm gonna do that!"
Well, thankfully they did far more than just about any reader of this forum would do in setting up the equipment, as described in the paper. Remember, when you publish a competitive paper like this, you absolutely have to be prepared for the other side to find any such cheats and come back with their own test to give you a black eye. As you well know, no counter-test or counterpoint has been presented since. If such easy fixes existed to make Audyssey perform better, they would have been used to counter these results, but there have not been any.
You have not shared with us what comparison data of this type you have gathered to be so sure of the points you are making. I have done so, and my experience once again matches the research. To wit: when I founded Madrona Digital, we put in a $110,000 speaker, amplification, and auto-EQ system made by Wisdom. The EQ was Audyssey Pro. Try as I might, I could not get Audyssey to produce results better than doing nothing. Yes, on a few tracks I would find an improvement, but all I had to do was change what I was listening to and I could not stand the results. I reported my dissatisfaction to Wisdom and they sent one of their people over. He spent a full day trying every which way and produced what he thought was an optimized Audyssey correction. I tested it and found the same outcome: it degraded the system performance, and we continued to demo the system without Audyssey.
Maybe you know more than a manufacturer who designed this into their system, more than me, and more than the researchers who published the data we are discussing. But the odds are against you. I suggest getting some unbiased listeners, standing behind them, and doing an A/B test, with and without correction, asking which they like better. Only then do you have something unbiased to go by.
At the time of this test (2009), Audyssey required their Pro Kit installers to be "factory trained" and "registered" in order to use and sell the product. They wouldn't even sell it to consumers. Was the Harman tech who performed the Audyssey cal/EQ for this unbiased test "factory trained" by Audyssey, or just some lab tech who had never seen the product before? How thoroughly did he read the manual and understand what it said? How much care was taken to ensure the Audyssey calibration was as optimized as possible?
I addressed this in my sample data with Wisdom above.
"...solidly supported research"???? Really? The only research they quoted was their own. There was ZERO peer-reviewed science to validate their testing methodologies... only their own internal, unsubstantiated, and un-duplicated "research."
That is your opinion, but you have not provided any foundation to back it. What research have you shown, peer-reviewed or otherwise, that Audyssey performs as you say? This is published work, and it trumps the opinion of random forum posters every day of the week and twice on Sunday. Remember, Dr. Olive, who spearheaded this testing, is the outgoing president of the Audio Engineering Society. We are not talking about a drunk who showed up one day at the conference and decided to publish a paper. His reputation is at stake if he produces bad work, and his work is respected in the industry. To wit, here is a peer-reviewed paper published in the Journal of the AES referencing the same work: http://www.aes.org/e-lib/browse.cfm?elib=16324
Subjective Preference of Modal Control Methods in Listening Rooms:
"The performance of commercially available room correction methods has also been investigated by Olive et al.:
S. E. Olive, J. Jackson, A. Devantier, and D. Hunt, "The Subjective and Objective Evaluation of Room Correction Products," presented at the 127th Convention of the Audio Engineering Society (2009 Oct.), convention paper 7960."
So no, the work is authoritative and stands on its own two feet. Certainly no random forum post can trump it unless it comes with at least the same level of authority and standing in the industry.