

·
Member
Joined
·
844 Posts
I understand averaging of responses, and how it works. I'm just a bit puzzled on the impulse response correction and the supposed benefits.

Here's some of the information I found.



There's more stuff on their website such as time-aligning of drivers, phase correction, etc., which I don't doubt it'll do... Seems all you need is any room and any loudspeaker, and Dirac will turn it into a great experience :)
The original Dirac system was conceived to assist in designing car audio systems, which are distinctive in that the individual drivers are distributed all over the place and need to be coordinated to deliver intelligible sound to a few listeners, all in the wrong locations for stereo. Hence the need for time domain manipulations. It is elaborate, in situ, loudspeaker system design. When applied to home theaters such concepts need reconsideration.

Figure 12.4(f) in my book shows that the Dirac default target curve is almost exactly the steady-state room curve that results from using highly rated loudspeakers in normally reflective rooms. Some call it the "Harman" curve. At the time I wrote the new book, my old book was the only technical reference in their manual. It needs to be noted that this curve describes what already-good loudspeakers deliver in rooms; it is not a target that flawed loudspeakers can simply be equalized to, although that is what "room EQ" implies. In fact, reading the details in some of the manuals for such systems, the instructions are to try things with the suggested curves. The fact that there are sometimes several suggested curves is a clue that this is not a precision "calibration". It is an admission that the room curve is not a definitive statement of sound quality, which is a fact. If the result does not sound good, they provide user-friendly methods to change the shape of the target until it sounds better. This is not a calibration; it is a subjectively guided equalization, with the circle of confusion included, because decisions are made by listening, not measurement.
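To make the general shape concrete, here is an illustrative in-room target of the kind being described - a gentle downward tilt with some bass rise. The slopes and corner frequency below are placeholders, not the published Harman or Dirac numbers:

Code:
import numpy as np

def illustrative_room_target(freqs_hz: np.ndarray,
                             tilt_db_per_octave: float = -1.0,
                             bass_shelf_db: float = 4.0,
                             shelf_corner_hz: float = 150.0) -> np.ndarray:
    """Return a smooth, downward-tilted in-room target in dB.
    All numbers are placeholders chosen only to show the general character."""
    # Gentle broadband tilt, referenced to 1 kHz.
    tilt = tilt_db_per_octave * np.log2(freqs_hz / 1000.0)
    # Simple low-shelf style bass rise below the corner frequency.
    shelf = bass_shelf_db / (1.0 + (freqs_hz / shelf_corner_hz) ** 2)
    return tilt + shelf

freqs = np.logspace(np.log10(20), np.log10(20000), 200)
target = illustrative_room_target(freqs)
for f in (20, 100, 1000, 10000, 20000):
    i = np.argmin(np.abs(freqs - f))
    print(f"{f:>6} Hz: {target[i]:+5.1f} dB")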

As humans do not hear phase, correcting impulse response is not necessary. However, phase matters in crossovers between drivers - as in the original car audio application of Dirac, but not in home theater.

EQ at low frequencies is almost essential, and because bass performance accounts for about 30% of our overall sound quality ratings it is very important to get it right. Above the transition/Schroeder frequency, if the loudspeakers have been well chosen, nothing may need to be done except gentle tone-control adjustments and these will change with different programs.
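For anyone wanting to estimate where that transition region falls in their own room, a minimal sketch using the common approximation f_S ≈ 2000·sqrt(T60/V), with T60 in seconds and V in cubic metres; the room below is a placeholder:

Code:
import math

def schroeder_frequency(rt60_s: float, volume_m3: float) -> float:
    """Approximate transition (Schroeder) frequency of a room in Hz.

    Uses the common approximation f_S ~ 2000 * sqrt(RT60 / V),
    with RT60 in seconds and V in cubic metres.
    """
    return 2000.0 * math.sqrt(rt60_s / volume_m3)

# Placeholder example: a 6 m x 4 m x 2.5 m room (60 m^3) with an RT60 of 0.4 s.
volume = 6.0 * 4.0 * 2.5
f_s = schroeder_frequency(0.4, volume)
print(f"Estimated transition frequency: {f_s:.0f} Hz")  # ~163 Hz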

The claim that Room EQ can turn any loudspeaker in any room into a flawless system is rubbish, but there are people who believe this. This paper, and my book explain it: Toole, F. E. (2015). “The Measurement and Calibration of Sound Reproducing Systems”, J. Audio Eng. Soc., vol. 63, pp.512-541. This is an open-access paper available to non-members at www.aes.org http://www.aes.org/e-lib/browse.cfm?elib=17839
 

·
Premium Member
Joined
·
4,594 Posts
I understand averaging of responses, and how it works. I'm just a bit puzzled on the impulse response correction and the supposed benefits.

Here's some of the information I found.

Quote:
Take control of your domain with iRC (Impulse Response Correction), a cost-effective and more flexible alternative to unsightly baffles, diffusers and other room treatments.
I don't think any serious calibrator, acoustician, or CI would suggest that you can completely replace baffle walls and room treatments with electronics. The electronics MAY improve a room by correcting in both the time and frequency domains (e.g. Dirac, RoomPerfect, Trinnov). Even with the infinitely configurable Trinnov Optimizer, it may be possible to adjust the impact of indirect sound within a specific ms window and, due to the mathematics, smooth out phase, group delay, etc. past a few hundred Hz for the impact of the room, but it's not a cure-all, especially inside a listening area with multiple seats and/or multiple rows.
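As a generic illustration of what "correcting within a specific ms window" can mean - this is not Trinnov's or Dirac's actual processing, just a sketch of the idea - the measured impulse response can be windowed so that the correction is derived mostly from the direct sound and early reflections rather than the full reverberant decay:

Code:
import numpy as np

def windowed_magnitude(ir: np.ndarray, fs: int, window_ms: float) -> np.ndarray:
    """Magnitude response (dB) of an impulse response truncated to the first
    `window_ms` after its peak, with a half-Hann taper to avoid a hard edge.
    Illustrative only; real products use far more sophisticated (often
    frequency-dependent) windowing."""
    peak = int(np.argmax(np.abs(ir)))
    n = int(fs * window_ms / 1000.0)
    segment = np.copy(ir[peak:peak + n])
    taper_len = max(1, n // 4)
    segment[-taper_len:] *= np.hanning(2 * taper_len)[taper_len:]  # fade out
    spectrum = np.fft.rfft(segment, n=fs)  # zero-padded to 1 Hz resolution
    return 20 * np.log10(np.maximum(np.abs(spectrum), 1e-12))

# Placeholder "measurement": a direct arrival plus one late reflection.
fs = 48000
ir = np.zeros(fs)
ir[100] = 1.0          # direct sound
ir[100 + 480] = 0.5    # reflection arriving 10 ms later
early = windowed_magnitude(ir, fs, window_ms=5.0)    # mostly direct sound
full = windowed_magnitude(ir, fs, window_ms=500.0)   # includes the reflection
print("1 kHz, early window:", round(early[1000], 1), "dB")
print("1 kHz, full window :", round(full[1000], 1), "dB")

A correction filter derived from the short-windowed measurement reacts mostly to the loudspeaker and the earliest arrivals; one derived from the long window also "sees" the room's later contribution.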

None other than Arnaud Laborie, CEO of Trinnov Audio, said in an interview a few years ago that while SOTA room EQ is like a Ferrari, you don't get the benefit of driving at top speed through the streets without also addressing room treatments (I'm paraphrasing; the actual quote is in a link somewhere on the Trinnov Altitude thread). There's still no substitute for proper absorption, diffusion, and the combination of multiple-sub placement with judicious room EQ - an electronic magic wand of one kind or another is not a replacement.

iRC Impulse Response Correction, based on advanced technology by Dirac, brings these under control. The result is a more focused soundstage with improved phase coherence, bringing more realism and natural sound to your music
There's more stuff on their website such as time-aligning of drivers, phase correction, etc., which I don't doubt it'll do... Seems all you need is any room and any loudspeaker, and Dirac will turn it into a great experience
With the right settings and applied with knowledge, you'll get a better experience IMO with Dirac than without. But if you prefer a very specific sound or are used to, say, Audyssey Reference and prefer it by default, Dirac (or the other room EQs) may not give you what you personally consider "better" results. That may make you an outlier compared to the Harman/Sean Olive research, but the beauty of a robust target curve editor like Dirac's or Trinnov's is that you can season to taste: choose the range over which the room EQ is applied and the slope of the curve for the sound you prefer, or exercise finer or coarser tone control, depending on taste :) .
 

·
Registered
Joined
·
25 Posts
Thanks for the insights. I had read them all before. However, they didn't address my actual question. And since you're here, I have a few more...



This is the question I asked:



@SoundnWine, do you have any insights on the above question? Was there a significant amount of boost added to the subwoofer with the JBL system to get that response? If the subwoofer was this one:
https://www.jblsynthesis.com/productdetail/id-1500-array.html
...it appears to be a sealed sub with a 15" driver, a 1,000 watt amp and a specified FR of 25 to 400 Hz but with no F3 specified and no measurements provided. :confused: (Hard to know what the 25 Hz spec means without an F3. Def Tech and Klipsch are notorious for this, but I thought JBL/Harman was more "honest" in their specsmanship.) Nonetheless, to get 20 Hz response with that sub would likely require at least some boost. How much was added, and how much headroom was lost with that boost?
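As a rough sanity check on my own question: if one assumes a sealed alignment with Q ≈ 0.7 and an f3 near the 25 Hz spec (an assumption - JBL publishes neither), a simple second-order high-pass model gives a ballpark for the boost and headroom cost of reaching 20 Hz:

Code:
import math

def sealed_box_response_db(f_hz: float, fc_hz: float, q: float = 0.707) -> float:
    """Magnitude (dB) of an idealized sealed-box (2nd-order high-pass) response."""
    u = f_hz / fc_hz
    mag = u**2 / math.sqrt((1 - u**2) ** 2 + (u / q) ** 2)
    return 20 * math.log10(mag)

# Assumption: fc (and so f3, for Q ~= 0.707) of 25 Hz - the spec sheet does not confirm this.
fc = 25.0
for f in (30.0, 25.0, 20.0):
    print(f"{f:>4.0f} Hz: {sealed_box_response_db(f, fc):+5.1f} dB relative to passband")

# Boost needed to bring 20 Hz up to flat, and the headroom that costs:
boost_db = -sealed_box_response_db(20.0, fc)
print(f"Boost needed at 20 Hz: {boost_db:.1f} dB "
      f"(~{10 ** (boost_db / 10):.1f}x amplifier power)")

On that model, roughly 5-6 dB of boost (about 3-4x the amplifier power) would be needed at 20 Hz before any room gain is considered; the real number depends on the actual alignment and the room, neither of which is published.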

More importantly, if boost was added by the JBL system, in an effort to be fair, wouldn't it have been appropriate to at least try to get similar response from the other systems? Please don't tell me you couldn't because that would just confirm my concerns about the technical ability of the person doing the EQ setups. I know for a fact that Audyssey corrects to 10 Hz, because it does so in my system:



I also know that Audyssey is capable of FAR better overall results than the bottom curve in the graph. I previously posted an RTA graph of my Audyssey results. I'll post it again for convenience:


That response was obtained using Audyssey XT, which was virtually the same as the Audyssey Pro Kit in terms of the EQ. Pro had a more expensive mic, and it allowed for more measurement positions, but it used the same algorithm to calculate the EQ, i.e., the same "fuzzy logic" to group measurements, the same number of FIR filter taps applied across the whole spectrum, etc. If I could get that kind of response, why couldn't your people? At that time Audyssey was marketing the Pro Kit only to Custom Installers. They strongly advised the Custom Installers to get the Audyssey training before they attempted to use the kit. Were the people who set up the Audyssey Pro Kit for your test trained by Audyssey to use the kit? I doubt that reading the manual would have given them ALL the information needed to optimize Audyssey.

Was the crossover in the Audyssey graph lowered from what Audyssey found? If so, that would explain a lot about the inadequate Audyssey result. Audyssey doesn't correct the speakers below the crossover, so if it found a crossover of, say, 150 Hz, and that was lowered to 80 Hz, there would be no correction applied between 80 and 150 Hz. You said that some systems found really weird crossover frequencies. Did you investigate WHY that happened? Surely none of those systems should apply weird crossovers.

Your exact words were: "In some cases, we manually intervened because the product decided to cross over the subwoofer at say 150 Hz in stead of 80 Hz because the algorithm decided the B&W N802 didn't have sufficient bass to be crossed over at 80 Hz (clearly this was not the case). Some of the differences in low frequency roll-offs are based on automated decisions within the algorithm based on the output capability of the subwoofer." I bolded the last sentence because it is incorrect, at least in Audyssey's case. Audyssey measures the in-room F3 of the subwoofer, which is NOT the same as the output capability of the subwoofer. If something was causing Audyssey to measure an in-room F3 that was too high, that was a room problem, and someone should have investigated why that was happening. IOW, someone should have measured the actual response of the system at the same point where the Audyssey mic was placed. And where was the mic placed? That is critical in getting an appropriate Audyssey result. How many mic positions were used? IIRC, with that version of Audyssey, you had to measure at least 3 positions in order to get Audyssey to calculate the EQ filter taps. However, up to 12 positions were possible. How many were actually used?
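As an aside on what combining several mic positions amounts to in principle, here is a generic sketch of a spatially averaged response - not Audyssey's actual "fuzzy logic" weighting, and the numbers are placeholders:

Code:
import numpy as np

def spatial_average_db(responses_db: np.ndarray, weights=None) -> np.ndarray:
    """Average several measured magnitude responses (rows = mic positions,
    columns = frequency bins, values in dB). Averaging is done on a power
    basis, the usual convention for spatial averages. Illustrative only."""
    power = 10 ** (responses_db / 10.0)
    avg_power = np.average(power, axis=0, weights=weights)
    return 10 * np.log10(avg_power)

# Placeholder data: 6 mic positions x 4 frequency bins (dB).
positions = np.array([
    [ 0.0, -6.0,  2.0, -1.0],
    [ 1.0, -9.0,  3.0,  0.0],
    [-1.0, -3.0,  1.0, -2.0],
    [ 0.5, -8.0,  2.5, -0.5],
    [-0.5, -5.0,  1.5, -1.5],
    [ 0.0, -7.0,  2.0, -1.0],
])
print(np.round(spatial_average_db(positions), 1))
# Weighting toward the primary seat is one way to bias the average toward
# where the listening is actually done.
print(np.round(spatial_average_db(positions, weights=[3, 1, 1, 1, 1, 1]), 1))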

You also said: "We tried to be as objective as possible, and not intervene too much with how the room correction was implemented." But then you said: "If you read the paper we came to the conclusion that none of the products were refined to the point that you can rely on them to always make good decisions, and therefore some form of intervention is necessary to get the best results." So these systems need intervention... but you decided not to intervene? The next sentence was: "That means the person doing the room correction needs some knowledge and expertise..not your typical consumer." Are you sure your tech had that knowledge and expertise?

I always check measurements before and after running Audyssey. I will change speaker and subwoofer positions to ensure Audyssey has the best starting point possible. I also often make some interventions to the settings Audyssey comes up with. I check and adjust the crossovers (raising them if necessary, but never lowering them). If I find a crossover that doesn't make sense, I figure out what is causing it, correct it... and then re-run Audyssey. I check the subwoofer Distance setting and the response around the crossover point to be sure the speakers and subs are in phase at the crossover and that the splice between them is correct. If it's not optimal, I adjust the subwoofer Distance setting until it is. I check the post-Audyssey levels with an external test tone disc... because Audyssey is not engaged when the internal test tones run. If all these things were not done during the Audyssey Pro setup, then the setup is wrong. A Custom Installer who had attended the Audyssey training would have known this and done all of it. It doesn't appear that the person who set up Audyssey for your test did ANY of it.

Another anomaly I see is the dip at about 200 Hz. The only system that didn't have that dip was the JBL system. I am certain that Audyssey can correct a shallow depression like that, and it will do it automatically, and I suspect that the other systems can and will also. Why didn't they? With a clear and consistent issue with all the other measurement systems, someone should have noticed that and corrected it.

All these same setup and optimization questions could be asked for the other EQ systems in the test. To me, it seems clear that the only system that the tech had enough familiarity with to make the proper interventions was the JBL system.

Finally, in your response above, somehow the response irregularities in the other systems turn out to be about "filling in spectral holes in the sound power response." You then go on to say: "If a speaker has constant directivity then the on-axis/sound power response generally has similar bumps and dips and you can have you cake and eat it too. This is not the case with the B&W, which has a decent on-axis response but a less-than-perfect sound power response along with a bumpy DI. This speaker is particularly problematic for full-band blind room correction EQ as demonstrated in this study."
The way I read this, it seems possible that the B&W speaker was specifically chosen BECAUSE it handicapped the other systems. And if the response of this speaker was so problematic, but the JBL system was not adding any EQ above 500 Hz, how did you achieve the smooth, tapered response all the way out to 20 kHz? Shouldn't the problematic B&W speaker have displayed its response anomalies from 500 Hz to 20 kHz when no EQ was used?

And just to tidy things up at the end here... none of this matters anymore anyway. Audyssey has made some significant improvements to their system: 1. they've introduced XT32, which adds a logarithmic increase in the number of FIR filter taps and focuses most of its correction in the bass range; it only sets a target curve above that; and 2. they've introduced a phone app that allows the user to pick the cutoff point for correction, above which no correction is applied; and 3. they've added the ability to edit the target curve if the user wishes to do so. In the final analysis, they listened to you and incorporated many of your principles, even if you didn't do them justice in this test.

Craig
I believe I did answer most of these questions. Some of your questions indicate you haven't read the paper in its entirety or if you did, you misunderstood it.
1. Differences in LF response below 50 Hz are due to decisions made by the algorithm in each product. We specified a target response with no roll-off below 50 Hz in our implementation, while the others rolled off. This paper is pretty old and I don't recall how much gain was applied, but the subwoofer was capable of producing it. And yes, the person who set these experiments up knew what they were doing, and I was supervising their work throughout the process. Could it have been done better or differently? Probably, but that can be said for most listening experiments I have conducted or read about. We did two calibrations based on spatial averages: one made using a microphone in 6 seats and the other with the mics focused on the seat where the listening was done. It's in the paper.

2. The intent of the paper was not to show how well Audyssey or any other product performs. It was not a commercially driven/motivated study, but intended to provide research and insight into the correlation between measurements and perceived sound quality. We purposely hid the names of the products in the results for that reason. The fact that you seem super defensive about how certain products performed 10 years ago makes me think you have missed the point. We can also speculate or argue whether it was "fair" or not, but given the purpose of the paper and the fact that we hid the identities of the manufacturers in the results, I don't think such an argument is relevant.

3. I would like to think this paper provided many manufacturers some guidance on how to improve their products. I think you actually admit that above. Certainly, we learned a lot from it, and used that information for designing subsequent experiments on preferred listening room targets.
 

·
Registered
Joined
·
626 Posts
I was of course being ironic when I said any speaker in any room would do. :) I was merely addressing the claims made by Dirac.

Thanks @Floyd Toole for the background on the Dirac system.


Sent from my SM-A510F using Tapatalk
 

·
Registered
Joined
·
785 Posts
One of the problems with room correction is when the software tries to smooth out a cancellation; all that does is waste amplifier power. This is common with the woofer of a stand-mounted speaker, which creates a floor-bounce cancellation dip.
 

·
Member
Joined
·
844 Posts
One of the problems with room correction is when the software tries to smooth out a cancellation; all that does is waste amplifier power. This is common with the woofer of a stand-mounted speaker, which creates a floor-bounce cancellation dip.
Almost always a cancellation is a "local" phenomenon, occurring only at one or a few locations. Filling such a dip adds a boost to the direct and all other sounds radiated from the loudspeaker, making it a worse loudspeaker and filling the room with excessive sound at the boosted frequency. All such acoustical interference dips, and peaks, are non-minimum phase phenomena and cannot be corrected by EQ. It is the principal reason why room curves are not definitive descriptors of sound quality.

EDIT: I should add that these non-minimum-phase acoustical interference ripples in room curves are almost always not audible problems. They may look like "comb filtering", but because the direct and delayed sounds come from different directions, two ears and a brain perceive them as spaciousness, not coloration. Equalizing them flat/smooth is a mistake - it results in a degradation of the direct sound, which is the most important event.
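A minimal sketch of why such a dip behaves this way (illustrative geometry and numbers, not any product's algorithm): the direct sound and a single floor reflection sum with a path-length-dependent delay, producing a null whose frequency and depth change with listening position.

Code:
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def floor_bounce_response_db(freqs_hz, src_h, lis_h, dist, reflect_gain=0.8):
    """Combined magnitude (dB) of the direct sound plus a single floor
    reflection, for a woofer at height src_h (m), listener ear at lis_h (m),
    horizontal distance dist (m). Purely illustrative: one reflection, an
    ideal floor, free field otherwise."""
    direct = np.hypot(dist, src_h - lis_h)
    bounced = np.hypot(dist, src_h + lis_h)       # path via the image source below the floor
    delay = (bounced - direct) / SPEED_OF_SOUND   # seconds
    a = reflect_gain * direct / bounced           # reflection level relative to the direct sound
    combined = np.abs(1.0 + a * np.exp(-2j * np.pi * freqs_hz * delay))
    return 20 * np.log10(combined)

freqs = np.linspace(50, 500, 10)
seat_a = floor_bounce_response_db(freqs, src_h=0.7, lis_h=1.1, dist=3.0)
seat_b = floor_bounce_response_db(freqs, src_h=0.7, lis_h=1.1, dist=2.2)  # a different seat
for f, a_db, b_db in zip(freqs, seat_a, seat_b):
    print(f"{f:5.0f} Hz: seat A {a_db:+6.1f} dB, seat B {b_db:+6.1f} dB")
# The dip frequency and depth change with listening position, so a global EQ
# boost mainly pours amplifier power into a cancellation that exists at one
# spot - and colors the direct sound everywhere else.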
 

·
Premium Member
Joined
·
12,382 Posts
I believe I did answer most of these questions. Some of your questions indicate you haven't read the paper in its entirety or if you did, you misunderstood it.
I admit that I never read the paper in its entirety. I am not an AES member and I'm not going to pay $35 for the privilege of reading one paper. However, I have seen the PowerPoint you released, and I've read your blog post about it.



1. Differences in LF response below 50 Hz are due to decisions made by the algorithm in each product. We specified a target response with no roll-off below 50 Hz in our implementation, while the others rolled off. This paper is pretty old and I don't recall how much gain was applied, but the subwoofer was capable of producing it. And yes, the person who set these experiments up knew what they were doing, and I was supervising their work throughout the process. Could it have been done better or differently? Probably, but that can be said for most listening experiments I have conducted or read about. We did two calibrations based on spatial averages: one made using a microphone in 6 seats and the other with the mics focused on the seat where the listening was done. It's in the paper.
It is clear to me that you don't fully understand how Audyssey works. Audyssey measures the IN-ROOM response of the subwoofer, which is a completely different measurement than the LF capability of the subwoofer. If it set the crossover to 150 Hz, it must have found the F3 of that subwoofer, in that room, with that mic, to be 150 Hz. Audyssey will not boost the response below the measured in-room F3 because it assumes that it would be over-driving the sub in a range it's not capable of reproducing. But that was not the problem in this situation. It was a matter of the transfer function from the subwoofer to the measurement mic. If Audyssey saw a roll-off below 150 Hz, and you knew that the subwoofer was capable of much deeper output, the proper intervention would NOT have been to lower the crossover to 80 Hz. As I explained previously, one should not lower the crossover after running Audyssey as Audyssey will provide no correction below the measured F3. The proper intervention would have been to investigate WHY Audyssey was measuring such a high F3 on the subwoofer. Maybe it was placement of the subwoofer. Maybe it was the listening position. I can't say for sure as I wasn't there 10 years ago when the study was done. It appears that all the other systems found some similar transfer function issues as well. The fact that they all rolled off at 50 Hz tells me that there was some issue with the transfer of sound from the subwoofer to the mic. Clearly this problem wasn't corrected for on any of the other systems, but it was corrected by the boost you added for the Harman system. I'm sorry, but I can't find a way to say that doesn't add an inherent element of bias in the test.
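To make the distinction concrete - this is only a sketch of the general idea, not Audyssey's actual algorithm - an in-room F3 can be read off a measured magnitude response as the lowest frequency that stays within 3 dB of the passband level, and that number depends on the room transfer function at the mic position, not just on what the driver can do:

Code:
import numpy as np

def estimate_in_room_f3(freqs_hz: np.ndarray, mag_db: np.ndarray,
                        ref_band=(40.0, 120.0)) -> float:
    """Estimate the in-room F3 of a subwoofer measurement: the lowest frequency
    at which the response is within 3 dB of the average level in `ref_band`.
    Purely illustrative; real products use their own definitions."""
    in_band = (freqs_hz >= ref_band[0]) & (freqs_hz <= ref_band[1])
    ref_level = np.mean(mag_db[in_band])
    above = mag_db >= (ref_level - 3.0)
    return float(freqs_hz[np.argmax(above)])  # first frequency meeting the criterion

# Placeholder measurement: a sub capable of 20 Hz anechoically, but whose
# in-room response at this particular mic position rolls off below ~45 Hz
# (e.g. because of placement, or a seat sitting in a null).
freqs = np.arange(10, 201, 5, dtype=float)
mag = np.where(freqs >= 45, 0.0, -12.0 * np.log2(45.0 / np.maximum(freqs, 1)))
print("Estimated in-room F3:", estimate_in_room_f3(freqs, mag), "Hz")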



2. The intent of the paper was not to show how well Audyssey or any other product performs. It was not a commercially driven/motivated study, but intended to provide research and insight into the correlation between measurements and perceived sound quality. We purposely hid the names of the products in the results for that reason. The fact that you seem super defensive about how certain products performed 10 years ago makes me think you have missed the point. We can also speculate or argue whether it was "fair" or not, but given the purpose of the paper and the fact that we hid the identities of the manufacturers in the results, I don't think such an argument is relevant.
The identities were eventually released. That is how I know them. Even so, if this was research just for the sake of enhancing the knowledge base, then it would have behooved you to ensure that it was the most accurate and unbiased test possible. That would mean ensuring that every one of these systems was OPTIMIZED before making any kind of judgements about them, especially if one of the systems was optimized and the others weren't. It's very clear to me that Audyssey was not optimized, because if it had been, you would have had a much different measurement result, and then a much different preference result. I suspect none of the other systems were optimized either. Why add knowledge to a knowledge base if that knowledge doesn't reflect what the systems are truly capable of?


3. I would like to think this paper provided many manufacturers some guidance on how to improve their products. I think you actually admit that above. Certainly, we learned a lot from it, and used that information for designing subsequent experiments on preferred listening room targets.
What I took from this study was a high level of skepticism about Harman's research. Everything that has followed has shown that, whenever Harman does a preference test, the Harman products always win. Given my general skepticism about manufacturer-funded studies, which I explained earlier, my skepticism about Harman's listening preference studies has not decreased.



Craig
 

·
Registered
Joined
·
108 Posts
And if the response of this speaker was so problematic, but the JBL system was not adding any EQ above 500 Hz, how did you achieve the smooth, tapered response all the way out to 20 kHz? Shouldn't the problematic B&W speaker have displayed its response anomalies from 500 Hz to 20 kHz when no EQ was used?
I was wondering about this too. I found an answer in @SoundnWine's blog comment section:

"Equalizing above 200 Hz did indeed result in an improvement in sound quality for this loudspeaker by filling in the 2 kHz hole in its sound power response."

From the comments there, it sounds like some rules were applied about when equalization can be used to improve cases of non-constant directivity at higher frequencies. This is interesting, especially in light of @Floyd Toole's comments above on the difficulty of doing so.

Can more details be given on this equalization, or is it Harman secret sauce?
 

·
Registered
Joined
·
15,295 Posts
Very interesting info on the use of (and non use of) rear firing tweeters. Thanks Kevin!



SouthernCA,

I had always been disappointed with the timbral degradations from all such speakers, which led me to the Mirage M1 bipolar design, which was intended to provide the huge spacious sound that could be achieved with large panel speakers, but without the deficiencies in timbre.

Interesting, as one of the things I never could love about the Mirage Speakers was the general timbre of voices and instruments. Spacious and open, but to my ears always somewhat blanched or dull of timbre. Which makes me intrigued as to whether or not I might have chosen them in blind testing.





With the availability of competent multi-channel up-mixers, and sadly too-few true multi-channel recordings, I do not believe there is really any argument for large panel, dipole or bipole L, R, or C loudspeakers. By definition, they do not excel at reproducing a sense of space, since they are creating an artificial sense of space that cannot be deactivated.

To be clear: Are you arguing against the worth of dipole/bipole/panels only for multichannel surround set ups? Or are you arguing against their worth (in terms of adding spaciousness) for 2 channel listening as well?


(If the latter, I think one could reasonably disagree).


Thanks.
 

·
Registered
Joined
·
8,986 Posts
Very interesting info on the use of (and non use of) rear firing tweeters. Thanks Kevin!


Interesting, as one of the things I never could love about the Mirage Speakers was the general timbre of voices and instruments. Spacious and open, but to my ears always somewhat blanched or dull of timbre. Which makes me intrigued as to whether or not I might have chosen them in blind testing.

To be clear: Are you arguing against the worth of dipole/bipole/panels only for multichannel surround set ups? Or are you arguing against their worth (in terms of adding spaciousness) for 2 channel listening as well?

(If the latter, I think one could reasonably disagree).

Thanks.

I just might do that.

And I'm not minimizing the placement difficulties of getting them there.
 

·
Registered
Joined
·
6,013 Posts
I can personally attest to the inadvisability of applying RC above a certain frequency.
When ARC was initially released I followed the default frequency limit of 5 kHz in my acoustically treated room.
2-channel, multi-channel, music, soundtracks - everything sounded more coherent, especially the lower frequencies.

I then made the mistake of reading the Anthem D2 thread, where people were cranking the frequency all the way to 20 kHz and raving about the sound.
Soo, I recalibrated to 20 kHz and talked myself into thinking it sounded "better".
A classic case of confirmation bias.

A little while later, watching the Omaha Beach landing scene in Pvt. Ryan with the bullets' metallic ricochets off the landing craft,
all of a sudden something didn't sound right. What happened to the high end?

Pause the film and look behind the AT screen at the L/C/Rs.
Pow! Ribbon tweeter in the center channel blown to shreds and one of the Excel mid-woofs frozen fully extended forward.
Got the ribbon replaced and ordered another woofer from Madisound.
It turned out the mid-woofer was only slightly stuck, not a welded voice coil like I thought.
Not only did ARC crank up the tweeter trying to recreate a flat line, there were other anomalies, like the woofer behavior, that were just weird.

When the tweeter was repaired I read Sean Olive's write-up about an RC software test/comparo he did.

Re-calibrated, set the limit at 500 Hz, and boom. System never sounded better. Lesson learned.
DUH
 

·
Registered
Joined
·
8,499 Posts
So speakers aren't flat in an anechoic chamber, as far as I know... but I do run some REW graphs every 6 months, it seems... would the attached waterfall be meaningless? I see my fridge humming at 55 Hz or so... what is recommended if speakers are not flat in the chamber?


I have no clue on REW or Doc Floyd's book... so I ask a bunch of dumb repetitive questions... sorry.
 


·
Registered
Joined
·
1,198 Posts
Even with XT32, I thought the audyssey sound was horrible. An improvement over XT, but, thin bass, weird sounding highs. Ruined the sound in my old treated room, then in another untreated room. Made every speaker I bought sound the same, basically, looking back.

ARC isn't perfect, but it's definitely an improvement.

I may not be the smartest person, but, I understood how to set up Audyssey and nothing helped it. No matter how many REW measurements I ran, and iterations made.

I'm sure Dr. Sean Olive can more than manage. :rolleyes:
 

·
Registered
Joined
·
25 Posts
I admit that I never read the paper in its entirety. I am not an AES member and I'm not going to pay $35 for the privilege of reading one paper. However, I have seen the PowerPoint you released, and I've read your blog post about it.




It is clear to me that you don't fully understand how Audyssey works. Audyssey measures the IN-ROOM response of the subwoofer, which is a completely different measurement than the LF capability of the subwoofer. If it set the crossover to 150 Hz, it must have found the F3 of that subwoofer, in that room, with that mic, to be 150 Hz. Audyssey will not boost the response below the measured in-room F3 because it assumes that it would be over-driving the sub in a range it's not capable of reproducing. But that was not the problem in this situation. It was a matter of the transfer function from the subwoofer to the measurement mic. If Audyssey saw a roll-off below 150 Hz, and you knew that the subwoofer was capable of much deeper output, the proper intervention would NOT have been to lower the crossover to 80 Hz. As I explained previously, one should not lower the crossover after running Audyssey as Audyssey will provide no correction below the measured F3. The proper intervention would have been to investigate WHY Audyssey was measuring such a high F3 on the subwoofer. Maybe it was placement of the subwoofer. Maybe it was the listening position. I can't say for sure as I wasn't there 10 years ago when the study was done. It appears that all the other systems found some similar transfer function issues as well. The fact that they all rolled off at 50 Hz tells me that there was some issue with the transfer of sound from the subwoofer to the mic. Clearly this problem wasn't corrected for on any of the other systems, but it was corrected by the boost you added for the Harman system. I'm sorry, but I can't find a way to say that doesn't add an inherent element of bias in the test.




The identities were eventually released. That is how I know them. Even so, if this was research just for the sake of enhancing the knowledge base, then it would have behooved you to ensure that it was the most accurate and unbiased test possible. That would mean ensuring that every one of these systems was OPTIMIZED before making any kind of judgements about them, especially if one of the systems was optimized and the others weren't. It's very clear to me that Audyssey was not optimized, because if it had been, you would have had a much different measurement result, and then a much different preference result. I suspect none of the other systems were optimized either. Why add knowledge to a knowledge base if that knowledge doesn't reflect what the systems are truly capable of?



What I took from this study was a high level of skepticism about Harman's research. Everything that has followed has shown that, whenever Harman does a preference test, the Harman products always win. Given my general skepticism about manufacturer-funded studies, which I explained earlier, my skepticism about Harman's listening preference studies has not decreased.



Craig
You are certainly entitled to your opinion about our research, even though by your own admission it is based on a paper you haven't read in its entirety. If you want to form a more educated opinion, I would suggest you read the paper and some of the other 50+ papers we've published in the AES. The papers are generally peer-reviewed and well-received, and some have even won awards for "best paper".

You say you cannot afford to pay $35 for a single paper. AES memberships can be purchased for $125 a year, and you can access and read all of our papers and thousands of others in the AES e-library. That's a pretty good deal and investment, especially if audio is your profession (I'm not sure whether that is true or not). Here is the link where you can join:
http://www.aes.org/membership/

As I explained before, the intention of the study was to study the relationship between room correction approaches and their subjective effects -- not to hand-tweak every product to see how well we could make them perform. In my estimation, manually intervening and tweaking each product would have introduced a significant experimenter bias that would have made the results invalid, and even more controversial. Instead we tried as best we could to rely on the "default" automated processes to minimize bias, and see how well they perform for a typical user. My personal opinion is that room correction products will never be used by consumers unless they are simple to use and require little or no intervention. When this study was done, none of them were quite there yet. How many receivers have been sold with room correction that is never used by the consumer?


Your other "beef" with our research is that our products are always preferred. I'm not sure that is an accurate statement, and certainly that information is kept out of our AES papers where the identities of the products are removed from the results. In the room correction, the "Harman product" was not even a product but some Matlab code on a computer. For the past 6 years our research has focussed on testing listener preferences of headphones, and the product identities have also been removed from the results. In fact, the "preferred headphone" was a virtual headphone or product that didn't exist. It was equalized to a preferred frequency response or target that was tested and validated against 30+ Harman and competitor headphones. Like loudspeakers, we found there is a strong correlation between preferred sound quality and a set of technical measurements. Furthermore, the listener preference ratings can be accurately predicted to within 86-91%, based on measured frequency response, and the preferred frequency response is closely tied to the sound produced by an anechoically flat loudspeaker calibrated/equalized below 500 Hz in a reference listening room. To me, this objective predictor or metric of sound quality preference makes your point about bias in our listening tests/research rather moot.. If a computer can accurately predict how good a loudspeaker or headphone sounds that removes a significant bias from the testing process.

You can read more about it here:

A Statistical Model that Predicts Listeners’ Preference Ratings of Around-Ear and On-Ear Headphones
http://www.aes.org/e-lib/browse.cfm?elib=19436

A Statistical Model that Predicts Listeners’ Preference Ratings of In-Ear Headphones: Part 1—Listening Test Results and Acoustic Measurements
http://www.aes.org/e-lib/browse.cfm?elib=19237

A Statistical Model that Predicts Listeners’ Preference Ratings of In-Ear Headphones: Part 2—Development and Validation of the Model
http://www.aes.org/e-lib/browse.cfm?elib=19275
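For readers curious what such a model looks like in practice, here is a schematic sketch of the approach - a linear model operating on features derived from a measured frequency response. The feature definitions and coefficients below are placeholders for illustration, not the published model:

Code:
import numpy as np

def response_features(freqs_hz: np.ndarray, mag_db: np.ndarray) -> dict:
    """Simple frequency-response features of the general kind used in
    preference models: overall spectral slope, and the average deviation
    of the response from that smooth trend. Illustrative definitions only,
    not the exact features in the published papers."""
    octaves = np.log2(freqs_hz / freqs_hz[0])
    slope, intercept = np.polyfit(octaves, mag_db, 1)   # dB per octave, offset
    residual = mag_db - (slope * octaves + intercept)
    return {"slope_db_per_oct": float(slope),
            "mean_deviation_db": float(np.mean(np.abs(residual)))}

def predicted_preference(feat: dict) -> float:
    """Toy linear predictor: smoother responses score higher, and slopes far
    from a gentle downward tilt are penalized. The coefficients are made up
    for illustration; they are NOT the published model weights."""
    return (8.0
            - 1.5 * feat["mean_deviation_db"]
            - 0.8 * abs(feat["slope_db_per_oct"] + 1.0))

freqs = np.logspace(np.log10(100), np.log10(10000), 64)
smooth = -1.0 * np.log2(freqs / 100.0)                         # gentle -1 dB/octave tilt
bumpy = smooth + 2.0 * np.sin(np.linspace(0, 12 * np.pi, 64))  # +/-2 dB ripple added
for name, resp in (("smooth", smooth), ("bumpy ", bumpy)):
    feat = response_features(freqs, resp)
    print(name, {k: round(v, 2) for k, v in feat.items()},
          "-> predicted rating:", round(predicted_preference(feat), 2))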
 

·
Registered
75" Samsung Q80R QLED, Denon AVR3300, Revel F36, C25, W263, FV15HP x 2, ATV4K, Sony Blu Ray, Harmony
Joined
·
7,746 Posts
It doesn't make sense that people would want a flat response since human hearing is not flat. Then, there is also the huge portion of the public that has some level of hearing damage.
Not sure if I am arguing for or against you, but it seems as though you might be claiming that, since some people have hearing damage, they may not prefer accurate speakers... perhaps inaccurate speakers or an inaccurate response might be preferable to compensate for their hearing damage.

I would argue just the opposite. Hearing damage usually takes place slowly over time. So someone with hearing damage listens to music, with their damaged hearing, and probably considers THAT sound to be normal. Therefore, they would still prefer accurate speakers in order to present the music or content as they hear it given their hearing damage.
 

·
Registered
Joined
·
6,722 Posts
Even with XT32, I thought the audyssey sound was horrible. An improvement over XT, but, thin bass, weird sounding highs. Ruined the sound in my old treated room, then in another untreated room. Made every speaker I bought sound the same, basically, looking back.
Odd. If anything, Audyssey brings more thunderous bass than ARC or Dirac due to Dynamic EQ. It will boost down to at least 10 Hz to compensate for the Fletcher-Munson effect - and quite strongly, unless you dial in an offset.
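To illustrate the general behaviour being described - this is not Audyssey's actual Dynamic EQ curve, just a sketch of level-dependent loudness compensation with made-up numbers - the further below reference level you listen, the more low-frequency gain is added, and a Reference Level Offset simply shifts the point at which the compensation kicks in:

Code:
def loudness_compensation_bass_boost_db(master_volume_db: float,
                                        reference_level_offset_db: float = 0.0,
                                        db_boost_per_10db_below_ref: float = 3.0,
                                        max_boost_db: float = 12.0) -> float:
    """Illustrative low-frequency boost for a given playback level.

    master_volume_db: volume relative to reference (0 dB = reference level).
    reference_level_offset_db: shifts where compensation begins (0/5/10/15 dB
    offsets are typically offered on AVRs).
    The scaling and cap are made-up numbers, not any product's actual curve.
    """
    below_ref = -(master_volume_db + reference_level_offset_db)
    if below_ref <= 0:
        return 0.0
    return min(max_boost_db, db_boost_per_10db_below_ref * below_ref / 10.0)

for vol in (0, -10, -20, -30):
    for offset in (0, 10):
        boost = loudness_compensation_bass_boost_db(vol, offset)
        print(f"volume {vol:>4} dB, offset {offset:>2} dB -> bass boost {boost:4.1f} dB")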
 

·
Registered
Joined
·
6,722 Posts
Not sure if I am arguing for or against you, but it seems as though you might be claiming that, since some people have hearing damage, they may not prefer accurate speakers... perhaps inaccurate speakers or an inaccurate response might be preferable to compensate for their hearing damage.

I would argue just the opposite. Hearing damage usually takes place slowly over time. So someone with hearing damage listens to music, with their damaged hearing, and probably considers THAT sound to be normal. Therefore, they would still prefer accurate speakers in order to present the music or content as they hear it given their hearing damage.
Even without damage, human hearing is not flat. Take a look at the Fletcher-Munson curves, or "equal loudness" contours.

Still, I think the better approach is to have content recorded/mixed with that curve already in consideration, and played back with a flat loudspeaker - one that won't change the balance of that recording. Of course that is my opinion, but it doesn't make as much sense to create "flat" recordings, then try to optimize speakers for equal loudness.
 

·
Registered
Joined
·
1,198 Posts
Odd. If anything, Audyssey brings more thunderous bass than ARC or Dirac due to Dynamic EQ. It will boost down to at least 10 Hz to compensate for the Fletcher-Munson effect - and quite strongly, unless you dial in an offset.
Not a fan of Dynamic EQ at all and always turned it off. The surround is awful at lower volumes with the huge bass, then when I turn it up and the bass goes flat it's disappointing.

Good premise, but not great execution.
 

·
Premium Member
Joined
·
12,382 Posts
Even with XT32, I thought the audyssey sound was horrible. An improvement over XT, but, thin bass, weird sounding highs. Ruined the sound in my old treated room, then in another untreated room. Made every speaker I bought sound the same, basically, looking back.

ARC isn't perfect, but it's definitely an improvement.

I may not be the smartest person, but, I understood how to set up Audyssey and nothing helped it. No matter how many REW measurements I ran, and iterations made.

I'm sure Dr. Sean Olive can more than manage. :rolleyes:
Did you take any measurements before and after running Audyssey? Did you change anything based on those measurements? Neither did Dr. Olive.



Not a fan of Dynamic EQ at all and always turned it off. The surround is awful at lower volumes with the huge bass, then when I turn it up and the bass goes flat it's disappointing.

Good premise, but not great execution.
Did you try Reference Level Offset?
 

·
Premium Member
Joined
·
12,382 Posts
You are certainly entitled to your opinion about our research, even though by your own admission it is based on a paper you haven't read in its entirety. If you want to form a more educated opinion, I would suggest you read the paper and some of the other 50+ papers we've published in the AES. The papers are generally peer-reviewed and well-received, and some have even won awards for "best paper".

You say you cannot afford to pay $35 for a single paper. AES memberships can be purchased for $125 a year, and you can access and read all of our papers and thousands of others in the AES e-library. That's a pretty good deal and investment, especially if audio is your profession (I'm not sure whether that is true or not). Here is the link where you can join:
http://www.aes.org/membership/
I didn't say I couldn't afford it. I said I didn't want to. Audio is not my profession. Since it appears you haven't read the whole thread, I've explained my profession here:

https://www.avsforum.com/forum/89-speakers/3038828-how-choose-loudspeaker-what-science-shows-31.html#post57486116


As I explained before, the intention of the study was to study the relationship between room correction approaches and their subjective effects -- not to hand-tweak every product to see how well we could make them perform. In my estimation, manually intervening and tweaking each product would have introduced a significant experimenter bias that would have made the results invalid, and even more controversial. Instead we tried as best we could to rely on the "default" automated processes to minimize bias, and see how well they perform for a typical user. My personal opinion is that room correction products will never be used by consumers unless they are simple to use and require little or no intervention. When this study was done, none of them were quite there yet. How many receivers have been sold with room correction that is never used by the consumer?
You're between a rock and a hard place. I get that it could add some controversy if you were to start manipulating the results on all the systems. However, optimizing one system (your own) and not optimizing the other systems still adds an element of bias. Even if you couldn't optimize each system, you could have at least not handicapped the other systems. And making the mistake of inappropriately lowering the crossover on one system is more than a handicap. It's a detriment.


In addition, it wouldn't have been controversial to measure the response at the measurement position before running any of these systems and to make an effort to optimize that response, especially if you knew that you weren't going to make any interventions with any of the other systems.



How many receivers have been sold with room correction that helps the user improve the sound in their system?


Craig
 