The Audyssey Pro Installer Kit Thread (FAQ in post #1) - Page 4 - AVS Forum
post #91 of 5614 - 07-17-2011, 07:24 AM - bodosom
Quote:
Originally Posted by SoundofMind View Post

Distances (delay) of multiple speakers can only be correct at one spot, with Audyssey or without. That spot, referred to as MLP, is mic pos #1 for Audyssey.

True. It's impractical (but if you had a Kinect ...) to compensate for movement without something like a Realiser. A deeper question is how much it matters to the end result. The EQ is a single static transform on the signal and will be more correct in some spots than in others.

Quote:


On the topic of using systems for measurements, it has been repeatedly stated by Chris (audyssey) that to more accurately measure what Audyssey is doing, one needs to take multiple measurements at the same mic positions used for Audyssey. There was no mention of time alignment.

As I said I don't know if or how Audyssey compensates for phase changes.

Quote:


I have posted that for OmniMic I used a small grid, less than 2'X2' and for Audyssey I use a slightly larger one, maybe 3'X3'.

The example provided by markus767 is ~0.7 m (2.3 ft).
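
For scale, here is the arithmetic behind that spacing, using only the speed of sound (nothing Audyssey-specific); the 0.7 m figure is the one cited above.

Code:
# Rough arithmetic: how much delay error a listener/mic position offset introduces,
# and at what frequency that delay equals one full cycle of phase rotation.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 C

def delay_ms(offset_m: float) -> float:
    """Extra propagation time (ms) for a path that is offset_m longer."""
    return offset_m / SPEED_OF_SOUND * 1000.0

def full_cycle_freq_hz(offset_m: float) -> float:
    """Frequency at which the offset corresponds to one full wavelength."""
    return SPEED_OF_SOUND / offset_m

offset = 0.7  # m, the spacing cited above
print(f"{offset} m offset -> {delay_ms(offset):.2f} ms delay, "
      f"one full cycle at {full_cycle_freq_hz(offset):.0f} Hz")
# Roughly: 0.7 m -> 2.04 ms, a full cycle near 490 Hz

In other words, a 0.7 m offset is about 2 ms of delay error: a full cycle of phase rotation near 490 Hz, but only a small fraction of a cycle in the modal bass range.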
post #92 of 5614 - 07-17-2011, 08:01 AM - SoundofMind (Thread Starter)
^Interesting. OK, so what's the bottom line? What system, which measurements are to be trusted? I feel like I walked into the wrong classroom and haven't taken the prereqs.

What I'll do is continue to experiment with the Audyssey Pro settings (delete midrange compensation, reset crossovers to 80 Hz, select a different target curve), listen for changes, and post results.

Maybe I'll end up sending OmniMic back within 45 days, as it is not worth $300 if I can't understand and trust the graphs. Besides, the HT system sounds great now and I really need to catch up on the day-job stuff...

Yes, I still like playing with Dalis.

post #93 of 5614 - 07-17-2011, 11:34 AM - bodosom
Quote:
Originally Posted by SoundofMind View Post

OK, so what's the bottom line? What system, which measurements are to be trusted?

If you want to compare Audyssey results to other systems you need to measure from a single spot. As soon as you start doing averaging to emulate multiple Audyssey measurement positions, you've introduced things you cannot know or control because they're Audyssey secrets.
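
For illustration only, here is one naive way to average magnitude responses across mic positions (simple power averaging). It is a generic technique, not Audyssey's undisclosed position weighting, which is exactly the point: any emulation involves guesses like this.

Code:
import numpy as np

def power_average_db(responses_db: np.ndarray) -> np.ndarray:
    """Average several magnitude responses (rows = mic positions, values in dB)
    on a power basis and return the result in dB. Generic spatial average only;
    Audyssey's actual 'fuzzy logic' weighting is proprietary and will differ."""
    linear_power = 10.0 ** (responses_db / 10.0)   # dB -> power
    avg_power = linear_power.mean(axis=0)          # average across positions
    return 10.0 * np.log10(avg_power)              # power -> dB

# Three hypothetical sweeps of the same speaker, four frequency bins each
measurements = np.array([
    [72.0, 68.0, 75.0, 70.0],   # mic position 1 (MLP)
    [71.0, 66.0, 77.0, 69.0],   # mic position 2
    [73.0, 70.0, 74.0, 71.0],   # mic position 3
])
print(power_average_db(measurements))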
post #94 of 5614 - 07-17-2011, 01:22 PM - bodosom
A standard run sets my subs to different distances and trims. A pro run sets them to be the same. Is that typical?

Here's one sub versus two (in the front corners).
post #95 of 5614 - 07-17-2011, 01:46 PM - Kal Rubinson
Quote:
Originally Posted by bodosom View Post

If you want to compare Audyssey results to other systems you need to measure from a single spot. As soon as you start doing averaging to emulate multiple Audyssey measurement positions, you've introduced things you cannot know or control because they're Audyssey secrets.

Agreed. The approach I take is to make before/after-Audyssey-calibration measurements with an independent system and compare those to each other, not to the Audyssey measurements.

post #96 of 5614 - 07-17-2011, 02:15 PM - markus767
Quote:
Originally Posted by Kal Rubinson View Post

Agreed. The approach I take is to make before/after-Audyssey-calibration measurements with an independent system and compare those to each other, not to the Audyssey measurements.

That is the right thing to do. Unfortunately nobody publishes these kind of measurements online.

Markus

"In science, contrary evidence causes one to question a theory. In religion, contrary evidence causes one to question the evidence." - Floyd Toole
markus767 is online now  
post #97 of 5614 Old 07-17-2011, 03:11 PM
AVS Special Member
 
M Code's Avatar
 
Join Date: Mar 2003
Location: Joshua Tree, CA
Posts: 9,852
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 83 Post(s)
Liked: 117
Quote:
Originally Posted by Kal Rubinson View Post

Agreed. The approach I take is to make before/after-Audyssey-calibration measurements with an independent system and compare those to each other, not to the Audyssey measurements.


One point to keep in mind is that each Room EQ scheme has its own unique target final EQ response curve..
Once the system's calibration software is run, the AVR's processor takes those coefficients, plugs them in, runs the equations and then...
Out comes a revised room transfer function..
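
In transfer-function terms the correction is just multiplication: the corrected response is the measured room response times the EQ filter's response. A minimal sketch with a single RBJ-cookbook peaking biquad follows; the room curve and filter settings are hypothetical and imply no particular vendor's filter design.

Code:
import numpy as np
from scipy.signal import freqz

def peaking_biquad(f0_hz: float, gain_db: float, q: float, fs_hz: float = 48000.0):
    """RBJ audio-EQ-cookbook peaking filter, returned as normalized (b, a)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0_hz / fs_hz
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * A, -2.0 * np.cos(w0), 1.0 - alpha * A])
    a = np.array([1.0 + alpha / A, -2.0 * np.cos(w0), 1.0 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000.0
freqs = np.geomspace(20.0, 200.0, 200)

# Hypothetical room response: an 8 dB modal peak at 45 Hz over an otherwise flat response
room_db = 8.0 * np.exp(-0.5 * (np.log2(freqs / 45.0) / 0.25) ** 2)

# One corrective filter aimed at that peak
b, a = peaking_biquad(45.0, gain_db=-8.0, q=4.0, fs_hz=fs)
_, h = freqz(b, a, worN=freqs, fs=fs)
eq_db = 20.0 * np.log10(np.abs(h))

corrected_db = room_db + eq_db   # multiplying magnitudes = adding dB
print(f"Peak before EQ: {room_db.max():+.1f} dB, after EQ: {corrected_db.max():+.1f} dB")

The point is only that the post-EQ transfer function follows mechanically from the coefficients; what differs between vendors is how those coefficients (and the target curve) are chosen.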

The challenge is that each flavor of Room EQ software, be it Yamaha, Pioneer, Harman/Kardon, NAD, Meridian or the brands running Audyssey or Trinnov S/W, can/will sound different..

Maybe it's time for a respected publication or testing house to run a comparison face-off between the major EQ software players. The room and loudspeakers should be neutral, but one could eliminate the variable of amplifier contribution by measuring each output at the pre-outs..

Just my $0.02...
post #98 of 5614 - 07-17-2011, 04:11 PM - AustinJerry
Quote:
Originally Posted by bodosom View Post

A standard run sets my subs to different distances and trims. A pro run sets them to be the same. Is that typical?

Here's one sub versus two (in the front corners).

I see the same behavior. It looks like this is the way Audyssey designed the two pieces of software. I certainly hope Pro is establishing the correct delays for my two subs, which are not equidistant from the MLP.
post #99 of 5614 - 07-17-2011, 04:18 PM - pepar
Quote:
Originally Posted by M Code View Post

One point to keep in mind is that each Room EQ scheme has its own unique target final EQ response curve..
Once the system's calibration software is run, the AVR's processor takes those coefficients, plugs them in, runs the equations and then...
Out comes a revised room transfer function..

The challenge is that each flavor of Room EQ software, be it Yamaha, Pioneer, Harman/Kardon, NAD, Meridian or the brands running Audyssey or Trinnov S/W, can/will sound different..

Maybe it's time for a respected publication or testing house to run a comparison face-off between the major EQ software players. The room and loudspeakers should be neutral, but one could eliminate the variable of amplifier contribution by measuring each output at the pre-outs..

Just my $0.02...

The realist in me says that it'll never happen, and even the optimist agrees.
post #100 of 5614 - 07-17-2011, 04:22 PM - counsil
Quote:
Originally Posted by bodosom View Post

A standard run sets my subs to different distances and trims. A pro run sets them to be the same. Is that typical?

Not in my experience. Audyssey Pro has always set the distances differently. Maybe Audyssey Pro was hitting its limit on how much delay it could handle? Maybe that's part of the reason why Audyssey Pro isn't calibrating my subs very well?

post #101 of 5614 - 07-17-2011, 04:57 PM - bodosom
Quote:
Originally Posted by bodosom View Post

A standard run sets my subs to different distances and trims. A pro run sets them to be the same.

Quote:
Originally Posted by counsil View Post

Not in my experience.

I did another run and got different values for the distances but the trims are still the same. They still don't match standard XT32 but they're close. It's also not useful that the certificate graph only gives a single value for Subwoofer 1+2. And as long as I'm complaining it would be nice if all the trim numbers in the GUI matched the 'CV?' command.
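
For anyone who wants to cross-check the GUI numbers, here is a minimal sketch of pulling the channel trims over the network control port. It assumes a Denon-style telnet protocol (port 23, CR-terminated ASCII commands) and a 'CVxx nn' reply format; the host address, and the exact reply format for any given model, are assumptions to verify against that model's control-protocol document.

Code:
import socket

AVR_HOST = "192.168.1.50"   # hypothetical address of the AVR/pre-pro
AVR_PORT = 23               # Denon-style network control port (assumed)

def query_channel_volumes(host: str = AVR_HOST, port: int = AVR_PORT) -> list[str]:
    """Send 'CV?' and collect the raw channel-volume reply lines.
    Replies are expected to look like 'CVFL 50', 'CVSW 515', ... with 50 = 0.0 dB
    in 0.5 dB steps -- check your model's protocol document before trusting this."""
    with socket.create_connection((host, port), timeout=2.0) as s:
        s.sendall(b"CV?\r")          # commands are CR-terminated ASCII
        s.settimeout(1.0)
        data = b""
        try:
            while True:
                chunk = s.recv(1024)
                if not chunk:
                    break
                data += chunk
        except socket.timeout:
            pass                     # stop reading once the unit goes quiet
    return [ln for ln in data.decode("ascii", "replace").split("\r") if ln.startswith("CV")]

if __name__ == "__main__":
    for line in query_channel_volumes():
        print(line)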
post #102 of 5614 - 07-17-2011, 05:36 PM - M Code
Quote:
Originally Posted by pepar View Post

The realist in me says that it'll never happen, and even the optimist agrees.

Really depends upon the targeted outcome..
We know there are significant differences between the various EQ software packages, but the tests could be run in two parts..
Part 1 could be for system setup specifications including channel trims, crossover points/slopes and delays, but without EQ.
Part 2 could be a frequency sweep of the listening room, then an overlay chart for each showing what changes were calculated and the respective final transfer function curve (see the sketch below).
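
A minimal sketch of such an overlay, assuming the "before" and "after" sweeps are exported as plain two-column (frequency Hz, SPL dB) CSV files; the filenames are hypothetical.

Code:
import numpy as np
import matplotlib.pyplot as plt

def load_sweep(path: str):
    """Load an exported sweep saved as 'frequency_Hz, SPL_dB' rows."""
    data = np.loadtxt(path, delimiter=",", comments="#")
    return data[:, 0], data[:, 1]

# Hypothetical export filenames -- substitute your own measurements
freq_off, before_db = load_sweep("sweep_eq_off.csv")
freq_on, after_db = load_sweep("sweep_eq_on.csv")

plt.semilogx(freq_off, before_db, label="EQ off")
plt.semilogx(freq_on, after_db, label="EQ on")
plt.xlabel("Frequency (Hz)")
plt.ylabel("SPL (dB)")
plt.title("Room response overlay, before/after EQ")
plt.legend()
plt.grid(True, which="both", alpha=0.3)
plt.show()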

To keep the face-off between brands from becoming a finger pointing exercise, the publication can simply refer to each as Brand A, Brand B, Brand C..

In closing...
Since virtually all AVRs and processors include Room EQ software, each portraying itself as the end-all solution..
I can tell you we have installed all of the major brand AVRs and run their software, with widely varying end results....
At certain times the system just simply sounds better with the EQ OFF, which means it is doing more harm than good...

Just my $0.02...
post #103 of 5614 - 07-17-2011, 05:45 PM - pepar
Quote:
Originally Posted by M Code View Post

Really depends upon the targeted outcome..
We know there are significant differences between the various EQ software packages, but the tests could be run in two parts..
Part 1 could be for system setup specifications including channel trims, crossover points/slopes and delays, but without EQ.
Part 2 could be a frequency sweep of the listening room, then an overlay chart for each showing what changes were calculated and the respective final transfer function curve.

To keep the face-off between brands from becoming a finger pointing exercise, the publication can simply refer to each as Brand A, Brand B, Brand C..

In closing...
Since virtually all AVRs and processors include Room EQ software, each portraying itself as the end-all solution..
I can tell you we have installed all of the major brand AVRs and run their software, with widely varying end results....

Equipment reviews, as far as I know, are with the cooperation or at least consent of the manufacturers. What would they have to gain by participating in such a comparison? What they have to lose is obvious.
Quote:


At certain times the system just simply sounds better with the EQ OFF, which means it is doing more harm than good...

My experience is with two pre/pros with Audyssey, and I have done Audyssey Pro calibrations on both of them. There has never been any time that I even considered listening with Audyssey off.
post #104 of 5614 - 07-17-2011, 06:36 PM - counsil
Quote:
Originally Posted by bodosom View Post

I did another run and got different values for the distances but the trims are still the same.

My trims are always different as well.

post #105 of 5614 - 07-17-2011, 06:40 PM - Kal Rubinson
Quote:
Originally Posted by M Code View Post

One point to keep in mind is that each Room EQ scheme has its own unique target final EQ response curve..
Once the system's calibration software is run, the AVR's processor takes those coefficients, plugs them in, runs the equations and then...
Out comes a revised room transfer function..

The challenge is that each flavor of Room EQ software, be it Yamaha, Pioneer, Harman/Kardon, NAD, Meridian or the brands running Audyssey or Trinnov S/W, can/will sound different..

Maybe it's time for a respected publication or testing house to run a comparison face-off between the major EQ software players. The room and loudspeakers should be neutral, but one could eliminate the variable of amplifier contribution by measuring each output at the pre-outs..

Just my $0.02...

I do agree but it is unlikely that the investment in time and facilities will be made.

post #106 of 5614 - 07-17-2011, 06:49 PM - AustinJerry
Quote:
Originally Posted by counsil View Post


My trims are always different as well.

Bodosom, perhaps I misunderstood your OP? When I run a standard XT32 calibration, the distances for my two subs are set to 11.2 and 13.1, which is very close to what I actually measure. When I run Pro, it shows the two distances as both being 13.6. The trim values for both XT32 and Pro are the same. Is it the difference in the distance settings that you are questioning?
post #107 of 5614 - 07-17-2011, 07:39 PM - bodosom
Quote:
Originally Posted by AustinJerry View Post

Bodosom, perhaps I misunderstood your OP? When I run a standard XT32 calibration, the distances for my two subs are set to 11.2 and 13.1, which is very close to what I actually measure. When I run Pro, it shows the two distances as both being 13.6. The trim values for both XT32 and Pro are the same. Is it the difference in the distance settings that you are questioning?

No, initially the distances and trims -- per the GUI -- were the same and wrong. I did another run and the distances changed and are different; the trims changed but are the same for each subwoofer. None of the values quite match a standard calibration (see below). I'm distressed that this is so fiddly.

Of course there is good news:
The estimated post EQ response of the pair is much better than a single sub although not so much in the working range.
The (higher) subwoofer level now matches the other speakers.
post #108 of 5614 - 07-17-2011, 07:50 PM - M Code
Quote:
Originally Posted by pepar View Post

Equipment reviews, as far as I know, are with the cooperation or at least consent of the manufacturers. What would they have to gain by participating in such a comparison? What they have to lose is obvious.

Keep in mind that the publications already have products for review; they could just run additional testing under controlled conditions for the EQ article. As mentioned, the brands don't have to be identified, but the objective of the article would not be to rank by brand but rather to illustrate and elaborate on their differences, then let the consumer decide.

Quote:


My experience is with two pre/pros with Audyssey, and I have done Audyssey Pro calibrations on both of them. There has never been any time that I even considered listening with Audyssey off.

I too have the Audyssey Pro Kit, plus the hands-on experience of running each of the mentioned EQ systems in real-world installations. Even though I have my own conclusions, as posted previously I am not going to rank one against another, but suffice it to say that there are significant sonic differences..
This is especially crucial in how each EQ package addresses the challenging low frequencies; if the listening room's LF nodes/peaks are not smoothed out, they can/will corrupt the system's resolution, significantly masking its potential sonic performance....
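
Those LF peaks and nulls sit at predictable modal frequencies for a rectangular room; here is a minimal sketch of the standard axial-mode formula f(n) = n*c/(2L), with hypothetical room dimensions.

Code:
# Axial room modes of a rectangular room: f(n) = n * c / (2 * L),
# where c is the speed of sound and L is the dimension along that axis.
SPEED_OF_SOUND = 343.0  # m/s

def axial_modes(length_m: float, count: int = 4) -> list[float]:
    """First few axial-mode frequencies (Hz) along one room dimension."""
    return [n * SPEED_OF_SOUND / (2.0 * length_m) for n in range(1, count + 1)]

# Hypothetical 6.0 x 4.5 x 2.7 m room
for axis, dim in (("length", 6.0), ("width", 4.5), ("height", 2.7)):
    modes = ", ".join(f"{f:.0f}" for f in axial_modes(dim))
    print(f"{axis} ({dim} m): {modes} Hz")
# Roughly: length 29/57/86/114 Hz, width 38/76/114/152 Hz, height 64/127/191/254 Hz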

From experience we know there are significant sonic differences between rooms and loudspeakers, and smaller sonic differences between cables and electronics. However, for the Room EQ systems there are major audible differences, even when rerunning the same software and moving the microphone very slightly....
But again, which one is correct..

Just my $0.02...
post #109 of 5614 - 07-17-2011, 08:03 PM - noah katz
Quote:
Originally Posted by M Code View Post

Keep in mind that the publications already have products for review; they could just run additional testing under controlled conditions for the EQ article. As mentioned, the brands don't have to be identified, but the objective of the article would not be to rank by brand but rather to illustrate and elaborate on their differences, then let the consumer decide.

How would a consumer make a choice w/o knowing which is which?

But I agree this will never happen, unless by a highly motivated non-pro, in which case the brands would presumably be identified.

Noah
post #110 of 5614 - 07-17-2011, 08:18 PM - M Code
Quote:
Originally Posted by noah katz View Post

How would a consumer make a choice w/o knowing which is which?

You are putting the cart before the horse..
As of now the majority of users hit the button and run whatever Room EQ software is available in their AVR or processor. They have no basis of comparison other than that it either sounds good or sounds bad...
Just because it sounds different with Room EQ activated doesn't mean it is correct or more accurate...

Quote:


But I agree this will never happen, unless by a highly motivated non-pro, in which case the brands would presumably be identified.

I would not say never..
In today's electronic guru world, there are innovative students/enthusiasts who, with clever PC software, could actually produce credible results..
In fact..
A suitably challenged university class in an electronic engineering school could be a possible place to consider doing this...

Which, by the way, was instrumental in how Audyssey got its start at USC..

Just my $0.02..
post #111 of 5614 - 07-18-2011, 01:42 AM - markus767
Quote:
Originally Posted by M Code View Post

Maybe it's time for a respected publication or testing house to run a comparison face-off between the major EQ software players.

http://seanolive.blogspot.com/2009/1...uation-of.html

Discussion: http://www.avsforum.com/avs-vb/showthread.php?t=1192916

Markus

"In science, contrary evidence causes one to question a theory. In religion, contrary evidence causes one to question the evidence." - Floyd Toole
markus767 is online now  
post #112 of 5614 Old 07-18-2011, 06:09 AM
AVS Addicted Member
 
noah katz's Avatar
 
Join Date: Apr 1999
Location: Mountain View, CA USA
Posts: 20,465
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 165 Post(s)
Liked: 148
Quote:
Originally Posted by M Code View Post

... elaborate their differences then let the consumer decide.

What would the consumer decide w/o knowing brands?

Noah
post #113 of 5614 - 07-18-2011, 06:14 AM - bodosom
Quote:
Originally Posted by M Code View Post

the objective of the article would not be to rank by brand but rather illustrate and elaborate their differences then let the consumer decide.

I really don't mean to drag out off-topic points but ...

I must be missing something. I've read the posts related to your suggestion and I'm still baffled. What metrics would you use to "elaborate their differences" that would be both useful yet not lead to a ranking?

Quote:
Originally Posted by noah katz
How would a consumer make a choice w/o knowing which is which?

Quote:
Originally Posted by M Code View Post

You are putting the cart before the horse.

Not really, assuming noah katz means the brand of RC software. Although the specific manufacturer and model also matter when they release patches to the RC software or find problems with the mic.
post #114 of 5614 - 07-18-2011, 10:50 AM - fitzcaraldo215
Quote:
Originally Posted by bodosom View Post

If you want to compare Audyssey results to other systems you need to measure from a single spot. As soon as you start doing averaging to emulate multiple Audyssey measurement positions, you've introduced things you cannot know or control because they're Audyssey secrets.

Yes, but single spot measurements are not very useful in acoustics, because they vary all over the room. Even your two ears are not in a single spot. And, if you move or turn your head slightly, that's a bunch of different spots.

I guess I have a hard time understanding this need to measure independently. Either you trust Audyssey or you don't. And, you will never duplicate their results independently with spot measurements. It is deliberately trying to EQ for an area, not a spot, because that is what we need for actual listening. That's what we bought in Audyssey. Yes, there are secrets in how they do their "fuzzy logic" spatial averaging. But, I tend to believe they have thought it through quite thoroughly and do a better job than most other approaches.

So, I personally think the thing to do is to do the Audyssey calibration as exactingly as you can, then listen. With Pro, you can tweak the target curves if you feel it necessary (I don't) until the sound is to your liking. End of story. Independent measurements only confuse the issue, as I see it, and it is doubtful they are "better".
post #115 of 5614 - 07-18-2011, 11:04 AM - fitzcaraldo215
Quote:
Originally Posted by M Code View Post

You are putting the cart before the horse..
As of now the majority of users hit the button and run whatever Room EQ software is available in their AVR or processor. They have no basis of comparison other than that it either sounds good or sounds bad...
Just because it sounds different with Room EQ activated doesn't mean it is correct or more accurate...

I would not say never..
In today's electronic guru world, there are innovative students/enthusiasts who, with clever PC software, could actually produce credible results..
In fact..
A suitably challenged university class in an electronic engineering school could be a possible place to consider doing this...

Which, by the way, was instrumental in how Audyssey got its start at USC..

Just my $0.02..

Yes, and a different approach is likely to yield different results because of differing measurement techniques and assumptions. Acoustics, starting with the measurements, is very complex and tricky. Which measurement is "right"?

Ultimately, only your own ears can be the judge of that. If you are able to go to live concerts of acoustic music, that should give you a basis to decide which comes closer to live sound, which is the real objective here. Trying to "optimize for best sound" via measurements is, I believe, impossible.
post #116 of 5614 - 07-18-2011, 11:09 AM - M Code
Quote:
Originally Posted by bodosom View Post

I really don't mean to drag out off-topic points but ...

I must be missing something. I've read the posts related to your suggestion and I'm still baffled. What metrics would you use to "elaborate their differences" that would be both useful yet not lead to a ranking?

Reread my post.
I specifically said that the tests should have a series of output curves.
1. A frequency sweep of the listening room's response indicating the targets needing improvement, such as LF peaks/nodes
2. Curve indicating the specific EQ adjustments such as frequency point & boost/cut, slope for the target improvements
3. Curve showing the final transfer function for each brand of EQ software

Next the consumer can review each of the curves and evaluate whether or not the specific EQ changes actually improve the final response for the listening room. Again, by using the pre-outs as the data source, the amplifiers are removed from the component chain.. This info simply provides more data for comparing one brand to another before deciding which one to purchase..

Also no brand needs to be mentioned; keep in mind the objective of the testing would be to inform the consumer what frequencies and parameters the Room EQ software is actually manipulating.. And compare this to other brands..
Think about this as providing the same type of competitive comparison information one uses when comparing loudspeakers and amplifiers. As one becomes more aware and sees the actual end result of running some of the more popular EQ software packages, they would be greatly surprised..
Especially for some of the more frequently mentioned ones..

As stated previously, my intent is not to bash one brand over another but to provide a technical disclosure about what the Room EQ schemes actually do to the final system's frequency response...
Although some users may accept the premise that the auto Room EQ software delivers the most accurate system performance..
I do not..

Just my $0.02..
post #117 of 5614 - 07-18-2011, 12:25 PM - pepar
Quote:
Originally Posted by M Code View Post

Keep in mind that the publications already have products for review; they could just run additional testing under controlled conditions for the EQ article. As mentioned, the brands don't have to be identified, but the objective of the article would not be to rank by brand but rather to illustrate and elaborate on their differences, then let the consumer decide.



How could the consumer decide if the brands were not revealed?

Jeff

edit: I see that Noah already teed off on this one ...
post #118 of 5614 - 07-19-2011, 05:56 AM - bodosom
Quote:
Originally Posted by M Code View Post

Reread my post....
... 3 Curve showing the final transfer function for each brand of EQ software...
... no brand needs to be mentioned, keep in mind the objective of the testing would be to inform the consumer What frequencies and parameters the Room EQ software is actually manipulating.. And compare this to other brands ...

So far three people don't understand this. Perhaps you're overloading the word "brand" or something else but it's not clear.

There are other problems as well but that's the most obvious one.

Since this is peripheral to using the Pro Kit I'll stop now.
post #119 of 5614 - 07-19-2011, 06:11 AM - bodosom
Quote:
Originally Posted by fitzcaraldo215 View Post

Yes, but single spot measurements are not very useful in acoustics, because they vary all over the room. Even your two ears are not in a single spot. And, if you move or turn your head slightly, that's a bunch of different spots.

In my room (and I suspect in general) the dramatic effects described don't happen. Of course I've looked at sweeps taken from the Audyssey measurement points. The differences are small. The differences between the three fronts are much greater.

Quote:


I guess I have a hard time understanding this need to measure independently. Either you trust Audyssey or you don't.

I don't think everyone has a binary view of things. I certainly don't.
Quote:


And, you will never duplicate their results independently with spot measurements. It is deliberately trying to EQ for an area, not a spot, because that is what we need for actual listening.

Perhaps I was unclear. The way you compare is to take your first three Pro measurements from the same spot. Load those filters and then compare to a sweep.

Quote:


So, I personally think the thing to do is to do the Audyssey calibration as exactingly as you can

Exactly. A common complaint among the cognoscenti is the need to be exacting to get ideal output.
Quote:


Independent measurements only confuse the issue, as I see it, and it is doubtful they are "better".

Well ... okay.

I've taken some pains to do multiple pro runs as well as "stacking" measurements (take three, stop, add three more, stop ... repeat). In my (small) room things don't change much. Sweeps (averaged or not) with Audyssey off versus on clearly show the improvement. Except when Audyssey fails and I've had it fail in a way that isn't quickly exposed by casual listening although it's immediately obvious with a sweep. So I always check after the first three.
post #120 of 5614 - 07-19-2011, 08:03 AM - pepar
Quote:
Originally Posted by bodosom View Post

In my room (and I suspect in general) the dramatic effects described don't happen. Of course I've looked at sweeps taken from the Audyssey measurement points. The differences are small. The differences between the three fronts are much greater.

I don't think everyone has a binary view of things. I certainly don't.
Perhaps I was unclear. The way you compare is to take your first three Pro measurements from the same spot. Load those filters and then compare to a sweep.

Exactly. A common complaint among the cognoscenti is the need to be exacting to get ideal output.
Well ... okay.

I've taken some pains to do multiple pro runs as well as "stacking" measurements (take three, stop, add three more, stop ... repeat). In my (small) room things don't change much. Sweeps (averaged or not) with Audyssey off versus on clearly show the improvement. Except when Audyssey fails and I've had it fail in a way that isn't quickly exposed by casual listening although it's immediately obvious with a sweep. So I always check after the first three.

Very quickly one realizes that even slightly different mic positions yield different .. sometimes more than slightly different ... results. As long as there are MultEQ and Pro kits, I will have them in my theater, but to be too cemented to reference when the results can be changed so easily puts the lie to the very meaning of reference.

What sayeth you, Gary J, to that?

Jeff
Tags: Denon AVR-4310CI Receiver, Audyssey