Originally Posted by craig john
Let me try this again...
When Audyssey applies a set of filter taps to a channel, it implements a group of boosts and cuts at various frequencies. If, for example, the cuts outweigh the boosts across the band, the "average" level of the whole signal will be reduced. Audyssey then raises the average level of the whole signal to compensate. Audyssey does this by looking at the entire bandwidth of the signal, or "chirp."
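To make that averaging concrete, here's a small sketch with hypothetical per-band gain numbers (not Audyssey's actual filter data): the average has to be taken in power terms, not by averaging the raw dB values, and the makeup gain is simply the negative of the resulting average shift.

```python
import math

# Hypothetical per-band EQ gains in dB; the cuts outweigh the boosts.
band_gains_db = [+1.0, -3.0, -4.0, +2.0, -5.0, -2.0]

# Average the *power* change, not the raw dB values: convert each gain
# to a linear power ratio, average the ratios, then convert back to dB.
avg_ratio = sum(10 ** (g / 10.0) for g in band_gains_db) / len(band_gains_db)
avg_level_change_db = 10 * math.log10(avg_ratio)  # negative here: a net cut

# The trim that restores the original average level:
makeup_gain_db = -avg_level_change_db
```

With these example numbers the net cut works out to roughly 1 dB, so about 1 dB of makeup gain brings the channel's average level back to where it started.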
When you measure the SPL of a bandwidth-limited "noise" signal (the receiver's internal test tones), and Audyssey has made changes to the frequency response within that bandwidth, but you're not measuring those changes because Audyssey is not engaged in the signal path, you end up with a measurement that is different from what Audyssey measured. Therefore, if you reset the levels based on these measurements and then re-engage Audyssey, you now have a different calibration than Audyssey's, one that doesn't take Audyssey's filters into account.
Why would anyone feel the need to do this? Audyssey measures the levels for all channels using the same mic, in the same spot with the same test signal. It then calculates the filters for all the channels. Finally it compensates for the average level changes the filters induce, and sets the level trims to ensure that all the channels are outputting equal *average* signal levels. It calculates them all the same way, using the same algorithm. Why would anyone think they're wrong... and then re-set them without taking the filters into consideration?
If you make changes to the trim levels after running Audyssey, using the receiver's internal test tones, it means you are uncorrecting the corrections Audyssey has made to compensate for its filters. If you change a channel level by 1.5 dB using the internal test tones, then when you turn the test tones off and the Audyssey filters are re-engaged, that channel's calibration is off by 1.5 dB from the rest of the channels.
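As a back-of-the-envelope sketch of that bookkeeping (the 1.5 dB figure is just an example value, not a measurement):

```python
# Suppose Audyssey's filters lowered a channel's average level by 1.5 dB,
# so it set +1.5 dB of trim to compensate (hypothetical numbers).
filter_avg_change_db = -1.5
audyssey_trim_db = -filter_avg_change_db            # +1.5 dB

# The internal test tones bypass the filters, so an SPL meter reads the
# channel 1.5 dB hot, and the user pulls the trim back down to "fix" it:
user_trim_db = audyssey_trim_db - 1.5               # back to 0.0 dB

# With Audyssey re-engaged, the channel now plays at
# (filter change + user trim) relative to the other channels:
playback_error_db = filter_avg_change_db + user_trim_db   # -1.5 dB
```

With Audyssey's own trim left alone, the filter change and the trim cancel and the channel plays at 0 dB relative to the others; after the "correction," it plays 1.5 dB low.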
The way to confirm this is to use external test tones that run through the Audyssey filters. I have 2 such test tone discs: Avia and The 5.1 Audio Toolkit. I have checked the calibration with the internal test tones and with both test discs. Here are the results:
Using the internal test tones results in a 4.5 dB difference in calibrated levels between channels. Using 2 different test discs, with Audyssey engaged for both, results in 1 or 1.5 dB of difference. If we could measure and average the exact same way Audyssey does (sweeps + averaging), I'm sure we would see that Audyssey correctly compensated for the EQ filters it applied and set the levels correctly, and that using the internal test tones afterwards results in un-correcting the corrected levels. I know it seems intuitive to want to change the levels when your SPL meter shows that they don't appear to be correct. Nonetheless, they ARE in fact correct, and it's the readings you get from the SPL meter that are not. There's nothing wrong with your SPL meter; you're just not using the proper test signals.
Audyssey doesn't get everything right. However, one thing it does *very* well is to get the level calibration correct. To override that, based on measurements taken with a different mic, different test signals and without the Audyssey filters engaged is...
... well... I'll just say that I don't suggest it.
PS. Sorry for going off-topic in the JTR thread.