Originally Posted by Kal Rubinson
From a purely mathematical point of view, how you do this makes a difference. The iterative and subjective process you describe is one way. Executing a program that automatically combines data from multiple measurements based on an algorithm that objectively weighs amplitude and decay is another. I never said that one was superior, just that they are not the same nor will they generally result in the same endpoint.
I've spent many hours with the SMS-1 in several systems and, in fact, have used it to monitor the operation of other EQ systems. Its only real flaw is the limited display resolution and, in that regard, it may fail to distinguish audibly different corrections.
Well then you know that when you first ran the auto function in the SMS-1, it produced a better FR, but it also did unwanted things such as boosting nulls excessively, because the program was following a predetermined set of rules as it ran its sweeps, measured, and adjusted amplitude. It took your own tweaking to lower the bars and shift the filters to where they were needed most, based on your analysis of the visual data. You likely applied your own sense of what Q was necessary, or ran RoomEq to compute it for you. Your manual adjustment process, though it followed the same steps as the auto program, involved a level of sophistication and adaptation that no program can match, and in this case we are only talking about the frequency domain.
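To make the null-boosting problem concrete, here's a small Python sketch. This is my own illustration, not the SMS-1's actual code: a standard parametric peaking filter (RBJ audio-EQ-cookbook formulas) plus the kind of boost cap a careful operator applies by hand. The cap values are assumptions for illustration only.

```python
import math

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ audio-EQ-cookbook peaking filter; returns (b, a) normalized by a0."""
    amp = 10 ** (gain_db / 40)           # amplitude factor
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * amp
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * amp
    a0 = 1 + alpha / amp
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / amp
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def capped_correction(deviation_db, max_boost_db=3.0, max_cut_db=12.0):
    """The rule a naive auto-EQ skips: correct the measured deviation, but
    never boost a null by more than max_boost_db, because pushing power into
    a cancellation mostly wastes amplifier headroom and driver excursion."""
    correction = -deviation_db
    return max(-max_cut_db, min(max_boost_db, correction))
```

With the cap, a 10 dB null gets at most a 3 dB boost; a blind auto routine would ask for the full 10 dB.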
Now let's look at the Audyssey MultEQ, which reportedly uses fuzzy logic not only to adapt as you did with the manual settings, but to do it simultaneously in the time domain and across several seats. Keep in mind that fuzzy logic also uses a set of someone else's rules to solve problems, which you may not want solved. It's chic today to say that fuzzy logic can substitute for human decision-making and simplify control processes like the one we face when we try to optimize FR and time-delay issues. When I consider the scale of the problem of dealing with direct and reflected long-wavelength sound in a closed room, with a sub that will likely have some group delay, I fail to grasp how someone's simplified rules in a fuzzy-logic-based algorithm will optimize my acoustical problems in space and time.
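For anyone unfamiliar with what "a set of someone else's rules" means in fuzzy-logic terms, here's a toy controller in Python. The two rules, the membership breakpoints, and the output values are entirely made up by me to show the point: the "intelligence" is fixed thresholds somebody else chose, nothing more.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a, rises to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_cut_db(peak_db, decay_ms):
    """Toy two-rule fuzzy controller (invented rules and breakpoints):
    Rule 1: IF peak is LARGE AND decay is LONG  THEN cut 8 dB
    Rule 2: IF peak is SMALL OR  decay is SHORT THEN cut 2 dB
    Defuzzify with a weighted average of the rule outputs."""
    large = tri(peak_db, 3, 10, 20)
    small = tri(peak_db, -5, 0, 6)
    long_ = tri(decay_ms, 200, 500, 900)
    short = tri(decay_ms, 0, 100, 300)
    w1 = min(large, long_)      # fuzzy AND = min
    w2 = max(small, short)      # fuzzy OR  = max
    if w1 + w2 == 0:
        return 0.0
    return (w1 * 8.0 + w2 * 2.0) / (w1 + w2)
```

Note that every number in there encodes a designer's opinion about your room, set before the first sweep was ever run.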
"MultEQ uses Finite Impulse Response (FIR) filters for equalization that use several hundred coefficients to achieve much higher resolution in the frequency domain than parametric bands. Furthermore, by their nature, FIR filters simultaneously provide correction in the frequency and time domains. FIR filters had been considered to require too many computational resources. But Audyssey solved this problem by using a special frequency scale that allocates more power to the lower frequencies where it is needed the most."
So rather than rely on the required computational power, which is not available in the device and which in any event cannot approximate your adaptive response to the measurements, it uses a "special" frequency scale. It's no surprise that the Denons I have seen made a hash of low-level equalization in the two rooms I investigated. How can a special scale with a predetermined set of rules adapt to everyone's unique acoustics? The only light I see in this long, narrow tunnel is that the marketing sheet refers to using the device in conjunction with a personal computer. I hope that refers to using the computer's processor to augment the chip in the AS-EQ1 for an exponentially larger number of iterations. Based on my experience with the Audyssey in the Denons and with what has been described in the marketing literature, I do not see anything in the materials that would lead me to believe it can outperform your manual tweaking without a huge boost in computation.
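For reference, here's roughly what an FIR room-correction filter does, sketched in Python with numpy. This is a generic regularized inversion of a measured impulse response, not Audyssey's actual algorithm, and it does not model their warped frequency scale. It does show two things from the quote: a single FIR filter corrects amplitude and timing together, and the regularization constant is exactly the kind of baked-in rule that decides for you how hard to fight a null.

```python
import numpy as np

def fir_inverse(ir, n_taps=64, reg=1e-3):
    """Regularized frequency-domain inverse of a measured impulse response.
    Because the result is an FIR filter, it corrects magnitude AND phase
    in one pass. The reg term keeps deep nulls from demanding huge boost:
    a fixed rule the designer, not the listener, chose."""
    n = max(n_taps, len(ir))
    H = np.fft.rfft(ir, n)
    Hinv = np.conj(H) / (np.abs(H) ** 2 + reg)   # Tikhonov-style inversion
    return np.fft.irfft(Hinv, n)[:n_taps]

# Toy "room": a direct sound plus one strong reflection
room = np.zeros(32)
room[0] = 1.0
room[5] = 0.6
eq = fir_inverse(room, n_taps=64, reg=1e-4)
corrected = np.convolve(room, eq)   # near-perfect impulse: both FR and timing fixed
```

A real device has to do something like this per channel, in real time, with far longer responses, which is why the computational-budget question matters.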
The more I dig into this the more I appreciate Uli Behringer.