Originally Posted by mogorf
Originally Posted by batpig
I couldn't imagine a less clear sentence before the statement "Hope I'm clear"!
I can EASILY detect level differences of +/-3dB. I can level match speakers by ear with test tones much closer than that. If I bump up the center channel by 3dB (something that I can do instantly through the web application on my Denon, rendering audio memory transience moot) it is VERY audible. I'm sorry Feri but the theory you are pushing is just as wrong as it was in the Audyssey thread.
That is of course your opinion bp. Try to get the hang of the convention of +/- 3dB and you will u'stand what it really means.
BTW, it is not my theory.
I can't continue to ignore this misinformation (obviously Keith hasn't seen it yet or it wouldn't have stood unanswered for this long).
Why continue to spew this misinformation Feri? Is your selective memory SO selective that you've already conveniently forgotten how your 'theories' were soundly trashed and debunked in the other thread?
3 dB is NOT, and never has been, the industry standard for the minimum discernible change. 3 dB is acknowledged as a common change that everyone can perceive as noticeably louder or softer. 1 dB is the level commonly cited as being near the Just Noticeable Difference (JND), although it's possible to perceive level changes even smaller than that.
In addition, Roger Dressler (remember him? The guy who helped develop many of the Dolby standards and specs?) has also mentioned that when it comes to balance between the Left and Right channels, changes of less than 1 dB can be easily heard.
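For anyone who wants to see what these dB figures actually mean as ratios, here's a quick sketch (plain Python, not from anyone in this thread) using the standard definitions: dB = 20·log10 of a voltage ratio, or 10·log10 of a power ratio.

```python
import math

def db_to_voltage_ratio(db):
    # Voltage (amplitude) ratio implied by a dB change: dB = 20 * log10(V2 / V1)
    return 10 ** (db / 20)

def db_to_power_ratio(db):
    # Power ratio implied by a dB change: dB = 10 * log10(P2 / P1)
    return 10 ** (db / 10)

for change in (0.5, 1.0, 3.0):
    print(f"+{change:.1f} dB -> voltage x{db_to_voltage_ratio(change):.3f}, "
          f"power x{db_to_power_ratio(change):.3f}")
```

Run it and you'll see that a 1 dB bump is roughly a 12% voltage increase, while 3 dB is about a 41% voltage increase and nearly double the power. So the two figures are very different animals, which is the whole point.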
As for this +/-3 dB standard: it was adopted by speaker manufacturers as a tolerance that is not too difficult to attain with reasonable care. Speaker manufacturers can (and some do) build speakers to tolerances of +/-1.5 dB anechoic, but it generally means much higher costs. The +/-3 dB tolerance was unofficially adopted because it set some kind of standard (as opposed to crap speaker manufacturers that don't even bother attempting reasonably even frequency response) while still not being too difficult to meet.
Someone claiming that this +/-3 dB standard means people can't hear any change in level smaller than 3 dB is just as misguided as someone claiming that the vast majority of movies were filmed at 24 fps (frames per second) because humans can't notice any difference at higher frame rates. WRONG. The 24 fps standard was adopted for reasons similar to the +/-3 dB standard, i.e. MONEY. In the case of film, they were well aware that higher frame rates produced smoother, more realistic motion. They chose 24 fps because it was the lowest frame rate at which the majority of viewers weren't complaining about headaches and eye strain from jerky motion. A lower frame rate used less film stock (expensive at the time), which meant less time in post-production and processing and lower costs overall.
When serious reviewers are properly testing equipment, they use multimeters to measure the electrical output at the speaker terminals and match levels to within fractions of a volt.
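And fractions of a volt really do correspond to fractions of a dB. Here's a small illustrative sketch (the 2.83 V figure is just the traditional "1 watt into 8 ohms" reference voltage, used here as a hypothetical meter reading, not a number from this thread):

```python
import math

def level_difference_db(v1, v2):
    # dB difference implied by two voltmeter readings across the same load:
    # dB = 20 * log10(V2 / V1)
    return 20 * math.log10(v2 / v1)

# Hypothetical readings at the speaker terminals
print(level_difference_db(2.83, 2.90))   # a 0.07 V mismatch is only ~0.2 dB
print(level_difference_db(2.83, 4.00))   # it takes over a full volt to reach ~3 dB
```

In other words, matching channels to within a few hundredths of a volt puts you well inside the sub-1 dB region being argued about above.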