Originally Posted by desertdome
Pink noise used for calibration is bandwidth-limited and typically covers 3 octaves. A peak needs to be about 1/3 octave wide before its increase will significantly affect the overall level of the pink noise being played. If you measured a flat frequency response across 3 octaves (say 250-2000 Hz, nine 1/3-octave bands) and the line was drawn at 80 dB, the actual output would be 89.54 dB when measured with an SPL meter. If one of the 1/3-octave bands were a "peak" 10 dB higher, the actual increase seen on the SPL meter would be only 3 dB. However, I've observed that peaks are usually much narrower than 1/3 octave. If using full-bandwidth pink noise (10 octaves) to calibrate, then a 10 dB peak that is 1/3 octave wide adds only about 1.14 dB to the overall level. See Adding decibels of one third octave bands
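The dB figures in the quote can be checked by summing band powers. A minimal sketch, assuming equal-energy 1/3-octave bands (nine bands for 3 octaves, thirty for 10 octaves):

```python
import math

def sum_spl(levels_db):
    """Combine incoherent band levels (dB SPL) by summing their powers."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# 3 octaves = nine 1/3-octave bands, each flat at 80 dB
flat = [80.0] * 9
print(round(sum_spl(flat), 2))                        # 89.54 dB overall

# Raise one band by 10 dB: the overall level rises only ~3 dB
peaked = [90.0] + [80.0] * 8
print(round(sum_spl(peaked) - sum_spl(flat), 2))      # ~3.01 dB

# Full-bandwidth pink noise: 10 octaves = thirty 1/3-octave bands
flat30 = [80.0] * 30
peaked30 = [90.0] + [80.0] * 29
print(round(sum_spl(peaked30) - sum_spl(flat30), 2))  # ~1.14 dB
```

The wider the noise bandwidth, the less a single narrow peak moves the meter, which is the quote's point.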
Conversely, just because you EQ doesn't mean you magically get less dB out of a speaker. Driver headroom is never lost. Signal headroom can be lost, but not in most setups using subs with built-in amps with gain control, or pro amps. Since this discussion is about pro amps, I'll use them as an illustration. Many pro amps require around 1 volt for maximum output (I've seen it range from 0.775 to 1.4 V). If you output 9 volts from your receiver, then the gain control on an amp with roughly 0.8 V input sensitivity needs to be set to -21 dB. Now you add some EQ in your system and the maximum output voltage drops to 5 volts. Adjust the gain knob to -16 dB and your amplifier can still deliver maximum power to your speakers. No headroom is lost!
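The gain staging above works out in a few lines. A sketch, assuming an amp input sensitivity of 0.8 V (within the 0.775-1.4 V range mentioned), which is what makes the -21/-16 dB figures come out:

```python
import math

def gain_setting_db(source_v, sensitivity_v):
    """Gain-knob attenuation so the max source voltage just reaches the
    amp's input sensitivity (full rated power, no wasted voltage)."""
    return -20 * math.log10(source_v / sensitivity_v)

SENS = 0.8  # assumed input sensitivity in volts for full output

print(round(gain_setting_db(9.0, SENS)))  # -21 dB before EQ
print(round(gain_setting_db(5.0, SENS)))  # -16 dB after EQ drops the peak to 5 V
# Either way the amp still sees its full input sensitivity at signal peaks,
# so it can still deliver maximum power: no headroom lost.

# Same idea for the miniDSP case: 2 V max output minus a 6 dB cut
print(round(2 * 10 ** (-6 / 20), 2))      # ~1.0 V, still enough for a 0.8 V amp
```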
With most pro amps, you can drop the entire signal at the source by 20 dB and still not lose any headroom. EQ will never drop the entire signal by that amount, so again no headroom is lost. Extra output voltage that you can't use or amplify isn't headroom.
A miniDSP with 2 V of maximum output can still lose about 6 dB across the entire signal and still be able to drive many pro amps to maximum power.
A few questions:
If I apply a -6 dB cut to an 89 dB efficient subwoofer, what SPL will it now produce at 1 m with 1 watt? The answer is "89 dB": the efficiency has stayed the same.
If I apply a -6 dB cut to an 89 dB efficient subwoofer, have I reduced the amp's ability to produce maximum power across its entire bandwidth? If the answer is "no", then I haven't lost any headroom.
Does EQ increase the noise floor? It can only raise it in the inverse shape of the filter, at the same frequencies. The noise floor of the signal is usually so low that a filter of 20 dB or less won't make it audible.
A final question for thought: How much headroom was lost when the music you listen to was mastered using EQ?
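A quick sanity check of the first two questions. The 89 dB/1W/1m figure is from the questions above; the -6 dB cut with matching +6 dB of makeup gain at the amp is an illustrative assumption:

```python
import math

SENSITIVITY = 89.0  # dB SPL at 1 m with 1 W input, from the example above

def spl_at_1m(power_w):
    """SPL at 1 m for a given electrical power into the driver."""
    return SENSITIVITY + 10 * math.log10(power_w)

# A -6 dB EQ cut upstream, compensated by +6 dB at the amp's gain knob,
# leaves the power reaching the driver unchanged:
cut_db, makeup_db = -6.0, 6.0
power_w = 1.0 * 10 ** ((cut_db + makeup_db) / 10)
print(spl_at_1m(power_w))  # 89.0 — the driver's sensitivity is untouched
print(spl_at_1m(100.0))    # 109.0 — the amp's max-power capability is also untouched
```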
I know the math, but my experience tells me the opposite. In my old home, I had a nasty, broad 60 Hz peak of over 10 dB (a room mode). When I ran a -20 dBFS sweep with REW (calibrated with an SPL meter), the peak registered at 85 dB on the trace, with most of the rest of the trace at or below 75 dB. When playing LF band-limited pink noise at -20 dBFS, my meter would hover around 85 dB, not any lower, even though the majority of the sweep was lower in level, IIRC. I did not measure with full-spectrum noise, though. Maybe my meter was defective, I don't know. But I have experienced over and over, in several different room setups, that the largest modal peak sets the SPL level the meter reads. If I EQ the peak out, the SPL drops. Maybe this is a modal-ringing phenomenon (nontrivial in small rooms at bass frequencies) making the meter read erroneously depending on integration time (slow vs. fast)? Or maybe I was actually using a different dBFS level than I thought for the noise measurements....
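That observation is actually consistent with the power-sum math: a broad peak contributes most of the total energy. A rough model, assuming the band-limited noise spans nine 1/3-octave bands with one at 85 dB and the rest at 75 dB (numbers from the description above):

```python
import math

def sum_spl(levels_db):
    """Combine incoherent band levels (dB SPL) by summing their powers."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

bands_with_peak = [85.0] + [75.0] * 8  # one broad modal peak, rest 10 dB down
bands_flat      = [75.0] * 9           # the same bands after EQing the peak out

print(round(sum_spl(bands_with_peak), 1))  # ~87.6 dB: the peak dominates the total
print(round(sum_spl(bands_flat), 1))       # ~84.5 dB: combined SPL drops once EQ'd
```

So a meter reading near the peak level, and an SPL drop after EQing the peak out, is what the band math predicts when the peak is broad relative to the noise bandwidth.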
Cutting that peak to make the trace flat, then raising the overall signal level, would produce more distortion during playback than cutting the peak less and using less overall boost to reach the same pink-noise SPL on the meter.
EQ decreases the headroom of the distortion-free playback system, not necessarily the amp itself. If you are feeding a speaker more power at a certain frequency, distortion rises at that frequency. If I apply cuts throughout the spectrum, then increase the overall signal level to meet reference level SPL, I need a more capable amp and/or speaker to play at the same distortion level, as I am asking for 'more' SPL everywhere except where I applied the cuts. Audyssey works in this fashion by my measurements, and it can lead to some real horrible noises coming from a less than capable sub that sounded 'OK' before Audyssey (the iteration in my Denon 2809Ci).
Most room correction algorithms allow as much as a 9 dB boost. That's roughly 8x the power needed when compared to an un-EQ'ed system.
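The power ratio for a 9 dB boost is a one-line check:

```python
# Power ratio for a +9 dB boost: 10^(dB/10)
print(round(10 ** (9 / 10), 1))  # 7.9 — just under 8x the amplifier power
```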
I understand that program material will rarely consist solely of the boosted frequencies at 0 dBFS, but the possibility is there. More and more high-level sweeps are used for LF effects in films. If you have boosted a particular band within such a sweep, you will know it straight away during playback, unless you allowed or planned for extra headroom to absorb the EQ changes.
Have you ever been to an IMAX digital screen calibrated with the new Audyssey MultEQ-XT based system they use? I have experienced it at two separate venues, and the sound is horribly distorted. My theory is that when fed an unprocessed signal, IMAX Digital signal/speaker chains can reach Reference Level with 'acceptable' distortion. But add the hundreds of cuts it literally applies, raise the overall signal level by 9 dB to compensate for all those cuts, and you have a recipe for disaster in a speaker chain that was barely capable to begin with. That's my theory as to why those theaters sound so bad....
JSS

Edited by maxmercy - 5/3/13 at 12:16pm