Originally Posted by Linkwitz Riley
A quote from Rod Elliott's L/T circuit build instructions: "The board is small, and can be installed into almost any existing preamp or power amp as desired. It requires ±15V at no more than ~10mA (opamp dependent)."
Is that something to be disregarded because of your statement in post #44? My power supply outputs almost 2½ times the current that Rod is specifying.
Current vs voltage.
An op-amp can provide, for practical purposes, as much voltage boost as you design in. The circuitry draws very little supply current, but it boosts the voltage of the incoming signal according to the circuit design. I guess all I can say here is what I already have, except to clarify the obvious: when I say 'clip the PS', I mean that if the boosted signal tries to swing beyond the supply rails of your power supply, it will be clipped and/or otherwise rendered lo-fi.
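To make the rail-clipping point concrete, here's a minimal sketch (Python) that checks whether a boosted signal will swing past ±15 V rails. The input level, boost amount, and headroom loss are assumed example figures, not measurements from this thread:

```python
def peak_out(v_in_peak, boost_db):
    """Peak output voltage after a gain stage of boost_db decibels."""
    return v_in_peak * 10 ** (boost_db / 20)

# Assumed example values:
RAIL_V = 15.0                # +/-15 V supply
USABLE_SWING = RAIL_V - 1.5  # typical op-amps lose a volt or two to the rails

v_in = 8.0    # e.g. a 2 V nominal sub-out level spiking 4x on a heavy scene
boost = 6.0   # dB of L/T boost at the frequency of interest

v_out = peak_out(v_in, boost)
print(f"Required swing: {v_out:.1f} V peak (usable: ~{USABLE_SWING:.1f} V)")
if v_out > USABLE_SWING:
    print("The boosted signal will clip against the supply rails.")
```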
In measuring the SW out of a typical AVR, if you play music at a 0 dB master volume level with the SW trim calibrated at 0, the voltage measured might be somewhere between 1-2 V. If you then play the bridge-collapse scene in WOTW, you might measure a jump to 4 or 5 times that voltage. Obviously a big difference.
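For reference, that 4-5x voltage jump is a big swing in dB terms. A quick conversion using the standard ratio-to-dB formula:

```python
import math

def ratio_to_db(v_ratio):
    """Convert a voltage ratio to decibels: dB = 20 * log10(ratio)."""
    return 20 * math.log10(v_ratio)

print(f"4x jump: +{ratio_to_db(4):.1f} dB")  # ~ +12.0 dB
print(f"5x jump: +{ratio_to_db(5):.1f} dB")  # ~ +14.0 dB
```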
As Josh and I touched on, there are many people who calibrate with a rumble tone, and many of them pay little heed to where the SW trim ends up. This is a hugely compromised method if the user has a MiniDSP or DCX and has implemented an L/T with 'x' dB of boost in the bandwidth that is omitted by the rumble tone. Then, after the fact, they bump the SW trim by +'x' dB to "run the subs hot". The arithmetic of that compromise is sketched below.
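Here's that arithmetic with assumed figures (a +6 dB L/T boost sitting below a rumble tone's band, then a +6 dB trim bump on top):

```python
# Hypothetical example: the rumble tone covers ~30-80 Hz, so calibration
# never "sees" an L/T boost applied below 30 Hz.
lt_boost_db = 6.0    # assumed L/T boost below the rumble tone's bandwidth
trim_bump_db = 6.0   # assumed "run the subs hot" trim increase

# In the band the rumble tone covered, only the trim bump applies:
print(f"30-80 Hz: +{trim_bump_db:.0f} dB over calibrated level")
# Below it, the two boosts stack:
print(f"<30 Hz:   +{lt_boost_db + trim_bump_db:.0f} dB over calibrated level")
```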
As Josh points out, setting the gain stage is everything when it comes to optimal performance from whatever subwoofer system you've built. Well, if you're designing your own L/T, you have the opportunity to control one link in the signal chain. And, regardless of the boost method used, the assumption should be that the signal being sent to the amplifier will cause the amplifier to clip (be driven past its full rated power).
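One way to sanity-check that assumption is to compare the boosted signal against the amplifier's input sensitivity (the input level that produces full rated output). A minimal sketch with assumed figures, not from any specific amplifier:

```python
def will_clip(v_signal_rms, amp_sensitivity_v):
    """True if the signal exceeds the input level that drives the
    amplifier to full rated power (its input sensitivity)."""
    return v_signal_rms > amp_sensitivity_v

# Assumed example figures:
amp_sensitivity = 1.4            # V RMS for full rated output
signal = 2.0 * 10 ** (6.0 / 20)  # 2 V RMS source with 6 dB of boost

print(f"Signal into amp: {signal:.2f} V RMS")
print("Driven past full power (clipping):", will_clip(signal, amp_sensitivity))
```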
I'm just saying, FWIW, that the heavy hitters of soundtracks that we all build subwoofers to enjoy will send an already hot voltage (several times the AVR's specification) to your L/T, which will then increase that voltage according to your design (or according to your fiddling with a MiniDSP or DCX or whatever device you use to effect a boosted signal) and send that to your amplifier. I've measured the voltage, and the spikes can be huge at reference-level playback.
So huge that no system that will reasonably fit our spaces and mains capabilities can handle it. So, some sort of limiting will be required. As Josh mentions (and I wholeheartedly agree), the limiting should be at the amplifier output stage and it should be of a level of sophistication that does not horribly distort the signal we all cherish.
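The post doesn't specify a limiter topology, so purely as an illustration of "limiting that doesn't horribly distort the signal", here's a sketch contrasting a hard clip with a gentler tanh-shaped soft clip; the ceiling value is an assumption:

```python
import math

def hard_clip(x, ceiling):
    """Brutal limiting: flat-tops the waveform, rich in harsh harmonics."""
    return max(-ceiling, min(ceiling, x))

def soft_clip(x, ceiling):
    """Gentler limiting: tanh compresses peaks gradually toward the
    ceiling, producing far less objectionable distortion near the limit."""
    return ceiling * math.tanh(x / ceiling)

ceiling = 10.0  # assumed output swing limit in volts
for v in (5.0, 10.0, 15.0, 20.0):
    print(f"{v:5.1f} V in -> hard: {hard_clip(v, ceiling):5.1f} V, "
          f"soft: {soft_clip(v, ceiling):5.2f} V")
```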
If you have to attenuate the boosted signal and then use your amplifier's gain to bring it back up, you'll raise the noise floor of an already noisy signal (this is basically how the LFE+10dB channel gets its headroom). That's even more incentive to use caution in designing the signal shaper. My solution is to create a dozen circuits, each with a different specific curve, that act as a limiting system when higher playback levels are desired. This way, the altered signal remains intact throughout the listening session, versus being constantly distorted to an unknown degree during the session. But it's still a method of limiting.
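The noise-floor penalty of pad-then-makeup gain staging is easy to quantify: every dB you pad the signal down is a dB lost against the downstream stage's own noise once the makeup gain restores the level. A sketch with assumed, purely illustrative figures:

```python
import math

def db(x):
    return 20 * math.log10(x)

# Assumed example levels (V RMS):
signal = 2.0         # program signal
amp_noise = 200e-6   # downstream stage's input-referred noise floor

for pad_db in (0, 6, 12):
    pad = 10 ** (-pad_db / 20)
    makeup = 1 / pad                 # amp gain restores the original level
    out_signal = signal * pad * makeup
    out_noise = amp_noise * makeup   # stage noise is amplified by the makeup gain
    print(f"pad {pad_db:2d} dB -> SNR: {db(out_signal / out_noise):.1f} dB")
```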
When you build a single circuit with no limiting, take care to center that circuit between too little and too much boost, leaning slightly toward the too-much side, and use an amplifier that can deal with the result from there (if that makes any sense).