Originally Posted by Ricci
This seems like a decent place for a discussion I'd like to have.
There always seems to be an argument over just how long an amplifier should be able to maintain output, ratings and some other things. Some of us would prefer that an amp could sit there and do a sine wave for an hour without cooking itself. Others feel that 30ms is enough duration. Most are probably somewhere in between. Do you use a real unregulated AC line? Lately it seems that manufacturers are siding with marketing and the bigger the number you can claim the better, so we are getting burst ratings of 4000w (EP4000), 6000w, 18000w, etc. A lot of people suggest that you "test the amp in the real world" and that's what I'd like to discuss.
Exactly what would be a valid real-world test procedure for an amplifier's performance that could be used for nearly any amplifier? I'm not talking about evaluating SQ differences with some kind of listening panel, but I guess subjective comments could be a part of it. I'm interested in determining real-world output limits both dynamically and long term, overload characteristics, reliability, long-term performance with heavy demand, etc. That kind of thing. I'm thinking it would be something involving a bank of high-power speakers, probably limited mostly to the bass range, as the default load? I don't think anyone wants to try max output or overload testing with tweeters and mids involved...
Perhaps a normal high power full range speaker to gauge the subjective performance of the amp with music?
How long should an amp be able to sustain RMS output? I would say that this is entirely based on the music itself.
I saw you mention drum transients later on...with uncompressed music, the crest factor can easily reach 30dB, which is to say peaks that require 1000x more power than the RMS level. So if you're listening to 90dB-sensitivity speakers at 90dB (1W), then you're going to need an amplifier that can drive the speakers with 1000W for the duration of the kick drum transient...so what, like half a second maybe? Since most speakers will introduce 3-10dB of compression on those transients, you probably won't notice much of a difference between 1000W and 500W, or even 250W, feeding the same speaker.
But now change your source material to some kind of synth bass sound that needs to deliver +20dB sustained for a few measures...so like 10 seconds? Now you need an amp capable of delivering 100W for 10 seconds in order for the amp not to introduce distortion when listening at a nominal 90dB.
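Just to show where those wattage figures come from, here's the dB math as a quick snippet; a minimal sketch, and the sensitivity, listening level, and headroom numbers are only the hypothetical ones used above:

```python
# Rough power-requirement math from the two scenarios above.
# Assumed (hypothetical) numbers: 90 dB/W/m speaker sensitivity, 90 dB average
# listening level, and either a 30 dB transient crest or +20 dB sustained bass.

def required_watts(sensitivity_db, target_db, headroom_db):
    """Watts needed to hit target_db + headroom_db with a speaker of the given
    1 W sensitivity (ignoring listening distance and speaker compression)."""
    db_above_1w = (target_db + headroom_db) - sensitivity_db
    return 10 ** (db_above_1w / 10)

print(required_watts(90, 90, 30))  # 1000.0 W for the 30 dB drum transient
print(required_watts(90, 90, 20))  # 100.0 W for the +20 dB sustained synth bass
```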
One of the advantages of class AB designs is that the rail voltage is often much higher than the RMS voltage the amplifier can deliver continuously. This is because class AB designs are usually limited by thermal performance. For this reason, a 100W class AB amp is probably going to sound more ballsy than a 100W class D amp. The class D amp will (should) be capable of delivering the RMS voltage that corresponds to the rail voltage (within a few percent). So where the 100W AB design might be able to reproduce a short 400W transient, the class D amp is always going to clip at 100W, regardless of RMS or peak transients. All that is really saying is that class D output stages are not limited by thermal performance.
With a class D output stage, you just need to make sure the RMS power is enough to handle your loudest transient...and then you don't have to worry about how long it can sustain that output. With an AB output stage, you can theoretically undersize it a bit if you know how it handles transients.
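As a concrete (made-up) illustration of that headroom difference, matching the 100W-vs-400W example above: say the "100W into 8 ohms" class AB amp actually carries roughly +/-80V rails, while the class D amp's rails are sized for exactly 100W:

```python
import math

# Illustrates the rail-voltage headroom point above. The numbers are invented:
# a "100 W into 8 ohm" class AB amp that happens to have +/-80 V rails, versus
# a class D amp whose rails allow exactly 100 W.

def sine_power_from_rail(v_rail, load_ohms):
    """Max undistorted sine power a given rail voltage allows (ignoring losses)."""
    v_rms = v_rail / math.sqrt(2)
    return v_rms ** 2 / load_ohms

print(sine_power_from_rail(80, 8))  # ~400 W short-term burst for the AB amp
print(sine_power_from_rail(40, 8))  # ~100 W, the class D amp's hard ceiling
# The AB amp is thermally limited to ~100 W continuous but has ~6 dB of burst
# headroom; the class D amp clips at 100 W no matter how short the transient.
```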
To save on costs, I'm sure commercial class D designers are gonna cut back on the power supply feeding the class D output stage, which in turn can be a source of compression at sustained output levels. However, this is not something that is inherent to the class D amplifier topology. The same would be true for class AB or any class of amplifier actually.
So with that in mind, I would propose measuring how long the amplifier can deliver various voltages, limited by 1% THD, into various resistive loads. The absolute instantaneous maximum would be determined by the unloaded rail voltage (so just measure the rail voltage). Then I would set the output 1dB below the level corresponding to the rail voltage and measure how long it can deliver output before reaching 1% THD. Then drop the output 1dB and continue dropping in 1dB increments every time 1% THD is hit. You can probably stop the test once it takes longer than 1 minute to reach 1% THD. And then you'll want to repeat at 10Hz, 100Hz, 1kHz, and 10kHz into resistive loads of 2, 4, 8, and 16 ohms (so sixteen total plots). What you're measuring is the output level limited by 1% THD over time, so what you should see is an exponentially decaying curve starting at the rail voltage until it reaches the steady-state capabilities of the amplifier.
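For what it's worth, here's roughly how I'd automate that stepping procedure; a minimal sketch, with the instrument-control functions stubbed out since they depend entirely on your gear, and with the outer loop over the four frequencies and four loads left out:

```python
import time

# Sketch of the stepped 1% THD duration test described above. The two stubs
# stand in for whatever your generator/analyzer automation actually provides;
# they are placeholders, not real library calls.

def set_output_dbv(level_dbv):
    raise NotImplementedError("replace with your generator's level control")

def measure_thd_percent():
    raise NotImplementedError("replace with your analyzer's THD reading")

def thd_limited_durations(rail_limit_dbv, step_db=1.0,
                          thd_limit_pct=1.0, give_up_after_s=60.0):
    """Step the drive level down 1 dB each time the amp hits the THD limit,
    recording (level in dBV, seconds survived) pairs. Stops once the amp
    holds a level for give_up_after_s without reaching the limit."""
    results = []
    level = rail_limit_dbv - step_db          # start 1 dB below the rail limit
    while True:
        set_output_dbv(level)
        start = time.monotonic()
        while measure_thd_percent() < thd_limit_pct:
            elapsed = time.monotonic() - start
            if elapsed > give_up_after_s:     # amp holds this level: done
                results.append((level, elapsed))
                return results
            time.sleep(0.1)
        results.append((level, time.monotonic() - start))
        level -= step_db                      # hit the THD limit: step down
```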
With this information, you can look at the amplitude envelope of your source material to determine if your amp can accurately reproduce the source material. If you want to be really crazy, you can even throw in the frequency content of the music against the impedance response of your speakers to determine where the breaking point is.
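Here's one way you might pull that amplitude envelope out of a track for comparison against the duration-vs-level curves; a rough sketch assuming the source is a WAV file, where the 50ms window, the 120dB-SPL-at-full-scale calibration, and the 90dB/W sensitivity are all made-up illustration numbers:

```python
import numpy as np
from scipy.io import wavfile   # assumes the source material is a WAV file

rate, data = wavfile.read("track.wav")       # hypothetical file name
mono = data.astype(np.float64)
if mono.ndim > 1:
    mono = mono.mean(axis=1)                 # fold stereo down to mono
mono /= np.abs(mono).max()                   # normalize to digital full scale

win = int(0.05 * rate)                       # 50 ms analysis windows
n = len(mono) // win
rms = np.sqrt((mono[:n * win].reshape(n, win) ** 2).mean(axis=1))
env_db = 20 * np.log10(np.maximum(rms, 1e-9))   # envelope in dB re full scale

# If full scale corresponds to, say, 120 dB SPL on a 90 dB/W speaker, the
# power demand in watts for each window is:
watts = 10 ** ((120 + env_db - 90) / 10)
print(watts.max(), np.percentile(watts, 99))
```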
Sadly, such a test would be difficult to do accurately without automated gear. The new APx525 series should be capable of it though. The other problem is that you have the thermal history from delivering the higher output levels before stepping down the output...so the results will look worse than a real-world scenario would. You might even want to measure the output when limited by 0.1% THD and 10% THD too.
The real problem with trying to find a single meaningful amplifier marketing spec is that it takes many variables to fully describe all the behavior of an amplifier, and the answer is incredibly dependent on the source material and speakers being used.
Anyways, just a thought about how I might go about measuring an amplifier and capturing some information about the time-domain nature of its performance.