Originally Posted by qguy
In a perfect world, all amplifiers should sound the same, as the goal of the amplifier is simply to amplify the signal; it should not add to nor detract anything from the signal... Sadly we don't live in a perfect world, and compromises in manufacturing are made. Take for instance a capacitor or a resistor with a 5 percent tolerance: at maximum tolerances between two capacitors, one being +5 percent from value and the other -5 percent, we have a total of ten percent difference from two parts being used in both channels; multiply that difference across all the parts in the amplifier. The more expensive amplifiers use parts with tighter tolerances, and they even match parts which are already very close in tolerance. Transistors are gain matched to ensure balance between left and right channels. These are practices that the mass market manufacturer does not include in their process.
Electronic circuits have a property called sensitivity, which is how sensitive the performance of the complete piece of equipment is to variation in a given part. An amplifier is not equally sensitive to variations in all part values. Many of the parts in an amplifier can vary over a wide range while having a negligible effect on performance. Some parts, especially the transistors, can have important parameters such as their current gain that vary over a wide range (2:1 or more) while they are operating, and can vary from unit to unit by upwards of 6:1, while still having a negligible effect on overall amplifier performance.
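To see why a 2:1 swing in transistor gain barely matters, here's a quick sketch of the standard negative-feedback relation, closed-loop gain = A / (1 + A*beta). The specific numbers (open-loop gain of 100,000, feedback fraction of 1/21) are my own illustrative assumptions, not measurements from any particular amplifier:

```python
# Illustrative only: feedback desensitizes closed-loop gain to the
# open-loop (transistor-dependent) gain A.  A_cl = A / (1 + A*beta)

def closed_loop_gain(a_open: float, beta: float) -> float:
    """Closed-loop gain for open-loop gain a_open and feedback fraction beta."""
    return a_open / (1 + a_open * beta)

beta = 1 / 21  # feedback network chosen for a nominal ~21x stage (assumed value)

g_hi = closed_loop_gain(100_000, beta)  # healthy open-loop gain
g_lo = closed_loop_gain(50_000, beta)   # same stage with open-loop gain halved (2:1)

print(f"gain at A=100k: {g_hi:.4f}")
print(f"gain at A=50k:  {g_lo:.4f}")
print(f"change: {100 * (g_hi - g_lo) / g_hi:.3f} %")
```

Cutting the open-loop gain in half moves the closed-loop gain by only about 0.02 percent, which is exactly the low sensitivity described above.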
In fact, transistors are not gain matched to ensure channel balance. Most transistors in AVRs are microscopic parts inside op amp chips. The gain of an amplifier stage is usually set by a pair of resistors in the amplifier's negative feedback path. Historically, the biggest detriment to amplifier gain matching was the analog mechanical volume control, but for the past 10 years or so most AVRs have used electronic volume control chips with tolerances on the order of 0.5%, or about 0.05 dB.
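The 0.5% ↔ 0.05 dB equivalence is easy to verify with the usual decibel formula for a voltage ratio:

```python
# Convert a gain ratio to decibels: dB = 20 * log10(ratio)
import math

def ratio_to_db(ratio: float) -> float:
    return 20 * math.log10(ratio)

tol = 0.005  # a 0.5% gain tolerance, as quoted above
print(f"{ratio_to_db(1 + tol):.3f} dB")  # prints: 0.043 dB
```

So a 0.5% tolerance works out to roughly 0.043 dB, i.e. on the order of 0.05 dB as stated.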
If you ever actually get around to measuring the performance of real-world audio amplifiers, you will find that there are indeed slight variations in how nominally identical power amplifiers operate. It's easy enough to find gain and distortion differences of a few percent per channel. IOW, one channel might have 0.012% distortion under a certain set of circumstances, and another channel might have 0.011% distortion under the same circumstances. One channel might put out 98 watts under a certain set of circumstances while another channel puts out 99 watts.
Stage gains are usually set by 1% resistors, which means that channel gains can vary by up to 2% (0.2 dB) per channel, though most channels will be within 0.5% (0.05 dB). Again, all trivial differences, but differences that can be measured reliably with modern test equipment.
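The up-to-2% figure can be checked by stacking both 1% feedback resistors at opposite extremes in the standard non-inverting gain formula G = 1 + Rf/Rg. The 20k/1k values below are my own illustrative choice for a nominal 21x stage:

```python
# Worst-case stage gain error from 1% feedback resistors in a
# non-inverting stage, G = 1 + Rf/Rg.  Resistor values are illustrative.
import math

def gain(rf: float, rg: float) -> float:
    return 1 + rf / rg

rf_nom, rg_nom = 20_000.0, 1_000.0            # nominal 21x stage (assumed)
g_nom = gain(rf_nom, rg_nom)
g_worst = gain(rf_nom * 1.01, rg_nom * 0.99)  # both 1% parts at opposite limits

err_pct = 100 * (g_worst - g_nom) / g_nom
err_db = 20 * math.log10(g_worst / g_nom)
print(f"worst-case error: {err_pct:.2f} %  ({err_db:.3f} dB)")
```

This comes out to roughly 1.9% or about 0.17 dB, consistent with the "up to 2% (0.2 dB)" figure, and that is the worst case with both parts at their limits; typical channels land much closer to nominal.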
I don't know where you obtained your ideas about amplifiers and parts sensitivities, but it's not from this world, and hasn't been for at least 40-50 years. For example, even in tube amplifiers with any pretensions of quality, going back to the 1960s and earlier, the gain was not allowed to be controlled by variations in individual tubes, because the gain of a tube varies by dozens of percentage points over its life, or even over a few years of operation. Solid state equipment has always been designed so that its performance is not controlled by the gain of individual transistors.
It is true that manufacturers graded transistors (and tubes) back in the early days, but the goal was not obtaining the right gain; it was obtaining consistent bias points and the desired distortion and noise performance.
In this day and age, resistors with 1-2% tolerance are commodity parts and are not reserved for the most expensive equipment. Resistors with even tighter tolerances are generally not needed for audio components; they show up in devices like test equipment.