Originally Posted by rdclark
You make no attempt to figure out what's going on, but confidently declare a winner. It could be (and probably is) as simple as a 1 or 2dB difference in the output level of the AVR's decoder stage. No A/B comparisons, no attempts to level match, no science, no logic.
I will say it again: subjective posts like this one are worse than useless. They're harmful. They contribute no actual data, and are based on nothing but an impression. It's like saying "I can see farther on Tuesdays than on Wednesdays," without talking about how much cloud cover there happened to be. People read these posts looking for actual information, and don't always know enough to sort the science from the superstition.
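(Editor's aside on the quoted claim: for readers wondering what a "1 or 2 dB" decoder-stage offset actually amounts to, here is a minimal sketch — generic Python, not tied to any particular AVR — converting a dB offset to a linear amplitude ratio. The function name is illustrative.)

```python
import math

def db_to_ratio(db: float) -> float:
    """Convert a level offset in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

for db in (1.0, 2.0):
    ratio = db_to_ratio(db)
    print(f"{db} dB offset = {ratio:.3f}x amplitude ({(ratio - 1) * 100:.1f}% louder)")
# 1.0 dB offset = 1.122x amplitude (12.2% louder)
# 2.0 dB offset = 1.259x amplitude (25.9% louder)
```

Even a 1 dB offset is a ~12% amplitude difference, which is why level matching is raised whenever two playback paths are compared by ear.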
How in the world did you deduce that I made no attempt to figure out what was going on? Your obvious tendency to jump to conclusions would be enough for me to disregard your opinion on the matter, but there are other factors that can come into play.
1 - Some receivers can apply secondary processing (meaning beyond simple decoding) to a bitstreamed input, but cannot do so when fed multichannel PCM. In fact, some codecs, like the Dolby ones, actually call for this by default as part of the spec.
2 - Any time PCM is fed from one device to another, whether over HDMI, optical, or some other connection, it is susceptible to jitter and other clock-related errors. This is why studios sync all of their digital gear to a single master clock, rather than letting each piece run on its own.
3 - Bitstreaming is immune to external clock error because the audio is not decoded into PCM until it arrives at the destination (the receiver). Yes, there will still be jitter, but it will be limited to errors originating inside the AVR, caused by inconsistencies in its own master clock. Introduce external clock errors and you in effect double the adverse effect, which IS audible for those of us who choose to use our ears instead of protractors to judge sound quality.
4 - Since ultimately the audio is decoded, converted from digital to analog, and amplified through speakers in order to be enjoyed by a pair of ears (at least that's what I do with it), it only seems fair to let the ears make the decision.
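(Editor's aside on points 2 and 3: the magnitude of jitter-induced error can be estimated with a standard back-of-the-envelope formula — the worst-case sampling error of a sine is roughly its peak slew rate times the timing error. This sketch assumes a full-scale sine; the function name and the example numbers are illustrative, not measurements of any particular device.)

```python
import math

def jitter_error_db(freq_hz: float, jitter_s: float) -> float:
    """Worst-case error of a full-scale sine sampled with timing jitter,
    expressed in dB relative to full scale (error ~= peak slew rate * jitter)."""
    error = 2 * math.pi * freq_hz * jitter_s  # relative to amplitude A = 1
    return 20 * math.log10(error)

# e.g. a 10 kHz tone with 1 ns of clock jitter
print(f"{jitter_error_db(10_000, 1e-9):.1f} dB below full scale")
# -84.0 dB below full scale
```

Whether an error at that level is audible is exactly what this thread is arguing about; the formula only bounds its size, and the error grows with both signal frequency and jitter.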
I am sure you can find a generic and a name-brand product in the supermarket that have the same ingredients but taste very different, no? Good lord, that must be a brain-buster for you, considering that "theoretically" they should taste the same.
Enjoy your denial, but for Pete's sake don't deny others the chance to form their own opinions. As for your couple-of-dB-difference theory: if the PCM audio differs in any way post-decode (and a difference in output level does qualify as a difference), then I guess the decode process isn't as universal and straightforward as you suggest, is it? The fact that you allow for a difference of ANY kind suggests you don't trust your own theory to begin with.
For those of you still wondering if there is a real difference, just try it yourself. Many of us who know how to use and trust our ears have discovered that (at least with some setups) there is!