Originally Posted by budwich
Hey Andy... I will jump in again with my "bit error rate / counter" suggestion... :-) you suggest that bits are bits and that the cable somehow knows what bits to change and what ones not so that things are better or worse depending on the cable smarts. BUT that statement can't really be taken as the "full length data story"... cause as you know the bits "start" and "stop" on past the cable. Without you knowing the full story on the errored bits AND how the receiving end handles the "restore" activity, I don't think you can truly say that a given cable can't sound better than another.... at least that's my view on data transport and recovery.
That would be a different discussion since I never mentioned errors in my previous post. But let's go down that path and see where it leads.
I'm going to limit myself to errors in the audio stream, and to keep the discussion manageable I'm going to assume linear PCM (LPCM). If we were to assume Dolby Digital, DTS, or the higher-resolution codecs, the impact of an error becomes much greater.
So with LPCM, if I get a single-bit error in the low-order bit of a 24-bit word, I'm very unlikely to hear it. If it lands in the high-order bit, I might hear it. If errors persist in the higher-order bits for a while, I most definitely will hear them.
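To put rough numbers on that, here's a small Python sketch (my own illustration, using the standard 6-dB-per-bit rule of thumb, not anything from a spec) of how loud a single flipped bit is relative to full scale:

```python
import math

def bit_error_level_dbfs(bit, word_bits=24):
    """Level of a single flipped bit relative to full scale (0 dBFS).

    bit 0 is the LSB; full scale for signed PCM is 2**(word_bits - 1).
    """
    return 20 * math.log10(2**bit / 2**(word_bits - 1))

print(bit_error_level_dbfs(0))   # LSB: about -138 dBFS, far below audibility
print(bit_error_level_dbfs(23))  # top bit: 0 dBFS, a full-scale event
```

Each bit position is worth about 6 dB, which is why an error near the top of the word is vastly more audible than one near the bottom.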
What will I hear? If the errors are a truly random function, the result will be white noise, which resembles the sound of playing a DTS CD without a DTS decoder. It only resembles it, because DTS encoding does not produce a truly random output.
If we then look at single-bit errors: if the error is in the high-order bit of a 24-bit word and it is flipped to a 1, I'll get roughly a 48 dB increase in audio. If it is flipped to a 0 when it should have been a 1, I'll lose 48 dB. If the error persists for a few frames, I've essentially added a loud DC component, and I'll hear that: it will sound like digital overmodulation (assuming the case of the high-order bit staying at 1).
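To make the high-order-bit case concrete, here's a sketch (my assumption: signed 24-bit two's-complement samples, where the top bit is the sign bit) of what flipping that bit does to a quiet passage:

```python
def flip_top_bit_signed24(s):
    # XOR the top bit of the 24-bit word, then reinterpret the result
    # as signed two's complement
    u = (s & 0xFFFFFF) ^ 0x800000
    return u - 0x1000000 if u & 0x800000 else u

quiet = [120, -340, 560, -80]  # a quiet passage, samples near zero
corrupted = [flip_top_bit_signed24(s) for s in quiet]
print(corrupted)
# Every sample jumps by half of full scale (2**23): the waveform is slammed
# toward the rails, which is why a stuck top bit sounds like overmodulation.
```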
The interesting case is the low-order bit: the error will flutter that bit, but with our current audio equipment that 24th bit can't be heard. Present-day D/A chips can't resolve it well enough to make it audible. So it really can't be heard, and if it can't be heard, how can changing that bit improve the audio?
But then we can look at what it would take for an error to improve the audio. Take the easy case of simply raising the level by a small amount: the noise would have to change every word by exactly the right amount, in proportion to that word's value, it would have to do so without ever causing overmodulation, and it would have to keep this up through the entire song or symphony. I think you'll find that is impossible.
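A quick sketch of why that's hopeless (hypothetical numbers, treating samples as raw unsigned 24-bit words for simplicity): a uniform level increase needs a correction that tracks each sample's value, while a random bit error delivers a power-of-two jump that knows nothing about the signal.

```python
import random

random.seed(42)
WORD = 24
samples = [random.randrange(2**WORD) for _ in range(8)]  # raw 24-bit words

# What a uniform +0.5 dB gain would require: a change proportional to
# each sample's own value
gain = 10 ** (0.5 / 20)
needed = [round(s * (gain - 1)) for s in samples]

# What a random single-bit error actually delivers: plus or minus a power
# of two, chosen with no knowledge of the sample at all
flipped = [s ^ (1 << random.randrange(WORD)) for s in samples]
delivered = [f - s for f, s in zip(flipped, samples)]

print(needed)     # tracks the signal, word by word
print(delivered)  # arbitrary +/- 2**k jumps, uncorrelated with the signal
```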
Add in that the audio is also encrypted (and the encryption is not linear), and the noise would have to follow a complex formula to even come close to producing an improvement in the audio. That just isn't going to happen, since it would require noise that wasn't noise at all but a signal. This "smart noise" does not exist, although you would be surprised how many times professionals will attribute a coherent error to random noise. It's not the way physics works.
So, really, to claim that noise is producing an improvement in an audio stream is to claim the impossible. Now, the one exception is a 16-bit audio stream, where the low-order bit (which may be audible at 16 bits) is sometimes toggled to introduce dithering, which does yield an improvement at the D/A chip. But that is a carefully designed process, not the introduction of truly random noise, and HDMI bit errors would be nothing like it.
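For contrast, here's roughly what that deliberate process looks like: a minimal sketch of TPDF dither applied when reducing 24-bit samples to 16 bits (implementations vary; these constants are just for illustration):

```python
import random

DROP = 8          # bits removed going from 24-bit to 16-bit
LSB = 1 << DROP   # one 16-bit LSB, expressed in 24-bit units

def truncate(s):
    # Plain truncation: the discarded low bits become distortion that is
    # correlated with the signal
    return s >> DROP

def dither_and_truncate(s):
    # TPDF dither: add the sum of two uniform random values (each spanning
    # +/- half an LSB) before truncating, so the quantization error turns
    # into benign, signal-independent noise
    noise = random.randint(-LSB // 2, LSB // 2) + random.randint(-LSB // 2, LSB // 2)
    return (s + noise) >> DROP
```

The key point: the dither noise has a chosen amplitude and distribution, injected at exactly the right step in the chain. Random bit errors on a link have neither property.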
And, yes, I hope the person who posted that statement will take the entire system into consideration and then attempt to explain how swapping in a different cable changes the bits in a way that improves the audio signal.