Originally Posted by russ_777
In general I agree with what you're saying; however, there is a limit to that claim. When timing smear across a cable (or any other distributed circuit element) approaches half of the symbol period, it no longer holds. Viewing the received bit stream as an eye diagram, the eye closes because of inter-symbol interference. The receiver's ability to correctly estimate the value of each symbol at its sampling instant deteriorates, and once the error-correcting ability of any FEC is exhausted, the link collapses. How sudden that breakdown is depends on the coding gain: the less coding gain, the more gradual the degradation.
In the case of a 2-channel audio stream, even at a 192 ksps sampling rate and 24-bit resolution, jitter or smearing approaching 50 ns starts to become an issue, if I'm doing my arithmetic correctly.
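For reference, that 50 ns figure is consistent with half the raw payload bit period at those rates. A back-of-envelope check (it ignores framing and line-coding overhead, which a real S/PDIF or HDMI link adds on top, so the true symbol period would be somewhat shorter):

```python
# Half the raw payload bit period for 2 ch x 24 bit x 192 ksps. Assumes no
# framing or line-coding overhead; real links add some, shortening the period.
sample_rate = 192_000        # samples per second, per channel
channels = 2
bits_per_sample = 24

bit_rate = sample_rate * channels * bits_per_sample   # 9,216,000 bit/s
symbol_period_ns = 1e9 / bit_rate                     # ~108.5 ns per bit
half_symbol_ns = symbol_period_ns / 2                 # ~54.3 ns

print(f"bit rate:      {bit_rate / 1e6:.3f} Mbit/s")
print(f"symbol period: {symbol_period_ns:.1f} ns, half: {half_symbol_ns:.1f} ns")
```

About 54 ns, so the quoted ballpark checks out.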
My comments are with respect to a playback chain that is not actually losing bits.
If the bits all eventually arrive, even if they are spaced incorrectly in time, the original signal can be losslessly recovered, reclocked, and converted.
The entire process can be summarized as:
1) Mastering studio takes samples.
2) Long chain of shiny disk manufacture, pit spacing, playback mechanism, cable transmission, etc. Much wringing of hands.
3) Playback device reads the samples and reconstructs the signal.
In step #1, quantization errors are introduced in amplitude (bit depth) and time (clock precision). Nothing can make those errors go away.
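To make the amplitude half of that concrete, here is a toy sketch (made-up signal value; real converters also apply dither, which this omits):

```python
import math

# Toy sketch of 24-bit amplitude quantization. The residual error computed
# here is frozen into the samples at mastering time; nothing downstream in
# the playback chain can undo it.
BITS = 24
LEVELS = 2 ** (BITS - 1)             # 8,388,608 code steps per polarity

def quantize(x: float) -> float:
    """Round a sample in [-1.0, 1.0] to the nearest 24-bit code."""
    return round(x * LEVELS) / LEVELS

x = math.sin(1.234)                  # an arbitrary instantaneous value
err = quantize(x) - x
print(f"error: {err:+.3e} (bound: +/-{1 / (2 * LEVELS):.3e})")
```

The timing side is analogous: any error in when a sample was actually taken is likewise baked into the recorded values.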
In step #2, feel free to add other errors. They have no effect on step #3.
In step #3, read the samples, assume the samples were taken at the precise time specified by the sampling rate, and reconstruct the signal. Anything introduced in step #2 doesn't matter. Can't matter. Because the amplitudes aren't changed (else we would be getting bit errors, which I already said wasn't part of this), and step #3 is using the correct clock rate.
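A minimal sketch of what that buffer-and-reclock step looks like (illustrative names and structure, not any particular product's design):

```python
from collections import deque

# Samples arrive with whatever timing jitter the transport imposed; they sit
# in a FIFO and leave on the ticks of a stable local oscillator. Arrival
# timing affects only the buffer's fill level, never the output spacing.
fifo = deque()

def on_sample_received(sample: int) -> None:
    # Called whenever a sample shows up: early, late, bunched together.
    fifo.append(sample)

def on_local_clock_tick() -> int:
    # Called every 1/192000 s by the local crystal. As long as the FIFO
    # never underruns or overflows, output timing depends only on this
    # clock, not on anything that happened in step #2.
    return fifo.popleft()
```

The buffer just has to be deep enough to absorb the worst-case arrival jitter.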
It's been claimed that buffering and reclocking can't work because then audio will get out of sync with video, but for that to be true they would have to have been out of sync during mastering. If they were in sync during step #1, then both the audio and video samples were taken at the correct rate, and using the correct rate in step #3 will again keep them in sync.
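Even a local oscillator with some parts-per-million error doesn't break this, as long as the audio and video clocks are derived from the same source. A quick check with a hypothetical +100 ppm crystal:

```python
# Hypothetical +100 ppm oscillator error. If the audio sample clock and the
# video frame clock are both derived from the same local oscillator, both
# speed up by the same factor and the mastered audio/video ratio is untouched.
PPM_ERROR = 100
scale = 1 + PPM_ERROR / 1e6

audio_rate = 192_000 * scale         # actual samples per second played
video_rate = 24 * scale              # actual frames per second displayed

print(f"{audio_rate / video_rate:.6f}")   # 8000.000000, same as 192000 / 24
```

Absolute playback speed shifts by 0.01%, but audio and video shift together, so they stay in sync.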
Basically, if you're relying on the clock rate derived from the wandering eye in the HDMI signal, you're doing something stupid.