Originally Posted by ex0du5
Yes, I understand that these matter. I want to understand the science behind the madness.
I will give a brief answer. For more, search for my name and Glimmie and you will find a huge thread where he took your role and I took the opposite one.
Sorry to say, but I don't trust most audiophiles when it comes to digital gear. I have worked with and designed digital gear and synchronous communication, and I understand that it is entirely possible to eliminate jitter at a given stage by simply storing the information in memory. That is why I'm puzzled that there should be any jitter whatsoever introduced into the DAC by the transport.
As one engineer to another, let me say there is more to the story than the part of the chain described above.
Now, I understand it's completely possible if the protocol was badly designed, and that very well may be the case. I'd like some clarification here, because as I have worked in the field, I haven't worked specifically with HDMI or SPDIF.
As a digital transport, nothing is wrong with either. Well, I take that back. HDMI sucks at that also, but that is for another topic, unrelated to jitter and fidelity.
Now, back on topic... since HDMI has error correction, it needs a buffer on the receiving end. As such, I can't see the jitter from the sending device being preserved, since I assume an ECC buffer would need to be randomly accessible, and not FIFO (unless it's FIFO + addressable, I guess). This is, again, purely an assumption, but I assume that the data is stored in the buffer, and essentially retransmitted to the device's processor at that point.
All correct. Let's make this very simple and assume that all digital data is extracted from the link perfectly. You now have audio samples (or bit stream from the compressed audio) ready to be played.
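To make the structure we are both describing concrete, here is a minimal sketch in Python. The names (AudioFifo, push, pop) are invented for illustration; real HDMI receivers do this in silicon, so treat it purely as a model of the situation: the link layer delivers bit-perfect samples into a buffer, and the open question is what drives the consumer side.

```python
from collections import deque

# Toy model: the link layer delivers bit-perfect audio samples into a
# FIFO. AudioFifo and its methods are invented names for illustration.

class AudioFifo:
    def __init__(self):
        self.buf = deque()

    def push(self, sample):   # called at the *source* clock rate
        self.buf.append(sample)

    def pop(self):            # called at the *DAC* clock rate
        return self.buf.popleft() if self.buf else 0  # underrun -> silence

# The data in the FIFO is exact. But the rate at which pop() is called
# is set by whatever clock drives the DAC, and that clock must, on
# average, match the source rate or the FIFO drains or overflows.
```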
What happens next? As you know, a DAC requires a clock. Where do you get that clock? You have two choices:
1. Use any old oscillator. You take the nominal sampling rate of the source and run a local clock at that frequency. Well, this won't work! Why? Because the stated sampling rate is a nominal value, not the actual rate of the incoming samples. When content is encoded, for example, the source could put out 47,999 samples/sec instead of 48,000 and still be within spec. If you clock the audio even one sample per second faster than the source, you drift over time and pretty soon the audio is no longer in sync with the video (see the quick calculation after this list).
2. Derive a clock from the HDMI source. You use a PLL and lock your frequency to the source. This gives you the correct data rate since you are now in sync with the incoming samples. But now you have a performance issue. The HDMI clock is typically designed to be just good enough for you to recover the data bits; once the data is recovered, designers consider the job finished. Yet we now need a very high-precision clock to drive the DAC.
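To put a number on option 1, here is the drift arithmetic (this is just math, not anything from the HDMI spec): suppose the source actually emits 47,999 samples/sec while our "any old oscillator" runs at exactly the nominal 48,000.

```python
# Drift of a free-running DAC clock versus the actual source rate.
source_rate = 47_999   # what the encoder actually produced (samples/sec)
local_clock = 48_000   # our free-running oscillator at the nominal rate

surplus_per_sec = local_clock - source_rate  # we consume 1 extra sample/sec

for minutes in (1, 10, 60):
    deficit_samples = surplus_per_sec * minutes * 60
    desync_ms = deficit_samples / source_rate * 1000
    print(f"{minutes:3d} min: short by {deficit_samples} samples "
          f"(~{desync_ms:.0f} ms of audio)")

#   1 min: short by 60 samples (~1 ms of audio)
#  10 min: short by 600 samples (~13 ms of audio)
#  60 min: short by 3600 samples (~75 ms of audio)
```

In practice the buffer underruns long before the hour mark, but either way the audio cannot stay locked to the video.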
So how precise does that DAC clock need to be? For 16-bit audio at 20 kHz, you need to achieve roughly 500 picoseconds of accuracy or you lose the low-order bit. That is challenging to maintain in a typical system. You have the HDMI clock itself varying to some extent due to clock instability, cable-induced jitter, etc. And you have interference inside the receiver which causes its own jitter.
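Where does that ~500 ps figure come from? Here is the standard back-of-the-envelope derivation, worked as a quick computation (my arithmetic, using the usual worst-case full-scale sine argument):

```python
import math

bits = 16        # sample resolution
f = 20_000.0     # worst-case audio frequency, Hz

# Worst case: a full-scale sine at 20 kHz with peak amplitude A. Its
# maximum slew rate is 2*pi*f*A. A timing error dt produces a voltage
# error of slew_rate * dt. Set that equal to 1 LSB (= 2A / 2^bits,
# since full scale spans 2A peak-to-peak) and solve for dt:
#
#   dt = (2A / 2^bits) / (2*pi*f*A) = 1 / (pi * f * 2^(bits - 1))

dt = 1.0 / (math.pi * f * 2**(bits - 1))
print(f"{dt * 1e12:.0f} ps")   # -> 486 ps, i.e. roughly half a nanosecond
```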
There are solutions to this problem, of course, but they get complex and expensive since you want to filter out the jitter without filtering out true changes in the source rate. Some use dual-PLL circuits; others use proprietary techniques. A toy model of the idea follows below.
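To see why such filtering can work at all, here is a toy model, pure illustration and not how any specific chip operates: low-pass filtering the measured incoming sample period follows slow rate changes from the source while averaging away fast cycle-to-cycle jitter. A dual-PLL design does something analogous in hardware, with a narrow-bandwidth second loop doing the cleanup.

```python
import random

true_period = 1 / 48_000   # actual source sample period, seconds
alpha = 0.001              # narrow loop bandwidth (smoothing factor)

est = 1 / 48_000           # our recovered period estimate
for _ in range(200_000):
    # each incoming edge arrives with, say, 5 ns RMS of cable/receiver jitter
    measured = true_period + random.gauss(0, 5e-9)
    est += alpha * (measured - est)   # first-order low-pass ("narrow loop")

# The estimate tracks the true rate, but the jitter on it is reduced by
# the narrow loop: residual error is on the order of sqrt(alpha/2) * 5 ns,
# i.e. ~0.1 ns here versus 5 ns raw -- while a slow drift in true_period
# would still be followed.
print(abs(est - true_period))
```

The tradeoff is exactly the one mentioned above: the narrower the loop, the better the jitter rejection, but the slower the response to genuine source-rate changes.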
To head off the next phase of this discussion, which is what makes people so frustrated with these arguments, let's avoid talking about what is audible and what is not. Instead, let's agree that audio reproduction is part digital, part analog. Audio samples are digital in value. Timing of the samples is an analog event which must stay in sync with the source. And to add insult to injury, we have high-precision sample values, which means the jitter has to be quite small for transparent reproduction.
Hope this gives you an overview. Now you are armed to do the above search and read through the rest of the arguments.