Originally Posted by goneten
I was having a discussion with a guy concerning HDMI and he said that despite us calling HDMI cables digital cables, the reality is that they are actually analogue cables. There are no 1's or 0's travelling down the cables; there are analogue voltages that represent digital data.
"Audio, when its digitised doesn't use the same frequency. Lets say it use a frequency of 9 9999. This is close to 10 000 and some fancy math can be used to get it even closer. The reality is however that as the receiver in a TV or amp is expecting to see 10 000 bits of data and not 9 999 there are going to be some times when some data is read incorrectly. This timing error can cause some audible issues.
Error correction does of course help.
If we now add the timing issues caused by different clocks being used to the issues that can be caused by the shape of the incoming signal (particularly over long or poorly designed cables), there is the possibility that sound will be degraded."
What do you make of this?
His explanation is not precise enough to indicate that he is familiar with the real engineering concepts here. Still, he has hit on the right headlines and those parts are correct. For example, when he says that what we transmit over HDMI for audio/video is not entirely digital, he is quite correct. I make the same point in the article I posted earlier: http://www.madronadigital.com/Library/DigitalAudioJitter.html. Note its similar headline: "Digital Audio is Not All Digital!"
What is being transmitted from the source to the receiver is two things: the digital sample value and where/when it is to be output. The former is digital. The latter is analog: it is a pulse happening in the time domain, and when it occurs, down to picoseconds, determines whether it lands in the correct time slot or not. Please read the article and it will make that clear, including proof points. Simply put, I can change the waveform coming out of the DAC by moving the data samples around by a billionth of a second, even though the digital samples are perfectly recovered at the other end of the cable.
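If you want to see that effect without any hardware, here is a small Python sketch of my own (purely illustrative; the 12 kHz tone, the 1 kHz jitter frequency and the 1 ns peak are numbers I picked, not measurements from any device). The sample values are identical in both runs; the only thing that changes is when each one is converted, and that alone raises sidebands in the output:

```python
import numpy as np

fs = 48_000            # sample rate (Hz)
f0 = 12_000            # test tone (Hz)
n = np.arange(2**16)

# The "digital" part: the sample values, recovered bit-perfectly at the far end.
samples = np.sin(2 * np.pi * f0 * n / fs)

# The "analog" part: WHEN each sample is converted. Here the DAC clock is off
# by a 1 kHz sinusoidal jitter with a 1 ns peak -- a billionth of a second.
jitter = 1e-9 * np.sin(2 * np.pi * 1_000 * n / fs)

# Playing the correct value at the wrong instant is modeled by evaluating the
# waveform at the perturbed instants.
jittered = np.sin(2 * np.pi * f0 * (n / fs + jitter))

def spectrum_db(x):
    win = np.hanning(len(x))
    mag = np.abs(np.fft.rfft(x * win))
    return 20 * np.log10(mag / mag.max() + 1e-20)

clean_db = spectrum_db(samples)
dirty_db = spectrum_db(jittered)
freqs = np.fft.rfftfreq(len(n), d=1 / fs)

for f in (11_000, 13_000):          # sidebands appear at f0 +/- 1 kHz
    k = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz: clean {clean_db[k]:6.1f} dB, jittered {dirty_db[k]:6.1f} dB")
```

The clean run shows essentially nothing at 11 and 13 kHz; the jittered run shows a pair of spurs in roughly the -90 dB neighborhood, from one nanosecond of timing error and not a single flipped bit.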
In the case of video, we are lucky today in that those analog timing variations, unless they are extreme and cause outright failures, do not manifest themselves as fidelity issues. We "know" where pixel 5 on line 1 is, so if the signal says it should be at pixel location 4.9, it does not matter: we will still display it at pixel 5. Not so in audio. If a sample arrives at timing location 2011.1, it will be output at timing location 2011.1. We have no way of knowing inside the DAC that this should have been 2011, that the source added jitter to it, and that the jitter is an error. The variation could very well have been intentional, matching the timing when the audio was first sampled and stored for that video frame. Once we violate the correct timing for audio, we add distortion. That distortion can be shown to be there mathematically and, of course, in real devices. Here is a sample measurement from Paul Miller showing the various jitter sources, each represented by a pair of spikes to the left and right of the center axis:
All of these spikes were created by timing errors in the audio clock over HDMI. In this case the overall amount of jitter is 4.8 *billionths* of a second. These are small numbers to be sure, but we see that they create distortion products as high as -80 dB. A 16-bit system, for reference, has a dynamic range of 96 dB. So an error that small in the time domain has caused very high (relatively speaking) analog distortion in our DAC. BTW, earlier Arny said, "If you actually believed that Amir, you would have never cited any of the HDMI jitter information from Miller labs, because it is free of information about the issue you raise below: The frequency spectrum of the jitter."
Well, you are looking at the spectrum of jitter above.
The only thing is that he chops off the spectrum at ±3.5 kHz and I would like to see all of it. But the spectrum is there.
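For a rough sense of scale, and this is my own back-of-the-envelope rather than Miller's exact test conditions, narrowband phase-modulation math says a single sinusoidal jitter component with peak amplitude τ on a tone at frequency f0 produces sidebands at about 20·log10(π·f0·τ) relative to the tone. Nanosecond-class jitter on a high-frequency test tone lands right in the -75 to -95 dB range, which is why spurs near -80 dB are exactly what you would expect:

```python
import math

def sideband_db(f0_hz: float, jitter_peak_s: float) -> float:
    """Level of each jitter sideband relative to the tone, for a sinusoidal
    jitter component of the given peak amplitude (narrowband PM approximation)."""
    return 20 * math.log10(math.pi * f0_hz * jitter_peak_s)

# Assumed 12 kHz test tone (a typical J-Test style frequency); the jitter
# amplitudes are illustrative, not Miller's individual components.
for tau in (0.5e-9, 1e-9, 4.8e-9):
    print(f"{tau * 1e9:.1f} ns peak jitter -> {sideband_db(12_000, tau):.1f} dB")
```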
So the notion that a digital transmission only transmits digital data simply is not correct. It is probably the #1 myth spread in audio forums. The reason it gets so much play is that we assume moving digital data over HDMI or S/PDIF is the same as moving content around in our computers. Both share the same aspect in that jitter is created in the process of moving the data. However, as with video, that jitter is discarded when the data is finally stored in your computer (assuming it is not severe enough to cause an outright failure).
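The "discard it or honor it" distinction is simple enough to state as code. This is a toy sketch of mine, purely for the analogy: a display (or a hard disk receiving a file) can snap incoming data back onto a grid it already knows, while a DAC has no grid for playback time and must honor whatever timing it is handed:

```python
def display_pixel(arrival_position: float) -> int:
    # Video: the pixel grid is known, so an arrival at 4.9 is snapped
    # back to pixel 5 and the timing error is discarded.
    return round(arrival_position)

def dac_output_time(arrival_time: float) -> float:
    # Audio: the DAC cannot tell intentional timing from jitter,
    # so the sample is played at whatever instant it arrives.
    return arrival_time

print(display_pixel(4.9))       # -> 5      (error thrown away)
print(dac_output_time(2011.1))  # -> 2011.1 (error baked into the output)
```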
USB transmission presents an interesting contrast, as it can carry either plain data or a real-time stream of audio to be played. When you use USB to move digital data from your computer to an external drive, it likewise introduces a fair bit of jitter, but the target device, say a hard disk, doesn't care and puts the data where it is supposed to go, much like video. Now use USB for an external sound device, where the DAC attempts to play its samples using that jittery USB interface clock, and a very different picture emerges. This is a sample output from a chip in design where attention was not paid to this factor:
The 100 Hz spikes were generated at the end of every block, as a new frame of data was fetched: the DAC clock would jump to a different value and then back to where it was supposed to be. The digital data was captured 100% correctly, but the timing had these "distortions" in it every 0.01 seconds. The result was clearly audible distortion, which was fixed with a design change to the circuit that filtered out these timing variations. All the while the digital samples were received successfully, but that alone was not sufficient to play the signal without added distortion.
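The same kind of simulation shows both the problem and why the fix worked. In the sketch below, which is mine and uses assumed numbers (the 200 ns jump, its duration and the averaging filter are stand-ins, since I cannot publish the chip's actual figures or circuit), the playback clock glitches every 0.01 seconds, producing spurs spaced 100 Hz apart around a test tone; low-pass smoothing of the timing, playing the role of the circuit change, pushes them back down:

```python
import numpy as np

fs, f0 = 48_000, 10_000
n = np.arange(2**16)

# Assumed disturbance: every 0.01 s (480 samples) the clock jumps by 200 ns
# for 16 samples, then returns -- a 100 Hz periodic timing error.
timing_err = np.zeros(len(n))
timing_err[(n % 480) < 16] = 200e-9

# Crude stand-in for the circuit fix: low-pass the timing so the DAC clock
# can no longer follow the fast jumps (0.1 s moving average).
kernel = np.ones(4_800) / 4_800
smoothed_err = np.convolve(timing_err, kernel, mode="same")

def spurs_db(err):
    # Correct sample values, wrong conversion instants.
    x = np.sin(2 * np.pi * f0 * (n / fs + err))
    mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    db = 20 * np.log10(mag / mag.max() + 1e-20)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    # First few 100 Hz-spaced spurs above the tone.
    return [db[np.argmin(np.abs(freqs - (f0 + k * 100)))] for k in (1, 2, 3)]

print("raw clock:     ", [f"{s:.1f} dB" for s in spurs_db(timing_err)])
print("filtered clock:", [f"{s:.1f} dB" for s in spurs_db(smoothed_err)])
```

The data values never change between the two runs; only the clock behavior does, which is exactly the situation that chip was in.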
I don't think the person you are arguing with knows any of this, but he does have the correct conclusion on his side: these interfaces convey more than digital values. Sadly, we are here because the people designing them don't really understand or care about these fidelity problems. HDMI was based on DVI, which was a computer video interface; audio was added to it as an afterthought. Even for video the approach is wrong. Audio+video on your Blu-ray player runs at a max of 48 megabits/second (for 2-D). The player decodes that video stream, which balloons to a whopping 297 megabits/sec, and then sends it over HDMI to the receiver. It is a heck of a lot harder to move a 297 megabit/sec signal around than a 48 megabit/sec one. As a result, we get all of these transmission issues when HDMI cables start to get long.
The data sitting on the Blu-ray disc is all digital. It has no timing issues as of yet. If we simply read that data and sent it to the display to be decoded and played there, none of these problems would exist. The display (or AVR) would decode the streams locally and feed them to the appropriate audio and video paths, and the notion of "analog" transmission over the HDMI cable would never have existed. The current scheme is there to make things simpler in the display. But today, TVs have to do what I just described anyway to play content from the Internet and local/LAN storage. So for the convenience of a few years of display development, we got stuck with a transmission technology that yet again puts analog timing in the transmission path.