Originally Posted by MLXXX
I did not mean to imply that all attempts to address jitter through buffering are successful in terms of measured reduction in jitter.
That's fine, but the assumption that buffering improves jitter in every case is still wrong. See my article on jitter and how our audio systems work: http://www.**************.com/Librar...dioJitter.html. For now, here is a brief overview.
Let's look at the first of two cases: the DAC clock running a bit faster than the source clock. In this case the buffer does nothing whatsoever because it never fills with any data! You are consuming data faster than it comes in, so soon enough you will run out of samples to play. While not a manifestation of jitter, this happens sometimes in a PC that gets too busy to feed audio samples to the sound card: you hear a glitch, a pop, or a pause in the music. The operating system has a huge amount of buffering available to it, yet that does it no good when the sound card runs out of data to play. No matter how deep the buffer, if the source is slower than the target, you will run out of samples.
You may think the opposite situation is helped by buffering, but such is not the case either. Let's assume the DAC clock is slower, so it is consuming data more slowly than it is being sent. Now the buffer comes into the picture, storing data that starts to back up because the DAC can't consume it fast enough. But what happens if the source is my cable box, feeding my AVR over HDMI? I leave the cable box on all the time; let's say I do the same with the AVR. After hours and days, any reasonable buffer will overrun and source data is forced to be thrown away.
So you see that the buffer by itself in one case does nothing useful, and in the other case is a partial solution that eventually runs into a ditch.
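If it helps to see the arithmetic, here is a minimal sketch of why a constant clock mismatch defeats any finite buffer. The rates and buffer size are hypothetical numbers I picked for illustration, not from any real product:

```python
# Hypothetical sketch: a FIFO sits between a source clocked at src_rate and a
# DAC clocked at dac_rate, both nominally 48 kHz. Any constant offset drains
# or fills the FIFO at a fixed rate, so failure is only a matter of time.
def seconds_until_failure(src_rate, dac_rate, buffer_samples, prefill=0.5):
    """Return (mode, seconds) until the FIFO underruns or overruns."""
    fill = buffer_samples * prefill      # initial occupancy in samples
    delta = src_rate - dac_rate          # net samples gained per second
    if delta == 0:
        return ("stable", float("inf"))
    if delta < 0:                        # DAC faster: buffer drains
        return ("underrun", fill / -delta)
    return ("overrun", (buffer_samples - fill) / delta)  # DAC slower: fills up

# DAC clock 100 ppm fast, with a generous one-second (48,000-sample) buffer:
print(seconds_until_failure(48_000.0, 48_004.8, 48_000))  # underrun in ~5000 s
# DAC clock 100 ppm slow, same buffer:
print(seconds_until_failure(48_000.0, 47_995.2, 48_000))  # overrun in ~5000 s
```

With a mere 100 ppm offset, even a full second of buffering buys you under an hour and a half; a deeper buffer only postpones the failure, it never prevents it.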
Now, our problem would go away if somehow we could set our DAC clock to the same speed as the source clock in the Blu-ray player, cable box, etc. But how do we do that? No, you can't use the sample rate of the audio! That is a nominal value. 48 kHz does NOT mean you will get 48,000 samples/sec. When audio is authored for video, for example, the timing may very well be modified to 47,999 or 48,020. You just don't know. The DAC must determine this unknown value somehow.
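Since the nominal rate tells you nothing, the only option is to measure. Here is a toy sketch of the idea; the class and its names are mine, purely illustrative of the concept rather than any product's implementation:

```python
import time

# Hypothetical sketch: learn the source's actual rate by counting incoming
# samples against a clock the receiver trusts (its own oscillator).
class RateEstimator:
    """Average incoming sample rate since the first block arrived."""
    def __init__(self):
        self.t0 = None
        self.count = 0

    def on_samples(self, n):
        # Call from the input side each time n new samples arrive.
        if self.t0 is None:
            self.t0 = time.monotonic()   # start timing at the first block
        else:
            self.count += n              # count samples after the first block

    def rate(self):
        elapsed = time.monotonic() - self.t0
        return self.count / elapsed if elapsed > 0 else float("nan")
```

Run this long enough against a nominal "48 kHz" input and it will report the real figure, be it 47,999 or 48,020.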
Unfortunately, in addition to the above authored variations, we also have noise and timing variations that get induced into the source clock and the cable that feeds our input. And our input circuit itself adds more noise and timing variations. So what the DAC clock sees is a combination of valid speed variations plus invalid speed variations. There are good and lousy solutions to this problem. I won't get into them now, but suffice it to say, mere deployment of a buffer does not contribute to either solution working.
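To make "valid versus invalid variations" concrete, here is one common flavor of solution in sketch form: a slow first-order low-pass filter, standing in for the loop filter of a PLL. This is my simplification of the general idea; real designs differ widely, which is exactly where the good and lousy implementations part ways:

```python
import random

# Hypothetical sketch: the raw arrival-rate estimate is true drift plus
# measurement jitter. A slow first-order low-pass follows the real drift
# while rejecting the fast noise.
def recover_clock(raw_rates, alpha=0.0005):
    """Exponentially smooth noisy per-block rate estimates.

    Small alpha = long time constant: tracks slow drift, ignores jitter.
    """
    est = raw_rates[0]
    out = []
    for r in raw_rates:
        est += alpha * (r - est)
        out.append(est)
    return out

# True source rate 48,020 Hz; each block estimate corrupted by +/-30 Hz noise:
raw = [48_020 + random.uniform(-30, 30) for _ in range(20_000)]
smoothed = recover_clock(raw)
print(round(smoothed[-1], 1))  # settles near 48,020 despite the noise
```

The design tension is the time constant: too short and the recovered clock chases the noise (jitter passes through); too long and it is slow to lock. Note that the buffer appears nowhere in this loop.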
Originally Posted by MLXXX
If they are unsuccessful then that is poor engineering if the purpose of the buffering is to reduce the jitter. If the purpose of the buffering is merely to stop overruns and underruns in the processing of the received datastream then the buffering has achieved its purpose.
The purpose of the buffer is not jitter reduction. It is needed for other reasons. For example, if you give a compressed bitstream, e.g. Dolby Digital, to the AVR/DAC, it has to decode it first. That decoding happens in blocks of data at a time, so buffering chunks of input and output data is mandatory. Such buffering plays no role, however, in jitter reduction. Once you have the uncompressed PCM audio samples, you must then consume them at the rate that the source sets. It is this clock synthesis that can increase, or reduce, the jitter. The buffer's role is immaterial other than its necessity in how the system operates.
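A sketch of that separation of duties, with stand-in function names of my own invention (the 1536-sample figure is the standard PCM payload of one Dolby Digital frame):

```python
from collections import deque

# Hypothetical sketch: the decode buffer exists because compressed audio
# arrives in whole frames, not because it fixes jitter. Output timing is
# still set entirely by the DAC clock.
SAMPLES_PER_FRAME = 1536   # one Dolby Digital frame of PCM per channel
pcm_fifo = deque()

def decode_frame(frame_bytes):
    """Stand-in for a real decoder: one compressed frame -> a block of PCM."""
    return [0.0] * SAMPLES_PER_FRAME   # silence as placeholder PCM

def on_bitstream_frame(frame_bytes):
    # Buffering is mandatory here: you cannot decode half a frame.
    pcm_fifo.extend(decode_frame(frame_bytes))

def on_dac_request(n):
    """The DAC's clock, not the buffer, decides when and how fast this runs."""
    return [pcm_fifo.popleft() for _ in range(min(n, len(pcm_fifo)))]
```

Note that nothing in `on_dac_request` cleans up timing; if the clock driving it is jittery, the output is jittery, no matter how much sits in `pcm_fifo`.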
Originally Posted by MLXXX
It is true that over the medium and long term the clock at the receiving end is ultimately a slave to the data rate that is sent. However what controls the average data rate that is sent? The clocking out of data from the CD player or Blu-ray player will be governed by a frequency that is synthesised and/or divided by reference to a crystal oscillator. This will provide ample stability. Over time, a small drift of so many parts in a million may occur. That is of no consequence to the human ear. I accept that some designs may use an aggressively short time constant to search for a lock and maintain the lock, but the question must still be asked: "is the additional jitter created by an aggressive phase locked loop audible?". If it is not audible, then the engineer has done a sufficient job for human ears, despite the additional jitter being measurable.
No. You are making idealistic assumptions here that are not backed by how the system works or by the data we have in hand. The source clock is not a synthetic thing in the player. In the case of authored audio/video it is actually buried in the content on the disc itself! And at any rate, the audio samples are locked to the video rate: for every frame of video, you have x amount of audio samples. In other cases, like the CD, the rotational speed of the media itself can be the source clock, and that can have very high variation. Jitter reduction systems like the ones I have explained then need to exist to reduce this jitter.
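As a concrete instance of that video lock, using standard NTSC rates (my own worked example, not figures from any specific disc):

```python
from fractions import Fraction

# Hypothetical worked example of "audio is locked to the video rate":
# NTSC video runs at 30000/1001 fps, so 48 kHz audio yields a non-integer
# number of samples per video frame. The content dictates the real cadence.
fps = Fraction(30000, 1001)
samples_per_frame = Fraction(48_000) / fps
print(samples_per_frame)          # 8008/5
print(float(samples_per_frame))   # 1601.6 samples per video frame
```

The authoring has to distribute that fractional 1601.6 across frames, and the player has to honor whatever cadence the content carries. Your tidy crystal oscillator does not get the final say.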
As to audibility, yes, that is the ultimate question. Problem is, you assume an answer for it and run with it. When you bought your last AVR/DAC, did you look at its jitter profile? And if so, how did you determine it did or did not have audible jitter? I suspect you didn't look at its jitter performance. Nor, if you had, would you know how to evaluate its audibility. It is not like you can set up an ABX test and make the jitter come and go on demand. The jitter is always there, so such testing is darn near impossible.
You take comfort in buffering doing good, and in an empty assumption that jitter must be inaudible, and you bought your gear. Sometimes not knowing is a good thing. Unfortunately I do know, so I can't go where you can. My son bought a $400 DAC because the cheaper one I had bought him would, just like the fancy internal sound card, pick up system noise as he played his games. He then came to me complaining that the $400 DAC was also not sounding good. I said let's go and measure it. While setting up my instrument I lectured him on the placebo effect and how he was pretty much imagining what he thought he was hearing. Then this monstrosity popped up:
Our source signal is the center spike. The ideal system would have only that and nothing else. Yet this DAC spits out jitter components that shoot up 50 dB above its noise floor! Not only that, they sit so far from our main signal that they are no longer subject to masking. There is no way to make a case for the inaudibility of jitter here. Buffering and all, you are in trouble.
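If you want to see how clock jitter turns into exactly these spectral spikes, here is a simulation sketch. The tone frequency, jitter frequency, and 2 ns jitter amplitude are numbers I chose for illustration, not measurements of this DAC:

```python
import numpy as np

# Hypothetical sketch: sample a pure tone with sinusoidally wobbling sample
# times and look at the FFT. Periodic jitter shows up as discrete sidebands
# at (tone +/- jitter frequency), far from the tone and hard to mask.
fs, f0, fj = 48_000, 12_000, 3_000      # sample rate, tone, jitter frequency
jitter_amp = 2e-9                        # 2 ns of periodic jitter
n = np.arange(1 << 16)
t = n / fs + jitter_amp * np.sin(2 * np.pi * fj * n / fs)  # wobbled clock
x = np.sin(2 * np.pi * f0 * t)

spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-12)
spec -= spec.max()                       # 0 dB = the tone itself
freqs = np.fft.rfftfreq(len(x), 1 / fs)
for f in (f0 - fj, f0 + fj):             # sidebands at f0 +/- fj
    print(f, round(spec[np.argmin(np.abs(freqs - f))], 1), "dBc")
```

Even this tiny 2 ns wobble produces sidebands around -82 dBc, cleanly resolvable in the spectrum; scale the jitter up and they march straight toward audibility. And notice: nothing about buffering appears in this mechanism at all.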
But it gets better. Changing inputs gives us this clean as a whistle output:
We just proved that whatever buffering was there played no role at all in whether there was, or was not, any jitter.
Unfortunately the story does not end there. Instead of Media Player Classic, we fired up Windows Media Player, playing the identical test signal file on the PC. This is what we got:
These are things that we think are "impossible." Which media player we use is not supposed to make any difference. Digital is digital, right? Buffering helps with jitter, right? Wrong!
We make wrong assumptions that lead us astray. You must verify your assumptions. You have not done so when you say buffering does this and that, or that jitter is or is not audible. You may very well be right about inaudibility, but the moment you go on to say how the system works to eliminate the effect of jitter, and that is not so, I will chime in.