Originally Posted by Swampfox
Actually, you said it, not showed it.
On my i7 920, I can't see any appreciable increase in CPU utilization when I'm playing a flac file.
Your tool is not accurate enough. It is like trying to measure 0.001 mph with your car's speedometer. Remember again how little it takes to disturb the S/PDIF clock enough to make it less accurate than the spec requires (less than 0.5 trillionths of a second, i.e. 0.5 picoseconds).
You need instruction-level tools to know which is which. And at any rate, please remember that I have repeatedly said there is no prediction here as to which format is better, since the overall load is random due to the vagaries of the OS.
Engineers at Linn Audio have mentioned in their forum that they have measured various CPU/memory issues in WAV vs. FLAC playback and have found that the CPU utilization necessary to convert the FLAC file is offset by having to move half as much data. They measured the voltage drop on the rails and found it to be in the microvolt range, and they have measured clock jitter and found it to be the same.
I have read that before and assume you mean this post: "We have done extensive measurements on power supply disturbance recently, and have compared results for both FLAC and WAV streaming. Our findings are as follows:
1. If we measure the power rail that feeds the main processor in the DS we can clearly see identifiable disturbance patterns due to audio decoding and network activity. These patterns do look different for WAV and FLAC - WAV shows more clearly defined peaks due to regular network activity and processing, while FLAC shows more broadband disturbance due to increased (but more random) processor activity.
2. If we measure the power rails that feed the audio clock and the DAC we see no evidence of any processor related disturbances. There is no measurable difference (down to a noise floor measured in micro-volts) between FLAC and WAV in any of the audio power rails.
3. Highly accurate measurements of clock jitter and audio distortion/noise also show no difference between WAV and FLAC.
The extensive filtering, multi-layered regulation, and careful circuit layout in the DS ensure that there is in excess of 60dB of attenuation across the audio band between the main digital supply, and the supplies that feed the DAC and the audio clock. Further, the audio components themselves add an additional degree of attenuation between their power supply and their output. Direct and indirect measurements confirm that there is no detectable interaction between processor load and audio performance."
As you can see, they clearly say they found "increased (but more random) processor activity," and that there were *measurable* power supply fluctuations when decoding audio in each format, with different characteristics for each. So my point is proven by their findings: CPU activity does impact the voltage rails feeding the processor.
Perhaps you meant the voltage to the DAC doesn't change. Before I get into that, please keep in mind that their scenario is networked playback, which increases CPU usage in the WAV case because roughly twice as much data has to be fetched through the TCP/IP code in the kernel. The authors of our test did not do that; they played the files locally. So you should assume lower relative overhead for the WAV file in this instance.
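To put rough numbers on that "half as much data" point, here is a back-of-the-envelope sketch (the ~55% FLAC compression ratio is my assumption for typical CD audio, not a figure from either test):

```python
# Rough data-rate arithmetic behind the "half as much data" point.
# The ~55% compression ratio is an assumed typical figure for FLAC on
# CD audio, not a number from the Linn post or the TAS test.

SAMPLE_RATE = 44_100      # samples per second (CD audio)
CHANNELS = 2
BYTES_PER_SAMPLE = 2      # 16-bit samples

wav_rate = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE   # bytes/s, uncompressed
flac_ratio = 0.55                                      # assumed compression ratio
flac_rate = wav_rate * flac_ratio

print(f"WAV stream:  {wav_rate / 1024:.0f} KiB/s")     # ~172 KiB/s
print(f"FLAC stream: {flac_rate / 1024:.0f} KiB/s")    # ~95 KiB/s
```

Over a network, that extra ~77 KiB/s for WAV all has to pass through the kernel's TCP/IP stack; played locally, the difference in I/O overhead is much smaller.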
Back to the point of DAC voltage: the Linn testing was done on their own box, built with great attention to quality, as the last paragraph shows. The authors of the TAS test did not use a Linn DS, which is a dedicated music server/player built by a high-end audio company in an integrated manner with attention to audio quality.
The authors of our test used an off-the-shelf PC, which lacks any measurement or quality assurance with regard to jitter, and at any rate runs a different OS and works differently from the Linn. Additionally, the test in our situation in one case involved a 25-foot cable to an external DAC, so the quality of the driver matters far more here than in Linn's case, where the parts were internal.
All that said, I want to make sure it is not forgotten that I do not believe the audio fidelity necessarily changes for the worse with FLAC. I am only saying that the system load changes in character when running FLAC vs. WAV. The Linn report clearly and fully supports this.
Other observers have also calculated that decoding a FLAC file consumes less than 1% of available CPU resources with a modern chipset.
You keep mentioning this point and I keep saying it is unrelated to the point being discussed.
It does not matter whether we max out the CPU. What matters is that the character of the load has changed (note that the Linn engineers were aware of the same thing and said so in their post).
Let me try this. Take two situations: 1) your system is 100% idle, the CPU doing absolutely nothing, and 2) CPU usage is 0.1%. In both cases your Perfmon will show essentially 0, as you state.
Now let's further assume your CPU is a single core running at 2.5 GHz with a CPI (cycles per instruction) of 1. At full load that is 2.5 billion instructions per second, so at 0.1% your CPU is executing 2.5 million instructions per second in the second case versus none in the first. There is a big difference here in the character of the system.
Keep in mind that there is no such thing as "1%" CPU usage. The moment the CPU executes anything, it is 100% busy during that time. What you see in Perfmon is the time average of a binary system: at any instant the CPU is either 100% busy or 100% idle. For those 2.5 million instructions per second, then, your system repeatedly spikes to its full working load, whereas in the idle case it does nothing. That difference is distinct and significant when we care about small power supply disturbances.
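Here is a minimal sketch of that arithmetic, using the same assumed figures (single core, 2.5 GHz, CPI of 1, 0.1% reported load):

```python
# Sketch of the "average of a binary system" point, using the assumed
# figures above (single core, 2.5 GHz, CPI = 1, 0.1% Perfmon reading).

CLOCK_HZ = 2.5e9          # 2.5 GHz clock
CPI = 1.0                 # cycles per instruction (assumed)
utilization = 0.001       # 0.1% as reported by Perfmon

# Instructions per second at full load, and at the reported average load.
peak_ips = CLOCK_HZ / CPI               # 2.5 billion instructions/s
avg_ips = peak_ips * utilization        # 2.5 million instructions/s

# Perfmon's 0.1% really means: out of every second, the CPU spent about
# 1 millisecond at 100% load and about 999 ms fully idle, in some pattern.
busy_ms_per_second = utilization * 1e3

print(f"Instructions executed per second: {avg_ips:,.0f}")      # 2,500,000
print(f"Time at full load per second:     {busy_ms_per_second:.1f} ms")
```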
Think of how little your car's tires have to be out of balance for your steering wheel to vibrate. You certainly could not spot that imbalance just by looking at the tire.
We can therefore say that any CPU activity creates massive peaks in power consumption relative to idle. Taking an i7 processor with a TDP of 130 watts, Intel specs a peak current draw of a whopping *150* amps!
If your cooktop is broken in the morning, you can cook your eggs over your CPU.
In our 0.1% case the average current is pretty low, so you would have to use the real stove. But the instantaneous current draw is still well into many amps, and it is these current pulses that are the problem, not steady-state usage.
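Same sketch, extended to current draw; this assumes the worst case, where the CPU pulls the full 150 A peak whenever it is busy:

```python
# Average vs. instantaneous current draw, using the i7 figures above.
# Worst-case assumption: the CPU pulls the full peak current whenever busy.

PEAK_CURRENT_A = 150.0    # Intel's spec figure quoted above
duty_cycle = 0.001        # 0.1% average utilization, as before

avg_current = PEAK_CURRENT_A * duty_cycle   # what an averaging meter sees

print(f"Average current:      {avg_current:.2f} A")   # 0.15 A -- looks negligible
print(f"Instantaneous pulses: up to {PEAK_CURRENT_A:.0f} A")
# The power supply still has to source those 150 A pulses;
# only the *average* is small.
```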
Furthermore, and this cannot be overstated, all of the conjecture about clock instability, jitter, and power supply instability is moot because the effect persisted even when the file was converted back to WAV.
I am afraid that is still an incorrect conclusion. As long as the two files are different, the system activity is different. And if there is a difference, no matter how remote, you cannot say the jitter did not change. It very well could have.
What points to the results being wrong is not that. It is the fact that they claim the fidelity got worse with every conversion, and with multiple listeners. Therefore we get to multiply our small probabilities above by each other and arrive at an astronomically small number, essentially equal to "can't happen," especially since they provided no measurements to show objective differences in the system's output. If they had not done these consecutive tests, you could not have dismissed what they found so easily. As it is, even my theory needs to be put to the test by having them repeat the experiments.
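To illustrate the multiplication (the per-conversion probability and the number of conversions below are purely hypothetical; the TAS piece supplies neither figure):

```python
# The "multiply the small probabilities" argument, with purely hypothetical
# numbers: suppose each generation of conversion has at best a 10% chance
# of producing a genuinely audible degradation.

p_per_generation = 0.10   # hypothetical, for illustration only
generations = 5           # hypothetical number of consecutive conversions

p_all = p_per_generation ** generations
print(f"P(audibly worse after every one of {generations} conversions): "
      f"{p_all:.0e}")     # 1e-05 -- astronomically unlikely
```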