Originally Posted by rnrgagne
Who said anything about AVR's?
I was picking AVRs as an example of a common product at which the question here can be directed. Do we have a belief that they are transparent to the source?
Originally Posted by rnrgagne
But you don't have to "believe" something is transparent to the source; you can simply measure it with instruments that are sensitive well beyond human hearing capabilities and prove it.
What is the jitter measurement for the latest AVRs on the market? And what is the level for human hearing detection? The answer to the former is that it does not exist, from either manufacturers or third parties. For the latter, see below.
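To put rough numbers on it, here is a back-of-the-envelope sketch in Python. It uses the standard small-deviation phase-modulation approximation: sinusoidal jitter of peak amplitude J seconds on a tone at f0 Hz creates a pair of sidebands roughly 20*log10(pi*f0*J) dB below the tone. The figures are my own illustration, not any AVR's measured performance:

Code:
import numpy as np

# Small-deviation phase-modulation approximation: sinusoidal jitter of
# peak amplitude J seconds on a full-scale tone at f0 Hz produces a
# sideband pair roughly 20*log10(pi * f0 * J) dB below the tone.
def jitter_sideband_db(f0_hz, jitter_s):
    return 20 * np.log10(np.pi * f0_hz * jitter_s)

for j in (10e-9, 1e-9, 100e-12):   # 10 ns, 1 ns, 100 ps peak jitter
    print(f"{j*1e9:5.1f} ns on a 10 kHz tone -> "
          f"{jitter_sideband_db(10_000, j):6.1f} dB sidebands")

Whether sidebands at -70 dB or -110 dB matter is exactly the kind of threshold question that published measurements would let us settle.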
Originally Posted by rnrgagne
Human hearing is a finite beast: there's a limited frequency range you can physically hear, and only so much signal variation that it is possible to hear. If the equipment's input-to-output signal variation is outside of those capabilities, then what is it exactly that is being heard?
The ear can be a very complex animal. See how you can hear a 3-D soundstage using two speakers. Or how, with some processing, we can make the sound come from around you even though we still use the same two speakers (think simulated surround). Linear effects such as a large dip in frequency response are indeed easy to quantify, and their audibility is easy to prove. While some argue for ultrasonic coverage there, we can wave our hands at that.

The problem becomes non-linear distortion. Take 128 kbps AAC compressed audio. That encoding can have ruler-flat frequency response and near-zero distortion. Yet it can have a non-linear, data-dependent distortion on transients, called pre-echo, that can be audible. The system has no distortion in one instance, and large amounts of it just a few milliseconds later. For this reason we never evaluate compressed audio using audio measurement tools. We use only listening tests, as the measurements simply do not detect the distortions that are clearly audible. Computer modelling exists for those artifacts but is insufficient (or else we would dispense with listening tests).
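To make the pre-echo mechanism concrete, here is a toy block-transform quantizer in Python. It is emphatically not AAC; it is just my own minimal stand-in for the block-based coefficient quantization that causes the artifact. A steady sine quantizes with near-zero error, while a transient at the end of a block smears quantization noise across the samples before it:

Code:
import numpy as np

N = 1024                                  # transform block size

def toy_codec(x, step):
    # FFT the block, coarsely quantize the coefficients, inverse FFT.
    # Stands in for the transform+quantizer of a real encoder.
    X = np.fft.rfft(x)
    Xq = step * np.round(X / step)        # uniform coefficient quantization
    return np.fft.irfft(Xq, n=len(x))

step = 2.0

# Case 1: a steady, bin-centered sine. Its energy sits in one
# coefficient, so the quantization error is vanishingly small.
sine = 99.7 * np.sin(2 * np.pi * 10 * np.arange(N) / N)
err = toy_codec(sine, step) - sine
print(f"sine : error RMS {err.std():.4f} (signal RMS {sine.std():.1f})")

# Case 2: a click confined to the last 16 samples. The coefficient
# errors are spread over the WHOLE block on reconstruction, so noise
# appears hundreds of samples BEFORE the transient: pre-echo.
click = np.zeros(N)
click[-16:] = 99.7
err = toy_codec(click, step) - click
print(f"click: error RMS before the click {err[:-16].std():.4f}")

A steady-state measurement sees only case 1 and declares the system clean; the audible trouble lives entirely in case 2.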
Our current audio measurement techniques are quite ancient and deficient. Take THD for amplifiers. It is the summed power of the distortion harmonics. But the ear does not hear each harmonic distortion component equally. The ones closer to the tone may be less audible, and at any rate, audibility varies based on the source frequency itself. Earl Geddes has a great paper on this that is worth a read: http://www.gedlee.com/distortion_perception.htm
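For reference, here is how the classic THD number is computed; the 0.001*x**3 term is a made-up stand-in for a device under test. Note that the root-sum treats the 2nd and the 9th harmonic identically, which is exactly the equal weighting Geddes takes issue with:

Code:
import numpy as np

fs, f0, N = 48_000, 1_000, 48_000         # 1 kHz tone, 1 second at 48 kHz
t = np.arange(N) / fs

x = np.sin(2 * np.pi * f0 * t)
y = x + 0.001 * x**3                      # hypothetical mild nonlinearity

Y = np.abs(np.fft.rfft(y * np.hanning(N))) / N
bin_of = lambda f: int(round(f * N / fs))

fund = Y[bin_of(f0)]
harms = np.array([Y[bin_of(k * f0)] for k in range(2, 10)])

# Classic THD: root-sum of harmonic powers over the fundamental,
# with every harmonic weighted equally regardless of audibility.
thd = np.sqrt(np.sum(harms**2)) / fund
print(f"THD = {100*thd:.4f}% ({20*np.log10(thd):.1f} dB)")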
Complicating matters is that there are some really strange things going on here. I just wrote an article for the 20th anniversary issue of Widescreen Review magazine (it comes out next month). In it I show how there are acoustic distortions that look really bad in a measurement, yet humans perceive them as positive and desirable. Taking them away makes things worse, not better! How do we quantify these? How do we continue to trust our gut when it misleads us this way?
Personally, I have no better wish than to determine these limits and take the human out of the equation altogether. To that end, I have a set of criteria that I use to establish transparency. But try as I have, I can't convince many of the vocal members here of them. They like far lower thresholds. Take CD audio at 16 bits. I say we should have a system that actually resolves 16 bits of dynamic range. They say no, that should not be the goal; we don't need it for X, Y or Z reason. That's where they lose me, because it is not expensive at all to achieve such comfortable targets. I don't get why I should settle for 13 bits of resolution when I can mathematically demonstrate that the shortfall exceeds the threshold of audibility, and when the full 16 bits can be achieved at very low cost. Why advocate for how bad we can make the system before someone complains?
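The arithmetic at stake is just the ideal-quantizer formula, dynamic range ≈ 6.02*N + 1.76 dB for a full-scale sine:

Code:
# Ideal dynamic range of an N-bit quantizer (full-scale sine
# versus quantization noise): 6.02*N + 1.76 dB
for bits in (13, 16):
    print(f"{bits} bits -> {6.02*bits + 1.76:.1f} dB")
# 13 bits -> 80.0 dB
# 16 bits -> 98.1 dB

Those ~18 dB are what we are arguing over, and closing the gap costs next to nothing with modern parts.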
Anyway, back to my question to you: it is clear we all have beliefs here. The argument therefore is not that one side has beliefs and the other does not. It is all beliefs. How well-researched and thought-out those beliefs are is the question.