Originally Posted by amirm:
"You clearly said then that the mass market companies have tools that, when it comes to audio fidelity, high-end companies lack. I am challenging you to list those tools. There is nothing in common between the two messages above."
(bolding mine) No, that's not what I said, Amir. You were touting boutique capabilities, and I offered the counterpoint that mass marketers have some real firepower for delivering high-quality audio too. Meaning: features that were 'high end' at one time can trickle down to the mass market thanks to economies of scale. Period. Give it up!
Oh no you don't. You said that I should ignore a measurement because you know it is too good. It is your assertion that it is too good. So you are the one who has to demonstrate that.
Huh? Ignore a measurement because I know it is too good? Haven't a clue what you're on about here. Apparently we're not communicating.
Remember, on my side I have the math and physics saying it is not good. So any counter-position is the one that requires proof. If you want me to ignore a performance metric, show me why.
Now, we both know that you can't answer that question. Given that, you are advocating bad advice. You are telling me to stick my head in the sand without any justification.
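An aside, since 'the math' keeps being invoked without numbers: the usual back-of-envelope calculation here is the small-jitter sideband approximation, where sinusoidal jitter of peak amplitude J seconds on a pure tone of frequency f produces sidebands at roughly 20*log10(pi*f*J) dBc. The sketch below is my own illustration of that approximation, not a figure from either of us:

```python
import math

def jitter_sideband_dbc(tone_hz: float, jitter_peak_s: float) -> float:
    """Sideband level relative to the carrier (dBc) produced by
    sinusoidal sampling jitter of peak amplitude jitter_peak_s seconds
    on a pure tone of frequency tone_hz (small-jitter approximation)."""
    return 20 * math.log10(math.pi * tone_hz * jitter_peak_s)

# A 10 kHz tone with 1 ns of peak sinusoidal jitter puts the
# sidebands roughly 90 dB below the tone:
print(round(jitter_sideband_dbc(10_000, 1e-9)))  # -90
```

The point of running the numbers is that "the math says it is not good" is only half an argument; the other half is whether sidebands at that level clear any known audibility threshold.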
I'm asking you to provide the evidence if you heard something different on your system, and it was due to jitter. Haven't you claimed to do so? With the Revels?
Read the article. It wasn't my test. Their trained listeners didn't just say the system sounded bad; they said it sounded broken! This is a system with a measured THD of 0.03%. Direct quote from the article:
“When the guys in charge listened to the prototype I saw dubious faces and was asked a variety of questions such as "Is the source coming from the PC corrupted?" In the end I was told to measure the audio performance. When I announced the results in a subsequent meeting I was told the distortion was an order of magnitude too high; the THD+N was 0.03%.”
You think they needed a blind test to tell the system was corrupting data? Do you need a blind listening test if your audio system acted in that manner one day?
As I've written, measurements + past research on human hearing can indicate when an audible difference is likely, without a blind test.
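To put amirm's number on a familiar scale (my own arithmetic, not from the article): a THD+N of 0.03% works out to about -70 dB relative to the signal, a level many would consider inaudible on typical music. The conversion is a one-liner:

```python
import math

def thd_percent_to_db(thd_percent: float) -> float:
    """Convert a THD(+N) figure given in percent to dB
    relative to the full signal level."""
    return 20 * math.log10(thd_percent / 100.0)

# The article's 0.03% THD+N, expressed in dB:
print(round(thd_percent_to_db(0.03), 1))  # -70.5
```

Which is exactly why "trained listeners heard it as broken" is interesting: the complaint in the article evidently wasn't about that steady-state THD figure alone.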
I will repeat again: I will get better than 90%, if not 100%, of people to say there is no difference to them between the original and one shifted 5%.
Well, for one thing, show me the methods and data. Details... this is what people keep asking of you. For another thing: is that success rate anything close to what you'd get with jitter-audibility tests? If a 5% green shift on TVs is significantly more obvious than the audio jitter levels we're talking about, your analogy breaks down. Just trying to keep things apples to apples here.
Heck, why not just use 320 kbps MP3? That is for sure vanishingly unlikely to be audible to most listeners, on most material, in most situations, but no one knowledgeable says it COULDN'T EVER be audible. So some listeners, even those who have ABX tested themselves and have never been able to tell the difference, might still never use it, 'to be safe' (personally I'm fine with 198 kbps VBR, LAME, for my lossy codec needs). That seems a better analogy; at least it's still audio.
I would posit, though, that we haven't even established that jitter levels reported from HDMI are audible as often as 320 kbps MP3! So again: we should worry about hearing audio jitter in our home HDMI connections? Seriously?
Remember, the stats are in my favor. Tens of millions of TVs are sold which are way, way more screwed up than 5%. Do you see headlines anywhere in mass newspapers and magazines that people are buying non-performing devices? Nope.
And of course, buying or not buying is not the same as seeing no difference. Tolerating a difference doesn't mean it isn't seen. Again, though, I really need to know whether you have data showing that a 5% green shift is comparable, in terms of perceptibility, to the jitter levels reported for 'failed' audio gear. Perhaps we can start with what your own success rate at both is, in controlled tests?
I am not going to let you change the topic
Pretty funny for you to write that, given your own divergences in this thread. It's not like I'm the one who brought up video, for example. Sheesh.
I gave you a perfect analogy, and one that I know you identify with as much as I do. I said I am proposing a *measurement* standard and quality bar for digital audio. You are advocating against it. I want to know why you won't do that for video but will for audio. They are both measurements.
If I came here and said I have measured my projector with my $20,000 Minolta meter and my colors were off by 5%, would you ask me to ignore that and instead run a blind test to see if I can tell the difference, and if not, ignore the problem? If so, I'd like to see a few posts in this forum where people have done that.
A 'perfect' analogy? Well, we can differ on that; video analogies to audio are a minefield, in my experience. I'm all for quality standards, btw. Let's aim for the stars. But let's not imply that if you use HDMI for 2-channel audio you're losing out and it's time to upgrade! I'm not for claims of artefact audibility that aren't well backed up.