Originally Posted by krabapple
But there are economies of scale that a mass market mfr can employ that a boutique can't.
How many CE companies have you visited Krab? I have been to all of them from Korea to Japan and back. Have you worked for a CE company? I have, for five years for Sony. I can tell you that your generalization is completely misapplied to this topic.
Much of what we are talking about in this thread comes not from VLSIs but from great attention to detail in analog design and, in some cases, the cost of getting there. Boutique companies have an incredible advantage here: cost is not a driver. Weight is not a driver. Size is not a driver. Getting Best Buy to carry the product is not a driver. As an engineer and a manager who has had to live within such bounds, I can't tell you how many shortcuts one has to take to live inside this barbed-wire jail.
Mass market companies get an incredible advantage when it comes to the latest VLSI, the software stack for it, etc. So if we were talking about who was going to get to HDMI 1.x faster, you would be very right. But a power supply design for a DAC? A clock circuit and PLL? Nope.
Now, being a high-end company doesn't mean you know what you are doing, and many don't. They lack the test equipment to even measure some of the things we are talking about. Unfortunately, mass market companies are no better, as there is no marketability in the stuff we are talking about. It is not a logo to slap on the front of the box, so it doesn't matter. If the box performs well, it is more an accident than the result of attention to design. Look at Pioneer's latest AVRs taking a step backward on jitter.
And even if that didn't make for better DACs on average, in the end, it's about what's audible, and under what conditions they are audible, right?
Let's try a fresh angle on this age-old question. If you bought a TV and its color was shifted 5% toward green, would you still get enjoyable pictures out of it? The answer is sure. Millions of people watch such TVs.
As you know, folks who care about the best picture quality get their set calibrated to the correct settings and avoid displays that cannot be calibrated. We want to comply with Rec. 709 (ITU-R BT.709) for HD images. We have a metric of what is right in that standard.
By the same token, I advise people to buy audio equipment that meets minimum measured specifications. And I don't pick high targets. I just say let's get an honest 16 bits of fidelity at 44.1 kHz. Thirty years after the introduction of the CD, we should not reward companies that routinely butcher the last 1-3 bits of such samples. A lazy specification for this is 500 ps of peak-to-peak jitter. Think of this as the Rec. 709 of digital audio.
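To put a number on why a spec like that is "lazy": the usual back-of-envelope model (the standard textbook approximation, not a derivation from this thread) says random sampling jitter limits the SNR of a full-scale sine at frequency f to -20*log10(2*pi*f*t_rms). A quick sketch of the budget; the 20 kHz worst-case tone and the 6.02N+1.76 ideal-converter figure are my assumptions here:

```python
import math

def jitter_limited_snr_db(f_signal_hz, t_jitter_rms_s):
    # Worst-case SNR of a full-scale sine sampled with
    # random clock jitter of t_jitter_rms_s (RMS seconds).
    return -20.0 * math.log10(2.0 * math.pi * f_signal_hz * t_jitter_rms_s)

def max_rms_jitter_s(bits, f_signal_hz):
    # RMS jitter that keeps jitter noise at or below the
    # quantization floor of an ideal `bits`-bit converter.
    ideal_snr_db = 6.02 * bits + 1.76
    return 1.0 / (2.0 * math.pi * f_signal_hz * 10 ** (ideal_snr_db / 20.0))

# "Honest 16 bits" with a 20 kHz tone works out to roughly 100 ps RMS,
# the same ballpark as a few hundred ps peak-to-peak.
budget = max_rms_jitter_s(16, 20_000)
```

Under this model the budget relaxes proportionally at lower signal frequencies, which is part of why jitter audibility is so content-dependent.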
Many times this doesn't require spending more money, although sometimes it does. In this thread, I have suggested the option of a USB bridge that takes a $500 PC and elevates it to the status of $30,000 transports, with a level of convenience that simply does not exist in those products. Normally this would be a time to go and celebrate, but instead we have 7 pages of debate. Why would you not want a superlative jitter spec out of your PC when the whole package costs so little?
Your argument is that if it can't be proven audible, you don't want to listen. I find that inconsistent with the video analogy I used. I enjoy CSI the same amount whether the screen is 5% green or not, but I like the color correct if I can get it. When a skin tone comes on screen and it is off, it bothers me. You can't come and tell me it shouldn't, because it is happening to me, some of the time, with some content, and not under blind test criteria. It is my money, and if I am searching for the best -- which this forum is designed for -- I should be able to use measured criteria to choose the right product.
Of note, I am not trying to ask you to adopt this philosophy. I am OK with everyone making their own choice. What I am not OK with is stopping every discussion and demanding audibility tests. I tell you that I want to use measurements, and you can't tell me I shouldn't. After all, you are not going to argue that equipment that measures better is worse, are you?
Is this the case, and if so, what loudspeakers did NOT prove so revealing?
No speakers. As I shared with him in a PM, at the time I was deep into DAC testing, I used my headphones exclusively because I could do that at work and at home without bothering anyone. My lack of speaker use is not indicative of anything other than how I could get my testing done.
And does it say something that it requires a setup of THAT calibre for even someone with YOUR fine hearing, to perceive the difference?
No, it doesn't. All the major audio codecs in the world are tested with corner cases that heavily accentuate compression artifacts. By your logic, we should not use them, since needing such material might indicate we are otherwise deaf, or that other music doesn't have such characteristics. That is not how the real world of audio evaluation works.
Given a job, we want it to be as easy as it can be. I am always looking for the most challenging material to use with the DUT (device under test), and I pair it up with other equipment that is as transparent as it can be. Such an approach does not mean that without those choices the problem goes away. You can't invert the logic and expect it always to hold.
Besides, these choices aren't always expensive. I have recently tested components with the Revel M22s, which are $2,200 speakers; they are amazingly transparent and scarily close to the performance of the Salon 2s in the mids and highs. I would happily use them to test DACs and such as much as high-end gear. Sure, I would not detect differences in bass as well as I can with the Salon 2s, and perhaps I lose a bit of detectability in other regions too. But the work can still get done, and done well.
Interesting. Currently I play all my music from external HD --> laptop optical S/PDIF out, to my AVR, (which 'FAILED' the miller jitter tests, btw, but it sounds pretty good to me...). My AVR accepts USB audio input directly. I only ever stream 2-channel LPCM, or multichannel DTS/Dolby lossy compressed bitstreams, not multichannel LPCM. Is there a good chance the fidelity of the signal from laptop to AVR output would be higher using the USB connection than the S/PDIF?
You should test it. All else being equal, I expect it to actually be worse than S/PDIF, due to issues similar to those reported in the TI USB DAC. USB jitter spectrum is awful, and as I have noted time and time again, that is often more important than the single-number jitter value.
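To illustrate why the spectrum matters: periodic jitter (power-supply-correlated, for example) doesn't raise a broad noise floor; it plants discrete sidebands at f_signal ± f_jitter, and for small jitter the standard small-angle approximation puts them about 20*log10(pi * f_signal * A_jitter) below the carrier. A minimal simulation; all the values below are illustrative picks of mine, not measurements of any product:

```python
import numpy as np

fs = 1_000_000      # simulation rate (a stand-in, not an audio rate)
f_sig = 10_000      # test tone, Hz
f_jit = 1_000       # sinusoidal jitter frequency, Hz
a_jit = 10e-9       # jitter amplitude in seconds (exaggerated for visibility)
n = 2 ** 16

t = np.arange(n) / fs
# Sample the tone at instants perturbed by sinusoidal jitter:
x = np.sin(2 * np.pi * f_sig * (t + a_jit * np.sin(2 * np.pi * f_jit * t)))

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
spec_db = 20 * np.log10(spec / spec.max())
freqs = np.fft.rfftfreq(n, 1 / fs)

# Small-angle PM prediction: sidebands at f_sig +/- f_jit,
# about pi * f_sig * a_jit below the carrier.
predicted_db = 20 * np.log10(np.pi * f_sig * a_jit)
```

The same total jitter, spread as random noise, would be far less objectionable than two discrete tones parked 1 kHz on either side of the signal. That is the sense in which the shape of the spectrum tells you more than one aggregate number.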
If you are on the PC, pop up the sound device manager. Start playing something, then change which device is the default, and while that is switching over, do the same on the AVR. This still means a multi-second delay, but it is livable. WMP will usually continue playing, so you don't need to restart it. Ideally you would have two PCs playing the same track so that the switchover is instantaneous. I am actually building a version of this at work.