AVS Forum
1 - 11 of 11 Posts

Registered · 6 Posts · Discussion Starter · #1
I'm hooking up an Oppo HD981 to a Marantz SR5001 for multi-channel music (SACD and DVD-A). Which is the preferred connection for sound quality, HDMI or analog?


I would prefer to avoid buying the analog interconnects, but if the sound quality is better, then I'll just have to bite the bullet.


Thank you!
 

Registered · 1,315 Posts
oblio98's wrong. Your equipment is nice and shiny and modern, so you can use HDMI.


The very first players would have required you to use 6-channel analogue, but HDMI is fine for you.


The only minor wrinkle is that the DSD sound will be converted to PCM by the Oppo to be transferred across the HDMI link. It's possible that if you used the analogue outputs it would go straight from DSD to analogue. This difference would upset purists. However, the Oppo may still go via PCM for its analogue outputs anyway; I'm not sure.
 

Registered · 6 Posts · Discussion Starter · #5
Thank you for the responses from both of you. I will hold off on the analog interconnects for now. I would have ordered the analog cables right away if there was a strong feeling that the SQ was better. Maybe I'll borrow some analog cables from a friend after I've listened through HDMI for a few weeks.


--rsilk
 

Registered · 2,159 Posts

Quote:
Originally Posted by KMO



The only minor wrinkle is that the DSD sound will be converted to PCM by the Oppo to be transferred across the HDMI link.

That's not really a minor wrinkle. If you want to listen to the recording as the producer intended, use 5.1 analog cables.
 

Registered · 3,657 Posts

Quote:
Originally Posted by Rupert


That's not really a minor wrinkle. If you want to listen to the recording as the producer intended, use 5.1 analog cables.

That's only useful advice if he doesn't plan to use any bass management or time alignment. His equipment will not allow those features to be implemented without a conversion someplace.


In fact, it's not even clear if BM/TA can be done to a native DSD signal (even players that don't "convert" DSD>PCM for output purposes do a conversion for processing purposes--along with a re-conversion to DSD in some cases). The only way to be certain that you have a "pure" DSD path is to go with the analogue cables to an MCH input that is a true pass-through AND forget about BM/TA. Without the proper speakers and the space to place them, this is not the best choice.


I have a player that outputs a pure DSD signal (if I leave all BM/TA settings to OFF) but I choose to redigitize the signal with my receiver (it can apply BM/TA to its MCH inputs) because A) I don't have five FULL-RANGE speakers to go with my sub and B) I don't have the space for a proper ITU configuration for those non-existent full-range speakers. I have tried the straight DSD signal, unprocessed for BM/TA, vs what I normally do, and the result, while "pure", is worse--given the limitations of my placement options and speaker sizes.


If the equipment the OP has lets him avoid an A/D/A (which I have to use to get proper BM/TA), that's great. But a "pure" path, while theoretically the best option, is not always so in the real world.
 

Registered · 1,315 Posts
Quite a lot of players can perform bass management by going through DXD, which is as close as you can get to pure DSD bass management. There's a commonly-used Sony chip that does this.


Not so sure about time alignment - I still can't understand why time alignment would be performed any differently on DSD than on PCM. If you want to delay a DSD signal by 1 ms, why not just hold it back by about 2.8 kilobits?
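To put numbers on that "2.8 kilobits" figure: at the standard DSD64 bit rate of 64 × 44.1 kHz, a fixed delay is conceptually just a FIFO of one-bit samples. A minimal sketch (the rate constant is standard DSD64; the function name is mine):

```python
# DSD64 runs at 64 x 44.1 kHz = 2,822,400 one-bit samples per second.
DSD64_RATE_HZ = 64 * 44_100

def dsd_delay_bits(delay_ms: float) -> int:
    """Bits of FIFO buffering needed to delay one DSD64 channel by delay_ms."""
    return round(DSD64_RATE_HZ * delay_ms / 1000)

print(dsd_delay_bits(1.0))  # -> 2822, i.e. roughly "2.8 kilobits" per channel
```

So a 1 ms per-channel delay costs only a few kilobits of buffer, which is why the difficulty of DSD time alignment in real products is puzzling.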
 

Registered · 3,657 Posts

Quote:
Originally Posted by KMO


Quite a lot of players can perform bass management by going through DXD, which is as close as you can get to pure DSD bass managment. There's a commonly-used Sony chip that does this.


Not so sure about time alignment - I still can't understand why time alignment would be performed any differently on DSD than on PCM. If you want to delay a DSD signal by 1 ms, why not just hold it back by about 2.8 kilobits?

I read a paper two years ago (can't find the link to it) that says the Sony chip to which you refer may include a DSD>PCM>DSD conversion--it goes from one bit to eight bits and back. Regardless, it is not a "pure" DSD signal if it undergoes any conversion. As for time alignment, it appears to be more difficult than BM on DSD (conversion or no), since far fewer players support time alignment for SACD playback than support bass management.


Beyond that, though, the fact remains, in the real world, most people need to do some form of BM/TA to the signal and the "pure" path becomes moot.
 

Registered · 1,315 Posts
By its very nature, it's basically impossible to do anything to a 1-bit DSD signal without doing some calculations in multi-bit form. It can only be "pure" if you don't bass manage at all. Even doing a 1-bit calculation introduces more noise - every requantisation to 1-bit adds to the noise levels.


The question is: is the intermediate form high enough resolution for it not to matter too much? The Sony chip does its calculations in "DSD-Wide" (2.8 MHz/8-bit). This is much higher-resolution than the typical PCM path in a receiver, and maintains the transient response, so in theory it should do a better job of bass managing the signal than the receiver could (at 192 kHz/24-bit).
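For a rough sense of scale, here's a back-of-the-envelope comparison of raw per-channel data rates (a sketch only; raw bit rate says nothing about noise shaping or effective in-band resolution, which is exactly the caveat debated below):

```python
def raw_rate_mbit_s(sample_rate_hz: int, bits_per_sample: int) -> float:
    """Raw per-channel data rate in Mbit/s, ignoring any coding overhead."""
    return sample_rate_hz * bits_per_sample / 1e6

print(raw_rate_mbit_s(2_822_400, 1))   # plain DSD64:        ~2.82 Mbit/s
print(raw_rate_mbit_s(2_822_400, 8))   # "DSD-Wide":         ~22.58 Mbit/s
print(raw_rate_mbit_s(192_000, 24))    # 192 kHz/24-bit PCM: ~4.61 Mbit/s
```

By that crude measure, DSD-Wide carries several times the raw data of a 192/24 PCM path, which is the basis for the "higher-resolution intermediate" argument.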


I'd agree that all the equipment gives the impression that time alignment is even harder than bass management for DSD. Leaves me totally stumped though. Don't understand the problem at all.


I'd say the pure DSD path is more likely to be used when playing 2-channel only. There are a lot of high-end audiophile 2-channel SACD players out there that won't be going through any sort of nasty, dirty home-theatre receiver.
 

Registered · 3,657 Posts

Quote:
Originally Posted by KMO


By its very nature, it's basically impossible to do anything to a 1-bit DSD signal without doing some calculations in multi-bit form. It can only be "pure" if you don't bass manage at all. Even doing a 1-bit calculation introduces more noise - every requantisation to 1-bit adds to the noise levels.


The question is: is the intermediate form high enough resolution for it not to matter too much? The Sony chip does its calculations in "DSD-Wide" (2.8 MHz/8-bit). This is much higher-resolution than the typical PCM path in a receiver, and maintains the transient response, so in theory it should do a better job of bass managing the signal than the receiver could (at 192 kHz/24-bit).

I'm no engineer (or scientist, for that matter--just an historian), but I've read several articles over the years that basically say any such "resolution" comparisons are really apples vs oranges. Even within the PCM camp, if you will, debates go on about what constitutes "hi-res". I've found the most compelling arguments to be those that maintain bit depth is quite a bit more important than sample rate (so 16-bit/192 kHz would NOT be hi-res, but 24/48, or even 24/44.1, would be). If I were to apply that logic to DSD, then even "DSD-Wide", at 8-bit, would NOT be hi-res. But even 1-bit DSD IS hi-res, so clearly we cannot make simple calculations/comparisons of that nature.
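There's a concrete reason those naive bit-depth comparisons break down. The textbook SNR of ideal N-bit PCM quantisation is roughly 6.02N + 1.76 dB, which makes 1-bit DSD look absurd on paper; in reality DSD gets its ~120 dB audio-band dynamic range from massive oversampling plus noise shaping, which that formula simply ignores. A quick illustration (the formula is the standard quantisation-noise result; applying it to DSD is precisely the apples-vs-oranges mistake):

```python
def ideal_pcm_snr_db(bits: int) -> float:
    """Textbook SNR of ideal N-bit PCM quantisation, assuming no noise shaping."""
    return 6.02 * bits + 1.76

print(round(ideal_pcm_snr_db(16), 1))  # -> 98.1 dB (Red Book CD)
print(round(ideal_pcm_snr_db(24), 1))  # -> 146.2 dB
print(round(ideal_pcm_snr_db(1), 1))   # -> 7.8 dB -- yet DSD64 is clearly hi-res
```

The 1-bit figure is the giveaway: bit depth alone tells you almost nothing about a heavily oversampled, noise-shaped format.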

Quote:
I'd agree that all the equipment gives the impression that time alignment is even harder than bass management for DSD. Leaves me totally stumped though. Don't understand the problem at all.


I'd say the pure DSD path is more likely to be used when playing 2-channel only. There are a lot of high-end audiophile 2-channel SACD players out there that won't be going through any sort of nasty, dirty home-theatre receiver.

I agree regarding 2-channel "pure DSD". That is the most accessible way to experience it (the MCH gear and placement necessary for "pure DSD" is prohibitive in space and/or cost for just about everyone).

All that said, I've done level-matched comparisons in 2 channel of pure DSD vs the A/D/A of my receiver (I set the speakers to large, sub OFF, and activated the A/D/A of the receiver on the MCH inputs--my receiver has 192/24 Wolfson DACs) and I found no difference that did not require a level of concentration I would NEVER employ, even for critical listening.

I've also tested the MCH with BM on each and there I found a clear favourite, as the xover slope of my receiver is steeper than that of the player. That trumped the "Sony chip" found in my player (a Marantz DV6400--not the last word in hi-res, of course, but a capable audio player when new, comparable to the Denon DVD-2900 for audio, if not video). Lastly, I compared playback with and without time alignment and, again, a clear favourite emerged.

Steeper xover slope + time alignment + DSD>PCM conversion + A/D/A sounds better in my room (without full-range speakers, and with space constraints preventing an ITU configuration) than the "pure DSD" path. Would I like a "pure DSD" path? Sure. But even with my Frankenstein's mess, both SACD and DVD-A are clearly better than redbook CD (where the mastering and recording quality are of high standards for each).
 