Hi PortaPro:
___I am not really good at explaining this, and I'm not sure if this is what you wanted, but I will give it a shot.
___DPL or DPL II takes just the two front (L & R) channels and matrixes them into multiple outputs for a simulated surround. DD or DTS soundtracks, on the other hand, are individually and discretely recorded and/or created channels, delivered off the disc in their digital format exactly as the director of the DVD/LD wants you to hear them. These are two completely different sound setups. Taking channels already created by DPL or DPL II and feeding them into a DD 5.1 encoding scheme in a digital format, which is what the nForce APU (Audio Processing Unit) on the MCP (as Nvidia calls the chip) does, isn't going to work the way you want, and the signal probably isn't coded properly for the APU to perform the sound processing in the first place.
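___To make the matrixing idea concrete, here is a toy sketch (my own simplification, not Dolby's actual algorithm, which adds steering logic, phase shifts, and separate rear left/right channels) of how a passive Pro Logic-style decoder pulls extra channels out of nothing more than L and R:

```python
import math

def prologic_decode(left, right):
    """Toy passive matrix decode: center comes from the sum of the
    two front channels, surround from their difference. Real DPL II
    adds active steering, phase shifting, and discrete rear L/R."""
    center = (left + right) / math.sqrt(2)    # in-phase content -> center
    surround = (left - right) / math.sqrt(2)  # out-of-phase content -> surround
    return {"L": left, "R": right, "C": center, "S": surround}

# a sample where both fronts carry correlated signal
channels = prologic_decode(1.0, 0.5)
```

The point is that every "extra" channel is just arithmetic on the original two; no new information is created, which is why it is fundamentally different from a discrete DD/DTS track.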
___What the nForce APU is supposed to do in a game is create the 5.1 discrete sounds as if you were interacting inside the game you are playing. The game must also be coded for the process. Imagine you are a player and you hear an enemy moving up from behind, and you spin to engage him with guns blazing. You first hear the sound of the intruder behind you in a 5.1 format, you then would hear the sweep of sound as you spun, and finally the sound of the intruder in front of you as you're head-to-head, firing away. This is dynamic in that the sound changes as you change position in the room (spin) as you see fit. The same can be said for the sound source (player/bot) in the room as he/she changes position (turns and runs with guns blazing toward you), or as the room itself changes as you walk/run at your own pace or direction through it (sound in a narrow hallway sounds different than in a large auditorium if you were to walk from one to the other). Again, this is dynamic, as it should be when you interact with the surroundings in the game.
___When you watch a movie, there is nothing dynamic about it in relation to where you are viewing it from, other than the action on the screen itself. You do not change position relative to (interact with) the movie; only the display changes, with the sound recorded and played back as the director sees fit to match the scene being displayed. Neither you nor your wife will hear the scenes much differently (sitting in the back or off to the side does sound different than sitting in the middle, of course). The sound is locked down so that no matter who watches the film, or from where, they will hear basically the same thing you do, because it's placed on every disc exactly the same. There is no processing other than decoding the 5.1 DD/DTS signal, either inside your DD/DTS-capable receiver or in a software DVD player feeding a 5.1 sound card's analog outputs into a 6-channel amp.
___All the above was posted to show that a movie's sound is locked down on the disc, as it is supposed to be, to match the action on the disc, and the nForce's APU should not change any of the attributes of those sound channels. It should simply pass through to your DD/DTS decoder what the director wanted you to hear in the first place. In a game, the nForce's APU is supposed to calculate what the various sounds will sound like from your point of view: your position, your direction of movement, the room around you, and the various sound sources as they move as well. This of course changes continuously.
___As far as gaming is concerned, the problem with dynamic game sound is that there is processing going on to calculate how the sound should be output, and with those calculations comes a small amount of latency. As you spin, for example, the APU is calculating just a small fraction of a second behind the action how the sound is supposed to be presented to you to match the scene on your display of choice. This small fraction of a second is sometimes noticeable, in that the APU gets it wrong or falls too far behind the scene and you notice. You can limit the sound algorithm's complexity or feed it even fewer variables, but that reduces the lifelike sound you would expect. The lifelike sound processing the nForce APU is currently capable of runs roughly 1 to 2 frames behind the scenes being rendered, from what I have read. This is almost unnoticeable in most cases but is supposedly noticeable in some situations.
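___To put that 1-to-2-frame lag into real time, here is the quick arithmetic (assuming a 60 fps frame rate, which is my assumption, not a figure from Nvidia):

```python
def frames_to_ms(frames, fps):
    """How far behind the picture the audio lags, in milliseconds,
    for a given number of frames at a given frame rate."""
    return frames / fps * 1000.0

# at an assumed 60 fps, a 1-2 frame lag works out to roughly 17-33 ms
lag_low = frames_to_ms(1, 60)
lag_high = frames_to_ms(2, 60)
```

Tens of milliseconds sits right around the threshold where people start to notice audio trailing the picture, which squares with it being "almost unnoticeable in most cases."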
___Anyway, as far as the nForce producing a 5.1 output from a 2-channel signal that has already been pre-processed into 5.1 by DPL II, it is likely to give you a mess, if I understand your question correctly. Also, I do not have a grasp on the technology nearly as well as I should, so hopefully some of the members more astute in audio processing can teach us both a thing or two.
___Good Luck
___Wayne R. Gerdes
___Hunt Club Farms Landscaping Ltd.
___ [email protected]