Allow me to state a few things, to make more accurate answers possible.
Firstly, a great deal of us here talk in the context of audio only. This is the audiophile part. Those asking for bit-perfect output won't be satisfied with answers along the lines of "but now it's floating-point calculations, so everything is much better than in XP". Bit perfect = bit perfect, and anything that is not destroys the sound (for the audiophile it does, and this is not stupid theory; it is similar to induced jitter, which highly impacts the resulting sound).
At the same time, however, the last thing the audiophile would want is any processing at all, including any volume slider in the digital domain. What I understand from it is that exclusive mode gets around software mixing, but that doesn't say (yet) that all volume sliders are disabled ...
Note that even when *all* environmental rules around the current kmixer are met, so that it wouldn't destroy the sound by resampling, it still wouldn't output a bit-perfect stream when the volume is not at exactly 100%, or whatever else it is that destroys the stream natively (!).
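To make this concrete, here is a minimal sketch (my own numbers, nothing from Vista itself) of why any digital volume setting below 100% breaks a bit-perfect stream: the scaled samples must be re-quantized, so the output words no longer match the source words.

```python
# Toy illustration: scaling 16-bit samples by a gain < 1.0 forces
# re-quantization, so the stream is no longer bit-perfect.

def apply_volume(samples, gain):
    """Scale 16-bit samples by `gain` and re-quantize by rounding."""
    return [max(-32768, min(32767, round(s * gain))) for s in samples]

source = [12345, -20001, 32767, -1]   # made-up sample words

assert apply_volume(source, 1.0) == source    # 100%: bit-perfect
assert apply_volume(source, 0.99) != source   # 99%: output words differ
```

The same re-quantization happens whichever slider does it (player, OS, or driver), which is why the only harmless "volume at 100%" is no digital volume at all.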
This part concluded : the audiophile just can't use anything other than bit perfect, and no, the audiophile won't ask for a ding-dong when a new email arrives ...
The other part of us (in this AUDIO thread) is the HT user. He (or she) too wants the best sound possible, but he can hardly ask for bit-perfect bass management.
I think this was explained properly by Amir, but it is hard to understand when the environment (and proper context) isn't set out in advance. So :
Vista will calculate with floating-point arithmetic, will use 24 (32 ?) bits internally, will possibly add dither along the way, and all in all the results should be many factors better than how it's done in XP (with just 16 bits, as I derive from it all).
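As a hedged illustration of why that should help (a toy two-stage chain, not Vista's actual pipeline) : processing in floating point and converting back to integer only once avoids the cumulative rounding that a pure 16-bit integer chain suffers at every stage.

```python
# Toy comparison of a 16-bit integer processing chain against a
# floating-point chain that quantizes only once at the end.

def int16_chain(sample, gains):
    # a 16-bit chain must re-quantize to integer after every stage
    for g in gains:
        sample = round(sample * g)
    return sample

def float_chain(sample, gains):
    # keep full float precision, quantize once at the end
    x = float(sample)
    for g in gains:
        x *= g
    return round(x)

gains = [0.5, 2.0]               # e.g. -6 dB, then +6 dB back
print(int16_chain(101, gains))   # 100: a bit was lost to rounding
print(float_chain(101, gains))   # 101: the original value survives
```

Of course the float chain is still not bit-perfect once any gain other than 1.0 is left in at the end; it just degrades far more gracefully than a 16-bit chain does.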
It does this when (e.g.) bass management is applied, *and* bass management just can't be done without altering the stream.
The latter, too, was indirectly explained by Amir, in his one line about the data not being allowed to exceed 100% :
For example, when the front channels are calibrated by the data itself to 100% (the maximum; it just could be the case, as it is with about any recent CD production), there is simply no headroom left to fill the fronts with e.g. center-channel data (when you have no center speaker), because it comes down to a rather linear addition of amplitude data, which would let the sound clip in the digital domain.
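The clipping argument can be shown with a few made-up numbers : folding a center channel into already-hot fronts is a near-linear addition of amplitudes, which must clip at the digital ceiling unless the mix is attenuated first (and attenuation, in turn, is no longer bit-perfect either).

```python
# Toy downmix: fold the center channel into a front channel,
# clipping at 16-bit full scale. All values are invented.

FULL_SCALE = 32767  # 16-bit maximum

def downmix(front, center):
    """Add the center channel to a front channel, clipping at full scale."""
    mixed = front + center
    return max(-32768, min(FULL_SCALE, round(mixed)))

print(downmix(30000, 20000))   # 32767: 50000 exceeds full scale, so it clips
print(downmix(15000, 10000))   # 25000: pre-attenuated channels leave headroom,
                               # but the fronts are no longer bit-perfect
```

Which is exactly the dilemma : either clip, or attenuate and give up bit-perfect fronts.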
Vairulez, from whom I originally learned these principles, will know himself that he's asking for the impossible (bit-perfect playback) ... if it were about bass management in the digital domain.
Amir, however (knowing all of this perfectly well too), should respond to questions in the area of volume sliders that these belong on the amp, and in the analogue domain, only. There it is perfectly allowed, with bit-perfect (per-channel) streams as the basis. Again, this matters similarly to the jitter phenomenon.
Only when the amp doesn't have this provision (and this is just about always the case) can the solution be offered by bass management in the digital domain, as an option.
With the above I tried to establish a clearer basis for the discussion, sensing that I myself couldn't get the answers, no matter how good the questions were, without the (IMHO) needed context.
Of course it is *necessary* that I get the appropriate feedback on where I was wrong in the above ...
I'd like to add a question :
The in-depth explanation so far was about exclusive use : connecting to the audio renderer's pin, thereby addressing the soundcard directly;
This would come down to DirectShow in the context of XP.
But what about DirectSound ? I'm not sure, but I don't think the DirectSound programmer can talk in terms of connecting to pins etc.; in XP he talks in terms of Secondary and Primary Buffers. Would it be possible in the DirectSound representative of Vista to have exclusive use with the same results ?
Please note that getting exclusive use of the Primary Buffer wouldn't do the job, because, as in XP, this would still go through kmixer as soon as the soundcard doesn't provide hardware mixing. This, I think, depends on the size of the buffer needed; IOW, when the buffer is too large for the soundcard, you'd go through kmixer again. My question would come down to something like : would it be possible to get exclusive use of (all !) the secondary buffers, in order to guarantee to the kernel that nothing (with a different sample rate etc.) may cause kmixer to jump in ?
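As a side note on the resampling worry : a toy linear-interpolation resampler (certainly not kmixer's actual algorithm) already shows why any sample-rate conversion, e.g. 44.1 kHz to 48 kHz, cannot be bit-perfect: most output samples are interpolated values that never existed in the source stream.

```python
# Toy linear-interpolation resampler, purely to illustrate that
# sample-rate conversion invents new sample values.

def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation sample-rate converter."""
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate       # fractional source position
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(round(a + (b - a) * frac))
    return out

src = [0, 1000, 2000, 1000, 0, -1000, -2000, -1000]  # made-up samples
up = resample_linear(src, 44100, 48000)
# the converted stream contains values never present in the source
print(sorted(set(up) - set(src)))
```

So whenever kmixer has to match rates, bit-perfect is gone by construction, independent of how good its (much better than this) interpolation filter is.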
I know the question looks simple, but the answer probably is not simple at all, because of the very different way of working (??) in Vista.
Sorry for the long post,