I think this has already been covered in the forum - I remember posting something maybe 6-8 months ago.
The signal quality or 'resolution' inherent in a digital signal is completely determined by the number of significant bits in the signal - basically 6 dB per bit. Consumer audio is 16 bits, which gives us the standard 96 dB figure. Note that it's important to focus on *significant* bits - you can't improve things by tacking on extra meaningless bits (that's the theoretical information content; in practice some devices may change their behavior when fed a 24 bit signal rather than a 16 bit signal).
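The "6 dB per bit" figure falls straight out of the math: each extra bit doubles the number of representable levels, and a doubling of amplitude is 20*log10(2) ≈ 6.02 dB. A quick sketch (function name is my own):

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Each bit doubles the number of representable levels,
    # and each doubling of amplitude adds 20*log10(2) ~= 6.02 dB.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB - the standard consumer figure
print(round(dynamic_range_db(24), 1))  # ~144.5 dB
```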
In order to reduce the volume of a digital signal, you have to multiply it by a number less than 1. In digital terms this is not particularly intuitive, but at one level it can be viewed as shifting the bits to the right - this isn't completely accurate, but it's the easiest way to think of it. Now, if you have a 16 bit value and shift it to the right, you end up with a zero in the leftmost (most significant) bit, and the lsb shifts out of the value altogether. For a 6 dB reduction this is a straight shift - the bit pattern stays exactly the same, just like dividing a decimal number by 10. For other values the bit patterns change, but the theoretical basis is the same, like dividing by 7.
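To make the shift picture concrete, here's a small sketch (sample value is arbitrary). A one-bit right shift halves the value, i.e. roughly -6 dB; any other attenuation is just a multiply by a fractional gain:

```python
sample = 0b0110_1001_1010_0101  # an arbitrary 16-bit sample (27045)

# Shift right one bit: same pattern moved over, lsb shifted out.
half = sample >> 1              # 0b0011_0100_1101_0010 - value halved, ~ -6 dB

# For steps that aren't multiples of 6 dB, multiply by a gain < 1.
gain = 10 ** (-3 / 20)          # -3 dB ~= 0.708
attenuated = round(sample * gain)
```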
The root of the 'digital volume reduction is BAD BAD BAD' idea comes from doing this while preserving only the topmost 16 bits of the signal - in that case you have in fact lost one bit of resolution for each 6 dB you attenuate the signal.
The problem changes completely if you can preserve/output 24 bits. In this case, when you start with the 16 signal bits in the 'top' of the 24 bit word, you get to keep that 'lost bit' because it just shifts down into one of the previously 'unused' 8 extra bits, and it gets sent out as a normal part of the signal. In fact, since you now have 8 extra bits, you can in theory attenuate by 48 dB before you are 'losing' any information at all.
To put this in decimal terms, it is like being able to keep extra digits to the right of the decimal point. E.g. you start off with a value of 25025; dividing by 10 gives 2502.5 - in a 16 bit world you would only be able to 'use' the value 2502, but in a 24 bit world you actually started with 25025.000, and so can use the full 2502.500 value.
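The same comparison in binary terms - a sketch with a made-up odd sample value, so its lsb is exactly the bit at risk:

```python
sample16 = 0x6001        # a 16-bit sample whose lsb is set

# 16-bit pipeline: shift right for -6 dB, the lsb is simply gone.
trunc16 = sample16 >> 1  # 0x3000 - one bit of resolution lost

# 24-bit pipeline: place the 16 bits at the 'top' of a 24-bit word first.
sample24 = sample16 << 8  # 0x600100
atten24 = sample24 >> 1   # 0x300080 - the lsb survives, shifted into the extra bits
```

With 8 spare bits below the signal, you can repeat that shift 8 times (48 dB) before anything actually falls off the end of the word.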
In practice, the performance of soundcards limits the amount you can actually do this to far less than the theoretical 48 dB. Good sound cards deliver measured performance in the 105 to 115 dB range (the LynxTwo being at the top of the heap). This sets the upper bound of attenuation before the limits of the card itself start degrading the process. Still, even with things like the 410, you should get a solid 10-12 dB of attenuation before any theoretical degradation takes place.
The main practical problem with this is that it is geared around using digital full scale as the 'normal' level. If your normal level sits more than that 10-12 dB below 'maximum', then you'll typically be listening to a degraded signal. In other words, in order for digital attenuation to work well in practice, you have to look very carefully at the analog signal levels in your equipment chain, and attenuate/balance them to allow running as close to digital full scale as possible for 'normal' use.
It should be pointed out that this does not involve dither or upsampling or anything like that - this is just basic digital math. Dither is a related but separate issue - one that lets you tailor the degradation imposed by shortening the word length of a signal. This *can* make digital reduction sound better when staying with a 16 bit output signal, but it does not prevent loss of resolution, it just helps mask it.
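For completeness, here is a rough sketch of what dithered word-length reduction looks like (function name and scaling are my own; TPDF dither is one common choice, not the only one). The point is that the noise is added *before* the bits are dropped, trading correlated truncation error for uncorrelated noise:

```python
import random

def truncate_to_16(sample24: int, dither: bool = False) -> int:
    """Shorten a 24-bit sample to 16 bits by dropping the low 8 bits."""
    if dither:
        # TPDF dither: two uniform random values, each spanning about
        # half a 16-bit lsb (a 16-bit lsb is 256 units in the 24-bit
        # domain), summed to give a triangular +/- 1 lsb distribution.
        sample24 += random.randint(-128, 127) + random.randint(-128, 127)
    return sample24 >> 8
```

Note that the low 8 bits are discarded either way - dither shapes *how* that loss sounds, it doesn't undo it.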