On slide 40 of 45 in "What do we mean by audibility" from the Parson's website, it is stated,
In the case of 20-bit vs. 24-bit, the actual change in voltage magnitude is less than 1/10,000th of 1%.
What do you mean by that?
Thanks for the question!
A 20-bit word yields a ratio of approximately 1:1,000,000 between the smallest step and full scale, while 24 bits yields approximately 1:16,000,000. Assuming the voltage ratios have a fixed and identical upper value of, say, 1, the 20-bit lower value will be 0.000001, or 1/10,000th of 1 percent, while the 24-bit lower value will be 1/16th of that, or slightly less than 1/100,000th of 1 percent.
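If you want to check that arithmetic yourself, here is a small Python sketch (my own illustration, not from the slides) that computes the smallest-step ratio for each word length and expresses it as a percentage of full scale:

    # Full scale normalized to 1; the smallest step is 1 / 2**bits.
    for bits in (20, 24):
        step = 1 / 2**bits
        print(f"{bits}-bit: ratio 1:{2**bits:,}, "
              f"smallest step = {step * 100:.7f}% of full scale")
    # Prints roughly 0.0000954% for 20 bits and 0.0000060% for 24 bits.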
These are extremely small magnitudes and magnitude changes, well below the noise floor of any reasonable audio signal. The actual CHANGE in magnitude will be from, say, 1 millionth of a volt to 1/16th of a millionth of a volt. Viewed as a percentage of the full-scale signal, that change is, by definition, less than the percentage that the 20-bit step itself represents, which is, once again, 1/10,000th of 1%.
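Here is the same idea for the change itself, again just a sketch assuming a 1-volt full-scale reference:

    full_scale = 1.0              # volts, assumed reference
    step_20 = 1e-6                # roughly one 20-bit step, in volts
    step_24 = step_20 / 16        # roughly one 24-bit step
    change = step_20 - step_24    # how much the smallest step shrinks
    print(f"change = {change:.3e} V, "
          f"or {change / full_scale * 100:.6f}% of full scale")
    # About 9.4e-07 V, i.e. just under 1/10,000th of 1%.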
It all works out this way because we tie everything to a maximum level of 0 dBFS; all of the change in the magnitude ratios happens at the vanishingly small end of the range.
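One way to see what "tied to 0 dBFS" means is to express the smallest step in dB relative to full scale: the top of the range stays at 0 dBFS and only the bottom moves. A quick sketch of that calculation:

    from math import log10
    for bits in (20, 24):
        print(f"{bits}-bit smallest step is {20 * log10(2 ** -bits):.1f} dBFS")
    # Roughly -120.4 dBFS for 20 bits and -144.5 dBFS for 24 bits.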
I hope this helps. If not, give me a call.
Thanks for writing.