Originally Posted by cansp6
I might be hitting my head against a wall here, but here we go. Quantization errors are due to encoding of the waveform in some sort of PCM scheme. They have absolutely nothing to do with the sampling frequency. The 44.1 kHz or 48 kHz is simply a measure of how many samples (of the waveform) you are taking each second. There is mathematical proof, based on our current knowledge of the universe, that in order to be lossless the sample rate needs to be at least double the highest frequency in the wave. Since human ears are incapable of hearing anything above 20 kHz or so, there is no audible reason to sample more than 40,000 times per second. Let's be really sure and use 44,100 data points.
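To make that concrete, here is a small sketch (mine, not from the post) of why the sample rate has to exceed twice the highest frequency: a 25 kHz cosine sampled at 44.1 kHz produces exactly the same samples as a 19.1 kHz cosine, so anything above half the sample rate folds back into the band below it and the two signals become indistinguishable.

```python
import math

FS = 44_100          # sample rate in Hz
F_IN = 25_000        # tone above the Nyquist limit of FS / 2 = 22,050 Hz
F_ALIAS = FS - F_IN  # 19,100 Hz: where the 25 kHz tone "folds" to

def sample_tone(freq_hz, n_samples, fs=FS):
    """Return n_samples of a unit cosine at freq_hz, sampled at fs."""
    return [math.cos(2 * math.pi * freq_hz * n / fs) for n in range(n_samples)]

high = sample_tone(F_IN, 100)
low = sample_tone(F_ALIAS, 100)

# The two tones produce identical samples, so no converter could tell
# them apart after sampling:
assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(high, low))
```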
So now we have 44,100 data points for each second of music. We then need to package these data points in some way so that we can exchange this information between the player and the receiver or whatever. CDs use 44.1/16, meaning each data point is encoded in 16 bits, enough to record values anywhere from zero to 65,535 if we use the absolute basic, first-grade level of PCM.
Now I suppose you could argue that 65,536 levels are not enough to properly represent the height of the wave at each of the data points. That somehow some information is missed by using this constrained range. I suppose you could argue about quantization errors. In theory, it's possible. On the other hand, in theory anything is possible.
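For what it's worth, the size of a quantization error is easy to bound. A minimal sketch (my own, assuming plain 16-bit linear PCM as on a CD): rounding a sample to the nearest of 65,536 levels loses at most half a step.

```python
def quantize_16bit(x):
    """Map a sample in [-1.0, 1.0] to the nearest 16-bit integer level."""
    level = round(x * 32767)              # 2**15 - 1 positive levels
    return max(-32768, min(32767, level))

def dequantize_16bit(level):
    """Map a 16-bit level back to a float in roughly [-1.0, 1.0]."""
    return level / 32767

x = 0.123456789
err = abs(x - dequantize_16bit(quantize_16bit(x)))

# The round-trip error is bounded by half a quantization step:
assert err <= 0.5 / 32767
```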
But if we're going to talk about theory, then we really need to remember that no modern-day converter has to use such a simple and inefficient PCM scheme. A modern pipeline can use DPCM (differential PCM), which encodes the full height only for the very first data point and thereafter encodes the difference between each data point and the next. I don't think I need to explain how this method allows a significantly higher range to be encoded.
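A tiny sketch (mine; the post names no actual codec) of that differential idea: store the first sample in full, then only the successive differences, and accumulate them back on decode.

```python
def dpcm_encode(samples):
    """Return (first_sample, list_of_differences)."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return samples[0], diffs

def dpcm_decode(first, diffs):
    """Rebuild the original samples by accumulating the differences."""
    out = [first]
    for d in diffs:
        out.append(out[-1] + d)
    return out

samples = [100, 102, 101, 105, 104]
first, diffs = dpcm_encode(samples)

assert diffs == [2, -1, 4, -1]               # small numbers, cheap to store
assert dpcm_decode(first, diffs) == samples  # lossless round trip
```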
Now, getting back to the 44.1 to 48 kHz conversion, a very basic approach to creating the missing 3,900 data points per second is simply to make them up. You slot in each missing point between two existing points, then calculate the average of those two points and assign it to the newly added one. Using the same proof as before, we can guarantee that not only is no new information added, but also that no old information is lost. Thus, you get a lossless conversion from 44.1 to 48 kHz.
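The averaging scheme described above can be sketched in a few lines (my code; one inserted point per block of twelve samples purely to keep the arithmetic round, though the real ratio is closer to one per eleven):

```python
def naive_upsample(samples, period=12):
    """Insert an averaged sample into the middle of each `period`-long block."""
    out = []
    for i, s in enumerate(samples):
        out.append(s)
        # After the middle of each block, squeeze in an averaged point.
        if i % period == period // 2 and i + 1 < len(samples):
            out.append((samples[i] + samples[i + 1]) // 2)
    return out

print(naive_upsample(list(range(12))))
# -> [0, 1, 2, 3, 4, 5, 6, 6, 7, 8, 9, 10, 11]
```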
Do I win the internets now?
Let's be cool and keep it informational rather than confrontational. This stuff is pretty interesting, though I am sure we are boring most of the readers.
I looked a bunch of this stuff up as a result of our exchange, because I suspected that you were oversimplifying things and because I wanted to understand how resampling works and what its limitations are. I remain convinced that I don't want my hi-fi system to upsample from 44.1 to 48 kHz, because it definitely changes the audio, for the worse, even with a high-quality resampler. That change may or may not be audible, but I'd rather not take the chance. And it seems likely that the $15 USB sound card doesn't use a high-quality resampler, so I stand by my suggestion not to use it.
Upsampling from 44.1 to 48 kHz adds roughly one new sample for every eleven in the original file (3,900 extra per 44,100); call it one per twelve to keep the numbers round. So imagine the following 12 samples from the original file:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
Your proposal is to squeeze in an extra sample with a value that is the average of the two adjacent samples. So here is the resampled version, with the new sample added in the middle (another 6):
0, 1, 2, 3, 4, 5, 6, 6, 7, 8, 9, 10, 11
These two series of samples will play with precisely the same duration. Notice that the audio produced will not be exactly the same. The output of the resampled version becomes 1 at just 1/13th of the way into playback, where the original does not go to 1 until 1/12th of the way through. The resampled version also plays two 6s in a row, while the original progresses linearly from 0 to 11. The resampled version is inferior to the original.
This is the simplest example I could come up with. And though the resampling algorithm you proposed is not a good one, I think it does as well as any could do on this set of samples. No upsampling algorithm is perfect when the new sample rate is not an integer multiple of the original sample rate.
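To put a number on "not an integer multiple": 48,000/44,100 reduces to 160/147, so only one output sample in every 160 lands exactly on an input sample; every other output time falls between two input samples and has to be interpolated. A quick check (my arithmetic, not from the thread):

```python
from math import gcd

fs_in, fs_out = 44_100, 48_000
g = gcd(fs_in, fs_out)

# 48,000 / 44,100 reduces to 160 / 147: not an integer ratio.
assert (fs_out // g, fs_in // g) == (160, 147)

# An output sample at time m / fs_out coincides with an input sample at
# time n / fs_in only when m * 147 == n * 160, i.e. when m is a multiple
# of 160.  Count the coincidences in one second of output:
aligned = sum(1 for m in range(fs_out) if (m * 147) % 160 == 0)
assert aligned == fs_out // 160   # only 300 of 48,000 output samples line up
```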
If 13 samples had been taken from the original analog audio waveform (instead of the 12 we started with in the example), the extra sample could have ended up anywhere in the sequence. Let's say it should have occurred at position 2. That would leave every sample between position 2 and the spot where we actually added the new one off by one place. Perhaps calling this an increase in quantization error isn't quite right (or maybe it is?), but the effect is quite similar, since it is caused by the limited resolution of the samples.
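One way to see the size of that effect (my arithmetic, reusing the 12-sample ramp from the example): compare the naively resampled stream with the values the ramp actually has at the 13 evenly spaced output times. The error grows toward the inserted sample and peaks at nearly half a step.

```python
# The original 12 samples were 0..11, i.e. the ramp value(t) = 12 * t / T,
# with sample n taken at time n * T / 12.
naive = [0, 1, 2, 3, 4, 5, 6, 6, 7, 8, 9, 10, 11]  # from the example above

# Value the ramp really has at each of the 13 output times m * T / 13:
true = [12 * m / 13 for m in range(13)]

errors = [n - t for n, t in zip(naive, true)]

# No error at the start; the worst error, near the inserted sample,
# approaches half a step:
assert abs(errors[0]) < 1e-9
assert 0.4 < max(abs(e) for e in errors) < 0.5
```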
Here are some links about Nyquist's theorem and resampling that you may find interesting:
http://www.wescottdesign.com/article.../sampling.html
http://ccrma.stanford.edu/~jos/resample/
http://www.mega-nerd.com/SRC/