Originally Posted by VeniVideoVici
The sample rate of an Analog to Digital Converter (ADC) can be likened to sandpaper. The bigger the grains of sand, the coarser the feel, the lower the number. 60 grit is way rough, 800 grit feels almost perfectly smooth. The lower the number of samples/cycle, the bigger the square steps that contour the analog waveform, and the less closely the infinite variations of the wave are followed. Sandpaper, like music, does not feel smoother if you use 1000 grit. Smooth is smooth.
I think you're on the right track, but the analogy is not quite right. It's the bit depth
(usually 16, 20, or 24 bits) of the ADC that determines the "coarseness" of the digital representation of the input analog waveform. At 16 bits, the amplitude of the waveform at each sampling point is represented by a whole number between -32,768 and +32,767 (and a division that fine already looks like "really fine sandpaper" to me). At 20 bits, the amplitude at each sampling point is a number between -524,288 and +524,287; at 24 bits, between -8,388,608 and +8,388,607.
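To make the "grit" concrete, here is a minimal sketch of what quantizing to a given bit depth does to a single sample. The `quantize` helper is hypothetical (not from any audio library); it just snaps a value in the -1.0 to +1.0 range onto the nearest level of a signed integer grid, so you can see how quickly the rounding error shrinks as bits are added.

```python
def quantize(x, bits):
    """Snap a sample in [-1.0, 1.0] to the nearest level of a signed
    `bits`-bit grid (hypothetical illustration, not a library call)."""
    max_level = 2 ** (bits - 1) - 1          # e.g. 32767 for 16 bits
    return round(x * max_level) / max_level

# Worst-case rounding error is half a step: ~1/65534 of full scale at 16 bits
x = 0.123456789
for bits in (8, 16, 24):
    step = 1 / (2 ** (bits - 1) - 1)
    err = abs(quantize(x, bits) - x)
    print(f"{bits:2d}-bit error: {err:.2e} (step size {step:.2e})")
```

Each extra bit doubles the number of levels, so it halves the worst-case rounding error.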
The sampling rate (the rates commonly used for digital audio are 44.1 kHz, 48 kHz, 88.2 kHz, 96 kHz, 176.4 kHz, or 192 kHz) determines which frequencies are captured by the digitization and can be accurately represented in the "reconstructed" analog waveform. According to the Nyquist theorem, 44.1 kHz is more than enough to reconstruct a waveform with frequency components up to 20 kHz. 192 kHz would allow us to reconstruct a waveform with frequencies up to just under 96 kHz (maybe needed to reproduce musical compositions of some non-human sentient species that I often suspect are lurking on these boards).
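The flip side of the Nyquist limit is aliasing: a tone above half the sampling rate produces exactly the same samples as a lower tone folded back below the limit. This little sketch (frequencies chosen just for illustration) checks that a 30 kHz sine sampled at 44.1 kHz is sample-for-sample identical to a phase-inverted 14.1 kHz sine, which is why ADCs must filter out everything above fs/2 before sampling.

```python
import math

fs = 44_100            # sampling rate (Hz); Nyquist limit is fs/2 = 22_050 Hz
f_in = 30_000          # input tone above the Nyquist limit
f_alias = fs - f_in    # 14_100 Hz: the frequency it folds down to

for n in range(8):
    s_real  = math.sin(2 * math.pi * f_in    * n / fs)
    s_alias = -math.sin(2 * math.pi * f_alias * n / fs)  # folded, phase-inverted
    assert math.isclose(s_real, s_alias, abs_tol=1e-9)

print("30 kHz sampled at 44.1 kHz is indistinguishable from 14.1 kHz")
```

Once the samples are taken, no amount of clever reconstruction can tell the two tones apart.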
To illustrate, consider what would happen if we "downsized" either the bit depth or sampling rate to a much smaller (worse) value than is actually used in digital audio.
Imagine digitizing an audio waveform with 24 bit depth and a sampling rate of 11 kHz. Then we could only reconstruct frequency components up to about 5 kHz (the Nyquist limit is 5.5 kHz, less some headroom for the anti-alias filter), so we would lose the top two octaves of human hearing - "no highs". But the audio information at frequencies below 5 kHz would be essentially undistorted - so the result would be similar to using a high quality digital recording and playback scheme, and then tacking on a 5 kHz lowpass filter at the output.
Now imagine digitizing the waveform with 8 bit depth (levels between -128 and +127) and a sampling rate of 192 kHz. We would have information about frequency components extending well above the audible range - but because of the "choppy" quantization caused by the insufficient bit depth, we would get weird sounding distortion and noise in the output waveform, and the problems would be present at all audible (and higher) frequencies.
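You can put a rough number on that "choppy" quantization with the standard rule of thumb for the signal-to-noise ratio of a full-scale sine: roughly 6.02 dB per bit plus 1.76 dB. A quick sketch:

```python
def quantization_snr_db(bits):
    """Theoretical SNR of a full-scale sine quantized to `bits` bits,
    per the standard 6.02*bits + 1.76 dB rule of thumb."""
    return 6.02 * bits + 1.76

for bits in (8, 16, 24):
    print(f"{bits:2d} bits: ~{quantization_snr_db(bits):.0f} dB SNR")
# 8 bits: ~50 dB, 16 bits: ~98 dB, 24 bits: ~146 dB
```

At ~50 dB, the 8-bit noise floor sits well within earshot on quiet passages no matter how high the sampling rate, whereas 16 bits already pushes it near the limits of human hearing.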