Just to elaborate on a couple of digital basics
The sample rate (192 kHz) controls the highest frequency that the digital system can "see" and encode. Theory and practice say that if you have 2 samples per cycle, you can accurately represent the frequency. At 192 kHz, then, the system can encode sounds up to 96 kHz. The standard, semi-normal extent of human hearing is 20 kHz, so the 192 kHz system can encode a little over two octaves that humans will never hear.
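The arithmetic above can be sketched in a few lines of Python (the function names are mine, just for illustration):

```python
import math

def nyquist_khz(sample_rate_khz):
    """Highest frequency a sampler can encode: half the sample rate."""
    return sample_rate_khz / 2

# A 192 kHz system tops out at 96 kHz...
print(nyquist_khz(192))            # 96.0

# ...which is a little over two octaves above the 20 kHz hearing limit,
# since each octave doubles the frequency.
print(math.log2(nyquist_khz(192) / 20))
```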
As a corollary, if you feed a digital encoding system a signal with a higher frequency than it can handle (say 25 kHz into a 44.1 kHz analog-to-digital converter), ugliness happens, because the system never sees the "whole" 25 kHz wave. It will interpret what it can see and spit out some completely unrelated lower frequency that's fairly sure to be within the human hearing range, and very likely to be dissonant. So the frequencies higher than the digital system can handle have to be filtered out before the signal reaches the analog-to-digital converter. It is often said that in early digital, that "brickwall" filter was itself a major source of less-than-pristine sound. Generally much better now.
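That fold-back (aliasing) is easy to compute: a tone above half the sample rate reflects back into the audible band. A small sketch, with a helper name of my own choosing:

```python
def alias_khz(f_khz, fs_khz):
    """Frequency a sampler at fs_khz actually reports for a tone at f_khz.

    Any input folds into the 0 .. fs/2 range: wrap into one sampling
    period, then reflect around the Nyquist frequency.
    """
    f = f_khz % fs_khz
    return min(f, fs_khz - f)

# 25 kHz into a 44.1 kHz converter aliases to roughly 19.1 kHz --
# an audible, unrelated tone, which is why the input must be filtered first.
print(alias_khz(25, 44.1))
```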
The bit depth (16, 24 or 32 bits) controls the (digital) dynamic range - - the difference between the loudest and the quietest sound the system can encode. As indicated above, deeper bit depth can exceed 100 dB of dynamic range. But most of the analog stages that follow the digital-to-analog conversion have, at the very best, around 100 dB of dynamic range above their noise floors. So arguably the extra dynamic range is "wasted," because our systems can't reproduce the quiet end of the scale (assuming they can reproduce the loud end without distortion, compression, etc., especially from the speakers).
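For the curious, the usual rule of thumb for an ideal converter is about 6.02 dB of dynamic range per bit (plus a small constant for a full-scale sine wave). A quick sketch, not part of the original post:

```python
def dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal N-bit quantizer
    for a full-scale sine: ~6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    # 16-bit lands near 98 dB; 24-bit already far exceeds
    # what the analog chain downstream can deliver.
    print(f"{bits}-bit: about {dynamic_range_db(bits):.0f} dB")
```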