Originally Posted by hogger129
I understand that it's necessary to master
in 24/88 or 24/96, but that 16/48 or 16/44 is good enough for the consumer level.
Except that's not how it is.
At the pro level every box has a 24/96 mode, and it's hard for people doing production to avoid using it just because it is there. Some people are thinking about the relatively tiny incremental sales of actual high-rez media, given that SACD and DVD-A have become niches. But let's be clear about this - today the vast majority of media that will ever be sold is going out at 44/16 or less, and often much less. I'm talking about digital online sales, which are overwhelmingly perceptually coded.
44/16 is actually an overkill format: listeners have pretty uniformly never been able to hear a difference associated with anything much beyond 32/13 (a 32 kHz sample rate at 13-bit resolution). 48/16 justifies itself by fitting into video formats.
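The arithmetic behind that claim is easy to check. A minimal sketch, using the standard ideal-quantizer formula (6.02 x bits + 1.76 dB) and the Nyquist limit (half the sample rate); the specific formats compared are just the ones mentioned above:

```python
# Rough numbers behind the "overkill" claim: theoretical dynamic range of
# an ideal linear PCM quantizer, and audio bandwidth (Nyquist = rate / 2).

def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of ideal linear PCM quantization, in dB."""
    return 6.02 * bits + 1.76

def bandwidth_khz(sample_rate_hz: int) -> float:
    """Nyquist bandwidth in kHz."""
    return sample_rate_hz / 2 / 1000

for rate, bits in [(32_000, 13), (44_100, 16), (48_000, 16), (96_000, 24)]:
    print(f"{rate / 1000:g} kHz / {bits}-bit: "
          f"{bandwidth_khz(rate):g} kHz bandwidth, "
          f"{dynamic_range_db(bits):.1f} dB dynamic range")
```

Running this shows 32/13 already covers roughly 16 kHz of bandwidth and 80 dB of dynamic range, while 44/16 extends that to about 22 kHz and 98 dB - comfortably past typical hearing limits in a real listening room.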
Does anybody know what benefit there is to 48 kHz over 44.1 kHz? I notice nearly every movie soundtrack is encoded at 48 kHz. Is there any benefit to it for music?
48 kHz sampling has historically fit better into the frame rates and data formats used in video production. At one time it simplified synchronizing video and its associated sound at several levels - data words, frames, etc. Today lossy compression of both audio and video is the de facto standard, and the hardware and software are sophisticated enough not to benefit much from being rigidly synchronous, but customs and comfort zones persist.
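The "fits better" point can be made concrete by checking which sample rates divide evenly into common video frame rates - an integer number of samples per frame makes frame-level sync trivial. A quick sketch (the frame rates chosen are just the usual film/video ones):

```python
from fractions import Fraction

# Samples of audio per video frame, computed exactly with rationals.
# 48 kHz gives an integer count at 24, 25, and 30 fps; 44.1 kHz does not
# at 24 fps. Neither rate is integral at NTSC 29.97 fps (30000/1001).
FRAME_RATES = [Fraction(24), Fraction(25), Fraction(30), Fraction(30_000, 1_001)]

for rate in (44_100, 48_000):
    for fps in FRAME_RATES:
        per_frame = Fraction(rate) / fps
        tag = "exact" if per_frame.denominator == 1 else "not integral"
        print(f"{rate} Hz @ {float(fps):.3f} fps -> "
              f"{float(per_frame):.1f} samples/frame ({tag})")
```

48 kHz comes out exact at 24, 25, and 30 fps (2000, 1920, and 1600 samples per frame), while 44.1 kHz is fractional at 24 fps - one reason the video world standardized on 48 kHz.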
I mean, what does sample rate mean when everything is lossy-compressed and perceptually coded?
Originally Posted by arnyk
In my reading of this, the word Mastered
in "Mastered for iTunes" is most likely the truly relevant word.
DACs, not so much.
The purpose of mastering is to make recordings sound different. Since all good DACs sound the same, their purpose is to avoid sounding different (from sonic perfection).
From what I'm finding, Mastered for iTunes is supposed to be a 256 kbps AAC VBR file created from a high-resolution master. I've also seen forums saying it's just a way for Apple to sell more Macs to mastering engineers.
I think that Apple is genuinely interested in sound and video quality, both the perception and the actuality.
I don't really know what difference it makes whether it's Mastered for iTunes or not. Wouldn't a good CD-quality file encoded to iTunes Plus sound just as good as a high-res 24-bit file encoded to iTunes Plus? In the end, it's all 256 kbps AAC VBR.
Exactly. The rule of the weakest link persists, and in those production chains the weakest link from a sonic perspective is potentially AAC, if there is a weak link at all. The real weakest link is usually the listener's ears and the sonic environment in which he is listening.