Originally Posted by coyoteaz
There are places where PQ can vary between boxes. Some steps in the decode process like IDCT do not require that all implementations have identical outputs given an identical input. There are tolerances that have to be met, but depending on how precise the math is internally, there can be noticeable differences in the output. This is largely the result of the fact that codecs like MPEG2 were designed 15-20 years ago when processing power was still expensive. One would expect that newer hardware would give better results because of how cheap processing power is these days, but that's not always going to be the case. There could also be other factors such as postprocessing (deblock, dering, denoise, soften, sharpen) of the decoded video that may produce an image some consider better or worse.
None of that is relevant, and none of it changes the fact that the video signal entering the HDMI receiver chip in your HDTV would be exactly identical to the video just before encoding, were it not for the artifacts added during compression. Decode the signal at any point along the chain and you get exactly the same video your DVR decodes; that imperviousness to change is the basic reason digital delivery is used in the first place.
The PQ cannot vary as long as the file is still a file, and it remains a file until it is decoded, or until mathematical operations are performed on it. Neither happens between final compression at the uplink and the decode just before the HDMI transmit chip in your DVR, and when the decode finally is done, it is done identically by every decoder.
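If you want to see "the file is still a file" in concrete terms, here is a minimal Python sketch (the file paths are hypothetical) that hashes two copies of the same transport stream: any bit-for-bit copy produces the identical digest, no matter what hardware wrote it.

Code:
    import hashlib

    def sha256_of(path):
        """Return the SHA-256 digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(1 << 20), b''):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical paths: the stream as uplinked vs. as recorded by the DVR.
    # If transport delivered every bit (FEC did its job), the digests match,
    # which is what "the file is still a file" means in practice.
    print(sha256_of('uplink.ts') == sha256_of('dvr_recording.ts'))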
Inverse DCT is a step basic to decoding, so technically it happens "before" decoding completes, yet there is no variance there either, because the formula does not vary; the IDCT performed at decode is fixed as the exact reverse of the fixed DCT performed at encode, which is where it gets the moniker "Inverse DCT". Only a theoretical rogue decoder that ignored the MPEG decode algorithm, something that does not exist, would IDCT the signal differently than MPEG instructs it to.
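A minimal sketch of that inverse relationship, using SciPy's textbook DCT/IDCT pair on an 8x8 block (the block values are made up): the IDCT undoes the DCT to within floating-point rounding.

Code:
    import numpy as np
    from scipy.fft import dctn, idctn

    # A made-up 8x8 block of 8-bit luma samples.
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)

    # Forward 2-D DCT at "encode", inverse 2-D DCT at "decode".
    coeffs = dctn(block, norm='ortho')
    restored = idctn(coeffs, norm='ortho')

    # The IDCT is the exact mathematical inverse of the DCT:
    # the round trip reproduces the block to float rounding (~1e-12).
    print(np.max(np.abs(restored - block)))
    assert np.allclose(restored, block)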
"How precise the math is internally" is also fixed. In the case of consumer HD, it is fixed at exactly 8 bits. You can't get more precise than doing the math perfectly, which is what decoders do. The "lossy" part of limiting quantization to a particular bit level, or rounding, happens during digitization just before the encoder, so the errors that creates are baked into the signal for everyone equally. The level of math can become less precise with chained operations, but those have taken place well before the signal is delivered, maintaining the exact same level of preciseness, or impreciseness, for everyone.
"Tolerances" only apply to analog thinking. When you are limited to ones and zeroes, there are no tolerances. You can't have "almost a one" or "just about a zero"; instead you have either one or zero, and nothing in between and nothing more. "Tolerances" in the digital domain are a non-existent concept, because all that exists are two integers, and integers are precise entities.
There may be a loss of some bits during transport, but forward error correction (FEC) corrects for that. If the errors are too numerous to correct, the signal is muted rather than decoded, and that muting threshold kicks in before the errors could degrade the picture, so a loss of PQ is never possible short of losing the picture altogether.
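To illustrate the principle (real DVB links use far stronger codes, Reed-Solomon plus convolutional or LDPC coding, not this toy), here is a minimal Hamming(7,4) sketch: a single flipped bit in transport is located and flipped back, so the delivered data is bit-identical to what was sent.

Code:
    # Toy Hamming(7,4) code: 4 data bits protected by 3 parity bits.
    # Any single bit error in the 7-bit codeword can be corrected exactly.

    def encode(d1, d2, d3, d4):
        p1 = d1 ^ d2 ^ d4          # covers codeword positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
        return [p1, p2, d1, p3, d2, d3, d4]

    def decode(c):
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3    # 0 = clean; else error position
        if syndrome:
            c[syndrome - 1] ^= 1           # flip the damaged bit back
        return [c[2], c[4], c[5], c[6]]

    sent = encode(1, 0, 1, 1)
    damaged = list(sent)
    damaged[4] ^= 1                        # transport flips one bit
    assert decode(damaged) == [1, 0, 1, 1]  # receiver recovers exact data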
The math done at decode is also the same for every decoder designed to work with that signal; it is bounded and fully specified, and therefore simple enough to be done perfectly, with ease. There is no imprecision there, only during encode and pre-transport processing. Bottom line: every MPEG-2 decoder enjoys identical PQ when decoding the same signal. The exact same errors, the exact same artifacts from those errors.
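As a minimal sketch of "fully specified math comes out the same everywhere", here are two independent implementations of the same 1-D IDCT, one hand-rolled from the textbook basis matrix and one from SciPy (the coefficient values are made up): given identical input, they agree to within float rounding.

Code:
    import numpy as np
    from scipy.fft import idct

    N = 8
    # Orthonormal DCT-II basis matrix, built straight from the definition.
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)

    # Made-up coefficients standing in for one row of a decoded block.
    coeffs = np.array([880.0, -45.0, 12.5, 0.0, -3.0, 7.0, 0.0, 1.0])

    hand_rolled = C.T @ coeffs                # matrix form of the IDCT
    library = idct(coeffs, norm='ortho')      # SciPy's IDCT

    # Two unrelated implementations of the same fixed formula agree to
    # float rounding, far below the final 8-bit sample resolution.
    print(np.max(np.abs(hand_rolled - library)))   # ~1e-13
    assert np.allclose(hand_rolled, library)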
Regardless of the age of the MPEG coding algorithms, they have not changed in any significant way since they were created. Once a recipe for encoding and decoding is standardized, it cannot be allowed to change; if it were, legacy decoders would stop working. You have to have a standard DCT to allow a standard IDCT, and changing either without changing the other in an identically inverse manner would only degrade the PQ. Improvements come only from new algorithms and new ways of applying them, at non-consumer levels. What MPEG-2 and MPEG-4 AVC do at the consumer level has not changed since standardization, which is what "standardization" means.
Denoise, deblock, etc. are all "turd-polishing" attempts that happen only after decoding, outside the standardized decode path. They try to mask artifacts, but rarely deliver much real picture improvement. They are not really found in STBs, and they are not even needed. You might find rudimentary versions in some DVD players or some HDTVs, but even the best of them barely use them anymore.
Bottom line, the video coming over the DBS sat decodes identically for every STB out there, bringing true equality in PQ for everyone (at least until they muck it up in their HDTV settings). The images that you see in Dallas while watching "Fringe", for instance (assuming we have the same model TV calibrated identically), are indistinguishable from the images that I see in Phoenix, because they start with a common digital file that remains in the digital domain all the way to the TV, and the parallel processing chains to get it there are identical.