Originally Posted by davez82
since really no source is 10-bit does 10-bit matter for 8 bit sources?
Contrary to what nm88 states, it is very important.
If you did no video signal processing, no adjustment of contrast, brightness, tint, color saturation, etc., it wouldn't matter two hoots.
However, I am unaware of anyone who doesn't adjust something, and most display units do it by default. And the moment you do, you end up with contouring (gradient banding) in the image.
The reason is very simple. An 8-bit image contains only 256 distinct levels. As soon as the image is tinkered with, 8-bit math becomes a big problem ... every adjustment introduces significant rounding errors. A smooth video ramp which ought to be "134, 135, 136, 137, 138" might be calculated as "134, 135, 138". The discontinuity shows up as a distinct "step" change and is a very annoying visual artifact. Think of underwater, smoke or sky scenes where you notice multiple bands of different colors/brightness instead of a smooth transition.
If the video processing is done in a 10-bit environment, the problem effectively disappears.
So ... an 8-bit source should be processed in a 10-bit space even if it is then displayed as an 8-bit image.
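Here are the same two toy adjustments again, but carried out on 10-bit values (the 8-bit numbers multiplied by 4) and only converted back to 8-bit at the very end:

Code:
# Same toy adjustments, but done on 10-bit values (8-bit x 4 = 0..1020)
# and only converted back to 8-bit at the very end.
ramp_8  = [134, 135, 136, 137, 138]
ramp_10 = [v * 4 for v in ramp_8]                       # 536..552 in 10-bit

after_first  = [round(v * 0.8)  for v in ramp_10]       # rounding errors are now
after_second = [round(v * 1.25) for v in after_first]   # a fraction of an 8-bit step

final_8 = [round(v / 4) for v in after_second]          # one conversion back to 8-bit
print(final_8)        # [134, 135, 136, 137, 138] -- the ramp is smooth again

Because each intermediate rounding error is now four times smaller relative to an 8-bit step, the final 8-bit output comes out smooth.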
Here is an example of what 8-bit banding looks like. The original was a smooth shading with no 'steps' at all. Note one step is missing altogether in the center.