Originally Posted by krotchy
The advantage of higher-bit processing is in the mapping.
I just posted in a Sharp thread about this, but this thread seems to discuss the issue directly, which is:
I haven't seen anyone, including westa and krotchy, show a scenario where you actually benefit from this.
The mapping technique described above makes no sense if you're just displaying a 1080p image - the data has already been quantized (mapped) by the encoder, so there is nothing left for extra bits to recover.
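A quick toy sketch of that point (my own illustration, not anyone's actual pipeline): pad the 8-bit code values out to 14 bits, truncate them back, and you get exactly what you started with.

    import numpy as np

    pixels_8bit = np.arange(256, dtype=np.uint16)     # every possible 8-bit code value
    promoted_14bit = pixels_8bit << 6                 # "promote" to 14 bits by padding with zero bits
    back_to_8bit = promoted_14bit >> 6                # re-quantize for an 8-bit panel
    assert np.array_equal(pixels_8bit, back_to_8bit)  # identical - no new information was created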
Now, where I could see it making a mathematical difference is if the display takes the HD data, performs special image processing on it at 10-bit or 14-bit precision, and then re-quantizes the result back down for an 8-bit or 10-bit panel.
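To make that concrete, here's a rough sketch of the idea (the two adjustments are arbitrary stand-ins I made up, not any real set's processing): rounding back to 8 bits between steps introduces errors that carrying the intermediate math at higher precision avoids.

    import numpy as np

    src = np.arange(256, dtype=np.float64)   # 8-bit input code values

    gain = lambda x: x * 0.48                # made-up adjustment #1
    lift = lambda x: x * 2.0 + 1.0           # made-up adjustment #2

    # Path A: round back to 8-bit integers after every step (low-precision internals)
    a = np.round(lift(np.round(gain(src))))
    # Path B: keep full precision internally, quantize only once at the end
    b = np.round(lift(gain(src)))

    print(np.abs(a - b).max())   # nonzero: the intermediate rounding itself added error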
However, if that's the case, the question becomes: exactly what image processing is being done that benefits from 14-bit precision?
If it's some kind of denoising technique, well, there are pros and cons to that (you may not even like the result), and again the benefit may not be clear-cut.