Higher bit depth (e.g. 12-bit from Dolby Vision vs 10-bit from HDR10/HDR10+) is simply better, period. It's better for current budget display capabilities (when combined with metadata), it's better for current flagship display capabilities, and it's better for future-proofing, since 4K -> 8K upscaling will work better, and it will obviously matter even more if/when we get extremely high dynamic range microLED TVs.
Originally Posted by SpeedDemon
Originally Posted by cam94zee
I purchased the movie Christine (4K) from Movies Anywhere. It propagated to Prime Video (standard UHD), VUDU (Dolby Vision), and directly in the Movies Anywhere app (HDR10). I am seeing HUGE differences between the formats. In the Movies Anywhere app on HDR10, the detail is acceptable, but there is a ton of noise. I'm assuming it's film grain, but it's very noticeable and sometimes distracts from the scene, with noticeable patches or splotches of higher noise. On VUDU, the format is Dolby Vision. This version looks amazing, with great shadows, highlights, and very little or no noise/grain. The Prime Video version looks washed out and hazy, but I would expect this, not being in an HDR format.
Anyway, my question is: should there be such a noticeable difference between Dolby Vision and HDR10? I will have to watch the movie entirely in both, and get an overall opinion of which I prefer more. At the moment, I'm leaning toward Dolby Vision.
What you're describing has everything to do with the fact that each streaming service often has its own encode of the movie.
The grain, detail, and noise you're describing have little or nothing to do with the HDR format.
With that said, there can be a significant difference between DV and HDR10, and the easiest way to observe this is to play back a DV video on a player that allows you to toggle off the DV metadata and watch with just HDR10... Oppo players let you do this.
DV should have more natural brightness/contrast/gamma since the metadata allows the image to be optimally tone-mapped to your display as the mastering engineer intended.
While I don't disagree with your assessment here, I would point out that 12-bit is a distinct improvement over 10-bit that will actually show up even on 10-bit (or lower, for that matter) displays, for reasons that are perhaps somewhat subtle. This could manifest not only as differences in posterization, but also in 'grain' and 'noise', depending on how the 10-bit content was mastered (specifically, to what extent dithering was encoded into the content).
While in theory there would be nothing gained from 12-bit content on a 10-bit display if the content were simply presented as-is, this is not the case in the real world, because each TV has a different subpixel structure, brightness/contrast/sharpness capability, calibration, and core display technology (e.g. FALD + LCD vs OLED, etc.). As a result, the input content must be mapped somehow to the TV's capabilities, and the unique limitations of the TV must be factored in as well (this is always the case, even with HDR10, at least given the current state of the art in consumer TVs). A rough illustration of why that remapping matters for bit depth is sketched below.
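To make that concrete, here's a quick toy sketch in Python/NumPy. The "tv_processing" curve is an invented shadow-stretching function standing in for whatever tone mapping and calibration a given TV applies internally; it is not any real TV's algorithm. The only point is that once the TV remaps the signal, a 12-bit source can still produce more distinct values on a 10-bit panel than a 10-bit source does:

Code:
import numpy as np

def quantize(x, bits):
    # Round to the nearest code of a given bit depth, normalized to [0, 1].
    levels = 2**bits - 1
    return np.round(np.clip(x, 0, 1) * levels) / levels

def tv_processing(x):
    # Invented stand-in for the TV's tone mapping + calibration:
    # it stretches shadow contrast (slope > 1 near black).
    return x ** 0.6

# A dark, slowly varying gradient (the hard case for banding).
scene = np.linspace(0.01, 0.03, 4096)

panel_from_10bit_src = quantize(tv_processing(quantize(scene, 10)), 10)
panel_from_12bit_src = quantize(tv_processing(quantize(scene, 12)), 10)

print("distinct panel codes from a 10-bit source:", np.unique(panel_from_10bit_src).size)
print("distinct panel codes from a 12-bit source:", np.unique(panel_from_12bit_src).size)

In this toy example the 12-bit source ends up with roughly three times as many distinct codes on the 10-bit panel across the same dark ramp, i.e. visibly finer gradient steps, simply because the TV's curve spreads those shadows across more of the panel's range.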
I hope it's well known that 10 bits is NOT enough precision for much of the HDR content out there (think posterization in sky gradients, extremely dark scenes, etc.), and that this must be mitigated with various tricks (e.g. dithering), either during mastering or by the TV as it retargets the content from 12-bit to 10-bit (or, in the case of 'smooth gradation' style filters, as the TV tries to figure out where quantization is a problem and to recover what was lost during mastering).
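Here's a hand-wavy NumPy demonstration of both halves of that claim: 10 bits posterizing a slow gradient, and dithering trading the visible steps for fine noise. The specific ramp and the TPDF dither are just illustrative choices on my part, not anything from an actual mastering pipeline:

Code:
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    levels = 2**bits - 1
    return np.round(np.clip(x, 0, 1) * levels) / levels

# A very slow "sky" gradient spanning only 1% of the signal range.
gradient = np.linspace(0.40, 0.41, 4096)

q10 = quantize(gradient, 10)
q12 = quantize(gradient, 12)

# Dithered 10-bit: add ~1 LSB of triangular (TPDF) noise before rounding.
lsb10 = 1.0 / (2**10 - 1)
dither = (rng.random(gradient.shape) - rng.random(gradient.shape)) * lsb10
q10_dithered = quantize(gradient + dither, 10)

print("distinct 10-bit codes:", np.unique(q10).size)   # a handful -> visible bands
print("distinct 12-bit codes:", np.unique(q12).size)   # ~4x more -> much smoother
err = np.abs(q10_dithered - gradient).mean()
print("dithered 10-bit mean error:", err)               # bands replaced by ~half-LSB noise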
Take dithering, for example: either approach (applied when mastering a 10-bit video, or applied dynamically by the TV's processor to map a 12-bit master to a 10-bit panel) works well, but the problem with applying dithering during mastering is that you have permanently baked one set of tradeoffs into the content, and the parameters you chose for the dithering will not (and cannot) be ideal for all TVs, because once nonlinear operations like tone mapping and gamma curves are applied on top, the dithering no longer 'works' the way it's supposed to. In contrast, when the dithering is applied by the TV's internal processor (e.g. when rendering 12-bit content to a 10-bit panel), the best dithering algorithm for that particular TV can be chosen and applied after tone mapping, gamma correction, color calibration, etc. And in the future, when native 12+ bit panels are available, perhaps no additional dithering will need to be applied at all.
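Here's a hypothetical sketch of that ordering argument. The "display_curve" below is again an invented shadow-stretching curve standing in for a particular TV's tone mapping / gamma / calibration; the comparison is dither baked in at mastering time vs dither applied by the TV after its own processing:

Code:
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, bits):
    levels = 2**bits - 1
    return np.round(np.clip(x, 0, 1) * levels) / levels

def tpdf(shape, bits, rng):
    # Triangular dither scaled to one LSB of the target bit depth.
    lsb = 1.0 / (2**bits - 1)
    return (rng.random(shape) - rng.random(shape)) * lsb

def display_curve(x):
    # Invented display-specific processing (stretches shadows, slope > 1).
    return x ** 0.6

scene = np.linspace(0.02, 0.06, 4096)   # dark, slowly varying region
panel_bits = 10
ideal = display_curve(scene)            # what the panel should ideally show

# Path 1: dither baked into a 10-bit master; the TV then applies its curve.
master_10 = quantize(scene + tpdf(scene.shape, 10, rng), 10)
out_baked = quantize(display_curve(master_10), panel_bits)

# Path 2: 12-bit master; the TV applies its curve first, then dithers for its panel.
master_12 = quantize(scene, 12)
out_tv = quantize(display_curve(master_12) + tpdf(scene.shape, panel_bits, rng), panel_bits)

# The curve stretches the baked-in dither into coarser noise than the panel needs.
print("error, dither baked in at mastering:", (out_baked - ideal).std())
print("error, dither applied by the TV    :", (out_tv - ideal).std())

In this toy setup the baked-in path comes out noticeably noisier, because the mastering-time dither was sized for the wrong point in the chain and then got stretched by the TV's curve.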
So in this sense, 10 bits of precision could very well cause EITHER more grain or more posterization artifacts than 12 bits -- even when the content is being rendered to 10-bit panels!
Also, this is NOT simply a question of Dolby Vision being better for the less capable TVs of today. The extra bit depth will enable older content to take advantage of future TVs with greater dynamic range, and beyond that it will also produce much better 4K -> 8K upscaling results! Again, consider dithering as an example: if dithering is encoded into a 4K 10-bit signal to achieve an effective precision of 12 bits on 10-bit panels, that content upscaled to 8K will most likely still carry the 4K-resolution dithering pattern (which will be much more visible as noise), versus the ideal of the TV performing dithering internally at the native 8K resolution *after* upscaling.
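A crude 1-D sketch of that upscaling point (the linear-interpolation "scaler" and the neighbor-correlation metric are mine, purely for illustration): dither encoded at the source resolution gets enlarged along with the picture into coarser noise, while dither applied natively after upscaling stays at the finest possible scale:

Code:
import numpy as np

rng = np.random.default_rng(2)

def quantize(x, bits):
    levels = 2**bits - 1
    return np.round(np.clip(x, 0, 1) * levels) / levels

def tpdf(n, bits, rng):
    lsb = 1.0 / (2**bits - 1)
    return (rng.random(n) - rng.random(n)) * lsb

def upscale2x(x):
    # Simple linear interpolation standing in for the TV's scaler.
    return np.interp(np.arange(2 * len(x)) / 2.0, np.arange(len(x)), x)

scene_4k = np.linspace(0.40, 0.41, 3840)   # slow gradient at "4K" width
ideal_8k = upscale2x(scene_4k)

# Path 1: dither baked into the 4K 10-bit encode, then upscaled to "8K".
encode_4k = quantize(scene_4k + tpdf(len(scene_4k), 10, rng), 10)
out_baked = quantize(upscale2x(encode_4k), 10)

# Path 2: upscale first, dither at native "8K" just before the panel.
out_native = quantize(ideal_8k + tpdf(len(ideal_8k), 10, rng), 10)

def neighbor_correlation(out, ref):
    noise = out - ref                       # isolate the noise from the gradient
    return np.corrcoef(noise[:-1], noise[1:])[0, 1]

# Higher correlation = coarser, lower-frequency noise = more visible grain.
print("noise correlation, 4K baked-in dither:", neighbor_correlation(out_baked, ideal_8k))
print("noise correlation, native 8K dither  :", neighbor_correlation(out_native, ideal_8k))

The exact numbers don't matter much; the point is that in the first path the noise between neighboring pixels ends up strongly correlated (i.e. the dither pattern has been blown up along with the picture), which is exactly the "more visible as noise" effect described above.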
In theory it is possible to retarget dithering across resolutions via “AI” (deep convolutional neural networks), but that is much more computationally expensive AND less accurate (e.g. likely to accidentally smooth over genuine scene detail). Ultimately, you can't circumvent the information theory: encoding dithering trades a bit of effective spatial resolution for better effective color and luminance precision. Encoding more bits per channel (when this is appropriate to the 'ground truth' of the scene) is therefore better for old, current, and future TVs alike.