Originally Posted by vadergr
From your signature I understand that you also own a Zidoo Z9s and a Dune HD Pro 4K.
Can you make an HDR image quality comparison? How crucial/beneficial is the MaxCLL/MaxFALL information to the image quality (Zidoo x9s vs Z9s)? Is there an obvious difference?
Thanks in advance.
Theoretically, MaxFALL and MaxCLL matter to how the display tone maps the encoded video to fit within its own capabilities. Say the display maxes out at 400 nits, MaxCLL is 900 nits and MaxFALL is 200 nits: the display should tailor its gamma curve so that 900 nits lands on the display's 400-nit peak, with everything between diffuse (reference) white at 100 nits and that peak compressed into the rolloff, taking the 200 nits of MaxFALL into account.
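To make that concrete, here's a rough sketch of such a static curve. The knee position and rolloff exponent are my own illustrative choices, not anything from a spec, and I've left MaxFALL out for simplicity (it could bias where the knee or exponent sits):

```python
# Illustrative static tone curve: linear up to diffuse white, then a
# power rolloff that lands MaxCLL exactly on the display's peak.
def tone_map(nits, display_peak=400.0, max_cll=900.0, knee=100.0):
    """Map scene luminance (nits) to display luminance (nits)."""
    if nits <= knee:
        return nits  # the diffuse range passes through untouched
    # Compress knee..MaxCLL into knee..display_peak with a concave
    # curve, so highlights roll off gently instead of hard-clipping.
    x = (nits - knee) / (max_cll - knee)
    p = 0.6  # rolloff strength (arbitrary); smaller = gentler shoulder
    return knee + (display_peak - knee) * min(x, 1.0) ** p

print(tone_map(100))  # 100.0 - reference white untouched
print(tone_map(900))  # 400.0 - MaxCLL lands on the display's peak
```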
All this assumes the display is designed to do that, the colourist has got their numbers right (they do these days, but they often didn't in the early days!) and those numbers have made it through the mastering process intact (sometimes they don't!).
In reality, since the display manufacturer knows the capabilities of the display - let's say the 400 nits as above - and because diffuse white is defined as 100 nits by the spec (the same as SDR), I'd argue that a generic PQ (Perceptual Quantiser) curve, compressing the highlights (i.e. a gamma rolloff from 100 nits up to the display's peak), is perfectly fine without any metadata at all.
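For the curious, PQ is an absolute encoding defined by SMPTE ST 2084, so a given code value always means the same luminance regardless of metadata - which is exactly what makes a generic display-side rolloff workable. A quick sketch of the EOTF and its inverse:

```python
# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(code):
    """Non-linear PQ code value (0..1) -> absolute luminance in nits."""
    e = code ** (1 / M2)
    return 10000.0 * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)

def pq_inverse_eotf(nits):
    """Absolute luminance in nits -> PQ code value (0..1)."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

print(round(pq_inverse_eotf(100), 3))  # ~0.508 - diffuse white
print(round(pq_eotf(1.0)))             # 10000 - PQ's absolute ceiling
```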
I don't know enough about how TV manufacturers deal with the HDR metadata though, with the notable exception of LG, who use dynamic tone mapping in some displays. I believe some manufacturers take the Mastering Display Luminance into account too, which is a bit like making creative decisions based on the colour of the producer's toilet roll!
Things are far more complicated for projectors - a lot of which may struggle to reach reference white at 100 nits, never mind anything above that - so the compression becomes more severe and perhaps diffuse white has to be lowered too. But again, a curve can be designed to do this.
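Reusing the tone_map sketch from above, the projector case is the same idea with harsher numbers - scale everything down so diffuse white sits lower, then roll off what's left into the modest peak (values are illustrative only, not from any real device):

```python
def projector_map(nits, peak=48.0, diffuse=40.0, max_cll=900.0):
    # Pull diffuse white down from 100 nits to `diffuse`, then compress
    # the (scaled) highlights into the projector's limited peak.
    scale = diffuse / 100.0
    return tone_map(nits * scale, display_peak=peak,
                    max_cll=max_cll * scale, knee=diffuse)

print(projector_map(100))  # 40.0 - diffuse white, lowered
print(projector_map(900))  # 48.0 - MaxCLL on the projector's peak
```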
And don't forget, we're really only talking about specular highlights here (pretty clouds being the oft-touted example!); the APL (average picture level) is pretty much the same between SDR and HDR.
So the upshot of all this rambling is: no, subjectively there's no huge difference, and once you take calibration, viewing environment etc. into account, those factors are going to have a much bigger impact.
In my opinion, dynamic tone mapping is the way forward - the display, or a processor, making frame-by-frame decisions based on the actual encoded image rather than a bunch of numbers painted with a broad brush. That is, until displays catch up and can show MaxCLL natively, at which point the whole conversation is moot.
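In rough code terms, that difference might look something like this - a toy sketch reusing the tone_map function from earlier; real dynamic tone mappers analyse full histograms and apply temporal smoothing to avoid flicker between scenes:

```python
import numpy as np

def dynamic_tone_map(frame_nits, display_peak=400.0, knee=100.0):
    """Toy per-frame mapper: derive the curve from the frame itself
    instead of from the static MaxCLL/MaxFALL metadata."""
    # Use a high percentile rather than the absolute maximum, so one
    # stray bright pixel can't drag the whole frame down.
    frame_peak = float(np.percentile(frame_nits, 99.9))
    if frame_peak <= display_peak:
        # Frame already fits the display; just clip any stragglers.
        return np.minimum(frame_nits, display_peak)
    # Rebuild the rolloff per frame, treating the measured peak as
    # this frame's effective MaxCLL.
    curve = np.vectorize(lambda n: tone_map(
        n, display_peak=display_peak, max_cll=frame_peak, knee=knee))
    return curve(frame_nits)
```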
(Can you tell I haven't posted for a few days?)