Originally Posted by zombie10k
Darin, it's been a little while since HDR was discussed. Can you give some thoughts on how native contrast will affect the overall HDR experience?
I'm going to go with a fairly extreme example for this purpose and then we could talk about the cases that start out better.
Here is one of my concerns with HDR and CR. Let's start with a 1000:1 on/off CR display I heard about that supports HDR and, as far as I know, doesn't have a dynamic iris type system.
With SDR I'll use 100 nits for white since that is basically standard even though most of us don't hit it and there is the whole projector vs flat panel thing.
If we use a standard gamma we can figure out approximately where things should lie along the curve in nits. For example, if we just look at the bottom 1/5th of the encoding levels, the 20% video level should come in at about 3 nits, since 0.20^2.2 ≈ 0.03 and 3% of 100 nits is 3 nits.
From our 1000:1 on/off CR we know the black floor is 0.1 nits, from 1/1000th of 100 nits. So the contrast ratio from a 20% video level object to black is limited to about 30:1, from 3 nits divided by 0.1 nits.
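To make the numbers easy to follow, here is that SDR arithmetic as a little Python sketch. The plain 2.2 power gamma and the 100 nit / 1000:1 figures are just the assumptions of my example, not any particular display:

```python
# Toy numbers from the example above, not a real display.
PEAK_NITS = 100.0
ON_OFF_CR = 1000.0
GAMMA = 2.2

black_floor = PEAK_NITS / ON_OFF_CR        # 0.1 nits

def sdr_nits(video_level):
    """Map a 0.0-1.0 SDR video level to nits with a power gamma."""
    return PEAK_NITS * video_level ** GAMMA

obj = sdr_nits(0.20)                       # ~2.9 nits, call it 3
print(round(obj, 2))                       # 2.9
print(round(obj / black_floor))            # ~29, i.e. about 30:1
```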
Now let's consider HDR with this same setup. With HDR, many parts of many images are likely to be encoded at levels that result in approximately the same nit levels as their SDR counterpart versions when both are run at their correct white levels. The main differences in this regard are likely to be some really bright scenes and some scenes with small highlights.
For this case I'll take the extreme of using this display at 1000 nits for HDR, where it still has the same 1000:1, so now the black floor is 1 nit (from 1000/1000).
For the scenes that are encoded with everything bright this has some similarity to playing SDR at 1000 nits. In both cases the black floor would be 1 nit and all the objects in the scenes would be bright anyway.
For the scenes that are encoded mostly like SDR except for some highlights, doing those highlights bright means moving the white level up and the black level up. In this case we've moved the peak white from 100 nits to 1000 and the black floor from 0.1 nits to 1 nit.
Now consider an object that is at the 20% video level in SDR and is supposed to be at about 3 nits out of 100. In HDR it gets encoded at a lower level on the curve so that it still ends up at 3 nits, but now out of 1000. That means it only has a ratio of 3:1 to the black floor instead of 30:1, which is the kind of difference human vision can see if there is not a lot of other light in the scene to bias our eyes.
If we consider the scenes that are basically not HDR (as in encoded like their SDR counterparts for nit levels), then the fact that the display had to be able to do 1000 nit whites for some scenes means that the black floor for these SDR-like scenes just went up 10x, and the ratio between the 100 nit white and the black floor went down by 10x.
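And the same kind of sketch for the 1000 nit HDR case. I'm skipping the actual PQ curve here since only the absolute nit targets matter for this point:

```python
# Same toy display pushed to 1000 nits peak for HDR, still 1000:1 on/off.
ON_OFF_CR = 1000.0
HDR_PEAK = 1000.0

hdr_black = HDR_PEAK / ON_OFF_CR           # black floor is now 1 nit

# The object both grades target at ~3 nits:
print(3.0 / hdr_black)                     # 3.0 -> 3:1 vs ~30:1 in SDR

# An SDR-like scene that still tops out at 100 nits on this setting:
print(100.0 / hdr_black)                   # 100.0 -> 100:1 vs 1000:1
```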
A dynamic iris could help, but for one thing we are already using a dynamic iris to improve many scenes and we don't want its actions to become more noticeable. I'm also concerned about what happens when an HDR highlight toggles on and off. A small highlight isn't likely to bias the eye much, and you don't want the rest of the image jumping around so much that we see pumping.
The Sony case is of course far from this extreme example, both because it has more on/off CR and because we aren't going to go for a 10x multiplier from SDR peak to HDR peak, so it shouldn't have as many problems, but I expect the basic issue to show up at times. If the encoding was done with a dynamic iris in mind that might help the situation, which is something they could do for digital cinema if they haven't already, since they know what hardware the content will play on.
If we decide to run SDR and HDR at exactly the same peak (say 100 nits), then the issue I have is that we now need to roll off the detail above 100 nits in the HDR version. We only have 1000:1 to work with in my extreme example, and if we rob some of that range to keep from clipping at 100 nits, we leave less CR for everything from somewhat below white down to black. This is an option, but then you aren't really getting the highlights that HDR is supposed to provide. We could go with a 2x multiplier for highlights and reserve the 50 to 100 nit range for everything encoded at 100 nits and above in the HDR version, but now that 20% video level object has about 15:1 to black instead of 30:1 (if it is graded at the same nits in SDR and HDR), since we've moved everything on the lower end down 50% to make room for the highlights.
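Here is that trade-off in the same sketch form (toy numbers again, and the 2x multiplier is just the example split I picked):

```python
# Matched 100 nit peak with a 2x highlight multiplier: the 50-100 nit
# range is reserved for everything the HDR grade puts at 100 nits and
# above, and everything below that is scaled down by half to make room.
PEAK_NITS = 100.0
ON_OFF_CR = 1000.0

black_floor = PEAK_NITS / ON_OFF_CR        # 0.1 nits
sdr_obj = 3.0                              # the 20% video level object
hdr_obj = sdr_obj * 0.5                    # squeezed down to 1.5 nits

print(hdr_obj / black_floor)               # 15.0 -> 15:1 instead of 30:1
```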
Not sure if that is clear at all.