Originally Posted by Wookii
I'm struggling to understand how this would work in practical terms Flo (so please bear with me). Are you saying you raise the average luminance of a dark scene so that it occupies a greater proportion of the display's available brightness range, i.e. output nits > input nits?
As I say, it's probably best demonstrated with some worked examples:
So say you have a projector set-up with a measured peak of 100 nits. Say you want the average brightness (excluding highlights) in movies to look similar to their SDR counterparts, so roughly 2 nits input to 1 nit output (on screen).
You have four frames (or scenes) with a MaxCLL of 50 nits, 150 nits, 1000 nits and 2500 nits respectively.
In my (basic) understanding of dynamic tone mapping, the first two frames/scenes would effectively be shown nit for nit (or 2 nits for 1 nit on our projector) with no tone mapping, since they both fall within the display's physical luminance range.
The 1000 nit frame falls beyond the display's brightness range, so the DTM system has to choose a proportion of the input range to keep nit-for-nit (2 nits input for 1 nit output on our projector) and a proportion to allocate for tone mapping the highlights (obviously with some sort of smooth transition between the two) - for example, mapping the first 150 nits of input to the display's 0-75 nit output range, and reserving the remaining 25 nits of output range to tone map the highlights (151-1000 nits input) into.
The 2500 nit frame falls even further beyond the display's brightness range, so the DTM system has to choose a smaller proportion of the input to keep nit-for-nit (2 nits for 1 nit on our projector), leaving a greater share of the display's output range to map the highlights into. For example, mapping the first 100 nits of input to the display's 0-50 nit output range, and reserving the remaining 50 nits of output range to tone map the highlights (101-2500 nits input) into.
It's obviously a very over-simplified example, with round numbers just to get the idea across, but that is my (very basic) understanding of how dynamic tone mapping works: it prioritises the nit-for-nit range where most of the content exists, and maintains that range's average brightness level (from one movie scene to the next) as it would have been viewed during mastering.
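To make that concrete, here's a toy sketch of the scheme in Python. The hard linear split at the knee, and the specific knee choices (150 nits in -> 75 out, 100 in -> 50 out), are just my round numbers from above - not any real DTM algorithm, which would use a smooth roll-off:

```python
def tone_map(nits_in, max_cll, display_peak=100.0, ratio=2.0):
    """Toy dynamic tone mapper for the worked example above.

    Content below the knee is shown 'nit for nit' at the chosen
    ratio (2 nits in -> 1 nit out); everything above the knee is
    linearly squeezed into the remaining output headroom. A real
    DTM uses a smooth roll-off curve, not this hard linear split.
    """
    # If the whole frame fits after the 2:1 scaling, no tone mapping.
    if max_cll / ratio <= display_peak:
        return nits_in / ratio

    # Otherwise pick a knee: these round numbers mirror the post
    # (150 nits knee for the 1000 nit frame, 100 for the 2500 one).
    knee_in = 150.0 if max_cll <= 1000 else 100.0
    knee_out = knee_in / ratio                    # 75 or 50 nits out

    if nits_in <= knee_in:
        return nits_in / ratio                    # nit-for-nit region
    # Compress knee_in..max_cll into knee_out..display_peak.
    t = (nits_in - knee_in) / (max_cll - knee_in)
    return knee_out + t * (display_peak - knee_out)

for max_cll in (50, 150, 1000, 2500):
    samples = (20, max_cll / 2, max_cll)
    out = [round(tone_map(x, max_cll), 1) for x in samples]
    print(f"MaxCLL {max_cll}: {list(samples)} in -> {out} out")
```

The first two frames come out as a pure 2:1 scale; the brighter two keep 2:1 up to their knee and compress the rest.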
Now, finally, to the question that is the purpose of my post: how does changing the Display Max Light (Lmax) dynamically affect how those scenes are rendered on screen?
In your initial post, you mentioned at the beginning that you had 150 nits on the screen but were using a fixed display target nits (display max light: Lmax) of 500 nits instead.
500/150 gives you a factor of 3.3.
It means that instead of having your diffuse white of 100 nits shown at 100 nits, or even at 50 nits if we go by the old SDR projector standard, you effectively have your diffuse white at about 30 nits (100 / 3.3).
In your example, if you were using 300 target nits instead, you would get a nice 2:1 ratio for diffuse white brightness. That would look very nice for all dark scenes.
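Back-of-the-envelope, assuming everything below the knee is simply scaled by real peak / target nits:

```python
real_peak = 150.0       # measured nits on the screen (from the post)
diffuse_white = 100.0   # reference diffuse white in the HDR grade

for target in (500.0, 300.0):
    scale = real_peak / target        # how far the whole image is dimmed
    print(f"target {target:.0f} nits: factor {target / real_peak:.1f}, "
          f"diffuse white lands at {diffuse_white * scale:.0f} nits")
```

So 500 target nits puts diffuse white at ~30 nits on screen, while 300 puts it at a much punchier 50 nits.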
Why did you not do that? Simple: because you had to compromise. If you play the movie The Meg, with some frames having a FALL over 1000 nits (!!!), a display target of 300 nits would be too low and everything would get flattened and blown out.
Why? Because you have to compress everything above 300 nits and fit it below. If you compress 300-4000 nits into, let's say, 100 nits (the BT.2390 knee start) to 300 nits, you are compressing extremely heavily. And if, like in The Meg, 90% of the picture is above 300 nits to begin with, you are basically heavily compressing the whole picture!
To compress less and retain some HDR effect, you would have to raise the target nits to something like 1000 or higher in some scenes of The Meg.
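Rough numbers, using a crude linear stand-in for the BT.2390 roll-off (the real EETF is a Hermite spline applied in PQ space; knee and limits here are just the example figures above):

```python
def knee_compress(nits_in, knee=100.0, target=300.0, source_max=4000.0):
    """Everything below the knee passes through untouched; everything
    from the knee up to source_max is squeezed into knee..target.
    A crude linear stand-in for the BT.2390 roll-off."""
    if nits_in <= knee:
        return nits_in
    t = (nits_in - knee) / (source_max - knee)
    return knee + t * (target - knee)

# 3900 nits of input range crammed into 200 nits of output range:
for x in (100, 300, 1000, 4000):
    print(f"{x} nits in -> {knee_compress(x):.1f} nits out")
```

That's roughly a 20:1 squeeze (200/3900) on everything above the knee - hence the flat, blown-out look when most of the picture lives up there.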
But at the same time, if you have 100 nits on screen and you want a 2:1 mapping up to diffuse white, you need to choose 200 target nits. This will give you great results for any frame peak below 200 nits, but 200 target nits will be very inadequate for bright scenes. Depending on how bright they are, you will want to go up with your target nits to retain HDR and image plasticity.
Therefore, dynamic target nits is a great solution: it displays dark content as bright as your projector can (1:1 or 2:1, depending on your taste), and bright content with a higher target nits that still keeps great in-picture contrast/HDR.
With a static target, you have to pick a middle ground, and it's never optimal.
And the fewer real nits you get on the screen, the worse the compromise. With only 50 nits on the screen, for example, it would be a veeeery bad idea to pick a 2:1 display target (100 nits) because most movies would look completely blown out. From experience, a static target of 300 nits is a good compromise with 50 real nits. But that's a factor of 6 on the diffuse white!!
Here dynamic target nits is even more important, so that you can have 100 target nits (2:1) for scenes with a peak lower than 100 nits, and a higher target nits (200, 300, 400, ... 1000 and more) for brighter scenes to retain some HDR effect and picture depth.
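In sketch form, the per-scene choice could look like the snippet below. The max() rule and the sample scene peaks are purely my illustration of the idea; a real implementation measures the content and smooths the target over time so the image doesn't pump between scenes:

```python
def dynamic_target_nits(scene_peak, real_nits=50.0, ratio=2.0):
    """Per-scene target: never below ratio * real_nits, so dark scenes
    keep the 2:1 diffuse-white mapping; never below the scene peak, so
    bright scenes keep highlight headroom instead of clipping.
    (The max() rule is just my illustration, not a real algorithm.)"""
    return max(ratio * real_nits, scene_peak)

for peak in (80, 250, 1000, 4000):   # hypothetical per-scene peaks
    print(f"scene peak {peak:>4} nits -> target "
          f"{dynamic_target_nits(peak):>4.0f} nits")
```

Dark scenes sit at the 100 nit floor (full 2:1 brightness), and brighter scenes ride the target up as far as needed.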