Originally Posted by Wookii
When you say "after inputting my nits level at 100% white", are you using a multiplier when entering Display Max Light in the Output > CMS settings? For a projector you shouldn't be inputting your actual measured peak nits. If that's what you've done, it would explain why the images look too bright.
For example, I run HDR at 155 nits, and my Display Max Light setting is 500 nits (I probably need to revise that upward, now that dynamic tone mapping has been released, as it does seem to produce slightly brighter mid-tones).
If what you say is up to date for the Lumagen DTM, then from what you are describing, Lumagen's dynamic tone mapping still uses a fixed "display target nits / display max light" setting, as madVR did in the past.
In that scheme, only the movie peak fed into the BT.2390 curve is adjusted dynamically, based on the measured frame peak.
At that level of development, projector users do have to lie and apply a factor to the target nits so that the highlights become less compressed.
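To make the fixed-target behaviour concrete, here is a minimal Python sketch of the BT.2390 roll-off (PQ constants from SMPTE ST 2084; the function names are mine, and this is an illustration, not Lumagen's or madVR's actual code). The only thing a "dynamic movie peak" changes is the src_peak_nits argument, which becomes the measured frame peak instead of the mastering peak:

```python
def pq_encode(nits):
    """SMPTE ST 2084 inverse EOTF: absolute nits -> normalized PQ signal."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

def bt2390_rolloff(e, src_peak_nits, tgt_peak_nits):
    """BT.2390 hermite-spline roll-off of a PQ-encoded value e.
    src_peak_nits: the 'movie peak' (static, or the measured frame
    peak when the movie peak is adjusted dynamically).
    tgt_peak_nits: the fixed 'display target nits' setting."""
    src = pq_encode(src_peak_nits)
    max_lum = pq_encode(tgt_peak_nits) / src
    if max_lum >= 1.0:
        return e                  # target >= source peak: nothing to compress
    e1 = e / src                  # normalize so the source peak sits at 1.0
    ks = 1.5 * max_lum - 0.5      # knee start: below this, values pass through
    if e1 < ks:
        return e
    t = (e1 - ks) / (1 - ks)
    e2 = ((2 * t**3 - 3 * t**2 + 1) * ks
          + (t**3 - 2 * t**2 + t) * (1 - ks)
          + (-2 * t**3 + 3 * t**2) * max_lum)
    return e2 * src
```

With a static low target, every bright frame gets its knee pushed down; feed in the measured frame peak instead and a frame that already fits the target passes through untouched.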
Here's a bit of history about madVR's evolution in dynamic tone mapping:
MadVR went through that phase a long time ago, with a single static target nits for all movies.
While dynamic tone mapping up to the frame peak already helps a lot to decompress the picture and give it back some pop, it's still far from optimal: a static target nits is a compromise between brightness and picture depth across all movies.
madVR then introduced profile rules based on your REAL nits on the screen and the maxCLL of the movie, to automatically select an appropriate target nits per movie.
For a movie like Blade Runner 2049, with 99% of all frame peaks below 150 nits, it makes no sense to use a display target nits higher than that... EXCEPT if your display's real nits is higher. Then you should use that instead, so that you watch the movie at the correct brightness, and not brighter than intended.
Later on came the possibility to pre-measure movies and get more useful variables like avgFMLL, the average frame peak throughout the movie. The target nits profiles then evolved to use that instead of the often useless maxCLL.
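As a sketch of what such a profile rule does (a toy heuristic of my own, with a made-up "headroom" constant, not madVR's actual rule set):

```python
def profile_target_nits(real_nits, avg_fmll, headroom=4.0):
    """Pick a per-movie target from the display's real nits and the
    movie's avgFMLL. 'headroom' is a hypothetical tuning constant.
    Dim movie  -> target = real nits: rendered at correct brightness,
                  not brighter than intended.
    Bright movie -> target raised toward the movie's average frame
                  peak so highlights keep their depth."""
    if avg_fmll <= real_nits:
        return real_nits
    return min(avg_fmll, real_nits * headroom)
```

For the Blade Runner 2049 case above, a 150-nit setup simply gets target = 150; a much brighter movie on the same screen gets a higher target so its highlights aren't crushed.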
Later again, Anna and I introduced a dynamic target nits algo in an add-on tool for madVR, based on the measured histogram and frame peak.
Goal 1) is to render nits-for-nits at the screen whenever possible, for example when the frame peak is lower than your real nits at the screen. For that to happen you need to tell the truth about your real nits, in your case 150 nits. Any frame with a peak lower than 150 nits will then be rendered perfectly, with no compression, at the original brightness: target nits = real display nits.
Goal 2) is to never compress the highlights too much, which would create a flat image. If the analysed frame is bright (high FALL, i.e. frame average light level), then you need a target nits higher than your real nits. How much higher depends on the frame: The Meg can need a target nits above 1000 nits in some cases. But never higher than the frame peak, because that would throw away brightness.
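The two goals can be sketched as a per-frame rule (the shape and the fall_gain constant are mine for illustration; the real algo works from the full measured histogram, not just these two numbers):

```python
def dynamic_target_nits(real_nits, frame_peak, frame_fall, fall_gain=3.0):
    """Goal 1: frame fits the display -> target = real nits, so the
    frame is rendered nits-for-nits with no compression.
    Goal 2: bright frame (high FALL) -> raise the target above real
    nits, but never above the frame peak, which would throw away
    brightness. fall_gain is a hypothetical tuning constant."""
    if frame_peak <= real_nits:
        return real_nits
    wanted = max(real_nits, frame_fall * fall_gain)
    return min(wanted, frame_peak)
```

A dim frame on a 150-nit setup gets target = 150 (goal 1); a very bright, high-FALL frame gets a target well above 150 but capped at its own peak (goal 2).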
The FALL algo from our tool "madMeasureHDR Optimizer" is now available directly in the LIVE algo, with no pre-measurement needed.
Dynamic target nits was kind of a revolution, since you get much more brightness whenever possible and much more picture depth whenever needed, per movie and changing dynamically within a movie.
And dynamic target nits is only possible if madVR/Lumagen knows your real nits.
So we should probably differentiate, or at least clarify, what exactly the Lumagen is doing compared to madVR.
MadVR dynamic tone mapping per frame/scene includes:
- dynamic movie peak adjusted to frame peak
- dynamic target nits
- dynamic clipping
As a consequence, dark scenes below your real display nits will all look identical, regardless of the projector's max lumens and nits on the screen.
And bright scenes will preserve the HDR effect and contrast, but will be displayed less bright than intended.
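Put together, that consequence can be illustrated numerically (toy numbers, a hard clip standing in for the real PQ-domain roll-off, and a made-up fall_gain constant, not the shipped algorithms):

```python
def on_screen_nits(frame_nits, real_nits, fall_gain=3.0):
    """Map a frame's pixel nits to what lands on the screen when the
    projector's 100% white equals real_nits. The target rule and
    fall_gain are illustrative only."""
    peak = max(frame_nits)
    if peak <= real_nits:
        target = real_nits            # dark frame: rendered nits-for-nits
    else:
        fall = sum(frame_nits) / len(frame_nits)
        target = min(max(real_nits, fall * fall_gain), peak)
    mapped = [min(n, target) for n in frame_nits]    # crude clip stand-in
    return [m * real_nits / target for m in mapped]  # fixed projector gain
```

A dark frame like [10, 50, 100] on a 150-nit setup comes back unchanged, on any projector; a bright frame like [100, 500, 2000] keeps its internal contrast but lands on the screen dimmer than mastered.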