Flo, I'm cross-quoting your post from the other thread, as the discussion is more relevant here.
Jim, this is a discussion from another thread where Flo (owner of the excellent ProjectionDream website, and creator of a MadVR dynamic tone mapping add-on) is querying how Lumagen DTM works, and more specifically how it interacts with the Display Max Light setting. I think Flo's initial perception is that because we are entering a multiplier-derived DML setting for projector set-ups, the system isn't tone mapping dynamically/correctly. I think Flo's perception is incorrect, so below is my attempt to explain it. I have brought the discussion here in the hope you can answer Flo directly, and correct any errors in my explanations below.
Originally Posted by Soulnight
If what you say is up to date for the Lumagen DTM, and from what you are describing, lumagen dynamic tone mapping seems to still be using a fixed "display target nits / display max light setting" as madVR used to do in the past.
In that state, only the movie peak defined in bt2390 is adjusted dynamically based on the measured frame peak.
In that level of development, projector users do have to lie and use a factor for the target nits so that the highlights become less compressed.
I may be misunderstanding, but I suspect you've got the wrong end of the stick on this. I'll do my best to fill in the gaps as much as I'm able, where I can understand what you're asking. I am only a user on the outside looking in; I don't have working knowledge of exactly how the algorithms are constructed - you'd be better placed asking more specific questions on this thread so Jim can answer directly (as long as the answer isn't likely to expose any IP). But I'll have a stab at it:
The Radiance Display Max Light setting simply inputs the display's measured peak brightness in its HDR mode. It does not in any way set a hard clip point for the tone mapping algo (I think that's what you were thinking above?). It works (as far as I am aware) in exactly the same way as the add-on you describe for MadVR, in that you enter the display's measured peak nits.
The only difference is the way the Radiance handles the figure that is entered: it seems to assume it is for a flat panel where 1 nit input = 1 nit output, which obviously isn't appropriate for a projector, so we currently have to apply a multiplier to our measured peak luminance and enter that calculated figure, otherwise the overall picture level is too bright.
I will admit, I don't like having to do that; as you rightly point out, it adds too much guesswork into the setting, and leads to trial and error to get the average picture level (the nit-for-nit area excluding highlights) to what you would expect from the equivalent SDR version. I would much rather enter an actual measured peak nits as you describe, and then select a check box to tell the system that I'm using a projector so the nit-for-nit range is scaled correctly for projector use. I have requested this before. Jim can probably outline whether the system could be altered to work this way for projector owners?
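To make the multiplier fudge concrete, here is a minimal Python sketch of what we currently do by hand; the function name and the example multiplier are my own illustration (the actual multiplier is found by trial and error per set-up, not prescribed by Lumagen):

```python
# Illustrative only: the multiplier value and the helper name are my own
# assumptions, not Lumagen's actual internals.

def dml_to_enter(measured_peak_nits: float, multiplier: float = 5.0) -> float:
    """Return the Display Max Light value a projector owner would enter.

    Flat panels enter their measured peak directly (1 nit in = 1 nit out).
    Projector owners currently scale their measured peak by a trial-and-error
    multiplier, otherwise the overall picture level comes out too bright.
    """
    return measured_peak_nits * multiplier

# Example: a projector measuring 100 nits off the screen, with a 5x multiplier
print(dml_to_enter(100.0))  # -> 500.0
```

A projector checkbox, as requested above, would let the user enter the real measured 100 nits and have the unit do this scaling internally.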
Originally Posted by Soulnight
Here a bit of history about madVR evolution in dynamic tone mapping:
MadVR went through that phase at some point with a static target nits for all movies a long time ago.
While having dynamic tone mapping up to the frame peak nits still helps a lot to decompress the picture and give it back some pop, it's still far from optimal. A static target nits is a compromise between brightness and picture depth across all available movies.
MadVR then introduced profile rules based on your REAL nits on the screen and the MaxCLL of the movie, to automatically select an appropriate target nits per movie.
For a movie like Blade Runner 2049, with 99% of all frame peaks below 150 nits, it makes no sense to have a display target nits higher than that... EXCEPT if your display's real nits is higher. Then you should use that instead, so that you watch the movie at the correct brightness, and not brighter than intended.
Later on came the possibility to pre-measure movies and get more useful variables like AvgFMLL, which is the average frame peak throughout the movie. The target nits profiles then evolved to use that instead of the often useless MaxCLL.
And then later again, Anna and I introduced a dynamic target nits algo within an add-on tool for madVR, based on the measured histogram and peak.
Goal 1) is to have nit-for-nit rendering at the screen whenever possible, for example if the frame peak is lower than your real nits at the screen. For that to happen you need to tell the truth about your real nits - in your case 150 nits. Any frame with a peak lower than 150 nits will be rendered perfectly (no compression) at the original brightness. Target nits = real display nits.
Goal 2) is to never compress the highlights too much, which would create a flat image. If the analysed frame is bright (high FALL value), then you need a higher target nits than your real nits. How high will depend on the frame. The Meg can need target nits above 1000 nits in some cases. But never higher than the frame peak, because that would throw away brightness.
The FALL algo from our tool "madmeasurehdr optimizer" is now directly available in the LIVE algo, with no pre-measurement needed.
Dynamic Target nits was kind of a revolution since you get much more brightness whenever possible and much more picture depth whenever needed, per movie and changing dynamically during a movie.
And dynamic target nits is only possible if madVR/Lumagen knows your real nits.
As I mentioned above, as far as the system is concerned the user is entering their real nits; it's just that as a projector owner you've got to fudge the figure a bit to get there (as I say, I don't like having to do that, but that's how it currently works).
The end result, however, is still the same, and your Goals 1) and 2) are achieved. That is how I understand Lumagen DTM currently works: every frame, or 'scene', is rendered based on its frame/scene MaxCLL and MaxY (Lumagen's nomenclature for MaxFALL), plus no doubt other factors I'm not privy to.
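As I understand Flo's Goals 1) and 2), the selection logic could be sketched like this; the FALL-based heuristic below is a made-up placeholder of mine, not the real madVR or Lumagen algorithm:

```python
# Sketch of the dynamic-target-nits idea described above (Goals 1 and 2).
# The FALL-to-target mapping is an illustrative placeholder, not the
# actual algorithm used by madVR or Lumagen.

def dynamic_target_nits(frame_peak: float, frame_fall: float,
                        real_display_nits: float) -> float:
    # Goal 1: if the frame fits within the display's real range,
    # render nit-for-nit with no compression.
    if frame_peak <= real_display_nits:
        return real_display_nits
    # Goal 2: for bright frames (high FALL), raise the target above the
    # display's real nits so the highlights are not over-compressed...
    needed = real_display_nits + frame_fall * 2.0  # placeholder heuristic
    # ...but never above the frame peak, which would throw away brightness.
    return min(needed, frame_peak)

print(dynamic_target_nits(120.0, 40.0, 150.0))    # frame fits -> 150.0
print(dynamic_target_nits(4000.0, 600.0, 150.0))  # bright frame -> 1350.0
```

The key point either way: the system needs your real measured nits as an honest input for Goal 1 to work at all.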
Your comment "Dynamic Target nits was kind of a revolution since you get much more brightness whenever possible and much more picture depth whenever needed, per movie and changing dynamically during a movie." is exactly what many of us on this thread found when Lumagen's DTM was released.
Just out of interest, I assume your algo already takes account of the fact that a projector is being used, or you have some sort of check box to select a projector vs a flat panel? How do users account for differing preferences in the average brightness level of the nit-for-nit range - similar to setting SDR peak luminance, in that some might prefer 42 nits (12 ftL) while others prefer 61 nits (18 ftL)? I imagine there is still a bit of 'fudging' that has to be done on projectors?
Originally Posted by Soulnight
So we should probably differentiate, or at least clarify, exactly what the Lumagen is doing compared to madVR.
MadVR dynamic tone mapping per frame/scene includes:
- dynamic movie peak adjusted to frame peak
- dynamic target nits
- dynamic clipping
As a consequence, dark scenes below your real display nits will all look identical regardless of the projector's max lumens and nits on the screen.
And bright scenes will preserve the HDR effect and contrast, but will be displayed less bright than intended.
OK, I'll try; as far as I can tell, the above is essentially exactly what Lumagen DTM is doing.
- I don't believe there is any 'dynamic movie peak', just a dynamic frame (or group of similar frames = 'scene') peak
- I'm not sure what you mean by dynamic target nits? I believe there is just the measured frame/scene peak nits and the user-entered display peak nits; DTM will then render the frame based on those. Where frame peak nits < display peak nits, no tone mapping will apply and the frame will be rendered nit-for-nit, input to output (obviously scaled back on a projector, as you don't want to display a 50 nit input as 50 nits on screen, for example). Where frame peak nits > display peak nits, the DTM algo will tone map the highlights whilst trying to preserve as much of the nit-for-nit range as possible without overly compressing the highlights.
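To illustrate that per-frame decision, here is a rough Python sketch; the knee position and the linear roll-off are illustrative assumptions of mine, not Lumagen's actual transfer function:

```python
# Rough sketch of the per-frame behaviour described above: nit-for-nit below
# the display peak, highlight roll-off above it. The knee fraction and the
# linear compression are illustrative, not Lumagen's actual curve.

def tone_map_frame(pixel_nits: float, frame_peak: float,
                   display_peak: float, knee: float = 0.75) -> float:
    if frame_peak <= display_peak:
        return pixel_nits  # frame fits the display: no tone mapping needed
    knee_nits = display_peak * knee
    if pixel_nits <= knee_nits:
        return pixel_nits  # preserve the nit-for-nit range below the knee
    # Compress [knee_nits, frame_peak] into [knee_nits, display_peak].
    t = (pixel_nits - knee_nits) / (frame_peak - knee_nits)
    return knee_nits + t * (display_peak - knee_nits)

# Example: a 100 nit display facing a 1000 nit frame peak
print(tone_map_frame(50.0, 1000.0, 100.0))    # -> 50.0 (nit-for-nit)
print(tone_map_frame(1000.0, 1000.0, 100.0))  # -> 100.0 (frame peak mapped to display peak)
```

On a projector there would be an additional overall scaling of the nit-for-nit range, as noted above, which this sketch omits.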
For more clarification, this is a recent post Jim made on that same subject:
Originally Posted by jrp
- Currently for Dynamic Tone Mapping the "Low-Set" Shape and Transition parameters do not affect the Dynamic Tone Mapping. Thrang was first to point this out (that I saw). It turns out that the "Low" curve is lower than his setting for Display Max Light of 400. Without getting into too many details, if you are doing HDR and the "display" says it has more range than the source needs (this case), then no Tone Mapping is needed, since the "display" can render the entire range of the source. As an example, if you have a 1000 nit TV and a 1000 nit source you can render the source without tone mapping, but for a 4000 nit source you would need tone mapping.
- So while this is *not* a bug, it is a choice we are reconsidering. Based on feedback the current choice works very well, other than that the Low-Set Shape and Transition do not change the transfer function.
- I actually designed the low to high Tone Mapping Blend equations to allow the current Frame MaxCLL to be well below the low-curve-nit-point, and to also be able to be well above the high-curve-nit-point. This is because originally I planned on the low curve being higher than we ended up with, and just plain did not consider the effect using a much lower nit curve would have on using Shape and Transition.
- We are now planning to raise the "nit point" of the low-nit curve up so that for devices in the <700 nits range Shape and Transition will have some effect on the transfer function. This should give extra control for Dynamic Tone Mapping and *may* help tune the scenes referenced in the various posts.
Some general comments on Dynamic Tone Mapping:
It is not possible to get perfect results for every scene using Dynamic Tone Mapping. Also, Dynamic Tone Mapping results will be different than Static to some degree. For almost all scenes the Dynamic Tone Mapping results should be better than Static Tone Mapping.
Tone Mapping is all about trade-offs. If you are trying to reproduce Mad Max Fury Road with a MaxCLL of 9919-ish nits on a 100 nit projector, something's gotta give. In general we believe allocating more of the range to low and mid-tones and allowing highlights to show some clamping looks the best. You may not agree (and you have controls to change this if you wish).
I think saying that having more controls means the job is not done ignores that different producers do things differently, and every person has their personal preferences. The only way to deal with this is enough controls to tune the image to personal taste and compensate for differences in the content. I don't get that people say there are too many controls when they do not have to use them since the default settings are extremely good.
We spent a great deal of effort on tuning Static HDR Mapping. I want to thank everyone who gave us feedback on optimal settings for Static Tone Mapping. A special thanks to Kris Deering, whose feedback and parameter settings were instrumental in tuning the Static Tone Mapping and selecting the current parameter defaults we use for it.
If a scene "looks better" with Static as compared to Dynamic Tone Mapping, then it likely means that the static MaxCLL number was "wrong," but happily so, in that it improved the scene slightly. Of course, as people point out scenes where they think Static Tone Mapping looks better, we can investigate and see if there is a way to improve Dynamic Tone Mapping.
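To illustrate the low-to-high curve blend Jim describes above, here is a hedged Python sketch; the specific nit points and the linear blend weight are my assumptions for illustration, not Lumagen's actual equations:

```python
# Sketch of blending between a "low-nit" and a "high-nit" tone curve based on
# the current frame's MaxCLL, per Jim's description. The nit points and the
# linear, clamped blend are illustrative assumptions, not Lumagen's values.

LOW_NIT_POINT = 700.0    # assumed low-curve nit point (after being raised)
HIGH_NIT_POINT = 4000.0  # assumed high-curve nit point

def blend_weight(frame_maxcll: float) -> float:
    """Return the high-curve weight in [0, 1] for this frame's MaxCLL.

    As Jim notes, the frame MaxCLL can sit well below the low-curve nit
    point or well above the high-curve nit point, so the raw position is
    clamped into [0, 1] rather than extrapolated.
    """
    t = (frame_maxcll - LOW_NIT_POINT) / (HIGH_NIT_POINT - LOW_NIT_POINT)
    return max(0.0, min(1.0, t))

print(blend_weight(500.0))   # below the low point -> 0.0 (pure low curve)
print(blend_weight(2350.0))  # halfway between the points -> 0.5
print(blend_weight(9919.0))  # Mad Max-style peak -> 1.0 (pure high curve)
```

With the low point raised as Jim proposes, frames on sub-700 nit devices land inside the blend region, which is why the Low-Set Shape and Transition controls would then start to affect the transfer function.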
Apologies all for the gargantuan post, I just wanted to lay the foundations for having this discussion on topic here, rather than off-topic on other threads.