
561 - 580 of 1220 Posts

·
Registered
Joined
·
1,479 Posts
Discussion Starter #561
I've uninstalled and reinstalled the brand new 3.5.1 tool ... but it still reports 3.5.0 ... why?




Anyway, my post is not about this...
My real measured picture brightness is 140 nits.


As you can see, I've started evaluating the new FALL algo with default settings.
I started with some very dark films (BR2049, Murder on the Orient Express, Ready Player One) and the image is very good.
After that I tested some "mid" movies like Ice Age: Collision Course. It's good.
But The Meg is too bright, very bright.


So it seems that the default settings in my case do not cover all UHD content....


Can someone suggest other settings?

HERE is the file for The Meg.

Thank you.
P.S. Can someone explain the meaning of "maximal target nits" in the advanced settings? Thanks.
We forgot to change the version number in the standalone exe from 3.5.0 to 3.5.1.
You may install it to SEE 3.5.1 ;)



To your other question, I am rather surprised that The Meg is too bright with those settings.

1) Did you make sure to apply the workaround for current madVR sometimes ignoring the dynamic target nits?
A single profile in madVR with 100 nits and no profile rule.


2) Just to make sure you don't have the bug: try setting both min and max target to 10000 nits.
If everything goes almost black, all good. You can go to step 3.

3) With 140 real nits and the FALL algo, you may want to raise the maximum target nits to 4000 nits, and the dynamic tuning from the default 75 to 200.
 

·
Registered
Joined
·
1,479 Posts
Discussion Starter #562
@Soulnight

I'm going to do some testing tonight and try to dial in my chapter settings, but one thing I've noticed is that cuts to black and fades to black don't seem to trigger a chapter change. I'll see if I can find a specific example and check that it's not just the settings I'm using; it's just something I've noticed a couple of times.
We changed nothing in the chapter logic. ;)
The only big change we made was a few versions ago with the multipass "chapter merge".

Cuts to black should be detected if the scene merge setting is not too high.

Fades to black are difficult to detect when the variation from one frame to the next is small.
You can try putting all chapter settings at zero:
- Minimum duration: 0
- Scene merge: 0
- Chapter merge: 0

And the rolling avg to 100000 frames.

This should effectively reset the algo to a constant target nits for each original scene detected and saved by madVR in the original measurement file. See if the cut was detected there.
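As a rough illustration of that reset behaviour, here is a hypothetical Python sketch (not the tool's actual code; the function and variable names are invented for this example). With all merge settings at 0 and a huge rolling average, the target effectively collapses to one constant value per originally detected scene:

```python
def per_scene_targets(frame_nits, scene_starts):
    """Constant target per scene (illustrative sketch only).

    frame_nits   -- per-frame target candidates (list of floats, in nits)
    scene_starts -- frame indices where a scene cut was detected
    """
    targets = [0.0] * len(frame_nits)
    bounds = list(scene_starts) + [len(frame_nits)]
    for start, end in zip(bounds, bounds[1:]):
        avg = sum(frame_nits[start:end]) / (end - start)  # one value per scene
        targets[start:end] = [avg] * (end - start)
    return targets

# Two scenes: frames 0-2 and 3-5 each get one constant target
print(per_scene_targets([100, 120, 110, 300, 310, 290], [0, 3]))
# -> [110.0, 110.0, 110.0, 300.0, 300.0, 300.0]
```

If a cut to black was detected and saved in the measurement file, it should show up as a target step at that boundary.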
 

·
Registered
Joined
·
7,948 Posts
@anna & Flo, so I think there are 3 main algorithms in your tool, in order to improve the HDR dynamic:

1) An algo which clips some highlights for each frame.
2) An algo ("FALL") which selects an ideal "target nits" value for each frame, depending on the measurements.
3) An algo which tries to adjust the "target nits" in such a way that the target nits changes are invisible to the eye, using a rolling average and chapter detection.
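A minimal sketch of what 3) could look like: a rolling average over the per-frame targets, reset at chapter boundaries so only within-chapter changes are smoothed. The window size, the reset behaviour, and all names here are assumptions, not the tool's actual implementation:

```python
from collections import deque

def smooth_targets(frame_targets, window=480, chapter_starts=()):
    """Rolling average of per-frame target nits, reset at each chapter
    so a cut between chapters still gets an immediate target change."""
    chapter_starts = set(chapter_starts)
    out, buf = [], deque(maxlen=window)
    for i, t in enumerate(frame_targets):
        if i in chapter_starts:
            buf.clear()  # hard reset at a detected chapter boundary
        buf.append(t)
        out.append(sum(buf) / len(buf))
    return out

# Within a chapter the target drifts slowly toward new values...
print(smooth_targets([100, 200], window=2))                    # -> [100.0, 150.0]
# ...but a chapter boundary at frame 1 jumps straight to the new target
print(smooth_targets([100, 500], window=10, chapter_starts=[1]))  # -> [100.0, 500.0]
```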

I might try to implement this for the live algo (first). I think I probably can't use your 3) approach for the live algo, so I'll have to find my own solution for 3). But it would probably be useful to use your 1) and 2) algos, since I should be able to use those for the live algo, and users seem to be happy with them. Is your 1) algo still the same one you PM'ed me a while ago? Could you PM me your 2) algo?

Thanks for all the work you're doing! :)

@Everyone, could you give me a list of movies/demos & timecodes which in your tests showed the biggest problems with the rolling average and chapter detection? Those are probably the scenes I should also test with, when trying to implement all of this for the live algo.
 

·
Registered
Joined
·
173 Posts
We changed nothing in the chapter logic. ;)
The only big change we made was a few versions ago with the multipass "chapter merge".

Cuts to black should be detected if the scene merge setting is not too high.

Fades to black are difficult to detect when the variation from one frame to the next is small.
You can try putting all chapter settings at zero:
- Minimum duration: 0
- Scene merge: 0
- Chapter merge: 0

And the rolling avg to 100000 frames.

This should effectively reset the algo to a constant target nits for each original scene detected and saved by madVR in the original measurement file. See if the cut was detected there.
Yep, it was just the minimum chapter length. I thought it was odd, as I wouldn't think a black scene would ever merge.

24 is the lowest I can go for the minimum chapter duration and still 'pass' the Fury Road sandstorm scene. 0 looks completely terrible here, by the way: the lightning flashes become just a difference in colour rather than brightness :p

I think going back to a rolling average of 120 works better than lowering the chapter duration any further. I've only found a couple of scenes where the brightness change is noticeable, but overall it brings out more HDR pop and isn't as distracting as the target changing with every scene change. 240 is very stable, but I think I prefer 120. 480 is probably still a good default; I'm yet to find a scene where the brightness change is noticeable at all with 480.

It's a shame that now that HDR is near perfect I'm so burnt out on the movies that I have from testing that I don't really feel like actually watching any of them :p
 

·
Registered
Joined
·
173 Posts
@anna & Flo, so I think there are 3 main algorithms in your tool, in order to improve the HDR dynamic:

1) An algo which clips some highlights for each frame.
2) An algo ("FALL") which selects an ideal "target nits" value for each frame, depending on the measurements.
3) An algo which tries to adjust the "target nits" in such a way that the target nits changes are invisible to the eye, using a rolling average and chapter detection.

I might try to implement this for the live algo (first). I think I probably can't use your 3) approach for the live algo, so I'll have to find my own solution for 3). But it would probably be useful to use your 1) and 2) algos, since I should be able to use those for the live algo, and users seem to be happy with them. Is your 1) algo still the same one you PM'ed me a while ago? Could you PM me your 2) algo?

Thanks for all the work you're doing! :)

@Everyone, could you give me a list of movies/demos & timecodes which in your tests showed the biggest problems with the rolling average and chapter detection? Those are probably the scenes I should also test with, when trying to implement all of this for the live algo.
I've found the Fury Road sandstorm scene (at around 28 minutes) to be really useful for testing chapter detection. For problems with the rolling average, it was mostly films where there is a very dark scene between bright scenes that stood out; the one I looked at the most was the scene in Deadpool at 22 minutes where it cuts from Wade (bright) to the bartender (dark) and then to the skee-ball scene (bright again).
 

·
Registered
Joined
·
792 Posts
1) An algo which clips some highlights for each frame.
2) An algo ("FALL") which selects an ideal "target nits" value for each frame, depending on the measurements.
3) An algo which tries to adjust the "target nits" in such a way that the target nits changes are invisible to the eye, using a rolling average and chapter detection.
Just to hear a different opinion (similar to @Onkyoman's): I also like 1), but I can't use the rest on my system, at least not the way it is now.
I understand that these topics are aimed at projectors, while I have an SDR Full HD TV (profiled for 120 nits, gamma 2.4, BT.709). For me a fixed 120 nits target in madVR does the job most of the time (only The Meg was different among the titles I watched). I can't use the dynamic target logic here, because the sensation of the picture would be completely different (mostly in sunny outdoor scenes).

We came to the conclusion that the major factors here are: environment, display size/brightness and (the biggest one) the gamma curve used! As @Manni01 used to say, there are so many factors.
I'm pretty sure that almost everybody has an "old" SDR TV at home, so this can be easily tested.

About the director's intent: I think it's overrated; you don't have to think about it much (unless something obviously looks bad, e.g. day looks like night and vice versa), mostly because:
... we have no way to know what's intended and what's not.
... you have no way to know which kind of blue the characters are supposed to be. Luckily, calibration allows us to be sure, because there is a standard ...
people don't care about them, the way they won't care if the blue in Avatar is exactly the blue that James Cameron wanted
This is not how calibration/profiling works; it's not about color accuracy - it will never be the "same" as it was intended - but about the optimal use of the display's color gamut, the visually smooth rendition of tonal value shifts, etc.
Again, too many factors are involved, even with SDR profiling.
 

·
Premium Member
Joined
·
9,824 Posts
About the director's intent: I think it's overrated; you don't have to think about it much (unless something obviously looks bad, e.g. day looks like night and vice versa), mostly because:

This is not how calibration/profiling works; it's not about color accuracy - it will never be the "same" as it was intended - but about the optimal use of the display's color gamut, the visually smooth rendition of tonal value shifts, etc.
Well, provided your display has the ability to reproduce the content as it was mastered for a specific medium, this is *exactly* what calibration is and how it works: reproducing the picture as it was meant to be seen. And yes, it is, amongst other things, about color accuracy and color volume (rather than just color gamut).

You won't see at home exactly what is shown in the cinema (often it's better!), but you can at least have what is meant to be seen on whichever medium you are watching. With SDR, this is exactly what calibration delivered (bar the question mark about which gamma had been used for each title, though for most recent ones BT.1886 was usually a safe guess).

With HDR, that's gone out the window because of many questionable choices, primarily the idea that brightness levels would be mastered in absolute rather than relative terms, as well as unfinished standards (there is no standard for consumer HDR10 on the reproduction side).

This is why I used Avatar/The Smurfs as an example. Without calibration, you have no way to know which shade of blue is correct, because there is no reference in the real world. The only thing that can deliver this (in SDR) is accurate calibration (on a display able to reach 100% or more of the target color volume; otherwise, yes, it's an approximation). Now, I understand that you don't care whether the skin color in Avatar or The Smurfs is correct, but that doesn't mean that calibration doesn't allow you to be sure that it is (in SDR).

The potential issue with the relative brightness of shots with dynamic tonemapping is similar.

It is your opinion that director's intent is overrated, but I'm a filmmaker and I like to watch films as they are meant to be seen, especially when it's other people's work. I want to know that what I watch is what the filmmakers intended, especially when it's about producing a specific effect. Otherwise how can I assess and appreciate their work, or be sure that I experience it in an optimal way? I'm aware that most people don't care about that, and you're one of them. That's fine. :p

When you send an important email (be it work or personal), you expect the reader to read the text you have sent, especially when each word is important and when you've thought about the ideas you are trying to communicate. How would you feel if Google rewrote it and changed the words, just to make more room for some ads, because it doesn't have the room to display all the words (which is, more or less, what tonemapping does)? Would you only correct the words that are obviously wrong because they are out of place or don't make any sense, or would you also correct those that change the meaning of the sentence, even if there is no obvious grammatical or contextual error, hence no visible error unless you know the original sentence because you wrote it? In other words, what matters most? That each sentence can be read without noticing any mistake, or that each sentence conveys the meaning of the original sentence?

I think both aspects are important, the first because otherwise it's distracting, and the second because otherwise it betrays the intention of the sender. And one of the reasons why I love MadVR is because I know that Madshi has these two aspects in mind as well when processing video content.

Anyway, let's agree to disagree, this is mostly off topic, but please, director's intent isn't a snobbish invention if you actually understand what it means and why it's important (to some). :)
 

·
Registered
Joined
·
2,388 Posts
This tool is an add-on for the work which has been done in the "improving madvr hdr to sdr tone mapping" thread.

Therefore it's expected that people are already familiar, or make themselves familiar, with the work which has been done there.

The whole point is to output an HDR picture tone mapped to an SDR gamma.
So forcing HDR output would just make things completely wrong.
Make sure you have madVR properly set up. You can use the neighboring thread for that: "madVR support thread". Thank you! :)
Good to know! Clearly I need to RTFM a bit more. The tool indeed works great on my SDR PC monitor. I guess I assumed it also worked for HDR signals the way the HDR Optimizer on the new Panasonic UHD players works, by adjusting the grayscale dynamically to avoid clipping based on your TV's capabilities (although maybe I misunderstand how that works as well).

I assume this tool has substantial benefits for people with SDR displays or projectors. For displays that do support HDR, do you still expect a picture-quality advantage with this tool? That is, should you use the tool to map to an SDR gamma and output SDR, or is it still better to have madVR pass through the native HDR data?
 

·
Registered
Joined
·
2,033 Posts
Here again the explanation to the "FALL algo":
https://www.avsforum.com/forum/26-home-theater-computers/3040072-madvr-tool-madmeasurehdr-optimizer-measurements-dynamic-clipping-target-nits-17.html#post57475508

In short:
Code:
Target Nits = MIN(2 * avgHL, 200 + MAX(1, 2 * Tuning / 50) * FALLNoblack)

* FALLNoblack is like FALL but with all black pixels excluded from the average (insensitive to black bars and format changes, like in the Nolan films)
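The quoted formula can be written out as a minimal Python sketch (the names `avg_hl`, `fall_noblack`, and `tuning` are my assumptions based on the description above, not the tool's actual code):

```python
def fall_target_nits(avg_hl, fall_noblack, tuning=75):
    """Per-frame target nits per the posted FALL formula (sketch only).

    avg_hl       -- average brightness of the highlights (nits)
    fall_noblack -- frame average light level, black pixels excluded (nits)
    tuning       -- the "dynamic tuning" setting (default 75 in the tool)
    """
    return min(2 * avg_hl, 200 + max(1, 2 * tuning / 50) * fall_noblack)

# Bright frame, default tuning: the FALL term wins
print(fall_target_nits(avg_hl=400, fall_noblack=120))  # -> 560.0
# Dim highlights: the 2*avgHL cap wins instead
print(fall_target_nits(avg_hl=50, fall_noblack=120))   # -> 100
```

Raising `tuning` from 75 to 200, as suggested earlier in the thread for bright setups, scales the FALL term up and so pushes the target higher on bright frames.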
OK, looking at the code, the current approach is pretty straightforward. So there are likely three practical approaches to a dynamic target nits: use the FALL, use the scene peak, or use a combination of the two. Who knows which one is the ideal solution for all circumstances/tastes. Using the FALL to judge when to change the target does seem to keep the target more stable.

I still can't figure out if the dynamic target nits is working correctly or not. I am uncertain whether the 100 nits fix is truly working when viewing things at a glance. The problem is, no matter how I set things up (profiles or no profiles), I can override the target nits in the control panel. This should never be possible, because the target nits is always fixed to a specific value as specified in the measurement file. Inputting a static target nits into the measurement file does work as expected: you can't override it in the control panel, no matter what you do.

I guess I'll have to wait to see what becomes of this.

Regardless, keep up the good work!
 

·
Registered
Joined
·
2,033 Posts
Just to hear a different opinion (similar to @Onkyoman's):
I didn't say I was against a dynamic target nits. I was just presenting a different philosophy for judging the goal of a dynamic target nits. Soulnight is judging the image based on the overall picture composition and not the scene peak. I wasn't sure this was the ideal approach when matched with the logic used by the existing dynamic tone curve. I really don't have an opinion now, because I don't know if the current FALL is working correctly and I haven't watched any content.
 

·
Registered
Joined
·
1,479 Posts
Discussion Starter #571 (Edited)
@anna & Flo, so I think there are 3 main algorithms in your tool, in order to improve the HDR dynamic:

1) An algo which clips some highlights for each frame.
2) An algo ("FALL") which selects an ideal "target nits" value for each frame, depending on the measurements.
3) An algo which tries to adjust the "target nits" in such a way that the target nits changes are invisible to the eye, using a rolling average and chapter detection.

I might try to implement this for the live algo (first). I think I probably can't use your 3) approach for the live algo, so I'll have to find my own solution for 3). But it would probably be useful to use your 1) and 2) algos, since I should be able to use those for the live algo, and users seem to be happy with them. Is your 1) algo still the same one you PM'ed me a while ago? Could you PM me your 2) algo?

Thanks for all the work you're doing! :)
Yes, you are right, you can basically split it into 3 algos. (We actually have 2 versions available for step 2: the FALL algo and the avgHL algo.)
Integrating dynamic clipping and dynamic target nits into madVR's LIVE algo sounds very promising! :)

We'll send you an email very soon with the details of our algos and further ideas about things you could do directly in madVR to make it even better, but that we can't do from outside.

In the meantime, could you try to tackle the issue/bug with madVR mentioned in the last few pages?

Basically, what everybody is expecting is that IF the "dynamic target nits" flag is set in the measurement file, then all "static" target nits profiles within madVR, with their rules and all, should be fully ignored.
Right now, you can provoke the bug by setting up a single profile with 10000 nits and then loading an optimized measurement file with dynamic target nits. In that case the static target nits "overrules" the dynamic target nits (very dark picture), despite the madVR OSD showing a nicely changing dynamic target nits.

Thank you!
Anna&Flo :)
 

·
Premium Member
Joined
·
1,754 Posts
It's a shame that now that HDR is near perfect I'm so burnt out on the movies that I have from testing that I don't really feel like actually watching any of them :p
LOL, I can relate. I have been 'critically' watching the same chapters in approximately 20 different movies, either to evaluate a new version of the tool or simply to dial in my settings. It is time-consuming, and while overall I love the results, I am getting a bit numb.

I think I should watch some SDR movies for a while and cleanse my palate. :)
 

·
Registered
Joined
·
1,208 Posts
Madshi's question is related to the fact that by changing the target dynamically while playing a title, you improve reproduction for each shot independently, so absolute tonemapping is better, especially for dark scenes, but you change the relative brightness of shots relative to each other. So reproduction of dark scenes improves (dramatically), but they might be shown brighter than they should be compared to the scenes before or after them.
Well, this is also not an issue for me anymore :)

I don't know if keeping the 0-100 range with the same target over the whole runtime would produce better or worse results (based on director's intent), but it would be harder to implement, for sure.

Soulnight's "FALL" algo is an elegant solution and is easy to implement.

@Everyone , could you give me a list of movies/demos & timecodes which in your tests showed the biggest problems with the rolling average and chapter detection? Those are probably the scenes I should also test with, when trying to implement all of this for the live algo.
Two scenes that come to mind and show very big brightness jumps if you don't use the "Merge scenes" option, because of false scene detection:

- Lucy in the first 5 seconds
- The Meg at 00:00:52

If you react immediately to new targets, you will get flickering and brightness jumps in these scenes (and a whole lot of others):

- Lucy at 00:47:48
- The Meg at 01:09:00

Of course, for the "live" algo, we need to set some limit so the brightness adaptation is not visible.

For instance, a maximum target change over time: no more than X nits of change per 0.X seconds, in both the brighter and darker directions, with a reset at scene changes.
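That kind of limit could be sketched roughly like this (a hypothetical illustration only; the 24 fps and 50 nits/s figures are placeholders, not values from the thread, and the X values above are deliberately left unfilled):

```python
def rate_limited_target(prev, desired, fps=24.0, max_nits_per_s=50.0,
                        scene_cut=False):
    """Move the live target toward the desired value at a bounded speed,
    jumping immediately on a scene cut (where a change is not visible)."""
    if scene_cut:
        return desired
    step = max_nits_per_s / fps           # allowed change for one frame
    delta = desired - prev
    return prev + max(-step, min(step, delta))

# Gradual adaptation within a scene, instant jump at a cut
print(rate_limited_target(200, 400))                   # small step up
print(rate_limited_target(200, 400, scene_cut=True))   # -> 400
```

Called once per frame, this caps how fast the target can drift within a scene while still allowing full-range jumps at cuts, where the eye cannot track brightness continuity.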

Also, the "Merge scenes [FALL change in %]" option is great IMO for removing the brightness jumps caused by false scene detection, with the default value of 100%.
I don't know if it can be used for the "live" algo though.


@Soulnight

I tried various titles, but I really don't understand what I am supposed to gain by using a higher "no compression limit" value than my minimum target nits (150).

The target is more unstable and "randomly" higher in dark scenes. It is quite distracting and does not look reliable to me (for getting "similar" brightness from scene to scene set in the same environment).

This is what it does on John Wick with different values:

150 / 200 / 250 / 300


 

·
Registered
Joined
·
1,479 Posts
Discussion Starter #574
@Soulnight

I tried various titles, but I really don't understand what I am supposed to gain by using a higher "no compression limit" value than my minimum target nits (150).

The target is more unstable and "randomly" higher in dark scenes. It is quite distracting and does not look reliable to me (for getting "similar" brightness from scene to scene set in the same environment).

This is what it does on John Wick with different values:

150 / 200 / 250 / 300


Well, again it's a matter of preference. :)
Like you, I also prefer to base it on the picture content instead of the frame peak as much as possible.

With the current FALL algo, there should be almost no difference at all for you between 150 and 200, as the graph shows as well. Above 200, you will see clear differences (also visible in your graphs).
Just use what you prefer. :)

And I maintain that 200 is a safe value for everybody. ;)
Extending the "no compression limit" higher than 200 nits, with target nits = peak nits, really can do what you describe.
The target is then based on the peak and not on the global picture brightness, and can seem less logical.

But again, you know what there is to gain: a bit less compression the higher the target nits. To each his own compromise. :)
 

·
Registered
Joined
·
20 Posts
You need to support drag-and-drop for files, and to remember the path in case I close the application before it finishes (if the wait is too long).
 

·
Registered
Joined
·
7,948 Posts
We'll send you an email very soon with the details of our algos and further ideas about things you could do directly in madVR to make it even better, but that we can't do from outside.
Looking forward to that, thanks! :)

In the meantime, could you try to tackle the issue/bug with madVR mentioned in the last few pages?
Yes, I've seen the reports; I just had no time to look into it yet.
 

·
Registered
Joined
·
1,131 Posts
@madshi


FYI, there is still a bug present with HdrMeasure39 and jRiver where I am unable to play another media file after playing an HDR movie file. The green "play" icon remains on the movie thumbnail, as well as in the "now playing" field, despite clicking "stop". At this point, I am unable to play ANY media files (1080p movies, MP3s, etc.) without restarting jRiver. It's as if jRiver isn't properly releasing the HDR file...



This is currently happening on the latest version of jRiver MC24 64-bit on Windows 10.


I rolled back jRiver to an earlier version and the problem persists.


I then went back to the latest version of jRiver + the last official madVR release v0.92.17 --> things work as intended. That is, the movie stops, the icon/now playing field reflects that, and I am able to play other media files as intended.


As soon as I copy in the files from HdrMeasure38 (or, as I tried today, HdrMeasure39), jRiver hangs on stopping an HDR movie and I have to restart jRiver.
 