
Premium Member · 9,859 Posts
@Soulnight, @AKJ89

I only had limited time to test and could only do so with a couple of movies, but I really like what you did with the new BT2390. :)

I concentrated on The Meg as it has both very bright and very dark scenes, and I'm very impressed at the way the same dynamic tuning value gives roughly the same brightness on screen.

I tested with three actual peak brightness levels:

- 150 nits (high lamp, iris fully open, on/off around 30,000:1, 0-100nits untouched up to around 300nits peak in the content)
- 115 nits (low lamp, iris fully open, on/off around 30,000:1, 0-100nits untouched up to around 150nits peak in the content)
- 100 nits (low lamp, iris at -3, on/off around 35,000:1)

The perceived overall brightness is very similar with dynamic tuning at 100 for all three tests, which is great.

The 150nits gives the best picture (most saturated, great dynamic range, 0-100nits better protected), but that comes at the price of high lamp, which shortens bulb life and raises fan noise and heat beyond what's acceptable here, as we sit quite close to the PJ (about 4 feet) and there isn't always A/C in the room. High lamp also means a higher black floor, which isn't a great option without a working DI. If I could get this in low lamp, though, I'd be very happy.

100nits gives the best black level and native contrast (important until JVC fixes the DI on the new models), but overall I prefer 115nits: the range isn't as good as 150nits, but it still looks great, and the difference in black/contrast isn't that big. Hopefully once JVC fixes the DI it will become even better. And 0-100nits isn't touched up to around 150nits peak in the content, so that still leaves a lot of content (including some entire titles!) untouched.

It was also nice to get dynamic clipping back, especially in the 100/115nits tests, so I hope that madshi will find the time to fix this in the live algo once he's done with the dynamic targets. It really helps to maximise contrast and significantly improves the picture.

By the way, I followed your suggestion and used a max of 10,000nits and a "don't compress" of 0 for all my tests.

Great job Anna and Flo, I can't wait to test more titles. I hope madshi will like it too and will implement this new BT2390 in the live algo, even if only as an option. Personally I prefer it to the FALL algo at this stage, but again I have only done very limited testing. I only wanted to say thanks for the continued development and all your efforts :)
 

Registered · 1,133 Posts
Cheers Anna and Flo!!!

Thanks for the update and for including the proper beta version of MadVRHdrMeasure to be used with your tool!!
 

Registered · 2,033 Posts
I browsed through BT.2390 and think I understand what you were saying yesterday. If the entire PQ scale is being rescaled with the measured frame peak (mastering display peak), then the overlap of most PQ values is not likely a significant issue.

I can agree that scaling the brightness of each target with the real display nits would make sense, as long as you retain the ability to adjust the overall scale up or down with the dynamic tuning value. I think I would still vote for 480nits as a practical top-end limit.
 

Registered · 7,962 Posts
Yes, this was based on our discussion. However, FALL alone is not enough: you need to scale down or up based on the currently used target nits compared to the "real nits" to get the "real perceived brightness". :)
I'm not sure I understand this part. Can you explain this in more detail?

Yes, the limiters are still there.
So yes, you can't go lower than the minimum/real nits (that would be rendered brighter than intended on the 4K Blu-ray anyway).
Of course we don't want to render pixels brighter than encoded on the disc. I fully agree with you there. However, with the current live FALL algo, if I change the real nits value in the madVR settings dialog, sometimes the image doesn't change at all, which doesn't seem logical. Because if the real peak nits does change, then so should the FALL algo output, otherwise the pixel would measure differently.

You also can't go lower than the calculated "MinTargetNits" for a KneeStart at 100, protecting 0-100nits from any compression at all times.
Is this optional? If not, then I would suggest making it optional. Reason 1: So we can test if it actually improves perceived image quality. Reason 2: Even if tests confirm that this option helps improve image quality in some situations, it will also result in the image getting darker in some situations. So I believe this is a user preference setting.

And you can't go higher than the peak, because that would just throw brightness away.
I understand your reasoning here, but I'm not sure I fully agree. E.g. imagine we have a scene where the actor moves his arm, and suddenly the sun reflects on his wrist watch (happens in Matrix 2, IIRC). This way suddenly there's a much higher peak in some frames. In this situation, both your FALL and BT2390 algos may disallow a high target nits for all frames where there's not a reflection on the wrist watch. But for the few frames where the reflection appears, suddenly a much higher target nits is allowed. That seems somewhat weird to me.

Of course throwing brightness away is generally questionable. On the other hand, we're talking here about frames which seem to have a very high FALL already, otherwise we would not run into this limitation in the first place! So I would argue that maybe this specific limitation should be removed?

And not higher than 2*avgHL*brightnessscalefactor either, which helps in some cases to not go too high.
Have you checked if this is still necessary with the BT2390 algo? Maybe it was only needed for the FALL algo? Just wondering, because the BT2390 algo makes everything so nicely scientific now, so I wonder if all the "artificial" limitations are still needed or not...

I just looked at your release notes briefly. Can you explain your BT.2390 adjustment a little more? For example, I was looking at these lines:

- Display Target Nits: 214nits
- Frame Peak: 1000nits
- Knee in nits: 100nits

I have been using a real display nits of about 214nits with the current FALL algo. Does this mean 480nits is always chosen when the frame peak is 1,000 nits? If so, I'm not sure I like that, as the previous FALL algo actually seems to make a more intelligent choice about when to raise the target nits that high. In other words, I took your side that relying on the FALL more than on the frame peak results in better perceived brightness and contrast than always attempting to protect 100 nits from compression.
Without having really tested it yet, I might agree with you there. I'm not convinced yet that the rule to never compress the 0-100nits range is important enough to sacrifice brightness for.

I still don't see why most people need targets above 480nits if their display is outputting 50-150nits.
It does make sense for very brightly encoded scenes. E.g. The Meg, or an HDR demo I have here, which I can't share atm. These *REALLY* need very high targets, otherwise they look really bad.

Tone mapping desaturates all colors relative to the target display nits. Using crazy targets like 600-1,500 nits on a dim display often creates terribly oversaturated, dark color tones. Even if it might look good to the eye, color accuracy is harmed by using excessively high target nits relative to the actual display brightness. At one point there was a thought that 4,000 nits targets should be used as a point of reference for color accuracy, but BT.2390 is meant to desaturate pixels relative to the loss of luminance.
I think you may be interpreting some of these things the wrong way around. Please understand that HDR movies are encoded for a 10,000 nits target display. So basically if you render the HDR movie untouched, without any tone mapping (and thus no desaturation) at all, you actually render it "correctly". But obviously too dark, if your display doesn't really do 10,000 nits.

Now if you give madVR a rather low target nits value to work with, e.g. 200 nits, then madVR has to apply a LOT of compression, which includes a lot of desaturation, as well. The higher the target nits value, the less compression/processing madVR has to do. So using a higher target nits value should actually get you nearer to the intended look of the movie - not further away!

You're throwing the actual display brightness into the discussion. But in all my research I don't think I've seen anyone else doing that. The key is the amount of compression which tone mapping applies. The more I compress, the more I also need to desaturate, to make the image look "right". How high the actual display brightness is has nothing to do with desaturation levels, from what I've read.

For example, the "Color Correction for Tone Mapping" paper you referenced compares different luminance reduction methods and checks how they affect perceived saturation levels. The authors only ever care about what tone mapping (= luminance compression) does to saturation. They never care about how the same image looks on a dark vs a bright display. So the gist of this paper is that when you compress the luminance channel, you also have to reduce saturation to some extent. But if you don't compress the luminance channel, you don't. So if you happened to use a 10,000 nits target in madVR, no tone mapping is applied, so no desaturation is needed either. And if you use a 1,000 nits target in madVR, less tone mapping is applied than with a 200 nits target, so less desaturation is needed.

Again: The actual luminance of your display does not seem to have anything to do with saturation levels (if it's calibrated properly).

The whole tone mapping vs desaturation topic can be reduced to: "The more luminance compression is applied to a pixel, the stronger it needs to be desaturated". This is all about digital processing, and not at all about how images will look on displays with different actual brightness capabilities.

So if you say that using a high target nits setting in madVR will result in too high saturation, that's actually the opposite of what I think to be true: using a high target nits setting means madVR has to compress less, which brings the image nearer to the master. The extreme variant of this is a 10,000 nits target setting: this will look very dark, but saturation levels *should* be correct, because they're untouched from the master. Of course this is only true with a properly calibrated display. If your display has too high saturation with very dark pixels, that would explain why you feel that high target nits settings have too high saturation. But I don't believe that to be true for a correctly calibrated display.
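
To illustrate with a sketch (just the general principle, not madVR's actual algorithm; the "strength" exponent is invented for the example): the desaturation amount follows from how much the pixel's luminance was compressed, never from the physical brightness of the display.

Code:
# Illustrative sketch only, not madVR's actual code: desaturation driven
# purely by luminance compression (l_out / l_in), independent of display
# brightness. "strength" is a made-up tuning exponent.
def tone_map_pixel(rgb, l_in, l_out, strength=1.0):
    ratio = l_out / l_in if l_in > 0 else 1.0  # how much luminance was compressed
    scaled = [c * ratio for c in rgb]          # compress luminance, hue untouched
    sat = min(ratio, 1.0) ** strength          # no compression -> sat = 1
    return [l_out + sat * (c - l_out) for c in scaled]  # mix toward gray at l_out

# With l_out == l_in (no tone mapping) the pixel passes through unchanged,
# no matter how bright or dim the display itself is.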

It was also nice to get dynamic clipping back, especially in the 100/115nits tests, so I hope that madshi will find the time to fix this in the live algo once he's done with the dynamic targets. It really helps to maximise contrast and significantly improves the picture.
Not sure what you mean? Dynamic clipping should work fine in the live algo. It's a bit different to what Anna & Flo's tool does because I didn't include their latest algorithm changes yet. But IIRC the live algo is currently actually *more* aggressive!

I hope madshi will like it too and will implement this new BT2390 in the live algo
I definitely plan to implement it. But want to finish work on scene detection first.
 

Premium Member · 9,859 Posts
Not sure what you mean? Dynamic clipping should work fine in the live algo. It's a bit different to what Anna & Flo's tool does because I didn't include their latest algorithm changes yet. But IIRC the live algo is currently actually *more* aggressive!
I know it's there, but it's not usable. :) It badly clips everything; that's why we had to disable it during the tests (remember, we all agreed not to use it?). That's why I said it would be great if you could find the time to fix it at some point. Having it back (working properly) with measurement files really improves contrast.
 

Registered · 1,479 Posts
Discussion Starter #867
I'm not sure I understand this part. Can you explain this in more detail?


Let's take an example:
Your display has a real peak of 100nits.
Your selected target nits is also 100nits.
If you build the FALL after BT2390 tone mapping, you will calculate a *certain* FALLBT2390 value in nits. Let's say you calculate a FALLBT2390 of 10nits for this frame.
To your eyes, on your screen you will REALLY see 10nits on average, since your target nits and real display nits match.
--> Final average brightness on screen: 10 * 100/100 = 10nits

Now, if you select 200 target nits instead for the same frame, the KneeStart will rise and the FALLBT2390 will be recalculated, compressing the pixels above the KneeStart less than before, so your pure FALLBT2390 will likely have risen a bit. Maybe you get a FALLBT2390 of 12nits instead of 10 before.
However, what the screen shows is not a brighter picture to our eyes, since we are now using a target nits two times bigger than our real display nits of 100nits, which means the brightness on screen is effectively the FALLBT2390 value divided by two.
--> Final average brightness on screen: 12 * 100/200 = 6nits
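
Or as a minimal sketch (my notation, not the tool's actual code):

Code:
# Perceived average brightness: the FALLBT2390 value (computed in
# "target nits" space) reaches the eye scaled by the ratio of the real
# display peak to the selected target nits.
def perceived_fall(fall_bt2390, real_display_nits, target_nits):
    return fall_bt2390 * real_display_nits / target_nits

print(perceived_fall(10, 100, 100))  # 10.0nits: target matches real peak
print(perceived_fall(12, 100, 200))  # 6.0nits: target is twice the real peak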



Of course we don't want to render pixels brighter than encoded on the disc. I fully agree with you there. However, with the current live FALL algo, if I change the real nits value in the madVR settings dialog, sometimes the image doesn't change at all, which doesn't seem logical. Because if the real peak nits does change, then so should the FALL algo output, otherwise the pixel would measure differently.

With the FALL algo, you don't control the brightness via "Real nits" but with "dynamic tuning".


Is this optional? If not, then I would suggest making it optional. Reason 1: So we can test if it actually improves perceived image quality. Reason 2: Even if tests confirm that this option helps improve image quality in some situations, it will also result in the image getting darker in some situations. So I believe this is a user preference setting.

Of course everything is optional. :) Just an IF somewhere and a checkbox somewhere else. It's not the core of the algo, only a sub-algo.

We may actually improve on that and only protect up to the highest nits level below 100nits. Sometimes there is nothing between 30nits and 700nits to begin with, so what you want to protect is only 0-30nits, which allows an even lower minimum target nits.

But in any case, it gives "me" a good feeling. :D And I don't believe it is being used that often.


I understand your reasoning here, but I'm not sure I fully agree. E.g. imagine we have a scene where the actor moves his arm, and suddenly the sun reflects on his wrist watch (happens in Matrix 2, IIRC). This way suddenly there's a much higher peak in some frames. In this situation, both your FALL and BT2390 algos may disallow a high target nits for all frames where there's not a reflection on the wrist watch. But for the few frames where the reflection appears, suddenly a much higher target nits is allowed. That seems somewhat weird to me.

Of course throwing brightness away is generally questionable. On the other hand, we're talking here about frames which seem to have a very high FALL already, otherwise we would not run into this limitation in the first place! So I would argue that maybe this specific limitation should be removed?


I understand your reasoning as well.
Optional is also an option. ;)
I guess it matters less with the madmeasure optimizer tool, since we use a "large" centered rolling average which can smooth out such undesired behaviour quite a lot.
I understand that knowing only one frame at a time makes a stable brightness/target much more desirable, even if you throw away brightness as a consequence.

Have you checked if this is still necessary with the BT2390 algo? Maybe it was only needed for the FALL algo? Just wondering, because the BT2390 algo makes everything so nicely scientific now, so I wonder if all the "artificial" limitations are still needed or not...

Not checked in depth. But I had the feeling that it would help on The Meg's very bright outside scene, so that the target does not climb even higher than it does.
Of course this can also be made optional. :)
 

Registered · 2,033 Posts
Without having really tested it yet, I might agree with you there. I'm not convinced yet that the rule to never compress the 0-100nits range is important enough to sacrifice brightness for.
Based on Soulnight's math from BT.2390, it seems as though the knee point is being rescaled based on the mastering peak:



If I used a PQ value of 100, mastering black of 0, and mastering white of 10,000, this is the result:

E1 = (100-0) / (10000-0) = .01

If the mastering peak is instead 1000, then:

E1 = (100 - 0) / (1000 - 0) = 0.10

This result impacts the calculation of the KneeStart. His scale looks like a flexible knee point, based on the input tone mapping parameters, that responds to the mastering or content peak.
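
If that reading is right, the rescaling and knee would look roughly like this (a sketch of the BT.2390 formulas as I understand them, not the tool's code; note that in the spec E, black and white would be PQ-encoded values, whereas the example above plugs in raw nits):

Code:
# BT.2390 E1 rescaling: map the source range onto [0, 1].
def normalize(e, black, white):
    return (e - black) / (white - black)

# BT.2390 knee point KS = 1.5 * maxLum - 0.5, where maxLum is the target
# display peak on the same normalized scale. A lower mastering peak pushes
# the same source value higher up the [0, 1] scale, so it sits differently
# relative to the knee.
def knee_start(max_lum):
    return 1.5 * max_lum - 0.5

print(normalize(100, 0, 10000))  # 0.01
print(normalize(100, 0, 1000))   # 0.10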

It does make sense for very brightly encoded scenes. E.g. The Meg, or an HDR demo I have here, which I can't share atm. These *REALLY* need very high targets, otherwise they look really bad.


I think you may be interpreting some of these things the wrong way around. Please understand that HDR movies are encoded for a 10,000 nits target display. So basically if you render the HDR movie untouched, without any tone mapping (and thus no desaturation) at all, you actually render it "correctly". But obviously too dark, if your display doesn't really do 10,000 nits.

Now if you give madVR a rather low target nits value to work with, e.g. 200 nits, then madVR has to apply a LOT of compression, which includes a lot of desaturation, as well. The higher the target nits value, the less compression/processing madVR has to do. So using a higher target nits value should actually get you nearer to the intended look of the movie - not further away!

You're throwing the actual display brightness into the discussion. But in all my research I don't think I've seen anyone else doing that. The key is the amount of compression which tone mapping applies. The more I compress, the more I also need to desaturate, to make the image look "right". How high the actual display brightness is has nothing to do with desaturation levels, from what I've read.

For example, the "Color Correction for Tone Mapping" paper you referenced compares different luminance reduction methods and checks how they affect perceived saturation levels. The authors only ever care about what tone mapping (= luminance compression) does to saturation. They never care about how the same image looks on a dark vs a bright display. So the gist of this paper is that when you compress the luminance channel, you also have to reduce saturation to some extent. But if you don't compress the luminance channel, you don't. So if you happened to use a 10,000 nits target in madVR, no tone mapping is applied, so no desaturation is needed either. And if you use a 1,000 nits target in madVR, less tone mapping is applied than with a 200 nits target, so less desaturation is needed.

Again: The actual luminance of your display does not seem to have anything to do with saturation levels (if it's calibrated properly).

The whole tone mapping vs desaturation topic can be reduced to: "The more luminance compression is applied to a pixel, the stronger it needs to be desaturated". This is all about digital processing, and not at all about how images will look on displays with different actual brightness capabilities.

So if you say that using a high target nits setting in madVR will result in too high saturation, that's actually the opposite of what I think to be true: using a high target nits setting means madVR has to compress less, which brings the image nearer to the master. The extreme variant of this is a 10,000 nits target setting: this will look very dark, but saturation levels *should* be correct, because they're untouched from the master. Of course this is only true with a properly calibrated display. If your display has too high saturation with very dark pixels, that would explain why you feel that high target nits settings have too high saturation. But I don't believe that to be true for a correctly calibrated display.
I can understand that the color saturation is closer to the original mastering peak values at higher targets, but raising the target nits above the display nits also changes the gamma, like here:

200 target nits:
http://upload.vstanced.com/images/2019/01/13/b8bdf2d1b270aefbe28cc3d5960a27be.png

425 target nits:
http://upload.vstanced.com/images/2019/01/13/d9a4983803a4df4d2d9bc05593735f6f.png

It simply looks as though the SDR gamma value has been raised at the higher target, but gamma is also directly linked to color saturation. You can see the clear difference in color saturation in the orange hues in these images. Too high a gamma = oversaturated, overly dark colors. I don't think this gamma mismatch is a good thing when displaying absolute PQ values.

I have watched The Meg once (the last time I'll ever watch it) and thought it looked good at 475 target nits. But I could be the outlier there. Movies shot in bright sunlight outdoors should have some sense of the contrast pushing against the edges of the screen, but not completely blowing it out.

I do have one more question:

Is it important to include the target display black level in the calculation for BT.2390? I ask because there appears to be a significant difference in the resulting path of the tone curve when a different display black level (b = minLum) is used:



I don't know if this would make a difference or not, but I ask because the two displays I've used to test the mapping of black to SDR gamma both have issues with using 2.20 as the output SDR gamma:

2.20 madVR -> 2.20 display:
http://upload.vstanced.com/images/2019/03/11/45933037a21fa8841c37d42db4055cd2.png

2.40 madVR -> 2.40 display:
http://upload.vstanced.com/images/2019/03/11/655dd06de36842596f017cfef65f5979.png

I only seem to get good black clipping with madVR set to 2.40. 2.40 (madVR) -> 2.20 (display) also works better than setting madVR to 2.20. 2.20 is always dark. 2.20 at the display is clipping ideally with SDR content, so I don't think the display gamma is doing anything wrong.

Neither display has a black level lower than 0.05 cd/m2. I wondered if that might be a factor. Maybe the more gradual rise out of black at 2.40 is helping to avoid black crush. And maybe the occasional washed out image after tone mapping is due to an imperfect mapping of black on some displays.

I doubt this is an issue because the image still appears punchy, but I thought I'd ask. If it was a factor in the result, maybe a drop-down would help:

- 0.001 cd/m2 (OLED)
- 0.005 cd/m2 (High-end Projector) (Same black level as reported by most current HDR mastering metadata)
- 0.01 cd/m2 (Mid-range Projector/Local Dimming VA LED)
- 0.05 cd/m2 (VA LED)
- 0.10 cd/m2 (IPS LED)

Edit: I tried calculating this with one PQ value and got the following:

Input PQ Value: 100 = 0.306 nits

- Display minLum = .001 cd/m2 -> Output PQ value = 100.000096

- Display minLum = .10 cd/m2 -> Output PQ value = 100.096059

I don't know that the decimal point ever becomes relevant enough to change the PQ value. That graphic shown is just confusing.
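
For what it's worth, the 0.306 nits figure checks out against the standard ST 2084 (PQ) EOTF, assuming the PQ value is a 10-bit code (0-1023), which is how I read it:

Code:
# SMPTE ST 2084 (PQ) EOTF constants
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_to_nits(code, bits=10):
    e = code / (2 ** bits - 1)       # normalized PQ signal, 0..1
    ep = e ** (1 / m2)
    y = max(ep - c1, 0) / (c2 - c3 * ep)
    return 10000 * y ** (1 / m1)     # luminance in cd/m2

print(round(pq_to_nits(100), 3))     # 0.306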
 

Registered · 1,479 Posts
Discussion Starter #869 (Edited)
Has BT.2390 been using a variable and not fixed knee point all along with frame-based tone mapping by using the measured brightness of each frame?
Yes.
MadVR's dynamic implementation of BT2390 has been doing that for more than a year now. Maybe even two.

This is why you had the check-box: "measure each frame peak luminance".

Our tool does not do the bt2390 tone mapping. MadVR does.

All we can do with our tool is automatically select a target nits for each frame, using the histogram information saved in the measurement file as a guide. :)

We can also overwrite the measured peak nits to apply the dynamic clipping.

Those are our only tools.
All the rest happens in madVR.

So the new BT2390 algo only analyzes the histogram and the peak, knowing what madVR will do with them, to work out which target nits would give the wanted results, such as:
- constant perceived brightness on screen (using also your real display nits info)
- protecting 0-100nits by back-calculating what the target nits needs to be for the detected peak (see the sketch below)
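
For example, a sketch of that back-calculation, assuming the BT.2390 knee sits at KS = 1.5 * maxLum - 0.5 with a zero black level (my reconstruction, not the tool's actual code; small differences from the release-note numbers are expected if the real normalization uses a nonzero black level):

Code:
# SMPTE ST 2084 (PQ) helpers
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def nits_to_pq(y):
    yp = (y / 10000) ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

def pq_to_nits(e):
    ep = e ** (1 / m2)
    return 10000 * (max(ep - c1, 0) / (c2 - c3 * ep)) ** (1 / m1)

def knee_in_nits(target, frame_peak):
    # KS = 1.5 * maxLum - 0.5, everything in PQ normalized to the frame peak
    ks = 1.5 * nits_to_pq(target) / nits_to_pq(frame_peak) - 0.5
    return pq_to_nits(ks * nits_to_pq(frame_peak))

def min_target_nits(frame_peak, protect=100):
    # smallest target whose knee still sits at or above `protect` nits
    pq_target = (nits_to_pq(protect) + 0.5 * nits_to_pq(frame_peak)) / 1.5
    return pq_to_nits(pq_target)

print(round(knee_in_nits(214, 1000)))  # ~95nits, close to the release-note example
print(round(min_target_nits(1000)))    # ~221nits minimum target for a 1000nits peak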
 

Premium Member · 9,859 Posts
Oh, I see. ("what does it stand for!?")
"it" stands for the dynamic clipping in the live algo. It is implemented, but it badly clips highlights (very aggressive, as you point out) so isn't really usable as it does more harm than good.
With measurements files, dynamic clipping seems to be working properly (it doesn't clip highlights excessively), and I attribute some of the improvement in contrast with measurements files vs the live algo to it being enabled (and working as intended). I don't notice a significant loss of highlights.
I could be mistaken of course. I haven't done a detailed comparison with/without with both the live algo and with measurements files because I thought it was established it didn't work as intended (yet) in the live algo :)
 

Registered · 2,033 Posts
Can you respond to this, madshi? I am still wondering if this is important or not. I don't get why mapping to 2.20 is not working correctly in some cases...

Is it important to include the target display black level in the calculation for BT.2390? I ask because there appears to be a significant difference in the resulting path of the tone curve when a different display black level (b = minLum) is used (see my post above for the full details, screenshots, and the suggested black-level drop-down).
Edit: I tried calculating this with one PQ value (the numbers are in my post above). Can't guarantee I interpreted the formula correctly, but I think those numbers are accurate.
 

Registered · 7,962 Posts
In theory I think your BT.1886 SDR calibration should already take care of the imperfect black level. As such I don't think I need to handle this as part of the tone mapping.

madVR usually sends SDR content untouched to the display, which means the content's gamma curve is whatever the SDR content was encoded with. However, for tone mapping your "this display is calibrated to" settings are used. They must match what your display is calibrated to. That could be e.g. BT.601/709 2.2 gamma or a pure power curve 2.2, or anything else.
 

Registered · 2,033 Posts
Yes, but when I set madVR to 2.20 with a display calibrated to 2.20, I get a darker result than when madVR is set to 2.40. Both displays are calibrated, so the display shouldn't be doing anything wrong. I can use multiple SDR black clipping patterns and get proper black clipping with 16-235 SDR content.

I'm not the only one who has gotten a worse result at 2.20. Neither the plasma screen nor the LED used has a black level as low as the typical projector. The picture is not unusual when watching HDR content, but 2.20 is not watchable.

The original question was "why include the end display black level in the calculation for BT.2390?" You can see in the one example I provided that it does change the resulting PQ value by decimal points. Are those decimal points relevant?
 

Registered · 2,033 Posts
madshi, you are getting too stubborn on these forums.

I guess based on the infographic above, I should assume the most standard PQ to SDR gamma conversion would go from 0.0001 cd/m2 (PQ black) to 0.01 cd/m2 (SDR black).

In that case, it is probably fine. Something still seems imperfect in this conversion, though.
 

Registered · 7,962 Posts
Yes, but when I try to set madVR to 2.20 to a display calibrated to 2.20, I get a darker result than when madVR is set to 2.40. Both displays are calibrated, so the display shouldn't be doing anything wrong. I can use multiple SDR black clipping patterns and get proper black clipping with 16-235 SDR content.
You're always only saying 2.20 and 2.40. But for calibration you can actually use a pure power curve of 2.20, or a BT.601/709 curve of 2.20, or BT.1886 2.20.

Anyway, I'm far from a calibration expert, so I can't explain to you why you get different results when using 2.20 (in both madVR and your display) vs 2.40 (in both). Either there must be a bug in madVR, which seems highly unlikely, though, because the math is simple enough, so there isn't much room for me to make mistakes. Or there must be a problem with the calibration. Or the calibration settings in madVR & display don't match, for whatever reason, maybe because the curve type doesn't match.

FWIW, if you use a BT.601/709 curve, madVR internally applies a 1.25 factor to the gamma value (so 2.20 becomes 2.75). This is done to make both curve types look more similar (= comparable).

In the end, I'm the wrong guy to talk to when it comes to calibration topics. What madVR does is really simple: after tone mapping in linear light, it converts the pixels into whatever you set "this display is already calibrated to". And the math for that is pretty simple, so I don't think it's likely that there's a bug in madVR there.

The original question was "why include the end display black level in the calculation for BT.2390?" You can see in the one example I provided that it does change the resulting PQ value by decimal points. Are those decimal points relevant?
I don't understand your question, and I'm not sure which original question you're referring to. I already explained to you why I think madVR's tone mapping shouldn't take the display black level into account. Let me say it again: because your BT.1886 SDR calibration should already take care of that. If madVR took the display black level into account, and your BT.1886 calibration already did that as well, there would be a double correction for the black level.

I guess based on the infographic above, I should assume the most standard PQ to SDR gamma conversion would go from 0.0001 cd/m2 (PQ black) to 0.01 cd/m2 (SDR black).
This makes no sense to me whatsoever. Black is black, 0 cd/m2.
 