AVS Forum | Home Theater Discussions And Reviews (https://www.avsforum.com/forum/)
-   Digital Hi-End Projectors - $3,000+ USD MSRP (https://www.avsforum.com/forum/24-digital-hi-end-projectors-3-000-usd-msrp/)
-   -   Improving Madvr HDR to SDR mapping for projector (https://www.avsforum.com/forum/24-digital-hi-end-projectors-3-000-usd-msrp/2954506-improving-madvr-hdr-sdr-mapping-projector.html)

Soulnight 02-03-2018 01:28 AM

Madvr HDR to SDR mapping: already great, soon even better for projector?
 
I am opening this dedicated topic for:

How to further improve the already great madVR HDR to SDR mapping, with a special focus on projectors. :)

I am myself a very happy user of madVR HDR to SDR tone mapping and have been using it for months with a great projector which is officially not compatible with HDR (Epson EH-LS10000).
What does it do:
- It makes every projector or TV compatible with HDR, even if they were not initially "marketed" for it :)
- madVR compresses the highlights dynamically, so you get dynamic HDR à la Dolby Vision or HDR10+. :eek::cool:
- You can choose your target color gamut according to your display: REC2020, DCI-P3 D65, REC709, or even better, a full 3DLUT calibration over thousands of points! (For example, I chose DCI-P3 because my projector covers 100% of DCI.)
- madVR lets you choose the "HDR strength" vs "brightness" trade-off through a control called "this display target nits".
- The next best choice at the moment is the Lumagen Radiance Pro for $5,000++, but it does not enable dynamic HDR (yet).

What do you need:
- an HTPC with a recent graphics card

I hope that concentrating all the discussion and effort in one place will enable a faster solution.
The discussion has been scattered all over AVSForum lately.

I currently see 4 potential improvements through the discussion below:

1) madVR could handle highlights even better through light clipping (not a bug).
-->Solution: clip only a certain % of the brightest pixels through the use of a histogram?
-->Madshi says that madVR follows BT.2390 accurately. This would be a new feature, not a bug fix.

3) madVR desaturates the colors which are above "this display target nits" EVEN if you choose "100% saturation, 0% brightness".
-->Bug?
-->Solved by Madshi 2018-02-04
4) madVR does not yet use a specific method for projectors with low brightness.
-->Multiplication factor for the HDR curve? Inspiration from the Manni/Javs Arve HDR curves?
-->Will be implemented by Madshi in the next build

Here are a few quotes trying to illustrate what could be improved/fixed in madVR HDR to SDR mapping, with a special focus on projectors:


Discussion in the Guide to Building a 4K HTPC with madVR

https://www.avsforum.com/forum/26-ho...l#post55463634

Quote:

Originally Posted by Soulnight (Post 55463634)
Hi Madshi, happy new year! :)

I have been thinking lately, while watching 4K HDR movies on my projector, about how to improve HDR "pop" on a low-brightness projector.
Below is my old idea, quoted, which you did not like for good reasons.

Based on that feedback and my user experience, here is my new idea:

1) A bit complicated but nice
I am using the HDR to SDR shader math mapping with a 300nits target, with dynamic compression of the highlights up to the peak luminance of each image.

What I have noticed is that very often only a few pixels in the image reach this peak luminance (let's say 1000nits), while most of the highlights (let's say 90% of the pixels above 300nits) are still below 600nits.

So madVR compresses the pixels between 300nits and 1000nits in order NOT to clip ANY pixels, even if it's only a few. But in doing so, most of the pixels are compressed heavily for the sake of those few.

I would propose to give the user a choice for setting a "dynamic clipping nits limit", in "percent" of the number of pixels above "this display target nits".
This "P=percentage clipping value" (in our example 90%) would be used like this:
- T = total number of pixels above "this display target nits" (in our example 300nits)
- C = cumulated number of pixels above 300nits, sorted by increasing nits
- When C/T = P = 90%, select the clipping value accordingly. (In our example, 90% of the pixels above 300nits are below 600nits.)

In other words, we are looking for the nits level of the P% quantile for the pixels above "this display target nits".

Use the newly calculated clipping value instead of "the measured peak luminance".

Advantages:

- Most of the pixels composing the highlights above the "soft clipping" point now have better differentiation, and the HDR effect will be more pronounced
- Makes good use of madVR looking at each single pixel's nits

Disadvantage: a certain number of pixels are now clipped, but it should be small enough not to impact the image quality negatively.

2) Very simple but less nice
-->provide the user an input box to define a hard-coded "clipping limit".
You would then have:
1) the soft clipping point / this display target nits = 300nits
2) the clipping limit = 600nits for example
Everything above 600nits gets clipped.
Everything between 300 and 600 nits gets compressed up to the peak luminance of the said picture
Below 300nits: nothing touched


Of course you could implement both. :-)
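The percentile idea above can be sketched in a few lines (names are hypothetical; this illustrates the proposal, it is not madVR code):

```python
import math

def dynamic_clip_nits(pixel_nits, target_nits=300.0, percentile=0.90):
    """Return a per-frame clipping luminance: the smallest nits level L
    such that `percentile` of the pixels above `target_nits` are <= L
    (the P% quantile of the highlight pixels)."""
    highlights = sorted(n for n in pixel_nits if n > target_nits)
    if not highlights:
        return target_nits  # no highlights at all: nothing to compress
    # C/T = P: index of the P-quantile among the sorted highlight pixels
    idx = max(math.ceil(percentile * len(highlights)) - 1, 0)
    return highlights[idx]
```

With the thread's example (90% of the pixels above 300nits sitting below 600nits), the function returns roughly 600, so the curve would compress toward 600nits instead of the frame's absolute 1000nits peak, clipping only the brightest 10% of highlight pixels.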

Also: could you add to the information displayed under "Ctrl+Y" so that we have these 4:
- chosen value for "this display target nits" (in the above example 300nits)
- average nits of the picture
- maximum peak luminance in nits (in the above example 1000nits)
- nits level of the 90% cumulated number of pixels, ranked by increasing brightness, between "this display target nits" and "peak luminance" (in the above example 600nits). In other words, the nits level of the 90% quantile of the pixels above "this display target nits".


Thank you a lot!
I know I am asking a lot, and I hope madVR V1.0 comes soon so that I can give at least a bit back to you. ;)


Viele Grüße,
Florian

Quote:

Originally Posted by madshi (Post 55500058)
Not a bad idea at all. Two problems:

1) It's more difficult to implement than the current measurement, because I'd have to measure a full histogram, which is harder to do with simple pixel or compute shaders than the current very simple measurement.
2) Any additional user option makes things harder to understand for the average user. Especially if the new option is cryptic. How would the average user understand "percent of the number of pixels above the display target nits"?

But I'll add this idea to my list of things to look at. Maybe I can come up with something that either needs no additional user options, or can be explained to the user in such a way that he's not confused.

Quote:

Originally Posted by Soulnight (Post 55500422)
Glad that you like it!:)
You could just put an option named "increase highlights HDR strength". :)
And use internally a hard coded value of 90% of unclipped "highlights" pixels.

As for the problem number 2 from Madshi, I would suggest an option called:

Clip highlight strength /HDR Strength:
- none (0%)
- low (10%)
- medium (25%)
- high (50%)
- Full (100%)
- User value: (open box where value can be set between 0 and 100%)


Color shift issue with Madvr HDR to SDR mapping WITHOUT 3DLUT? Solved with 3DLUT?
Quote:

Originally Posted by Soulnight (Post 55501108)
:eek::rolleyes::)
Ok. I thought you had already said in the past that it was DCI D65, and so I calibrated my projector Epson EH-LS10000 with really excellent 2DLUT tracking.
However, yesterday I generated a 3DLUT SDR gamma 2.2 REC2020 to be even more precise. To my surprise, using this display already calibrated in DCI-P3 delivers a picture slightly warmer than the 3DLUT, pushing the red and green slightly more. Therefore I concluded that somehow I had been mistaken and you meant DCI-P3 D63, which would explain what I observe.






Discussion in the Projector Mini Shootout thread

https://www.avsforum.com/forum/24-di...l#post55607794

Quote:

Originally Posted by Zombie10K
I recently started experimenting with MadVR and the tone mapping on my HTPC (7700K + GTX 1080ti) and seeing a similar color shift that @Javs posted in the JVC thread. Is the MadVR thread on Doom9 the main area to post on this topic? Thx!!

Quote:

Originally Posted by Madshi
Nobody in the doom9 thread has complained about color shifts yet. So maybe AVSForum is a better place to discuss it, since there are now seemingly two users (Javs and you) who see a problem with madVR's tone mapping. However, it doesn't make sense to discuss this in different AVSForum threads. Not sure, should we create a new thread, or pick one of the existing threads to discuss it?

Quote:

Originally Posted by Madshi
Some things to think about:

1) The only way to truly know how the HDR Blu-Ray is supposed to look is to play it on a true 10,000 Nits BT.2020 display. (Ok, a 4,000 Nits DCI-P3 display would also do, if the combination of player and display is clever enough to *clip* both luminance and colors, because tone mapping is not necessary in that case, since the movie's actual data fits into 4,000 Nits DCI-P3.)

2) Let's say you have a highly saturated red color which the HDR Blu-Ray has encoded with 4,000 Nits. And your display actually *can* do 4,000 Nits. No problem, right? Actually yes, BIG problem, because the display's peak Nits capability is for white, not for red. So what should a tone mapping algorithm do now? Should it make the pixel white? It could achieve the wanted 4,000 Nits, but the pixel's color/saturation would be completely lost. Or should the tone mapping maintain the full saturation/color, and lose all the Nits it can't handle? Then a significant amount of highlight punch & detail would get lost. So what should we do? In madVR you can choose. See the option "fix too bright & saturated pixels by".
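The tradeoff in point 2) can be illustrated with a toy per-pixel sketch. BT.2020 luma weights are assumed, and `fix_pixel` with its two mode names is made up for illustration; it shows only the two extremes, not madVR's actual blend options:

```python
# BT.2020 luma weights; rgb is display-linear, (1,1,1) = display peak white.
KR, KG, KB = 0.2627, 0.6780, 0.0593

def luminance(rgb):
    r, g, b = rgb
    return KR * r + KG * g + KB * b

def fix_pixel(rgb, mode):
    """Two extremes for a pixel brighter/more saturated than the display
    can show. 'dim': keep chromaticity, lose nits.
    'desaturate': keep nits, lose color."""
    peak = max(rgb)
    if peak <= 1.0:
        return list(rgb)                    # displayable as-is
    if mode == "dim":
        return [c / peak for c in rgb]      # scale down into range
    y = min(luminance(rgb), 1.0)            # target luminance, capped
    # blend toward gray of luminance y until every channel fits in [0, 1]
    s = min(((1.0 - y) / (c - y) for c in rgb if c > y), default=0.0)
    return [y + s * (c - y) for c in rgb]
```

For a pure red encoded at the display's peak luminance, "dim" keeps the red but reaches only about 26% of the wanted nits, while "desaturate" reaches the nits but turns the pixel nearly white; madVR's option lets you pick a compromise between these extremes.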

3) Let's say the studio uses a sub-optimal tone mapping algorithm for their 1080p Blu-Ray. Or maybe they use a custom tone mapping algorithm to achieve a specific look, or maybe (for high-profile titles) they even tune it for each scene. How then can you use a 1080p Blu-Ray as a reference for how a tone-mapped HDR Blu-Ray should look?

4) Every CE manufacturer has their own tone mapping algorithm, and they all look different. So who's right and who's wrong? Who can be accepted as reference? And all the others are then judged to have color shifts?

All that said, if you're not completely happy with madVR's tone mapping results, here are a few things you could try:

A) Try different values for the "fix too bright & saturated pixels by" option. See point 2) above. There's no right or wrong setting here, unfortunately. In some scenes one setting looks better, in other scenes another.

B) You could try to turn "preserve hue" on/off in madVR, or switch between "low" and "high" quality. FWIW, "high" quality is doing the tone mapping in ICtCp, which is Dolby's preferred color space for tone mapping.

C) You can use DisplayCAL to create 3DLUTs which include HDR -> SDR conversion (tone mapping). The 3DLUTs are calculated offline, by also using the ArgyllCMS framework, IIRC, so it should be very high quality. Maybe you prefer it over what madVR's pixel shader math does? Here's a test 3DLUT you can try, which doesn't do any actual display calibration, but just does tone mapping, nothing else. Tone mapping is done to 400 Nits and BT.709 gamut by this test 3DLUT:

http://madshi.net/DisplayCal400Nits709.rar

You can enter this in the BT.2020 slot when switching madVR to "convert HDR content to SDR by using an external 3DLUT". Of course you can create any tone mapping 3DLUT you want (different nits values and gamuts) with DisplayCAL.

Quote:

Originally Posted by Javs
Thanks Madshi. I will respond to this in some more detail today and provide images to show my results clearly. Just to nip something in the bud right now: yes, I have a 1400-nit-capable HDR TV to use alongside my JVC. They both display Mad Max in true HDR mode in an extremely comparable manner regarding highlights, and the colour of those highlights.

Ignoring the white highlights in Mad Max specifically for a moment, it also appears when I view a colour clipping test pattern from the Masciola suite - the colours go from saturated at the low end all the way to white at the high end, when they should stay a constant colour. I believe these two things are related. I will post a photo of this shortly.

I have also tried a bunch of the preserve hue combinations. I will post the differences soon. I was able to get it very close. But the colour of the highlights seemed to be compromised.

There is also a general colour shift in the image to a different hue/saturation overall. It's slight, but it was very obvious in the Mad Max shot since it was a strong red. I saw the exact same colour shift on my Samsung HDR TV as on the JVC.

Both are fairly well calibrated at least to a comparable state, the JVC is very well calibrated. So comparing the content on my HDR curves to the tone mapping is showing a shift in colour on both of my displays.

I have set madVR to say the display is calibrated to BT2020 and gamma 2.4; I am assuming that's the correct thing to do when I have a display I know has been calibrated to Rec2020, such as my JVC? I am totally hoping this part is user error, as there seem to be a few things to set up here.

Thanks, I am sure we can get to the bottom of this and I can kiss my curves goodbye.





Discussion in the JVC thread


https://www.avsforum.com/forum/24-di...l#post55542012
Quote:

Originally Posted by Javs (Post 55542012)
Stanger, would you share the settings you are using for highlight information in the HDR to SDR conversion settings?

Have you had a look and compared what it's doing to highlights and colours vs the normal HDR curves?

I have found the MadVR conversion to actually negatively affect the highlight information and colour overall when it's doing tone mapping, and the information cannot be restored to a level that matches any of the custom HDR curves.

To not have compromised highlights you MUST use the Restore Hue settings, and there is no combination of settings there that truly restores the correct highlight gradation and colour to the area that it affects. I tested this at length and took quick photos of it too; it gets really close, but it's not close enough. You can also see what it's doing to the image if you look at HDR colour clipping test patterns, which illustrate what happens in brighter highlights that contain colour information.

The inconsistencies are very visible in Mad Max.

There is also a notable and very clear colour shift and oversaturation overall when using it vs the calibrated BT2020 HDR curves. And yes, before you ask, I am most certainly telling MadVR that I have a calibrated BT2020 display when I am using the conversion. Yet there is still a colour shift, notably red, which is quite obvious.

The only way I could see around this would be to create a 3D LUT using the HDR to SDR conversion, measuring the Rec2020 primaries while the conversion is in place, in order to measure and correct any colour shift that is occurring.

Just quickly, this is what it looks like:

Normal HDR Curves:

https://i.imgur.com/jq9fxFY.jpg

HDR to SDR Conversion with BT2020 calibration selected at various nit level outputs (All other calibration settings including Rec709 and P3 still produce the wrong colours)

https://i.imgur.com/RFG0njF.jpg

https://i.imgur.com/ZqRUYTE.jpg

And to see what it does to test patterns in the hue setting mode that gets highlights looking most correct:

https://i.imgur.com/Yz9qWFK.jpg

When it should look closer to this:

https://i.imgur.com/r8cgNNO.jpg

Current MadVR Settings, tried all of them:

https://i.imgur.com/KgaDuXE.png

Tried every combination of settings on this page.

https://i.imgur.com/R65Spiz.png


Thoughts?

p.s. If anyone such as Manni jumps in here to tell me I am incompetent and I am doing it wrong, perhaps take a moment to think about posting a useful comment that actually helps solve this issue rather than going nowhere.

https://www.avsforum.com/forum/24-di...l#post55609904

Quote:

Originally Posted by Manni01 (Post 55609904)
I still haven't had a chance to look at the Radiance or the Oppo, so I can't comment on that, but I have since seen issues in MadVR and I've asked Madshi to make a few changes so that I have a chance to identify exactly what the problem is, i.e. whether it's calibration related or MadVR related. I've also had issues selecting calibrations with MadVR, so for now I am back to custom curves.

For me the Vertex switching automatically between 2-3 curves according to MaxCLL/MaxBrightness is still a better solution, but I hope that Madshi will be able to make the requested changes so that I can get back to diagnosing the HDR to SDR conversion in MadVR more precisely and we can get a fully automated solution with MadVR.

Sure, I did write that at the time, but since then I have also posted that I was experiencing issues and that I needed more time (and changes in MadVR to test the origin of the issues properly).

Ideally, especially until the external command works reliably, we would need:

- MadVR/GPU/OS reporting SDR BT2020 in the HDMI stream, so that the Vertex can select the correct calibration automatically. I know that you can't do this without a custom API but given the current state of the external commands and also the fact that many of us don't have the HTPC as our only source, we will need this at some point.
- MadVR to display the active 3D LUT to know exactly what MadVR is doing behind the scenes when doing the conversion.

I am seeing various issues, especially with highlight compression, and as far as I can see MadVR isn't as adaptive as I thought it was initially. But before reporting anything and wasting your time, I'd like to rule out calibration issues, so I'll resume my tests as soon as we get the active LUT reporting in MadVR. I also had issues with some of the 3D LUTs created with Calman (weird posterization), so I need to investigate that as well.

At the moment, it looks like MadVR doesn't adapt the curve to the reported MaxCLL, so it works great at one end of the spectrum or the other, but it doesn't work as well for all titles if you want it to improve the low end. Similar to the Oppo, apparently.

It would be great to have a way to test MaxCLL/MaxBrightness in profiles, or at least one threshold (possibly more) in the HDR to SDR conversion settings to specify different peakY values depending on content.

The big plus of MadVR (and something that the Radiance can do but not the Oppo) is its ability to provide perfect calibration with its 3D LUTs. Unless/until I can get this to work as it should, the custom curve(s) remain a better option, at least here.

Quote:

Originally Posted by madshi (Post 55609936)
Why would I do that, when I can actually measure the brightest pixel in each frame myself and adjust the tone mapping curve to that (which is what I do)?

Quote:

Originally Posted by Javs (Post 55609972)
I have also discovered MaxCLL is not always accurate even when it is reported, so I agree that's a far superior method. Spider-Man: Homecoming is an example, containing content nearly 4x higher than its reported MaxCLL.

Quote:

Originally Posted by Manni01 (Post 55610016)
I also thought that you would be doing this when I initially tested, but I can't get MadVR to improve the low end for low nits titles AND not crush the highlights for high nits titles with a single peakY value.

If I use 200-300, I get a better low end for 1100nits titles, but the compression in the highlights is unacceptable for titles with content going significantly above 1100nits (Mad Max, Pacific Rim, The Shallows, etc).

If I use 400 or more, I get better highlights resolution for high nits titles but there is no improvement in the low end of 1000nits titles compared to a good 4000nits custom curve.

Again, I need to do more tests, I was only correcting your statement as you were only quoting my initial reaction and not the reservations I've expressed since.

I also have issues with calibration so I'd like to rule these out before taking your time with reports.

I have very little time at the moment, so until I can do proper tests with enough information from MadVR about what it's doing I've parked it.



Quote:

Originally Posted by madshi (Post 55612140)
madVR's tone mapping works like this: if you tell madVR the proper peak Nits value that you measured for your display, all the pixels in the lower Nits range (ideally 0-100 Nits) are displayed absolutely perfectly, in the same way a true 10,000 Nits display would show them. Tone mapping only starts somewhere above this lower Nits range. However, we can't simply jump abruptly from no compression to strong compression, so the tone mapping curve needs to start smoothly, otherwise the image would get a somewhat unnatural clipped look. Practically, if you set madVR's peak luminance value to 200 Nits, the tone mapping curve starts compressing pixels at 23 Nits. If you set madVR's peak luminance value to 400 Nits, the tone mapping curve starts compressing at 75 Nits.

Of course projectors are rather dim compared to flat panel displays. So if you actually tell madVR the proper Nits value you measured, tone mapping will be very heavy-handed, and you'll lose a lot of highlight detail. So naturally, you'll want to pretend that your projector has a higher peak luminance capability than it really has. If you do that, tone mapping will relax a little, but the overall image (including the low end range!) will be darker than the UHD Blu-Ray disc encoding asks for.

This probably explains why you're not happy with either the 200 Nits or the 400 Nits setting? I suppose what you really want/need is some tricky tone mapping curve which doesn't compress highlights as much as it mathematically should, while at the same time making sure that the lower end is reproduced exactly as the UHD disc asks for? We're approaching the impossible here: since your projector is so dim, a lot of compression is needed. So where should we compress? If you enter the measured Nits value, the top end is heavily compressed. You don't like that. If you enter a much higher than measured Nits value, the bottom end becomes too dark (= compressed, in a sense). You don't like that either. So if you don't want compression at the top end or at the bottom end, where should we compress instead? I suppose we could try making the tone mapping curve flatter, so that it compresses more strongly in the mid range and less strongly in the top range, but doing that would probably take some life out of the picture...
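For reference, the behaviour madshi describes matches the BT.2390 EETF, sketched below from the published report (this is not madVR's shader code, and the black-level-lift step is omitted). The knee start KS = 1.5*maxLum - 0.5 in PQ space lands near 23 nits for a 200-nit target, in line with the figures quoted above:

```python
# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    """Linear luminance in cd/m2 -> PQ signal in [0, 1]."""
    y = (max(nits, 0.0) / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def pq_decode(e):
    """PQ signal in [0, 1] -> linear luminance in cd/m2."""
    p = e ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def bt2390_eetf(nits_in, target_peak_nits):
    """BT.2390 highlight roll-off: pixels below the knee pass through
    untouched; above it a Hermite spline compresses toward the target."""
    max_lum = pq_encode(target_peak_nits)
    ks = 1.5 * max_lum - 0.5          # knee start, in PQ space
    e1 = pq_encode(nits_in)
    if e1 < ks:
        return nits_in                # faithful low/mid range
    t = (e1 - ks) / (1 - ks)
    e2 = ((2 * t**3 - 3 * t**2 + 1) * ks
          + (t**3 - 2 * t**2 + t) * (1 - ks)
          + (-2 * t**3 + 3 * t**2) * max_lum)
    return pq_decode(e2)
```

This makes the tension concrete: the knee, and thus how much of the low end stays untouched, is fully determined by the target peak you enter, so raising the target to save highlights also moves where compression starts.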

Anyway, here's a quick test suggestion: madVR's "contrast" slider is a linear light S curve gamma modification. So you could try to enable gamma processing, use 400 Nits to get less compression, and then use the contrast slider to "help" the lower end. Maybe that's more to your liking? Doing it this way will move away from the scientific approach of madVR's tone mapping curve, though. But I suppose lying to madVR by entering a higher than measured peak Nits value already gets rid of the science, anyway...

Quote:

Originally Posted by Manni01 (Post 55612250)
I understand all this, and this is why I'm saying that MadVR isn't as adaptive as it could/should be to get best results depending on the content for each title, at least with our projectors.

My display has 120nits PeakY (actual) in low lamp with 1750 hours on the bulb. I quickly saw that using the actual peakY was compressing too much, which is when I started raising the value. But then it seems impossible to find a single value that provides a bright enough picture for 1000-1100nits titles (in a way that addresses the issue pointed out by Kris Deering in titles such as The Revenant) and still resolves enough highlights that you don't clip visible information in titles with content going significantly above 1100nits.

What I can do with custom curves is use the equivalent of a 200nits value in MadVR for 1000-1100nits titles (titles with no content above 1100nits), and the equivalent of a 400nits+ value in MadVR for 4000nits titles (titles with content above 1100nits). The Vertex switches automatically between these two curves (in fact I'm playing with three curves at the moment, although I think two are probably enough).

My initial assumption was that MadVR would look at the content, and adjust the room left for the highlights according to said content. We don't need as much room for highlights with content under 1100nits, so we can ramp up the brightness of the curve to improve the low end and keep less room for highlights.

I have no interest in "tricks". The custom curves are mostly based on BT2390; they are just not adaptive the way MadVR could/should be if it looked at MaxCLL and adjusted the value for PeakY accordingly, as it seems the current adaptive approach based on frame analysis isn't able to address the above issue.

I guess you can't do this on a frame-by-frame basis without causing significant jumps in picture brightness, because the content wasn't mastered that way (it's not HDR10+ or DV). But you can get a fairly accurate estimate of the actual MaxCLL: first use MaxBrightness to identify whether the title has valid metadata, and whether it could have content above 1100nits; then use MaxCLL, again testing whether the metadata is valid, and if it is, take it as an indication of the content. If there is any uncertainty about the accuracy of the metadata, I fall back on 4000nits to be on the safe side.

This is the latest automatic algo I've asked HD Fury to implement in the Vertex for the V2.1 of the JVC Macro feature (HDR10-Main is a curve clipping at 4000nits, HDR10-Optional is a curve clipping at 1100nits with a higher diffuse white):

IF HDR10_Max_Brightness EQUALS 0 THEN HDR10-Main ELSE ** invalid metadata so select 4000nits curve
IF HDR10_Max_Brightness BELOW OR EQUALS 1100 THEN HDR10-Optional ELSE ** in that case we know the content can't be above 1100nits, so we select the 1100nits curve even if the maxCLL is invalid (it often is equal to 0)
IF HDR10_Max_CLL EQUALS 0 THEN HDR10-Main ELSE ** (title is a 4000nits curve with invalid MaxCLL, so I select the 4000nits curve to be safe)
IF HDR10_Max_CLL BELOW OR EQUALS 1100 THEN HDR10-Optional ELSE HDR10-Main ** (otherwise I use MaxCLL to select the 1100nits or 4000nits curve).
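The four Vertex rules above translate directly into code (a sketch; the function name is hypothetical, the curve names are the labels used in the post, and 0 is treated as invalid metadata):

```python
def select_curve(max_brightness, max_cll):
    """Pick an HDR10 curve from the title's metadata, per the four
    Vertex rules above. HDR10-Main clips at 4000nits, HDR10-Optional
    at 1100nits."""
    if max_brightness == 0:
        return "HDR10-Main"       # invalid metadata: safe 4000nits curve
    if max_brightness <= 1100:
        return "HDR10-Optional"   # content can't exceed 1100nits
    if max_cll == 0:
        return "HDR10-Main"       # 4000nits master with invalid MaxCLL
    if max_cll <= 1100:
        return "HDR10-Optional"   # MaxCLL valid and low
    return "HDR10-Main"
```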

With the custom tabs, I can use the JVC max of three custom curves, and I select (at the moment) a curve that clips at 1100nits, 2200nits or 4000nits according to MaxCLL, following the algo above but testing MaxCLL for 2200nits as well. If you prefer, you could do 500nits, 1200nits and 4000nits, as the MaxCLL value has little bearing on the MaxBrightness value, but with the Vertex the algo would be less accurate, as there is a limited number of lines and conditions available in custom mode, so I prefer to stick to values used in the mastering metadata info, even if it's not 100% accurate. I think the algo above will be a best-case scenario 99% of the time.

This is why I'm suggesting adding ways to test for MaxBrightness and MaxCLL in profiles in MadVR, or preferably ways to specify at least one threshold value (preferably two) so that MadVR can adjust the internal value of PeakY according to the actual MaxCLL of each title (using the algo above or similar), or let us specify the PeakY value we want to apply according to content if you can't find an algo to extrapolate this for projectors from the actual peakY (which would be, of course, the ideal way to implement this).

For example, if the algo above (or similar) was implemented in MadVR and we had the possibility to specify a PeakY value for 1100nits and 4000nits content, I would specify, for my actual peakY of 120, a value of 200nits for 1100nits content and 450 (possibly more) for titles with content up to 4000nits (or above). This would give a similar result to what I can get at the moment with custom curves, using the Vertex to select the correct calibration automatically according to content. Ideally, we would want to support a few more MaxCLL thresholds, such as 500nits and 2200nits.

Again, I've parked my tests in MadVR at the moment due to lack of time. MadVR works great in passthrough mode with 2/3 custom curves and the Vertex switching automatically between the curves according to the actual content, not only with the HTPC as a source but with any other source and any other calibration (3D, x.v.color, HLG BT2020, HLG REC709, SDR film, SDR TV, many of which are not supported by MadVR/LAV). I want to first establish that the calibration is correct and assess whether some of the issues I'm seeing are calibration related or MadVR related, before fine-tuning the peakY value I need according to content and providing you with a detailed report. The sooner you find the time to implement this, the sooner I can resume my tests and continue this discussion with you, probably in the doom9 thread rather than here, as most of it isn't JVC specific (and probably too technical and boring for most of the readers of this thread).

I ran a full autocal of my PJ a couple of days ago to calibrate my SDR Rec-709, SDR BT2020 and HDR10 baselines, and I've created new 3D LUTs (a Rec-709 profile which I also use to create PAL and NTSC LUTs, and an SDR BT2020 profile which I also use to create a DCI-P3 LUT). I'm only waiting for MadVR to report the active 3D LUT in the OSD to resume my testing.

Madshi I would appreciate if we don't start a long discussion here and now. As I said, I need to resume testing once I've ruled out any potential issue with calibration in MadVR. Once I'm 100% sure that the correct 3D LUT is applied, I'll provide more feedback and we can move on. :)

Quote:

Originally Posted by madshi (Post 55612278)
IMHO the best (most scientific) approach would be to tell madVR the true measurement of your display's peak Y value, and then leave the rest in madVR's hands. That's the only way madVR even has a chance to reproduce the important low end (e.g. 0-25 Nits) faithfully. If that doesn't produce good enough results right now, let's work together on improving that.

BTW, yes, adjusting tone mapping to the measured peakY frame by frame produces flickering, that's why I'm using a rolling average.
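The rolling average madshi mentions can be sketched like this (the class, its name, and the window length are assumptions for illustration; madVR's actual smoothing is not documented here):

```python
from collections import deque

class PeakTracker:
    """Smooth the per-frame measured peak nits so the tone mapping
    curve doesn't jump (and visibly flicker) from frame to frame."""
    def __init__(self, window=48):        # ~2 s at 24 fps (assumed)
        self.frames = deque(maxlen=window)

    def update(self, frame_peak_nits):
        self.frames.append(frame_peak_nits)
        # tone-map against the smoothed peak, not this frame's peak
        return sum(self.frames) / len(self.frames)
```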

Quote:

Originally Posted by Manni01 (Post 55612314)
Agreed, but that produces unacceptable clipping even for 1100nits titles, so not an option in practice.

We can't be 100% scientific with projectors, there are too many variables and no satisfying standard. :)

Looking forward to doing more testing when you have the time to implement the active 3D LUT report in the OSD.

If there is any way you could get your hands on a custom API allowing you to get the GPU/OS to report the correct content info in the HDMI stream (especially SDR Rec-709 or SDR BT2020), this would be very useful to automate the calibration changes so that the optimal baseline calibration is always selected (SDR Rec-709 or SDR BT2020). Even if you manage to fix the external command in MadVR, we will have to use IP control (this is what Arve's tool uses), which means that we won't be able to use iRule, Roomie and other apps also using IP to control the PJ, as it will conflict with them. The Vertex uses RS-232, so it works perfectly in parallel with iRule, Roomie etc. I know it's a niche problem, but for us projector owners with dedicated rooms it's a very important point.

Quote:

Originally Posted by madshi (Post 55612474)
We haven't even seriously tried yet! I've only implemented one general scientific tone mapping curve, without any real tweaks/modifications aimed at projectors yet. Please don't give up so quickly! Let's try to achieve the best scientific solution first, and only after we've seriously tried that and failed, it would be time to admit defeat and look for alternatives.

Quote:

Originally Posted by Manni01 (Post 55612546)
Not giving up at all, but unless we can input the actual peakY and dynamically adjust the internal PeakY according to content, I don't think there will be a satisfying result for all titles. The custom curves I'm using are 100% scientific, they do follow BT2390; it's just that they are optimized according to content in order to get the best of both worlds. They are not "pie in the sky, let's tune to taste so that it looks like I want" curves. Not at all. My understanding is that Arve is using a BT2390 formula with a few more tweaks, but you can decide where to hard clip and use the available range in an optimal way.

EDIT: I thought there was a parameter for max_brightness of the content (distinct from the max_brightness of the display) in BT2390. Did you try feeding it with MaxCLL (once you have ascertained MaxCLL was valid, as per the algo above or similar)?
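For reference, the BT2390 EETF the posts above keep referring to does have exactly these knobs: it works in normalized PQ space, passes the signal through unchanged below a knee, and rolls highlights off with a Hermite spline so the source peak lands on the display peak. A minimal sketch (function and parameter names are mine, not madVR's):

```python
def bt2390_eetf(e1: float, max_lum: float) -> float:
    """BT.2390 highlight roll-off in normalized PQ space.

    e1: input signal in 0..1, where 1.0 is the source (mastering) peak white.
    max_lum: target display peak expressed on the same normalized PQ scale.
    """
    ks = 1.5 * max_lum - 0.5  # knee start: below this, no compression at all
    if e1 < ks:
        return e1
    # Hermite spline: smooth roll-off that maps e1 = 1.0 exactly onto max_lum
    t = (e1 - ks) / (1 - ks)
    return ((2 * t**3 - 3 * t**2 + 1) * ks
            + (t**3 - 2 * t**2 + t) * (1 - ks)
            + (-2 * t**3 + 3 * t**2) * max_lum)
```

Lowering max_lum moves the knee down, which is exactly the lever being discussed for content that shoots above the display's range.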

Quote:

Originally Posted by madshi (Post 55612618)
If madVR has all the necessary information (truthfully measured peakY of your display, measured peakY of each video frame), then why should it not be possible for madVR to achieve a satisfying result for all titles? madVR already has all that info. Probably just some tweaks to the tone mapping parameters are needed, that's all.

I'd say, once a new madVR build is out and you find more time for testing, maybe you can make a couple of very small movie samples available to me that showcase what is not to like with the current tone mapping behaviour, if you input the true measured peak luminance of 120nits, and then I can try tweaking the tone mapping behaviour accordingly. Let's see how far that takes us.

Quote:

Originally Posted by Manni01 (Post 55612686)
Because when you truthfully measure the peakY of each frame you can't adjust as much as you should without producing flicker (too wide variations in brightness) as you said yourself. You have no way when you start playing the file to know whether the content will shoot above 1100nits or not.

Did you see my edit above? I thought ST2390 had a parameter in the formula for max_brightness of the content. That might be what's missing? Could you share the parameters you have implemented? It might be useful to make some of them accessible in the next build, at least during testing.

I don't think I'll be able to make short clips but if you send me the list of the titles you have I can give you examples from that list with timecodes/chapters. The Revenant, Mad Max, Pacific Rim, Deadpool, The Shallows, Batman vs Superman should be enough.

Happy to start with actual peakY, I agree that it would be the ideal way to proceed. But I think you will need a checkbox for projectors (in dedicated rooms): for displays with ambient light and a peakY of at least 600nits it makes sense to follow the curve up to 100nits, because in that case reference white = 100nits if you want to be correct. For projectors in a dedicated room, however, reference white = 50nits, not 100nits, so following the absolute curve religiously isn't accurate or desirable. BT2390 accounts for that in part, but not completely as far as I can see. Being able to specify (or ideally extract from the metadata) the max brightness of the content and the value for reference white (or a brightness factor) would help a lot to get better results with projectors specifically.

Looking forward to testing the next build :)

Quote:

Originally Posted by madshi (Post 55612756)
And as I said, I'm using a rolling average which nicely takes care of flickering. I see no use in maxCLL, because not all titles have it, because it might be incorrect, because it's probably static for the whole movie, and because I can measure myself frame by frame, which should be greatly superior. I should easily be able to provide all the benefits of HDR10+ with current HDR10 content.
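madshi's rolling average isn't publicly documented; as a rough illustration of how per-frame peak measurement can drive the tone mapping target without visible pumping, here is a simple exponential smoother (the alpha value is an arbitrary assumption):

```python
class PeakTracker:
    """Smooth measured per-frame peaks to avoid visible brightness pumping.

    alpha is an arbitrary assumption; madVR's real windowing is not public.
    """

    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha
        self.smoothed = None

    def update(self, frame_peak_nits: float) -> float:
        if self.smoothed is None:
            self.smoothed = frame_peak_nits  # first frame: no history yet
        else:
            self.smoothed += self.alpha * (frame_peak_nits - self.smoothed)
        # never report less than the current frame actually contains,
        # otherwise this frame's own highlights would clip
        return max(self.smoothed, frame_peak_nits)
```

Feeding the tone mapper the smoothed value gives the "HDR10+ today" behaviour described above: a lone 4000 nits flash doesn't permanently darken a movie that otherwise lives around 600 nits.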

Quote:

Originally Posted by stanger89 (Post 55613294)
One thing to consider is that we probably don't want to "faithfully" reproduce anything on a projector. SDR is assumed to be 100 nits; however, with projectors we generally calibrate it to half that. The best luck we've had with HDR is to calibrate to some factor/fraction of what ST.2084 actually calls for. Perhaps what we need is to be able to tell madVR our true peakY and also some scale factor to apply.

The "problem" I noticed with low (more "accurate") PeakY values in madVR is that the overall image is way, way, way too bright. I started out with peakY set to 200, while my projector can really only do about 100. The result is that the mid tones seemed probably twice as bright as an SDR image, when they're really supposed to be about the same. I can't remember if I've got it at 300 or 400 now, but HDR still seems brighter than SDR, and I'm not talking highlights, but the overall tone.

Quote:

Originally Posted by madshi (Post 55613446)
Fair enough. I was planning to add some more controls to allow users to "fine tune" the tone mapping. Some sort of "scale factor" would be an option, I guess.

Quote:

Originally Posted by stanger89 (Post 55613520)
Perhaps there's a better way, I'm not really sure what Lumagen or Oppo are actually doing, they seem to both get rave reviews, but it seems like we should acknowledge that with projectors we don't need to try to retain (for example) medium gray (18%/18 nits SDR and 18 nits on a flat panel), when we'd normally calibrate a projector to about half that.

Quote:

Originally Posted by madshi (Post 55613562)
Well, I do like to do things scientifically (if possible). So if the UHD Blu-Ray asks for a pixel to be displayed with 20 Nits, I would like to achieve that, and madVR actually does. However, as you say, due to projectors being rather dim, and because they're usually used without any ambient light, the rules are somewhat different, so some sort of scaling factor does make sense. What I don't want to do is to throw science completely out the window and just hand draw some curve which seems pleasing for some titles.

Quote:

Originally Posted by nathan_h (Post 55613750)
That's a fair point. There are a few scientific hints for what a projector should do in the SDR realm (ie, 100 ire is defined as half as bright on a projector as a flat panel) and that points at a standard for adapting definitions created for HDR on flat panels to HDR on a projector (eg, that diffuse white should be half as many measured FTL on a projector in a dark room to look "equivalent" to a flat panel in a living room/lounge).

Quote:

Originally Posted by madshi (Post 55614402)
Ok, I suppose I could add a "diffuse white target" option which defaults to "flat panel -> 100 nits". Would it make sense to design the option like that?

If so, and if you then change that option to "projector -> 50 nits", I should probably simply half all pixel nits values? E.g. if then the UHD Blu-Ray asks for a pixel to be 30 nits bright, I should reproduce it with 15 nits instead? Or would the "scaling factor" be non-linear in some way?


Haha, yeah, I'm mad(shi). That's why all my products start with "mad"... ;)


Quote:

Originally Posted by stanger89 (Post 55614668)
Oh, agreed, I don't like just going by hand/eye, if only because it invariably leads to not working "all the time". The closest thing to science we have to go on with HDR for projectors is Dolby Cinema/Dolby Vision, which has a peakY of 106 nits, basically equal to the diffuse white point of consumer HDR, while still including room for highlights.



I think that's a good option, that's actually how I use Arve's tool. I set the max brightness to my measured peak white, and then the reference white to something less than that.



That seems like a good starting point, except, you'd basically end up with exactly what Arve's tool gives us at low brightness levels, and this is what people are "complaining" about.

Quote:

Originally Posted by madshi (Post 55614820)
Is "diffuse white" or "reference white" a better name?


What complaints are there exactly? FWIW, the complaints could have many possible reasons. E.g. the gamma curve not having enough control/correction points, or the gamma processing having too low precision/bitdepth, or tone mapping screwing with hue or with saturation. None of these issues should apply to madVR (as far as I can say).

Some people may think that tone mapping just consists of applying some compression curve to the Y channel. Which is not true (if you want good quality).

Well, anyway, I'll add this option, and let's see what effect it will bring.

Quote:

Originally Posted by Dominic Chan (Post 55614952)
In the latest version of HCFR, diffuse white corresponds to the luminance on the ST2084 curve at 50% stimulus, which is 92~94 nits depending on the mastering display.
Initially HCFR scaled everything down linearly for a lower diffuse white, but that shifts the clipping point up for a fixed luminance. In the latest version (v3.5), the roll-off point is kept fixed while the diffuse white is scaled.
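The 92~94 nits figure falls straight out of the ST 2084 (PQ) EOTF; with the constants from the spec it can be checked in a few lines:

```python
def pq_eotf(e: float) -> float:
    """SMPTE ST 2084 (PQ) EOTF: normalized signal 0..1 -> luminance in nits."""
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    p = e ** (1 / m2)
    y = (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)
    return 10000.0 * y
```

pq_eotf(0.5) evaluates to roughly 92 nits, matching the diffuse white figure above; pq_eotf(1.0) is the 10,000 nits ceiling of the format.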

Quote:

Originally Posted by madshi (Post 55615042)
I'm not sure I understand the comment about the clipping point for a fixed luminance. With "roll off point" you mean the compression curve "knee start", meaning the nits value at which the compression curve starts to do work? So you're saying that if the knee start is at e.g. 20 Nits with a 100 Nits diffuse white, then the knee still stays at 20 Nits, even if diffuse white is set to 50 Nits instead?

Quote:

Originally Posted by Dominic Chan (Post 55616012)
The attached figure illustrates what I was referring to (no tone mapping assumed).
Assuming a display with peak luminance of 1000 nits, the ST2084 EOTF (yellow line) hard clips at 75% stimulus. If we use a multiplier of 2 for the projector with linear scaling, the target EOTF shifts down to the dashed white line. At the same time, the hard clipping point shifts to 83%, essentially "wasting" luminance at the upper end, as no stimulus above 75% will increase the output. A similar issue may exist when a tone mapping curve is applied, if the entire curve is scaled linearly.

Quote:

Originally Posted by madshi (Post 55616128)
My plan was to first convert the video to linear light (1.0 = 10,000 Nits), then simply multiply with e.g. 0.5 to account for projectors targeting a lower diffuse white nits value, and then apply tone mapping as usual, but in such a way that no luminance is wasted. So the brightest pixel in the frame (if it exceeds diffuse white) gets assigned 1.0 by tone mapping. Sounds ok to you?
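A sketch of that pipeline in the linear-light domain (the knee function here is a deliberately crude stand-in, not madVR's actual BT.2390-based curve):

```python
def map_frame(pixels_nits, display_peak_nits, diffuse_white_factor=0.5):
    """Sketch of the pipeline described above: scale linear light down for the
    projector's lower diffuse white, then compress only when the frame peak
    exceeds the display, so the brightest pixel lands exactly on the display
    peak. simple_knee() is a crude placeholder, not madVR's real curve."""
    scaled = [p * diffuse_white_factor for p in pixels_nits]
    frame_peak = max(scaled)
    if frame_peak <= display_peak_nits:
        return scaled  # frame fits the display: no tone mapping at all

    def simple_knee(x):
        # linear up to half the display peak, then a linear squeeze of the
        # remainder so that frame_peak maps exactly onto display_peak_nits
        knee = display_peak_nits / 2
        if x <= knee:
            return x
        return knee + (display_peak_nits - knee) * (x - knee) / (frame_peak - knee)

    return [simple_knee(x) for x in scaled]
```

Note that the passthrough branch is the "disable tone mapping entirely when the frame fits" behaviour madshi confirms a few posts further down.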

Quote:

Originally Posted by Dominic Chan (Post 55616586)
That sounds right.
I am not familiar with this approach. I didn't think the objective of tone mapping was to expand every frame to the peak luminance, even with a sliding average.
EDIT:
I believe the luminance at any input level should not be expanded beyond the "scaled PQ curve".
As an example, the luminance at 60% of the PQ curve is 240 nits. Assuming a multiplier of 2 is being used (diffuse white = 94/2 = 47 nits) for a display with 200 nits peak luminance, a 60% input should not be expanded beyond 240/2 = 120 nits. Put another way, the contrast ratio between any two points after tone mapping should not exceed the contrast ratio without tone mapping.

Quote:

Originally Posted by madshi (Post 55618876)
The goal of this logic is to already deliver today what HDR10+ plans to achieve in the future.


Yes, I detect if the frame's peak luminance (after multiplying with 0.5) is within the display capabilities. If it is, I completely disable tone mapping and simply display the content as is. Any luminance modifications are only done if the frame's peak luminance exceeds the display's capabilities.

Quote:

Originally Posted by Dominic Chan (Post 55619694)
I don't think this will work satisfactorily, unless I misunderstand what you're describing. Assuming the display's maximum luminance is 300 nits, and ignoring the multiplier:

- A frame containing 300 peak nits will be displayed as is;
- Another frame, containing areas of 600 nits and a 2000 nits peak, will be scaled down such that the peak becomes 300 nits or less (depending on the tone map), which means the 600 nits area will be displayed far dimmer than the other frame's 300 nits.

Quote:

Originally Posted by madshi (Post 55620146)
Well, considering that diffuse white is supposed to be about 100 nits, all we're talking about here is HDR specular highlights. It's not like large areas of the frame should have 600+ nits pixels. I'd expect there to be only few 600+ nits pixels which should be concentrated to small spots in the frame. Furthermore, the tone mapping compresses much stronger at the top than in the middle. So while in the 600 + 2000 nits frame, my algo would draw 600 nits pixels dimmer than in a max 600 nits frame, it should not be an extreme difference. Furthermore, the rolling average should prevent flickering if the 600 nits pixels stay constant and the 2000 nits pixels come and go.

The idea is that all pixels at or below diffuse white (which should really be the bulk of the frame's pixels) should stay the same regardless of the peak luminance of the frame, but if the frame's peak luminance differs, we can compress specular highlights more or less strongly to make the most out of the display's luminance capabilities.

If you think about it: If the whole movie has one sunlight scene with 4000 nits, but the rest of the movie is in the 600 nits range, do you really want the tone mapping to always reserve space for 4000 nits? That doesn't make too much sense in my book, and that's the whole reason why HDR10+ was created.

Quote:

Originally Posted by Dominic Chan (Post 55620870)
I suppose restricting the dynamic mapping to highlights will minimize the "pumping effect".
However, from what I've seen, many of the complaints about the custom curves are regarding "dark pictures", far more than lack of "sparkle" in specular highlights.
As a matter of fact, many Oppo users who rave about the bright pictures are using low lamp power with iris closed down, so obviously they are not looking at specular highlights.

Quote:

Originally Posted by nathan_h (Post 55622052)
Right, we need both:

1. Adaptive tone mapping to retain specular highlights without impacting the main content
2. Better APL handling, particularly when it comes to shadow detail and color rendering

and they may not be related much (except that the solution to each shouldn't hurt the other).





Here is an example of the best Arve Tool HDR curves: Manni / Javs:

Quote:

Originally Posted by Javs (Post 55624440)
Nice work on the v3 curves. Lots of options here, you can't go wrong with either yours or mine it seems.

Ours are actually very similar if looking at the 107nit curves, at least compared to my V1 curves, which had a shallow rolloff; I have since changed mine while looking at content clipping and ended up at my v2 curves.

Just for academic comparison.

Manni's are the light grey lines, mine are the darker black lines... As for the shadow detail and midrange of the curves, they are almost completely identical to each other until the highlight rolloff. Interesting.

Those of you using my curves already, looks like we have been looking at mostly the same image in regards to shadow detail and mid tones for some time now.

107Ynit / 4000nit Manni DVE / 4000nit Javs V2 curves:

https://i.imgur.com/AroMyRN.png

107Ynit / 1100nit Manni DVE / Javs 1200nit V2 Curves: this one is slightly different in the mid tones, it will be ever so slightly brighter. Manni's 1100nit curve is actually probably rolling off closer to 1500 nits.

https://i.imgur.com/KKkXmGf.png

Those of you that used to run my V1 curves: they had a much shallower roll off, let's look at the 4000nit one.

107nit 4000 Manni DVE curve / Javs 4000nit V1 Curves.

https://i.imgur.com/AQD0yg9.png

If we want to further improve the shadow detail in The Revenant from either Manni's or my curves, we would need to lift the curves in the first 20nits or so vs the rest... Only the fire and the brightest parts of the following image pass the ~300nit mark. 90% of this shot sits well under 100 nits.

https://i.imgur.com/Pi6vhIC.png


Soulnight 02-03-2018 01:29 AM

Manni great explanation on how HDR to SDR mapping actually works:

summary: it may be called SDR, but that does not mean it is not TRUE HDR ;-)

Quote:

Originally Posted by Manni01 (Post 55610366)
Correct, but just to clarify, there is no HDR flag. The content is still HDR, but none of the HDR metadata is sent to the projector, to prevent it from switching to its HDR mode/gamma automatically. The only thing that tells the display that it's dealing with HDR content is the HDR metadata. If you don't send it, the display thinks it's displaying SDR even if exactly the same content is sent.

I know you know the rest, but I'm going to try to explain for others.

What people don't understand is that when we say HDR metadata, it doesn't mean HDR content, or an HDR layer on top of an SDR layer. On UHD Bluray, there is no SDR layer. There is an HDR10 mandatory layer, and optionally a DV layer or soon an HDR10+ layer (let's ignore that for now). So you have an HDR10 mandatory layer, and HDR metadata that describes the way this HDR10 layer was created, and (theoretically) which kind of data it contains. It's metadata, not data. If you don't send the HDR metadata, you are still sending the full HDR10 layer, and it's that HDR layer that is displayed, either using a custom curve or gamma D (in our projectors). If you use an SDR calibration to display it, it won't be displayed properly, which proves that it's still HDR content.

When doing an HDR to SDR conversion, whether in the source or in the display, we are changing the location of the conversion, but it's not necessarily the same kind of conversion:

- When we use the UB900 HDR to SDR BT2020 conversion, the source is assuming an SDR display (i.e. a non-HDR-capable display), so it's tone mapping to about 100nits peakY. It is therefore NOT using all the potential brightness in the PJ, for those who can reach significantly above that. This means that although it's doing a decent job, it's NOT using the whole dynamic range of the display (the highlights are far more compressed). The advantage is better black levels and better native contrast (if we close the iris to get 100nits peakY), but the downside is less dynamic range. You are NOT watching HDR with this conversion, and it is INFERIOR to what a well-designed custom curve can provide, unless your display has significantly less than 100nits peak brightness.
- When we use MadVR or the Oppo new f/w or the Radiance Pro to do the HDR to SDR conversion, we are telling the source how bright the display can be, and the source is expecting the content to be displayed on an HDR-capable display, i.e. a display able to reach far more than 100nits (if we're not talking about projectors). So in that case, the HDR to SDR conversion, despite the fact that it's done in the source, is using the same range (whole native brightness) the display is capable of, which means that it's still displaying HDR content and it will look just as HDR (if well done) as it would if the display was in HDR mode. The downside is that if we use the iris fully open, the black floor goes up (and the native contrast goes down). But this is not an issue if you're willing to use the DI, as you then get a far better overall contrast, even if the black floor is slightly raised.

In both these cases, because the source is sending the content using a power gamma and not an ST2084 gamma, we have to use an SDR calibration, but the content displayed is still HDR!

The HUGE advantage for projectors of doing the HDR to SDR conversion in the source when the source is aware that the display is HDR capable, is that we can calibrate accurately and automatically to a known standard, in that case SDR BT2020 power gamma 2.4. This is NOT possible to do at this stage with HDR, simply because there is no HDR standard for projectors (bar ST2390 which isn't supported by all software yet).

So calibrating the display to SDR BT-2020 and asking the source to do the conversion while being aware that it can send "SDR" content that goes far above 100nits (i.e. HDR content) is what makes a whole difference. When the source is able to adapt the ST2390 curve to the content dynamically, the results are even better.

In other words, when we say "HDR", we mean content with a high dynamic range, i.e. using (usually) more than 0-100nits encoded with an ST-2084 gamma.

But when we say "SDR", we can mean two things: 1) legacy content encoded with a standard dynamic range and a power gamma or BT1886 gamma, or 2) HDR content converted to a power gamma 2.4, but still covering a high dynamic range (and ideally a WCG, hence BT2020).

It's important to make this distinction, otherwise it's not possible to understand the difference between the HDR to SDR conversion of the UB900 (type 1) and the HDR to SDR conversion from MadVR, new Oppo f/w or Radiance Pro (type 2).

I know it's enough to do most people's head in, but unfortunately that's what we have to deal with in these early days of HDR implementation. :)

And now I'm really out...
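To make the "type 2" path concrete, here is an end-to-end sketch of one PQ-coded value being re-encoded for an SDR-calibrated (power gamma 2.4) display whose 1.0 equals the projector's own peak. The tone-mapping step is stubbed as a hard clip, since the actual curves discussed in this thread (BT2390 variants) are more elaborate:

```python
def pq_to_gamma24(signal: float, display_peak_nits: float) -> float:
    """Re-encode one PQ-coded value as an "SDR" gamma-2.4 value whose 1.0
    equals the display's own peak. The full native brightness of the
    projector is used, which is why the result is still HDR in Manni's
    terminology, even though the display is calibrated as SDR."""
    # 1) PQ decode: normalized signal -> absolute nits (ST 2084 constants)
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    p = signal ** (1 / m2)
    nits = 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)
    # 2) tone map: stub, just hard clip at the display peak here;
    #    a real converter would apply a BT.2390-style roll-off instead
    nits = min(nits, display_peak_nits)
    # 3) encode relative to the display peak with power gamma 2.4
    return (nits / display_peak_nits) ** (1 / 2.4)
```

The "type 1" conversion differs only in step 3's assumption: it encodes relative to a fixed ~100 nits instead of the display's measured peak, which is why it wastes headroom on brighter projectors.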


Soulnight 02-03-2018 01:30 AM

****Reserved****

Soulnight 02-03-2018 01:34 PM

Preserve hue...
 
Today, I have played with all the possibilities of "preserve hue" and "fix too bright and saturated pixels":
https://i.imgur.com/R65Spiz.png

I also looked at the same pattern as Javs did, from the Ryan disc, and it looked the same, with washed out colors above your "display target nits":
https://i.imgur.com/Yz9qWFK.jpg

Any combination of preserve brightness or saturation resulted in almost the same results. Preserve brightness 100% desaturated the color completely above your target nits, while 100% saturation still got the color desaturated quite a bit compared to the one matching the display target nits.

BUT:
Deactivating "preserve hue" completely just fixed the issue for me, and the pattern was then saturated normally. I am still using the compress highlights from madVR though.

I then went through a few of my movies with 2 shortcuts:
1) 300nits with preserve hue 50% sat 50% brightness + compression
2) 300nits without preserve hue 50% sat 50% brightness + compression

For 99% of the pictures in the movie, they were identical.
But for a few chosen images with "bright and saturated color", the result was better and more saturated without the hue correction:

This demo shows this phenomenon very well:
http://4kmedia.org/sony-swordsmith-hdr-uhd-4k-demo/
With hue correction, you do not get the nice orange saturated melting metal and fire color.
Without hue correction: wonderful colors.

Also in Planet Earth 2: Jungle: when the luminescent mushroom grows. Way nicer saturated green without hue correction than with.
Also true for the bird (blue, green, red) preparing its show for his mate. The blue is way more intense without hue correction, and the blue of its legs also.

So, right now, I will continue to use the dynamic "compress highlights" but will stop using the "preserve hue".

I still wonder why, when you choose 100% saturation 0% brightness, it is still way desaturated above the chosen "this display target nits".

stef2 02-03-2018 01:51 PM

Thanks for the nice wrap up. I would really like madshi to come up with a solution designed for our projectors' capabilities. For now, I am not sure whether I prefer using the madVR pixel shader or Arve's tool custom curves to deal with HDR content. I will be following this thread, and I will also try the new Oppo firmware since it seems to perform so well.

Javs 02-03-2018 01:54 PM

Quote:

Originally Posted by Soulnight (Post 55627770)
Today, I have played with all the possibilities of "preserve hue" and "fix too bright and saturated pixels":
https://i.imgur.com/R65Spiz.png

I also looked at the same pattern as Javs did, from the Ryan disc, and it looked the same, with washed out colors above your "display target nits":
https://i.imgur.com/Yz9qWFK.jpg

Any combination of preserve brightness or saturation resulted in almost the same results. Preserve brightness 100% desaturated the color completely above your target nits, while 100% saturation still got the color desaturated quite a bit compared to the one matching the display target nits.

BUT:
Deactivating "preserve hue" completely just fixed the issue for me, and the pattern was then saturated normally. I am still using the compress highlights from madVR though.

I then went through a few of my movies with 2 shortcuts:
1) 300nits with preserve hue 50% sat 50% brightness + compression
2) 300nits without preserve hue 50% sat 50% brightness + compression

For 99% of the pictures in the movie, they were identical.
But for a few chosen images with "bright and saturated color", the result was better and more saturated without the hue correction:

This demo shows this phenomenon very well:
http://4kmedia.org/sony-swordsmith-hdr-uhd-4k-demo/
With hue correction, you do not get the nice orange saturated melting metal and fire color.
Without hue correction: wonderful colors.

Also in Planet Earth 2: Jungle: when the luminescent mushroom grows. Way nicer saturated green without hue correction than with.
Also true for the bird (blue, green, red) preparing its show for his mate. The blue is way more intense without hue correction, and the blue of its legs also.

So, right now, I will continue to use the dynamic "compress highlights" but will stop using the "preserve hue".

I still wonder why, when you choose 100% saturation 0% brightness, it is still way desaturated above the chosen "this display target nits".

Turning off preserve hue breaks bright highlights completely. The colour is ok but there is massive clipping. See the Mad Max explosions [emoji16]

Unfortunately that was one of the first things I tried, but it looks very wrong in real content.

Soulnight 02-03-2018 01:58 PM

Quote:

Originally Posted by Javs (Post 55627886)
Turning off preserve hue breaks bright highlights completely. The colour is ok but there is massive clipping. See the Mad Max explosions [emoji16]

Unfortunately that was one of the first things I tried, but it looks very wrong in real content.

Maybe sometimes, but not in what I have looked at. :-)
I will try Mad Max to see for myself.

For me, each time I saw a difference between on and off, it was better off.
But I still use dynamic compression of the highlights.

Anyway, preserve hue with 100% sat / 0% brightness should, I believe, look different from what is currently implemented.

Soulnight 02-03-2018 02:03 PM

Quote:

Originally Posted by stef2 (Post 55627866)
Thanks for the nice wrap up. I would really like madshi to come up with a solution designed for our projectors' capabilities. For now, I am not sure whether I prefer using the madVR pixel shader or Arve's tool custom curves to deal with HDR content. I will be following this thread, and I will also try the new Oppo firmware since it seems to perform so well.

Thanks. :-)
I am sure that madshi can match and exceed any other solution available on the market with the right support.
HTPC + live analysis of every frame + great flexibility opens all the doors.
You just need to open the right one.

And the current implementation of Madvr HDR to SDR is already great for the most part. :)
I am using 300nits for most of the content. And move to 400 or 500nits for very bright content (mostly demos).

For some very dark movies like "Jason Bourne" or "Arrival", you can see that the rolling peakY is mostly below 200nits all the time! For those, I use a 200nits display target.
This rolling average peakY is very useful information to help you choose the right "this display target nits" for a projector.
But again, 300nits is for me a good compromise between brightness and HDR strength.

Javs 02-03-2018 02:15 PM

Quote:

Originally Posted by Soulnight (Post 55627916)
Maybe sometimes, but not in what I have looked at. :-)
I will try Mad Max to see for myself.

For me, each time I saw a difference between on and off, it was better off.
But I still use dynamic compression of the highlights.

Anyway, preserve hue with 100% sat / 0% brightness should, I believe, look different from what is currently implemented.

Yeah, but leaving preserve hue off is probably more incorrect than any other option, although I did notice a long time ago that it seems to fix the colour clipping charts (odd). I do agree the closest match right now in real content is 100% sat and 0% luminance.

Here is preserve hue off. No detail at all in highlights now; it's even worse at the very big explosion in the dust storm.

https://i.imgur.com/lLYxjCf.jpg

100% Sat 0% Luminance.

https://i.imgur.com/Dlic6Ty.jpg

madshi 02-03-2018 02:55 PM

Quote:

Originally Posted by Soulnight (Post 55624872)
I see currently 4 issues in those discussions:

1) Madvr is compressing the highlights too much
-->solution: clipping only a certain % of the brightest pixels through use of a histogram?
2) Madvr has "probably" an issue with color shift while using HDR to SDR Mapping
--> Maybe not present if using a 3dlut?
3) Madvr desaturates the colors which are above "this display target nits" EVEN if you choose: "100% saturation, 0% brightness".
-->Bug?
4) Madvr does not use yet a specific method for projector with low brightness
-->Multiplication factor for HDR curve? Inspiration from manni/Javs Arve HDR curve?

1) I don't think so. madVR does SMPTE 2390. So are you saying that SMPTE 2390 compresses the highlights too much?

2) I don't think so. Rather I think madVR probably has the lowest color shift of all the algos out there, except offline algos like DisplayCAL. But DisplayCAL looks quite similar to madVR.

3) Yes, this appears to be a bug. I wasn't aware of that. Here's a test build which should fix it:

http://madshi.net/madVRhdr.rar

My recommendation is still to use the default setting of 50% luminance and 50% saturation reduction, though. I don't believe it's a good idea to draw a 10,000 nits pure red pixel exactly the same way as a 1,000 nits pure red pixel, which Javs seems to expect.

4) True. Some new options for that coming soon.

-------

Oh, and btw, "preserve hue" off is really bad. Use it only if your GPU is too slow for anything better.
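madVR's actual implementation of these sliders isn't public, but the luminance-vs-saturation trade-off being debated can be pictured with a toy out-of-range handler (everything below is illustrative, including the function name and the 50/50 default):

```python
def compress_pixel(rgb, peak, lum_weight=0.5):
    """Illustrative only: one way to trade luminance against saturation when
    a pixel asks for more light than the display has (not madVR's code).

    rgb: linear-light channel values in nits; peak: display peak in nits.
    lum_weight=1.0 keeps saturation and dims the pixel ("100% luminance");
    lum_weight=0.0 keeps brightness and desaturates toward white.
    """
    m = max(rgb)
    if m <= peak:
        return list(rgb)  # within display range: leave the pixel alone
    # option A: scale all channels equally -> hue and saturation kept, dimmer
    dimmed = [c * peak / m for c in rgb]
    # option B: keep brightness by blending toward gray, then clip at peak
    gray = sum(rgb) / 3
    k = (m - peak) / (m - gray) if m > gray else 1.0  # how far toward gray
    desat = [min(c + k * (gray - c), peak) for c in rgb]
    # blend the two strategies according to the user's weighting
    return [lum_weight * d + (1 - lum_weight) * s for d, s in zip(dimmed, desat)]
```

With lum_weight=1.0, a 10,000 nits pure red and a 1,000 nits pure red collapse to the same output, which is exactly the loss of highlight detail madshi objects to; with lum_weight=0.0 the brightness survives but the red washes toward white, which is the desaturation Javs objects to. The 50/50 default splits the damage.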

Javs 02-03-2018 05:29 PM

Quote:

Originally Posted by madshi (Post 55628200)
1) I don't think so. madVR does SMPTE 2390. So are you saying that SMPTE 2390 compresses the highlights too much?

2) I don't think so. Rather I think madVR probably has the lowest color shift of all the algos out there, except offline algos like DisplayCAL. But DisplayCAL looks quite similar to madVR.

3) Yes, this appears to be a bug. I wasn't aware of that. Here's a test build which should fix it:

http://madshi.net/madVRhdr.rar

My recommendation is still to use the default setting of 50% luminance and 50% saturation reduction, though. I don't believe it's a good idea to draw a 10,000 nits pure red pixel exactly the same way as a 1,000 nits pure red pixel, which Javs seems to expect.

4) True. Some new options for that coming soon.

-------

Oh, and btw, "preserve hue" off is really bad. Use it only if your GPU is too slow for anything better.

I only have two issues personally with the HDR tone-mapping conversion right now. Number 3, which you seem to say is a bug, is a very big one for me, and if you have fixed it, that's most excellent.

I don't expect a 10,000 nit red to look the same as a 1000nit red - except I do if I am clipping my HDR to 1000nits; it should be solid 100% saturation from whatever the clipping point is up to the max of the chart. I expect colour clipping charts to look as accurate in tone-mapping as they do in real HDR. So far they don't. The closest to reality is preserve hue off; it makes 10,000 nit red look like 100% red saturation, which is the correct behaviour IMO. So far the tone mapping is actually reducing the saturation of the colour in one form or another. 10,000 nits red is not full red saturation with tone mapping.

Downloading the build now, and will check it out.

Thanks!

Edit - Had a look at the build, I think it's closer. The colour shift seems to be resolved, at least on my Samsung, so I will look at that on the JVC later, but to me the colours look ok now; it was a far more saturated red before, which is what happens in this new build if I select the P3 calibration instead of the BT2020 calibration. So, BT2020 appears to be behaving now.

50/50 Lum Sat

https://i.imgur.com/xBaZq5R.jpg

0/100 Lum Sat

https://i.imgur.com/U8rUrHb.jpg

100/0 Lum Sat

https://i.imgur.com/t4dD4pH.jpg

Preserve Hue Off

https://i.imgur.com/lj9AS1p.jpg

This is how my JVC in HDR mode renders this shot:

https://i.imgur.com/jq9fxFY.jpg

50/50 Lum Sat

https://i.imgur.com/9MZyVWO.jpg

0/100 Lum Sat

https://i.imgur.com/94GAG5X.jpg

100/0 Lum Sat

https://i.imgur.com/hW9Asuu.jpg

50/50 Lum Sat

https://i.imgur.com/AuCNPeJ.jpg

0/100 Lum Sat

https://i.imgur.com/EaXpavU.jpg

100/0 Lum Sat - This one looks really close: the middle section of the clipping chart is correct, and while the upper 2nd half of it obviously reduces in saturation, overall this looks to be the most correct.

https://i.imgur.com/ONsaOYm.jpg

And here is a screen grab of the actual file, which, if viewed in HDR, should look about the same, only with more colour, and the clipping points will be lower; however, the saturation should be constant after the clipping point. The point is, the upper 2nd half of the pattern should still be intense red saturation. Also note the bars above and below each clipping colour: they should all be similar saturation intensities. With the tone-mapping, only turning off preserve hue will make this look right. But of course, in content there is a huge loss of highlight detail here.

https://i.imgur.com/rBvaVmI.jpg

Overall it appears to me that in all these images, 100/0 Lum Sat is the closest, most accurate option in both real content and the test patterns. But if you look at how my JVC renders one of those shots above: since we are missing some of the upper saturation on the test patterns, we are also missing the 100% saturation levels on the content. If you look at the fire, we never quite get the colour volume we should with the more intense, brighter colours.

madshi 02-04-2018 12:40 AM

You seem to value saturation much more than luminance. But that's a *very* subjective judgement. With 100/0 Lum Sat reduction, a 10,000 nits pure red pixel will look exactly the same as a 1,000 nits pure red pixel. How on earth can that produce reasonably good results? You'll lose a *lot* of highlight detail in highly saturated areas. Sure, if you allow madVR to reduce some saturation in those areas, you do lose color saturation (obviously), but that's the only way to reproduce the luminance detail. You *do* want there to be a visible difference between a 10,000 nits and a 1,000 nits pure red pixel, don't you? madVR reproduces the 1,000 nits red pixel with very high saturation. It's as saturated and as bright as possible. Now the 10,000 nits pure red asks for the same saturation but 10x more brightness, which is technically impossible to achieve on your display. So my recommendation is to use a compromise setting like 50/50 Lum Sat reduction, so that you'll see some of that additional brightness, while losing some of the saturation.

In the end it's your choice. If you absolutely hate losing any saturation, that's your call - that's why madVR has these options. But you're definitely giving up any highlight detail in highly saturated image areas that way. As long as you're aware of the consequences of that decision, I'm ok with you using the setting. But please don't recommend it to other users as being "best" without also explaining the heavy disadvantages.

Please note that these are technical problems that are unsolvable without accepting *some* kind of compromise. If you want there to be a visible difference between a 1,000 nits pure red pixel and a 10,000 nits pure red pixel, then the difference has to come from somewhere. Either you aim for the 10,000 nits pure red pixel to be perfect. Then you have to make the 1,000 nits pure red pixel a lot darker. Or you aim for the 1,000 nits pure red pixel to be perfect. Then you have to make the 10,000 nits pure red pixel brighter, which is only possible by adding some green and blue to it (which means reducing saturation). Maybe you're now thinking that we should aim for the 10,000 nits pure red pixel to be perfect and make the 1,000 nits pure red pixel darker? Of course we can do that, but then we still have the same problem with blue pixels. If you want a 10,000 nits pure blue pixel to be perfect (max brightness the display can do, but still fully saturated), then the whole image will automatically become about 16.8x darker overall (before tone mapping). That would be a *very* heavy price to pay. Which is why AFAIK no tone mapping algorithm on planet earth uses that approach.
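The 16.8x factor follows directly from the BT.2020 luminance coefficients; a quick sketch to check it:

```python
# BT.2020 luminance coefficients: Y = KR*R + KG*G + KB*B
KR, KG, KB = 0.2627, 0.6780, 0.0593

# A pure blue pixel carries only KB of the display's peak luminance.
# Anchoring tone mapping so that fully saturated pure blue reaches
# display peak would force everything else down by roughly:
darkening = 1.0 / KB
print(round(darkening, 1))  # 16.9, i.e. the "about 16.8x" mentioned above
```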

BTW, the combination of DisplayCAL + ArgyllCMS (which might serve as a reference point due to their offline rendering and their nature as calibration products) does something like 30/70 Lum Sat reduction, which is very far away from your 100/0 preference.

P.S: Your first 3 screenshots in your last comment are not named correctly. I believe they're 100/0, 0/100, 50/50.

Soulnight 02-04-2018 01:04 AM

Quote:

Originally Posted by madshi (Post 55628200)
1) I don't think so. madVR does SMPTE 2390. So are you saying that SMPTE 2390 compresses the highlights too much?

2) I don't think so. Rather I think madVR probably has the lowest color shift of all the algos out there, except offline algos like DisplayCAL. But DisplayCAL looks quite similar to madVR.

3) Yes, this appears to be a bug. I wasn't aware of that. Here's a test build which should fix it:

http://madshi.net/madVRhdr.rar

My recommendation is still to use the default setting of 50% luminance and 50% saturation reduction, though. I don't believe it's a good idea to draw a 10,000 nits pure red pixel exactly the same way as a 1,000 nits pure red pixel, which Javs seems to expect.

4) True. Some new options for that coming soon.
-------
Oh, and btw, "preserve hue" off is really bad. Use it only if your GPU is too slow for anything better.

Great news for 3 and 4. :-)

As for 1, yes, I do believe that madVR using SMPTE 2390 compresses the highlights too much for a projector. "This display target nits" is very often around 300 nits for a projector and there is quite a bit to compress above that.
As we discussed before, dynamically clipping a certain % of the highlights above your "target display nits" with the help of a histogram should help greatly with that, by greatly reducing the compression range through a lower upper limit.
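The histogram idea in a few lines (a rough numpy illustration; the function name and the 0.1% figure are just examples, not madVR's implementation):

```python
import numpy as np

def dynamic_clip_nits(frame_nits, clip_percent=0.1):
    """Pick a per-frame clipping point that sacrifices only the
    brightest clip_percent % of pixels, instead of compressing all
    the way up to the encoded peak. Illustration only - the name
    and the 0.1% default are made up for this sketch."""
    return float(np.percentile(frame_nits, 100.0 - clip_percent))

# Example: a mostly 50-nit frame with a few tiny 4000-nit speculars.
frame = np.full((1080, 1920), 50.0)
frame[:2, :2] = 4000.0
print(dynamic_clip_nits(frame))  # 50.0: the speculars get clipped, so the
                                 # curve's upper limit drops from 4000 to 50
```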


Soulnight 02-04-2018 01:35 AM

Madshi, sorry for that. It was not my intention. Edited.
Also edited first post ;-)

madshi 02-04-2018 12:39 PM

New build out now:

http://madshi.net/madVR.zip (v0.92.12)

Previously, there were 3 hue related options:

A) "preserve hue" unchecked
B) "preserve hue" checked - low quality
C) "preserve hue" checked - high quality

In the new build there is now only a checkbox "preserve hue", but no low vs high quality, anymore. Basically I've combined the old A) and B) options into one, trying to combine the best parts of both. This combined new algorithm is what you get when you uncheck "preserve hue" in the new build. It's acceptable quality, but not great. For great quality you need to check "preserve hue", of course.

There's now a new "diffuse white" option, which allows you to target either 50 Nits or 100 Nits for diffuse white. It'd be great if you could compare:

1) Your current settings. Where diffuse white is still targeted at 100 Nits. And probably you "lied" to madVR about your projector's peak nits value.
2) Instead of 1) try setting diffuse white to 50 Nits, which will make your image half as bright. But now please tell madVR about your projector's *true* measured peak nits value.

How do 1) and 2) compare? Which looks better?

FWIW, I'm a bit afraid that aiming for 50 Nits may harm shadow detail because every pixel is now half as bright as before. So in addition to the simple "linear" diffuse white scaling I've also added a "spline" option which tries to keep shadow detail the same way as if you had chosen 100 Nits for diffuse white - but still ending up at 50 Nits for diffuse white. To be honest, I'm not fully happy with the spline curve yet, it will probably need some work. But do you like this approach at all? Or is it bad? Maybe even not necessary at all?
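To illustrate the two scaling modes (a toy sketch; the smoothstep blend below is purely hypothetical, not the actual spline, which still needs work as said above):

```python
def scale_linear(nits):
    # Targeting 50 instead of 100 nits diffuse white: simply halve everything.
    return 0.5 * nits

def scale_spline(nits, diffuse_white=100.0):
    # Hypothetical spline: leave near-black almost untouched and blend
    # smoothly toward the 0.5x scale by diffuse white. This smoothstep
    # blend is a toy curve, NOT the one actually implemented.
    t = min(nits / diffuse_white, 1.0)
    w = t * t * (3.0 - 2.0 * t)  # smoothstep weight in [0, 1]
    return nits * (1.0 - 0.5 * w)

print(scale_linear(100.0))  # 50.0: but shadows are halved too
print(scale_spline(1.0))    # ~1.0: shadow detail kept close to the 100-nit look
print(scale_spline(100.0))  # 50.0: diffuse white still lands at half
```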

The "desaturate" option does its processing slightly differently, which results in ever so slightly lower saturation. I'm not sure which is "better" or more correct. So I'm offering both for testing. Please let me know which looks more accurate to you (I know, it's hard to say, so you'll have to trust your instinct on this one).

madshi 02-04-2018 12:59 PM

There was some discussion earlier about:

1) Is "preserve hue" a good or bad option?
2) Which value should we use for "fix too bright & saturated pixels by [X] % luminance reduction and [Y] % saturation"?

In order to clear this up, I've made 2 little comparison screenshots for you:

http://madVR.com/doom9/toneMapping/t...ngSonyCamp.png
http://madVR.com/doom9/toneMapping/toneMappingN.png

All images were gamut mapped to BT.709, to make the colors look right when you compare these images on a typical computer monitor.

10000 Nits: Rendered for a 10,000 Nits display. This gives us a hint how the colors should probably look like (in BT.709, though).
400 nits, 100/0: madVR "preserve hue" activated, 100% luminance reduction, 0% saturation reduction
400 nits, 50/50: madVR "preserve hue" activated, 50% luminance reduction, 50% saturation reduction
400 nits, 30/70: madVR "preserve hue" activated, 30% luminance reduction, 70% saturation reduction
400 nits, 0/100: madVR "preserve hue" activated, 0% luminance reduction, 100% saturation reduction
50/50, no hue pres: madVR "preserve hue" deactivated, 50% luminance reduction, 50% saturation reduction
400 nits, DisplayCAL: Using a pure tone-mapping 3dlut (no calibration involved) created with DisplayCAL for 400 Nits BT.709.

Here are my personal impressions:

1) 100/0 looks bad to me. It's missing important highlight detail. E.g. the big round "ring" inside the lamp is partially hidden. And the "N" is very blurred.
2) 50/50 looks quite ok.
3) 30/70 looks slightly worse than 50/50 in the "lamp" image, but noticeably better in the "N" image. And this looks nearest to DisplayCAL.
4) 0/100 looks great in the "N" image, but terrible in the "lamp" image.
5) "preserve hue" is a must for high quality. There's a very clear hue shift if this option is unchecked. To be honest, though, these test images are corner cases. In most situations there's only a small difference between "preserve hue" on vs off.

After looking at these results, I'm wondering if I shouldn't change madVR's default from 50/50 to 30/70?
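As a toy model of what the luminance/saturation sliders trade off (my own sketch of the semantics being discussed, not madVR's actual shader math; values are normalized so 1.0 = display peak):

```python
def fix_bright_saturated(rgb, lum_reduction=0.5, sat_reduction=0.5):
    """Toy model of the 'fix too bright & saturated pixels' sliders.
    rgb is linear light, normalized so 1.0 = display peak.
    NOT madVR's actual math - just a sketch of the trade-off."""
    peak = max(rgb)
    if peak <= 1.0:
        return list(rgb)
    excess = peak - 1.0
    # Part of the overshoot is removed by darkening the whole pixel...
    new_peak = peak - excess * lum_reduction
    scaled = [c * (new_peak / peak) for c in rgb]
    # ...and the rest by mixing toward white (desaturating), then clipping.
    lift = excess * sat_reduction * (new_peak / peak)
    return [min(c + lift, 1.0) for c in scaled]

red2x = [2.0, 0.0, 0.0]                       # "pure red at 2x display peak"
print(fix_bright_saturated(red2x, 1.0, 0.0))  # [1.0, 0.0, 0.0]: stays pure red
print(fix_bright_saturated(red2x, 0.0, 1.0))  # [1.0, 1.0, 1.0]: goes white
print(fix_bright_saturated(red2x, 0.5, 0.5))  # partially desaturated bright red
```

At 100/0 the 2x-peak pure red collapses onto the same pure red as peak itself (keeping saturation but losing the highlight distinction), at 0/100 it goes fully white, and 50/50 lands in between.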

Manni01 02-04-2018 02:01 PM

@madshi : I don't know in which thread you want to discuss this, so please pick one :)

Here is a copy of my post in the doom9 thread:

Thanks for the new build Madshi. I still don't have much time right now but I tried to take a quick look.

The new diffuse white option is very useful to us projector owners in a dedicated room, so thanks for implementing this. I didn't have the time to test in depth but it's promising and goes in the direction of what I was asking. You might want to give more options: although the standard is 50nits for SDR, with HDR we need to go down to 16-20nits in some cases, so I would suggest 25 and 50 as possible values (a factor of 2 or 4).

I still have issues with the way highlights are being processed with the pixel shader conversion, and now that I can rule out the 3D LUT (with the new line in the OSD, super useful so thanks for that), there is something definitely wrong with MadVR's processing.

I specified - as you suggested - my true peakY which is 125nits. I've tested with "This display is already calibrated" and with a null 3D LUT for SDR BT2020 generated with Lightspace (checked and confirmed as not changing anything to the picture).

In both cases, there is very significant posterization/clipping (depending on the settings) with the pixel shader conversion that entirely goes away in passthrough when using a custom curve in the projector. If you don't have a way to upload a custom curve, trust me and take a look at the scene below with the pixel shader conversion.

Quick question: in passthrough, the OSD reports that it's using the SDR-BT2020 3D LUT if 3D LUTs are selected in the calibration tab. Is that to be expected? It should either use nothing (as it's sending HDR, not SDR) or use an HDR BT2020 3D LUT (I think).

Please try the beginning of Mad Max, minute 3:20. The end of the shot is a sunset with dust settling down and a fade out. The posterization/clipping (depending on settings) on the red sun/sky is very, very bad.

I thought it was my 3D LUT but it isn't, as even a null LUT (or "this display is already calibrated") produces the same effect.

Let me know by email/PM if you need a short clip of the beginning of Mad Max (as well as the null 3D LUT I'm using for these tests if you want to check it).

Any progress on the custom API to send the SDR BT2020 info in the HDMI stream?

Soulnight 02-04-2018 02:09 PM

Quote:

Originally Posted by madshi (Post 55632604)
New build out now:

http://madshi.net/madVR.zip (v0.92.12)

Previously, there were 3 hue related options:

A) "preserve hue" unchecked
B) "preserve hue" checked - low quality
C) "preserve hue" checked - high quality

In the new build there is now only a checkbox "preserve hue", but no low vs high quality, anymore. Basically I've combined the old A) and B) options into one, trying to combine the best parts of both. This combined new algorithm is what you get when you uncheck "preserve hue" in the new build. It's acceptable quality, but not great. For great quality you need to check "preserve hue", of course.

There's now a new "diffuse white" option, which allows you to target either 50 Nits or 100 Nits for diffuse white. It'd be great if you could compare:

1) Your current settings. Where diffuse white is still targetted at 100 Nits. And probably you "lied" to madVR about your projector's peak nits value.
2) Instead of 1) try setting diffuse white to 50 Nits, which will make your image half as bright. But now please tell madVR about your projector's *true* measured peak nits value.

How do 1) and 2) compare? Which looks better?

FWIW, I'm a bit afraid that aiming for 50 Nits may harm shadow detail because every pixel is now half as bright as before. So in addition to the simple "linear" diffuse white scaling I've also added a "spline" option which tries to keep shadow detail the same way as if you had chosen 100 Nits for diffuse white - but still ending up at 50 Nits for diffuse white. To be honest, I'm not fully happy with the spline curve yet, it will probably need some work. But do you like this approach at all? Or is it bad? Maybe even not necessary at all?

The "desaturate" option does processing slightly different which results in ever so slightly lower saturation. I'm not sure which is "better" or more correct. So I'm offering both for testing. Please let me know which looks more accurate to you (I know, it's hard to say, so you'll have to trust your instinct on this one).

Hi Madshi,

thanks for the fast release. :)
So I did a quick check on some titles while using preserve hue 50/50 for all my comparisons:

I compared after creating some keyboard shortcuts:
1) 100 diffuse white + 300 target nits (my standard for the last few months)
2) 100 diffuse white + 160 target nits
3) 50 diffuse white + 80 target nits linear
4) 50 diffuse white + 80 target nits spline
I didn't try the desaturate option yet.

What I found is that 2) and 3) are almost identical, which I think should not be a surprise?
Both give a very bright picture (not very surprising with 160 nits), but you lose the HDR effect in the picture and it is mostly SDR.

For the "fire-swordman demo" I linked previously, 4) was doing a very nice job with all those dark areas, but on a real movie it just "killed" the depth of the picture for me. That should also not be so surprising, since we are raising the blacks there, so the whites don't pop out as much.

So all in all, I still like 1) 300 nits / 100 diffuse white the best. I guess 150 nits / 50 diffuse white will probably also be identical. For me it's the best compromise between brightness and image depth/HDR strength. :)

FYI: My projector has "only" 75 cd/m2 with 100% DCI coverage in a fully darkened Triple Black velvet room.
This is why I chose 80 nits to try the "do not lie to madVR" setting as you asked.
I also compared with my 2dlut DCI calibration and with my DisplayCAL SDR REC2020 gamma 2.2 3DLUT calibration, which are very similar in their results.

Also saturations for bright colors seemed better. :-)

madshi 02-04-2018 02:14 PM

@Manni , it might be better to discuss the tone mapping related topics here, because there seem to be more projector owners here than on doom9.

Yes, it would be very helpful to get a short sample and your NULL 3DLUT. You say either posterization or clipping depending on settings. Depending on which settings? And just to be 100% sure, you do have dithering enabled, right? And setting the display bitdepth to 8bit probably doesn't help, either?
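For context on why the dithering question matters, a toy demo (my own sketch, unrelated to madVR's internals):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(levels, bits, dither=False):
    """Quantize normalized [0,1] levels to the given bit depth.
    Without dither, a smooth gradient collapses onto a few output
    codes (posterization/banding); sub-LSB noise before rounding
    trades those hard steps for fine noise that averages out."""
    steps = 2 ** bits - 1
    v = levels * steps
    if dither:
        v = v + rng.uniform(-0.5, 0.5, size=v.shape)
    return np.clip(np.round(v), 0, steps) / steps

grad = np.linspace(0.0, 0.02, 1000)   # a very dark, smooth ramp
hard = quantize(grad, 8)
soft = quantize(grad, 8, dither=True)
print(len(np.unique(hard)))  # 6: the whole ramp lands on six codes -> visible steps
# 'soft' uses roughly the same few codes, but the noise breaks the steps
# up so that their local average follows the true ramp.
```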

Manni01 02-04-2018 02:41 PM

Quote:

Originally Posted by madshi (Post 55633186)
@Manni , might be better to discuss the tone mapping related topics here, because here seem to be more projector owners than on doom9.

Yes, it would be very helpful to get a short sample and your NULL 3DLUT. You say either posterization or clipping depending on settings. Depending on which settings? And just to be 100% sure, you do have dithering enabled, right? And setting the display bitdepth to 8bit probably doesn't help, either?

Yes dithering is enabled (always). I didn't try changing the bit depth because the problem goes away in passthrough and I don't have it with any other source, so it's clearly a pixel shader conversion processing issue in MadVR.

The settings that make a difference are those related to highlights compression. If you don't compress the highlights, the posterization becomes clipping. I didn't try to play with the peakY value, as we had agreed to stick to the actual peakY of my display (125nits). I was using 50nits diffuse white, but the problem was already there in the last build so not directly related.

I'll email you a link to a short clip showing the issue (with the null 3D LUT I used to rule out Calman or the display/calibration itself, but I had a similar issue with a Calman generated 3D LUT). [EDIT: done!]

madshi 02-04-2018 04:53 PM

Quote:

Originally Posted by Soulnight (Post 55633164)
What I found is that 2) and 3) are almost identical which I think should not be a surprise?
Both give a very bright picture (not very surprising with 160nits) but you loose the HDR effect in the picture and it is mostly SDR .

Overall brightness of the frame should be comparable, but it should look a bit different. At least it does on my PC.

Quote:

Originally Posted by Soulnight (Post 55633164)
For the "fire-swordman demo" I linked previously, 4) was doing a very nice job with all those dark areas but on real movie it just "killed" the depth on the picture for me. Should also not be so surprising since we are there raising the black and the white do not then pop out as much.

Yes, that's what I was fearing. The current spline curve is too aggressive; I need to tone it down quite a bit, but I'm still working on the math...

Quote:

Originally Posted by Soulnight (Post 55633164)
So all in all, I still like the 1) 300nits /100 diffuse white the best. I guess 150nits /50 diffuse whte will probably also be identical. For me the best compromise between brightness and image depth/HDR strength. :)

Can you do some more comparisons between 300|100 vs 150|50?

Quote:

Originally Posted by Manni01 (Post 55633308)
Yes dithering is enabled (always). I didn't try changing the bit depth because the problem goes away in passthrough and I don't have it with any other source, so it's clearly a pixel shader conversion processing issue in MadVR.

The settings that make a difference are those related to highlights compression. If you don't compress the highlights, the posterization becomes clipping. I didn't try to play with the peakY value, as we had agreed to stick to the actual peakY of my display (125nits). I was using 50nits diffuse white, but the problem was already there in the last build so not directly related.

I'll email you a link to a short clip showing the issue (with the null 3D LUT I used to rule out Calman or the display/calibration itself, but I had a similar issue with a Calman generated 3D LUT). [EDIT: done!]

Thanks. On a very quick check I don't see any obvious posterization, but I might have missed it. Could you please make a screenshot with that sample video which clearly shows the issue? It should be visible on a screenshot, if it's madVR's fault.

stef2 02-04-2018 06:59 PM

THANK you Soulnight for starting this thread, and thank you madshi for your hard work and for listening to projector owners and helping them in their quest to obtain the best “HDR” image possible!

EDIT: Thank you Manni also, I really do not know how you find the time to write all this!

Manni01 02-05-2018 12:22 AM

Quote:

Originally Posted by madshi (Post 55629964)
You seem to value saturation much more than luminance. But that's a *very* subjective judgement. With 100/0 Lum Sat reduction, a 10,000 nits pure red pixel will look exactly the same as a 1,000 nits pure red pixel. How on earth can that produce reasonably good results? You'll lose a *lot* of highlight detail in highly saturated areas. Sure, if you allow madVR to reduce some saturation in those areas, you do lose color saturation (obviously), but that's the only way to reproduce the luminance detail. You *do* want there to be a visible difference between a 10,000 nits and a 1,000 nits pure red pixel, don't you? madVR reproduces the 1,000 nits red pixel with very high saturation. It's as saturated and as bright as possible. Now the 10,000 nits pure red asks for the same saturation but 10x more brightness, which is technically impossible to achieve on your display. So my recommendation is to use a compromise setting like 50/50 Lum Sat reduction, so that you'll see some of that additional brightness, while losing some of the saturation.

In the end it's your choice. If you absolutely hate losing any saturation, that's your call - that's why madVR has these options. But you're definitely giving up any highlight detail in highly saturated image areas that way. As long as you're aware of the consequences of that decision, I'm ok with you using the setting. But please don't recommend it to other users as being "best" without also explaining the heavy disadvantages.

Please note that these are technical problems that are unsolvable without accepting *some* kind of compromise. If you want there to be a visible difference between a 1,000 nits pure red pixel and a 10,000 nits pure pixel, then the difference has to come from somewhere. Either you aim for the 10,000 nits pure red pixel to be perfect. Then you have to make the 1,000 nits pure red pixel a lot darker. Or you aim for the 1,000 nits pure red pixel to be perfect. Then you have to make the 10,000 nits pure red pixel brighter, which is only possible by adding some green and blue to it (which means reducing saturation). Maybe you're now thinking that we should aim for the 10,000 nits pure red pixel to be perfect and make the 1,000 nits pure red pixel darker? Of course we can do that, but then we still have the same problem with blue pixels. If you want a 10,000 nits pure blue pixel to be perfect (max brightness the display can do, but still fully saturated), then the whole image will automatically become about 16.8x darker overall (before tone mapping). That would be a *very* heavy price to pay. Which is why AFAIK no tone mapping algorithm on planet earth uses that approach.

BTW, the combination of DisplayCAL + ArgyllCMS (which might serve as a reference point since due to offline rendering and their nature as calibration products) do something like 30/70 Lum Sat reduction, which is very far away from your 100/0 preference.

P.S: Your first 3 screenshots in your last comment are not named correctly. I believe they're 100/0, 0/100, 50/50.

With all respect, I heavily disagree with the above.

If the title is a 1,000nits title, I do not care about 10,000nits or 4,000nits and I do want 1,000nits to look the same as 10,000nits, and to keep all of its saturation.
If the title is a 4,000nits title, I want 4,000nits to look like 10,000nits, because I have a projector that can only go up to 200nits max (new lamp, high lamp, iris open) with a tiny screen. Most people have less than 100nits to start with.

It doesn't make any sense to me to keep 10,000nits as a reference for all titles. Even if a title has content above 4000nits, I don't even want to take that into account for a projector.

AFAIK there is a content max_brightness parameter in BT2390, but it doesn't look like you are taking it into account.

For me, all titles with content above 4,000nits have content clipped at 4,000nits. This means I'm expecting 100% lum and 100% sat at 4,000nits for these titles, and 100% lum / 100% sat at 1,100nits for 1,100nits content.

Until you start taking metadata into account and stop using 10,000nits as a reference for every title, I don't think you'll ever get satisfying highlights for projector owners. There isn't enough headroom for that, even with reference white at 50nits. JVC did that initially but it was a mistake, and they quickly released a f/w with 100%=4000nits. I know that metadata isn't 100% reliable, but if you follow the algo I already posted you'll get it right 99% of the time and will fall back to 4,000nits the rest of the time.
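The fallback rule could be sketched like this (illustrative only; the function and parameter names are hypothetical, not a real madVR API):

```python
def choose_clip_nits(max_cll, mastering_peak, fallback=4000.0):
    """Pick the hard-clip point for the whole title from HDR10 static
    metadata, falling back to 4,000 nits when the metadata is invalid.
    Hypothetical helper sketching the rule described above."""
    for candidate in (max_cll, mastering_peak):
        # Treat 0/None/absurd values as "invalid metadata".
        if candidate and 0 < candidate <= 10000:
            return float(candidate)
    return fallback

print(choose_clip_nits(1100, 4000))  # 1100.0 for a 1,100-nit title
print(choose_clip_nits(0, None))     # 4000.0: invalid metadata -> fallback
```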

Users with less than 100nits peakY need diffuse white at 25 or below to stand a chance to get decent details and saturation in highlights for 4,000nits titles such as Mad Max.

I'll take screenshots of the posterization issue with Mad Max and will post them as soon as I can.

madshi 02-05-2018 01:19 AM

Quote:

Originally Posted by Manni01 (Post 55635116)
If the title is a 4,000nits title, I want 4,000nits to look like 10,000nits

Of course - and that is what madVR already does!! In my comment you quoted I was talking about a true 10,000nits saturation test pattern Javs was using. Of course when talking about a movie scene which has a 1,000nits peak, madVR outputs 10bit RGB (940,940,940) to the TV for 1,000nits white. However, if a 1,000nits movie has pure red pixels which are encoded as 1,000nits, we have the same problem all over again: Tone mapping maps peak *white* to (940,940,940), so it's impossible to show a pure red 1,000nits pixel properly. So either luminance or saturation has to take a hit for such 1,000nits red pixels.

And actually, content does these things. E.g. the red "N" from the screenshot comparison above has pure red pixels encoded as 4,000Nits, with the movie metadata specifying a max luminance of 4,000Nits. Consequently, madVR can't draw the red pixels "correctly" because madVR optimizes for 4,000Nits being peak *white*. So the only way for madVR to draw those 4,000Nits red pixels at 4,000Nits is to lose all saturation and draw them white. Or alternatively, I can keep them red, but then the luminance will be much lower than 4,000Nits.
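For reference, (940,940,940) is simply the 10-bit limited-range video encoding, where black sits at code 64 and nominal peak at code 940. A trivial sketch:

```python
def to_10bit_limited(v):
    """Map a normalized [0,1] video level to a 10-bit limited-range
    code value (video black = 64, nominal peak = 940)."""
    return round(64 + v * (940 - 64))

print(to_10bit_limited(1.0))  # 940: what gets sent for tone-mapped peak white
print(to_10bit_limited(0.0))  # 64: video black
```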

Quote:

Originally Posted by Manni01 (Post 55635116)
AFAIK there is a content max_brightness parameter in BT2390, but it doesn't look like you are taking it into account.

Of course I take that parameter into account!! Why else would madVR take HDR peak measurements if I didn't? It would serve no purpose.

Quote:

Originally Posted by Manni01 (Post 55635116)
Until you start taking metadata into account and stop using 10,000nits as a reference for every title

I already took metadata into account from the very first madVR HDR release, and never used 10,000nits as a reference for every title. Furthermore, I implemented real time HDR peak measurements to improve on the metadata to give you HDR10+ like behaviour, which is something which AFAIK nobody else does (except mpv, which copied the idea from me). And you already knew all this, so I'm not sure why you suddenly think I wouldn't do that.

Manni01 02-05-2018 02:12 AM

Quote:

Originally Posted by madshi (Post 55635164)
Of course - and that is what madVR already does!! In my comment you quoted I was talking about a true 10,000nits saturation test pattern Javs was using. Of course when talking about a movie scene which has a 1,000nits peak nits, madVR outputs 10bit RGB (940,940,940) to the TV for 1,000nits white. However, if a 1,000nits movie has pure red pixels which are encoded as 1,000nits, we have the same problem all over again: Tone mapping maps peak *white* to (940,940,940), so it's impossible to show a pure red 1,000nits pixel properly. So either luminance or saturation has to take a hit for such 1,000nits red pixels.

And actually, content does these things. E.g. the red "N" from the screenshot comparison above has pure red pixels encoded as 4,000Nits, with the movie metadata specifying a max luminance of 4,000Nits. Consequently, madVR can't draw the red pixels "correctly" because madVR optimizes for 4,000Nits being peak *white*. So the only way for madVR to draw those 4,000Nits red pixels at 4,000Nits is to lose all saturation and draw them white. Or alternatively, I can keep them red, but then the luminance will be much lower than 4,000Nits.


Of course I take that parameter into account!! Why else would madVR take HDR peak measurements if I didn't? It would serve no purpose.


I already took metadata into account from the very first madVR HDR release, and never used 10,000nits as a reference for every title. Furthermore, I implemented real time HDR peak measurements to improve on the metadata to give you HDR10+ like behaviour, which is something which AFAIK nobody else does (except mpv, which copied the idea from me). And you already knew all this, so I'm not sure why you suddenly think I wouldn't do that.

AFAIK, MaxCLL doesn't describe the pixel with the highest luminance, but the sub-pixel with the highest luminance. That might be some of the problem.
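A numpy sketch of that distinction (the usual CTA-861-style reading of MaxCLL; illustrative only, not any particular implementation):

```python
import numpy as np

def max_cll_nits(frames_rgb_nits):
    """MaxCLL in the CTA-861-style reading pointed at above: the
    maximum over all frames of max(R, G, B) per pixel in linear
    light, i.e. the brightest *channel*, not pixel luminance."""
    return max(float(f.max()) for f in frames_rgb_nits)

def max_pixel_luminance(frames_rgb_nits, coeffs=(0.2627, 0.6780, 0.0593)):
    """Contrast: brightest pixel *luminance* (BT.2020 weights)."""
    return max(float((f * coeffs).sum(axis=-1).max()) for f in frames_rgb_nits)

# A single pure 1000-nit blue pixel: MaxCLL says 1000, luminance says ~59.
frame = np.zeros((4, 4, 3))
frame[0, 0] = (0.0, 0.0, 1000.0)
print(max_cll_nits([frame]))         # 1000.0
print(max_pixel_luminance([frame]))  # ~59.3
```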

Regarding the above, there are two different things.

First of all, every time I mention HDR10 metadata, you discard it, saying it's not relevant, it's not accurate, it's pointless because it's static for the whole content, that we don't care about metadata because you measure each frame, etc. So this led me to think that you were not taking metadata into account in your calculations.

I've asked you a few times if you had implemented the content max_brightness parameter in your BT2390 conversion and you never replied to that question, so I assumed you hadn't, given the way you poo-poo metadata every time I mention it.

This has nothing to do with measuring each frame and getting a variable content peak brightness. It has to do with having the correct (fixed) hard clipping value for the whole of the content, so that the end point of the curve sits at the correct value according to the static metadata, i.e. the way the title was mastered.

So yes, as far as I'm concerned, you were not taking metadata into account in order to hard clip at the max brightness of the content (which is a fixed value in HDR10, not a changing value as in HDR10+).

Calculating the peak for each frame doesn't mean that you wouldn't get it wrong if you were using the wrong theoretical ceiling in your calculations, even if you used your frame by frame analysis to know the peak brightness of the frame.

Personally, I'm not sure I like the idea of having the peak brightness of each frame changing in relation to the shot before and after it. It changes the intention of the grading as HDR10, unlike HDR10+, isn't mastered with dynamic metadata.

I'll do more tests when/if the other issues are resolved, but I personally would prefer the peak brightness (hard clip point) to be the same over the whole title, so that the relative brightness of each shot isn't altered, and to use the frame by frame analysis to make the low end in dark scenes or the highlights in bright scenes look as good as they can. So not changing the bottom and the top, but how we map things in between according to the actual content. In a 1000nits title, I don't think one shot should have 100%=5nits (like the scene in chapter 4 of The Revenant) and the next one 100%=1000nits.

Otherwise, it looks like what you're doing goes against what the dynamic iris is trying to do. By the way, I think that HDR10+ and DV are far more useful for displays with poor native on/off than for our displays. We need more absolute brightness, but we don't need more native (or even dynamic) contrast. The opposite of monitors and flat panels (except OLEDs), which don't need more brightness but do need more native on/off. You might want to take this into account in your implementation of HDR for projectors. We (JVC owners) don't really need HDR10+ or DV.

Maybe I misunderstand what you are doing, but if you explain more clearly (and entirely) what you do, it might help us avoid making incorrect assumptions based on your selective answers.

For example, as a start:

1) Do you read the static metadata of the content and set the hard clipping point to the maximum brightness of the content (falling back to 4000nits if the metadata is invalid, but you might want to make this a user variable because some might want to clip lower in that case) over the whole program in your BT2390 implementation? In other words, do you set 100% to 1,000nits, 1,100nits or 4,000nits to start with (hence never use a theoretical value of 10,000nits in your calculations)? If not, we need that option.
2) When you analyse a frame, do you assume 100% is the max_brightness above, or 100% is the peak of the frame (or a rolling average to avoid flicker)?
3) Don't you think, doing this, that you are changing the intent of the grading by altering the relative brightness of the shots (not the frames) between each other?

Manni01 02-05-2018 02:32 AM

@madshi : For reference re MaxCLL, see P44 of the following document:

https://www.smpte.org/sites/default/...-Ecosystem.pdf

E.2.3.2 Maximum Content Light Level (MaxCLL) information
For MaxCLL, the unit is equivalent to cd/m² when the brightest pixel in the entire video stream has the chromaticity of the white point of the encoding system used to represent the video stream. Since the value of MaxCLL is computed with a max() mathematical operator, it is possible that the true CIE Y Luminance value is less than the MaxCLL value. This situation may occur when there are very bright blue saturated pixels in the stream, which may dominate the max(R,G,B) calculation, but since the blue channel is an approximately 10% contributor to the true CIE Y Luminance, the true CIE Y Luminance value of the example blue pixel would be only approximately 10% of the MaxCLL value.
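The gap between max(R,G,B) and true CIE Y that the excerpt describes is easy to verify numerically. A small sketch, assuming BT.2020 luma coefficients (under which blue contributes ~6% of luminance; the spec's "~10%" figure corresponds roughly to BT.709):

```python
# MaxCLL is computed per pixel as max(R, G, B) in linear light,
# while true CIE Y weights the channels by the luma coefficients
# of the encoding primaries (BT.2020 assumed here).
def max_cll_contribution(r, g, b):
    """Return (max(R,G,B), true luminance Y) for a linear RGB pixel."""
    # BT.2020 luma coefficients
    y = 0.2627 * r + 0.6780 * g + 0.0593 * b
    return max(r, g, b), y

# A fully saturated blue pixel dominates max(R,G,B) but contributes
# only ~6% of that value to true luminance under BT.2020.
m, y = max_cll_contribution(0.0, 0.0, 1.0)
print(m, round(y / m, 4))  # → 1.0 0.0593
```

So a title full of bright saturated blues can report a MaxCLL well above the brightest luminance a meter would actually measure.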

madshi 02-05-2018 02:44 AM

Quote:

Originally Posted by Manni01 (Post 55635214)
1) Do you read the static metadata of the content and set the hard clipping point to the maximum brightness of the content (falling back to 4000nits if the metadata is invalid, but you might want to make this a user variable because some might want to clip lower in that case) over the whole program in your BT2390 implementation? In other words, do you set 100% to 1,000nits, 1,100nits or 4,000nits to start with (hence never use a theoretical value of 10,000nits in your calculations)?

Of course I do. Using SMPTE 2086 metadata (mastering display peak luminance + gamut), though, not maxCLL, because maxCLL is very often zero.

Quote:

Originally Posted by Manni01 (Post 55635214)
2) When you analyse a frame, do you assume 100% is the max_brightness above, or 100% is the peak of the frame (or a rolling average to avoid flicker)?

Measuring the frame would serve no purpose if I didn't also use the measurement for tone mapping, so the measurement (if active) replaces the metadata. Of course you have the option to enable/disable measurements. Measurements use a rolling average to avoid flickering, and measurements below 600 Nits are capped to 600 Nits to avoid overly large tone-mapping fluctuations.
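The behaviour madshi describes (rolling average plus a 600-nit floor) can be sketched as follows. The window length here is an assumption; the actual averaging window isn't stated in the thread:

```python
from collections import deque

class PeakTracker:
    """Rolling-average frame-peak measurement with a floor, as described
    above. The window of 24 frames is an assumption for illustration."""
    def __init__(self, window=24, floor_nits=600.0):
        self.floor = floor_nits
        self.samples = deque(maxlen=window)

    def update(self, frame_peak_nits):
        # Cap low measurements to the floor so the tone-mapping
        # target never swings below 600 nits, then average over
        # the last `window` frames to avoid flicker.
        self.samples.append(max(frame_peak_nits, self.floor))
        return sum(self.samples) / len(self.samples)
```

A sudden one-frame spike then nudges the tone-mapping target instead of snapping it, which is the anti-flicker property being discussed.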

Quote:

Originally Posted by Manni01 (Post 55635214)
3) Don't you think, doing this, that you are changing the intent of the grading by altering the relative brightness of the shots (not the frames) between each other?

BT.2390 tone mapping keeps the lower-Nits pixels identical, so most of the pixels below diffuse white are not modified by tone mapping at all. Based on that, I don't think we have to worry too much about modifying the intent of the grading. If you're super sensitive about this topic, then in theory doing any tone mapping at all could already be classified as modifying the grading intent to some degree. Optimizing the tone mapping to the measured luminance should mostly only affect the rendering of specular highlights. Of course, the lower the display's peak luminance capability, the more extreme the tone mapping gets, so for projector users this is a more relevant discussion than for flat panel users.

BTW, I would be surprised if the majority of HDR10+ titles would get hand tuned tone mapping parameters by the original movie makers. It will most probably be either completely automated, or at best somewhat hand tuned by the person doing the encoding, for the majority of titles.

Manni01 02-05-2018 03:04 AM

Quote:

Originally Posted by madshi (Post 55635256)
Of course I do. Using SMPTE 2086 metadata (mastering display peak luminance + gamut), though, not maxCLL, because maxCLL is very often zero.

Mastering display peak luminance on its own is pointless, as it will often specify 4000nits (the grading monitor capability) when the content barely goes above 100nits on some titles.

Yes, both max_brightness and MaxCLL can be invalid (zero); this is why I shared with you the algorithm I use to take that into account and fall back to 4000nits (or whatever value the user would like to use with invalid metadata). You need to first test max_brightness to isolate with certainty the titles whose content doesn't go above 1100nits, and use that as a ceiling. Then, if max_brightness is 4,000nits, use MaxCLL to ascertain the actual peak brightness of the content, again with a fallback to a default value if MaxCLL is invalid. Until you do this, disabling your frame measurements is not possible, as you're wrong too often.

Quote:

Originally Posted by madshi (Post 55635256)
Measuring the frame would serve no purpose if I didn't also use the measurement for tone mapping, so the measurement (if active) replaces the metadata. Of course you have the option to enable/disable measurements. Measurements use a rolling average to avoid flickering, and measurements below 600 Nits are capped to 600 Nits to avoid overly large tone-mapping fluctuations.

Thanks, it helps to know that you use a floor of 600nits. I know that I can disable the peak luminance measurements, but that's not what I would like (ideally). I'd like the option to not change the relative brightness of each shot, yet to optimize the headroom given to the highlights. In other words, change the soft clip point / start of the knee (and possibly the shape of the curve) according to the frame max_brightness, not the hard clip point (content max_brightness). I think that this, combined with the way BT2390 doesn't alter the lower nits (with a selectable diffuse white with more options than just 50/100) would give the most accurate (perceptually) optimized representation.

Quote:

Originally Posted by madshi (Post 55635256)
BT.2390 tone mapping keeps the lower-Nits pixels identical, so most of the pixels below diffuse white are not modified by tone mapping at all. Based on that, I don't think we have to worry too much about modifying the intent of the grading. If you're super sensitive about this topic, then in theory doing any tone mapping at all could already be classified as modifying the grading intent to some degree. Optimizing the tone mapping to the measured luminance should mostly only affect the rendering of specular highlights. Of course, the lower the display's peak luminance capability, the more extreme the tone mapping gets, so for projector users this is a more relevant discussion than for flat panel users.

BTW, I would be surprised if the majority of HDR10+ titles would get hand tuned tone mapping parameters by the original movie makers. It will most probably be either completely automated, or at best somewhat hand tuned by the person doing the encoding, for the majority of titles.

This thread is about optimizing HDR for projectors in MadVR, so yes, we're only discussing this.

I am trying to get the most accurate HDR10 representation given the limitations of our projectors and the lack of standard.

So what I would like is something as close as possible to what we would get with an OLED display in a living room, or (ideally) with a title mastered to Dolby Cinema (107 peakY) in a dedicated room. I don't think we need more fluctuations than necessary. We only need to make a title mastered to 1000, 1100 or 4000nits (or whatever value indicated by a valid MaxCLL) look like it would in a Dolby Cinema theatre if it had been mastered to 107nits instead.

We don't need a lot more than 100nits to get great results in a dedicated room. Our main problem is that we don't have the grading we should be getting. We're getting a grading designed for flat panels in a living room with ambient light when we would need the grading made for dedicated rooms with no ambient light (except an exit sign).

madshi 02-05-2018 03:23 AM

Quote:

Originally Posted by Manni01 (Post 55635276)
Mastering display peak luminance on its own is pointless, as it will often specify 4000nits (the grading monitor capability) when the content barely goes above 100nits on some titles.

Fair enough. I guess it can't hurt to use maxCLL as the ceiling, provided it's valid *and* lower than the mastering display peak luminance.

But really, what do we do if maxCLL is invalid (as it often is), and the content barely goes above 100nits, but still the mastering display had 4000nits? This is exactly why I implemented peak luminance measurements!

Quote:

Originally Posted by Manni01 (Post 55635276)
I'd like the option to not change the relative brightness of each shot, yet to optimize the headroom given to the highlights. In other words, change the soft clip point (and possibly the shape of the curve) according to the frame max_brightness, not the hard clip point (content max_brightness). I think that this, combined with the way BT2390 doesn't alter the lower nits (with a selectable diffuse white with more options than just 50/100) would give the most accurate (perceptually) optimized representation.

With "soft clip point" do you mean the position of the knee? Actually, the position of the knee is what will IMHO make the biggest difference to pixels below diffuse white. If the knee is below diffuse white, moving it will change the look of the image to some extent. The ceiling mostly only affects specular highlights. So isn't what you're asking for the opposite of what you want to achieve? Shouldn't we ideally even hard fix the knee to be at diffuse white (or at least not lower than diffuse white) and adjust the hard clipping point dynamically to the measured peak luminance? Doing that would guarantee that whatever peak luminance each frame has, all pixels below diffuse white would always look the same, so different peak luminance measurements could by definition only affect pixels above diffuse white.
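The roll-off both posters are arguing over is the BT.2390 EETF. A minimal sketch in normalized PQ space, with the knee position KS and the Hermite spline taken from the ITU-R BT.2390 report; feeding it a per-frame measured ceiling (as madshi proposes) is an assumption about the approach under discussion, not madVR's actual code:

```python
def bt2390_eetf(e, max_lum):
    """BT.2390 roll-off in normalized PQ space.
    e: PQ-encoded input in [0, 1]; max_lum: PQ-encoded target peak
    (assumed > 1/3 so that the knee KS stays positive).
    Pixels below the knee pass through unchanged; above it, a
    Hermite spline rolls off smoothly so the curve ends exactly
    at max_lum (the hard clip point)."""
    ks = 1.5 * max_lum - 0.5  # knee start per BT.2390
    if e < ks:
        return e              # below the knee: identity, grading untouched
    t = (e - ks) / (1.0 - ks)
    return ((2*t**3 - 3*t**2 + 1) * ks
            + (t**3 - 2*t**2 + t) * (1.0 - ks)
            + (-2*t**3 + 3*t**2) * max_lum)
```

Note that in this reference curve the knee follows the ceiling (KS is derived from max_lum), which is why moving the ceiling per frame also moves the knee, and hence why pinning the knee at diffuse white instead, as suggested above, would be a deliberate departure from the report.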

Manni01 02-05-2018 03:41 AM

Quote:

Originally Posted by madshi (Post 55635292)
Fair enough. I guess it shouldn't harm to use maxCLL as the ceiling, provided it's valid *and* lower than the mastering display peak luminance.

But really, what do we do if maxCLL is invalid (as it often is), and the content barely goes above 100nits, but still the mastering display had 4000nits? This is exactly why I implemented peak luminance measurements!

Again, I never said that relying on static metadata was foolproof. But at least it helps to be closer to reality 99% of the time.

Yes, occasionally you'll get a title with invalid MaxCLL, but most of the time it will have a valid Max_Brightness, so in the end we're talking about maybe 1% of titles where we default to 4000nits because we know we don't know. Not ideal, but that's precisely where your frame-by-frame analysis becomes priceless (as long as you don't cause more artifacts or grading-intent changes than necessary).

Again, here is my algo (with HDR10_Main = 4000nits hard clipping and HDR10_Optional = 1100nits hard clipping):

IF HDR10_Max_Brightness EQUALS 0 THEN HDR10_Main
ELSE IF HDR10_Max_Brightness BELOW OR EQUALS 1100 THEN HDR10_Optional
ELSE IF HDR10_Max_CLL EQUALS 0 THEN HDR10_Main
ELSE IF HDR10_Max_CLL BELOW OR EQUALS 1100 THEN HDR10_Optional
ELSE HDR10_Main
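For reference, the fallback chain above translates directly into code (a sketch; the field names, the 1100-nit threshold and the two hard-clip targets are the values from the pseudocode, not anything madVR exposes):

```python
def hard_clip_nits(max_brightness, max_cll):
    """Pick the hard-clip point from static HDR10 metadata.
    HDR10_Main = 4000 nits, HDR10_Optional = 1100 nits, per the post;
    a zero field means that piece of metadata is invalid."""
    MAIN, OPTIONAL = 4000.0, 1100.0
    if max_brightness == 0:      # invalid mastering peak -> safe default
        return MAIN
    if max_brightness <= 1100:   # low mastering peak is trustworthy
        return OPTIONAL
    if max_cll == 0:             # 4000-nit master, invalid MaxCLL
        return MAIN
    if max_cll <= 1100:          # MaxCLL says the content stays low
        return OPTIONAL
    return MAIN
```

The ordering matters: max_brightness is consulted first because it is valid more often than MaxCLL, and MaxCLL only refines the answer for 4000-nit masters.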

Quote:

Originally Posted by madshi (Post 55635292)
With "soft clip point" do you mean the position of the knee? Actually, the position of the knee is what will IMHO make the biggest difference to pixels below diffuse white. If the knee is below diffuse white, moving it will change the look of the image to some extent. The ceiling mostly only affects specular highlights. So isn't what you're asking for the opposite of what you want to achieve? Shouldn't we ideally even hard fix the knee to be at diffuse white (or at least not lower than diffuse white) and adjust the hard clipping point dynamically to the measured peak luminance? Doing that would guarantee that whatever peak luminance each frame has, all pixels below diffuse white would always look the same, so different peak luminance measurements could by definition only affect pixels above diffuse white.

Yes, I had edited my post to clarify that. It DOES make the biggest difference, and I never use a start point below 200nits, as you want to start the roll-off as late as possible to keep ST2084 as long as possible. But the more highlights you have in the content, the earlier you need to start it if you don't want to clip them. That's the whole point of a frame-by-frame analysis: you don't change the hard clip, you change the soft clip start point (and possibly the shape of the curve) according to the actual highlights in each frame. More highlights, more headroom (start the soft clip earlier, but never below 200 because we're talking content, not output); fewer highlights, less headroom (start the soft clip later, so you can follow ST2084 for longer and have a brighter picture without clipping highlights). But the hard clip point doesn't need to change, or you're potentially changing the relative brightness of the shots too much (I haven't looked at this specifically yet because there are other more important issues to resolve first).
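The proposal above might be sketched like this. The linear mapping from frame peak to knee position is purely illustrative; only the fixed hard clip, the 200-nit floor and the direction of the adjustment come from the post:

```python
def soft_clip_start(frame_peak, hard_clip, floor=200.0):
    """Choose where the roll-off begins for this frame (in content nits),
    keeping the hard clip point fixed for the whole title.
    More highlights (higher measured frame peak) -> earlier knee,
    i.e. more headroom; fewer highlights -> later knee, so ST 2084
    is tracked for longer. The linear rule below is a placeholder."""
    knee = hard_clip - frame_peak  # headroom grows with the frame peak
    # Never start the soft clip below 200 nits, and never above the
    # (fixed) hard clip point.
    return max(floor, min(knee, hard_clip))
```

So a 1000-nit title with a near-black frame would keep following ST 2084 almost to the top, while a frame full of 1000-nit highlights would start rolling off at the 200-nit floor, all without moving the hard clip.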

I'm not sure you distinguish between nits in the content and nits in the output. For example, when you process (not convert) and output metadata, you indicate for max_brightness the peakY value we specify in the parameters (say 125nits for me). This is completely wrong. The peakY value we specify is the max output nits; it has nothing to do with the max content nits. Since, when you process and don't convert, you're expecting the display to use an ST2084 calibration, you should report the content max_brightness, not the output max_brightness; otherwise the display will surely get it wrong if it uses the metadata to do its own tone mapping.

