
Improving Madvr HDR to SDR mapping for projector (No Support Questions)

#1 · (Edited)
Madvr HDR to SDR mapping: already great, soon even better for projector?

I am opening this dedicated topic for:

How to further improve the already great Madvr HDR to SDR Mapping with special focus on projector. :)

I am myself a very happy user of Madvr HDR to SDR tone mapping and have been using it for months with a great projector which is officially not compatible with HDR (Epson EH-LS10000).
What does it do:
- It makes every projector or TV compatible with HDR even if they were not initially "marketed" for that :)
- Madvr compresses the highlights dynamically: so you get dynamic HDR à la Dolby Vision or HDR10+. :eek::cool:
- You can choose your target color gamut according to your display: REC2020, DCI-P3 D65, Rec709, or even better a full 3DLUT calibration over thousands of points! (for example I chose DCI-P3 because my projector covers 100% of DCI)
- Madvr lets you choose the "HDR strength" vs "brightness" trade-off through a control called "this display target nits".
- The next best choice at the moment is the Lumagen Pro at 5000$++, but it does not enable dynamic HDR (yet)

What do you need:
- an HTPC with a recent graphics card

I hope that concentrating all the discussion and effort in one place will enable a faster solution.
The discussion has been going all over the place on avsforum lately.

I currently see four potential improvements through the discussion below:

1) Madvr could handle highlights even better through light clipping (not a bug).
--> Solution: clip only a certain % of the brightest pixels through use of a histogram?
--> Madshi says that Madvr follows SMPTE 2390 accurately. This would be a new feature, not a bug fix.
2) Madvr "probably" has an issue with color shift when using HDR to SDR mapping
--> Maybe not present if using a 3DLUT?
Madshi (2018-02-04) is sure that there is no color shift.
3) Madvr desaturates the colors which are above "this display target nits" EVEN if you choose "100% saturation, 0% brightness".
--> Bug?
--> Solved by Madshi 2018-02-04
4) Madvr does not yet use a method specific to projectors with low brightness
-->Multiplication factor for HDR curve? Inspiration from manni/Javs Arve HDR curve?
--> Will be implemented by Madshi in next build

Here are a few quotes trying to illustrate what could be improved/fixed in Madvr HDR to SDR mapping, with special focus on projectors:


Discussion in Guide Building 4K HTPC Madvr

https://www.avsforum.com/forum/26-h...ome-theater-computers/2364113-guide-building-4k-htpc-madvr-21.html#post55463634

Hi Madshi, happy new year! :)

I have been thinking lately while watching 4K HDR movies on my projector how to improve HDR "pop" on a low brightness projector.
Below is my quoted old idea, which you did not like for good reasons.

Based on that feedback and my user experience, here is my new idea:

1) A bit complicated but nice
I am using the HDR to SDR shader math mapping with a 300nits target, with dynamic compression of the highlights up to the peak luminance of each image.

What I have noticed is that very often only a few pixels in the image reach this peak luminance (let's say 1000 nits), while most of the highlights (let's say 90% of the pixels above 300 nits) are still below 600 nits.

So Madvr compresses the pixels between 300 nits and 1000 nits in order to NOT clip ANY pixels, even if there are only a few. But in doing so, most of the pixels are compressed heavily for the sake of those few.

I would propose to give the user a choice for setting a "dynamic clipping nits limit" as a "percentage" of the number of pixels above "this display target nits".
This "P = percentage clipping value" (in our example 90%) would be used like this:
- T = total number of pixels above "this display target nits" (in our example 300 nits)
- C = cumulated number of pixels above 300 nits, sorted by increasing nits
- When C/T = P = 90%, select the clipping value accordingly. (In our example, 90% of the pixels above 300 nits are below 600 nits.)

In other words, we are looking for the nits level of the P% quantile of the pixels above "this display target nits".

Use the new calculated clipping value instead of "the measured peak luminance".
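The proposed measurement can be sketched in a few lines. This is not madVR code, just a hypothetical illustration of the P% quantile idea, assuming the per-frame pixel luminances in nits are available as a NumPy array:

```python
import numpy as np

def dynamic_clip_nits(pixel_nits, target_nits=300.0, p=0.90):
    """Hypothetical sketch: find the nits level below which P% of the
    pixels above `target_nits` fall, and use it as the clipping point
    instead of the measured frame peak."""
    above = pixel_nits[pixel_nits > target_nits]
    if above.size == 0:
        return target_nits          # no highlights: nothing to compress
    # P% quantile of the highlight pixels = proposed clipping value
    return float(np.quantile(above, p))

# Example: 90 highlight pixels spread over 301-600 nits plus 10 outliers
# up to 1000 nits -> the 90% quantile lands near 600 nits, so only the
# 10 outlier pixels would be clipped.
frame = np.concatenate([np.full(900, 50.0),            # normal range
                        np.linspace(301, 600, 90),     # bulk of highlights
                        np.linspace(601, 1000, 10)])   # a few extreme pixels
clip = dynamic_clip_nits(frame)
```

With this frame, `clip` comes out near 600 nits rather than the 1000-nit frame peak, which is exactly the behaviour the proposal asks for.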

Advantages:

- Most of the pixels composing the highlights above the "soft clipping" point now have better differentiation, and the HDR effect will be more pronounced
- Good use of Madvr looking at each single pixel nits

Disadvantage: a certain number of pixels is now clipped, but it should be small enough not to impact the image quality negatively

2) Very simple but less nice
--> provide the user an input box to define a hard-coded "clipping limit".
You would then have:
1) the soft clipping point / this display target nits = 300nits
2) the clipping limit = 600nits for example
Everything above 600nits gets clipped.
Everything between 300 and 600 nits gets compressed up to the peak luminance of the said picture
Below 300nits: nothing touched
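As a sketch, this simpler variant is just a piecewise mapping. The numbers and the linear middle segment below are purely illustrative (a real implementation would compress with a smooth curve, likely in PQ space), and `out_max` is a hypothetical stand-in for whatever output headroom the display has:

```python
def hard_clip_map(nits, soft=300.0, clip=600.0, out_max=400.0):
    """Hypothetical piecewise mapping: identity below `soft`, linear
    compression of [soft, clip] into [soft, out_max], hard clip above."""
    if nits <= soft:
        return nits                  # below 300 nits: nothing touched
    x = min(nits, clip)              # everything above 600 nits gets clipped
    # everything between 300 and 600 nits gets compressed
    return soft + (x - soft) * (out_max - soft) / (clip - soft)
```

For example, with these numbers a 450-nit pixel maps to 350, and everything at or above 600 nits lands at 400.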


Of course you could implement both. :)

Also: could you add to the information displayed under "Ctrl+Y" so that we have these 4:
- chosen value for "this display target nits" (above example 300nits)
- average nits of the picture
- maximum peak luminance in nits (above example 1000nits)
- the nits level of the 90% quantile of the cumulated number of pixels, ranked by increasing brightness, between "this display target nits" and "peak luminance" (in the above example 600 nits)


Thank you a lot!
I know I am asking a lot and I hope Madvr V1.0 comes soon so that I can give at least a bit back to you. ;)


Best regards,
Florian
Not a bad idea at all. Two problems:

1) It's more difficult to implement than the current measurement because I'd have to measure a full histogram, which is harder to do with simple pixel or compute shaders than the current very simple measurement.
2) Any additional user option makes things harder to understand for the average user. Especially if the new option is cryptic. How would the average user understand "percent of the number of pixels above the display target nits"?

But I'll add this idea to my list of things to look at. Maybe I can come up with something that either needs no additional user options, or can be explained to the user in such a way that he's not confused.
Glad that you like it!:)
You could just put an option named "increase highlights HDR strength". :)
And use internally a hard coded value of 90% of unclipped "highlights" pixels.

As for the problem number 2 from Madshi, I would suggest an option called:

Clip highlight strength /HDR Strength:
- none (0%)
- low (10%)
- medium (25%)
- high (50%)
- Full (100%)
- User value: (open box where value can be set between 0 and 100%)

Color shift issue with Madvr HDR to SDR mapping WITHOUT 3DLUT? Solved with 3DLUT?
:eek::rolleyes::)
Ok. I thought you had already said in the past that it was DCI D65, so I calibrated my projector Epson EH-LS10000 with really excellent 2DLUT tracking.
However, yesterday I generated a 3DLUT SDR gamma 2.2 REC2020 to be even more precise. To my surprise, using "this display is already calibrated" set to DCI-P3 delivers a picture slightly warmer than the 3DLUT, pushing the red and green slightly more. Therefore I concluded that somehow I was mistaken and you meant DCI-P3 D63, which would explain what I observe.





Discussion in Projector Mini Shootout Thread

https://www.avsforum.com/forum/24-d...ors-3-000-usd-msrp/1434826-projector-mini-shootout-thread-623.html#post55607794

Zombie10K said:
I recently started experimenting with MadVR and the tone mapping on my HTPC (7700K + GTX 1080ti) and seeing a similar color shift that @Javs posted in the JVC thread. Is the MadVR thread on Doom9 the main area to post on this topic? Thx!!
Madshi said:
Nobody in the doom9 thread has complained about color shifts yet. So maybe AVSForum is a better place to discuss that, since there are now seemingly two users (Javs and you) who see a problem with madVR's tone mapping. However, it doesn't make sense to discuss this in different AVSForum threads. Not sure, should we create a new thread, or pick one of the existing threads to discuss that?
Madshi said:
Some things to think about:

1) The only way to truly know how the HDR Blu-Ray is supposed to look is to play it on a true 10,000 Nits BT.2020 display. (Ok, a 4,000 Nits DCI-P3 display would also do, if the combination of player and display is clever enough to *clip* both luminance and colors, because tone mapping is in that case not necessary: the movie's actual data fits into 4,000 Nits DCI-P3.)

2) Let's say you have a highly saturated red color which the HDR Blu-Ray has encoded with 4,000 Nits. And your display actually *can* do 4,000 Nits. No problems, right? Actually, BIG problem, because the display's peak Nits capability is for white, not for red. So what should a tone mapping algorithm do now? Should it make the pixel white? It could achieve the wanted 4,000 Nits, but the pixel's color/saturation would be completely lost. Or should the tone mapping maintain the full saturation/color, and lose all the Nits it can't handle? Then a significant amount of highlight punch & detail would get lost. So what should we do? In madVR you can choose. See option "fix too bright & saturated pixels by".
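The trade-off madshi describes can be made concrete with a toy example. This is not madVR's actual code; it uses BT.2020 luminance weights in linear light, and the function names are hypothetical:

```python
# Toy illustration of the "fix too bright & saturated pixels" trade-off.
# A linear-light pixel exceeds what the display can do per channel.
R_W, G_W, B_W = 0.2627, 0.6780, 0.0593   # BT.2020 luminance weights

def luminance(rgb):
    r, g, b = rgb
    return R_W * r + G_W * g + B_W * b

def fix_by_reducing_brightness(rgb, cap):
    """Preserve hue/saturation, sacrifice nits: scale all channels down."""
    s = cap / max(rgb)
    return tuple(c * s for c in rgb)

def fix_by_desaturating(rgb, cap):
    """Preserve luminance, sacrifice saturation: blend toward a gray of
    the same luminance until the largest channel fits under `cap`
    (only possible when that luminance is itself below `cap`)."""
    y = luminance(rgb)
    m = max(rgb)
    if m <= cap:
        return rgb
    a = (m - cap) / (m - y)          # blend factor toward gray
    return tuple((1 - a) * c + a * y for c in rgb)

pixel = (4000.0, 0.0, 0.0)           # a 4000-nit saturated red
dim  = fix_by_reducing_brightness(pixel, 2000.0)  # stays pure red, but dimmer
pale = fix_by_desaturating(pixel, 2000.0)         # keeps luminance, washes out
```

Both results fit the display, but one has lost brightness and the other has lost color, which is exactly why there is no universally "right" setting.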

3) Let's say the studio uses a sub-optimal tone mapping algorithm for their 1080p Blu-Ray. Or maybe they use a custom tone mapping algorithm to achieve a specific look, or maybe (for high profile titles) they're even tuning it for each different scene. How then can you use a 1080p Blu-Ray as a reference for how a tone-mapped HDR Blu-Ray should look?

4) Every CE manufacturer has their own tone mapping algorithm, and they all look different. So who's right and who's wrong? Who can be accepted as reference? And all the others are then judged to have color shifts?

All that said, if you're not completely happy with madVR's tone mapping results, here are a few things you could try:

A) Try different values for the "fix too bright & saturated pixels by" option. See point 2) above. There's no right or wrong setting here, unfortunately. In some scenes one setting looks better, in other scenes another.

B) You could try to turn "preserve hue" on/off in madVR, or switch between "low" and "high" quality. FWIW, "high" quality is doing the tone mapping in ICtCp, which is Dolby's preferred color space for tone mapping.

C) You can use DisplayCAL to create 3DLUTs which include HDR -> SDR conversion (tone mapping). The 3DLUTs are calculated offline, by also using the ArgyllCMS framework, IIRC, so it should be very high quality. Maybe you prefer it over what madVR's pixel shader math does? Here's a test 3DLUT you can try, which doesn't do any actual display calibration, but just does tone mapping, nothing else. Tone mapping is done to 400 Nits and BT.709 gamut by this test 3DLUT:

http://madshi.net/DisplayCal400Nits709.rar

You can enter this in the BT.2020 slot when switching madVR to "convert HDR content to SDR by using an external 3DLUT". Of course you can create any tone mapping 3DLUT you want (different nits values and gamuts) with DisplayCAL.
Javs said:
Thanks Madshi. I will respond to this in some more detail today and provide images to show my results clearly. Just to nip something in the bud right now, yes, I have a 1400-nit-capable HDR TV to use alongside my JVC. They both display Mad Max in true HDR mode in an extremely comparable manner regarding highlights, and the colour of those highlights.

Ignoring the white highlights in Mad Max specifically for a moment, it also appears when I view a colour clipping test pattern from the Masciola suite: the colours go from saturated at the low end all the way to white at the high end, when they should stay a constant colour. I believe these two things are related. I will post a photo of this shortly.

I have also tried a bunch of the preserve hue combinations. I will post the differences soon. I was able to get it very close. But the colour of the highlights seemed to be compromised.

There is also a general colour shift in the image to a different hue/saturation overall. It's slight. But it was very obvious in the mad Max shot since it was a strong red. I saw the same exact colour shift on my Samsung HDR tv as the JVC.

Both are fairly well calibrated at least to a comparable state, the JVC is very well calibrated. So comparing the content on my HDR curves to the tone mapping is showing a shift in colour on both of my displays.

I have set madvr to say the display is calibrated to bt2020 and gamma 2.4, I am assuming that's the correct thing to do when I have a display I know has been calibrated to rec2020 such as my JVC? I am totally hoping this part is user error as there seems to be a few things to set up here.

Thanks, I am sure we can get to the bottom of this and I can kiss my curves goodbye.




Discussion in JVC topic


https://www.avsforum.com/forum/24-d...ial-jvc-rs600-rs500-x950r-x750r-x9000-x7000-owners-thread-947.html#post55542012
Stanger, would you share the settings you are using for highlight information in the HDR to SDR conversion settings?

Have you had a look and compared what it's doing to highlights and colours vs the normal HDR curves?

I have found the MadVR conversion to actually negatively affect the highlight information and colour overall when it's doing tone mapping, and the information cannot be restored to a level that matches any of the custom HDR curves.

To not have compromised highlights you MUST use the Restore Hue settings, and there is no combination of settings there that truly restores the correct highlight gradation and colour to the areas it affects. I tested this at length and took quick photos of it too; it gets really close, but not close enough. You can also see what it's doing to the image if you look at HDR colour clipping test patterns, which illustrate what happens in brighter highlights that contain colour information.

The inconsistencies are very visible in Mad Max.

There is also a notable and very clear colour shift and oversaturation overall when using it vs the calibrated BT2020 HDR curves. And yes, before you ask, I am most certainly telling MadVR that I have a calibrated BT2020 display when I am using the conversion. Yet there is still a colour shift, notably red, which is quite obvious.

The only way I could see around this would be to create a 3D LUT using the HDR to SDR conversion, measuring the Rec2020 primaries while the conversion is in place, in order to measure and correct any colour shift that is occurring.

Just quickly, this is what it looks like:

Normal HDR Curves:

Image


HDR to SDR Conversion with BT2020 calibration selected at various nit level outputs (All other calibration settings including Rec709 and P3 still produce the wrong colours)

Image


Image


And to see what it does to test patterns in the best preserve-hue mode for getting highlights to look correct:

Image


When it should look closer to this:

Image


Current MadVR Settings, tried all of them:

Image


Tried every combination of settings on this page.

Image



Thoughts?

p.s. If anyone such as Manni jumps in here to tell me I am incompetent and I am doing it wrong, perhaps take a moment to think about posting a useful comment that actually helps solve this issue rather than going nowhere.
https://www.avsforum.com/forum/24-d...ial-jvc-rs600-rs500-x950r-x750r-x9000-x7000-owners-thread-955.html#post55609904

I still haven't had a chance to look at the Radiance or the Oppo, so I can't comment on that, but I have since seen issues in MadVR and I've asked Madshi to make a few changes so that I have a chance to identify exactly what the problem is, i.e. whether it's calibration related or MadVR related. I've also had issues selecting calibrations with MadVR, so for now I am back to custom curves.

For me the Vertex switching automatically to 2-3 curves according to MaxCLL/MaxBrightness is still a better solution, but I hope that Madshi will be able to make the requested changes so that I can get back to diagnosing the HDR to SDR conversion in MadVR more precisely, and we can get a fully automated solution with MadVR.

Sure, I did write that at the time, but since then I have also posted that I was experiencing issues and that I needed more time (and changes in MadVR to properly test the origin of the issues).

Ideally, especially until the external command works reliably, we would need:

- MadVR/GPU/OS reporting SDR BT2020 in the HDMI stream, so that the Vertex can select the correct calibration automatically. I know that you can't do this without a custom API but given the current state of the external commands and also the fact that many of us don't have the HTPC as our only source, we will need this at some point.
- MadVR to display the active 3D LUT to know exactly what MadVR is doing behind the scenes when doing the conversion.

I am seeing various issues, especially with highlight compression, and as far as I can see MadVR isn't as adaptive as I thought it was initially. But before reporting anything and wasting your time, I'd like to rule out calibration issues, so I'll resume my tests as soon as we get the active LUT reporting in MadVR. I also had issues with some of the 3D LUTs created with Calman (weird posterization), so I need to investigate that as well.

At the moment, it looks like MadVR doesn't adapt the curve to the reported MaxCLL, so it works great at one end of the spectrum or the other, but it doesn't work as well for all titles if you want it to improve the low end. Similar to the Oppo, apparently.

It would be great to have a way to test MaxCLL/MaxBrightness in profiles, or at least one threshold (possibly more) in the HDR to SDR conversion settings to specify different peakY values depending on content.

The big plus of MadVR (and something that the Radiance can do but not the Oppo) is its ability to provide perfect calibration with its 3D LUTs. Unless/until I can get this to work as it should, the custom curve(s) remain a better option, at least here.
Why would I do that, when I can actually measure the brightest pixel in each frame myself and adjust the tone mapping curve to that (which is what I do)?
I have also discovered MaxCLL is not always accurate even when it is reported, so I agree that's a far superior method. Spider-Man: Homecoming is an example, containing content nearly 4x higher than its reported MaxCLL.
I also thought that you would be doing this when I initially tested, but I can't get MadVR to improve the low end for low nits titles AND not crush the highlights for high nits titles with a single peakY value.

If I use 200-300, I get a better low end for 1100-nit titles, but the compression in the highlights is unacceptable for titles with content going significantly above 1100 nits (Mad Max, Pacific Rim, The Shallows, etc).

If I use 400 or more, I get better highlights resolution for high nits titles but there is no improvement in the low end of 1000nits titles compared to a good 4000nits custom curve.

Again, I need to do more tests, I was only correcting your statement as you were only quoting my initial reaction and not the reservations I've expressed since.

I also have issues with calibration so I'd like to rule these out before taking your time with reports.

I have very little time at the moment, so until I can do proper tests with enough information from MadVR about what it's doing I've parked it.


madVR's tone mapping works like this: If you tell madVR the peak Nits value you actually measured for your display, all the pixels in the lower Nits range (ideally from 0-100 Nits) are displayed absolutely perfectly, in the same way a true 10,000 Nits display would show them. Tone mapping only starts somewhere above this lower Nits range. However, we can't simply jump abruptly from 0 compression to strong compression, so the tone mapping curve needs to start smoothly, otherwise the image would get a somewhat unnatural clipped look. Practically, if you set madVR's peak luminance value to 200 Nits, the tone mapping curve starts compressing pixels at 23 Nits. If you set madVR's peak luminance value to 400 nits, the tone mapping curve starts compressing at 75 Nits.

Of course projectors are rather dim, compared to flat panel displays. So if you actually tell madVR the proper Nits value you measured, tone mapping will be very heavy-handed, and you'll lose a lot of highlight detail. So naturally, you'll want to pretend that your projector has higher peak luminance capability than it really has. If you do that, tone mapping will relax a little, but the overall image (including the low end range!) will be darker than the UHD Blu-Ray disc encoding asks for.

This probably explains why you're not happy with either the 200 Nits or the 400 Nits setting? I suppose what you really want/need is some tricky tone mapping curve which doesn't compress highlights as much as it mathematically should, while at the same time making sure that the lower end is reproduced exactly as the UHD disc asks for? We're approaching the impossible here: Since your projector is so dim, a lot of compression is needed. So where should we compress? If you enter the measured Nits value, the top end is heavily compressed. You don't like that. If you enter a much higher than measured Nits value, the bottom end becomes too dark (= compressed in a sense). You don't like that either. So if you don't like compression at the top end nor at the bottom end, where should we compress instead? I suppose we could try making the tone mapping curve flatter, so that it compresses more strongly in the mid range and less strongly in the top range, but doing that would probably take some life out of the picture...

Anyway, here's a quick test suggestion: madVR's "contrast" slider is a linear light S curve gamma modification. So you could try to enable gamma processing, use 400 Nits to get less compression, and then use the contrast slider to "help" the lower end. Maybe that's more to your liking? Doing it this way will move away from the scientific approach of madVR's tone mapping curve, though. But I suppose lying to madVR by entering a higher than measured peak Nits value already gets rid of the science, anyway...
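The behaviour madshi describes, tracking the source exactly at the low end and rolling off smoothly toward the display peak, matches the shape of the BT.2390 EETF referenced repeatedly in this thread. Here is a minimal sketch of that roll-off in the PQ domain. This is the published BT.2390 formula (minus its black-level lift term), not madVR's actual implementation, so the knee positions won't exactly match the 23/75-nit figures quoted above:

```python
# BT.2390-style EETF sketch in the PQ domain (no black lift, no gamut step).
M1, M2 = 2610 / 16384, 2523 / 4096 * 128        # SMPTE ST 2084 PQ constants
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def pq_decode(e):
    x = e ** (1 / M2)
    y = max(x - C1, 0.0) / (C2 - C3 * x)
    return 10000.0 * y ** (1 / M1)

def tone_map(nits, display_peak_nits):
    """Roll highlights off toward the display peak; values below the
    knee pass through untouched."""
    max_lum = pq_encode(display_peak_nits)
    ks = 1.5 * max_lum - 0.5                     # knee start, per BT.2390
    e = pq_encode(nits)
    if e <= ks:
        return nits                              # low end reproduced exactly
    t = (e - ks) / (1 - ks)
    p = ((2 * t**3 - 3 * t**2 + 1) * ks
         + (t**3 - 2 * t**2 + t) * (1 - ks)
         + (-2 * t**3 + 3 * t**2) * max_lum)     # Hermite spline roll-off
    return pq_decode(p)
```

With a 600-nit target, for instance, a 10-nit pixel comes back unchanged, a 10,000-nit pixel lands exactly at 600 nits, and mid-highlights are squeezed smoothly in between, which is the compression-placement dilemma discussed above in miniature.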
I understand all this, and this is why I'm saying that MadVR isn't as adaptive as it could/should be to get best results depending on the content for each title, at least with our projectors.

My display has 120nits PeakY (actual) in low lamp with 1750 hours on. I quickly saw that using the actual peakY was compressing too much, this is when I started raising the value. But then it seems impossible to find a single value that provides bright enough picture for 1000-1100nits titles (in a way that addresses the issue pointed by Kris Deering in titles such as The Revenant) and still resolve enough highlights that you don't clip visible information in titles with content going significantly above 1100nits.

What I can do with custom curves is use the equivalent of a 200nits value in MadVR for 1000-1100nits titles (titles with no content above 1100nits), and the equivalent of a 400nits+ value in MadVR for 4000nits titles (titles with content above 1100nits). The Vertex switches automatically between these two curves (in fact I'm playing with three curves at the moment, although I think two are probably enough).

My initial assumption was that MadVR would look at the content, and adjust the room left for the highlights according to said content. We don't need as much room for highlights with content under 1100nits, so we can ramp up the brightness of the curve to improve the low end and keep less room for highlights.

I have no interest in "tricks". The custom curves are mostly based on BT2390; they are just not adaptable the way MadVR could/should be if it were looking at MaxCLL and adjusting the value for PeakY accordingly, as it seems the current adaptive way based on frame analysis isn't able to address the above issue.

I guess you can't do this on a frame-by-frame basis without causing significant jumps in picture brightness, because the content wasn't mastered that way (it's not HDR10+ or DV). But you can get a fairly accurate estimate of the actual MaxCLL: first use MaxBrightness to identify whether the title has valid metadata and whether it could have content above 1100nits, then use MaxCLL to test again whether the metadata is valid, and if it is, use it as an indication of the content. If there is any uncertainty about the accuracy of the metadata, I fall back on 4000nits to be on the safe side.

This is the latest automatic algo I've asked HD Fury to implement in the Vertex for the V2.1 of the JVC Macro feature (HDR10-Main is a curve clipping at 4000nits, HDR10-Optional is a curve clipping at 1100nits with a higher diffuse white):

IF HDR10_Max_Brightness EQUALS 0 THEN HDR10-Main ELSE ** invalid metadata so select 4000nits curve
IF HDR10_Max_Brightness BELOW OR EQUALS 1100 THEN HDR10-Optional ELSE ** in that case we know the content can't be above 1100nits, so we select the 1100nits curve even if the maxCLL is invalid (it often is equal to 0)
IF HDR10_Max_CLL EQUALS 0 THEN HDR10-Main ELSE ** (title is a 4000nits curve with invalid MaxCLL, so I select the 4000nits curve to be safe)
IF HDR10_Max_CLL BELOW OR EQUALS 1100 THEN HDR10-Optional ELSE HDR10-Main ** (otherwise I use MaxCLL to select the 1100nits or 4000nits curve).
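Expressed outside the Vertex's limited rule syntax, the same selection chain reads as follows (a direct transcription of the rules above; the function name is hypothetical, the curve names are from the post):

```python
def select_curve(max_brightness, max_cll):
    """Transcription of the Vertex rule chain: fall back to the 4000-nit
    curve ("HDR10-Main") whenever the metadata can't be trusted; use the
    1100-nit curve ("HDR10-Optional") when the metadata proves the
    content can't exceed 1100 nits."""
    if max_brightness == 0:           # invalid mastering metadata
        return "HDR10-Main"
    if max_brightness <= 1100:        # content can't be above 1100 nits
        return "HDR10-Optional"
    if max_cll == 0:                  # 4000-nit master, MaxCLL invalid
        return "HDR10-Main"
    return "HDR10-Optional" if max_cll <= 1100 else "HDR10-Main"
```

Note the ordering matters: the MaxBrightness checks run first so that a title with an invalid MaxCLL of 0 but a low MaxBrightness still gets the 1100-nit curve.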

With the custom tabs, I can use the JVC maximum of three custom curves, and I select (at the moment) a curve according to MaxCLL that clips at 1100nits, 2200nits or 4000nits, following the algo above but testing MaxCLL for 2200nits as well. If you prefer, you could do 500nits, 1200nits and 4000nits, as the MaxCLL value has little bearing on the MaxBrightness value, but with the Vertex the algo would be less accurate, as there is a limited number of lines and conditions available in the custom mode, so I prefer to stick to values used in the mastering metadata info, even if it's not 100% accurate. I think the algo above will be a best-case scenario 99% of the time.

This is why I'm suggesting to add ways to test for MaxBrightness and MaxCLL in profiles in MadVR, or preferably ways to specify at least one threshold value (preferably two) so that MadVR can adjust the internal value of PeakY according to the actual MaxCLL of each title (using the algo above or similar), or let us specify the PeakY value we want to apply according to content if you can't find an algo to extrapolate this for projectors from the actual peakY (which would be, of course, the ideal way to implement this).

For example, if the algo above (or similar) was implemented in MadVR and we had the possibility to specify a PeakY value for 1100nits and 4000nits content, I would specify, for my actual peakY of 120, a value of 200nits for 1100nits content and 450 (possibly more) for titles with content up to 4000nits (or above). This would give a similar result to what I can get at the moment with custom curves, using the Vertex to select automatically the correct calibration according to content. Ideally, we would want to support a few more MaxCLL thresholds, such as 500nits and 2200nits.

Again, I've parked my tests in MadVR at the moment due to lack of time. MadVR works great in passthrough mode with 2/3 custom curves and the Vertex to switch automatically between the curves according to the actual content, not only with the HTPC as a source but with any other source and any other calibration (3D, x.v.color, HLG BT2020, HLG REC709, SDR film, SDR TV, many of which are not supported by MadVR/LAV). I want to first establish that the calibration is correct and assess whether some of the issues I'm seeing are calibration related or MadVR related before fine-tuning the peakY value I need according to content and providing you with a detailed report. The earlier you find the time to implement this, the faster I can resume my tests and resume this discussion with you, probably in the doom9 thread rather than here as most of it isn't JVC specific (and probably too technical and boring for most of the readers of this thread).

I have run a full autocal of my PJ a couple of days ago to calibrate my SDR Rec-709, SDR BT2020 and HDR10 baselines and I've created new 3D LUTs (rec-709 profile which I use to also create PAL and NTSC LUT and SDR BT2020 profile which I use to also create a DCI-P3 LUT). I'm only waiting for MadVR to report the active 3D LUT in the OSD to resume my testing.

Madshi I would appreciate if we don't start a long discussion here and now. As I said, I need to resume testing once I've ruled out any potential issue with calibration in MadVR. Once I'm 100% sure that the correct 3D LUT is applied, I'll provide more feedback and we can move on. :)
IMHO the best (most scientific) approach would be to tell madVR the true measurement of your display's peak Y value, and then leave the rest in madVR's hands. That's the only way madVR even has a chance to reproduce the important low end (e.g. 0-25 Nits) faithfully. If that doesn't produce good enough results right now, let's work together on improving that.

BTW, yes, adjusting tone mapping to the measured peakY frame by frame produces flickering, that's why I'm using a rolling average.
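A rolling average over the measured per-frame peak, as madshi describes, can be sketched like this (a hypothetical illustration with an assumed window size; madVR's actual averaging details aren't disclosed in this thread):

```python
from collections import deque

class RollingPeak:
    """Smooth the measured per-frame peak over the last `window` frames
    so the tone mapping target doesn't jump from frame to frame."""
    def __init__(self, window=48):
        self.frames = deque(maxlen=window)   # oldest peaks fall off the end

    def update(self, frame_peak_nits):
        self.frames.append(frame_peak_nits)
        return sum(self.frames) / len(self.frames)
```

A sudden jump in measured peak (say 100 to 1000 nits at a scene cut) then raises the smoothed tone-mapping target gradually instead of instantly, at the cost of reacting a few frames late.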
Agreed, but that produces unacceptable clipping even for 1100nits titles, so not an option in practice.

We can't be 100% scientific with projectors, there are too many variables and no satisfying standard. :)

Looking forward to doing more testing when you have the time to implement the active 3D LUT report in the OSD.

If there is any way you could get your hands on a custom API allowing you to get the GPU/OS to report the correct content info in the HDMI stream (especially SDR Rec-709 or SDR BT2020), this would be greatly useful to automate the calibration changes so that the optimal baseline calibration is always selected (SDR Rec-709 or SDR BT-2020). Even if you manage to fix the external command in MadVR, we will have to use IP control (this is what Arve's tool uses), which means that we won't be able to use iRule, Roomie and other apps also using IP to control the PJ, as it will conflict with them. The Vertex uses RS-232, so it works perfectly in parallel with iRule, Roomie etc. I know it's a niche problem, but for us projector owners with dedicated rooms it's a very important point.
We haven't even seriously tried yet! I've only implemented one general scientific tone mapping curve, without any real tweaks/modifications aimed at projectors yet. Please don't give up so quickly! Let's try to achieve the best scientific solution first, and only after we've seriously tried that and failed, it would be time to admit defeat and look for alternatives.
Not giving up at all, but unless we can input the actual peakY and dynamically adjust the internal PeakY according to content, I don't think there will be a satisfying result for all titles. The custom curves I'm using are 100% scientific, they do follow Bt2390, it's just that they are optimized according to content in order to get the best of both worlds. They are not "pie in the sky let's tune to taste so that it looks like I want" curves. Not at all. My understanding is that Arve is using a BT2390 formula with a few more tweaks, but you can decide where to hard clip and use the available range in an optimum way.

EDIT: I thought there was a parameter for max_brightness of the content (distinct from the max_brightness of the display) in BT2390. Did you try feeding it with MaxCLL (once you have ascertained MaxCLL was valid, as per the algo above or similar)?
If madVR has all the necessary information (truthfully measured peakY of your display, measured peakY of each video frame), then why should it not be possible for madVR to achieve a satisfying result for all titles? madVR already has all that info. Probably just some tweaks to the tone mapping parameters are needed, that's all.

I'd say, once a new madVR build is out and you find more time for testing, maybe you can make a couple of very small movie samples available to me that showcase what is not to like with the current tone mapping behaviour, if you input the true measured peak luminance of 120nits, and then I can try tweaking the tone mapping behaviour accordingly. Let's see how far that takes us.
Because when you truthfully measure the peakY of each frame, you can't adjust as much as you should without producing flicker (too wide variations in brightness), as you said yourself. You have no way, when you start playing the file, of knowing whether the content will shoot above 1100 nits or not.

Did you see my edit above? I thought BT.2390 had a parameter in the formula for the max brightness of the content. That might be what's missing? Could you share the parameters you have implemented? It might be useful to make some of them accessible in the next build, at least during testing.

I don't think I'll be able to make short clips but if you send me the list of the titles you have I can give you examples from that list with timecodes/chapters. The Revenant, Mad Max, Pacific Rim, Deadpool, The Shallows, Batman vs Superman should be enough.

Happy to start with actual peakY, and I agree that it would be the ideal way to proceed, but I think you will need a checkbox for projectors in dedicated rooms. For displays with ambient light and a peakY of at least 600 nits, it makes sense to follow the curve up to 100 nits, because in that case reference white = 100 nits if you want to be correct. For projectors in a dedicated room, reference white = 50 nits, not 100 nits, so it doesn't make sense to do it the same way: following the absolute curve religiously isn't accurate or desirable. BT.2390 accounts for that in part, but not completely as far as I can see. Being able to specify (or ideally extract from the metadata) the max brightness of the content and the value for reference white (or a brightness factor) would help a lot to get better results with projectors specifically.

Looking forward to testing the next build :)
And as I said, I'm using a rolling average which nicely takes care of flickering. I see no use in maxCLL, because not all titles have it, because it might be incorrect, because it's probably static for the whole movie, and because I can measure myself frame by frame, which should be greatly superior. I should easily be able to provide all the benefits of HDR10+ with current HDR10 content.
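A rolling average like the one described could be as simple as an exponential moving average over the measured per-frame peak. A sketch of the idea (the alpha value is an illustrative assumption, not what madVR actually uses):

```python
class PeakSmoother:
    """Smooths the measured per-frame peak nits to avoid tone-mapping flicker.

    alpha controls how quickly the tone mapping reacts to brightness changes;
    0.5 here is purely illustrative, not madVR's actual parameter.
    """
    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha
        self.smoothed = None

    def update(self, frame_peak_nits: float) -> float:
        if self.smoothed is None:
            self.smoothed = frame_peak_nits   # first frame: no history yet
        else:
            # move a fraction of the way toward the new measurement
            self.smoothed += self.alpha * (frame_peak_nits - self.smoothed)
        return self.smoothed
```

A real implementation would presumably also reset the average on scene cuts, so the curve can adapt instantly at a cut without visible pumping within a scene.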
One thing to consider is that we probably don't want to "faithfully" reproduce anything on a projector. SDR is assumed to be 100 nits, however with projectors we generally calibrate it to half that. The best luck we've had with HDR is to calibrate to some factor/fraction of what ST.2084 actually calls for. Perhaps what we need is to be able to tell madVR our true peakY and also some scale factor to apply.

The "problem" I noticed with low (more "accurate") PeakY values in madVR is that the overall image is way, way, way too bright. I started out with peakY set to 200, while my projector can really only do about 100. The result is that the mid tones seemed probably twice as bright as an SDR image, when they're really supposed to be about the same. I can't remember if I've got it at 300 or 400 now, but HDR still seems brighter than SDR, and I'm not talking highlights, but the overall tone.
Fair enough. I was planning to add some more controls to allow users to "fine tune" the tone mapping. Some sort of "scale factor" would be an option, I guess.
Perhaps there's a better way, I'm not really sure what Lumagen or Oppo are actually doing, they seem to both get rave reviews, but it seems like we should acknowledge that with projectors we don't need to try to retain (for example) medium gray (18%/18 nits SDR and 18 nits on a flat panel), when we'd normally calibrate a projector to about half that.
Well, I do like to do things scientifically (if possible). So if the UHD Blu-Ray asks for a pixel to be displayed with 20 Nits, I would like to achieve that, and madVR actually does. However, as you say, due to projectors being rather dim, and because they're usually used without any ambient light, the rules are somewhat different, so some sort of scaling factor does make sense. What I don't want to do is to throw science completely out the window and just hand draw some curve which seems pleasing for some titles.
That's a fair point. There are a few scientific hints for what a projector should do in the SDR realm (i.e., 100 IRE is defined as half as bright on a projector as on a flat panel), and that points at a standard for adapting definitions created for HDR on flat panels to HDR on projectors (e.g., that diffuse white should be half as many measured ftL on a projector in a dark room to look "equivalent" to a flat panel in a living room/lounge).
Ok, I suppose I could add a "diffuse white target" option which defaults to "flat panel -> 100 nits". Would it make sense to design the option like that?

If so, and if you then change that option to "projector -> 50 nits", I should probably simply half all pixel nits values? E.g. if then the UHD Blu-Ray asks for a pixel to be 30 nits bright, I should reproduce it with 15 nits instead? Or would the "scaling factor" be non-linear in some way?


Haha, yeah, I'm mad(shi). That's why all my products start with "mad"... ;)
https://www.avsforum.com/forum/24-d...ial-jvc-rs600-rs500-x950r-x750r-x9000-x7000-owners-thread-956.html#post55614668

Oh, agreed, I don't like just going by hand/eye, if only because it invariably leads to not working "all the time". The closest thing to science we have to go on with HDR for projectors, is Dolby Cinema/Dolby Vision, which has a peak Y of 106 nits, which is basically equal to the diffuse white point of consumer HDR, but it still includes room for highlights.



I think that's a good option, that's actually how I use Arve's tool. I set the max brightness to my measured peak white, and then the reference white to something less than that.



That seems like a good starting point, except, you'd basically end up with exactly what Arve's tool gives us at low brightness levels, and this is what people are "complaining" about.
Is "diffuse white" or "reference white" a better name?


What complaints are there exactly? FWIW, the complaints could have many possible reasons. E.g. the gamma curve not having enough control/correction points, or the gamma processing having too low precision/bitdepth, or tone mapping screwing with hue or with saturation. None of these issues should apply to madVR (as far as I can say).

Some people may think that tone mapping just consists of applying some compression curve to the Y channel. Which is not true (if you want good quality).

Well, anyway, I'll add this option, and let's see what effect it will bring.
In the latest version of HCFR, diffuse white corresponds to the luminance on the ST2084 curve at 50% stimulus, which is 92~94 nits depending on the mastering display.
Initially it scaled everything down linearly for lower diffuse white, but that shifts the clipping point up for a fixed luminance. In the latest version (v3.5), the roll off point is kept fixed while the diffuse white is scaled.
I'm not sure I understand the comment about the clipping point for a fixed luminance. With "roll off point" you mean the compression curve "knee start", meaning the nits value at which the compression curve starts to do work? So you're saying that if the knee start is at e.g. 20 Nits with a 100 Nits diffuse white, then the knee still stays at 20 Nits, even if diffuse white is set to 50 Nits instead?
The attached figure illustrates what I was referring to (no tone mapping assumed).
Assuming a display with a peak luminance of 1000 nits, the ST 2084 EOTF (yellow line) hard clips at 75% stimulus. If we use a multiplier of 2 for the projector with linear scaling, the target EOTF shifts down to the dashed white line. At the same time, the hard clipping point shifts to 83%, essentially "wasting" the luminance in the upper end, as no stimulus above 75% will increase the output. A similar issue may exist when a tone mapping curve is applied, if the entire curve is scaled linearly.
My plan was to first convert the video to linear light (1.0 = 10,000 nits), then simply multiply with e.g. 0.5 to account for projectors targeting a lower diffuse white nits, and then apply tone mapping as usual, but in such a way that no luminance is wasted. So the brightest pixel in the frame (if it exceeds diffuse white) gets assigned 1.0 by tone mapping. Sounds ok to you?
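That plan can be sketched roughly like this, working in linear-light nits for clarity. This is an illustration only: the simple linear knee stands in for the real compression curve, and the `knee`/`display_peak` values are assumptions, not madVR's internals:

```python
def map_frame(pixel_nits, scale=0.5, display_peak=100.0, knee=50.0):
    """Sketch of the described pipeline (not madVR's actual code).

    pixel_nits   -- per-pixel linear-light luminance of one frame, in nits
    scale        -- diffuse-white factor (0.5 == "projector -> 50 nits")
    display_peak -- measured peak of the display, in nits (assumed value)
    knee         -- point below which pixels are left untouched (assumed)
    """
    scaled = [n * scale for n in pixel_nits]       # step 1: linear scaling
    frame_peak = max(scaled)
    if frame_peak <= display_peak:
        return scaled                              # frame fits: display as-is
    # step 2: compress only above the knee, so everything up to diffuse
    # white is untouched, and the frame's peak lands exactly at display_peak
    gain = (display_peak - knee) / (frame_peak - knee)
    return [n if n <= knee else knee + (n - knee) * gain for n in scaled]
```

With a frame peaking at 800 source nits (400 after scaling), a 100-nit display and a 50-nit knee, everything up to 50 nits passes through and the 400-nit peak lands exactly at 100, so no luminance is wasted.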
That sounds right.
I am not familiar with this approach. I didn't think the objective of tone mapping was to expand every frame to the peak luminance, even with a sliding average.
EDIT:
I believe the luminance at any input level should not be expanded beyond the "scaled PQ curve".
As an example, the luminance at 60% on the PQ curve is 240 nits. Assuming a multiplier of 2 is being used (diffuse white = 94/2 = 47 nits) for a display with 200 nits peak luminance, a 60% input should not be expanded beyond 240/2 = 120 nits. Put another way, the contrast ratio between any two points after tone mapping should not exceed the contrast ratio without tone mapping.
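The ST 2084 (PQ) figures quoted in this discussion can be checked directly from the standard's transfer function. A self-contained sketch of the PQ EOTF and its inverse:

```python
# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(e: float) -> float:
    """PQ signal value 0..1 -> absolute luminance in nits (0..10,000)."""
    p = max(e, 0.0) ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def pq_inverse(nits: float) -> float:
    """Absolute luminance in nits -> PQ signal value 0..1."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2
```

Plugging in the values from above: `pq_eotf(0.5)` comes out around 92 nits and `pq_eotf(0.6)` around 240 nits, matching the diffuse-white and 60%-stimulus figures quoted, and the "scaled PQ curve" bound is then simply `pq_eotf(e) / multiplier` for any input `e`.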
The goal of this logic is to already deliver today what HDR10+ plans to achieve in the future.


Yes, I detect if the frame's peak luminance (after multiplying with 0.5) is within the display capabilities. If it is, I completely disable tone mapping and simply display the content as is. Any luminance modifications are only done if the frame's peak luminance exceeds the display's capabilities.
I don't think this will work satisfactorily, unless I misunderstand what you're describing. Assuming the display's maximum luminance is 300 nits, and ignoring the multiplier:

- A frame containing 300 peak nits will be displayed as is;
- Another frame containing areas of 600 nits and a 2000-nit peak will be scaled down such that the peak becomes 300 nits or less (depending on the tone map), which means the 600-nit area will be displayed far dimmer than the other frame's 300-nit area.
Well, considering that diffuse white is supposed to be about 100 nits, all we're talking about here are HDR specular highlights. It's not like large areas of the frame should have 600+ nits pixels. I'd expect there to be only a few 600+ nits pixels, concentrated in small spots in the frame. Furthermore, the tone mapping compresses much more strongly at the top than in the middle. So while in the 600 + 2000 nits frame my algo would draw 600-nit pixels dimmer than in a max-600-nit frame, it should not be an extreme difference. Furthermore, the rolling average should prevent flickering if the 600-nit pixels stay constant and the 2000-nit pixels come and go.

The idea is that all pixels at or below diffuse white (which should really be the bulk of the frame's pixels) should stay the same regardless of the peak luminance of the frame, but if the frame's peak luminance differs, we can compress specular highlights more or less strongly to make the most out of the display's luminance capabilities.

If you think about it: If the whole movie has one sunlight scene with 4000 nits, but the rest of the movie is in the 600 nits range, do you really want the tone mapping to always reserve space for 4000 nits? That doesn't make too much sense in my book, and that's the whole reason why HDR10+ was created.
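To put illustrative numbers on that argument (the knee, display peak and compression shape here are assumptions for the example, not madVR's actual values):

```python
def knee_compress(nits, frame_peak, display_peak=100.0, knee=50.0):
    """Toy knee compressor: untouched below the knee, linearly compressed
    above it so frame_peak lands at display_peak. Illustration only."""
    if frame_peak <= display_peak or nits <= knee:
        return min(nits, display_peak)
    return knee + (nits - knee) * (display_peak - knee) / (frame_peak - knee)

# A 600-nit highlight when headroom is reserved for a static 4000-nit
# MaxCLL, vs. when headroom tracks the frame's actual 600-nit peak:
static_result = knee_compress(600.0, frame_peak=4000.0)   # ~57 nits
dynamic_result = knee_compress(600.0, frame_peak=600.0)   # 100 nits (full peak)
```

With a static 4000-nit assumption the highlight is crushed to roughly 57 nits for the whole movie; tracking the per-frame peak lets the same highlight use the display's full 100 nits, which is exactly the benefit HDR10+ is meant to provide.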
I suppose restricting the dynamic mapping to highlights will minimize the "pumping effect".
However, from what I've seen, many of the complaints about the custom curves are regarding "dark pictures", far more than lack of "sparkle" in specular highlights.
As a matter of fact, many Oppo users who rave about the bright pictures are using low lamp power with iris closed down, so obviously they are not looking at specular highlights.
Right, we need both:

1. Adaptive tone mapping to retain specular highlights without impacting the main content, especially
2. Better APL handling, particularly when it comes to shadow detail and color rendering

and they may not be related much (except that the solution to each shouldn't hurt the other).




Here an example of the best Arve Tool HDR Curve: manni / Javs:

Nice work on the v3 curves. Lots of options here; you can't go wrong with either yours or mine, it seems.

Ours are actually very similar if looking at the 107-nit curves, at least compared to my V1 curves, which had a shallow rolloff. I have since changed mine while looking at content clipping and ended up at my V2 curves.

Just for academic comparison.

Manni's are the light grey lines, mine are the darker black lines. As for the shadow detail and midrange of the curves, they are almost completely identical to each other until the highlight rolloff. Interesting.

Those of you using my curves already, looks like we have been looking at mostly the same image in regards to shadow detail and mid tones for some time now.

107Ynit / 4000nit Manni DVE / 4000nit Javs V2 curves:

Image


107Ynit / 1100nit Manni DVE / Javs 1200nit V2 Curves, this one is slightly different in the mid tones, it will be ever so slightly brighter. Manni's 1100nit curve is actually probably rolling off at closer to 1500 nits.

Image


Those of you that used to run my V1 curves: they had a much more shallow roll-off. Let's look at the 4000-nit one.

107nit 4000 Manni DVE curve / Javs 4000nit V1 Curves.

Image


If we want to further improve the shadow detail in The Revenant over either Manni's or my curves, we would need to lift the curves in the first 20 nits or so vs the rest... Only the fire and the brightest parts of the following image pass the ~300-nit mark. 90% of this shot sits well under 100 nits.

Image
 
#19,404 ·
I have a working mpv and madVR config that I don't hate, tone mapping HDR to SDR Rec.709 with a lower nit target for improved brightness on HDR content. However, my end goal is to try and get tone-mapped SDR in BT.2020 10-bit as the final output on my Nexigo Aurora Pro.

I'm trying to get the following:
a) improved overall brightness/pop with HDR content on a low-nit display/projector via lower-nit-target tone mapping (mostly achieved, other than tweaking nits/contrast and scaling algos)
b) to not lose the expanded color space that HDR brings by sending it to Rec.709, i.e. keeping BT.2020 (madVR only? ICC with mpv?)
c) to somehow bypass the Nexigo tone mapping to avoid double tone mapping (most of this thread is focused on Epsons/JVCs; I cannot manually set colorspace/HDR on-off on my Nexigo)

I've seen it mentioned time and time again to use ICC profiles as well; can someone expand more on this? I always thought ICC was more of a calibration, basically saying "when I told your monitor to output blue-123, it measured closer to blue-456, so do some math so that when I send blue-123 + ICC adjustment, it measures blue-123 on screen".

However, I see many folks mention it for changing color spaces and even in some cases tone mapping (from the OP), and maybe we need an "ICC thread: how to properly use it in adventures of tone mapping" for those of us who don't have a deep understanding. Thanks!
 
#19,405 ·
I have a working mpv and madVR config that I don't hate, tone mapping HDR to SDR Rec.709 with a lower nit target for improved brightness on HDR content. [...]
I posted a reply here - https://www.avsforum.com/posts/63888589/
 
#19,407 ·
My settings are stuck on Pluto 1 using the latest code drop during playback. I don't think that's the norm and I'm not sure how to correct/change it. The Apply button does not work (greyed out), so I can't make a change to the TM. Any ideas?
It happened to me last week and I had to run the install.bat procedure that is inside the madvr folder
 
#19,409 · (Edited)
Is it normal for every movie I watch with MPC-HC and MadVR to appear significantly darker than on low-quality streaming sites? Or are the streaming sites simply too bright?

The first image is from a random 1080p streaming site, while the second image is from a 93GB 4K HDR Remux mkv file using MadVR with the settings below.

View attachment 3742543
View attachment 3742544

View attachment 3742545
I've compared madVR tone mapping to the same non-HDR 4K and 1080p movies on streaming sites in the past, and they looked almost exactly the same.

The brighter one is correct; there is something wrong with your settings.
 
#19,412 ·
I'm trying to decide if madVR 4K HDR remux -> 1080p SDR display (Sony VW80 projector) is worth it over a native 1080p remux.
Is there any suggested clip/video scene/pattern to make the test?
Typically yes, you will notice better/sharper video going from 4K to 1080p with madVR; there's just more info to work with. The problem lies more with the content: some UHD releases are actually 2K upscales with digital noise reduction and other modifications, which, in my opinion, soften the image. In that case, many people prefer to watch the original 1080p release.
 
#19,413 · (Edited)
I'm trying to decide if madVR 4K HDR remux -> 1080p SDR display (Sony VW80 projector) is worth it over a native 1080p remux.
Is there any suggested clip/video scene/pattern to make the test?
Typically yes. There were a few early 4K releases where the HDR was a bit overcooked (e.g. some of the Bourne 4K releases), but as a rule of thumb, 4K HDR tone mapped and scaled to 1080p SDR will usually look better than the native 1080p SDR.
 
#19,414 ·
I am having issues with Mad Max: Fury Road, where flames are always overblown. I am using madVR 208 and have tried all possible combinations to avoid the issue. I am also having clipping issues with the initial scene of The Meg, when the protagonist enters the submarine: the red light background is a solid red. The only way to avoid the issue is to reduce the saturation to -23 (without desaturation enabled). Even setting desaturation to 50% or higher does not seem to solve the problem. Is it a limitation of madVR in this particular version? Could a solution be to provide madVR with an indication of the maximum nits achievable per primary colour, rather than a single value?
 
#19,415 ·
I am having issues with Mad Max: Fury Road, where flames are always overblown. I am using madVR 208 and have tried all possible combinations to avoid the issue. I am also having clipping issues with the initial scene of The Meg, when the protagonist enters the submarine: the red light background is a solid red. The only way to avoid the issue is to reduce the saturation to -23 (without desaturation enabled). Even setting desaturation to 50% or higher does not seem to solve the problem. Is it a limitation of madVR in this particular version? Could a solution be to provide madVR with an indication of the maximum nits achievable per primary colour, rather than a single value?
How is your display's baseline calibration tracking at 95-96-97-98-99-100% IRE?
 
#19,416 ·
How is your display's baseline calibration tracking at 95-96-97-98-99-100% IRE?
With white test patterns it is linear up to 100 IRE. I am not sure about the linearity or capability with colour test patterns. Using the DVS colour clipping test pattern, I am able to display up to 80% with all colours other than red, which stops at 73-75%. Hope this info helps.
 
#19,418 ·
Hi guys, I am thinking of upgrading from an MSI RTX 3080 Gaming Trio to an ASUS Prime RX 9070 XT, considering only smooth video playback using MPC-HC + madVR. Can AMD handle madVR, or would I face annoying dropped frames and have to stick with an NVIDIA 5070?
You want this thread:
 
#19,419 ·
You want this thread:
Sorry..
 
#19,420 ·
Would it be conceivable to select the gamma switch in steps of 0.01 instead of 0.05? Steps of 0.05 change the image too much, because sometimes it is not possible to calibrate precisely at 2.20, 2.25 or 2.30; for example, it could be at 2.22.

It would be particularly important for comparing madVR's HDR-to-SDR tone mapping settings while focusing on other parameters.

That's my best wish...