Improving Madvr HDR to SDR mapping for projector - Page 3 - AVS Forum
post #61 of 5932, 02-06-2018, 10:17 AM - madshi (AVS Forum Special Member)
Quote:
Originally Posted by markmon1 View Post
I'm going to throw a guess out there. I am guessing that if a display can do 10,000 Nits on a full white frame, that on a blue frame we'd be looking at around 1/3 that say 3333 nits. I'd expect those white frames to be made up of 3333 red + 3333 green + 3333 blue for 10000 nits white. Maybe this thinking is wrong?
Sorry, missed your post. It's actually 593 nits blue, 6780 nits green and 2627 nits red. It might seem a bit weird that green has such a large piece of the pie and blue such a little piece, but that's the way it is.

Which btw, is also a good reason to avoid subpixel panel alignments for the green channel. It's better to keep green untouched and adjust red and blue instead. Because the green channel is the most "important", so to say. Or in other words: Subpixel panel alignment makes the image a bit softer. So applying subpixel alignment on the green panel will cost more perceived sharpness than applying subpixel alignment on the red and blue channels.
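
(The three numbers above fall straight out of the BT.2020 luma coefficients, Y = 0.2627 R + 0.6780 G + 0.0593 B. A minimal sketch to verify the arithmetic; Python for illustration, not madVR code:)

Code:
# Per-channel peak luminance from the BT.2020 luma weights.
# Assumes a perfect 10,000-nit display and non-constant-luminance coefficients.
BT2020_WEIGHTS = {"red": 0.2627, "green": 0.6780, "blue": 0.0593}

def channel_peak_nits(channel, display_peak_nits=10_000.0):
    """Luminance emitted when only this primary is driven to full scale."""
    return BT2020_WEIGHTS[channel] * display_peak_nits

for ch in ("red", "green", "blue"):
    print(ch, channel_peak_nits(ch))
# red 2627.0, green 6780.0, blue 593.0 -- matching the numbers above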
post #62 of 5932, 02-06-2018, 10:50 AM - Soulnight (Thread Starter, AVS Forum Special Member)
Quote:
Originally Posted by Javs View Post
If you give white a value of 1 for the measured brightness rolling average, can you do the same for colours? But scale the known max luminance of that colour, so 593 nits blue in the case of 10,000nits encoded blue becomes 1?
Quote:
Originally Posted by madshi View Post
You mean we would map 10,000nits blue to 593Nits, and 9,000nits blue to a bit less than 593Nits? And 8,000nits blue to yet another bit less? Etc?

What would we do with the Red and Green channels then? Would we apply exactly the same mapping? So 10,000nits green would become 593Nits, too? Or would we map 10,000nits green to the max the projector can do for green, which is a respectable 6,780nits?
Javs may be on to something here.
Currently, Madshi, you keep the saturation of the "blue" pixels correct below the target white, but above it you add some green+red to make them brighter if needed, while losing saturation.
What if you did the opposite:
1) you analyse the picture: you say, ok, the max blue pixel is PeakY=500nits, for example.
2) this one you give 100% sat, 100% brightness on the display.
3) for the pixels which also have 100% saturation but lower brightness, like 300nits for example, you scale the brightness down, but not linearly, so you don't steal too much life out of the picture.

3 alternative) maybe you could even give the lower-brightness pixels with 100% sat the correct brightness but a lower sat instead?
I just believe that our eyes are tricked into thinking that more saturation means more brightness, EVEN if the total brightness is actually higher with a bit of white added to it.

Probably it would all be wrong, with a lot of light or life stolen from the mid-brightness part of the picture... except if we clip the highlights a bit; then we do not have to desaturate (or darken) the lower-brightness pixels that much.

Btw. could you let us try the idea in my post above:
https://www.avsforum.com/forum/24-dig...l#post55641392
post #63 of 5932, 02-06-2018, 01:27 PM - Javs (AVS Forum Special Member)
Quote:
Originally Posted by madshi View Post
You mean we would map 10,000nits blue to 593Nits, and 9,000nits blue to a bit less than 593Nits? And 8,000nits blue to yet another bit less? Etc?

What would we do with the Red and Green channels then? Would we apply exactly the same mapping? So 10,000nits green would become 593Nits, too? Or would we map 10,000nits green to the max the projector can do for green, which is a respectable 6,780nits?
Yeah so, for the bolded part, I am not 100% sure. We almost definitely would not apply exactly the same mapping, so that green would also be 593 nits; I feel like each respective value would scale, but they would all do it in the same fashion. Or maybe they do scale to the same figure? I think testing would reveal the answer to this.

Hmm,

Here is something that may help. My waveforms are actually, technically, in linear light if you look at them, though the scale of the waveform changes slightly. Here is that 10,000nit pattern:

First, here is a legend on how to read the waveform:

[image: waveform legend]
Now here is the colour information as it's encoded:

[image: encoded colour waveform]
Now like before, say we have a ~1000 nit film; that colour information will now look like this (keep in mind the waveform scale has now changed, so 1000nits is the ceiling, not 10,000):

[image: colour waveform at 1000nit scale]
Now, what I actually did to the above file is kind of rudimentary, but I realised it's actually on the right track: I increased contrast on the image in Photoshop, similar to how we might clip to the mastering level of a film on our projectors. The top of the waveform has readjusted itself to 1000 nits (because I used contrast to set a new ceiling), therefore each colour has a new max value. From what I am looking at, my brain tells me these would have to scale equally. If we scaled one colour differently, or with a different factor than another colour, we would simply have incorrect colours.

If a film has a mastering level of 1000nits, surely its encoded colour is predetermined to scale to it, so with the rolling average we could scale all colours the same way you are scaling white, yes? You give white a 1 because you detect 1000 nits content, thus you will give blue, green and red a 1 each, but 1 in terms of the larger 10,000 scale will equal its proper values, 593, 6780, 2627... So I guess with 1000nit content we have 100% blue at 59.3, green at 678 and red at 262?

Again looking at these waveforms, we can see that colour content exists up near 10,000 nits; it's not stuck at its true values of 593, 6780 and 2627 on the linear scale according to this. So, it almost seems like we already have scaling even at 10,000 nits?

Since we have blue content @ 10,000 nits, it seems the display knows it can only draw it at 593, yet it's right there on the waveform as 10,000... is it possible that when doing the tone mapping we are overlooking this and being too literal, and thus madVR is not quite interpreting what the true values need to be?

Here is what madVR does to the same pattern. This is from before the current build, but it should give you an idea of what's happening; it's using 400nit peakY and 50/50 Lum Sat:

[image: madVR tone-mapped waveform]
The colours on the earlier waveforms were kind of clumped together, green seemed to always be brighter, but they did all stay together and scale cleanly. Now, though, we have each colour going off in its own direction at different scales and different saturation and luminance values.

Any of this making sense?

post #64 of 5932, 02-06-2018, 02:42 PM - madshi (AVS Forum Special Member)
No time for full replies right now. But can you do a waveform like the last one, with 50/50 lum/sat, with "preserve hue" unchecked?
post #65 of 5932, 02-06-2018, 03:56 PM - Javs (AVS Forum Special Member)
Quote:
Originally Posted by madshi View Post
No time for full replies right now. But can you do a waveform like the last one, with 50/50 lum/sat, with "preserve hue" unchecked?
No probs, here are a bunch of them.

Preserve Hue Off - 50 / 50 Lum Sat
[image]

Preserve Hue Off - 0 / 100 Lum Sat
[image]

Preserve Hue Off - 100 / 0 Lum Sat
[image]

Preserve Hue Off - 30 / 70 Lum Sat
[image]

Preserve Hue On - 50 / 50 Lum Sat
[image]

Preserve Hue On - 0 / 100 Lum Sat
[image]

Preserve Hue On - 100 / 0 Lum Sat
[image]
post #66 of 5932, 02-07-2018, 05:57 AM - stanger89 (AVS Forum Addicted Member)
Quote:
Originally Posted by Javs View Post
Now, what I actually did to the above file is kind of rudimentary, but I realised it's actually on the right track: I increased contrast on the image in Photoshop, similar to how we might clip to the mastering level of a film on our projectors. The top of the waveform has readjusted itself to 1000 nits (because I used contrast to set a new ceiling), therefore each colour has a new max value. From what I am looking at, my brain tells me these would have to scale equally. If we scaled one colour differently, or with a different factor than another colour, we would simply have incorrect colours.
I want to reiterate this, because I'm not sure we're all realizing it, but there's a fundamental flaw in trying to get madVR to reproduce that test pattern the same way as a custom curve, which is designed to work "dumbly" (as in, it doesn't understand the content, it just does what it's designed to do). A custom curve will not know or care that there is content above 1000 nits; it just sets everything above "1000 nits" to 100% blindly. That's why, on those test patterns, everything past "1000 nits" looks the same.

madVR (and madshi correct me if I'm misunderstanding) is fundamentally different. It measures the max brightness of each frame (keeps a rolling average apparently, but that's not important in the case of a static test pattern). It sees that this test pattern has a max brightness (peak Y) of 10,000 nits, and so it tries to figure out how to fit that 10,000 nits of real information down into the very limited range that we told it it has to work with via the settings in madVR.
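
(A rough sketch of that measuring idea, with an assumed window size; this illustrates the concept, not madVR's actual algorithm:)

Code:
# Measure each frame's peak luminance and smooth it with a rolling average,
# so the tone-mapping target doesn't jump frame to frame.
# The 60-frame window is an assumption for illustration only.
from collections import deque

class PeakTracker:
    def __init__(self, window_frames=60):
        self.history = deque(maxlen=window_frames)

    def update(self, frame_peak_nits):
        """Feed one frame's measured peak Y (nits); return the smoothed peak."""
        self.history.append(frame_peak_nits)
        return sum(self.history) / len(self.history)

tracker = PeakTracker()
for _ in range(120):              # a static 10,000-nit test pattern...
    target = tracker.update(10_000.0)
print(target)                     # ...pins the smoothed peak at 10000.0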

As madshi says, and his screenshot examples show, there are tradeoffs; you can't just clamp content if you're trying to reproduce a 10,000 nit image with <300, or <100, or whatever we tell it we've got.

In other words, I think this test pattern is not useful for investigating what madVR is doing, because the fact that it has information out to 10,000 nits drastically affects what madVR's tone mapping has to do: it will try to retain all of that 10,000 nits of information.

Quote:
If a film has a mastering level of 1000nits, surely its encoded colour is predetermined to scale to it, so with the rolling average we could scale all colours the same way you are scaling white, yes? You give white a 1 because you detect 1000 nits content, thus you will give blue, green and red a 1 each, but 1 in terms of the larger 10,000 scale will equal its proper values, 593, 6780, 2627... So I guess with 1000nit content we have 100% blue at 59.3, green at 678 and red at 262?
I think the nit labels on that test pattern, for colors specifically, are a misnomer. They should really be percentages, or code levels. Blue is coded over a range of 64 to 960, as is green, and red. When we're talking white, 960 means 10,000 nits, but when we're talking red, blue or green individually, that's not the case. 960 for blue means (or more accurately, will be displayed as) 593 nits on an ideal HDR display, and 6780, 2627 respectively for Green and Red. It's still a 100% signal, and madVR won't be scaling it down to some lower value just because it's blue or green or red. madVR, or our custom curves, or our projectors don't care; they treat them all the same, 64 == 0% output, 960 == 100% output. It's in the analog/physical output (largely, though RGB gain/offset and CMS tweak it) where the real "scaling" is done to set each color to its real nit level.

It's really not correct to say 960 is a 10,000 nit signal for individual colors. Note how your waveform monitor doesn't list nits, it lists code values. Blue can absolutely validly have a code value of 960 (100%), but by the time it goes through the blue filter, and is measured, it will measure 593 nits instead of "10,000 nits" (on an ideal HDR display).
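
(To put numbers on the code-value vs. nits distinction, here is a sketch of the ST 2084 (PQ) EOTF; the 64-960 normalization follows the range quoted above and is an assumption for illustration, not a spec citation:)

Code:
# ST 2084 (PQ) EOTF: normalized signal in [0,1] -> absolute luminance in nits.
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_to_nits(signal):
    p = signal ** (1 / m2)
    return 10_000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

signal = (960 - 64) / (960 - 64)     # full-scale code value -> 1.0
print(pq_to_nits(signal))            # 10000.0 as a decoded code value...
print(0.0593 * pq_to_nits(signal))   # ...but a lone blue primary at full
                                     # scale still emits only ~593 nits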

Again, though (or maybe it's the first time I said this, can't remember if I actually posted this already or not), I'm not trying to say your concerns with real world content aren't valid, but I think focusing on test patterns is a distraction, kind of like trying to evaluate a DI with test patterns, or evaluate a projector's gamma response with a DI enabled.

You really can't use test patterns to evaluate dynamic functions that modify the image, especially ones that are content aware like madVR.

If we really wanted to use test patterns, we'd need a test pattern that was coded only up to 1000 or 4000 nits, or whatever master we want to simulate, but even then, madVR probably wouldn't respond the same way as it does with real content.
post #67 of 5932, 02-07-2018, 06:19 AM - Manni01 (AVS Forum Special Member)
I've already said it, but I'll say it again just in case it got lost (then I'll go back to lurking):

MaxCLL indicates the brightest sub-pixel of the content (max(R,G,B)), not the brightest pixel.

Also R. Masciola's patterns are mastered on a 1,000nits monitor. There is no content in them above 1,000nits.

post #68 of 5932, 02-07-2018, 09:30 AM - stanger89 (AVS Forum Addicted Member)
Quote:
Originally Posted by Manni01 View Post
Also R. Masciola's patterns are mastered on a 1,000nits monitor. There is no content in them above 1,000nits.
That's not what this shows:

[image: waveform of the pattern]

This clearly shows pixels above 1000 nits.
post #69 of 5932, 02-07-2018, 10:00 AM - Kris Deering (AVS Forum Addicted Member)
Quote:
Originally Posted by Manni01 View Post
I've already said it, but I'll say it again just in case it got lost (then I'll go back to lurking):

MaxCLL indicates the brightest sub-pixel of the content (max(R,G,B)), not the brightest pixel.

Also R. Masciola's patterns are mastered on a 1,000nits monitor. There is no content in them above 1,000nits.

Correct on MaxCLL, but the digital value of a subpixel doesn't tell you what that represents as a percentage of the image's luminance. Blue will never be more than 8% of the image luminance, so max blue and max green have the same digital value, but the green will contribute FAR more to the luminance of the image itself. This is why blue can look so different in HDR than SDR. With SDR, the max luminance level blue could EVER be is 8 nits!!


The mastering monitor has no bearing on what the max content in the image can be; HDR10 supports up to 10,000 nits. You can still code the image above the mastering monitor's ceiling; you just wouldn't see it on the mastering monitor itself unless you either tone map on the mastering monitor OR use a waveform monitor. Either can be done. But you can't say that, just because a mastering monitor with a max of X was used, no content mastered on that monitor can exceed X. This is why I've repeatedly said that the Display Max information should NOT be in metadata, as it creates too much confusion and tells you NOTHING about the material you're watching itself.


I think it is safe to say that in most usage cases so far, we haven't seen much content that actually exceeds the value of the mastering monitor. We do on some titles mastered on the Pulsar, but it is rare. And we don't know if those are legitimate values or something weird in the coding. The best usage case for what I talked about before was test patterns, which can obviously be produced artificially, so the display max means nothing. But displays that use that data for their tone mapping could get screwed up with something like the Masciola disc, because it has data that exceeds the Display Max.

post #70 of 5932, 02-07-2018, 10:00 AM - Manni01 (AVS Forum Special Member)
Quote:
Originally Posted by stanger89 View Post
That's not what this shows:

[image: waveform of the pattern]

This clearly shows pixels above 1000 nits.
Then he hasn't entered the metadata correctly. The metadata for his patterns say 1,000nits MaxCLL and 400nits MaxFALL. I would have expected the metadata to be accurate given that these are calibration patterns and that actual values are specified. So if this was an automatic pass, no content was measured above 1,000nits, and if it was manually entered then it was wrong.

post #71 of 5932, 02-07-2018, 10:18 AM - Manni01 (AVS Forum Special Member)
Quote:
Originally Posted by Kris Deering View Post
Mastering monitor has no bearing on what the max content can be in the image, HDR10 supports up to 10,000 nits. You can still code the image above the mastering monitor, you just wouldn't see it on the mastering monitor itself unless you either tone map on the mastering monitor OR use a waveform monitor. Either can be done. But you can't say that just because a mastering monitor with a max of X was used that no content on material mastered on that monitor can exceed X. This is why I've repeatedly said that the Display Max information should NOT be in metadata as it creates too much confusion and it tells you NOTHING about the material you're watching itself.
I've looked at hundreds of titles and it is not true that the metadata for the mastering display is pointless. Sure, it shouldn't be trusted on its own, because very often the content is far lower, but only exceptionally does the content exceed the capability of the mastering monitor, especially for 1000-1100nits titles. I counted maybe 2 or 3 such titles, and when MaxCLL is marginally above 1000-1100nits for these titles, MaxFALL is significantly below, which confirms that most of the actual content doesn't exceed 1000-1100nits.

The reason this value is important is that MaxCLL/MaxFALL are often invalid (0), but in that case Max_Brightness is often filled in (and 99% of the time accurate). This means that when MaxCLL doesn't give any info about the content, we can make an educated guess and select the correct curve 99% of the time using Max_Brightness as a fallback.

So while I agree that no metadata should be trusted blindly, until MaxCLL and MaxFALL are always filled and always accurate, there is a use for Max_Brightness to deal better with incomplete/invalid metadata in order to assess the content (in absence of a dynamic reading of the content as done by MadVR).
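
(That fallback logic, sketched as code; the names and defaults here are illustrative, not an actual madVR API:)

Code:
# Choose which static curve to apply from (possibly invalid) HDR10 metadata.
def pick_curve_peak(max_cll, max_brightness, default=1000):
    """Return the assumed content peak in nits for curve selection."""
    if max_cll > 0:           # valid MaxCLL: use the measured content level
        return max_cll
    if max_brightness > 0:    # invalid MaxCLL: fall back to mastering peak
        return max_brightness
    return default            # no usable metadata at all

print(pick_curve_peak(0, 4000))   # 4000 -> pick the 4,000-nit curve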
post #72 of 5932, 02-07-2018, 10:19 AM - Kris Deering (AVS Forum Addicted Member)
Quote:
Originally Posted by Manni01 View Post
Then he hasn't entered the metadata correctly. The metadata for his patterns say 1,000nits MaxCLL and 400nits MaxFALL. I would have expected the metadata to be accurate given that these are calibration patterns and that actual values are specified. So if this was an automatic pass, no content was measured above 1,000nits, and if it was manually entered then it was wrong.

This is kind of true (you just gotta LOVE HDR). Since these are test patterns, they actually don't NEED to have a mastering monitor, as they can be generated on a PC. BUT, he MAY have used a mastering monitor for some of the video examples/pictures, and those would have been mastered on one.


Another thing to keep in mind: the BDA said initially that studios should not use anything above 1000 nits for their display max, because consumer displays could not exceed that number. Some studios listened, some didn't. Universal did all their masters with the Pulsar (4000 nits) but reported display max as 1000. I guess an interesting question would be: were these titles mapped down so that nothing could actually exceed 1000 nits, or did they just change the metadata (which can all be user assigned/manipulated)?


If there is one thing that is abundantly clear about HDR, it is that while the concept of metadata sounds great, not a single studio has been reliable/consistent in their use of that feature. ALL of them have screwed it up at some point, making it unreliable for use by displays that rely on it. Now we have HDR10+ coming down the pipe that will include per-frame metadata, and somehow I'm supposed to believe this will be better?? They couldn't reliably do it for a SINGLE value, but somehow they are magically going to get it right for EVERY frame?? Let's see how that goes.
post #73 of 5932, 02-07-2018, 10:22 AM - Kris Deering (AVS Forum Addicted Member)
Quote:
Originally Posted by Manni01 View Post
I've looked at hundreds of titles and it is not true that the metadata for the mastering display is pointless. Sure, it shouldn't be trusted on its own, because very often the content is far lower, but only exceptionally does the content exceed the capability of the mastering monitor, especially for 1000-1100nits titles. I counted maybe 2 or 3 such titles, and when MaxCLL is marginally above 1000-1100nits for these titles, MaxFALL is significantly below, which confirms that most of the actual content doesn't exceed 1000-1100nits.

The reason this value is important is that MaxCLL/MaxFALL are often invalid (0), but in that case Max_Brightness is often filled in (and 99% of the time accurate). This means that when MaxCLL doesn't give any info about the content, we can make an educated guess and select the correct curve 99% of the time using Max_Brightness as a fallback.

So while I agree that no metadata should be trusted blindly, until MaxCLL and MaxFALL are always filled and always accurate, there is a use for Max_Brightness to deal better with incomplete/invalid metadata in order to assess the content in the absence of a dynamic reading of the content as done by MadVR.

Agree with you completely here Manni. In a perfect world, my comments would be valid. I totally understand that IN MOST CASES, MaxCLL will be below Max Brightness and I also understand that Max Brightness is sometimes all we have to go on. BUT, if HDR was actually handled properly, there would be ZERO need for the display information.


This is another reason why frame-by-frame real-time analysis is the best solution (or Dolby Vision, really, but that isn't happening for projectors any time soon), though with frame-by-frame analysis you still have to worry about pumping and artifacts similar to dynamic contrast/iris solutions.
post #74 of 5932, 02-07-2018, 10:27 AM - stanger89 (AVS Forum Addicted Member)
Quote:
Originally Posted by Kris Deering View Post
Now we have HDR10+ coming down the pipe that will include per frame metadata and somehow I'm supposed to believe this will be better?? They couldn't reliably do it for a SINGLE value, but somehow they are magically going to get it right for EVERY frame?? Let's see how that goes.
My only hope on that front is that HDR10 metadata is so basic it could easily be left to some poor lackey to enter manually. I mean, I can see the discussion: "It would cost how much to develop a tool to figure this out, vs. Bob who can enter the data in 5 minutes? We're going to do it manually." And of course, when you do things like this manually, you get the result we did.

By contrast, there is no practical way to do HDR10+ metadata manually; the only practical way to do it is with a tool, and if they use a tool, it should be pretty reliable.

But I'm totally with you; I don't really expect HDR10+ to make things better, and I will not be at all surprised if it makes things worse.
post #75 of 5932, 02-07-2018, 10:28 AM - Manni01 (AVS Forum Special Member)
I don't have much hope for HDR10+ or DV, but I do hope that MadVR will resolve its issues. Clearly, analyzing each frame is better than relying on metadata, provided that the processing that ensues is correct and doesn't produce worse results than a static curve selected using the metadata (with an algo to take invalid/incorrect metadata into account and get it right 99% of the time).

What MadVR does with 1,000nits titles is excellent. The problem (for me) is how it deals with highlights when the content shoots up significantly above that.

Regarding the pumping, I hope that Madshi implements a similar algo to the one I suggested so that we can disable the frame analysis if we don't like the results. Right now we can't, because MaxBrightness is pointless data on its own.

post #76 of 5932, 02-07-2018, 10:36 AM - Kris Deering (AVS Forum Addicted Member)
Quote:
Originally Posted by stanger89 View Post
My only hope on that front is that HDR10 metadata is so basic it could easily be left to some poor lackey to enter manually. I mean, I can see the discussion: "It would cost how much to develop a tool to figure this out, vs. Bob who can enter the data in 5 minutes? We're going to do it manually." And of course, when you do things like this manually, you get the result we did.

By contrast, there is no practical way to do HDR10+ metadata manually; the only practical way to do it is with a tool, and if they use a tool, it should be pretty reliable.

But I'm totally with you; I don't really expect HDR10+ to make things better, and I will not be at all surprised if it makes things worse.
Some insight: https://www.linkedin.com/pulse/hdr-1...carlos-carmona

post #77 of 5932, 02-07-2018, 02:28 PM - Javs (AVS Forum Special Member)
Quote:
Originally Posted by stanger89 View Post
I want to reiterate this, because I'm not sure we're all realizing it, but there's a fundamental flaw in trying to get madVR to reproduce that test pattern the same way as a custom curve, which is designed to work "dumbly" (as in, it doesn't understand the content, it just does what it's designed to do). A custom curve will not know or care that there is content above 1000 nits; it just sets everything above "1000 nits" to 100% blindly. That's why, on those test patterns, everything past "1000 nits" looks the same.

madVR (and madshi correct me if I'm misunderstanding) is fundamentally different. It measures the max brightness of each frame (keeps a rolling average apparently, but that's not important in the case of a static test pattern). It sees that this test pattern has a max brightness (peak Y) of 10,000 nits, and so it tries to figure out how to fit that 10,000 nits of real information down into the very limited range that we told it it has to work with via the settings in madVR.

As madshi says, and his screenshot examples show, there are tradeoffs; you can't just clamp content if you're trying to reproduce a 10,000 nit image with <300, or <100, or whatever we tell it we've got.

In other words, I think this test pattern is not useful for investigating what madVR is doing, because the fact that it has information out to 10,000 nits drastically affects what madVR's tone mapping has to do: it will try to retain all of that 10,000 nits of information.
The fact the curves are dumb is irrelevant; I don't want madVR to ignore pixels above a set point. I only scaled the test pattern back down to 1000nits to attempt to illustrate something to madshi. It seemed almost like madshi was of the opinion that saturation should only be full at 10,000 nits (thus impossible to tone map), when there is no actual content there. Please understand that if I had a pure 1000nit gradient I would use that instead. He seems busy right now, but I feel like we are on the right track to getting this sorted.

Are you trying to say that the colour saturation we were able to achieve in HDR is impossible in SDR tone mapping, even though we are using the same brightness levels? We are doing the same thing as before; there is just FAR more intelligent control over the curve, as it should now be dynamic. I don't hear the Oppo and Lumagen guys saying that colour is reduced; in fact, the opposite. @Kris Deering, can you check this on the Lumagen with any of these patterns and tell us if you are getting full saturation with the Lumagen tone mapping? If you are, then it's case closed and we need to find a way to sort this. I am not about to watch BT2020 content if the gamut is going to be cut in half, which is definitely what is happening here.

What happens if we measure BT2020 primaries using tone mapping through MadVR? We will have a DRASTICALLY reduced gamut, since the colours will all be poisoned with white and not at full saturation.

Let's look at a different ramp:

Tone Mapping:

[images: madVR tone-mapped ramp screengrabs]

Original Screengrabs:

[images: original ramp screengrabs]

Quote:
Originally Posted by stanger89 View Post
I think the nit labels on that test pattern, for colours specifically, are a misnomer. They should really be percentages, or code levels. Blue is coded over a range of 64 to 960, as is green, and red. When we're talking white, 960 means 10,000 nits, but when we're talking red, blue or green individually, that's not the case. 960 for blue means (or more accurately, will be displayed as) 593 nits on an ideal HDR display, and 6780, 2627 respectively for Green and Red. It's still a 100% signal, and madVR won't be scaling it down to some lower value just because it's blue or green or red. madVR, or our custom curves, or our projectors don't care; they treat them all the same, 64 == 0% output, 960 == 100% output. It's in the analog/physical output (largely, though RGB gain/offset and CMS tweak it) where the real "scaling" is done to set each color to its real nit level.

It's really not correct to say 960 is a 10,000 nit signal for individual colors. Note how your waveform monitor doesn't list nits, it lists code values. Blue can absolutely validly have a code value of 960 (100%), but by the time it goes through the blue filter, and is measured, it will measure 593 nits instead of "10,000 nits" (on an ideal HDR display).

Again, though (or maybe it's the first time I said this, can't remember if I actually posted this already or not), I'm not trying to say your concerns with real world content aren't valid, but I think focusing on test patterns is a distraction, kind of like trying to evaluate a DI with test patterns, or evaluate a projector's gamma response with a DI enabled.

You really can't use test patterns to evaluate dynamic functions that modify the image, especially ones that are content aware like madVR.

If we really wanted to use test patterns, we'd need a test pattern that was coded only up to 1000 or 4000 nits, or whatever master we want to simulate, but even then, madVR probably wouldn't respond the same way as it does with real content.
Actually, the waveform can read in nits if I purchase the full version of the software for $350-odd... The scopes are accurate; nothing in the readout is actually changed bar the labels.

[image: waveform with nits scale]
I think we need to separate measuring from encoded data here.

post #78 of 5932, 02-07-2018, 02:51 PM - madshi (AVS Forum Special Member)
Quote:
Originally Posted by Soulnight View Post
You probably mean you are using it to change the max value dynamically for the highlights right now. This is already very good.
But for projectors with low brightness, you really need the whole math to change dynamically to better use the low lumens output.
It's basically a similar idea to what current dynamic iris implementations in projectors do to increase in-picture contrast: if a dark image is detected, the projector closes down a mechanical iris, reducing brightness in order to improve black levels, while dynamically adjusting the gamma so that the rest of the pixels between white and black still keep the same brightness.
We do need to be careful not to harm the movie makers' intent, though. If we adjust too much, so that a dark cave scene suddenly has a similar average light level to a sunlight scene, that would be very wrong. So dynamic adjustments must be reasonable, not too extreme.

Quote:
Originally Posted by Soulnight View Post
I really believe we should use MadVR's real-time analysis of every frame to improve things through 2 different mechanisms, which would NOT be used at the same time:
- the highlights (max peakY): for pictures which have a peakY higher than the user-chosen "this display target nits". The target display nits remains unchanged here; madVR only works out the highlights dynamically.
- the whole picture for "dark scenes": if you have a peakY lower than the chosen "this display target nits", you lower this display target nits to the real peakY. There are no highlights anymore. Nothing is compressed. You just get a brighter, more "readable" picture with excellent shadow detail (the scene after the fire in Deadpool, where Deadpool rises from the ashes at night, is an excellent test).
--> and thanks to your rolling average method, there should be no flickering whatsoever, and it would always optimize best for low-brightness displays like our projectors.
I'm already doing that!!!

One problem here is, though, that if we "lie" to madVR about the display's peak luminance capability, all pixels will measure darker than the UHD disc asks for. So e.g. if you tell madVR that your projector can do 300nits, but it can only do 75nits, then each pixel in shadow detail will be 4x darker than the UHD disc asks for. Which is one important reason why I don't like the whole "lying". I would much prefer if we could tell madVR the true luminance capability. I understand that currently if we do that the image becomes too bright and compressed. So I need to find a solution for that. But once a solution is found, we should get excellent shadow detail for dark scenes, because I can completely turn tone mapping off for those scenes, then, and each pixel will measure exactly as the UHD disc asks for.
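
(What madshi describes could be sketched like this; assumed logic for illustration, not madVR's implementation:)

Code:
# If the measured frame peak fits within the display's true capability,
# tone mapping can be switched off and every pixel measures as encoded.
def effective_target_nits(frame_peak, display_peak):
    if frame_peak <= display_peak:
        return frame_peak      # 1:1 reproduction, no compression at all
    return display_peak        # compress frame_peak down to this target

print(effective_target_nits(60.0, 75.0))     # dark scene: untouched
print(effective_target_nits(4000.0, 75.0))   # bright scene: heavy mapping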

Quote:
Originally Posted by Soulnight View Post
Javs may be on to something here.
Currently, Madshi, you keep the saturation of the "blue" pixels correct below the target white, but above it you add some green+red to make them brighter if needed, while losing saturation.
What if you did the opposite:
1) you analyse the picture: you say, ok, the max blue pixel is PeakY=500nits, for example.
2) this one you give 100% sat, 100% brightness on the display.
3) for the pixels which also have 100% saturation but lower brightness, like 300nits for example, you scale the brightness down, but not linearly, so you don't steal too much life out of the picture.
Yes. This is possible. BUT this would make the whole image dramatically darker. More details in my upcoming reply to Javs.

Quote:
Originally Posted by Javs View Post
Here is something that may help. My waveforms are actually, technically, in linear light if you look at them, though the scale of the waveform changes slightly. Here is that 10,000nit pattern:

First, here is a legend on how to read the waveform:

Now here is the colour information as it's encoded:

Now like before, say we have a ~1000 nit film; that colour information will now look like this (keep in mind the waveform scale has now changed, so 1000nits is the ceiling, not 10,000)
Thanks. Looks interesting. What you can see is that with "preserve hue" turned off, the various colors lose saturation at different "speeds"; e.g. blue loses it slowest. But with "preserve hue" turned on, saturation loss is more equal. This is important because, if you did the same thing to mixed colors (instead of primary/secondary colors), "unequal" saturation loss would mean hue shifts.

I've tried to understand/interpret the waveform graphs, but I'm not sure I can draw a lot of helpful information from those, to be honest. It's interesting to look at them, though.

Quote:
Originally Posted by Javs View Post
Again looking at these waveforms, we can see that colour content exists up near 10,000 nits; it's not stuck at its true values of 593, 6780 and 2627 on the linear scale according to this. So, it almost seems like we already have scaling even at 10,000 nits?
I'm not sure I understand your question?

Quote:
Originally Posted by Javs View Post
Since we have blue content @ 10,000 nits, it seems the display knows it can only draw it at 593, yet it's right there on the waveform as 10,000... is it possible that when doing the tone mapping we are overlooking this and being too literal, and thus madVR is not quite interpreting what the true values need to be?
Again, not sure what you mean?

Quote:
Originally Posted by stanger89 View Post
I want to reiterate this, because I'm not sure we're all realizing it, but there's a fundamental flaw in trying to get madVR to reproduce that test pattern the same way as a custom curve, which is designed to work "dumbly" (as in, it doesn't understand the content, it just does what it's designed to do). A custom curve will not know or care that there is content above 1000 nits; it just sets everything above "1000 nits" to 100% blindly. That's why, on those test patterns, everything past "1000 nits" looks the same.

madVR (and madshi correct me if I'm misunderstanding) is fundamentally different. It measures the max brightness of each frame (keeps a rolling average apparently, but that's not important in the case of a static test pattern). It sees that this test pattern has a max brightness (peak Y) of 10,000 nits, and so it tries to figure out how to fit that 10,000 nits of real information down into the very limited range that we told it it has to work with via the settings in madVR.
Very true, at least as long as you allow madVR to measure the peak luminance of the video frames. If you disable that feature, and if the SMPTE 2086 metadata says that the mastering display was 1000nits, then everything above 1000nits should also look the same with madVR.

Quote:
Originally Posted by stanger89 View Post
I think the nit labels on that test pattern, for colors specifically, are a misnomer. They should really be percentages, or code levels. Blue is coded over a range of 64 to 960, as is green, and red. When we're talking white, 960 means 10,000 nits, but when we're talking red, blue or green individually, that's not the case. 960 for blue means (or more accurately, will be displayed as) 593 nits on an ideal HDR display, and 6780, 2627 respectively for Green and Red. It's still a 100% signal, and madVR won't be scaling it down to some lower value just because it's blue or green or red. madVR, or our custom curves, or our projectors don't care; they treat them all the same, 64 == 0% output, 960 == 100% output. It's in the analog/physical output (largely, though RGB gain/offset and CMS tweak it) where the real "scaling" is done to set each color to its real nit level.
It's even more complicated because the UHD movie is YCbCr and not RGB. So you *can* have the Y channel requesting 10,000nits, and at the same time the CbCr channels can request pure 100% saturated blue. It's possible, and I think that is probably what this color bar test pattern does! I can't double check because this test pattern doesn't seem to be freely available, unfortunately.
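
(A small sketch of why that is possible, using the BT.2020 non-constant-luminance matrix on normalized signals; illustration only:)

Code:
# In YCbCr the luma code and the chroma pair are stored separately, so a
# pattern can combine a maxed-out Y' with fully saturated blue chroma.
Kr, Kg, Kb = 0.2627, 0.6780, 0.0593

def ycbcr_from_rgb(r, g, b):
    """BT.2020 NCL YCbCr from gamma/PQ-encoded R'G'B' in [0,1]."""
    y = Kr * r + Kg * g + Kb * b
    cb = (b - y) / (2 * (1 - Kb))
    cr = (r - y) / (2 * (1 - Kr))
    return y, cb, cr

print(ycbcr_from_rgb(0.0, 0.0, 1.0))   # pure blue: tiny Y', extreme Cb
# Nothing in the container forces that Cb/Cr pair to travel with this Y';
# a pattern author can pair it with a full-scale Y' instead.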

Quote:
Originally Posted by Manni01 View Post
I've already said it, but I'll say it again just in case it got lost (then I'll go back to lurking):

MaxCLL indicates the brightest sub-pixel of the content (max(R,G,B)), not the brightest pixel.
Quote:
Originally Posted by Kris Deering View Post
Correct on MaxCLL, but the digital value of a subpixel doesn't tell you what that represents as a percentage of the image's luminance. Blue will never be more than 8% of the image luminance, so max blue and max green have the same digital value, but the green will contribute FAR more to the luminance of the image itself.
What is kind of confusing to me is that MaxCLL is supposed to specify the brightest subpixel in *Nits*. But that means it can't simply be "max(R,G,B)", because that would be a codeword, but not a Nits number. So how is MaxCLL calculated, really? If it starts by doing a "max" operation on the RGB channel codewords, then MaxCLL is completely useless, because if the highest codeword happens to be from the blue channel, that means squat for how bright the brightest subpixel in Nits really is!

For tonemapping I really need to know the brightest *white* pixel. Which idiot had the bright idea to define MaxCLL as the brightest RGB subpixel, when all consumer video is actually YCbCr, and Y is what really counts?

Quote:
Originally Posted by Kris Deering View Post
If there is one thing that is abundantly clear about HDR, it is that while the concept of metadata sounds great, not a single studio has been reliable/consistent in their use of that feature. ALL of them have screwed it up at some point, making it unreliable for use by displays that rely on it. Now we have HDR10+ coming down the pipe that will include per-frame metadata, and somehow I'm supposed to believe this will be better?? They couldn't reliably do it for a SINGLE value, but somehow they are magically going to get it right for EVERY frame?? Let's see how that goes.
Ok, I didn't know metadata was so unreliable. Good thing madVR is able to automatically measure each frame's peak luminance.

Quote:
Originally Posted by Manni01 View Post
What MadVR does with 1,000nits titles is excellent. The problem (for me) is how it deals with highlights when the content shoots up significantly above that.
Maybe it would be worth a try to use the same 1,000nits curve also for 4,000nits titles? Currently madVR changes the whole tone mapping curve for 4,000nits titles. Of course using the 1,000nits curve means any data above 1,000nits would be clipped. I wonder how much of a problem that would really be?

Quote:
Originally Posted by Manni01 View Post
Regarding the pumping, I hope that Madshi implements a similar algo to the one I suggested so that we can disable the frame analysis if we don't like the results. Right now we can't, because MaxBrightness is pointless data on its own.
I have on my to do list to make use of MaxCLL, when available. But I do wonder how useful it really is, since it doesn't give me the Y channel Nits, but the Nits of the brightest subpixel. For tonemapping I need to know the brightest Y value of the video. So how can I make use of MaxCLL?

It's not a big problem if you just want to switch between 2 different curves. But I don't like such hard coded behaviours. I'd much rather react to each possible peak video luminance value differently. Because of that I'd need to find a way to "convert" MaxCLL into an Y Nits value somehow...
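
(The gap madshi is pointing at can be made concrete with a toy measurement. This is an illustration, not how MaxCLL is formally computed; the PQ function repeats the earlier sketch so the block stands alone:)

Code:
# MaxCLL-style value (brightest subpixel, PQ-decoded) vs the peak *luminance*
# needed for tone mapping, for a frame with one fully saturated blue pixel.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_to_nits(signal):
    p = signal ** (1 / m2)
    return 10_000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

def maxcll_nits(frame):
    # frame: list of (r, g, b) PQ-encoded signals in [0,1]
    return max(pq_to_nits(max(px)) for px in frame)

def peak_y_nits(frame):
    # luminance mixes the *linearized* channels with BT.2020 weights
    return max(0.2627 * pq_to_nits(r) + 0.6780 * pq_to_nits(g)
               + 0.0593 * pq_to_nits(b) for r, g, b in frame)

frame = [(0.0, 0.0, 1.0)]
print(maxcll_nits(frame))   # 10000.0 -- the blue subpixel codeword is maxed
print(peak_y_nits(frame))   # ~593    -- the pixel's actual luminance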
post #79 of 5932, 02-07-2018, 03:11 PM - madshi (AVS Forum Special Member)
Quote:
Originally Posted by madshi
You mean we would map 10,000nits blue to 593Nits, and 9,000nits blue to a bit less than 593Nits? And 8,000nits blue to yet another bit less? Etc?

What would we do with the Red and Green channels then? Would we apply exactly the same mapping? So 10,000nits green would become 593Nits, too? Or would we map 10,000nits green to the max the projector can do for green, which is a respectable 6,780nits?
Quote:
Originally Posted by Javs View Post
Yeah so, for the bolded part, I am not 100% sure. We almost definitely would not apply exactly the same mapping, so that green would also be 593 nits; I feel like each respective value would scale, but they would all do it in the same fashion. Or maybe they do scale to the same figure?
Ok, let's get back to this little science topic.

It would be great if you could map 10,000nits blue to 593nits, and map 10,000nits green to 6,780nits, but unfortunately that's not going to work, because it would result in the blue channel getting much darker overall, while the green channel would only get a very small brightness reduction. The end result would be a heavy green tint. We are forced to use the exact same mapping curve for Red, Green and Blue, otherwise we destroy the color balance and introduce a heavy tint.

What we *could* do is use exactly the same logic for all colors. Which means we draw 10,000nits blue with 593nits. We draw 10,000nits green with 593nits. We draw 9,000nits green with e.g. 550nits, or something like that. However, by doing so we would turn our precious perfect 10,000nits projector into a modest 593nits projector.

So let's recap:

Fact: A perfect 10,000nits display maxes out at 593nits for blue. So if an UHD movie asks for a 2000nits pure blue pixel, we're in big trouble. There's no perfect solution for this problem. But after some brainstorming, we've come up with 3 halfway decent workarounds. Let's look at them in some more detail:

1) Clipping

We draw 593nits pure blue with 593nits and 100% saturation. We draw 2,000nits pure blue with 593nits and 100% saturation. We draw 10,000nits pure blue with 593nits and 100% saturation.

Advantages/Disadvantages:
+ saturation is perfect
+ peak white properly measures 10,000nits
- luminance is clipped to 593nits (for pure blue), so we lose all highlight detail in bright blue image areas

2) Desaturation

We draw 593nits pure blue with 593nits and 100% saturation. We draw 2,000nits pure blue with 2,000nits and reduced saturation (maybe 70%, I don't know). We draw 10,000nits pure blue with 10,000nits and 0% saturation (= white).

Advantages/Disadvantages:
+ luminance is perfect
+ peak white properly measures 10,000nits
- saturation above 593nits blue reduces gradually, until at 10,000nits it's completely lost

3) Optimizing tonemapping for blue instead of white

We draw 10,000nits pure blue with 593nits and 100% saturation. We draw 9,000nits pure blue with maybe 550nits (I don't know). We draw 10,000nits pure green with 593nits. We draw 10,000nits white with 593nits.

Advantages/Disadvantages:
+ luminance is perfectly detailed (but much lower overall)
+ saturation is perfect
- the 10,000nits projector mutates into a 593nits projector

@Javs , is all the above clear to you? If so, we can move on to the next step, and discuss which of the above options we should actually use in real life and why.
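
(For readers following along, the three options can be expressed as toy functions. The desaturation ramp below is a linear placeholder; madshi explicitly leaves the real numbers open:)

Code:
# Three workarounds for a pure blue pixel on a perfect 10,000-nit display
# whose blue primary tops out at 593 nits. Illustration only.
BLUE_PEAK, DISPLAY_PEAK = 593.0, 10_000.0

def option1_clip(nits):
    """Keep saturation, clip luminance at the blue peak."""
    return min(nits, BLUE_PEAK), 1.0               # (nits, saturation)

def option2_desaturate(nits):
    """Keep luminance, bleed saturation toward white above the peak."""
    if nits <= BLUE_PEAK:
        return nits, 1.0
    sat = 1.0 - (nits - BLUE_PEAK) / (DISPLAY_PEAK - BLUE_PEAK)
    return nits, max(sat, 0.0)                     # placeholder linear ramp

def option3_map_for_blue(nits):
    """Tone-map ALL channels so 10,000 nits lands at 593 nits."""
    return nits * (BLUE_PEAK / DISPLAY_PEAK), 1.0

for f in (option1_clip, option2_desaturate, option3_map_for_blue):
    print(f.__name__, f(2000.0))   # the 2,000-nit pure blue example
# clip: (593, 100% sat); desaturate: (2000, ~85% sat); map: (~119, 100% sat)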
post #80 of 5932, 02-07-2018, 03:38 PM - Javs (AVS Forum Special Member)
Quote:
Originally Posted by madshi View Post
Ok, let's get back to this little science topic.

It would be great if you could map 10,000nits blue to 593nits, and map 10,000nits green to 6,780nits, but unfortunately that's not going to work, because it would result in the blue channel getting much darker overall, while the green channel would only get a very small brightness reduction. The end result would be a heavy green tint. We are forced to use the exact same mapping curve for Red, Green and Blue, otherwise we destroy the color balance and introduce a heavy tint.

What we *could* do is use exactly the same logic for all colors. Which means we draw 10,000nits blue with 593nits. We draw 10,000nits green with 593nits. We draw 9,000nits green with e.g. 550nits, or something like that. However, by doing so we would turn our precious perfect 10,000nits projector into a modest 593nits projector.

So let's recap:

Fact: A perfect 10,000nits display maxes out at 593nits for blue. So if an UHD movie asks for a 2000nits pure blue pixel, we're in big trouble. There's no perfect solution for this problem. But after some brainstorming, we've come up with 3 halfway decent workarounds. Let's look at them in some more detail:

1) Clipping

We draw 593nits pure blue with 593nits and 100% saturation. We draw 2,000nits pure blue with 593nits and 100% saturation. We draw 10,000nits pure blue with 593nits and 100% saturation.

Advantages/Disadvantages:
+ saturation is perfect
+ peak white properly measures 10,000nits
- luminance is clipped to 593nits (for pure blue), so we lose all highlight detail in bright blue image areas

2) Desaturation

We draw 593nits pure blue with 593nits and 100% saturation. We draw 2,000nits pure blue with 2,000nits and reduced saturation (maybe 70%, I don't know). We draw 10,000nits pure blue with 10,000nits and 0% saturation (= white).

Advantages/Disadvantages:
+ luminance is perfect
+ peak white properly measures 10,000nits
- saturation above 593nits blue reduces gradually, until at 10,000nits it's completely lost

3) Optimizing tonemapping for blue instead of white

We draw 10,000nits pure blue with 593nits and 100% saturation. We draw 9,000nits pure blue with maybe 550nits (I don't know). We draw 10,000nits pure green with 593nits. We draw 10,000nits white with 593nits.

Advantages/Disadvantages:
+ luminance is perfectly detailed (but much lower overall)
+ saturation is perfect
- the 10,000nits projector mutates into a 593nits projector

@Javs , is all the above clear to you? If so, we can move on to the next step, and discuss which of the above options we should actually use in real life and why.
Your points are indeed clear, but Madshi, I think we are still getting stuck on something. Or rather, I am still not clear on one other aspect that keeps bugging me, so correct me here if I am wrong, please.

We know blue, when measured, is 593 nits. But you should not be rendering it at 593 nits; you should be rendering it at 10,000 if that's what is in the content, no? When it's rendered and then measured, it will read 593 nits. Or, another way: if blue sits at 10,000 nits, you render it with a linear value of 1, and when it's measured, it will read 593 nits.

That's why I have shared waveforms with you; it seems there is a huge disparity in terminology and understanding between the waveform data, i.e. what the actual information looks like in the file, vs how the data is measured off the screen. It's clear when you discuss MaxCLL that you are starting to suspect this, as you are now questioning how that's even calculated.

Quote:
Fact: A perfect 10,000nits display maxes out at 593nits for blue. So if an UHD movie asks for a 2000nits pure blue pixel, we're in big trouble.
To me, my understanding is this: if the content asks for 10,000 nits blue, you render it as such, and if there is a man at the screen measuring it, he will read 593 nits blue, correct?

If the content asks for 2000 nits blue, you render it as such, and the man now reads an appreciably lower figure, as expected...

So, if you render 593 nits blue to the screen because you assume that's what the content is asking for at 10,000 nits, the man measures what value then?

Am I wrong?

You also asked Manni above why MaxCLL is important and if we can trust it... have a look at this:

This has a MaxCLL of nearly 10,000 nits, yet a mastering level of 4000. There is content there passing well through the 4000nits grey-line barrier, so we would ideally not want to clip that by assuming it's only a 4000nits film. With dynamic frame measurements, your tone mapping would actually be able to render that detail, potentially without clipping; this is fantastic... I also found an example in Spiderman Homecoming where the real nits level is well past the MaxCLL, so there are going to be cases where it is best to truly measure the frame.

[image: waveform with content above the 4000nit mastering level]
So, going back to something: I think we maybe need to scale all colours equally, as you say, but render them as they truly are in the content, 10,000 nits for each relevant colour if that is what the content asks for, such as in those patterns; don't reduce it to the known measurement values. If you render 593 nits blue to the screen when the content asks for 10,000, then what happens to the little man measuring at the screen? It will be well under 100 nits measured at that point, will it not?
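
(For what it's worth, that closing question has a quick numeric answer under the thread's assumptions:)

Code:
# If the renderer asks the display for "593 nits" of pure blue, the blue
# primary only supplies ~5.93% of that luminance once displayed.
requested = 593.0                # nits the tone mapper targets for pure blue
measured = 0.0593 * requested    # what a meter at the screen would read
print(round(measured, 1))        # 35.2 -- indeed well under 100 nits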

post #81 of 5932, 02-07-2018, 03:55 PM - Javs (AVS Forum Special Member)
Or if I am totally wrong on all this, and 10,000 nits blue is indeed needing to be rendered at 593.

Then, did you not say you convert white to a linear light value of 1?

What happens if we do that with colours?

Blue is 593 or 1 for 10,000 nits
Green is 6780 or 1 for 10,000 nits
Red is 2627 or 1 for 10,000 nits

But what happens if you have 1000 nits content with those same colours? Assuming the master is still 10,000 (and the colours have a value of 1.0 for that), do you display each colour at 0.1 then?

Blue is 59.3 or 0.1 for 1,000 nits
Green is 678 or 0.1 for 1,000 nits
Red is 262.7 or 0.1 for 1,000 nits

If the rolling average is actually 1000 nits then those colours would be 1.0 again.

Or, maybe we can just have a setting that would apply the scale to a set nits level of either 1000, 4000, or 10,000. And MadVR would apply 1.0 scale to RGB at one of those selected settings and ignore the rolling average (a dark scene would end up with insanely saturated colours otherwise).

That way, since you gave each colour a value of 1 and you are essentially reducing those values by even percentage scales, you should have zero hue shifting, true?

I suppose I am just starting to think out loud now...
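
Putting rough numbers on this (a quick Python sketch, not madVR code; the weights are just the BT.2020 luma coefficients, which is where the 2627/6780/593 figures come from):

Code:
# Sketch only: BT.2020 luma weights explain the per-primary nits figures
# (10,000 x 0.0593 = 593 for blue, and so on).
KR, KG, KB = 0.2627, 0.6780, 0.0593  # BT.2020 luma coefficients

def primary_nits(peak_white_nits):
    """Nits each pure primary contributes on a display whose
    full-white output is peak_white_nits."""
    return {name: k * peak_white_nits
            for name, k in (("R", KR), ("G", KG), ("B", KB))}

print(primary_nits(10_000))  # roughly {'R': 2627, 'G': 6780, 'B': 593}

# The "scale to content peak" idea: each primary gets 1.0 at the
# content's peak nits, 0.1 at one tenth of it, and so on.
def scaled_value(pixel_nits, content_peak_nits):
    return pixel_nits / content_peak_nits

print(scaled_value(1_000, 10_000))  # 0.1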

JVC X9500 (RS620) | 120" 16:9 | Marantz AV7702 MkII | Emotiva XPA-7 | DIY Modular Towers | DIY TPL-150 Surrounds | DIY Atmos | DIY 18" Subs
-
MadVR Settings | UHD Waveform Analysis | Arve Tool Instructions + V3 Javs Curves
Javs is online now  
post #82 of 5932 Old 02-07-2018, 03:56 PM
Advanced Member
 
Join Date: Dec 2015
Posts: 849
Mentioned: 4 Post(s)
Tagged: 0 Thread(s)
Quoted: 542 Post(s)
Liked: 294
This is pretty cool but I have no clue what you guys are trying to do.. Are you guys trying to come up with better curves? If so I want them when you finish...
jorgebetancourt is online now  
post #83 of 5932 Old 02-07-2018, 04:10 PM
AVS Forum Special Member
 
Javs's Avatar
 
Join Date: Dec 2012
Location: Sydney
Posts: 7,708
Mentioned: 467 Post(s)
Tagged: 0 Thread(s)
Quoted: 6662 Post(s)
Liked: 6314
Quote:
Originally Posted by jorgebetancourt View Post
This is pretty cool but I have no clue what you guys are trying to do.. Are you guys trying to come up with better curves? If so I want them when you finish...
It will be of no use to you unless you use MadVR to view HDR, which I do right now, but in passthrough mode, which then uses one of my curves. We are hoping to get the tone mapping to a point where we can use that instead, mainly because it dynamically measures each frame's luminance and can adapt to the content. That would be truly set-and-forget, hopefully, just like the Lumagen seems to be for others.

JVC X9500 (RS620) | 120" 16:9 | Marantz AV7702 MkII | Emotiva XPA-7 | DIY Modular Towers | DIY TPL-150 Surrounds | DIY Atmos | DIY 18" Subs
-
MadVR Settings | UHD Waveform Analysis | Arve Tool Instructions + V3 Javs Curves
Javs is online now  
post #84 of 5932 Old 02-07-2018, 04:17 PM
Advanced Member
 
Join Date: Dec 2015
Posts: 849
Mentioned: 4 Post(s)
Tagged: 0 Thread(s)
Quoted: 542 Post(s)
Liked: 294
Damn, this is crazy... This is through a PC, correct? Maybe when you guys are done I'll buy a PC and copy everything over...

I bought the Oppo, I'll get it in a week... I bought it because I was still using my Xbox as a player...
jorgebetancourt is online now  
post #85 of 5932 Old 02-07-2018, 05:07 PM
AVS Forum Special Member
 
Manni01's Avatar
 
Join Date: Sep 2008
Posts: 8,940
Mentioned: 305 Post(s)
Tagged: 0 Thread(s)
Quoted: 5312 Post(s)
Liked: 5307
Quote:
Originally Posted by madshi View Post
What is kind of confusing to me is that MaxCLL is supposed to specify the brightest subpixel in *Nits*. But that means it can't simply be "max(R,G,B)", because that would be a codeword, but not a Nits number. So how is MaxCLL calculated, really? If it starts by doing a "max" operation on the RGB channel codewords, then MaxCLL is completely useless, because if the highest codeword happens to be from the blue channel, that means squat for how bright the brightest subpixel in Nits really is!
This is why I quoted the explanation and a link to the reference document in this post: https://www.avsforum.com/forum/24-dig...l#post55635236. I guess in theory it could be unreliable if you have a very bright blue pixel in the content, but overall it seems to be fairly reliable, at least for deciding which curve to apply.

Quote:
Originally Posted by madshi View Post
For tonemapping I really need to know the brightest *white* pixel. Which idiot had the bright idea to define MaxCLL as the brightest RGB subpixel, when all consumer video is actually YCbCr, and Y is what really counts?
There are so many stupid things about HDR10, especially regarding projectors, that it's better not to start enumerating them or it gets really depressing: absolute curve instead of relative curve, mastered to stupid nits levels for people with flat panels in a living room with ambient light, etc.

Quote:
Originally Posted by madshi View Post
Ok, I didn't know metadata was so unreliable. Good thing madVR is able to automatically measure each frame's peak luminance.
It's a great thing indeed, but with great power comes great responsibility

There is no doubt, given how unreliable HDR10 metadata is, that being able to dynamically measure the peak luminance for each frame is an asset. The question is what to do with that information so that it improves the picture and doesn't denature it (clipping, desaturating, changing the relative brightness of each shot too much, etc.).

Quote:
Originally Posted by madshi View Post
Maybe it would be worth a try to use the same 1,000nits curve also for 4,000nits titles? Currently madVR changes the whole tone mapping curve for 4,000nits titles. Of course using the 1,000nits curve means any data above 1,000nits would be clipped. I wonder how much of a problem that would really be?
No, that's awful. If there is content above 1000nits, you want to make more room for highlights to show it, otherwise you end up with visibly clipped information. It's fine to use a well-designed 4000nits curve with 1000nits content, but the opposite is not really an option.

Quote:
Originally Posted by madshi View Post
I have on my to do list to make use of MaxCLL, when available. But I do wonder how useful it really is, since it doesn't give me the Y channel Nits, but the Nits of the brightest subpixel. For tonemapping I need to know the brightest Y value of the video. So how can I make use of MaxCLL?
That's great, it will help. I wanted to mention the subpixel thing because I thought it might be part of the problem, but from my point of view MaxCLL (when present) is a perfectly valid way to select one of three curves I use. Once the relevant curve is selected, there is little need to adjust the highlights (IMO). There is a need to adjust the low end though for freak scenes such as the one in The Revenant, where the frame average is 5nits, which is very low. But the highlights show up with little to no clipping and a reasonably bright and balanced picture provided the brightness factor (diffuse white) and soft start point are set properly.

Quote:
Originally Posted by madshi View Post
It's not a big problem if you just want to switch between 2 different curves. But I don't like such hard coded behaviours. I'd much rather react to each possible peak video luminance value differently. Because of that I'd need to find a way to "convert" MaxCLL into an Y Nits value somehow...
I'm with you regarding not liking hard-coded solutions, but here is what I would suggest, from a practical point of view, irrespective of the theory:

1) At the moment, we can use custom curves (I use three but frankly I could just use one) which do not show the issues we see in MadVR in the highlights with 4,000nits titles. I can see two main reasons for this: a) these curves only change white gamma, and b) they rely on a baseline made with the JVC Autocal, which means that the PJ is reasonably calibrated re gamut/gamma/RGB balance. The downside is a slight loss of saturation in the low end, but the upside is that 960 = whatever peakY is defined for the content. If the metadata is present and valid, using MaxCLL works fine to select the correct curve. So, according to the algo I shared with you, I select one of three curves (1100nits, 2200nits or 4000nits) based on MaxBrightness/MaxCLL; I sketch this selection in code after point 3 below. As a first step in resolving this issue, could you implement better support for MaxCLL, so that we can disable the frame peak calculation and see how far or close you are without any dynamic fancy stuff? I think that until we get this right, we'll be running around in circles.

2) Once we are able to see that the way MadVR uses static metadata to adjust its internal curve gives us the expected results (especially regarding saturation and clipping of the highlights), I think you'll know a lot more about what's not working with projectors (less than 200nits peakY). Implementing a dynamic adjustment based on actual measurements (we all agree it's preferable provided there are safeguards to prevent wild variations) should be easier.

3) In my opinion, with a projector able to reach at most 200nits, and more often around 100nits or less, we have to draw the line somewhere, and I think it's silly to try to show up to 10,000nits highlights even if they are present. I'm fine with clipping at 4,000nits, even if there is content above. And personally, I would privilege saturation over luminance, as long as it doesn't cause clipping. Once we've established that content doesn't go above 4,000nits at most (or is clipped), and ideally once MaxCLL/MaxBrightness lets us lower this further to 2200nits or 1100nits, we can get full saturation at the hard clip point.
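
Here is what that selection could look like (a rough Python sketch of the idea only, not my actual algo's code; the fallback for missing metadata is my assumption):

Code:
# Sketch: pick the lowest of three hard-clip targets that still
# covers the title's MaxCLL/MaxBrightness.
def select_curve(max_cll_nits):
    """Return the hard-clip point (in nits) of the custom curve to load."""
    if max_cll_nits <= 0:      # absent or invalid metadata (assumption)
        return 4000
    if max_cll_nits <= 1100:
        return 1100
    if max_cll_nits <= 2200:
        return 2200
    return 4000

print(select_curve(800))   # 1100 -> the DC1K-style curve
print(select_curve(3900))  # 4000 -> the DC4K-style curve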

I would be really interested to see the result of the pixel shader conversion if, as a test, you 1) used 4,000nits and not 10,000nits as a max in your calculations, and 2) didn't do any dynamic adjustment. If we were able to see something similar to what we get with a 4,000nits curve, then it means that we can move on. Otherwise, it means that there is something going on in your calculations that prevents us from getting 1) the saturation we get with a static curve simply changing the response for white gamma, and 2) the satisfying resolution of highlights when present.

Maybe it sounds backward to you, but I have a feeling that if we don't try to walk before we try to run, we'll keep falling.

Do you agree that in theory, nothing should prevent MadVR from getting the same amount of saturation and highlights resolution as a static curve hard clipping at 1,100nits or 4,000nits?

I'm all for getting rid of hard-coded custom curves, but I don't want the result to be inferior (sometimes) because of theory. I want it to be as good or better, all the time.

I don't remember which JVC model you have, but if you have a model that supports the JVC Autocal you can import my curves without going through the rabbit hole of installing Arve's tool etc. That way you could compare the results between using a custom curve and using MadVR's conversion.

If your JVC is older, you can still use the JVC gamma controls to replicate the custom curves; they are text files, and you can open them to see the values at each control point. If you reproduce this and play it with that curve, a BT2020 gamut and MadVR in passthrough mode, you should see something similar to what we are seeing (provided you pick a curve targeting a peakY close to one of those I've used, i.e. 86nits, 107nits or 120nits). If you tell me your peakY, I'll make a curve just for you.

There is nothing special in the more recent JVCs about HDR handling. The factory implementation is super dumb: it's simply a modified power gamma curve turned into an S-curve mimicking an ST2084 curve. There is ZERO HDR-specific processing in these units. If they detect HDR metadata, they switch to their single static curve and a BT2020 color profile. That's it. They don't look at the content of the metadata, and they don't adapt anything to it. It's the dumbest implementation you can think of. So you can emulate it on any PJ.

JVC Autocal Software V11 Calibration for 2019 Models
Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders
Manni01 is online now  
post #86 of 5932 Old 02-07-2018, 05:12 PM
AVS Forum Special Member
 
markmon1's Avatar
 
Join Date: Dec 2006
Posts: 5,027
Mentioned: 92 Post(s)
Tagged: 0 Thread(s)
Quoted: 4216 Post(s)
Liked: 2646
Quote:
Originally Posted by Kris Deering View Post
but you still have to worry about pumping and artifacts that are similar to dynamic contrast/iris solutions with frame by frame analysis.
I feel like pumping would not be a concern. It's noticeable with a dynamic iris because that is a mechanical part which has to physically change position as a scene changes. Here it would be digital, so one scene would change to the next instantly. You wouldn't get a moment that's slightly brighter and then gradually fades as the scene progresses; I believe *that* is the pumping. Not to say that it might not still look weird, but without seeing this instant change in action, I wouldn't know. If we transitioned from, say, a scene inside a bright room directly to a space scene, I think it would be totally unnoticeable.

Video: JVC RS4500 135" screen in pure black room no light, htpc nvidia 1080ti.
Audio: Anthem mrx720 running 7.1.4, McIntosh MC-303, MC-152, B&W 802d3 LR, B&W HTM1D3 center, B&W 805d3 surround, B&W 702S2 rear, B&W 706s2 x 4 shelf mounted for atmos, 2 sub arrays both infinite baffle: 4x15 fi audio running on behringer ep4000 + 4x12 fi audio running on 2nd ep4000.
markmon1 is online now  
post #87 of 5932 Old 02-08-2018, 01:01 AM
AVS Forum Special Member
 
Manni01's Avatar
 
Join Date: Sep 2008
Posts: 8,940
Mentioned: 305 Post(s)
Tagged: 0 Thread(s)
Quoted: 5312 Post(s)
Liked: 5307
HDR10-DC1K vs HDR10-DC4K

I don't know if it will help, but here is a comparison between my two main HDR10 curves, the one that clips at 4,000nits (DC4K) and the one that clips at 1,000nits (DC1K), so you can see what I adjust and how different they are. In the two screenshots, the curve in bold is the curve the data relates to. It's the same two curves on both screenshots; I just swap the main curve so that you can compare the values for soft clip point etc.

I've chosen the curves designed for an actual projector peakY of 107nits, because that's the peakY used for Dolby Cinema; I call these Dolby Cinema Emulation curves. With Dolby Cinema, the content is mastered specifically to 107nits, so it's a completely different grade (the one we'd need for projectors), but they are HDR. I assume the diffuse white value used for Dolby Cinema is 48nits (the reference for SDR cinema), but I could be wrong. It would be great if Kris could get a confirmation from his sources. We can't use the same diffuse white as Dolby Cinema (except on films where MaxCLL doesn't go much above 100nits) because there are a lot more highlights in consumer-graded HDR10. Hence we tend to hover around 20-25nits with a PJ peakY of around 107nits. I used to be able to reach 200nits in high lamp, and in that case my brightness factor was higher, around 6.25, for a reference white around 16nits, because I had more headroom for highlights.

In DC4K (hard clipping at 4,000nits), I start the soft clip at 250nits content (50nits output) because I want everything up to 50nits (our reference white in a dedicated room for projectors) to follow ST2084. My brightness factor is 5 (5x50=250), so diffuse white is 20. The angle of the top of the curve is "flatter" because I make more room for highlights, and you have a lot more levels to fit into roughly the same space. So if you don't want to clip the highlights, you have to lose a bit of contrast at the top of the curve (steer away from ST2084 faster) so that there is enough difference in light output between adjacent levels. The HDR patterns are only partially useful to assess clipping, because there is a very large gap between each band, contrary to SDR where we can have patterns where one band = 1 step. So you can check with a pattern and think you're not clipping, but you are in fact clipping a lot because there is not enough difference between each level. In other words, you don't only clip from a certain point (say hard clipping from 4,000nits), you also clip between levels (i.e. too many input levels have the same output). Does that make sense?

In DC1K (hard clipping at 1,100nits), I start the soft clip at 200nits (50nits output). The curve is steeper/brighter: diffuse white is 25 and the brightness factor is 4 (4x50=200). The content is less bright overall and less headroom is needed for highlights, which is why the curve is steeper.

I think if you could achieve something like this, adjusting the hard clip point to either 4,000nits or 1,100nits based on MaxCLL/MaxBrightness (you can add 2,200nits in the middle, with a brightness factor of 4.5, diffuse white of 22.25 and a soft start point at 225nits), touching only white gamma, we should get similar results. Once we've established this and we get the level of saturation/highlights resolution we know we can get on our projectors using static curves, we can start using the actual peakY on a frame-by-frame basis (with safeguards to avoid excessive variations, and making sure its effect doesn't go against what the dynamic iris is trying to achieve) and improve on that dynamically.
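
To make the shape concrete, here is a loose Python sketch of the DC4K parameters in action. This is emphatically not the Arve tool's math; the roll-off shape between the soft start and the hard clip is invented purely for illustration:

Code:
# Sketch: follow ST2084 1:1 (scaled by the brightness factor) up to the
# soft-clip start, then roll off to peakY at the hard-clip point.
def tone_curve_nits(content_nits, peak_y=107.0, brightness_factor=5.0,
                    hard_clip=4000.0):
    soft_start = brightness_factor * 50.0    # e.g. 5 x 50 = 250 nits in
    knee_out = 50.0                          # 50 nits out at the knee
    if content_nits <= soft_start:
        return content_nits / brightness_factor  # diffuse white 100/5 = 20
    if content_nits >= hard_clip:
        return peak_y                            # hard clip
    t = (content_nits - soft_start) / (hard_clip - soft_start)
    return knee_out + (peak_y - knee_out) * t ** 0.5  # flat-ish top

for nits in (100, 250, 1000, 4000, 10000):
    print(nits, round(tone_curve_nits(nits), 1))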

Hope this helps.

Of course, feel free to discard/reject if you prefer to go another way. I'm not telling you what to do, I'm suggesting a way to reduce the number of factors so that we can do it step by step. I'm sure there are other/better ways to make progress, but I'm not clever enough to think of them.
Attached Thumbnails: DC4KvsDC1K.PNG and DC1KvsDC4K.PNG (screenshots comparing the two curves).

JVC Autocal Software V11 Calibration for 2019 Models
Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders

Last edited by Manni01; 02-08-2018 at 01:29 AM.
Manni01 is online now  
post #88 of 5932 Old 02-08-2018, 01:32 AM
AVS Forum Special Member
 
madshi's Avatar
 
Join Date: May 2005
Posts: 7,541
Mentioned: 443 Post(s)
Tagged: 0 Thread(s)
Quoted: 2309 Post(s)
Liked: 2905
Quote:
Originally Posted by Javs View Post
We know blue when measured is 593 nits. But, you should not be rendering it at 593 nits, you should be rendering it at 10,000 if that's what is in the content, no?
Correct. If the Y channel asks for 10,000nits, then that's what the display should ideally output (and it should measure 10,000nits, too!). And if the CbCr channels ask for pure blue, then that's what the display should ideally output. It's just the combination of 10,000nits Y and pure blue CbCr at the same time which the content *can* ask for, but which is impossible to render (unless you have a projector which is capable of outputting 168,634nits for white).
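
That 168,634 figure simply falls out of the blue luma coefficient (a quick sketch, using the BT.2020 weight):

Code:
# White nits needed so that the blue channel alone reaches 10,000 nits:
KB = 0.0593            # BT.2020 blue luma coefficient
print(10_000 / KB)     # ~168,634 nits of full white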

Quote:
Originally Posted by Javs View Post
When it's rendered and then measured, it will read 593 nits.
Quote:
Originally Posted by Javs View Post
To me, my understanding is this: a 10,000 nits display is asked for 10,000 nits blue, you render it as such, and if there is a man at the screen measuring it, he will read 593 nits blue, correct?
Ideally: No! Measurement should ideally show 10,000nits! Whether it does, or not, depends on which compromise the tone mapping algorithm decides to use to render this pixel. See options 1) .. 3) in my previous comment.

Quote:
Originally Posted by Javs View Post
That's why I have shared waveforms with you; there seems to be a huge disparity in terminology and understanding between the waveform data, i.e. what the actual information looks like in the file, vs how the data is measured off the screen. It's clear when you discuss MaxCLL that your brain is starting to suspect this, as you are now questioning how that's even calculated.
I do not know how the waveform data is retrieved (e.g. does the waveform analyzer look at the original YCbCr data before tone mapping, or at the RGB data after tone mapping?), and I'm not 100% sure how MaxCLL is really calculated. But neither of which has much meaning for madVR - except that it might be helpful to make use of MaxCLL as an alternative to measuring each frame's peak luminance. I know exactly what Y and CbCr say, and how they should ideally be rendered. There's zero doubt there.

Quote:
Originally Posted by Javs View Post
This has a MaxCLL of nearly 10,000 nits, yet a mastering level of 4,000. There is content there passing well through the 4,000nits grey-line barrier, so we would ideally not want to clip that on the assumption it's only a 4,000nits film. With dynamic frame measurements your tone mapping would actually be able to render that detail, potentially without clipping, which is fantastic... I also found an example in Spiderman Homecoming where the real nits level is well past the MaxCLL, so there are going to be cases where it is best to truly measure the frame.
Yes, that's why I'm disappointed with MaxCLL: because of the way it's calculated (it just picks the brightest RGB subpixel), it's much less useful/reliable than it could and should have been.

Quote:
Originally Posted by Javs View Post
So, going back to something: I think we maybe need to scale all colours equally, as you say, but render them as they truly are in the content, i.e. 10,000 nits for each relevant colour if that is what the content asks for, as in those patterns. Don't reduce it to the known measurement values. If you render 593 nits blue to the screen when the content asks for 10,000, then what happens to the little man measuring it at the screen? It will be well under 100 nits measured at that point, will it not?
I don't completely understand this one. What you're saying sounds a bit like option "3) Optimizing tonemapping for blue instead of white" from my previous comment.

Quote:
Originally Posted by Javs View Post
Then, did you not say you convert white to a linear light value of 1?

What happens if we do that with colours?

Blue is 593 or 1 for 10,000 nits
Green is 6780 or 1 for 10,000 nits
Red is 2627 or 1 for 10,000 nits
Pure blue with a Y value of 1.0 should ideally measure as 10,000nits. If you define pure blue to be 593 nits and scale darker blue values down accordingly, then you make the whole blue channel dramatically darker, which will introduce a gigantic green tint to the image - unless you make the green channel just as dark. If you do that, you have option "3) Optimizing tonemapping for blue instead of white" from my previous comment.

Quote:
Originally Posted by Javs View Post
That way, since you gave each colour a value of 1 and you are essentially reducing those values by even percentage scales, you should have zero hue shifting, true?
Doing saturation changes in RGB (even in linear light) introduces hue shifts, as weird as it might seem. But that's a completely different topic.

Quote:
Originally Posted by jorgebetancourt View Post
This is pretty cool but I have no clue what you guys are trying to do.. Are you guys trying to come up with better curves? If so I want them when you finish...
No, I'm trying to explain to Javs the very complicated reason why madVR sometimes desaturates very bright saturated colors. It's not easy to understand, so we're on a little journey.

Quote:
Originally Posted by Manni01 View Post
This is why I quoted the explanation and a link to the reference document in this post: https://www.avsforum.com/forum/24-dig...l#post55635236. I guess in theory it could be unreliable if you have a very bright blue pixel in the content, but overall it seems to be fairly reliable, at least for deciding which curve to apply.
Ok, after reading that document multiple times and thinking about it, my best guess would be that MaxCLL is calculated on linear light RGB, with the RGB data scaled in such a way that 100% white would be RGB value (10,000 | 10,000 | 10,000). This way the formula "MaxCLL = Max(R, G, B)" would give us the brightest RGB subpixel in Nits.

If true, practically this means, in an extreme case, the content could have a peak Y value in the whole movie of 593 Nits, but a peak Blue channel of 10,000 Nits. In this situation madVR would measure max 593 Nits throughout the whole movie. But MaxCLL would (correctly) be set to 10,000 Nits.

How useless is that?
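
To illustrate the discrepancy (a Python sketch of my best guess above, not of any spec-blessed reference code):

Code:
KR, KG, KB = 0.2627, 0.6780, 0.0593   # BT.2020 luma coefficients

def max_cll(frames):
    # Brightest linear-light RGB subpixel in nits (my guess at the spec).
    return max(max(r, g, b) for frame in frames for (r, g, b) in frame)

def peak_y(frames):
    # Brightest luma value in nits: what madVR actually measures.
    return max(KR * r + KG * g + KB * b
               for frame in frames for (r, g, b) in frame)

movie = [[(0.0, 0.0, 10_000.0)]]   # one frame, one pure-blue pixel
print(max_cll(movie))   # 10000.0
print(peak_y(movie))    # 593.0 -- madVR never sees more than this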

Quote:
Originally Posted by Manni01 View Post
Once the relevant curve is selected, there is little need to adjust the highlights (IMO). There is a need to adjust the low end though for freak scenes such as the one in The Revenant, where the frame average is 5nits, which is very low.
Doesn't a scene like that have a very low peak Y value, too? In that case madVR will completely disable tone mapping (only if peak measurements are enabled). So rendering such a scene correctly is by far the easiest job for madVR, as long as peak Y measurements are enabled.

Quote:
Originally Posted by Manni01 View Post
As a first step in resolving this issue, could you implement better support for MaxCLL, so that we can disable the frame peak calculation and see how far or close you are without any dynamic fancy stuff? I think that until we get this right, we'll be running around in circles.
Considering the MaxCLL problems described above, I'm not too happy with MaxCLL. But I could easily offer a test option for you to play with which would allow you to manually select the SMPTE 2390 hard clipping point. I could also add another option which would allow you to define the soft clipping point as well. I don't think these options should stay in the long run, but for playing/experimenting, and maybe for finding better curves for madVR to select automatically in the future, adding such test options might be useful?

Quote:
Originally Posted by Manni01 View Post
Do you agree that in theory, nothing should prevent MadVR from getting the same amount of saturation and highlights resolution as a static curve hard clipping at 1,100nits or 4,000nits?
If we both use the same SMPTE 2390 curves (hard and soft clipping point), then we should be able to get the same highlight resolution. Saturation is a different beast. My understanding is that your static curves simply do "gamma" processing, which means they have no control over saturation and hue. So madVR should easily produce better results in saturation and hue, if all goes well.

Quote:
Originally Posted by Manni01 View Post
I'm all for getting rid of hard-coded custom curves, but I don't want the result to be inferior (sometimes) because of theory. I want it to be as good or better, all the time.
Definitely!!

Quote:
Originally Posted by Manni01 View Post
I'm with you regarding not liking hard-coded solutions, but here is what I would suggest, from a practical point of view, irrespective of the theory:

1) At the moment, we can use custom curves (I use three but frankly I could just use one) which do not show the issues we see in MadVR in the highlights with 4,000nits titles.
Which highlights issues do you see? If you mean the posterization, the jury is still out on whether it's madVR's fault. I say most likely no. You say most likely yes. But you didn't seem to be interested in actually locating the issue.

Quote:
Originally Posted by Manni01 View Post
If your JVC is older, you can still use the JVC gamma controls to replicate the custom curves, they are text files and you can open them to see the values at each control point. If you reproduce this and play it with that curve, a BT2020 gamut and MadVR in passthrough mode, you should see something similar to what we are seeing (provided you pick a curve targeting a peakY close to one of those I've used, ie 86nits, 107nits or 120nits). If you tell me your peakY, I'll make a curve just for you. There is nothing special in the more recent JVCs about HDR handling. The factory implementation is super dumb, it's simply a modified power gamma curve turned into an S-curve mimicking an ST2084 curve. There is ZERO HDR specific processing in these units. If they detect HDR metadata, they switch to their single static curve and a BT2020 color profile. That's it. They don't look at the content of the metadata, they don't adapt anything to it. It's the dumbest implementation you can think of. So you can emulate it on any PJ.
I have an X35, but to be honest, although I've always planned to finally calibrate the monster, I've been too lazy. I fear that my gamma response is probably bonkers and shadow detail non-existent, so in its current state my projector is probably not a good device to test this stuff on.

I haven't been using my eyes to tune tone mapping yet. I've just straight implemented SMPTE 2390, exactly as the spec requires, with superior saturation and hue algorithms than the spec asks for. So basically I've relied on the spec being "good".

All that said, I wonder: If you activate madVR's pixel shader HDR -> SDR conversion, how is your projector calibrated? Is it a BT.1886 calibration? I'm asking because currently madVR doesn't know the measured black level of your projector. So my SMPTE 2390 curve is flat at the bottom end. My thinking was that a BT.1886 calibration should already take care of the bottom end. So implementing black level compensation as 2390 suggests would actually pump up blacks too much. Of course it would also be possible for me to implement the 2390 black level compensation stuff, but then you'd need to change your projector's SDR calibration to not do BT.1886, but to do a pure gamma curve, I think? Not sure if the end result would be much different to using BT.1886 and no 2390 black compensation, though.
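
For reference, the difference under discussion is just the black-lift term in the BT.1886 EOTF (a quick Python sketch; the 100 nits / 0.05 nits figures are placeholders, not any particular projector's values):

Code:
# BT.1886 EOTF vs a pure 2.4 power gamma (formula from ITU-R BT.1886).
def bt1886(v, lw=100.0, lb=0.05, gamma=2.4):
    """v is the [0,1] signal value; returns luminance in nits."""
    a = (lw ** (1 / gamma) - lb ** (1 / gamma)) ** gamma
    b = lb ** (1 / gamma) / (lw ** (1 / gamma) - lb ** (1 / gamma))
    return a * max(v + b, 0.0) ** gamma

def pure_gamma(v, lw=100.0, gamma=2.4):
    return lw * v ** gamma

for v in (0.0, 0.1, 0.5, 1.0):
    print(v, round(bt1886(v), 3), round(pure_gamma(v), 3))
# Near black, BT.1886 sits above the pure power curve -- the lift that
# a flat-bottomed 2390 curve would otherwise leave to the calibration.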

Quote:
Originally Posted by markmon1 View Post
I feel like pumping would not be a concern. It's noticeable with a dynamic iris because that is a mechanical part which has to physically change position as a scene changes. Here it would be digital, so one scene would change to the next instantly. You wouldn't get a moment that's slightly brighter and then gradually fades as the scene progresses.
Actually, when I first implemented madVR's frame measurements, I tested with the "Life of Pi" demo HDR file, and it produced noticeable flickering: due to the way the waves reflect the sun differently in each frame, the peak Y would vary quite a bit. As a result, I implemented the rolling average algorithm, which nicely solved the flickering.

Currently my rolling average is "dumb", though, which means that if there's a sudden change (e.g. actors walking from sunlight into a dark room), the rolling average takes a bit to adjust. But our eyes need a bit to adjust, too, so maybe that's not a problem, I'm not sure. I could still try to detect dramatic scene changes and update the rolling average more quickly in that situation. Well, there's always something to improve, I guess...
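
Something along these lines, perhaps (a toy Python sketch of the idea, not madVR's internals; the window size and jump threshold are arbitrary):

Code:
class PeakTracker:
    """Rolling average of the measured frame peak Y, with a snap on
    what looks like a hard scene change."""
    def __init__(self, window=60, jump_ratio=4.0):
        self.window = window          # frames in the rolling window
        self.jump_ratio = jump_ratio  # assumed scene-cut threshold
        self.avg = None

    def update(self, frame_peak_nits):
        if self.avg is None:
            self.avg = frame_peak_nits
        elif (frame_peak_nits > self.avg * self.jump_ratio or
              frame_peak_nits < self.avg / self.jump_ratio):
            self.avg = frame_peak_nits     # dramatic change: snap
        else:
            # exponential moving average as a cheap rolling window
            self.avg += (frame_peak_nits - self.avg) / self.window
        return self.avg

tracker = PeakTracker()
for peak in (900, 950, 880, 120, 110):    # sunny waves, then a dark room
    print(round(tracker.update(peak)))    # 900 901 900 120 120
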
madshi is online now  
post #89 of 5932 Old 02-08-2018, 02:00 AM
AVS Forum Special Member
 
Manni01's Avatar
 
Join Date: Sep 2008
Posts: 8,940
Mentioned: 305 Post(s)
Tagged: 0 Thread(s)
Quoted: 5312 Post(s)
Liked: 5307
Quote:
Originally Posted by madshi View Post
Ok, after reading that document multiple times and thinking about it, my best guess would be that MaxCLL is calculated on linear light RGB, with the RGB data scaled in such a way that 100% white would be RGB value (10,000 | 10,000 | 10,000). This way the formula "MaxCLL = Max(R, G, B)" would give us the brightest RGB subpixel in Nits.

If true, practically this means, in an extreme case, the content could have a peak Y value in the whole movie of 593 Nits, but a peak Blue channel of 10,000 Nits. In this situation madVR would measure max 593 Nits throughout the whole movie. But MaxCLL would (correctly) be set to 10,000 Nits.

How useless is that?
I agree it's theoretically useless, but in practice it works for me 99% of the time. I only wanted to clarify the theory for you in case you based some of your calculations on the idea that MaxCLL was the brightest pixel. From a "let's use static metadata when it's valid to try to select an optimal curve" point of view, it works very well in 99% of cases. Not to say we should do this, just to say that until we have a dynamic way that works, it's better than nothing.

Quote:
Originally Posted by madshi View Post
Doesn't a scene like that have a very low peak Y value, too? In that case madVR will completely disable tone mapping (only if peak measurements are enabled). So rendering such a scene correctly is by far the easiest job for madVR, as long as peak Y measurements are enabled.
Absolutely. And as reported, MadVR does a stellar job simply doing this. For me, that's the most important benefit of the current implementation. The Revenant is a 1,000nits title (apparently with a MaxCLL around 800) and MadVR reproduces it perfectly, and far better than my 1,000nits curve (unless I bump the low end following BT2390, but that kills contrast for the rest of the movie, so I don't want to). Remember when I posted raving comments about your implementation? It's because I had only watched a couple of 1,000nits movies and thought you were doing an amazing job. That hasn't changed. Then I started watching a few 4,000nits titles (with content significantly above 1,100nits), and that's where I started to see issues.

Quote:
Originally Posted by madshi View Post
Considering the MaxCLL problems described above, I'm not too happy with MaxCLL. But I could easily offer a test option for you to play with which would allow you to manually select the SMPTE 2390 hard clipping point. I could also add another option which would allow you to define the soft clipping point as well. I don't think these options should stay in the long run, but for playing/experimenting, and maybe for finding better curves for madVR to select automatically in the future, adding such test options might be useful?
That would be great, for testing of course. It would also be great if you could make the diffuse white / brightness factor accessible, so I can match this. I agree that ideally we don't want to keep these but to test things it would be very useful.

Quote:
Originally Posted by madshi View Post
If we both use the same SMPTE 2390 curves (hard and soft clipping point), then we should be able to get the same highlight resolution. Saturation is a different beast. My understanding is that your static curves simply do "gamma" processing, which means they have no control over saturation and hue. So madVR should easily produce better results in saturation and hue, if all goes well.
Agreed about the theory, especially in the low end. In the high end, it's not true yet in practice.

Quote:
Originally Posted by madshi View Post
Which highlights issues do you see? If you mean the posterization, the jury is still out on whether it's madVR's fault. I say most likely no. You say most likely yes. But you didn't seem to be interested in actually locating the issue.
I don't mean the posterization/banding; I've parked that for now, and I agree it might not be MadVR-related. I had tested my SDR BT2020 calibration using the UB900 and couldn't see any issues, which is why I had ruled it out, but then I realised the HDR to SDR slider was set to 0 (which clips fairly low); when I set the slider to 5, a similar issue started to appear. I have no idea why, because I used to use SDR BT2020 with the UB900 with its (very good) HDR to SDR conversion and never saw anything like this. It might be the JVC Autocal for my SDR BT2020 calibration. I don't have the time to diagnose this now, but when I do I'll go through your list and try to pinpoint the issue. For now, I'm happy to rule out MadVR. I picked this one because it was the most visible issue, but it's probably a red herring (pun intended).

I mean clipping/desaturation in highlights, I'll provide examples if you want when I have the time.

Quote:
Originally Posted by madshi View Post
I have an X35, but to be honest, although I've always planned to finally calibrate the monster, I've been too lazy. I fear that my gamma response is probably bonkers and shadow detail non-existent, so in its current state my projector is probably not a good device to test this stuff on.
Fair enough. Yes, you will most likely have a huge gamma droop, which kills the dimensionality of your picture. But I understand you have better things to do.

Quote:
Originally Posted by madshi View Post
I haven't been using my eyes to tune tone mapping yet. I've just straight implemented SMPTE 2390, exactly as the spec requires, with superior saturation and hue algorithms than the spec asks for. So basically I've relied on the spec being "good".
The spec is okay except for the bump in the low end, which kills contrast when used with static curves. I've had a chat with Zoyd and he's added an option to not take it into account in HCFR. I prefer not to follow BT2390 for the highlights because it clips content too much (the top end of my curves is flatter). I don't think it's been designed with projectors in dedicated rooms in mind. I think it's still optimized for OLEDs or displays with at least 500nits peakY and some ambient light in the room, but I could be wrong. So as far as I'm concerned (high-end projector in dedicated room with 0nits ambient light), you're right not to apply the black compensation part. It might be a good idea to make it an option for users with ambient light in the room or lower-end projectors.

Quote:
Originally Posted by madshi View Post
All that said, I wonder: If you activate madVR's pixel shader HDR -> SDR conversion, how is your projector calibrated? Is it a BT.1886 calibration? I'm asking because currently madVR doesn't know the measured black level of your projector. So my SMPTE 2390 curve is flat at the bottom end. My thinking was that a BT.1886 calibration should already take care of the bottom end. So implementing black level compensation as 2390 suggests would actually pump up blacks too much. Of course it would also be possible for me to implement the 2390 black level compensation stuff, but then you'd need to change your projector's SDR calibration to not do BT.1886, but to do a pure gamma curve, I think? Not sure if the end result would be much different to using BT.1886 and no 2390 black compensation, though.
I agree 100%, see my discussion with Zoyd above after I tested his BT2390 implementation in HCFR. At least for high-end projectors in a dedicated room, the BT2390 black compensation isn't needed. As far as I can see, the calibration doesn't (and shouldn't) need BT1886, but that's something we can assess later.

When I activate the pixel shader, I have an SDR BT2020 calibration with a power gamma of 2.4. I don't think it's correct to use BT1886 with SDR BT2020, because it's not encoded that way (unlike recent Blu-rays, which almost all follow BT1886).

JVC Autocal Software V11 Calibration for 2019 Models
Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders

Last edited by Manni01; 02-08-2018 at 02:10 AM.
Manni01 is online now  
post #90 of 5932 Old 02-08-2018, 02:28 AM
AVS Forum Special Member
 
Manni01's Avatar
 
Join Date: Sep 2008
Posts: 8,940
Mentioned: 305 Post(s)
Tagged: 0 Thread(s)
Quoted: 5312 Post(s)
Liked: 5307
Re the 4,000nits curve: I just took a look at the HCFR thread (which I don't usually follow, as I don't use HCFR on a regular basis anymore, but since Calman doesn't support BT2390 yet, I had to use it to generate targets when I tested BT2390), and they have also added an option to change the highlights to get much closer to my curve, for the reasons I indicated in my post above (the one where I posted my curves). BT2390 causes clipping in the highlights due to the shape of the curve, which steers away from ST2084 later.

If you look at the 4,000nits curve I posted above, you should see what I mean. The line between the soft clip point and the hard clip point is much straighter. You lose a bit of contrast, but you can show a lot more levels, so you have far less clipping.

I think if you make a similar change and steer away from BT2390, especially for content with highlights above 1,100nits (you can make it an option as well, similar to the low-end bump), we should see a significant improvement re clipping on our projectors. That won't solve the desaturation issue, but at least that's one problem that should be reduced significantly. I'm happy to check a new build with the manual/test controls enabled; if I still find issues, I'll give you examples.

JVC Autocal Software V11 Calibration for 2019 Models
Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders

Last edited by Manni01; 02-08-2018 at 02:36 AM.
Manni01 is online now  