Originally Posted by stanger89
OK, now I just blazed through the last few pages, so forgive me if I missed something, but I've been thinking about this....
First, we know that SDR is mastered for 100 nits peak white, and in general we calibrate our projectors for about 50 nits peak white. Now SDR uses a relative curve, so everything scales well.
Now, for HDR, we have ST.2084 which is an absolute curve. We know that HDR is intended to be roughly the same (brightness wise) as SDR, up to 100 nits, with the range above that being for highlights.
So the question is, why do we not want to use a 2x multiplier in Calman for calibrating HDR? That would make 100 nit content equal to 50 nits on screen, which would seem to be exactly what we're looking for. Then for a custom curve, wouldn't you calibrate it "linear" all the way up to the 100 nit point (is that ~50%?), and then apply the roll-off between 100 nits and 4000 nits or so?
It seems to me that 4, 5, 6x multipliers are far too large. For example, if I tell Calman to apply a 5x multiplier and then calibrate everything "perfectly", content that's supposed to be 100 nits (per the PQ curve) will only be 20 nits, which is less than half as bright as we want (50 nits). This is actually what I found myself when I was playing around with the multiplier in Calman: I attempted to calibrate with a very high multiplier (around 10x) because I had calibrated to a peak white of under 100 nits (iris was somewhat closed), and the result was that everything was way too dark, much darker than if I had my Panasonic do DRC and output SDR.
I do intend to try and make my own custom PQ/Gamma curve when I get some time to play with it. I really want to try a 2x multiplier. If I understand how that works in Calman, that should be the HDR equivalent of how we calibrate SDR.
Of course the trick is, that curve will be unique to each particular setup (screen size, aperture, lamp, throw, etc).
I'm not criticizing at all, bravo for all the great work and information you've provided. Just trying to understand and hopefully help get to the bottom of this.
Thanks to the great info here and with Calman, it's invaluable.
As I said, the multiplier is related to the peak brightness you use. If you use high lamp, you can use a lower multiplier than in low lamp.
For example, to target 1000nits, I use a multiplier of 5 in high lamp and of 7 in low lamp. I would never use a multiplier of 10 in high lamp (or even in low lamp), it would be way too dark.
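A quick way to see why the multiplier matters so much: along the un-rolled-off part of the curve it simply divides the mastered luminance. A minimal sketch of that arithmetic (my own illustration, not an actual Calman function):

```python
# Sketch of how the Calman multiplier rescales the absolute PQ curve.
# Assumption (mine, not from the thread): along the un-rolled-off part
# of the curve, a multiplier m maps content mastered at N nits to
# N/m nits on screen.

def displayed_nits(mastered_nits: float, multiplier: float) -> float:
    """On-screen luminance for content graded at mastered_nits."""
    return mastered_nits / multiplier

# stanger89's 2x proposal: 100-nit content lands at the usual SDR ~50 nits
print(displayed_nits(100, 2))   # 50.0
# A 5x multiplier targeting 1000 nits yields a ~200-nit real peak...
print(displayed_nits(1000, 5))  # 200.0
# ...but the same 5x pushes 100-nit content down to 20 nits
print(displayed_nits(100, 5))   # 20.0
```

This is the trade-off being discussed: a low multiplier keeps the 100-nit range bright, a high multiplier preserves headroom for highlights.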
Next, if you have a peak white of less than 100nits, you would indeed get much better results in SDR BT2020. This is something I've talked about already. In my opinion, if you get less than 100nits peak, you should simply forget about HDR, get your DI back and play everything in SDR BT2020 targeting your peak white (not 50nits as we used to).
One of the reasons for this is that, as I explained earlier, there is no free lunch. As you lower the multiplier, the curve gets steeper. Yes, it gets you more brightness, but it desaturates the picture. With a 2x multiplier you would have a brighter picture, but it would be black and white, which I'm sure rather defeats the point of HDR.
Another downside is that you would have to start the roll-off super early, because with a 2x multiplier you would be clipping at 200nits (again using your 100nits actual peakY example), so you would have to start rolling off way before 50%. This is what the BT.2390 curve does to accommodate people with very dim set-ups (as low as 100nits peakY, I guess), but it produces an unwatchable picture and leaves a lot of contrast on the table compared to an HDR picture. Your highlights would be super compressed.
A further downside is that the steeper the curve, the less control you have over shaping it, because it's almost vertical. That means crushed blacks (unless you raise them out of target) and clipped highlights (unless you start rolling off very early, which means very little contrast in the part of the picture we care about). We have fewer levels to play with, and I suspect we'd get a lot of banding. It would also be harder to see the data in Calman and make precise changes, with just a few control points left for the curve itself (5 vs 9, as you would have to use at least 40% for the roll-off), so your curve would be nothing like a PQ gamma curve. We only have 9 control points to start with anyway, as there is nothing above 4000nits (90%), so both 90% and 95% are maxed out. This leaves us 5-80% to play with, but only 5-50% when clipping at 200nits (60% is 240nits, so it would be maxed out too). 9 control points isn't much but works fine; 5 control points isn't enough to shape a precise curve.
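The control-point figures above come straight from the ST.2084 EOTF. A small sketch (my own, using the published SMPTE PQ constants) that converts signal percentages to nits:

```python
# ST.2084 (PQ) EOTF constants from the SMPTE specification
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal: float) -> float:
    """Map a normalized PQ signal (0-1) to absolute luminance in nits."""
    e = signal ** (1 / M2)
    y = max(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1 / M1)

for pct in (50, 60, 75, 90):
    print(f"{pct}% signal -> {pq_eotf(pct / 100):.0f} nits")
```

This confirms the numbers used above: 50% is roughly 92nits (close to the 100nits SDR reference), 60% is about 244nits (hence maxed out once you clip at 200nits), 75% is close to 1000nits and 90% close to 4000nits.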
I think the range to use is 800-1100nits for the target, maybe down to 600nits, but lower than that I expect it to be pointless as it would be too desaturated.
All this is because, unlike HD Blu-ray, which is mastered to 100nits peak white, we are dealing with content mastered to 1000-4000nits peak white.
What you suggest would therefore work great if we could get the Dolby Cinema version of the content. Mastered to 100nits (a perfect peak white for us in a dedicated room), it could use a 2x multiplier and it would look AMAZING. Until people accept that the way the content is mastered makes a big difference to the way we can adapt it to our dedicated rooms, this misunderstanding will remain.
Of course if you try a x2 multiplier and I'm wrong, please let us know, I'd like to hear how you can get good results with such a low multiplier.
Again, I reckon that if you can only get around 100nits peak white or below, you should watch SDR BT2020. The picture will look much better: more contrast (a LOT more), better saturation, better black levels, etc. I target 100nits for my SDR BT2020 mode and it looks amazing, not blinding at all the way Blu-ray would be if I targeted 100nits. I do set the SDR conversion slider to -6 to resolve up to 1200nits (with the Panny contrast set to -2), which explains why. With the slider at the 0 default I wouldn't be able to do this, as the Panny clips content above 600nits, which is a bit too low for my taste.
There is a point to HDR displayed as HDR, but IMHO only if you can get significantly above 100nits, at least with a custom curve such as the one we can produce today.
Originally Posted by rak306
What a fantastic contribution to this thread!!!
THANK YOU Manni01!!!!
Out of curiosity, has anyone measured the Gamma D curve with the JVC recommended settings and compared it to the PQ curve to see how they differ? (Yes, I know, each situation is different: screen size, gain, throw.)
But it would be very informative for anyone who has done this to publish their results in either the form of "nits out vs nits in" or "nits out vs % video in".
You're welcome. I posted those screenshots at the very beginning: I showed how pushing contrast in the JVC to raise brightness hurts the PQ curve, creating a bump in the high end which produces the harsh, over-contrasty picture that we should try to avoid at all costs.
Originally Posted by wse
This thread is so technical that it's hard to follow. I know I stated this before, but I wish they had a Calibration for Dummies for people like me.
It's called AV Science
You shouldn't embark on this if you don't have the underlying knowledge, and you don't have to. You can simply follow my recommendations for Gamma D, or you can hire a calibrator. Believe me, you don't want to spend the time learning this. I wish I hadn't!
Originally Posted by rak306
Not Manni, but I asked a similar question a few days ago, and from Manni's reply (if I understood him correctly) I believe the answer is that he is not changing the curve before about the 75% point, which corresponds to ~1000 NITs, and that by scaling by a factor of 5 he gets a real peak of 200 NITs at 1000 NITs input. But for 50% video (~100 NITs input), he is still getting near 100 NITs output.
As far as the Gamma D (or any gamma) curve in the JVC RS500/600 goes, it's a relative curve. This is because the peak lumens out (at 100% video) depend on how far you are from the screen, how big the picture is, and what your screen gain is.
Now, what I think JVC should do is have the user input his peak NIT output on screen. This could be either measured with a meter or estimated by the user. (JVC could provide a table of lumens out vs the ratio picture_width/throw_distance. The table would include both power settings and several bulb ages, and should be accurate to within 20-40%. Then the user can calculate peak NITs on screen = 3.426 * screen_gain * lumens / ((picture_width**2) * 9/16), with picture width in feet; the 3.426 factor converts foot-lamberts to NITs, so the width has to be in feet, not meters.)
With the max NIT output on screen, the "gamma" curve becomes absolute. Then JVC should provide a proper PQ curve with some user controls (e.g when to start to deviate from the PQ curve). Of course I would not hold my breath, JVC has moved on from the RS600.
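The suggested calculation can be sketched like this (my own helper, using standard photometry constants rather than anything JVC provides; a 16:9 aspect ratio is assumed):

```python
# Estimate on-screen peak luminance from projector light output.
# A gain-1.0 screen of area A ft^2 lit by L lumens reads L/A
# foot-lamberts; multiply by the screen gain, then by 3.426 to
# convert foot-lamberts to nits (cd/m^2).

def peak_nits(lumens: float, screen_gain: float, width_ft: float) -> float:
    """Peak white in nits for a 16:9 screen of the given width (feet)."""
    area_sq_ft = width_ft * width_ft * 9 / 16      # 16:9 screen area
    foot_lamberts = lumens * screen_gain / area_sq_ft
    return 3.426 * foot_lamberts

# Example: 1500 lumens onto a 10 ft wide, gain 1.0 screen
print(round(peak_nits(1500, 1.0, 10)))  # 91
```

This illustrates the point made earlier in the thread: around 100 NITs peak white is about all a typical lamp-based projector manages on a larger screen without extra gain.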
BT.2390 defines how to map PQ to a display with less peak output than 10,000 NITs, and who knows whether it looks good or not. Manni01 has suggested that it starts too early for his taste. But FYI, when using the equations in BT.2390, for a maxLum of 100 NITs, a 100 NIT input gives roughly a 50 NIT output.
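For reference, the BT.2390 EETF can be sketched as follows (my own simplified implementation, assuming a zero display black level; the full Report also handles non-zero black). It works in normalized PQ space and rolls off with a Hermite spline above a knee:

```python
# ST.2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_inv_eotf(nits: float) -> float:
    """Absolute luminance in nits -> normalized PQ signal (0-1)."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def pq_eotf(signal: float) -> float:
    """Normalized PQ signal (0-1) -> absolute luminance in nits."""
    e = signal ** (1 / M2)
    y = max(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1 / M1)

def bt2390_tone_map(in_nits: float, display_peak_nits: float) -> float:
    """Map scene luminance to display luminance via the BT.2390 EETF
    (zero-black simplification)."""
    max_lum = pq_inv_eotf(display_peak_nits)  # display peak in PQ space
    ks = 1.5 * max_lum - 0.5                  # knee start
    e1 = pq_inv_eotf(in_nits)
    if e1 < ks:                               # below the knee: pass through
        e2 = e1
    else:                                     # Hermite-spline roll-off
        t = (e1 - ks) / (1 - ks)
        e2 = ((2 * t**3 - 3 * t**2 + 1) * ks
              + (t**3 - 2 * t**2 + t) * (1 - ks)
              + (-2 * t**3 + 3 * t**2) * max_lum)
    return pq_eotf(e2)

# A 100-nit display squeezes 100-nit content to roughly half
print(f"{bt2390_tone_map(100, 100):.1f} nits")
```

On a 100 NIT display this maps a 100 NIT input to roughly 47-48 NITs, i.e. about the halving mentioned above, while content below roughly 10 NITs passes through almost untouched.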
You got it right and your recommendations for the future are spot on. I kept mine realistic, because I want JVC to act on this for our models.
I don't NEED to roll off earlier, so I don't WANT to. I would leave a LOT of contrast on the table. Why would I want to do that?
Of course, others with dimmer set-ups should make their own curves to tune to their needs, or get someone to do it for them.
Originally Posted by atabea
Although the information posted here can seem intimidating, just remember you don't have to know EVERYTHING that's discussed. I certainly don't! If you just focus on a couple of items, you will be at a great starting point:
1) learn the auto-cal procedure, which in practice is not all that bad, and
2) import one or two of Manni's custom gamma curves to your JVC.
The rest will come in time. Of course, hiring someone like ChadB will solve all your problems.
Yes, that's a good compromise
I highly recommend hiring someone like Chad B if one is not able to follow the discussion above. If one is, I wrote this and shared my curves to make it possible for those with compatible set-ups to get great results in their room. But my goal is not to deprive hard-working, competent calibrators of getting paid to do the work they have learnt to do. Chad is a wonderful example: he is really dedicated to the JVCs and knows them inside out. He started using the JVC Autocal right away when he saw the potential, instead of poo-pooing it because of the use of the Spyder, and was onto the multiplier as soon as he saw its potential. This is wonderful, I wish all calibrators were like him, and he deserves to reap the rewards of all this hard work.
Originally Posted by mbw23air
He told me he can do HDR at low or high lamp, but he says he gets better results at high lamp, so I had him do HDR in high lamp. I'm sure it will look great either way you choose.
I did more tests late last night and I also prefer high lamp, for the reasons I mentioned earlier. You get better results and more control over the curve: the low end looks better, and you have more headroom for the highlights. I would use low lamp if I could, but with this custom curve I'd say go for max brightness (high lamp, iris fully open) if you can bear the fan noise/heat, and shape the curve from there.
Originally Posted by rollon1980
I actually agree with this. It doesn't leave a lot of headroom but it is likely the only way to ensure tone-mapping that's kind of faithful to the original. I mean by this analogy we should really map 100nits to 100nits straight and then tail off the rest of the curve to allow for some highlights. It likely wouldn't work very well, though, in creating the illusion of HDR. It would just be SDR with not much dynamic range in the highlights. Dolby Vision, where are you? :-/
Yes, Dolby Vision (or rather studios), where are you with content mastered to 100nits peak white, as for cinema...
I honestly can't blame the industry for this choice. UHD Blu-ray is such a niche that we got lucky to have it at all with the rise of streaming. They couldn't have targeted our niche of a niche; they had to optimize for flat panels in a living room. We just have to make the most of this non-optimized content. Playing with this, I also understand why DV might never happen for projectors (with consumer content) until we reach a higher peak white: too much compression of the dynamic range, too much desaturation. It looks great if we tune the curve to each set-up, but they would need manufacturers to give them the means to achieve this reliably and consistently (i.e. an internal meter to measure the actual peak white of the specific setup in an optimized, locked-out mode).
Originally Posted by danbez
Is it recommended to do another AutoCal run after importing the new custom gamma curves? I just did an AutoCal a few weeks ago, but before Manni released those.
Not needed, just import the custom curve that best fits your needs.
Originally Posted by COACH2369
Chad and I exchanged a few text messages today after reading about his findings in regards to a custom Gamma setting. He was just at my place two weeks ago, so I wanted to see if he would need to make adjustments on my system.
Based on my screen type, screen size and not wanting to have high lamp mode engaged he still thinks the SDR 2020 will be the better option for me. My projector is too close to me to have high lamp mode engaged.
I second this. If you can bear the fan noise and heat, high lamp is preferable, just like with 3D.
Originally Posted by zombie10k
It's not necessary, as AutoCal calibrates all gamma modes. Try importing the curve and running an A/B comparison between Gamma D and Custom 3; it won't be hard to tell which one is which.
For those with the Panasonic UB900: if you pause for extended periods of time, I have seen the JVC switch back to Gamma D. Keep an eye on this during testing.
Yes, I mentioned this earlier: a long pause causes a return to Gamma D. That's why I would like to find a working IP code for custom gamma. I want to program iRule to send the Custom 1 gamma code before or after I press play on my UHD Blu-ray screen.
Originally Posted by claw
I imported two low lamp gammas; 140-1000-4000 and 140-1000-1000.
140-1000-4000 worked really well for the 4000 nit Fury Road. Two other 4000 nit discs, Jupiter and Pacific Rim, while better than I could previously get in HDR, were still darker than I would like, just as Manni had suggested we would find with a larger screen than his.
After using both the black and the white clipping patterns for the 140-1000-1000 gamma, I preferred it over the 4000 nit gamma when viewing 1000 nit discs. I got just a bit more brightness with it. It worked really well with Lucy. A darker 1000 nit disc, John Wick, was still darker than I would like, as I expected.
This is why I released the no-roll-off versions, with a warning, so people can select them to eke a bit more performance out of titles mastered to 1000nits. The downside is the clipping of details with 4000nits titles (or even the 1100nits titles), so I wouldn't use that curve on its own.
If this is still too dark (which isn't only a function of the screen size; the gain also plays a large part, see the good results Zombie10K gets on his 142" diag screen with a 2.8 gain), you need a custom curve with a lower multiplier, or you need to replace your screen with a smaller/higher-gain one. This is to be expected.
Originally Posted by zombie10k
My 2.8 gain HP at 142" has a big impact on the overall HDR presentation. Brightness wasn't the issue in my setup; it was the terrible overall tone of Gamma D: overcooked highlights, weak mid-range, poor shadow detail. So many attempts to get it right, but Gamma D was the bottleneck.
Now the tone is much more even across the image, and bright enough that I can clamp the iris down quite a bit in high lamp, or run low lamp close to wide open.
Glad the new curves also work for you
Originally Posted by stevenjw
That's exactly what I was thinking. AVS along with other JVC dealers and reviewers that definitely read this thread (you know who you are).
These are the folks that can get the ear of JVC better than our rants/wishes in this thread. Manni's list is pretty much complete, but I'd settle for no. 1 (working DI fade2black in HDR) and either no. 3 (allow selecting a gamma setting for HDR instead of the Gamma D default) or no. 4 (individual IR codes for gamma settings). If we got no. 3, we wouldn't really need no. 4. No. 2 is a nice-to-have improvement for Gamma D, but with custom gamma curves it's no longer that important IMHO. No. 1 and no. 3 would be good enough for me.
We would still need #4, because it's not OK to not have working IP codes for custom gamma. They are in the control guide; they just don't work. We need to know why and fix this, not only for us but for all future models. Also, it would be nice for geeks to have 4000nits, 1100nits and 1000nits buttons in iRule to select the optimized curve for each title and eke the last bit of performance out of the PJ. It's not necessary (the 4000nits curves are not significantly dimmer overall), but it's a nice way to get the best contrast/brightness for each title if you can live with the hassle.
is the most immediate, most needed change. It would be very easy to implement, so there's no excuse for them not to deliver this at a minimum to make up for #2, which really should also be corrected. It's a HUGE bug. Not everyone can make their own curve or hire a calibrator to make one for them. We should be able to approximate it with Gamma D without killing our on/off. Gamma D is a bit more saturated than custom curves, so some might prefer that option as long as it doesn't raise the black floor and can resolve the low end.