Originally Posted by mrtickleuk
Yes. And the reason is not one of preference, but because PQ is absolute: it's mastered for a specific viewing light level. We must refer to the surround luminance level for which it is mastered, which is normally 5 nits (i.e. very dark).
And then (try to) replicate that in our rooms. (Ref: SpectraCal White Paper link at the HDTVtest link.)
With HLG content, this isn't a problem and you can view at whatever surround light level you like.
The thing is, though: no matter what something is mastered for, if you view it with no surrounding light, your eyes will adjust and it will look correct. If you have too much ambient light, your eyes adapt to that instead of to the image. At a surround level of 5 nits or lower, it should look correct; above that, it won't.
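For anyone curious what "PQ is absolute" actually means in practice, here's a minimal Python sketch using the published ST 2084 and HLG constants. I've simplified HLG down to luminance only and used the nominal system gamma of 1.2, so treat it as an illustration rather than a full implementation:

[CODE]
import math

# PQ (SMPTE ST 2084) EOTF: a given code value always maps to the same
# absolute luminance, no matter what display or room it's shown in.
def pq_eotf_nits(code):  # code in [0, 1] -> luminance in nits
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    p = code ** (1 / m2)
    return 10000 * (max(p - c1, 0) / (c2 - c3 * p)) ** (1 / m1)

# HLG (simplified to luminance only): the inverse OETF recovers relative
# scene light, which the display then scales by its own peak, so the same
# code value lands at different nit levels on different displays.
def hlg_display_nits(code, display_peak):
    a = 0.17883277
    b = 1 - 4 * a
    c = 0.5 - a * math.log(4 * a)
    scene = code ** 2 / 3 if code <= 0.5 else (math.exp((code - c) / a) + b) / 12
    return display_peak * scene ** 1.2  # 1.2 is the nominal HLG system gamma

code = 0.75
print(pq_eotf_nits(code))            # same absolute nits everywhere
print(hlg_display_nits(code, 1000))  # depends on the panel's peak...
print(hlg_display_nits(code, 600))   # ...so it adapts to the viewing setup
[/CODE]

That's why PQ content carries an assumed viewing environment with it, while HLG leaves the surround level up to you.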
Originally Posted by alexanderg823
There should not even be this much of a debate.
The entire purpose of hdr is increased contrast.
The entire purpose of increased contrast is increased depth and realism.
So to suggest hdr isn't about realism is flat out wrong.
The problem here is that we're talking about two different versions of "realism". You can simulate the realism of the amount of light we'd actually see if we were there in person, but doing that for bright content significantly reduces the contrast available for the highlights. So while you increase realism in the average light level, you reduce realism in the contrast of the image. Our eyes can adapt to just about any light level above a certain point when viewed in proper conditions (they work like a camera aperture, opening or closing to control the amount of light let in). That means there's no real benefit to pushing the average light level to hundreds of nits: your eyes will adjust until the main subject looks the same as if it were mastered for an average of 100 nits, so all you've effectively done is reduce the relative contrast of the highlights.
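To put some hypothetical numbers on that (illustrative values, not from any real grade):

[CODE]
import math

panel_peak = 1000  # nits; assumed display capability for this example

# Pushing the subject's average level up eats the headroom left for highlights.
for subject_nits in (100, 400):
    headroom = panel_peak / subject_nits
    stops = math.log2(headroom)
    print(f"subject at {subject_nits} nits -> {headroom:.1f}x "
          f"({stops:.1f} stops) of highlight headroom")

# subject at 100 nits -> 10.0x (3.3 stops) of highlight headroom
# subject at 400 nits -> 2.5x (1.3 stops) of highlight headroom
[/CODE]

And since your eyes adapt, the 400-nit subject ends up looking about the same as the 100-nit one anyway; you've just thrown away two stops of highlight contrast.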
Personally, when I grade for HDR, I start by clipping at 100 nits and making sure the main detail looks as good as it can within those first 100 nits. Then I simply unclip everything above that and make sure everything that was previously clipped looks correct as well (in many cases, not much extra work is needed there). I've always felt that HDR should feel like an uncapping of what was previously available in SDR; as long as the grade is done well and the source is good, plenty of new information gets revealed that way. In my experience, a lot of content didn't have a good enough source, so it was either overly brightened, or run through a process that steepens the gamma curve a lot and then adjusts it so the peaks are much brighter. That gives you roughly the same median brightness, but crushed shadows and highlights that are contrasty in an unnatural way. These grading styles are essentially a higher-bit-depth version of the fake HDR modes many TVs have, so I'm not a fan.
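Here's a rough numpy sketch contrasting the "uncapping" approach with the steepened-gamma stretch I'm describing. The pivot and exponent are made-up values just to show the shape of each curve, not anyone's actual grading pipeline:

[CODE]
import numpy as np

scene = np.geomspace(0.1, 1000.0, 9)  # example scene luminances in nits

# SDR-style delivery: everything above 100 nits simply clips.
sdr = np.minimum(scene, 100.0)

# The "uncapping" approach described above: the first 100 nits match the
# SDR grade, and the previously clipped highlights are let through.
uncapped = scene.copy()

# The "steepened gamma" style: raise contrast around a mid-level pivot, so
# the median stays similar while shadows crush and highlights overshoot.
pivot = 10.0    # hypothetical median luminance, in nits
steepen = 1.4   # hypothetical contrast exponent
stretched = pivot * (scene / pivot) ** steepen
stretched = np.minimum(stretched, 1000.0)  # then the panel clips the overshoot

for nits_in, a, b, c in zip(scene, sdr, uncapped, stretched):
    print(f"in {nits_in:7.1f} -> SDR {a:6.1f} | uncapped {b:7.1f} "
          f"| stretched {c:7.1f}")
[/CODE]

With the stretch, the midtones barely move, the shadows sink well below where they were graded, and the top of the range slams into the panel's peak, which is exactly the "contrasty but unnatural" look I mean.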
Before the mods go on another deleting rampage: yes, we're still specifically talking about the clip discussing Technicolor preparing for future HDR broadcasts of sporting events. I was worried that this brightening method might be what Technicolor is doing, rather than investing in getting the cameras set up to provide a more optimal source, one that yields a beautiful HDR image straight away without needing much manipulation. Of course, I'm not there to see it with my own eyes, so it's hard to say with 100% certainty, but if they are doing it that way, it wastes a massive amount of HDR's potential.