Originally Posted by tomvinelli
It is confusing, isn't it? Even if we understood the technical side of things, it's still confusing. I suppose on a scope of some kind you would see a difference, but my eyes, which are good, don't see it. And with HDR there are other factors involved to get good HDR, like the more nits you have, the better the HDR, but I have no idea what a nit is. All I know is it makes HDR better. In the audio world a CD is 16 bits, but then there is 24-bit audio, and like you said it would be hard to hear the difference.
A nit is a unit of brightness. It's another name for the unit "candela per square metre". Think of it this way: within a square that is one metre by one metre, each nit represents roughly another candle flame's worth of light coming from that area. The more nits there are, the brighter the image is.

SDR is mastered with a maximum brightness of 100 nits. This limits what you can do visually, because in real life things get much brighter, so you're forced to choose between a higher or lower exposure. With a higher exposure, details in the shadows and most of the mid-bright objects are nicely visible, but the brighter highlights essentially crush to white. This is why, for example, the center of a fireplace appears white in SDR: you simply cannot display the bright yellows and oranges in the middle of a fireplace in an SDR image without lowering the exposure. But if you lower the exposure, then everything else in the image is simply too dark to see.

So that's why we have HDR. We can properly display those brighter elements (high nits) without lowering the exposure to compensate, giving them realistic contrast, as they would appear in the real world. HDR lets us use a signal that goes up to 10,000 nits. That means the brightest highlights in an HDR image can potentially be 100 times brighter than in an SDR image, while the shadows and midtones remain at roughly the same brightness. This expansion of dynamic range is what gives HDR its "pop".
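If you're curious where that 10,000-nit figure comes from: HDR10 and Dolby Vision both store picture levels using the PQ curve (SMPTE ST 2084), which maps each 10-bit code value to an absolute brightness in nits. Here's a rough Python sketch of that math; the 520 line is just to show roughly where the old 100-nit SDR ceiling lands on the new scale:

```python
# Rough sketch of the PQ (SMPTE ST 2084) transfer function used by HDR10 and
# Dolby Vision. It maps a 10-bit code value to an absolute brightness in nits,
# topping out at 10,000 nits.

M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_to_nits(code_10bit):
    """Convert a 10-bit PQ code value (0-1023) to luminance in nits."""
    e = (code_10bit / 1023) ** (1 / M2)    # undo the outer PQ exponent
    y = max(e - C1, 0) / (C2 - C3 * e)     # undo the rational part
    return 10000 * y ** (1 / M1)           # scale to absolute nits

print(round(pq_to_nits(520)))    # ~100 nits  -> roughly the SDR mastering ceiling
print(round(pq_to_nits(1023)))   # 10000 nits -> the top of the HDR container
```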
The problem is that right now most TVs fall in the range of 500-1000 nits of peak brightness. So what happens if a movie is encoded with brighter highlights than that? Either the set will clip those highlights away, or it will "tonemap" them, using a curve that tries to balance preserving those brighter details against not sacrificing the overall brightness of the image.

The trouble is, standard HDR10 uses the same static metadata for the entire movie. You have information about the brightest and darkest capabilities of the mastering monitor, plus a value describing the brightest pixel in the movie (MaxCLL) and the brightest average frame luminance (MaxFALL). While these are useful for directing tonemapping, they only describe two frames out of an entire film, and those two frames may not be representative of the movie as a whole. Dynamic metadata, such as that used by Dolby Vision, solves this: it gives the player information about each scene, not just the movie as a whole. Dolby uses this to calculate a different tonemapping curve for each scene, depending on what that scene contains.

They go a step beyond that as well. The tonemapping is custom designed around the specific capabilities of the display, not just how bright or dark it goes, but deeper than that. For example, on my OLED I've noticed that raising the OLED light setting changes the tonemapping: Dolby Vision becomes more aggressive in how highlights are rolled off to avoid triggering the display's ABL. There are little quirks like that with each display, things like color gamut, color volume, minor differences in panel gamma, local dimming, etc. Dolby compares your display's capabilities to those of the display used to master the movie and creates a custom-tailored tonemapping curve for every scene. It's really incredible what they accomplish with this.
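To make the static-vs-dynamic difference concrete, here's a deliberately simplified toy in Python. This is not Dolby's actual algorithm, just a generic knee-and-roll-off curve; the point is that a curve built from the movie-wide MaxCLL squashes a modest scene's highlights much harder than one built from that scene's actual peak. The nit values are made up for illustration:

```python
# Toy illustration (NOT Dolby's real algorithm): how a per-scene peak lets a
# roll-off curve keep more highlight detail than one movie-wide curve.

def tonemap(nits, source_peak, display_peak, knee=0.75):
    """Pass through below the knee, then compress everything up to
    source_peak into the display's remaining headroom."""
    knee_nits = knee * display_peak
    if nits <= knee_nits or source_peak <= display_peak:
        return min(nits, display_peak)
    t = (nits - knee_nits) / (source_peak - knee_nits)
    return knee_nits + (display_peak - knee_nits) * t

display_peak = 700    # e.g. a typical OLED's peak brightness
movie_maxcll = 4000   # static HDR10 metadata: brightest pixel in the whole film
scene_peak   = 900    # what this particular scene actually reaches

pixel = 850  # a bright highlight in this scene
print("static curve :", round(tonemap(pixel, movie_maxcll, display_peak)))  # ~541 nits
print("dynamic curve:", round(tonemap(pixel, scene_peak, display_peak)))    # ~677 nits
```

With the static curve the 850-nit highlight gets pulled way down because the curve is leaving room for 4,000-nit material that never shows up in this scene; with the per-scene curve it lands much closer to the display's actual peak.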
To see the difference between Dolby Vision and HDR10, use a movie mastered at 4,000 nits with a high MaxCLL (you'll need to research the metadata for the movie if your player doesn't display it). Find a bright scene and compare the two, paying attention to the details in the brightest parts of the image. Dolby Vision will almost always do a much better job of preserving those details than HDR10. HDR10+ has the potential to improve on that, but I'm guessing its tonemapping algorithms still won't match how well optimized Dolby's tonemapping curves are. Simply put, if you want an image that looks as close as possible to what the director saw in the editing room, you want Dolby Vision, because its tonemapping will do a much better job of getting you there.
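If you have the file on a computer, one way to look up that metadata yourself is ffprobe. Here's a rough Python sketch; the side-data field names follow ffprobe's JSON output on recent builds and may differ between versions, and "movie.mkv" is just a placeholder, so treat this as a starting point rather than a guaranteed recipe:

```python
# Rough sketch: pull HDR10 content light level (MaxCLL / MaxFALL) from a file
# using ffprobe. Field names follow ffprobe's JSON output and may vary by version.
import json
import subprocess

def get_content_light_level(path):
    # Decode just the first video frame and dump its side data as JSON.
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-select_streams", "v:0",
         "-read_intervals", "%+#1", "-show_frames",
         "-print_format", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    for frame in json.loads(out).get("frames", []):
        for sd in frame.get("side_data_list", []):
            if "Content light level" in sd.get("side_data_type", ""):
                return sd.get("max_content"), sd.get("max_average")
    return None, None

maxcll, maxfall = get_content_light_level("movie.mkv")  # placeholder filename
print(f"MaxCLL: {maxcll} nits, MaxFALL: {maxfall} nits")
```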