Originally Posted by cmdrdredd
Right, I think it's well known that DV requires a different grading process than HDR10 by itself. From some of the early press releases and info, Dolby made it seem like grading for DV and outputting both a DV and an HDR10-compatible encode is possible. The way they made it seem originally is that DV handles the HDR10 metadata with certain base values and then, on top of that, allows scene-by-scene values catered to the specific TV in use. I'm sure they have thought about this in regard to UHD Blu-ray and made it possible for a studio to grade their movie for HDR once but retain compatibility with HDR10-only players and TVs. Movies not graded for DV will have to be re-graded using Dolby's tools if the studio wants to release a Dolby Vision version, but can still retain HDR10 compatibility in the end. If not, then it seems like a huge waste of resources to regrade at all.
You know what's a waste of resources? Not updating all the current players to support DV. Because if you did that, you wouldn't need both formats at all, or enhancement layers of any kind. All you'd need is the DV encode: while the two systems often do have different grades (probably due to the different peak nits chosen), they don't necessarily have to. You can have a Dolby Vision movie decoded by a player and then "translated" to HDR10 using metadata telling it where to cap the max nits. You could probably also have another piece of metadata that guides the tone mapping process further, similar to the lookup table information you can provide to YouTube to automate the HDR-to-SDR downconversion while retaining your preferred SDR grade.
Basically, you'd decode the DV video, remap the colors to match the provided target HDR10 metadata on the player, and then send that as HDR10 to the TV. It's funny, because that kind of conversion is easily possible, while going the other way around requires all this complicated enhancement layer stuff that just ends up wasting space. And honestly, I don't see any reason why current players would be incapable of that. They're all firmware upgradeable, and it would take minimal processing to handle.
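To make the idea concrete, here's a minimal sketch of what I mean by "capping the max nits." This is just a toy soft-knee roll-off in Python, not Dolby's actual tone mapping math; the function name, the knee parameter, and the 1000-nit target are all made up for illustration:

```python
import numpy as np

def dv_to_hdr10_sketch(linear_nits, hdr10_max_nits=1000.0, knee_start=0.75):
    """Toy remap: take a decoded DV frame (absolute linear nits per pixel)
    and fold anything above the HDR10 target's peak into a soft shoulder,
    so the result could be re-encoded as plain HDR10 with static metadata.
    NOT Dolby's actual algorithm, just an illustration of capping max nits."""
    knee = knee_start * hdr10_max_nits
    out = np.array(linear_nits, dtype=np.float64)
    over = out > knee
    # Compress everything above the knee into the remaining headroom.
    out[over] = knee + (hdr10_max_nits - knee) * (
        1.0 - np.exp(-(out[over] - knee) / (hdr10_max_nits - knee))
    )
    return np.minimum(out, hdr10_max_nits)

# Example: a 4000-nit specular highlight gets folded under the 1000-nit cap,
# while mid-range pixels pass through untouched.
print(dv_to_hdr10_sketch([100.0, 900.0, 4000.0]))
```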
Originally Posted by puddy77
"In an example embodiment, the base layer image data under techniques as described herein comprises a specific constitution of a lower-bit depth version of VDR image data and the remaining difference between base layer and the original VDR image is carried in the enhancement layer."
That's almost exactly the same process as 3D Blu-ray...
"The image data may comprise base layer image data of a lower bit depth quantized from a higher bit depth (e.g., 12+ bits) VDR image and carried in a base layer image container (a YCbCr 4:2:0 image container), and enhancement layer image data comprising residual values between the VDR image and a prediction frame generated from the base layer image data. The base layer image data and the enhancement layer image data may be received and used by the downstream device to reconstruct a higher bit depth (12+ bits) version of the VDR image."
The problem is, you can't just convert from 10-bit to 12-bit, or there's no benefit. You actually have to add the extra information in those higher bits. But not only that, you need to convert the levels, because levels change wildly in DV encodes while they remain fairly consistent in HDR10 encodes (hence static metadata). So you have to change the levels by a large degree, inconsistently throughout the movie, and then you have to add a bunch of detail that wasn't there already. I'm not saying the process is impossible, just that it wouldn't be very efficient due to the vast amount of very different new information you have to account for in the DV stream.
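Mechanically, the base-plus-residual part of the patent is simple enough. Here's a toy Python sketch of that split and reconstruction — my own simplification purely for illustration, which ignores the per-scene level remapping and color prediction, i.e. exactly the part I'm arguing makes the residual expensive:

```python
import numpy as np

def split_layers(vdr_12bit):
    """Toy version of the base+enhancement idea from the patent quote:
    quantize the 12-bit VDR signal down to a 10-bit base layer, predict
    the 12-bit signal back from it, and carry the leftover as a residual."""
    base_10bit = vdr_12bit >> 2            # crude quantization to 10-bit
    prediction = base_10bit << 2           # player's predicted 12-bit frame
    residual = vdr_12bit - prediction      # what the enhancement layer carries
    return base_10bit, residual

def reconstruct(base_10bit, residual):
    # Player side: predict from the base layer, then add the residual back.
    return (base_10bit << 2) + residual

vdr = np.array([0, 2079, 4095], dtype=np.int32)   # sample 12-bit values
base, res = split_layers(vdr)
print(base, res)
assert np.array_equal(reconstruct(base, res), vdr)
```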
Thanks for the patent link though, I'll have to have a read through that to see if it answers any of my remaining questions.
Originally Posted by Tom Roper
1023 = 10,000 nits (No known display)
920 = 4,000 nits (peak luminance on Dolby Pulsar monitor)
844 = 2,000 nits (peak luminance on Dolby PRM32FHD)
767 = 1,000 nits (peak luminance on Sony BVMX300)
528 = 108 nits (peak luminance on Dolby Cinema projector)
519 = 100 nits
447 = 48 nits (peak luminance on DCI projection and Dolby Cinema 3D)
0 = 0 nits
The relationship between the 10-bit code value and nits follows the transfer function of the PQ curve. All you have to do to get the equivalent 12-bit code value for any given nits is multiply by 4 (bit shift left 2).
I am not saying you are wrong or right in your bit rate calculations, but you are definitely wrong about the 100 nit level being a different percentage for 10-bit than for 12-bit, or different between HDR10 and DV. Both use the exact same PQ transfer function.
This is only true if the HDR10 encode was mastered for 10,000 nits, and it also doesn't take into account the dynamic nature of Dolby Vision. Dolby Vision doesn't use the same max nits for every frame the way HDR10 does, so the conversion doesn't work the same for every scene. One scene may only reach 200 nits, so the DV encode uses a peak of 200 nits for that scene, while the HDR10 encode stays at 2,000 nits, or whatever was chosen for the whole movie. In that scene, the 100 nit level would be 50% of the max nits for the scene, but only 5% of the max nits for the full movie (obviously converted to logarithmic values). That's why the relationship changes scene to scene on a DV encode versus HDR10: HDR10 pixels are graded against the max nits for the full movie, while DV pixels are graded against the max nits for the scene. That's what you're not taking into account. Now, if it were a simple 12-bit static metadata situation (basically HDR12 instead of DV), it would be a much easier problem to solve.
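For anyone who wants to sanity-check the numbers, here's a quick back-of-the-envelope Python sketch. The constants are the published SMPTE ST 2084 (PQ) ones; the little helper function and the example values are just mine:

```python
# SMPTE ST 2084 (PQ) constants
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_code(nits, bits=10):
    """Inverse EOTF: absolute luminance in nits -> full-range code value (unrounded)."""
    y = (nits / 10000.0) ** m1
    v = ((c1 + c2 * y) / (1 + c3 * y)) ** m2      # normalized 0..1 PQ signal
    return v * (2 ** bits - 1)

print(round(pq_code(100), 1))      # ~519.8 in 10-bit, i.e. the 519 row in the quoted table
print(round(pq_code(100, 12), 1))  # ~2080.6 in 12-bit, roughly four times the 10-bit value

# PQ is absolute, so the code value for 100 nits is the same in either stream.
# The scene-vs-movie difference is about what that 100 nits is relative to:
print(100 / 200, 100 / 2000)       # 50% of a 200-nit scene peak, 5% of a 2000-nit movie peak
```

So the code value math in the quote checks out; my point is purely about what the grade treats as "peak" when each scene gets mapped.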
Originally Posted by heavyharmonies
I agree 100%. I come to this thread for discussion on 4K HDR *TITLES* hence the thread title, not more scientific discussions, debate, and arguing of DV vs. HDR10, color spaces, etc. Those discussions belong elsewhere and just muddy up this thread. They have their place... just not here.
This discussion is relevant to those titles, as many of them are going to be receiving DV encodes in the future, and understanding how this technology works matters when making decisions about those future titles. While there are many benefits to Dolby Vision, I started much of this tangent because I was worried that the extra encoded data on these UHD discs could end up hurting the experience rather than enhancing it, due to over-compression. It's also relevant because I think it's fair to be a bit annoyed that there hasn't been much in the way of news about DV upgrades for those of us who were early adopters of the UHD format and bought the first players, which is basically most of the people in this thread. Aside from the Oppo 203, of course.