Originally Posted by WiFi-Spy
The larger gamut is the 100% P3 gamut of the mastering display, which is then transformed into BT.2020 signaling for home release. Current home HDR10 displays range from 80-97% of P3, and dynamic metadata becomes more important the farther the performance of the HDR10 TV deviates from the mastering display. Beyond a standard for dynamic metadata, what is really lacking with HDR10 is a standardized gamut re-mapping and luminance tone-mapping algorithm, or even a recommendation. For as much as the discussion about Dolby Vision gets bogged down in licensing and political debates, the fact that Dolby has one tone-mapping strategy for all DV TVs makes it a de facto standard for that technology, since it isn't dictated by how each TV manufacturer thinks HDR should look.
The smaller color volume referred to by ST-2094 is not the percentage of P3 mastering volume a particular consumer display is capable of.
If we back up to what Florian was saying, a point was being made that one of the differences between DV and HDR10 is "dynamic metadata," to which Florian replied that HDR10 has that as well, in the form of ST-2094.
ST-2094 is a scene-by-scene transformation from a larger color volume into a smaller one that accounts for the out-of-gamut values that occur when creative content graded for a very dark or a very bright scene in a larger color volume is transformed into a smaller color volume (e.g. BT.709). Since the color volume of a 100% P3 mastering monitor is bounded within HDR10's BT.2020 container, and since ST-2094 is not tied to any specific hardware, there will be no dynamic transformations for out-of-gamut values for an HDR10 display that can't do 100% P3. All that's needed is the existing static metadata for that.
Problem to be solved: High Dynamic Range / Wider Color Gamut content captured, for example, using the newly standardized SMPTE ST 2084:2014 Electro-Optical Transfer Function requires a set of industry-standardized metadata to ensure a consistent transformation of this content to the existing BT.709 (add in SMPTE references) standard. This metadata is bounded by the characteristics of the mastering display as defined in SMPTE ST 2086:201x, but to ensure creative intent is maintained, content-dependent metadata is also required. This standard will define the required metadata set. If color mapping is performed without this metadata, the resulting out-of-gamut content will suffer severely in visual quality due to clipping.
The SMPTE UHDTV Ecosystem Study Group has reported on this subject in Annex B “High Dynamic Range Imaging” and Annex C “Standard Dynamic Range Color Space Conversion” of the report dated March 28, 2014. The example cited above aligns closely with sections C.4 and C.5, which specifically caution about the conversion between UHDTV and HDTV/SDTV.
The science of color volume mapping has seen major advances recently. A one-size-fits-all solution to map between different color volumes does not currently exist. The logical extension to the current color volume mapping techniques common in image processing is the use of a content-dependent dynamically parameterized color transformation. The color volume mapping transformation is derived for every scene/segment based on the metadata that characterizes the content. For example, a very dark scene will be mapped differently from a very bright scene when transforming from a large color volume to a more restricted color volume. This can be effectively achieved only when this transformation is guided by metadata.
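The dark-vs-bright example above can be sketched in a few lines. This is a hedged illustration only: the knee position and roll-off curve here are invented for demonstration, and it is not the actual ST-2094 transform (the standard specifies the metadata, not one mapping algorithm). The point it shows is that the same curve, parameterized per scene by metadata (here, the scene's peak luminance), maps a dark scene and a bright scene differently.

```python
# Hypothetical content-dependent tone mapping: the curve is parameterized
# per scene from metadata, so dark and bright scenes map differently.
# (Illustrative only; not the ST-2094 algorithm.)

def tone_map(lum_nits, scene_max, display_max):
    """Compress a scene's luminance into the display's range.
    Values below a knee pass through; values above roll off smoothly."""
    knee = 0.75 * display_max  # hypothetical knee point
    if lum_nits <= knee or scene_max <= display_max:
        return min(lum_nits, display_max)
    # Roll off [knee, scene_max] into [knee, display_max]
    t = (lum_nits - knee) / (scene_max - knee)
    return knee + (display_max - knee) * (2 * t) / (1 + t)

# Dark scene (peaks at 120 nits) passes through untouched on a 1000-nit TV;
# a bright scene (peaks at 4000 nits) gets its highlights compressed.
dark = [tone_map(x, 120, 1000) for x in (10, 100, 120)]
bright = [tone_map(x, 4000, 1000) for x in (10, 1000, 4000)]
```

A static, clip-wide curve would have to compress the dark scene's highlights just as aggressively as the bright scene's, which is exactly the quality loss the per-scene metadata is meant to avoid.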
Project scope: develop multi-part standards for specifying the semantics and representation of content-dependent metadata needed for color volume transformation of high dynamic range and wide color gamut imagery to smaller color volumes (e.g. BT.709 or Digital Cinema) in mastering applications. The metadata entries constitute the logical concept (semantics) for immediate work followed by the physical encoding (representation) specification based on an extensible metadata international standard such as ISO 16684. The standards should specify metadata necessary to support the mastering of high dynamic range and wide color gamut content for next generation distribution as well as physical media formats.
The metadata should allow for parameterized color transformation that is variable along a timeline. The resulting metadata standard should include clip-based metadata, with optional title-level, shot-level, frame-level, and other applicable groupings of color volume transformation metadata, and may include both public and private metadata.
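One plausible shape for those groupings is a clip-level default with optional finer-grained overrides along the timeline. All names below are invented for illustration; the standard's actual physical encoding would follow an extensible metadata format such as ISO 16684 (XMP), not Python objects.

```python
# Hypothetical container for timeline-variable transform metadata:
# a clip-level default plus optional shot/scene-level overrides.
from dataclasses import dataclass, field

@dataclass
class ColorVolumeTransform:
    min_pq: float   # scene minimum luminance, PQ-encoded (0..1)
    avg_pq: float   # scene average luminance, PQ-encoded
    max_pq: float   # scene maximum luminance, PQ-encoded

@dataclass
class SceneMetadata:
    start_frame: int
    end_frame: int
    transform: ColorVolumeTransform

@dataclass
class ClipMetadata:
    clip_transform: ColorVolumeTransform        # always present
    scenes: list = field(default_factory=list)  # optional finer grouping

def transform_for_frame(clip, frame):
    """Finer-grained (scene) metadata overrides the clip-level default."""
    for s in clip.scenes:
        if s.start_frame <= frame <= s.end_frame:
            return s.transform
    return clip.clip_transform
```

The override lookup is what makes the transformation "variable along a timeline": frames covered by a scene entry use its parameters, everything else falls back to the clip default.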
The metadata is carried in parallel and synchronized with the mastered content and can be used for real time transformation during mastering or for a deferred transformation in distribution.
The metadata should also characterize certain colorimetric attributes of the content to aid in the transformation process. One example would be minimum, average, maximum luminance values as well as parametric controls for the color volume transformation process derived during mastering performed in a perceptually linear color space.
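"Performed in a perceptually linear color space" can be made concrete: PQ (the ST 2084 inverse EOTF) is such a space, and taking the min/average/max after PQ-encoding weights shadows and highlights closer to how the eye does than a linear-nits mean would. The constants below are the published ST 2084 values; the per-scene statistics function is a sketch, not a normative definition.

```python
# Sketch: derive min/average/max luminance metadata for a scene in a
# perceptually linear space by PQ-encoding linear nit values first.

# SMPTE ST 2084 constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    """ST 2084 inverse EOTF: linear nits (0..10000) -> PQ code value (0..1)."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def scene_luminance_metadata(pixel_nits):
    """min/avg/max over a scene's pixel luminances, taken in PQ space."""
    pq = [pq_encode(v) for v in pixel_nits]
    return min(pq), sum(pq) / len(pq), max(pq)

lo, avg, hi = scene_luminance_metadata([0.05, 10, 100, 1000, 4000])
```

Note how nonlinear the space is: 100 nits already lands around PQ 0.51, i.e. half of the code range is spent below 100 nits, which is why averaging in PQ rather than in linear nits better reflects perceived scene brightness.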
The standards implementation should not be tied to any specific hardware.
The metadata set specified by the resulting standard should use the mastering display color volume metadata specified in ST 2086 “Mastering Display Color Volume Metadata Supporting High Luminance and Wide Color Gamut Images”.
Specific methods for generation and use of the metadata are out of the scope of this standard.[/quote]