HDR10+ at Samsung QLED/HDR10 Summit

One of the most important presentations at the Samsung QLED/HDR10 Summit last week was given by Dr. Yeong-Taeg Kim, who works at the Samsung Research Center in Irvine, CA. The topic was HDR10+, which adds dynamic metadata to HDR10. Samsung has been at the forefront of HDR10+ development, but it’s freely available to anyone who wishes to implement it, just like HDR10. In fact, HDR10+’s dynamic metadata are codified in SMPTE standard ST 2094-40, just as HDR10’s static metadata are codified in ST 2086.

Dr. Kim started with a review of HDR10, which uses the PQ (Perceptual Quantizer) EOTF (electro-optical transfer function) as codified in ST 2084, 10-bit precision, BT.2020 color primaries (currently used as a “container” for DCI-P3 primaries), and static metadata as codified in SMPTE ST 2086. The metadata include the mastering display’s maximum and minimum luminance, RGB color primaries, and white point.
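For the curious, the PQ EOTF itself is compact enough to sketch in a few lines. This minimal Python version uses the constants published in ST 2084 to convert a 10-bit code value to absolute luminance in nits (the function name is mine, not from any standard library):

```python
# Minimal sketch of the ST 2084 (PQ) EOTF: 10-bit code value -> nits.
# Constants are from ST 2084; the function name is illustrative only.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(code_value_10bit: int) -> float:
    """Map a 10-bit PQ code value (0-1023) to luminance in cd/m^2 (nits)."""
    e = code_value_10bit / 1023.0            # normalized signal, 0..1
    ep = e ** (1.0 / M2)
    y = max(ep - C1, 0.0) / (C2 - C3 * ep)   # normalized luminance, 0..1
    return 10000.0 * y ** (1.0 / M1)         # PQ tops out at 10,000 nits

# Sanity checks: code 1023 is ~10,000 nits; code 0 is 0 nits.
print(round(pq_eotf(1023)))   # 10000
print(pq_eotf(0))             # 0.0
```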

The metadata also include MaxCLL (Maximum Content Light Level) and MaxFALL (Maximum Frame-Average Light Level), which describe the content, not the mastering display. MaxCLL is the light level of the brightest pixel in the entire program, and MaxFALL is the average light level of the frame with the highest frame-average in the program. An entire program (say, a movie) has just one value of each in HDR10. Presumably, MaxCLL does not exceed the maximum luminance of the mastering display.
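To make those definitions concrete, here’s a minimal sketch of how the two values could be computed from decoded frames (a simplified model in Python; the names are mine, and real encoders follow the formal CTA-861.3 definitions):

```python
import numpy as np

def max_cll_and_fall(frames):
    """Compute MaxCLL and MaxFALL for a program.

    `frames` is an iterable of HxWx3 arrays of linear-light RGB in nits.
    Both metrics are based on the per-pixel maximum of R, G, and B.
    """
    max_cll = 0.0   # brightest single pixel anywhere in the program
    max_fall = 0.0  # highest frame-average light level in the program
    for frame in frames:
        per_pixel_max = frame.max(axis=2)          # max(R, G, B) per pixel
        max_cll = max(max_cll, per_pixel_max.max())
        max_fall = max(max_fall, per_pixel_max.mean())
    return max_cll, max_fall

# Example: two synthetic 4x4 frames
dim = np.full((4, 4, 3), 100.0)        # uniform 100-nit frame
bright = dim.copy()
bright[0, 0] = [1000.0, 50.0, 50.0]    # one 1000-nit specular highlight
print(max_cll_and_fall([dim, bright])) # (1000.0, 156.25)
```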

MaxCLL and MaxFALL are vital for tone mapping—which, in turn, is critical if a consumer display can’t achieve the maximum luminance and/or color volume of the display used to master the content. If the consumer display knows MaxCLL and MaxFALL for a given program, and those values exceed the capabilities of the display, it applies tone mapping, which reduces high light levels to match the display’s capabilities.

Unfortunately, the exact tone-mapping process is not standardized; each manufacturer implements its own algorithms. In general, as the light level in the content approaches the display’s maximum output capability, the output is gradually “rolled off,” compressing content values above the display’s maximum into the range it can actually reproduce.

In this example, the peak brightness of the consumer display is 500 nits. If the content’s MaxCLL is 1000 nits, the display rolls off its output as the brightness in the content approaches 1000 nits according to the red curve. If the MaxCLL is 2000 nits, the display rolls off its output according to the blue curve. The overall result is dimmer than with content that has a MaxCLL of 1000 nits.
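The actual curves are proprietary, but a generic roll-off is easy to sketch. This toy tone mapper (my own simplification, not Samsung’s or any manufacturer’s algorithm) passes luminance through unchanged below a knee point, then compresses everything between the knee and MaxCLL into the display’s remaining headroom:

```python
def tone_map(nits, max_cll, display_peak=500.0, knee=0.75):
    """Toy roll-off: linear below the knee, compressive above it.

    `knee` is the fraction of display peak below which content passes
    through unchanged. Illustrative only, not a real product's curve.
    """
    knee_nits = knee * display_peak
    if nits <= knee_nits or max_cll <= display_peak:
        return min(nits, display_peak)
    # Map [knee_nits, max_cll] onto [knee_nits, display_peak]
    t = (nits - knee_nits) / (max_cll - knee_nits)
    return min(knee_nits + t * (display_peak - knee_nits), display_peak)

# The same 900-nit highlight lands differently depending on MaxCLL:
print(tone_map(900, max_cll=1000))  # 480.0 nits (the red-curve case)
print(tone_map(900, max_cll=2000))  # ~415 nits (the blue-curve case)
```

As in the figure, the higher the program’s MaxCLL, the more aggressively everything above the knee is compressed, so the same content value comes out dimmer.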

Because there is only one value each of MaxCLL and MaxFALL for an entire video file, they are called “static” metadata; that is, they don’t change over the course of the program. This forces a compromise in the tone-mapping algorithm, as illustrated below:

In this example of static metadata, the tone-mapping algorithm has been optimized to preserve low-level images, which causes high-level images to be blown out.

In this example, the tone-mapping algorithm has been optimized to preserve high-level detail, which causes low-level images to be too dim. According to Samsung, this is the typical choice made by display manufacturers.

HDR10+ implements dynamic metadata by characterizing the distribution of MaxRGB values (a pixel’s MaxRGB is the largest of its red, green, and blue components) at up to 15 percentile points within each scene. Basically, each pixel’s MaxRGB value is evaluated according to its relationship with the other MaxRGB values in the same scene. For example, if a pixel’s MaxRGB value is higher than half the values in the scene, it is said to be in the 50th percentile; if it is higher than 99% of the values in the scene, it is in the 99th percentile.
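Here’s a rough illustration of the idea (my own sketch of the concept, not the exact ST 2094-40 syntax), summarizing a scene’s per-pixel MaxRGB values at a handful of percentile points:

```python
import numpy as np

# Percentile points of the kind HDR10+ metadata can carry (up to 15);
# this particular set is illustrative, not the standard's exact list.
PERCENTILES = [1, 5, 10, 25, 50, 75, 90, 95, 99]

def scene_maxrgb_distribution(scene_frames):
    """Summarize a scene's brightness by percentiles of per-pixel MaxRGB.

    `scene_frames` is a list of HxWx3 linear-light RGB arrays in nits.
    """
    # MaxRGB of each pixel = the largest of its R, G, and B components
    maxrgb = np.concatenate(
        [frame.max(axis=2).ravel() for frame in scene_frames]
    )
    return {p: float(np.percentile(maxrgb, p)) for p in PERCENTILES}
```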

Another element of HDR10+ metadata is the OOTF (optical-optical transfer function). This function describes the end-to-end conversion from the light captured by the camera to the light emitted by the display, combining the camera-side and display-side transfer functions. The OOTF is specified in the mastering process, but the math is way too daunting to explain here.
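While the full treatment is indeed daunting, the core relationship is compact. Per BT.2100, the OOTF is simply the composition of the camera-side OETF and the display-side EOTF, mapping scene light E to displayed light F_D:

```latex
F_D = \mathrm{OOTF}(E) = \mathrm{EOTF}\big(\mathrm{OETF}(E)\big)
```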

The bottom line is that dynamic metadata allow dynamic tone mapping, in which different tone-mapping curves are applied to different scenes depending on how dark or bright they are. The result is more consistent, better-looking performance across content mastered at different brightness levels and across TVs with different peak-luminance capabilities.

Using dynamic tone mapping, the character of dark, mid-level, and bright scenes is preserved by using different tone-mapping curves on a display that cannot reach the peak brightness of the content.
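To show how the pieces fit together, here’s a scene-adaptive variant of the toy roll-off from earlier, driven by a scene peak taken from dynamic metadata (again my own simplification; the actual ST 2094-40 metadata carries richer curve parameters, such as Bezier curve anchors):

```python
import numpy as np

def tone_map_scene(frame, scene_peak, display_peak=500.0, knee=0.75):
    """Scene-adaptive version of the toy roll-off above.

    `scene_peak` comes from the scene's dynamic metadata (e.g., its
    99th-percentile MaxRGB) instead of the program-wide MaxCLL, so dark
    scenes pass through untouched while bright scenes are compressed
    only as much as *their* content requires.
    """
    if scene_peak <= display_peak:
        return np.minimum(frame, display_peak)  # dark scene: no compression
    knee_nits = knee * display_peak
    t = (frame - knee_nits) / (scene_peak - knee_nits)
    compressed = knee_nits + t * (display_peak - knee_nits)
    out = np.where(frame <= knee_nits, frame, compressed)
    return np.minimum(out, display_peak)  # clip stray pixels above scene_peak
```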

As mentioned earlier, HDR10+ is a royalty-free, open standard like HDR10. Even better, it’s backward compatible with HDR10; an HDR10+ stream can carry both static and dynamic metadata. However, because it includes so much more metadata, it requires HDMI 2.1 to be conveyed from one device to another.

On the other hand, it can be easily used by a TV’s internal apps. HDR10+ is implemented in all 2017 Samsung TVs, and Amazon Video provides streams encoded in HDR10+ to the Amazon app in these TVs. With 12 partners so far—mostly chip makers with some mastering- and encoding-tool makers—Samsung fully expects Hollywood studios, other streaming providers, and TV makers to support HDR10+.

In the demo area, Samsung was playing content in HDR10+ on one screen and the same content in HDR10 on the other, as seen in the photo at the top of this article. I didn’t see much difference between the two, but most of the content was fairly bright. I wish the content had a greater range of bright and dark scenes, which would have illustrated the advantages of HDR10+ more dramatically.