
HDR10+ at Samsung QLED/HDR10 Summit

#1 · (Edited by Moderator)
One of the most important presentations at the Samsung QLED/HDR10 Summit last week was given by Dr. Yeong-Taeg Kim, who works at the Samsung Research Center in Irvine, CA. The topic was HDR10+, which adds dynamic metadata to HDR10. Samsung has been at the forefront of HDR10+ development, but it's freely available to anyone who wishes to implement it, just like HDR10. In fact, HDR10+ is codified in SMPTE standard ST 2094-40, just as HDR10 is codified in ST 2086.

Dr. Kim started with a review of HDR10, which uses the PQ (Perceptual Quantizer) EOTF (electro-optical transfer function) as codified in ST 2084, 10-bit precision, BT.2020 color primaries (currently used as a "container" for DCI/P3 primaries), and static metadata as codified in SMPTE ST 2086. The metadata include the mastering display's maximum and minimum luminance, RGB color primaries, and white point.
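For reference, the PQ curve itself is fully specified in ST 2084. A minimal Python sketch of the EOTF and its inverse, using the constants from the standard, looks like this:

```python
# Sketch of the SMPTE ST 2084 PQ EOTF (normalized signal -> luminance in nits)
# and its inverse, using the constants defined in the standard.

M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(e: float) -> float:
    """Map a normalized PQ signal value (0..1) to absolute luminance in nits."""
    p = e ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def pq_inverse_eotf(nits: float) -> float:
    """Map absolute luminance in nits back to a normalized PQ signal value."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

# A signal value of 1.0 corresponds to the PQ ceiling of 10,000 nits.
```

Note that, unlike gamma curves, PQ maps code values to absolute luminance, which is why the metadata below matters so much.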

The metadata also include MaxCLL (Maximum Content Light Level) and MaxFALL (Maximum Frame-Average Light Level), which relate to the content, not the mastering display. MaxCLL is the light level of the brightest pixel in an entire program, and MaxFALL is the maximum average light level of the brightest frame in the program. An entire program—say, a movie—has one value for each in HDR10. Presumably, MaxCLL does not exceed the maximum luminance of the mastering display.
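These two values can be sketched in a few lines. Here each frame is assumed to be a flat list of per-pixel light levels in nits (in real content, a pixel's level is derived from its brightest color component):

```python
# Sketch: computing MaxCLL and MaxFALL for a program, assuming each frame
# is a flat list of per-pixel light levels in nits.

def max_cll(frames):
    """Brightest single pixel anywhere in the program."""
    return max(max(frame) for frame in frames)

def max_fall(frames):
    """Highest frame-average light level in the program."""
    return max(sum(frame) / len(frame) for frame in frames)

frames = [
    [100, 200, 300],   # average 200
    [50, 50, 1000],    # brightest pixel in the program, average ~366.7
    [400, 400, 400],   # highest average: 400
]
print(max_cll(frames))   # 1000
print(max_fall(frames))  # 400.0
```

Note that the brightest pixel and the brightest average frame need not come from the same frame, as in this toy example.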

MaxCLL and MaxFALL are vital for tone mapping—which, in turn, is critical if a consumer display can't achieve the maximum luminance and/or color volume of the display used to master the content. If the consumer display knows MaxCLL and MaxFALL for a given program, and those values exceed the capabilities of the display, it applies tone mapping, which reduces high light levels to match the display's capabilities.

Unfortunately, the exact tone-mapping process is not standardized; each manufacturer implements its own algorithms. In general, as the light level in the program approaches the display's maximum output capability, the brightness of the content is "rolled off" until values higher than the display's maximum capability are reduced to match that capability.
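As a toy illustration (not any manufacturer's actual algorithm), a roll-off can be sketched as a linear pass-through up to a knee, with everything above the knee compressed into the display's remaining headroom:

```python
# Toy roll-off tone-mapping curve: pass content through unchanged up to a
# "knee", then compress everything above it into the display's headroom.
# Real TVs use proprietary, more sophisticated curves.

def tone_map(nits: float, content_max: float, display_max: float,
             knee: float = 0.75) -> float:
    """Map a content light level to the display's range."""
    if content_max <= display_max:
        return min(nits, display_max)   # no compression needed
    start = knee * display_max          # below the knee: unchanged
    if nits <= start:
        return nits
    # Above the knee: compress [start, content_max] into [start, display_max].
    t = (nits - start) / (content_max - start)
    return start + t * (display_max - start)

# A 1000-nit highlight on a 500-nit display, with the knee at 375 nits:
print(tone_map(1000, content_max=1000, display_max=500))  # 500.0
print(tone_map(300, content_max=1000, display_max=500))   # 300 (unchanged)
```

Notice how a higher content_max steepens the compression above the knee, which is why content mastered at 2000 nits ends up dimmer overall than content mastered at 1000 nits on the same display.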

Image
In this example, the peak brightness of the consumer display is 500 nits. If the content's MaxCLL is 1000 nits, the display rolls off its output as the brightness in the content approaches 1000 nits according to the red curve. If the MaxCLL is 2000 nits, the display rolls off its output according to the blue curve. The overall result is dimmer than with content that has a MaxCLL of 1000 nits.

Because there is only one value each for MaxCLL and MaxFALL for an entire video file, they are called "static" metadata—that is, they don't change over the course of the program. This requires a compromise in the tone-mapping algorithm, as illustrated below:

Image
In this example of static metadata, the tone-mapping algorithm has been optimized to preserve low-level images, which causes high-level images to be blown out.

Image
In this example, the tone-mapping algorithm has been optimized to preserve high-level detail, which causes low-level images to be too dim. According to Samsung, this is the typical choice made by display manufacturers.

HDR10+ implements dynamic metadata by identifying the maximum red, green, and blue luminance—collectively called MaxRGB—in up to 15 different percentiles within each scene. Basically, the MaxRGB values of each pixel within a scene are evaluated according to their relationship with other MaxRGB values in the same scene. For example, if one pixel's MaxRGB values are higher than half the values in the scene, it is said to be in the 50th percentile; if a pixel's MaxRGB values are higher than 99% of the values in the scene, it is said to be in the 99th percentile.
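The percentile bookkeeping can be sketched as follows. The scene data, the nearest-rank method, and the chosen percentiles here are illustrative assumptions, not the exact ST 2094-40 procedure:

```python
# Sketch: per-scene MaxRGB percentile distribution, in the spirit of HDR10+
# dynamic metadata. For each pixel we take max(R, G, B); the scene's metadata
# then records the values at selected percentiles (up to 15 in ST 2094-40).

def maxrgb_percentiles(pixels, percentiles=(50, 90, 99)):
    """pixels: list of (r, g, b) light levels for one scene."""
    maxrgb = sorted(max(p) for p in pixels)
    result = {}
    for pct in percentiles:
        # Nearest-rank percentile: the value exceeding pct% of the scene.
        idx = min(int(len(maxrgb) * pct / 100), len(maxrgb) - 1)
        result[pct] = maxrgb[idx]
    return result

scene = [(100, 50, 20), (300, 200, 100), (80, 90, 40), (900, 500, 200)]
print(maxrgb_percentiles(scene))
```

The key point is that the display receives a compact summary of each scene's brightness distribution, rather than a single program-wide number.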

Another element of HDR10+ metadata is called the OOTF (optical-optical transfer function). This function describes the conversion of light to digital values in the camera and the conversion of digital values to light in the display. The OOTF is specified in the mastering process, but the math is way too daunting to explain here.

The bottom line is that dynamic metadata allow dynamic tone mapping, in which different tone-mapping curves are applied to different scenes depending on how dark or bright they are. The result is more consistent and better performance with content mastered at different brightness levels and TVs with different peak-luminance capabilities.

Image
Using dynamic tone mapping, the character of dark, mid-level, and bright scenes is preserved by using different tone-mapping curves on a display that cannot reach the peak brightness of the content.
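The idea can be sketched as follows: each scene is compressed only as much as its own peak requires, so dark and mid-level scenes pass through untouched. The linear scaling here is a toy stand-in for a real tone-mapping curve:

```python
# Sketch: per-scene dynamic tone mapping. Instead of one program-wide MaxCLL,
# each scene is compressed only as much as its own peak requires.
# (Toy linear scaling, not Samsung's actual algorithm.)

DISPLAY_MAX = 500.0  # assumed peak of the consumer display, in nits

def tone_map_scene(scene_pixels):
    scene_peak = max(scene_pixels)
    if scene_peak <= DISPLAY_MAX:
        return list(scene_pixels)        # dark or mid-level scene: no change
    # Bright scene: compress toward the display's peak.
    scale = DISPLAY_MAX / scene_peak
    return [p * scale for p in scene_pixels]

dark_scene = [10, 40, 120]
bright_scene = [100, 800, 2000]
print(tone_map_scene(dark_scene))    # unchanged: [10, 40, 120]
print(tone_map_scene(bright_scene))  # scaled so the peak lands at 500 nits
```

With static metadata, by contrast, the single curve chosen for the whole program would also dim the dark scene.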

As mentioned earlier, HDR10+ is a royalty-free, open standard like HDR10. Even better, it's backward compatible with HDR10; an HDR10+ stream can carry both static and dynamic metadata. However, because it includes so much more metadata, it requires HDMI 2.1 to carry it from one device to another.

On the other hand, it can be easily used by a TV's internal apps. HDR10+ is implemented in all 2017 Samsung TVs, and Amazon Video provides streams encoded in HDR10+ to the Amazon app in these TVs. With 12 partners so far—mostly chip makers with some mastering- and encoding-tool makers—Samsung fully expects Hollywood studios, other streaming providers, and TV makers to support HDR10+.

In the demo area, Samsung was playing content in HDR10+ on one screen and the same content in HDR10 on the other, as seen in the photo at the top of this article. I didn't see much difference between the two, but most of the content was fairly bright. I wish the content had a greater range of bright and dark scenes, which would have illustrated the advantages of HDR10+ more dramatically.
 


#2 ·
it requires HDMI 2.1 to convey from one device to another.

All I needed to read to know Samsung is full of ####........ Not sure about their stubborn refusal to support Dolby Vision in the first place.

Yes I understand that they can update TVs to work with the HDR10+ through streaming content but I don't give a (insert blank) about streaming content so it does nothing for me there.


NOW...... with ALL this said, of course there's one route Samsung can go to build up a good rapport and relationship with its customer base, which is to release another one of those Evolution Kits with HDMI 2.1 outputs.
 
#22 ·
They aren't adopting DV because of the licensing fees, I'm sure. I'm not going to make a qualitative statement on that, though. Do you know if it's even possible for a Samsung TV to update its HDMI ports by releasing a new "evolution kit" or "One Connect", which is what shipped with my KS9800 series and is the intermediate component that the HDMI input plugs into?
 
#3 ·
Question about High Nit Displays

Will dynamic metadata HDR10+ allow for more detail to be retained in bright scenes and more detail revealed in dark scenes or shadows on a high nit display than static metadata?
 
#5 ·
Will dynamic metadata HDR10+ allow for more detail to be retained in bright scenes and more detail revealed in dark scenes or shadows on a high nit display than static metadata?

Yes. (Just like DV does.)


Richard
 
#4 ·
Samsung is committed to developing HDR10+, which adds dynamic metadata to HDR10. This presentation at the QLED/HDR10 Summit was quite illuminating.

http://www.avsforum.com/hdr10-samsung-qledhdr10-summit/
Thanks for the report.

On the other hand, it can be easily used by a TV’s internal apps. HDR10+ is implemented in all 2017 Samsung TVs, and Amazon Video provides streams encoded in HDR10+ to the Amazon app in these TVs.
This is present tense. I'm assuming this is a typo? Amazon have announced that they will do it in the future, but there's been no announcement that they've launched any HDR10+ services yet.
 
#7 · (Edited)
This is so interesting because 1) this is all done in Dolby Vision right now without any special HDMI interfaces, and judging from the streaming and disc content that's also available right now, it's excellent; and 2) Sony and LG are already doing their own internal dynamic HDR processing in their 2017 sets, and at least in the Sony A1E, it works very well. The LG does a good job as well, but there are differing opinions on the results. Even if this becomes a standard, my "opinion" is that it's very doubtful that manufacturers like LG and Sony are going to implement it when they already have their own process, although not a standard, right now.
 
#8 ·
Screw these companies that want to keep changing specs every year, when they can't even get it right in the first place. I will sit out this time and invest my monies in more two-channel audio; at least that format sticks around for a while. Samsung must be in bed with the HDMI guys, because HDMI 2.0 and HDCP 2.2 is all we should have needed for many years to come.
 
#34 ·
It's an underhanded effort by them to reach into our wallets. The 15K in upgrades we performed on our cinema over the last 12 months is still new, and I haven't even had time to realize the benefits, but it's time to upgrade again and feed this obsession? I'm going to need to learn safecracking in order to afford it.
 
#9 ·
Samsung is committed to developing HDR10+, which adds dynamic metadata to HDR10. This presentation at the QLED/HDR10 Summit was quite illuminating.

http://www.avsforum.com/hdr10-samsung-qledhdr10-summit/
I spoke with a Samsung rep in Best Buy a few months ago (he worked for Samsung, not Best Buy) and was told the 2016 and 2017 Samsung TVs already have HDMI 2.1 and just need a firmware update when the spec rolls out. I wish there was a way to confirm this for sure.
 
#14 ·
I guess my main thing is I don't like to spread FUD, and *until* Samsung or any manufacturer explicitly states that their previous-gen TVs will be upgraded to the new spec, any hearsay from reps at Best Buy or even HDMI technical directors needs to be considered bunk!
 
#15 ·
Presumably, MaxCLL does not exceed the maximum luminance of the mastering display.
Hi Scott,

This is not true; for example, these 13 UltraHD movies below have a higher MaxCLL than the peak output of the mastering display used:

Image


You can use a 4000-nit monitor for color-grading a 10,000-nit movie, as long as you don't clip above 4000 nits and you watch the waveform monitor and histogram.

BTW, 49 HDR10 movies have a MaxCLL of 0 nits.
 
#18 ·
Haha that's hilarious. I remember hearing that MaxCLL hack from the interview with Mr (Dr?) Spears from SpectraCal a couple years back.

What I don't get is why all HDR10 displays don't just re-compute the metadata per frame. The PQ encoding values are absolute, not relative; they don't, say, stretch to fill the 64-940 code range depending on the mastering display. Those values are fixed and always refer to absolute luminance.

I can't for the life of me understand why TVs with dynamic LED control (per frame) for SDR content wouldn't use that exact same chip to simply ignore static metadata (except maybe the primary coordinates for the gamut, not the luminance range) and recompute the metadata internally every frame, for static HDR10 content or UHD Blu-rays with MaxCLL set to 0.

MaxCLL isn't the only value in the metadata stream, is it? It probably also has MinCLL, AvgCLL, and of course mastering primary coordinates in x,y (CIE). I see how the mastering gamut is necessary to map to the display's gamut, but not why it's absolutely necessary to refer to the luma metadata. It should be computable trivially and probably already is, for SDR content which doesn't have metadata but for which the per-frame dimming algorithms work already.
 
#20 ·
MaxCLL isn't the only value in the metadata stream, is it? It probably also has MinCLL, AvgCLL, and of course mastering primary coordinates in x,y (CIE).
For HDR10 there are only two values specified in addition to the SMPTE 2086 mastering display color volume. One is MaxCLL, the other is MaxFALL. If you calculate the average level for each frame in a stream, MaxFALL is the highest average level out of all the frames in the stream.
 
#19 ·
Unfortunately, it was pretty clear from the outset that MaxCLL wasn't going to get any consensus for how it should be calculated or used. When that happens the only thing you can do is ignore it because the information it conveys is unreliable.

There are many cases where MaxCLL exceeds the mastering display's peak luminance, as Ted has pointed out. There are a couple of ways this can happen. One is that MaxCLL is a very sensitive statistical measure (Samsung also mentioned this in their presentation), so even a single bright pixel (e.g., one created by ringing during an HD-to-UHD upscale) can produce a spike in the value. Another is that the workflow used by some studios to "future-proof" their content actually involves grading to values that exceed the mastering display and inserting a temporary tone mapping for viewing on the mastering display during the grading process, which never gets baked into the final image.
 
#28 ·
I disagree; dynamic metadata in HDR10+ or DV could easily be recomputed upstream, prior to broadcast, on a frame-by-frame basis, so HLG isn't really necessary. HLG is also a perceptually inefficient encoding compared to PQ: more bits, a lower peak (5000 nits), and less coding efficiency (more banding).
 
#29 ·
What is your proposed solution to the issue I described previously? Specifically, how do you avoid changing tone-mapping within a scene without knowing when scenes have changed and/or what is coming next?
 
#33 ·