
Registered · 2 Posts · Discussion Starter · #1
Hello everyone,
I'm new to this whole HDR color universe, so I've been reading up on HDR systems lately.
Even though a lot of the mist has cleared, I have a simple question that I cannot seem to find the answer to.
Please feel free to correct any statement, as it is quite possible that I have misunderstood some of the information.

With HDR10/HDR10+ systems, static/dynamic metadata is defined for the content.
The mastering display info is important, as color volume mapping occurs first on the mastering display.
Even assuming the Dolby Pulsar (DCI-P3 color space, 4,000 nits), it is still far from the limits of the standard, and the standard color space (BT.2020) covers roughly 60% of all visible colors.
So the color primaries, black level and peak luminance are important if you want to reproduce on the consumer display, as closely as possible, what was seen on the mastering display.
The metadata also includes the following static values for the content as a whole (defined in CTA-861.3, if I understand correctly):
- MaxCLL (Maximum Content Light Level)
- MaxFALL (Maximum Frame-Average Light Level)
And SMPTE ST 2094 (ST 2094-40 in the case of HDR10+) introduces this information on a per-frame/per-scene basis.
If a relative EOTF curve is used, the maximum usually means "whatever the display can show", so the light level information is vital for color reproduction.
However, the SMPTE ST 2084 EOTF is an absolute luminance curve, which means the receiver/display can compute the original light level of a frame from the decoded signal alone.
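Just to make the "absolute" part concrete, here is a minimal Python sketch of the ST 2084 EOTF as I understand it (constants straight from the spec, assuming a full-range 10-bit signal), showing that a decoded code value maps directly to nits with no display-specific information involved:

```python
# Minimal sketch of the SMPTE ST 2084 (PQ) EOTF. Constants are the ones
# published in the spec; the signal is assumed to be full-range 10-bit.
M1 = 2610 / 16384           # 0.1593017578125
M2 = 2523 / 4096 * 128      # 78.84375
C1 = 3424 / 4096            # 0.8359375
C2 = 2413 / 4096 * 32       # 18.8515625
C3 = 2392 / 4096 * 32       # 18.6875

def pq_to_nits(code_value: int, bit_depth: int = 10) -> float:
    """Map a PQ code value to absolute luminance in cd/m^2 (nits)."""
    e = code_value / (2 ** bit_depth - 1)      # normalized non-linear signal
    p = e ** (1 / M2)
    num = max(p - C1, 0.0)
    den = C2 - C3 * p
    return 10000.0 * (num / den) ** (1 / M1)   # PQ tops out at 10,000 nits

# pq_to_nits(0) == 0.0, pq_to_nits(1023) == 10000.0
```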
Then why is it necessary to include this information?
The only reasons I can think of are:
1. scene light level calculation is not possible without precomputation
-> e.g. real-time computation is too late for temporal control of luminance/chroma roll-off during display adaptation?
= you are always interested in the next scene's light level, not the current/past one
2. per-frame light level computation is too expensive
-> i.e. purely for optimization, to avoid computing it for every frame in the playback device / display
3. there is something that I do not understand in the whole pipeline

And I am concerned about the last one :)

If this is not the right place for this question, please move it.
 

Registered · 712 Posts
It's included largely because of reason #1. Real-time analysis can support at most a few frames of forward buffering, so there is no way to know for sure how high the values may spike beyond that small window. MaxFALL together with MaxCLL provide an upper bound for the entire runtime.
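Just as a toy illustration (a sketch, not any particular manufacturer's algorithm), having that upper bound available up front lets the display commit to a single roll-off for the whole title instead of re-adapting whenever a brighter-than-expected frame arrives:

```python
# Toy sketch only: with MaxCLL known in advance, one tone curve can be fixed
# for the whole title, and it never clips even on the brightest pixel.

def tone_map_nits(scene_nits: float, max_cll: float, display_peak: float,
                  knee_fraction: float = 0.75) -> float:
    """Map absolute scene luminance (nits) into the display's range."""
    knee = knee_fraction * display_peak
    if max_cll <= display_peak or scene_nits <= knee:
        # Content fits, or we're below the knee: pass through unchanged.
        return min(scene_nits, display_peak)
    # Compress [knee, MaxCLL] into [knee, display_peak]; a real device would
    # use a smoother shoulder, this is just an ease-out toward peak.
    x = min((scene_nits - knee) / (max_cll - knee), 1.0)
    return knee + (display_peak - knee) * (1.0 - (1.0 - x) ** 2)

# e.g. tone_map_nits(4000.0, max_cll=4000.0, display_peak=1000.0) -> 1000.0
```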

It is also useful information to have access to outside of the display. During distribution/transcoding, for example, there are points where you need to know what range the content utilizes. Any time a conversion between HLG and PQ might take place, it's necessary to know what luminance to use when going between an absolute and a relative luminance encoding (http://downloads.bbc.co.uk/rd/pubs/papers/HDR/BBC_HDRTV_PQ_HLG_Transcode_v2.pdf). Technically the SMPTE ST 2086 mastering display metadata should be enough to establish the display luminance for that conversion, but it's good to check MaxCLL to ensure the content does not exceed it. It's impractical to run the entire movie through an analysis pass every time you just want that little bit of information.

So this information is used for more than just tone mapping within the TV. I suppose that qualifies as reason #3 on your list, but very few people need to understand the whole pipeline to this level of specificity.
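For a sense of what that analysis pass involves, here is a rough sketch (assuming frames have already been decoded to linear light in nits, e.g. via the PQ EOTF) of computing MaxCLL/MaxFALL the hard way:

```python
import numpy as np

# Rough sketch of the analysis the static metadata saves you from re-running:
# scan every decoded frame for MaxCLL / MaxFALL. Frames are assumed to be
# arrays of linear light in nits with shape (height, width, 3); per
# CTA-861.3 both values are taken from the brightest color component of
# each pixel.

def content_light_levels(frames):
    max_cll = 0.0    # brightest single pixel anywhere in the program
    max_fall = 0.0   # highest frame-average light level
    for frame in frames:
        per_pixel = frame.max(axis=2)              # max(R, G, B) per pixel
        max_cll = max(max_cll, float(per_pixel.max()))
        max_fall = max(max_fall, float(per_pixel.mean()))
    return max_cll, max_fall
```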
 
