Hello everyone,
I'm new to this whole HDR color universe, so I've been reading about HDR systems lately.
Even though a lot of the mist has cleared up, I have a simple question that I cannot seem to find the answer to.
Please feel free to correct any of my statements, as it is very possible that I have misunderstood some of the information.
With HDR10/HDR10+ systems, static/dynamic metadata is defined for the content.
The mastering display info is important, as color volume mapping first occurs on the mastering display during grading.
Even a Dolby Pulsar (DCI-P3 color space, 4000 nits) is still far from the limits of the standard, and the standard container color space (BT.2020) includes roughly 60% of all visible colors.
So the color primaries, black level and peak luminance are important if you want to reproduce on the consumer display, as closely as possible, what was seen on the mastering display.
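For concreteness, this is how I picture the static mastering-display block (SMPTE ST 2086, I believe); the field names and the example values below are just my own labels for illustration, not taken from any real API:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MasteringDisplayMetadata:
    """Static mastering display color volume, roughly in the spirit of SMPTE ST 2086.

    Chromaticities are CIE 1931 (x, y) coordinates; luminances are in cd/m^2 (nits).
    """
    red_primary: Tuple[float, float]
    green_primary: Tuple[float, float]
    blue_primary: Tuple[float, float]
    white_point: Tuple[float, float]
    min_luminance: float   # black level of the mastering display
    max_luminance: float   # peak luminance of the mastering display

# Illustrative values only: a P3-D65 display graded up to 4000 nits
example = MasteringDisplayMetadata(
    red_primary=(0.680, 0.320),
    green_primary=(0.265, 0.690),
    blue_primary=(0.150, 0.060),
    white_point=(0.3127, 0.3290),
    min_luminance=0.0001,
    max_luminance=4000.0,
)
```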
The static metadata also includes the following values, defined once per piece of content (as far as I can tell these come from the content light level information in CTA-861.3 rather than from ST 2084 itself; a sketch of how I think they are derived follows the list):
- MaxCLL (Maximum Content Light Level)
- MaxFALL (Maximum Frame-Average Light Level)
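To check my understanding of these two values, here is a minimal sketch of how I think they would be computed during mastering, assuming the frames are already available as linear RGB in nits (the function and the toy frames are made up for illustration):

```python
import numpy as np

def content_light_levels(frames):
    """Compute (MaxCLL, MaxFALL) from an iterable of frames.

    Each frame is assumed to be an (H, W, 3) array of linear R, G, B values
    in cd/m^2 (nits); the light level of a pixel is taken as max(R, G, B).
    """
    max_cll = 0.0   # brightest single pixel anywhere in the content
    max_fall = 0.0  # highest frame-average light level
    for frame in frames:
        pixel_level = frame.max(axis=-1)              # max(R, G, B) per pixel
        max_cll = max(max_cll, float(pixel_level.max()))
        max_fall = max(max_fall, float(pixel_level.mean()))
    return max_cll, max_fall

# Toy example: two 2x2 frames
frames = [
    np.array([[[100, 50, 10], [20, 20, 20]],
              [[400, 300, 100], [5, 5, 5]]], dtype=float),
    np.array([[[1200, 900, 300], [10, 10, 10]],
              [[80, 60, 20], [15, 15, 15]]], dtype=float),
]
print(content_light_levels(frames))  # (1200.0, 326.25)
```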
And SMPTE ST 2094 (ST 2094-40 in the case of HDR10+) introduces this kind of information on a frame-by-frame/per-scene basis.
If a relative EOTF curve is used, the maximum usually means "whatever the display can show", so the light level information is vital for color reproduction.
However, the SMPTE ST 2084 (PQ) EOTF is an absolute luminance curve, which means the receiver/display can compute the original light level of every frame directly from the decoded signal.
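Just to spell out what I mean by "absolute": decoding a PQ code value already gives you luminance in nits, with no reference to any particular display. A minimal sketch of the ST 2084 EOTF, with the constants as I understand them from the spec:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(code_value):
    """Map a normalized PQ signal (0..1) to absolute luminance in cd/m^2 (nits)."""
    n = np.clip(code_value, 0.0, 1.0)
    p = n ** (1.0 / M2)
    return 10000.0 * (np.maximum(p - C1, 0.0) / (C2 - C3 * p)) ** (1.0 / M1)

print(pq_eotf(np.array([0.508, 0.752])))  # roughly [100, 1000] nits
```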
Then why is it necessary to include this information?
The only reasons I can think of are:
1. scene light level calculation is not possible without precomputation
-> e.g. real-time computation is too late for temporal control of luminance/chroma roll-off during display adaptation? (see the toy sketch after this list)
= you are always interested in the next scene's light level, not the current/past one
2. per-frame light level computation is too expensive
-> i.e. for optimization purposes - simply to avoid computing it per frame in the playback device / display
3. there is something that I do not understand in the whole pipeline
And I am concerned about the last one.
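To make reason 1 concrete, this toy sketch is what I have in mind: with MaxCLL in the metadata, a 1000-nit display can choose its roll-off knee before playback even starts, whereas a player measuring the signal itself only learns a scene's peak after it has already displayed it (the knee function and all numbers are invented purely for illustration):

```python
def pick_knee(expected_peak_nits, display_peak_nits=1000.0):
    """Toy roll-off policy: the brighter the content relative to the display,
    the earlier the highlight compression has to start."""
    if expected_peak_nits <= display_peak_nits:
        return display_peak_nits                       # no compression needed
    return display_peak_nits ** 2 / expected_peak_nits

# With metadata: MaxCLL (say 4000 nits) is known up front, so the knee is fixed per title.
print("knee from metadata:", pick_knee(4000.0))        # 250.0 nits

# Without metadata: the player only knows the peak it has seen so far, so a hard cut
# to a very bright scene is tone-mapped with a knee tuned for the darker scenes before it.
running_peak = 1.0
for scene_peak in [200.0, 350.0, 3800.0]:
    knee_in_use = pick_knee(running_peak)              # chosen before this scene is measured
    print(f"scene peak {scene_peak:6.0f} nits -> knee in use {knee_in_use:6.0f} nits")
    running_peak = max(running_peak, scene_peak)
```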
If this is not the right place for this question, please move it.