Originally Posted by spacediver
More or less - perhaps instead of "gamma being brightened", it's more accurate to say "the rate of luminance growth is increased" (relative to the rate it grows in a 2.4 function).
As for the point gamma values, I'm actually not super comfortable with the idea of point gamma estimates. Gamma to me is a value that describes the function as a whole, in particular a function of the type V^γ, where V is the video input level, and γ is the gamma.
The important thing in video when it comes to luminance functions is the resulting brightness relationships between video input levels, and I'm not sure how useful a series of point gamma estimates are for illustrating this.
I come from more of an image creation/processing background where, generally speaking*, lower gamma results in a subjectively brighter image, and higher gamma = a subjectively darker or more contrasty image, as illustrated here. (*Some software apps will invert that relationship though, by treating gamma as a 1/γ quantity instead.) So thinking about the Rec. 1886 EOTF in terms of how it alters the effective display gamma at different stimulus levels actually helps me to better visualize the transfer function's "distortive" effects on the imagery being displayed.
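To make that brighter/darker relationship concrete, here's a quick sketch in plain Python using a simple power law on a mid-grey stimulus (the gamma values are just picked for illustration):

```python
# Effect of a simple power-law gamma, L = V ** gamma, on a mid-grey
# stimulus (V = 0.5). Lower gamma lifts the midtones (subjectively
# brighter); higher gamma pushes them down (darker / more contrasty).
V = 0.5
for gamma in (2.0, 2.2, 2.4, 2.6):
    print(f"gamma {gamma}: relative luminance {V ** gamma:.4f}")
```

(As noted above, some image-editing apps label the 1/γ value as "gamma", which flips the apparent direction of this relationship.)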
I can see how that might be confusing if you're more accustomed to thinking in terms of measured luminance, or a simple power law which gets applied to all stimulus levels (like in the good ole NTSC days). But the net effect of the transfer functions in standards like Rec. 1886, Rec. 709, sRGB, etc. is to vary the effective encoding or decoding gamma based on the stimulus. Since gamma is not constant in these functions, they cannot be accurately represented by a simple power law like the one you described above (V^γ).
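If it helps to see that drift numerically, here's a rough Python sketch of the Rec. 1886 EOTF (L = a·max(V + b, 0)^2.4, with a and b derived from the white and black luminances), using the Lw = 100 and Lb = 1.0 cd/m² values from the worked example further down. The function names are mine, not anything official:

```python
import math

def bt1886_eotf(V, Lw=100.0, Lb=1.0):
    """Rec. 1886 EOTF: L = a * max(V + b, 0) ** 2.4 (sketch)."""
    g = 2.4
    n = Lw ** (1 / g) - Lb ** (1 / g)
    a = n ** g                      # scaling so that V = 1 maps to Lw
    b = Lb ** (1 / g) / n           # black-level lift
    return a * max(V + b, 0.0) ** g

def effective_gamma(V, Lw=100.0, Lb=1.0):
    # Normalize luminance to 0..1 before taking the ratio of logs.
    L = bt1886_eotf(V, Lw, Lb)
    return math.log((L - Lb) / (Lw - Lb)) / math.log(V)

for V in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"V = {V}: effective gamma = {effective_gamma(V):.3f}")
```

With a 1.0 cd/m² black level, the effective gamma climbs from roughly 1.69 at V = 0.1 to about 2.05 at V = 0.9; it only approaches a constant 2.4 as Lb goes to zero.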
I don't wanna put words in anyone's mouth, but I think that may be the distinction you're really trying to draw attention to in your comments above, namely the difference between an OETF/EOTF approach and encoding/decoding gamma represented as a simple power law. That's something that Scott and Joel sort of glossed over in the interview.
One important thing to remember when computing the effective display/decoding gamma for a given stimulus is that both the stimulus or "input" value and the luminance or "output" value need to be normalized to the range 0...1.
In the first example I gave above, the stimulus is already normalized to V=0.50. To calculate the effective gamma at that stimulus though, L or luminance first needs to be normalized, using (L-Lb)/(Lw-Lb)...
(26.3197-1.0) / (100-1.0) = 0.2558
Then the effective gamma can be computed using the natural log of the normalized luminance divided by the natural log of the normalized stimulus V...
ln 0.2558 / ln 0.50 = 1.967
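In Python terms, that's just the two steps above restated (nothing new here, same numbers):

```python
import math

# Worked example: normalize the luminance, then take the log ratio.
L, Lw, Lb = 26.3197, 100.0, 1.0   # measured luminance, white, black (cd/m^2)
V = 0.50                          # stimulus, already normalized

norm_L = (L - Lb) / (Lw - Lb)     # (26.3197 - 1.0) / (100 - 1.0) = 0.2558
gamma = math.log(norm_L) / math.log(V)
print(round(norm_L, 4), round(gamma, 3))  # 0.2558 1.967
```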
If you don't normalize both the "input" and "output" values, then all you're doing is applying a power law to your absolute or relative luminance values vs. stimulus, which may be interesting to look at on a graph, but it's not "gamma" imho.