
06-10-2014, 05:26 PM - Advanced Member (Toronto; joined May 2013; 993 posts)
Quote:
Originally Posted by TVOD
I didn't add any compensation for the lowlight linear section of the encoded curve, but then again, as this is trying to emulate a CRT, that may be correct. This adds a toe to the overall curve which, if we wanted to avoid it, would require adding a linear section to the lowlights of the curve. If the display device used for grading did not have a linear section, then compensation might be built into the video by the colorist.
By encoded curve, are you talking about the camera gamma?
If so, then yes, compensation would be built in by the colorist, perhaps through image rendering protocols or something. From what I understand, things are display referred these days - encoding gamma is all over the place, or linear in the case of RAW, I believe - so I think the idea is just to have a standard video gamma function that attempts to be perceptually uniform. Let the studios and filmmakers worry about the transfer from capture to video rendering.
06-10-2014, 05:34 PM - AVS Special Member (joined Jun 2003; 4,881 posts)
Quote:
Originally Posted by spacediver
If so, then yes, compensation would be built in by the colorist, perhaps through image rendering protocols or something. From what I understand, things are display referred these days - encoding gamma is all over the place - so I think the idea is just to have a standard video gamma function that attempts to be perceptually uniform. Let the studios and film makers worry about the transfer from capture to video rendering
Yes, for lack of a better term. Perhaps "source gamma" would be better.
On live cameras I think the gamma is typically set and left alone through a show. On the other hand, for color correction gamma is considered an active control. Turn it 'til it looks purty.
06-10-2014, 05:42 PM - Advanced Member (Toronto; joined May 2013; 993 posts)
Quote:
and hope that a true artist is behind the controls
As for your equations, when I have time, I'm gonna spend some time going through your math with matlab so I can understand it better. Also meant to ask: by gain control, do you mean adjusting the number that the function is multiplied by (if gamma was 1.0, would gain control be the same as "slope control")?
06-10-2014, 06:06 PM - AVS Special Member (joined Jun 2003; 4,881 posts)
Quote:
Originally Posted by spacediver
and hope that a true artist is behind the controls
As for your equations, when I have time, I'm gonna spend some time going through your math with matlab so I can understand it better. Also meant to ask: by gain control, do you mean adjusting the number that the function is multiplied by (if gamma was 1.0, would gain control be the same as "slope control")?
Yes. The multiply function, which is essentially reducing the gain if one thinks in hardware terms, would change the slope of the lines so they start at the offset and go to 1. Another way to say it is that 1 remains normalized. In the pdf, they mention no values go below 0 with L = a(max[(V +b),0])^2.4. It clips at 0.
06-12-2014, 07:53 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
This is a toughie, so I admire Scott and Joel for taking a swing at it. I've tried to explore this subject from as many different angles and perspectives as a layperson can, and it still gives me conniptions at times. I'm still a little foggy on exactly how Rec. 1886 is designed to work, for example, and I wonder if one of the more mathematically-minded individuals here could walk us through a simple example of how that works.
To make it easy, let's say I have a TV that's 100 cd/m^2 (or ~29 fL) at 100% stimulus white, and 1.0 cd/m^2 (~0.29 fL) at 0% stimulus black. How would I compute the luminance in cd/m^2 for a 50% stimulus gray on the TV using the reference EOTF equations in Annex 1 of Rec. 1886?
If I compute variables a and b as prescribed there, then I think I get the following values to plug into the equation L = a(max[(V + b),0])^γ...
Lw =100 cd/m^2
Lb=1.0 cd/m^2
V=0.50 (for 50% stimulus)
γ=2.4
a = 68.31979
b = 0.17203
From there, I'm not quite sure what to do with the "max" and "0" components. But if I throw those out, and just solve: L = a(V + b)^γ, then I get...
L = 26.31966 cd/m^2
Does that seem about right?
ADU
Last edited by ADU; 06-12-2014 at 08:05 PM.
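The a and b values above follow from the formulas in Annex 1 of Rec. 1886, a = (Lw^(1/γ) - Lb^(1/γ))^γ and b = Lb^(1/γ) / (Lw^(1/γ) - Lb^(1/γ)). A minimal Python sketch of that derivation (variable names are mine, not from the thread):

```python
# BT.1886 Annex 1: derive the a (gain) and b (black lift) constants
# from the measured white and black luminance of the display.
gamma = 2.4
Lw = 100.0   # luminance at 100% stimulus, cd/m^2
Lb = 1.0     # luminance at 0% stimulus, cd/m^2

# Intermediate terms: the gamma-th roots of white and black luminance.
w = Lw ** (1.0 / gamma)
k = Lb ** (1.0 / gamma)

a = (w - k) ** gamma   # user gain
b = k / (w - k)        # user black-level lift

print(round(a, 4))   # ~68.3198
print(round(b, 5))   # ~0.17203
```

Running this reproduces the a and b values in the post above (to the precision shown).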
06-12-2014, 08:02 PM - Advanced Member (Toronto; joined May 2013; 993 posts)
yep, you've calculated your a and b correctly.
So plugging in those values into the equation, you get:
L = a(V+b)^γ = 68.32* ((0.5+0.172)^2.4)
= 68.32 * (0.672^2.4)
= 68.32 * 0.385
= 26.31 cd/m^2
Just saw your edit - yes, the max function basically says to evaluate the V+b part of the function, compare it to 0, and choose the larger value. Then proceed with the rest of the function. It's there presumably to prevent negative luminance targets that may arise with a negative inputted black level. I suppose this might arise in an automated process where an instrument reads a very dark display and instrument noise results in a negative reading for the black level.
Last edited by spacediver; 06-12-2014 at 08:29 PM.
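The arithmetic spacediver walks through above, including the max[...] clamp, can be checked with a short Python sketch (the function name and default arguments are illustrative; a and b are the values derived earlier in the thread):

```python
def bt1886_eotf(V, a=68.3198, b=0.17203, gamma=2.4):
    """BT.1886 reference EOTF: L = a * max(V + b, 0) ** gamma.

    The max() guards against a negative (V + b) term -- e.g. from a
    noisy, slightly negative black-level reading -- which would
    otherwise produce an invalid result when exponentiated.
    """
    return a * max(V + b, 0.0) ** gamma

print(round(bt1886_eotf(0.50), 2))   # 26.32 cd/m^2, matching the hand calculation
print(bt1886_eotf(-0.2))             # clamped to 0.0 rather than going negative
```

Note how a sufficiently negative input simply clips to zero luminance, exactly as described above.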
06-12-2014, 08:11 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
Interesting. So where do the "max" and "0" values in the equation come into play? (It's been a long time since I was in algebra class.)
ADU
06-12-2014, 08:20 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
Quote:
Originally Posted by spacediver
just saw your edit - yes, the max function basically says to evaluate the function as you have done, and then look at the output (in this case, 26.31). Then take the max value of 26.31 and 0, and choose that.
the max function is presumably to prevent negative luminance targets that may arise with a negative inputted black level. I suppose this might arise in an automated process where an instrument reads a very dark display and instrument noise results in a negative reading for the black level. shrug
ADU
06-12-2014, 08:30 PM - Advanced Member (Toronto; joined May 2013; 993 posts)
no prob
I just fixed my explanation btw - the part you quoted from me contains an error.
06-12-2014, 09:37 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
Quote:
Originally Posted by spacediver
no prob
I just fixed my explanation btw - the part you quoted from me contains an error.
If the above is correct, then I get the following luminance values for 25%, 50% and 75% stimulus on my hypothetical 1.0 - 100 cd/m^2 display...
L=8.6173 cd/m^2 for 25% stimulus
L=26.3197 cd/m^2 for 50% stimulus
L=56.2258 cd/m^2 for 75% stimulus
...which means the effective gamma of the display is approximately 1.850 at 25% stimulus, 1.967 at 50% stimulus, and 2.029 at 75% stimulus.
All of those effective gamma values are lower than the 2.40 exponent in the Rec. 1886 equation, which means the gamma is being brightened, more so in the shadow detail, to compensate for the less-than-ideal (for dark-room viewing) black level of 1.0 cd/m^2 on the display.
ADU
Last edited by ADU; 06-12-2014 at 09:39 PM.
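ADU's effective-gamma figures can be reproduced by normalizing luminance with (L - Lb)/(Lw - Lb) and taking the log ratio. A minimal sketch (helper names are mine, not from the thread):

```python
import math

def bt1886_L(V, Lw=100.0, Lb=1.0, gamma=2.4):
    # BT.1886 reference EOTF, with a and b derived from Lw and Lb.
    w, k = Lw ** (1 / gamma), Lb ** (1 / gamma)
    a, b = (w - k) ** gamma, k / (w - k)
    return a * max(V + b, 0.0) ** gamma

def effective_gamma(V, Lw=100.0, Lb=1.0):
    # Normalize luminance to 0..1, then take the log-log ratio
    # against the (already normalized) stimulus V.
    L = bt1886_L(V, Lw, Lb)
    L_norm = (L - Lb) / (Lw - Lb)
    return math.log(L_norm) / math.log(V)

for V in (0.25, 0.50, 0.75):
    print(V, round(effective_gamma(V), 3))
# 0.25 -> ~1.850, 0.50 -> ~1.967, 0.75 -> ~2.029
```

These match the three values quoted in the post above.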
06-12-2014, 10:19 PM - Advanced Member (Toronto; joined May 2013; 993 posts)
Quote:
Originally Posted by ADU
All of those effective gamma values are lower than the 2.40 exponent in the Rec. 1886 equation, which means the gamma is being brightened, moreso in the shadow detail, to compensate for the less than ideal (for dark room viewing) black level of 1.0 cd/m^2 on the display.
More or less - perhaps instead of "gamma being brightened", it's more accurate to say "the rate of luminance growth is increased" (relative to the rate it grows in a 2.4 function).
As for the point gamma values, I'm actually not super comfortable with the idea of point gamma estimates. Gamma to me is a value that describes the function as a whole, in particular a function of the type V^γ, where V is the video input level, and γ is the gamma.
The important thing in video when it comes to luminance functions is the resulting brightness relationships between video input levels, and I'm not sure how useful a series of point gamma estimates are for illustrating this.
06-13-2014, 10:40 AM - AVS Special Member (joined Jun 2003; 4,881 posts)
Quote:
Originally Posted by ADU
All of those effective gamma values are lower than the 2.40 exponent in the Rec. 1886 equation, which means the gamma is being brightened, moreso in the shadow detail, to compensate for the less than ideal (for dark room viewing) black level of 1.0 cd/m^2 on the display.
Which is to ultimately compensate for the nonlinear visual sensitivity, which is similar to a 1/2.4 gamma curve.
06-13-2014, 09:59 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
Quote:
Originally Posted by spacediver
More or less - perhaps instead of "gamma being brightened", it's more accurate to say "the rate of luminance growth is increased" (relative to the rate it grows in a 2.4 function).
As for the point gamma values, I'm actually not super comfortable with the idea of point gamma estimates. Gamma to me is a value that describes the function as a whole, in particular a function of the type V^γ, where V is the video input level, and γ is the gamma.
The important thing in video when it comes to luminance functions is the resulting brightness relationships between video input levels, and I'm not sure how useful a series of point gamma estimates are for illustrating this.
I come from more of an image creation/processing background where, generally speaking*, lower gamma results in a subjectively brighter image, and higher gamma results in a subjectively darker or more contrasty image, as illustrated here. (*Some software apps will invert that relationship though, by treating gamma as a 1/γ quantity instead.) So thinking about the Rec. 1886 EOTF in terms of how it alters the effective display gamma at different stimulus levels actually helps me to better visualize the transfer function's "distortive" effects on the imagery being displayed.
I can see how that might be confusing if you're more accustomed to thinking in terms of measured luminance, or a simple power law which gets applied to all stimulus levels (like in the good ole NTSC days). But the net effect of the transfer functions in standards like Rec. 1886, Rec. 709, sRGB, etc. is to vary the effective encoding or decoding gamma based on the stimulus. Since gamma is not constant in these functions, they cannot be accurately represented by a simple power law like the one you described above (V^γ).
I don't wanna put words in anyone's mouth, but I think that may be the distinction you're really trying to draw attention to in your comments above, namely the difference between an OETF/EOTF approach vs. encoding/decoding gamma represented as a simple power law. That's something that Scott and Joel sort of glossed over in the interview.
One important thing to remember when computing the effective display/decoding gamma for a given stimulus is that both the stimulus or "input" value and the luminance or "output" value need to be normalized to the range 0...1.
In the first example I gave above, the stimulus is already normalized to V = 0.50. To calculate the effective gamma at that stimulus though, L or luminance first needs to be normalized, using (L - Lb)/(Lw - Lb)...
(26.3197 - 1.0) / (100 - 1.0) = 0.2558
Then the effective gamma can be computed using the natural log of the normalized luminance divided by the natural log of the normalized stimulus V...
ln 0.2558 / ln 0.50 = 1.967
If you don't normalize both the "input" and "output" values, then all you're doing is applying a power law to your absolute or relative luminance values vs. stimulus, which may be interesting to look at on a graph... but it's not "gamma" imho.
ADU
Last edited by ADU; 06-14-2014 at 01:44 PM.
06-14-2014, 12:06 AM - Advanced Member (Toronto; joined May 2013; 993 posts)
Quote:
Originally Posted by ADU
So thinking about the Rec. 1886 EOTF in terms of how it alters the effective display gamma at different stimulus levels actually helps me to better visualize the transfer function's "distortive" effects on the imagery being displayed.
Quote:
Originally Posted by ADU
Then the effective gamma can be computed using the natural log of the normalized luminance divided by the natural log of normalized stimulus V...
ln 0.2558 / ln 0.50 = 1.9669
If you don't normalize both the "input" and "output" values then all you're doing is applying a power law to your stimulus vs. absolute or relative luminance values, which may be interesting to look at on a graph... but it's not "gamma" imho.
yep, if I understand you correctly, this distinction is shown below - the red curves show the point gamma estimates using the normalized approach. (these were calculated a while back using the same log ratios you described, although I don't think I used natural log).
Last edited by spacediver; 06-14-2014 at 12:15 AM.
06-14-2014, 11:44 AM - AVS Special Member (joined Jun 2003; 4,881 posts)
Renormalizing the black point will show the effective resulting gamma, but the actual offset process can use a constant exponential value for the gamma correction. If one is viewing in a brighter environment and raises the brightness to compensate, vision does not renormalize the black level to alter its sensitivity curve. The reason for all this is that the black level is raised to a level where vision is less sensitive to changes in light.
In the reference EOTF case (constant 2.4), one could alter the gamma correction exponent to a lower number (less contrast) based on the increase in offset applied after the gamma correction, or one could apply the offset before a constant gamma correction exponent to do the same thing. Both require adjustment of gain to normalize the 1 point by multiplying by (1 - offset).
While it's interesting to show the curves and calculate the effective gamma by renormalizing the black level with an offset added, as vision doesn't do that I don't know how useful it is. It does make for an interesting discussion.
(Cue the crickets)
Last edited by TVOD; 06-14-2014 at 03:25 PM.
06-15-2014, 09:42 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
Quote:
Originally Posted by spacediver
yep, if I understand you correctly, this distinction is shown below - the red curves show the point gamma estimates using the normalized approach. (these were calculated a while back using the same log ratios you described, although I don't think I used natural log).
The blue curve on the first graph looks like a plot of a simple 2.4 power law. If so, then that would represent the effective gamma of the Rec. 1886 EOTF when Lb=0 (iow, if the display emitted no light at 0% stimulus).
I don't have enough info to comment on the two "point" graphs. Imo though, there is no such thing as "relative" and "absolute" gamma. And the term "compressed" is something I associate with gamma correction rather than black offsets.
If I wanted to show the effective gamma of the Rec. 1886 EOTF on a luminance versus stimulus plot, then I would normalize both axes on the graph to the range 0... 1.
The Rec. 1886 PDF explains how the 10-bit video codes used for V (stimulus) are normalized. 8-bit consumer video codes would be normalized using (D - 16)/(235 - 16), or (D - 16)/219. And the resulting values would be plotted along the horizontal x-axis.
The luminance values in cd/m^2 would be similarly normalized using (L - Lb)/(Lw - Lb), and plotted along the vertical y-axis.
To plot the effective gamma vs. stimulus, I would use the natural log of (L - Lb)/(Lw - Lb) and divide that by the natural log of (D - 16)/219 to compute the gamma values for the vertical y-axis.
Others, including some luminaries in the video business, may disagree with the above. But that's the way I personally would calculate the values.
ADU
Last edited by ADU; 06-15-2014 at 09:50 PM.
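The normalization steps ADU describes can be sketched as follows (helper names are hypothetical, not from any calibration tool; the 16-235 code range is from the post above):

```python
import math

def normalize_8bit(D):
    # Map 8-bit consumer video codes (black = 16, white = 235)
    # to a 0..1 stimulus value: (D - 16)/219.
    return (D - 16) / 219.0

def point_gamma(V_norm, L, Lw, Lb):
    # Effective gamma at one point: ln(normalized luminance)
    # divided by ln(normalized stimulus), per the post above.
    return math.log((L - Lb) / (Lw - Lb)) / math.log(V_norm)

print(round(normalize_8bit(126), 4))                      # 0.5023 (just above 50% stimulus)
print(round(point_gamma(0.50, 26.3197, 100.0, 1.0), 3))   # ~1.967, as in the earlier example
```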
06-15-2014, 10:27 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
Quote:
Originally Posted by TVOD
Which is to ultimately compensate for the nonlinear visual sensitivity which is similar to a 1/2.4 gamma curve.
In an "average surround", perception of brightness is generally somewhere between a cube root and a square root. In a dim or dark room it probably averages closer to a cube root, or 1/3.
Also, the perceptual benefits of nonlinear decoding aren't limited solely to displays calibrated to the Rec. 1886 spec. All televisions in use today (and in the past) have roughly the same nonlinearity, which is generally somewhere around the geometric mean of a cube and a square.
ADU
06-16-2014, 11:30 AM - AVS Special Member (joined Jun 2003; 4,881 posts)
Key word is "similar". But the reason for increasing the contrast of the dark areas (lowering the effective gamma value) with an offset added is due to vision's nonlinear sensitivity. If vision was linear, then adding an offset after gamma correction without a change in effective gamma would have been appropriate.
06-16-2014, 12:14 PM - Advanced Member (Toronto; joined May 2013; 993 posts)
if vision was linear, the ideal gamma would be 1!
06-16-2014, 08:37 PM - AVS Special Member (joined Jun 2003; 4,881 posts)
Quite true. However, we still may have had gamma, as the earliest reason for it was to compensate for CRTs. One can only speculate how digital would have evolved from analog standards if vision were linear, especially with early composite digital.
11-01-2014, 02:58 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
Nice article by Chris H. on this subject.
I tend to agree with VT's comments btw. It's worth noting that Rec. 1886 was not really intended for home use.
Projectors may work differently, but if you're using a direct-view flat panel in a room with dim or moderate lighting, and the blacks actually look "black" and the whites look "white", then I don't see much reason to deviate significantly from a flat 2.4.
ADU
Last edited by ADU; 11-01-2014 at 03:56 PM.
05-20-2015, 11:41 AM - Senior Member (joined Oct 2012; 322 posts)
Quote:
Originally Posted by ADU
Then the effective gamma can be computed using the natural log of the normalized luminance divided by the natural log of normalized stimulus V...
ln 0.2558 / ln 0.50 = 1.967
That measure of effective (or "point") gamma is different to the log-log plot gradient, which is sometimes used. May I ask why you have chosen to calculate it your way rather than the other way?
I don't think you are alone in doing that, but I am wondering what the underlying rationale is.
Thank you very much,
05-20-2015, 09:52 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
Quote:
Originally Posted by fluxo
That measure of effective (or "point") gamma is different to the log-log plot gradient, which is sometimes used. May I ask why you have chosen to calculate it your way rather than the other way?
I don't think you are alone in doing that, but I am wondering what the underlying rationale is.
Thank you very much,
Natural log of (normalized) output divided by natural log of (normalized) input is the only way I know to compute gamma (point or otherwise), as it's defined here...
http://en.wikipedia.org/wiki/Gamma_correction
That should give you the slope on a log-log plot for each point, if the gamma is not constant. If you think there's another way to calculate it, you'll have to explain it to me.
ADU
05-20-2015, 09:56 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
IMO, the blue curve on this sRGB plot at the above Wikipedia Gamma link is wrong btw, because it represents the "point gamma" (apparently) as constant below the break point.
I'm not sure how the above plot was computed, but based on my math, the sRGB decoding transfer function has a gamma of about 1.8 at the break point (which is 4.045% stimulus or .0031 relative luminance). The gamma approaches 1.0 as you get closer to a 0% stimulus, as shown on this graph posted by Joel B. of SpectraCal...
ADU
Last edited by ADU; 05-20-2015 at 10:00 PM.
05-21-2015, 10:45 AM - Senior Member (joined Oct 2012; 322 posts)
Quote:
Originally Posted by ADU
Natural log of (normalized) output divided by natural log of (normalized) input is the only way I know to compute gamma (point or otherwise), as it's defined here...
http://en.wikipedia.org/wiki/Gamma_correction
That should give you the slope on a log-log plot for each point, if the gamma is not constant. If you think there's another way to calculate it, you'll have to explain it to me.
I'll use P(f,x) to denote a point gamma function P with arguments f and x. Think of it as meaning "the point gamma of function f at point x". And let's use f(x) to denote an arbitrary function that relates luminance to digital video level. f(x) could be the Rec. 709 OETF, or its inverse, or the BT.1886 function, etc. In short, any function we would like to find the point gamma of at various points along its curve. The type of point gamma you have used can be defined as follows:
1. P(f,x) = Log(f(x))/Log(x)
Note: you can interpret Log(x) as the natural logarithm of x if you so wish, but there is no particular assumed base because a change of base makes no difference to the value of the formula above.
Another definition for P(f,x) we could use is, in written English, "the gradient of the log-log plot of f(x) at x". If you look at some posted graphs, that is clearly the implied definition. I've derived the following formula for that type of point gamma:
2. P(f,x) = x f'(x)/f(x)
Note: that yields the gradient of the log-log plot of f(x) at the point (Log(x),Log(f(x))) in that plot.
If you would like the derivation, I can supply that on request, but I'll omit it for the sake of brevity now. Now we can ask whether the two definitions for P(f,x) are equivalent. That is, can we choose either one and get the same result? In general the answer is "no". I will come back to that later. First let's examine a case where the two definitions do give the same result. Let f(x) be a simple power law "gamma function":
f(x) = x^k
k is an arbitrary constant, but it could be 2.4. And I've used scare quotes above, because there is a gamma function Γ used in mathematics that is completely different to what we are discussing here. I am, therefore, not really keen on the name "gamma function", but you know what I'm referring to. So let's calculate P(f,x) using the two definitions above.
1. P(f,x) = Log(f(x))/Log(x) = Log(x^k)/Log(x) = k Log(x)/Log(x) = k
2. P(f,x) = x f'(x)/f(x) = x k x^(k-1)/x^k = k x^k/x^k = k
So, for that particular gamma function f(x), the two definitions yield the same result, which is the exponent k in x^k. Now, let's consider a different gamma function f(x):
f(x) = k
Again, k is an arbitrary constant, but we could suppose it is any number in the interval from 0 to 1. The function f(x) is a bit silly and I could have chosen more realistic functions, but let's not complicate matters unnecessarily. Again, we calculate P(f,x) using the two definitions:
1. P(f,x) = Log(f(x))/Log(x) = Log(k)/Log(x)
2. P(f,x) = x f'(x)/f(x) = x 0/k = 0
Two different results! It should be clear now that the two formulae for P(f,x) do not give the same result for all functions f(x). Where they agree, f(x) is a solution of this equation:
3. f'(x) = f(x) Log(f(x))/(x Log(x))
I am not entirely sure, but I think the only solutions f(x) of equation 3 are of the form x^k. But, of course, we would like to use point gamma to find the effective gamma for other functions of a different form to that - that is largely the point of point gamma. And if - as appears to be the case - the two definitions for P(f,x) give different results, then we need to know which to choose.
Quote:
Originally Posted by ADU
IMO, the blue curve on this sRGB plot at the above Wikipedia Gamma link is wrong btw, because it represents the "point gamma" (apparently) as constant below the break point.
I'm not sure how the above plot was computed, but based on my math, the sRGB decoding transfer function has a gamma of about 1.8 at the break point (which is 4.045% stimulus or .0031 relative luminance). The gamma approaches 1.0 as you get closer to a 0% stimulus, as shown on this graph posted by Joel B. of SpectraCal...
The constant gamma in the plot is a consequence of the lower luminance part of the Rec. 709 OETF being linear. If we let f(x) be the inverse of that part of the function then:
f(x) = x/4.5
Applying definition 2 for P(f,x), the log-log plot gradient is as follows:
2. P(f,x) = x f'(x)/f(x) = (x/4.5)/(x/4.5) = 1
You are not getting that because your definition of point gamma is not the same as the log-log plot gradient definition.
Cheers.
Last edited by fluxo; 05-21-2015 at 10:51 AM.
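The disagreement between fluxo's two definitions is easy to check numerically. A sketch (function names are mine; definition 2's derivative is approximated with a central difference rather than derived symbolically):

```python
import math

def point_gamma_log_ratio(f, x):
    # Definition 1: Log(f(x)) / Log(x)
    return math.log(f(x)) / math.log(x)

def point_gamma_slope(f, x, h=1e-6):
    # Definition 2: x * f'(x) / f(x), the gradient of the log-log plot,
    # with f'(x) estimated by a central finite difference.
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return x * fprime / f(x)

power_law = lambda x: x ** 2.4   # simple power-law "gamma function"
linear_seg = lambda x: x / 4.5   # inverse of the Rec. 709 linear segment

# For a pure power law the two definitions agree (both give 2.4)...
print(point_gamma_log_ratio(power_law, 0.3), point_gamma_slope(power_law, 0.3))

# ...but for the linear segment they do not: the log-log slope is
# exactly 1, while the log-ratio definition varies with x.
print(point_gamma_slope(linear_seg, 0.01))       # ~1.0
print(point_gamma_log_ratio(linear_seg, 0.01))   # ~1.327
```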
05-21-2015, 09:28 PM - AVS Special Member (joined Apr 2003; 6,802 posts)
Thank you (I think).
Some of your above math is a little out of my league, but I'll put my thinking cap on and do my best to try to understand it.
I wish you'd chosen a different example for your plot than the inverse of the Rec. 709 transfer function, because that means I have to turn all the values upside-down in my head. But the green curve on your plot looks pretty close to the inverse gamma of the Rec. 709 encoding function, as I understand it.
ADU
Old
05-21-2015, 09:28 PM
AVS Special Member
Join Date: Apr 2003
Posts: 6,802
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 201 Post(s)
Liked: 242
FWIW, here's my plot of the Rec. 709 OETF. This is the inverse of the blue curve on your graph above. So the horizontal x-axis represents the relative luminance of the scene being recorded. We'll call it "Lscene" to avoid confusion with the L of the display used in Rec. 1886. And the vertical y-axis represents the encoded/gamma-corrected video values V. Both are normalized to the range 0...1.
The values on the graph, which range from .495 to 1.00, represent the gamma as I see it at various points on the curve, using ln V / ln Lscene (or Log(f(x))/Log(x), if you like).
The resulting curve is pretty close to a square root, as shown below. (The green area represents the difference between the two.)
A 1.0 gamma is represented by a straight diagonal line between the lower left corner and upper right corner of the graphs. And imo, there is only one point in the Rec. 709 OETF where gamma is 1.0, and that's in the bottom left corner, where the normalized scene luminance and encoded video values are both 0.
This is my current understanding of gamma on the encoding side. A plot of the sRGB forward (encoding) transformation would look similar, except that the average exponent would be closer to 0.45 than to a square root (0.5).
(EDIT: I've changed the above variables to "Lscene" and "V" to be more consistent with info in the Rec. 1886 spec and Rec. 709 spec. However, it's important to remember that "L" represents different things in these two specs. The "L" in Rec. 1886 represents the relative luminance of the display. And the "L" in Rec. 709 represents the relative luminance of the scene being recorded.)
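For what it's worth, the per-point ln V / ln Lscene values described above can be reproduced in a few lines of Python (a sketch; the constants come from the Rec. 709 spec, the sample points are my own choice):

```python
import math

def rec709_oetf(L):
    # Rec. 709 OETF: linear toe below L = 0.018, power segment above
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

# ln V / ln Lscene at a few scene luminances; the ratio rises toward 1.0
# as L approaches 0, and falls to roughly 0.496 near the top of the range
for L in (0.001, 0.018, 0.1, 0.5, 0.9):
    V = rec709_oetf(L)
    print(L, math.log(V) / math.log(L))
```

This matches the behavior described: close to a square root over much of the range, approaching 1.0 only at the very bottom.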
ADU
Last edited by ADU; 05-22-2015 at 10:27 PM.
Old
05-21-2015, 09:44 PM
AVS Special Member
Join Date: Apr 2003
Posts: 6,802
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 201 Post(s)
Liked: 242
sRGB encoding gamma vs. sRGB encoded picture values (Csrgb), as I see it...
ADU
Last edited by ADU; 05-22-2015 at 10:48 AM.
Old
Today, 12:09 PM
Senior Member
Join Date: Oct 2012
Posts: 322
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 121 Post(s)
Liked: 101
Quote:
FWIW, here's my plot of the Rec. 709 OETF. This is the inverse of the blue curve on your graph above. So the horizontal x-axis represents the relative luminance of the scene being recorded. We'll call it "Lscene" to avoid confusion with the L of the display used in Rec. 1886. And the vertical y-axis represents the encoded/gamma-corrected video values V. Both are normalized to the range 0...1.
The values on the graph, which range from .495 to 1.00, represent the gamma as I see it at various points on the curve, using ln V / ln Lscene (or Log(f(x))/Log(x), if you like). [...]
This is my (partial) plot:
I've focused on the area of interest.
Quote:
The resulting curve is pretty close to a square root, as shown below. (The green area represents the difference between the two.)
Quote:
A 1.0 gamma is represented by a straight diagonal line between the lower left corner and upper right corner of the graphs. And imo, there is only one point in the Rec. 709 OETF where gamma is 1.0, and that's in the bottom left corner, where the normalized scene luminance and encoded video values are both 0.
And, somewhat unfortunately, it is also undefined at (1,1), because Log(f(x))/Log(x) at that point is 0/0.
You are, of course, free to choose the definition of point gamma that you prefer. And I am undecided as to what is the best definition. However, I should state that there are some peculiar and perhaps undesirable properties of the normalisation & Log(f(x))/Log(x) type approach. For example, consider the following function:
f(x) = x^2.4, x < 0.9
f(x) = 0.9^2.4, x >= 0.9
Plotted in red, versus the ideal 2.4 power law gamma (blue, dashed):
You can imagine a display that clips at input level 0.9 (normalised to range [0,1]). So, for 90% of its range, it perfectly tracks the ideal power law function, but then flatlines for the final 10%. To calculate point gamma using your approach, one would normalise the function. After doing so, we end up with this:
The normalisation procedure has matched up the end points of the plot with the endpoints of the perfect 2.4 power law plot. Other points on the two curves are now separated, where they were not before. As an exercise in curve fitting, this is suboptimal.
Using the Log(f(x))/Log(x) definition, the average point gamma up to input value 0.9 is 1.9, even though that part of the original non-normalised curve is essentially a perfect x^2.4 curve. Or to put it another way: you could have a display that, when showing darker content, behaves exactly like a display that perfectly targets the 2.4-exponent power law, and yet the average gamma calculated for those input values is considerably less than 2.4. Again, this is a consequence of the normalisation scaling. The calculated gamma for the lower points is affected by the higher points and, potentially, a further measurement could find a new point that forces a rescaling of the whole range and consequently changes the effective gamma at every input level.
For the whole input range, the average point gamma is 1.71. Using the other definition of point gamma (the log-log gradient), it is 2.16.
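The clipping example is easy to reproduce. This sketch checks the per-point values rather than the averages (the sample point x = 0.5 is my own choice):

```python
import math

CLIP = 0.9

def display(x):
    # Hypothetical display: perfect 2.4 power law that clips at input 0.9
    return min(x, CLIP) ** 2.4

def normalised(x):
    # Rescale so the top of the measured range maps to 1.0
    return display(x) / display(1.0)

def gamma_logratio(g, x):
    # ln g(x) / ln x on the normalised curve
    return math.log(g(x)) / math.log(x)

def gamma_loglog(g, x, h=1e-7):
    # log-log gradient: x g'(x) / g(x), via a central difference
    return x * (g(x + h) - g(x - h)) / (2 * h) / g(x)

x = 0.5  # well inside the unclipped region
print(gamma_logratio(normalised, x))  # ≈ 2.04, dragged down by normalisation
print(gamma_loglog(normalised, x))    # ≈ 2.40, unaffected by the rescale
```

In the unclipped region the display is a perfect x^2.4 curve, yet the normalised log-ratio reports noticeably less than 2.4, while the log-log gradient still reports 2.4 exactly.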
Note that the other definition of point gamma is invariant under vertical scaling. E.g., letting P(f,x) = x f'(x)/f(x), we can choose a new function g(x) = k f(x), for some constant k. It follows that:
P(g,x) = x g'(x)/g(x) = x k f'(x)/ (k f(x)) = x f'(x)/f(x) = P(f,x)
In plain English: if you use the log-log gradient approach, then normalisation does not change the calculated point gamma.
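That invariance is trivial to verify numerically (a sketch; the test function and the scale factor k are arbitrary choices of mine):

```python
import math

def f(x):
    # Power-law segment of the Rec. 709 OETF, used here just as a test function
    return 1.099 * x ** 0.45 - 0.099

def point_gamma(g, x, h=1e-7):
    # log-log gradient definition: x g'(x) / g(x), via a central difference
    return x * (g(x + h) - g(x - h)) / (2 * h) / g(x)

k = 3.7  # arbitrary vertical scale factor
g = lambda x: k * f(x)

print(point_gamma(f, 0.3))  # same value...
print(point_gamma(g, 0.3))  # ...after scaling by k
```

The k cancels between g'(x) and g(x), exactly as in the algebra above.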
Last edited by fluxo; Today at 12:16 PM.