The following is my current understanding of the issue. I do not claim to have perfect knowledge, and welcome any suggestions for improvement and correction.

Many of us have heard about calibrating our displays so that the gamma is correct (2.2, 2.4, take your pick). The reason often given for doing this is so that you can be more confident that your display matches the reference displays on which content was mastered. This is correct, but there is something deeper at play here, and understanding it is key to appreciating the EOTF recommended in BT.1886.

In an 8-bit PC RGB framework, we have 256 possible shades of gray. Ideally, we want each successive shade of gray to be *just* distinguishable from the previous one, and we want this to hold true whether we're talking about the difference between [23 23 23] and [24 24 24] or between [210 210 210] and [211 211 211].

This ensures that we make efficient use of the available dynamic range. If each successive step were too easy to distinguish, we'd be wasting our visual system's ability to discriminate finely. If successive steps were not distinguishable, we'd be limiting our overall dynamic range, leaving us with fewer than 256 perceivable shades of gray.

In the former case, we'd get some nasty artifacts, where scenes that are supposed to have smoothly varying shades instead have unsightly banding/posterization. In the latter case, we'd get a lot of perceptual crushing/clipping.

So how do we achieve a perceptually uniform relationship between our input level (V) and our perceptual lightness response (L*)?

A naive solution is to set up our systems so that the luminance (L) is a linear function of our input level (V). So if we double our RGB signal, we double our luminance output.

The problem with this is that the human lightness response (L*) is not a linear function of luminance (L).

See the following set of plots, which shows the results from several studies on the relationship between luminance and lightness.

Notice how the curves depict a nonlinear relationship, where the slope of the line starts out high and decreases as luminance increases.

What this means is that we are more sensitive to changes in luminance at the dark end than we are at the lighter end.

Now look at the natural relationship between the input voltage in a CRT monitor, and the resulting light output:

Notice how the nonlinear relationship here is "opposite" to that in the previous graph? In fact, the relationship of input signal (V) to luminance (L) is very close to the inverse of the relationship between luminance (L) and lightness (L*)!

Think about what this means. If we have a nonlinear function (A) from input (V) to luminance (L), and the inverse function (A') from luminance (L) to lightness (L*), we can state the following:

L = A(V)

L* = A'(L)

therefore L* = A'(A(V))

Since A' is the inverse of A, the composition A'(A(V)) = V, a linear (identity) function (i.e. the two nonlinearities "cancel out").

**And thus, a CRT's natural response characteristics will produce a perceptually uniform set of brightness levels!**
This is not by design, but is rather an extremely fortunate coincidence (as Charles Poynton points out in this paper, from which the above two images were taken).
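As a rough numerical illustration of this cancellation (my own sketch, not from the paper): if we model the CRT response as A(V) = V^2.4 and the lightness response as its exact inverse, composing the two returns the original signal, so equal signal steps yield equal perceived steps. (In reality, the CIE L* response is closer to a cube root, so the cancellation is only approximate.)

```python
# Assumed models: CRT-like response A(V) = V**2.4 and a simplified
# lightness response A'(L) = L**(1/2.4), its exact inverse.
# Real CIE L* uses roughly a 1/3 exponent, so this is only illustrative.

def A(v, gamma=2.4):
    """Signal (0..1) -> normalized luminance."""
    return v ** gamma

def A_inv(lum, gamma=2.4):
    """Normalized luminance -> modeled lightness."""
    return lum ** (1.0 / gamma)

for v in [0.1, 0.25, 0.5, 0.75, 1.0]:
    lightness = A_inv(A(v))
    # Lightness equals the input signal: perceptually uniform steps.
    print(f"V = {v:.2f} -> L* = {lightness:.2f}")
```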

Ok, so now we have an understanding of why we calibrate so that the relationship between our input signal (V) and our measured luminance (L) is of the form:

L = V^Gamma

where Gamma is an exponent value (typically 2.4).
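A minimal sketch of evaluating this calibration target at a few 8-bit code values, assuming an illustrative 100 cd/m2 peak white (the peak luminance here is my assumption, purely for the example):

```python
# Calibration target L = V**gamma, evaluated for a few 8-bit code values.
GAMMA = 2.4
L_MAX = 100.0  # cd/m2, illustrative peak white (assumed)

for code in [0, 64, 128, 192, 255]:
    v = code / 255.0              # normalize the signal to 0..1
    lum = L_MAX * v ** GAMMA      # power-law luminance response
    print(f"code {code:3d} -> {lum:8.3f} cd/m2")
```

Note how the bottom half of the code values (0..128) occupies only a small fraction of the luminance range: the curve allocates many codes to the dark shades we are most sensitive to.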

Now this is fine when we are dealing with displays that have very low black levels. However, in reality, many displays do not have reference grade black levels.

Suppose we calibrate a display that has a minimum black level of 10 percent of the maximum luminance. On a display that had a maximum luminance of 100 cd/m2, this would mean a black level of 10 cd/m2. That's high, but it's good to use extreme examples for purposes of illustration.

How do we calibrate? We want a gamma of 2.4, but this assumes a very low black level.

A naive solution would be to do something like the following. Take a look at the image below. The blue curve represents a gamma of 2.4 with a black level of 0. The red curve represents a gamma of 2.4 with a black level of 10 percent.

Now there's an immediate problem we can see. The red curve reaches maximum luminance prematurely. This would clip the whites.

The solution is to scale the function so that L(1) = 1. See below:

This particular shifting and scaling approach, however, is not the best way to compensate for a non-zero black level.
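For concreteness, here is a sketch of that naive shift-and-scale curve, with values normalized to peak white = 1 and the 10% black level from the example above:

```python
# Naive approach: lift the gamma curve by the black level, then rescale
# so that L(1) = 1 and the whites are not clipped.

def naive_curve(v, lb=0.10, gamma=2.4):
    """Signal (0..1) -> normalized luminance, naive shift-and-scale."""
    return lb + (1.0 - lb) * v ** gamma

print(naive_curve(0.0))  # starts at the black level (0.10)
print(naive_curve(1.0))  # still reaches peak white (1.0), no clipping
```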

Remember the nonlinear human lightness response? Notice how the function starts at a luminance of 0. What do you suppose would happen if we did those same original experiments, but tested observers starting at 10 cd/m2 instead of 0 cd/m2? The naive solution above assumes that the lightness response would *shift!* But of course, it doesn't. Our ability to detect differences between any two light levels does not depend on the black level of the display we are viewing!

(That last bit isn't strictly true: we do undergo light adaptation, and the dynamic range of our neural response to luminance does recalibrate based on the surrounding light conditions, including ambient light and light in other parts of the image, but we can ignore this for now, as it's not relevant to the endpoint of this discussion.)

Remember, the luminance response of a display is quite flat near black: successive signal steps produce only small changes in luminance there. This is great, because we are naturally very sensitive to changes in luminance at low luminance levels. But if we simply shift the display's luminance function up to our new black level, the display will be outputting those same very small changes in luminance at the new (and higher) black level. Because we are not as sensitive at this higher luminance, we will no longer be able to distinguish different shades of gray as easily, and we will experience *perceptual clipping*.

So, given the fixed human lightness response to luminance, how do we deal with displays that have non-zero black levels?

Enter BT.1886.
This recommendation specifies a formula that scales and offsets the gamma curve in such a way that two things happen:

1) The curve begins at the black level of the display (which one ideally measures).

2) The shape of the curve is steepened at the low end of the display.

Crucially, the degree of steepening is calculated as a function of the black level. Higher black levels require steeper functions so that shades retain their discriminability.

See below, where I have contrasted the naive shift and scaling approach with the correct BT.1886 implementation:
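A sketch of the BT.1886 Annex 1 EOTF, L = a * max(V + b, 0)^2.4, alongside the naive curve for the same display (Lw = 100, Lb = 10 cd/m2). The constants a and b are derived from the measured white and black levels as specified in the recommendation:

```python
GAMMA = 2.4
LW, LB = 100.0, 10.0  # peak white and black level in cd/m2

# BT.1886 Annex 1 constants derived from LW and LB.
a = (LW ** (1 / GAMMA) - LB ** (1 / GAMMA)) ** GAMMA
b = LB ** (1 / GAMMA) / (LW ** (1 / GAMMA) - LB ** (1 / GAMMA))

def bt1886(v):
    """BT.1886 EOTF: signal (0..1) -> luminance in cd/m2."""
    return a * max(v + b, 0.0) ** GAMMA

def naive(v):
    """Naive shift-and-scale curve for the same display."""
    return LB + (LW - LB) * v ** GAMMA

for v in [0.0, 0.1, 0.5, 1.0]:
    print(f"V={v:.1f}  BT.1886={bt1886(v):8.4f}  naive={naive(v):8.4f}")
```

Both curves run from 10 to 100 cd/m2, but the BT.1886 curve rises much more steeply just above black, which is exactly the steepening described in point 2 above.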

Now based on discussion in another thread, it appears to be the case that LightSpace implements the naive solution, rather than the BT.1886 recommendation. However, it is hard to get clarification from Steve Shaw, possibly because he doesn't want to risk making his code public, which is understandable.

**It may well be the case that LS does a fantastic job of implementing BT.1886, but I'm sure users of LS would like to know for sure.**
There may be a way to find out if LS uses the correct BT.1886 implementation, if LightSpace users on this forum can do a simple experiment.

This would involve importing a profile into LightSpace, and seeing how LightSpace judges the luminance response (specified in the profile) relative to its implementation of the BT.1886 target.

The profile used in this experiment should have an artificially high black level, as discrepancies between naive and correct BT.1886 implementations will increase as the black level increases (this is why naive implementations will not cause as much harm in studios with reference displays).

To aid in this, here is a set of values you can use, which represent a correct BT.1886 function with a black level of 10% of maximum luminance.

| Input signal (V, %) | Luminance (L, cd/m2) |
|:---:|:---:|
| 0 | 10 |
| 10 | 14.3090 |
| 20 | 19.5425 |
| 30 | 25.7497 |
| 40 | 32.9765 |
| 50 | 41.2658 |
| 60 | 50.6583 |
| 70 | 61.1922 |
| 80 | 72.9041 |
| 90 | 85.8289 |
| 100 | 100 |

If you need me to convert the input signal into PC or Video RGB levels, let me know.
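These values can be reproduced directly from the BT.1886 Annex 1 formula (Lw = 100, Lb = 10 cd/m2); here is a short sketch that regenerates the table:

```python
# Regenerate the BT.1886 table for Lw = 100, Lb = 10 cd/m2.
GAMMA = 2.4
LW, LB = 100.0, 10.0
a = (LW ** (1 / GAMMA) - LB ** (1 / GAMMA)) ** GAMMA
b = LB ** (1 / GAMMA) / (LW ** (1 / GAMMA) - LB ** (1 / GAMMA))

for pct in range(0, 101, 10):
    v = pct / 100.0                      # signal as a fraction of full scale
    lum = a * max(v + b, 0.0) ** GAMMA   # BT.1886 EOTF
    print(f"{pct:3d}% -> {lum:8.4f} cd/m2")
```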

I don't have LS, and have never used it, so I don't know how to do this experiment, but Steve has said that you can manually enter profiles in XML format.