This is from Vision Research, similar to an older study, and very similar to the gamma calibration images that we're familiar with.
The logic is as follows:
In order to assess the gamma of a display, you need to know the relationship between input signal and output luminance. Once you have that, you can fit the relationship to a gamma function, and then you have your gamma. If desired, you can use this to create an inverse function and a LUT to linearize the gamma of the display.
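A rough sketch of that fitting step, with made-up signal/luminance values and a simple least-squares fit of L = V^gamma in log space:

```python
import math

# Hypothetical (made-up) calibration data: normalized input signal vs.
# normalized luminance, as you might obtain from the procedure below.
signal = [0.25, 0.5, 0.75]
luminance = [0.07, 0.22, 0.53]

# Least-squares fit of L = V^gamma in log space: log L = gamma * log V.
num = sum(math.log(l) * math.log(v) for l, v in zip(luminance, signal))
den = sum(math.log(v) ** 2 for v in signal)
gamma = num / den  # roughly 2 for these made-up values

# Inverse LUT to linearize the display: to show linear luminance L,
# send the input value V = L^(1/gamma).
lut = [round(255 * (i / 255) ** (1 / gamma)) for i in range(256)]
```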
Figuring out the input signal is easy. Figuring out the luminance requires a luminance meter/photometer.
However, you can do some clever psychophysical things to get at the relative luminance of the display, and if you can get a few data points for input signal vs. relative luminance, you can get your gamma (the shape of the function is independent of the absolute luminance values).
So, the first thing you do is create two patches on the display: a reference patch and a calibration patch. The reference patch contains an array of alternating black and white lines; the black lines are set to the minimum luminance of the display (0), and the white lines to the maximum (1). At a high enough spatial frequency (the observer can sit far from the display if necessary), the eye blurs the reference patch into a homogeneous field whose luminance is the mean of 0 and 1 (i.e., 0.5). The observer then adjusts the input signal of the calibration patch until the two patches are perceptually identical.
Let us call this adjusted input signal V(0.5).
We now have three data points.
Input Signal: [0 0 0] → Relative Luminance: 0
Input Signal: [255 255 255] → Relative Luminance: 1
Input Signal: [V(0.5) V(0.5) V(0.5)] → Relative Luminance: 0.5
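These three points alone already pin down gamma, since L = V^gamma and the endpoints are fixed. A minimal sketch, using a made-up match value of 186 for V(0.5):

```python
import math

# Hypothetical observer match: the calibration patch looks identical to the
# 0.5-luminance reference at input value 186 (out of 255).
v_half = 186 / 255

# With L = V^gamma, the match implies 0.5 = v_half ** gamma, so:
gamma = math.log(0.5) / math.log(v_half)  # ~2.2 for this made-up value
```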
This procedure is then followed iteratively: the next reference patch contains alternating lines at luminances 0 and 0.5 (i.e., input signals 0 and V(0.5)), yielding the input signal associated with 0.25 luminance; the next one alternates 0.5 and 1; and so on...
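A sketch of that bisection procedure, with a simulated observer standing in for the human adjustment (a real session would record the observer's matched values instead):

```python
def simulated_observer_match(target_luminance, gamma=2.2):
    # Stand-in for the human adjustment: returns the normalized input
    # signal that yields the target luminance on a gamma-2.2 display.
    return target_luminance ** (1 / gamma)

# Known endpoints: relative luminance -> normalized input signal.
points = {0.0: 0.0, 1.0: 1.0}

# Each round, a reference patch alternates lines at two already-matched
# signals, so it is perceived as the mean of their luminances; the
# observer's match gives the signal for that midpoint luminance.
for _ in range(3):  # 3 rounds: 0.5, then 0.25/0.75, then 0.125/0.375/...
    lums = sorted(points)
    for lo, hi in zip(lums, lums[1:]):
        mid = (lo + hi) / 2
        points[mid] = simulated_observer_match(mid)
```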
Here are the results - on the left is the curve generated through this psychophysical process, and on the right is the one generated through photometric measurement. Ignore the black dots for now - they concern the residual luminance error due to the non-zero luminance of the black level (which they discuss in some detail). My understanding is that despite the fact that the relative luminance approach assumes a black level of 0 cd/m^2, they still got excellent results:
The rest of the paper deals with an interesting way to calibrate the LUT for a bit-stealing approach. This refers to a method that effectively increases the grayscale bit depth of the display by "borrowing" from the other color channels: a borrowed step increases the luminance by a small amount, but not enough for the human visual system to detect the change in chromaticity. To do this, you need to know the correct ratios of the color channels to use when "borrowing," and part of the process involves knowing how the luminances of the primary colors match up. They used a motion illusion to do this, but I won't describe it here (partially because I don't yet fully understand it, but also because it's not as relevant to this forum).
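I haven't tried to reproduce their calibration, but the basic bit-stealing idea can be illustrated with a toy example. The Rec. 709 luma weights below are just stand-in channel luminance ratios; the paper derives the actual ratios psychophysically:

```python
from itertools import product

# Rec. 709 luma weights as stand-in channel luminance contributions;
# the real ratios would come from the paper's psychophysical procedure.
W = (0.2126, 0.7152, 0.0722)

def rel_luminance(rgb):
    # Approximate relative luminance of an RGB triple (toy model).
    return sum(w * v for w, v in zip(W, rgb))

base = 128
# All triples whose channels are base or base+1: eight sub-steps between
# gray 128 and gray 129, ordered by the small luminance increment each
# single-channel nudge contributes.
sub_levels = sorted(product((base, base + 1), repeat=3), key=rel_luminance)
```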