Ok, I have just been over this with our colour scientist, and no, we are not going to discuss this any further.
We have very valid reasons for the value we use, and we will not be changing it.
Very happy for others to disagree with us, as that is everyone's prerogative.
But we stand by the value we use.
But please understand that this is not something that limits our users to our values.
Because we do not link profiling with calibration, all users can define their own 'values' for the final calibration after the profiling process has finished.
Just to be clear, you say sRGB is 1.95, but you don't have any data to back that up.
You also won't disclose what formula you use to calculate that.
We are wrong, you are right, and the reason you are right is proprietary.
Thank you for the contribution to the forum.
CalMAN Lead Developer
I've always heard that sRGB was 2.2, but from trying to follow this thread, you guys seem to be leaning towards 2.3? I'm calibrating to sRGB mainly for games, so if I'm wrong in assuming it's 2.2, which value should I be choosing?
I believe the authors of sRGB were trying to approximate 2.2. However, the effective exponent of the sRGB reverse transformation varies from 1.0 at black to 2.275 at white. It's about 2.224 at "middle gray" though, as shown on my table below.
| % Stimulus | Relative Luminance | Effective Power |
|---|---|---|
| 0% (black) | 0.0000 | 1.000 |
| 4.045% (break point) | 0.0031 | 1.798 |
| 50% ("middle gray") | 0.2140 | 2.224 |
| 100% (white) | 1.0000 | 2.275 |
There are a lot of different ways of computing an "average" exponent for a function like this, and one of those methods might well yield a value in the 1.95 range. But based on the table above and the graph below, I think a 1.95 gamma would be a poor approximation to the sRGB reverse transformation.
It would not have been that difficult to design the sRGB reverse transformation to hit exactly 2.20 at 50% stimulus rather than 2.224, btw. IMO though, the real target of the sRGB transformation was probably 0.45 (or 1/2.2222...) on the encoding side, as shown on the graph below.
[Graph attachment: sRGB transfer function vs. power-law gamma curves]
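For anyone who wants to check those effective-power figures, here's a quick sketch (the helper names are mine) that reproduces them from the published constants of the IEC 61966-2-1 sRGB transfer function:

```python
import math

def srgb_to_linear(v):
    """sRGB reverse (decoding) transform, per IEC 61966-2-1."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def effective_power(v):
    """The exponent g such that srgb_to_linear(v) == v ** g."""
    return math.log(srgb_to_linear(v)) / math.log(v)

for v in (0.04045, 0.25, 0.50, 0.99):
    print(f"stimulus={v:7.5f}  luminance={srgb_to_linear(v):.4f}"
          f"  effective power={effective_power(v):.3f}")
```

At exactly 100% stimulus the ratio of logs is 0/0, so the white entry is the limiting slope 2.4/1.055 ≈ 2.275, which is why the loop stops at 0.99.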
Prior to the implementation of sRGB, most PC graphics applications (excluding Macs and SGI) typically used either 0.45 or 1/2.2 (0.454545...) for gamma correction. I guess the designers of sRGB probably felt that 0.45 was the simpler and perhaps more commonly used value, and used that as their guide.
The 0.45 and 1/2.2 values were most likely adopted by the PC graphics industry from the older NTSC camera gamma standard btw.
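As a rough check of how close the sRGB encoding side actually comes to that 0.45 figure, here is a small sketch (helper names mine, constants from IEC 61966-2-1) that evaluates the effective encoding exponent at a few linear-light levels:

```python
import math

def linear_to_srgb(l):
    """sRGB forward (encoding) transform, per IEC 61966-2-1."""
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1 / 2.4) - 0.055

# Effective encoding exponent g, where encoded = linear ** g:
for l in (0.01, 0.10, 0.214, 0.90):
    g = math.log(linear_to_srgb(l)) / math.log(l)
    print(f"linear={l:.3f}  encoded={linear_to_srgb(l):.4f}"
          f"  effective exponent={g:.3f}")
```

At middle gray (linear ≈ 0.214) the effective encoding exponent lands at about 0.450, consistent with the idea that 0.45 was the design target on the encoding side.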
HCFR stable build 3.4.2 [donations accepted]
Build a color correction cube for your eeColor box using ArgyllCMS, A Comparison of 3DLUT Solutions for the eeColor box
Some common display real world black levels, Some movie luma and luminance statistics
PN60F8500AFXZA T-FXPAKUC 1206.3 + SamyGO - some recommended settings
We used the 1.95 value just to differentiate it from a power-law 2.2 gamma (the most common Rec709 gamma flavour).
What we have always done for the actual calibration is to accurately map the variable compound gamma profile - with the linear segment near black.
Having said all that, there is a school of thought that the sRGB standard actually defines the display as a standard 2.2 power law, with the variable compound gamma being suitable just for encoding.
This is not a 'proven' theory, but it has real merit.
This is something Charles Poynton and I have briefly discussed, and hopefully he will elaborate his thoughts on this.
(Our discussion was actually stimulated by this very thread...)
In the real world, no sRGB display we have profiled directly from a manufacturer has ever had a linear segment near black (the main distinguishing component of the compound gamma concept).
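To illustrate what that linear segment does near black, here's a minimal sketch (my own helper names, using the IEC 61966-2-1 constants) comparing the piecewise sRGB reverse transform with a pure 2.2 power law at low stimulus:

```python
def srgb_piecewise(v):
    """sRGB reverse transform with the linear segment near black (IEC 61966-2-1)."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def power_law(v, gamma=2.2):
    """Plain power-law gamma, as typically measured on real-world displays."""
    return v ** gamma

for v in (0.01, 0.02, 0.04, 0.08):
    print(f"stimulus={v:.2f}  piecewise={srgb_piecewise(v):.6f}"
          f"  2.2 power={power_law(v):.6f}")
```

Near black the linear segment sits an order of magnitude above a 2.2 power law (at 1% stimulus, roughly 19x), so a display that genuinely implemented the compound curve would measure clearly lifted shadows compared with the power-law displays described above.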