Originally Posted by TomHuffman
I haven't responded because I have been on Thanksgiving vacation and not following the thread.
I really don't understand why these debates cannot remain respectful and without personal animus. It is not personal. It was not my intention to question your "honor" and I'll ignore your unhelpful assertions about what I do and do not understand.
Okay, so how is it that your method of eliminating color error doesn't overestimate dE at, say, 30% stim for a given absolute difference in u' and v'?
Of course, the CIELUV and CIELAB specifications include consideration of the brightness of the measured sample compared to a reference. This is because the colors in a gamut include a specified level of brightness. Thus, a measured color whose brightness deviates from that reference has a higher dE than one that doesn't. However, the same cannot be said for white.
White is, um, definitional. It's a normalization thing. Yn == 1.0.
The basis of this debate is stated in your claim that "To compute dE, you would need to specify a measured Y (normalized) and an assumed gamma to compute dE in the above situation."
That is the entirety of this argument: should an assumed gamma and a measured Y be included in dE calculations for grayscale.
Good. This is where it gets fun, because the "real" "debate" seemed to be whether eliminating gamma error was a good thing, and since that debate is settled, let's talk specific implementations.
Note: we need an assumed gamma because this is the ONLY sense in which we could assume a "correct" level of brightness for white.
White is white. It's a definitional thing. ALL luminance values need to be normalized against the luminance for white, which is 1.0 (what "normalization" means).
Unlike color, the definition of white includes no specific level of brightness. Indeed, the specified level of brightness for color is defined relative to the brightness of reference white, which is NOT defined in advance.
Wow. Simply wow. Which definition of color includes a specific level of brightness? Is red like 3.14159 ftL? Does it turn magenta when it gets to 9 ftL? Does color care whether you are in metric or imperial measures?
Here's a hint: all luminance values need to be normalized against white. That is the definition. As a result of this, gamma is undefined at Yn = 1.0.
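To make the "undefined at white" point concrete, here is a minimal sketch of the conventional point-gamma formula (a textbook illustration, not anyone's shipping algorithm): point gamma is log(Y)/log(stimulus) with both values normalized to reference white, and at 100% stimulus that collapses to 0/0.

```python
import math

def point_gamma(stim, y_norm):
    """Point gamma: the exponent g such that y_norm == stim ** g.
    Both inputs are normalized to reference white, i.e. in (0, 1)."""
    if stim <= 0.0 or stim >= 1.0:
        return None  # undefined at black, and at white: log(1) == 0 gives 0/0
    return math.log(y_norm) / math.log(stim)

# A display tracking a 2.2 power law exactly:
for stim in (0.2, 0.5, 0.9, 1.0):
    y = stim ** 2.2
    print(stim, point_gamma(stim, y))  # 2.2 everywhere except at white, where it is None
```

Every gray stimulus yields a well-defined point gamma; white does not, which is why gamma can only be a property of the grayscale, never of white itself.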
Now gray, on the other hand does depend upon what flavor of gray you are trying to produce. In other words, you need to know some things about what was intended versus what was actually produced.
So this all boils down to whatever reasons one can offer for or against including a measured Y and an assumed gamma in one's dE calculations for grayscale. The reasons against, as I have suggested, are compelling. Let me enumerate them:
1. Unlike color, there simply is no specification for the "correct" level of brightness for white. There is only what the target gamma specifies.
I'm done chiding you for being sloppy on distinguishing gray from white. The "correct" level of gray is a combination of the target gamma formula, the target exponent and the signal level.
2. There is not even any fixed definition of the "correct" gamma. This is both because there is more than one way to calculate gamma and because gamma requires assumptions about viewing environment and the characteristics of the display on which the content is mastered.
Good reasons to eliminate gamma error, but then computing gamma error, if it is so unknown, should be pretty tough, huh?
3. The methodology you suggest
You have clearly demonstrated that you do not know this.
is a literal application of the CIE specification for a context in which it was never intended.
Huh? They did not know about gamma and grayscale in the 70s?
The CIE specification is intended for grading color difference,
Of which, grayscale is an example.
not for grayscale tracking, which poses a different set of issues.
Such as?? I'm really curious. This is stuff that seems not to have been included in the SMPTE Journal.
4. The methodology you suggest is inconsistent with industry standards. There is no other source I am aware of that calculates grayscale error in the way you describe.
Aside from the CIE, right? But hey, what do they know?
In other words: show me how you aren't overestimating error, or concede that it is something you aren't worried about.
By the way, which method are we talking about, again? I'm not sure since I'm pretty sure we have not disclosed our actual algorithm, pretty much ever. And we most assuredly have not done so in this thread.
ColorFacts does not.
We take pride in having deployed things like point gamma calculations years before they did. This really isn't any different.
Greg Rogers' Display Calibration Calculator does not.
Greg generally uses dE(uv).
ISF recommends that it is MORE important to get the low end of the grayscale right rather than LESS important as you seem to suggest.
You are making this too easy:
Originally Posted by TomHuffman What's wrong with the ISF description of color?
I am a graduate of the ISF seminar, and I think that the organization has performed a valuable service at educating the public about the importance of accurate video. However, the ISF understanding of color is not entirely clear.
Cliff Plavin of Progressive Labs has always argued for dC only for grayscale. I have resisted this, because although I agree that dL should be ignored, I didn’t see any reason to ignore dH. However, when you look at the actual data, the contribution of dH to grayscale errors is negligible. Thus, I now think this is a defensible position.
Really? You think that dE76 doesn't encompass dH inside dC? Would it surprise you to learn that it does?
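In CIELAB, dH* is defined as the residual after the lightness and chroma differences are removed, so the chromatic part of dE76, sqrt(da*² + db*²), already carries the hue difference inside it. A short sketch with arbitrary sample values (chosen for illustration only):

```python
import math

def decompose(lab1, lab2):
    """Split the a*b* portion of dE76 into chroma (dC) and hue (dH) terms."""
    _, a1, b1 = lab1
    _, a2, b2 = lab2
    da, db = a2 - a1, b2 - b1
    dC = math.hypot(a2, b2) - math.hypot(a1, b1)   # chroma difference
    dH2 = da * da + db * db - dC * dC              # hue term squared, by CIE definition
    return da * da + db * db, dC, math.sqrt(max(dH2, 0.0))

# Two near-neutral "grays" with small chromatic casts:
dab2, dC, dH = decompose((50.0, 1.0, -2.0), (50.0, -0.5, 1.5))
# da^2 + db^2 == dC^2 + dH^2: dropping dL does not drop dH
print(dab2, dC * dC + dH * dH)
```

Since the identity holds by construction, "dC only" discards real hue error, while dE76 with dL = 0 keeps it.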
The one you advocate here is not. If you are aware of any other reputable source that calculates grayscale dE the way you advocate here, then I'd sure like to know.
This is getting pedantic, but the only methods I have talked about are either a) the standards themselves, or b) people's specific "adjustments" to them.
5. This methodology has the effect of systematically and sometimes grossly understating grayscale error. One might argue the point because of the eye's poor sensitivity to color at very low light conditions, as was suggested in the original post's irrelevant discussion of scotopic, mesopic, and photopic vision. But as I pointed out, the level of illumination required for this phenomenon to matter is much lower than any grayscale reading would ever include. Nonetheless, you apparently continue to endorse the view that we should calculate the dE of a specified white, say x0.314, y0.351, differently at 90% stim than at 100% stim, where our variable sensitivity to color perception is surely irrelevant.
Tom - Please stop. Seriously. Reading comprehension is not your strong suit. I'm having fun picking this apart, but now you are embarrassing yourself.
So that we are clear: the method should be constant across any set of numbers. The output is what changes based upon the inputs. All that I posted, above, was a set of numbers that were straight CIE76 with dL = 0 and a constant du' and dv'. That's all. The point was to demonstrate a desirable property of color error that was intended by the CIE, and that also shows up in CIE94 and CIE00. It is this property that our little trick preserves, while eliminating gamma error. No, we do not arbitrarily set dL to 0.
The rest of this is simply demonstrating that, despite having published a dE calculator of your own, you do not really understand what goes on in the numbers. Your complaint, in this case, is with the CIE itself, not us.
Your own example has the dE of the SAME COLOR OF WHITE dropping nearly 2 points for no other reason than from going from 100% to 90% stim and reduced by more than half at 40% stim.
Yep. The numbers are what they are, it's not our method. In fact, it is a lot closer to yours. The only thing I did was let L* vary, rather than hold it constant to show how the CIE intended the equation to work. Obviously, you did not recognize that.
If you think it is as easy to see a given color difference when there is approx. 1/7th - 1/10th as much light around as what you are adapted to for white, then that sounds like a testable hypothesis. One might even develop a model of people's color perception. Here's a hint: if you want to be taken seriously, don't use your "sunlight/inside light" example from your WSR article. That one was simply bad.
I frankly find it difficult to even take this very seriously. It appears to arise from a rigid application of a mathematical model without regard to the consequences for practical application much less common sense.
What consequences are there for practical application? That you set a high hurdle for imperceptibility? That you have to rely on software that does all of the computation for you in real time? That our users have to toggle a control if they want to use it? Do tell. I'm curious here, too.
As for common sense: I'm all ears. I'd really be entertained to learn why you prefer tool vendors who were fast-and-loose with published standards.
Consider the following. Assume a 35 fL peak output, 2.2 gamma display. Also assume a consistent white of x0.314, y0.351. This is the white point for DCI. Now what’s the dE of this white relative to a Rec. 709 reference of x0.3127, y0.329? According to the methodology you endorse here, at 100% stim it is 17.5 (you get a slightly different number I suppose because of rounding differences), but at 20% stim it is 3.4, well within the SMPTE’s recommendation of a dE of no more than 4.0 L*a*b* units. So, using your recommended methodology, x0.314, y0.351 at 20% stim is a perfectly acceptable calibration result.
Careful, you are perilously close to saying that SMPTE uses L* when computing dE (you would be right there, though individuals vary in interpretation, as this thread demonstrates). However, your math is off. Let's leave Lab76 alone until we master Luv. In CIE76 or CIE94 terms, it is over 9.
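For anyone following along, the straight textbook dE*uv arithmetic behind the 17.5 and 3.4 figures quoted above can be reproduced in a few lines (a sketch of the standard CIELUV equations with dL held at 0 and a 2.2 power-law display assumed, not either vendor's product code):

```python
import math

def uv_prime(x, y):
    """CIE 1976 u'v' chromaticity from CIE xy."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

def L_star(Y):
    """CIE lightness for normalized luminance Y (Y above 0.008856 here)."""
    return 116.0 * Y ** (1.0 / 3.0) - 16.0

u1, v1 = uv_prime(0.314, 0.351)    # measured white (DCI white point)
u2, v2 = uv_prime(0.3127, 0.329)   # Rec. 709 reference white
duv = math.hypot(u1 - u2, v1 - v2)

results = {}
for stim in (1.0, 0.2):
    L = L_star(stim ** 2.2)        # 2.2 power-law display, dL taken as 0
    results[stim] = 13.0 * L * duv # dE*uv with equal L* on both sides
    print(f"{stim:.0%} stim: dE(uv) = {results[stim]:.1f}")
```

Running this reproduces the pattern under discussion: roughly 17.5 at 100% stim falling to roughly 3.4 at 20% stim, purely because the 13·L* factor in u*/v* scales the same chromaticity offset down with lightness.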
Here’s the RGB values for that color of white.
Our numbers match here.
Bill, this is madness.
Are you really telling your customers to not bother with an error this large at the low end of the grayscale because our eyes just won’t be sensitive to it?
Here’s a simple experiment. Just look at a 20% (approx. 3.5 cd/m2) window or field of x0.314, y0.351. If you can’t tell the difference between that and another window or field of x0.3127, y0.329 then getting your display calibrated is the least of your worries.
Agreed. However, your "simple experiments", judging by the WSR article, often involve such "minor nuisances" as significant changes in ambient light spectrum and luminous flux, so one does have to double-check these things.
6. This recommendation isn't even internally consistent. Surely you do not also advocate using a different set of standards for judging the dE of pri/sec colors when using 75% windows as opposed to 100% patterns. But you should, to be consistent with your view that the eye becomes progressively less sensitive to color at lower light levels and that the dE measurement should reflect this.
Huh? It does, and so does yours, unless you use dE(uv) for gamut work. We recommend people turn off gamma correction when doing primary/secondary work. This was discussed previously.
If you did then, all else being equal, a color of red at 75% should have a lower dE than the same color of red at 100% stim.
A given absolute difference in chromaticity would be less visible at lower light levels. Whether it drops below the threshold of visibility is another matter. This is curious from someone who just published something on the underappreciated nature of brightness in color measurement. Ironic, even.
Of course, you don’t advocate this because the brightness of color is judged against white as a reference, which changes proportionally. But that’s the point, isn’t it? The brightness of white is relevant for gamma, levels, and calculating the dE of color. It is NOT relevant for calculating the dE of itself.
Huh? Are you confusing white, which is a point, with gray, which is a line connecting white and black? If white changes, then the entire adaptation model changes (must be re-normalized). Luminous flux is captured in the "Lightness" component of color error. You really don't understand this, do you?
You seem to want to make the narrow point that IF you include a measured lightness element and assumed gamma in dE calculations that you get the result you describe. I don’t question this result. What I question is whether this approach offers a reasonable calibration methodology or sound color science.
Gauntlet thrown; gauntlet accepted. I am now free to post my critique of your CMS document. Hilarity will ensue.
BTW, Grassmann's Law explains why the brightness of RGB equals the brightness of white and why the brightness of each of the secondaries equals the brightness of the contributing primaries. That was not the issue I mentioned, which was the relative brightness (relative to each other and relative to the original white at the same level of stimulus) of the grays that result after removing all of the chroma from color signals.
Tom - Here's a clue, as in "let me spell it out for you":
The additive mixture of primaries is an algebraic property between the defined primary locations and the white point. If your white point is constant (say, D65), but your primaries deviate from the established standard, then a change in the relative mix of the primaries will change the amount of total light, aka Grassmann's law. When you change a saturation control, you are changing the relative mix of these primaries (note my red desaturation example, above). In other words, whether the predicted "no change" in luminance actually occurs is entirely dependent upon the actual locations of the primaries.
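The dependence on actual primary locations is easy to verify: solve for how much of each primary it takes to mix the white point, and read off each primary's share of total luminance. A sketch using the Rec. 709 primaries and D65 (textbook colorimetry arithmetic, not anyone's product code):

```python
def xy_to_XYZ(x, y, Y=1.0):
    """CIE XYZ from chromaticity (x, y) at luminance Y."""
    return (Y * x / y, Y, Y * (1.0 - x - y) / y)

def luminance_shares(r, g, b, white):
    """Fraction of the white point's luminance contributed by each primary."""
    cols = [xy_to_XYZ(*p) for p in (r, g, b)]
    W = xy_to_XYZ(*white)
    # Solve the 3x3 system M s = W by Cramer's rule (columns of M are the primaries)
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    M = [[cols[j][i] for j in range(3)] for i in range(3)]
    D = det(M)
    shares = []
    for j in range(3):
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = W[i]
        shares.append(det(Mj) / D)
    return shares  # sums to 1.0; each entry is that primary's share of white

shares = luminance_shares((0.640, 0.330), (0.300, 0.600), (0.150, 0.060),
                          (0.3127, 0.3290))
print([round(v, 4) for v in shares])  # Rec. 709: red is ~21% of white's luminance
```

Move any primary off its standard location and these shares change with it, which is exactly why "no change in luminance" only holds when the primaries sit where the standard says they do.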
This application of Grassmann's law is, I suspect, more advanced than your understanding, since your CMS document includes the contradictory admonishments to set red as close to 21% as possible using a saturation control, while not using color decoder controls to change the gamut. However, we'll give you a head start to make wholesale changes. It's late, and I am now bored with this utterly and thoroughly.