Originally Posted by fafrd
Would a high-density LUT allow this issue to be overcome (with proper meters and calibration methodology)?
No - again, look at the comparative columns: all devices were calibrated to a max dE (CIE 1931, dE2000) of 0.00 or very close to it. All narrow spectral peak devices could be calibrated to a max dE of 0.00 over 125 color checker patches - yet those very devices showed the highest occurrence of the problem.
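To see why a max dE of 0.00 and visible metamerism failure can coexist: dE (whether CIE76 or the more elaborate dE2000) is computed purely from two XYZ triples, and those triples are themselves produced through the CIE 1931 observer function. A minimal sketch - using the simpler CIE76 formula for brevity, and made-up XYZ values - shows that two physically different spectra with identical 1931 XYZ give dE = 0 by construction; the metric sits downstream of the very observer model that is breaking:

```python
import numpy as np

# CIE XYZ -> L*a*b*, then the CIE76 color difference. dE2000 adds
# lightness/chroma/hue weightings but, like dE76, is a function of
# the two XYZ triples only - it never sees the underlying spectra.
def xyz_to_lab(xyz, white):
    def f(t):
        d = 6.0 / 29.0
        return np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    fx, fy, fz = f(np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float))
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def delta_e76(lab1, lab2):
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

white = (0.9505, 1.0, 1.089)  # D65 white point (2 deg observer)

# Hypothetical example: a broadband patch and a laser-primary patch with
# very different spectra but the SAME integrated CIE 1931 XYZ.
xyz_broadband = (0.41, 0.37, 0.30)
xyz_laser     = (0.41, 0.37, 0.30)
dE = delta_e76(xyz_to_lab(xyz_broadband, white), xyz_to_lab(xyz_laser, white))
print(dE)  # 0.0 - the metric cannot register what a non-standard observer sees
```

So a "perfect" calibration report only certifies agreement for the 1931 standard observer, not for any real viewer whose color matching functions differ.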
Originally Posted by fafrd
Would the Sharp Quatron 4th (yellow) sub pixel give it an advantage in reducing this issue versus standard 3-primary (RGB) subpixel configuration based on narrow filters and/or narrow-source white light (ie:QDs)?
Again - no, because it is the CIE 1931 observer function that is breaking down. Once that function is declared invalid for narrow spectral peak devices, a yellow subpixel might help - but more likely you will need three more primaries to combat the problem, because the primaries are becoming so narrow.
Also, understand that "color metamerism failure" is just the way the problem is explained - not something that "suddenly started occurring".
Here is the basic process the scientists went through.
1. They recognized that inter-subject variation was far larger than intra-subject variation - meaning that even people with normal color perception perceive colors far more differently from one another than any single person varies when repeating the same color matching task.
2. They recognized, years back (2010), that when people had to match colors between two different screen technologies - one of them a narrow peak device - the error margin (according to CIE 1931/dE2000) would suddenly double.
3. As a proposed solution, very many different groups of color matching functions were devised and attributed to observers (people). Think of it this way: instead of one standardized observer (i.e. CIE 1931 2°), the industry would have to "manage" anywhere from 8 to hundreds of them.
4. Mathematical models were developed that take all this information on how different people with normal visual perception (i.e. not color-blind) perceive color and apply it across existing screen technologies and color matching functions. Meaning you can make assessments not only against CIE 1931 2°, but also against CIE 1964 10°, Judd-Vos, and so on. CIE 1931 2° is used in this example only to blow the lid off the problem, because the calibration industry is still using it.
5. And this is the new part: when this new indicator for color perception was used to assess different screen technologies, they found that on older devices (without narrow-band primary peaks), calibrated to the old paradigm (CIE 1931 2°/dE2000), the new "color metamerism failure" indicator produced error margins (a measure of how the general public perceives colors) that were lower by a factor of two or even three. (Don't forget that CIE 1931 2°/dE2000 is also an indicator of color perception, so the two are in effect competing models - there CANNOT be both "color metamerism failure" AND a dE below the noticeable-difference threshold; with current screen technologies, one of the two can no longer be true.)
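The mechanism behind steps 1-5 can be sketched numerically. The toy model below is an illustration only: the Gaussian-lobe color matching functions, the primary wavelengths (630/532/467 nm), the primary widths, and the 2 nm observer shift are all assumptions of mine, not real CIE data or values from the experiment. It shows why a display calibrated to a perfect match for one observer function drifts much further, for a slightly different observer, when its primaries are narrow:

```python
import numpy as np

# Wavelength grid in nm, 1 nm steps; plain sums approximate the integrals.
wl = np.arange(380.0, 781.0, 1.0)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

def observer(shift=0.0):
    """Toy Gaussian-lobe CMFs loosely shaped like CIE 1931 x,y,z.
    `shift` (nm) crudely models an individual whose spectral
    sensitivities are displaced from the standard observer's."""
    x = 1.06 * gauss(599 + shift, 38) + 0.36 * gauss(446 + shift, 20)
    y = 1.00 * gauss(556 + shift, 47)
    z = 1.78 * gauss(449 + shift, 22)
    return np.vstack([x, y, z])

def xyz(spd, obs):
    return obs @ spd  # integrate the spectrum against each CMF

obs_std = observer(0.0)    # "standard" observer used for calibration
obs_real = observer(2.0)   # a real viewer, 2 nm off the standard

ref = np.ones_like(wl)     # broadband reference stimulus (flat spectrum)
target = xyz(ref, obs_std)

def mismatch(sigma):
    """Match the reference with three primaries of spectral width `sigma`,
    calibrated exactly for the standard observer; return the residual
    error that each observer sees."""
    P = np.vstack([gauss(mu, sigma) for mu in (630, 532, 467)])
    M = obs_std @ P.T               # XYZ of each primary, standard observer
    w = np.linalg.solve(M, target)  # drive levels: exact standard-observer match
    mix = w @ P                     # the physical light the display emits
    err_std = np.linalg.norm(xyz(mix, obs_std) - target)
    err_real = np.linalg.norm(xyz(mix, obs_real) - xyz(ref, obs_real))
    return err_std, err_real

narrow = mismatch(5.0)   # laser-like, narrow primaries
broad = mismatch(60.0)   # broadband, phosphor-like primaries
print(f"narrow primaries: std-observer error {narrow[0]:.2e}, shifted observer {narrow[1]:.2f}")
print(f"broad primaries:  std-observer error {broad[0]:.2e}, shifted observer {broad[1]:.2f}")
```

Under these assumptions, both displays match the reference exactly for the standard observer (the dE 0.00 situation), while the shifted observer sees a several-times-larger residual on the narrow-primary display - the doubling of matching error the experimenters observed on narrow peak devices.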
Which is lucky - because calibrators could have been calibrating under a wrong assumption, with higher occurrences of metamerism failure, for years. But they now have an excuse: the problem manifests far more strongly since the industry pushed towards narrow spectrum primary devices.
Looking at the premise of the CIE 1931 2° observer, it can be classified as broken and obsolete - but it will probably remain in use as a comparative measure while people are retrained.
The real problem is that the industry right now is pumping out all these devices manufactured to a wrong paradigm (one that ignores how very differently many people perceive colors) - and they CANNOT be recalibrated to fit the new paradigm.
Sadly, the comparison above didn't include a quantum dot display - because its Table VII would be very funny to look at.
In the experiment, laser projectors were used, which can be recalibrated very freely (see the dE2000 of 0.00 over 150 color samples). LCD (quantum dot) or white OLED panels cannot.
Basically, as soon as the industry started to bet on narrow spectral peak devices and pump them into the market, it backed a technology that broke the color science model in use since 1931. Remember the Sony OLED that even the best spectroradiometers wouldn't calibrate correctly - which Sony "fixed" (more likely did not fix, but patched up a bit) by using Judd-Vos instead of CIE 1931 2°.
The actual fix is more problematic, since it is a heavy cost driver (four more primaries!) and/or stands directly opposed to the industry's current primary aim of reaching the Rec. 2020 gamut. Notice that even the "best performing" laser projection devices in the experiment can be configured either so that perceived errors are minimized according to the enhanced perception model, OR so that a full Rec. 2020 gamut is reached - but not both.
The good news is that the many different observer "impression vectors" (shown as ellipses in the 3D graph above) can be kept in sync, even on a narrow primary peak standard. The bad news is that you will probably need four more primaries and still won't reach the Rec. 2020 gamut, even with lasers. The even worse news is that on the industry's current trajectory, within a few months' time, color accuracy errors will become more and more pronounced, as perceived by more and more people.
Now let's see how fast you can reboot an entire industry.