This is a myth repeated so often that I have lost count. Decent colorimeters typically do no better and no worse with color, on average, than they do with white. True, the biggest errors tend to be with color, especially green. However, the smallest errors are generally with color as well. It is not uncommon for one of these instruments to read red within 0.002 of a reference device. I have verified this again and again. This myth simply will not die.
This is not a myth! Simply speak directly with the developers who designed these instruments and ask them whether the products are certified for measurement of color. The answer you will receive is that they are not. The reason is simply that the devices are calibrated to work within a specific colorspace, since they are calibrated against CRT/LCD displays in the lab. The digital display industry currently offers a wide variety of products using an assortment of lamp types (CCFL, LED, edge-lit, backlit) and LCD and plasma panels, with primary-color characteristics very different from the typical CRT displays of a few years ago. Because the manufacturer calibrates the instrument on a specific display technology, the end user will obtain the best possible results by using an identical display. Using anything else will often produce unpredictable and potentially quite inaccurate results. The instrument cannot be certified accurate for the simple reason that it responds differently to each display technology and to each manufacturer's displays, even when it is brand new and freshly calibrated.
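To illustrate why the same filter set can read one display technology well and another poorly, here is a crude sketch in Python. The Gaussian curves below are hypothetical stand-ins, not real CIE observer or filter data; the point is only that a filter slightly offset from the observer curve weighs a broad (CCFL-like) emission spectrum and a narrow (LED-like) one differently:

```python
import numpy as np

# Crude illustration only: all curves below are hypothetical Gaussians,
# not real CIE color-matching functions or actual filter responses.

wl = np.arange(380.0, 781.0, 1.0)  # wavelength grid in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cie_ybar     = gauss(555, 45)  # rough stand-in for the CIE y-bar curve
meter_filter = gauss(560, 50)  # meter's green filter, offset a few nm

spd_broad  = gauss(545, 30)    # broad phosphor-style green primary
spd_narrow = gauss(530, 10)    # narrow LED-style green primary

for name, spd in [("broad", spd_broad), ("narrow", spd_narrow)]:
    true_Y  = np.trapz(spd * cie_ybar, wl)      # what a standard observer sees
    meter_Y = np.trapz(spd * meter_filter, wl)  # what the filter integrates
    print(f"{name:6s}: meter/true = {meter_Y / true_Y:.3f}")
```

The two ratios come out different, so a single factory gain that corrects the broad spectrum necessarily misreads the narrow one, and vice versa.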
Your theory of how the filter-based product works describes only part of the issues which calibrators encounter on a frequent basis. Even if you have found that filter-based instruments frequently measure red to within 0.002 of a reference device, which I find interesting in itself, having only one primary read correctly makes it very difficult to calibrate a display!
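For readers wondering what "within 0.002" means in practice, it refers to the CIE xy chromaticity coordinates. A minimal sketch, with entirely hypothetical XYZ readings:

```python
# Hypothetical readings of a display's red primary, compared in CIE xy.

def xy_from_XYZ(X, Y, Z):
    """Convert CIE XYZ tristimulus values to xy chromaticity coordinates."""
    s = X + Y + Z
    return X / s, Y / s

ref_x,   ref_y   = xy_from_XYZ(41.2, 21.3, 1.9)  # reference spectroradiometer
meter_x, meter_y = xy_from_XYZ(40.9, 21.2, 2.0)  # filter-based colorimeter

dx, dy = abs(meter_x - ref_x), abs(meter_y - ref_y)
print(f"delta x = {dx:.4f}, delta y = {dy:.4f}")  # "within 0.002" = both <= 0.002
```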
Originally Posted by ghibliss
This type of instrument would require a library of 3X3 matrices to be developed for each and every display model which one wishes to calibrate to achieve any degree of accuracy.
"Any degree of accuracy" is a meaningless metric. Obviously, the more closely linked the display used for color correction is to the display being measured, the better the results, with the exact same display being the best choice of all.
You apparently do not like my choice of words when I said "one wishes to calibrate to achieve any degree of accuracy". I believe that most people would relate "accuracy" to a calibrated industry reference instrument.
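For anyone curious how such a per-display 3x3 matrix is typically derived, here is a minimal sketch of one common approach: read the display's R, G, B primaries with both a reference instrument and the colorimeter, then solve for the matrix that maps the colorimeter's readings onto the reference readings. All XYZ numbers below are hypothetical:

```python
import numpy as np

# Columns are the XYZ readings of the R, G, B primaries, respectively.
A_ref = np.array([[41.2, 35.8, 18.1],
                  [21.3, 71.5,  7.2],
                  [ 1.9, 11.9, 95.3]])    # reference spectroradiometer
A_meter = np.array([[40.1, 36.5, 18.9],
                    [20.8, 72.4,  7.5],
                    [ 2.1, 12.3, 93.8]])  # filter-based colorimeter

# Solve M @ A_meter = A_ref for the 3x3 correction matrix M.
M = A_ref @ np.linalg.inv(A_meter)

# Every subsequent colorimeter reading on this same display is corrected by:
raw_XYZ = np.array([30.0, 32.0, 34.0])    # hypothetical reading
corrected_XYZ = M @ raw_XYZ
print(corrected_XYZ)
```

The resulting M is only valid for the display, or at best the display technology, it was derived on, which is the whole point of this discussion: a different panel with different primary spectra needs a different matrix.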
Originally Posted by PlasmaPZ80U
Yes, I am aware of all of this, but I'm wondering if you or Tom Huffman could provide some numbers on how much accuracy is gained, on average, by the PRO version over a regular meter. For example, if I have a standard backlit Samsung LCD, how much more accurate will the D2 PRO be than the regular D2 in terms of chroma and luminance tolerances/errors?
I'm just trying to quantify the difference between the PRO version and the regular version of the D2 or DTP-94.
It's a fair question. First, understand that without testing several meters and taking an average, I can't give a precise answer. The offer just started, so I don't yet have a database of corrections to refer to.
Having said that, I just tested one Display 2 (not a new one) on a Samsung LCD, and the average error in the x axis was 0.012 and in the y axis 0.008. That's an average; it was higher in some cases, lower in others. Although not scientific, I suspect that this result would be typical for used D2s. New ones, of course, will measure a little better.
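To be clear about what those averages represent, here is a minimal sketch of the arithmetic. The xy pairs below are hypothetical placeholders, not the actual test data:

```python
# Per-patch x and y deltas between the meter under test and a reference,
# then their means. All readings are hypothetical.

ref_xy   = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060), (0.313, 0.329)]
meter_xy = [(0.655, 0.336), (0.285, 0.610), (0.162, 0.071), (0.320, 0.334)]

dx = [abs(m[0] - r[0]) for m, r in zip(meter_xy, ref_xy)]
dy = [abs(m[1] - r[1]) for m, r in zip(meter_xy, ref_xy)]

print(f"average x error = {sum(dx) / len(dx):.3f}")
print(f"average y error = {sum(dy) / len(dy):.3f}")
```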
Is this type of error now commonly considered acceptable with a profiled meter, or with one using a set of supplied offsets? And how much error does the instrument show on a primary in its factory-supplied state, so we can judge whether this is a meaningful improvement?