Originally Posted by ConnecTEDDD
From what I can understand, CalMAN will take more measurements (meaning more time will be required) to get a better idea of what the display is capable of, and will later decide which color points to fix.
If I understand correctly, this means that with a Lumagen 20xx and its 9x9x9 3D LUT, if the display is linear, CalMAN will correct fewer color points and use interpolation for the others. But since identifying this takes some additional initial measurements, the total calibration time will probably be about the same, just with fewer calibrated points in the end.
Will the user be able to enable/disable this feature?
It doesn't require any additional readings. It just doesn't kick in until it has gone through enough measurements that, in theory, our interpolation would be perfect. Then it's still an iterative process: we add only a minimum number of points, but continue to validate and add points until our interpolation produces results below a definable threshold. We can also hard-cap DLC, so if you want to set it to 0 in custom mode, you can.
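The validate-and-add loop described above can be sketched in miniature. This is a toy 1-D version, not CalMAN code: the gamma-curve "display", the 0.005 threshold, and all function names are illustrative assumptions. It measures a coarse set of points, checks interpolation error at in-between points, and adds a measurement only where the error is worst, stopping once everything is below threshold or a hard cap is hit.

```python
def display(x):
    """Toy display response: a 2.4 gamma curve standing in for a real measurement."""
    return x ** 2.4

def interpolate(xs, ys, x):
    """Piecewise-linear interpolation between already-measured points."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside measured range")

def refine(threshold=0.005, hard_cap=64):
    """Add measurements only where interpolation error is worst, until
    all validation errors fall below the threshold (or the cap is hit)."""
    xs = [0.0, 0.5, 1.0]                    # initial coarse measurements
    ys = [display(x) for x in xs]
    while len(xs) < hard_cap:
        # Validate at interval midpoints; find the worst interpolation error.
        mids = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
        errs = [abs(interpolate(xs, ys, m) - display(m)) for m in mids]
        if max(errs) < threshold:           # interpolation is good enough
            break
        x_new = mids[errs.index(max(errs))] # measure only where it helps most
        i = next(i for i, x in enumerate(xs) if x > x_new)
        xs.insert(i, x_new)
        ys.insert(i, display(x_new))
    return xs

points = refine()
```

Setting `hard_cap` low (or the threshold to 0) corresponds to capping or disabling the refinement, as mentioned for custom mode.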
So the user's initial selection of the target hardware 3D LUT device will be very important for CalMAN's generation of the cLUT.
CalMAN has to know the fixed control points of each selected target hardware device, to avoid measuring or trying to correct patches where the hardware doesn't have the exact color point available in its 3D LUT table space, right?
This seems more like a feature for eeColor users than for Lumagen users (current 9-point 3D LUT models).
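For reference on those fixed control points: the nodes of an NxNxN hardware LUT typically sit on an evenly spaced grid along each axis. This small sketch is illustrative only (it assumes 8-bit code values and even spacing, which may not match any specific device); N=9 matches the Lumagen's 9x9x9 grid.

```python
def lut_grid(n=9, levels=256):
    """Evenly spaced 8-bit code values for an n-point LUT axis (assumed layout)."""
    return [round(i * (levels - 1) / (n - 1)) for i in range(n)]

grid = lut_grid()       # the 9 fixed node positions along one axis
```

Any patch whose code value falls off these nodes can only be corrected through interpolation between neighboring nodes.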
To calibrate a Lumagen, you've always had to connect to it as a Lumagen; it isn't supported in the VirtualLUT. The Lumagen has been treated separately ever since we released the VirtualLUT, because it doesn't actually use the VirtualLUT; it's direct hardware control. We write to the hardware after each pass. That said, we can still use DLC on it.
We can actually use points in between hardware points to find better average error values. That's why 19 is the default for luminance ramps, even though the majority of devices only have 17-point LUTs.
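The 19-versus-17 relationship above can be shown numerically. This sketch is illustrative (it assumes evenly spaced 8-bit stimulus levels): of a 19-point ramp, only the endpoints and the midpoint coincide with nodes of a 17-point hardware grid, so the other 16 samples probe the display between hardware control points.

```python
def ramp(n, levels=256):
    """n evenly spaced stimulus levels across the 8-bit video range."""
    return [i * (levels - 1) / (n - 1) for i in range(n)]

hw_nodes = set(ramp(17))                          # 17-point hardware grid
between = [v for v in ramp(19) if v not in hw_nodes]
```

Measuring at those in-between levels is what lets interpolation error be estimated where the hardware itself has no control point.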
For the Lumagen, running the full hardware size will give the statistically best results. DLC may shave 20% off the time without changing the perceptual outcome, but that's about the limit of its capability.
For larger 3D LUTs, DLC is not only faster, it also gives much more consistent results. In the past, on a color checker we'd typically see one or two outliers with dEs in the 1-1.5 range while the rest were around 0.5. Not that dEs of 1-1.5 are visible, but with DLC we aren't seeing those outliers. So the results are statistically improved, and may be subtly better visually as well.