A comparison of 3DLUT solutions for the eeColor box - Page 13 - AVS Forum
post #361 of 656 Old 03-02-2014, 01:09 PM - Thread Starter
zoyd
Quote:
Originally Posted by ihd View Post

So my hopefully not too unreasonable take on this is that, during initial display profiling, one needs to not disregard (or at least not completely disregard) perceptually insignificant colour regions, because we don't yet know that the device will display them as intended - at that point, all we know is that, if reproduced correctly, fine gradations in those regions will be imperceptible. Or to put it another way, all we know is that those input patches might not be so difficult to tell apart once displayed on a non-linear device. But once we have corrected for the difference between the patches and what a given device actually displays on-screen, then those displayed colour regions revert to perceptual insignificance. We now won't be able to visually differentiate the patch for hue 1 from the patch for hue 3, and that is as it should be. And now we can focus our patch sets on verifying that the finest-grained differences are being reproduced correctly.

^--- Nice post. Yes, ideally you want two passes, the first to identify the areas with the largest non-linearities and then design a sampling set that weights for both non-linear trouble spots and perceptual impact. See also this post.
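To make that two-pass idea concrete, here is a minimal Python sketch; the first-pass errors and the perceptual weighting are illustrative stand-ins, not measured data:

Code:
# Pass 1 measures a coarse grid; pass 2 draws extra patches where the
# display was most non-linear, blended with a perceptual weight.
import numpy as np

rng = np.random.default_rng(0)

# Pass 1: coarse 9x9x9 RGB grid and its measured dE per patch (stand-in).
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 9)] * 3), -1).reshape(-1, 3)
de_pass1 = rng.gamma(2.0, 1.0, len(grid))

# Hypothetical perceptual importance, e.g. favoring darker colors.
perceptual = 1.0 - 0.5 * grid.mean(axis=1)

# Blend trouble-spot weighting and perceptual weighting into one pdf.
alpha = 0.7
w = alpha * de_pass1 / de_pass1.sum() + (1 - alpha) * perceptual / perceptual.sum()
w /= w.sum()

# Pass 2: draw 250 patches, jittered around the weighted grid points.
idx = rng.choice(len(grid), size=250, p=w)
pass2 = np.clip(grid[idx] + rng.normal(0, 0.03, (250, 3)), 0, 1)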
post #362 of 656 Old 03-02-2014, 01:09 PM
spacediver
Quote:
Originally Posted by ihd View Post


Now at the assessment or display verification stage, weighting in favour of entirely new, and preferably the most perceptually significant / visually resolvable regions surely makes sense. By this point we should know very well where the reproduction errors are in the display device: all we want now is to confirm that we have made the best corrections to these that we can, and to examine the calibrated display's performance most closely within areas that are visually/perceptually significant. We are no longer concerned that an input of hue 1 will result in hue 3 (or hue 5, for that matter): if we found such an error in the sampling process during initial characterisation, we have corrected for it now. Why now go back and sample these colours again? At best we'll discover that we didn't correct them as much as we hoped.

Your post brings up an interesting tension that I think underlies the discussion here, and that is the difference between physical errors underlying failures of additivity, and error as defined by Delta E.

If all we were interested in was the search for physical errors, then I would completely agree with Mike and Steve. You need to sample the LUT uniformly. So if the LUT has 100 values and you only have 11 verification samples to allocate, then you should allocate them at the LUT values of 0, 10, 20, 30... 100.

This is because, without any prior knowledge of how the display is performing on a physical level, we must assume that physical errors adopt a uniform probability density function (pdf). Of course, if we had some a priori knowledge that light tends to leak from one subpixel to the others at higher luminous intensities, then we could use this information to prioritize our verification samples to regions of the LUT that correspond to these deficiencies.

But yes, if all we were interested in were physical errors, then uniform LUT sampling would make sense.

Now suppose we could quantify physical errors with a single number. It could be based on the number of photons that leak between sub pixels. If one were to examine how Delta E varies as a function of this physical variable (photon leakage), one would find that it changes dramatically depending on the wavelengths of light that are involved, and their amplitudes. In other words, delta E's relationship to photon leakage varies depending on spectral signature and spectral power.

Now it is certainly true that more photon leakage will be associated with a higher delta E. But in purely colorimetric terms, Delta E's relationship to color change depends on which color space we are talking about. Given that we do know something about the color space of the display, we should exploit this during our verification sampling. This is because even if we assume that the probability of photon leakage is uniform throughout the LUT, we know that the probability of delta E magnitude is not uniform throughout the LUT.
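A small Python sketch of the contrast being drawn here, with a made-up dE-sensitivity curve standing in for the real colorimetry:

Code:
import numpy as np

# Uniform device-space sampling: right if physical errors are equally
# likely (and equally visible) everywhere along a 0..100 LUT axis.
uniform = np.linspace(0, 100, 11)            # 0, 10, 20, ... 100

# Hypothetical: the same physical fault produces twice the dE near
# black as near white.
def de_sensitivity(v):
    return 2.0 - v / 100.0

# Perceptually weighted sampling: place points so each interval carries
# equal expected dE, by inverting the cumulative sensitivity.
v = np.linspace(0, 100, 1001)
cdf = np.cumsum(de_sensitivity(v))
cdf /= cdf[-1]
weighted = np.interp(np.linspace(0, 1, 11), cdf, v)

print(np.round(uniform), np.round(weighted))  # weighted set crowds the dark end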
post #363 of 656 Old 03-02-2014, 01:09 PM
TomHuffman
Quote:
Originally Posted by ConnecTEDDD View Post

Hello Tom, in case you are using a Lumagen as an external pattern generator, LightSpace's engine converts these targets and sends the Lumagen 16-235 commands to generate them; my disk also has all LightSpace patches created at 16-235 levels.

In case you are using LightSpace's internal generator, they are in the 0-255 range there.

My question was more theoretical. What is the point of calibrating invisible colors? There might be some value, however slight, to including samples above 235, but colors below 16 are invisible on a properly adjusted display.

I have a secondary, but related, concern. Not only are these below-16 colors invisible, they are also unmeasurable on a reasonably high-contrast display. Trying to reliably measure color below, say, 5% video (digital 27) is a waste of time.
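For reference, the arithmetic behind "5% video (digital 27)", assuming standard 16-235 video levels:

Code:
def percent_video(code, black=16, white=235):
    """Map an 8-bit video code value to percent stimulus."""
    return 100.0 * (code - black) / (white - black)

print(percent_video(27))   # ~5.0 -> digital 27 is about 5% video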

Tom Huffman
ChromaPure Software/AccuPel Video Signal Generators
ISF/THX Calibrations
Springfield, MO

post #364 of 656 Old 03-02-2014, 01:11 PM
spacediver
Quote:
Originally Posted by ihd View Post


One would expect however that an initial display characterisation of 'reasonable' granularity would (a) identify areas of the colour space within which a given display is most non-linear and provide the means to design a custom patch set whose granularity (sampling frequency) is weighted in favour of these high-error areas, and that (b) such patch sets could also be weighted to favour, or not favour, perceptually significant regions, and their post-calibration colour renditions compared and assessed.

yep, I think Graeme was articulating a similar idea (when it comes to profiling):

See his posts here and here
post #365 of 656 Old 03-02-2014, 04:24 PM
Iron Mike
Quote:
Originally Posted by ihd View Post

This has been a very interesting and illuminating AVS thread, and my thanks to all who've made it so. It's the first I've felt compelled to subscribe and post a response to - basically, the conclusions that this particular beginner in calibration has reached (or jumped to!) from reading thus far.

The next three paragraphs only discuss initial display characterisation or profiling, something made much easier with spacediver's helpful graph, with labelled hues, in post #349.

If a display is reproducing input colours faithfully, then hues within a perceptual low-resolution region (to the left on the graph) will be reproduced as colours belonging in that region, and such non-linearities as exist (e.g. hue 1 switching places with hue 3) will be imperceptible. But what if the reproduction is sufficiently non-linear that these input colours, when displayed, resemble those further to the right, i.e. hues which can be resolved by the human visual system? Or if they are reproduced as any other colour outside a given low-resolution region, and contrast visibly with others within it? (In the latter case, hues 1 and 3 changing places might also have consequences, depending on the adjacent, contrasting colour.) So unless one samples these perceptually insignificant hues reasonably well one cannot know if the colours displayed actually correspond to the input patch values, so that theoretically insignificant regions, once reproduced, no longer remain so.

I'm unsure how important this is in practice - I guess it depends how severely actual displays deviate within narrow bands. Another variable might be screen materials and/or ambient light (who knows what effect those LEDs in your equipment stack might have).

One would expect however that an initial display characterisation of 'reasonable' granularity would (a) identify areas of the colour space within which a given display is most non-linear and provide the means to design a custom patch set whose granularity (sampling frequency) is weighted in favour of these high-error areas, and that (b) such patch sets could also be weighted to favour, or not favour, perceptually significant regions, and their post-calibration colour renditions compared and assessed.

Now at the assessment or display verification stage, weighting in favour of entirely new, and preferably the most perceptually significant / visually resolvable regions surely makes sense. By this point we should know very well where the reproduction errors are in the display device: all we want now is to confirm that we have made the best corrections to these that we can, and to examine the calibrated display's performance most closely within areas that are visually/perceptually significant. We are no longer concerned that an input of hue 1 will result in hue 3 (or hue 5, for that matter): if we found such an error in the sampling process during initial characterisation, we have corrected for it now. Why now go back and sample these colours again? At best we'll discover that we didn't correct them as much as we hoped.

(And perhaps there was always a smaller probability that errors in perceptually less significant areas would lead to poor visual results, as device errors in these regions would have to be larger to have an impact - which some of the calibration results using perceptually-optimised patch sets in this thread seem to attest to.)

So my hopefully not too unreasonable take on this is that, during initial display profiling, one needs to not disregard (or at least not completely disregard) perceptually insignificant colour regions, because we don't yet know that the device will display them as intended - at that point, all we know is that, if reproduced correctly, fine gradations in those regions will be imperceptible. Or to put it another way, all we know is that those input patches might not be so difficult to tell apart once displayed on a non-linear device. But once we have corrected for the difference between the patches and what a given device actually displays on-screen, then those displayed colour regions revert to perceptual insignificance. We now won't be able to visually differentiate the patch for hue 1 from the patch for hue 3, and that is as it should be. And now we can focus our patch sets on verifying that the finest-grained differences are being reproduced correctly.

I suppose one might think that if hue 3 in the perceptually insignificant region can be displayed inaccurately enough for this to become significant, then this can happen in the opposite direction - somehow during calibration hue 7 is transformed into hue 3, and this is missed during verification because hue 3 isn't sampled in the post-calibration verification sampling scheme, biased as this is towards perceptual space. But this shouldn't matter, because while we probably won't be sampling hue 3, we likely will be sampling hue 7, so that any errors in 7's reproduction will be caught that way (and of course we do want to catch those).

Yes, you got part of it, and that is what I and others have been trying to explain to others here.... consumer-grade displays are not perfectly linear displays... even expensive grading displays are not.... you can't just dismiss hues based on ASSUMING that the display is perfect...

If you don't sample in a region, you don't know what's going on in that region, and that is what some of these "optimized" patch sets do: they dismiss or undersample entire regions.

What you haven't considered (or have not mentioned) yet are these facts:

1) I don't want to profile X times, because if you do that, you can just profile ONCE with a larger set instead of wasting time. You can feed the color engine all the data from that ONE profile and it will have a much better understanding of what is going on with the display.

2) Your validation patch set needs to include some colors that you initially profiled, for two reasons:

a) You want to actually evaluate the accuracy of the color engine that calculated the offsets. Don't just blindly trust the solution you are using, otherwise there will be unpleasant surprises. There are big differences; that is why some solutions produce a superior image. You will see this rather quickly if you simply try various solutions.

b) You need to evaluate the performance of the LUT box as well; the hardware can and does add problems. There are multiple steps to evaluating a LUT's performance: evaluation with the LUT active in the LUT box is the last one, and LUT boxes tend to (more or less) add distortion ONCE THE LUT IS ACTIVE, so the box, and therefore the LUT itself, now performs differently. The Mini adds more distortion than the eeColor; we've shown this in all the tests in the past....

The entire signal path and all components play a role in the final accuracy of the calibration...

Something some people here with no real-world experience have a hard time understanding...

- M

calibration & profiling solutions: Lightspace, Spaceman, Calman, Argyll, ColorNavigator, basICColor
profiling & calibration workflow tools: Display Calibration Tools
meter: Klein K-10 A, i1Pro, i1D3
AVS thread: Lightspace & Custom Color Patch Set & Gamma Calibration on Panasonic 65VT60
post #366 of 656 Old 03-02-2014, 04:42 PM
Iron Mike
I'm going to add on to 2b) before I get a thousand questions (a sketch of the step 3 vs. step 4 comparison follows these steps):

1) You profile a color, e.g. red 255|0|0, while the LUT box has no LUT active; it is more or less in "passthrough" mode, and we will see that red is not spot on (in this example).

2) The color engine will calculate an offset in the LUT for red.

3) We evaluate the LUT, and therefore the color red, with the LUT box still in "passthrough" mode. Professional solutions such as LightSpace have options to do that, and ONLY (!!!) here will you see the true performance of the LUT (in numbers and graphs) and how good the color engine actually is.

4) Now we make the LUT active in the LUT box and evaluate again; this is what you will be living with when you use the display with this LUT. Hopefully the numbers and graphs are very close or matching to what you got in step (3). If not, then the LUT box introduced distortion.

You will see that different LUT boxes (especially consumer-grade boxes) can give different results...
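A minimal Python sketch of comparing steps (3) and (4); the Lab numbers are random stand-ins for what a meter would actually report:

Code:
import numpy as np

def de76(lab_a, lab_b):
    # CIE76, the simplest delta E; dE2000 would be the usual choice.
    return np.linalg.norm(np.asarray(lab_a) - np.asarray(lab_b), axis=-1)

# Lab measurements of the same patch set:
# step (3): LUT evaluated with the box in passthrough mode
# step (4): the same LUT active inside the box
lab_step3 = np.random.default_rng(1).uniform(20, 90, (243, 3))
lab_step4 = lab_step3 + np.random.default_rng(2).normal(0, 0.4, (243, 3))

box_added = de76(lab_step3, lab_step4)
print(f"mean {box_added.mean():.2f} dE, max {box_added.max():.2f} dE")
# Numbers well above zero mean the LUT box itself is adding distortion.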


- M

post #367 of 656 Old 03-02-2014, 07:16 PM
sillysally
Quote:
Originally Posted by zoyd View Post

Attention LightSpace users: Just released! zoyd's ultimate profiler pak 3K: 3000 patches of pure hand-forged awesomeness. Designed with cutting-edge color science tools, and a little of my own special sauce to kick it up a notch! With a special configuration aimed at video color reproduction, this pak is guaranteed to bring about a state of calibration nirvana. Just import using the csv tab of your display characterization dialog and away you go.

But wait, that's not all...included with zoyd's ultimate profiler pak 3K is the now famous 1000 patch quasi-random perceptual space-filling volume verification pak (also known as 1000 points of light). A torture test of awesome proportions beyond all previous verification tools. Just run the 1000 points of light after your display is calibrated and send the results to our technical support staff (that's me), and we'll report back how awesome your display color response really is!



zoyds_ultimate_pak_3k.zip 27k .zip file

Now I read this. I am plowing through a very large profile from Steve; I will check your patch set out tomorrow.

Enjoyed how you worded your post.

ss
post #368 of 656 Old 03-02-2014, 08:31 PM
gwgill
Quote:
Originally Posted by spacediver View Post

If there is no correlation whatsoever, then everything is a crapshoot, and the whole idea of verification samples is meaningless, unless you're simply measuring the anchor points used in profiling.

If a device's response isn't monotonic or even cohesive, then: 1) you can't profile it with a practical number of sample points; 2) you can't correct it with a cLUT that has a practical number of grid points; 3) you're implying that between the points you have corrected, the colors are wild and unpredictable, and hence it isn't a device suitable for image display, just psychedelic screensaver patterns.
post #369 of 656 Old 03-02-2014, 08:39 PM
spacediver
Quote:
Originally Posted by gwgill View Post

If a device's response isn't monotonic or even cohesive, then: 1) you can't profile it with a practical number of sample points; 2) you can't correct it with a cLUT that has a practical number of grid points; 3) you're implying that between the points you have corrected, the colors are wild and unpredictable, and hence it isn't a device suitable for image display, just psychedelic screensaver patterns.

I prefer the term rainbow nonsense.
post #370 of 656 Old 03-02-2014, 08:54 PM
gwgill
Quote:
Originally Posted by spacediver View Post

If all we were interested in was the search for physical errors, then I would completely agree with Mike and Steve. You need to sample the LUT uniformly. So if the LUT has 100 values and you only have 11 verification samples to allocate, then you should allocate them at the LUT values of 0, 10, 20, 30... 100.
If you are dealing with something that has a non-linear transfer function, and you are interested in (say) the statistics of the errors in the output, then you would ideally pick sample points uniformly distributed in the output space. Or you could get the equivalent effect by picking sample points uniformly in the input space and then weighting the output errors to compensate for a non-uniform distribution of output sample points. Or some combination of both.
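A short Python illustration of the equivalence described here, assuming a pure power-law (gamma 2.4) transfer function:

Code:
import numpy as np

gamma = 2.4  # assumed display transfer function: Y = in ** gamma

# Option 1: choose inputs so the *outputs* are uniformly spaced.
out_targets = np.linspace(0, 1, 11)
inputs_for_uniform_output = out_targets ** (1 / gamma)

# Option 2: keep inputs uniform, but weight each sample's error by the
# local slope dY/d(in) so the output-space statistics come out the same.
inputs_uniform = np.linspace(0, 1, 11)
slope = gamma * np.clip(inputs_uniform, 1e-6, 1) ** (gamma - 1)
weights = slope / slope.sum()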
post #371 of 656 Old 03-02-2014, 09:01 PM
spacediver
Zoyd, Graeme, I just had a thought.

It should be possible to simulate a display, and then randomly introduce nonlinearities across the LUT. The key here is that you'd have advance knowledge of exactly how each of the 10 - 16 odd million points are performing.

One could then simulate a variety of verification sampling schemes and see which scheme provided a better representation of actual display performance.
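That experiment fits in a few lines of Python (a toy stand-in: a 33^3 grid rather than all ~16.7 million 8-bit triplets, and an invented error field known in advance by construction):

Code:
import numpy as np

rng = np.random.default_rng(42)
N = 33

# "True" error at every grid point: a smooth ramp plus one localized
# trouble spot.
g = np.linspace(0, 1, N)
R, G, B = np.meshgrid(g, g, g, indexing="ij")
true_de = 0.5 + 1.5 * B + 6.0 * np.exp(-((R - .8)**2 + (G - .7)**2 + B**2) / .02)

def estimate(idx):
    return true_de[idx[:, 0], idx[:, 1], idx[:, 2]].mean()

# Scheme 1: 200 uniform random verification points.
uniform = rng.integers(0, N, (200, 3))
# Scheme 2: 200 points biased toward the dark corner.
biased = (rng.random((200, 3)) ** 2 * N).astype(int)

print(f"truth {true_de.mean():.2f}  uniform {estimate(uniform):.2f}  "
      f"biased {estimate(biased):.2f}")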
post #372 of 656 Old 03-03-2014, 02:04 PM
Vitalii427
Quote:
Originally Posted by spacediver View Post

Zoyd, Graeme, I just had a thought.

It should be possible to simulate a display, and then randomly introduce nonlinearities across the LUT. The key here is that you'd have advance knowledge of exactly how each of the 10 - 16 odd million points are performing.

One could then simulate a variety of verification sampling schemes and see which scheme provided a better representation of actual display performance.
That's what I was talking about in posts #259 and #299
post #373 of 656 Old 03-03-2014, 02:26 PM
spacediver
Quote:
Originally Posted by Vitalii427 View Post

That's what I was talking about in posts #259 and #299

Indeed, you did. My bad!

In order to do this, we'd need a reasonably good idea of how and why displays fail to be perfectly additive. This was briefly discussed here.

Sub-pixel leakage seems to be an issue.

This would be a good MATLAB project for learning more about displays and colorimetry.
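As a starting point, additivity itself is easy to check from four measurements; a Python sketch (the XYZ numbers are placeholders, not real data):

Code:
import numpy as np

# XYZ of full red, green, blue, and white (placeholder values).
XYZ_R = np.array([41.2, 21.3, 1.9])
XYZ_G = np.array([35.8, 71.5, 11.9])
XYZ_B = np.array([18.0, 7.2, 95.0])
XYZ_W = np.array([96.0, 101.0, 109.5])

# A perfectly additive display satisfies R + G + B = W (after black
# subtraction); any residual hints at e.g. sub-pixel leakage.
print(XYZ_R + XYZ_G + XYZ_B - XYZ_W)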
post #374 of 656 Old 03-04-2014, 03:40 AM
fluxo
Thank you for your great contribution, zoyd.

I apologize if this has been asked already. I wonder whether you considered randomising the patch sizes, too, in a second different test run?

For some display types, that would not make any difference. On a plasma display, some interesting differences might be found.
post #375 of 656 Old 03-04-2014, 06:07 AM - Thread Starter
zoyd
Quote:
Originally Posted by fluxo View Post

Thank you for your great contribution, zoyd.

I apologize if this has been asked already. I wonder whether you considered randomising the patch sizes, too, in a second different test run?

For some display types, that would not make any difference. On a plasma display, some interesting differences might be found.

I prefer to use a fixed pattern configuration and look at sensitivity to average video level/luminance when verifying. None of the pattern generators can randomize either window size or background at the moment anyway.

Here are some measurements I did to test pattern geometry sensitivity. The CalMAN measurements were all made using a calibration based on a 4% window overlaid on an actual video frame with average relative luminance (ARL) of 13% (BT.1886 transfer function)

Verification patterns the same as calibration patterns (the measurement charts for each case were posted as images):

- 6.5% windows
- 11% windows
- 11% windows from Ted's disk
- 20% average video level
- 25% average video level
- 35% average video level
- 41% average video level

Here is the inverse case using my random 1000 verification patches: this time the calibration used 11% windows and the verification used 13% ARL patterns.
So the maximum deviation I found was between 6.5% windows and APL/ARL type windows with an average 1.95 dE vs. 0.5 dE. I suppose you could try and randomize the profiling pattern geometry to reduce this difference, but I still prefer to weight the calibration with what I have found to be typical film relative luminance weighting of ~15%. I also found that the 25%-35% APL (average picture/video level) were the best matches to this luminance weighting.
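The ARL bookkeeping is simple; a Python sketch of how a window-plus-background pattern is held at a target average relative luminance:

Code:
def frame_arl(window_fraction, patch_luminance, background_luminance):
    """Average relative luminance of a window patch on a background (0..1)."""
    return (window_fraction * patch_luminance
            + (1 - window_fraction) * background_luminance)

# A 4% full-white window aimed at 13% ARL: solve for the background.
bg = (0.13 - 0.04 * 1.0) / 0.96
print(round(frame_arl(0.04, 1.0, bg), 3))   # -> 0.13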
post #376 of 656 Old 03-04-2014, 06:22 AM
fluxo
Thank you very much for your reply, zoyd. I'm thinking through what you've said.
post #377 of 656 Old 03-04-2014, 06:37 AM
ConnecTEDDD
Zoyd, did you use your notebook output as a pattern generator, or external hardware?

Did you use my calibration disk from your notebook as Blu-ray playback, or the media-files version, or did you play the Blu-ray disc on a stand-alone Blu-ray player?

Ted's LightSpace CMS Calibration Disk Free Version for Free Calibration Software: LightSpace DPS + CalMAN ColorChecker
S/W: LightSpace CMS, SpaceMan ICC, SpaceMatch DCM, CalMAN 5, CalMAN RGB, ChromaPure, CalPC, ControlCAL
Meters: JETI Specbos 1211, Klein K-10A, i1PRO2, i1PRO, SpectraCAL C6, i1D3, C5
post #379 of 656 Old 03-04-2014, 03:56 PM - Thread Starter
zoyd
So we saw in this post that, when fed the same perceptually optimized patch set, the Argyll algorithm generated a statistically significant (but perceptually tiny) performance improvement over the LS algorithm. After the discussion of device-space vs. perceptual-space sampling approaches, I was curious how the two algorithms cope with sparsely sampled profiles, so I tested this with a minimal set of patches: 20 single-channel RGB steps, 50 neutral-axis steps, and a 5x5x5 device-space cube = 243 patches, including 4 black and 4 white.


Here are the results (dE histograms, posted as images):
A couple of things to note here. First, the skewed and bumpy nature of the histograms (longish tails on the high-dE side) demonstrates how random sampling picks up poor gamut-correction performance. Secondly, it's clear that the LS algorithm does not cope well with sparse sampling, and I think this is the reason Steve & Co. focus so much on "coverage" in device-space coordinates: it is something the LS algorithm requires to get good results, and it is not (as much of) a limitation for the Argyll algorithm. The LUT that LS creates in this case is actually not usable, due to some nasty interpolation problems within the yellow region of the gamut, while the Argyll LUT is perfectly fine. Whether or not this is related to sotti's comment about the limitations of tri-linear interpolation I don't know, but in any case it's a trade-off LS has made in sacrificing some robustness for computational speed.
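One plausible Python reconstruction of that 243-patch set (20 steps per channel x 3 + 50 grays + 5x5x5 cube + 4 black + 4 white; the exact step placement is an assumption):

Code:
import itertools
import numpy as np

steps20 = np.linspace(0, 255, 21)[1:]                # 20 non-zero steps
single = ([(v, 0, 0) for v in steps20] +
          [(0, v, 0) for v in steps20] +
          [(0, 0, v) for v in steps20])              # 60 single-channel
neutral = [(v, v, v) for v in np.linspace(0, 255, 52)[1:-1]]      # 50 grays
cube = list(itertools.product(np.linspace(0, 255, 5), repeat=3))  # 125
extremes = [(0, 0, 0)] * 4 + [(255, 255, 255)] * 4   # repeated endpoints

patches = single + neutral + cube + extremes
print(len(patches))  # 243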
post #380 of 656 Old 03-05-2014, 04:27 PM
sotti
Quote:
Originally Posted by zoyd View Post

Whether or not this is related to sotti's comment about the limitations of tri-linear interpolation I don't know but in any case it's a trade-off LS has made in sacrificing some robustness for computational speed.

I know a 17-point LUT has significant compromises, as the tri-linear interpolation causes significant artifacting. I haven't done much testing with 33s with the new tech I was working on, but with 64s/65s the impact should be pretty small.
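For anyone wondering where the artifacting comes from, here is a bare-bones trilinear lookup in Python; between grid nodes the output is piecewise linear, so a small (e.g. 17-point) grid straightens out any curvature in the display's response:

Code:
import numpy as np

def trilinear(lut, rgb):
    """lut: (N, N, N, 3) array of output triplets; rgb: input in [0,1]."""
    n = lut.shape[0] - 1
    p = np.clip(np.asarray(rgb, float), 0, 1) * n
    i = np.minimum(p.astype(int), n - 1)      # lower grid corner
    f = p - i                                 # fractional position
    out = np.zeros(3)
    for dr, dg, db in np.ndindex(2, 2, 2):    # blend the 8 cube corners
        w = ((f[0] if dr else 1 - f[0]) *
             (f[1] if dg else 1 - f[1]) *
             (f[2] if db else 1 - f[2]))
        out += w * lut[i[0] + dr, i[1] + dg, i[2] + db]
    return out

# Identity 17-point LUT: exact only where the content is already linear.
g = np.linspace(0, 1, 17)
identity = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
print(trilinear(identity, (0.3, 0.6, 0.9)))   # ~[0.3 0.6 0.9]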

Joel Barsotti
SpectraCal
CalMAN Lead Developer
post #381 of 656 Old 03-06-2014, 03:01 AM
Light Illusion
The colour engine we use is indeed 'tuned' for well-populated data sets, as sparse sets are not at all viable for accurate final calibration, so there is a 'points limit' below which you will get garbage returned.
This is very intentional, to gain the best accuracy with well-populated data sets.
So basically the above 'test' is exactly what we would expect, and not valid at all.
And no, we do not use tri-linear interpolation anywhere within LightSpace.

Steve

Steve Shaw
LIGHT ILLUSION

post #382 of 656 Old 03-06-2014, 03:23 AM
Iron Mike
I hope each and every one here who claimed superior accuracy and the perfection of dE is reading this thread....

http://www.avsforum.com/t/1520371/huge-de94-differences-between-chromapure-and-calman

You know who you are....
post #383 of 656 Old 03-06-2014, 08:01 AM
sotti
Quote:
Originally Posted by Iron Mike View Post

I hope each and every one here who claimed superior accuracy and the perfection of dE is reading this thread

I don't think anyone has claimed dE is perfect, just that it is the best objective metric that we have available.

True comparisons can only be done with an objective metric.

Even a double-blind test is still just personal preference; you've only removed bias.

In this thread we've been specifically referencing the dE2000 metric, which is also the default one CalMAN uses. Anyone who is serious about color science knows dE isn't perfect, but that doesn't make it useless and it's still the best way to evaluate a 3D LUT.

post #384 of 656 Old 03-06-2014, 09:35 AM
spacediver
Quote:
Originally Posted by Iron Mike View Post

I hope each and every one here who claimed superior accuracy and the perfection of dE is reading this thread....


http://www.avsforum.com/t/1520371/huge-de94-differences-between-chromapure-and-calman

You do realize that the main issue in that thread was whether dE should be calculated with respect to the color error across the volume, or across a two-dimensional chromaticity projection, right? That issue has nothing to do with whether dE is itself a good objective metric to use.

I imagine that in a 3D LUT context, one would use the full volumetric reporting of dE, as one is interested in more than just chromaticity.
post #385 of 656 Old 03-06-2014, 09:57 AM
spacediver
Quote:
Originally Posted by Light Illusion View Post

The colour engine we use is indeed 'tuned' for well-populated data sets, as sparse sets are not at all viable for accurate final calibration, so there is a 'points limit' below which you will get garbage returned.
This is very intentional, to gain the best accuracy with well-populated data sets.
So basically the above 'test' is exactly what we would expect, and not valid at all.
And no, we do not use tri-linear interpolation anywhere within LightSpace.

Steve

Would it be possible to tune the LS engine to perform well under such sparse conditions?
post #386 of 656 Old 03-06-2014, 03:21 PM
Iron Mike
Quote:
Originally Posted by sotti View Post

Anyone who is serious about color science knows dE isn't perfect.

yup, couldn't agree more...

post #387 of 656 Old 03-06-2014, 03:26 PM
sotti
Quote:
Originally Posted by Iron Mike View Post

yup, couldn't agree more...

But they use it everywhere
http://www.cis.rit.edu/fairchild/PDFs/PRO39.pdf

Everything any serious color scientist is doing is relying heavily on the fact that dE2000 is the most objective, most accurate metric we have to communicate about color differences.

post #388 of 656 Old 03-06-2014, 03:38 PM
Iron Mike
Quote:
Originally Posted by sotti View Post

But they use it everywhere
http://www.cis.rit.edu/fairchild/PDFs/PRO39.pdf

Everything any serious color scientist is doing is relying heavily on the fact that dE2000 is the most objective, most accurate metric we have to communicate about color differences.

Yes, of course.... who is not using dE?!

It's about understanding what the formula tries to do and the limitations that it can have in real-world application...

post #389 of 656 Old 03-06-2014, 05:05 PM
Wouter73
Yes, but for getting colors right, it still trumps "visual validation".

Visual validation is useful for things dE doesn't cover, like artifacts and the effect of ABL, not for judging greyscale or color accuracy.
post #390 of 656 Old 03-06-2014, 08:07 PM
spacediver
The only practical problem that I'm aware of involving dE is when people blindly assume that non-sampled points will conform to the average or distribution of the sampled points.

However, this is a limitation of time and resources, not of the adequacy of the formula itself. If you had the time to measure each and every point on the display (e.g. all 256 grayscale steps), then dE would provide excellent information.

Intelligent LUT generation and sampling techniques can reduce the impact of this limitation, and a visual assessment can provide a sanity check.
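In that spirit, the useful thing is to report the whole error distribution rather than a single average; a Python sketch over a full 256-step grayscale sweep (stand-in numbers, and CIE76 in place of the thread's dE2000):

Code:
import numpy as np

steps = np.arange(256)
reference = np.stack([100 * steps / 255,            # target L* (stand-in)
                      np.zeros(256), np.zeros(256)], axis=-1)
measured = reference + np.random.default_rng(3).normal(0, 0.3, (256, 3))

de = np.linalg.norm(reference - measured, axis=-1)  # CIE76 per step
print(f"mean {de.mean():.2f}  p95 {np.percentile(de, 95):.2f}  "
      f"max {de.max():.2f}")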