New calibration disc - Page 77 - AVS Forum
post #2281 of 2307 | 12-06-2013, 09:37 PM | Iron Mike
Quote:
Originally Posted by zoyd View Post

Your argument is that the software is unable to self-validate its results using straightforward calculations of the color difference between its intended targets and what it actually produces. Argyll, CM, CP, and HCFR have been around for a few years (Argyll the longest) and have been cross-checked to death, so I'm not sure why you are so skeptical of a set of measurements demonstrating a certain level of performance. It's not rocket science; where in particular do you think the measurements are problematic? The meters have knowable levels of precision and the software calculates color differences based on what the meters tell it. What's the problem?

No, that is not my argument. My argument is that self-validation is not "scientific," and that software applications are prone to error by the nature of their complexity, upgrades, etc. Just look at the bug-fix history of Argyll... and that goes for any software.

No enterprise-level developer would make a statement like "my software performs the same as this high-end pro solution because I think I used all the right formulas and I think I'm doing the same thing they are, although I have not tested their solution." That is delusional, amateur, junior-level developer talk. My objection is to that statement, not to Argyll.

You don't know what LS is doing, and you don't know what CM & CP are doing, especially when it comes to cube calculation... they all use different approaches.

calibration & profiling solutions: Lightspace, Spaceman, Calman, Argyll, ColorNavigator, basICColor
profiling & calibration workflow tools: Display Calibration Tools
meter: Klein K-10 A, i1Pro, i1D3
AVS thread: Lightspace & Custom Color Patch Set & Gamma Calibration on Panasonic 65VT60
post #2282 of 2307 | 12-06-2013, 09:42 PM | Iron Mike
Quote:
Originally Posted by spacediver View Post

Ok so it seems we're closer to being on the same page here.

But can you understand how saying something like this makes no sense?
You've just now agreed that you can cross validate by comparing 3D LUTs by running the greyscale, gamma, gamut, reports, etc. You can therefore directly compare the results of different 3D LUTs, and you don't need to know anything about the source code.

Furthermore, grayscale, gamma, and gamut will NOT show the success of a 3D LUT. You need to sample the entire colorspace to assess the success of a 3D LUT.

I never said that you need to know the source code (?!?)

It does make sense if you READ THE ENTIRE CONVERSATION and know what's involved... I don't have time to explain all the specifics here. And of course you should validate more points; that's exactly what I STATED a few posts ago: the best way is to run a full 17^3 as verification. I was just giving you examples because you are asking very basic questions...

post #2283 of 2307 | 12-06-2013, 10:31 PM | spacediver
Quote:
Originally Posted by Iron Mike View Post

I never said that you need to know the source code (?!?)

It does make sense if you READ THE ENTIRE CONVERSATION and know what's involved... I don't have time to explain all the specifics here. And of course you should validate more points; that's exactly what I STATED a few posts ago: the best way is to run a full 17^3 as verification. I was just giving you examples because you are asking very basic questions...

I've just re-read from the beginning (and I see your discussion about the 17^3).

I agree that you need empirical data from BOTH SW's to make the claim that Zoyd was making.
post #2284 of 2307 | 12-06-2013, 11:01 PM | Iron Mike
Quote:
Originally Posted by spacediver View Post

I've just re-read from the beginning (and I see your discussion about the 17^3).

I agree that you need empirical data from BOTH SW's to make the claim that Zoyd was making.

It gets quite interesting when you validate a display (corrected by a LUT) with multiple apps... sometimes you see that 3 different apps report 3 different things. You need to make sure to use the same gear and the exact same meter offsets, and you need to run these validation sets fairly quickly so that display drift is a non-issue (for plasma etc.). So a K10 is very useful, not only because of its top-of-the-food-chain repeatability (you know it's reporting the same thing in all apps) but also because of its speed, so you can do everything very fast and rule out display drift...

And when you (sometimes) have 3 different apps telling you 3 different things, you then have to use your eyes on good calibration images to decide which app you can trust...

post #2285 of 2307 | 12-06-2013, 11:47 PM | spacediver
Yeah, I can see how those complexities may arise in both the profiling phase and the testing/validation phase.

A sanity check would be to see whether the different apps create the same profile on a given display (keeping warm up time constant). Then you can place confidence in the fact that these apps at least measure the same way. Thus, if any differences arise in the 3DLUTs, then you can be more confident that this is due to differences in the cube creation code (interpolation algorithms, transforms, etc.).

Another solution may be to feed the different apps an identical profile (you could create a theoretical one), and see whether they output the same cube. I'm pretty sure you can do this in Argyll (I guess that is one advantage of a multiple hoops approach: you can intervene at any given hoop!). I wonder if you can do a similar thing in LS.

That would be an awesome experiment to do actually. You could do it without any instruments.
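The instrument-free experiment described above could be sketched roughly like this: export a 3D LUT from each package for the same input profile, then diff the cubes node by node. The flat list-of-triplets representation and the function name here are hypothetical; real .cube or LightSpace XML exports would need a small parser first.

```python
def max_rgb_diff(cube_a, cube_b):
    """Largest per-channel difference between two 3D LUTs.

    Each cube is a flat list of (r, g, b) output triplets in 0..1,
    in the same node order (a made-up format for this sketch)."""
    assert len(cube_a) == len(cube_b), "cubes must have the same node count"
    worst = 0.0
    for (r1, g1, b1), (r2, g2, b2) in zip(cube_a, cube_b):
        worst = max(worst, abs(r1 - r2), abs(g1 - g2), abs(b1 - b2))
    return worst

# Two identity 17^3 cubes, standing in for exports from two packages:
n = 17
identity = [(r / (n - 1), g / (n - 1), b / (n - 1))
            for b in range(n) for g in range(n) for r in range(n)]
print(max_rgb_diff(identity, identity))  # 0.0 for identical cubes
```

If the two packages produced identical cubes from the same profile, the maximum difference would be zero; any nonzero residue would quantify how much their cube engines actually diverge.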
post #2286 of 2307 | 12-07-2013, 01:50 AM | ConnecTEDDD
Quote:
Originally Posted by xvfx View Post

Because the last I read about all this dE stuff, anything below 3 wasn't noticeable unless you took your sets into perfectly controlled studio locations and performed calibration contests.

A 1.0 dE color difference is easily visible in this test.

Ted's LightSpace CMS Calibration Disk Free Version for Free Calibration Software: LightSpace DPS + CalMAN ColorChecker
S/W: LightSpace CMS, SpaceMan ICC, SpaceMatch DCM, CalMAN 5, CalMAN RGB, ChromaPure, CalPC, ControlCAL
Meters: JETI Specbos 1211, Klein K-10A, i1PRO2, i1PRO, SpectraCAL C6, i1D3, C5
post #2287 of 2307 | 12-07-2013, 01:53 AM | sillysally
Quote:
Originally Posted by gwgill View Post

The Klein is a lovely instrument, particularly for low-light measurement, but do you actually think it's more accurate than a top-end spectro?

Notice their spec only mentions accuracy for illuminant A, not for a display. Their conformance to the 1931 2-degree curves is nice, but not perfect. Typically you need correction matrices for each display type with a colorimeter to make up the difference, and they are created by calibrating against a... spectrometer.

Is there any concrete data about their accuracy beyond the spec sheet?

I didn't see any reason to get into this debate because I have never used any software other than LS and CM for my 17^3 LUTs.

However, I have just completed testing two Jeti 1201s with my VT60, comparing those repeatable readings against my K10-A and i1Pro 2.
Conclusion: the K10-A is spot on and much faster, and the i1Pro 2 is better and faster than the Jeti 1201 (the 1201 seems to have an issue). The Jeti 1211 is said to be 3x faster than the 1201 if auto-sync is working. However, with the Jeti or i1Pro you should use LLH when running a 17^3 LUT, and there is still a question about the very low-light readings of the i1Pro and the Jetis. So from a practical standpoint, and to get the best readings possible, IMO you are much better off using a K10-A and profiling it and the display to an i1Pro. Unless possible errors in low-light readings, and taking 5 to 6 hours with an i1Pro or Jeti 1211 as opposed to two hours with a profiled K10-A, mean nothing to the calibrator.

ss
post #2288 of 2307 | 12-07-2013, 02:58 AM | Carbon Ft Print
Re: "I don't have that much to prove - my expertise has been on display and up for criticism for a long time now in the shape of ArgyllCMS and in the various papers I've had published in Color Science and computer graphics."

Your experience within a world-class development environment is much different than mine. Perhaps it's a cultural thing, and that's why our views are different. Because making the projector/display as linear as possible is so critical and fundamental to calibrating colors quickly and accurately between hard meter reads, what tools do you provide in your software for pre-cal and post-cal to determine the linearity of the projector/display, before and after, like I've shown? This may be the reason why Iron Mike says your software requires many iterations and tweaking to get it right; LightSpace requires only 1 read per color patch.

If you do not recognize linearity as being important - and it is fundamental to quick and accurate color calibration for LUTs that involve millions of colors - there is no way I would select your software based on the same criteria I used to select LightSpace. One easy solution is to buy LightSpace Home Cinema for its LUT tools and add it to your collection of tools to see how effective your LUTs are from another perspective. Yes?

Like Iron Mike, I use multiple software calibration programs for verification purposes. This is very common among ardent professional and amateur calibrators. We all respect the calibration software programs that are out there, including yours, and acknowledge that they all have their own strengths and weaknesses... that's why we use so many.


Kind Regards,

JJ
post #2289 of 2307 | 12-07-2013, 04:54 AM | zoyd
Quote:
Originally Posted by spacediver View Post

I've just re-read from the beginning (and I see your discussion about the 17^3).

I agree that you need empirical data from BOTH SW's to make the claim that Zoyd was making.

And what of the case where the existing empirical data shows perfect agreement with the required standard? Perfect meaning within the limits of the probe's ability to measure a difference between the device and the required standard. In that case it's irrelevant what the other software does. This is the point I was making that gets lost in all the trumped-up complexities. I agree that validation of different techniques is always a good thing, and something I do in my real life all the time, but there is nothing fundamentally wrong with what I said given the current measured performance of ArgyllCMS. The validation argument is also much stronger when you are considering using new and untested instruments, or amongst established instruments when you are testing new display technologies. The software's core function (reporting the probe's assessment of color differences) is already well validated and is all that is needed to assess the performance of a 3D LUT in mapping a source space to a target device.

Also, people need to separate the process of calibration (metrology) from the end result (what you see). The first is a well-defined methodology with accepted rules for how well the process works in matching a device to a standard. If there is a disconnect between that process and the images it produces, that is another discussion.
post #2290 of 2307 | 12-07-2013, 10:34 AM | spacediver
Quote:
Originally Posted by zoyd View Post

And what of the case where the existing empirical data shows perfect agreement with the required standard? Perfect meaning within the limits of the probe's ability to measure a difference between the device and the required standard. In that case it's irrelevant what the other software does. This is the point I was making that gets lost in all the trumped-up complexities. I agree that validation of different techniques is always a good thing, and something I do in my real life all the time, but there is nothing fundamentally wrong with what I said given the current measured performance of ArgyllCMS.

In the context of judging whether one piece of software's cube is adequate, then yes, I would agree that so long as the delta E's are below perceptual threshold, then the software is adequate.

But if we are judging which cube is superior, I would argue that you need to see which cube has lower delta E's, and I say this for a few reasons:

1: Individual sensitivities to color differences exist as a distribution. Having lower delta E's means that highly sensitive individuals are better protected from perceptual color inaccuracies.

2: Having lower delta E's gives more margin of error for things like display drift.

3: More fundamentally, when we talk about one cube being superior to another, our judgments shouldn't simply stop at "beyond this point it doesn't matter, as nobody can tell the difference". We should acknowledge that lower delta E's reflect superior cube algorithms, regardless of whether this has practical significance.

(note that nothing I am saying suggests that one is better than the other. For all I know they end up with identical results).
post #2291 of 2307 | 12-07-2013, 10:47 AM | spacediver
Quote:
Originally Posted by zoyd View Post

The software's core function (reporting the probe's assessment of color differences) is already well validated and is all that is needed to assess the performance of a 3D LUT in mapping a source space to a target device.

Yep, this makes sense to me. That's why I thought it was important in this discussion to tease apart validating cube algorithms from validating measurement abilities (metrology). The latter seems to be a trivial issue, although I think IronMike brought up some interesting caveats (making sure offsets are the same, etc.).
post #2292 of 2307 | 12-07-2013, 01:03 PM | zoyd
Quote:
Originally Posted by spacediver View Post

In the context of judging whether one piece of software's cube is adequate, then yes, I would agree that so long as the delta E's are below perceptual threshold, then the software is adequate.

But if we are judging which cube is superior, I would argue that you need to see which cube has lower delta E's, and I say this for a few reasons:

1: Individual sensitivities to color differences exist as a distribution. Having lower delta E's means that highly sensitive individuals are better protected from perceptual color inaccuracies.

2: Having lower delta E's gives more margin of error for things like display drift.

3: More fundamentally, when we talk about one cube being superior to another, our judgments shouldn't simply stop at "beyond this point it doesn't matter, as nobody can tell the difference". We should acknowledge that lower delta E's reflect superior cube algorithms, regardless of whether this has practical significance.

(note that nothing I am saying suggests that one is better than the other. For all I know they end up with identical results).

In the abstract, or as a general calibration strategy, I would agree with you, but in the context of video calibration the software is actually not the weakest link. The biggest limitation is 8-bit quantization: you can generate a 0.3-0.4 dE00 color difference by flipping just 1 bit in 1 color channel. So once your residual error spread reaches a standard deviation of about that level, you are just measuring bit noise and further improvement is not possible.

If you want to provide more margin to accommodate observer variability, that is not a precision question or an algorithm issue either; for that you must use the most accurate meter you have.
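The one-code-value claim is easy to sanity-check numerically. A rough sketch (my own, not zoyd's calculation): convert two 8-bit grays differing by one green code value to CIELAB and take the color difference. Plain CIE76 ΔE is used as a stand-in because CIEDE2000 is lengthy to implement; both land well under 1.0 near mid-gray.

```python
import math

def srgb_to_lab(rgb8):
    """8-bit sRGB triplet -> CIELAB (D65), via the standard sRGB pipeline."""
    def lin(c):
        # sRGB electro-optical transfer function
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb8)
    # linear sRGB -> XYZ (D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # XYZ -> CIELAB relative to the D65 white point
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def de76(a, b):
    """CIE76 color difference (Euclidean distance in Lab)."""
    return math.dist(srgb_to_lab(a), srgb_to_lab(b))

# One code value in one channel at mid-gray: a sub-1.0 dE step
print(round(de76((128, 128, 128), (128, 129, 128)), 2))
```

The printed value is in the fraction-of-a-dE range, which is consistent with the point that residual errors near this magnitude are indistinguishable from 8-bit quantization noise.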
post #2293 of 2307 | 12-07-2013, 01:11 PM | spacediver
Quote:
Originally Posted by zoyd View Post

In the abstract, or as a general calibration strategy, I would agree with you, but in the context of video calibration the software is actually not the weakest link. The biggest limitation is 8-bit quantization: you can generate a 0.3-0.4 dE00 color difference by flipping just 1 bit in 1 color channel. So once your residual error spread reaches a standard deviation of about that level, you are just measuring bit noise and further improvement is not possible.

I think I understand your point. You're saying that the cube can only perform within the limitations of the available bit depth, and once that limit has been reached, it is meaningless to talk about improvements in the algorithm. So if your cube performs at that level, it is in principle not able to perform better, and you can therefore confidently say that it's performing just as well as any other algorithm out there.


Still would be fascinating to see how a cube generated by LS would compare to a cube generated by argyll, given the exact same profile. Anyone know whether it's possible to manually feed LS a profile?
post #2294 of 2307 | 12-07-2013, 01:20 PM | TomHuffman
Quote:
Originally Posted by spacediver View Post

I think I understand your point. You're saying that the cube can only perform within the limitations of the available bit depth, and once that limit has been reached, it is meaningless to talk about improvements in the algorithm. So if your cube performs at that level, it is in principle not able to perform better, and you can therefore confidently say that it's performing just as well as any other algorithm out there.


Still would be fascinating to see how a cube generated by LS would compare to a cube generated by argyll, given the exact same profile. Anyone know whether it's possible to manually feed LS a profile?
The primary difference between competing cube calibration approaches will lie in the specific interpolation method used. The 8-bit video color space includes over 10 million colors. The largest cube calibrations target just under 5,000 colors. That means that only 0.05% of the colors are directly addressed. The remaining 99.95% of the color space is adjusted through interpolation.
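A minimal sketch of that interpolation step, using textbook trilinear blending of the 8 cube nodes surrounding an input color (each package discussed in this thread uses its own, possibly more sophisticated, variant):

```python
def trilerp(lut, n, r, g, b):
    """Trilinear interpolation in an n^3 LUT.

    lut[i][j][k] is the output (r, g, b) triplet for input node (i, j, k);
    r, g, b are inputs in 0..1. This is the generic textbook scheme,
    not any particular product's algorithm."""
    def locate(v):
        t = v * (n - 1)
        i = min(int(t), n - 2)   # lower node index along this axis
        return i, t - i          # index and fractional position
    (i, fr), (j, fg), (k, fb) = locate(r), locate(g), locate(b)
    out = [0.0, 0.0, 0.0]
    # Blend the 8 surrounding nodes by their partial-volume weights
    for di, wi in ((0, 1 - fr), (1, fr)):
        for dj, wj in ((0, 1 - fg), (1, fg)):
            for dk, wk in ((0, 1 - fb), (1, fb)):
                node = lut[i + di][j + dj][k + dk]
                w = wi * wj * wk
                for c in range(3):
                    out[c] += w * node[c]
    return tuple(out)

# An identity 17^3 LUT returns its input (to floating-point rounding):
n = 17
ident = [[[(i / (n - 1), j / (n - 1), k / (n - 1)) for k in range(n)]
          for j in range(n)] for i in range(n)]
print(trilerp(ident, n, 0.2, 0.5, 0.8))
```

The 4913 stored nodes thus determine every one of the millions of in-between colors, which is why the interpolation scheme, not just the measured points, shapes the final result.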

Tom Huffman
ChromaPure Software/AccuPel Video Signal Generators
ISF/THX Calibrations
Springfield, MO

post #2295 of 2307 | 12-07-2013, 01:26 PM | spacediver
yep, and it would be awesome to see how (or if) the interpolation differs between the approaches.
post #2296 of 2307 | 12-07-2013, 01:30 PM | ConnecTEDDD
Quote:
Originally Posted by TomHuffman View Post

Quote:
Originally Posted by spacediver View Post

I think I understand your point. You're saying that the cube can only perform within the limitations of the available bit depth, and once that limit has been reached, it is meaningless to talk about improvements in the algorithm. So if your cube performs at that level, it is in principle not able to perform better, and you can therefore confidently say that it's performing just as well as any other algorithm out there.


Still would be fascinating to see how a cube generated by LS would compare to a cube generated by argyll, given the exact same profile. Anyone know whether it's possible to manually feed LS a profile?
The primary difference between competing cube calibration approaches will lie in the specific approach taken with interpolation. The 8-bit video color space includes over 10 million colors. The largest cube calibrations target just under 5,000 colors. That means that only 0.05% of the colors are directly addressed. The remaining 99.95% of the color space is adjusted through interpolation.

A 17-point cube profile (4913 color points) for 8-bit color depth is just about perfect.

If you go to a larger cube you will get 'noise' errors due to display/probe instability, which will greatly reduce the quality of the final result, unless you filter out the noise, which just gets you back to a smaller data set.

post #2297 of 2307 | 12-07-2013, 02:00 PM | sotti
Quote:
Originally Posted by spacediver View Post

yep, and it would be awesome to see how (or if) the interpolation differs between the approaches.

As far as I know, CalMAN is the only package using a new volumetric interpolation algorithm (new in 5.2). I don't know about Argyll, but as it's open source, it's not a secret. OTOH, LightSpace is a black box: they don't talk about how it works or why they think it's better, they just insist it is, without any quantifiable data. I don't believe ChromaPure uses interpolation, as it only does direct hardware 3D LUTs, but I could be wrong about that.

CalMAN also differs in that it calibrates a number of points and then interpolates from there.

Joel Barsotti
SpectraCal
CalMAN Lead Developer
post #2298 of 2307 | 12-07-2013, 02:38 PM | spacediver
Quote:
Originally Posted by sotti View Post

As far as I know, CalMAN is the only package using a new volumetric interpolation algorithm (new in 5.2). I don't know about Argyll, but as it's open source, it's not a secret. OTOH, LightSpace is a black box: they don't talk about how it works or why they think it's better, they just insist it is, without any quantifiable data. I don't believe ChromaPure uses interpolation, as it only does direct hardware 3D LUTs, but I could be wrong about that.

CalMAN also differs in that it calibrates a number of points and then interpolates from there.

Well, it should be a simple matter to find out, right? Just feed LS and Argyll the same profile and see whether the resulting 3D LUTs are the same. Is it possible to feed LS a profile?

(by profile I mean the values at the five thousand measured points)
post #2299 of 2307 | 12-07-2013, 03:49 PM | Light Illusion
You can actually define any profile and load it into LightSpace, so long as it's in the correct XML format and the RGB source triplet data matches the read data.
It's actually very flexible like that.

Also, the colour engine in LS is way beyond an algorithm, using a combination of different mathematical computations to combine in a single powerful process that actually calibrates all of the volumetric points - no interpolation, except when exporting a LUT with a greater point cloud size.

The colour engine also assesses all points measured before performing the calibration on each and every point, so it knows the cross-colour distortion involved in the display and can compensate for that.
That's needed because making a change/correction in one area of colour space will change the required correction in another area, due to the cross-colour distortion.

It is actually very interesting 3D maths.

Steve

Steve Shaw
LIGHT ILLUSION

post #2300 of 2307 | 12-07-2013, 03:53 PM | spacediver
very cool, thanks Steve.
post #2301 of 2307 | 12-07-2013, 04:04 PM | zoyd
Quote:
Originally Posted by spacediver View Post

I think I understand your point. You're saying that the cube can only perform within the limitations of the available bit depth, and once that limit has been reached, it is meaningless to talk about improvements in the algorithm. So if your cube performs at that level, it is in principle not able to perform better, and you can therefore confidently say that it's performing just as well as any other algorithm out there.

Yes, that is my supposition. The only thing that the software can do at this level is push the residual distribution around (maybe offload errors into less important points in the gamut) and perhaps clean up the tails, and that will depend on things like how it interpolates to the final LUT dimensions and how it handles detector noise, etc. But these second order tweaks will likely not be noticeable to the end user.
post #2302 of 2307 | 12-07-2013, 06:20 PM | sotti
Quote:
Originally Posted by Light Illusion View Post

Also, the colour engine in LS is way beyond an algorithm, using a combination of different mathematical computations to combine in a single powerful process that actually calibrates all of the volumetric points - no interpolation, except when exporting a LUT with a greater point cloud size.

Steve, seriously, you need to look up the word interpolation.

http://www.merriam-webster.com/dictionary/interpolate

3: to estimate values of (data or a function) between two known values.


Since the final data isn't empirical (you didn't actually measure your final values), it's created from calculations that are an estimate. No matter how good an estimation is, it is interpolation. You are either being completely pedantic, or your comprehension is seriously lacking.

post #2303 of 2307 | 12-08-2013, 01:47 AM | Light Illusion
True - I was just adopting your applied use of the term.

post #2304 of 2307 | 12-08-2013, 10:15 AM | Carbon Ft Print
Quote:
Originally Posted by sotti View Post

Steve, seriously, you need to look up the word interpolation.

http://www.merriam-webster.com/dictionary/interpolate

3: to estimate values of (data or a function) between two known values.


Since the final data isn't empirical (you didn't actually measure your final values), it's created from calculations that are an estimate. No matter how good an estimation is, it is interpolation. You are either being completely pedantic, or your comprehension is seriously lacking.

"You can rob me. You can starve me and you can beat me and you can kill me. Just don't bore me!" -- Gunnery Sgt. Tom Highway (Clint Eastwood), Heartbreak Ridge
post #2305 of 2307 | 12-08-2013, 03:52 PM | gwgill
Quote:
Originally Posted by ConnecTEDDD View Post

A 17-point cube profile (4913 color points) for 8-bit color depth is just about perfect.

If you go to a larger cube you will get 'noise' errors due to display/probe instability, which will greatly reduce the quality of the final result, unless you filter out the noise, which just gets you back to a smaller data set.
That's a very interesting comment, and I'll explain why:

Such a limitation is the characteristic of a profiling approach in which each measurement is taken literally, and the resulting device behavior model is a perfect fit to the measurement points. One sign that such an approach is likely being used is if it requires a regular grid of test points. A regular grid is very easy to create and turn into a profile, but easy is not necessarily optimal - a regular grid explores the RGB space inefficiently.

Using such a profiling scheme, it's easy to understand why the use of highly repeatable and accurate instruments is of more importance than it otherwise would be (i.e. Klein colorimeters?).

A more mature profiling approach doesn't take each measurement literally; instead it makes an allowance for the inherent inaccuracy in each measurement, and uses the overall response of all the measurements to reduce the noise and inaccuracy of the resulting device model. Although it may seem counter-intuitive, the resulting profile is more accurate than any single measurement. Another way of viewing this is that it is an approach that allows averaging across different measurement points within the color space. This is not equivalent to averaging multiple readings of the same test point, since it simultaneously explores the device behavior in more detail and reduces measurement noise. One of the signs that such an approach may be in use is that it does not need a highly structured set of measurements, but can use a scattered data set.

The result is a profiling process that is more forgiving of inconsistency and noise in the instrument measurements, and allows the accuracy of more visually critical areas of the device response to be improved by structuring the measurement data set to sample these areas more densely.
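A toy illustration of that counter-intuitive point, under deliberately simplified assumptions (a one-dimensional linear "device response" and Gaussian measurement noise, both invented for the demo): fit a model to many noisy scattered readings, then compare the model's error against the error of the individual readings themselves.

```python
import random

random.seed(42)

# Hypothetical true device response: a straight line in arbitrary units
def true_response(x):
    return 2.0 * x + 1.0

# Simulated noisy "measurements" at scattered (not gridded) points
xs = [random.uniform(0.0, 1.0) for _ in range(200)]
ys = [true_response(x) + random.gauss(0.0, 0.05) for x in xs]

# Ordinary least-squares line fit: the "device model" built from all readings
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def model(x):
    return slope * x + intercept

# Mean absolute error of the fitted model vs. the raw measurements
model_err = sum(abs(model(x) - true_response(x)) for x in xs) / n
meas_err = sum(abs(y - true_response(x)) for x, y in zip(xs, ys)) / n
print(model_err < meas_err)  # the fit beats the individual readings
```

Because the fit pools information across all 200 points, its error shrinks roughly with the square root of the sample count, so the model ends up more accurate than any single noisy reading, which is exactly the property gwgill describes.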
post #2306 of 2307 | 12-08-2013, 04:07 PM | Light Illusion
LightSpace needs no structured measurements, which is why it can work with film density measurements, not just display profile measurements.
And that is why it can work with any input profile data set - cube-structured or otherwise.

Basically, there is no 'required' data set structure for LightSpace, but as Ted accurately says, a 17^3 profile set is optimal given all standard probe and display limitations.
Should you choose to define your own DIP data set for LightSpace, you can easily use that.

That's the power of the colour engine within LightSpace.

post #2307 of 2307 | 12-08-2013, 04:13 PM | gwgill
Quote:
Originally Posted by Light Illusion View Post


Also, the colour engine in LS is way beyond an algorithm, using a combination of different mathematical computations to combine in a single powerful process that actually calibrates all of the volumetric points - no interpolation, except when exporting a LUT with a greater point cloud size.
Great marketing speak. "our algorithms are beyond algorithms.. etc."

The fact is that any respectable color profiling package (and there are several to choose from, including LS) "calibrates all of the volumetric points". Since you aren't measuring 16 million test points, yes, you are using interpolation, just like everyone has to (anything else, like nearest neighbor, would be unacceptable).
Quote:
The colour engine also assess all points measured before performing the calibration on each and every point, so knows the cross-colour distortion involved in the display, and can compensate for that.
That's needed as making a change/correction in one area of colour space will change the required correction in another area due to the cross-colour distortion.
To translate back to standard nomenclature rather than marketing speak: LS creates a profile of the device response, and then uses that to lookup what display values are needed to reproduce the desired response for each point of the 3DLut grid, just like all the other profiling packages do. Yes, there are lots of details that can and are done differently, and will result in product differences, but casting it in obscure terms doesn't actually make it a different basic process.