A comparison of 3DLUT solutions for the eeColor box - Page 10 - AVS Forum

Iron Mike · 02-27-2014, 08:17 PM
Quote:
Originally Posted by spacediver View Post

Yes, it is simple. I'm just trying to figure out why Mike and Steve don't agree.
They seem to be hung up on this idea of device space being sacrosanct, but human perception is not encoded in device space, so I don't understand the obsession with it.

I can't speak for Steve, but he seems to agree with "optimized sets," as he has posted some himself.

Second, there's no obsession; you just have to understand what you have and what you don't have in a given set. It has all been stated and demonstrated here in this thread.

It's an evolution of traditional sets, all very simple.

calibration & profiling solutions: Lightspace, Spaceman, Calman, Argyll, ColorNavigator, basICColor
profiling & calibration workflow tools: Display Calibration Tools
meter: Klein K-10 A, i1Pro, i1D3
AVS thread: Lightspace & Custom Color Patch Set & Gamma Calibration on Panasonic 65VT60

spacediver · 02-27-2014, 09:16 PM
Quote:
Originally Posted by Iron Mike View Post

I can't speak for Steve but he seems to agree with "optimized sets" as he posted some himself.

yet in the same breath you get thoughts like this:

Quote:
Originally Posted by Light Illusion View Post

Actually, no, when performing accurate calibration human perception should not be taken into account, as that changes with the colour mix being displayed at any given time.
(As well as environmental considerations)

And as you never know in advance what colour will be shown in what combination you can't use that human perception as a basis for calibration.

This is why accurate calibration is done to a set of target values that encompass total volumetric accuracy.

You really do have to treat all colours equally, or you will never be sure of the final calibration.

These statements appear to betray an alarming ignorance of fundamental color theory.

Iron Mike · 02-27-2014, 09:29 PM
Quote:
Originally Posted by spacediver View Post

yet in the same breath you get thoughts like this:
These statements appear to betray an alarming ignorance of fundamental color theory.

spacediver,

You can choose any calibration approach you like and subscribe to any theory you see fit, but I FOR SURE treat all colors (i.e. hues) equally.

spacediver · 02-27-2014, 09:31 PM
Quote:
Originally Posted by Iron Mike View Post

spacediver,

You can choose any calibration approach you like and subscribe to any theory you see fit, but I FOR SURE treat all colors (i.e. hues) equally.

What you don't seem to understand is that "equally" is a relative term. I've asked you this before in a different manner, but you didn't reply (and neither did Steve), so I'll ask it again.

Which space are you talking about when you advocate "equal treatment" of colors?

Iron Mike · 02-27-2014, 09:38 PM
Quote:
Originally Posted by spacediver View Post

What you don't seem to understand is that "equally" is a relative term. I've asked you this before in a different manner, but you didn't reply (and neither did Steve), so I'll ask it again.

Which space are you talking about when you advocate "equal treatment" of colors?

Most of my answers implied this: HSB (or HSV).

Intuitive and perceptually relevant.

spacediver · 02-27-2014, 09:47 PM
Yes, very intuitive, and indeed perceptually relevant in that the independent axes are readily understandable as distinct perceptual attributes.

But they are not perceptually uniform spaces, and this is the critical point.

Perceptual uniformity is a fundamental notion in color science, and is particularly important in the field of calibration.

There is a reason that Delta-E, which is the gold standard of objective, quantifiable color difference, is not simply the Euclidean distance between two points in a space like HSV, but is instead more akin to the Euclidean distance in a perceptually uniform space such as CIELAB or CIELUV.

Ignoring this reality can lead to some rather disastrous consequences. If, for example, one were to express the accuracy of a display in units of HSV, there would be no guarantee that the display was actually well calibrated. Even though the deltas might be low, they could be perceptually huge.
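
As a minimal illustration of that point, here is a sketch assuming numpy and the standard sRGB/D65 CIELAB formulas (nothing here is code from the thread): it compares the Euclidean distance of two color pairs in HSV with the same pairs' dE76 in CIELAB.

Code:
import colorsys
import numpy as np

# sRGB (0-1) -> CIELAB (D65), using the standard published formulas
M_RGB2XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M_RGB2XYZ.T / D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.stack([116 * f[..., 1] - 16,
                     500 * (f[..., 0] - f[..., 1]),
                     200 * (f[..., 1] - f[..., 2])], axis=-1)

def de76(c1, c2):
    # Euclidean distance in CIELAB = classic Delta-E 1976
    return float(np.linalg.norm(srgb_to_lab(c1) - srgb_to_lab(c2)))

def hsv_dist(c1, c2):
    # Naive Euclidean distance in HSV coordinates (not a valid color-difference metric)
    return float(np.linalg.norm(np.array(colorsys.rgb_to_hsv(*c1)) -
                                np.array(colorsys.rgb_to_hsv(*c2))))

# Two pairs with an identical hue step of 0.03 at full saturation and value,
# one near the green primary and one near the blue primary:
green_pair = (colorsys.hsv_to_rgb(0.333, 1, 1), colorsys.hsv_to_rgb(0.363, 1, 1))
blue_pair  = (colorsys.hsv_to_rgb(0.667, 1, 1), colorsys.hsv_to_rgb(0.697, 1, 1))

for name, (c1, c2) in (("green step", green_pair), ("blue step", blue_pair)):
    print(f"{name}: HSV distance = {hsv_dist(c1, c2):.3f}, dE76 = {de76(c1, c2):.1f}")
# Both pairs are the same distance apart in HSV (a 0.03 hue step), yet their dE76
# values differ by roughly a factor of two to three, which is why color-difference
# formulas are defined in CIELAB/CIELUV rather than HSV.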

Iron Mike · 02-27-2014, 10:00 PM
Quote:
Originally Posted by spacediver View Post

Yes, very intuitive, and indeed perceptually relevant in that the independent axes are readily understandable as distinct perceptual attributes.

But they are not perceptually uniform spaces, and this is the critical point.

Perceptual uniformity is a fundamental notion in color science, and is particularly important in the field of calibration.

There is a reason that Delta-E, which is the gold standard of objective, quantifiable color difference, is not simply the Euclidean distance between two points in a space like HSV, but is instead more akin to the Euclidean distance in a perceptually uniform space such as CIELAB or CIELUV.

Ignoring this reality can lead to some rather disastrous consequences. If, for example, one were to express the accuracy of a display in units of HSV, there would be no guarantee that the display was actually well calibrated. Even though the deltas might be low, they could be perceptually huge.

You are a great example of someone who reads a lot yet doesn't understand the full picture... or know how to differentiate.

Nobody said dE is (or should be) calculated from HSV... and nobody said that they are going to evaluate a display via HSV...

There are different ways (models) to look at color, and one picks the right tool for the right task. If color patch sequences are defined in RGB, then HSB/HSV is a convenient way to evaluate them, but there are also other ways. You can do whatever you like; it won't change what the sequence consists of (it's just a different view of it)... so no, you can't run away from the truth. :D

And one more thing... you can easily convert between all of these spaces...
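
For what it's worth, RGB and HSV really are interchangeable views of the same device cube; a quick sketch with Python's standard colorsys module (purely illustrative, not anyone's calibration tooling):

Code:
import colorsys

# Device RGB <-> HSV is an exact, invertible re-parameterization of the same cube,
# so a patch list defined in RGB can always be inspected in HSB/HSV terms instead.
rgb = (0.25, 0.60, 0.40)
hsv = colorsys.rgb_to_hsv(*rgb)
rgb_back = colorsys.hsv_to_rgb(*hsv)

print("HSV view:  ", tuple(round(c, 4) for c in hsv))
print("round trip:", tuple(round(c, 4) for c in rgb_back))   # matches rgb up to rounding

Converting device RGB to a perceptually uniform space such as CIELAB is a different kind of conversion, since it additionally depends on the display's primaries, white point and transfer function.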

spacediver · 02-27-2014, 10:09 PM
Quote:
Originally Posted by Iron Mike View Post



Nobody said dE is (or should be) calculated from HSV...

Exactly. And I'm trying to get you to think about why we don't use HSV for dE, and to make the connection to the color space in which we verify the performance of a 3D LUT.

Iron Mike · 02-27-2014, 10:15 PM
Quote:
Originally Posted by spacediver View Post

Exactly. And I'm trying to get you to think about why we don't use HSV for dE, and to make the connection to the color space in which we verify the performance of a 3D LUT.

I'm good, thanks. There is NOTHING that you need to get me to think about... :D


spacediver · 02-27-2014, 10:20 PM
Quote:
Originally Posted by Iron Mike View Post

I'm good, thanks. There is NOTHING that you need to get me to think about... :D

If you want a bit more spoon-feeding, you can read this post here, or any number of other posts in this thread.

Iron Mike · 02-27-2014, 10:46 PM
Quote:
Originally Posted by spacediver View Post

If you want a bit more spoon-feeding, you can read this post here, or any number of other posts in this thread.

You are confusing quite a few things...

It is not about the perceptual distance; it is about the fact that you can already differentiate between the two colors. Ergo, your eyes perceive two different colors; ergo, your patch set should profile both of them so that the LUT can calculate accurate, nice, clean compensations for them.

Unless you HAVE TO (as in: are forced to) trim your patch set so much that you would rather exclude perceptually closer points than points that are not so perceptually close. Those colors are still NEVER irrelevant. You are apparently okay with cutting colors your eyes can perceive, and you fail to realize that you are then automatically undersampling specific parts of the gamut.

Well, I'm not okay with that, and I'm not being forced to cut my sets. I want the best possible quality out of my display profiles, since I don't make them every day; but I do watch content on my calibrated TV every day... (Are you slowly but surely getting that your trade-off is not really worth it?)

The reasoning and logic behind dE calculation are something different that you again confuse with patch set creation. And just to be clear, as stated now 100 times, including by zoyd in his OP:

When you cut down your patch set, you will have to make compromises. Live with it. And if possible, make smart compromises.

So stop telling people fairy tales because you're too stingy with your time to properly profile a display. There is no cheap way out, and I'm not saying that grid sequences are the holy grail. Really not.

What if a movie (or a specifically color-graded scene) consists mainly of the colors that you cut from your set? You now have to live with the rough interpolation your inferior color engine came up with, since it did not get enough data about that part of the gamut because you thought you could save five minutes... well, enjoy your movie. :)

So please read this over and over:

Colors are never irrelevant. Treat all colors equally. Don't be cheap; otherwise you get what you pay for. :D

spacediver · 02-27-2014, 11:08 PM
I'm talking about verification samples here, and how distributing them evenly over a perceptually uniform space is superior to distributing them evenly over a perceptually non-uniform space. That is what it actually means to treat all colors equally here.

Iron Mike · 02-27-2014, 11:24 PM
Quote:
Originally Posted by spacediver View Post

I'm talking about verification samples here, and how distributing them evenly over a perceptually uniform space is superior to distributing them evenly over a perceptually non-uniform space. That is what it actually means to treat all colors equally here.

Ahhhh, I see we're making progress... ;)

Well, spacediver, the LUT was created from the profiling patch set, and that set did under-sample colors, and that also showed in the visual evaluation...

I pointed this out, as did quite a few others; I'm not sure why all of a sudden you are ignoring this...

But, since you are mentioning validation patch sets:

If your validation patch set now ignores the colors that your initial profiling patch set under-sampled (because it again deems them "irrelevant" colors, or because you again cut the set down unevenly), you will never know about the ugly dE numbers that are hiding there...

Stats (as in dE reports) only show what you actually sample. So again, you should sample the gamut equally so you get an overview of everything that is going on in all parts of it... if you leave out certain parts of the gamut, you'll never know what's going on there...

No color is ever irrelevant.

And to come full circle:

The good thing is, no matter how badly distributed (or poorly sampled) your patch set is, a visual evaluation will show problems, and it did, as outlined in this thread.

- M

spacediver · 02-27-2014, 11:34 PM
Quote:
Originally Posted by Iron Mike View Post


Stats (as in dE reports) only show what you actually sample. So again, you should sample the gamut equally so you get an overview of everything that is going on in all parts of it... if you leave out certain parts of the gamut, you'll never know what's going on there...

Well, there are two things I would imagine play a role in selecting a set of verification samples.

You don't want them to be systematically correlated with your profiling points, because that's kind of cheating: you'd just be verifying the anchor points rather than assessing the quality of the interpolated points.

And you want to prioritize sampling in a way that takes the perceptual distribution of color difference into account.

You can achieve both by using a quasi-random approach, where you randomly sample the space but weight according to perceptual color difference. As far as I understand, this is what zoyd did.

I'm not in a position to comment on whether this runs the risk of excluding regions of color, due to the random element, and if so, how big that risk is. But I will grant that if a region is undersampled, and it happens to be a region that encompasses a large degree of perceptual color difference, then there is a problem.
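
A toy sketch of that idea (illustrative only, assuming numpy and the standard sRGB/D65 CIELAB formulas; it is not zoyd's or ArgyllCMS's actual OFPS code) is a greedy farthest-point selection over random device-RGB candidates, scored in CIELAB:

Code:
import numpy as np

# sRGB (0-1) -> CIELAB (D65), standard formulas (same helper as in the earlier sketch)
M_RGB2XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M_RGB2XYZ.T / D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.stack([116 * f[..., 1] - 16,
                     500 * (f[..., 0] - f[..., 1]),
                     200 * (f[..., 1] - f[..., 2])], axis=-1)

def farthest_point_patches(n_patches, n_candidates=20000, seed=0):
    """Greedy maximin ('farthest point') selection of device-RGB patches, scored in Lab."""
    rng = np.random.default_rng(seed)
    cand_rgb = rng.random((n_candidates, 3))          # random device-RGB candidates
    cand_lab = srgb_to_lab(cand_rgb)
    chosen = [int(rng.integers(n_candidates))]        # random starting patch
    dist = np.linalg.norm(cand_lab - cand_lab[chosen[0]], axis=1)
    for _ in range(n_patches - 1):
        nxt = int(np.argmax(dist))                    # candidate farthest (in Lab) from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(cand_lab - cand_lab[nxt], axis=1))
    return cand_rgb[chosen]                           # patches spread roughly evenly in Lab

patches = farthest_point_patches(250)
print(patches.shape)    # (250, 3) device-RGB triplets to measure

A verification set could be drawn the same way from a fresh candidate pool, so it is spread roughly evenly in Lab without coinciding with the profiling anchors.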

Iron Mike · 02-27-2014, 11:48 PM
Quote:
Originally Posted by spacediver View Post

Well, there are two things I would imagine play a role in selecting a set of verification samples.

You don't want them to be systematically correlated with your profiling points, because that's kind of cheating: you'd just be verifying the anchor points rather than assessing the quality of the interpolated points.

And you want to prioritize sampling in a way that takes the perceptual distribution of color difference into account.

You can achieve both by using a quasi-random approach, where you randomly sample the space but weight according to perceptual color difference. As far as I understand, this is what zoyd did.

I'm not in a position to comment on whether this runs the risk of excluding regions of color, due to the random element, and if so, how big that risk is. But I will grant that if a region is undersampled, and it happens to be a region that encompasses a large degree of perceptual color difference, then there is a problem.

And that is why I personally think it is just smart to visually evaluate a patch set before investing time in using it, so you know what you are actually profiling/validating with.

There's HSB, there's the cube view... etc.

And since you never know what colors will be part of the signal (as in, displayed in a movie or a TV show), it would not be smart to under-sample anything unless you are forced to.

And by the way, I don't randomly sample the space; I fully control what I sample.

spacediver · 02-28-2014, 12:03 AM
Quote:
Originally Posted by Iron Mike View Post

And that is why I personally think it is just smart to visually evaluate a patch set before investing time in using it, so you know what you are actually profiling/validating with.

There's HSB, there's the cube view... etc.

And since you never know what colors will be part of the signal (as in, displayed in a movie or a TV show), it would not be smart to under-sample anything unless you are forced to.

And by the way, I don't randomly sample the space; I fully control what I sample.

Random sampling can be a powerful way of getting a representative look at the whole population, and this principle underlies a lot of statistics. This is especially important given that you don't know, a priori, what will be displayed in a movie. You can think of a random sample as "randomly sampling" a few dozen scenes from a few dozen movies to evaluate. The use of perceptual weighting helps to ensure that the perceptual holes are roughly the same size. So you get the best of both worlds.

The problem with sampling uniformly in device space, or something like HSV, is that you inevitably end up with unevenly sized perceptual holes. If the display is extremely well calibrated (i.e. if all 16.7 million points, or 10.65 million points in RGB 16-235, have low dEs), then perceptual holes in the verification samples are not a problem. But if the display has weaknesses, a perceptually uniform verification is more likely to reveal them.
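
To put a number on "unevenly sized perceptual holes", here is a small illustrative sketch (same standard sRGB-to-CIELAB helper as in the earlier sketches) that measures, for random test colors, the dE76 to the nearest point of a device-uniform 9^3 sample set:

Code:
import numpy as np

# sRGB (0-1) -> CIELAB (D65), standard formulas (same helper as in the earlier sketches)
M_RGB2XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M_RGB2XYZ.T / D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.stack([116 * f[..., 1] - 16,
                     500 * (f[..., 0] - f[..., 1]),
                     200 * (f[..., 1] - f[..., 2])], axis=-1)

# A 9^3 sample set is perfectly even in device RGB ...
steps = np.linspace(0.0, 1.0, 9)
grid_rgb = np.array(np.meshgrid(steps, steps, steps)).reshape(3, -1).T
grid_lab = srgb_to_lab(grid_rgb)

# ... but the 'holes' it leaves are uneven when measured in Lab (i.e. in dE76 terms).
rng = np.random.default_rng(0)
test_lab = srgb_to_lab(rng.random((2000, 3)))
nearest = np.min(np.linalg.norm(test_lab[:, None, :] - grid_lab[None, :, :], axis=2), axis=1)
print(f"dE76 to nearest sample: median {np.median(nearest):.1f}, max {nearest.max():.1f}")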

spacediver · 02-28-2014, 12:06 AM
Quote:
Originally Posted by Iron Mike View Post


Well, spacediver, the LUT was created from the profiling patch set, and that set did under-sample colors, and that also showed in the visual evaluation...

I pointed this out, as did quite a few others; I'm not sure why all of a sudden you are ignoring this...

As for this point, I'm not ready to comment, as my understanding of the profiling methodology is tenuous.

Chad B · 02-28-2014, 12:07 AM
Quote:
Originally Posted by JimP View Post

How about differences in profiles between two 64F8500s? Stated differently, if I could find a pro calibrator to do a profile of a 64F8500 using their Jeti 1211 and my i1D3, would that profile be spot on for my 64F8500?

An extension of that: would a profile made on a Panasonic plasma using a calibrator's 1211 and my i1D3 transfer well to my 64F8500?


When I say profile, what I mean is the correction factors.
I don't know; I have not experimented with that. When I profile, I don't take care to perfectly match axis, FOV, and position, because I make a fresh profile on each display. That means the way I normally do it would not transfer particularly well, even though it is fine for the individual calibration I am working on.

ISF/THX calibrator with Jeti 1211 reference spectro

Iron Mike · 02-28-2014, 12:15 AM
Quote:
Originally Posted by spacediver View Post

Random sampling can be a powerful way of getting a representative look into the whole population, and this principle underlies a lot of statistics. This is especially important given that you don't know, a priori, what will be displayed in a movie.

Since you don't know what will be displayed in a movie, you need to be prepared for everything: ANY color.

Therefore you sample all colors equally, and by that, in an HSB perspective, I mean HUES... You can then start adjusting saturation and brightness coverage and distribution depending on perceptual importance; for example, below 30-40% brightness the perceptual importance goes down, and above 80% brightness as well...

I'll post an example later.
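
Purely as a hypothetical illustration of a set along those lines (the hue, saturation and brightness counts below are made-up parameters, not Iron Mike's actual set nor the example he promises here), such a sequence could be generated with Python's colorsys:

Code:
import colorsys

# Made-up parameters: 24 hues sampled equally, with denser brightness coverage in the
# 40-80% range and sparser coverage at the extremes, along the lines described above.
hues = [i / 24 for i in range(24)]
sats = [0.25, 0.50, 0.75, 1.00]
brts = [0.10, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 1.00]   # denser between 0.40 and 0.80

patches = {(0.0, 0.0, b) for b in brts}                   # a neutral (grayscale) ramp
for h in hues:
    for s in sats:
        for b in brts:
            patches.add((round(h, 4), s, b))

patch_rgb = sorted(colorsys.hsv_to_rgb(h, s, b) for (h, s, b) in patches)
print(len(patch_rgb), "device-RGB patches")   # 24*4*8 + 8 = 776 with these made-up counts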

spacediver · 02-28-2014, 12:35 AM
Quote:
Originally Posted by Iron Mike View Post

Since you don't know what will be displayed in a movie, you need to be prepared for everything: ANY color.

Therefore you sample all colors equally, and by that, in an HSB perspective, I mean HUES... You can then start adjusting saturation and brightness coverage and distribution depending on perceptual importance; for example, below 30-40% brightness the perceptual importance goes down, and above 80% brightness as well...

Ok, let's say you sample the hues uniformly in HSB, and you slice the pie into say a hundred equal pieces, and sample the edges of those slices. You've still got to contend with the following fact: Some of these slices will encompass a large number of JNDs (just noticeable differences), and some will encompass a smaller number of JNDs. For those slices that encompass a large # of JNDs, there is an undersampling, and potential weaknesses in the display may go unnoticed. For those slices that encompass a small # of JNDs, you've wasted resources by oversampling a region beyond what is necessary.

As for the brightness, keep in mind that CIE L* (lightness) is the perceptually uniform quantity (not the same as lightness in HSL).
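
The JND point can be seen numerically with a short sketch (illustrative only; the standard sRGB/D65 CIELAB formulas again, with dE76 used as a rough stand-in for counting JNDs):

Code:
import colorsys
import numpy as np

# sRGB (0-1) -> CIELAB (D65), standard formulas (same helper as in the earlier sketches)
M_RGB2XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M_RGB2XYZ.T / D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.stack([116 * f[..., 1] - 16,
                     500 * (f[..., 0] - f[..., 1]),
                     200 * (f[..., 1] - f[..., 2])], axis=-1)

# 100 equal hue slices at full saturation and value...
hues = np.linspace(0.0, 1.0, 101)
labs = srgb_to_lab(np.array([colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in hues]))

# ...and the perceptual size (dE76) of each slice.
slice_de = np.linalg.norm(np.diff(labs, axis=0), axis=1)
print(f"dE76 per equal hue slice: min {slice_de.min():.1f}, "
      f"median {np.median(slice_de):.1f}, max {slice_de.max():.1f}")
# The equal-looking hue slices are far from equal perceptually, which is exactly the
# over/undersampling point about JNDs made above.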

Iron Mike · 02-28-2014, 12:54 AM
Quote:
Originally Posted by spacediver View Post

Ok, let's say you sample the hues uniformly in HSB, and you slice the pie into say a hundred equal pieces, and sample the edges of those slices. You've still got to contend with the following fact: Some of these slices will encompass a large number of JNDs (just noticeable differences), and some will encompass a smaller number of JNDs. For those slices that encompass a large # of JNDs, there is an undersampling, and potential weaknesses in the display may go unnoticed. For those slices that encompass a small # of JNDs, you've wasted resources by oversampling a region beyond what is necessary.

As for the brightness, keep in mind that CIE L* (lightness) is the perceptually uniform quantity (not the same as lightness in HSL).

I said HSB/HSV, not HSL.

Since neither of us knows which colors will occur where, you have to treat all colors equally.

Like I said, I don't sacrifice the quality of my profiling sets.

One assumption is that they could be more efficient, but since you don't know which colors appear where, that remains an assumption.


spacediver · 02-28-2014, 01:04 AM
Quote:
Originally Posted by Iron Mike View Post

I said HSB/HSV, not HSL.

I know. Read my post carefully. I was discussing HSB, and only brought up HSL in the context of the idea of lightness, just as a reminder not to conflate CIE L* with the L in HSL.

Quote:
Originally Posted by Iron Mike View Post

Since neither of us knows which colors will occur where, you have to treat all colors equally.

Like I said, I don't sacrifice the quality of my profiling sets.

One assumption is that they could be more efficient, but since you don't know which colors appear where, that remains an assumption.

OK, you've just repeated yourself. You've completely ignored the JND issue that I discussed in my last post.

fight4yu · 02-28-2014, 01:25 AM
Quote:
Originally Posted by Iron Mike View Post

Since you don't know what will be displayed in a movie, you need to be prepared for everything: ANY color.

Therefore you sample all colors equally, and by that, in an HSB perspective, I mean HUES... You can then start adjusting saturation and brightness coverage and distribution depending on perceptual importance; for example, below 30-40% brightness the perceptual importance goes down, and above 80% brightness as well...

I'll post an example later.

Fascinating dialogue, and I've learned a lot.
Just a question: for visual evaluation, wouldn't you also need to know whether the movie contains enough colors for you to evaluate? The same "undersampling" could happen if the movie you choose only uses, say, half or a quarter of the millions of colors. You might get around that by sampling a large variety of movies, I guess, but I also struggle to know what the "golden standard" for those movies would be. Any recommendation of which movies you check, etc.?

Vitalii427 · 02-28-2014, 03:31 AM
Quote:
Originally Posted by Iron Mike View Post

...anything...

Congrats, Mike! You are invincible. No one can convince you. It seems to me that you are trolling this thread. Nothing personal. But we can all see that an objective assessment of the test results shows the 17^3 (4913-point) grid is inferior to 2500 OFPS points, and of course to OFPS with the same number of points. In my previous post I described how we can eliminate hardware errors, time spent on measurements, visual evaluation, etc. We can generate a "perfect" 65^3 LUT based on all 16.7M points (with no interpolation), which will show the maximum accuracy we can achieve with this particular virtual display and such a LUT. We can also check all 16.7M points, for those who don't understand the role of randomness in statistics. Then we can inspect the dE distribution and clearly see which patch set gets us a LUT closer to "perfect". For me, OFPS is obviously better. You can do all of this yourself if you still don't "believe" the science. Once you understand this, we can discuss meter offsets and visual evaluation.
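
A much-simplified sketch of that virtual-display idea (illustrative Python rather than Vitalii427's proposed Matlab workflow, and it skips the correction-LUT step entirely: it only measures how faithfully a 17^3 grid characterizes a made-up display model, which is the quantity the interpolation quality depends on):

Code:
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# sRGB (0-1) -> CIELAB (D65), standard formulas (same helper as in the earlier sketches)
M_RGB2XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M_RGB2XYZ.T / D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.stack([116 * f[..., 1] - 16,
                     500 * (f[..., 0] - f[..., 1]),
                     200 * (f[..., 1] - f[..., 2])], axis=-1)

def display_response(rgb):
    """Toy simulated display: channel crosstalk plus a non-power-law tone response."""
    rgb = np.asarray(rgb, dtype=float)
    mix = np.array([[0.92, 0.05, 0.03],
                    [0.04, 0.90, 0.06],
                    [0.02, 0.07, 0.91]])
    tone = rgb ** (2.4 - 0.4 * rgb)        # gamma drifts from 2.4 toward 2.0 at full signal
    return np.clip(tone @ mix.T, 0.0, 1.0)

# 'Measure' the simulated display on a 17^3 grid patch set.
steps = np.linspace(0.0, 1.0, 17)
grid = np.stack(np.meshgrid(steps, steps, steps, indexing="ij"), axis=-1)   # (17,17,17,3)
measured = display_response(grid)

# Model of the display built from those patches (trilinear interpolation, one per channel).
model = [RegularGridInterpolator((steps, steps, steps), measured[..., ch]) for ch in range(3)]

# Compare the model against the 'truth' on random test colors, in dE76 terms.
rng = np.random.default_rng(0)
test = rng.random((20000, 3))
predicted = np.stack([m(test) for m in model], axis=-1)
de = np.linalg.norm(srgb_to_lab(predicted) - srgb_to_lab(display_response(test)), axis=1)
print(f"dE76 of the 17^3 model: median {np.median(de):.2f}, 95th pct {np.percentile(de, 95):.2f}")

Swapping the 17^3 grid for a 2500-point OFPS set (with a scattered-data interpolator) in the same harness is how the two patch sets could be compared with no meter, and no real display, involved.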


Kukulcan · 02-28-2014, 04:02 AM
Quote:
Originally Posted by Vitalii427 View Post

Congrats, Mike! You are invincible.

Yes, he is! And we are fools! Definitely! If we really think we can knock out a guy whose nick is Iron Mike, we are terribly foolish... :D

I think it's time to stop feeding the troll...

Iron Mike · 02-28-2014, 04:10 AM
Quote:
Originally Posted by Vitalii427 View Post

Congrats, Mike! You are invincible. No one can convince you. It seems to me that you are trolling this thread. Nothing personal. But we can all see that an objective assessment of the test results shows the 17^3 (4913-point) grid is inferior to 2500 OFPS points, and of course to OFPS with the same number of points. In my previous post I described how we can eliminate hardware errors, time spent on measurements, visual evaluation, etc. We can generate a "perfect" 65^3 LUT based on all 16.7M points (with no interpolation), which will show the maximum accuracy we can achieve with this particular virtual display and such a LUT. We can also check all 16.7M points, for those who don't understand the role of randomness in statistics. Then we can inspect the dE distribution and clearly see which patch set gets us a LUT closer to "perfect". For me, OFPS is obviously better. You can do all of this yourself if you still don't "believe" the science. Once you understand this, we can discuss meter offsets and visual evaluation.

Vitalii,

Yeah, for me the general technique of the OFPS approach is also better (not sure where you got all that other stuff you were talking about), but I just don't under-sample colors... as was done in all the provided and used patch sets. ROOM FOR IMPROVEMENT.

You can use whatever you believe in. It's not my display you're "calibrating". :D

Vitalii427 · 02-28-2014, 04:23 AM
Quote:
Originally Posted by Iron Mike View Post


I just don't under-sample colors... as was done in all the provided and used patch sets. ROOM FOR IMPROVEMENT.

 

When you use a grid, you do "under-sample" in the blue area versus the green area, for example. If you think 2500 is too few samples, then use 9000 OFPS points instead of a 21^3 grid and you win! It's that simple.

 

Quote:
Originally Posted by Iron Mike View Post

not sure where you got all that other stuff you were talking about
 
Here:

 

Quote:
Originally Posted by Vitalii427 View Post
 

Meter accuracy is completely unrelated to such a color engine test. We do not even need a real display. A display can be simulated with all 16.7M points. Then you can choose any subset (grid, OFPS, etc.), generate a LUT with any of these engines, and check all 16.7M points corrected with the LUT. All this can be done with Matlab, for example.

 

Iron Mike · 02-28-2014, 04:38 AM
Quote:
Originally Posted by Vitalii427 View Post

When you use a grid, you do "under-sample" in the blue area versus the green area, for example. If you think 2500 is too few samples, then use 9000 OFPS points instead of a 21^3 grid and you win! It's that simple.

:D :D :D

Oh man...

Have you actually read the thread?

I don't use grid sequences, but I definitely would use them over an unbalanced, badly "optimized" custom set; with a grid sequence, "you know what you're getting". Obviously there's lots of room for improvement in grid sequences. Not sure why you're trying to tell me that... (?!)

The (bad) meter offset thing that you're jumping on was something completely different, which you would know if you had read the thread... it had nothing to do with OFPS.

And just to be clear:

With a grid sequence you don't undersample blue versus green: all primaries (and that includes blue and green) and secondaries are sampled with the same point count, and all the colors in between these hues are sampled equally (with fewer points than the primaries/secondaries, but equally relative to their position) compared to all the other colors with the same offset to the next primary/secondary hue...

Like I said, you need to understand what the algorithms do... but clearly grid sequences leave room for improvement; that's why I don't use them, and when I do, it's with lots of weight.


Vitalii427 · 02-28-2014, 05:01 AM
Quote:
Originally Posted by Iron Mike View Post


:D :D :D

Oh man...

Have you actually read the thread?

I don't use grid sequences, but I definitely would use them over an unbalanced, badly "optimized" custom set; with a grid sequence, "you know what you're getting". Obviously there's lots of room for improvement in grid sequences. Not sure why you're trying to tell me that... (?!)

The (bad) meter offset thing that you're jumping on was something completely different, which you would know if you had read the thread... it had nothing to do with OFPS.

And just to be clear:

With a grid sequence you don't undersample blue versus green: all primaries (and that includes blue and green) and secondaries are sampled with the same point count, and all the colors in between these hues are sampled equally (with fewer points than the primaries/secondaries, but equally relative to their position) compared to all the other colors with the same offset to the next primary/secondary hue...

Like I said, you need to understand what the algorithms do... but clearly grid sequences leave room for improvement; that's why I don't use them, and when I do, it's with lots of weight.

I know that you use your own patch set. The only thing I am trying to show is that OFPS is not an unbalanced, badly "optimized" custom set.

One more time: a grid is sampled evenly in device coordinates, not in perceptual (dE) coordinates. On the perception of blue versus green, see below (and the sketch after this post):

 

Quote:
Originally Posted by Vitalii427 View Post
 

For our eyes, a numerically linear space is not linear at all. Here's the picture from the Wikipedia color difference article which demonstrates it:

[image: MacAdam ellipses plotted on the CIE 1931 xy chromaticity diagram]

Quote:

Tolerancing concerns the question "What is a set of colors that are imperceptibly/acceptably close to a given reference?" If the distance measure is perceptually uniform, then the answer is simply "the set of points whose distance to the reference is less than the just-noticeable-difference (JND) threshold." This requires a perceptually uniform metric in order for the threshold to be constant throughout the gamut (range of colors). Otherwise, the threshold will be a function of the reference color—useless as an objective, practical guide.

In the CIE 1931 color space, for example, the tolerance contours are defined by the MacAdam ellipse, which holds L* (lightness) fixed. As can be observed on the diagram on the right, the ellipses denoting the tolerance contours vary in size. It is partly this non-uniformity that led to the creation of CIELUV and CIELAB.

More generally, if the lightness is allowed to vary, then we find the tolerance set to be ellipsoidal. Increasing the weighting factor in the aforementioned distance expressions has the effect of increasing the size of the ellipsoid along the respective axis.

 

A good read for you and everybody who doesn't understand what spacediver and I are talking about: A guide for Understanding Color Tolerancing.
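
To quantify the "evenly in device coordinates, not in dE coordinates" point, here is a small illustrative sketch (standard sRGB/D65 CIELAB formulas again) that measures the dE76 spanned by each single step of a 17^3 device grid:

Code:
import numpy as np

# sRGB (0-1) -> CIELAB (D65), standard formulas (same helper as in the earlier sketches)
M_RGB2XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_lab(rgb):
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M_RGB2XYZ.T / D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.stack([116 * f[..., 1] - 16,
                     500 * (f[..., 0] - f[..., 1]),
                     200 * (f[..., 1] - f[..., 2])], axis=-1)

# A 17^3 grid: perfectly even steps in device RGB.
steps = np.linspace(0.0, 1.0, 17)
grid = np.stack(np.meshgrid(steps, steps, steps, indexing="ij"), axis=-1)
lab = srgb_to_lab(grid)

# dE76 spanned by each single grid step along the R, G and B axes.
step_de = np.concatenate([np.linalg.norm(np.diff(lab, axis=a), axis=-1).ravel()
                          for a in range(3)])
print(f"dE76 per device-equal grid step: min {step_de.min():.1f}, "
      f"median {np.median(step_de):.1f}, max {step_de.max():.1f}")
# Equal device-space steps are far from equal perceptually: the largest single step
# here spans more than ten times the dE76 of the smallest.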


zoyd (Thread Starter) · 02-28-2014, 05:57 AM
Quote:
Originally Posted by Kukulcan View Post

Yes, he is! And we are fools! Definitely! If we really think we can knock out a guy whose nick is Iron Mike, we are terribly foolish... :D

I think it's time to stop feeding the troll...

When someone argues from a senseless position, I think it is appropriate to question their motives. (see also the discussion beginning here for an example of similar behavior)
Quote:
Originally Posted by Vitalii427 View Post

Congrats, Mike! You are invincible. No one can convince you. It seems to me that you are trolling this thread. Nothing personal. But we can all see that an objective assessment of the test results shows the 17^3 (4913-point) grid is inferior to 2500 OFPS points, and of course to OFPS with the same number of points. In my previous post I described how we can eliminate hardware errors, time spent on measurements, visual evaluation, etc. We can generate a "perfect" 65^3 LUT based on all 16.7M points (with no interpolation), which will show the maximum accuracy we can achieve with this particular virtual display and such a LUT. We can also check all 16.7M points, for those who don't understand the role of randomness in statistics. Then we can inspect the dE distribution and clearly see which patch set gets us a LUT closer to "perfect". For me, OFPS is obviously better. You can do all of this yourself if you still don't "believe" the science. Once you understand this, we can discuss meter offsets and visual evaluation.

This is a good point; one could easily demonstrate it through model simulations. The theory is described in Graeme's paper, and I've presented confirmation of its advantages in practice (by a large margin), not to mention that for those of us who understand color science principles it's just common sense. There really is nothing more to say. Barring any similarly rigorous demonstration to the contrary, patch sets based on FPS in perceptual coordinates are optimal for reducing aggregate color errors when one must undersample the volume.