
MadVR - ArgyllCMS - Page 8

post #211 of 1679
Quote:
Originally Posted by gwgill View Post

Right, but where does your black = 0.022 cd/m^2 come from, though?

The only knowledge that collink has about the display is the profile, and any BT.1886 black offset applied has to be chosen to result in a negligible difference between the black the display is capable of, and the black it produces for zero input to the device link.

So from the profile:

xicclu -ff -ir -px basetargets.icm
0.000000 0.000000 0.000000 [RGB] -> Lut -> 0.000000 0.000000 0.000000 [XYZ]

The profile says the display has a perfect zero black. So that's what collink has to work with. To use anything else would result in a raised black when the black point XYZ is inverted.

ie.

0.022/146.5 = 0.00015

xicclu -fif -ir -px basetargets.icm
0.000145 0.000150 0.000124 [XYZ] -> Lut -> 0.016065 0.015489 0.013713 [RGB]

which is a result that is likely to be 0.022 cd/m^2 higher than the actual black, which is probably not what you want.

So it doesn't matter what you measure as a black point, it's the black point reckoned by the profile that counts, because that's what's used to predict the RGB values needed to achieve the target.

Now why the profile isn't so accurate at the black point, and whether it matters, is a different question. 0.022 cd/m^2 is at or below most instruments' accurate capabilities though.


So why doesn't the profile contain my actual measured black point, so collink can calculate the desired target response (column 7 above) for everything above black when given the -IB switch? If the profile can't accurately represent my display black point then this will never work properly. One would have to put the desired gamma response into the source profile. Can the switch be modified to accept a user-specified black point just for the purpose of calculating the correct above-black targets, and proceed based on that?

The Display Pro can reliably measure Y down to 0.005 cd/m^2. I can go back through all my measurement files, and the 0-input value is always 0.025 cd/m^2 +/- a little bit. Achieving the gamma roll-off that the BT.1886 function provides creates a very noticeable improvement for my display in alleviating black crush during daytime viewing, which is why I'd like to have it working in the LUT workflow rather than baked into the display calibration. That way I can easily switch between day and night LUTs.
Edited by zoyd - 6/3/13 at 6:55am
post #212 of 1679
Is it time to update the guide? There have been some fundamental changes in gwgill's latest update that seem to require it, just so we ArgyllCMS users don't go completely wrong and get disheartened ;-)
post #213 of 1679
Thread Starter 
Quote:
Originally Posted by MSL_DK View Post

Is it time to update the guide? There have been some fundamental changes in gwgill's latest update that seem to require it, just so we ArgyllCMS users don't go completely wrong and get disheartened ;-)
Yes... but I'm seeing the same issues as Zoyd with the latest beta binaries: issues with low-level brightness at 0-5% (raised and with the wrong hue) and the posterization(?) issue with red. Will provide logs, files, and screenshots when I get a chance.

Graeme,
The last working binaries without issues are dated 05-21-2013. May I host these binaries elsewhere for download? (e.g. Google Docs)
post #214 of 1679
Quote:
Originally Posted by N3W813 View Post

Issues with low-level brightness at 0-5% (raised and with the wrong hue) and the posterization(?) issue with red.

Same here, though I see posterization not only with red. The low-brightness hue problem is also present on my system. If you want any data, just tell me what you need. The older version was fine except for the raised black level...
post #215 of 1679

TYVM for the tutorial in the OP, and it's fantastic to see Graeme Gill on AVS providing his expertise to the madVR environment.

 

I am well aware that 3DLUTs are technically the most advanced solution for gamut mapping, but so far it would appear that no application compatible with Avisynth or madVR has taken their full potential into account, IMHO?

 

To be perfectly clear, what is there that an ArgyllCMS 3DLUT for madVR could do that this PS script cannot: http://www.avsforum.com/t/912720/ ?

 

From what I understand, the major limitation of this script is that it does gamut mapping at 100% saturation but not quite at 75/50/25%? Would that make a seriously visible difference if you have a cheap colorimeter (i1d2) and recalibrate on a monthly basis? I don't run a mastering house yet ^^

 

Also, primary/secondary saturations can be measured in HCFR; on the top chart you see each respective saturation, and on the bottom one they are compared against Rec.709:

[HCFR saturation charts: per-color saturation sweeps (top) and comparison against Rec.709 (bottom)]

When I asked tritical (who wrote ddcc()) and yesgrey whether they could compensate for those, they told me it was a no-go... can Argyll do that?

post #216 of 1679
Graeme can give you more details, but yes, ArgyllCMS generates full 3dLUTs which correct for all saturation and luminance levels in the gamut. For a dramatic example of its capabilities see this post.
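
To make "full 3dLUT" concrete: at playback the renderer essentially looks each pixel up in a cubic grid and blends the nearest surrounding entries, so every color in the gamut gets its own correction, not just the 100% saturation points. A minimal Python sketch of a trilinear lookup (an illustration only, not madVR's or eeColor's actual code):

import numpy as np

def apply_3dlut(rgb, lut):
    # trilinear interpolation between the 8 grid points surrounding rgb
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                                  # fraction inside the cell
    out = np.zeros(3)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[hi[0] if dr else lo[0],
                               hi[1] if dg else lo[1],
                               hi[2] if db else lo[2]]
    return out

# identity LUT at the collink default resolution of 65
g = np.linspace(0.0, 1.0, 65)
lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
print(apply_3dlut([0.3, 0.5, 0.7], lut))          # -> [0.3 0.5 0.7]
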
post #217 of 1679
Quote:
Originally Posted by zoyd View Post


So why doesn't the profile contain my actual measured black point so collink can calculate the desired gamma response (column 8 above) when given the -IB switch?
Because then the profile would be inconsistent. On the one hand the table would be saying that the black point is zero, and on the other hand some other entry in the profile would have a different value. And if one is then used with the other, you get a raised black.
Quote:
If the profile can't accurately represent my display black point then this will never work properly. One would have to put the desired gamma response into the source profile. Can the switch be modified to accept a user-specified black point and proceed based on that?
The whole point of offsetting the input is to maintain the contrast from the black point. If the device's black is so close to zero that it isn't accurately captured by the profile, then the difference between that and the accurately offset curve is likely to be small enough that it makes little practical difference. (I estimate a maximum of a little over 1 DE at 10% input when using the effective gamma adjustment).

The whole BT.1886 thing is really designed for getting the best out of LCD type displays with black points considerably higher than 0.02 cd/m^2.

Nonetheless, I think it might be possible to improve the profile's black point accuracy. I will add a default option in targen to add some extra black patches as well as white ones for display-type devices.

The other is more of a hack (with unknown side effects at this stage): to artificially increase the weight of near-black patches during profile creation.

The result is something more plausible with your .ti3 file, although not perfect - you are within a factor of 5 of the quantization limit of a 16 bit XYZ ICC profile at a relative black of 0.00015.
post #218 of 1679
Quote:
Originally Posted by zoyd View Post

The image was extracted from the BD "Tree of Life", but I see it in all sources (BD, DVD, cable, streaming, etc.), so I'm thinking it must be the eeColor workflow.
Everything was fine prior to the May 30th release.

One of the interesting things about any sort of channel-independent transfer curve adjustment like BT.1886 (or any similar gamma change) is that because it can change the ratios between channel values, it may also change the hue and saturation. So the space the adjustment is done in has an impact on the result.

So ideally the BT.1886 adjustment should be done in the source devices colorspace (ie. Rec709), which is what the previous code was doing.

To accommodate xvYCC, which gains an expanded gamut by allowing colors with negative values, I attempted to switch to a BT.1886 adjustment in a PCS-based space. Some research into HDR brightness compression indicated that a brightness adjustment that preserved the channel ratios was desirable, so I used an L*a*b*-type space so that I could apply a BT.1886-type adjustment just to L* and retain the same a*b* values.

I suspect this is the cause of the artefacts you are seeing.

So I've switched the code to using a pure XYZ type approach. Once again it won't quite be the same as doing it in Rec709 space, but it should work with wide gamut source spaces, and doesn't seem to produce the same types of artefacts as the previous approach.

http://www.argyllcms.com/Win32_collink_3dlut.zip now has these changes.
You should try re-creating the display profile and then the 3dLUT and see if this problem is fixed.
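
To see why the working space matters, here is a toy comparison (my own illustration, not the collink code): a per-channel tone tweak applied in linear Rec.709 RGB shifts the chromaticity of a saturated color, while a luminance-only scaling in XYZ leaves the xy chromaticity untouched.

import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],   # linear Rec.709 RGB -> XYZ (D65)
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

tweak = lambda v: v ** (2.4 / 2.2)        # stand-in tone adjustment

def xy(XYZ):                              # chromaticity coordinates
    return XYZ[:2] / XYZ.sum()

rgb = np.array([0.9, 0.1, 0.1])           # a saturated red, linear light
original = M @ rgb
per_channel = M @ tweak(rgb)              # curve applied to R, G, B separately
lum_only = original * (tweak(original[1]) / original[1])  # scale all of XYZ

print(xy(original))      # reference chromaticity
print(xy(per_channel))   # drifts: channel ratios changed, so hue/sat move
print(xy(lum_only))      # same as the reference: ratios preserved
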
post #219 of 1679
Quote:
Originally Posted by gwgill View Post

Because then the profile would be inconsistent. On the one hand the table would be saying that the black point is zero, and on the other hand some other entry in the profile would have a different value. And if one is then used with the other, you get a raised black.

First of all, thanks very much for sticking with this issue; I'm sure you are sick of BT.1886 by this point. Part of the problem is my ignorance about profiles in general: how is there any ambiguity in the black point when, in my case, it's measured with about 10% precision? Why does the table ever say it's zero?
Quote:
The whole point of offsetting the input is to maintain the contrast from the black point. If the device's black is so close to zero that it isn't accurately captured by the profile, then the difference between that and the accurately offset curve is likely to be small enough that it makes little practical difference. (I estimate a maximum of a little over 1 DE at 10% input when using the effective gamma adjustment).

The whole BT.1886 thing is really designed for getting the best out of LCD type displays with black points considerably higher than 0.02 cd/m^2.

The ITU recommends matching their explicit transfer function for black levels above 0.01 cd/m^2, and in my case the perceptual change when this is done is effectively much larger than 1 DE (for the better). The 5% and 10% levels get raised by factors of 2.58 and 1.64 respectively, and this is easily distinguishable in real content.
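
For reference, those factors fall straight out of the BT.1886 EOTF, L = a * max(V + b, 0)^2.4, with a and b anchored to the measured white and black. A quick Python check using the Lw = 146.5 and Lb = 0.022 cd/m^2 figures from this thread:

def bt1886(v, lw=146.5, lb=0.022, g=2.4):
    # EOTF anchored to the measured white and black luminances
    k = lw ** (1 / g) - lb ** (1 / g)
    a = k ** g
    b = lb ** (1 / g) / k
    return a * max(v + b, 0.0) ** g

for v in (0.05, 0.10):
    pure = 146.5 * v ** 2.4                # zero-black power-law reference
    print(v, round(bt1886(v) / pure, 2))   # -> 2.58 at 5%, 1.64 at 10%
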
Quote:
Nonetheless, I think it might be possible to improve the profile's black point accuracy. I will add a default option in targen to add some extra black patches as well as white ones for display-type devices.

The other is more of a hack (with unknown side effects at this stage): to artificially increase the weight of near-black patches during profile creation.

The result is something more plausible with your .ti3 file, although not perfect - you are within a factor of 5 of the quantization limit of a 16 bit XYZ ICC profile at a relative black of 0.00015.
Quote:
So I've switched the code to using a pure XYZ type approach. Once again it won't quite be the same as doing it in Rec709 space, but it should work with wide gamut source spaces, and doesn't seem to produce the same types of artefacts as the previous approach.

Thanks again very much, I'll give it a whirl tonight.
post #220 of 1679
Quote:
Originally Posted by zoyd View Post

Graeme can give you more details, but yes, ArgyllCMS generates full 3dLUTs which correct for all saturation and luminance levels in the gamut. For a dramatic example of its capabilities see this post.

 

Ooh, impressive! And is there any simple way to run a verification measurement pass with HCFR in madVR without using a zillion manual test patterns?

post #221 of 1679
Quote:
Originally Posted by zoyd View Post

Part of the problem is my ignorance about profiles in general: how is there any ambiguity in the black point when, in my case, it's measured with about 10% precision? Why does the table ever say it's zero?
Because the type of scattered data interpolation I'm using to create a regular set of grid values regularises the data using a smoothness criterion. That lets you trade off smoothness (ie. suppressing noise/inaccuracy in the readings) against the interpolation accuracy for any single input value. It is also necessary because typically the number of fitting variables (ie. the grid values) vastly exceeds the number of data points (ie. this protects against so-called "over-fitting").

So even if the black point is measured as a certain value, the "consensus" of all the measured points (and particularly those in the immediate vicinity) might be that it is not a likely value, and the profile will therefore return a slightly different value for the black point.
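
A 1-D toy version of that trade-off (my own illustration, not Argyll's actual solver): fit grid values to noisy near-black readings with a second-difference smoothness penalty, and watch the fitted black point drift away from the measured 0.00015 as the smoothing weight grows.

import numpy as np

# measured (input level, relative Y) pairs near black; made-up values
x = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
y = np.array([0.00015, 0.0006, 0.0040, 0.0120, 0.0260])

grid = np.linspace(0.0, 0.2, 21)          # grid values are the fit variables

def fitted_black(smooth):
    # each measurement is a linear blend of its two neighbouring grid points
    A = np.zeros((len(x), len(grid)))
    for i, xi in enumerate(x):
        t = xi / 0.2 * (len(grid) - 1)
        j = min(int(t), len(grid) - 2)
        A[i, j], A[i, j + 1] = 1.0 - (t - j), t - j
    D = np.diff(np.eye(len(grid)), n=2, axis=0)   # smoothness penalty rows
    M = np.vstack([A, smooth * D])
    rhs = np.concatenate([y, np.zeros(D.shape[0])])
    v = np.linalg.lstsq(M, rhs, rcond=None)[0]
    return v[0]                           # grid value at input 0, ie. "black"

for s in (0.01, 0.1, 1.0):
    print(s, fitted_black(s))   # heavier smoothing pulls black off 0.00015

With heavy smoothing the consensus of the neighbouring points can even pull the fitted black slightly negative, which would then be clamped, one plausible route to a profile reporting a perfect zero black.
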
post #222 of 1679
Quote:
Originally Posted by gwgill View Post

Because the type of scattered data interpolation I'm using to create a regular set of grid values regularises the data using a smoothness criterion. That lets you trade off smoothness (ie. suppressing noise/inaccuracy in the readings) against the interpolation accuracy for any single input value. It is also necessary because typically the number of fitting variables (ie. the grid values) vastly exceeds the number of data points (ie. this protects against so-called "over-fitting").

So even if the black point is measured as a certain value, the "consensus" of all the measured points (and particularly those in the immediate vicinity) might be that it is not a likely value, and the profile will therefore return a slightly different value for the black point.

Ah, OK, so it's a trade-off with noise suppression. Can Y be weighted separately from X and Z, since it always has higher precision? I manually added 10 repeated identical white and black measurements to the .ti3 file and ran the new code, and the target values are almost spot on. The 5% level is only 8% lower than desired, and it's all better from there on up. Will see what the LUT looks like later.
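
Since .ti3 files are plain-text CGATS, that duplication trick can be scripted rather than hand-edited. A rough sketch (it assumes the RGB values sit in columns 2-4 of each data row and that NUMBER_OF_SETS needs bumping to match; check your own file's BEGIN_DATA_FORMAT before trusting it):

N = 10                                   # duplicates to add per black patch
rows, in_data, added = [], False, 0
for line in open("display.ti3"):
    tok = line.split()
    if tok and tok[0] == "BEGIN_DATA":
        in_data = True
    elif tok and tok[0] == "END_DATA":
        in_data = False
    rows.append(line)
    if in_data and len(tok) >= 4 and tok[0] != "BEGIN_DATA":
        try:
            if all(float(t) == 0.0 for t in tok[1:4]):   # RGB all zero
                rows.extend([line] * N)
                added += N
        except ValueError:
            pass                         # a header row, not measurement data
out = []
for line in rows:
    if line.startswith("NUMBER_OF_SETS"):    # keep the set count consistent
        line = "NUMBER_OF_SETS %d\n" % (int(line.split()[1]) + added)
    out.append(line)
open("display_black10.ti3", "w").write("".join(out))

(zoyd also duplicated the white patch; the same test on tok[1:4] against 100.0 would cover that.)
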
post #223 of 1679
Quote:
Originally Posted by zoyd View Post

Ah, OK, so it's a trade-off with noise suppression. Can Y be weighted separately from X and Z, since it always has higher precision?
In principle yes. What to do with the capability is not clear. After some experiments on this aspect I settled on slightly less smoothing for X & Z than Y (and correspondingly less for a* & b* than L*). This seems counter-intuitive, and perhaps could be revisited.
post #224 of 1679
Thread Starter 
Quote:
Originally Posted by leeperry View Post

Ooh, impressive! And is there any simple way to run a verification measurement pass with HCFR in madVR without using a zillion manual test patterns?

Not at the moment. The only way is to use manual patterns on various calibration discs.
post #225 of 1679
I thought madshi mentioned thinking about implementing some test patterns for evaluation and then linking them to ArgyllCMS through a command-line switch, but I don't know whether that has been followed up.
post #226 of 1679
Thread Starter 
Madshi just released version 0.86.2 of madVR yesterday. Integration with ArgyllCMS is not in the release notes; I think it may be a task on his long to-do list.
post #227 of 1679
Did some testing with the new builds and there is good news and bad news (mostly good).

First the good: I did not observe any artifacts in either the eeColor or madVR LUTs, so that looks to be entirely fixed. The measured transfer function with the BT.1886 switch -IB is also working quite well. The 10% and 20% levels are right on target, with the rest of the levels within 0.05 of target. I used a profile with 50 grey-scale patches; maybe adding some more would smooth it out a bit. I did need to add more than 10 duplicates of the black point measurement to the .ti3 file (I did not remeasure the 4500 patches) in order for collink to generate the correct targets. With fewer than that, the black point was still reported as zero in the profile.

[measured transfer function charts]

The bad news. For some reason the -IB switch generates a LUT which does not correct the edge of the gamut. Not sure what is going on there.

No LUT
[gamut plot]

collink -v -G -ir -et -Et -qh
[gamut plot]

collink -v -G -ir -et -Et -qh -IB
[gamut plot]
post #228 of 1679
Quote:
Originally Posted by zoyd View Post

The bad news. For some reason the -IB switch generates a LUT which does not correct the edge of the gamut. Not sure what is going on there.
BT.1886 in XYZ space isn't working correctly off the neutral axis. I'll have to have a think about it. I suspect it simply isn't possible to do it in a CIE space, since the mapping curve has to be anchored between colorant 0 and 1. How to do it without reference to the input encoding (so it can be used with xvYCC) is therefore a puzzle.

Perhaps xvYCC is simply unusable without defining a gamut for it, and I'm not even sure providing an image gamut to define it will work, because I suspect that image gamuts are assumed to be smaller than the space they are encoded in. So the simplest thing may be to remove xvYCC encoding support and revert back to doing BT.1886 in the input colorspace.
post #229 of 1679
If xvYCC is the source of all the trouble, and if you get headaches trying to solve the problems, then I'd say comment it out and go back to the old solution for now. At least until we have some real material that needs/uses xvYCC.

Just my 2 cents, of course...
post #230 of 1679
Thread Starter 
@Graeme,
Tested the latest beta (06-05-13) last night. The black level issue is gone, but I still see the posterization issue with the color red at low levels. My scripts and logs are attached.

3dlut_060513.zip 626k .zip file
post #231 of 1679
Quote:
Originally Posted by N3W813 View Post

@Graeme,
Tested the latest beta (06-05-13) last night. The black level issue is gone, but I still see the posterization issue with the color red at low levels. My scripts and logs are attached.

3dlut_060513.zip 626k .zip file

That's a nice script. Is the guide eventually going to include it, and avoid dispcalGUI entirely?
post #232 of 1679
I've changed the code back to doing BT.1886 in the input RGB space, plus the black point alignment to ensure that the black shouldn't get raised in the process. xvYCC is still enabled, but will bomb if you try to use it with BT.1886. It's in the usual place: http://www.argyllcms.com/Win32_collink_3dlut.zip
post #233 of 1679
Quote:
Originally Posted by madshi 
This is a direct quote from his email:
Quote:

I downloaded the latest build just now, but collink [CRC: A2389135] from 6/7/2013 still only accepts -r255.

If I try to use -r256 I get the error message "Diagnostic: Resolution flag (-r) argument out of range (256) " with it aborting and showing help text.
post #234 of 1679
Quote:
Originally Posted by cyberbeing View Post

I downloaded the latest build just now, but collink [CRC: A2389135] from 6/7/2013 still only accepts -r255.
Sorry about that - I reverted to a previous version to fix the BT.1886 stuff, and lost that change.
Try it again. I've also re-enabled BT.1886 for xvYCC (not that anyone can currently use it) by hard coding xvYCC to be Rec709 primaries, and using the given (presumably wider gamut) source profile for defining the xvYCC gamut for the purposes of gamut mapping and BT.1886 adjustment.
post #235 of 1679
Hi Gill, and thanks for your interest in video color correction!
I'm using a very old projector with a terrible white point (around 15 dE from daylight) and a somewhat limited red level, with Argyll's perceptual gamut mapping and MPC-HC's integrated ICC support (no madVR). I can't calibrate the white point because the projector would lose too much brightness.
When I discovered you had started working on madVR, I tested it using this forum's guide as a reference, and noted that it recommends using "-ial" (absolute in Lab) linking.
Results using a/ial seem great ("life-like" colors); however, as the red brightness output is limited, it tends to show some posterization on bright red colors.
My question: is there a way to do perceptual mapping (p/pa, compressing the HDTV gamut into the projector's) without mapping the white point?
Thanks for your work, Davide!
post #236 of 1679
Quote:
Originally Posted by gwgill View Post

I've changed the code back to doing BT.1886 in the input RGB space, + the black point alignment to ensure that the black shouldn't get raised in the process. xvYCC is still enabled, but will bomb if you try and use it with BT.1886. It's in the usual place //www.argyllcms.com/Win32_collink_3dlut.zip

Tested and BT.1886/gamut mapping working very well for eeColor, thanks Graeme.
post #237 of 1679
Quote:
Originally Posted by gwgill View Post

Quote:
Originally Posted by cyberbeing View Post

I downloaded the latest build just now, but collink [CRC: A2389135] from 6/7/2013 still only accepts -r255.
BTW - I wouldn't expect using direct 256 resolution to make any measurable or discernible difference. Other factors will dominate the accuracy of the overall calibration/emulation. In order of decreasing influence:

Stability of the display & stability and accuracy of the instrument.
The number of samples used to make the profile.
The resolution of the profile cLUT.
The resolution of the device link cLUT.

The effective grid resolution of 3000 measured points is (very roughly) 15. The -qh profile cLUT resolution is 33. The default device link cLUT resolution for video 3dLUTs is 65. So increasing the 65 to 256 is changing the least limiting parameter.

If you have more time to spend, and a stable meter (like the i1d3), then I'd recommend using it to read more samples.
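
Those numbers are easy to sanity-check in a few lines: the effective grid resolution goes roughly as the cube root of the patch count, and fully populating a cLUT grid scales with the cube of its resolution.

patches = 3000
print(patches ** (1.0 / 3))          # ~14.4, the "very roughly 15" above
for res in (15, 33, 65):
    print(res, res ** 3)             # patches needed to fill each grid
print(65 ** 3 / 3600.0)              # ~76 hours to measure 65^3 at 1 patch/s
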
post #238 of 1679
Quote:
Originally Posted by berga0d View Post

My question: is there a way to do perceptual mapping (p/pa, compressing hdtv gamut into projector one) without mapping the white point?

Yes, most of the interesting gamut mappings are white point relative, hence the recommended workflow of correcting the white point outside the ICC profiles, by using device calibration curves. So you can either skip the dispcal step completely, or use dispcal to target the native white point and just use the calibration curves for neutral axis/response curve calibration, and then use a white point relative (ie. non-absolute colorimetric) gamut mapping.

dispcal will use the native white point by default (ie. simply omit any option such as -t -T -w that would set a white point target).
post #239 of 1679
Quote:
Originally Posted by gwgill View Post

The effective grid resolution of 3000 measured points is (very roughly) 15. The -qh profile cLUT resolution is 33. The default device link cLUT resolution for video 3dLUTs is 65. So increasing the 65 to 256 is changing the least limiting parameter.
Gill, so what number of patches would correspond to a 65-resolution cLUT? IIRC, the maximum number of patches I have in dispcalGUI is 2000-something (massive testchart).
post #240 of 1679
Quote:
Originally Posted by Elix View Post

Gill, so what number of patches would correspond to a 65-resolution cLUT? IIRC, the maximum number of patches I have in dispcalGUI is 2000-something (massive testchart).

That resolution is 274625 patches, which would take about 76 hours to measure at 1 patch/second. You don't need that many to get excellent results: 2500 will do it in most cases; for a particularly bad gamut the most I've had to use was 4500. In dispcalGUI you can build custom test charts of any size.
Edited by zoyd - 6/11/13 at 4:46am