
MadVR - ArgyllCMS - Page 45

post #1321 of 1702
Does madVR support PotPlayer or vice versa?
post #1322 of 1702

Yes.

Every DirectShow-based video player can use madVR, at least in theory.

Quote:
(1) First of all you need to use a media player which supports using madVR as a renderer. Your choices are currently MPC-HC, Zoom Player, J.River Media Center 16, PotPlayer and KMPlayer.

source: http://forum.doom9.org/showpost.php?p=1271417&postcount=3

post #1323 of 1702
Quote:
Originally Posted by gwgill View Post

I did make a change to collink a while back that attempted to address this issue. Without a specific example it's hard to know why it is still occurring.

I've tried it with the latest colprof and collink, and the same behavior is still occurring. To rule out absolutely any wrong level settings I applied the 3dlut to my monitor, where the bars still exceed 235 even though madVR is set to 0-255.
Code:
targen.exe -v -d3 -e4 -B12 -s0 -g128 -m0 -b0 -V1.5 -N1.0 -c rec709.icm -f250 -W "baseline" > logs\Create.Pattern.Chart.log
dispread.exe -v2 -dmadvr -c1 -Yp -y7 "baseline" >logs\dispread.log
colprof -v -qh -bl -aX -V1.5 "baseline" > logs\colprof.log
collink -v -qh -3m -et -Et -G -iaw -IB rec709.icm "baseline.icm" 3DLUT_1iaw.icm > logs\collinkiaw.log
collink -v -qh -3m -et -Et -G -ir -IB rec709.icm "baseline.icm" 3DLUT_1ir.icm > logs\collinkir.log
collink -v -qh -3m -et -Et -G -ia -IB rec709.icm "baseline.icm" 3DLUT_1ia.icm > logs\collinkia.log

This issue only exists for -iaw and -ia. The 3dlut for -ir is fine.

I have attached all the logs, the ti1 and ti3 file as well as the icm from colprof.

Hope this helps.
monvologsandfiles.zip 264k .zip file
post #1324 of 1702
Do we still need to download the Argyll tools update if we're using the 64-bit version?
post #1325 of 1702
Quote:
Originally Posted by paulkono View Post

Do we still need to download the Argyll tools update if we're using the 64-bit version?
As far as I know, yes. The biggest changes are in dispcal, dispread and targen, and I don't think those really benefit from 64-bit anyway, so you should just be able to replace your 64-bit executables with the newer 32-bit versions.
post #1326 of 1702
Hello

I'm looking to improve my use of Argyll/MadVR 3DLUT by generating a CCSS file with my ColorMunki Photo probe.
Unfortunately the file creation fails (step 3) even though the measurements were made properly.
This occurs both under Windows 7 with the cmd window running in administrator mode and under XP in an unspecified mode (administrator too, I guess).
I tried without success to specify a full path for the file.
Here is the command used:

c:/Argyll_LUT_3D+MADVR/Argyll_V1.6.2/bin/ccxxmake.exe -S -H -d 1 -TVSony_ LCD_White_LED TestCor.ccss

The error message is:

Writing CCSS File 'TestCor.ccss' failed

Do you have any idea how to correct this?
post #1327 of 1702
Quote:
Originally Posted by Kerlucun View Post

Hello

I'm looking to improve my use of Argyll/MadVR 3DLUT by generating a CCSS file with my ColorMunki Photo probe.
Unfortunately the file creation fails (step 3) even though the measurements were made properly.
This occurs both under Windows 7 with the cmd window running in administrator mode and under XP in an unspecified mode (administrator too, I guess).
I tried without success to specify a full path for the file.
Here is the command used:

c:/Argyll_LUT_3D+MADVR/Argyll_V1.6.2/bin/ccxxmake.exe -S -H -d 1 -TVSony_ LCD_White_LED TestCor.ccss

The error message is:

Writing CCSS File 'TestCor.ccss' failed

Do you have any idea how to correct this?

I think the last part
Code:
-TVSony_ LCD_White_LED  TestCor.ccss
is wrong, as the name of the matrix is not a parameter.
Try this:
Code:
c:/Argyll_LUT_3D+MADVR/Argyll_V1.6.2/bin/ccxxmake.exe -S -H -d 1  "TVSony_ LCD_White_LED  TestCor.ccss"

edit:
Unless you mean the use the -T parameter, then:
Code:
c:/Argyll_LUT_3D+MADVR/Argyll_V1.6.2/bin/ccxxmake.exe -S -H -d 1  -T "TVSony_ LCD_White_LED"  TestCor.ccss
post #1328 of 1702
Sorry monvo,

My memory was wrong.
I copied the exact command line from the txt file where I stored it.



c:/Argyll_LUT_3D+MADVR/Argyll_V1.6.2/bin/ccxxmake.exe -S -H -d 1 -T LCD_White_LED -N TestCor.ccss
post #1329 of 1702
Any clues as to why targen.exe -v -V1.6 -N1.0 -G -e4 -B8 -s0 -g64 -m0 -b0 -f1500 -c Rec709.icm -d3 results in an overall less accurate calibration with higher black levels than targen.exe -v -d3 -e4 -B4 -s40 -g128 -m9 -b9 -f0 -N1.0 -V1.6 -p1.0?

BTW, I switched to Windows 8.1 and .ti1 file generation using the targen.exe -G command is MUCH faster than before (with Windows 7 SP1).
post #1330 of 1702
Quote:
Originally Posted by monvo View Post

I've tried it with the latest colprof and collink, and the same behavior is still occurring. To rule out absolutely any wrong level settings I applied the 3dlut to my monitor, where the bars still exceed 235 even though madVR is set to 0-255.

I'm not quite clear what you think is a problem. It's normal for the WTW color wedges to exceed 235 if the device has a greater gamut in that area than the source color space. Typically this doesn't happen for the grey wedge if the white is set to the maximum that the display is capable of. This need not be the case if absolute colorimetric is being used, while relative colorimetric ensures it. The changes I made to extrapolation seem to be working as expected - the same ratio of RGB used for white is extrapolated out as far as it can go within the precision of the lookup grid, while still mapping 1,1,1 to 1,1,1 to ensure that the Rec709 "sync" levels are preserved.
post #1331 of 1702
I need some help with this CMS tool because I don't understand how it works. Let me explain what I already know and what I don't understand. :)

I own a Sony VPL-VW1000ES projector. This projector has Greyscale and Gamma correction options but it DOES NOT have a Color Management System. I also own Chromapure and I use a (calibrated) meter, the i1 Display Pro III. I already calibrated the greyscale and gamma of my unit to get it as good as possible. I now have a pretty decent Greyscale and Gamma curve. But because the VW1000 projector has no CMS the colors could be a little bit better. Not that they are far off, but still. I thought about buying a Lumagen for this but then came across this topic. Thankfully I'm using my HTPC as source for all my content, with Media Player Classic-BE and madVR as output renderer. What I don't understand is HOW this tool can actually help me get my colors perfect. I just read the starting post but NOTHING is mentioned about doing measurements etc.

What am I missing?
post #1332 of 1702
Quote:
Originally Posted by sanderdvd View Post

I also own Chromapure and I use a (calibrated) meter, the i1 Display Pro III.

Calibrated? For which types of displays? Have you got a special correction file for your projector generated with a spectrometer, or a generic file suitable for all projectors with a UHP lamp (maybe provided by Chromapure)? Does your i1d3 work with other software? I'm not sure that an i1d3 sold by Chromapure works with Argyll...
Quote:
Originally Posted by sanderdvd View Post

What I don't understand is HOW this tool can actually help me get my colors perfect. I just read the starting post but NOTHING is mentioned about doing measurements etc.

Of course you have to make measurements!
Quote:
Originally Posted by sanderdvd View Post

What am I missing?

Probably all the other 1330 posts....

Have you installed the software indicated in the first post? Have you already tried to launch dispcalGUI?
post #1333 of 1702
I return to the topic of the write error when creating a .ccss file with the ccxxmake command.
I tested the targen command, which, unlike ccxxmake, was able to create a .ti1 file under the same conditions.
Therefore, it is likely that the problem is not a bad DOS/Windows environment.
I then tried to create a .ccmx file; the measurements go well, but step 4 (generation of the file) results in an abort of the process.

944_appcompat.txt 20k .txt file
post #1334 of 1702
thxz for your reply Kukulcan.

The meter I am using with Chromapure is this one: http://www.chromapure.com/products-d3pro.asp. All I know is that Chromapure told me that they calibrated the meter before it was shipped to me and that it works more accurately than an OEM i1 Display Pro III meter (which can be bought separately if someone wants).

I only read the opening post and see nothing there about making measurements, so that's why I asked. I will do some initial setup tonight and report back to you.
post #1335 of 1702

my i1 Display Pro direct from X-Rite was delivered with corrections for all kinds of displays, so there should be some on the disc you got there.

the opening post is a how2 on making measurements X-)

so you just have to follow the first post.

post #1336 of 1702
Quote:
Originally Posted by mightyhuhn View Post

my i1 Display Pro direct from X-Rite was delivered with corrections for all kinds of displays, so there should be some on the disc you got there.

the opening post is a how2 on making measurements X-)

so you just have to follow the first post.
When I open Chromapure I always have to select a display type: front projection, plasma TV, etc. Is that what you mean?
post #1337 of 1702

corrections aren't listed in the how2 so you can try this:

open dispcalGUI

tools -> "import colorimeter corrections from other profiling software" -> auto

now you should see a bunch of files.

if there is one with a name like this: ProjectorFamily_07Feb11.edr, select it!

now make sure it is selected under correction.

that's it, just follow the rest of post 1.

just some info: you are using a beamer, so make sure madTPG is set to use fullscreen and disable the OSD.

gl!

post #1338 of 1702
Should we be loading correction profiles from the disk provided by X-Rite for the i1 Display Pro?
post #1339 of 1702

yes.

but go the easy way: install the software and let dispcalGUI find them for you.

post #1340 of 1702
I think dispcal.exe is getting progressively worse... I got real lucky and managed to create several .cal files with earlier VidTools versions using the following settings:

Brightness - 49
Contrast - 68

which resulted in raised blacks and contracted CR from the original 1750:1 to 1600:1. However, I improved the situation by changing TV settings to:

Brightness - 46
Contrast - 75

which resulted in proper blacks and a much better CR of 2050:1 with only a tiny loss in accuracy (the BT.1886 gamma curve @ 10% IRE ended up being 2.12 instead of 2.07).

With the latest dispcal.exe I can't change my brightness and contrast to achieve a better CR because it completely eradicates the accuracy of the calibration, and yet the original settings still do not provide the lowest black level when using a generated .cal file.

Using the Windows Border-less Gaming app/utility and Monitor Calibration Wizard I managed to force and preserve the 2050:1 CR calibration in all my games, which is much better than what user/service-menu based calibration yielded! All thanks to ArgyllCMS!
post #1341 of 1702
Graeme, I found what I think is a bug in disprd_read_drift() (in spectro/dispsup.c) that affects how the parameters are parsed (forgot to report it earlier). In the first conditional block it checks for |p->bdrift == 0|, but then checks |p->bdrift == 0| again below - I think this should be |p->wdrift == 0|. Presumably this means that currently, enabling only black drift compensation actually ends up enabling both.
post #1342 of 1702
Quote:
Originally Posted by VerGreeneyes View Post

Graeme, I found what I think is a bug in disprd_read_drift() (in spectro/dispsup.c) that affects how the parameters are parsed (forgot to report it earlier). In the first conditional block it checks for |p->bdrift == 0|, but then checks |p->bdrift == 0| again below - I think this should be |p->wdrift == 0|. Presumably this means that currently, enabling only black drift compensation actually ends up enabling both.
It's a bug (thanks), but I think the consequence is reading an unnecessary white patch when only black drift compensation is enabled. The actual calculations are controlled by the flags explicitly in the code below that.
post #1343 of 1702
got a question. I'm planning on measuring my display's "native" luminance function, and then using Matlab to create three 1D LUTs based on this data, so that I can meet any arbitrary luminance function, whether it be a flat 2.2, or 2.4, or BT.1886.

I've just learned how to use MadVR, and was wondering whether it's possible to directly use 1D LUTs, or whether I'd have to compromise by sampling it into a 3DLUT.

I guess another option would be to find a profile loader that would load the 1D LUTs into Windows. If so, would madVR automatically use it?
post #1344 of 1702
Quote:
Originally Posted by spacediver View Post

got a question. I'm planning on measuring my display's "native" luminance function, and then using Matlab to create three 1D LUTs based on this data, so that I can meet any arbitrary luminance function, whether it be a flat 2.2, or 2.4, or BT.1886.
You can, but... this is exactly what dispcal does. It reads a set of values for each color channel and grey to create a model of the luminance function, then refines it in several passes to meet the desired function. The only difference from 'flat' 2.2 or 2.4 is that it always applies either an output or an input offset (since your display's black won't be 0), so you can't just give it a response that it would have to clip. Aside from that detail, though, the following should get you what you want:
Code:
dispcal -dmadvr -qu -g2.2 -k0 flat2.2.cal
dispcal -dmadvr -qu -g2.4 -k0 flat2.4.cal
dispcal -dmadvr -qu -g2.4 -k0 -f BT.1886.cal
though note that this will use your display's native white point, which may not be what you want (you can override it using -t [temp] -T [temp] or -w x,y as explained in the documentation).

These will then produce .cal files that dispwin can load, for instance with |dispwin flat2.2.cal|. And finally, they can be used during profiling using dispread's -K flag (with the madVR test pattern generator) and integrated into or appended to a 3DLUT with collink's -a and -H flags respectively.
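
A rough sketch of how that might fit together, purely illustrative: the file names are placeholders, a profile.ti1 test chart from targen is assumed to already exist, and the collink options other than -a/-H are simply copied from the commands earlier in this thread - check the documentation before relying on any of it.
Code:
dispread -v -dmadvr -K BT.1886.cal profile
colprof -v -qh -aX profile
collink -v -qh -3m -et -Et -G -ir -a BT.1886.cal rec709.icm profile.icm 3DLUT_integrated.icm
collink -v -qh -3m -et -Et -G -ir -H BT.1886.cal rec709.icm profile.icm 3DLUT_appended.icm
The first collink line would bake the calibration curves into the 3DLUT itself (so madVR does the whole correction, with GPU gamma ramps disabled), while the second would append them so madVR loads them into the videoLUT instead.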
Edited by VerGreeneyes - 12/27/13 at 9:44am
post #1345 of 1702
thanks VerGreeneyes, appreciate the excellent info.

I've spent hours and hours working on getting my display's grayscale just about perfect in WinDAS (FW900, average delta E < 1.0!), so will definitely be using native white point! And yep, by flat I meant "compressed" so that it would fit within my luminance range. Nice to know that dispcal doesn't clip by default.

Prompted by your response, I've gone through some of the documentation of those three commands (so powerful!). I can see the advantages of using dispcal and dispread, as the profiling will be done in the same environment that I'm playing back material (MadVR renderer).

(incidentally, what would -qu do: shouldn't it be something like -qh?)

Just to clarify the workflow: once I've created the .cal file (whether through dispcal or manually), what would be the easiest way to integrate it into madVR if I'm not using 3DLUTs? I've already verified that madVR uses my desktop gamma settings (if I change the desktop color settings, the madVR renderer incorporates these changes in real time), so maybe it would automatically incorporate a profile loaded by dispwin?

If not, would it be best to somehow create a linearized 3DLUT (a 17x17x17 array of values of 1's?) and then integrate the .cal file into it? Based on your post, and the collink documentation, it seems that a gamma curve (1dLUT/CLUT) can either be appended to, or integrated into a 3DLUT. I can understand how integrating would work, where the values in the 17x17x17 cube are scaled so that they reflect the desired gamma curve. But what would append mean?


Suppose I wanted to do all this manually - would I be able to create my own CLUT in a text editor or something, and rename it to a .cal file? I was planning on making a 256x3 array of values as my CLUT.

Lastly (sorry for the horde of questions), if I were to go the dispcal route, that would mean I'd have to rely on dispcal's black point drift correction. I use a DTP-94, and my understanding is that one needs to completely cover the sensor to block out any light when doing a drift correction. Is using my display's black point going to suffice? (I suppose it would, if the black point doesn't change, and can be used as an anchor/reference value by dispcal).
Edited by spacediver - 12/27/13 at 2:16pm
post #1346 of 1702
Quote:
Originally Posted by spacediver View Post

(incidentally, what would -qu do: shouldn't it be something like -qh?)
-qu is 'ultra' quality; it uses more steps and a tighter tolerance for the final pass than high quality, though the difference may be minimal (depending on how well behaved your display is).
Quote:
Originally Posted by spacediver View Post

Just to clarify the workflow: once I've created the .cal file (whether through dispcal or manually), what would be the easiest way to integrate it into madVR if I'm not using 3DLUTs?
Once you have a .cal file you can load it into your graphics card's videoLUT and it will be applied to the output of every program on your system, including madVR (unless you set madVR to disable the videoLUT). If you use the .cal file during profiling with dispread and colprof, the 1D curves will be added to the ICC profile and loaded by Windows after you install the profile with dispwin -I. Without a 3DLUT, the correct setting for madVR is 'this display is already calibrated' - this will make it apply the correct source white point.
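As a quick illustration (file names are placeholders):
Code:
dispwin flat2.2.cal
dispwin -c
dispwin -I profile.icm
The first line loads the calibration into the videoLUT for the current session, the second clears it back to linear, and the third installs a profile so that its calibration is loaded automatically.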
Quote:
Originally Posted by spacediver View Post

Based on your post, and the collink documentation, it seems that a gamma curve (1dLUT/CLUT) can either be appended to, or integrated into a 3DLUT. I can understand how integrating would work, where the values in the 17x17x17 cube are scaled so that they reflect the desired gamma curve. But what would append mean?
Appending simply makes madVR load the 1D curves into your videoLUT - so it's only useful if you're using one calibration for your regular workflow, and want to use another calibration for watching video. Integrating the 3DLUT and 1DLUT values means madVR will do the whole correction on its own, so you should enable the 'disable GPU gamma ramps' option.
Quote:
Originally Posted by spacediver View Post

Suppose I wanted to do all this manually - would I be able to create my own CLUT in a text editor or something, and rename it to a .cal file? I was planning on making a 256x3 array of values as my CLUT.
The .cal files include some metadata and the relative luminance for each correction, but technically you could make your own. I don't think it would be that useful, though. Remember that the values you want are corrections; just putting the luminance function you want in there isn't going to work - so you should probably leave it to dispcal.
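For what it's worth, the .cal files are plain CGATS-style text, so you can open one in a text editor to see the format. Roughly (from memory, so the metadata and exact field names may differ slightly), a dispcal output looks something like this:
Code:
CAL

DESCRIPTOR "Argyll Device Calibration State"
ORIGINATOR "Argyll dispcal"
DEVICE_CLASS "DISPLAY"

NUMBER_OF_FIELDS 4
BEGIN_DATA_FORMAT
RGB_I RGB_R RGB_G RGB_B
END_DATA_FORMAT

NUMBER_OF_SETS 256
BEGIN_DATA
0.00000 0.00000 0.00000 0.00000
0.00392 0.00338 0.00392 0.00378
...
1.00000 1.00000 1.00000 1.00000
END_DATA
RGB_I is the input value and the other three columns are the corrected per-channel output values (the middle rows above are made up, just to show the shape of the data).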
Quote:
Originally Posted by spacediver View Post

Is using my display's black point going to suffice? (I suppose it would, if the black point doesn't change, and can be used as an anchor/reference value by dispcal).
I think that's how it works, yes, otherwise it would be a very clunky system. IIUC, dispcal and dispread's black/white drift compensation compensates for drift relative to whatever the levels were at the start, to get more consistent measurements during refinement and make the measured values more comparable across a long run.
post #1347 of 1702
thanks so much, you've been incredibly concise and helpful. I think I'll take your suggestion and let dispcal deal with it. (I fully understand that the values are corrections and not simply the luminance function: I was planning on calculating the correction factor for each of the 256 values using Matlab. I might still do that later, but it might be wiser to let dispcal generate its own cal first, and then I can get a sense of the formatting).

And once I upgrade to a new PC, get Windows 7 (instead of XP), and get a Blu-ray drive, I may end up going the 3DLUT route :)
post #1348 of 1702
Quote:
Originally Posted by VerGreeneyes View Post

Code:
dispcal -dmadvr -qu -g2.2 -k0 flat2.2.cal
dispcal -dmadvr -qu -g2.4 -k0 flat2.4.cal
dispcal -dmadvr -qu -g2.4 -k0 -f BT.1886.cal
though note that this will use your display's native white point, which may not be what you want (you can override it using -t [temp] -T [temp] or -w x,y as explained in the documentation).

I successfully ran the third option (although I used -f0, which is what I think it would have defaulted to had I just used the switch -f as you suggested?). Took a while - over 600 patches. Definitely noticed an improvement in the ability to see detail, though things do look slightly washed out (although this could certainly be just me having to adapt - when I look at a smooth grayscale ramp, the ramp certainly looks more perceptually uniform and less crushed with the .cal file applied. I probably need to find some good reference video material).

I may try some different parameters.



A few questions.

At one point, it displayed a dark gray patch, and asked me to raise my brightness until a target was hit. Why does it ask me to do this when it could just adjust the correction in the CLUT at that point to meet the target? This way I have to sacrifice black level.

Another issue is that I thought it would use my display's native white point (I didn't use the -t, -T or -w switches). Yet when I look at the CLUT itself, it's clear that each of the three RGB channels has been adjusted independently. I'm guessing this is done so that each of the three channels meets the specified gamma target, but in doing so, wouldn't it alter the chromaticity across my grayscale?

Is there any way to get it to just use grayscale test patches, and apply the corrections equally across the channels?

Finally, I'm curious - is there any limit to how successful gamma ramps can be? Suppose you have a severely crushed display, say a gamma of 4. Suppose it's so bad that from 0-50 percent gray, the luminance never goes above 0.05 cd/m2, but by peak white it's accelerated super fast up to 100 cd/m2. Could an appropriate CLUT create a "final product" that is indistinguishable from a display that naturally had that perfect final product? I'm having trouble seeing why this couldn't be the case, but something tells me it's not as easy as this.

The reason I ask is that after doing my hardware calibration, my CRT was severely black crushed. I'm wondering if I'd achieve better results with dispcal if I were to start off with a better curve.
Edited by spacediver - 12/28/13 at 6:37am
post #1349 of 1702
Quote:
Originally Posted by spacediver View Post

At one point, it displayed a dark gray patch, and asked me to raise my brightness until a target was hit. Why does it ask me to do this when it could just adjust the correction in the CLUT at that point to meet the target? This way I have to sacrifice black level.
It's trying to get the desired level at 1% luminance, so presumably it would have to make fewer corrections in the LUTs later. I wouldn't worry too much about this step - in fact you can pass in -m and it will skip straight to the actual calibration. Since you're using your native white point and have already adjusted it, there's no real need to go through the steps in this menu.
Quote:
Originally Posted by spacediver View Post

Another issue is that I thought it would use my display's native white point (I didn't use the -t, -T or -w switches). Yet when I look at the CLUT itself, it's clear that each of the three RGB channels has been adjusted independently. I'm guessing this is done so that each of the three channels meets the specified gamma target, but in doing so, wouldn't it alter the chromaticity across my grayscale?
A display's black point often doesn't have the same chromaticity as its white point, and one of the goals of dispcal is to get a consistent chromaticity along the neutral axis, so it's adjusting each channel independently to achieve that (this is also what the peqDE parameter in the final pass is for). Note that it has to make a choice near black: try to match the chromaticity of white (raising the black point to achieve this), or gradually shift the chromaticity over to that of black. This is what the -k0 parameter does - it tells dispcal to leave the chromaticity of black untouched. -k1 makes it match the chromaticity of white, -k0.5 makes it do 50% of each and leaving the parameter out lets it choose what to do based on your display's black level.
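For example (purely illustrative, reusing the BT.1886 command from earlier; the output names are just placeholders):
Code:
dispcal -dmadvr -qu -g2.4 -k0 -f bt1886_native_black.cal
dispcal -dmadvr -qu -g2.4 -k1 -f bt1886_white_black.cal
dispcal -dmadvr -qu -g2.4 -k0.5 -f bt1886_blend.cal
The first leaves the black chromaticity alone, the second pulls it toward the white point, and the third splits the difference.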
Quote:
Originally Posted by spacediver View Post

Finally, I'm curious - is there any limit to how successful gamma ramps can be? Suppose you have a severely crushed display, say a gamma of 4. Suppose it's so bad that from 0-50 percent gray, the luminance never goes above 0.05 cd/m2, but by peak white it's accelerated super fast up to 100 cd/m2. Could an appropriate CLUT create a "final product" that is indistinguishable from a display that naturally had that perfect final product? I'm having trouble seeing why this couldn't be the case, but something tells me it's not as easy as this.
The major problems are black and white 'walls'. If dispcal finds that, say, the bottom 10% and top 10% of RGB values all give the same luminance, it will compensate for this by limiting the range of the LUT. This should work just fine, but the more limited the range of the LUTs, the less precision the display will use when applying them. The LUT values are applied with 16-bit precision per color channel, but your display can probably only display colors with 8-bit (or even 6-bit) precision, and potentially uses dithering to achieve the rest. But if the LUT starts at 10% and ends at 90%, the display will only be able to use 80% of its internal precision. That may not be the end of the world, but it's something to keep in mind.
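To put rough, purely illustrative numbers on that: if the usable range of the videoLUT ends up spanning only 10% to 90%, an 8-bit panel has roughly 0.8 × 256 ≈ 205 distinct steps left to cover the whole calibrated range instead of 256, so banding becomes a little more likely unless the display dithers internally.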
post #1350 of 1702
Quote:
Originally Posted by VerGreeneyes View Post

It's trying to get the desired level at 1% luminance, so presumably it would have to make fewer corrections in the LUTs later. I wouldn't worry too much about this step - in fact you can pass in -m and it will skip straight to the actual calibration. Since you're using your native white point and have already adjusted it, there's no real need to go through the steps in this menu.

OK, I'll skip that part then and go straight to the calibration. Thanks :)
Quote:
Originally Posted by VerGreeneyes View Post

A display's black point often doesn't have the same chromaticity as its white point, and one of the goals of dispcal is to get a consistent chromaticity along the neutral axis, so it's adjusting each channel independently to achieve that (this is also what the peqDE parameter in the final pass is for). Note that it has to make a choice near black: try to match the chromaticity of white (raising the black point to achieve this), or gradually shift the chromaticity over to that of black. This is what the -k0 parameter does - it tells dispcal to leave the chromaticity of black untouched. -k1 makes it match the chromaticity of white, -k0.5 makes it do 50% of each and leaving the parameter out lets it choose what to do based on your display's black level.

The thing is, I'd rather not put much faith in the chromaticity readings of my black level. I have more faith in my CRT's ability to maintain a good electron beam balance throughout its range, based on the adjustments I made during the service mode procedure. And even if it turns out that the black level is slightly off, then if dispcal "gradually shifts the chromaticity over to that of black" with -k0 (I'm assuming by gradually, you mean that starting from white, and going down to black, it adjusts the chromaticity gradually), then I'll end up with larger delta Es in the midtones (assuming that my white level had perfect chromaticity).

Is there an option to have it just try to meet D65 at each step (except for the first step, where doing so would raise the black level), instead of this gradual "chromaticity blending" method? (I may have misunderstood what you meant, however).

Quote:
Originally Posted by VerGreeneyes View Post

The major problems are black and white 'walls'. If dispcal finds that, say, the bottom 10% and top 10% of RGB values all give the same luminance, it will compensate for this by limiting the range of the LUT.

Why would it compensate by limiting the range? Suppose values 0-20 are all at 0.01 cd/m2, and 235-255 are all at 100 cd/m2. Wouldn't you want to apply corrections to these ranges? If you limited the range of your LUT, then you'd leave these "walls" uncorrected! What am I missing here?

Quote:
Originally Posted by VerGreeneyes View Post

This should work just fine, but the more limited the range of the LUTs, the less precision the display will use when applying them. The LUT values are applied with 16-bit precision per color channel, but your display can probably only display colors with 8-bit (or even 6-bit) precision, and potentially uses dithering to achieve the rest. But if the LUT starts at 10% and ends at 90%, the display will only be able to use 80% of its internal precision. That may not be the end of the world, but it's something to keep in mind.

For what it's worth, I'm on a CRT (a high end model), and given that it's analog, it should in principle support an arbitrary bit depth (well, limited by the quantization of voltage in the electronics). But even if it did support 16 bit color depth, wouldn't I need a video card that supported this? My video card only supports 8 bit color.

I have a feeling I'm conflating bit depth with precision. Is 16 bit precision doing the calculations in 16 bit to avoid accumulation of rounding errors, and then presenting the final result in an 8 bit depth?