Quote:
Originally Posted by ConnecTEDDD
When you display a black field, you still have area of the screen which is displaying stuff...the multiple menus of LG with the calibration controls. Because we don't want to produce image retention, we don't display a full field xx% Grayscale patch when navigating the menus.
The procedure you are following gives you different low-end measurements (because of the instant image retention), and that image retention will disappear if you measure again some time later, so your measurements will differ.
I never had a single problem doing a manual calibration like that or taking any sweep. In real content, pixels change state 24 times per second; there's no time for stabilization.
Quote:
Originally Posted by ConnecTEDDD
...when I need to change something to the internal settings, I'm displaying a full field black pattern...
But you just said that was what you were doing. Or perhaps I just misinterpreted.
You can ignore my crazy process in my last post (it was a separate comment, not part of the preceding portion).
When I first started calibrating my E6, I was doing full 20-point grayscale sweeps, then adjusting, and repeating until I was done, with gamma dialed in at 2.4.
Afterwards I watched Colors of Journey via MadVR's HDR>SDR gamut mapping using shaders. I set the video to repeat when done.
The darkest scenes are at the beginning: a campfire, a starry sky, and leaving a mountainside road tunnel.
These looked good on the first pass, but the second pass was different: the starry sky was now a very dark grey, and the tunnel was terrible, with very noticeable changes in luminance while leaving it.
At this point I thought my meter was at fault, so to test this:
I made no adjustments and did a sweep, waited 2 minutes and swept again, then waited 2 more minutes and swept a third time.
Each set was brighter than the previous (continuing this pattern beyond three passes, the results were all very close but still barely rising).
I then checked for retention with a 5% full-screen field pattern and saw the after-image of the window patterns.
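The sweep-wait-sweep test above can be sketched as code. This is illustrative only: `DriftingMeter` is a made-up stand-in for a real meter read, and the small lift term fakes retention raising the near-black readings between passes.

```python
import time

# Hypothetical meter model: ideal 2.4-gamma panel whose low-end
# readings drift upward a little with each pass (faked retention).
class DriftingMeter:
    def __init__(self):
        self.passes = 0

    def read(self, level):
        base = (level / 100.0) ** 2.4 * 100.0  # pure 2.4 gamma, 100-nit peak
        lift = 0.02 * self.passes if level <= 10 else 0.0  # retention lift
        return base + lift

    def next_pass(self):
        self.passes += 1

def sweep(meter, levels, wait_s=0.0):
    time.sleep(wait_s)  # idle time between sweeps (2 minutes in the post)
    return [(lvl, meter.read(lvl)) for lvl in levels]

meter = DriftingMeter()
first = sweep(meter, [0, 5, 10, 50, 100])
meter.next_pass()
second = sweep(meter, [0, 5, 10, 50, 100])
# Under this model, low-end readings rise between passes while the
# top end holds steady, matching the pattern described above.
```

Logging passes this way makes the drift visible as a systematic low-end offset rather than random meter noise.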
Then I decided to measure everything manually while waiting 5 minutes between measurements (so tedious).
Upon checking those scenes again, the first pass was notably darker and the second pass was lifted again; brighter content was also not as bright, which made the lifted dark content in the second pass even easier to notice.
This is what started me going OCD on understanding my panel's behavior and trying my crazy ideas because why not.
I don't see what's wrong with this procedure, other than the image retention raising my measurements and me chasing them in circles trying to control them.
I've done what I could to rule out sources of error (not meter, not meter temperature).
Watching real content always means some degree of image retention is present. If the content is "slow", you can see the after-images on field patterns much more easily. I forgot to mention that, depending on the Contrast setting, ABL can increase the time until retention happens.
I still think it's worth considering real-world usage to some degree. In my opinion it doesn't make sense to define gamma in, and for, a synthetic environment the panel will never be in with real content. I mean, real content is not going to be a black screen.
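For reference, the gamma 2.4 target mentioned earlier is just a power law relating signal level to light output. A minimal sketch (the 100-nit peak is an assumed example value, not a measured one):

```python
# Relative output for a signal level v in [0, 1] under the pure
# power-law gamma 2.4 target discussed above. peak_nits is a
# hypothetical example value, not a measurement.
def luminance(v, gamma=2.4, peak_nits=100.0):
    return peak_nits * v ** gamma

# A 50% signal yields only about 19% of peak light output, which is
# why near-black behavior dominates how dark scenes look.
mid = luminance(0.5)
```

This is also why retention lifting the bottom few percent of the signal range is so visible: at gamma 2.4 those levels should be sitting at a tiny fraction of peak output.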
And by stabilization, I was referring to the voltage delivered to each pixel (the part controlling its refresh and the sub-pixels). I might be using the wrong word for what I mean: basically, how performance changes with power delivery. Think of computers, where unstable voltage can cause artifacts, miscalculations, and even crashes/freezes/BSODs. I don't trust the power supply in any commercial display to be of decent quality, just passable; the same goes for any power supply that comes in a Dell/HP/etc. The comparison isn't exact, but I think you see the kind of point I'm trying to make.
If I understand you correctly, you are taking very quick measurements (before mine even does one), and I don't think you would notice the pixels still getting brighter before they stabilize and then, shortly after, start rising again toward an absolute peak output that real content will never get near (it requires a static signal and takes too long).
I still don't know why continuous, real-time measurements and "measure > wait 5 seconds at a black screen > measure > repeat", both run for 30 seconds, produce different results. The latter always reads higher than the continuous measurements. I've been completely stumped by this, but I think stability is somehow related.
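One toy model that would reproduce that observation: if a pixel briefly overshoots right after a black-to-patch transition and then settles, the pause-at-black schedule keeps re-triggering the overshoot, while the continuous read mostly sees the settled level. All numbers below are invented for illustration; this is a hypothesis, not a claim about how the panel actually behaves.

```python
import math

# Hypothetical pixel: settles to `steady` nits, with a small
# transient `overshoot` that decays (time constant `tau`) after each
# black-to-patch transition. Invented numbers, illustration only.
def pixel_output(t_since_transition, steady=50.0, overshoot=2.0, tau=1.0):
    return steady + overshoot * math.exp(-t_since_transition / tau)

# Continuous: one transition, sampled once per second for 30 s.
continuous = [pixel_output(t) for t in range(30)]

# Paused: the black screen resets the pixel, so every read happens
# right after a fresh transition (6 reads in the same 30 s).
paused = [pixel_output(0) for _ in range(6)]

avg_continuous = sum(continuous) / len(continuous)
avg_paused = sum(paused) / len(paused)
# Under this model, avg_paused comes out higher than avg_continuous,
# matching the "latter is always higher" observation.
```

If the real mechanism were instead a slow rise toward steady state, the paused schedule would read lower, so the direction of the difference is itself a clue.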
Then again, if I had a higher-quality meter my observations might be completely different. Has anyone REALLY examined how compatible meters like the i1d3 are with this technology? Does meter behavior change as luminance/spectral characteristics shift? What about the panel glass and/or the polarized 3D screen filter?
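For what it's worth, i1d3-class colorimeters are usually matched to a specific display type by applying a 3x3 correction matrix to their raw XYZ readings, with the matrix derived by comparing the meter against a reference spectroradiometer on the actual panel (the approach tools like ArgyllCMS and DisplayCAL use). A minimal sketch, with made-up matrix values:

```python
# Apply a 3x3 spectral-correction matrix to raw colorimeter XYZ.
# IDENTITY leaves readings unchanged; a real matrix would come from
# profiling the meter against a spectroradiometer on this panel
# (glass, filters and all). Values here are illustrative only.
def correct_xyz(raw_xyz, matrix):
    return tuple(sum(m * v for m, v in zip(row, raw_xyz)) for row in matrix)

IDENTITY = ((1.0, 0.0, 0.0),
            (0.0, 1.0, 0.0),
            (0.0, 0.0, 1.0))

raw = (95.0, 100.0, 108.0)
same = correct_xyz(raw, IDENTITY)  # identity: reading passes through
```

A per-panel-type matrix like this is exactly the kind of thing that could absorb glass or 3D-filter effects, which is why the lack of an agreed correction for LG OLEDs matters.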
Iron Mike has implied that a Judd-Vos-derived white point is needed for most (all?) OLEDs. We know what it is for Sony OLEDs, but as far as I know we don't have an agreed-upon one for LG OLEDs; perhaps having one could improve measurement "stability" by better matching the glass/OLED composition characteristics?
Right or wrong, I really wish I wasn't so OCD about all of this.
Ignorance is bliss, as they say.