Originally Posted by sillysally
LightSpace's 'Anisometric' patch sequence seems to work well.
I tried a custom argyllCMS patch sequence for my EF9500 and used Mike's tools to convert it for LightSpace, but I liked the 'Anisometric' sequence better.
IMHO, you must learn what works best with the calibration program you are using. I have LightSpace, Calman and argyllCMS and have used them for many years.
You may be right to some degree about retention, but that is why I run a large patch set to check how well all my color points turned out. When your calibration check always comes back well within spec, how can you disagree with that?
Sorry. I didn't mean to imply you were having an issue with ABL. I was referring to it and temperature as gremlins that should be addressed for calibration.
I'm also sorry for the long post (I'm OCD). I wrote this line after everything else, because I want readers to know I'm not really satisfied with how I've translated my thoughts into this post, but I have other things to do and can't keep revising until I am.
My apologies if this doesn't make sense to anyone.
I'm basing this on how I see retention on my E6, which may or may not behave like your panel or its retention (luminance vs. chroma, or a mix of both? How does your panel's power delivery affect voltage stability, retention tendencies, circuit heat, power-saving functions at the hardware-design level, etc.?).
Is your verification set in the same or similar order (voltage/brightness, perhaps even hues)? This is a rhetorical question.
I'm really bad at communicating (putting my thoughts into words), so allow me to propose a situation where I think this kind of question would apply:
You (the reader) run a series of patches for, let's say, 2 hours (to stay within the automatic noise-reduction cycle the panels run, which I think is every 4 hours; this is completely separate from the manual de-noising we can start via the menu options).
You've verified that yes, there is retention visible after the set was run.
You wait 2 hours and play dark content (<70 nits) to eliminate image retention and noise (read: the panel is now in a very similar condition to when you started the measurement series).
Now you run a verification test.
The results turn out to be very good, but you confirm that retention was also present after the verification run.
Now here is the question... How do you prove that your measurements are truly good?
Because image retention was present at the end of both runs, did you ever consider that this retention may actually be skewing the measurements, and thus the LUT corrections?
The reverse can also apply. Let's say retention does not happen during this process. The OLEDs will still behave differently with real content than they do with synthetic content (read: calibration/profiling/verification patches).
And we're back to the same question again but inverted.
Either way, retention is still being a jerk to you (directly in your measurements or indirectly with real content).
This actually brings up a big question:
What IS this retention?
How does the retention in both of the above situations compare?
How does this affect measurements?
How does retention affect voltage stability?
Does it affect everything linearly, or does it vary with luminance and/or voltage?
The only answer I can think of is to simply observe your panel's behavior and then make a judgment call.
Here is the reason why I chose to speak about this:
I did three sets of measurements, 6 minutes each. After each set I ran 15 minutes of an LG demo video (Colors of Journey, with MadVR doing HDR>SDR conversion), then manually ran noise cleaning.
Preparation: I put up a 20% field (~3 nits) for 20 minutes, then switch to a 70% field for 5 minutes, then a 0% black field for 2.5 minutes.
Set A) Continuously measure a 5% window of 70% white for 6 minutes.
Set B) Measure the same 5% window of 70% white, wait 20 seconds, measure again, and repeat for 6 minutes.
Set C) Same as B, but wait 5 seconds, measure, then wait 20 seconds, and repeat for 6 minutes.
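For anyone who wants to reproduce the timing, here's a minimal sketch of the three cadences. `read_luminance()` is a hypothetical placeholder, not a real API; you'd wire it to whatever your meter or calibration software actually exposes.

```python
import time

def read_luminance():
    # Placeholder: replace with a real meter call via your
    # calibration software. Should return luminance in nits.
    return 0.0

def run_set(pre_wait, post_wait, duration_s=6 * 60):
    # Measure a static 5% window of 70% white on a fixed cadence.
    # pre_wait:  seconds to wait before each reading
    # post_wait: seconds to wait after each reading
    samples = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        time.sleep(pre_wait)
        samples.append((time.monotonic() - start, read_luminance()))
        time.sleep(post_wait)
    return samples

set_a = run_set(pre_wait=0, post_wait=0)    # Set A: continuous readings
set_b = run_set(pre_wait=0, post_wait=20)   # Set B: read, then wait 20 s
set_c = run_set(pre_wait=5, post_wait=20)   # Set C: wait 5 s, read, wait 20 s
```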
Naturally, all three sets show that retention is causing luminance to progressively increase the longer this process continues.
I found it interesting that no set behaved as I expected.
Set A was not the brightest at the end of 6 minutes.
Set B had the largest differences between measurements at the same timestamp.
Set C quickly got brighter, with a very large increase in the first 4 or so measurements, but sets B and C were practically identical at the end, and both were higher than set A (by ~20-30 nits, basically a whole IRE gamma target's worth of difference).
Curious, I re-ran set A. Same result, but I also decided to wait 30 seconds and take a final measurement. Now luminance was practically identical to sets B and C.
IMO this is very unusual behavior, which worries me.
What I think is happening:
A pixel samples and holds a signal, and while the signal is held the drive voltage is "locked" in, but charge continues to build up in the components driving the pixel. The pixel voltage stabilizes for a few seconds before the retained build-up starts to leak through (basically drifting). This continues until the pixel gets a new signal, at which point it releases the voltage lock to change to and lock onto the new signal, and the build-up flows out.
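To make the shape of that hypothesis concrete, here's a toy numerical sketch; it is NOT a panel model, and every constant in it is made up for illustration. The only point is the pattern: brief stability after a new signal, upward drift once the build-up starts leaking, then a reset when the pixel re-samples.

```python
def simulate_pixel(hold_seconds, resample_every, dt=0.1):
    # Toy model of the hypothesis above; all constants are assumptions.
    target = 100.0      # nominal luminance of the held signal (nits)
    leak_delay = 3.0    # seconds of stability before drift begins
    leak_rate = 0.15    # nits/s of drift once leakage starts
    t, since_sample, lum = 0.0, 0.0, target
    trace = []
    while t < hold_seconds:
        if since_sample >= resample_every:
            lum, since_sample = target, 0.0   # new signal: build-up flows out
        elif since_sample > leak_delay:
            lum += leak_rate * dt             # retained charge leaks through
        trace.append((t, lum))
        t += dt
        since_sample += dt
    return trace

# e.g. 6 minutes of holding, re-sampled every 20 s:
trace = simulate_pixel(hold_seconds=360, resample_every=20)
```

Under this toy model, a pixel that holds the same signal for longer drifts further before resetting; again, it's only meant to illustrate the "stable, then drift, then reset" shape I think I'm seeing.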
I've only taken a quick look at the board schematics, but I can tell that the power phase delivery is only just passable, IMO. Given the behavior I observed on my E6, I simply do not believe this should be overlooked. It's just too marginal to be reliably stable.
I've been using a 20% field between measurements for this purpose during manual calibration. That isn't really practical for profiling, though, and that's where building a patch sequence by hand can offer a real improvement over automatically generated sequences (which, AFAIK, cannot factor this behavior in). A rough sketch of the idea follows.
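Here's a minimal sketch of what I mean, assuming patches are simple (R, G, B) triplets in 0-100% signal; the format is invented for the example, and you'd still have to export the result in whatever format your software imports.

```python
# Hypothetical patch entries as (R, G, B) triplets in 0-100% signal.
STABILIZER = (20, 20, 20)   # the 20% field I use between measurements

def interleave_stabilizer(patches, every=1, stabilizer=STABILIZER):
    # Return a new sequence with a stabilization field inserted
    # after every `every` measurement patches.
    out = []
    for i, patch in enumerate(patches, start=1):
        out.append(patch)
        if i % every == 0:
            out.append(stabilizer)
    return out

# Example: a short grayscale ramp with a stabilizer after each patch.
ramp = [(v, v, v) for v in (0, 25, 50, 75, 100)]
sequence = interleave_stabilizer(ramp)
```

In practice you'd also need the software to display the stabilizer patches without measuring them (or discard those readings afterwards), which, as far as I know, the automatic generators can't do.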