Originally Posted by harlekin
Nope, it's the TV.
Basically gamma isn't stable upon continuous readings. (At least not on high IREs.)
Does it matter? Yes. How much? We aren't sure - but no one seems to be too bothered. The actual brightness of any field is also influenced by field size and overall on-screen brightness (when the LG's ABL kicks in), in the reverse direction.
So what we do currently is either try to "measure fast" (use sweeps; never measure several high-brightness fields in a row) or insert darker grey patterns every few fields (AFAIR there's only a beta version of HCFR out there that does that - but you'd probably have to ask zoyd) - because "if those high-luminance fields only stay on screen for a few seconds, their readings are at least stable".
So the short answer is: don't use continuous measurements when calibrating those OLEDs. Instead, use the camera "snapshot" feature (if you use HCFR as a pattern generator, which is recommended because it's just faster in general) as many times in a row as you need - but with at least a few seconds of pause in between.
The long answer is: yes, it isn't textbook, and yes, it is a fault of the TV (directly influencing color perception), but no one seems to care that much...
The meter is fine. It only happens on those TVs (not on LED LCDs, not on CRTs, not on CCFL LCDs, not on QD LED LCDs ...).
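For what it's worth, that "snapshot with pauses" routine could look roughly like the sketch below. This is only an illustration: read_xyz and show_patch are hypothetical stand-ins for whatever your meter SDK and pattern generator actually expose, and the pause length and repeat count are my assumptions, not HCFR defaults.

```python
import time

def snapshot_readings(read_xyz, show_patch, patches, repeats=3, pause_s=5.0):
    """Take short 'snapshot' readings of each patch, resting on a darker
    field between reads so high-luminance patches never stay up long
    enough to drift."""
    results = {}
    for patch in patches:
        samples = []
        for _ in range(repeats):
            show_patch(patch)            # put the pattern up...
            time.sleep(1.0)              # ...just long enough to read it
            samples.append(read_xyz())   # one quick snapshot (X, Y, Z)
            show_patch("dark_grey")      # drop to a darker field
            time.sleep(pause_s)          # let the panel settle
        # average each channel across the snapshots
        results[patch] = tuple(sum(ch) / repeats for ch in zip(*samples))
    return results
```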
I've only calibrated on a very small sample of displays, all of which are LG "Smart"-type displays.
I think the "drifting" is a combination of things rather than any single cause, which is probably what you're implying by saying "fault of the TV", but I just wanted to give my two cents on this based on my small experience.
Brighter requires more power going to each pixel/backlight.
More power means more heat.
I feel confident presuming consumer displays have a sizable tolerance range, not just in manufacturing but in engineering as well. Internal components may have more ripple, or simply become unstable the longer the current stays constant.
The more accurate your meter, the easier it is to measure this phenomenon.
Also, heat may change the spectral characteristics of the backlight (or pixel) and/or everything light passes through (substrates, filters, screen material composition, etc).
That's quite a large number of places where characteristics can be subject to change. Each of these points may be affected by temperature differently, and at any given temperature, voltage can further change things (read: non-linear behavior per component, which may or may not stay the same at different temperatures and/or voltages).
But wait, there's more!
How components degrade or simply age overtime is also a factor in this, and even this may be different per component.
But wait, there's still more!
Viewing environment can also join the mix! Room temperature, elevation, humidity and similar.
But wait th.... Ok I'm just gonna stop now. You get the idea.
I'm sure there are a ton more factors related to this, but I suspect most of them are not a practical cause for concern. Temperature is definitely worth considering during calibration.
As for short and quick measurements, they may not be an ideal solution for some displays.
Just as a signal can start to drift the longer it's shown, the inverse is true as well.
A signal can be unstable at first and require time to normalize.
I'm assuming this is why some meters have an integration time (hardware averaging?).
This is always going to be different between panels, regardless of measuring devices. What kind of difference this really is, is another matter entirely that others can (and have) discussed here in the forums.
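If you want to account for that warm-up behavior, one approach is to poll until consecutive readings agree before recording anything. A minimal sketch, assuming a hypothetical read_luminance stand-in for your meter's read call and arbitrary tolerance/timeout values:

```python
import time

def settled_reading(read_luminance, tolerance=0.005, timeout_s=30.0, interval_s=1.0):
    """Poll until two consecutive readings agree within a relative
    `tolerance`, or give up after `timeout_s` and return the last value."""
    deadline = time.monotonic() + timeout_s
    previous = read_luminance()
    while time.monotonic() < deadline:
        time.sleep(interval_s)
        current = read_luminance()
        if previous > 0 and abs(current - previous) / previous <= tolerance:
            return current               # stable: two reads agree
        previous = current
    return previous                      # timed out: best we have
```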
Also, I don't think you should assume that no one cares. I believe the discussion is not happening simply because we (the users) can't really do anything about it.
Higher-quality components cost more, tighter engineering takes longer and costs more, and the more a company spends on R&D, the higher the price we pay for the display. More sales = lower price = more money. In theory, anyway. No product competition = price hike = more money (if you can't get it from someone else and you want it, you've gotta pay up).
We users (AVSForum) are NOT the target market, and if we want to see changes then we have to inform the true target market and join forces.
But this will not happen for something like this. You'll see calibrated displays on a showroom floor before that happens.
Speaking of averaging, I looked around for an answer to a question about it, but it was never really discussed.
Q: Is HCFR's "adaptive integration" something to enable if you have a meter that supports this at the hardware level (e.g. ColorMunki Display)?
I've been assuming the more averaging the better, but in the back of my mind I've also been thinking that this may conflict with such meters or otherwise skew measurements.
I've only been enabling it for 75-100 IRE calibration points (read: anytime repeatable measurements begin to float around).
I also manually average 50 measurements for a single patch (rounding to nearest integer), then I make two more averages from the same measurements: everything above and everything below that first average (excluding measurements that match the original average when rounded to nearest integer). Finally, I take the average of those two.
It's my own personal average-ception
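In code, that procedure might look roughly like the sketch below. The name average_ception and the sample values are mine, and note that Python's round() rounds halves to even, which can differ slightly from plain "nearest integer" rounding.

```python
def average_ception(samples):
    """Mean of the sub-means above and below the (rounded) overall mean,
    excluding samples that themselves round to the overall mean."""
    overall = round(sum(samples) / len(samples))     # rounded overall mean
    above = [s for s in samples if round(s) > overall]
    below = [s for s in samples if round(s) < overall]
    upper = sum(above) / len(above) if above else overall
    lower = sum(below) / len(below) if below else overall
    return (upper + lower) / 2

# e.g. average_ception([98.4, 99.1, 100.6, 101.2, 100.0]) -> 99.825
```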
And yes, this is overkill, but it lets me sleep at night (I has extreme OCD...)
I do think averaging a series of measurements effectively counterbalances any floating measurements.