Originally Posted by zoyd
The 50–90% levels are there so you can see where ABL cuts on and how severe it is; it doesn't change the conclusion that something is going on at the low-APL end. Btw, the patterns I used were only made to measure the effect (it sounds like you may be thinking these are for calibration; they are not). All I'm concerned about is defining a stable operating range where luminance does not vary with surround for any input level. That range is where you can build a pattern to calibrate for your target gamma and it will not change. Another way to think of it is to ask these questions: Why do I use 10–18% windows to calibrate gamma on a plasma? Answer: to avoid ABL distorting the measurement. Why would I use windows in a constant-APL background? Answer: to avoid dynamic contrast distorting the measurement. As long as the constant-APL luminance is also below the ABL cut-on, you've solved both problems.
10–18% windows on a black background will give you a gamma calibration that seldom represents your target value when viewing real video content on the displays measured above. As I mentioned earlier, the measurements indicate that the average "error" will be about 0.1, so a 2.3 gamma calibration will, for most material, actually give you a 2.4 response. This is not a huge error, but I think it's large enough to pay attention to.
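To put a number on what a 0.1 gamma shift means, here is a small sketch using the standard point-gamma relation (luminance = peak × stimulus^gamma). The peak value of 100 cd/m² is just an assumed figure for illustration, not a measurement from any of the displays discussed:

```python
import math

def point_gamma(stimulus, luminance, peak_luminance):
    """Point gamma at one stimulus level: gamma = ln(Y/Ypeak) / ln(stimulus)."""
    return math.log(luminance / peak_luminance) / math.log(stimulus)

peak = 100.0  # cd/m^2, assumed peak white for illustration

# Luminance at a 50% stimulus for gamma 2.3 vs gamma 2.4:
y_23 = peak * 0.5 ** 2.3
y_24 = peak * 0.5 ** 2.4

print(round(point_gamma(0.5, y_23, peak), 2))   # recovers 2.3
print(round((y_23 - y_24) / y_23 * 100, 1))     # ~6.7% darker at 50% stimulus
```

So a display that was calibrated to 2.3 with small windows but actually tracks 2.4 with real content renders mid-tones several percent darker than intended, which is visible in shadow detail even though the number 0.1 sounds small.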
You have not yet shown anything that proves there is something "going on" with small-area patterns on that display. It's certainly possible from the measurements you have posted, but the graph looks rather similar to the general ABL response I have measured on a number of plasma displays, which suggests that you're simply seeing ABL in effect rather than the display doing anything untoward.
Most of the plasmas I have measured post high peak numbers with a 1% pattern because they are not power-limited at that level (this is how manufacturers can put high brightness and contrast numbers on spec sheets); output drops off at about 5–10% APL, stays constant to about 25–30% on most models (the KRPs go to 50%), and then starts to drop considerably as APL rises above that.
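That shape can be sketched as a toy piecewise function. The break points follow the description above, but the luminance values are made-up illustrative numbers, not measurements from any real display:

```python
def toy_abl_response(apl_percent):
    """Toy model of the plasma ABL behaviour described above.
    Returns relative peak-white luminance (1.0 = max) at a given APL %.
    Break points follow the text; the output levels are invented."""
    if apl_percent <= 5:           # small windows: not yet power-limited
        return 1.0
    if apl_percent <= 10:          # initial drop-off between ~5% and ~10% APL
        return 1.0 - 0.3 * (apl_percent - 5) / 5
    if apl_percent <= 30:          # stable plateau up to ~25-30% APL
        return 0.7
    # above the plateau, luminance falls considerably with rising APL
    return max(0.2, 0.7 - 0.01 * (apl_percent - 30))

for apl_pct in (1, 8, 20, 50, 100):
    print(apl_pct, round(toy_abl_response(apl_pct), 2))
```

The plateau region (roughly 10–30% here) is the "stable operating range" being argued about: inside it, window luminance does not depend on the surrounding pattern, so that is where a measurement tells you about gamma rather than about the power supply.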
That's why you can't simply look at the results you have posted in isolation; they need to be referenced against the display's ABL response to have any kind of meaning.
Considering that 25% APL is usually right at the upper end of where the ABL tends to "level out" on most plasma displays, it seems like a bad idea to be using patterns that bright.
It seems like you are choosing patterns that give you the results that you want from your display, rather than patterns which are best for taking accurate measurements of the display's response.
Considering that the average picture level is something that is going to fluctuate depending on the content you are viewing, why would you intentionally calibrate your display in a power limited state?
I've often seen numbers in the 20–30% APL range thrown around the forums here as being the average level for most content, but whenever I actually analyse the content I watch, most scenes are not in that range. They are usually either quite a bit lower or quite a bit higher than that (averaging those numbers might end up at 20–30%, though).
Just for fun, I decided to pick three films at random, go through all the chapter markers, and grab the APL of them (72 images):

Michael Clayton: Average, 16 APL. Range, 1–51 APL.
Wall-E: Average, 24 APL. Range, 1–60 APL.
Almost Famous: Average, 20 APL. Range, 1–63 APL.
While they average out at 20 APL, the range is far greater, and most scenes were either below 15% APL (often in the 5–10% range) or above 30% APL. There were very few in the 20–30% range.
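The per-image numbers above can be produced with a very simple calculation. A minimal sketch, assuming APL is taken as the mean pixel value of a grayscale frame as a percentage of peak (real tools usually work on luma and account for video levels, which this toy version ignores):

```python
def apl(frame, peak=255):
    """Average picture level: mean pixel value as a percentage of peak.
    `frame` is a 2D list of grayscale values in the range 0..peak."""
    flat = [p for row in frame for p in row]
    return 100.0 * sum(flat) / (len(flat) * peak)

# Toy 3x4 grayscale frames for illustration:
dark = [[26] * 4] * 3     # uniform low-level frame
bright = [[128] * 4] * 3  # uniform mid-level frame

print(round(apl(dark), 1))    # ~10.2% APL
print(round(apl(bright), 1))  # ~50.2% APL
```

Running something like this over a grab from each chapter marker gives the per-film averages and ranges quoted above.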
While it's obviously a very limited selection, this seems to match most of the content I watch, and if you play games at all, the APL is considerably higher than 25%. I have yet to see any compelling argument for going against the display's native response (how it was designed to operate) and intentionally calibrating it in a power-limited state.