Originally Posted by xrox
I'm sorry but I disagree that individual subfields could ever be noticed by anyone. Temporal integration of light by the human eye prevents this.
You cannot detect flicker directly but you can detect it indirectly via the stroboscopic effect.
I present this scientific evidence from a different industry (the lighting industry): some papers, like this one, use "flicker" to describe anything that turns ON/OFF, even beyond the human ability to detect it directly or indirectly.
This may be different terminology than plasma engineers use. But as you can see from this chart, 10,000 Hz flicker can be indirectly detectable via the stroboscopic effect (page 6 of "Flicker Parameters for Reducing Stroboscopic Effects from Solid-state Lighting Systems") -- which is partially why new electronic ballasts for fluorescent lights have moved to 20,000 Hz, well beyond stroboscopic detection thresholds.
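To illustrate why higher frequencies defeat the stroboscopic effect, here is a rough back-of-the-envelope sketch: during a fast eye movement, the retinal image of a flashing light is displaced between consecutive flashes by roughly eye speed divided by flicker frequency, so higher frequency means smaller phantom-array spacing. The 300 deg/s eye speed is an assumed illustrative number, not a figure from the paper.

```python
# Rough phantom-array (stroboscopic) spacing estimate. During a fast eye
# movement, a flashing light is displaced on the retina between consecutive
# flashes; higher flicker frequency means smaller spacing between the images.
# The 300 deg/s eye speed is an assumed illustrative number, not measured data.

def phantom_array_spacing_arcmin(eye_speed_deg_per_s, flicker_hz):
    """Angular spacing between consecutive flash images, in arcminutes."""
    return (eye_speed_deg_per_s / flicker_hz) * 60.0

for hz in (120, 1000, 10000, 20000):
    spacing = phantom_array_spacing_arcmin(300.0, hz)
    print(f"{hz:>6} Hz flicker -> {spacing:6.2f} arcmin between flash images")
```

Under these assumed numbers, 10,000 Hz still leaves the flash images roughly an arcminute or two apart (potentially resolvable), while 20,000 Hz pushes the spacing below that, which is consistent with the reasoning above.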
Originally Posted by xrox
The banding is due our eyes integrating subfield pulses along the motion vector. It is not subfields revealing themselves or flicker or whatever. It is a motion artifact. Most PDPs have fixed this issue AFAIK.
That's exactly what I'm talking about. The flicker is the motion artifact, and the motion artifact is the flicker revealing itself; the boundary blurs when we go from directly detectable flicker (~75 Hz approximate flicker fusion threshold) into indirect stroboscopic detection. If you strictly define "flicker" as "directly detectable flicker", then you're correct -- nobody can detect flicker *directly* beyond roughly high double-digit Hz (it varies by person; some people have high flicker thresholds, others much lower, and it depends on ambient lighting, the flicker duty cycle, etc.), and then it should no longer be called "flicker" as a result. One may still detect 72 Hz flicker (e.g. CRT) via peripheral vision in a bright room, while the same person can't see 48 Hz flicker in a totally dark room with a different flicker duty cycle while staring directly at the screen (e.g. movie theater). But that's not what I was talking about in my posts. The problem is that we had different definitions of the word "flicker":
(1) "flicker" at the directly detected level; (e.g. staring directly and see flicker directly)
(2) "flicker" at the indirectly detected level; (e.g. banding artifacts, or stroboscopic / phantom array effect, or color-separation artifacts like DLP rainbows or phosphor color shift, etc)
(3) "flicker" at the essentially undetectable level; (e.g. 20,000 Hz flicker) *in some scientific contexts, this is a non-sequitur
(4) "flicker" at the localized/pixel level; (e.g. DLP noise, plasma noise, temporal dithering effects, flickering pixels, flickering areas, etc)
(5) "flicker" at the full screen level; (e.g. averaged/integrated flicker, screen flicker seen from sofa-viewing distance)
The plasma engineering papers may be defining flicker strictly as (1) and (5), and not describing (2) or (3) with the terminology "flicker".
However, some other engineering communities use a different definition of flicker (e.g. (2) and (3) are also called "flicker").
As a blogger, it's always a challenge to convert the confusing scientific material into something the average layman can consume, and that is one of the Blur Busters Blog's jobs.
Originally Posted by xrox
High speed cameras capture snapshots in time during the compilation of subfields into a frame, as well as the leading blue rise of phosphor and trailing green. Please do not confuse this with how the human eye sees a PDP. Totally different.
Here's where I have a gotcha for you:
When you run software that averages the high speed video frames together (stacking/merging frames totalling 1/60sec into one frame), you actually end up with a good, colorful image.
The software integrated the video frames (i.e. averaged the video frames together -- stacked them and produced a resulting merged frame).
The integrating/averaging similarity is quite remarkable.
Originally Posted by xrox
All of my knowledge and information is based on studying PDP literature and patent information over the last 10 years.
I believe you, we're just using different terminology.
Originally Posted by xrox
Just being honest in saying that my interpretation is directly from the literature. Yours is obviously not and is mostly incorrect IMO.
Not if you re-interpret my words. I equated detection of flicker with motion artifacts in certain contexts. However, this is possibly inaccurate based on scientific definitions of flicker that require flicker to be directly detectable (e.g. staring at stationary flicker).
Originally Posted by xrox
I would not say that the flickers average out.
For the purposes of display temporal effects, averaged=integrated
(These words are not the same -- however, for the purposes of this context, when staring at things that are static relative to the human eye -- staring at a static frame, or tracking a specific point in a panning frame (pursuit) -- averaging and integrating produce remarkably similar results, whether via the human eye or via video.)
When you run a computer program that averages together 1/60th second worth of high speed camera frames (16 or 17 high speed 1000fps video frames, of a plasma or DLP) into one image, the averaged image is actually a more accurately colorful image of the whole frame. (Assuming full exposures per frame -- 1/1000sec exposure per frame on 1000fps video -- so totalling all frames together essentially behaves like a ~16/1000sec or ~17/1000sec exposure; something close to 1/60sec worth.) It's the sum of all light captured for each major color component, whether by photoreceptors or by CMOS sensor pixels. Obviously, RGB camera sensor primaries don't perfectly match human eye primaries, but let's ignore that technicality for now -- what we observe is that the colors now dramatically resemble what the human eye saw once you integrate the video frames together. All the temporal effects are then integrated: phosphor decay, phosphor color shift during decay, DLP pixel PWM, temporal dithering, whatever -- it doesn't matter which temporal effects of which display -- if you stack the video frames (integrate them), the result has the same colorfulness the human eye tends to see. You could even stack multiple refreshes (e.g. about 33 high speed 1000fps frames, covering about two 60Hz refresh cycles), or more, to get a stronger and clearer integration (clearer/more stable color, more closely resembling what the human eye saw statically).
The mathematical integration of video frame stacking (adding the frames together and averaging the pixel values) produces results remarkably similar to human vision integration in many ways; i.e. the resulting image is surprisingly similar to what the human eye saw statically. Also, if you're lucky enough to have an expensive ultra-high-framerate 10,000fps camera, you'd stack together about 167 frames to equal one 1/60sec refresh, etc. The eyes certainly work differently, as the eyes do not operate on the basis of discrete frames. However, the averaging/integrating works remarkably similarly over longer timescales. Although cameras are only an approximate facsimile of what the human eye sees, frame stacking (integration) produces a remarkable resemblance to what the human eye saw over a longer timescale (e.g. a full refresh). If you do it yourself, it's rather interesting to observe.
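If you want to try this yourself, here is a minimal sketch of the stacking/averaging step in Python with OpenCV and NumPy; the filename, frame count, and camera settings are illustrative assumptions, not a specific workflow I'm prescribing:

```python
# Minimal sketch of the frame-stacking/averaging step described above, using
# OpenCV + NumPy. Filename and frame count are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("plasma_1000fps.mp4")   # hypothetical 1000 fps high-speed clip
frames_per_refresh = 17                        # ~1/60 s worth of 1/1000 s frames

acc = None
count = 0
while count < frames_per_refresh:
    ok, frame = cap.read()
    if not ok:
        break
    # Accumulate in float so the per-frame subfield pulses sum without clipping.
    acc = frame.astype(np.float64) if acc is None else acc + frame
    count += 1
cap.release()

if count:
    # Averaging the stack approximates a single ~1/60 s exposure: the
    # integrated image recovers the colorfulness of the whole refresh.
    integrated = (acc / count).clip(0, 255).astype(np.uint8)
    cv2.imwrite("integrated_refresh.png", integrated)
```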
There are some minor psychophysics effects for human vision (e.g. the scientific paper "Flicker Induced Colors and Forms"), and of course camera color artifact quirks for cameras. However, these factors are insignificant at DLP/plasma frequencies when averaged/integrated over the timescale of a refresh. They don't meaningfully factor into modern DLP and plasma displays, and integrating video frames of a static image over a timescale produces results remarkably similar to what the human eye integrated when staring statically at a static image. (The same thing also applies during pursuit: staring at a specific point within a panning image; this then begins to additionally integrate the effects of sample-and-hold motion blur -- as you can see from my pursuit camera work and the resulting pursuit camera photographs that remarkably resemble what the human eye saw.)
Originally Posted by xrox
I think you have an understanding but are not quite clear on how PDP actually works.
No disagreement, but I still think you're misinterpreting me, again.
We're using different terminologies/definitions, I think. You may have terminology more suited to the scientific community, some of which I may not be familiar with. However, it's not even always consistent. Certain engineering communities sometimes come up with a different set of terminologies than other engineering communities. (Example: plasma engineers versus lighting engineers -- they define the word "flicker" differently.)
Remember, some in this forum preach to video people; I often preach to computer/game users (I have enormously popular threads on certain gaming forum sites). I should point out, again, that it is far easier to see temporal display defects in moving computer imagery than with video (e.g. sharp boundaries, high-contrast edges, the ability to run framerates matching refresh, no compression artifacts, no compression softness, no video filtering, no softening during fast motion, etc). Video has a natural softness to it, even on 1080p/24 Blu-ray, that essentially gives it anti-aliasing. And for the framerate=Hz requirement, lots of 60fps broadcasts are either 1080i/60 or 720p/60, both of which soften on 1080p displays (1080i due to deinterlacing, and 720p due to scaling) -- and a 20Mbps bitrate isn't enough to preserve perfect 1-pixel-level details at 960 pixels/sec during fast panning in sports video. In video, you may not see 1 pixel of motion blur at 960 pixels/sec even for sports motion. However, one often sits closer when using a computer (even the thousands of computer users on HardForum/OCN using TVs as monitors sit only 3-4 feet away), and in computer graphics (e.g. playing an FPS game), motion blur differences of 1 pixel at 960 pixels/sec can become noticeable to a motion-blur-sensitive person, because the source material has no limiting factor in motion blur. 1 pixel at 960 pixels/sec is equal to 1/960sec of sample-and-hold (see the quick arithmetic below). The ability to detect the motion clarity limitations of a display is higher for computer graphics/video games than for video. Video is full of gradients and details, provides its own natural anti-aliasing, the camera focus is often too soft, 4:2:2 chroma subsampling softens 1-pixel-thick chroma details, and compression artifacts often prevent perfectly sharp non-antialiased edges during fast pans, etc.
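Here's the arithmetic behind that 1-pixel figure as a tiny Python sketch; the rule of thumb (blur roughly equals pan speed times persistence) is the same one described above, and the example numbers are just the ones from this paragraph:

```python
# Quick arithmetic behind the "1 pixel at 960 pixels/sec" figure: on a
# sample-and-hold display, perceived motion blur during eye-tracked panning
# is roughly pan speed multiplied by persistence (how long each frame is lit).

def motion_blur_px(pan_speed_pps, persistence_s):
    """Approximate blur trail length in pixels for eye-tracked motion."""
    return pan_speed_pps * persistence_s

print(motion_blur_px(960, 1 / 960))  # 1.0 px  -> the threshold example above
print(motion_blur_px(960, 1 / 60))   # 16.0 px -> full-persistence 60 Hz hold
```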
So I again point to that example of some papers, like this one, that use "flicker" to describe anything that turns ON/OFF even beyond the human's ability to detect it directly. And when staring at plasma noise in very dark shades at very close distance or under a magnifying glass, you're witnessing the "flickering" of individual pixels illuminating at different temporal offsets relative to adjacent pixels. However, when you step further back, it's all integrated/averaged (the noise blends into a solid shade). Seeing banding (caused by subfields) is simply the indirect detection of the flicker caused by the subfields, at least within the spatial area where the banding appears. Perhaps it should not be called "flicker", but it's still a stroboscopic effect, which is defined as "flicker" in some contexts (although perhaps not by the plasma engineers in the scientific papers you have read).
Again, this may or may not be the correct use of the terminology "flicker", but I think you can certainly understand what I'm getting at. Do a search-and-replace of the word "flicker" (used in human vision contexts) with "various temporal artifacts and side-effects caused by the non-continuity of light at each display point", or a proper substitution for the word "flicker" to describe effects caused by the high-speed flashes of plasma cells. I believe that upon re-reading what I wrote, a lot of it suddenly falls more in sync with what you've already written about. The problem is that the word "flicker" has sometimes been used in some papers to describe anything that turns on/off even at rates beyond direct human detectability (but still detectable stroboscopically), which is possibly the chief cause of your misunderstanding of my posts.
And, please, before replying, go re-read my original posts with that re-defined definition of the word "flicker" and you'll see a lot of what I said is far more in sync with what you already said (even if different communities still have conflicting meanings for the precise definition of "flicker"). Perhaps from now on the terminology needs to be clarified whenever the word "flicker" comes up.
In fact, you posted an image that obviously shows we are talking about the same thing now:
Originally Posted by xrox
As you already explained, but translated into the terminology I'm currently using -- the subfields flicker at increasing intensities (and the flicker frequency is beyond direct human detectability), as averaged/integrated in the black line in this image. The dotted line is really only an approximation of what the human eye actually perceives directly; in reality it varies from human to human. Again, this post is probably not using the same definition of "flicker" as the definition plasma engineers use. The dotted line behaves like a moving average over a fraction of a second. The formula may not be exactly a simple average (e.g. calculus may be involved), but the dotted line is remarkably similar to an integrated moving average, just over a tiny timescale. Also, the flicker doesn't have to go all the way to full-off; fluorescent lamps often "flicker" unseen in the same way (they undulate dim-bright-dim-bright, often beyond human detectability). So again, we're just using different terminology to describe exactly the same thing.
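To make the "moving average" interpretation concrete, here is a toy Python sketch; the subfield weights, pulse widths, and the 1/60-second window are made-up illustrative numbers, not an actual PDP drive scheme:

```python
# Toy sketch of the "dotted line as a moving average" idea: a train of subfield
# light pulses at increasing intensities, smoothed by a sliding window roughly
# one 60 Hz refresh wide. All timings/weights are illustrative assumptions.
import numpy as np

fs = 100_000                                  # simulation rate: 100 kHz time steps
t = np.arange(0, 3 / 60, 1 / fs)              # three 60 Hz refresh periods
light = np.zeros_like(t)

subfield_weights = [1, 2, 4, 8, 16, 32]       # assumed increasing subfield weights
for refresh in range(3):
    for i, w in enumerate(subfield_weights):
        start = refresh / 60 + i * (1 / 60) / len(subfield_weights)
        on = (t >= start) & (t < start + 0.0005)   # 0.5 ms light pulse per subfield
        light[on] = w

# Sliding ~1/60 s average: the smooth curve analogous to the dotted line,
# versus the spiky raw pulse train analogous to the black line.
window = int(fs / 60)
perceived = np.convolve(light, np.ones(window) / window, mode="same")
print(f"peak pulse level: {light.max():.1f}, integrated level: {perceived.mean():.2f}")
```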
You are very welcome to correct my terminology, to make sure that we are "in sync" with the terminology we use. What I already wrote in this thread is correct based on real-world observations, but may be terminologically incorrect from the perspective of the language used by, say, plasma engineers. It only reads incorrectly because of the different terminology languages we are using. It is in Blur Busters Blog's interest to carefully improve terminology. (For example, based on discussions on AVSFORUM and with forum member tgm1024, I have now changed certain terminology, e.g. I stopped using the phrase "pixel persistence" where a different phrase such as "LCD panel's pixel transition speed" is more appropriate -- a phrase that is more independent of whether the backlight is on or off.)
P.S. I still saw banding in certain motion tests on a Panasonic VT50 at 1:1 viewing distance. So not all modern plasmas have 100% fixed this. And computer/game motion can reveal far more artifacts than video motion, due to the 'perfectly sharp' nature of computer graphics (no source-based limitation that 'hides' display temporal artifacts; i.e. whatever is seen is definitely caused by the display and not the source material).