Originally Posted by spacediver
interesting point. Suppose it were impulse driven - would that allow you to display a still image for the amount of time it takes for the impulse to decay?
Correct: a static frame is flashed for a specific impulse length. It can be a full-screen strobe or a sequential strobe (pixel-at-a-time, or scanline-at-a-time); the visual effect is essentially the same to the human eye -- what matters is the length of the flash per pixel per refresh. Basically, for fps=Hz motion (120fps@120Hz), with the eye tracking along the vector of object motion, and milliseconds rounded to the nearest 1ms for mathematical simplicity:
sample-and-hold (8ms @ 120Hz) -- baseline
50% frame impulse (4ms flash) -- 50% less motion blur
25% frame impulse (2ms flash) -- 75% less motion blur
12.5% frame impulse (1ms flash) -- 87.5% less motion blur
This is directly comparable to staying sample-and-hold, but adding more refreshes instead of black periods between refreshes:
sample-and-hold (8ms, 120fps@120Hz) -- baseline
sample-and-hold (4ms, 240fps@240Hz) -- 50% less motion blur
sample-and-hold (2ms, 480fps@480Hz) -- 75% less motion blur
sample-and-hold (1ms, 960fps@960Hz) -- 87.5% less motion blur
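The two tables above can be sketched as a tiny calculation (my own illustrative helper, using the same rounded millisecond figures): perceived eye-tracking motion blur is proportional to how long each frame stays visible (its persistence), whether that persistence is shortened by strobing or by adding real refreshes.

```python
# Sketch of the tables above: blur reduction is set purely by persistence,
# i.e. how long each frame is visible per refresh, regardless of whether
# the shortening comes from strobing or from genuinely higher Hz.

def blur_reduction_pct(persistence_ms, baseline_ms=8.0):
    """Percent reduction in eye-tracking motion blur vs. a baseline persistence."""
    return 100.0 * (1.0 - persistence_ms / baseline_ms)

for p in (4.0, 2.0, 1.0):  # 50%, 25%, 12.5% impulses at 120Hz
    print(f"{p:.0f}ms persistence -> {blur_reduction_pct(p):.1f}% less blur")
```

The same helper reproduces the 1/60sec-baseline numbers from the next paragraph (e.g. `blur_reduction_pct(1.0, 16.0)` gives 93.75).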
Obviously, you see diminishing returns here, but 50% versus 87.5% is still significant. This becomes even more dramatic when using 1/60sec as the baseline: 1/120sec impulses have 50% less motion blur than 1/60sec, while 1/960sec impulses have 93.75% less motion blur than 1/60sec.
One example of a stroboscopic display is LightBoost. In an optimized setting (LightBoost=10%), it uses 1.4ms (1/700sec) stroboscopic flashes, so when you play a videogame fully synchronized (VSYNC ON, 120fps@120Hz), LightBoost has the exact same mathematical motion blur equivalence as a 700fps@700Hz display. One way to visualize this: it is mathematically equivalent to 700fps with lots of black frames inserted in between (120 visible rendered frames, spaced equally apart in time 1/120sec apart, in 1.4ms flashes, with 8.3ms - 1.4ms = 6.9ms of blackness between refreshes). PixPerAn motion tests confirm this equivalence, too -- during PixPerAn motion, the blur trail measures 6x longer in non-LightBoost mode on the very same display. This corresponds exactly to the 8.33ms:1.4ms ratio, which confirms the math is accurate. (This is also the source of my oft-quoted "12x less motion blur than 60Hz" number -- the 1.4ms:16.7ms ratio -- definitively confirmed by motion blur trail measurements in PixPerAn and the upcoming BlurBusters Motion Tests.) Other scientific papers (see Science & References) have already touched upon this eye-tracking-based motion blur equivalence between the increased-Hz method and the black-insertion method (flicker displays like CRT) -- the stroboscopic method of eliminating motion blur without needing the GPU power of insane frame rates / refresh rates.
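A rough numeric check of the LightBoost equivalence claimed above (the millisecond figures are from this post; the "equivalent Hz" framing is just my shorthand): a strobe of length t behaves, blur-wise, like a sample-and-hold display whose frames last t.

```python
# Check: 1.4ms strobes vs. sample-and-hold frame lengths at 120Hz and 60Hz.

strobe_ms = 1.4              # LightBoost=10% strobe length (per the post)
hz120_frame_ms = 1000 / 120  # ~8.33ms persistence at 120Hz sample-and-hold
hz60_frame_ms = 1000 / 60    # ~16.7ms persistence at 60Hz sample-and-hold

equivalent_hz = 1000 / strobe_ms            # ~714, i.e. roughly "700Hz"
blur_vs_120hz = hz120_frame_ms / strobe_ms  # ~6x shorter blur trail
blur_vs_60hz = hz60_frame_ms / strobe_ms    # ~12x shorter blur trail

print(round(equivalent_hz), round(blur_vs_120hz, 1), round(blur_vs_60hz, 1))
```

The ~6x and ~12x ratios match the PixPerAn blur-trail measurements described above.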
The motion blur math above is pretty simple if you have clean strobes (fully bright quickly, fully dark quickly), such as flicker-driven OLED's or LightBoost displays. There are other factors that can fudge the numbers, such as phosphor decay. CRT has a noticeably measurable phosphor decay, so the math is not as simple, but the common baseline measurement for phosphor decay is the time to fade to 90% black: the phosphor on a medium-persistence CRT typically takes 1-2ms to lose 90% of its light after being excited by the electron gun beam. It takes only microseconds to light up, but excitation time does NOT predict eye-tracking-based motion blur; the length of the impulse does.
If one has difficulty understanding, it's worth studying camera photography: a shutter speed twice as fast results in half the motion blur. Likewise, flash photography bypasses shutter speed limitations, and the amount of motion blur in the photograph is directly proportional to the length of illumination from the flash (if there's no external light source). Eye-tracking-based motion blur on stroboscopic displays has a surprisingly close equivalence, and becomes equally easy to predict (especially when "clean" strobes are used -- on immediately, static image for a certain amount of time, then off immediately). Obviously, the faster the shutter speed, the faster the motion needs to be in order to create visible motion blur. The same holds true for stroboscopic displays. Eventually, eye-tracking-based motion blur is so completely eliminated that motion would need to be faster than human eyes can reliably track before it could create visible blur. For close viewing distances with sharp-resolution material (e.g. videogames), the sweet spot is approximately a 1ms frame length (e.g. a 1ms strobe flash). This can vary quite a lot from human to human, but this number covers the majority of people.
Barring that, if you've got clean strobes of a static frame (on-then-off), the motion blur mathematics is extremely simple: eye-tracking-based motion blur is directly proportional to the impulse length. The blur trail is easy to calculate: the impulse length percentage is the percentage of the original motion blur trail. Basically, if you've got 10mm of moving-edge blur (caused by eye-tracking motion) during constant motion of an on-screen object, shortening the impulse to 50% of its length automatically shortens the motion blur trail by 50% (e.g. 5mm of moving-edge blur). I'm of course excluding other variables such as pixel persistence, eye tracking inaccuracy/limitations (which occur when moving objects become too fast to track), phosphor decay, and other factors that fudge this math. Fortunately, LightBoost displays have proven remarkably efficient (TFTCentral said LightBoost outperforms all scanning backlights they have ever tested, and LightBoost completely bypasses pixel persistence), and the mathematical equation holds up amazingly well on a LightBoost display. This math should also hold up very well on flicker-driven OLED displays (OLED pixels turn on and off nearly instantly!), as long as they're full-screen-impulse driven or sequential-impulse driven (edge-to-edge scan, such as top to bottom). I am rooting for flicker-driven modes on computer-based OLED displays, so we can get the stroboscopic motion-blur-eliminating effect that CRT's have long had. Not everyone likes flicker, but it should be easy to enable/disable such a mode.
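The proportionality rule above is trivial to express (a minimal sketch, my own helper, assuming clean on/off strobes and accurate eye tracking -- i.e. ignoring the fudge factors just listed):

```python
# Blur trail scales linearly with impulse length (clean-strobe assumption).

def blur_trail_mm(baseline_blur_mm, impulse_fraction):
    """Blur trail after shortening the impulse to a fraction of its original length."""
    return baseline_blur_mm * impulse_fraction

# The 10mm example from the paragraph above, with the impulse halved:
print(blur_trail_mm(10.0, 0.5))  # prints 5.0
```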
When I program visual stimuli in psychtoolbox for matlab, I can only get the stimulus to display for a single frame, which on our new vpixx display is 8.33 ms.
Are you using scanning backlight mode on your Viewpixx? If you turn on the scanning backlight mode, its response time is only 1ms (according to the manufacturer), but if this is a sequential scanning backlight, it would take about 8ms to flash the LED's sequentially from the top edge to the bottom edge.
Motion blur will be proportional to the illumination length of a single point of the display, so I'd expect the Viewpixx display to have a motion blur trail of approximately 1/8th the frame step (1ms / 8.33ms) -- basically 8 times less motion blur in scanning-backlight mode than in non-scanning-backlight mode (unless the pixel persistence streaks excessively between refreshes).
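Applying the same proportionality to this case (the 1ms figure is the manufacturer's claim quoted above; the object speed is a made-up example, not a measurement):

```python
# Expected blur trail, scanning backlight on vs. off, at 120Hz.

frame_ms = 8.33          # one 120Hz frame (the 8.33ms per-frame figure above)
persistence_ms = 1.0     # illumination per point with scanning backlight on
speed_px_per_s = 960     # hypothetical on-screen object speed

blur_on_px = speed_px_per_s * persistence_ms / 1000   # scanning backlight ON
blur_off_px = speed_px_per_s * frame_ms / 1000        # sample-and-hold
print(round(blur_on_px, 2), round(blur_off_px, 2), round(blur_off_px / blur_on_px, 1))
```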
How would an impulse driven display interact with this. I'm assuming the temporal width of the impulse function is on the order of microseconds or perhaps 1-2 ms.
Although OLED can turn on nearly instantly (microsecond league), turning the OLED off almost immediately afterward would lead to a very dark picture because of the long black period between refreshes (since you need to stick to one strobe per frame/refresh for proper motion blur elimination). So we have a tradeoff between impulse length and brightness: with shorter impulses, the picture is too dark; with longer impulses, there can be more motion blur. So this is a technological challenge.
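The brightness tradeoff can be quantified (my own framing, with made-up example numbers): perceived average brightness is the strobe's instantaneous brightness times its duty cycle, so halving the impulse length demands roughly doubling the panel's peak output.

```python
# Peak brightness a strobed panel must hit to match a target average brightness.

def required_peak_nits(target_avg_nits, impulse_ms, frame_ms):
    """Instantaneous brightness needed during the strobe for a given average."""
    duty_cycle = impulse_ms / frame_ms
    return target_avg_nits / duty_cycle

# 120Hz (8.33ms frames), aiming for a 100-nit average image with a 1ms strobe:
print(round(required_peak_nits(100, 1.0, 8.33)))
```

At a 1ms:8.33ms duty cycle the panel needs roughly 8x the brightness of a sample-and-hold display showing the same image, which is exactly why ultra-short impulses become uneconomical.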
There are diminishing returns for shorter impulses. Eventually it becomes uneconomical (you need an insane amount of brightness for ultra-short flashes to prevent a dim image). The technological sweet spot is near 1ms; this represents the end of diminishing returns for most of the human population. Even a portion of gamers is unable to see the benefits of LightBoost (1.4ms = 1/700sec stroboscopic flashes of refreshes), while others see minor improvement (not as big as 60Hz-vs-120Hz), while yet others see stunningly major improvement (far bigger than the difference between 60Hz and 120Hz -- see the testimonials).
This means either a stroboscopic display of 1ms impulses (easy to do with a CRT, OLED, or LightBoost-like display), or the same 1ms frame length on a sample-and-hold display capable of 1000fps@1000Hz (difficult with present technology). This 1ms sweet spot is pretty common for CRT's, but very rare for LCD's (except a few high-end HDTV's). Unfortunately, even Panasonic plasmas with 2500 Hz Focused Field Drive are hamstrung by plasma phosphor decay limitations (~5ms for red/green phosphor). Fortunately, OLED's should have no problem reaching an impulse length of about 1ms, provided there's enough brightness in the impulses to compensate for the long black periods between refreshes (a 1ms:8.33ms ratio at 120Hz, or a 1ms:16.7ms ratio at 60Hz). OLED pixels are near instant-reacting, so they are very impulse-friendly. I'm personally looking forward to the day OLED panels achieve enough brightness to make short impulses possible.
At the very end of the day, gaining CRT-quality motion on a flicker-free sample-and-hold display would require ~1000fps@1000Hz (to gain equivalence to a CRT phosphor of ~1ms decay) in order to eliminate flicker AND eliminate interpolation AND eliminate motion blur simultaneously. And taking advantage of that requires a GPU capable of 1000fps. Neither the display (native 1000Hz refresh) nor the GPU (1000fps capable) is possible today; it's currently achievable only via interpolation, as today's GPU's are not able to pull that off natively. So we'll need several decades' worth of technological progress to finally merge the benefits of CRT-quality motion clarity with the benefits of a completely flicker-free display at completely native refresh rates. As a compromise, I'll take flicker, as I'm motion-blur-sensitive but not very flicker-sensitive (ala the CRT-using population, the target audience of LightBoost, which uses flicker to eliminate motion blur). For the next few decades, flicker-sensitive/motion-blur-sensitive people will have to put up with flicker OR interpolation OR motion blur (or more than one of the above). That said, OLED is probably the prime candidate technology for the world's first commercialized 1000fps@1000Hz native-refresh-rate display (albeit probably not this decade).
P.S. Are you aware of Strobemaster's work in converting LightBoost displays into tachistoscopes? He does that by hooking an external circuit into the LightBoost strobe circuit, which allows full external control of the impulse length of the LightBoost strobes.
P.P.S. Future OLED displays that are active-shutter-3D friendly will probably be adequately good motion-blur-reducing displays, although they would probably use a longer impulse length to compensate for OLED's traditional brightness difficulty. As a compromise, OLED's can also technically use dynamic impulse lengths (e.g. shorter impulses for dark pixels, longer impulses for bright pixels), which would lead to less motion blur during dark scenes and more motion blur in bright scenes. (This bright-ghosting effect occurs on CRT as well; bright images have a longer phosphor decay.)