LCD motion blur: Eye-tracking now dominant cause of motion blur (not pixel transition/GtG)
Artwood 03:33 AM 05-01-2013
Could Lightboost be used with refresh rates of 240 or 480? I have heard that at a refresh rate of 480 a lot of motion blur is reduced. At that rate could Lightboost help anymore?

It seems to me that if you could reduce motion blur, hockey would look much better on LCD.

What are the chances that Lightboost would be implemented on 4K LCDs?

Chronoptimist 05:46 AM 05-01-2013
Quote:
Originally Posted by Artwood View Post

Could Lightboost be used with refresh rates of 240 or 480? I have heard that at a refresh rate of 480 a lot of motion blur is reduced. At that rate could Lightboost help anymore?

It seems to me that if you could reduce motion blur, hockey would look much better on LCD.

What are the chances that Lightboost would be implemented on 4K LCDs?
As you increase the refresh rate, you are reducing the visibility of flicker - increasing persistence on the retina.
So you would be reducing the effectiveness of Lightboost unless you were also rendering the content at 240/480fps - and PCs are already struggling to render games at 120fps with everything turned up.

But this is exactly why LCDs are currently using a combination of backlight scanning and interpolation to increase the framerate - it provides similar motion handling improvements without all the flicker.
Motion interpolation is nowhere near as effective as having content with a natively higher framerate though, and is unsuitable for gaming as it adds too much latency.
tgm1024 01:08 PM 05-01-2013
Quote:
Originally Posted by Mark Rejhon View Post

As we already now know, some OLEDs have motion blur because of sample-and-hold. This is a problem easily solved, because OLEDs switch very fast.

As long as they're bright enough, as you once pointed out.

Also if you increase strobe too much without increasing interpolation, you're right back where you started from because you're getting closer and closer to the sample-and-hold smear effect.

Depending on the pulse:gap ratio of course.
Mark Rejhon 11:03 PM 05-01-2013
Quote:
Originally Posted by Artwood View Post

Could Lightboost be used with refresh rates of 240 or 480?
Yes, you could. But you'd also need to jack up the framerate too, as Chronoptimist said (he's correct). That's hard to do without using interpolation.
Quote:
I have heard that at a refresh rate of 480 a lot of motion blur is reduced. At that rate could Lightboost help anymore?
Hz is not the only method to eliminate motion blur. CRT 60fps @ 60Hz still has less motion blur than many "960Hz" televisions such as the Elite LCD HDTV (or Samsung CMR 960, Sony Motionflow XR 960, or Panasonic 2500Hz Focused Field Drive).

Motion blur is dictated by how long an individual static frame is displayed for (the frame sample length). Two methods exist: extra Hz, or extra black periods between refreshes. Instead of raising Hz, you can reduce motion blur simply by adding extra black periods between refreshes. That's what LightBoost does -- flicker -- adding black between frames -- the stroboscopic elimination of eye-tracking-based motion blur.

Also, to eliminate the other motion blur weak links:
1. One flicker per frame.
Otherwise, repeat refreshes create a sample-and-hold effect. This is exactly why many cheap 600Hz plasmas still have lots of motion blur.
2. Frame rate matching refresh rate
Otherwise, repeat refreshes create a sample-and-hold effect. This is often why interpolation is necessary for high-Hz displays (even good plasmas such as Panasonic plasmas have to use interpolation to reduce plasma motion blur further).
3. No source-based motion blur
No soft focus, no long shutter camera, no overcompression, etc. You can't undo source based motion blur.

_________

Now, back to LightBoost, which is found in recent 120Hz-native computer monitors (see LightBoost testimonials and media coverage).

LightBoost is extremely efficient (TFTCentral said it outperforms all scanning backlights they have ever tested). The 1.4ms strobes that an optimized LightBoost display uses -- that's 1/700sec strobes -- create the same motion blur as a 700fps@700Hz sample-and-hold LCD, if you're watching fps=Hz content (120fps@120Hz) such as playing video games using a Geforce Titan. Another way to visualize this: it is equivalent to 700Hz with lots of black frame insertion (120 visible 1.4ms frames, each followed by 6.9ms of black). The stroboscopic effect reduces eye-tracking-based motion blur to the point that it's equivalent to 700fps@700Hz, but without needing the insane framerates. 120Hz is possible to do without interpolation from a computer, which is what makes LightBoost so great -- it's the purest CRT-style motion you can get on a non-CRT display (again, see the "it's like a CRT" testimonials; yes, crappy LCD color, but the motion is otherwise CRT-perfect on a LightBoost LCD).

This is because LightBoost at optimized settings (10%) uses 1.4 millisecond strobes (1/700th of a second), and the motion blur benchmarks far exceed plasma quality (e.g. less motion blur than a Panasonic VT50 running 2500Hz Focused Field Drive!). Obviously, the Panasonic has better color, but LightBoost has less motion blur than all known commercial plasma and LCD displays (including Elite/Samsung CMR 960/Sony Motionflow XR 960), assuming you're viewing 120fps content from a 120Hz computer.

LightBoost has 12x less motion blur than a 60Hz LCD. That is, where you had 1 inch of motion blur at 60Hz, you now only get 1/12th of an inch of motion blur for the same speed motion on the same size display. That's a full order of magnitude of motion blur reduction. This corresponds amazingly accurately to sample length: the ratio of 1.4ms:16.7ms is 12 -- the same number as the measured reduction in the motion blur trail. 1.4ms is the strobe length, while 16.7ms is the length of a 1/60sec sample-and-hold refresh. So the LightBoost stroboscopic backlight is apparently one of the world's most efficient motion-blur-eliminating backlight technologies, despite having been originally designed for nVidia 3D Vision.

I wish more HDTVs would utilize this technology. One technology that already exists is Sony's Motionflow Impulse setting, which is a stroboscopic backlight (that doesn't use interpolation), much like LightBoost. Unfortunately, it flickers at 60 Hz, which is annoying to a lot of people.

LightBoost eliminates most perceived flicker (to most human eyes) by virtue of running at 120 Hz. Unfortunately, the only reliable 120 Hz content is from a computer -- e.g. a Geforce Titan running video games at 120fps @ 120Hz. I have a Geforce GTX680 and reliably run my older Source-engine games at 120fps @ 120Hz, as well as certain newer games such as Borderlands 2 at 100fps @ 100Hz (at slightly reduced view render distance). LightBoost is artificially vendor-locked to operate only within a 100Hz-to-120Hz range (due to its original 3D Vision purpose).
Mark Rejhon 12:15 AM 05-02-2013
I am going to cross-post some great posts I've made on HardForum, highly relevant to this thread:
Quote:
Quote:
Originally Posted by spacediver
interesting point. Suppose it were impulse driven - would that allow you to display a still image for the amount of time it takes for the impulse to decay?
Correct -- a static frame flashed for a specific impulse length. It can be a full-screen strobe, or a sequential strobe (pixel-at-a-time, or scanline); the visual effect is essentially the same to the human eye: what matters is the length of the flash per pixel per refresh. Basically, for fps=Hz motion (120fps@120Hz), eyes tracking along the vector of object motion, milliseconds rounded off to the nearest 1ms for mathematical simplicity:

sample-and-hold (8ms @ 120Hz) -- baseline
50% frame impulse (4ms flash) -- 50% less motion blur
25% frame impulse (2ms flash) -- 75% less motion blur
12.5% frame impulse (1ms flash) -- 87.5% less motion blur

This is directly comparable to staying sample-and-hold and adding more refreshes rather than black periods between refreshes:

sample-and-hold (8ms, 120fps@120Hz) -- baseline
sample-and-hold (4ms, 240fps@240Hz) -- 50% less motion blur
sample-and-hold (2ms, 480fps@480Hz) -- 75% less motion blur
sample-and-hold (1ms, 960fps@960Hz) -- 87.5% less motion blur

Obviously, you see diminishing returns here, but 50% versus 87.5% is still significant. This becomes even more dramatic when using 1/60sec as the baseline. This means 1/120sec impulses have 50% less motion blur than 1/60sec, while 1/960sec impulses have 93.75% less motion blur than 1/60sec.
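
If you want to play with these numbers, here's a quick Python sketch of this simple proportional model (illustrative only -- it assumes clean strobes and perfect eye tracking, and ignores phosphor decay and pixel response; 1/120sec is rounded to 8ms as in the lists above):

Code:
# Toy persistence model: perceived motion blur is proportional to how long
# each static frame stays visible per refresh.

def blur_reduction_pct(persistence_ms, baseline_ms=8.0):
    """Percent less motion blur versus an 8ms sample-and-hold baseline."""
    return (1 - persistence_ms / baseline_ms) * 100

def equivalent_hz(persistence_ms):
    # sample-and-hold refresh rate with the same per-frame visibility time
    return 1000.0 / persistence_ms

for p in (8.0, 4.0, 2.0, 1.0):
    print(f"{p:.0f}ms impulse: {blur_reduction_pct(p):.1f}% less blur, "
          f"~{equivalent_hz(p):.0f}fps@{equivalent_hz(p):.0f}Hz equivalent")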

One example of a stroboscopic display is LightBoost. At its optimized setting (LightBoost=10%), it uses 1.4ms (1/700sec) stroboscopic flashes, so when you play a videogame fully synchronized (VSYNC ON, 120fps@120Hz), LightBoost has the exact same mathematical motion blur equivalence as a 700fps@700Hz display. One way to visualize this: it is mathematically equivalent to 700fps with lots of black frames inserted in between (120 visible rendered frames, spaced equally 1/120sec apart, in 1.4ms flashes with 8.3ms - 1.4ms = 6.9ms of blackness between refreshes). PixPerAn motion tests confirm this equivalence, too -- during PixPerAn motion, there's a blur trail that measures 6x longer in non-LightBoost mode on the very same display. This corresponds exactly to the 8.33ms:1.4ms ratio, which confirms the math is accurate. (This is also where my oft-quoted number -- 12x less motion blur than 60Hz -- comes from: the 1.4ms:16.7ms ratio, definitively confirmed in motion blur trail measurements in PixPerAn and the upcoming BlurBusters Motion Tests.) Other scientific papers (Science & References) have already touched upon this eye-tracking-based motion blur equivalence between the increased-Hz method and the black-insertion method (flicker displays like CRT) -- the stroboscopic method of eliminating motion blur without needing the GPU power of insane frame rates / refresh rates.

The motion blur math above is pretty simple if you have clean strobes (full bright quickly, full dark quickly), such as flicker-driven OLEDs or LightBoost displays. There are other factors that can fudge the numbers, such as phosphor decay. CRT has measurable phosphor decay, so the math is not as simple; the usual baseline measurement for phosphor decay is decay to 90% black (a medium-persistence CRT phosphor typically takes 1-2ms to lose 90% of its light after being excited by the electron beam -- it takes only microseconds to light up, but excitation time does NOT predict eye-tracking-based motion blur; the length of the impulse does).

If one has difficulty understanding, it's worth studying camera photography. A shutter speed twice as fast results in half the motion blur; likewise, flash photography bypasses shutter speed limitations, and the amount of motion blur in the photograph is directly proportional to the length of illumination from the flash (if there's no external light source). Eye-tracking-based motion blur on stroboscopic displays has a surprisingly close equivalence, and the predicted blur is equally easy to compute (especially when "clean" strobes are used -- on immediately, a static image for a certain amount of time, then off immediately). Obviously, the faster the shutter speed, the faster the motion needs to be in order to create visible motion blur. The same holds true for stroboscopic displays. Eventually, eye-tracking-based motion blur is so completely eliminated that motion would need to be too fast for the human eye to reliably track before blur reappears. For close viewing distances with sharp-resolution material (e.g. videogames), the sweet spot is approximately a 1ms frame length (e.g. a 1ms strobe flash). This can vary quite a lot depending on the human, but this number covers the majority of humans.

Barring that, if you've got clean strobes of a static frame (on-then-off), the motion blur mathematics is extremely simple: eye-tracking-based motion blur is directly proportional to the impulse length. The blur trail is easy to calculate: the impulse length percentage is the percentage of the original motion blur trail. Basically, if you've got 10mm of moving-edge blur (caused by eye-tracking motion) during constant motion of a moving object on-screen, shortening the impulse to 50% of its length automatically shortens the motion blur trail by 50% (e.g. 5mm of moving-edge blur). I'm, of course, excluding other variables such as pixel persistence, eye tracking inaccuracy/limitations (which occur when moving objects become too fast to track), phosphor decay, and other factors that fudge this math. Fortunately, LightBoost displays have proven remarkably efficient (TFTCentral said LightBoost outperforms all scanning backlights they have ever tested, and LightBoost completely bypasses pixel persistence), and the mathematical equation holds up amazingly well on a LightBoost display. This math should also hold up very well on flicker-driven OLED displays (OLED pixels turn on and off nearly instantly!), as long as they're full-screen-impulse driven or sequential-impulse driven (edge-to-edge scan, such as top to bottom). I am rooting for flicker-driven modes on computer-based OLED displays, so we can get the stroboscopic motion-blur-eliminating effect that CRTs have long had. Not everyone likes flicker, but it should be easy to enable/disable such a mode.
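
To put concrete numbers on that proportionality, here's a tiny Python sketch (same simplified model -- clean strobes, accurate eye tracking; the speeds are just example figures):

Code:
# Moving-edge blur width = eye-tracking speed x impulse (visibility) length.

def blur_width_px(speed_px_per_s, persistence_s):
    return speed_px_per_s * persistence_s

speed = 960  # pixels/second, roughly half a screen width per second
print(blur_width_px(speed, 1 / 60))  # ~16 px  (60Hz sample-and-hold)
print(blur_width_px(speed, 0.0014))  # ~1.3 px (1.4ms LightBoost strobe)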
Quote:
When I program visual stimuli in psychtoolbox for matlab, I can only get the stimulus to display for a single frame, which on our new vpixx display is 8.33 ms.
Are you using scanning backlight mode on your Viewpixx? If you turn on the scanning backlight mode, its response time is only 1ms (according to the manufacturer), but if this is a sequential scanning backlight, it would take about 8ms to flash the LEDs sequentially from the top edge to the bottom edge.

Motion blur will be proportional to the illumination length of a single point of the display, so I'd expect the Viewpixx display to have a motion blur trail of approximately 1/8th the frame step (1ms / 8.33ms) -- basically 8 times less motion blur in scanning backlight mode than in non-scanning-backlight mode (unless the pixel persistence is streaking excessively between refreshes).
Quote:
How would an impulse driven display interact with this. I'm assuming the temporal width of the impulse function is on the order of microseconds or perhaps 1-2 ms.
Although OLED can turn on nearly instantly (microsecond league), turning the OLED off almost immediately afterwards would lead to a very dark picture, because of the long black period between refreshes (you need to stick to one strobe per frame (refresh) for proper motion blur elimination). So we have a tradeoff between impulse length and brightness: with shorter impulses, the picture is too dark; with longer impulses, there can be more motion blur. So this is a technological challenge.

There are diminishing returns for shorter impulses. Eventually it becomes uneconomical (you need an insane amount of brightness for ultra-short flashes, to prevent a dim image). The technological sweet spot is near 1ms. This represents the end of the diminishing returns for most of the human population. Even so, a portion of gamers are unable to see the benefits of LightBoost (1.4ms = 1/700sec stroboscopic flashes of refreshes), while others see minor improvement (not as big as 60Hz-vs-120Hz), while yet others see stunningly major improvement (far bigger than the difference between 60Hz-vs-120Hz; see testimonials).

This means either a stroboscopic display with 1ms impulses (easy to do with a CRT, OLED, or LightBoost-like display), or the same 1ms frame length on a sample-and-hold display capable of 1000fps@1000Hz (difficult to do with present technology). This 1ms sweet spot is pretty common for CRTs, but very rare for LCDs (except a few high-end HDTVs). Unfortunately, even Panasonic plasma with 2500Hz Focused Field Drive is hamstrung by plasma phosphor decay limitations (~5ms for red/green phosphor). Fortunately, OLEDs should have no problem reaching an impulse length of about 1ms, provided there's enough brightness in the impulses to compensate for the long black periods between refreshes (a 1ms:8.33ms ratio at 120Hz, or a 1ms:16.7ms ratio at 60Hz). OLED pixels are near-instant-reacting, so they are very impulse-friendly. I'm personally looking forward to the day that OLED panels achieve enough brightness to make short impulses possible.
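
The brightness penalty is easy to quantify. A rough Python sketch of the simple averaging model (the 120-nit target is just an example number, not a spec):

Code:
# To keep the same average luminance, instantaneous brightness must scale
# inversely with the strobe duty cycle (strobe length / frame interval).

def required_instant_nits(target_avg_nits, strobe_ms, frame_ms):
    duty_cycle = strobe_ms / frame_ms
    return target_avg_nits / duty_cycle

print(required_instant_nits(120, 1.0, 8.33))  # ~1000 nits (1ms strobe @ 120Hz)
print(required_instant_nits(120, 1.0, 16.7))  # ~2000 nits (1ms strobe @ 60Hz)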

At the very end of the day, gaining CRT-quality motion on a flicker-free sample-and-hold display would require ~1000fps@1000Hz (for equivalence to a CRT phosphor of ~1ms decay) -- necessary to eliminate flicker AND eliminate interpolation AND eliminate motion blur (simultaneously). And to take advantage of that requires a GPU capable of 1000fps. Neither the display (native 1000Hz refresh) nor the GPU (1000fps capable) is possible today; it's currently only achievable via interpolation, as today's GPUs are not able to pull that off natively. So we'll need several decades' worth of technological progress to finally merge the benefits of CRT-quality motion clarity with the benefits of a completely flicker-free display at completely native refresh rates. As a compromise, I'll take flicker, as I'm motion-blur-sensitive but not very flicker-sensitive (like the CRT-using population, the target audience of LightBoost, which uses flicker to eliminate motion blur). For the next few decades, flicker-sensitive/motion-blur-sensitive people will have to put up with flicker OR interpolation OR motion blur (or more than one of the above). That said, OLED is probably the prime candidate technology to produce the world's first commercialized 1000fps@1000Hz native-refresh-rate display (albeit probably not this decade).

P.S. Are you aware of Strobemaster's work in converting LightBoost displays into tachistoscopes? He does that by hooking an external circuit to the LightBoost strobe circuit. This allows full external control of the impulse length of the LightBoost strobes.
P.P.S. Future OLED displays that are active-shutter-3D friendly will probably be adequately good motion-blur-reducing displays, although they would probably have a longer impulse length to compensate for OLED's traditional brightness difficulty. As a compromise, OLEDs can also technically use dynamic impulse lengths (e.g. shorter impulses for dark pixels, longer impulses for bright pixels), which would lead to less motion blur during dark scenes and more motion blur in bright scenes. (This bright-ghosting effect occurs on CRT as well; bright images have a longer phosphor decay.)

--and--
Quote:
Quote:
Originally Posted by spacediver
thanks for the fantastic and readable post.
Yes we'll be using the scanning backlight mode (my supervisor just finished calibrating the display today so hopefully I'll be able to run my experiments on it very soon). I'm assuming a scanning backlight implies that it's a sequential strobe (emulating a crt scanline)?
Yes, albeit likely in a coarse manner (a full row of LEDs illuminating a cross-section of the LCD). Though I've seen some manufacturers interchangeably call scanning backlights impulse backlights, and vice-versa, so it would be nice to have real confirmation.

If possible, get an inexpensive high-speed camera (such as a $300 Casio Exilim EX-FC200S or EX-ZR200; search eBay for imports from Japan) and point it at the Viewpixx in both the 480fps and 1000fps recording modes. Even a cheap high-speed camera should be able to determine your Viewpixx's approximate scanning backlight sequence. This high-speed camera is a very worthy verification check. I used this camera for my high-speed video of LightBoost.
Quote:
I wasn't aware of Marc Repnow's work - this is very interesting. For my current area of study, being able to control the onset and offset of stimuli with sub-10 ms timescales would be useful (notwithstanding Bloch's Law).
Yes, Bloch's Law makes sense. Once a strobe flash is short enough, strobes look exactly the same to the eye. A 1ms strobe flash -- versus a 1 microsecond strobe flash that is 1000 times brighter -- looks the same -- it's the same number of photons hitting the eye. That said, this is for static impulses.

When we're talking about tracking motion, motion definitely makes things interesting. Take the scenario of a flying bullet going through an apple -- the famous high-speed photographs. A bullet becomes motion-blurred with a millisecond strobe light, but you get perfect strobe captures with a microsecond flash. Even naked human eyes can glimpse a speeding bullet floating in mid-air, if the strobe light lasts a microsecond, timed exactly at the moment the bullet zooms directly in front of the eyes -- just like high-speed flash photography captures a speeding bullet, if the flash is fast enough. A 1 millisecond flash is not fast enough, but a 1 microsecond flash can be fast enough for naked human eyes to see a speeding bullet. In this situation, what the human eye sees is the scene stroboscopically flashing briefly. The 1ms flash looks the same length to human eyes as the 1 microsecond flash (if the 1 microsecond flash is 1000 times brighter, due to Bloch's Law). But the motion (the speeding bullet) is frozen better by the shorter strobe, and thus the bullet is seen by the human eye as a momentary flash of a scene containing a bullet floating in mid-air, even though it was speeding along supersonically. This applies to naked human eyes staring at the scene, not just photo film. Likewise, shortening stroboscopic flash lengths on a strobe display (at one short strobe per refresh/frame) has a similar effect of making motion clearer while eyes are tracking (in this case, instead of reducing photographic blur caused by external motion, you're reducing eye-tracking-caused motion blur).
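
Bloch's Law itself is easy to express. A minimal Python sketch (rough model; it only holds for flashes shorter than the eye's integration time, and the numbers are purely illustrative):

Code:
# Bloch's Law: perceived brightness tracks the intensity x duration product,
# so a shorter flash needs proportionally more intensity to look as bright.

def equal_brightness_intensity(ref_intensity, ref_duration_s, new_duration_s):
    return ref_intensity * ref_duration_s / new_duration_s

# a 1 microsecond flash must be 1000x more intense than a 1ms flash
print(equal_brightness_intensity(1.0, 1e-3, 1e-6))  # 1000.0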

For strobe displays, for maximum motion clarity, it is very important to have the frame rate exactly matching the refresh rate, for the most perfect possible motion. This is because stutters are essentially repeat refreshes, which create a longer sample (sample-and-hold motion blur). It also causes discontinuities in eye-tracking synchronization with the moving object on the screen. Any deviation from the ideal line is perceived as judder/stutter/motion blur (at very high framerates, ultra-high-speed repeat refreshes during fast motion often simply blend into what looks like motion blur; e.g. 60fps@120Hz looks like a double-edge motion blur. In fact, stutter-created motion blur is still visible to the human eye even in the neighborhood of 120fps@240Hz, precisely because of this discontinuity, given sufficiently fast motion -- at least ~240 pixels per second -- and when you can see 1 pixel of motion blur in high-resolution moving objects).

fps-vs-hz-small.png

Mathematically, this is the reason why fps=Hz benefits strobe displays far more than sample-and-hold displays. Going from 30fps@60Hz to 60fps@60Hz on an LCD only reduces motion blur by 50%. However, going from 30fps@60Hz to 60fps@60Hz on a CRT has a far more dramatic effect (60fps@60Hz looks many times clearer on a CRT than 30fps@60Hz). Likewise for LightBoost: 60fps@120Hz (so-so) versus 120fps@120Hz (dramatic). (LightBoost is hardware-restricted to only function at 100-120Hz.) The eyes stay more perfectly in sync with the moving object, and it becomes possible to eliminate measurable eye-tracking-based motion blur. For strobe displays, fps=Hz is a dramatic improvement for people who have good eye-tracking ability.
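
Here's a toy Python model of why the strobe case improves so much more (my own simplification: perfect eye tracking, clean strobes, and each frame simply repeated Hz/fps times when fps is below Hz):

Code:
# Approximate perceived smear width (pixels) while eye-tracking at speed v.

def hold_blur_px(v, fps):
    # sample-and-hold: each unique frame smears across the retina for 1/fps sec
    return v / fps

def strobe_blur_px(v, fps, refresh_hz, strobe_s):
    repeats = refresh_hz // fps  # assumes fps divides evenly into Hz
    # repeated flashes of one frame land v/Hz apart on the retina,
    # plus each flash itself smears v * strobe length
    return v * ((repeats - 1) / refresh_hz + strobe_s)

v = 960  # px/s
print(hold_blur_px(v, 30), hold_blur_px(v, 60))  # 32.0 16.0 -> only 50% better
print(strobe_blur_px(v, 30, 60, 0.001))          # ~17 px (double-image spread)
print(strobe_blur_px(v, 60, 60, 0.001))          # ~1 px  -> dramatic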

Likewise, impulse-driven OLED should behave the same -- dramatic improvement in motion clarity -- assuming the impulse lengths are made sufficiently short (~2ms).
Quote:
In case you're curious, the colors vary widely on the vpixx display depending on viewing angle. Also, based on a conversation I had today with my supervisor, it seems that the contrast ratio (not ANSI) is about 1000:1 - he measured full brightness at 99 cd/m^2 and full black at 0.1 cd/m^2
Interesting to know. I wonder what technology the Viewpixx LCD is.
Quote:
Quote:
Originally Posted by spacediver
that bullet example is an excellent thought experiment to illustrate these concepts!

I'll look into the high speed camera - it's something I've thought of before, and I could use it for my athletic pursuits also (there is critical information involved in tennis strokes that is revealed at those timescales).

Let me know if you have any specific questions about the Viewpixx display. I can bring a list of them to ask him (I'm flying out on the 9th of May).
Thanks for the compliment! Presently I'm most interested in seeing how the display looks under a high speed camera -- to find out its scanning backlight pattern, and how it meets the manufacturer's 1ms claim.

tgm1024 09:14 AM 05-02-2013
Mark,

This graph I'm not sure of:



Why does the eye move off the track? If it were moving off the track, then it would be mitigating the blur, not worsening it (No?)

Besides, the eye can't follow the breaks in a path that quickly, can it?
Mark Rejhon 12:07 PM 05-02-2013
Quote:
Originally Posted by tgm1024 View Post

Mark,

This graph I'm not sure of:



Why does the eye move off the track? If it were moving off the track, then it would be mitigating the blur, not worsening it (No?)

Besides, the eye can't follow the breaks in a path that quickly, can it?
You got things mixed up:
This chart is for strobe displays -- you can see that my graphic is a variation of the Microsoft Research graphics.
This isn't display-based motion blur.
This is perceived motion blur created by the eye-tracking-motion itself.

This is actually easier to understand if you've already read
Why Do Some OLEDs Have Motion Blur (sample-and-hold motion blur -- motion blur caused by eye tracking -- even if the display has instant pixel response) and its scientific references at the bottom. It is useful to read that article carefully, as well as the Microsoft Research article, then get back to this thread if you're not sure.

Your eyes are continuously tracking moving objects on the screen.
For sample-and-hold displays, motion blur gets created when static refreshes get smeared across your retinas.
Your eyes are in a different position at the beginning of a refresh than at the end of a refresh.
This creates perceived motion blur, caused by eye-tracking motion.

It is also important to have a good understanding of the various reasons why 48Hz film has less motion blur than 24Hz film.
Also same reason why video has more motion blur than film.
And to understand why CRT 60fps@60Hz has less motion blur than LCD 120fps@120Hz (Hint: it's not because of pixel persistence)

For strobe displays (flicker displays like CRT), for minimum motion blur, flashes of the object need to occur in the exact same place on the retina.
This means the object must move at the exact same speed as the eyes. Flash, flash, flash, in exactly the same position of the retina, as the object moves along the vector of the eye-tracking movement.
Stutters, judders, repeat refreshes, and repeat strobes are discontinuities in this perfect synchronization, and they create a sensation of motion blur.
If you are familiar with the CRT 60fps@60Hz effect (example: silky-smooth fast scrolling in a Nintendo game on an old CRT), this makes more sense.

Here's a post from a different forum (120hz.net) that explains the same thing, from a different angle:
Quote:
Quote:
Originally Posted by mxyz
Thanks for the detailed reply! Do you know why it looks so much smoother when your framerate is equal to the refresh rate of the monitor in lightboost?
This is an effect found on all flicker displays including CRT. This isn't just a LightBoost phenomenon. CRT is well known to have perfectly-sharp-looking motion when the frame rates match refresh rate.

Step 1: Understand "Sample-And-Hold" Motion Blur First

To understand this better, is to understand sample-and-hold first [non-flicker displays; frames continuously shine for the whole refresh].

1. Your eyes are continuously tracking a moving on-screen object.
2. As a result, your eyes are in different positions throughout a refresh
3. As a result, motion blur can be created; eye tracking across a statically displayed frame.

When motion moves fast (e.g. half a screen width per second, during a fast pan), you can have several pixels worth of motion blur created purely from eye tracking alone (even with a 0ms response display).

You can reduce this motion blur by shortening the refreshes, which can be done by higher Hz and/or by flickering the refreshes (black periods between refreshes)

Here are some images that help illustrate this. I've also linked to other scientific references at the bottom of this page.

sampleandhold2.gif
This creates eye tracking-based motion blur
(Source: Microsoft Research)

sampleandhold1.gif
Center image = flicker displays such as CRT and LightBoost.
Rightmost image = continuous-shine displays such as traditional LCD.
(Source: Microsoft Research)

Step 2: Understand How Stutters Affect Motion Clarity

I have created a new graphic that illustrates how a mismatch between fps and Hz can make motion less clear, even at framerates well beyond 60 fps:

fps-vs-hz-small.png

As your eyes track across a screen, the eye expects each flicker of the moving object to be in sync with the motion.
When frame rate matches refresh rate, there's no motion blur on flicker displays.

If the framerate does not match Hz, then discontinuities occur and you see blurred edges or multiple-edge effects (e.g. 30fps @ 60Hz as a double-edge effect). This can persist well beyond 60Hz, even at 60fps@120Hz, and even at 120fps@240Hz given sufficiently fast-moving on-screen objects, though diminishing returns obviously kick in. Some people are very sensitive to stutters, even a 1fps or 2fps mismatch (e.g. 59fps@60Hz). Stutters can also occur when fps is beyond Hz (e.g. 61fps@60Hz).

It is MUCH easier to see a stutter on stroboscopic displays (CRT, plasma, LightBoost) than on continuous-shining displays (e.g. most LCDs). So it is a double-edged sword:
1. Fast motion clarity is much sharper on flicker displays (CRT, plasma, LightBoost)
2. However, a mismatch between fps and Hz is easier to see on flicker displays

That's why the LightBoost benefits really start to shine when framerates are near or above Hz, and why LightBoost isn't worthwhile if you're running at only half framerate (e.g. 60fps @ 120Hz).
As you can see, graphs #1 and #2 are Microsoft Research graphs, and similar graphs are found in dozens of scientific papers (see the academic section), and by googling "sample and hold motion blur site:edu". It is covered by a large amount of Society for Information Display material.

More science references about the sample-and-hold problem, and also gives an understanding why flicker (i.e. CRT) solves the motion blur problem:

"Temporal Rate Conversion" (Microsoft Research)
Information about frame rate conversion, that also explains how eye tracking produces perceived motion blur on a sample-and-hold display, including explanatory diagrams.

Correlation between perceived motion blur and MPRT measurement
by J. Someya (SID’05 Digest, pp. 1018–1021, 2005.)
Covers the relationship between human perceived motion blur versus Motion Picture Response Time (MPRT) of the display. This also accounts for motion blur caused by eye tracking on a sample-and-hold display, a separate factor than pixel persistence.

What is needed in LCD panels to achieve CRT-like motion portrayal?
by A. A. S. Sluyterman (Journal of the SID 14/8, pp. 681-686, 2006.)
This is an older 2006 paper that explains how scanning backlight can help bypass much of an LCD panel’s pixel persistence.

Frame Rate conversion in the HD Era
by Oliver Erdler (Stuttgart Technology Center, EuTEC, Sony Germany, 2008)
Page 4 has very useful motion blur diagrams, comparing sample-and-hold versus impulse-driven displays.

Perceptually-motivated Real-time Temporal Upsampling of 3D Content for High-refresh-rate Displays
by Piotr Didyk, Elmar Eisemann, Tobias Ritschel, Karol Myszkowski, Hans-Peter Seidel
(EUROGRAPHICS 2010 by guest editors T. Akenine-Möller and M. Zwicker)
Section “3. Perception of Displays” (and Figure 1) explains how LCD pixel response blur can be separate from hold-type (eye-tracking) motion blur.

Display-induced motion artifacts
by Johan Bergquist (Display and Optics Research, Nokia-Japan, 2007)
Many excellent graphics and diagrams of motion blur, including impulse-driven and sample-and-hold examples.
tgm1024 12:25 PM 05-02-2013
Quote:
Originally Posted by Mark Rejhon View Post

Quote:
Originally Posted by tgm1024 View Post

Mark,

This graph I'm not sure of:



Why does the eye move off the track? If it were moving off the track, then it would be mitigating the blur, not worsening it (No?)

Besides, the eye can't follow the breaks in a path that quickly, can it?
You got things mixed up:
This chart is for strobe displays -- you can see that my graphic is a variation of the Microsoft Research graphics.
This isn't display-based motion blur.
This is perceived motion blur created by the eye-tracking-motion itself.

This is actually easier to understand if you've already read
Why Do Some OLEDs Have Motion Blur (sample-and-hold motion blur -- motion blur caused by eye tracking -- even if the display has instant pixel response) and its scientific references at the bottom. It is useful to read that article carefully, as well as the Microsoft Research article, then get back to this thread if you're not sure.


No, no, no, I fully understand that. In fact, it was the Microsoft Research article's diagrams that I critiqued with you a while ago as being in error. They were labeling the chart as a desired path, when it's actually the distance traveled -- an error, because using their very same chart, a horizontal motion would result in no blur. They needed to dump the notion of "path".

Correct me: According to your diagram, the eyes are travelling further from the ideal position. But they're not. They're travelling further from the pulsed position, because the pulses are always left behind.

Please verify again: is your eye position the dots, or the "ideal"?
Mark Rejhon 11:43 AM 05-03-2013
Quote:
Originally Posted by tgm1024 View Post

No, no, no, I fully understand that. In fact, it was the Microsoft Research article's diagrams that I critiqued with you a while ago as being in error. They were labeling the chart as a desired path, when it's actually the distance traveled -- an error, because using their very same chart, a horizontal motion would result in no blur. They needed to dump the notion of "path".
That's why I added the headlines to the post, to clarify what the axes represent. (Editing somebody else's graphs was something I decided to avoid, although redoing Blur Busters versions of the graphs completely from scratch is under consideration.) For the purposes of this discussion, "path of real object in scene" is re-interpreted as "ideal line: position of eye relative to position of object on screen". Above the line means one is ahead of the other; below the line means one is behind the other. Either situation can happen. A good improvement to the Microsoft graphs would be to move the sample-and-hold staircase so that the line equally divides the blur above and below (the line going through the steps), since that is technically a more accurate representation of the real world. But the more important point being explained is the divergence of eye position relative to object position.
Quote:
Correct me: According to your diagram, the eyes are travelling further from the ideal position. But they're not. They're travelling further from the pulsed position, because the pulses are always left behind.
Do you mean all the dots should be left behind? Not necessarily: the eye can be tracking the leading edges of stutter (the leading edge of motion blur), or the trailing edges of stutter (the trailing edge of motion blur). That would just shift the dots upwards or downwards. I simply used a rough average, as in looking at the center of the object. It is not defined where the eye position should be relative to object position -- as long as it's in sync (going at the same average speed) -- so it makes sense that dots can go above or below the line. For consistency, I could push all the dots proportionally downwards, so that they are all below the line. It is also important to point out that sample-and-hold (eye-tracking) motion blur is symmetric -- the same amount of blur at both the leading and trailing edges of moving objects (see LCD Motion Artifacts 101).

An extra dot could be added between the third dot and the fourth dot, at exactly the same vertical level as the third dot, to clarify that it's a refresh repeat. Perhaps this is something that should be done, to clarify this point.

I agree, the Microsoft graphs need to be redone in wording at least (and mine with minor tweaks), but the sample-and-hold fact remains -- a mismatch between fps and Hz results in increased motion blur, whether it's 24fps@60Hz, 30fps@60Hz, 47fps@60Hz, 60fps@120Hz, or 100fps@120Hz. It even remains when stutters are so high-speed that they blend into motion blur -- e.g. 120fps@240Hz -- which is above the flicker fusion threshold, so the flickering edges of judders/stutters blend into a solid motion blur. At least until a frame sample length becomes so short that it requires motion too fast to accurately eye-track in order to create perceived motion blur. This applies to both sample-and-hold displays and impulse-driven displays (30fps@60Hz looks different on a CRT than on an LCD, but either way it has more perceived motion blur than 60fps@60Hz). I have no problem telling apart 1/400sec sample lengths (LightBoost=100%) from 1/700sec sample lengths (LightBoost=10%) during 120fps@120Hz motion moving 960 pixels/second (half a screen width per second -- e.g. turning left/right in first-person shooter games). There is about 1 pixel of motion blur difference in this situation, which at normal computer monitor viewing distance can noticeably blur a high-resolution poster on a wall within a video game -- or a faraway sniper hiding in the bushes while I am turning. Even more so during faster turns at 1 screen width per second, creating about 2 pixels of motion blur difference between a 1/400sec frame flicker and a 1/700sec frame flicker. Thus, clearly, the diminishing returns do not yet stop at 1/240sec samples (240Hz), especially during fast motion of really clear frames (e.g. first-person video games on a Geforce Titan on a LightBoost monitor).

For the very common 30fps@60Hz video game situation, there is a double-edge-effect motion blur on impulse displays (CRT 30fps@60Hz), versus a more continuous, softer motion blur on sample-and-hold displays (LCD 30fps@60Hz). The graph is designed to illustrate fps-vs-Hz mismatch on impulse-driven displays. Stutters create nonlinearity in object motion that is impossible to avoid; this provably creates motion blur in both eye-tracking and pursuit-camera-tracking motion -- repeat frames create the effect of a longer sample. At the other extreme, if the framerate is low enough that you can see the stop-motion of individual frames, the motion blur effect is gone, of course. But we're not talking about low framerates. Several people on other forums (e.g. HardForum Monitors and 120hz.net) have asked why LightBoost gaming at 100fps@120Hz doesn't have motion as clear as LightBoost gaming at 120fps@120Hz; this graph was created to address that. We are running at refresh rates / strobe rates that create situations where stutters are so high-speed that they blend into simple-looking motion blur.

That's the bottom line of what needs to be explained: stutters (fps-vs-Hz mismatch) create sample-and-hold motion blur. The problem is how to describe this nicely in graphs, and this graph is one of the best attempts on the Internet so far. Suggestions for minor tweaks are welcome.
Chronoptimist 12:01 PM 05-03-2013
Shouldn't it be something like this?

2009902ku4q.png

You would probably want to annotate it with the stair-step line that motion follows as well, that was just a quick edit.
tgm1024 12:24 PM 05-03-2013
Quote:
Originally Posted by Chronoptimist View Post

Shouldn't it be something like this?

2009902ku4q.png

You would probably want to annotate it with the stair-step line that motion follows as well, that was just a quick edit.

This is what I was trying to say.....but I needed clarification of which item meant what because his graph seemed to try to establish a relative distance with the eye trailing behind the objects. I have to think about his leading vs. trailing edge thing, but if his diagram is indeed correct then I don't properly understand what he's attempting to establish yet.

(Mark, sorry for the 3rd person dialog.)
tgm1024 12:30 PM 05-03-2013
(?) Mark, are you establishing that from object position to object position, the eye must trail it, because it doesn't know where it's going to be until it's drawn? <--- Is that what you're saying, Mark?

Also, the graph has a y-axis labeled "eye position", which should mean that everything in the graph (both dots and line) is part of that metric. It should maybe be just "distance", with the line labeled "eye tracking" and the dots labeled "object positions".

Still not sure.
Mark Rejhon 02:32 PM 05-03-2013
Quote:
Originally Posted by Chronoptimist View Post

Shouldn't it be something like this?

2009902ku4q.png

You would probably want to annotate it with the stair-step line that motion follows as well, that was just a quick edit.
Yep, this one is also accurate.
However, I think one minor edit to *my* graph is to add one dot (after the third dot). Then it is still accurate: it represents one single stutter, e.g. 45fps@60Hz sampled over a period of 6 refreshes. This would create sloping divergence, a repeat refresh, and continued sloping divergence -- like in my graph.
i.e.
-- your modified graph is applicable to 30fps@60Hz
-- my graph is more applicable to 45fps@60Hz or 50fps@60Hz (but needs to be clarified by adding one repeat impulse at the same level, in the big gap)

Also, stutters are still detectable even at 59fps@60Hz and 119fps@120Hz (and, remarkably, 239fps@240Hz is human-detectable stutter!), but they no longer manifest as motion blur, since it's a low-frequency mismatch (a stutter beat frequency of 1 Hertz: stutters tend to occur at the beat frequency between frame rate and refresh rate). The graphs are still useful for illustrating why it's still possible to detect a single stutter under certain conditions even at insane frame rates. The graphs aren't specifically about motion blur, but about discontinuities / eye-object synchronization.
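
That beat frequency is trivial to compute. A one-liner Python sketch (simple model: with fps just below Hz, one repeated refresh occurs |Hz - fps| times per second):

Code:
def stutter_beat_hz(fps, refresh_hz):
    # repeated refreshes per second when fps is slightly below refresh rate
    return abs(refresh_hz - fps)

print(stutter_beat_hz(59, 60), stutter_beat_hz(119, 120), stutter_beat_hz(239, 240))
# -> 1 1 1  (one visible hitch per second in each case)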

An old 21" CRT that syncs to 2048x1536@75Hz (120 KHz horizontal scanrate) is usually able to also display 640x480@240Hz (120 KHz horizontal scanrate), for high-refresh-rate motion tests. It was thus, possible to do well-controlled stutter tests at framerates well beyond flicker fusion threshold; and the human ability to detect single stutters still exists assuming fps=Hz motion moving sufficiently fast and smoothly. (At 960 pixels per second -- a single stutter is 4-pixel stutter (1/240th of 960) -- the object jumps suddenly behind by 4 pixels. This is human noticeable when staring at perfectly-smooth-moving fully-sharp-focus objects (like 3D rendered graphics on a computer screen moving in exact pixel steps per frame).

So the discontinuity problem affects both low-frequency stutters (ability to detect skips) and high-frequency stutters (that blend into perceived motion blur).
The same graphs are applicable for both extremes.
Mark Rejhon 03:49 PM 05-03-2013
I've updated my graphic now:

fps-vs-hz-small.png
GmanAVS 06:08 AM 05-09-2013
How difficult is it for manufacturers to implement algorithm code to force an LCD/LED to compensate for the fps-vs-Hz differential and mimic perceived human eye tracking?
Mark Rejhon 06:52 AM 05-15-2013
Quote:
Originally Posted by GmanAVS View Post

How difficult is it for manufacturers to implement algorithm code to force an LCD/LED to compensate for the fps-vs-Hz differential and mimic perceived human eye tracking?
That's what motion interpolation already does: eye-tracking-independent elimination of sample-and-hold motion blur, by shortening the length of individual refreshes via adding extra Hz (interpolation).

If you meant eye-tracking-dependent elimination of sample-and-hold motion blur, that is a non-sequitur. It makes no sense to do interpolation for only one motion vector, and it would create worse artifacts. It would also be incompatible with multiple viewers.
fluffysheap 10:53 PM 05-15-2013
I bet the best way to do it is with a variable-framerate display that syncs to the framerate output of the source.

The collective conception of display framerate is that it needs to be a fixed frequency because, historically, CRT monitors required this (even multisync monitors would take some time to resync when the frequency changed). But LCDs only stick to it by tradition. Instead, the graphics card, which knows the framerate of the source, should simply delay the next output frame until it's ready, rather than displaying the previous frame twice. I don't know of any limitation in DVI or DisplayPort that would require a fixed framerate.

I really doubt I could tell the difference between a scene fluctuating between 80 and 120Hz, so long as it was just one strobe per frame. But that is just an educated guess.

There would still be some judder under this situation, because the source would have to predict how long the frame will take to render, and it might be wrong. If it mispredicts, the objects in the image will appear in the wrong place because the renderer thought the frame would display at time X and it actually displayed at time X+K (K might be negative). This is pretty much the same thing that causes "microstuttering" on multi-GPU rendering setups.

But it would still be far less judder than you get from displaying an old frame twice!
Mark Rejhon 11:52 PM 05-15-2013
Quote:
Originally Posted by fluffysheap View Post

I bet the best way to do it is with a variable-framerate display that syncs to the framerate output of the source.
Many modern LCDs are already variable-Hz capable. For example, the SEIKI 4K LCD can sync in tiny increments all the way from 24Hz through 120Hz. If you want 93.15Hz, the display can sync to it.
But it doesn't do it dynamically (i.e. continuously varying the refresh rate). You choose a refresh rate and stick to it; the refresh rate can't continuously vary.
Quote:
But LCDs only stick to it by tradition. Instead, the graphics card, which knows the framerate of the source, should simply delay the next output frame until it's ready, rather than displaying the previous frame twice. I don't know of any limitation in DVI or DisplayPort that would require a fixed framerate.
The graphics card can simply switch automatically to a refresh rate matching the source. I know some software video players have the ability to do this. This would work on modern flexible-refresh-rate LCDs. So the job is for the source to switch the refresh rate automatically to match the content (e.g. a computer automatically switching to 24Hz or 48Hz when playing movies, and automatically switching to 60Hz when playing video). Some HTPCs have been set up in the past to do this with various utilities (e.g. Zoom Player).
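
A hypothetical Python sketch of that matching policy (the mode list and function are made up for illustration; a real player would query the display's actual modes, and fractional rates like 23.976 need extra handling):

Code:
SUPPORTED_HZ = [24, 48, 50, 60, 72, 96, 100, 120]  # hypothetical mode list

def matching_refresh(content_fps):
    # lowest supported refresh rate that is an integer multiple of the
    # content frame rate, so every frame repeats a whole number of times
    candidates = [hz for hz in SUPPORTED_HZ if hz % content_fps == 0]
    return min(candidates) if candidates else None

print(matching_refresh(24))  # 24 (48/72/96/120 would also be judder-free)
print(matching_refresh(25))  # 50
print(matching_refresh(30))  # 60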
Quote:
I really doubt I could tell the difference between a scene fluctuating between 80 and 120Hz, so long as it was just one strobe per frame. But that is just an educated guess.
Actually, it's easy to see a change in strobing rate unless you are really precise with strobe timings, like a plasma display is with its subfield flickers. Some plasmas, such as the Panasonic VT50 with its Focused Field Drive, tend to flicker more during some scenes and less during others, in a very dynamic way; different numbers of subfield flickers occur at different times. So I guess it's theoretically possible, but you'd have to be very precise in strobing control so that the duty cycle kept the dynamic strobe rate transitions seamless, e.g. a dynamically-varying flicker rate without visible flicker-rate transitions. This requires creative and careful duty-cycle and timing control. You'd also need logic to keep the strobe rate seamless even through judders/stutters (e.g. frame drops in a game, performance problems decoding a video, and other momentary frame drops), meaning you'd probably get forced repeat strobes during unavoidable stutters in order to keep the strobe-rate transition from becoming noticeable.

Interesting thought exercise.
I don't think this will ever happen at the source (e.g. I don't expect a continuously-variable-framerate source that varies frame rate on a frame-to-frame basis), but it's an interesting theory nonetheless.
Alternatively, approach the 1000fps@1000Hz (real-time) nirvana of the 22nd century, so we have motion-blur-free sample-and-hold without flicker/strobing. The tech will probably arrive sooner, but the real world (e.g. cameras, broadcasts, standards) will probably lazily stick to 60Hz sources, and slowly transition to 120Hz sources within our lifetime even if 1000fps@1000Hz native refresh rates and native real-time source framerates become practical sooner. So we're stuck with strobing or interpolation in the meantime.
fluffysheap 01:52 AM 05-16-2013
For movie playing, it's true, the refresh rate need only be set once, because the whole movie will be at the same framerate. I'm thinking more in terms of games.

Right now most serious gamers play with (software) vertical sync turned off because of the performance penalty it incurs. But even with it off, whether on a CRT or a strobing backlight LCD, sometimes the rendering is not fast enough to keep up with the refresh rate of the display, and you end up with a duplicated frame, or more likely, a duplicated part of a frame. This will cause a motion artifact and visible tearing in the image.

As a result of this gamers are forced to buy rendering hardware that is more powerful than is really needed, to avoid the penalty for game framerate below refresh rate. Obviously in some scenes it either doesn't matter or is unavoidable, because of loading models/textures into memory or whatever, but that happens infrequently and usually when nothing is going on in the game (and there is nothing the display can do about it anyway). You would just need a minimum rate for flicker purposes, and a cap based on the performance limit of the display.

If the refresh times varied frame to frame, software vsync would be free, and there would not be any use for interpolation. It is really just a question of whether the variable refresh time would be visible or not. I'm afraid I don't know enough about physiology, or what is going on with the plasma TVs you mention, to say. I guess if you had a fast enough strobe light that could fluctuate its speed and illuminate a room, you could just set it up, have somebody walk around in there, and see if they look weird. Actually, I bet you already have a light capable of that! :)

The reason I suspect the artifacts from varying refresh times would not be visible is that in games the (software) framerate already varies constantly, but it usually isn't perceptible unless the software framerate drops below the display's framerate. What's more, this fluctuation is usually also going on without close coordination with the game's physics engine, so entities in the game are also subtly in the wrong place. That is, the physics engine sets up the scene based on the CPU's current time, and the video system takes however long it needs to actually render the scene, which fluctuates. So there is already more or less continuous judder in the 2-3ms range going on and it just isn't visible to basically anybody. Granted it is not exactly the same thing, but it makes me think that frame-to-frame fluctuations in refresh time that are below some threshold are not likely to be perceptible.
tgm1024 06:46 AM 05-16-2013
Quote:
Originally Posted by Mark Rejhon View Post

Quote:
Originally Posted by fluffysheap View Post

I bet the best way to do it is with a variable-framerate display that syncs to the framerate output of the source.
Many modern LCDs are already variable-Hz capable. For example, the SEIKI 4K LCD can sync in tiny increments all the way from 24Hz through 120Hz. If you want 93.15Hz, the display can sync to it.
But it doesn't do it dynamically (i.e. continuously varying the refresh rate). You choose a refresh rate and stick to it; the refresh rate can't continuously vary.
Quote:
But LCDs only stick to it by tradition. Instead, the graphics card, which knows the framerate of the source, should simply delay the next output frame until it's ready, rather than displaying the previous frame twice. I don't know of any limitation in DVI or DisplayPort that would require a fixed framerate.
The graphics card can simply switch automatically to a refresh rate matching the source. I know some software video players have the ability to do this. This would work on modern flexible-refresh-rate LCDs. So the job is for the source to switch the refresh rate automatically to match the content (e.g. a computer automatically switching to 24Hz or 48Hz when playing movies, and automatically switching to 60Hz when playing video). Some HTPCs have been set up in the past to do this with various utilities (e.g. Zoom Player).
Quote:
I really doubt I could tell the difference between a scene fluctuating between 80 and 120Hz, so long as it was just one strobe per frame. But that is just an educated guess.
Actually, it's easy to see a change in strobing rate unless you are really precise with strobe timings, like a plasma display is with its subfield flickers. Some plasmas, such as the Panasonic VT50 with its Focused Field Drive, tend to flicker more during some scenes and less during others, in a very dynamic way; different numbers of subfield flickers occur at different times. So I guess it's theoretically possible, but you'd have to be very precise in strobing control so that the duty cycle kept the dynamic strobe rate transitions seamless, e.g. a dynamically-varying flicker rate without visible flicker-rate transitions. This requires creative and careful duty-cycle and timing control. You'd also need logic to keep the strobe rate seamless even through judders/stutters (e.g. frame drops in a game, performance problems decoding a video, and other momentary frame drops), meaning you'd probably get forced repeat strobes during unavoidable stutters in order to keep the strobe-rate transition from becoming noticeable.

I don't think this is in any way going to result in a predictable picture. In addition to image artifacts that we cannot predict, there is the notion of light output (isn't there?). The combination of hold and strobe_length x #strobes had better be exact mid-scene, or your eyes will absolutely notice something. And there's no way that I can see the timing being exact enough. Even if you were accurate to 1 Hz variations, the total light output would never be accurate enough.

And the pulldown effects might suddenly change mid-scene, like you alluded to. Like you said, you'd need to get judder just right, and that might take heroic effort.

It would have been very hard to predict something like SOE several years ago. I would think it would be impossible to predict the side effects of this.

I do like the mental gymnastics the idea creates though. We're up to our armpits in side effects as it stands now and it forces me to think through the root causes.
fluffysheap's Avatar fluffysheap 01:38 PM 05-16-2013
You would have to vary the strobe length each frame, yes, or brightness would depend on the frame rate. It doesn't strike me as a hard problem. The strobe circuit knows how long the screen has been dark, so it can adjust the length of the strobe to suit, to maintain a constant duty cycle for the backlight. The only possible complication might be the response time of wLED phosphors causing a brightness or color shift as the strobe lengths vary, but if that is negligible (or if RGB LEDs are used), then it should work.
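The adjustment described above is easy to state in code. Here is a minimal sketch, with hypothetical numbers (a real strobe controller would live in the monitor's firmware, not in Python; TARGET_DUTY and the strobe limits are assumptions for illustration):

Code:
# Sketch of the idea: keep backlight duty cycle constant while the
# frame interval varies, by scaling the strobe length to the length
# of the frame it terminates.
TARGET_DUTY = 0.10            # 10% on-time per frame (hypothetical)
MIN_MS, MAX_MS = 0.5, 4.0     # hardware strobe limits (hypothetical)

def strobe_length_ms(frame_interval_ms):
    """Strobe length keeping strobe / frame_interval == TARGET_DUTY."""
    return min(MAX_MS, max(MIN_MS, TARGET_DUTY * frame_interval_ms))

for interval_ms in (8.3, 10.0, 12.5):   # 120fps, 100fps, 80fps frames
    print(f"{interval_ms}ms frame -> {strobe_length_ms(interval_ms):.2f}ms strobe")
# 0.83ms, 1.00ms, 1.25ms: same average brightness at every frame rate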

As far as judder goes, unless I am misunderstanding what you mean, I don't think it would be perceptible. Consider the case of a game played with triple buffering and vsync on. Let's say there is a 100Hz display (to simplify the math), so the renderer produces a frame every 10ms on average. Ideally, then, every frame would take 10ms to render, and the display would constantly be 10ms behind the CPU. But real-world renderers have to be faster than the display or else there are doubled frames and it looks bad. So let's say the renderer is actually capable of maintaining about 110 FPS on average, but it varies from one frame to another. What happens is something like this (I have left out the latency of the display, the connection, and even the PC's frame rendering queue, because they are constants):
T+0 Renderer starts frame 1, display shows blank (empty pipeline)
T+9 Renderer finishes frame 1 and starts frame 2
T+10 Display shows frame 1 (representing T=0, 10ms lag)
T+18 Renderer finishes frame 2
T+20 Display shows frame 2 (representing T=9, 11ms lag)
T+27 Renderer finishes frame 3
T+30 Display shows frame 3 (representing T=18, 12ms lag)
T+40 Renderer finishes frame 4 (there was a glowing rocket in the frame, so it took longer to render) and display shows it (representing T=27, 13ms lag)
T+46 Renderer finishes frame 5 (the player turned around and is now looking at a simple scene)
T+50 Display shows frame 5 (representing T=40, 10ms lag)
T+53 Renderer finishes frame 6 and starts 6a
T+59 Renderer finishes frame 6a and discards frame 6
T+60 Display shows frame 6a (representing T=53, 7ms lag)
T+67 Renderer finishes frame 7 (back to normal scenes)
T+70 Display shows frame 7 (representing T=60, 10ms lag)

This stuff goes on constantly and nobody ever notices any of it, even when the renderer gets ahead of the monitor and has to drop a frame. You can get away with the whole scene being timed wrong by 2 or 3ms, so I don't think anyone would notice if the exact same scenes were displayed, but actually at the right time.

But in this example, if frame 4 had taken 1ms longer and forced a doubled frame, that would stand out like a sore thumb. With a variable-frame-time display it would have been only a 1ms difference in frame timing, probably not perceptible.
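To put numbers on the comparison, here is a toy simulation of the timeline above. It is a sketch only, using the same hypothetical render times, comparing when each frame reaches the screen on the fixed 100Hz display versus a hypothetical variable-refresh display:

Code:
import math

# Toy model: when is each rendered frame displayed on a fixed 100Hz
# display (next vsync at/after it's ready) vs a variable-refresh
# display that shows each frame the moment it's ready?
RENDER_MS = [9, 9, 9, 13, 6, 13, 8]   # frame 4 slow; 13 = frames 6+6a
REFRESH_MS = 10                        # fixed display: 100Hz

ready, t = [], 0
for r in RENDER_MS:
    t += r
    ready.append(t)

for i, rdy in enumerate(ready, 1):
    fixed = math.ceil(rdy / REFRESH_MS) * REFRESH_MS
    print(f"frame {i}: ready T+{rdy}, fixed-Hz shows T+{fixed}, "
          f"variable shows T+{rdy} ({fixed - rdy}ms sooner)")

This reproduces the display times in the timeline (T+10, 20, 30, 40, 50, 60, 70) and shows the variable-refresh display trimming 0 to 4ms of waiting per frame.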
Mark Rejhon's Avatar Mark Rejhon 05:25 PM 05-16-2013
Quote:
Originally Posted by fluffysheap View Post

You would have to vary the strobe length each frame, yes, or brightness would depend on the frame rate. It doesn't strike me as a hard problem. The strobe circuit knows how long the screen has been dark, so it can adjust the length of the strobe to suit, to maintain a constant duty cycle for the backlight. The only possible complication might be the response time of wLED phosphors causing a brightness or color shift as the strobe lengths vary, but if that is negligible (or if RGB LEDs are used), then it should work.
It's a wonderful thought exercise, and it might be workable in theory. Dynamically varying refresh lengths, accompanied by dynamic flicker rates without noticeable flicker transitions, would be an extremely difficult engineering problem. I know it's possible to do flicker-rate transitions without noticeable seams, but the hard part is co-ordinating the GPU, the CPU, and the monitor, and making sure the timings are precise enough to keep a reliable duty cycle.

Also, artifacts caused by the varying rate can occur (e.g. stronger ghosting for shorter refresh cycles and weaker ghosting for longer ones, as well as possibly slightly different color temperatures from shorter versus longer LED pulses). I was able to notice a flicker-rate transition when the flicker timing was off by just 1/10,000th of a second (example: 0.1ms extra on a 1ms strobe flash is a 10% brightness change!), so you really need extremely precise control of the duty cycles if you want varying flicker rates without noticeable flicker-rate transitions. This may not be 100% avoidable due to various factors, since you're going to make very sudden flicker-rate transitions. Also, flicker-rate transitions would definitely be noticeable at any frequency below 120Hz, so you'd need to stick to really high rates (e.g. a transition between 200Hz and 300Hz flicker could be made seamless by careful control of the duty cycle).
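The brightness sensitivity quoted above is simple arithmetic. A minimal sketch, using the numbers from this post:

Code:
# Perceived brightness of a strobe is proportional to its on-time,
# so a fixed timing error hurts short strobes far more than long holds.
error_ms = 0.1                       # 1/10,000th of a second
for label, on_time_ms in [("1ms strobe flash", 1.0),
                          ("8.3ms sample-and-hold frame", 8.3)]:
    print(f"{label}: {error_ms / on_time_ms:.1%} brightness change")
# 1ms strobe: 10.0%; 8.3ms frame: 1.2%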

Anyway, I really feel this is not going to happen, due to the complexities and engineering challenges (unless you were making it a Ph.D. thesis or something); it is purely a thought exercise.

So, let's go back to the more practical realities of strobe backlights and scanning backlights.
tgm1024's Avatar tgm1024 11:32 AM 05-19-2013
Quote:
Originally Posted by Mark Rejhon View Post

[Edited to add] In many older LCD technologies, pixel persistence was the major motion blur barrier and could not be bypassed. However, the scales have recently tipped: this is no longer true for modern LCD panels. The pixel-persistence disadvantage of LCD (which influences motion blur) is different from the continuous-backlight disadvantage of LCD (which influences eye-tracking motion blur). These are two separate disadvantageous traits of LCD from a motion-quality perspective, and it is important to understand the difference between them. What is harder to understand is which factor dominates. Recent research has found that the balance has tipped: pixel persistence is no longer the dominant cause.

I'm still at odds with this. It seems that even those papers still establish that eye motion would never cause blur if it weren't for the fact that the object is lit for a duration of time. If the hold time were magically ~0 seconds (but still pulsed brightly), then the eye would have nothing to blur with.

Where the confusion seems to be is different. I think it's because of this (for example): 60 native frames each composed of 1 *bright* pulse, versus 60 native frames each with 16 lesser pulses. The 16 would cause more blur because they re-strobe the retina in different places as the eye moves.
Mark Rejhon's Avatar Mark Rejhon 11:36 AM 05-19-2013
Quote:
Originally Posted by tgm1024 View Post

I'm still at odds with this. It seems that even those papers still establish that eye motion would never cause blur if it weren't for the fact that the object is lit for a duration of time. If the hold time were magically ~0 seconds (but still pulsed brightly), then the eye would have nothing to blur with.
That's correct, and that's what I've said.
Quote:
Where the confusion seems to be is different. I think it's because of this (for example): 60 native frames each composed of 1 *bright* pulse, versus 60 native frames each with 16 lesser pulses. The 16 would cause more blur because they re-strobe the retina in different places as the eye moves.
That's correct.
That's also a problem that happens with PWM dimming (e.g. a 360Hz strobe with a 120Hz refresh).

pursuitcam_pwm.jpg
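As a back-of-envelope sketch of why repeated strobes per frame show up as repeated edges (the 960 pixels/second speed is borrowed from the pursuit-camera tests; the rest is arithmetic):

Code:
# With the eye tracking a moving object, each extra strobe within a
# refresh lands on a different retinal position, duplicating edges.
speed_px_s = 960          # object motion speed
refresh_hz = 120
strobes_per_refresh = 3   # 360Hz PWM on a 120Hz refresh

spacing_px = speed_px_s / (refresh_hz * strobes_per_refresh)
print(f"{strobes_per_refresh} ghost copies per edge, "
      f"spaced ~{spacing_px:.2f} px apart")   # ~2.67 px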

Where's the confusion?
I've always tried to mention that you need one flash per frame for motion blur elimination.
Please point out the exact sentence where I made things confusing.
The numbers I've quoted are for fps=Hz, at one strobe per refresh.

Pixel persistence definitely used to be the main limiting factor, but not anymore. It used to be impossible to get good blur reduction via strobing on LCD displays because refreshes "bled into each other". Early color LCDs 20 years ago had many frames bleeding into each other (e.g. 10 frames streaking, due to pixel persistence 10x as long as a single refresh). This gradually improved until there were only 1 or 2 frames streaking into each other. See High-speed video of 2007 LCD; see the numbers streaking into each other. If you tried to strobe on such an LCD, you'd reduce motion blur but not completely, because pixel persistence streaked unavoidably between all refreshes. There was never a single microsecond with a sufficiently clean refresh to strobe. Also see LCD Motion Artifacts 101.

Finally, the LCD motion blur breakthrough happened (3D LCDs). The streaking between frames became 0 frames. Magic moment! Pixel persistence of 2ms is a tiny fraction of a refresh. You could finally hide pixel persistence in a (lengthened) vertical blanking interval, the idle pause between frames, and strobe the backlight between refreshes. Today it's finally possible to almost completely hide pixel persistence, as shown in this high-speed video of a 2012 LCD: the LCD refreshing happens unseen in the dark, between strobes. The frames no longer bleed into each other (remnant persistence is below the human vision noise floor). Finally, you can strobe the backlight on fully-refreshed frames. Pixel persistence ceases to be a factor in motion blur; it is no longer the limiting factor. Actual measured motion blur (e.g. MPRT of 1.4ms) becomes independent of pixel persistence (e.g. 2ms+). Motion blur now becomes as small as how briefly you can strobe the backlight, once per frame (fps=Hz). The motion clarity limits of LCD have recently become unbounded, thanks to strobe backlights plus fully refreshed frames.

It has been a gradual progression to this goal. Old scanning backlights reduced blur by, say, 10%, maybe 30%, in large part because pixel persistence remained a bottleneck (and because of backlight bleed between adjacent backlight sections). Newer scanning backlights improved motion clarity by 2x or 3x (the blur trail shortens to one-half or one-third size). Finally, full-strobe backlights, such as 120Hz LightBoost (tweaked to OSD=10%), improve motion clarity by 6x relative to non-strobed 120Hz, and 12x relative to non-strobed 60Hz. The motion blur shrinks to 1/12th of its original length. Test patterns confirm it (PixPerAn and Blur Busters Motion Tests), with a direct correlation between motion blur and strobe length (confirmed by oscilloscope photodiode measurements). We need to see this more-easily-achievable order-of-magnitude improvement in more LCD displays, since the new technologies (OLED, blue-phase, FED) are all still a dream at this moment.
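For anyone who wants to re-derive those ratios, a minimal sketch (motion speed and timings taken from the numbers in this thread):

Code:
# Eye-tracking motion blur width ~= visible persistence per frame
# multiplied by tracking speed.
speed_px_s = 960   # test-pattern motion speed

def blur_px(persistence_ms):
    return speed_px_s * persistence_ms / 1000

for label, ms in [("60Hz sample-and-hold", 16.7),
                  ("120Hz sample-and-hold", 8.3),
                  ("120Hz LightBoost (1.4ms MPRT)", 1.4)]:
    print(f"{label}: ~{blur_px(ms):.1f} px blur")
# 16.0 px vs 8.0 px vs 1.3 px: the ~12x improvement quoted above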
tgm1024's Avatar tgm1024 12:16 PM 05-19-2013
Quote:
Originally Posted by Mark Rejhon View Post

Quote:
Originally Posted by tgm1024 View Post

I'm still at odds with this. It seems that even those papers have still established that eye motion would never cause a blur if it weren't for the fact that the object is lit for a duration of time. If the hold time were magically ~0 seconds (but still pulsed brightly), then the eye would have nothing to blur with.
That's correct, and that's what I've said.

One step at a time then: What you said in the OP was this:
Quote:
Recently, with today’s faster LCDs, pixel persistence is now only a minor factor in motion blur.

How can that be true? Even with the articles you posted, if there were essentially no pixel persistence, then there would be no motion blur. Even though the pixel persistence (hold) is dramatically less these days, it's still the reason that a moving eye would have trouble keeping a clear image. In other words, Moving Eye Trouble == Persistence still. (No?)
Mark Rejhon's Avatar Mark Rejhon 12:25 PM 05-19-2013
Quote:
Moving Eye Trouble == Persistence still. (No?)
No. I'm not talking about persistence of vision.
I'm talking about persistence of the LCD panel itself: The time it takes for an LCD pixel to transition from one color to the next. That's what LCD pixel persistence means.

Obviously, there will always be eye-tracking inaccuracies, but sample-and-hold enforces a dis-synchronization above and beyond human limitations. (Continuous eye tracking versus the stepping effect of static frames creates the perceived motion blur of sample-and-hold.)
Strobe backlights, combined with fps=Hz, eliminate the enforced dis-synchronization (motion blur) of sample-and-hold.
Quote:
Recently, with today’s faster LCDs, pixel persistence is now only a minor factor in motion blur.
This is still true.
See above. If it's still unclear, perhaps it's an interpretation problem, so let me try to cover this from a different angle:

New LightBoost monitors are typically LCDs with a manufacturer-rated 1ms or 2ms pixel transition time. Although the real-world pixel transition time is often longer, pixel transitions are still far less than one refresh long. So you get more motion blur caused by eye tracking THAN motion blur caused by LCD panel pixel persistence bleeding between frames.

Check out the pursuit-camera images at: LCD Motion Artifacts 101.

pursuitcam_ghosting.jpg

The clarity of this motion is affected by both pixel persistence and by eye-tracking (camera tracking, for this WYSIWYG photo).
Observe the additional ghosting that occurs only on the left edge. That's the panel pixel-persistence blurriness.
Observe the symmetrical blur at both the left and right edges. That's the tracking-based blurriness.
Clearly, the tracking-based blur is now more prominent than the panel pixel-persistence blur.

As you can see, with today’s faster LCDs, pixel persistence is now only a minor factor in motion blur.
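To make that decomposition concrete, here is a minimal sketch with illustrative numbers (not measurements taken from the photo):

Code:
# Split total perceived blur into the symmetric tracking component
# (persistence of the displayed frame) and the one-sided ghost from
# the pixel transition (GtG) bleeding into the next frame.
speed_px_s = 960    # tracking speed, as in the test patterns
hold_ms = 8.3       # one 120Hz refresh held on screen
gtg_ms = 2.0        # realistic pixel transition time (illustrative)

tracking_blur_px = speed_px_s * hold_ms / 1000   # both edges, symmetric
ghost_px = speed_px_s * gtg_ms / 1000            # trailing edge only
print(f"tracking blur ~{tracking_blur_px:.1f} px, "
      f"persistence ghost ~{ghost_px:.1f} px")   # ~8.0 px vs ~1.9 px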
tgm1024's Avatar tgm1024 12:42 PM 05-19-2013
Quote:
Originally Posted by Mark Rejhon View Post

Quote:
Recently, with today’s faster LCDs, pixel persistence is now only a minor factor in motion blur.
This is still true. Perhaps it's an interpretation problem, so let's try to go over this again.

New LightBoost monitors are typically LCDs with a manufacturer-rated 1ms or 2ms pixel transition time. Although the real-world pixel transition time is often longer, pixel transitions are still far less than one refresh long. So you get more motion blur caused by eye tracking THAN motion blur caused by pixel persistence bleeding between frames.

Ah, stop right there. Here's the problem. (Keep the old days out of it for a second.) You've been interpreting the pixel-persistence argument as one of frame-to-frame bleeding. I have not. To me the problem has always been one of a smear against the retina... even if there is zero bleed-over from frame to frame. There was gross smearing and tearing and similar issues in the past, but I'm talking about current displays.

THAT is why we're not connecting. You're arguing against a point that I never read in the first place.

Quote:
Moving Eye Trouble == Persistence still. (No?)
No. I'm not talking about persistence of vision.

Now you threw me again. I'm not talking about persistence of vision either. But I think we've cleared up that you're arguing against frame-to-frame bleed issues, which I'm not disagreeing with.
Quote:
Obviously, there will always be eye-tracking inaccuracies, but sample-and-hold enforces a dis-synchronization above and beyond human limitations. (A continuously moving eyeball versus the stepping effect of static frames creates the perceived motion blur of sample-and-hold.)

See? IMO, this is precisely why you should never have started with "Recently, with today’s faster LCDs, pixel persistence is now only a minor factor in motion blur." That's still not true. To verify: I believe what you meant to say was "pixel persistence bleeding from frame to frame is no longer a dominant cause of motion blur", because pixel persistence itself (not frame-to-frame bleed) absolutely still is. That's how the blur is formed.
Mark Rejhon's Avatar Mark Rejhon 01:21 PM 05-19-2013
Stand by; you've caused me to accelerate a blog post I've been planning to make.

A few days ago, I created pursuit-camera photos of LightBoost. The moving-camera photos (1/30-second exposure of a 120fps moving object) are as crystal clear as a stationary-camera shot of a stationary image. I want you to explain the photo I'm about to post within the next hour. Perhaps this will allow us to settle on appropriate terminology. Stand by.
tgm1024's Avatar tgm1024 01:46 PM 05-19-2013
Quote:
Originally Posted by Mark Rejhon View Post

Stand by; you've caused me to accelerate a blog post I've been planning to make.

A few days ago, I created pursuit-camera photos of LightBoost. The moving-camera photos (1/30-second exposure of a 120fps moving object) are as crystal clear as a stationary-camera shot of a stationary image. I want you to explain the photo I'm about to post within the next hour. Perhaps this will allow us to settle on appropriate terminology. Stand by.

Sounds great!
Mark Rejhon's Avatar Mark Rejhon 02:04 PM 05-19-2013
Allow me to announce a new blog post, Blur Busters 60Hz versus 120Hz versus LightBoost.
Quote:
These photographs compare motion blur at 60Hz vs 120Hz, as well as with the LightBoost strobe backlight enabled. All images below were captured from the same ASUS VG278H computer monitor. They demonstrate differences in perceived motion blur caused by the sample-and-hold effect.

These UFO objects were moving horizontally at 960 pixels per second on an ASUS VG278H LCD, at a frame rate matching the refresh rate, and were captured with a pursuit camera at a 1/30-second exposure (exposing multiple refreshes into the same image).

60 Hz Refresh rate:
Each refresh is displayed continuously for a full 1/60 second (16.7ms)
CROPPED_60Hz-1024x341.jpg

120 Hz Refresh rate:
Each refresh is displayed continuously for a full 1/120 second (8.3ms). This creates 50% less motion blur.
CROPPED_120Hz-1024x341.jpg

120 Hz LightBoost:
The backlight is strobed briefly, once per refresh, eliminating sample-and-hold. This has 85% to 92% less motion blur, depending on the LightBoost OSD setting.
CROPPED_LightBoost50-1024x341.jpg

At 120fps@120Hz, a 1/30-second camera exposure captures 4 refreshes. All 4 refreshes stack on top of each other, because the pursuit camera moves in sync with the 120fps@120Hz moving object during the exposure. The brief backlight flash prevents tracking-based motion blur.

There is extremely little leftover ghosting caused by pixel persistence (virtually invisible to the human eye): nearly all (>99%) pixel-persistence ghosting and overdrive artifacts are kept unseen, because the backlight is turned off between refreshes. The backlight strobe flash length, measured at 1.5ms by TFT Central, is more than 90% shorter than a 60Hz refresh (16.7ms). The LightBoost 10% setting uses 1.5ms strobe flashes, while the LightBoost 100% setting uses 2.4ms strobe flashes. This is still far shorter than even a 120Hz refresh (8.3ms)! As a result, motion clarity on a LightBoost monitor is comparable to a CRT display.

Observe the leftmost end of the bottommost image. You can see a very faint dot to the left of the line of white dots (in the red base). That's an example of the tiny remnant of pixel persistence the human eye sees. This pursuit-camera photograph shows that the vast majority of pixel persistence (roughly 99%) is successfully hidden from the human eye.

I have hereby posted photographic proof.
Please explain if you think I am still wrong.
I have sent you a PM with a VIP invite to the Blur Busters Motion Tests, pointing to the exact same motion test pattern that I used.
(Blur Busters Motion Tests is launching publicly Spring 2013)
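If you want to check the 85%-92% arithmetic yourself, here is a minimal sketch using the TFT Central strobe measurements quoted above:

Code:
# Blur reduction vs 60Hz sample-and-hold: per-frame visibility
# shrinks from a full 16.7ms refresh to just the strobe flash.
baseline_ms = 16.7                        # 60Hz sample-and-hold
strobes_ms = {"LightBoost=10%": 1.5,      # TFT Central measurement
              "LightBoost=100%": 2.4}

for setting, ms in strobes_ms.items():
    print(f"{setting}: {1 - ms / baseline_ms:.0%} less motion blur")
# LightBoost=10%: 91% less; LightBoost=100%: 86% less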