Arduino scanning backlight for LCD to simulate "960Hz"/"1920Hz" with NO interpolation! (CRT-like motion) - AVS Forum
post #1 of 47, 09-15-2012, 08:50 PM - Thread Starter
Mark Rejhon (AVS Special Member)
Goal: Create a home-made scanning backlight, as a mod for an existing LCD monitor, with the ability to reduce motion blur to the point where it is less than on a CRT.

After an endorsement by John Carmack of id Software, I decided to proceed with this project:
Quote:
Mark Rejhon @mdrejhon
@ID_AA_Carmack I'm researching home-made Arduino scanning backlight (90%:10% dark:bright) using 100-200 watts of LED's. 120hz.net/showthread.php…

John Carmack @ID_AA_Carmack
@mdrejhon Good project. You definitely want to find the hardware vsync, don't try to communicate it from the host.
(Please see the project info before you say "it can't be done")


I intend to proceed with this experiment in the near future, to allow LCD to have less motion blur than CRT. This requires a scanning backlight that's dark 90-95% of the time -- reducing motion blur by 90-95% without motion interpolation and without more than about 3-4ms of added input lag. This is quite extreme and requires a lot of LED's (a 10-20x brighter backlight to compensate for the very long dark periods between refreshes). Fortunately, common 5-meter LED ribbons, often used for accent lighting, have made it cheap to buy 200 watts of LED's and cram all of them behind a 24" monitor, illuminating only 20 watts at a time for a scanning backlight. The best scanning backlights in today's industry (e.g. Samsung/Sony/Elite) are dark only approximately 75% of the time.

I've designed a draft schematic. There may be errors, and there's no protection (e.g. overcurrent, overvoltage, etc), but it shows how relatively simple an Arduino scanning backlight really is. Most of the complexity is in the timing and synchronization -- still relatively simple Arduino programming.

[Image: ArduinoScanningBacklight_schem960.png]

Full size version: LINK

No modification of monitor electronics is required. I only need to know the VSYNC signal timing. Simple manual calibration adjustments can adjust the phase of the scanning and compensate for input lag. This can be a one-time step (for a given video mode) -- not too different from a 3D shutter glasses crosstalk adjustment procedure.

[EDIT: This is an old post from 2012, archived for historical reasons -- Arduino Scanning Backlight on Blur Busters Forums.]

Thanks,
Mark Rejhon



BlurBusters Blog -- Eliminating Motion Blur by 90%+ on LCD for games and computers

Rooting for upcoming low-persistence rolling-scan OLEDs too!

post #2 of 47, 09-15-2012, 08:52 PM - Thread Starter
Mark Rejhon (AVS Special Member)
For the Arduino scanning backlight, there are specific requirements I need to research -- e.g. creating a small-scale breadboard trailblazer for this project. I've created electronics before, and I have programmed for more than 20 years, but this will be my first Arduino project. I've been researching options, including the Arduino itself, to determine the best way to program it for a scanning backlight experiment.

Goals For Scanning backlight:

- At least 8 segments.
- Reduce motion blur by 90%. (Ability to be dark 90% of the time)
- Tunable in software. (1/240, 1/480, 1/960, and provisionally, 1/1920)
- Manual input lag and timing adjustment.
___

1. Decide a method of VSYNC detection.

Many methods possible. Will likely choose one of:
....(software) Signalling VSYNC from computer, using DirectX API RasterStatus.InVBlank() and RasterStatus.ScanLine .... (prone to CPU and USB timing variances)
....(hardware) Splicing video cable and use a VSYNC-detection circuit (easier with VGA, harder with HDMI/DP, not practical with HDCP)
....(hardware) Listen to 3D shutter glasses signal. It's conveniently synchronized with VSYNC. (however, this may only work during 3D mode)
....(hardware) Last resort: Use oscilloscope to find a "VSYNC signal" in my monitor's circuit. (very monitor-specific)

Note: Signalling the VSYNC from the host is not recommended (John Carmack said so!), likely due to variances in timing (e.g. CPU, USB, etc). Variances would interfere, but this gives maximum flexibility for switching monitors in the future, and makes it monitor-independent. I could stamp microsecond timecodes on it to compensate (RasterStatus.ScanLine may play a role in 'compensating'). In this situation, an LCD monitor's natural 'input lag' plays into my favour: it gives me time to compensate for delays caused by timing fluctuation (wait shorter/longer until 'exactly' the known input lag). I can also run averaging algorithms over the last X refreshes (e.g. 5 refreshes) to keep things even more accurate. The problem is that Windows is not a real-time operating system, and there's no interrupt/event on the PC to catch InVBlank behavior. Another idea is almost-randomly reading "ScanLine" and almost-randomly transmitting it (with a USB-timing-fluctuation-compensation timecode) to the Arduino, and letting the Arduino calculate the timings needed. This is far more complex software-wise, but far simpler and more flexible hardware-wise, especially if I want to be able to test multiple different LCD's with the same home-made scanning backlight.
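
As a rough illustration of the host-signalling idea (a hypothetical sketch only -- error handling is omitted, and the actual serial write to the Arduino is left out), a Direct3D 9 poller could sample the raster position and ship timestamped readings downstream:

Code:
// Hypothetical host-side poller: samples GetRasterStatus() and prints
// "<microsecond timecode>,<scanline>,<inVBlank>" records that would be
// forwarded to the Arduino over a serial port.
#include <cstdio>
#include <windows.h>
#include <d3d9.h>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed = TRUE;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow = GetConsoleWindow();
    IDirect3DDevice9* dev = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, pp.hDeviceWindow,
                      D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &dev);

    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);

    for (;;) {
        D3DRASTER_STATUS rs;
        dev->GetRasterStatus(0, &rs);   // InVBlank flag + current scanline
        QueryPerformanceCounter(&now);
        long long usec = now.QuadPart * 1000000LL / freq.QuadPart;
        printf("%lld,%u,%d\n", usec, rs.ScanLine, rs.InVBlank);
        Sleep(1);   // coarse pacing; the timecode compensates for jitter
    }
}
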
___

2. Verify the precision requirements that I need.

- What are the precision requirements for length of flashes (amount of time that backlight segment is turned on)
- What are the precision requirements for sequencing (lighting up the next segment in a scanning backlight)
- What are the precision requirements for VSYNC (beginning the scanning sequence)

Milliseconds, microseconds? Experimentation will be needed. People who are familiar with PWM dimming already know that microseconds matter a great deal here. Scanning backlights need to be run very precisely; sub-millisecond-level jitter _can_ be visually noticeable, because a 1.0 millisecond versus 1.1 millisecond variance means a light is 10% brighter! That 0.1 millisecond makes a mammoth difference. We don't want annoying random flicker in a backlight! It's the same principle as PWM dimming -- if the pulses are even just 10% longer, the light is 10% brighter -- even if the pulses are tiny (1ms versus 1.1ms). Even though we're talking about timescales normally not noticeable to the human eye, precision plays an important role here, because the many repeated pulses over a second _add_ up to a very noticeably brighter or darker picture. (120 flashes of 1.0 millisecond equals 120 milliseconds, but 120 flashes of 1.1 milliseconds equals 132 milliseconds.) So we must be precise here; pulses must not vary from refresh to refresh. However, we're not too concerned with the starting brightness of the backlight -- if the backlight is 10% too dim or too bright, we can deal with it -- it's the consistency between flashes that is more important. The length of the flash is directly related to the reduction in motion blur: the shorter the flash, the less motion blur. Since we're aiming for a 1/960th second flash (with a hopeful 1/1920th second capability), that's approximately 1 millisecond.

As long as the average brightness remains the same over approximately a flicker fusion threshold (e.g. ~1/60sec), variances in the flicker timing (VSYNC, sequencing) aren't going to be as important as the precision of the flashes, as long as the flashes get done within the flicker fusion threshold. There may be other human vision sensitivities and behaviors I have not taken into account, so experimentation is needed.

Estimated precision requirements:
Precision for length of flashes: +/- 0.5 millisecond
Precision for consistency of length of flashes: +/- one microsecond
Precision for sequencing: +/- somewhere less than 1/2 the time of a refresh (e.g. (1/120)/2 = 4 milliseconds)
Precision for VSYNC timing: +/- somewhere less than 1/2 the time of a refresh (e.g. (1/120)/2 = 4 milliseconds)

The goal is to better these requirements by an order of magnitude, as a safety margin for more sensitive humans and for errors. That means the length of flashes would be precise to 0.1 microseconds.
This appears doable with Arduino. Arduinos are already very precise and very synchronous-predictable; Arduino projects include TV signal generators -- THAT requires sub-microsecond precision for good-looking vertical lines in a horizontally-scanned signal.
Example: http://www.javiervalcarce.eu/wiki/TV_Video_Signal_Generator_with_Arduino
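
Before any backlight exists, flash-length consistency can be checked on the bench. Here's a minimal sketch of that idea (assuming a test LED on pin 9 -- note that micros() has ~4 microsecond granularity on a 16 MHz Arduino):

Code:
// Flash a test LED at ~1/960 second and track the worst-case deviation
// in the measured on-time, to check the +/- microsecond consistency goal.
const int LED_PIN = 9;
const unsigned long FLASH_US = 1042;     // ~1/960 second

unsigned long minLen = 0xFFFFFFFFUL, maxLen = 0;
unsigned int samples = 0;

void setup() {
  pinMode(LED_PIN, OUTPUT);
  Serial.begin(115200);
}

void loop() {
  unsigned long t0 = micros();
  digitalWrite(LED_PIN, HIGH);
  delayMicroseconds(FLASH_US);
  digitalWrite(LED_PIN, LOW);
  unsigned long len = micros() - t0;     // measured flash length

  if (len < minLen) minLen = len;
  if (len > maxLen) maxLen = len;

  if (++samples == 1000) {               // report every 1000 flashes
    Serial.print("min=");  Serial.print(minLen);
    Serial.print(" max="); Serial.println(maxLen);
    samples = 0; minLen = 0xFFFFFFFFUL; maxLen = 0;
  }
  delay(7);   // idle out the rest of a ~120Hz refresh period
}
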
___

3. Arduino synchronization to VSYNC

...(preferred) Arduino interrupt method: attachInterrupt() on an input pin connected to VSYNC. However, at 120Hz, the VSYNC pulse is less than a millisecond long, so I'll need to verify that I can detect short pulses via attachInterrupt() on Arduino. Worst comes to worst, I can add a simple toggle circuit inline on the VSYNC signal, so that the signal changes only 120 times a second (e.g. on for even refreshes, off for odd refreshes), a frequency low enough to be detectable using Arduino. attachInterrupt() can interrupt any in-progress delays, so this is convenient, as long as I don't noticeably lengthen the delay beyond my precision requirements.
...(alternate) Arduino poll method. This may complicate precise input lag compensation, since I essentially need to do two things at the same time precisely (precise VSYNC polling plus input lag compensation, and precise scanning backlight timing). I could use two Arduinos running concurrently, side by side -- or run an Arduino along with helper chips such as an ATtiny -- to meet my precision requirements for the two time-critical tasks.

I anticipate being able to use the Interrupt method; but will keep the poll method as a backup plan.
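
A minimal detection test for the interrupt method might look like this (assuming VSYNC wired to pin 2, which is interrupt 0 on an Uno); it just counts edges and prints the measured refresh rate:

Code:
// Count VSYNC rising edges and report once a second, to confirm the
// short VSYNC pulse is actually caught by attachInterrupt().
volatile unsigned long vsyncCount = 0;

void onVsync() { vsyncCount++; }   // keep the ISR as short as possible

void setup() {
  Serial.begin(115200);
  pinMode(2, INPUT);
  attachInterrupt(0, onVsync, RISING);   // interrupt 0 = pin 2 on an Uno
}

void loop() {
  noInterrupts();                        // 32-bit reads aren't atomic on AVR
  unsigned long before = vsyncCount;
  interrupts();
  delay(1000);
  noInterrupts();
  unsigned long after = vsyncCount;
  interrupts();
  Serial.print("Measured refresh rate: ");
  Serial.println(after - before);        // expect ~120 for a 120Hz input
}
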
___

4. Dimming ability for scanning backlight

...(preferred) Voltage method. A voltage-adjustable power supply to the backlight segments. (Note: A tight voltage range can dim LED's from 0% through 100%)
...(alternate) PWM method. Dimming only during the time a backlight segment is considered 'on', e.g. a 1/960th second flash would use microsecond delays to PWM-flicker the light over the 1/960th second window, for a dimmed flash. A tight PWM loop on an Arduino is capable of microsecond PWM (Arduino software is already used as a direct video signal generator).

The dimming of the backlight shouldn't interfere with its scanning operation. Thus, the simplest non-interfering method is a voltage-controlled power supply that can dim the LED's simply using voltage. Adding PWM to a scanning backlight is far more complicated (especially if I write it as an Arduino program), since I can PWM only during the intended flash cycle, or I lose the motion-blur-eliminating ability.
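
If the PWM route were chosen anyway, the chopping has to stay inside the flash window. A minimal sketch of that idea (assuming a segment MOSFET gate on pin 9 and a 50-microsecond chop period):

Code:
// PWM dimming confined to the ~1/960 second flash window: the strobe
// length (and thus the motion blur reduction) is unchanged; only the
// duty cycle inside the window scales the brightness.
const int SEG_PIN = 9;
const unsigned long FLASH_US = 1042;   // total flash window, ~1/960 second
const unsigned long CHOP_US  = 50;     // PWM period inside the window

void flashDimmed(int dutyPercent) {
  unsigned long onUs  = CHOP_US * dutyPercent / 100;
  unsigned long offUs = CHOP_US - onUs;
  unsigned long start = micros();
  while (micros() - start < FLASH_US) {
    digitalWrite(SEG_PIN, HIGH);
    delayMicroseconds(onUs);
    digitalWrite(SEG_PIN, LOW);
    if (offUs) delayMicroseconds(offUs);
  }
}

void setup() { pinMode(SEG_PIN, OUTPUT); }

void loop() {
  flashDimmed(60);   // one 60%-brightness flash
  delay(8);          // rest of a ~120Hz refresh, just for this demo
}
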
___

5. Adjustable Input lag compensation

...(preferred) Use the Arduino micros() function to start a scanning sequence exactly X microseconds after the VSYNC signal.

Hopefully this can be done in the same Arduino, as I have to keep completing the previous scanning backlight refresh sequence (1/120th second) while receiving a VSYNC signal. Worst comes to worst, I can use two separate Arduinos, or an Arduino running alongside an ATtiny (one for precisely listening to VSYNC and doing input lag compensation, the other for precise backlight sequencing). If I use attachInterrupt() for the VSYNC interrupt on Arduino, I can capture the current micros() value and save it to a variable, wait for the current scanning-backlight sequence to finish, and then start watching micros() to time the next scanning backlight refresh sequence.
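
Sketched out (same pin-2 wiring assumption as in step 3; LAG_US is a placeholder value that the calibration utility would set), that flow is roughly:

Code:
// The VSYNC ISR stamps a timecode; the main loop starts the scan exactly
// LAG_US after the stamp. runScanSequence() is the hypothetical sequencing
// routine sketched under step 6 below.
volatile unsigned long vsyncStamp = 0;
volatile bool vsyncSeen = false;
const unsigned long LAG_US = 4000;   // placeholder; tuned per monitor/mode

void onVsync() {                     // kept as short as possible
  vsyncStamp = micros();
  vsyncSeen = true;
}

void setup() {
  pinMode(2, INPUT);
  attachInterrupt(0, onVsync, RISING);
}

void loop() {
  if (vsyncSeen) {
    noInterrupts();                  // copy the 32-bit stamp atomically
    unsigned long stamp = vsyncStamp;
    vsyncSeen = false;
    interrupts();
    while (micros() - stamp < LAG_US) { }   // wait out the lag compensation
    // runScanSequence();            // hypothetical; see step 6
  }
}
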

___

6. Precise sequencing of backlight segments.

...(preferred) Tiny delays are done on Arduino with delayMicroseconds(). Perfect for sequencing the scanning light segments. Turn one backlight segment on, delay, turn off, repeat for next backlight segment.
...(alternate) Use the PWM outputs (six of them) of an Arduino, or use a companion component to do the pulsing/sequencing for me. These PWM outputs can be configured to pulse in sequence. However, they won't give me the precision needed for a highly-adjustable scanning backlight capable of simulating "1920Hz".

The tiny delays on the Arduino are currently my plan. I also need to do input lag compensation, so I have to start sequencing the backlight at the correct time delay after a VSYNC. I am also aware that interrupt routines (attachInterrupt()) will lengthen an in-progress delay, but I plan to keep my interrupt very short (less than 0.5 microsecond execution time, see precision requirements at top) to make this a non-issue.
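
A minimal sketch of the sequencing core (assuming 8 segment gates on pins 3-10, top to bottom; pacing each slot against the absolute sequence start time avoids cumulative drift across a refresh):

Code:
// Flash 8 segments in sequence within one 1/120 second refresh. Each
// segment gets a ~1/1920 second flash inside its ~1042 microsecond slot,
// then stays dark -- the "wait in the dark between segments" idea.
const int SEG_PINS[8] = {3, 4, 5, 6, 7, 8, 9, 10};
const unsigned long REFRESH_US = 8333;            // 1/120 second
const unsigned long SLOT_US    = REFRESH_US / 8;  // ~1042 us per segment
const unsigned long FLASH_US   = 521;             // ~1/1920 second flash

void runScanSequence() {
  unsigned long start = micros();
  for (int i = 0; i < 8; i++) {
    // hold dark until this segment's slot, measured from the sequence start
    while (micros() - start < (unsigned long)i * SLOT_US) { }
    digitalWrite(SEG_PINS[i], HIGH);
    delayMicroseconds(FLASH_US);
    digitalWrite(SEG_PINS[i], LOW);
  }
}

void setup() {
  for (int i = 0; i < 8; i++) pinMode(SEG_PINS[i], OUTPUT);
}

void loop() {
  runScanSequence();   // in the real build, gated by VSYNC + the lag delay
}
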

Even though my goal is "960Hz" equivalence, I want to be able to play with "1920Hz" equivalence just for experimentation and overkill's sake, and to literally "pwn" the "My LCD is better than CRT" prize, even though it will probably require a 200-watt backlight to do so without a dim picture.
___

Likely Steps

-- The next step is to download an electronics schematic creator program and create the schematic diagram [DONE].
-- Emulate, if needed. Virtual Breadboard (http://www.virtualbreadboard.com/) has an electronics circuit simulator including an Arduino emulator. It can run in slow-motion mode for visual verification of behavior; although it won't be timing-precise, it would at least allow me to visually test the code in slow motion even before I buy the parts.
-- After that, the subsequent step is to breadboard a desktop prototype with 8 simple LED's -- more like a blinky toy -- that can run at low speed (human visible speeds) and/or high speed (scanning backlight).
-- Finally, choose the first computer monitor to hack apart. Decide if I want to try taking apart my old Samsung 245BW (72Hz limit) or buy a good high-speed panel (3D 120Hz panel). My Samsung is very easy to take apart, and it is disposable (I want to replace it with a Catleap/******** 1440p 120Hz or similar within two or three months), so it is a safe 'first platform' to test on. Even though its old technology means its response speed will cause more ghost after-images than today's 3D 120Hz panels, it will at least allow a large amount of testing before risking a higher-end LCD.
-- Create a high-power backlight (200 watts). This will be the fun part of the project: buying 20 meters of 6500K LED tape and cramming all 2,400 LED's into a 2-foot wide 16:9 rectangle (suitable for 24"-27" panels). This might be massive overkill, but I want to eventually nail the "1920Hz"-equivalence "My LCD is better than CRT" prize. Only 10-20 watts of LED's would be lit up at a time, anyway. Appropriate power supply, switching transistors for each segment (25+ watt capable), etc. Attach it to the Arduino outputs, put LCD glass in front, and tweak away.
___

Although I do not expect many people here are familiar with Arduino programming, I'd love comments from anybody familiar with an Arduino, to tell me if there's any technical Arduino gotchas I should be aware of.

[EDIT: This is an old post from 2012, archived for historical reasons -- Arduino Scanning Backlight on Blur Busters Forums.]

Thanks,
Mark Rejhon


post #3 of 47, 09-16-2012, 08:30 AM - Thread Starter
Mark Rejhon (AVS Special Member)
Someone emailed me asking about 200 watts being insane power consumption.
The average power consumption would actually be only ~10 watts (if illuminating a 5% section at a time), or ~20 watts (if illuminating a 10% section at a time).

P.S. I don't mean superior to CRT in all metrics. There will always be professional studio-league LCD monitors that have better color. However, one metric that has not been adequately addressed is motion blur -- and that's the sole metric this scanning backlight aims to solve. (That said, adding this technology to a professional studio LCD monitor is potentially useful.)

Thanks,
Mark Rejhon


post #4 of 47, 09-16-2012, 11:58 AM
borf (AVS Special Member)
Quote:
Originally Posted by Mark Rejhon View Post

....(hardware) Last resort: Use oscilloscope to find a "VSYNC signal" in my monitor's circuit. (very monitor-specific)

Timing cues from the monitor sound best to me. In addition to the timing variances you mentioned, input lag usually varies from 1 to 5 frames (16-80ms), so I don't think you can sync with DirectX and compensate for LCD lag with an averaging algorithm.
Quote:
Originally Posted by Mark Rejhon View Post

For example, a single 8ms refresh (1/120th second) for a 120Hz display, can be enhanced with a scanning/strobed backight:
2ms -- wait for LCD pixel to finish refreshing (unseen, while in the dark)
5ms -- wait a little longer for most of ghosting to disappear (unseen, while in the dark)
1ms -- flash the backlight quickly. (1/960th or 1/1000th second -- or even 1/2000th second!)

In this scenario, each pixel (refreshing top to bottom) must sync to its own individual LED. Can this be done with "globally placed" LED strips? Otherwise there is a huge "fudge factor" if trying to illuminate crystals at full transition, as each color additionally has a unique transition time. That's OK, if you accept the imprecise nature (reduced performance?) of backlight scanning. Just a thought - an LCD with global refresh would eliminate the refresh timing issue without resorting to individual LEDs. You could then sync the backlight to the average color transition (not perfect).
Quote:
Originally Posted by Mark Rejhon View Post

Even though my goal is "960Hz" equivalence, I want to be able to play with "1920Hz" equivalence just for experimentation and overkill's sake

I'd stick with 960Hz, as 1920Hz would require 240 unique fps from the video card (8-strip backlight). You could choose not to raise the frame rate and flash each frame twice, but that increases average hold time and blur (sorry if I'm preaching to the choir, but you have probably played a 60fps game on a blur-free 120Hz CRT and seen this phenomenon - it is not subtle!) Then again, 240fps @ 1920Hz (8-strip scanning backlight) would be better theoretically.
post #5 of 47, 09-18-2012, 12:33 AM - Thread Starter
Mark Rejhon (AVS Special Member)
Quote:
Originally Posted by borf View Post

Timing cues from the monitor sound best to me. In addition to the timing variances you mentioned, Input lag varies from 1 to 5 frames usually (16-80ms) so i don't think you can sync with DirectX and compensate for LCD lag with an averaging algorithm.
I'm not trying to solve the LCD lag via DirectX:

There are a few separate issues being solved here.
1. Input lag. It can be manually calibrated using a software slider. It would not be too different from a software slider for crosstalk calibration for computer 3D glasses (syncing the shutter to a specific LCD). For a specific video mode, the input lag is fixed and microsecond-accurate, so this can be a one-time manual adjustment, on a motion test pattern with color patterns.
2. Listening for VSYNC is a separate problem from input lag, and can be solved as an independent problem.
3. Scanning speed within a refresh (e.g. the length of the actual scan, which may be done faster than the display cable's refresh). This, too, can be a one-time manual adjustment (for a specific video mode). I can also make it adjustable down to instantaneous (e.g. full panel strobe) for global-refresh panels (e.g. multiscan LCD).

My goal is a reusable 24"-wide backlight panel that can be recycled with any hackable 24"-27" monitor for testing/experimentation; so I want to be as independent of the monitor electronics as possible, by providing separate manual adjustments for the input lag and the intra-refresh scanning speed. 27" panels are 23.5" wide. Older LCD displays are easier to mod, since the backlight is separate from the glass, but newer LCD monitors often use laptop-style LCD's which build a hard-to-remove backlight into the panel assembly.
Quote:
In this scenario, each pixel (refreshing top to bottom) must sync to its own individual led.. Can this be done with "globally placed" led strips. Otherwise there is a huge "fudge factor" if trying to illujminate crystals at full transition, as each color also has additionally a unique transition time. That's ok, if you accept the imprecise nature (reduced performance?) of back light scanning. Just a thought - an lcd with global refresh would eliminate the refresh timing issue without resorting to indivdual leds. You could then sync the backlight to the average color transition (not perfect).
Right. LCD pixels are continuously changing from one color to the next in consecutive frames, with most of the change completed in the first few (approx ~2) milliseconds, and the transition virtually done towards the end of the cycle; so different parts of the LCD within the same scanning backlight segment shouldn't be too different. If I remember correctly, literature online shows that early scanning backlights in various 2006 computer monitors (which reduced motion blur by only ~25%) only had 4 or 8 segments (CCFL), and I'd be surprised if they had more than 8 segments. There may be about 2-3% variance in the color scale, but I expect color variances less than the difference between, say, an IPS LCD versus a TN LCD.
Quote:
I'd stick with 960hz as 1920hz would require 240 unique fps from the video card (8-strip back light). You could choose not to raise the frame rate and flash each frame twice, but that increases average hold time and blur (sorry if i'm preaching to the choir but you probably have played a 60fps game on a blur-free120hz crt to see this phenomenon - it is not subtle!) Then again 240fps @ 1920hz (8 strip scanning backlight) would be better theoretically..
I can still do 1920Hz-simulation with just 8 segments. I don't have to step the sequence exactly:
I'd just flash one segment for 1/1920 second, wait in the dark 1/1920 second, flash the next segment, and so on. It'd simply look like the pulse width halved, PWM-dimming the picture by 50%. The shorter hold time of illumination gives the 50% less motion blur. I'd be able to simulate arbitrary Hz-equivalences, such as 1345Hz-equivalence or 773Hz-equivalence, just by calculating how many segments to illuminate, and for how long each. For example, one segment is illuminated for 1/773 second even as the next segment starts illuminating 1/960 second later, also for 1/773 second. That means sometimes one segment is illuminated at a time, and two consecutive segments are illuminated at other times. And so on. 1/480 equivalence would be done by illuminating two segments at a time, at all times, sliding downwards in sync with the scan, and 1/240 equivalence would be done by illuminating four segments at a time, sliding downwards in sync with the scan.
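
That arithmetic is easy to parameterize. A hypothetical helper (my own illustration, for 8 segments at 120Hz):

Code:
// For a target "equivalent Hz", the per-segment on-time is 1/targetHz,
// and the average number of segments lit at once follows from dividing
// that on-time by the step interval between segment turn-ons.
const float REFRESH_HZ = 120.0;
const int   SEGMENTS   = 8;

// step interval between successive segment turn-ons, in microseconds
float stepUs() { return 1e6 / (REFRESH_HZ * SEGMENTS); }     // ~1042 us

// on-time per segment for a given simulated Hz
float onTimeUs(float targetHz) { return 1e6 / targetHz; }

// average number of segments lit simultaneously
float segmentsLit(float targetHz) { return onTimeUs(targetHz) / stepUs(); }
// e.g. 1920 -> 0.5 (lit half the slot), 960 -> 1.0, 480 -> 2.0, 240 -> 4.0
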

I plan to order parts and, within the next few weeks, do some kitchen-countertop prototyping and experimentation, along with some oscilloscope measurements (on the pulses, comparing them to light output using a fast-responding photocell). Then, sometime after that, once I've programmed the Arduino and verified correct scanning behavior, including the short pulses running in the correct sequence, I'll create the 200 watt backlight (note: 10-20 actual watts) out of LED ribbons and test my first LCD glass on it. And then find a sacrificial monitor (or a few) to test with!

Also, I may opt for 12 or 16 segments instead of 8, depending on the preliminary tests. The Arduino's analog pins can be used as digital outputs, provided they are also timing-accurate. So the Arduino can let me digitally signal up to 19 outputs, but I need to keep one digital pin free for listening to VSYNC, and I also want to keep Tx/Rx free for real-time host communications (it also works over USB if I don't use those pins), which coincidentally leaves 16 free pins. Host communication is needed for PC-based reconfiguration of the scanning backlight, even if I ultimately don't use host communication for VSYNC (that route will at least be experimented with).

Also, a 24" panel will cover only ~10-11 segments of a 12-segment scanning backlight designed large enough for a 27" panel. I can just use manual 'scanning speed' and 'scan until segment X' adjustments to compensate for an LCD panel too short for a scanning backlight designed to flexibly test multiple different 24"-27" LCD glass. It's simply a matter of math and Arduino programming, plus creating a software utility that makes adjustment easy.

Thanks,
Mark Rejhon


post #6 of 47, 09-18-2012, 10:45 AM
guidryp (Senior Member)
Quote:
Originally Posted by Mark Rejhon View Post

I'd just flash one segment 1/1920 second, wait in the dark 1/1920 second, flash the next segment, and so on.

It is quite pointless to aim for rates like this when the human visual system is your target. Everything we see is averaged over about 10-20 ms.

The highest limit for noticing any change with the human visual system is the flicker fusion frequency, and that is an extreme case of super-high-contrast black and white flashing; even then, in most humans it maxes out just above 60 Hz.

Heck, a great many CCFL LCD backlights already use PWM for brightness control. They turn the whole screen backlight on/off at rates like 175 Hz, and it is quite invisible and doesn't really help with motion blur.

Your best result for a scanning backlight will be obtained with the fastest-changing LCD architecture and a backlight timed to be off for as much of the transition phase as can be handled before flicker becomes annoying. Once you start cycling your light source over 100Hz, it is pretty much the same as having it on all the time as far as human beings are concerned.
post #7 of 47, 09-18-2012, 11:53 AM
xrox (AVS Special Member)
Quite an interesting thread. There are many papers on scanning backlights, and at least a couple that I have discuss a short 10% duty period. It is quite an old idea, and I think you are right that the main reason it was not popular was the efficiency reduction.

Some thoughts on the design and the science:

1 - The diffuser creates enough cross-talk between backlight segments to limit the hold time to 2 segment flash periods or greater. Assuming you have 8 segments running at 120Hz this would create an effective hold time of ~1/480 or 2ms. Still not better than CRT.

2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)

3 - Not sure about the claim that FFT is better for raster vs strobing. Is this in the literature?

4 - Starting the backlight scan just before the next LC refresh may not be ideal, as the aforementioned cross-talk may produce a visible ghost. There must be an ideal temporal position that avoids the LC refresh response and the threshold for cross-talk.

Over thinking, over analyzing separates the body from the mind
post #8 of 47, 09-18-2012, 08:48 PM
borf (AVS Special Member)
Allow me to post more dumb thoughts, xrox. I've no more feedback for Mark - it would waste his time.
Quote:
Originally Posted by xrox View Post

Quite an interesting thread. There are many papers on scanning backlight and at least a couple that I have that discuss a short 10% duty period. It is quite an old idea and I think you are right that the main reason it was not popular was the efficiency reduction.

Did that refer to CCFL only, or LED too? These LEDs are apparently ~10-20W per strip and 10-20x brighter than normal (to offset the shorter duty cycle). If this is too much of a power requirement, how about adding more strips?

Quote:
Originally Posted by xrox View Post

1 - The diffuser creates enough cross-talk between backlight segments to limit the hold time to 2 segment flash periods or greater..

Random (non-adjacent) sequencing could eliminate crosstalk? (Something like Mark said in the last reply.)
Why is a global diffuser needed with locally lit segments anyway?

Quote:
Originally Posted by xrox View Post

2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)

Good enough for me. Is that with 960Hz? What about 1920Hz? But the Arduino can apparently do much better than that:

Quote:
The goal is to better these requirements by an order of magnitude, as a safety margin for more sensitive humans and for errors. That means the length of flashes would be precise to 0.1 microseconds. This appears doable with Arduino. Arduinos are already very precise and very synchronous-predictable; Arduino projects include TV signal generators -- THAT requires sub-microsecond precision for good-looking vertical lines in a horizontally-scanned signal

Quote:
Originally Posted by guidryp View Post

Once you start cycling your light source over 100Hz it is pretty much the same as having it on all the time as far as human beings are concerned.

I agree. As long as the frames are unique there should be no problem.
post #9 of 47, 09-18-2012, 11:51 PM - Thread Starter
Mark Rejhon (AVS Special Member)
Quote:
Originally Posted by guidryp View Post

It is quite pointless to aim for rates like this when the human visual system is your target. Everything we see is averaged over about 10-20 ms.
The highest limit for noticing any change with the human visual system is the flicker fusion frequency, and that is an extreme case of super-high-contrast black and white flashing; even then, in most humans it maxes out just above 60 Hz.
I generally agree that it's pretty pointless, but it's a "free" feature. I'm currently aiming the wattage of the backlight for a sufficiently bright picture during "960Hz" simulation, but I want the "free" software feature of "simulated 1920Hz" (without motion interpolation), for two reasons:
(1) Experimentation if it's even possible *at all* to tell the difference.
(2) Ability to claim that my LCD setup actually has less motion blur than CRT. (Note: I'm not solving *other* LCD deficiencies such as black levels, etc)

It's a free software feature that costs nothing extra, as long as the Arduino is capable of it, so why not include it for experimentation's sake. Even though running at "simulated 1920Hz" will mean half the brightness of "simulated 960Hz" due to the half-length flashes, it's enough for experimentation. However, I agree that the sweet spot is probably 1/960 second. Beyond that, the extra lumens necessary for shorter flashes aren't worth it. (e.g. I'd need to design a 400 watt backlight in order to have a normal-brightness image using 1/1920sec flashes)

Heck, it does not even stop me from experimenting with a 1/3840 flash (at one-quarter brightness) or even a 1/7680 flash (at one-eighth brightness). I'll probably hit the latency of the phosphor of a white LED first as the limiting factor, though that is bypassable by using R/G/B LED's, which switch at nanosecond-league speeds.
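
The wattage math behind that scales simply (my own arithmetic, illustrating the 200W/400W figures above):

Code:
// Brightness scales with flash length, so the LED wattage required for a
// constant-brightness picture scales inversely with the flash length.
float requiredWatts(float baseWatts, float baseFlashSec, float flashSec) {
  return baseWatts * (baseFlashSec / flashSec);
}
// requiredWatts(200.0, 1.0/960, 1.0/1920) == 400.0  -> the 400W figure above
// requiredWatts(200.0, 1.0/960, 1.0/3840) == 800.0  -> or quarter brightness at 200W
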
Quote:
Heck a great many CCFL LCD backlights already use PWM for brightness control, They turn the whole screen backlight on/off at rates like 175 Hz and it is quite invisible and doesn't really help with motion blur.
Correct. Though, just like I can detect rainbow artifacts, I can detect the stroboscopic effect; even 500 Hz PWM is detectable indirectly if you know how to look for PWM stroboscopic artifacts (not everyone is sensitive to them, much like DLP rainbows are a person-specific thing).
Academic note: Detecting stroboscopic artifacts (e.g. DLP rainbows, PWM, etc) is a different vision phenomenon than flicker fusion.
Example: Test a mouse cursor on a black screen: 180Hz PWM on a 60Hz signal shows a triple-cursor motion blur instead of a continuous blur.
Quote:
Your best result for a scanning backlight will be obtained with the fastest changing LCD architecture and a backlight timed to be off for as much of the transition phase that can be handled before flicker becomes annoying. Once you start cycling your light source over 100Hz it is pretty much the same as having it on all the time as far as human beings are concerned.
Flicker fusion is /different/ from store-n-hold blur, which is also /different/ from LCD response blur. They can all interact, of course.

Human perception of high-speed vision phenomena (different from "flicker fusion")
1. Witnessing high speed photography. Even with xenon strobe lights that flash less than 1/5000th second, you can still see the flash, though it looks as instantaneous to the human eye as a 1/200th second flash. Even a millionth-second flash would be detectable, provided there were enough photons to hit the eyeballs. It's called "integration" -- the cones/rods in your eyeballs are like tiny buckets collecting photons. Once you're far beyond the flicker fusion threshold, it doesn't matter how fast or slow these buckets are filled: a million-lumen flash for a nanosecond has the same number of photons as a one-lumen flash for a millisecond.
2. Wagon wheel effects. Humans can detect continuous versus non-continuous light sources indirectly using the wagon wheel (stroboscopic) effect, and its cousins (DLP rainbows, etc). Given sufficient speed, insanely high numbers become detectable. Imagine a high-speed wagon-wheel disc spinning synchronized with a theoretical 5000Hz strobe light. The wheel looks stationary. However, change the strobe light to 5,001Hz without changing the wheel speed, and the wagon wheel looks like it spins slowly backwards.
3. Motion blur. Detectability of motion blur extends massively beyond flicker fusion.

Now, apply this science to store-and-hold phenomena:
EXAMPLE: A fast panning scene is moving across the screen at 1 inch every 1/60th of a second. Let's say your eye is tracking a sharp object during the screen pan. Each static frame smears across your field of vision while your eyes continuously track the object. That's persistence of vision. That creates the motion blur effect on continuously-shining displays (most LCD's). So, strictly by the numbers for fast-panning motion moving at 1 inch every 1/60 second:

For fast motion moving at 1 inch every 1/60th second, the hold-type blur on LCD is as follows (see the short calculation after this list):
At 60Hz, the motion blur is 1" thick (entry level HDTV's, regular monitors) ...
At 120Hz, the motion blur is 0.5" thick (120Hz computer monitors, interpolated HDTV's) ...
At 240Hz, the motion blur is 0.25" thick (interpolated HDTV's) ...
At 480Hz, the motion blur is 0.125" thick ...
At 960Hz, the motion blur is 0.0625" thick (CRT style, high end HDTV's) ...
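
These figures are just eye-tracking speed multiplied by hold time; a quick sanity check (my own arithmetic, not from any paper):

Code:
// Hold-type motion blur width = eye-tracking speed x persistence (hold time).
float blurInches(float speedInchesPerSec, float holdSeconds) {
  return speedInchesPerSec * holdSeconds;
}
// blurInches(60.0, 1.0 / 120.0) == 0.5    -> the 120Hz entry above
// blurInches(60.0, 1.0 / 960.0) == 0.0625 -> the 960Hz entry above
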

A good diagram about store-and-hold motion blur phenomena is seen in the "Hold type blur" explanations (page 3) of this academic paper. This paper even explains why "LCD-response" blur is /different/ from "store-n-hold" blur (and explains why I can bypass LCD response speed as the primary factor of motion blur, by using shorter light pulses than the speed of the LCD pixel response).

But imagine an IMAX screen instead, and you're sitting near the front row, the motion is a whole foot per 1/60th second, your eyes are able to track very fast objects -- and you're displaying TV-opera-style 60 frames per second on the IMAX screen. (This is theoretical only; I know of no projector with "960Hz" simulation, due to the light output that would require without interpolation!) At this point, it is wholly possible the curve of diminishing returns doesn't stop beyond 1/960th second, because the stepping is large enough.

At this point, any rational person smart enough to respect physics would stop saying "humans can't tell apart 960fps versus 1920fps" -- once you're armed with the information I wrote, it starts sounding like an unsubstantiated claim, like telling someone "humans can't tell apart a stationary photograph taken using a 1/960sec shutter and one taken using a 1/1920sec shutter". Being smart, you would then say "it'd be useful to get some /scientific/ testing done on this matter, on where the real point of diminishing returns is". But generally, I am with you: it probably doesn't matter beyond around "960Hz simulation" -- printed sports photography at 1/960sec vs 1/1920sec shutter speeds is hard to tell apart too, though human eyes are able to. Back in 1992, people assumed humans could not tell apart 30fps versus 60fps. Today, we're in a similar situation of "humans can't tell 240Hz vs 480Hz vs 960Hz" (this isn't a simple flicker fusion threshold matter, so this statement is false!). But people begin to understand better once they read more about hold-type motion blur, as I've written above.

I've got plenty of references handy to explain detection of various temporal vision phenomena:
List of References: LINK

Thanks,
Mark Rejhon


post #10 of 47, 09-19-2012, 12:49 AM - Thread Starter
Mark Rejhon (AVS Special Member)
Quote:
Originally Posted by xrox View Post

Quite an interesting thread. There are many papers on scanning backlights, and at least a couple that I have discuss a short 10% duty period. It is quite an old idea, and I think you are right that the main reason it was not popular was the efficiency reduction.
Some thoughts on the design and the science:
1 - The diffuser creates enough cross-talk between backlight segments to limit the hold time to 2 segment flash periods or greater. Assuming you have 8 segments running at 120Hz this would create an effective hold time of ~1/480 or 2ms. Still not better than CRT.
Bleed between backlight segments will have little effect on hold time in a properly engineered scanning backlight. You actually want a little bit of bleed for other reasons on LCD's (to blend between segments). As long as the backlight is flashed correctly, the length of the flash is what matters, not the bleed. I will probably design my backlight panel to also be able to run as 16 segments, if I determine I can use the analog Arduino pins as precise digital outputs.

Assuming bleed only affects adjacent segments, the maximum possible degradation in motion resolution is 50%, so I just simulate a higher Hz to compensate. Actual perceived degradation will be far less, since bleed only affects certain sharp boundaries in moving images, and the eyes are constantly moving all over the frame. I'd say probably less than a 10% perceived loss of motion blur reduction will be caused by segment bleed. This is also an additional reason to still experiment with "simulated 1920Hz" operation: to compensate for bleed issues. Bleed artifacts may show up as PWM artifacts (two flickers rather than one, at boundaries between scanning backlight segments). E.g. a high-speed horizontally moving vertical white bar on a black background might show bleed artifacts where the scanning backlight segments meet. Bleed artifacts (noticing boundaries between scanning backlight segments) would show up only during fast motion, and would probably be harder to notice (beyond ~240Hz or ~480Hz simulation) than, say, DLP rainbows. The more segments, the harder to notice. Also, the Arduino can technically let me do a 3840Hz-equivalent flash (1/4 the flash length of 960Hz) at a quarter of the brightness, and even beyond. My limiting factor will be the amount of backlight brightness available -- there is no software limitation preventing me from having less motion blur than CRT -- it will be the amount of lumens I can get into tiny flashes.
Quote:
2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)
Incorrect -- 8 segments do not have a hold time limit of 1ms (especially if there are no bleed regions and no cross-talk). The segment size does not dictate hold limitations, unless you're following a requirement that "the next segment must illuminate at the same time as the previous segment turns off".
To simulate a hold time of 0.5 millisecond (1/1920th second) with an 8-segment scanning backlight at 120Hz, you flash each segment for 1/1920sec, even if it means waiting in the dark a while before flashing the next segment. The scanning stepping is only to stay in sync with the LCD refresh; each segment can be treated as if it were a completely independent, separate LCD display (from a programming standpoint). Thus, as long as the strobe is sufficiently short and illuminates only refreshed LCD pixels, it doesn't matter how few segments there are.
Fact:
*** Segment count does not necessarily dictate the hold-type limit. You don't have to flash the segments synchronously. Think of each segment as a completely independent full-strobe backlight, and each segment as a separate LCD display. Assuming you catch already-refreshed LCD pixels during your strobe, the length of the flash dictates the motion blur, not the number of "displays" (segments).
*** The size of the segments needs to be smaller than the size of the portion of the LCD (at any instantaneous moment) with fully-refreshed LCD pixels.
(For simplicity's sake, "fully-refreshed" means LCD pixels that are at least 99% of the way to their correct color value. We can't be perfect here; there are some residuals, much like the crosstalk between the two frames for 3D shutter glasses. So we define a cutoff point for LCD pixels, as a "goal" for scanning backlight operation. Some color imperfection will occur with any scanning backlight, but it can be made tiny enough not to be an issue. If a picture is 1% too bright or too dim, that's not a problem. If red is 1% incorrect, it's OK as long as the benefit is worth it, especially if the incorrectness can be calibrated out using picture adjustments, etc.)

Instead of approaching this as a temporal problem, approach it as a geometry problem.
What you really want to know is: "What percentage of the LCD display has pixels that are already within 99% of their final color value for the current refresh?"
...Before I explain, I need to explain how LCD pixels work (for those not familiar): When LCD pixels are refreshed, a pixel is being changed from one color to the next. Immediately after the pixel is refreshed, it changes pretty quickly (especially if accelerated using overvoltage/undervoltage for response-time acceleration) in the first millisecond, slowly in the next millisecond, and is mostly finished within 2 milliseconds, but it may still be a few percent off its final color value. Over the next several milliseconds of the refresh, the pixel gradually inches closer towards its final value. It's a logarithmic curve. Scanning backlights weren't very practical until LCD pixels were able to mostly (99%+) finish refreshing by the end of the frame, before the next frame -- a necessity for 3D, too.
...LCD refreshing is done from top to bottom on many LCD panels, in a fashion similar to CRT scanning. If a frame refresh takes 8 milliseconds at 120Hz, and LCD pixels are considered "fully refreshed" about 6 milliseconds later, that means approximately 1/4 of the vertical height, or 25% of the screen, is fully refreshed at any instant. An 8-segment scanning backlight would have segments small enough to illuminate just the fully refreshed LCD, with the backlight bleed engineered correctly.
...We obviously have to cover the granularity of the scanning backlight. Since the vertical dimension of an LCD is proportional to time since the pixel was refreshed, we can have tiny inconsistencies/variances (less than 1%) in the completeness of LCD pixel refreshes along the top edge versus bottom edge of a scanning backlight boundary, especially in a low-granularity scanning backlight, but this inconsistency will tend to be masked by the bleed between scanning backlight segments (see! A little bleed is beneficial here!)
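
A quick check of that geometry, using the numbers above (my own arithmetic):

Code:
// With an 8ms top-to-bottom scanout and pixels "fully refreshed" ~6ms
// after being addressed, the band of settled pixels is only 2ms of
// scanout tall at any instant.
const float scanoutMs = 8.0;   // full refresh at 120Hz
const float settleMs  = 6.0;   // time until a pixel is ~99% settled

// fraction of screen height that is fully refreshed at any instant
float settledFraction() { return (scanoutMs - settleMs) / scanoutMs; }  // 0.25
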
Quote:
4 - Starting the backlight scan just before the next LC refresh may not be ideal as the aformentioned cross-talk may produce a visible ghost. There must be an ideal temporal position that avoids the LC refresh response and the threshold for cross-talk.
Yes, that's correct. My manual adjustment utility (also used for input lag) will take care of this, by allowing adjustment for minimum temporal artifacts on a motion test pattern. It is expected that the temporal delay is fixed and stable, permitting a one-time adjustment for a specific video mode.
___________________

Finally, just to be clear:

Bottom line fact: number of segments has no absolute-limiting effect on ability to reduce motion blur.
(I'm excluding bleed, here)

It's possible to simulate "1920Hz" out of a 60Hz signal using just a 2-segment or 4-segment scanning backlight (provided that the surface area of all the practically fully-refreshed LCD pixels exceeds the size of the segments). If the whole LCD is already refreshed at any instant (some high-speed LCD's are able to do this now), you only need a full-backlight strobe (equivalent to a 1-segment scanning backlight). This is tantamount to black frame insertion (identical in purpose).

Of course, I skipped considering the bleed boundary between two scanning backlight segments -- but you brought up the OLED example where it is a non-issue. The bleed might be visible as two flashes (from adjacent scanning backlight segments). The segment bleed will only reduce the motion blur reduction slightly (and only along a narrow sliver where the bleed occurs). The average perceived motion blur will still scale with the flash duration.

That said, you may have convinced me to try 16 segments instead of just 8, to reduce the visibility of bleed artifacts (just in case they're easier to notice than expected), by allowing me to test 1/1920 operation for the non-bleed parts of the LCD, and 1/960sec for the bleed parts. Additionally, it will allow a smooth-sliding 8-segment scanning backlight too (illuminating 2 segments at a time and stepping downwards one segment at a time), in case segment bleed artifacts are more noticeable than I expected. Also, the scan of the scanning backlight can be sped up within a refresh, to further reduce bleed artifacts, though you run the risk of gradually increasing inconsistencies along the vertical dimension of the image the faster you scan, due to catching the LCD at different stages of refresh. There will also be an intra-refresh scanning speed adjustment. My goal is to have just two main adjustments (other than obvious ones such as backlight brightness, by controlling the power supply voltage to the LED's) -- a phasing/latency adjustment (to adjust for input lag and to get the correct phase with the LCD refresh) -- and a scanning speed adjustment (to adjust the scanning speed within a refresh), with the maximum speed setting being equivalent to a full-backlight strobe.

Thanks,
Mark Rejhon


post #11 of 47, 09-19-2012, 08:53 AM
xrox (AVS Special Member)
Quote:
Originally Posted by Mark Rejhon View Post

Bleed between backlight segments will have little effect on hold time in a properly engineered scanning backlight. You actually want a little bit of bleed for other reasons on LCD's (to blend between segments)
If I am reading you correctly, you may not have understood the stated issue. There is a light diffuser between the backlight and the LC panel that is inherent to LCD design, to enable acceptable uniformity. One segment of the backlight will hit the diffuser and spread laterally. What this means is that adjacent segments will be illuminated enough to add to the hold time.
Quote:
Originally Posted by Mark Rejhon View Post

Incorrect -- 8 segment does not have a hold time limit of 1ms. (especially if there's no bleed regions and no cross talk) The segment size does not dictate hold limitations, unless you're following a requiremnt "next segment must illuminate at the same time as turning off the previous segment".
What I wrote was from the literature and AFAIK was correct. Yes, obviously strobing and scanning at the same time can further reduce below 1ms, but again you run into the limiting cross-talk issue. Below is an example showing the duty cycle of the scanning backlight vs BET (fraction of frame time) for a given cross-talk.

[Image: cross-talk and BET graph]

Also, scanning + strobing is going to tax your light output massively and increase power consumption. Your LED lifetime may also worsen. Not to mention the temporal motion artifacts it might cause.


Quote:
Originally Posted by Mark Rejhon View Post

Think of each segment as a completely independent full-strobe backlight, and each segment is a separate LCD display. Assuming you catch already-refreshed LCD pixels during your strobe, the length of the flash dictates the motion blur, and not the number of "displays" (segments)…….
No need to explain, this is quite an old idea. The novel/interesting part is the low cost, the light output, and the DIY. I'm still skeptical but very interested. Does that make sense? :)

Below are some graphics describing the concept.

[Image: scanning backlight diagram 1]

[Image: scanning backlight diagram 2]
Quote:
Originally Posted by Mark Rejhon View Post

Bottom line fact: number of segments has no absolute-limiting effect on ability to reduce motion blur.
Sorry, not true IMO. The diffuser and subsequent cross-talk are inherent to LCD. Strobing will help, but not as much as you state. Check out this graphic describing hold time in an interpolated system vs a frame-repeat system. The CRT with frame repeat still produces motion blur, but it is less, due to the effective reduction in hold time from the second pulse's duty cycle.

[Image: hold time / blur comparison graph]
Quote:
Originally Posted by Mark Rejhon View Post

That said, you may have convinced me to try for 16 segments instead of just 8 segments; to reduce visibility of bleed…...
I actually believe that the inherent diffuser in the LCD will be somewhat limiting in all cases. And increasing the segments may actually make it worse, because the cross-talk will spread over more segments (because each segment is smaller?).

One way to overcome this is to refresh the panel ultra fast and then strobe the backlight globally (all LEDs) for an “extremely” short time. This is also in the literature.

One last graph that adds to my skepticism. To me it shows that the motion benefits begin to level off as the duty cycle of the backlight scan decreases (similar to what guidryp was saying?)

[Image: scanning backlight duty cycle graph]
post #12 of 47, 09-19-2012, 10:30 AM - Thread Starter
Mark Rejhon (AVS Special Member)
Excellent references, xrox; now I understand much better what you were trying to explain (though I managed to figure out most of it). On that basis, let's address each point.
Quote:
Originally Posted by xrox View Post

If I am reading you correctly, you may have not understood the stated issue. There is a light diffuser between the backlight and the LC panel that is inherent to LCD design to enable acceptable uniformity. One segment of the backlight will hit the diffuser and spread laterally. What this means is that adjacent segments will be illuminated enough to add to the hold time.
Diffusion issues will be most pronounced in high-contrast imagery, and there, the motion blur of dark edges (especially low contrast) gets degraded more than the motion blur of bright, high-contrast edges, due to diffuser/bleed issues. On extreme-contrast images (lots of bright/dark content), your eyes really only have a far lower effective contrast ratio (even low numbers such as 1:100) due to your eyeball's internal diffusion limitations, so you're not going to notice the motion blur degradation caused by diffuser/bleed issues. I anticipate that the average degradation will be very marginal, contributing only a few percent to the average perceived motion blur, permitting continual improvement in average perceived motion blur (albeit with diminishing returns) from shorter strobes / faster scanning.

That said, if it becomes significant, then as a backup plan, I've also got instructions for removing the diffuser from some computer monitor LCD's. The diffuser may need to be replaced with my own. Diffusers designed for sidelights can theoretically be designed differently from diffusers designed for behind-LCD backlights, since the two have different polarization/ray-angle-bending considerations that affect which diffuser is most efficient. Most 120 Hz panels for computer monitors presently use sidelights, and mine is an actual backlight, so I might not want the specific diffuser from the panel I use. I may even test cheap diffusers (e.g. transparent white plastic sheets), given that my extreme DIY light output compensates for diffuser inefficiencies to an extent. The close spacing of LED's will allow me to put the diffuser extremely close to the panel, hopefully minimizing bleed. I will probably keep the original diffuser at first; if I replace it, I will try to choose a diffuser that keeps the rest of the panel dark.

Thank you for bringing potential diffuser issues to the forefront of my mind. Something to ensure: make sure that diffuser bleed does not noticeably spread beyond adjacent segments.
Quote:
What I wrote was from the literature and AFAIK was correct. Yes obviously strobbing and scanning at the same time can further reduce below 1ms but again you run into the limiting cross-talk issue. Below is an example showing the duty cycle of the scanning backlight vs BET(fraction of frame time) for a given cross-talk.
If you bring crosstalk into the equation, I will concede you are right!
But your original sentence was: "2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)" -- a statement I believe is incorrect, at least for horizontal motion.

Unless you're talking about vertical motion versus horizontal motion. Most motion that we care about is horizontal anyway -- hockey pucks, soccer balls, first-person shooters (left/right turning), etc. For sufficiently fast vertical motion, there will be interaction between the scan flow and vertical eye movement, much as there already is on a CRT. But eye tracking is not typically that fast, and the screen is shorter than it is wide, so limitations on vertical motion resolution caused by the scanning shouldn't be a noticeable issue -- and it's a problem for CRTs too, for those who are able to notice it (the perceived CRT image contracts when moving your eyes rapidly downwards with the scan direction, and expands when moving your eyes rapidly upwards against the scan direction).
Quote:
Also, scanning + strobing is going to tax your light output massively and increase power consumption. Also your LED lifetime may worsen. Not to mention the temporal motion artifacts it might cause.
No need to explain, this is quite an old idea. The novel/interesting part is the low cost, the light output, and the DIY. I'm still skeptical but very interested. Does that make sense? :)
I think it's still a worthwhile experiment. :)

Your graphics are useful and will ensure that I pay attention to side issues such as backlight bleed and diffusion issues.

If bleed/diffusion issues become more pronounced than I expect, I can make the scanning faster, completing a scanning backlight sweep in, say, 1/240th of a second even at 120Hz (pretending that VSYNC is 50% idle time). That will probably bring out greyscale inconsistencies between the top and bottom of the image, because parts of the LCD will be more completely refreshed than other parts at flash time. So the scanning speed adjustment becomes an image-quality tradeoff between motion blur reduction (in bleed/diffusion) and vertical consistency of the image. I might even find that full strobe looks preferable (at 120Hz or greater), or that the sweet spot in scanning speed is near double scan speed. I'll make sure scanning speed is an important adjustment that's easy to tweak with a motion test pattern (e.g. smooth moving white objects on a black background).

Note: A faster scanning mode can also mean more segments illuminated at a time (the next segment gets lit before the previous segment turns off on its own pulse schedule), maintaining the same 'Hz' simulation while reducing bleed/diffusion artifacts. (See the code sketch a bit further down in this post.)
Quote:
Sorry, not true IMO. The diffuser and subsequent cross-talk are inherent to LCD. Strobing will help, but not as much as you state. Check out this graphic describing hold time in an interpolated system vs a frame-repeat system. The CRT with frame repeat still produces motion blur, but it is less, due to the effective reduction in hold time from the second pulse's duty cycle.
blur.jpg
I actually believe that the inherent diffuser in the LCD will be somewhat limiting in all cases. And increasing the segments may actually make it worse because the cross-talk will spread over more segments (because each segment is smaller?).
I'm likely going to use thousands of tiny 3528/5050 LEDs, so I can put the diffuser very close to them, minimizing bleed even between segments of 1/16 screen height. But you're right that the diffuser is a limiting factor.
Quote:
One way to overcome this is to refresh the panel ultra fast and then strobe the backlight globally (all LEDs) for an “extremely” short time. This is also in the literature.
Yes, you're right. But that can bring incomplete-LCD-refresh artifacts. A compromise is an accelerated scan, a balance between scanning-backlight and full-strobe (BFI-style) operation. I plan to have the Arduino adjustable in scan speed through this entire scale, permitting motion-resolution benchmarking of all the scenarios.
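
To make this concrete, here's a rough sketch of the core scan loop I have in mind -- not the final project code, just an illustration under assumed values (8 segments on assumed pins 2-9, 120Hz refresh). One variable, the scan duration, sweeps the behavior continuously from a full-frame scan (8333), through accelerated scans (e.g. 4166 for double scan speed), down to a full-panel strobe (0):

Code:

const int SEGMENTS = 8;
const int segPin[SEGMENTS] = {2, 3, 4, 5, 6, 7, 8, 9};  // assumed wiring

unsigned long framePeriodUs = 8333;  // 120 Hz refresh
unsigned long strobeLenUs   = 520;   // ~1/1920 sec flash per segment
unsigned long scanLenUs     = 8333;  // 8333 = full-frame scan, 4166 = double-speed
                                     // scan, 0 = all segments together (full strobe)

void setup() {
  for (int i = 0; i < SEGMENTS; i++) pinMode(segPin[i], OUTPUT);
}

// Run one backlight sweep, starting at the (phase-adjusted) top of a refresh.
void scanOneFrame(unsigned long frameStartUs) {
  unsigned long lastOffUs =
      frameStartUs + (scanLenUs * (SEGMENTS - 1)) / SEGMENTS + strobeLenUs;
  while ((long)(micros() - lastOffUs) < 0) {
    unsigned long now = micros();
    for (int i = 0; i < SEGMENTS; i++) {
      // Each segment's flash starts at its share of the scan duration.
      unsigned long onAt  = frameStartUs + (scanLenUs * i) / SEGMENTS;
      unsigned long offAt = onAt + strobeLenUs;
      bool lit = ((long)(now - onAt) >= 0) && ((long)(now - offAt) < 0);
      digitalWrite(segPin[i], lit ? HIGH : LOW);
    }
  }
  for (int i = 0; i < SEGMENTS; i++) digitalWrite(segPin[i], LOW);  // end of sweep
}

void loop() {
  static unsigned long nextFrameUs = micros();
  scanOneFrame(nextFrameUs);                     // real code: phase-locked to VSYNC
  nextFrameUs += framePeriodUs;
  while ((long)(micros() - nextFrameUs) < 0) {}  // wait for the next refresh
}

Note the overlap case: if the strobe length exceeds the slot spacing of an accelerated scan, adjacent segments are simultaneously lit -- the 'more segments illuminated at a time' fast-scan behavior noted above falls out of the same loop for free.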
Quote:
One last graph that adds to my skepticism. To me it shows that the motion benefits begin to level off as the duty cycle of the backlight scan decreases (similar to what guidryp was saying?)
I will be able to reach both lines in that graph, because of my complete adjustability from scanning all the way to no-scanning (full strobe).

Given the new information, you're right on the bleed/diffuser issue, but one statement in your original post is still incorrect, which is what caused me to leap on it!

You've certainly made me pay close attention to potential diffuser/bleed issues; I may even have to engineer in slightly extra wattage to compensate. Thank you for that. I do not anticipate it being a limiting factor to successfully reducing motion blur by 90% from the backlight alone.

Time for me to do the math on the maximum number of SMT3528 narrow LED ribbons (600 LED / 5 meter) and SMT5050 wide LED ribbons (300 LED / 5 meter) that I can cram into a small space. This will probably become my limiting factor: how much light output I can cheaply cram into a given space. Napkin calculations suggest approximately 200 watts (the factor of 10 required for 90% blur reduction), but I wonder if I can go beyond that for the extra safety margin I'd like to have.

P.S. Unrelated, but required for full-panel strobe: I am also thinking about the electronics safeties I need. I must be mindful not to put more than about 20 watts average into these LEDs at any time, due to heat build-up. I may need to develop an auto-current-limiting approx-12-volt power supply that dynamically adjusts current depending on how many segments (or all segments) are lit at a time, preserving my ability to continuously adjust from scanning all the way to full-panel strobe. A full-panel strobe would be a 200-watt surge occurring only 10 percent (or less) of the time -- averaging 20 watts, the goal light output for a 24" monitor. But if my electronics fail and all segments get stuck continuously on with no pulsing, I'd like the power supply to kick in within a fraction of a second and automatically reduce voltage slightly, dimming the LEDs to current-limit down to a 20-watt average output. 20 watts of heat is easily dissipated through the rear of a monitor without much complexity (continuous fully-on 200 watts would be a nightmare, and I don't need to be blinded anyway!). Fortunately, this is simple, well-established circuitry, with lots of published schematics including open-source automatically-adapting power supplies. Relatively simple stuff: let the supply surge (200-watt surges allowed for full strobes) but automatically settle within a fraction of a second (e.g. 1/10th of a second) to a voltage that meets the exact average backlight amperage I want, for safety reasons. (A slow-responding current regulator is exactly what I want, to permit the surges needed for strobing.)
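
On the Arduino side, a software watchdog can back up that slow current regulator. A minimal sketch, assuming a hypothetical master-enable pin (MASTER_EN) that gates LED power, and the 10% duty budget from above -- the hardware supply must remain the real protection; this just catches a stuck-on firmware bug:

Code:

const int   MASTER_EN   = 10;    // assumed pin gating power to all LED segments
const int   SEGMENTS    = 8;
const float DUTY_BUDGET = 0.10;  // 20 W average out of 200 W installed

unsigned long windowStartUs = 0;
unsigned long segmentOnUs   = 0;  // summed per-segment lit time this window

void setup() {
  pinMode(MASTER_EN, OUTPUT);
  digitalWrite(MASTER_EN, HIGH);
  windowStartUs = micros();
}

// The scanning code calls this once for every segment flash.
void accountFlash(unsigned long strobeLenUs) {
  segmentOnUs += strobeLenUs;
}

void checkPowerBudget() {
  unsigned long elapsed = micros() - windowStartUs;
  if (elapsed < 1000000UL) return;  // evaluate once per second
  // Fraction of full 200 W: lit time summed across all segments,
  // divided by (elapsed time x number of segments).
  float duty = (float)segmentOnUs / ((float)elapsed * SEGMENTS);
  if (duty > DUTY_BUDGET * 1.5) {   // margin so legitimate strobe surges pass
    digitalWrite(MASTER_EN, LOW);   // blank the backlight, latch off
    while (true) {}                 // stay off until power cycle
  }
  segmentOnUs = 0;
  windowStartUs = micros();
}

void loop() {
  // ...scanning code here, calling accountFlash() for every strobe...
  checkPowerBudget();
}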

Thanks,
Mark Rejhon


post #13 of 47 Old 09-19-2012, 12:42 PM - Thread Starter
Mark Rejhon (AVS Special Member)
UPDATE!:

Method of VSYNC Signalling
I found a way to do USB-signalled VSYNC timecoded to an accuracy of 1/135,000th of a second -- even if the host signalling is a totally random spray. Microsoft DirectX "RasterStatus.ScanLine", timestamped with PerformanceCounter (the CPU cycle counter on the PC side) and micros() (the microsecond system timer on the Arduino side), mathematically tells me exactly how long ago VSYNC occurred -- to a precision of 1/135,000th of a second for a 1920x1080 120Hz signal (a 1080p signal has about 1,125 total scanlines per refresh including blanking, so at 120Hz the scanline counter ticks about 135,000 times per second). With microsecond accuracy on both the PC and Arduino ends, the delays in relaying between the computer and the Arduino can be compensated for mathematically. The CPU-fluctuation and communication-caused variances can be calculated out quite easily, turning inaccurate host signalling (I even call it a "random spray" of signalling) into a highly precise VSYNC information source.

At this point, I don't even care how 'random' the spray of host communications is -- I can be informed about VSYNC only a few times a second and calculate the rest from the information received (plus previous knowledge of the approximate current vertical refresh rate in Hz). If the spray of data from the PC to the Arduino is interrupted for, say, 1 second (due to a CPU freeze on the PC host), the Arduino scanning backlight can still continue from an extrapolation of previously received data, keeping VSYNC information within the required accuracy for several seconds after the interruption, then blissfully continue normally when VSYNC signalling resumes. Things would only degrade after several minutes of VSYNC interruption (manifesting simply as loss of motion blur reduction and increased crosstalk artifacts until VSYNC signalling resumes). Therefore, software-based host VSYNC signalling is actually practical and can be super-accurate! Information on how I came up with the accuracy calculation of 1/135,000th of a second.
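
Here's a simplified sketch of the Arduino end of that scheme. The serial protocol is my own assumption for illustration (two ASCII integers per message: the refresh period in microseconds, and the age of the last VSYNC in microseconds, as measured on the PC just before sending); the real code would average many packets to cancel jitter and calibrate out the fixed USB transmit delay:

Code:

unsigned long refreshUs   = 8333;  // assume ~120 Hz until the host tells us
unsigned long lastVsyncUs = 0;     // micros() timestamp of an estimated VSYNC

// Extrapolate the most recent VSYNC at or before time 'now'. Works
// indefinitely without host packets; accuracy drifts only as the Arduino
// clock and the GPU clock slowly diverge.
unsigned long predictedVsyncUs(unsigned long now) {
  return lastVsyncUs + ((now - lastVsyncUs) / refreshUs) * refreshUs;
}

void handleHostPacket(long periodUs, long vsyncAgeUs) {
  unsigned long arrival = micros();
  if (periodUs > 0) refreshUs = (unsigned long)periodUs;
  // The VSYNC happened roughly vsyncAgeUs before this packet arrived.
  // (The fixed transmit delay calibrates out as a phase offset.)
  lastVsyncUs = arrival - (unsigned long)vsyncAgeUs;
}

void setup() {
  Serial.begin(115200);
  lastVsyncUs = micros();
}

void loop() {
  if (Serial.available()) {
    long p   = Serial.parseInt();
    long age = Serial.parseInt();
    handleHostPacket(p, age);
  }
  unsigned long now = micros();
  unsigned long phaseUs = now - predictedVsyncUs(now);
  (void)phaseUs;  // where we are inside the current refresh; the scanning
                  // sequence fires segments at calibrated offsets from it
}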

Math: Calculated LED Wattage I can cheaply cram behind a 24" LCD
If I use SMT3528 LED ribbons, they are 50 watts per 16-foot (5-meter) ribbon with 600 LEDs. These ribbons are home-cuttable in 2" increments and are 8 millimeters wide. A 24" LCD monitor is approximately 300 millimeters tall, so I can cram about 37 strips of 20.8" each behind a single 24" LCD. That's a grand total of about 64 feet of LED ribbon, which I can purchase off eBay for approximately $60, or off DealExtreme for about $160 (higher-quality 6500K). Total: roughly 200 watts and 2,400 LEDs.
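
The same napkin math, spelled out (plain C++ rather than Arduino code, since it's desktop arithmetic; all constants are the figures quoted above):

Code:

#include <cstdio>

int main() {
    // SMT3528 ribbon behind a 24" 16:9 panel, per the figures above.
    const double panelHeightMm = 300.0;        // approx active height of a 24" panel
    const double ribbonWidthMm = 8.0;          // SMT3528 ribbon width
    const double stripLenFt    = 20.8 / 12.0;  // one 20.8-inch strip, in feet
    const double reelLenFt     = 16.4;         // one 5-meter reel
    const double wattsPerReel  = 50.0;
    const double ledsPerReel   = 600.0;

    int strips     = (int)(panelHeightMm / ribbonWidthMm);  // 37 strips
    double totalFt = strips * stripLenFt;                   // ~64 feet
    double reels   = totalFt / reelLenFt;                   // ~3.9 reels
    printf("%d strips, %.0f ft of ribbon (%.1f reels): ~%.0f W, ~%.0f LEDs\n",
           strips, totalFt, reels, reels * wattsPerReel, reels * ledsPerReel);
    return 0;
}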

Extra notes: In reality, I plan to use 2-foot strips to let me use 27" LCDs (2560x1440 at 120Hz is available), so I'll use a little extra. If I want to, I can cram in about 42 strips (cut to 24" wide each), which is 84 feet of LED ribbon -- about 5.2 five-meter reels -- for a total of about 260 watts worth of LEDs. If I overlap the strips slightly without blocking the light, I could probably cram in 25% more ribbon, but mounting the adhesive strips then becomes much more difficult. An interesting thought is to someday replicate this same 'extreme' (90% motion blur reduction) project for a 47" HDTV: multiply by 4 ($640 of LEDs, 800 watts worth!). Since manufacturers will probably beat me to it eventually, I have no plans to home-modify a 47" HDTV, but there's nothing stopping someone else (or a manufacturer) from doing so. Computer monitor manufacturers, though, are very slow at innovating on motion blur reduction at consumer price levels. The sheer installed LED wattage required (actual average power use: 1/10th of it) is why a 24" monitor is so much cheaper and easier to begin with. LEDs are falling in price, and it's only in the last 2-3 years that 5-meter LED ribbons hit "bargain" price points, even for house-lighting-quality high-CRI white. Thankfully LED prices have fallen so much that this Arduino project is now financially feasible on a hobbyist budget, at least at computer-monitor panel sizes.

Thanks,
Mark Rejhon


post #14 of 47 Old 09-19-2012, 02:29 PM
borf (AVS Special Member)
I would be interested if you kept a blog or something on your website, Mark -- with pics even. These are old ideas with new technology. Why haven't these ideas matured -- technological limits, or are manufacturers apathetic to the non-mainstream (gaming applications)? Is it the "good enough" paradigm? Something like this might start off as a niche product (probably gaming) and spread to a degree. Not saying it's possible. It's a bit sad that in 12 years there has not been a direct replacement for CRT.
post #15 of 47 Old 09-19-2012, 07:18 PM - Thread Starter
Mark Rejhon (AVS Special Member)
Quote:
Originally Posted by borf

I would be interested if you kept a blog or something on your website, Mark -- with pics even. These are old ideas with new technology. Why haven't these ideas matured -- technological limits, or are manufacturers apathetic to the non-mainstream (gaming applications)? Is it the "good enough" paradigm? Something like this might start off as a niche product (probably gaming) and spread to a degree. Not saying it's possible. It's a bit sad that in 12 years there has not been a direct replacement for CRT.
Good idea -- I have been thinking the same: register a domain name for my open-source scanning backlight project, and blog about it. (I was also thinking of a small Kickstarter project, to help finance the cost, including multiple donor computer monitors -- or computer monitor donations.)

These ideas had not matured until recently because:

1. LCD refresh didn't complete quickly enough before the next LCD refresh.
Solved. Today's LCDs are fast enough to finish refreshing before the next frame (a requirement of 3D LCDs). Finishing the refresh (for the most part) before backlight/segment strobing is required for the full motion-blur-reduction effect.

2. Having more than 100 watts of LED _per_ square foot of display used to be too expensive.
Solved. LEDs are now bright and cheap enough (the extra brightness is needed for the ultra-short flashes of a scanning/strobed backlight). If you don't have enough wattage in your very short flashes, your image will be too dim. To get normal brightness from a backlight that is dark 90% of the time, you need about 200 watts for a 24" monitor, or about 800 watts for a 47" HDTV, even though average power consumption would be only 20 watts and 80 watts respectively for a 90%:10% dark:bright cycle (napkin math in the code snippet after this list). You can now get 200 watts worth of 6500K LEDs for less than $200 using 20 meters of LED ribbon reel tape, which is well within enthusiast budgets.

3. Native 120Hz-capable LCDs were not available until recently.
Solved. Today's native 120Hz refresh capability (non-interpolated) means that the flicker of a scanning backlight (with an ultra-short on:off duty cycle of flashes) will not bother most people. (3D LCDs brought us 120Hz LCDs.)

4. Controllers for scanning backlights were not cheap or easy.
Solved. Today, it can be done homebrew with an Arduino, which costs only $35 for an Arduino UNO at Radio Shack -- or even less if you build the Arduino yourself. It is a fairly simple Arduino operation.

5. Many display manufacturers are struggling.
DIY it instead. Many of them are not taking the risks (see above) required for a scanning backlight that reduces motion blur by 90%. We have to homebrew our own. People on these forums are creative (homemade anamorphic lenses, homemade projectors, homemade screens, etc.), so why not homemade scanning backlights, too? It's really only a glorified version of a common LED sequencer -- made to run harmoniously in symphony with the "VSYNC beat" and at high fidelity (good manual adjustments, precise timings, reduced backlight bleed, etc.).

All the above problems have been solved (for the most part), finally allowing a scanning backlight to reduce motion blur by 90% (or more) without other assistance such as interpolation. The above reasons are precisely why it has not been done on the market before today, and why we have the opportunity to homebrew it. The open-source nature of my backlight may encourage display manufacturers to do it in the future, based on a successful result (though I'd love them to pick my brains too! Maybe even earn a little penny at it, with a non-struggling display maker). There is zero proprietary technology in this open-source scanning backlight, and it is all based on publicly available knowledge, so no patents and lawsuits for this specific scanning backlight. I plan to provide the Arduino source code. The backlight is free for others, hobbyists or manufacturers, to make. Who knows, I could instead earn a small penny off related products (e.g. go and create the world's best motion resolution benchmarking application). For now, this is a hobby -- but it is a world's first to the best of my knowledge. I have purchased the Arduinos and parts already, and will do small-scale single-LED tests over the next few weeks. Kitchen countertop experiments first.
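
For point 2 in the list above, the brightness compensation is one line of arithmetic: installed wattage = average wattage divided by the bright fraction of the cycle (plain C++, using the 20 W / 80 W continuous-backlight baselines quoted above):

Code:

#include <cstdio>

int main() {
    const double brightFraction = 0.10;  // backlight lit only 10% of each refresh
    const double avg24 = 20.0;           // normal continuous backlight, 24" monitor
    const double avg47 = 80.0;           // normal continuous backlight, 47" HDTV
    printf("24\" monitor: %.0f W of installed LED needed\n", avg24 / brightFraction);
    printf("47\" HDTV:    %.0f W of installed LED needed\n", avg47 / brightFraction);
    return 0;
}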


NOTE: It is necessary for an LCD to virtually complete refreshing before the next refresh, in order for motion blur reduction to break the "LCD response" barrier (i.e. LCD pixel response is no longer the absolute limit). To understand this better, say we have an LCD with approximately 2ms grey-to-grey response. A single 8ms refresh (1/120th second for a 120Hz signal) for a specific segment of the LCD would be:
Example of bypassing LCD response as the limiting factor in motion blur reduction
One refresh lasting 8 milliseconds (1/120th second at 120Hz):
-- 2ms -- wait for LCD pixel to finish refreshing (unseen, while in the dark)
-- 5ms -- wait a little longer for most of ghosting to disappear (unseen, while in the dark)
-- 0.5ms -- flash the backlight segment quickly. (1/1920th second)
Voila -- you've essentially bypassed LCD pixel response as the motion blur barrier, because you're keeping the LCD refresh in the dark: the refresh is unseen, and no longer contributes to the motion blur. There will be some residual ghosting, only because LCDs do not perfectly finish refreshing before the next refresh (the cause of image leak between the two eyes in shutter-glasses 3D). Properly adjusted, the faint residual ghost will be no worse than the residual crosstalk between the two eyes during 3D shutter glasses operation. Also, all these values will be adjustable in the Arduino scanning backlight project (directly or indirectly, e.g. phasing and scanning speed adjustments instead of millisecond values), to reduce input lag, correct phasing against the actual refresh, and adjust for minimal ghosting.
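
In Arduino terms, that schedule for one segment is just three timed waits. An illustrative fragment (SEG_PIN is an assumed pin; the real code derives its timing from the VSYNC phase rather than free-running):

Code:

const int SEG_PIN = 2;  // assumed pin driving this backlight segment

void setup() {
  pinMode(SEG_PIN, OUTPUT);
}

// One 8.33 ms refresh (120 Hz) for this segment, per the schedule above.
void strobeSegment() {
  delayMicroseconds(2000);  // 2 ms: LCD pixels finish transitioning (in the dark)
  delayMicroseconds(5000);  // 5 ms more: let residual ghosting fade (in the dark)
  digitalWrite(SEG_PIN, HIGH);
  delayMicroseconds(500);   // 0.5 ms visible flash = the 1/1920 sec sample
  digitalWrite(SEG_PIN, LOW);
}

void loop() {
  strobeSegment();
  delayMicroseconds(833);   // remainder of the 8333 us refresh, then repeat
}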

Thanks,
Mark Rejhon


post #16 of 47 Old 09-19-2012, 09:12 PM
guidryp (Senior Member)
Quote:
Originally Posted by Mark Rejhon

I generally agree that it's pretty pointless, but it's a "free" feature. I'm currently aiming the wattage of the backlight for a sufficiently bright picture during "960Hz" simulation, but I want to have the "free" software feature of "simulated 1920Hz" (without motion interpolation) just for two reasons:
(1) Experimentation if it's even possible *at all* to tell the difference.

I agree that experimentation has its own value, so in that light, sure, what the heck -- if you get it running, try everything you can.

But I think characterizing this as 960/1920 Hz simulation is mistaken. An actual 1920Hz CRT would likely look more like a sample-and-hold display than a conventional CRT. CRTs were not sharp because they reflect reality better than an S&H display; CRTs were sharp in motion because, unlike reality, they benefit from the stroboscopic effect freezing the action. If you actually ran a CRT at 960Hz, it would no longer exhibit a stroboscopic effect that humans -- or even birds and insects, for that matter -- could detect.


What I am saying will likely be controversial to many. But at some refresh rate (below 960Hz), motion blur on a CRT would get worse the higher the refresh rate, until it essentially equaled an S&H display.

A thought experiment:

Sitting in your living room on a bright sunny day with lots of natural light.

Grab a book or something with some print and start moving it back and forth in front of your face. It will blur.

Repeat at night with an adjustable strobe light. At a slow flash rate, the strobe will freeze it, and the print will be sharp.

As you increase the strobe rate, at some point you can't see the strobing anymore, and it will be back to looking like it did in daylight: blurred.

To benefit from the stroboscopic effect, it has to be close to a frequency where you can actually detect it, or it must interact with some other element to create artifacts that you can detect.

Quote:
Correct. Though, just like I can detect rainbow artifacts, I can detect the stroboscopic effect -- even 500 Hz PWM is detectable indirectly if you know how to look for PWM stroboscopic artifacts (not everyone is sensitive to them, much like DLP rainbows are a person-specific thing).
Academic note: detecting stroboscopic artifacts (e.g. DLP rainbows, PWM, etc.) is a different vision phenomenon than flicker fusion.
Example: test a mouse cursor on a black screen: 180Hz PWM on a 60Hz signal shows a triple-cursor motion blur instead of a continuous blur.
Flicker fusion is /different/ from store-n-hold blur, which is also /different/ from LCD response blur. They can all interact, of course.
Human perception of high-speed vision phenomena (different from "flicker fusion")
1. Witnessing high speed photography. Even with xenon strobe lights that flash for less than 1/5000th second, you can still see the flash -- though it looks as instantaneous to the human eye as a 1/200th second flash. Even a millionth-second flash would be detectable, provided there were enough photons hitting the eyeballs. It's called "integration" -- the cones/rods in your eyeballs are like tiny buckets collecting photons. Once you're far beyond the flicker fusion threshold, it doesn't matter how fast or slow these buckets are filled: a million-lumen flash for a nanosecond has the same number of photons as a one-lumen flash for a millisecond.
2. Wagon-wheel effects. Humans can detect continuous versus non-continuous light sources indirectly using the wagon-wheel (stroboscopic) effect and its cousins (DLP rainbows, etc). Given sufficient speed, insanely high numbers become detectable. Imagine a high-speed wagon-wheel disc spinning synchronized with a theoretical 5000Hz strobe light: the wheel looks stationary. Now change the strobe to 5,001Hz without changing the wheel speed, and the wagon wheel looks like it spins slowly backwards.

None of this is evidence of human vision faster than the flicker fusion threshold. I see DLP rainbows as well, but that is a lower-frequency artifact. When two (or more) higher-frequency elements interact, you get lower-frequency artifacts. They are beat-frequency/aliasing artifacts.

A high-speed flash is a particularly poor example, and the reason is in your own statement: integration. The integration time -- the time our visual system averages inputs over -- is on the order of 10-20 ms. That means you really can't detect events spaced closer than that; they will blur together. A single isolated flash is not a test of speed. The measure of speed would be how much time must elapse between two flashes for them to be distinguishable from one: 10-20ms (50-100Hz).
Quote:
3. Motion blur. Detectability of motion blur is massively well beyond flicker fusion.

What? Motion blur IS flicker fusion. Flicker fusion gives an indication of the integration time of our visual system, as does motion blur; they are the same phenomenon, and both point to a visual system that integrates over 10-20ms.
Quote:
For fast motion moving at 1 inch every 1/60th second, the hold-type blur on LCD is as follows:
At 60Hz, the motion blur is 1" thick (entry level HDTV's, regular monitors) ...
At 120Hz, the motion blur is 0.5" thick (120Hz computer monitors, interpolated HDTV's) ...

You can pretty much stop here, because beyond about 60Hz the difference is really only going to matter to a high-speed camera. Our eyes themselves integrate over an interval similar to a 60 Hz frame time. Looking at the same motion through a window, at a real object in sunlight, would blur just as much.

There is an obsession in all specs on every device made to always go bigger/higher, but at some point it really isn't going to matter when humans are at the receiving end.

But just like the "Golden Eared" who think they need 24bit/96KHz recordings because they hear better than normal people, there will be those convinced they can see faster than the birds and the bees.



But that isn't to say I think a scanning backlight isn't a good idea for an LCD -- I do think there is benefit there. I just think the obsession with super-high refresh rates and ultra-short flashes is misplaced.
post #17 of 47 Old 09-19-2012, 09:41 PM
guidryp (Senior Member)
Quote:
Originally Posted by Mark Rejhon

These ideas have not matured, until recently because:

The reason you don't see this is largely its niche appeal, and the extra expense to build it.

120Hz monitors already carry a hefty premium; add another premium for the more powerful backlighting needed for a short duty cycle.

A lot of LED LCDs are edge-lit to make them even cheaper. So having an array of 8 separate segments to scan would increase complexity and expense again.

By the time you were done, you might be looking at 3x the cost of a normal LCD, and your market is a slice of the niche that already insists on buying 120 Hz gaming monitors.

I doubt there is much technical challenge if a monitor manufacturer like Samsung/LG wanted to pursue this. But I figure they crunched some build-cost/sales projections and can't see a profit.
post #18 of 47 Old 09-19-2012, 10:42 PM - Thread Starter
Mark Rejhon (AVS Special Member)
Quote:
Originally Posted by guidryp

I agree that experimentation has its own value, so in that light, sure, what the heck -- if you get it running, try everything you can.
But I think characterizing this as 960/1920 Hz simulation is mistaken. An actual 1920Hz CRT would likely look more like a sample-and-hold display than a conventional CRT.
Correct, unless you display a discrete frame for each refresh (e.g. 1920fps).
But that's insane, and we don't need that. We only need to black out the intermediate samples, and the persistence of vision (flicker fusion) does the rest.

CRTs running at 60Hz actual native refresh already have approximately a "1000Hz equivalence" if they have a 1ms phosphor decay.
Quote:
CRTs were sharp in motion because unlike reality, they benefit from the stroboscopic effect freezing the action. If you actually ran a CRT at 960Hz it would no longer be exhibiting a stroboscopic effect that humans, or even birds and insects for that matter, could detect.
It is worth pointing out that motion blur is reduced by many methods, including non-stroboscopic ones. Examples:
1. Display at interpolated X frames per second (e.g. 240 frames per second).
Effect: Store and hold, but 240 discrete samples
2. Display store-and-hold displaying a native 240 frames per second.
Effect: Store and hold, but 240 discrete samples
3. Display strobed at 1/X second (e.g. 1/240th of a second), from a 60Hz signal.
Effect: Stroboscopic, 60 discrete samples with intermediate samples blacked out. Persistence of vision and flicker fusion, blends the motion.
4. CRT scanned at 240 Hz from a 240fps signal. Stroboscopic with all intermediate samples.
Effect: Stroboscopic, 240 discrete samples.

Tiny interesting note: despite the similarity of the above situations, #4 has less motion blur than #1/2/3 because the CRT strobes each pixel at 1/1000sec (phosphor decay). Basically, #1/2/3 have similar motion blur as perceived by the human eye (1/240sec samples), while #4 has less motion blur (1/1000sec samples).
Quote:
What I am saying will likely be controversial to many. But at some refresh rate (below 960Hz) Motion blur on a CRT would get worse the higher the refresh rate, until it essentially equaled a S&H display.
A thought experiment:
Sitting in your living room on a bright sunny day with lots of natural light.
Grab a book or something with some print and start moving it back and forth in front of your face. It will blur.
Repeat at night with an adjustable strobe light. At slow flash rate, the strobe will freeze it, and the print and it will be sharp.
As you increase the strobe rate at some point, you can't see the strobing anymore, and it will be back to like it looked in daylight: Blurred.
To benefit from the stroboscopic effect, it has to be close to a frequency where you can actually, detect it, or it must interact, with some other element to create artifacts that you can detect.
None of this evidence of higher speed human vision that the Flicker Fusion Threshold. I see DLP rainbows as well, but that is a lower frequency artifact. When two (or more) higher frequency elements interact you get lower frequency artifacts. They are beat frequency/aliasing artifacts.
I understand what you are saying. I can wave my hand in front of an LCD with 180Hz PWM, and I see the discrete samples instead of a continuous blur. Same effect as you are describing.
Quote:
High speed flash is a particular poor example, and the reason is in your own statement. Integration. The integration time, or the time our visual system averages inputs is on the order of 10-20 ms. That means your really can't detect events spaced closer than that, or they will blur together. A single isolated flash is not a test of speed. The measure of speed would be how much time must elapse between two flashes, so they would be distinguishable from one. 10-20ms (50-100Hz).
I think you misinterpreted my use of the word "speed". Everything I wrote is about one flash sample per refresh: the shorter the flash sample, the higher the simulated "Hz", even if it is a single 1/960sec flash followed by a long delay until the next refresh. So really, we're talking about the same thing in a way. Flicker fusion blends the flash samples together into one consistent, continuous motion. You're correct here.

However, by "speed" I meant shorter strobe lengths (while keeping the strobe cycle constant). In that case, shorter strobes continue to reduce motion blur even when you shorten them below 1/120 second (you're not strobing more frequently, just using shorter, more intense bursts of light in a scanning backlight). To see the benefits of "240Hz" vs "480Hz" vs "960Hz" (a sample-length measurement, not an actual frequency measurement), you need material meeting three criteria: (1) fast pans, (2) non-blurred frames (fast camera shutter), and (3) framerate matching the native refresh rate of the display signal. If any one of the three is not met, going beyond 120 is usually quite useless. But if you meet all three, the benefit of going beyond 120 suddenly becomes very clear (even with diminishing returns).
Quote:
What? Motion Blur IS flicker fusion.
Wrong -- not necessarily! Motion blur is caused by multiple factors, including factors other than the stroboscopic effect. Motion blur can be caused by eye tracking -- and that's the _main_ cause of motion blur on LCD! NOT LCD response, NOT flicker fusion!
Quote:
Flicker fusion gives and indication of the integration time of our visual system as does motion blur, they are the same phenomena and both point to a visual system that integrates over 10-20ms.
Yes, but you're missing "persistence of vision" -- motion blur CAUSED by eye tracking (not caused by flicker fusion)
Your eyes do NOT behave like digital stepper motors!
Your eyes don't stop moving during a refresh. They track across the screen in a continuous, analog manner, so DIFFERENT rods/cones in your retina integrate different parts of the image in motion, producing motion blur caused by eye tracking: the image smears across your retina as you track. Even 1/480 second later, at a high-contrast edge, a different set of cones/rods is doing the integrating as the image smears across your field of vision. That's HOW you can see reduced motion blur at "240Hz", "480Hz", "960Hz". Shorter strobes limit the integration to closer to the same cones/rods (sharper) rather than spreading it over more of the retina. Flicker fusion does the rest, blending the consecutive images. Your eyes integrate multiple stacked blurred images at slower strobes (e.g. 1/240) and multiple stacked sharper images at faster strobes (1/960). See -- flicker fusion has nothing to do with tracking-based motion blur.

You neglected to consider eye-tracking-caused motion blur

Digital Camera Experiment You Can Try
Tracking-caused motion blur. Metaphorically, your eyeballs are roughly akin to a slow-shutter digital camera. Now, get a good SLR digital camera with manual adjustments. Go into a windowless room. Shaking/panning the camera will be equivalent to eye tracking. Now try this experiment.

1. Configure the camera to 1/10sec shutter speed, flash turned off, but room lights turned on. It's going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is blurry because of the slow shutter.
2. Configure the camera to 1/10sec shutter speed, flash turned on, but room lights turned off. It's still going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is sharp despite the slow shutter.

Gasp! Impossible, you say? Not so fast buddy -- what happened is that even though the camera was integrating over a long 1/10sec period, the flash is faster than 1/10sec. There was no light caught during the integration period, except for the light caught from the flash!

This is a very similar principle for motion blur reduction using strobed (flash) backlight. You've eliminated eye-tracking-based motion blur. The shorter the strobe, the less opportunity for eye-tracking-caused motion blur to blur the image.

Corollary: your eyes are continuously-open shutters, and the display gives you multiple consecutive images. As you track an object in a fast-panning scene, your eyeballs integrate consecutive frames. For store-and-hold, you integrate a frame blurred by eye-tracking motion (for THIS motion blur, there's no blur caused by flicker fusion). For strobed, you integrate consecutive strobed frames while tracking; the shorter the strobes, the less eye-tracking motion during each strobe, and the less smear across different retina rods/cones. Integration stays closer to the same rods/cones, so the stacked integration is sharper. Your eyes are not digital stepper motors while tracking an object in a fast-moving pan.

Good examples for telling apart "240/480/960" simulation are video material from HDTV cameras shot with a short shutter speed -- fast car-racing pans in bright light, ski racing on sunny slopes, a football field-goal kick on a sunny day, fast left/right turns in FPS shooter games, fast horizontal panning in platformer games, etc. I know, because I've been able to tell apart 240/480/960 simulation (and their progressively further motion blur elimination) on exactly these kinds of material! (Of course, "960" simulation is useless for HDTV material shot at slow shutter speeds such as 1/100sec -- camera blur becomes the limiting factor.) Also, in the HDTV era, studios have often moved to smaller cameras and longer shutter speeds than the gigantic NTSC cameras of yesteryear, so you do need to actively seek out HDTV footage shot at short shutter speeds. The same three criteria as above apply: (1) fast pans, (2) non-blurred frames, (3) framerate matching the native refresh rate of the display signal. Miss any one and going beyond 120 is usually quite useless; meet all three and the benefit becomes very clear (even with diminishing returns).

There are many academic papers that cover eye-tracking-based motion blur (a separate motion blur issue from flicker fusion). For example, in this academic paper, the diagram note says:
Figure 1: A depiction of hold-type blur for a ball moving with a translational motion of constant velocity. In the top row we show six intermediate positions at equal time intervals taken from a continuous motion. The empty circles denote the eye fixation point resulting from a continuous smooth-pursuit eye motion that tracks some region of interest. For each instance of time, the same relative point on the ball is projected to the same location in the fovea, which results in a blur-free retinal image. The central row shows the corresponding hold-type display situation. Here, the continuous motion is captured only at the two extreme positions. Frame 1 is shown during a finite amount of time, while the eye fixation point follows the same path as in the top row. This time, different image regions are projected to the same point on the retina. Temporal integration registers an average color leading to perceived blur as shown in the bottom row.
Quote:
You can pretty much stop here
Incorrect -- it's very easily detectable beyond 120fps when you look at proper material (e.g. fast pans of 60fps@60Hz, fast scrolling ticker text, fast left/right motion in FPS shooters). It is also proven by academic papers, by the digital camera experiment above, and by my own eyes: I have been able to easily tell apart 120fps/240fps/480fps in scrolling ticker tests. Have you been in Best Buy lately? There's a demo mode on some displays that allows you to test motion blur reduction; the difference between 60Hz, 120Hz, 240Hz and up is very clearly noticeable for scrolling tickers. It is also consistent with the information found in my references.

So let me re-iterate:

Fact #1: Store-n-hold display, no flicker at all.
Discrete 120fps at 120Hz has 50% less motion blur than 60Hz
Discrete 240fps at 240Hz has 75% less motion blur than 60Hz
Discrete 480fps at 480Hz has 87.5% less motion blur than 60Hz
All proven human eye noticeable. No flicker fusion involved!
For fast motion moving at 1 inch every 1/60th second:
At 60fps, the motion blur is 1" thick. No flicker fusion involved.
At 120fps, the motion blur is 0.5" thick. No flicker fusion involved.
At 240fps, the motion blur is 0.25" thick. No flicker fusion involved.
At 480fps, the motion blur is 0.125" thick. No flicker fusion involved.
I have seen it with my eyes too! (Many new HDTV's have interpolation modes)

Fact #2: Strobed display such as CRT or scanning backlight/BFI
1/120sec flash once per refresh, for 60Hz+60fps, reduce motion blur by 50%
1/240sec flash once per refresh, for 60Hz+60fps, reduce motion blur by 75%
1/480sec flash once per refresh, for 60Hz+60fps, reduce motion blur by 87.5%
All proven human-eye noticeable. Yes, flicker fusion is involved, but the fusion threshold has no effect on the motion blur reduction -- that is persistence of vision from eye tracking (diagram on page 3 of the academic paper).
For fast motion moving at 1 inch every 1/60th second, on 60fps@60Hz signal.
At 1/60sec strobe once per refresh, the motion blur is 1" thick. Tracking-based blur, not caused by flicker fusion.
At 1/120sec strobe once per refresh, the motion blur is 0.5" thick. Tracking-based blur, not caused by flicker fusion.
At 1/240sec strobe once per refresh, the motion blur is 0.25" thick. Tracking-based blur, not caused by flicker fusion.
At 1/480sec strobe once per refresh, the motion blur is 0.125" thick. Tracking-based blur, not caused by flicker fusion.
I have also seen it with my eyes! (Many new HDTV's have scanning modes)
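
Both lists above come from a single line of arithmetic: perceived blur width = motion speed x the time each unique frame is visible to the tracking eye. A quick check in plain C++, using the same numbers:

Code:

#include <cstdio>

int main() {
    // Motion of 1 inch per 1/60 sec, as in both lists above.
    const double speedInchPerSec = 60.0;
    const int sampleRates[] = {60, 120, 240, 480};  // 1/60s hold ... 1/480s strobe
    for (int r : sampleRates) {
        // blur width = speed x visible time per unique frame
        printf("1/%-3d sec visible  ->  %.3f inch of blur\n", r, speedInchPerSec / r);
    }
    return 0;
}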

However, you are right about one small thing: beyond the flicker fusion threshold, extra fps is quite useless if you've completely eliminated eye-tracking-based motion blur (an LCD problem that has nothing to do with the flicker fusion threshold). Which means that with a 1/960sec strobed backlight, 75fps@75Hz looks the same as 120fps@120Hz, which looks the same as 240fps@240Hz -- though all of these would look different on a 1/480sec strobed backlight. So 120Hz native refresh rate (discrete refreshes) is probably approximately the final frontier for native refresh rate, and you can eliminate the remainder of the motion blur using one shorter single flash per frame (eliminating eye-tracking-based motion blur). On this minor sub-point about the flicker fusion threshold, you are right.

HOWEVER, your blanket statement "motion blur is flicker fusion" IS FALSE, since multiple factors affect motion blur other than flicker fusion. Yes, flicker fusion is one factor, but it is JUST one factor. Therefore the rest of your post is false, especially if you do the slow digital camera experiment illustrated above. The human-visible diminishing returns do not stop at 120Hz. (I can already tell apart the motion blur reductions at 120Hz / 240Hz / 480Hz, so it's clearly and easily proven by my own senses, and the information in the academic papers agrees with me.)

On a final note, I suggest you actually try the digital camera experiment described above.

P.S. I like motion blur for 35mm film -- it's the way it is supposed to be. But I hate motion blur in video games (and in things like trying to read while scrolling a browser window -- something I used to do on a CRT computer monitor but not on LCD, due to scrolling being blurred). That's why I want CRT-like quality on an LCD for video games too -- a big reason I'm starting the Arduino scanning backlight project. It's already technologically possible to reduce motion blur by 90% using a scanning backlight. Also, I suggest booking an airfare to CES or CEDIA; some people (when asked) will be happy to show you precisely optimized demo material that clearly distinguishes 120Hz / 240Hz / 480Hz / etc. (scrolling ticker text tests, high-speed smooth 60fps pans, etc.), which may lead you to disbelieve what you said in your post. You're also welcome to visit and see the scanning backlight once it's built, if you wish.

Thanks,
Mark Rejhon


post #19 of 47 Old 09-20-2012, 08:49 AM
guidryp (Senior Member)
Quote:
Originally Posted by Mark Rejhon

Wrong -- Not necessarily! Motion blur is caused by multiple factors. Including factors other than stroboscopic effect. Motion blur can be caused by eye tracking -- and that's the _main_ cause of motion blur on LCD! NOT LCD response, NOT flicker fusion!
Yes, but you're missing "persistence of vision" -- motion blur CAUSED by eye tracking (not caused by flicker fusion)

I never said the stroboscopic effect creates motion blur -- quite the opposite, I said it reduces it.

You have devoted a wall of text to seemingly trying to make one phenomenon into many.

There is one mechanism at work. That is the slow integration speed of our visual system. Or in camera terms, our slow shutter speed.

There is no difference between:
Moving eyes, stationary scene (pirouette and the world blurs).
Stationary eyes, moving scene (sit still while a bat flies in front of you: nothing but blur).
Spokes blurring on a bicycle.
Flashing lights fusing into continuous on state. (AKA Flicker Fusion Threshold).

It is all the expected result of integrating a visual sensor over some relatively lengthy time period.

That integration time (like a camera shutter speed) is on the order of 10ms to 20ms (50-100Hz) in humans.

The slow integration and operation of our visual system can also be seen in our reflexes: humans require approx 30 ms longer to respond to a visual stimulus than to an auditory one.

Quote:
1. Configure the camera to 1/10sec shutter speed, flash turned off, but room lights turned on. It's going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is blurry because of the slow shutter.
2. Configure the camera to 1/10sec shutter speed, flash turned on, but room lights turned off. It's still going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is sharp despite the slow shutter.
Gasp! Impossible, you say? Not so fast buddy -- what happened is that even though the camera was integrating over a long 1/10sec period, the flash is faster than 1/10sec. There was no light caught during the integration period, except for the light caught from the flash!

Stop putting words into my mouth and trying to turn me into a strawman. I wouldn't say this is impossible -- it's expected, and it is essentially the same as the thought experiment I suggested using a strobe light and human vision.

But I will adjust your camera experiment so perhaps you can get what you are missing.

Conditions:
1/50th of second for our shutter speed (close to human visual integration).
Camera on a tripod.
Place a Giant sheet of paper with text in front of the camera on a mechanism to shake it randomly about.
Flash duration sufficiently short to sharply freeze the text and make it readable.

Now consider what happens when we put the flash in strobe mode at various frequencies (frame rates) and press the camera shutter:

40 Hz - 0 or 1 flash, only one perfect sharp exposure (or a black frame).
60 Hz - 1 flash or 2 flashes. Could be perfect, or could be double exposure.
120 Hz - 2 or 3 flashes
240 Hz - 4 or 5 flashes
480 Hz - 9 or 10 flashes.
960 Hz - 19 or 20 flashes, creating 19 or 20 exposures blended together into a blurry mess during the time the shutter is open.

If freezing motion detail is your goal, higher strobe/frame rates are not the way to go: the higher the frame rate, the more overlapping, displaced frames you have averaging together, creating more blur, not less.


The obsession with ultra-high frame rates for human consumption is misplaced. So is claiming that short-duration strobes are the equivalent of high frame rates. Frame rate and pulse duration have different effects.
post #20 of 47 Old 09-20-2012, 09:22 AM
xrox (AVS Special Member)
Quote:
Originally Posted by Mark Rejhon

But, your original sentence was: "2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)" -- A statement I believe is incorrect, at least for horizontal motion.
The statement as written is 100% correct, as I don't mention strobing. But as you pointed out, if you scan AND strobe then you can surpass 1ms (not counting cross-talk).
Quote:
Originally Posted by guidryp

What I am saying will likely be controversial to many. But at some refresh rate (below 960Hz) Motion blur on a CRT would get worse the higher the refresh rate, until it essentially equaled a S&H display.
Entirely true if you are only repeating identical refreshes. In fact, motion blur would get worse even at 120Hz in this method (see graphic in my previous post).

Quote:
Originally Posted by guidryp

A thought experiment:
Sitting in your living room on a bright sunny day with lots of natural light.
Grab a book or something with some print and start moving it back and forth in front of your face. It will blur.
Unless you are tracking the print as it moves, the experiment is not valid.

I've been repeating this explanation for about 8 years now on AVS. Our eyes track movement on the screen in a continuous fashion, yet all displays produce motion from still images. The two systems are not compatible. The result is blur.

In other words, blur induced by the display (not inherent in the signal) is due to the conflict between our continuously moving retina (tracking movement on the screen) and sequential still images that make up motion video.

The best analogy I could come up with is the laser dot thought experiment. If your retina is moving and you shine a stationary laser beam onto a spot on its surface, the laser beam will literally draw a line on your moving retina due to retinal persistence. This is analogous to our eyes continuing to move while watching a stationary image (1 frame).

Using the same analogy it is easy to understand the artifact:

  • The length of the laser line drawn onto the retina is analogous to the width of perceived blur on a display.
  • The length of the laser line (i.e. blur width) is determined by the speed of eye movement.
  • The length of the laser line (i.e. blur width) is also determined by how long you shine the laser (i.e. how long you display a frame).
  • The length of the laser line (i.e. blur width) is also determined by how long your eye persistence is. If you have short persistence, the trailing edge of the laser line will start to disappear faster (i.e. you may not perceive display blur as easily as others who have long persistence).

Now, using the same thought experiment: if you shine only a short nanosecond laser pulse on your moving retina, you will literally draw a dot on your retina, with no blur. This is analogous to pulsing a frame for a very short time while your eyes are in movement.

As you can see, the primary display parameter determining the blur induced by the display itself is the HOLD TIME, which is the time each unique frame is displayed on the screen. Understand hold time and you will understand this entire concept.

Remember that even with ultra-short nanosecond frames, if you repeat the frames you have effectively increased the hold time. This is why a 120Hz CRT displaying a 60Hz signal (using frame repeat) will show more motion blur than a 60Hz CRT.

post #21 of 47 Old 09-20-2012, 10:35 AM - Thread Starter
Mark Rejhon (AVS Special Member)
Quote:
Originally Posted by guidryp

Now consider what happens when we put the flash in stobe mode at various frequencies(frame rates) and press the camera shutter at:
40 Hz - 0 or 1 flash, only one perfect sharp exposure (or a black frame).
60 Hz - 1 flash or 2 flashes. Could be perfect, or could be double exposure.
120 Hz - 2 or 3 flashes
240 Hz - 4 or 5 flashes
480 Hz - 9 or 10 flashes.
960 Hz - 19 or 20 flashes, creating 19 or 20 exposure blended all together in blurry mess during the time the shutter is open.
Aha, that's where our misunderstanding is.
My scanning backlight does NOT flash the same segment multiple times per frame.

For a scanning backlight operating on a 60Hz source, it would be:
"Equivalent perceived motion blur as 60fps@60Hz on a continuously-lit display" - continuously lit backlight
"Equivalent perceived motion blur as 120fps@120Hz on a continuously-lit display" - 1 flash at 1/120sec, 60 times a second.
"Equivalent perceived motion blur as 240fps@240Hz on a continuously-lit display" - 1 flash at 1/240sec, 60 times a second.
"Equivalent perceived motion blur as 480fps@480Hz on a continuously-lit display" - 1 flash at 1/480sec, 60 times a second.
(flash is per segment of scanning backlight)

For a scanning backlight operating on a 120Hz source, it would be:
"Equivalent perceived motion blur as 120fps@120Hz on a continuously-lit display" - continuously lit backlight
"Equivalent perceived motion blur as 240fps@240Hz on a continuously-lit display" - 1 flash at 1/240sec, 120 times a second.
"Equivalent perceived motion blur as 480fps@480Hz on a continuously-lit display" - 1 flash at 1/480sec, 120 times a second.
"Equivalent perceived motion blur as 960fps@960Hz on a continuously-lit display" - 1 flash at 1/960sec, 120 times a second.
(flash is per segment of scanning backlight)

It's now just semantics, terminology, and the way I word my posts. When I say "480Hz equivalence", it really means "using a single 1/480sec strobe per refresh, to get the equivalent perceived motion blur of 480fps@480Hz on a continuously-lit display" (a non-flickering display such as a store-and-hold LCD with a continuous backlight). Interpreted this way, my post is entirely correct, and you can see the difference. My scanning backlight will not strobe more than once per native refresh.

You may be right that I use terminology that can be interpreted differently than intended. However, interpret "480Hz equivalence" NOT as "flashing 480 times per second" but as a single 1/480sec strobe per unique frame. Re-read my posts with that interpretation and they are correct: there IS noticeably less motion blur with shorter strobes (even 1/240 versus 1/960), provided there's only one strobe per refresh, and provided you test on (1) fast pans/movements, (2) non-blurred frames, and (3) a framerate matching the native refresh rate of the display signal. (All three criteria need to be met for the benefit to be noticeable by the human eye.)

The more accurate, long-winded phrase is "using a single 1/480sec strobe per refresh, to get the equivalent perceived motion blur of 480fps@480Hz on a continuously-lit display", instead of the easily-misinterpreted short phrase "480Hz equivalence" (which can be misunderstood as a 480Hz strobe -- NOT what I am doing; perhaps you misunderstood me as such. My sincere apologies; I accept responsibility for the misinterpretation). I am open to suggestions, especially from other scientists, for an appropriate short phrase to describe equivalences in motion blur reduction. That's why Samsung calls it "CMR 960" rather than "960 Hz equivalence", and why Sony calls it "XR 960" -- it's an "equivalence" factor as I described. The equivalence is already scientifically shown to be fairly accurate in the various academic references found, and I can see it with my eyes (in a comparison).

We are preaching to the same choir, if you agree with this post. Sorry about the terminology misunderstanding. Your post is entirely correct when doing multiple strobes per refresh, but that's not what I am doing with my scanning backlight -- it is only one strobe per native refresh. Also, to be fair, even though you potentially misinterpreted my "Hz equivalence" terminology (when I really meant a single strobe), your statement "Motion Blur IS flicker fusion" can also be misinterpreted. You were thinking about multiple repeated frames (multiple strobes per single frame), and in that case it IS TRUE: flicker fusion blends all the repeated frames into motion blur. There, you're right -- that's motion blur caused by flicker fusion. But my scanning backlight only strobes once per refresh, so "Motion Blur IS flicker fusion" is a wrong statement specifically for my scanning backlight. Thus, my scanning backlight produces no flicker-fusion-induced motion blur at all, provided the framerate matches the native refresh rate of the display signal and there are no repeated frames in the original display signal.

TODO for myself: I will create some animation examples of a scanning backlight in slow motion, to explain its planned scanning behavior better. It will reduce misunderstandings, too.

We don't need high framerates if we have very short strobes -- 72fps with a single 1/960sec strobe per refresh (72 strobes per second) is quite efficient. You only need 72fps, which is very close to a common flicker fusion threshold. Doing fully interpolated 960fps@960Hz is very inefficient for the equivalent amount of motion blur reduction benefit. That said, different humans have different flicker fusion thresholds: some can see flicker at 100Hz, and others are actually bothered by fluorescent lights (120Hz flicker), but most humans have a flicker fusion threshold not much above 60Hz (and often less, especially in dark environments). The 72 number is a common value, but it's a bell curve; some will see flicker well beyond it. We don't need overkill framerates, and motion interpolation is quite a crude blunt-force mallet for reducing motion blur when it can be done stroboscopically at a lower framerate. If that is what you meant: yes, excess motion interpolation tends to be wasteful beyond the flicker fusion threshold.

Note -- 120Hz should allow a scanning backlight to cover most human flicker fusion thresholds, including most of the 'sensitive humans'. That's why 120Hz is my goal, but it will be adjustable in 1Hz increments between 60 and 120. (Many 120Hz computer monitors sync to any vertical refresh between 60 and 120)
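(For the tinkerers: detecting the refresh rate is a one-interrupt job once VSYNC is tapped -- timestamp each pulse and take the difference. A minimal sketch of the idea follows; the pin choice and rising-edge polarity are my assumptions, not a tested design.)

Code:
// Minimal sketch: derive the refresh rate from the tapped VSYNC signal.
// Assumes VSYNC is level-shifted to 5V logic on pin 2 (interrupt 0 on an Uno),
// with the rising edge marking the start of a refresh.
volatile unsigned long vsyncPeriodUs = 16667;   // default: ~60Hz
volatile unsigned long lastVsyncUs = 0;

void onVsync() {
  unsigned long now = micros();
  vsyncPeriodUs = now - lastVsyncUs;   // ~16667us at 60Hz ... ~8333us at 120Hz
  lastVsyncUs = now;
}

void setup() {
  Serial.begin(9600);
  pinMode(2, INPUT);
  attachInterrupt(0, onVsync, RISING);
}

void loop() {
  noInterrupts();
  unsigned long period = vsyncPeriodUs;  // atomic copy of the 4-byte value
  interrupts();
  Serial.println(1000000.0 / period);    // prints the refresh rate in Hz
  delay(500);
}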

Misunderstanding from both sides cleared yet? Are we in agreement yet?

Thanks,
Mark Rejhon



BlurBusters Blog -- Eliminating Motion Blur by 90%+ on LCD for games and computers

Rooting for upcoming low-persistence rolling-scan OLEDs too!

Mark Rejhon is offline  
post #22 of 47 Old 09-20-2012, 10:58 AM - Thread Starter
AVS Special Member
 
Mark Rejhon's Avatar
 
Join Date: Feb 1999
Location: North America
Posts: 8,124
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 102
Quote:
Originally Posted by xrox View Post

The statement as written is 100% correct as I don't mention strobing. But as you pointed out, if you scan AND strobe then you can surpass 1ms (without including cross-talk).
Entirely true if you are only repeating identical refreshes. In fact, motion blur would get worse even at 120Hz in this method (see graphic in my previous post).
I didn't know your statement involved repeating identical refreshes, so on that basis I now agree with your statement -- I had misunderstood it as also applying to the situation of not repeating refreshes (which is where I interpreted your statement as incorrect). Now that you're saying you agree with me (in the one-strobe-per-refresh scenario), and I now agree with you (in the scenario of repeating identical refreshes), we are now in total agreement, I believe. All clear? smile.gif
Quote:
Unless you are tracking the print as it moves, the experiment is not valid.
I’ve been repeating this explanation about 8 years now on AVS. Our eyes track movement on the screen in a continuous fashion. Yet all displays produce motion with still images. The two systems are not compatible. The result is blur.
In other words, blur induced by the display (not inherent in the signal) is due to the conflict between our continuously moving retina (tracking movement on the screen) and sequential still images that make up motion video.
Tell me about it. Motion blur caused by continuous eye-tracking is very difficult to explain to the average layman.

Thanks,
Mark Rejhon



BlurBusters Blog -- Eliminating Motion Blur by 90%+ on LCD for games and computers

Rooting for upcoming low-persistence rolling-scan OLEDs too!

Mark Rejhon is offline  
post #23 of 47 Old 09-20-2012, 11:22 AM
Senior Member
 
guidryp's Avatar
 
Join Date: Dec 2001
Location: Ottawa
Posts: 250
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Quote:
Originally Posted by xrox View Post

I’ve been repeating this explanation about 8 years now on AVS. Our eyes track movement on the screen in a continuous fashion.

Actually this is false.

There have been studies of how our eyes track and they don't smoothly and continuously track anything. They jump about repositioning the Fovea to gather details in a fairly jerky, haphazard way. Outside of the Fovea, everything is blur, we just learn to ignore it and think it is sharp.

From what I have read, the fovea repositioning fires off at a very low rate, around 4 Hz.

It is more a case of reposition, integrate over time to gather detail, flip to a new position, integrate over time, to gather detail.

Much of our visual system is quite slow, quite low resolution, and full of glitches (like a giant blind spot in the middle of our visual field). It is just that our brain fills in the gaps, ignores the missing bits, and fools us into thinking we have a sharp, continuous, fast visual system.
guidryp is offline  
post #24 of 47 Old 09-20-2012, 11:35 AM - Thread Starter
AVS Special Member
 
Mark Rejhon's Avatar
 
Join Date: Feb 1999
Location: North America
Posts: 8,124
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 102
Quote:
Originally Posted by guidryp View Post

Actually this is false.
There have been studies of how our eyes track and they don't smoothly and continuously track anything. They jump about repositioning the Fovea to gather details in a fairly jerky, haphazard way. Outside of the Fovea, everything is blur, we just learn to ignore it and think it is sharp.
Agreed on the inaccuracies of human eye tracking. However, to me, xrox's intent is to say that the eye is still in motion even over a tiny time period (e.g. a 1/960sec period), regardless of tracking accuracy. Also, it does not define a hard exact limit on using shorter strobes (as long as there's 1 strobe per refresh, e.g. a 1/960 sec strobe 60 times per second for 60Hz, like a CRT).

Despite inaccuracies in eye tracking,
(1) Eye tracking accuracy improves when you're actually staring at an object in motion, rather than simply scrolling your gaze (without an object to track).
(2) Eye tracking accuracy improves the slower the object is moving.
(3) Eye tracking accuracy improves when tracking a sharp object rather than a blurry object.
(4) Eye tracking accuracy can improve if you move your head while tracking an object.
Also, people DO move their heads to improve tracking accuracy -- including while watching a big screen (front projection systems!). :-)
Different people have different eye tracking accuracy -- that's already proven in sports, too (championship athletes show an excellent ability to track moving objects while playing).

Also, eye tracking inaccuracy doesn't prevent human ability to identify sharp objects:
(A) It only takes a brief tracking glance at a moving object to know whether the moving object is sharp or not.
(B) Tracking accuracy of your eyes improves on sharp objects. Wave a finger briskly across your eyes. You'll have more difficulty tracking the finger the faster it moves, but you can tell the finger is quite sharp in real life while it's moving.

Take a sharp photograph of text and print it out. Have someone wave the photograph gradually and smoothly from left to right in front of you. You can tell the photo is sharp, even though the photo is moving. It's possible to read the text, until the photo is waved too fast for you to read.
Take a blurry photograph of text and print it out. Have someone wave the photograph gradually and smoothly from left to right in front of you. You can tell the photo is blurry, even though the photo is moving. It's harder to read the blurred text, and even harder when the photograph is waved even slightly faster. Especially if the text is so blurred that you can barely read it when the photo is stationary.

Also, you'll notice it's easier to read the text on the photo if you move your head to help your eyes track the object. Also, the faster the photograph moves, the harder it is to identify differences in sharpness, so there's an analog point of diminishing returns. Even though your eyes will have difficulty tracking the photograph perfectly, you can still read the text on the photograph. The same situation arises when reading highway signs, advertisements on moving buses, signs while walking, etc. Most humans are able to tell that the text is still razor-sharp even in situations where their eyes have difficulty tracking the text in motion (e.g. fast moving vehicles, running past a sign, etc), up to a certain point where it's impossible to do successful glances.

You can test the scenarios I mentioned (1), (2), (3), (4) yourself by doing this with the sharp/blurry photos:
(1) Try sweeping your eyes across the stationary photo, versus staring at something within a moving photo being waved in front of you. You're tracking more accurately when you've got something to track.
(2) Wave the sharp photo at different speeds. It's easier to track and read the text on the photo when the photo is waved more slowly.
(3) Wave the sharp photo at the fastest speed (that you are still able to read the text on the photograph). Wave the blurry photo at the same speed. You'll notice it's much harder to read the text.
(4) Wave the sharp photo at the fastest speed (that you are still able to read the text on the photograph). Do the same without moving your head, and only using your eyes to track. Repeat while allowing your head to move while tracking. You'll notice it's easier to read the text if you move your head too.

...NOTE1 -- Use somebody else, or use an apparatus, to move the photographs for you at known speeds, so that you've got accurate repeatability for comparison purposes.
...NOTE2 -- For some people, there can be vision deficiencies. Just as dyslexic people exist, or color blind people exist, there exist people who are unable to track a moving object accurately, and who thus may gain little/no benefit from varying the variables in scenarios (1), (2), (3), (4). But people who can tell the differences in (1), (2), (3), (4) (to varying extents and abilities) are generally the majority.
...NOTE3 -- Real life is more like the sharp photograph in motion. From *this* perspective, motion blur on LCD is 'artificial', and CRT provides something closer to a 'natural' look. (It can be argued vice versa, but it's a matter of perspective and what you believe in -- some people think crystal-sharp motion is 'unnatural'.) Yes, sharp motion can look unnatural (movies with motion interpolation -- YUCK!). But I'm not talking about motion blur in movies (that's "natural" for movies). I am talking about motion blur in things that should look as real as possible (e.g. 3D videogames, sports broadcasts, etc), which is where my scanning backlight plays in, to make the motion more real and immersive (feel real, like "being there"). I've had that sensation in the CRT days, but not with LCD. Many gamers agree that motion looked better on CRT (even though not everyone understands why) -- especially competition videogamers who like to track a fast pan (turning fast) while being able to look for things like far-away enemies while turning, etc. The lack of display-added motion blur looks more natural; we only want human-generated motion blur (e.g. blur caused by inaccuracies in eye tracking) -- don't let the display add blur for us; let our human eyes add blur naturally. So having shorter strobes (one strobe per refresh), much like CRT, can make the image look far more natural and immersive for videogames that are able to run at a full capped framerate=refresh rate, when you want the "full immersion, being there" effect. As long as there are no noticeable stroboscopic effects (e.g. beyond the flicker fusion threshold).

I understand xrox's intent (the eye is still in motion even over a tiny time period, say a 1/960sec period, regardless of tracking accuracy); tracking inaccuracies do not prevent motion-blur reduction benefits from ultra-short strobes shorter than 1/120 second. Exactly where the limit lies is a fine-detail distraction at this point, but it is far shorter than the 1/120 number. The fact stands: you still gain motion blur benefits as you shorten the strobes further (1/240, 1/480, 1/960) -- one strobe per unique frame -- up to a point of diminishing returns where things have to move too fast for the motion-blur-reduction benefit to be noticed, and that's where the benefits stop coming. This occurs at approximately CRT-speed strobes (1/960sec) for displays covering common field-of-view angles of your vision. The exact point of diminishing returns is subject to debate, but it's definitely far shorter than 1/120sec strobes (single strobe per frame, no repeated strobes, no repeated frames), and likely varies with the human, the environment, the size of screen, speed of motion, display quality, distance from display, contrast of the onscreen object being tracked, etc. Granted, tracking difficulties certainly begin to play a role during fast pans, but they don't prevent motion blur reduction benefits for strobes shorter than 1/120sec (provided only one strobe per unique image, no repeated images, no repeated strobes).
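To put rough illustrative numbers on the 'fast pans' criterion (assumed pan speeds, not measurements -- just the persistence arithmetic):

Code:
#include <cstdio>

// Tracking blur ~= pan speed x lit time per refresh (one strobe per frame).
// Shows why shorter strobes keep paying off on fast pans but not slow ones:
// once blur falls below ~1 pixel, further shortening is invisible.
int main() {
    const double strobes[]   = { 1.0/120, 1.0/240, 1.0/480, 1.0/960 };
    const double panSpeeds[] = { 240.0, 2400.0 };  // slow vs fast pan, px/sec (assumed)
    for (double v : panSpeeds) {
        std::printf("pan %5.0f px/s:", v);
        for (double t : strobes)
            std::printf("  %5.2f px", v * t);      // blur width at this strobe length
        std::printf("\n");
    }
    // slow pan:   2.00  1.00  0.50  0.25 -> differences are sub-pixel, hard to see
    // fast pan:  20.00 10.00  5.00  2.50 -> each halving remains clearly visible
    return 0;
}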

Thanks,
Mark Rejhon



BlurBusters Blog -- Eliminating Motion Blur by 90%+ on LCD for games and computers

Rooting for upcoming low-persistence rolling-scan OLEDs too!

Mark Rejhon is offline  
post #25 of 47 Old 09-20-2012, 12:40 PM
AVS Special Member
 
xrox's Avatar
 
Join Date: Feb 2003
Posts: 3,169
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 50
Quote:
Originally Posted by guidryp View Post

Actually this is false.
There have been studies of how our eyes track and they don't smoothly and continuously track anything. They jump about repositioning the Fovea to gather details in a fairly jerky, haphazard way. Outside of the Fovea, everything is blur, we just learn to ignore it and think it is sharp.
From what I have read, the fovea repositioning fires off at a very low rate, around 4 Hz.
It is more a case of reposition, integrate over time to gather detail, flip to a new position, integrate over time, to gather detail.
Much of our visual system is quite slow, quite low resolution, and full of glitches (like a giant blind spot in the middle of our visual field). It is just that our brain fills in the gaps, ignores the missing bits, and fools us into thinking we have a sharp, continuous, fast visual system.
I don't doubt this at all. However, the use of "continuous" or "analog" to describe eye movement is relative to the system (i.e. moving retina vs stationary frame). Relative to the stationary frame, the eye is continuously moving around when tracking an object.

Further defining the eye movement as jumping around, jerky, stop/start, or stepping does not change the interaction at all relative to the system. Even if the eye moved in high-frequency sequential steps, the same phenomenon would occur. A more accurate way to describe the conflict would maybe be: we see blur because the retina is in movement while the frame is not, and that movement tracks the direction of motion. IMO, continuous vs stationary is easier to understand.

Over thinking, over analyzing separates the body from the mind
xrox is offline  
post #26 of 47 Old 09-20-2012, 01:46 PM - Thread Starter
AVS Special Member
 
Mark Rejhon's Avatar
 
Join Date: Feb 1999
Location: North America
Posts: 8,124
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 102
Replying to myself, about how this relates to the scanning backlight:

About item (3) in my post where it says:
Quote:
(3) Eye tracking accuracy improves when tracking a sharp object rather than a blurry object.
This refers not only to real life, but also to displays -- including the scanning backlight. It needs further explanation to be understood better. See below.

It's worth noting that phenomenon (3) also helps partially explain some of the benefit afforded by shorter strobes (1/240 ... 1/480 ... 1/960 second) (single strobe per unique frame), given the same scene at the same speed of motion. The shorter strobes sharpen the moving imagery and make it easier to track the pan. So a fast-panning scene is easier to track with a scanning backlight than without one (just like moving objects are easier to track on CRT than on LCD). The shorter the strobes, the easier it is to track the fast-panning scene. Tracking-based motion blur becomes less with shorter strobes (one strobe per unique frame), regardless of how random and inaccurate your tracking is. The reduced tracking-based motion blur makes it easier to track moving objects, reducing tracking-based motion blur even further! The benefits feed upon themselves, stretching the point of diminishing returns further down the curve.

If you see the same scene on CRT versus ordinary LCD, you already understand some of this effect. It's easier to track fast-moving objects on a CRT, confirming the assertion in (3). (Try watching sports, or try playing videogames that run at a full 60fps on a 60Hz display.) The motion blur difference is quite dramatic when you do a side-by-side test with LCD next to CRT.

There is actually a continuous spectrum (of progressive motion blur reductions) between a continuously-lit display and a strobed display. A scanning backlight with progressively shorter strobes (single strobe per refresh) gradually crosses the bridge from simple LCD all the way to CRT -- essentially a continuous spectrum of motion-blur reduction between a store-and-hold display and a CRT. There is a gradual decrease in motion blur the closer you get to CRT-short strobes and away from ordinary-LCD continuous illumination. As in my examples in previous posts, there is a point of diminishing returns, and the thickness of the motion blur becomes less and less, until it reaches "CRT perfect-looking" sharp motion. People can clearly tell that a 120Hz LCD (or equivalently, 50% strobes at 60Hz -- the same motion blur equivalence) still isn't even approaching "CRT-sharp" motion. Having observed many displays, I find the spectrum of motion blur reduction is quite continuous from LCD all the way towards CRT as the strobes become shorter (single strobe per frame), reducing tracking-based blur. Finally, with sufficiently short strobes (one strobe per native refresh), your strobes are emulating CRT phosphor decay, and you're getting the same sharp motion that you get on CRT.

Then it is possible to go beyond CRT, as I explained in my original posts in this thread, simply by using shorter strobes than a phosphor decay (with the caveat that this requires an incredibly bright backlight flash to compensate for the relatively long dark period). Whether it is worthwhile to go beyond CRT with even shorter strobes depends on the point of diminishing returns, but it's possible given the right variables, and there's nothing technically stopping a scanning backlight from using shorter strobes than the phosphor decay of a CRT.

Relative to a continuously-lit LCD, the Arduino scanning backlight aims for at least a 90% reduction in tracking-based motion blur. (The reduction is constant regardless of how random or inaccurate your eye tracking is: a fast flit of gaze, a slow eye movement, a wrong-direction movement, a herky-jerky movement -- whatever tracking-based motion blur occurs, in whatever direction, 90% of it is eliminated by the short strobe.) This also dramatically makes it easier to track fast-moving objects in the video games I play, due to frames having 1/10th as much blur. It is known that it's easier to track sharp objects than blurred objects, and this benefit feeds back upon itself as the scanning backlight (single strobe per refresh) sharpens moving objects into a shorter sample prone to less tracking-based motion blur. The benefit feeds back on itself that way, pushing the point of diminishing returns further down the curve, and the benefits still clearly show all the way down to fast single 1/1000 second strobes per refresh (CRT league) -- perhaps slightly faster strobes, to compensate for the bleed/diffusion issue. With the correct variables in a scanning backlight, the limit doesn't stop here: you can go beyond CRT with even less motion blur than CRT. (Is it worth it to go beyond CRT? Probably not. But the fact remains: it is possible, given the right variables.)
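The 90% figure is just duty-cycle arithmetic: blur reduction versus a continuously-lit display equals one minus the fraction of each refresh the backlight is lit. A quick hypothetical check (assumed strobe lengths at 60Hz):

Code:
#include <cstdio>

// Blur reduction vs continuous illumination = 1 - duty cycle,
// where duty cycle = strobe length / refresh period (one strobe per refresh).
int main() {
    const double refreshHz   = 60.0;
    const double strobeSec[] = { 1.0/120, 1.0/480, 1.0/960 };
    for (double t : strobeSec) {
        double duty = t * refreshHz;   // fraction of each frame the backlight is lit
        std::printf("strobe 1/%.0f s -> duty %6.2f%% -> blur reduced %5.2f%%\n",
                    1.0 / t, duty * 100, (1 - duty) * 100);
    }
    // 1/120 -> 50%; 1/480 -> 87.5%; 1/960 -> 93.75%. Hitting "at least 90%"
    // at 60Hz therefore needs strobes of about 1/600 sec or shorter.
    return 0;
}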

Thanks,
Mark Rejhon



BlurBusters Blog -- Eliminating Motion Blur by 90%+ on LCD for games and computers

Rooting for upcoming low-persistence rolling-scan OLEDs too!

Mark Rejhon is offline  
post #27 of 47 Old 09-20-2012, 10:13 PM
AVS Special Member
 
borf's Avatar
 
Join Date: Oct 2003
Posts: 1,172
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 14
Reading your posts Mark, I feel like I'm reading three people at once.
Quote:
Originally Posted by Mark Rejhon View Post

The reduced tracking-based motion blur makes it easier to track moving objects, reducing tracking-based motion blur even further! The benefits feed upon themselves, stretching the point of diminishing returns further down the curve. Is it worth it to go beyond CRT? Probably not. But the fact remains: it is possible, given the right variables.

Never thought of this synergetic effect. But suppose surpassing CRT only returned theoretical benefits -- as they say, aim for the sky and at worst land among the stars.

Quote:
Originally Posted by Mark Rejhon View Post

Many gamers agree that motion looked better on CRT... don't let the display add blur for us; let our human eyes add blur naturally. As long as there are no noticeable stroboscopic effects...

I agree, but I'll be less diplomatic than you. Adding extraneous blur to make games more "natural", or claiming LCD hold blur is more "natural" than a pulsing CRT in vsync, is bogus. Extraneous blur in any form was/is a myth created by console game developers to blur away pathetically low framerates (and the accompanying judder). It was perpetuated by folks who never saw a properly set up CRT and wouldn't understand how to do it anyway. Yes, a CRT will strobe unless FPS = Hz (as it should be) -- in which case the motion is more natural than any display to date. I refer to the almost complete lack of 1) sample & hold blur (unnatural), 2) judder (unnatural), 3) flicker (unnatural) -- if above 75Hz -- and 4) interpolation artifacts (unnatural).


Quote:
Originally Posted by guidryp View Post

What? Motion Blur IS flicker fusion.

I am looking at it like this: flicker fusion (Hz) might be the rate at which the backlight pulses, but hold time is the duration this goes on for. If the duration is a small fraction of the total frame, he can flash at a rate of a million Hz if he wants (way beyond flicker fusion). This is only one scenario to reduce hold-time blur (involving a static frame) of the many Mark listed.

Quote:
Originally Posted by xrox View Post

But as you pointed out, if you scan AND strobe then you can surpass 1ms (without including cross-talk).

Great!... let us defer to Mark's experimentation if and when he is so inclined... I have to stop reading these long posts.

This last question, for anybody, is simple: is liquid crystal response time really fast enough for a project like this? 120...240...480...960 Hz -- these would require response times down to about 1ms, but I've never seen a TN matrix below 2.4ms average -- and even those have massive response time errors. Have I missed something?

response time avg 2.4ms: Samsung SyncMaster SA950 Monitor: 3D Beauty



Response time compensation error (percent): Samsung SyncMaster SA950 Monitor: 3D Beauty

borf is offline  
post #28 of 47 Old 09-21-2012, 07:41 AM - Thread Starter
AVS Special Member
 
Mark Rejhon's Avatar
 
Join Date: Feb 1999
Location: North America
Posts: 8,124
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 102
Quote:
Originally Posted by borf View Post

I agree, but I'll be less diplomatic than you. Adding extraneous blur to make games more "natural", or claiming LCD hold blur is more "natural" than a pulsing CRT in vsync, is bogus. Extraneous blur in any form was/is a myth created by console game developers to blur away pathetically low framerates (and the accompanying judder). It was perpetuated by folks who never saw a properly set up CRT and wouldn't understand how to do it anyway. Yes, a CRT will strobe unless FPS = Hz (as it should be) -- in which case the motion is more natural than any display to date. I refer to the almost complete lack of 1) sample & hold blur (unnatural), 2) judder (unnatural), 3) flicker (unnatural) -- if above 75Hz -- and 4) interpolation artifacts (unnatural).
I understand why game makers add artificial GPU-generated motion blur to some of their games, and I can see the benefit. It works great when games run at roughly 24fps-30fps. It makes the games look like a movie. The artistic effect is acceptable, and it DOES mask judder/stutters (a more artificial but unavoidable artifact), so I agree with you there about adding GPU real-time blur effects to 3D videogame frames.
...However, I like it when the game gives me the choice to turn off this extraneous blur. When I'm able to run at a solid 60fps @ 60Hz with rare judders/stutters, I like to turn it off and get the more 'real-life' feel instead of the 'movie' feel.
...Note: While the 'movie' feel is fun for many, it is not liked by competition FPS gamers: in a war-type shooter, you can't identify faraway snipers while running/turning fast, and you're forced to stop turning/moving (to stop the screen blurring) in order to check for faraway enemies such as hard-to-see snipers. I am not a professional-league competition gamer, but I understand why several like to see their games run at insane framerates (the excess overhead keeps slow 3D scenes fast, excess framerate reduces input lag, etc.) and low input lag, and why some of them even still stick to CRT (e.g. Sony FW900). With CRT, perfect-looking motion gives quicker enemy identification; reaction time improves; shoot a millisecond before they shoot, and you win in a war-type videogame. In gaming competition, they're shaving milliseconds, just like sprinters at the Olympics.
Quote:
This last question, for anybody, is simple: is liquid crystal response time really fast enough for a project like this? 120...240...480...960 Hz -- these would require response times down to about 1ms, but I've never seen a TN matrix below 2.4ms average -- and even those have massive response time errors. Have I missed something?
response time avg 2.4ms: Samsung SyncMaster SA950 Monitor: 3D Beauty

Response time compensation error (percent): Samsung SyncMaster SA950 Monitor: 3D Beauty
Oh, good news! It confirms what I already expected -- these graphs show that LCD's today are already fast enough, at least to do a nearly-perfect CRT simulation for strobing (and surpass CRT in the specific metric of motion blur!), at least for a 60Hz refresh (and at 120Hz with minimal crosstalk artifacts in certain colors, like the image bleed between 3D shutters).

Why? I don't need to strobe until near the end of an LCD refresh.
Example of one 16.666ms refresh at 60Hz (1/60 = ~16.666ms)
T+0ms = LCD monitor begins refreshing pixel (unseen in the dark)
T+2.4ms = Average LCD pixel response (unseen in the dark)
T+2.4ms = Probable average start of pixel ripple/bounce (RTC error recovery) (unseen in the dark)
T+15ms = Slowest GTG combination is finished, according to the Samsung graph (unseen in the dark)
T+15.5ms = Strobe the backlight for 1ms or 0.5ms (say, 1/960 or 1/1920 second) (seen by human eye)
T+16.666ms = Next LCD monitor pixel refresh begins (unseen in the dark)
etc.
Voilà. Pixel response is no longer a motion-blur barrier.

The above is per segment of LCD, for a scanning backlight (full-screen-width segments), with strobe timing/length adjustable in the Arduino. Assuming segments are 1/16 of screen height and it takes 1ms for the LCD controller to refresh from the top edge of a segment to the bottom edge, the numbers above have an inaccuracy of 1ms. (In reality, 120Hz LCD's may refresh 60Hz frames at the same scan speed as 120Hz -- this happens in practice sometimes -- which would reduce the inaccuracy to 0.5ms.) So I may have to strobe slightly sooner, e.g. T+14ms, to prevent bleeding into the next refresh. The smaller the segments, the more accurately I can stay in sync with the scan.

As you can see from the long bars in the response graph, even the "slow GTG combinations" will reach a 90-95% accurate color within a few milliseconds; it's the remaining several milliseconds where the LCD pixel slowly inches 98%...99%...100% of its way to the final color value. So artifacts will be faint, much like crosstalk between the two images during 3D active shutter operation, and will only cause a faint, sharp trailing image in high-contrast scenes (the same scenes that cause crosstalk problems with 3D active shutter glasses). It might even be less objectionable than phosphor ghosting during fast CRT motion on average-persistence CRT computer monitors. Also, notice the tall 15ms GTG bar: observe carefully that it only occurs for a transition from one nearly-white color to another nearly-white color (e.g. IRE 96 to IRE 98) (per color component, of course: R, G, B) -- a transition that won't be noticed except in scenes prone to crosstalk during 3D active shutter glasses.

So, as you can understand, for this specific monitor with this graph (assuming it's accurate), it's quite possible to surpass CRT in motion blur. Just strobe the backlight slightly later. I can also make this adjustment much sooner (T+4ms) for less input lag, in exchange for a few minor increased crosstalk artifacts (a trailing ghost image) in certain GTG combinations -- it would look exactly like the image bleed between the two eyes during 3D active shutter glasses operation. (T+4ms would result in an input lag of 2-3ms, not 4ms, because we need to account for the LCD pixel response immediately after T+0ms.) The benefit gained in gaming outweighs this tiny added input lag. The compromise setting would likely be acceptable to many, covering most GTG combinations, as long as the crosstalk artifacts prove acceptable. (Crosstalk = faint ghost afterimage while in motion.)

I can adjust the "wait-till-strobe" variable in my Arduino, as part of my planned "strobe phase" adjustment. At 60Hz with a 15ms wait-till-strobe, I anticipate fewer crosstalk artifacts than during 3D shutter glasses operation -- this is a lower refresh rate than the 120Hz used for 3D shutter glasses. I will probably eventually find a compromise refresh rate (e.g. 85Hz) around my personal flicker fusion "comfort" threshold. For less input lag, I would adjust the phase earlier until I could no longer tolerate the crosstalk. The benefits of being able to identify videogame enemies while in motion, and the enjoyment of CRT-sharp blur-free motion, will outweigh the 2-3ms extra input lag and the minor crosstalk between consecutive refreshes. (Crosstalk shows only in motion, as a very faint, sharp ghost lagging behind high-contrast boundaries. Adjusting the variables will make the faint trailing ghost weaker/stronger.)
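For concreteness, here is roughly how that "wait-till-strobe" (strobe phase) logic could look in firmware. This is an untested sketch of the idea only -- the pin assignments, the 16-segment MOSFET wiring, active-high drive, and the hard-coded 60Hz timing constants are all assumptions; a real version would measure the VSYNC period and expose the phase/length adjustments:

Code:
// Untested sketch: each of 16 backlight segments flashes once per 60Hz refresh,
// strobePhaseUs after VSYNC plus its scan offset, so the flash lands after that
// segment's pixels (and RTC bounce) have settled. Never more than one strobe
// per segment per unique frame.
const int NUM_SEGMENTS = 16;
const int SEGMENT_PINS[NUM_SEGMENTS] = {   // assumed MOSFET gate pins (Mega-style)
  22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37 };

const unsigned long FRAME_US = 16667;                    // 60Hz refresh period
const unsigned long STEP_US  = FRAME_US / NUM_SEGMENTS;  // ~1041us per segment

unsigned long strobePhaseUs = 15000;  // "wait-till-strobe" after VSYNC (adjustable)
unsigned long strobeLenUs   = 1000;   // ~1/1000sec flash per segment (adjustable)

volatile unsigned long lastVsyncUs = 0;
void onVsync() { lastVsyncUs = micros(); }

void setup() {
  for (int i = 0; i < NUM_SEGMENTS; i++) {
    pinMode(SEGMENT_PINS[i], OUTPUT);
    digitalWrite(SEGMENT_PINS[i], LOW);   // backlight dark by default
  }
  pinMode(2, INPUT);                      // VSYNC, level-shifted to 5V logic
  attachInterrupt(0, onVsync, RISING);
}

void loop() {
  noInterrupts();
  unsigned long vsync = lastVsyncUs;      // atomic copy of the 4-byte timestamp
  interrupts();

  // Time since VSYNC, shifted back by the strobe phase, wrapped to one frame.
  // Segment i's slot then starts at (strobePhaseUs + i*STEP_US) after VSYNC.
  unsigned long t = (micros() - vsync + FRAME_US - (strobePhaseUs % FRAME_US)) % FRAME_US;
  int seg = t / STEP_US;
  if (seg >= NUM_SEGMENTS) seg = NUM_SEGMENTS - 1;  // guard rounding at frame edge

  static int lastSeg = 0;
  if (seg != lastSeg) {
    digitalWrite(SEGMENT_PINS[lastSeg], LOW);  // ensure the previous segment is off
    lastSeg = seg;
  }
  // Light each segment only for the first strobeLenUs of its time slot:
  digitalWrite(SEGMENT_PINS[seg], (t % STEP_US) < strobeLenUs ? HIGH : LOW);
}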

RTC errors will not be a major problem for my scanning backlight on this specific monitor -- I simply pulse later. I wait for RTC errors to settle, so by T+15ms the RTC errors have all but disappeared. Fortunately, comparing the two graphs, most RTC errors only affect the faster GTG combinations, which gives plenty of time for the errors ("pixel bounce") to settle before the next refresh begins. So all GTG combinations are covered for a scanning backlight! They have to be, anyway, for 3D active shutter glasses, or we wouldn't be doing 3D today with LCD's.

Also, I am already mindful of final-value errors, and that's why I will add plenty of adjustment range to easily strobe later in the refresh. I will want to try with and without RTC, to see which looks better. If I strobe too early, I anticipate RTC errors will start to show up as more inaccurate/weird grayscale with my scanning backlight. For this project, I'm not presently concerned about a 5-10% degradation in color quality in exchange for a "CRT perfect" lack of motion blur in videogames -- this is partially an academic exercise to surpass CRT quality in ONE metric, so this is not necessarily a color-quality-preservation exercise for the first prototype. (Hopefully, it's a good enough "wow" in clear motion to become my primary video gaming monitor!) That said, I'm aware of potential interactions with the timings of my scanning backlight.

Thank you for the exciting graphs. It shows me excellent news about today's LCD's!

Thanks,
Mark Rejhon



BlurBusters Blog -- Eliminating Motion Blur by 90%+ on LCD for games and computers

Rooting for upcoming low-persistence rolling-scan OLEDs too!

Mark Rejhon is offline  
post #29 of 47 Old 09-21-2012, 10:20 AM
AVS Special Member
 
borf's Avatar
 
Join Date: Oct 2003
Posts: 1,172
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 14
Quote:
Originally Posted by Mark Rejhon View Post

I understand why game makers add artificial GPU-generated motion blur to some of their games, and I can see the benefit. It works great when games run at roughly 24fps-30fps. It makes the games look like a movie. The artistic effect is acceptable, and it DOES mask judder/stutters (a more artificial but unavoidable artifact), so I agree with you there about adding GPU real-time blur effects to 3D videogame frames.

I think you misunderstood Mark...I dislike artificial motion blur to put it mildly. But anyway,
Quote:
Originally Posted by Mark Rejhon View Post

..notice the tall 15ms GTG bar: observe carefully that it only occurs for a transition from one nearly-white color to another nearly-white color (e.g. IRE 96 to IRE 98) (per color component, of course: R, G, B) -- a transition that won't be noticed except in scenes prone to crosstalk during 3D active shutter glasses.
Quote:
Originally Posted by Mark Rejhon View Post

..If I strobe too early, I anticipate RTC errors will start to show up as more inaccurate/weird grayscale with my scanning backlight. For this project, I'm not presently concerned about a 5-10% degradation in color quality in exchange for a "CRT perfect" lack of motion blur in videogames.

Thank you. So the slower grey-to-grey transitions are fortunately the least noticeable.
I would think RTC errors may increase at higher Hz, though, and become the limiting factor.
Quote:
Originally Posted by Mark Rejhon View Post

So all GTG combinations are covered for a scanning backlight! They have to be, anyway, for 3D active shutter glasses, or we wouldn't be doing 3D today with LCD's. Thank you for the exciting graphs. It shows me excellent news about today's LCD's!

You are too kind. Not doubting you; I was just asking for my own benefit, as I have never seen response times truly worthy of 480 or 960Hz, which is about ~1ms average response time. I will just assume everybody else has, and with that, will erase my last doubt (retaining some healthy skepticism, of course!)
borf is offline  
post #30 of 47 Old 09-21-2012, 11:08 AM - Thread Starter
AVS Special Member
 
Mark Rejhon's Avatar
 
Join Date: Feb 1999
Location: North America
Posts: 8,124
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 102
Quote:
Thank you. So the slower grey-to-grey transitions are fortunately the least noticeable.
I would think RTC errors may increase at higher Hz, though, and become the limiting factor.
Perhaps. Experimentation is needed!

Apparently RTC errors "recover" quickly enough for 3D active shutter glasses. (I interpret 3D shutter glasses as convincing proof that RTC errors are a nonissue by the end of an LCD refresh -- otherwise, how would 3D be possible with a widespread 50-80% RTC error bleeding into the next eye?) Where there's an 80% overshoot error at, say, 2.4ms, it could be less than 1% error by 15ms. So the RTC errors in my scanning backlight won't be as extreme as the values in the graph. Notice that the big RTC errors don't occur on the slow-responding GTG combinations; they occur on the fast-responding ones -- leaving a healthy safety margin for the RTC error to disappear. That's the "pixel color bounce": the pixel overshoots by the percentage shown in the graph, then bounces back, and it has already finished bouncing back by the end of the refresh, or 3D wouldn't have been possible.

By the end of the refresh cycle, the pixels are pretty much at their final values, and I am fairly certain the remaining side effects are acceptable: the lingering trailing-ghost (and distorted greyscale) artifact would merely look similar to the double-image crosstalk found when using 3D glasses -- both occur only in high-contrast scenes. Pixel color errors would be single-digit percentages at worst (only with the worst color boundaries) and fractions of a percent at best (imperceptible with most color boundaries), much like 3D crosstalk. I expect it to become worse at higher refresh rates, since increasing the refresh rate does not correspondingly speed up the LCD pixels -- but it's apparently already good enough for 3D glasses at 120Hz, so it has ceased to be the limiting factor.

It'll only get better from here. I've read about developments claiming 1ms average response, and about accurate bidirectional GTG drive that maintains even better accuracy in both the darkening and brightening directions, with precisely controlled positive and negative voltages for less error. Such developments are done partly to reduce 3D crosstalk on LCD. They'll only benefit scanning backlight designs like this in the future.
Quote:
You are too kind. Not doubting you; I was just asking for my own benefit, as I have never seen response times truly worthy of 480 or 960Hz, which is about ~1ms average response time. I will just assume everybody else has, and with that, will erase my last doubt (retaining some healthy skepticism, of course!)
Understandable! smile.gif
Scanning backlights with 90% blur reduction just haven't been done commercially before, for many reasons: the insane backlight brightness required (a 10-times-brighter backlight), which wasn't cheaply possible before; the only-recent introduction of LCD's fast enough for 3D; and other factors such as the annoying flicker of such short strobes (solved by going to a higher refresh rate, even 120Hz -- it would be like 120Hz PWM flicker, not too different from those old 180Hz PWM CCFL backlights). All the magic pieces of the puzzle are here now.
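For anyone wondering where the brightness figure comes from, it's the reciprocal of the duty cycle: average light output = instantaneous output x fraction of time lit, so matching a continuously-lit backlight's brightness requires boosting the LEDs by 1/duty while they're on. A hypothetical check (assumed strobe lengths at 60Hz):

Code:
#include <cstdio>

// Matching a continuously-lit backlight's average luminance requires an
// instantaneous brightness boost of 1/duty (duty = fraction of frame lit).
int main() {
    const double refreshHz   = 60.0;
    const double strobeSec[] = { 1.0/240, 1.0/480, 1.0/960 };
    for (double t : strobeSec) {
        double duty = t * refreshHz;
        std::printf("strobe 1/%.0f s -> duty %5.2f%% -> needs %4.1fx brighter LEDs\n",
                    1.0 / t, duty * 100, 1.0 / duty);
    }
    // 1/240 -> 4x, 1/480 -> 8x, 1/960 -> 16x brighter while lit:
    // hence the need for an order-of-magnitude brighter LED array.
    return 0;
}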

Thanks,
Mark Rejhon



BlurBusters Blog -- Eliminating Motion Blur by 90%+ on LCD for games and computers

Rooting for upcoming low-persistence rolling-scan OLEDs too!

Mark Rejhon is offline  