AVS Forum


Premium Member · 4,176 Posts · Discussion Starter #1
Goal: Create a home-made scanning backlight as a mod for an existing LCD monitor, with the ability to reduce motion blur to the point where it is less than on a CRT.


After an endorsement by John Carmack of id Software, I decided to proceed with this project:
Quote:
Mark Rejhon @mdrejhon

@ID_AA_Carmack I'm researching home-made Arduino scanning backlight (90%:10% dark:bright) using 100-200 watts of LED's. 120hz.net/showthread.php…

John Carmack @ID_AA_Carmack

@mdrejhon Good project. You definitely want to find the hardware vsync, don't try to communicate it from the host.
(Please see the project info before you say "it can't be done")



I intend to proceed with this experiment soon, to allow an LCD to have less motion blur than a CRT. This requires a scanning backlight that's dark 90-95% of the time, reducing motion blur by 90-95% without motion interpolation and without more than about 3-4ms of added input lag. This is quite extreme and requires a lot of LEDs (a 10-20x brighter backlight to compensate for the very long dark periods between refreshes). Fortunately, common 5-meter LED ribbons, often used for accent lighting, have made it cheap to buy 200 watts of LEDs and cram all of them behind a 24" monitor, illuminating only 20 watts at a time for a scanning backlight. The best scanning backlights in today's industry (e.g. Samsung/Sony/Elite) are dark only approximately 75% of the time.


I've designed a draft schematic. There may be errors, and there's no protection (e.g. overcurrent, overvoltage, etc), but this shows how relatively simple an Arduino scanning backlight really is. Most of the complexity is in the timing and synchronization -- still relatively simple Arduino programming.




Full size version: LINK


No modification of monitor electronics is required. I only need to know the VSYNC signal timing. Simple manual calibration adjustments can adjust the phase of the scanning and compensate for input lag. This can be a one-time step (for a given video mode) -- not too different from a 3D shutter glasses crosstalk adjustment procedure.


[EDIT: This is an old post from 2012, archived for historical reasons -- Arduino Scanning Backlight on Blur Busters Forums .]
 

Premium Member · 4,176 Posts · Discussion Starter #2
For the Arduino scanning backlight, there are specific requirements I need to research -- e.g. creating a small-scale breadboard trailblazer for this project. I've built electronics before, and I have programmed for more than 20 years, but this will be my first Arduino project. I've been researching, including the Arduino itself, to determine the best way to program it for a scanning backlight experiment.

Goals For Scanning backlight:


- At least 8 segments.

- Reduce motion blur by 90%. (Ability to be dark 90% of the time)

- Tunable in software. (1/240, 1/480, 1/960, and provisionally, 1/1920)

- Manual input lag and timing adjustment.

___

1. Decide a method of VSYNC detection.


Many methods possible. Will likely choose one of:

....(software) Signalling VSYNC from computer, using DirectX API RasterStatus.InVBlank() and RasterStatus.ScanLine .... (prone to CPU and USB timing variances)

....(hardware) Splicing the video cable and using a VSYNC-detection circuit (easier with VGA, harder with HDMI/DP, not practical with HDCP)

....(hardware) Listen to 3D shutter glasses signal. It's conveniently synchronized with VSYNC. (however, this may only work during 3D mode)

....(hardware) Last resort: Use oscilloscope to find a "VSYNC signal" in my monitor's circuit. (very monitor-specific)


Note: Signalling the VSYNC from the host is not recommended (John Carmack said so!), likely due to variances in timing (e.g. CPU, USB, etc). Variances would interfere, but this approach gives maximum flexibility for switching monitors in the future, and makes the project monitor-independent. I could stamp microsecond timecodes on the signal to compensate (RasterStatus.ScanLine may play a role in 'compensating'). In this situation, an LCD monitor's natural 'input lag' plays in my favour: it gives me time to compensate for delays caused by timing fluctuation (waiting shorter or longer until 'exactly' the known input lag). I can also run averaging algorithms over the last X refreshes (e.g. 5 refreshes) to keep things even more accurate. The problem is that Windows is not a real-time operating system, and there's no interrupt/event on the PC to catch InVBlank behavior. Another idea is semi-randomly reading "ScanLine" and semi-randomly transmitting it (with a USB-timing-fluctuation-compensation timecode) to the Arduino, letting the Arduino calculate the timings needed. This is far more complex software-wise, but far simpler and more flexible hardware-wise, especially if I want to be able to test multiple different LCDs with the same home-made scanning backlight.

___

2. Verify the precision requirements that I need.


- What are the precision requirements for length of flashes (amount of time that backlight segment is turned on)

- What are the precision requirements for sequencing (lighting up the next segment in a scanning backlight)

- What are the precision requirements for VSYNC (beginning the scanning sequence)


Milliseconds, or microseconds? Experimentation will be needed. People familiar with PWM dimming already know that microseconds matter a great deal here. Scanning backlights need to be run very precisely: sub-millisecond jitter _can_ be visually noticeable, because a 1.0 millisecond versus 1.1 millisecond variance means the light is 10% brighter! That 0.1 millisecond makes a mammoth difference. We don't want annoying random flicker in a backlight! It's the same principle as PWM dimming -- if the pulses are even just 10% longer, the light is 10% brighter, even when the pulses themselves are tiny (1ms versus 1.1ms). Even though we're talking about timescales normally not noticeable to the human eye, precision plays an important role here, because the many repeated pulses over a second _add_ up to a very noticeably brighter or darker picture. (120 flashes of 1.0 millisecond equals 120 milliseconds of light, but 120 flashes of 1.1 milliseconds equals 132 milliseconds.) So we must be precise here; pulses must not vary from refresh to refresh. However, we're not too concerned with the starting brightness of the backlight -- if the backlight is 10% too dim or too bright, we can deal with it; it's the consistency between flashes that matters more. The length of the flash is directly related to the reduction in motion blur -- the shorter the flash, the less motion blur -- and since we're aiming for a 1/960th second flash (with a hopeful 1/1920th second capability), that's approximately 1 millisecond.


As long as the average brightness remains the same over approximately a flicker fusion threshold (e.g. ~1/60sec), variances in the flicker timing (VSYNC, sequencing) aren't going to be as important as the precision of flashes, as long as the flashes get done within the flicker fusion threshold. There may be other human vision sensitivities and behaviors I have not taken into account, so experimentation is needed.


Estimated precision requirements:

Precision for length of flashes: +/- 0.5 millisecond

Precision for consistency of length of flashes: +/- one microsecond

Precision for sequencing: +/- somewhere less than 1/2 the time of a refresh (e.g. (1/120)/2 = 4 milliseconds)

Precision for VSYNC timing: +/- somewhere less than 1/2 the time of a refresh (e.g. (1/120)/2 = 4 milliseconds)


The goal is to better these requirements by an order of magnitude, as a safety margin for more sensitive humans and for errors. That means the length of flashes would be precise to 0.1 microseconds.

This appears doable with Arduino. Arduinos are already very precise and very synchronous-predictable; Arduino projects include TV signal generators -- THAT requires sub-microsecond precision for good-looking vertical lines in a horizontally-scanned signal.

Example: http://www.javiervalcarce.eu/wiki/TV_Video_Signal_Generator_with_Arduino

___

3. Arduino synchronization to VSYNC


...(preferred) Arduino Interrupt method. attachInterrupt() on an input pin connected to VSYNC. However, at 120Hz, the VSYNC pulse is less than a millisecond long, so I'll need to verify that I can detect such short pulses via attachInterrupt() on Arduino. Worst comes to worst, I can add a simple toggle circuit inline with the VSYNC signal, so that the signal changes only 120 times a second (e.g. on for even refreshes, off for odd refreshes), a frequency low enough to be detectable using Arduino. attachInterrupt() can interrupt any in-progress delays, so this is convenient, as long as I don't noticeably lengthen the delay beyond my precision requirements.

...(alternate) Arduino Poll method. This may complicate precise input lag compensation, since I essentially need to do 2 things at the same time precisely (one task for precise VSYNC polling and input lag compensation, the other for precise scanning backlight timing). I could use two Arduinos running concurrently, side by side -- or run an Arduino along with helper chips such as an ATtiny -- to meet my precision requirements for the 2 precise tasks.


I anticipate being able to use the Interrupt method, but will keep the poll method as a backup plan.

___

4. Dimming ability for scanning backlight


...(preferred) Voltage method. A voltage-adjustable power supply to the backlight segments. (Note: A tight voltage range can dim LED's from 0% through 100%)

...(alternate) PWM method. Dimming only during the time a backlight segment is considered 'on'. e.g. a 1/960th second flash would use microsecond delays to PWM-flicker the light over the 1/960th second flash, for a dimmed flash. A tight PWM loop on an Arduino is capable of microsecond PWM (it can do it -- Arduino software is already used as a direct video signal generator).


The dimming of the backlight shouldn't interfere with its scanning operation. Thus, the simplest method that does not interfere is to use a voltage-controlled power supply that can dim the LEDs simply using voltage. Adding PWM to a scanning backlight is far more complicated (especially if I write it as an Arduino program), since I can only PWM during the intended flash cycle; otherwise I lose the motion-blur-eliminating ability.

___

5. Adjustable Input lag compensation


...(preferred) Use the Arduino micros() function to start a scanning sequence exactly X microseconds after the VSYNC signal.


Hopefully this can be done in the same Arduino, as I have to keep completing the previous scanning backlight refresh sequence (1/120th second) while receiving a VSYNC signal. Worst comes to worst, I can use two separate Arduinos, or an Arduino running along with an ATtiny (one for precisely listening to VSYNC and doing input lag compensation, the other for precise backlight sequencing). If I use attachInterrupt() for the VSYNC interrupt on Arduino, I can capture the current micros() value and save it to a variable, wait for the current scanning-backlight sequence to finish, and then start watching micros() to time the next scanning backlight refresh sequence.


___

6. Precise sequencing of backlight segments.


...(preferred) Tiny delays are done on Arduino with delayMicroseconds(). Perfect for sequencing the scanning light segments. Turn one backlight segment on, delay, turn off, repeat for next backlight segment.

...(alternate) Use the PWM outputs (six of them) of an Arduino, or use a companion component to do the pulsing/sequencing for me. These PWM outputs can be configured to pulse in sequence. However, these outputs won't give me the precision needed for a highly-adjustable scanning backlight capable of simulating "1920Hz".


The tiny delays on the Arduino are currently my plan. I also need to do input lag compensation, so I have to start sequencing the backlight at the correct time delay after a VSYNC. I am also aware that interrupt routines (attachInterrupt()) will lengthen an in-progress delay, but I plan to keep my interrupt very short (less than 0.5 microsecond execution time, see precision requirements above) to make this a non-issue.


Even though my goal is "960Hz" equivalence, I want to be able to play with "1920Hz" equivalence just for experimentation and overkill's sake, and to literally "pwn" the "My LCD is better than CRT" prize, even though it will probably require a 200-watt backlight to do so without a dim picture.

___

Likely Steps


-- The next step is to download an electronics schematic creator program and create the schematic diagram [DONE].

-- Emulate, if needed. Virtual Breadboard ( http://www.virtualbreadboard.com/ ) is an electronics circuit simulator that includes an Arduino emulator. Although it won't be timing-precise, it would at least allow me to visually test the code in slow-motion mode for verification of behavior, even before I buy the parts.

-- After that, breadboard a desktop prototype with 8 simple LEDs -- more like a blinky toy -- that can run at low speed (human-visible speeds) and/or high speed (scanning backlight).

-- Finally, choose the first computer monitor to hack apart. Decide whether to take apart my old Samsung 245BW (72Hz limit) or buy a good high-speed panel (3D 120Hz panel). My Samsung is very easy to take apart, and it is disposable (I want to replace it with a Catleap/******** 1440p 120Hz or similar within two or three months), so it is a safe 'first platform' to test on. Even though its old technology means its response speed will cause more ghost after-images than today's 3D 120Hz panels, it will at least allow a large amount of testing before risking a higher-end LCD.

-- Create a high-power backlight (200 watts). This will be the fun part of the project: buying 20 meters of 6500K LED tape and cramming all 2,400 LEDs into a 2-foot-wide 16:9 rectangle (suitable for 24"-27" panels). This might be massive overkill, but I want to eventually nail the "1920Hz"-equivalence "My LCD is better than CRT" prize. Only 10-20 watts of LEDs would be lit up at a time, anyway. Appropriate power supply, switching transistors for each segment (25+ watt capable), etc. Attach it to the Arduino outputs, put LCD glass in front, and tweak away.

___


Although I do not expect many people here to be familiar with Arduino programming, I'd love comments from anybody who is, to tell me of any technical Arduino gotchas I should be aware of.


[EDIT: This is an old post from 2012, archived for historical reasons -- Arduino Scanning Backlight on Blur Busters Forums .]
 

Premium Member · 4,176 Posts · Discussion Starter #3
Someone emailed me asking about 200 watts being insane power consumption.

The average power consumption would actually only be ~10 watts (if illuminating a 5% section at a time), or ~20 watts (if illuminating a 10% section at a time).


P.S. I don't mean superior to CRT in all metrics. There will always be professional studio-league LCD monitors that have better color. However, one metric that has not been adequately addressed is motion blur -- and that's the sole metric this scanning backlight aims to solve. (That said, adding this technology to a professional studio LCD monitor is potentially useful.)
 

Registered · 1,170 Posts

Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22405849


....(hardware) Last resort: Use oscilloscope to find a "VSYNC signal" in my monitor's circuit. (very monitor-specific)

Timing cues from the monitor sound best to me. In addition to the timing variances you mentioned, input lag usually varies from 1 to 5 frames (16-80ms), so I don't think you can sync with DirectX and compensate for LCD lag with an averaging algorithm.
Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22405849


For example, a single 8ms refresh (1/120th second) for a 120Hz display can be enhanced with a scanning/strobed backlight:

2ms -- wait for LCD pixel to finish refreshing (unseen, while in the dark)

5ms -- wait a little longer for most of ghosting to disappear (unseen, while in the dark)

1ms -- flash the backlight quickly. (1/960th or 1/1000th second -- or even 1/2000th second!)

In this scenario, each pixel (refreshing top to bottom) must sync to its own individual LED. Can this be done with "globally placed" LED strips? Otherwise there is a huge "fudge factor" when trying to illuminate crystals at full transition, as each color also has its own unique transition time. That's OK, if you accept the imprecise nature (reduced performance?) of backlight scanning. Just a thought -- an LCD with global refresh would eliminate the refresh timing issue without resorting to individual LEDs. You could then sync the backlight to the average color transition (not perfect).
Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22405849


Even though my goal is "960Hz" equivalence, I want to be able to play with "1920Hz" equivalence just for experimentation and overkill's sake

I'd stick with 960Hz, as 1920Hz would require 240 unique fps from the video card (8-strip backlight). You could choose not to raise the frame rate and flash each frame twice, but that increases average hold time and blur (sorry if I'm preaching to the choir, but you have probably played a 60fps game on a blur-free 120Hz CRT to see this phenomenon -- it is not subtle!). Then again, 240fps @ 1920Hz (8-strip scanning backlight) would be better theoretically.
 

Premium Member · 4,176 Posts · Discussion Starter #5

Quote:
Originally Posted by borf  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22407039


Timing cues from the monitor sound best to me. In addition to the timing variances you mentioned, input lag usually varies from 1 to 5 frames (16-80ms), so I don't think you can sync with DirectX and compensate for LCD lag with an averaging algorithm.
I'm not trying to solve the LCD lag via DirectX:


There are a few separate issues being solved here.

1. Input lag. It can be manually calibrated using a software slider. It would not be too different from a software slider for crosstalk calibration for computer 3D glasses (syncing shutter to a specific LCD). For a specific video mode, the input lag is fixed and microsecond-accurate, so this can be a one-time manual adjustment, on a motion test pattern with color patterns.

2. Listening for VSYNC is a separate problem from input lag, and can be solved independently.

3. Scanning speed within a refresh (e.g. the length of the actual scan, which may be done faster than the display cable's refresh). This, too, can be a one-time manual adjustment (for a specific video mode). I can also make it adjustable down to instantaneous (e.g. a full-panel strobe) for global-refresh panels (e.g. multiscan LCDs).


My goal is a reusable 24"-wide backlight panel that can be recycled with any hackable 24"-27" monitor for testing/experimentation; so I want to be as independent of the monitor electronics as possible, by providing the separate manual adjustments for input lag and intra-refresh scanning speed. 27" panels are 23.5" wide. Older LCD displays are easier to mod, since the backlight is separate from the glass, but newer LCD monitors often use laptop-style LCDs, which build a hard-to-remove backlight into the panel assembly.
Quote:
In this scenario, each pixel (refreshing top to bottom) must sync to its own individual led.. Can this be done with "globally placed" led strips. Otherwise there is a huge "fudge factor" if trying to illujminate crystals at full transition, as each color also has additionally a unique transition time. That's ok, if you accept the imprecise nature (reduced performance?) of back light scanning. Just a thought - an lcd with global refresh would eliminate the refresh timing issue without resorting to indivdual leds. You could then sync the backlight to the average color transition (not perfect).
Right. LCD pixels are continuously changing from one color to the next across consecutive frames, with most of the change completed in the first few (approx ~2) milliseconds and the transition virtually done towards the end of the cycle; so different parts of the LCD within the same scanning backlight segment shouldn't be too different. If I remember correctly, literature online shows that early scanning backlights in various 2006 computer monitors (which reduced motion blur by only ~25%) had only 4 or 8 segments (CCFL), and I'd be surprised if any had more than 8 segments. There may be about 2-3% variances in the colorscale, but I expect color variances less than the difference between, say, an IPS LCD and a TN LCD.
Quote:
I'd stick with 960hz as 1920hz would require 240 unique fps from the video card (8-strip back light). You could choose not to raise the frame rate and flash each frame twice, but that increases average hold time and blur (sorry if i'm preaching to the choir but you probably have played a 60fps game on a blur-free120hz crt to see this phenomenon - it is not subtle!) Then again 240fps @ 1920hz (8 strip scanning backlight) would be better theoretically..
I can still do 1920Hz-simulation with just 8 segments. I don't have to step the sequence exactly:

I'd just flash one segment for 1/1920 second, wait in the dark for 1/1920 second, flash the next segment, and so on. It'd simply look like the pulse width halved, as in PWM-dimming the picture by 50%. The shorter hold time of illumination gives 50% less motion blur. I'd be able to simulate arbitrary Hz-equivalences, such as 1345Hz-equivalence or 773Hz-equivalence, just by calculating how many segments to illuminate, and for how long each. For example, one segment illuminated for 1/773 second even as the next segment starts illuminating 1/960 second later, also for 1/773 second. That means sometimes one segment is illuminated at a time, and two consecutive segments at other times. And so on. 1/480 equivalence would be done by illuminating two segments at a time, at all times, sliding downwards in sync with the scan, and 1/240 equivalence by illuminating four segments at a time, sliding downwards in sync with the scan.


I plan to order parts and, within the next few weeks, do some kitchen-countertop prototyping and experimentation, plus some oscilloscope measurements (on the pulses, comparing them to light output using a fast-responding photocell). Once I've programmed the Arduino and verified correct scanning behavior, including the short pulses running in the correct sequence, I'll create the 200-watt backlight (note: 10-20 actual watts) out of LED ribbons and test my first LCD glass on it. And then find a sacrificial monitor (or a few) to test with!


Also, I may opt for 12 or 16 segments instead of 8, depending on the preliminary tests. Arduino analog pins may also be used as digital outputs, provided they are timing-accurate too. So the Arduino can digitally signal up to 19 outputs, but I need to keep one digital pin free for listening to VSYNC, and I also want to keep Tx/Rx free for real-time host communications (it also works over USB if I don't use those pins), which coincidentally leaves 16 free pins. Host communication is needed for PC-based reconfiguration of the scanning backlight, even if I ultimately don't use host communication for VSYNC (that route will at least be experimented with).


Also, a 24" panel will cover only ~10-11 segments of a 12-segment scanning backlight sized for a 27" panel. I can just use manual 'scanning speed' and 'scan until segment X' adjustments to compensate for an LCD panel shorter than the scanning backlight, which is designed to flexibly test multiple different 24"-27" LCD glass. It's simply a matter of math and Arduino programming, plus creating a software utility that makes adjustment easy.
 

Registered · 246 Posts

Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22411440


I'd just flash one segment 1/1920 second, wait in the dark 1/1920 second, flash the next segment, and so on.

It is quite pointless to aim for rates like this when the human visual system is your target. Everything we see is averaged over about 10-20 ms.


The highest limit for noticing any change with the human visual system is the flicker fusion frequency, and that is an extreme case of super-high-contrast black-and-white flashing; even then, in most humans it maxes out just above 60 Hz.


Heck, a great many CCFL LCD backlights already use PWM for brightness control. They turn the whole screen backlight on/off at rates like 175 Hz, and it is quite invisible and doesn't really help with motion blur.


Your best result for a scanning backlight will be obtained with the fastest-changing LCD architecture and a backlight timed to be off for as much of the transition phase as can be handled before flicker becomes annoying. Once you start cycling your light source over 100Hz, it is pretty much the same as having it on all the time as far as human beings are concerned.
 

Registered · 3,186 Posts
Quite an interesting thread. There are many papers on scanning backlight and at least a couple that I have that discuss a short 10% duty period. It is quite an old idea and I think you are right that the main reason it was not popular was the efficiency reduction.


Some thoughts on the design and the science:


1 - The diffuser creates enough cross-talk between backlight segments to limit the hold time to 2 segment flash periods or greater. Assuming you have 8 segments running at 120Hz this would create an effective hold time of ~1/480 or 2ms. Still not better than CRT.


2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)


3 - Not sure about the claim that FFT is better for raster vs strobing. Is this in the literature?


4 - Starting the backlight scan just before the next LC refresh may not be ideal, as the aforementioned cross-talk may produce a visible ghost. There must be an ideal temporal position that avoids the LC refresh response and the threshold for cross-talk.
 

Registered · 1,170 Posts
Allow me to post more dumb thoughts, xrox. I've no more feedback for Mark -- it would waste his time.
Quote:
Originally Posted by xrox  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22412877


Quite an interesting thread. There are many papers on scanning backlight and at least a couple that I have that discuss a short 10% duty period. It is quite an old idea and I think you are right that the main reason it was not popular was the efficiency reduction.

Did that refer to CCFL only, or LED too? These LEDs are apparently ~10-20W per strip and 10-20x brighter than normal (to offset the shorter duty cycle). If this is too much of a power requirement, how about adding more strips?

Quote:
Originally Posted by xrox  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22412877


1 - The diffuser creates enough cross-talk between backlight segments to limit the hold time to 2 segment flash periods or greater..

Could random (non-adjacent) sequencing eliminate the crosstalk? (Something like Mark said in the last reply.)

Why is a global diffuser needed with locally lit segments, anyway?

Quote:
Originally Posted by xrox  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22412877


2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)

Good enough for me. Is that with 960Hz? What about 1920Hz? But the Arduino can apparently do much better than that:

Quote:
Goal of precision requirements is to better these requirements by an order of magnitude, for a safety margin for more sensitive humans and for errors. That means length of flashes would be precise to 0.1 microseconds. This appears doable with Arduino. Arduinos are already very precise and very synchronous-predictable; Arduino projects include TV signal generators -- THAT requires sub-microsecond precision for good-looking vertical lines in a horizontally-scanned signal
Quote:
Originally Posted by guidryp  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22412642


Once you start cycling your light source over 100Hz it is pretty much the same as having it on all the time as far as human beings are concerned.

I agree. As long as the frames are unique there should be no problem.
 

Premium Member · 4,176 Posts · Discussion Starter #9

Quote:
Originally Posted by guidryp  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22412642


It is quite pointless to aim for rates like this when the human visual system is your target. Everything we see is averaged over about 10-20 ms.

The highest limit for noticing any change with the human visual system is the flicker fusion frequency, and that is an extreme case of super-high-contrast black-and-white flashing; even then, in most humans it maxes out just above 60 Hz.
I generally agree that it's pretty pointless, but it's a "free" feature. I'm currently sizing the wattage of the backlight for a sufficiently bright picture during "960Hz" simulation, but I want to have the "free" software feature of "simulated 1920Hz" (without motion interpolation) for two reasons:

(1) Experimentation if it's even possible *at all* to tell the difference.

(2) Ability to claim that my LCD setup actually has less motion blur than CRT. (Note: I'm not solving *other* LCD deficiencies such as black levels, etc)


It's a free software feature that costs nothing extra, as long as the Arduino is capable of it, so why not include it for experimentation's sake. Even though running at "simulated 1920Hz" will mean half the brightness of "simulated 960Hz" due to the half-length flashes, it's enough for experimentation. However, I agree that the sweet spot is probably 1/960 second. Any further, and the extra lumens necessary for shorter flashes aren't worth it. (e.g. I'd need to design a 400-watt backlight in order to have a normal-brightness image using 1/1920sec flashes.)


Heck, it does not even stop me from experimenting with a 1/3840 flash (at one-quarter brightness) or even a 1/7680 flash (at one-eighth brightness). I'll probably hit the latency of a white LED's phosphor first as the limiting factor, though that is bypassable by using R/G/B LEDs, which switch at nanosecond-league speeds.
Quote:
Heck a great many CCFL LCD backlights already use PWM for brightness control, They turn the whole screen backlight on/off at rates like 175 Hz and it is quite invisible and doesn't really help with motion blur.
Correct. Though, just as I can detect rainbow artifacts, I can detect the stroboscopic effect; even 500 Hz PWM is detectable indirectly if you know how to look for PWM stroboscopic artifacts (not everyone is sensitive to them, much like DLP rainbows are a person-specific thing).

Academic note: Detecting stroboscopic artifacts (e.g. DLP rainbows, PWM, etc) is a different vision phenomenon than flicker fusion.

Example: Test a mouse cursor on a black screen: 180Hz PWM on a 60Hz signal shows a triple-cursor motion blur instead of a continuous blur.
Quote:
Your best result for a scanning backlight will be obtained with the fastest changing LCD architecture and a backlight timed to be off for as much of the transition phase that can be handled before flicker becomes annoying. Once you start cycling your light source over 100Hz it is pretty much the same as having it on all the time as far as human beings are concerned.
Flicker fusion is /different/ from store-and-hold blur, which is also /different/ from LCD response blur. They can all interact, of course.


Human perception of high-speed vision phenomena (different from "flicker fusion")

1. Witnessing high speed photography. Even with xenon strobe lights that flash for less than 1/5000th of a second, you can still see the flash, though it looks just as instantaneous to the human eye as a 1/200th second flash. Even a millionth-second flash would be detectable, provided there were enough photons hitting the eyeballs. It's called "integration" -- the cones/rods in your eyeballs are like tiny buckets collecting photons. Once you're far beyond the flicker fusion threshold, it doesn't matter how fast or slowly these buckets are filled: a million-lumen flash for a nanosecond has the same number of photons as a one-lumen flash for a millisecond.

2. Wagon wheel effects. Humans can detect continuous versus non-continuous light sources indirectly using the wagon wheel (stroboscopic) effect and its cousins (DLP rainbows, etc). Given sufficient speed, insanely high rates become detectable. Imagine a high speed wagon-wheel disc spinning synchronized with a theoretical 5000Hz strobe light: the wheel looks stationary. Now change the strobe light to 5001Hz without changing the wheel speed, and the wagon wheel looks like it spins slowly backwards.

3. Motion blur. Detectability of motion blur extends massively beyond flicker fusion.
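Two of the phenomena above reduce to small arithmetic, sketched here in Python (illustrative only; the million-lumen flash and the 5000Hz/5001Hz wheel are the examples from the text, and both function names are my own):

```python
# Phenomenon 1: the eye's photon "buckets" integrate intensity x duration.
def lumen_seconds(lumens, seconds):
    """Total light delivered: what matters far beyond flicker fusion."""
    return lumens * seconds

# Phenomenon 2: stroboscopic (wagon wheel) aliasing.
def apparent_rev_per_sec(wheel_hz, strobe_hz):
    """Apparent rotation rate of a once-marked spinning disc under a strobe.
    The eye only sees the per-flash advance, wrapped to the nearer direction."""
    advance = (wheel_hz / strobe_hz) % 1.0   # revolutions between flashes
    if advance > 0.5:
        advance -= 1.0                       # reads as "slightly backwards"
    return advance * strobe_hz
```

A million-lumen nanosecond flash and a one-lumen millisecond flash both deliver 0.001 lumen-seconds; a 5000 rev/sec wheel looks frozen under a 5000Hz strobe, and creeps backwards at about one revolution per second under a 5001Hz strobe.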


Now, apply this science to store-and-hold phenomena:

EXAMPLE: A fast panning scene is moving across the screen at 1 inch every 1/60th of a second. Let's say your eye is tracking a sharp object during the screen pan. Each static frame smears across your field of vision while your eyes continuously track the object. That's persistence of vision, and it creates the motion blur effect on continuously-shining displays (most LCD's). So, strictly by the numbers for fast-panning motion moving at 1 inch every 1/60 second:


For fast motion moving at 1 inch every 1/60th second, the hold-type blur on LCD is as follows:

At 60Hz, the motion blur is 1" thick (entry level HDTV's, regular monitors) ...

At 120Hz, the motion blur is 0.5" thick (120Hz computer monitors, interpolated HDTV's) ...

At 240Hz, the motion blur is 0.25" thick (interpolated HDTV's)...

At 480Hz, the motion blur is 0.125" thick ...

At 960Hz, the motion blur is 0.0625" thick (CRT style, high end HDTV's) ...
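The table above is just one division, shown here as a hedged Python sketch (the function name is mine; the numbers are the ones listed above):

```python
# Hold-type blur width = pan speed divided by the effective (real or
# strobe-simulated) refresh rate.
def hold_blur_inches(inches_per_60th, effective_hz):
    """Perceived smear width for an eye tracking motion on a hold-type display."""
    inches_per_second = inches_per_60th * 60
    return inches_per_second / effective_hz

print(hold_blur_inches(1, 60))   # 1.0 inch
print(hold_blur_inches(1, 960))  # 0.0625 inch
```

Halving the hold time halves the smear, which is why each doubling of the simulated Hz halves the blur thickness in the list above.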


A good diagram of the store-and-hold motion blur phenomenon is seen in the "Hold type blur" explanations (page 3) of this academic paper. This paper even explains why "LCD-response" blur is /different/ from "store-n-hold blur" (and explains why I can bypass LCD response speed as the primary factor of motion blur, by using shorter light pulses than the speed of the LCD pixel response).


But imagine an IMAX screen size instead, where you're sitting near the front row, the motion is a whole foot per 1/60th second, and your eyes are able to track very fast objects -- and you're displaying soap-opera-style 60 frames per second on the IMAX screen. (This is theoretical only; I know of no projector with "960Hz" simulation, due to the light output that would require without interpolation!) At this point, it is wholly possible that the point of diminishing returns extends beyond 1/960th second, because the stepping is large enough.


At this point, any rational person who respects physics would stop saying "humans can't tell apart 960fps versus 1920fps" -- armed with the information above, that starts sounding as unsubstantiated as claiming "humans can't tell apart a stationary photograph taken with a 1/960sec shutter from one taken with a 1/1920sec shutter". Being smart, you would instead say "it'd be useful to get some /scientific/ testing done on where the real point of diminishing returns is". But generally, I am with you: it probably doesn't matter beyond around "960Hz simulation" -- printed sports photography at 1/960sec vs 1/1920sec shutter speeds is hard to tell apart too, though human eyes are able to do it. Back in 1992, people assumed humans could not tell apart 30fps versus 60fps. Today, we're in a similar situation with "humans can't tell 240Hz vs 480Hz vs 960Hz" (this isn't a simple flicker-fusion-threshold matter, so this statement is false!). But people begin to understand better once they read more about hold-type motion blur, as I've written above.


I've got plenty of references handy to explain detection of various temporal vision phenomena:

List of References: List of References
 

·
Premium Member
Joined
·
4,176 Posts
Discussion Starter #10

Quote:
Originally Posted by xrox  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22412877


Quite an interesting thread. There are many papers on scanning backlight and at least a couple that I have that discuss a short 10% duty period. It is quite an old idea and I think you are right that the main reason it was not popular was the efficiency reduction.

Some thoughts on the design and the science:

1 - The diffuser creates enough cross-talk between backlight segments to limit the hold time to 2 segment flash periods or greater. Assuming you have 8 segments running at 120Hz this would create an effective hold time of ~1/480 or 2ms. Still not better than CRT.
Bleed between backlight segments will have little effect on hold time in a properly engineered scanning backlight. You actually want a little bit of bleed for other reasons on LCD's (to blend between segments). As long as the backlight is flashed correctly, the length of the flash is what matters, not the bleed. I will probably design my backlight panel to also be able to run as 16 segments, if I determine I can use the analog Arduino inputs as precise digital outputs.


Assuming bleed only affects adjacent segments, the maximum possible degradation in motion resolution is 50%, so I can simply simulate a higher Hz to compensate. Actual perceived degradation will be far less, since bleed only affects certain sharp boundaries in moving images, and the eyes are constantly moving all over the frame. I'd say probably less than a 10% perceived loss of motion blur reduction caused by segment bleed. This is also an additional reason to still experiment with "simulated 1920Hz" operation, to compensate for bleed issues.

Bleed artifacts may show up as PWM-like artifacts (two flickers rather than one, at boundaries between scanning backlight segments). E.g. a high-speed horizontally moving vertical white bar on a black background might show bleed artifacts where the scanning backlight segments meet. Bleed artifacts (noticing boundaries between scanning backlight segments) would show up only during fast motion, and would probably be harder to notice (beyond ~240Hz or ~480Hz simulation) than, say, DLP rainbows. The more segments, the harder they are to notice.

Also, the Arduino can technically let me do a 1/3840sec flash (one quarter the length of 1/960sec) at a quarter of the brightness, and even beyond. My limiting factor will be the amount of backlight brightness available -- there is no software limitation preventing me from having less motion blur than CRT -- it will be the amount of lumens I can get into tiny flashes.
Quote:
2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)
Incorrect -- an 8-segment backlight does not have a hold-time limit of 1ms (especially if there are no bleed regions and no crosstalk). The segment size does not dictate hold limitations, unless you're following a requirement that "the next segment must illuminate at the same time as turning off the previous segment".

To simulate a hold time of 0.5 millisecond (1/1920th second) with an 8-segment scanning backlight at 120Hz, you flash each segment for 1/1920sec, even if it means waiting in the dark a while before flashing the next segment. The scanning stepping is only there to stay in sync with the LCD refresh; each segment can be treated as if it were a completely independent, separate LCD display (from a programming standpoint). Thus, as long as the strobe is sufficiently short and illuminates only refreshed LCD pixels, it doesn't matter how few segments there are.
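The timing can be sketched numerically. This is a hedged Python illustration of the schedule described above (the function name is mine; 120Hz, 8 segments, and 1/1920sec flashes are the numbers from the text):

```python
# Per-segment strobe schedule for one refresh period. The flash length is
# chosen independently of the segment count; segments simply wait in the
# dark between flashes.
def segment_schedule(refresh_hz=120, n_segments=8, flash_s=1/1920):
    """Return (start, end) strobe times for each segment within one refresh."""
    period = 1.0 / refresh_hz
    step = period / n_segments           # scan stepping, to track the LCD refresh
    return [(i * step, i * step + flash_s) for i in range(n_segments)]
```

With these defaults each segment steps every 1/960sec but flashes for only 1/1920sec, so there is a dark gap between every pair of flashes: the hold time is set by the flash, not by the 8-segment geometry.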
Fact:

*** Segment count does not necessarily dictate the hold-time limit. You don't have to flash the segments synchronously. Think of each segment as a completely independent full-strobe backlight, and each segment as a separate LCD display. Assuming you catch already-refreshed LCD pixels during your strobe, the length of the flash dictates the motion blur, not the number of "displays" (segments).

*** The size of the segments needs to be smaller than the portion of the LCD that (at any instantaneous moment) contains fully-refreshed pixels.

(For simplicity's sake, "fully-refreshed" means LCD pixels that are at least 99% of the way to their correct color value. We can't be perfect here; there are some residuals, much like the crosstalk between the two frames for 3D shutter glasses. So we define a cutoff point for LCD pixels, as a "goal" for scanning backlight operation. Some color imperfection will occur with any scanning backlight, but it can be made tiny enough not to be an issue. If a picture is 1% too bright or too dim, that's not a problem. If red is 1% incorrect, it's OK as long as the benefit is worth it, especially if the incorrectness can be calibrated out using picture adjustments, etc.)


Instead of approaching this as a temporal problem, approach it as a geometry problem.

What you really want to know is: "What percentage of the LCD has pixels that are already within 99% of their final color value for the current refresh?"

...Before I explain, I need to explain how LCD pixels work (for those not familiar): when LCD pixels are refreshed, a pixel is being changed from one color to the next. Immediately after the pixel is refreshed, it changes pretty quickly (especially if accelerated using overvoltage/undervoltage response-time acceleration) in the first millisecond, more slowly in the next millisecond, and is mostly finished within 2 milliseconds, but it may still be a few percent off its final color value. Over the next several milliseconds, the pixel gradually inches closer towards its final value, following an exponential decay curve. Scanning backlights weren't very practical until LCD pixels were able to mostly (99%+) finish refreshing before the next frame -- a necessity for 3D, too.

...LCD refreshing is done from top to bottom on many LCD panels, in a fashion similar to CRT scanning. If a frame refresh takes 8 milliseconds at 120Hz, and LCD pixels are considered "fully refreshed" about 6 milliseconds after being addressed, that leaves approximately 1/4 of the vertical height, or 25% of the screen, fully refreshed at any instant. An 8-segment scanning backlight has segments small enough to illuminate just the fully refreshed LCD and, engineered correctly, keep the backlight bleed within that region too.

...We obviously have to consider the granularity of the scanning backlight. Since vertical position on the LCD is proportional to the time since the pixel was refreshed, there can be tiny inconsistencies/variances (less than 1%) in the completeness of LCD pixel refreshes along the top edge versus the bottom edge of a scanning backlight segment, especially in a low-granularity scanning backlight, but this inconsistency will tend to be masked by the bleed between scanning backlight segments (see! A little bleed is beneficial here!)
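The geometry argument above can be sketched with the example numbers (8ms scan, ~6ms settling). This is a hedged Python illustration only; the function names and the idea of checking segment height against the settled band are mine:

```python
# Fraction of panel height whose pixels have settled for the current frame
# at any instant, given a top-to-bottom scan and a fixed settling time.
def fully_refreshed_fraction(scan_time_ms=8.0, settle_ms=6.0):
    return max(0.0, (scan_time_ms - settle_ms) / scan_time_ms)

# A segment (1/n of the height) is safe to strobe only if it fits inside
# the fully-refreshed band, so it never illuminates still-settling pixels.
def segment_fits(n_segments, scan_time_ms=8.0, settle_ms=6.0):
    return 1.0 / n_segments <= fully_refreshed_fraction(scan_time_ms, settle_ms)
```

With these numbers, 25% of the screen is settled at any instant, so 8 segments (12.5% each) fit comfortably, while a 2-segment backlight (50% each) would not.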
Quote:
4 - Starting the backlight scan just before the next LC refresh may not be ideal, as the aforementioned cross-talk may produce a visible ghost. There must be an ideal temporal position that avoids the LC refresh response and the threshold for cross-talk.
Yes, that's correct. My manual adjustment app utility (also for input lag) will take care of this, by allowing adjustment for minimum temporal artifacts in a motion test pattern. It is expected that the temporal delay is fixed and stable, permitting a one-time adjustment for a specific video mode.

___________________


Finally, just to be clear:

Bottom line fact: the number of segments has no absolute limiting effect on the ability to reduce motion blur.

(I'm excluding bleed, here)


It's possible to simulate "1920Hz" out of a 60Hz signal using just a 2-segment or 4-segment scanning backlight (provided that the surface area of the practically fully-refreshed LCD pixels exceeds the size of a segment). If the whole LCD is already refreshed at any instant (some high-speed LCD's are able to do this now), you only need a full-backlight strobe (equivalent to a 1-segment scanning backlight). This is tantamount to black frame insertion (identical in purpose).


Of course, I skipped considering the bleed boundary between two scanning backlight segments -- but you brought up the OLED example where it is a non-issue. The bleed might manifest as two flashes (from adjacent scanning backlight segments). The segment bleed will only slightly reduce the amount of motion blur reduction (and only along a narrow sliver where the bleed occurs). The average perceived motion blur will still scale with the flash duration.


That said, you may have convinced me to try for 16 segments instead of just 8, to reduce the visibility of bleed artifacts (just in case they're easier to notice than expected), by allowing me to test 1/1920 operation for the non-bleed parts of the LCD and 1/960sec for the bleed parts. Additionally, it will allow a smooth-sliding 8-segment scanning backlight too (illuminating 2 segments at a time and stepping downwards one segment at a time), in case segment bleed artifacts are more noticeable than I expected. Also, the scanning speed of the scanning backlight can be sped up within a refresh, to further reduce bleed artifacts, though you run the risk of gradually increasing inconsistencies along the vertical dimension of the image the faster you scan, due to catching the LCD at different stages of refresh. There will also be an intra-refresh scanning speed adjustment. My goal is to have just two main adjustments (other than obvious ones such as backlight brightness, controlled via the power supply voltage to the LED's): a phasing/latency adjustment (to adjust for input lag and to get the correct phase with the LCD refresh), and a scanning speed adjustment (to adjust the scanning speed within a refresh), with the maximum speed setting being equivalent to a full-backlight strobe.
 

·
Registered
Joined
·
3,186 Posts

Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22414821


Bleed between backlight segments will have little effect on hold time in a properly engineered scanning backlight. You want a little bit of bleed for other reasons for LCD's (to blend between segments
If I am reading you correctly, you may not have understood the stated issue. There is a light diffuser between the backlight and the LC panel that is inherent to LCD design, to enable acceptable uniformity. One segment of the backlight will hit the diffuser and spread laterally. What this means is that adjacent segments will be illuminated enough to add to the hold time.
Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22414821


Incorrect -- 8 segment does not have a hold time limit of 1ms. (especially if there's no bleed regions and no cross talk) The segment size does not dictate hold limitations, unless you're following a requiremnt "next segment must illuminate at the same time as turning off the previous segment".
What I wrote was from the literature and AFAIK was correct. Yes, obviously strobing and scanning at the same time can further reduce below 1ms, but again you run into the limiting cross-talk issue. Below is an example showing the duty cycle of the scanning backlight vs BET (fraction of frame time) for a given cross-talk.




Also, scanning + strobing is going to tax your light output massively and increase power consumption. Also, your LED lifetime may worsen. Not to mention the temporal motion artifacts it might cause.


Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22414821


Think of each segment as a completely independent full-strobe backlight, and each segment is a separate LCD display. Assuming you catch already-refreshed LCD pixels during your strobe, the length of the flash dictates the motion blur, and not the number of "displays" (segments)…….
No need to explain, this is quite an old idea. The novel/interesting part is the low cost, the light output, and the DIY. I’m still skeptical but very interested. Does that make sense?



Below are some graphics describing the concept.




Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22414821

Bottom line fact: number of segments has no absolute-limiting effect on ability to reduce motion blur.
Sorry, not true IMO. The diffuser and subsequent cross-talk are inherent to LCD. Strobing will help, but not as much as you state. Check out this graphic describing hold time in an interpolated system vs a frame repeat system. The CRT with frame repeat still produces motion blur, but it is less due to the effective reduction in hold time from the second pulse's duty cycle.


Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22414821


That said, you may have convinced me to try for 16 segments instead of just 8 segments; to reduce visibility of bleed…...
I actually believe that the inherent diffuser in the LCD will be somewhat limiting in all cases. And increasing the segments may actually make it worse because the cross-talk will spread over more segments (because each segment is smaller?).


One way to overcome this is to refresh the panel ultra fast and then strobe the backlight globally (all LEDs) for an “extremely” short time. This is also in the literature.


One last graph that adds to my skepticism. To me it shows that the motion benefits begin to level off as the duty cycle of the backlight scan decreases (similar to what guidryp was saying?)

 

·
Premium Member
Joined
·
4,176 Posts
Discussion Starter #12
Excellent references, xrox; now I understand much better what you are trying to explain, though I had managed to figure out most of it. On that basis, let's address each point.
Quote:
Originally Posted by xrox  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22415619


If I am reading you correctly, you may not have understood the stated issue. There is a light diffuser between the backlight and the LC panel that is inherent to LCD design, to enable acceptable uniformity. One segment of the backlight will hit the diffuser and spread laterally. What this means is that adjacent segments will be illuminated enough to add to the hold time.
Diffusion issues will be most pronounced in high-contrast imagery, and there, the motion blur of dark edges (especially low-contrast ones) gets degraded more than the motion blur of bright, high-contrast edges. On extreme-contrast images (lots of bright/dark content) your eyes really only resolve a far lower effective contrast ratio (even as low as 100:1) due to internal diffusion in the eyeball, so you're not going to notice the motion blur degradation caused by diffuser/bleed issues. I anticipate that average degradation will be very marginal, contributing only a few percent to the average perceived motion blur, permitting continual improvement in average perceived motion blur (albeit with a point of diminishing returns) with shorter strobes / faster scanning.


That said, if it becomes significant, then as a backup plan, I also have instructions for removing the diffuser from some computer monitor LCD's. The diffuser may need to be replaced with my own. Diffusers designed for sidelights can theoretically be designed differently from diffusers designed for behind-LCD backlights, since the two have different polarization/ray-angle-bending considerations that affect which diffuser is most efficient. Most 120 Hz computer monitor panels presently use sidelights, and mine is a true backlight, so I might not want the specific diffuser from the panel I use. I may even test cheap diffusers (e.g. translucent white plastic sheets), given that my extreme DIY light output compensates for diffuser inefficiencies to an extent. The close spacing of the LED's will allow me to put the diffuser extremely close to the panel, hopefully minimizing bleed. I will probably keep the original diffuser at first; if I replace it, I will try to choose a diffuser that keeps the rest of the panel dark.


Thank you for bringing potential diffuser issues to the forefront of my mind. Something to ensure: make sure that diffuser bleed does not noticeably spread beyond adjacent segments.
Quote:
What I wrote was from the literature and AFAIK was correct. Yes, obviously strobing and scanning at the same time can further reduce below 1ms, but again you run into the limiting cross-talk issue. Below is an example showing the duty cycle of the scanning backlight vs BET (fraction of frame time) for a given cross-talk.
If you bring crosstalk into the equation, I will concede you are right!

But your original sentence was: "2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)" -- a statement I believe is incorrect, at least for horizontal motion.


Unless you're talking about vertical motion versus horizontal motion. Most motion that we care about is horizontal anyway: hockey pucks, soccer balls, first person shooters (left/right turning), etc. For sufficiently fast vertical motion, there will be interaction between the scan flow and vertical eye movement, much as there already is for CRT. But eye tracking is not typically that fast, and the screen height is smaller than its width, so limitations to vertical motion resolution caused by the scanning motion shouldn't be a noticeable issue -- and it's a problem for CRT's too, for those who are able to notice it (contraction of the perceived CRT image height when moving your eyes rapidly downwards in the scan direction, and expansion of the perceived height when moving your eyes rapidly upwards against the scan direction).
Quote:
Also, scanning + strobing is going to tax your light output massively and increase power consumption. Also, your LED lifetime may worsen. Not to mention the temporal motion artifacts it might cause.

No need to explain, this is quite an old idea. The novel/interesting part is the low cost, the light output, and the DIY. I’m still skeptical but very interested. Does that make sense?
I think it's still a worthwhile experiment.



Your graphics are useful and will ensure that I pay attention to side issues such as backlight bleed and diffusion issues.


If bleed/diffusion issues become more pronounced than I expected, I can adjust the scanning speed faster, to complete a scanning backlight sweep in, say, 1/240th of a second, even for 120Hz (pretending that VSYNC is 50% idle time). It will probably bring out inconsistencies in greyscale between the top and bottom of the image, because some parts of the LCD will be more completely refreshed than others. So the scanning speed adjustment could become an image-quality tradeoff between motion blur reduction (in bleed/diffusion areas) and vertical consistency of the image. I might even find that a full strobe looks preferable (at 120Hz or greater), or that the sweet spot in scanning speed approaches double scan speed. I'll make sure that the scanning speed is an important adjustment that's easy to do with a motion test pattern (e.g. smooth moving white objects on a black background).


Note: A faster scanning mode can also mean more segments illuminated at a time (due to accelerated illumination of the next segment before the previous segment turns off 'on its own pulse schedule') while scanning, to maintain the same 'Hz' simulation while reducing bleed/diffusion artifacts.
Quote:
Sorry, not true IMO. The diffuser and subsequent cross-talk are inherent to LCD. Strobing will help, but not as much as you state. Check out this graphic describing hold time in an interpolated system vs a frame repeat system. The CRT with frame repeat still produces motion blur, but it is less due to the effective reduction in hold time from the second pulse's duty cycle.


I actually believe that the inherent diffuser in the LCD will be somewhat limiting in all cases. And increasing the segments may actually make it worse because the cross-talk will spread over more segments (because each segment is smaller?).
I'm likely going to use thousands of tiny 3528/5050 LED's, so I can put the diffuser very close to them, minimizing bleed even between segments of 1/16 screen height. But you're right that the diffuser is a limiting factor.
Quote:
One way to overcome this is to refresh the panel ultra fast and then strobe the backlight globally (all LEDs) for an “extremely” short time. This is also in the literature.
Yes, you're right. But that can bring some incomplete-LCD-refresh artifacts. A compromise is an accelerated scan, a balance between scanning-backlight and full-strobe (BFI-style) operation. I plan to have the Arduino's scan speed adjustable through this entire scale, permitting motion-resolution benchmarking of the various scenarios.
Quote:
One last graph that adds to my skepticism. To me it shows that the motion benefits begin to level off as the duty cycle of the backlight scan decreases (similar to what guidryp was saying?)
I will be able to cover both lines in that graph, because of my complete adjustability from scanning all the way to no scanning (full strobe).


Given the new information you've provided, you're right on the bleed/diffuser issue, but one statement in your original post is still incorrect, which is what caused me to leap on it!


You've certainly made me pay close attention to potential diffuser/bleed issues. I may even have to engineer in slightly more wattage to compensate. Thank you for that. I do not anticipate it being a limiting factor to successfully reducing motion blur by 90% from the backlight alone.


Time for me to do the math on the maximum number of SMT3528 narrow LED ribbons (600 LED's per 5 meters) and SMT5050 wide LED ribbons (300 LED's per 5 meters) that I can cram into a small space. This will probably become my limiting factor: how much light output I can cheaply cram into a given space. Napkin calculations suggest approximately 200 watts (the factor of 10 required for 90% blur reduction), but I wonder if I can go beyond that for the extra safety margin I'd like to have.


P.S. Unrelated, but required for full-panel strobe: I am also thinking about some electronics safeties I need. I need to be mindful not to power more than about 20 watts average into these LED's at any time, due to heat build-up. I may need to develop an auto-current-limiting, approximately-12-volt power supply that dynamically adjusts current depending on how many segments (or all segments) are lit at a time, to preserve my ability to continuously adjust from scanning all the way to full-panel strobe. A full-panel strobe would be a 200 watt surge occurring only 10 percent (or less) of the time, averaging 20 watts (the goal light output for a 24" monitor). But if my electronics fail and all segments get stuck continuously on with no pulsing, I'd like the power supply to kick in and automatically downvolt within a fraction of a second to dim the LED's, current-limiting to a 20 watt average output. 20 watts of heat is easily dissipated through the rear of a monitor without much complexity (but a continuous fully-on 200 watts would be a nightmare, and I don't need to be blinded anyway!). Fortunately, this is simple, well-established electronics with lots of existing schematics, including open source automatically-adapting power supplies. Relatively simple stuff: let the power supply surge (200 watt surges allowed for full strobes) but automatically adjust the voltage output within a fraction of a second (e.g. 1/10th of a second) to meet the exact average backlight amperage I want, for safety reasons. (A slow-responding current regulator is exactly what I want, to permit the surges needed for strobing.)
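The power budget logic is simple enough to sketch. This is a hedged Python illustration of the duty-cycle arithmetic only (function names and the 20W budget threshold behavior are my assumptions; the real protection would be an analog current regulator, not code):

```python
# Strobed-backlight power budgeting: short surges at peak wattage, but the
# thermal budget is set by the duty-cycle average.
def average_watts(peak_watts, duty_cycle):
    """Average electrical load of a strobed backlight."""
    return peak_watts * duty_cycle

def limiter_should_dim(observed_avg_watts, max_avg_watts=20.0):
    """A slow current regulator ignores brief surges and reacts only when
    the *average* exceeds the thermal budget (e.g. segments stuck on)."""
    return observed_avg_watts > max_avg_watts
```

Normal operation is 200W flashed 10% of the time, a 20W average; the fault case is all LED's stuck fully on, a 200W average, which the slow limiter must dim down.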
 

·
Premium Member
Joined
·
4,176 Posts
Discussion Starter #13
UPDATE!:

Method of VSYNC Signalling

I found a way to do USB-signalled VSYNC, timecoded to an accuracy of 1/135,000th of a second -- even if the host signalling is a totally random spray. Microsoft DirectX "RasterStatus.ScanLine", timestamped with PerformanceCounter (the CPU cycle counter on the PC side) and micros() (the microsecond system timer on the Arduino side), mathematically tells me exactly how long ago VSYNC occurred (to a precision of 1/135,000th of a second for a 1920x1080 120Hz signal). So I have microsecond accuracy on both the PC and Arduino ends, which is required for mathematically compensating for delays in relaying between the computer and the Arduino. The CPU-fluctuation and communication-caused variances can be mathematically calculated out quite easily, resulting in the ability to use inaccurate host signalling (I even call it a "random spray" of signalling) as a highly precise VSYNC information source, accurate to 1/135,000th of a second. In fact, at this point, I don't even care how 'random' the spray of host communications is -- I can be informed about VSYNC only a few times a second and calculate the rest from the information received (plus previous knowledge of the approximate current vertical refresh rate in Hz). If the spray of data from the PC to the Arduino is interrupted for, say, 1 second (due to a CPU freeze on the PC host), the Arduino scanning backlight can still continue by extrapolating from previously received data, keep VSYNC information within the required accuracy for several seconds after the interruption, and blissfully continue normally when VSYNC signalling resumes. Things would only degrade after several minutes of VSYNC interruption (manifesting simply as loss of motion blur reduction and increased crosstalk artifacts until VSYNC signalling resumes).
Therefore, software-based host VSYNC signalling is actually practical and can be super-accurate! Information on how I came up with the accuracy calculation of 1/135,000th of a second.
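The timestamp math can be sketched as follows. This is a hedged Python illustration, not the Arduino code; the 1125-line total (active plus blanking) for a 1080p 120Hz mode is an assumption that yields the 1125 x 120 = 135,000 lines/sec figure behind the stated precision:

```python
# Recovering and extrapolating VSYNC from one timestamped scanline sample.
TOTAL_LINES = 1125                        # assumed total scanlines per frame
LINE_RATE = 135_000                       # scanlines per second at 120 Hz
FRAME_PERIOD = TOTAL_LINES / LINE_RATE    # = 1/120 sec

def last_vsync(sample_time_s, scanline):
    """Time of the most recent VSYNC, inferred from one timestamped
    RasterStatus.ScanLine reading: the raster is `scanline` lines past it."""
    return sample_time_s - scanline / LINE_RATE

def predict_vsync(sample_time_s, scanline, n_ahead):
    """Extrapolate the n-th upcoming VSYNC; this keeps working even if no
    further samples arrive for a while (the CPU-freeze tolerance above)."""
    return last_vsync(sample_time_s, scanline) + n_ahead * FRAME_PERIOD
```

One sample at t = 1.0s reading scanline 562 places the last VSYNC about 4.16ms earlier, and every future VSYNC follows at 1/120-second intervals from there.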

Math: Calculated LED Wattage I can cheaply cram behind a 24" LCD

If I use SMT3528 LED ribbons, they are 50 watts per 16-foot (5-meter) ribbon with 600 LED's. These ribbons are home-cuttable in 2" increments and are 8 millimeters wide. A 24" LCD monitor is approximately 300 millimeters tall, so I can cram about 37 strips of 20.8" each behind a single 24" LCD. That's a grand total of 64 feet of LED ribbon, which I can purchase off eBay for approximately $60, or off DealExtreme for about $160 (higher-quality 6500K). Total: 200 watts and 2,400 LED's.
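The napkin math above, as a hedged Python sketch (dimensions are the ones stated in the text: 8mm-wide ribbon, ~300mm panel height, 50W per 16-foot reel; the function name is mine):

```python
# How much LED ribbon fits behind a 24" panel, and the wattage it implies.
def ribbon_plan(panel_height_mm=300, strip_width_mm=8,
                strip_len_in=20.8, watts_per_reel=50, reel_len_ft=16.0):
    strips = panel_height_mm // strip_width_mm       # rows of ribbon that fit
    total_ft = strips * strip_len_in / 12            # total ribbon length, feet
    watts = total_ft / reel_len_ft * watts_per_reel  # pro-rated wattage
    return strips, total_ft, watts
```

The defaults give 37 strips, about 64 feet of ribbon, and roughly 200 watts, matching the totals above.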


Extra notes: In reality, I plan to use 2-foot strips, to permit me to use 27" LCD's (2560x1440 120Hz is available), so I'll use a little extra. If I want to, I can cram in about 42 strips of LED's (cut 24" wide each), which is 84 feet of LED ribbon, or about 5.25 reels, for a total of about 260 watts worth of LED's. If I overlap the strips slightly without blocking the light, I could probably cram in 25% more ribbon, but mounting the adhesive strips then becomes much more difficult. An interesting thought is to someday replicate this same 'extreme' (90% motion blur reduction) project for a 47" HDTV: multiply by 4 ($640 of LED's, 800 watts worth!). Since the manufacturers will probably beat me to it eventually, I have no plans to home-modify a 47" HDTV, but there's nothing stopping someone else (or a manufacturer) from doing so. Computer monitor manufacturers, though, are very slow at innovating on motion blur reduction at consumer price levels. The sheer peak LED wattage required (actual average power use: 1/10th of it) is why a 24" monitor is so much cheaper and easier to begin with. LED's are falling in price, and it's only recently (in the last 2-3 years) that 5-meter LED ribbons hit "bargain" price points, even for house-lighting-quality, high-CRI white. Thankfully, LED prices have fallen so much that this Arduino project is now financially feasible on a hobbyist budget, at least at computer-monitor panel sizes.
 

·
Registered
Joined
·
1,170 Posts
I would be interested if you kept a blog or something on your website, Mark - with pics even. These are old ideas with new technology. Why haven't these ideas matured - technological limits, or are manufacturers apathetic to the non-mainstream (gaming applications)? Is it the "good enough" paradigm? Something like this might start off as a niche product (probably gaming) and spread to a degree. Not saying it's possible. It's a bit sad that in 12 years there has not been a direct replacement for CRT.
 

·
Premium Member
Joined
·
4,176 Posts
Discussion Starter #15

Quote:
Originally Posted by borf  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22416789


I would be interested if you kept a blog or something on your website, Mark - with pics even. These are old ideas with new technology. Why haven't these ideas matured - technological limits, or are manufacturers apathetic to the non-mainstream (gaming applications)? Is it the "good enough" paradigm? Something like this might start off as a niche product (probably gaming) and spread to a degree. Not saying it's possible. It's a bit sad that in 12 years there has not been a direct replacement for CRT.
Good idea, I have been thinking the same. Register a domain name for my open-source scanning backlight project, and blog about it. (I was also thinking of a small Kickstarter project, to help finance the cost including multiple donor computer monitors, or computer monitor donations.)


These ideas have not matured, until recently because:

1. LCD refresh didn't complete quickly enough before the next LCD refresh.
Solved. Today's LCD's are fast enough to finish refreshing before the next frame (a requirement of 3D LCD's). Finishing the refresh (for the most part) before backlight/segment strobing is required for the full motion blur reduction effect.

2. Having more than 100 watts of LED _per_ square foot of display used to be too expensive.
Solved. LED's are now bright and cheap enough (a requirement for the extra brightness needed in the ultra-short flashes of a scanning/strobed backlight). If you don't have enough wattage in your very short flashes, your image will be too dim. To get normal brightness using a backlight that is dark 90% of the time, you need about 200 watts for a 24" monitor, or about 800 watts for a 47" HDTV, even though average power consumption would be 20 watts and 80 watts respectively for a 90%:10% dark:bright cycle. You can now get 200 watts worth of 6500K LED's for less than $200 using 20 meters of LED ribbon reel tape, which is well within enthusiast budgets.
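The brightness/power trade-off above can be sketched in a couple of lines. This assumes perceived brightness scales linearly with instantaneous LED wattage, which is a simplification:

```python
# Peak vs. average power for a strobed backlight, per the reasoning above.

def peak_watts_needed(continuous_watts, duty_cycle):
    """Peak LED wattage needed to match a continuous backlight that would
    draw `continuous_watts`, when lit only `duty_cycle` of the time."""
    return continuous_watts / duty_cycle

def average_watts(peak_watts, duty_cycle):
    """Average power draw of a backlight strobed at `duty_cycle`."""
    return peak_watts * duty_cycle

# A 24" monitor whose continuous backlight would draw ~20 W, lit 10% of the time:
peak = peak_watts_needed(20, 0.10)   # needs ~200 W of LEDs...
avg = average_watts(peak, 0.10)      # ...but still draws only ~20 W on average
print(peak, avg)
```

The same arithmetic with an 80 W continuous backlight gives the ~800 W peak figure for a 47" HDTV.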

3. Native 120Hz-capable LCD's were not available until recently.
Solved. Today's 120Hz native refresh capabilities (non-interpolated) mean that the flicker of a scanning backlight (with an ultra-short on:off duty cycle of flashes) will not bother most people. (3D LCD's brought us 120Hz LCD's.)

4. Controllers for scanning backlights were not cheap or easy.
Solved. Today, it can be done homebrew with an Arduino, which costs only $35 for an Arduino UNO at Radio Shack -- or even less if you build the Arduino yourself! It is fairly simple Arduino programming.

5. Many display manufacturers are struggling.
DIY it instead. Many of them are not taking the risks (see above) required for a scanning backlight that reduces motion blur by 90%. We have to homebrew our own. People on these forums are creative (homemade anamorphic lenses, homemade projectors, homemade screens, etc.), so why not home-made scanning backlights, too? It's really only a glorified version of a common LED sequencer -- made to run harmoniously in symphony with the "VSYNC beat" and at high fidelity (good manual adjustments, precise timings, reduced backlight bleed, etc.).

All the above problems have been solved (for the most part), finally allowing a scanning backlight to reduce motion blur by 90% (or more) without other assistance such as interpolation. The above reasons are precisely why it has not been done on the market before today, and why we have the opportunity to homebrew it. The open-source nature of my backlight may encourage display manufacturers to do it in the future based on successful results (though I'd love them to pick my brains too! Maybe even earn a little penny at it, with a non-struggling display maker). There is zero proprietary technology in this open-source scanning backlight, and it is all based on publicly available knowledge, so no patents and lawsuits for this specific scanning backlight. I plan to provide the Arduino source code. The backlight is free for others, hobbyists or manufacturers, to make. Who knows, I could even earn a small penny off related products instead (e.g. go and create the world's best motion resolution benchmarking application). For now, this is a hobby -- but this is a world's first, to the best of my knowledge. I have purchased the Arduinos and parts already, and will do small-scale single-LED tests over the next few weeks. Kitchen countertop experiments first for now.



NOTE: It is necessary for an LCD to virtually complete refreshing before the next refresh, in order to allow motion blur reduction to break the "LCD response" barrier (i.e. LCD pixel response is no longer the absolute limit). To understand this better, let's say we have an LCD with an approximately 2ms grey-to-grey response speed in the following example. A single 8ms refresh (1/120th second for a 120Hz signal) for a specific segment of the LCD would be:
Example of bypassing LCD response as the limiting factor in motion blur reduction

One refresh lasting 8 milliseconds (1/120th second at 120Hz):

-- 2ms -- wait for LCD pixel to finish refreshing (unseen, while in the dark)

-- 5ms -- wait a little longer for most of ghosting to disappear (unseen, while in the dark)

-- 0.5ms -- flash the backlight segment quickly. (1/1920th second)
Voilà. You've essentially bypassed the LCD pixel response as the motion blur barrier, because you're keeping the LCD refresh in the dark; the LCD refresh is unseen, and no longer contributes to the motion blur. There will be some residual ghost only because LCD's do not perfectly finish refreshing before the next refresh: the cause of image leak between the two eyes in shutter glasses 3D. Properly adjusted, the faint residual ghost will be no worse than the residual crosstalk between the two eyes during 3D shutter glasses operation. Also, all these values could be adjustable in the Arduino scanning backlight project (directly or indirectly; e.g. phasing and scanning speed adjustments instead of millisecond values), to reduce input lag, correct phasing with the actual refresh, and adjust for minimal ghosting.
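A minimal sketch of the timing budget above, just to confirm the example numbers fit inside one 120Hz refresh. The millisecond values are the ones from the example, not measured figures:

```python
# Per-segment timing budget for one 120 Hz refresh, per the example above.
REFRESH_MS = 1000 / 120     # ~8.33 ms per refresh
SETTLE_MS = 2.0             # LCD pixel transition, hidden in the dark
GHOST_WAIT_MS = 5.0         # extra wait for ghosting to fade, still dark
FLASH_MS = 0.5              # backlight segment flash (~1/2000 s)

used = SETTLE_MS + GHOST_WAIT_MS + FLASH_MS
assert used <= REFRESH_MS, "budget must fit within one refresh"

duty_cycle = FLASH_MS / REFRESH_MS   # fraction of the refresh the segment is lit
print(round(REFRESH_MS - used, 2), round(duty_cycle * 100))  # slack (ms), duty (%)
```

With these numbers there is about 0.8 ms of slack per refresh, and the segment is lit only ~6% of the time, which is why the LED wattage has to be so high.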
 

·
Registered
Joined
·
246 Posts

Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22414786


I generally agree that it's pretty pointless, but it's a "free" feature. I'm currently aiming the wattage of the backlight for a sufficiently bright picture during "960Hz" simulation, but I want to have the "free" software feature of "simulated 1920Hz" (without motion interpolation) just for two reasons:

(1) Experimentation if it's even possible *at all* to tell the difference.

I agree that experimentation has its own value, so in that light, sure, what the heck -- if you get it running, try everything you can.


But I think characterizing this as 960/1920 Hz simulation is mistaken. An actual 1920Hz CRT would likely look more like a sample-and-hold display than a conventional CRT. CRTs were not sharp because they better reflect reality than S&H displays; CRTs were sharp in motion because, unlike reality, they benefit from the stroboscopic effect freezing the action. If you actually ran a CRT at 960Hz it would no longer be exhibiting a stroboscopic effect that humans, or even birds and insects for that matter, could detect.



What I am saying will likely be controversial to many. But at some refresh rate (below 960Hz), motion blur on a CRT would get worse the higher the refresh rate, until it essentially equaled a S&H display.


A thought experiment:


Sitting in your living room on a bright sunny day with lots of natural light.


Grab a book or something with some print and start moving it back and forth in front of your face. It will blur.


Repeat at night with an adjustable strobe light. At a slow flash rate, the strobe will freeze it, and the print will be sharp.


As you increase the strobe rate, at some point you can't see the strobing anymore, and it will be back to how it looked in daylight: blurred.


To benefit from the stroboscopic effect, it has to be close to a frequency where you can actually detect it, or it must interact with some other element to create artifacts that you can detect.

Quote:
Correct. Though, just like I can detect rainbow artifacts, I can detect the stroboscopic effect; even 500 Hz PWM is detectable indirectly if you know how to look for PWM stroboscopic artifacts (not everyone is sensitive to them, much like DLP rainbows are a person-specific thing).

Academic note: Detecting stroboscopic artifacts (e.g. DLP rainbows, PWM, etc.) is a different vision phenomenon than flicker fusion.

Example: Test a mouse cursor on a black screen: 180Hz PWM on a 60Hz signal shows a triple-cursor motion blur instead of a continuous blur.

Flicker fusion is /different/ from store-and-hold blur, which is also /different/ from LCD response blur. They can all interact, of course.

Human perception of high-speed vision phenomena (different from "flicker fusion")

1. Witnessing high speed photography. Even with xenon strobe lights that flash for less than 1/5000th of a second, you can still see the flash -- though it looks as instantaneous to the human eye as a 1/200th second flash. Even a millionth-second flash would be detectable, provided there were enough photons to hit the eyeballs. It's called "integration" -- the cones/rods in your eyeballs are like tiny buckets collecting photons. Once you're far beyond the flicker fusion threshold, it doesn't matter how fast or slow these buckets are filled: a million-lumen flash for a nanosecond has the same number of photons as a one-lumen flash for a millisecond.

2. Wagon wheel effects. Humans can detect continuous versus non-continuous light sources indirectly using the wagon-wheel (stroboscopic) effect and its cousins (DLP rainbows, etc.). Given sufficient speed, insanely high numbers become detectable. Imagine a high speed wagon-wheel disc spinning synchronized with a theoretical 5000Hz strobe light: the wheel looks stationary. Now change the strobe light to 5,001Hz without changing the wheel speed, and the wagon wheel looks like it spins slowly backwards.

None of this is evidence of higher-speed human vision than the Flicker Fusion Threshold. I see DLP rainbows as well, but that is a lower-frequency artifact. When two (or more) higher-frequency elements interact, you get lower-frequency artifacts. They are beat frequency/aliasing artifacts.


High speed flash is a particularly poor example, and the reason is in your own statement: integration. The integration time, or the time our visual system averages inputs, is on the order of 10-20 ms. That means you really can't detect events spaced closer than that, or they will blur together. A single isolated flash is not a test of speed. The measure of speed would be how much time must elapse between two flashes so they would be distinguishable from one: 10-20ms (50-100Hz).
Quote:
3. Motion blur. Detectability of motion blur is massively well beyond flicker fusion.

What? Motion blur IS flicker fusion. Flicker fusion gives an indication of the integration time of our visual system, as does motion blur; they are the same phenomenon, and both point to a visual system that integrates over 10-20ms.
Quote:
For fast motion moving at 1 inch every 1/60th second, the hold-type blur on LCD is as follows:

At 60Hz, the motion blur is 1" thick (entry level HDTV's, regular monitors) ...

At 120Hz, the motion blur is 0.5" thick (120Hz computer monitors, interpolated HDTV's) ...

You can pretty much stop here, because after about 60Hz the difference is really only going to matter to a high speed camera. Our eyes themselves integrate over an interval similar to a 60 Hz frame time. So looking at the same motion through a window at a real object in sunlight would blur just as much.


There is an obsession in all specs on every device made to always go bigger/higher, but at some point it really isn't going to matter when humans are at the receiving end.


But just like the "Golden Eared" who think they need 24bit/96KHz recordings because they hear better than normal people, there will be those convinced they can see faster than the birds and the bees.




But that isn't to say I think a scanning backlight isn't a good idea for an LCD; I do think there is benefit there. I just think the obsession with super high refresh rates and ultra short flashes is misplaced.
 

·
Registered
Joined
·
246 Posts

Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22417543


These ideas have not matured, until recently because:

The reason you don't see this is largely because of its niche appeal, and the extra expense to build it.


120Hz monitors are already a hefty premium, add another premium for more powerful backlighting for short duty cycle.


A lot of LED LCDs are edge-lit to make them even cheaper. So having an array of 8 separate segments to scan would increase complexity and expense again.


By the time you were done, you might be looking at 3x the cost of a normal LCD, and your market is a slice of the niche that already insists on buying 120 Hz gaming monitors.


I doubt there is much technical challenge if a monitor manufacturer like Samsung/LG wanted to pursue this. But I figure they crunched some build cost/sales projections and can't see a profit.
 

·
Premium Member
Joined
·
4,176 Posts
Discussion Starter #18

Quote:
Originally Posted by guidryp  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22417807


I agree that experimentation has its own value, so in that light, sure, what the heck -- if you get it running, try everything you can.

But I think characterizing this as 960/1920 Hz simulation is mistaken. An actual 1920Hz CRT would likely look more like a Sample and Hold display than a conventional CRT.
Correct, unless you display a discrete frame for each refresh (e.g. 1920fps).

But that's insane, and we don't need that. We only need to black out the intermediate samples, and the persistence of vision (flicker fusion) does the rest.


CRT's running at 60Hz actual native refresh already have approximately a "1000Hz equivalence" if they have a 1ms phosphor decay.
Quote:
CRTs were sharp in motion because unlike reality, they benefit from the stroboscopic effect freezing the action. If you actually ran a CRT at 960Hz it would no longer be exhibiting a stroboscopic effect that humans, or even birds and insects for that matter, could detect.
It is worth pointing out that motion blur reduction is achieved by many methods, including non-stroboscopic methods too. Examples:

1. Display at interpolated X frames per second (e.g. 240 frames per second).

Effect: Store and hold, but 240 discrete samples

2. Display store-and-hold displaying a native 240 frames per second.

Effect: Store and hold, but 240 discrete samples

3. Display strobed at 1/X second (e.g. 1/240th of a second), from a 60Hz signal.

Effect: Stroboscopic, 60 discrete samples with intermediate samples blacked out. Persistence of vision and flicker fusion blend the motion.

4. CRT scanned at 240 Hz from a 240fps signal. Stroboscopic with all intermediate samples.

Effect: Stroboscopic, 240 discrete samples.


Tiny interesting note: Despite the similarity of the above situations, #4 has less motion blur than #1/2/3, because the CRT strobes each pixel at 1/1000sec (phosphor decay). Basically, #1/2/3 have similar motion blur as perceived by the human eye (1/240sec samples), while #4 has less motion blur (1/1000sec samples).
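The four cases above can be summarized in a tiny model. It is a simplification: it treats perceived blur as proportional to the visible sample duration per frame, which is the comparison being made in the note above:

```python
# Visible sample duration per frame for each of the four cases above, in seconds.
sample_time = {
    "1. interpolated 240fps, store-and-hold": 1 / 240,
    "2. native 240fps, store-and-hold":       1 / 240,
    "3. 60Hz signal strobed at 1/240 sec":    1 / 240,
    "4. 240Hz CRT, ~1ms phosphor decay":      1 / 1000,
}

# Cases 1-3 share the same sample time, hence similar perceived blur;
# case 4's shorter phosphor flash gives proportionally less blur.
times = list(sample_time.values())
assert times[0] == times[1] == times[2]
assert times[3] < times[0]
```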
Quote:
What I am saying will likely be controversial to many. But at some refresh rate (below 960Hz), motion blur on a CRT would get worse the higher the refresh rate, until it essentially equaled a S&H display.

A thought experiment:

Sitting in your living room on a bright sunny day with lots of natural light.

Grab a book or something with some print and start moving it back and forth in front of your face. It will blur.

Repeat at night with an adjustable strobe light. At a slow flash rate, the strobe will freeze it, and the print will be sharp.

As you increase the strobe rate, at some point you can't see the strobing anymore, and it will be back to how it looked in daylight: blurred.

To benefit from the stroboscopic effect, it has to be close to a frequency where you can actually detect it, or it must interact with some other element to create artifacts that you can detect.

None of this is evidence of higher-speed human vision than the Flicker Fusion Threshold. I see DLP rainbows as well, but that is a lower-frequency artifact. When two (or more) higher-frequency elements interact, you get lower-frequency artifacts. They are beat frequency/aliasing artifacts.
I understand what you are saying. I can wave my hand in front of a LCD with 180Hz PWM, and I see the discrete samples instead of a continuous blur. Same effect as you are describing.
Quote:
High speed flash is a particularly poor example, and the reason is in your own statement: integration. The integration time, or the time our visual system averages inputs, is on the order of 10-20 ms. That means you really can't detect events spaced closer than that, or they will blur together. A single isolated flash is not a test of speed. The measure of speed would be how much time must elapse between two flashes so they would be distinguishable from one: 10-20ms (50-100Hz).
I think you misinterpreted my use of the word "speed". Everything I wrote is about one flash sample per refresh; the shorter the flash sample, the higher the simulated "Hz", even if it is a single 1/960sec flash followed by a long delay until the next refresh. So really, we're talking about the same thing in a way. Flicker fusion blends the flash samples together into one consistent, continuous motion. So you're correct here.


However, by "speed" I meant shorter strobe lengths (while keeping the strobe cycle constant). In this case, shorter strobes continue to reduce motion blur even when you shorten them below 1/120 second (you're not strobing more frequently, just strobing in shorter and more intense bursts of light, in a scanning backlight). To see the benefits of "240Hz" vs "480Hz" vs "960Hz" (a sample-length measurement, not an actual frequency measurement), you need to see material that meets three criteria: (1) fast pans, (2) non-blurred frames (fast camera shutter), and (3) a framerate that matches the native refresh rate of the display signal. If any one of the 3 conditions is not met, going beyond 120 is usually quite useless. But if you meet all 3 conditions, the benefits of going beyond 120 suddenly become very clear (even with a point of diminishing returns).
Quote:
What? Motion Blur IS flicker fusion.
Wrong -- not necessarily! Motion blur is caused by multiple factors, including factors other than the stroboscopic effect. Motion blur can be caused by eye tracking -- and that's the _main_ cause of motion blur on LCDs! NOT LCD response, NOT flicker fusion!
Quote:
Flicker fusion gives an indication of the integration time of our visual system, as does motion blur; they are the same phenomenon, and both point to a visual system that integrates over 10-20ms.
Yes, but you're missing "persistence of vision" -- motion blur CAUSED by eye tracking (not caused by flicker fusion)
Your eyes do NOT behave like digital stepper motors!

Your eyes don't stop moving during a refresh. Your eyes are continuously tracking across the screen in a continuous, analog manner, so DIFFERENT rods/cones in your retina are integrating different parts of the image in motion, leading to motion blur caused by eye tracking. The image smears across your retina as you track. Even 1/480 second later, at a high-contrast edge, a different set of cones/rods is doing the integrating as the image smears across your field of vision. That's HOW you can see reduced motion blur at "240Hz", "480Hz", "960Hz". By having shorter strobes, you're limiting integration closer to the same cones/rods (sharper) rather than spreading it over more cones/rods in your retina. Flicker fusion takes over the rest to blend the consecutive images. Your eyes integrate multiple stacked blurred images at slower strobes (e.g. 1/240) and multiple stacked sharper images at faster strobes (1/960). See, flicker fusion has nothing to do with tracking-based motion blur.

You neglected to consider eye-tracking-caused motion blur

Digital Camera Experiment You Can Try

Tracking-caused motion blur. Metaphorically, your eyeballs are roughly akin to a slow-shutter digital camera. Now, get a good SLR digital camera with manual adjustments. Go into a windowless room. Shaking/panning the camera will be equivalent to eye tracking. Now try this experiment.


1. Configure the camera to 1/10sec shutter speed, flash turned off, but room lights turned on. It's going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is blurry because of the slow shutter.

2. Configure the camera to 1/10sec shutter speed, flash turned on, but room lights turned off. It's still going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is sharp despite the slow shutter.


Gasp! Impossible, you say? Not so fast, buddy -- what happened is that even though the camera was integrating over a long 1/10sec period, the flash is faster than 1/10sec. There was no light caught during the integration period, except for the light from the flash!


This is a very similar principle for motion blur reduction using strobed (flash) backlight. You've eliminated eye-tracking-based motion blur. The shorter the strobe, the less opportunity for eye-tracking-caused motion blur to blur the image.


Corollary / additional note: Your eyes are continuously open shutters. The display gives you multiple consecutive images. You're tracking objects in a fast panning scene. As you track an object, your eyeballs integrate consecutive frames. So for store-and-hold, you're integrating frames blurred by eye-tracking-based motion (for THIS motion blur, there is no motion blur caused by flicker fusion). But for strobed, you're integrating consecutive strobed frames while tracking an object in a panning scene. The shorter the strobes, the less eye-tracking motion there is during each strobe, and thus less motion blur, since you're no longer smearing as much across different retina rods/cones; integration stays more on the same retina rods/cones, and the stacked integration is sharper! Your eyes are not digital stepper motors while you're tracking an object in a fast-moving pan.


Good examples for telling apart "240/480/960" simulation are video material from HDTV cameras taken with a short shutter speed (fast car racing pans in bright light, ski racing on sunny slopes, a football field goal kick on a sunny day, fast turns left/right in FPS shooter games, fast horizontal panning in platformer games, etc.). I know, because I've been able to tell apart 240/480/960 simulation (and their progressively further motion blur elimination) on specific kinds of material like these! (Of course, "960" simulation is useless for HDTV material taken at slow shutter speeds such as 1/100sec -- the camera blur then becomes the limiting factor.) Also, in the HDTV era, studios have often started to use smaller cameras and longer shutter speeds than the gigantic NTSC cameras of yesteryear, so shutter speeds are often longer than during the NTSC era, and you do need to actively seek out HDTV footage taken at a short shutter speed. Yes, you do need to test *specific* material in order to tell the motion blur apart. You need a fast shutter for non-blurred frames. (1) Fast pans, (2) non-blurred frames, (3) framerate matching the native refresh rate of the display signal. If any one of the 3 conditions is not met, going beyond 120 is usually quite useless. But if you meet all 3 conditions, the benefits of going beyond 120 suddenly become very clear (even with a point of diminishing returns).


There are many academic papers that cover eye-tracking-based motion blur (a separate motion blur issue from flicker fusion). For example, in this academic paper, the diagram note says:
Figure 1: A depiction of hold-type blur for a ball moving with a translational motion of constant velocity. In the top row we show six intermediate positions at equal time intervals taken from a continuous motion. The empty circles denote the eye fixation point resulting from a continuous smooth-pursuit eye motion that tracks some region of interest. For each instance of time, the same relative point on the ball is projected to the same location in the fovea, which results in a blur-free retinal image. The central row shows the corresponding hold-type display situation. Here, the continuous motion is captured only at the two extreme positions. Frame 1 is shown during a finite amount of time, while the eye fixation point follows the same path as in the top row. This time, different image regions are projected to the same point on the retina. Temporal integration registers an average color leading to perceived blur as shown in the bottom row.
Quote:
You can pretty much stop here
Incorrect -- it's very easily detectable beyond 120fps when you look at proper material (e.g. fast pans at the display's native framerate, fast scrolling ticker text, fast left/right motion in FPS shooters). It is also proven by academic papers and by the digital camera experiment above, and ALSO I have been able to easily tell apart 120fps/240fps/480fps in scrolling ticker tests. There are demo modes. Have you been in Best Buy lately? There's a demo mode on some displays that allows you to test motion blur reduction. The difference is very clearly noticeable in the 60Hz/120Hz/240Hz-and-up demo modes enabled on some of these models, for scrolling tickers. It is also consistent with the information found in my references.


So let me re-iterate:

Fact #1: Store-n-hold display, no flicker at all.

Discrete 120fps at 120Hz has 50% less motion blur than 60Hz

Discrete 240fps at 240Hz has 75% less motion blur than 60Hz

Discrete 480fps at 480Hz has 87.5% less motion blur than 60Hz
All proven noticeable to the human eye. No flicker fusion involved!

For fast motion moving at 1 inch every 1/60th second:

At 60fps, the motion blur is 1" thick. No flicker fusion involved.

At 120fps, the motion blur is 0.5" thick. No flicker fusion involved.

At 240fps, the motion blur is 0.25" thick. No flicker fusion involved.

At 480fps, the motion blur is 0.125" thick. No flicker fusion involved.

I have seen it with my eyes too! (Many new HDTV's have interpolation modes)

Fact #2: Strobed display such as CRT or scanning backlight/BFI

1/120sec flash once per refresh, for 60Hz+60fps, reduces motion blur by 50%

1/240sec flash once per refresh, for 60Hz+60fps, reduces motion blur by 75%

1/480sec flash once per refresh, for 60Hz+60fps, reduces motion blur by 87.5%
All proven noticeable to the human eye. Yes, flicker fusion is involved, but the fusion threshold has no effect on the motion blur reduction -- that is persistence of vision from eye tracking (diagram on page 3 of the academic paper).

For fast motion moving at 1 inch every 1/60th second, on a 60fps 60Hz signal:

At 1/60sec strobe once per refresh, the motion blur is 1" thick. Tracking-based blur, not caused by flicker fusion.

At 1/120sec strobe once per refresh, the motion blur is 0.5" thick. Tracking-based blur, not caused by flicker fusion.

At 1/240sec strobe once per refresh, the motion blur is 0.25" thick. Tracking-based blur, not caused by flicker fusion.

At 1/480sec strobe once per refresh, the motion blur is 0.125" thick. Tracking-based blur, not caused by flicker fusion.

I have, also, seen it with my eyes too! (Many new HDTV's have scanning modes)
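Both "fact" tables above fall out of one relation: perceived blur width ≈ tracking speed × visible sample duration, whether that duration is a full store-and-hold frame or a short strobe. A quick sketch of that arithmetic:

```python
# Perceived motion blur width, per the eye-tracking model above.
def blur_inches(speed_in_per_sec, visible_sample_sec):
    """Blur width for an eye-tracked object: the distance it moves across
    the retina while a single sample is visible."""
    return speed_in_per_sec * visible_sample_sec

SPEED = 60.0  # 1 inch per 1/60 sec, as in the examples above

# Store-and-hold: the sample is visible for the whole frame.
print(blur_inches(SPEED, 1 / 60))    # ~1 inch at 60fps
print(blur_inches(SPEED, 1 / 120))   # ~0.5 inch at 120fps

# 60Hz strobed: the sample is visible only during the strobe.
print(blur_inches(SPEED, 1 / 240))   # ~0.25 inch with a 1/240 sec strobe
print(blur_inches(SPEED, 1 / 480))   # ~0.125 inch with a 1/480 sec strobe
```

This is why a short strobe on a 60Hz signal and a high native framerate give the same blur width for the same sample duration, matching the two tables.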


However, you are right on one small thing: it is true that beyond the flicker fusion threshold, extra fps is quite useless if you've completely eliminated eye-tracking-based motion blur (an LCD problem that has nothing to do with the flicker fusion threshold). This means that for a 1/960sec strobed backlight, framerate-matched signals beyond 120Hz all look the same; however, they would all look different on a 1/480sec strobed backlight. So 120Hz native refresh rate (discrete refreshes) is probably approximately the final frontier for native refresh rate, and you can eliminate all the remaining motion blur using one shorter single flash per frame (to eliminate eye-tracking-based motion blur). On this minor subheading of a point about flicker fusion, you are right about the flicker fusion threshold.


HOWEVER, your blanket statement "motion blur is flicker fusion" IS FALSE, since there are multiple factors affecting motion blur other than flicker fusion. Yes, flicker fusion is one factor, but it is JUST one factor. Therefore, the rest of your post is false, especially if you do the slow digital camera experiment illustrated above. The human-visible point of diminishing returns does not stop at 120Hz. (I can already tell apart the motion blur reductions from 120Hz / 240Hz / 480Hz, so it's clearly and easily proven by my own senses already, and the information in the academic papers agrees with me.)


On the final note, I suggest you do the digital camera test:
Quote:
Digital Camera Experiment You Can Try

Tracking-caused motion blur. Metaphorically, your eyeballs are roughly akin to a slow-shutter digital camera. Now, get a good SLR digital camera with manual adjustments. Go into a windowless room. Shaking/panning the camera will be equivalent to eye tracking. Now try this experiment.


1. Configure the camera to 1/10sec shutter speed, flash turned off, but room lights turned on. It's going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is blurry because of the slow shutter.

2. Configure the camera to 1/10sec shutter speed, flash turned on, but room lights turned off. It's still going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is sharp despite the slow shutter.


Gasp! Impossible, you say? Not so fast, buddy -- what happened is that even though the camera was integrating over a long 1/10sec period, the flash is faster than 1/10sec. There was no light caught during the integration period, except for the light from the flash!

P.S. I like motion blur for 35mm film. It's the way it is supposed to be. But I hate motion blur in video games (and things like trying to read while scrolling a browser window -- something I used to do on a CRT computer monitor but not on an LCD, due to scrolling being blurred). That's why I want CRT-like quality on an LCD for video games too -- a big reason I'm starting the Arduino scanning backlight project. It's already technologically possible to reduce motion blur by 90% using a scanning backlight. Also, I suggest booking an airfare to CES or CEDIA; some people (when asked) will be happy to show you precisely optimized demo material that clearly distinguishes 120Hz / 240Hz / 480Hz / etc. (scrolling ticker text tests, high speed smooth 60fps pans, etc.), which may lead you to disbelieve what you said in your post. You're also welcome to visit and see the scanning backlight once it's built, if you wish.
 

·
Registered
Joined
·
246 Posts

Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22417922


Wrong -- not necessarily! Motion blur is caused by multiple factors, including factors other than the stroboscopic effect. Motion blur can be caused by eye tracking -- and that's the _main_ cause of motion blur on LCD! NOT LCD response, NOT flicker fusion!

Yes, but you're missing "persistence of vision" -- motion blur CAUSED by eye tracking (not caused by flicker fusion)

I never said the stroboscopic effect creates motion blur; quite the opposite, I said it reduces it.


You have devoted a wall of text that is seemingly trying to make one phenomenon into many.


There is one mechanism at work. That is the slow integration speed of our visual system. Or in camera terms, our slow shutter speed.


There is no difference between:

Moving eyes, stationary scene (pirouette and the world blurs).

Stationary eyes, moving scene (sit still while a bat flies in front of you: nothing but blur).

Spokes blurring on a bicycle.

Flashing lights fusing into continuous on state. (AKA Flicker Fusion Threshold).


It is all the expected result, of integrating a visual sensor, over some relatively lengthy time period.


That integration time (like a camera shutter speed) is on the order of 10ms to 20ms (50-100 Hz) in humans.


Our slow integration and operation of our visual system can also be seen in our reflexes. Humans require approx 30 ms longer to respond to a visual stimulus than an auditory one.

Quote:
1. Configure the camera to 1/10sec shutter speed, flash turned off, but room lights turned on. It's going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is blurry because of the slow shutter.

2. Configure the camera to 1/10sec shutter speed, flash turned on, but room lights turned off. It's still going to integrate over a long period. Intentionally pan the camera while you are taking a picture. What happens? The picture is sharp despite the slow shutter.

Gasp! Impossible, you say? Not so fast buddy -- what happened is that even though the camera was integrating over a long 1/10sec period, the flash is faster than 1/10sec. There was no light caught during the integration period, except for the light caught from the flash!

Stop putting words into my mouth and turning my argument into a strawman. I wouldn't say this is impossible -- it's expected, and it is essentially the same as the thought experiment I suggested using a strobe light and human vision.


But I will adjust your camera experiment so perhaps you can see what you are missing.


Conditions:

1/50th of a second for our shutter speed (close to human visual integration).

Camera on a tripod.

Place a giant sheet of paper with text in front of the camera, on a mechanism that shakes it randomly about.

Flash duration sufficiently short to sharply freeze the text and make it readable.


Now consider what happens when we put the flash in strobe mode at various frequencies (frame rates) and press the camera shutter:


40 Hz - 0 or 1 flash, only one perfect sharp exposure (or a black frame).

60 Hz - 1 flash or 2 flashes. Could be perfect, or could be a double exposure.

120 Hz - 2 or 3 flashes

240 Hz - 4 or 5 flashes

480 Hz - 9 or 10 flashes.

960 Hz - 19 or 20 flashes, creating 19 or 20 exposures all blended together in a blurry mess during the time the shutter is open.
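The flash counts in the list above follow directly from the 20 ms exposure window; a quick sketch (my own arithmetic, assuming an ideal strobe) reproduces them. Depending on where the shutter opens relative to the strobe phase, the count is either the floor or the ceiling of (strobe rate × shutter time):

```python
import math

SHUTTER = 1 / 50  # 20 ms exposure, close to human visual integration

def flashes_per_exposure(rate_hz, shutter_s=SHUTTER):
    """Min/max number of strobe flashes that can land inside one
    exposure, depending on shutter phase relative to the strobe."""
    expected = rate_hz * shutter_s
    return math.floor(expected), math.ceil(expected)

counts = {hz: flashes_per_exposure(hz) for hz in (40, 60, 120, 240, 480, 960)}
for hz, (low, high) in counts.items():
    print(f"{hz:4d} Hz: {low} or {high} flashes")
```

At 960 Hz this yields 19 or 20 flashes per exposure, matching the last entry in the list.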


If freezing motion detail is your goal, higher strobe/frame rates are not the way to go: the higher the frame rate, the more overlapping, displaced frames you have averaging together, creating more blur, not less.



The obsession with ultra-high frame rates for human consumption is misplaced. So is claiming that short-duration strobes are the equivalent of high frame rates. Frame rate and pulse duration have different effects.
 

·
Registered
Joined
·
3,186 Posts

Quote:
Originally Posted by Mark Rejhon  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22415958


But, your original sentence was: "2 - AFAIK even if the panel had zero cross-talk (OLED) and an 8 segment scanning, it would produce about equal hold time to CRT (~1ms) (not surpass it)" -- A statement I believe is incorrect, at least for horizontal motion.
The statement as written is 100% correct, as I don’t mention strobing. But as you pointed out, if you scan AND strobe, then you can surpass 1ms (not including cross-talk).
Quote:
Originally Posted by guidryp  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22417807


What I am saying will likely be controversial to many. But at some refresh rate (below 960Hz) Motion blur on a CRT would get worse the higher the refresh rate, until it essentially equaled a S&H display.
Entirely true if you are only repeating identical refreshes. In fact, motion blur would get worse even at 120Hz in this method (see graphic in my previous post).

Quote:
Originally Posted by guidryp  /t/1429546/arduino-scanning-backlig...o-interpolation-crt-like-motion#post_22417807


A thought experiment:

Sitting in your living room on a bright sunny day with lots of natural light.

Grab a book or something with some print and start moving it back and forth in front of your face. It will blur.
Unless you are tracking the print as it moves, the experiment is not valid.


I’ve been repeating this explanation for about 8 years now on AVS. Our eyes track movement on the screen in a continuous fashion. Yet all displays produce motion with still images. The two systems are not compatible. The result is blur.


In other words, blur induced by the display (not inherent in the signal) is due to the conflict between our continuously moving retina (tracking movement on the screen) and sequential still images that make up motion video.


The best analogy I could come up with is the laser dot thought experiment. If your retina is moving and you shine a stationary laser beam onto a spot on its surface, the laser beam will literally draw a line on your moving retina due to retinal persistence. This is analogous to our eyes continuing to move while watching a stationary image (1 frame).


Using the same analogy it is easy to understand the artifact:
  • The length of the laser line drawn onto the retina is analogous to the width of perceived blur on a display.
  • The length of the laser line (i.e. – blur width) is determined by the speed of eye movement.
  • The length of the laser line (i.e. – blur width) is also determined by how long you shine the laser (i.e. – how long you display a frame).
  • The length of the laser line (i.e. – blur width) is also determined by how long your eye persistence is. If you have short persistence, the trailing edge of the laser line will start to disappear faster (i.e. – you may not perceive display blur as easily as others who have long persistence.)


Now, using the same thought experiment, if you shine only a short nanosecond laser pulse on your moving retina, you will literally draw a dot on your retina with no blur. This is analogous to pulsing a frame for a very short time while your eyes are in motion.


As you can see, the primary display parameter determining the blur induced by the display itself is the HOLD TIME, which is the time each unique frame is displayed on the screen. Understand hold time and you will understand this entire concept.


Remember that even with ultra-short nanosecond frames, if you repeat the frames you have effectively increased the hold time. This is why a 120Hz CRT displaying a 60Hz signal (using frame repeat) will show more motion blur than a 60Hz CRT.
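The hold-time rule of thumb can be put into numbers with a simplified model (my own sketch, consistent with the reasoning above; the eye-tracking speed is an assumed value): perceived smear width is roughly eye-tracking speed multiplied by the hold time of each unique frame.

```python
# Simplified model: blur width ≈ eye-tracking speed × hold time.

def blur_width_px(tracking_speed_px_s, hold_time_s):
    """Smear width in screen pixels for an eye tracking at the given speed."""
    return tracking_speed_px_s * hold_time_s

SPEED = 960  # px/s -- an assumed eye-tracking speed during a fast pan

# 60 Hz sample-and-hold LCD: each unique frame is held ~16.7 ms.
sample_and_hold = blur_width_px(SPEED, 1 / 60)

# Scanning backlight dark 90% of the time: each frame is lit only ~1.67 ms.
scanning = blur_width_px(SPEED, (1 / 60) * 0.1)

print(round(sample_and_hold, 2), round(scanning, 2))  # 16.0 vs 1.6 px
```

Cutting hold time by 10x cuts the smear by 10x, which is the whole point of a 90%-dark scanning backlight. Conversely, repeating each frame (a 120 Hz CRT showing a 60 Hz signal) stretches the effective hold back across a full 60 Hz period, increasing the smear again.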
 