If you're eye-tracking ultralow-persistence motion that's framerate-refreshrate synchronized -- e.g. viewing motion tests on a CRT such as www.testufo.com/photo -- then there's no judder and no stutter. When persistence is so low that the motion is effectively point-sampled, there's none of the stutter/judder you get from things like 50% persistence (a 180-degree shutter, in film terms). Not 50% persistence, not 25% persistence, but as close to point-sampled persistence as possible (i.e. persistence that is as tiny a fraction of a refresh cycle as possible) -- THEN you can simultaneously eliminate judder AND motion blur during the eye-tracking scenario. I think you already know very well what I am talking about. Now, if you keep your eyes still, there are some strobe effects. Motion blur ruins your ability to accurately and comfortably eye-track, while you can easily eye-track when there is zero blur (before the stroboscopic effect begins to bother you). And when you fixate your gaze on something scrolling on a screen, there are no strobing effects.
In fact, I wrote about the problem of finite framerates in the Area 51 Display Research Forum at the Blur Busters Forums, in the thread So what refresh rate do I need? [Analysis].
One particularly good section is the one about the strobing effect of finite framerates, which can still be seen even at 120fps. This is the "judder" you talk about, and this is still visible at 1000fps.
There are some additional factors that can require going well beyond 75Hz, possibly far beyond (e.g. 1000Hz), including the stroboscopic effect. (See Why We Need 1000fps @ 1000Hz This Century.)
Several of the sources you mention already explain this, but here's a good demonstration:
View http://www.testufo.com/photo#photo=eiffel.jpg which I've embedded below using the [testufo] tag.
Stare at a stationary point in the middle of the top edge of this moving animation.
TestUFO Animation: Scrolling Eiffel Tower
1. Put your finger at the top edge of this animation. Keep the finger stationary.
2. Stare at your finger (or next to your finger). Keep your eyes and finger stationary.
3. As the antenna part of Eiffel tower scrolls under your finger, you will see multiple antennas appear
(the stroboscopic effect -- kind of like the reverse version of the phantom array effect -- stationary eye but a series of static images that represent moving object at a finite refresh rate)
4. This problem is most pronounced at 60Hz (e.g. antennas 16 pixels apart at 960 pixels/second)
5. This problem still exists at 120Hz (e.g. antennas 8 pixels apart at 960 pixels/second)
This will work even on the slowest laptop LCD panels, too.
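The spacing of those duplicate antennas is just motion speed divided by refresh rate. A minimal sketch of that arithmetic (the helper name is mine, not from TestUFO):

```python
# Stroboscopic step spacing: at a finite refresh rate, a moving object is
# presented as a series of discrete copies spaced (speed / refresh rate)
# pixels apart on the retina of a stationary eye.
def strobe_step_px(speed_pps: float, refresh_hz: float) -> float:
    """Pixel gap between successive 'copies' of a moving object."""
    return speed_pps / refresh_hz

print(strobe_step_px(960, 60))    # 16.0 -> antennas 16 pixels apart at 60Hz
print(strobe_step_px(960, 120))   # 8.0  -> antennas 8 pixels apart at 120Hz
print(strobe_step_px(960, 1000))  # 0.96 -> nearly continuous at 1000Hz
```

Notice the step only shrinks in proportion to refresh rate, which is why the effect remains visible even at very high Hz.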
The same kind of situation occurs when you spin your mouse pointer in a circle on a black background: it's not a continuous blur. Even a 1000Hz mouse will show only 120 'copies' of the cursor per second (at a 120Hz refresh rate) when you spin the pointer rapidly in a circle while staring at a stationary point in the middle of your monitor (this is also known as the 'mousedropping' effect).
The only way to eliminate all stroboscopic effects like this, without adding motion blur back, is flicker-free low persistence at ultrahigh frame rates (via interpolation, or preferably real frames), so that there's continuous motion rather than a series of static frames that can cause stroboscopic interactions (phantom array, mouse dropping effect, wagonwheel effect, etc).
You can add (1/fps) seconds' worth of intentional/artificial motion blur to mask this effect, much as movies do (35mm film): to fix the strobing, filmmakers add intentional motion blur. For example, at roughly 1000 pixels/second -- a 16-pixel step per frame at 60Hz -- you could add 16 pixels of intentional GPU-effect motion blur to eliminate this stroboscopic effect.
However, adding motion blur is very bad when you want to simulate virtual reality (as both you and I already know from John Carmack's talks, and from Oculus). For this use case, you want 100% of all motion blur to be natural, created inside the human brain if possible -- no externally added motion blur as a band-aid. Motion blur is also undesirable to a lot of Blur Busters readers, who come to this very site in pursuit of the elimination of motion blur. So someday in the future, we'd want to attempt strobe-free low persistence. To do 1ms persistence without flicker/strobing/phosphor/etc, you need to fill all of the 1ms timeslots in a second, and that means 1000fps@1000Hz to achieve low persistence with no form of light modulation. That, as you can guess, is quite hard to do with today's technology, so strobing is a lot easier.
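The 1000fps@1000Hz figure falls out of simple arithmetic: for strobe-free low persistence, every timeslot of the desired persistence length must hold its own unique frame, so the refresh period must equal the persistence. A sketch (helper name is mine):

```python
# Refresh rate needed for flickerfree/strobefree low persistence:
# each frame is continuously lit for exactly `persistence_ms`, so a full
# second must be tiled with (1000 / persistence_ms) unique frames.
def strobeless_hz(persistence_ms: float) -> float:
    return 1000.0 / persistence_ms

print(strobeless_hz(1.0))  # 1000.0 -> 1000fps@1000Hz for 1ms persistence
print(strobeless_hz(0.5))  # 2000.0 -> 2000fps@2000Hz for 0.5ms persistence
```

This is the same arithmetic behind the 0.5ms = 2000fps@2000Hz figure mentioned later in this post.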
75Hz completely solves the motion blur problem by allowing low persistence above the flicker fusion threshold. However, it doesn't solve 100% of the problem of making virtual imagery completely indistinguishable from real life. Certainly, it's often "good enough", and it will probably have to be good enough for the next decade (or few).
There is a Law of Persistence: 1ms of persistence (strobe length) translates to 1 pixel of motion blurring at 1000 pixels/second. Decay curves (e.g. phosphor) complicate the math, but strobe backlights such as LightBoost, ULMB, and BENQ Blur Reduction are essentially near-squarewave and follow this Law very accurately, to the point where I've begun to call it the "Blur Busters Law of Persistence". This does make some assumptions (no other weak links, stutter-free motion, frame rate matching refresh rate, perfectly smooth VSYNC ON motion such as stutter-free TestUFO motion, and motionspeeds slow enough that random eye saccades are an insignificant motion factor).

I find I can eye-track moving objects on screen accurately (i.e. the ability to count the eyes of the TestUFO alien, which are single pixels) up to approximately 3000 pixels/second from arm's length away from a 24" monitor. Different humans have different eye-tracking speeds, but this kind of defines the bottom-end persistence we need, since 1ms of persistence at 3000 pixels/second blurs the alien's eyes to 3 pixels wide rather than 1 pixel. This is the reason why I told BENQ to support a 0.5ms strobewidth in their new firmware (they listened; now we just have to wait for the fixed XL2720Z firmwares to ship), since I can just barely detect the motion clarity difference between 0.5ms persistence (strobe length) and 1.0ms persistence (strobe length). For 1080p 24" at arm's length, most people track reasonably accurately at 960 pixels/second. Others track at 2000 pixels/second before eye tracking can't keep up; I find I cap out at approximately that motionspeed.
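The Law of Persistence reduces to a one-line formula. This sketch (helper name is mine) reproduces the 1ms-versus-0.5ms numbers discussed:

```python
# Blur Busters Law of Persistence: eye-tracking motion blur in pixels
# equals persistence (ms) times motion speed (pixels/second) / 1000,
# assuming a near-squarewave strobe and framerate == refreshrate.
def tracking_blur_px(persistence_ms: float, speed_pps: float) -> float:
    return persistence_ms * speed_pps / 1000.0

print(tracking_blur_px(1.0, 1000))  # 1.0 -> 1 px of blur at 1ms, 1000 pps
print(tracking_blur_px(1.0, 3000))  # 3.0 -> alien eyes blurred to 3 px wide
print(tracking_blur_px(0.5, 3000))  # 1.5 -> the 0.5ms strobewidth advantage
```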
During 3000 pixels/second TestUFO animations, this means the difference between 1.5 pixels of motion blurring (insignificant blurring at http://www.testufo.com/ghosting#pps=3000 or http://www.testufo.com/photo#pps=3000 ) and 3.0 pixels of motion blurring (alien eyes blurred at http://www.testufo.com/ghosting#pps=3000 as well as window frames of buildings blurred at http://www.testufo.com/photo#pps=3000 ). I have this beta firmware installed on my XL2720Z, and it confirmed my findings: 1ms persistence is not the final frontier. So I recommend manufacturers start considering 0.5ms persistence, and not stop at 1.0ms persistence. This will become even more demanding in the VR era, with panning at head-turning speeds and 4K screens (twice as many pixels to track across), so 0.25ms might actually produce a human-noticeable improvement over 0.5ms (e.g. 8000 pixels/second during head turning -- creating 2 pixels versus 4 pixels of motion blur at 0.25ms versus 0.5ms persistence). For now, 1ms persistence (LightBoost 10%) is sufficiently low to satisfy the majority of the population, since you still get a lot of brightness loss trying to achieve lower persistence, and compensating with brighter strobes gets expensive (e.g. custom backlights/edgelights). That said, you could still just do 75Hz with, say, 0.25ms persistence and call it a day, unless you were concerned about stroboscopic effects.
We have been stuck with stroboscopic effects ever since humankind invented the concept of frame rates / refresh rates with the zoetropes and kinetoscopes of the 19th century. We have never yet been able to successfully record and play back continuous motion naturally in a framerateless manner, so we have the artificial invention of the frame rate for now -- it's simply the easiest way to represent motion virtually.
The lighting industry has done several studies on human detection of the stroboscopic effects of flickering light sources (this is one good reason why fluorescent ballasts have gone electronic and often run at >10kHz rather than strobing at 120Hz). The stroboscopic-effect detection threshold (phantom array detection) can be quite high, even 10,000Hz for a portion of the human population -- see this lighting industry paper -- so that roughly defines the refresh rate we need, although we could get by with just 1000fps@1000Hz plus 1ms of GPU-effect motion blurring (fairly imperceptible, but enough to prevent the wagonwheel effect).
I totally agree with individuals such as those at Valve and Oculus about the elimination of the vast majority of artifacts with low persistence at >75Hz -- this is definitely the sweet spot, as you've described. That said, it doesn't completely eliminate all differences between virtual imagery and real-life imagery; we will still need >1000fps@1000Hz to pull off the "real life indistinguishability" feat, or some kind of future framerateless continuous-motion display, or even a display that refreshes faster only where the eye is staring. By going to low persistence via strobing, we solve a large number of VR problems; it's just that low persistence using today's technology necessitates strobing, and that problem is unsolvable without going to ultrahigh framerates (0.5ms = 2000fps@2000Hz needed for flickerfree low persistence with zero strobing, zero light modulation).
Corollary/TLDR: As you said, low-persistence 75Hz+ is definitely the sweet spot that solves a lot of problems. However, 75Hz is still not enough to pass a theoretical Holodeck Turing Test ("Wow, I didn't know I was standing in a Holodeck. I thought I was standing in real life."), because there remain side effects of finite framerates that prevent motion from fully mimicking the completely step-free continuous motion of real life (no judder, no stutter, no wagonwheel artifact, no blur, no strobing, no visible harmonics between framerate and refreshrate, no phantom array, no mousedropping effect). To do so via a finite refresh rate, we need ultrahigh framerates synchronized to ultrahigh refresh rates -- 4-digit -- in order to completely solve all human-detectable side effects of a finite frame rate: achieving low persistence via continuous light output, without strobing/phosphor/light modulation, to get simultaneously step-free, strobe-free, and blur-free motion that mimics real life.
Very interesting talk though -- and we need more people like you, visiting this brand new forum which launched barely more than a month ago!
Also, here are photos of the Eiffel Tower Test. You stare at a stationary point on the screen while the Eiffel Tower scrolls past, and you see the stroboscopic effect. The same problem occurs on any finite-refresh-rate display (CRT, LCD, plasma, whatever).
The same problem occurs for CRT and LCD, strobed and non-strobed, flicker and flickerfree, phosphor and phosphorless.
There are visual artifacts of any finite framerates, you must choose either motion blur OR strobing. What has been definitively proven is that if you were forced to choose one or the other for VR, you definitely don't want motion blur.
The judder is not visible when you're accurately eye tracking, and it's much easier to eye-track low-persistence at 4000 pixels/second than eye-track high-persistence at 1000 pixels/second. The eye-tracking comfort improvement of low-persistence outweighs the motion blur, when you're doing fast head tracking.
I am familiar with the judder you talk about here, but you're somewhat misdiagnosing it -- it's a question of the lesser evil versus the greater evil. Some of my work is going into upcoming scientific papers, so I am extremely experienced with stutter, judder, and motion blur, and all the corresponding mathematical tradeoffs between them, in both the eye-tracking-motion and eye-not-tracking-motion situations.
On low-persistence displays such as my BENQ Z-Series or LightBoost, I can pass the TestUFO Panning Map Readability Test. This is a very slow 1200 pixels/second motion, similar to the panning speed caused by a slow 20-degree-per-second head turn. You can't read this map on a 60Hz LCD or a non-strobed 120Hz LCD. But the map labels are readable on a CRT, on a LightBoost monitor, or on an Oculus DK2. You can read text in a paper book that's sliding past your face at a few inches per second, but you can't read tiny text scrolling past at that speed on a 60Hz LCD or a full-persistence display such as DLP (unless it's using black frame insertion or a high refresh rate to compensate). You need a display that can pull off that virtually zero-blur feat for comfortable head-tracked VR.
The artificial existence of the "framerate" means the beginning of a frame's visibility is at a different time than the end of its visibility -- as you track moving objects on a screen, your eyes are in a different position at the beginning of a frame cycle than at the end of it. This smears the frames across your retinas. The real fix is framerateless technology (infinite framerate), which does not exist, so instead you add black periods between the frames, letting your eyes point-sample the frames.
I wish we didn't need black frame insertion, but there is no other way to pull off just 1ms persistence. 1000fps@1000Hz is the only flickerfree/strobefree alternative to making the frame visible for just 1ms with a large black gap until the next 1ms frame.
During eye-tracking (on a panning scene, like www.testufo.com/photo), 75Hz with 2ms strobe flashes has exactly the same motion blur and the same zero-judder effect as flickerfree 500fps@500Hz. We're of course discounting the non-eye-tracking situation. Some people will no doubt be annoyed by judder and might even prefer motion blur, but they would be a minority.
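That equivalence follows because eye-tracking blur depends only on how long each frame stays lit (persistence), not on how the persistence is achieved. A sketch, assuming a square-wave strobe and ideal sample-and-hold (helper name is mine):

```python
# Eye-tracking motion blur depends only on the per-frame light duration.
def blur_px(persistence_ms: float, speed_pps: float) -> float:
    return persistence_ms * speed_pps / 1000.0

# 75Hz strobed display, 2ms flash per refresh, at 1000 pixels/second:
strobed = blur_px(2.0, 1000)
# 500fps@500Hz flickerfree sample-and-hold: each frame lit 1000/500 = 2ms:
sample_and_hold = blur_px(1000.0 / 500, 1000)

print(strobed, sample_and_hold)  # 2.0 2.0 -- identical tracking blur
```

The two displays differ only in the non-eye-tracking (stroboscopic/judder) behavior, as the post notes.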
The bonus is that the Oculus DK2's low-persistence mode can be enabled/disabled, so flicker-sensitive people can enjoy full persistence for completely flickerfree operation (at the cost of motion blur, of course). OLED rolling scans are extremely adjustable. Several new monitors have adjustable persistence (e.g. BENQ Z-Series persistence is adjustable from 0.5ms through 5.0ms, or flickerfree/strobefree mode), so it's not as if low persistence is being forced upon your eyes, if you genuinely prefer nausea-inducing motion blur during virtual reality. Recent convention-center tests with the public have shown that there's more VR nausea from motion blur than from flicker/strobing. The anti-strobe people don't realize the wonderful flexibility emerging in these new displays to give users the choice, and some people don't realize the human-visible side effects of finite framerates can stay visible all the way up to several thousand frames per second, due to the stroboscopic effect. However, that is by far a lesser problem than motion blur in VR.
Joe, before you reply to my post, read Pages #1, #2, #3, #4, #5 of So what refresh rate do I need? [Analysis] [very good one!], as well as Highest perceivable framerate?, as well as Michael Abrash's Down the VR Rabbit Hole: Fixing Judder. These are full of useful information about judder and how it relates to eye tracking. And there is a lot of great stuff written by many other motion-fluidity-experienced people, including Paul Bakaus and others. Shall I go on? Once you read these, you'll realize that Palmer isn't marketing -- he's just restating known scientific fact and new VR knowledge: the lack of motion blur makes head-tracked VR far, far more immersive. We don't care if it's Oculus or another vendor -- low persistence is that important for VR, or VR won't ever become mainstream, because everything will look more artificial (it's not possible to comfortably eye-track anything faster than slow motionspeeds when there's motion blur).
Eventually we'll have 4K or 8K OLED VR that's fully configurable from flickerfree to near-zero persistence, and everybody will be happy, for now, until the infinite-framerate, refreshrateless Holodeck arrives (permitting crystal-clear real-life motion without needing strobing). But as we all know, that infinite-framerate dream is probably not going to happen this century -- 4K OLED definitely will.
TL;DR: Since infinite framerate is impossible and we're stuck with finite framerates, strobing (well above a comfortable flicker fusion threshold) is generally the lesser evil for virtual reality compared with the mandatory motion blur of a flickerfree display (at currently achievable refresh rates). Eye-tracking comfort is of paramount importance during rapid virtual-reality movements, and there's no way to eliminate eye-tracking-based motion blur without an ultrahigh framerate (or infinite/framerateless continuous motion) unless you point-sample the frames by adding large black gaps between them and making frame visibility as brief as possible. This hugely reduces nausea, as motion blur creates massively more nausea than strobe/judder effects do (during non-eye-tracking situations).
www.BlurBusters.com - Everything Better Than 60Hz(tm)
I almost commented on this statement of Joe's as well, and then erased it because I realized that his is a statement of what happens before image information lands on the retina.
In both real life and VR there's an eye involved, feeding information to the brain in a format that isn't a frame at a time. His statement concerns what happens before the light even reaches the eye: in real life, light hits the back of the eye continually; in VR, the eye receives it in brief pulses.
You're the wife. You're the dead wife. Give me my @#$%ing coin, dead wife.
Here's my impressions:
Let's start with the positives:
Despite some light leaking in at the bottom of the mask, black-level performance was excellent. Especially in Alien. It helps a lot with immersion and fear level for horror games.
The head straps felt more comfortable than the DK1 - maybe it was lighter?
No noticeable lag and very smooth head panning even at extreme angles.
Aliasing is much improved due to the higher resolution. This is especially obvious in the distance.
Now to the negatives:
The biggest flaw for me was the fact that the device tracks your head and not your eyes. So you need to lock your vision into a forward/center stare and let your neck do all the work. It would take some brain training to adjust to that. If you try to move your eyes instead, you will look into the border/peripheral vision area where the optics of the device make everything blurry. Only the center area appears clear, so you need to keep moving your head so that the point of interest remains dead center. You end up with a very small useful FOV. It drove me nuts in the Alien game because the motion tracker in your hand was at the edge of the screen and I had to keep moving my head to see it clearly. I don't remember this issue from the original DK1, but I tested it a very long time ago. Either it wasn't noticeable in the older demos or they changed something with the optics of the device. Maybe the interpupillary distance didn't match my eyes? I mentioned it to two different booth attendants and they didn't know how to fix it.
I didn't notice any improvement in persistence or motion blur. There was no flicker and the display was very bright so I'm not sure if it was even doing the scanning refresh. I asked and they claimed all units were in the low-persistence mode. To be honest, it was very difficult to evaluate motion blur because you couldn't do eye-tracking for very long before hitting the out-of-focus borders. Overall, I think the motion was fine but not up to CRT levels yet.
The image itself looked a bit like a mis-converged projector because they are using an OLED screen with pentile pixel structure where neighboring pixels share colors (http://www.oled-info.com/pentile). All on-screen text and smaller details had a colored fringe that got worse in the screen borders.
One issue remaining from the first devkit is that it's like wearing a diving mask suctioned to your face. Your face sweats a lot even after a few minutes - luckily it doesn't fog up like a diving mask.
Overall, I was hoping for a larger improvement over the DK1. There are still some significant problems to overcome, and I can see why they took the Facebook money to help solve them. In its current state, I don't think it's ready for consumers. Maybe they can add the eye tracking tech mentioned in this article:
Edit: Some additional info came to light after people received their units which explains some of the issues I saw above. See here:
Today I was able to get his reaction. He hated the whole thing even more than me but for different reasons. Being a game designer, he focused on the terrible controls in Alien. He didn't notice the focus and color fringing issues that I saw.
So either I'm more picky than most or this is an issue that depends on the viewer - similar to DLP rainbows.
Whether it's genuine or calculated or in the middle, they come across as likeable goofballs once in a while. Perhaps to soften their image as the evil empire to startups.
Any chance you're attending Siggraph this year? I'll be there. Send me a msg if you go.
At least the video captured of the space flight sim finally explains that crazy loss of focus and increase in color fringing at the edges of the display. They seem to be intentionally adding a color mis-convergence to compensate for crappy optics that suffer from chromatic aberration. A software solution is not the right way to fix this and I hope they'll find a way to include better quality lenses while sticking to consumer friendly prices.
BTW...Luckey, Iribe, Abrash, and Carmack are all on record saying the ultimate VR experience will arrive when the market has access to 16K-24K resolution or higher. And they think that can happen over the next 20 years as VR becomes more widely used across a huge range of industries utilizing its unique capabilities. But they also stated that a very good to excellent VR experience can be achieved in the interim with anywhere from 1080p per eye to 8K (4K per eye). I think these guys are ready right now, from a technical and partnership standpoint, to launch a consumer version of OR. Their statements, the FB acquisition aside, suggest they are delaying so that a full range of game developers and accessory manufacturers have a wide stable of products ready to feed the VR beast. As far as video tech is concerned, they have come a long way since the prototype stage, and I suspect they will always push the envelope on best-in-class, practical A/V tech.
Might not be a game for everyone, but it's nice to see it already listed and available to utilize.
Proud owner of an Axiom Audio home theatre, powered by Axiom Audio.
If they can't figure out a way to fix chromatic aberration optically via better lenses or coatings, then they will require some insane resolutions to fix it via software. The problem is that light is refracted/bent through the lenses similar to a prism. This ends up splitting the red from the blue very noticeably. They try to compensate for this in software by shifting the red in the opposite direction on the GPU before it hits the lens. This is a very bad solution because the amount of shift required varies across the screen and does not fall into exact pixel sized amounts. You will always see some color fringing somewhere on the screen. This was my biggest gripe about the DK2.
The pentile OLED pixel structure is also not helping them since it effectively reduces the available resolution even further.
I wish I had the time at E3 to see what Sony was offering. Given their history of producing some excellent front projectors, they might be better at designing a good light path for their headset.
Not so fast. If the Glyph is the winning tech...which I doubt...it too will just be a tech in the Facebook/Oculus Rift stable. Or the Sony Morpheus toolbox, IMO.
I think the skeptics are completely underestimating the power of social media in the marriage of Facebook & Oculus Rift. And I think that calculus will reveal itself more with movies, sports, and TV shows until full VR games are more available. Super Bowl parties with friends in different places and countries can take on a whole new scale with VR.
The Gameface Mark 5 prototype HMD has a higher resolution than even the Oculus Rift, at 1440p (2.5K, or 2560×1440), with Nvidia's Tegra K1 driving it.
This technology could very well reshape how movies and documentaries of all sorts are made. It could revitalize consumer interest in 3D, because it is decidedly not associated with huge HDTVs and special glasses -- just a different kind of display instead. I think consumers will understand that distinction fairly easily. I can't wait to see nature, space, and science docs of all sorts in VR. Just looking at the kinds of photography and filming techniques and technologies Jaunt is exploring is really exciting. Pretty soon Oculus Rift is going to have to show us the consumer beef -- like in 2015 at the latest, IMO -- or people will start to raise serious questions and doubts, including their biggest supporters. Of course, I'm sure they already know this.