
Oculus Rift VR Headsets - Page 5

post #121 of 286
Quote:
Originally Posted by DLPProjectorfan View Post

I always thought that the higher the resolution, the better.
Higher resolution translates into more detail, more pixels, better color, and less eye fatigue.

From the hands-on report: the prototype virtual retinal display (VRD) delivers insanely sharp definition and a realistic image even with low-resolution sources, by projecting directly into each eye using an array of two million micromirrors... We watched a few minutes of Life of Pi in 720p 3D, played a bit of Call of Duty and poked around a 360-degree video filmed at a traffic circle in Italy. It all looked great, and that latter clip, which was streaming from a smartphone, was a mere 360 x 180 pixels.

It thus looks like the Avegant gear may have 720p micromirror chips inside. Apparently its projection method delivers ultrasharp pictures, which compensates for any lack of pixel resolution, and there is no screen-door effect. Perception of super-PQ may also strongly depend on the method used to generate color pictures. With micromirrors it is possible (or even necessary) to generate color sequentially: the array displays R, G and B in fast succession. Then the effective resolution, compared to a standard RGB display, is increased three-fold. Even the lowest-resolution 1024x768 micromirror would have an effective resolution of 1080p.

So the Avegant looks very promising, but as usual with such new things, the pessimistic question is: where are the weak points?
post #122 of 286
Quote:
Originally Posted by irkuck View Post

It thus looks like the Avegant gear may have 720p micromirror chips inside. Apparently its projection method delivers ultrasharp pictures, which compensates for any lack of pixel resolution, and there is no screen-door effect. Perception of super-PQ may also strongly depend on the method used to generate color pictures.
You are confusing sharpness with resolution. They are not the same thing. Resolution affects the amount of detail and the smoothness of the image.
Quote:
Originally Posted by irkuck View Post

With micromirrors it is possible (or even necessary) to generate color sequentially: the array displays R, G and B in fast succession. Then the effective resolution, compared to a standard RGB display, is increased three-fold. Even the lowest-resolution 1024x768 micromirror would have an effective resolution of 1080p.
You are mistaken. It will give you a higher fill factor; it will not give you a higher resolution. Having subpixels increases your resolution, as you can take advantage of subpixel rendering.

When you have sequential color, there is the potential for improved motion handling if each subfield were independently addressable, but that's very unlikely to happen.
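To illustrate the subpixel rendering point, here is a toy Python sketch (my own illustration, not how a production renderer such as ClearType actually filters): on an RGB-stripe panel the three subpixels of each pixel sit at different horizontal positions, so a renderer can treat them as three narrower luminance samples and triple the horizontal addressability, at the cost of color fringing.

```python
import numpy as np

def subpixel_render(gray_3x):
    """Map a grayscale image with 3x horizontal resolution onto an
    RGB-stripe panel by treating each subpixel as its own sample.
    gray_3x: array of shape (H, 3*W), values in [0, 1]."""
    h, w3 = gray_3x.shape
    assert w3 % 3 == 0
    # Each group of 3 horizontal samples drives the R, G, B subpixels
    # of one physical pixel (no fringe-reduction filtering here).
    return gray_3x.reshape(h, w3 // 3, 3)

# A 1-subpixel-wide vertical line: representable on the stripe panel,
# but it would round to a whole pixel without subpixel addressing.
img = np.zeros((4, 9))
img[:, 4] = 1.0
panel = subpixel_render(img)  # shape (4, 3, 3)
print(panel[0])               # only the G subpixel of the middle pixel is lit
```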
post #123 of 286
Quote:
Originally Posted by Chronoptimist View Post

You are confusing sharpness with resolution. They are not the same thing. Resolution affects the amount of detail and the smoothness of the image.
You are mistaken.

The point is different. Imagine a high-resolution device showing a slightly unsharp picture and, conversely, a low-res display showing a sharp picture. The high-resolution device will be judged as worse. The same might happen with the Avegant, given the claim of a perceptually ultrasharp picture.
Quote:
Originally Posted by Chronoptimist View Post

It will give you a higher fill factor; it will not give you a higher resolution. Having subpixels increases your resolution, as you can take advantage of subpixel rendering.

You are apparently mixing sharpness with rendering quality. The origin of the Avegant's ultrasharpness might be traced, at least partially, to the sequential color. In sequential color, each color subpixel comes from the same place, while in a standard display the subpixels are in slightly different positions. This is normally of no significance, since the distance to the pixels is much larger. But the Avegant is projecting into the eye, so even slight differences in subpixel locations could be detrimental. One good way of seeing the issues here is to compare the imaging chips in standard digital cameras, which use a Bayer filter, with Sigma cameras using the Foveon chip. Foveon is known for delivering ultrasharp pictures (though it has its own problems).
Edited by irkuck - 11/15/13 at 1:19am
post #124 of 286
Quote:
Originally Posted by irkuck View Post

The point is different. Imagine a high-resolution device showing a slightly unsharp picture and, conversely, a low-res display showing a sharp picture. The high-resolution device will be judged as worse. The same might happen with the Avegant, given the claim of a perceptually ultrasharp picture.
But you cannot make up for a lack of resolution with sharpness. You still have a low-resolution image.
Quote:
Originally Posted by irkuck View Post

You are apparently mixing sharpness with rendering quality. The origin of the Avegant's ultrasharpness might be traced, at least partially, to the sequential color. In sequential color, each color subpixel comes from the same place, while in a standard display the subpixels are in slightly different positions. This is normally of no significance, since the distance to the pixels is much larger. But the Avegant is projecting into the eye, so even slight differences in subpixel locations could be detrimental.
I agree that eliminating subpixels is better for image quality, though you replace it with temporal artifacts that can be especially problematic in HMDs.

Quote:
Originally Posted by irkuck View Post

One good way of seeing the issues here is to compare the imaging chips in standard digital cameras, which use a Bayer filter, with Sigma cameras using the Foveon chip. Foveon is known for delivering ultrasharp pictures (though it has its own problems).
That's a completely different issue. Bayer sensors work by reducing a monochrome sensor's resolution to create color images. You take four photosensors and split them up into one red, one blue, and two green elements.

This means that a Foveon sensor with 1/4 the image resolution of a Bayer sensor has the same red/blue resolution. So a 6MP Foveon sensor has the same red/blue resolution as a 24MP Bayer sensor, but half the green resolution.
Because Bayer color samples are spread out across the sensor, interpolation will get you more useful information than the raw number of photosensors would suggest. Really, a 6MP Foveon sensor produces images closer to a 12MP demosaiced Bayer image than to a 24MP one most of the time.
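As a quick sanity check of those numbers, here is the per-channel counting argument in a few lines of Python (a sketch of the arithmetic only, not of demosaicing quality):

```python
def bayer_channel_samples(total_mp):
    """RGGB Bayer mosaic: of every 4 photosites, 1 is red,
    2 are green, and 1 is blue."""
    return {"R": total_mp / 4, "G": total_mp / 2, "B": total_mp / 4}

def foveon_channel_samples(total_mp):
    """Foveon stacks R, G and B sensing layers at every photosite."""
    return {"R": total_mp, "G": total_mp, "B": total_mp}

print(bayer_channel_samples(24))  # {'R': 6.0, 'G': 12.0, 'B': 6.0}
print(foveon_channel_samples(6))  # {'R': 6, 'G': 6, 'B': 6}
# As stated above: 6MP Foveon matches 24MP Bayer in red/blue,
# with half the Bayer sensor's green samples.
```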


With displays, it's not like that at all. We don't count the subpixels when determining the resolution of the display. If it were done like cameras, a 720p display using RGB subpixels would really be 3840x720, and a 720p DLP would be 1280x720.
So you don't gain any resolution from eliminating the subpixels on a display; you gain a better fill factor, and lose the ability to use subpixel rendering to increase your resolution.
If you were to replace each subpixel with a full color pixel, that would be a huge increase in resolution, but so far that has not happened, nor is it likely to.
post #125 of 286
Quote:
Originally Posted by Chronoptimist View Post
 
Quote:
Originally Posted by irkuck View Post

One good way of seeing the issues here is by comparing imaging chips in standard digital cameras using Bayer filter with Sigma camera using Foveon chip. Foveon is known from delivering ultrasharp pictures /though it has its own problems/.
That's a completely different issue. Bayer sensors work by reducing a monochrome sensor's resolution to create color images. You take four photosensors and split them up into one red, one blue, and two green elements.

This means that a Foveon sensor with 1/4 the image resolution of a Bayer sensor has the same red/blue resolution. So a 6MP Foveon sensor has the same red/blue resolution as a 24MP Bayer sensor, but half the green resolution.
Because Bayer color samples are spread out across the sensor, interpolation will get you more useful information than the raw number of photosensors would suggest. Really, a 6MP Foveon sensor produces images closer to a 12MP demosaiced Bayer image than to a 24MP one most of the time.

 

In some of the raw modes available on many cameras, you can get independent readings for each of the greens, but generally you still get one pixel comprising both greens, even if the edge of an object only strikes one of them. It's the entirety of the square that matters, not the fact that there's an extra green horizontally or vertically. If only one of the greens is struck, then the pixel shows half-green, no different from what a magical subpixel-free sensor would show when only half covered by the object.

post #126 of 286
Quote:
Originally Posted by Chronoptimist View Post

But you cannot make up for a lack of resolution with sharpness. You still have a low-resolution image.
I agree that eliminating subpixels is better for image quality, though you replace it with temporal artifacts that can be especially problematic in HMDs.

There is more to the perception of resolution than pixel count, so sharpness is part of the perceptual impression of resolution. Whether the temporal artefacts are a problem remains to be seen.
Quote:
Originally Posted by Chronoptimist View Post

That's a completely different issue. Bayer sensors work by reducing a monochrome sensor's resolution to create color images. You take four photosensors and split them up into one red, one blue, and two green elements.
This means that a Foveon sensor with 1/4 the image resolution of a Bayer sensor has the same red/blue resolution. So a 6MP Foveon sensor has the same red/blue resolution as a 24MP Bayer sensor, but half the green resolution.
Because Bayer color samples are spread out across the sensor, interpolation will get you more useful information than the raw number of photosensors would suggest. Really, a 6MP Foveon sensor produces images closer to a 12MP demosaiced Bayer image than to a 24MP one most of the time.

It is not like this. A Bayer sensor has a mosaic of 4 color subpixels:
RG
GB

But they are reused through a moving-window concept, shifting one subpixel at a time across the mosaic:

RGRGRG
GBGBGB
so that at one position a pixel is made of
RG
GB
but at the next position it is made of
GR
BG
Due to this, the equivalent resolution of the 24MP Bayer sensor is 12MP in terms of the perfect RGB pixels a Foveon produces. Bayer interpolation obviously introduces small distortions, but they are normally tiny; they can be seen as a small loss of resolution compared to pure RGB. The exception is very high-resolution cameras, which become limited by optical resolution instead -- for example the new Sony A7r with 36MP and no low-pass filter.
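Here is a minimal sketch of that moving-window idea in Python (my own illustration of the concept, not any camera's actual demosaicing pipeline): every 2x2 window of a Bayer mosaic contains exactly one R, one B and two G photosites, so sliding the window one photosite at a time yields nearly one full RGB pixel per photosite.

```python
import numpy as np

def bayer_pattern(h, w):
    """Channel index of each photosite in an RGGB mosaic:
    0 = R, 1 = G, 2 = B."""
    pat = np.empty((h, w), dtype=int)
    pat[0::2, 0::2] = 0  # R on even rows, even columns
    pat[0::2, 1::2] = 1  # G
    pat[1::2, 0::2] = 1  # G
    pat[1::2, 1::2] = 2  # B on odd rows, odd columns
    return pat

def moving_window_demosaic(raw):
    """Slide a 2x2 window one photosite at a time; each window holds
    one R, one B and two G samples, so each position gives one full
    RGB output pixel (the two greens are averaged)."""
    h, w = raw.shape
    pat = bayer_pattern(h, w)
    out = np.zeros((h - 1, w - 1, 3))
    for y in range(h - 1):
        for x in range(w - 1):
            win, ids = raw[y:y+2, x:x+2], pat[y:y+2, x:x+2]
            for c in range(3):
                out[y, x, c] = win[ids == c].mean()
    return out

raw = np.random.rand(4, 6)                # toy 4x6 mosaic of photosite values
print(moving_window_demosaic(raw).shape)  # (3, 5, 3): ~1 RGB pixel per photosite
```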
post #127 of 286
Just taking a break from the technical discussion, since the future of HMD displays looks brilliant due to the unlimited potential supply of revolutionary content. :D
post #128 of 286
post #129 of 286
Conceptually, the Glyph by Avegant looks cool. With demos scheduled for CES and a Kickstarter productization campaign later in January, this should be real stuff in 2014. The limitation of the present system is the 45-degree viewing angle, but that should be acceptable for a Gen 1 product. At least it falls back to being a pair of regular headphones with a strange headband contraption. :D
post #130 of 286
The Glyph in real life: supposedly improved optics and focusing, so maybe they increased the 45-degree field of view. The second gen of these is going to become the VR gold standard. This first gen still seems to lack lag-free head tracking and is still geared more toward mobile entertainment than gaming. All they need to do is increase the FOV, add lag-free head tracking, and make it smaller, lighter and wireless, and they will own the market.

http://www.engadget.com/2013/12/23/avegant-glyph/
post #131 of 286
^The $499 question is whether faith in the Glyph Gen 1 is hot enough to burn the money on Kickstarter, knowing that without Gen 1 there won't be a Gen 2 Holy Grail. :D
post #132 of 286
Thread Starter 
"With the latest Rift, Oculus has created a device that may usher in an era of truly immersive gaming and entertainment, and even create new opportunities for businesses to use virtual reality in everything from manufacturing to medical environments. Of all the exciting, innovative products we've seen at CES this year, the Oculus Rift "Crystal Cove" prototype is unquestionably the best of the best."

http://www.engadget.com/2014/01/09/the-oculus-rift-crystal-cove-prototype-is-2014s-best-of-c/
http://www.businessinsider.com/oculus-rift-crystal-cove-2014-1
http://www.techradar.com/reviews/gaming/gaming-accessories/oculus-rift-crystal-cove-1123963/review
http://gizmodo.com/i-wore-the-new-oculus-rift-and-i-never-want-to-look-at-1496569598
post #133 of 286
Breakthrough product. Period. Color me a believer.
post #134 of 286
Quote:
Originally Posted by barrelbelly View Post

"With the latest Rift, Oculus has created a device that may usher in an era of truly immersive gaming and entertainment, and even create new opportunities for businesses to use virtual reality in everything from manufacturing to medical environments. Of all the exciting, innovative products we've seen at CES this year, the Oculus Rift "Crystal Cove" prototype is unquestionably the best of the best."

http://www.engadget.com/2014/01/09/the-oculus-rift-crystal-cove-prototype-is-2014s-best-of-c/
http://www.businessinsider.com/oculus-rift-crystal-cove-2014-1
http://www.techradar.com/reviews/gaming/gaming-accessories/oculus-rift-crystal-cove-1123963/review
http://gizmodo.com/i-wore-the-new-oculus-rift-and-i-never-want-to-look-at-1496569598

 

Thing's a moose!

post #135 of 286
Note that the present incarnation of Oculus uses cumbersome motion tracking based on an external camera. This, while precise, limits the freedom of movement. Ideally there would be cameras around the room to track the head.
post #136 of 286
Quote:
Originally Posted by irkuck View Post

Note that the present incarnation of Oculus uses cumbersome motion tracking based on an external camera. This, while precise, limits the freedom of movement. Ideally there would be cameras around the room to track the head.

It's a prototype. They aren't committed to the camera, they are committed to making the motion tracking work...
post #137 of 286
Quote:
Originally Posted by rogo View Post

It's a prototype. They aren't committed to the camera, they are committed to making the motion tracking work...

Indeed. But the use of a camera vs. the previous embedded tracking system indicates there are problems with making it practical.
Motion tracking must be highly precise to avoid bad problems for the users (motion sickness).
post #138 of 286
Quote:
Originally Posted by irkuck View Post

Indeed. But the use of a camera vs. the previous embedded tracking system indicates there are problems with making it practical.
Motion tracking must be highly precise to avoid bad problems for the users (motion sickness).

It's not either/or. The new prototype still has the accelerometer sensors on the unit to determine which way you are looking. The camera improves things by being able to track the head position in space, not unlike a TrackIR.
post #139 of 286
Thread Starter 
Quote:
Originally Posted by tgm1024 View Post

Thing's a moose!

:D It's a prototype...as Rogo said. They're still testing. And I'm sure one of the concepts being tested is acceptable unit size. Personally...I'd buy the "Moose" if it delivered the goods. It's not like I'm going to walk around in public with it on. Even if the thing was like a pair of glasses. On a plane...waiting in an airport...hotel room...boat...car...on the porch...who cares, IMO. I'd gladly wear the Moose for $300-$500. But I suspect none of us will have to. I bet it won't be any bigger than Sony's goggles at launch. But a whole lot better and much cheaper!
post #140 of 286
It's hard to overstate this: Oculus Rift will be breakthrough technology. Even the 1.0 version will make people react with 10x the wonder they did when they first picked up the Wiimote and started playing videogames with their arms (and, sorry gamers, but that was the last time games caused wonder in people). The Rift won't go mass market like the Wii -- at least at first -- because of software, but I suspect the geek/nerd set that buys them will convince most visitors to put one on for a few minutes to experience one of the demos... And those people will walk away in utter disbelief.

I'm not sure if people remember the early Windows Mobile / Nokia smartphones. But compare those to current iPhones/Samsung Galaxy phones and then consider similar development of the Rift from last year through the next 10 years. I think the improvements are likely to be that good. And once people realize they can have something that really resembles a Star Trek experience, they will find this to be a technology they don't want to live without.

Most every virtual reality tech has failed on several dimensions: the home versions were completely awful; the ones where you had to go somewhere were not worth the trip; the "reality" wasn't very real; the costs were too high. The iPhone could never have been built in 2003... It was a function of packaging the right pieces at the right time. Then an app ecosystem grew around it. The Rift is a function of technology that is -- at last -- good enough. It's being developed by people who understand that if the experience is sickening, the product fails. They are already letting third parties take the technology and turn it into whatever the imagination allows.

This is a complete "wow" product. It could still fail. Or it could change the world.
post #141 of 286
Quote:
Originally Posted by irkuck View Post

It thus looks like the Avegant gear may have 720p micromirror chips inside.
How low persistence is this DLP?
I would root for low-persistence OLED, like the one found in the prototype Oculus goggles -- that's the real deal I'd love to see.

Right now, 1ms of persistence creates 1 pixel of motion blurring during 1000 pixels/second panning motion.
This has been so reliably reproduced in my tests of many modern strobe-backlight gaming displays that I now call it the "Blur Busters Law" of persistence.

Wearing 4K VR goggles and turning your head at, say, 5000 pixels/second, you will still get 5 pixels of motion blurring with 1ms of persistence.
Most DLP VR goggles have at least 8ms of persistence, which would create about 40 pixels of motion blurring during fast 5000 pixels/second head turning (slightly over one screen width per second).
That makes it hard to read text (e.g. signs on walls) while turning your head, something you can already do in real life.

See the animation at www.testufo.com/eyetracking -- motion blur caused by persistence -- to understand why this is so important, and why what Oculus has done is rather impressive. As you can see in the animation, pixel transition (GtG) is different from pixel static state (persistence). Pixel transitions are now a tiny fraction of the length of a refresh cycle, which makes persistence the dominant cause of motion blurring nowadays, especially in VR.

You can create lower persistence with more frames, with black gaps added between frames, or with a combination of both (a small sketch follows the list):

-- High refresh rate on a flicker-free display; persistence is equal to 1/Hz
-- Black gaps added between refreshes; persistence is equal to the strobe length
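Rolled into two helper functions, the arithmetic above looks like this (a sketch assuming clean square-wave persistence, using only figures quoted in this thread):

```python
def persistence_ms(refresh_hz, strobe_ms=None):
    """Flicker-free sample-and-hold: persistence = 1/Hz (in ms).
    Strobed: persistence = the visible strobe length."""
    return strobe_ms if strobe_ms is not None else 1000.0 / refresh_hz

def motion_blur_px(persistence, eye_speed_px_per_s):
    """Rule of thumb: 1ms of persistence -> 1 pixel of blur per
    1000 pixels/second of tracked motion."""
    return persistence * eye_speed_px_per_s / 1000.0

print(motion_blur_px(1.0, 1000))                    # 1 px: the baseline case
print(motion_blur_px(8.0, 5000))                    # 40 px: DLP HMD head turn
print(motion_blur_px(persistence_ms(60), 960))      # ~16 px: 60Hz sample-and-hold
print(motion_blur_px(persistence_ms(120, 4), 960))  # ~3.8 px: 120Hz + 4ms strobe
```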

I am quite excited by the prospect of low-latency, low-persistence VR.
Although multiple parties (Oculus, John Carmack, Michael Abrash) have all said that ultrahigh framerates (e.g. 1000fps@1000Hz) would greatly benefit VR by allowing completely flicker-free low persistence (no light modulation, no strobing, minimized wagon-wheel effects, etc.), the current techniques of low persistence are quite reasonable. I am impressed by how far VR has leaped over the last 2 years alone, thanks to rapid work by a large number of parties, especially on low-latency, low-persistence displays.
Edited by Mark Rejhon - 1/15/14 at 2:35pm
post #142 of 286
^DLP has practically no persistence (microseconds range). Persistence is one problem in VR; another one, which Oculus bumped into, is sensitivity to user motions. Tracking must be highly precise to avoid sickness. For this reason they apparently moved to an external camera reading markings on the headset, but this looks problematic for consumer apps.
post #143 of 286
Quote:
Originally Posted by irkuck View Post

^DLP has practically no persistence (microseconds range).
This is false.
Most DLP is currently fairly high persistence, so there is more motion blur on most DLPs than on plasma/CRT displays, unless interpolation or black frame insertion is used. Most DLP projectors currently fail the TestUFO Panning Map Test at 960 pixels/second: you cannot read the street name labels because of the high persistence. Just try it. Try it and weep.

Transition/GtG = pixel switching/movement time
Persistence = pixel static/visibility time.

Persistence is NOT the same as transition / GtG.

Start studying (use a stutter-free browser for these animations to be accurate):
1. Animation of why persistence creates motion blur: www.testufo.com/eyetracking (Educational animation)
2. Animation of how the length of pixel static time correlates to motion blurring: www.testufo.com/blackframes (Educational animation)
3. Animation of various persistence lengths. Shorter pixel static time = less motion blur: www.testufo.com/blackframes#count=3 (Educational animation)

Note: These are software-based animations using the full refresh cycle (8.3ms persistence steps at 120Hz, or 16.7ms persistence steps at 60Hz) to control persistence, so there will be lots of flicker, unlike high-frequency hardware-driven PWM, which can operate at frequencies far beyond the human flicker-detectability threshold.

Yes, DLP pixels can switch in microseconds, but DLP pixels do not "stay still" for just a microsecond.
That is NOT persistence. Persistence is NOT switching time. Persistence is STATIC time.

The only way to create 1 microsecond persistence is essentially the following:
1. Flash a pixel for just one microsecond, once per refresh, any refresh. (this would be a very dark picture due to the million-to-one duty cycle).
2. Fill all timeslots (1 million frames per second at 1 million Hz).
DLPs cannot do that, and virtually no technology currently achieves true 1 microsecond persistence; not even CRTs.

At 100% bright white, on most DLP projectors, a DLP pixel is typically continuously on for the whole refresh cycle (16.7 milliseconds at 60Hz). So you get 16.7ms persistence at 60fps@60Hz when not using interpolation and not using black frame insertion. As you track your eyes on moving objects on a screen, your eyes are in a different position at the beginning of a refresh than at the end of a refresh. That smears the frame across your retinas, creating motion blur, as demonstrated at www.testufo.com/eyetracking .... Mathematically, 1ms of persistence translates to 1 pixel of motion blurring while eye-tracking objects moving at 1000 pixels/second at full framerate=Hz. (This has been reliably true for clean square-wave persistence, such as LightBoost, BenQ Blur Reduction, and G-SYNC's ULMB feature, according to photodiode oscilloscope tests -- these are new low-persistence gaming monitors that have less motion blur than DLP/plasma, and actually slightly less persistence than the Sony FW900 CRT -- finally, LCDs with less motion blur than certain CRTs.) Softer persistence curves, such as phosphor decay on CRT/plasma, fuzz this math up a bit, but the principle is the same: short-persistence CRTs have less motion blur/ghosting than long-persistence CRTs, so the rule still holds.

DLP pixels use PWM, which switches them on/off rapidly. On average, the persistence of a DLP runs from the first visibility of the first PWM flash of a specific refresh to the last visibility of the final PWM flash of that refresh. Typically, this is nearly the full refresh cycle -- e.g. 16.7ms of motion blurring at 60Hz (1/60 sec = 16.7ms). Motion interpolation adds extra refresh cycles and thereby reduces persistence, so this is one common technique: pixels of a specific refresh are visible for shorter times == shorter persistence. Black frame insertion is another technique for reducing persistence: a longer black period and shorter pixel visibility time == shorter persistence. There are numerous DLP projectors that do one or the other, or both (interpolation or black frames).
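To make that concrete, a toy model in Python (hypothetical flash timings, not measurements of any real DLP): the persistence of a refresh spans from the start of its first visible PWM flash to the end of its last one, regardless of how short each individual flash is.

```python
def pwm_persistence_ms(on_segments):
    """on_segments: (start_ms, end_ms) flashes within one refresh cycle.
    Persistence = first flash start to last flash end, not the sum
    of the (potentially microsecond-scale) flash durations."""
    if not on_segments:
        return 0.0
    return max(e for _, e in on_segments) - min(s for s, _ in on_segments)

# Toy 60Hz refresh (16.7ms) of bright white: flashes spread across
# nearly the whole refresh, so persistence is ~15ms even though each
# mirror flip takes only microseconds.
flashes = [(i * 2.0, i * 2.0 + 1.0) for i in range(8)]
print(pwm_persistence_ms(flashes))  # 15.0
```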

However, DLP projectors *can* be made lower persistence

There are some good DLPs with fairly low persistence (<4ms), for example DLP with large-ratio black frame insertion. It does reduce the temporal time available for the DLP to generate color. Some PC gamers found out that on some DLPs, such as the Optoma GT720, using true 120Hz (not stereoscopic 3D) combined with black frame insertion reduces persistence. On this model (GT720), enabling 3D mode even for non-stereoscopic 2D gaming is another method of black frame insertion, since 3D modes on DLPs add a black period between refreshes as a guard delay for the shutter glasses to open/close -- this is effectively black frame insertion and can be reused for motion blur reduction, since it reduces persistence. This creates only 4ms of persistence, and it just barely passes the TestUFO Panning Map Test (readable map labels at 960 pixels/second). Most home theater DLP projectors, even $5000 models, are currently unable to run a Game Mode / Computer Mode with low persistence, but this is gradually changing.

As long as there is an acceptable tradeoff between persistence and color depth (lower persistence = less temporal time to generate colors), DLPs are very capable of low persistence. There is a 500Hz true-refresh-rate scientific DLP projector available from places like VPixx Technologies Inc., and I see nothing stopping virtual reality vendors from using DLP chips as a method of creating low persistence. However, it's extremely difficult for DLP to achieve 1 millisecond persistence while maintaining the full 8-bit color depth. (Microseconds? Dream on. That's switch time, not persistence.)

(Owner of BlurBusters -- considered an authoritative resource among PC gamers wanting "better than 60Hz" displays.)
Edited by Mark Rejhon - 1/22/14 at 11:44am
post #144 of 286

Ok, wait.  Most of us understand what's happening, but I would like a clarification of the terms here.

 

(And I see that you edited your reply above to be more complete).

 

In your (great!) thread "LCD motion blur: Eye-tracking now dominant cause of motion blur (not pixel persistence)", you say this. (Note: the bold is yours, not mine; I'm not trying to accent anything.)

 

Quote:
On very new LCD's, most pixel persistence is now only a tiny fraction of a refresh. Motion blur on LCD displays occurs due to several factors, including pixel persistence and eye tracking. Recently, with today’s faster LCD’s, pixel persistence now only has a minor factor in motion blur.

 

By this, I had thought you were making "pixel persistence" synonymous with "response time". Otherwise, why would very new LCDs have pixel persistence that is a tiny fraction of a refresh? If pixel persistence is the amount of time that the pixel is held in any one state, that's free to range from the response time up to the length of the frame. No?

post #145 of 286

Well, we knew THIS wasn't far behind in the Oculus Rift channel. They're reporting on a sex game available soon for the OR, from a new startup called "Sinful Robot".

 

BTW: These two "web reporters" are a complete riot!

 

 

 

post #146 of 286
Quote:
Originally Posted by tgm1024 View Post

Ok, wait.  Most of us understand what's happening, but I would like a clarification of the terms here.
More than one year ago, I had confused some terminology, mixing up GtG with persistence.
However, today, the terms have been greatly clarified.
That post is pre-correction: although I understood what was going on at the time, I used the wrong terminology.

An old motion blur testing application, PixPerAn (written in the era before good backlight strobing), perpetuated the confusion, especially since persistence was roughly equal to GtG for many years -- until GtG became far less than a refresh cycle, at which point motion blur was bottlenecked by the frame length even as GtG kept falling to sub-frame lengths (meaning little difference between 1ms, 2ms and 5ms LCDs). The divergence of persistence and GtG makes it necessary to clearly standardize the separate factors industry-wide, and the industry has finally settled on the word persistence.

When I wrote that at the time, "persistence" originally meant "pixel GtG".
For a long time, a lot of us debated the terminology (including you and me): what is persistence? What is GtG? Etc. We finally settled on not using "persistence" to describe GtG, and lately a lot of us (Oculus, myself, John Carmack, Valve Software) have converged on the correct interpretation of the word. As you may remember, several months ago I agreed to correct my terminology to be more in line with the industry standard, and you agreed with me at the time, too.

To prevent further damage and confusion, I'm going to ask a site admin to edit the topic title to what I actually meant at the time.
post #147 of 286
Quote:
Originally Posted by Mark Rejhon View Post
 
Quote:
Originally Posted by tgm1024 View Post

Ok, wait.  Most of us understand what's happening, but I would like a clarification of the terms here.
More than one year ago, I had confused some terminology, mixing up GtG with persistence.
However, today, the terms have been greatly clarified. That post is pre-correction: although I understood what was going on at the time, I used the wrong terminology.

When I wrote that at the time, "persistence" originally meant "pixel GtG".
For a long time, a lot of us debated the terminology (including you and me): what is persistence? What is GtG? Etc. We finally settled on not using "persistence" to describe GtG, and lately a lot of us (Oculus, myself, John Carmack, Valve Software) have converged on the correct interpretation of the word. As you may remember, several months ago I agreed to correct my terminology to be more in line with the industry standard, and you agreed with me at the time, too.

To prevent further damage and confusion, I'm going to ask a site admin to edit the topic title to what I actually meant at the time.

 

Ok. Yeah, we had a PM conversation where you and I tried to ferret out what was going on, and my confusion at the time was because of terminology, but a resolution to it wasn't clear to me at the time. I didn't think you were on board with any changes either. We haven't spoken on that topic since, I don't think.

 

In any case, asked and answered.

post #148 of 286
Quote:
Originally Posted by tgm1024 View Post

I didn't think you were on board with any changes either.
No, that part was a misunderstanding. Didn't you see my edits removing the word "persistence" almost immediately after, around that time? (However, I am unable to edit the topic title.)

Either way, I've long since unified to the same terminology, including in all my Blur Busters Forum posts and subsequent articles (including my G-SYNC input lag tests published at Blur Busters, which will now be published in an upcoming issue of PC Games Hardware magazine).
post #149 of 286
Quote:
Quote:
Originally Posted by Mark Rejhon View Post
 
Quote:
Originally Posted by tgm1024 View Post

I didn't think you were on board with any changes either.

No, that part was a misunderstanding. Didn't you see my edits removing the word "persistence" almost immediately after, around that time? (However, I am unable to edit the topic title.)

 

Either way, I've long since unified to the same terminology, including in all my Blur Busters Forum posts and subsequent articles (including my G-SYNC input lag tests published at Blur Busters, which will now be published in an upcoming issue of PC Games Hardware magazine).

No harm done, except maybe a bit of confusion more than a year ago, thanks to the outdated PixPerAn tool.

 

No, no harm done.  I didn't think to go looking for any changes---I don't recall you announcing that.  Yes, have a moderator change that.  Are you going to change the part that I quoted?

post #150 of 286
Quote:
Originally Posted by tgm1024 View Post

Are you going to change the part that I quoted?
The edits I made are on the Blur Busters site, not in that old 2012 forum thread. (Perhaps that is why you hadn't noticed the edits made shortly after our PMs.) However, I didn't edit the thread because it's not fully editable by me: the word "persistence" would need to disappear from all 5 pages of the whole thread, including other people's posts that quoted parts of mine. All pages of the whole thread need a simple s/persistence/transition/ -- then the whole thread is suddenly correct. But I can't edit the people who quoted my text in 2012. There are over 100 occurrences of the word "persistence" that would have to go from that old thread, so editing only that single sentence would not make sense at all.

For many months now, I myself, Blur Busters, John Carmack, Oculus, Valve Software, and many other parties have all unanimously agreed that "pixel persistence" has a totally different meaning from "pixel transition/GtG":

persistence == sample and hold == pixel visibility/static time

transition == GtG == pixel switching time == pixel transition time == pixel movement time

But in 2012, I was using "pixel persistence" (2012) in place of "pixel transition" (today) -- an incorrect equivalence -- and was using "sample-and-hold" (2012) in place of "persistence" (today) -- a still largely correct equivalence. I wrote that 2012 thread on that basis. So in that case, search-and-replacing "pixel persistence" with either "pixel transition" or "pixel GtG" makes the whole 2012 thread correct again.

That said, I think I'm thread-hijacking now, so I'll transfer further correspondence to PM. :)
Edited by Mark Rejhon - 1/22/14 at 12:56pm