HMD theory, for something like the Oculus VR
post #1 of 5, 08-14-2013, 09:25 PM, forsureman (Thread Starter)
This is a theory on how a head-mounted display can show stereoscopic images and a large field of view.

Starting with the stereoscopy first.
The image being shown is seen by the two eyes from two different horizontal perspectives.

If you hold a pencil vertically in front of you, your eyes converge (go slightly cross-eyed) as you focus on it.
Then move the pencil to the left or right until only one eye can see it.
The pencil is now not visible to both eyes, but it is still visible in your field of view.
So when only the field of view sees it, and not both eyes, the image stops being stereoscopic.

So there is a part of the display or image shown to the eyes that is supposed to be stereoscopic,
and a part that belongs only to the field of view and is not stereoscopic.

As you may or may not know, the Oculus VR uses only the one stereoscopic position, focused at infinity.
But true stereoscopic images have positive, negative, and zero parallax; three types of stereoscopic images.
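
Here is a rough sketch of those three cases, under a simple pinhole-style geometry I'm assuming here (nothing from the actual Oculus software): for eyes an IPD apart fixating a point on the midline, the signed on-screen separation of the two sight lines works out to ipd * (1 - screen_z / point_z).

Code:
# Sketch (Python), assuming simple midline geometry, not any real SDK.
# Signed on-screen parallax of a point: positive = behind the screen
# plane, zero = on it, negative = in front of it.

def screen_parallax(point_z, screen_z, ipd=0.063):
    return ipd * (1.0 - screen_z / point_z)

for z in (0.5, 2.0, 10.0):
    p = screen_parallax(z, screen_z=2.0)
    kind = "negative" if p < 0 else "zero" if p == 0 else "positive"
    print(f"point at {z} m -> parallax {p * 1000:+.1f} mm ({kind})")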

My solution is to wear shutter glasses over the eyes, with the LCD screen mounted beyond the shutter glasses.
Then when the left eye's shutter closes, the display shows one image,
and then the right eye's shutter closes, the left eye's shutter opens, and the display shows a new picture.
The pictures shown to the eyes are true stereoscopic pairs, not just pictures focused at infinity.
Part of the stereoscopic picture each eye sees is fused in the brain, so it is stereoscopic;
part of the picture belongs only to the field of view.
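
A minimal sketch of the frame-sequential timing this implies. open_shutter and show_frame below are hypothetical stand-ins for whatever the glasses' sync emitter and display driver actually expose; the alternation itself is the point.

Code:
import time

FRAME_PERIOD = 1.0 / 120.0  # 120 Hz display, so 60 Hz per eye

def open_shutter(eye):
    # Hypothetical stand-in for the glasses' sync emitter.
    print("shutter open:", eye)

def show_frame(image):
    # Hypothetical stand-in for the display driver.
    print("showing:", image)

def run_stereo_loop(frames):
    """frames: iterable of (left_image, right_image) stereo pairs."""
    for left_image, right_image in frames:
        for eye, image in (("left", left_image), ("right", right_image)):
            open_shutter(eye)         # the other eye's shutter stays closed
            show_frame(image)
            time.sleep(FRAME_PERIOD)  # hold for one refresh interval

run_stereo_loop([("L0", "R0"), ("L1", "R1")])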

Because the glasses are worn on the face,
I suspect a custom type of shutter glasses would be necessary, one that leaves the peripheral field of view un-occluded and visible.
So the glasses would look like they belong to a clown.

And since the glasses extend about an inch past the eyebrow, a large monitor would be necessary,
like an iPad mini screen.

I'll draw up some pictures, but I warn you, I don't draw very well.



With the HMD now able to show stereoscopic 3D images, there can be eye tracking.
Eye tracking is so that wherever the eye looks in virtual reality, it sees that spot in stereoscopic 3D; both far and close distances are in 3D. Things you see can move through the three different types of stereoscopic parallax in virtual reality.

I won't build this myself; I would need to be a hardware genius to rig up 3D shutter glasses into something like the Oculus, which uses an iPad-mini-sized screen, and that's not even mentioning their odd use of glass.
If off-the-shelf 3D TV shutter glasses are used, which for initial testing purposes they will have to be, then the field of view will be reduced.
post #2 of 5, 08-19-2013, 02:03 AM, forsureman (Thread Starter)
I looked at the Headplay video where the person takes it apart, and that is a very simple display.

I think the Oculus VR takes a cell phone display and uses LEEP optics to enlarge each half of the screen for one eye.

But this means each eye needs to see the half of the display its LEEP optics show, which means the eyes are always looking straight ahead.
Going cross-eyed to see close-up parallax isn't done in the Oculus VR if the eyes are always focused at infinity.

I said that to show that each eye needs to look into the other eye's viewing space when the eyes go cross-eyed to see in 3D.
So the display space is shared between the two eyes.

Now with this in mind: if the display is seen by both eyes, you have to show the image on the display in stereoscopic form to each eye.
So the eyes wear some sort of glasses that let each eye see the display differently from the other eye.

Then, as the eye sees the display, the part of the picture not in 3D is the field of view that makes the picture seem more immersive. For this, the 3D glasses need to be deformed from the regular glasses shape, so the sides of the glasses are stretched outward by a few inches to maybe half a foot.

So the glasses let the eyes see the display in 3D, and the glasses allow a large field of view. The logical conclusion from these requirements is that the display sits in front of the 3D glasses and is large enough to fill both the stereoscopic space and the field-of-view space the eyes can see.
A display maybe the size of the iPad mini's screen.
The display must be at least 120 Hz and able to show a 3D image to 3D glasses.
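
For a feel of the sizes involved, here is a quick sketch of the horizontal field of view a flat screen subtends at a given eye relief, with no magnifying optics. The 160 mm width is my rough figure for an iPad mini's screen width, an assumption rather than an official spec.

Code:
import math

# Horizontal FOV of a flat screen viewed straight on, no optics.
def horizontal_fov_deg(screen_width_mm, eye_relief_mm):
    return math.degrees(2 * math.atan((screen_width_mm / 2) / eye_relief_mm))

for relief_mm in (40, 60, 80):
    fov = horizontal_fov_deg(160, relief_mm)
    print(f"{relief_mm} mm eye relief -> about {fov:.0f} degrees")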

So if a smaller display could be used for this, maybe the LEEP optics the Oculus VR uses could be used here as well? I don't know.
post #3 of 5, 08-29-2013, 11:40 AM, forsureman (Thread Starter)
To vary the distance at which an object is perceived, I would use the parallax of the stereoscopic images.

First, make binoculars with your fingers and look through the holes your fingers form.
Then look at your keyboard in front of you, or some other object you can see with both eyes.
The object you see with both eyes is in 3D, and that 3D has a parallax value.
The software uses the pupils' distance from each other to decide which 3D images you can see, and only shows you those in 3D.

Now, if you increase or decrease the spacing of the holes between the eyes, you can isolate the thing you're looking at. I mean, if you're looking at your keyboard and you widen the spacing of the holes your eyes look through, you can see only one key on the keyboard in 3D, stereoscopically; but look up and into the distance and you can see a lot in 3D, not just one thing.
But if you look up from the keyboard into the distance AND widen the spacing of the holes, then as you look into the distance you will again see only one thing in stereoscopic 3D.

So there is a relationship between viewing distance and hole spacing: looking at what's closer to you, the holes are closer together; looking at what's farther from you, the holes are farther apart. Spacing the holes narrows what you can see down to a single object.
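
That relationship falls out of simple geometry, sketched below under assumptions of my own (midline fixation, holes all held in one plane): the two lines of sight pass through an aperture plane at distance d with spacing ipd * (1 - d / z), so nearer fixation means narrower holes.

Code:
# Sketch: spacing of the two lines of sight at an aperture plane held
# 100 mm from the eyes, for a 63 mm IPD, while fixating at distance z.

def hole_spacing_mm(fixation_z_mm, aperture_d_mm=100.0, ipd_mm=63.0):
    return ipd_mm * (1.0 - aperture_d_mm / fixation_z_mm)

for z in (300, 600, 2000, 10000):  # keyboard ... across the room
    print(f"fixate at {z} mm -> holes {hole_spacing_mm(z):.1f} mm apart")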

If this hole spacing is done in software and not with real holes, the software uses the holes to decide what image to show to the eyes.
So what if the hardware and software worked together, so the software knows: for the eyes to see an image in 3D at this distance, the holes need to be this far apart.

So the image is shown to the eyes, and the hardware makes the holes small and spaced so the eyes can only see that image; this is recorded in software. Then, for every distance the eyes can see, the hardware spaces the holes just far enough apart that the image is visible in 3D. Now the software can render the image in 3D.

You see the image at a given distance, and the software makes the parallax just large enough that you see the object in 3D.

The 3D is shown on the screen, the 3D glasses separate the screen's views between the eyes, and the software decides that the eyes can see the image using the right parallax value.
If the sides of the glasses are stretched out, the field of view is increased without occlusion from the sides of the 3D glasses.

The glasses would have some sort of blinders available that let you find the spacing of the eyes as they see an image on the screen, so the correct parallax can be found. These would be fancy 3D glasses, or some tech layered on top of the glasses, that gives the blinder effect that acts to space the holes the eyes see through.
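
Inverting the same geometry gives the lookup this calibration would store: from a measured blinder spacing back to a fixation distance, so the software can pick the matching parallax. Again, this is my own sketch of the idea, using the simple model from above.

Code:
# Sketch: map a measured blinder spacing back to a fixation distance
# (inverse of hole_spacing_mm above), and store it per spacing.

def fixation_distance_mm(spacing_mm, aperture_d_mm=100.0, ipd_mm=63.0):
    return aperture_d_mm / (1.0 - spacing_mm / ipd_mm)

# Calibration table the software could keep: spacing -> distance.
calibration = {s: fixation_distance_mm(s) for s in (42.0, 52.5, 59.9)}
print(calibration)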
post #4 of 5, 08-30-2013, 02:34 AM, forsureman (Thread Starter)
I will reiterate what I just said and add how to track the eyes.

The parallax of the image can be understood by this analogy.
A pencil is held in front of your face, between you and a TV, and what's being shown on the TV is a horizontal number line, so the eyes see the number line on the TV as the pencil's background.

When the two eyes focus on the pencil, the number line appears doubled; there are two of the number zero, for instance.
But when one eye is closed and looks at the pencil, the number line is not doubled; there is only one zero on the number line.

When one eye looks at the pencil, the background of the pencil is a number on the number line, not zero. The closer the pencil is to the TV, the smaller the number on the number line will be when one eye looks at the pencil; the farther the pencil is from the TV, the larger the number on the number line the pencil sits over when one eye is closed.

Now, as the pencil's distance from the TV changes, it is viewed by the eyes, and the eyes' pupils sit at a certain distance from each other.
When the eyes see the pencil over a number on the number line, it's because an eye at that particular spacing is looking at the pencil.
If the eyes are not that far apart, then as they look at the pencil they see it over a different part of the number line!

So there are two measurements that need to be done for 3D (see the sketch after this list):
- the distance the pencil is from the TV, so that as each single left and right eye sees it, it sits over a known part of the number line.
- the 3D parallax the TV is trying to show, so that the pencil the eye sees over a number on the number line actually meets that eye as the picture is shown.
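
A sketch of the first measurement, under my own simplifying assumption that the pencil sits on the midline between the eyes: each eye's line of sight continues past the pencil to the TV plane, and where it lands is the number that eye sees behind the pencil.

Code:
# Sketch: where one eye's sight line through the pencil lands on the TV.
# Eye at x = +/- ipd/2, pencil on the midline at depth pencil_z, TV at tv_z.

def background_number_mm(eye_x_mm, pencil_z_mm, tv_z_mm=500.0):
    return eye_x_mm * (1.0 - tv_z_mm / pencil_z_mm)

ipd_mm = 63.0
for pencil_z_mm in (300.0, 450.0):  # far from the TV, then close to it
    left = background_number_mm(-ipd_mm / 2, pencil_z_mm)
    right = background_number_mm(+ipd_mm / 2, pencil_z_mm)
    print(f"pencil at {pencil_z_mm:.0f} mm -> left eye {left:+.1f} mm, right eye {right:+.1f} mm")

The run shows the claim above: the closer the pencil is to the TV, the smaller the number behind it, and the two eyes see it over numbers on opposite sides of zero.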

So consider a video of a ball being thrown and caught in VR: the image of the ball flying to the glove and finally being caught, and one frame of video at the moment the ball is caught. That frame of video has a parallax value for the ball, and this must meet the pupils of the two eyes. And the ability to meet the two pupils is based on how far apart the eyes are from each other.
So one person's eyes are 70 mm apart, and they see the ball get caught, so the parallax must be made so the image shown meets eyes 70 mm apart; another person's eyes are 63 mm apart, so the image of the instant the ball is caught must be shown to meet eyes spaced 63 mm apart.

And that is how there are two types of parallax: one the TV or display shows, based on a generic pupil distance, and one that meets the pupils of the individual viewer when it is shown from the TV or display.
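
In the simple geometry used above, converting between those two is a single multiplication; a sketch, assuming content authored for a generic 63 mm pupil distance:

Code:
# Sketch: rescale authored on-screen parallax to the viewer's own IPD.
# In the model above, parallax = ipd * (1 - screen_z / point_z), so the
# same perceived depth needs parallax scaled by viewer_ipd / authored_ipd.

def rescale_parallax(parallax_mm, viewer_ipd_mm, authored_ipd_mm=63.0):
    return parallax_mm * (viewer_ipd_mm / authored_ipd_mm)

print(rescale_parallax(10.0, 70.0))  # 10 mm authored -> ~11.1 mm at 70 mm IPD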

To get this distance between the pupils, you show a video, and on the glasses you have vertical blinders that block part of the image.
Then you space the blinders so they find the pupil distance for each distance the eyes can fixate.
With the blinders on, to see the image being shown, the blinders have to be moved toward the bridge of the nose.
This shows that when the blinders are this far from the bridge of the nose, the object shown on the screen is visible; and so all the parallax values can be found and stored in software for later use, because the blinder positions for one parallax are not the same as for a different parallax.

So that is part one, showing the image; now for the eye tracking.

I think if there is a dot on the bridge of the nose, where the skin doesn't move as the face changes shape, this can be the static position that gives a reference for the moving positions.

And the moving positions of the eyes come from colored contact lenses.
A camera can see the static dot on the bridge of the nose and the colored contact lens on each eye, and pass this to software that can find the difference between the still and moving colored parts.

Then, as the blinders are used to find the eyes' spacing while they view the image (so the image can be shown such that the eyes see it in 3D), the camera sends the position of each contact lens to software, which uses both eyes' positions relative to the still colored dot on the bridge of the nose.

Then, as the eyes look around the video, they look at one part of the picture, and as they do, they change position relative to the still colored dot, and this reveals the parallax of the eyes.
So the eyes look at this part of the picture with this separation, and the software shows the eyes a picture with the correct parallax value, so that what they are looking at is in 3D.

Now, how do you find what part of the picture they are looking at?
I mean, you don't know what the eyes are looking at just by measuring the distance the eyes are from each other.

So the video shows a grid with a light for each cell, and the eyes look at whichever cell lights up. Looking at that lit cell puts the contact lenses a particular distance from the still colored dot, and that offset is the stored value for what the eyes are looking at.
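
A sketch of that calibrate-then-look-up loop. The marker detection itself (the camera finding the nose dot and the lens colors) is assumed away behind measure_offsets, which is a hypothetical stand-in.

Code:
import math

def calibrate(grid_cells, measure_offsets):
    """Light up each grid cell in turn; record the (x, y) offsets of the
    left and right lens markers from the nose dot while the eyes fixate it."""
    table = {}
    for cell in grid_cells:
        # ...light up `cell` on screen and wait for fixation here...
        table[cell] = measure_offsets()
    return table

def lookup_gaze(table, current):
    """Return the calibrated cell whose recorded offsets best match `current`."""
    def mismatch(recorded):
        return sum(math.dist(p, q) for p, q in zip(recorded, current))
    return min(table, key=lambda cell: mismatch(table[cell]))

# Toy run with fake offsets standing in for camera measurements:
fake = iter([((1, 0), (-1, 0)), ((3, 1), (1, 1))])
table = calibrate([(0, 0), (1, 0)], lambda: next(fake))
print(lookup_gaze(table, ((3, 1), (1, 1))))  # -> (1, 0)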
post #5 of 5, 09-02-2013, 02:41 PM, forsureman (Thread Starter)
In other words,
if you send a video of the ball being thrown and caught,
the parallax is made for one particular distance between the pupils.
And so, in the 3D that is seen, the object has one particular size.

But then you need to adjust for different pupil distances, which changes the parallax, and so the image looks smaller or bigger.

So the smaller person sees a differently sized world than a bigger person.
If you sit a large person and a small person down and put a round object, say an orange, in front of them at a fixed distance from their faces,
the bigger person will see a different sized orange than the smaller person.

Once this can be corrected in software, so that the bigger person and the smaller person each see the image at the right size,
you can worry about parallax perception, which is based on the distance the pupils are from each other, and about eye tracking: the eyes look at this part of the screen with this pupil arrangement, meaning they are looking at this parallax.
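
A quick check of that size effect in the same simple model from earlier: with a fixed on-screen parallax, the perceived depth of a point differs between a 63 mm and a 70 mm IPD, and that difference is what the software would correct.

Code:
# Sketch: perceived depth of a point shown with fixed on-screen parallax,
# for two IPDs. From parallax = ipd * (1 - screen_z / z), solve for z.

def perceived_depth_mm(parallax_mm, ipd_mm, screen_z_mm=500.0):
    return screen_z_mm * ipd_mm / (ipd_mm - parallax_mm)

for ipd_mm in (63.0, 70.0):
    z = perceived_depth_mm(10.0, ipd_mm)
    print(f"IPD {ipd_mm} mm -> point perceived at {z:.0f} mm")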

So first things first.