holographic display idea - AVS Forum
post #1 of 8 Old 06-25-2013, 04:44 PM - Thread Starter
Member
 
forsureman's Avatar
 
Join Date: Sep 2012
Posts: 107
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 11


http://www.youtube.com/watch?v=Oa7QF-ItjZA

http://www.youtube.com/watch?v=aTctta2OMRc

The above picture and two YouTube videos show the basic design for the holographic TV.

The display and the eye form a line; the two points that make the line are the eye point and the display pixel point.

The pixel point sends light along that line, straight to the eye point.

In real life we see objects because light bounces off of them, and that redirected light then reaches the eye, which perceives the object.

So at the display's eye points, the display acts as though light bounced off an object onto the display, and then from the display onto the eye, so the eye can see the display.

In real life, though, when a person walks around an object they see its other sides, because light strikes the object from 360 degrees around it; as the person moves, light bounces off the object at each new point they look at.
But in a normal display, when the person moves around, a diffuser sends the same 2D image to every point the display sends light to, so there is no 3D and no hologram: the eye receives the same light no matter where it views the display from.

But if the display tracked the person's eye as it moved, it could send the eye a different reflection of the displayed object from each position; that is 3D and a hologram.

So the principle of the 3D I am describing is this: the display sends light to the eye at one xy position, and when the eye's xy position changes, the display sends light showing the object from a different viewpoint.

How does the display send light to the eye based on the eye's position, so that the eye receives light from the object as seen from its new viewpoint?
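The viewpoint-dependent behaviour described above can be sketched in code: given the eye's horizontal position in front of the display, pick which pre-rendered view of the object to show. This is only a rough Python sketch; the function name and the numbers are illustrative assumptions, not anything from the post.

```python
def select_view(eye_x, zone_width, num_views):
    """Map the eye's horizontal position (in pixels across the
    viewing zone) to one of num_views pre-rendered viewpoints."""
    # Clamp the eye position to the valid viewing zone.
    eye_x = max(0, min(eye_x, zone_width - 1))
    # Divide the viewing zone into num_views equal slots.
    return int(eye_x * num_views / zone_width)

# As the eye moves left to right across the zone, the display
# switches views, so each position sees a different angle.
views = [select_view(x, zone_width=1000, num_views=8) for x in (0, 499, 999)]
```

The key point of the sketch is simply that the shown image is a function of the tracked eye position, which is what distinguishes this from an ordinary diffused display.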

...
forsureman is offline  
post #2 of 8 Old 06-25-2013, 05:01 PM - Thread Starter
... How the display decides to send light to the eye based on the eye's position relative to the display is a technology I have thought out.

I will detail that technology below.

First off, this was designed for the Oculus VR, but I think it's applicable to 3D TV too, which is why I am making this thread. Substitute the 3D glasses for the Oculus VR headset and they are compatible, so both VR and 3D TV can use the technology I will describe below.

The tracking of the eyes needs to be precise enough that the change in the eyes' position creates the visual impression of a 3D image on the screen the eyes are looking at.

Hold your index finger in front of your face, about a foot or two from your eyes, then hold the finger still and move your head around it so you see the finger from different viewpoints.
This is the accuracy the tracking system needs.

So the display has "a camera called camera2", "a display for that camera to look at called monitor2", "a tilting mirror", "a stationary mirror", "a green laser", and "a program that redirects the tilting mirror so the laser shines onto a point on the display".

The person looking at the TV sees "monitor1", and as they do, "camera1" captures the person's eyes.

camera1 sends the picture of the eyes to monitor2.
camera2 sees the person on monitor2.
The stationary laser shines onto the stationary tilted mirror, which reflects the light onto the dynamically tilting mirror.
From the dynamically tilting mirror the laser light is reflected onto monitor2, onto the face of the person shown there.
The program tilts the dynamic mirror to redirect the laser onto a chosen point on the face on monitor2.
Once the program has set the laser onto that point on the face, the laser will stay on it even as the person's face moves in xy, because the program keeps redirecting the laser onto that exact point.
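The "keep redirecting the laser onto that exact point" step is essentially a feedback loop: compare where the laser spot is on monitor2 to where the tracked face point is, and tilt the mirror a fraction of the way toward closing the gap. A minimal Python sketch; the gain value and the assumption that the spot lands exactly where the mirror points are simplifications for illustration.

```python
def steer_mirror(mirror_xy, spot_xy, target_xy, gain=0.5):
    """One control step: tilt the mirror so the laser spot on
    monitor2 moves toward the tracked face point."""
    mx, my = mirror_xy
    ex = target_xy[0] - spot_xy[0]   # horizontal error on monitor2
    ey = target_xy[1] - spot_xy[1]   # vertical error on monitor2
    # Proportional correction: move a fraction of the error per step.
    return (mx + gain * ex, my + gain * ey)

# Assume (for this sketch) the spot lands where the mirror points.
mirror = (0.0, 0.0)
for _ in range(20):
    mirror = steer_mirror(mirror, spot_xy=mirror, target_xy=(100.0, 40.0))
```

With each step the remaining error halves, so the spot settles onto the tracked point after a handful of iterations; a real mirror would need the gain tuned to its mechanics.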

Now the person is tracked in xy, so as they move around the image on the screen, each eye position receives its own light from the display: one xy position sees one viewpoint of the image, and a different xy position sees the displayed object from a different angle, which gives a 3D holographic image.

And that's the basic idea.
post #3 of 8 Old 06-25-2013, 05:26 PM - Thread Starter
The TV would show a source that has a stereoscopic image for every possible point the eyes could view the image from, selected by the xy position from which the eyes are looking at the TV.

And if this is for VR, then the yaw of the head (the turn as the person looks left or right) could be seen by camera1 if the person wears some identifiable lights on their head.

For example, a piece of paper with LEDs at its four corner tips, held flat on top of the person's head.
Now when the person turns left or right, camera1 sees the LEDs and sends this image on to camera2, and the program maps the LED positions to the eyes' xy coordinates so the two stay in correspondence.

Now if the person turns so the eyes aren't visible to camera1, what is visible is the LEDs on top of the head, and since the LEDs correspond to the eyes' xy coordinates, the program can use the LEDs to estimate the eyes' xy coordinates.
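The "LEDs stand in for the eyes" idea amounts to calibrating a fixed offset between the head-mounted LED marker and the eyes while both are visible, then applying that offset when the eyes are hidden. A hedged Python sketch; the coordinates and the rigid-offset assumption are illustrative only.

```python
def calibrate_offset(led_xy, eye_xy):
    """While both are visible to camera1, record the fixed offset
    from the head-mounted LED marker to the eyes."""
    return (eye_xy[0] - led_xy[0], eye_xy[1] - led_xy[1])

def estimate_eyes(led_xy, offset):
    """When the eyes are not visible, guess their xy position from
    the LED marker plus the calibrated offset."""
    return (led_xy[0] + offset[0], led_xy[1] + offset[1])

offset = calibrate_offset(led_xy=(320, 100), eye_xy=(320, 180))
guess = estimate_eyes(led_xy=(350, 110), offset=offset)  # eyes hidden now
```

This only holds while the head doesn't tilt much; a fuller version would use all four LED corners to recover yaw as well, as the post suggests.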

Now that you have xy and yaw, you can add hand tracking, and it works inside the same coordinates the head and eyes use.
The hands use the same paper-and-LED trick: each hand goes through the plane of a paper, so there are two LED papers around the wrists, one per hand.
The paper acts as a wrist cuff.

The hands are seen by camera1, and camera1 sees the LEDs surrounding the two hands.
Camera2 sees the two hands' LEDs, and the program creates a virtual box around each hand's paper LEDs; this acts as the tracking system.
Each virtual box has xy and yaw coordinates; when the person moves a hand, the program slides the box over the LED cuff so the cuff keeps its initial position inside the box.
The virtual boxes' coordinates exist inside the head-tracking coordinates, and when a box moves it changes its position relative to the eyes.
This way the person can hold their hands out to their sides and touch their nose in VR and not miss.
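The sliding virtual box can be sketched as a bounding box that recentres itself on the LED cuff every frame, so the cuff always sits at the same place inside the box. A rough Python sketch; the box representation is an assumption made for illustration.

```python
def update_box(box, leds):
    """Slide the virtual box so the LED cuff keeps its initial
    position (the centre) inside the box.  box is (cx, cy, w, h);
    leds is a list of (x, y) LED points seen on monitor2."""
    cx = sum(x for x, _ in leds) / len(leds)
    cy = sum(y for _, y in leds) / len(leds)
    # Keep the box's size; move its centre onto the cuff's centroid.
    return (cx, cy, box[2], box[3])

# Four LEDs on a wrist cuff centred at x=10; the box slides to follow.
box = (0.0, 0.0, 50.0, 50.0)
box = update_box(box, [(5, -5), (15, -5), (5, 5), (15, 5)])
```

The box's centre then lives in the same coordinate system as the head tracking, which is what lets the hand position be expressed relative to the eyes.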

For haptic touch, the hands have gloves, and the way the hand moves (up, down, left, right) determines how the fingers move.
So if the fingers have an initial position, the program can use the hand's coordinates to plot the changes in the fingers' coordinates; that way, if the VR world has items to touch, the person can touch them as they see them and feel them through the haptic gloves.

So as far as I can see, without this kind of holography, VR positioning and haptic touch are not possible. I'll admit maybe they are, and maybe some genius can figure out how, but I don't see it.
post #4 of 8 Old 06-26-2013, 09:09 AM
AVS Special Member
 
barrelbelly's Avatar
 
Join Date: Nov 2007
Posts: 1,695
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 62 Post(s)
Liked: 231
Thanks for starting this thread Forsureman, because I really do think it deserves a thorough discussion in this forum on its own merit; it just didn't belong in the Oculus Rift/Flat Panel tech forum. I won't dive into the science of your idea right now except to say: the big hurdles IMO for you to overcome will be latency and image delays. True holodeck realism is going to require an enormous level of computing horsepower. This thread will allow you and others to hammer out all aspects of its potential. I'll start it up for you.
  1. I understand what you are driving toward, but the way you describe it is very confusing. It is much easier to grasp when you support it with visuals like the YouTube/Wii video. In reality, though, you are suggesting an application that may be limited in scope IMO. I can certainly see commercial applications: revolutionary new kinds of family entertainment centers, arcades, interactive/movement-based health clubs. The science is right up the alley of those establishments, which can blend design architectures with multiple display technologies, very expensive computer linkups, and the creative bridges necessary to fully realize the vision you are pursuing (the holodeck). MS's IllumiRoom, Kinect 2.0/commercial, Sony's motion wand, Oculus Rift VR goggles, haptic gloves, big display perimeters with efficient high-end display tech like LPD, etc. are all required to some degree to create the illusions you seek.
  2. I don't see the technology/application as very relevant in homes and apartments; there are just too many space and noise restrictions to overcome. Ditto for MS IllumiRoom and similar approaches.
  3. The one narrow area where I see potential synergy with home-based applications is Kinect, and that is only if MS wants to create a holographic controller/keyboard that can be activated in a game. Think of the movie "Minority Report" to visualize what I'm suggesting here. It appears your idea could recreate a reasonable facsimile of that kind of display from on-screen graphics, but precision would be at a premium: there would have to be zero variance between the holographic control and the gamer's interaction with it.

That's my 2 cents worth. Good luck with your idea and with this thread. I'll chime in when I read something I feel compelled to comment on. Is the market really ready for the Star Trek "Holodeck"? Not in the home or living room IMO. As an out of home destination experience? Absolutely a resounding yes IMO. But it will have to be firing on all cylinders to keep people engaged.
barrelbelly is offline  
post #5 of 8 Old 06-26-2013, 04:12 PM - Thread Starter


A wheel holds a man at its middle using a harness attached to him.

The man's arms and legs have poles attached to them that act like oars, going into and out of water as they move a boat.
The poles have sensors that relay the man's position to the VR program, so his arms and legs move in VR as they do in the wheel.
Because the poles are held in the wheel by holes, they can move with the full freedom of movement the arms and legs have.

The person in the wheel must be able to crouch and roll as in real life, or crawl on the VR ground.
To do this, the wheel uses the poles to bear weight from the arms and legs.
In one pole position the legs move as if in air; in another the legs touch the ground, so the pole bears weight.
The pole position is judged by leg extension: if the leg is extended, the pole bears weight; if the leg is curled, the pole moves freely.
This way, with the pole bearing weight, the person can have the sensation of walking in VR space.
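The extension-based decision can be sketched as a simple threshold rule: compare the hip-to-foot distance against the leg's full length, and lock the pole when the leg is near full extension. A Python sketch; the threshold value and the point names are assumptions for illustration.

```python
import math

def pole_bears_weight(hip_xy, foot_xy, leg_length, threshold=0.95):
    """Decide whether a leg pole should bear weight: if the leg is
    near full extension, lock the pole; if curled, let it swing."""
    extension = math.dist(hip_xy, foot_xy) / leg_length
    return extension >= threshold

# Leg fully extended (0.9 m of a 0.9 m leg): the pole bears weight.
bearing = pole_bears_weight((0.0, 0.0), (0.0, -0.9), leg_length=0.9)
# Leg curled (0.5 m of reach): the pole swings freely.
free = pole_bears_weight((0.0, 0.0), (0.0, -0.5), leg_length=0.9)
```

A real rig would hysteresis-filter this so the pole doesn't chatter between locked and free near the threshold.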

And the same idea applies to the arms: if the person moves their body to crawl on the ground in VR space, the wheel and poles move so the person is physically crawling in the wheel.

This idea has weight distribution and weight bearing, so the person can plausibly feel a gravity-like force in VR as he or she walks around.

Thanks barrelbelly,

I don't have the engineering experience to build this in HW or SW, but this is the idea for how I would do it if I did; it's like a wish list of things I wish I could build.
post #6 of 8 Old 06-29-2013, 12:13 PM - Thread Starter
This post shows a SW version of the tilting mirror mechanism, which would allow for cheaper holography and augmented reality.

As described, the VR setup uses a camera to read a screen and tilt the mirror so the laser strikes the tracked part of the monitor.
Built mechanically in real life, this would be a large box that takes up a lot of room and is very expensive.

So I envision a virtual tilting-mirror design, which I will describe below:

The monitor1 and camera1 are in the real world, but the monitor2, camera2, and other parts of the design are virtual.

The effect is that the laser is still guided by the program to hit the tracked part of the monitor,
which finds the changing xyz coordinates of the tracked thing on the monitor,
but the laser exists only in the video-game world.

By integrating the real-world picture into the video-game world, the real-world picture is beamed onto a video-game-world TV.
The program then reads the real-world picture on that virtual monitor,
uses the virtual monitor showing the real picture of the person
to see where the virtual laser is pointing, and redirects the laser onto the tracked part of the monitor.
This way the program finds the changing xyz values of the person's eyes without any extra physical HW.
The whole design then needs only one real monitor for the person to look at and one real physical camera watching the person look at it.

So I would need to find a way to broadcast real pictures in real time onto a video-game TV, then have a program watch that video-game TV and adjust the virtual tilting mirror to find the xyz values needed for the holographic effect I described earlier.
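One way a software-only tracker could recover the z part of those xyz values with a single real camera is from the apparent spacing of the two tracked eyes: the pixel distance between the eyes shrinks as the person moves away. This technique is not in the post; it is a swapped-in illustration, and the focal length and interpupillary distance below are assumed calibration values.

```python
def eye_depth(left_eye_px, right_eye_px, focal_px=800.0, ipd_mm=63.0):
    """Estimate distance to the face (z, in mm) from the pixel
    spacing of the two tracked eyes, via the pinhole-camera
    relation z = f * IPD / pixel_spacing."""
    spacing = abs(right_eye_px[0] - left_eye_px[0])
    return focal_px * ipd_mm / spacing

# Eyes 100 px apart -> roughly half a metre away with these numbers.
z = eye_depth((270, 240), (370, 240))
```

Combined with the xy tracking already described, this would give the full xyz without extra hardware, at the cost of depending on an accurate per-person IPD.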

Monitor1 shows the stereoscopic eyes the xy coordinates of the thing the eyes look at on the screen (an orange or a dog, etc.).

Camera1 sees the eyes and records them so the eyes' xy coordinates can be shown on monitor2.

Monitor2 shows camera2 the eyes' xy coordinates, and those coordinates change as the person looks
at monitor1 from different physical perspectives.
post #7 of 8 Old 06-29-2013, 12:15 PM - Thread Starter
Using the SW version of the tilting mirror mechanism, I worked out how to create augmented reality;

As camera2 sees the eyes' xy coordinates, the program shows a stereoscopic image based on those coordinates,
so when the eyes move, monitor1 shows the eyes a new stereoscopic image at each new xy coordinate.

This is the basic idea with a camera that records the person but is not on the person (camera1),
and a camera watching the video from camera1 that is also not on the person's body (camera2).
But what if both cameras are on the body of the person being recorded by camera1?
That is what I will describe below; it would work for robot navigation and augmented reality.

monitor1 must show the stereoscopic eyes an image, and this image must have an xy coordinate held in a coordinate system.
If monitor1 shows a face to the stereoscopic eyes,
the face on monitor1 has an xy coordinate that is one piece of a greater whole of xy coordinates.

Remember that light must bounce off the image on monitor1,
and that light, having bounced off the object in monitor1, is then reflected onto the stereoscopic eyes.
So if light can bounce onto the object in monitor1, there are other xy coordinates besides the one the eyes are looking at.
Therefore the eyes look at an xy coordinate that is one coordinate within a larger system of xy coordinates.

So a camera must get the coordinates (plural), then find the coordinate (singular) to show to the stereoscopic eyes.

Now camera2 sees the eyes, and the eyes looking at the image on monitor1 give the initial xy coordinate on monitor2.
The eyes and camera are on the same body,
so the xy coordinates the eyes move through on monitor2 must be the same coordinates the eyes use to find the thing being looked at on monitor1.
This works because the eyes have a position within the coordinates of the thing they are looking at:
when the eyes see the image on monitor1, the eyes also have a coordinate in the coordinate system containing the thing seen on monitor1.

So as the eyes look at the image on monitor1, which is a single coordinate, the eyes are themselves a coordinate tracked by camera2.

Now when the eyes look at the image coordinate on monitor1, the eyes are also a coordinate on monitor2 to camera2.
So if the coordinate on monitor1 is augmented reality, then when the eyes move and change their own coordinate,
the program shows the image on monitor1 from the new perspective the eyes are looking from.

Now the important part: are the coordinates accurate enough that the augmented image doesn't bounce around as the person views it from different perspectives?
The y coordinate would relate to the person's standing or walking height,
and the x coordinates would be mapped from that y coordinate,
so each time the person walks or sits the y axis is the same,
and the x axis is then decided from the y axis.

And for robotics, I think the image on monitor1 could have a skeleton wireframe fitted around it,
and this skeleton could carry image-recognition traits;
when the program sees the skeleton, it decides what to do, or it copies actions onto the skeleton,
using something like the Baxter robot's system of mimicry.

So this post shows how to use the holographic system I described for augmented reality.
post #8 of 8 Old 07-17-2013, 08:13 PM - Thread Starter
I posted this at gamedev to ask a few questions but thought I would post it here too;
http://www.gamedev.net/topic/645585-chase-and-evade-ai-applied-to-facial-tracking/

"
What I have to say here is an idea for tracking in 3d space for use in the oculus vr.

Now bear in mind that I'm learning C# and Unity right now, and in a year's time I should be able to try some of this. So this is really just a few questions to more experienced developers about what is possible.

I have thought through what I am trying to do and will describe it now.

A cube sits beside a mirror; the mirror is above the cube and facing it, so the mirror faces downwards.

A light source like a laser pen shines upwards; the laser pen is beside the cube and below the mirror, and the mirror receives light from the laser pen and shines it onto the cube.

It looks like this;

See picture A.jpg


Now that the laser is shining onto the cube, the cube also has a mirror, so the laser beam is shone onto a different surface; this is where it gets tricky.

See picture B;


The surface the cube reflects the laser light onto is called "destination".

"Destination" has a moving dot that is chased by the laser light from the cube, so that the laser light from the cube sits on top of the dot on the "destination".

See picture C to see the dot;


So in pictures A,B,C, you see the overall mechanism I want to create.

Now the tricky part is what happens when the cube redirects the laser beam.

For the cube to redirect the laser light onto "destination", the cube must have a surface that moves in xy, and this moving surface must sit on top of the cube.

See picture D;


Now, as the moving surface redirects the laser light onto the "destination", the moving surface has changing xy coordinates.

What I want to know is: is it possible to create this moving surface so that it chases the dot on the "destination", and as it chases, the moving surface has accurate xy coordinates that update every time it moves?

That is the basic design I have in mind.

Now, the "destination" has a dot that the moving surface chases with the laser beam it reflects onto the "destination".

That moving dot on the "destination" is a facial point being tracked with facial-recognition tracking points.

So the problem is to video a face, apply facial tracking points to it, feed the face with the points on it to the "destination", and have the moving surface send the laser light onto whichever facial tracking point I choose.

As the facial point moves, the moving surface sends light onto it, and its xy coordinates change, so the face's movement produces xy coordinate data in the moving surface as it sends light from the mirror onto the "destination".
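The core question here, "can the moving surface chase the dot and report accurate xy coordinates on every move?", can be answered with a small simulation: step the surface toward the dot each frame and log its coordinates. A Python sketch (not Unity/C#); the step size and coordinates are illustrative assumptions.

```python
def chase_step(surface_xy, dot_xy, speed=1.0):
    """Move the reflecting surface one step toward the tracked dot,
    capped at `speed` units per step (classic chase behaviour)."""
    dx = dot_xy[0] - surface_xy[0]
    dy = dot_xy[1] - surface_xy[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:
        return dot_xy          # close enough: snap onto the dot
    return (surface_xy[0] + speed * dx / dist,
            surface_xy[1] + speed * dy / dist)

# Log the surface's xy coordinates on every move, as the post asks.
trail, surface = [], (0.0, 0.0)
for _ in range(10):
    surface = chase_step(surface, dot_xy=(3.0, 4.0))
    trail.append(surface)
```

So yes, in software the answer is straightforwardly possible: the chasing surface is just state in a loop, and its xy coordinates are available at every step; the harder part is feeding it real facial-tracking points at video rate.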

I was thinking of using Unity and C# to get the facial points, then somehow getting a Unity movie texture to show the face with the points on it in a game environment, then setting up the cube, shining the laser onto the movie texture, and somehow getting xy coordinates from this setup.

But I'm not sure the movie texture can be used like this. Since I am learning C# and Unity and will put a lot of time and effort into this, I thought I would ask whether what I want to do is possible before I put too much effort into the project.
"