
# 3D creation tutorial

Here's how to create a stereoscopic image.

First, the right eye and the left eye see different images, and the brain joins them into one image.
Here is how to see this phenomenon:
draw a horizontal number line on a wall, and face the number line you drew.
Hold out your hand in front of you with one finger raised, so you see the finger between you and the number line.
Now close one eye: the finger will be over a different part of the number line when you close your left eye than when you close your right eye.

It looks like this:
draw a number line on a TV, then hold a pen between the number line on the TV and your eyes, and look at the pen.
Close one eye, then the other, and note which number the pen is over.
The two eyes see different numbers, and these two images are joined by the brain to let me see the pen between the TV and my eyes.

Therefore, when I get closer to the TV and do this same thing, holding the pen between my eyes and the number line on the TV,
the eyes see different numbers when I close an eye,
because the change in distance to the TV changed which number each eye saw.

Therefore, the distance to the TV and the size of the TV both play a role in which number my eyes see when I close an eye.

By remembering this value I can create an artificial pen.
If I draw an image of the pen on the number line where it appears when I close one eye,
and then show each eye its own image of the pen on the number line,
the brain should join the two images so that the pen looks as if it is between the display and my eyes.

That's how 3D works for one parallax, the pop-out parallax.
For the opposite of pop-out parallax, you would look through glass and see the pen on the other side of the glass.
Then draw a number line on the glass and look at the pen and the number line.
Close one eye, note which number the pen is at, and once the number is found the artificial pen can be drawn.
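The geometry above can be sketched numerically. This is a minimal pinhole-style model of my own, not something from the post: the 6.3 cm eye separation and all distances are assumed example values. A pen in front of the screen yields crossed marks (pop-out); a pen behind the glass yields uncrossed marks.

```python
def pen_on_number_line(eye_x, pen_dist, screen_dist, pen_x=0.0):
    """Where one eye sees the pen against the number line.

    Eyes sit on the line z = 0; the screen (number line) is at
    z = screen_dist; the pen is at (pen_x, pen_dist). We extend the
    ray from the eye through the pen until it hits the screen plane.
    All units are centimetres.
    """
    t = screen_dist / pen_dist  # how far along the ray the screen plane is
    return eye_x + (pen_x - eye_x) * t

IPD = 6.3  # assumed average eye separation in cm

# Pop-out: pen 40 cm away, screen 80 cm away -> crossed marks.
left  = pen_on_number_line(-IPD / 2, pen_dist=40.0, screen_dist=80.0)
right = pen_on_number_line(+IPD / 2, pen_dist=40.0, screen_dist=80.0)
print(left, right)   # the left eye's mark lands to the RIGHT of the right eye's

# Behind the glass: pen 160 cm away, glass 80 cm away -> uncrossed marks.
left_b  = pen_on_number_line(-IPD / 2, pen_dist=160.0, screen_dist=80.0)
right_b = pen_on_number_line(+IPD / 2, pen_dist=160.0, screen_dist=80.0)
print(left_b, right_b)  # now the left eye's mark stays on the left
```

Drawing each eye's artificial pen at the computed position is exactly the "remember the value and redraw it" step described above.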

The reason I am saying this is that if you don't know how to find the image, you will have a hard time creating stereoscopic images that look as they do in the real world.
With the Oculus Rift, if you have a virtual test room and a real-world test room, you can match the pen so it is the same in both the virtual and the real.

If you have the wrong value in the artificial image of the pen, so the pen is over the wrong part of the number line,
the brain still tries to join the images, and doing so strains the brain and causes sore eyes
and other image-fusion problems you can find on Google.
Also, children and very small people see different values on the number line than an adult of average size does, because their eyes are closer together.
So by reducing the value the adult sees to match the child's,
a child or small person looking at the artificial images of the pen on the TV won't strain their brain
when they try to join the two artificial images of the pen into one image.
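In the same simple projection model, the offset each eye sees is proportional to the eye separation, so matching the child's value is a straight ratio. The separations here (6.3 cm adult, 5.0 cm child) are assumed examples, not measured values.

```python
def scale_for_eyes(adult_offset, adult_sep=6.3, child_sep=5.0):
    """Scale the adult's number-line offset down for smaller eyes.

    In the simple pinhole projection model, the offset each eye sees
    is proportional to the eye separation, so a straight ratio works.
    """
    return adult_offset * (child_sep / adult_sep)

print(scale_for_eyes(3.15))  # the child's smaller number-line offset
```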

- I read somewhere that the Oculus is hard to develop for, so I reasoned a little advice would help. Obviously, if you want a perfect comparison between the virtual test room and the real test room, you would need a precise measurement methodology, which I didn't describe in this post.
Edited by forsureman - 8/3/13 at 11:57am


This was a quick and dirty piece of advice, not technical diploma material.

I probably shouldn't have called it a tutorial but rather a piece of quick and dirty advice; I just liked the sound of "tutorial" better.

I have thought about how to make a 3d image.

A picture I drew in MS Paint:
http://www.imagebam.com/image/50424d267572481

The idea is that the eyes converge on the pencil that is between the TV and the eyes.
Since they converge, a line can be drawn to show the path they travel as they converge; that is what the laser is for.

If a laser sits where each eye is, just above the pupil, and the lasers rotate to beam a dot where the eyes converge, then the laser beams and the eyes converge on the same spot on the pencil from the same starting points.
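The angle at which the two eyes (and their lasers) meet on the pencil can be estimated with basic trigonometry. A sketch, with an assumed 6.3 cm eye separation and an assumed 40 cm pencil distance:

```python
import math

def vergence_degrees(eye_sep, target_dist):
    """Angle between the two lines of sight converging on the pencil.

    Each eye rotates inward by atan((eye_sep / 2) / target_dist), so
    the two beams meet at twice that angle. Units: cm in, degrees out.
    """
    return math.degrees(2 * math.atan((eye_sep / 2) / target_dist))

angle = vergence_degrees(6.3, 40.0)
print(round(angle, 2))  # roughly 9 degrees for a pencil 40 cm away
```

A farther pencil gives a smaller angle, which is why the lasers' crossing point tracks the pencil's distance.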

Now the laser has a camera that rotates to wherever the laser is pointing, and the camera has a field of view that matches the person's field of view inside the glasses.

The camera takes a snapshot of the left and right convergence, and the person is presented the result as a stereoscopic 3D picture viewed on a 3D TV.
The person then judges for themselves whether the pencil looks 3D or not.

Then, if you want to get fancy, a camera could be put on the eyes to watch for movement, so when the above is done you have an eyeball-position fingerprint.
Now your eyes can converge anywhere, and the laser and camera will be able to create a stereoscopic image of what you see.

This is just an idea, though, and not part of the tutorial.
What follows is a more technical tutorial.

This is a new approach to VR that builds in head tracking.

First I will detail what 3D is, then I will detail how to create a 3D image, then I will detail how to create a VR headset like the Oculus VR.

What is 3D?
Imagine a horizontal number line being displayed on a television.
You have the number 0 in the middle and then the negative numbers on the left and the positive numbers on the right.

Now, sitting in front of the television and looking at the number line, hold a pencil in front of you.
Focus on the pencil and the number line will appear doubled.
Because the number line is doubled, there will be two number 0's.

Put the pencil equally far from the two zeros on the number line while you are focusing on the pencil.
This will put the pencil between the two zeros in your peripheral vision.

Now hold your hand over one eye while still focusing on the pencil; the pencil will be over one part of the number line, not zero.
Then move the hand to cover the other eye; with the pencil still in focus, the pencil will be over a different part of the number line.

The position of the pencil between the two zeros is lost when one eye is covered, so vision puts the pencil over a different part of the number line.

With both eyes seeing the pencil again, no eye covered anymore, the 0 is doubled once more and the pencil is between the two zeros on the number line.
What the eyes see while focusing on the pencil with both eyes is two images of the pencil over different parts of the number line.
The brain takes these two different images and joins them into one image, so the pencil looks as if it is between the two zeros.
This is the principle of stereoscopic 3D.

By taking two different photos of something and showing the two photos to the eyes, the pencil-between-two-zeros phenomenon happens again, and your brain joins the two different pictures into one picture.

When you get closer to the TV and hold the pencil between the two zeros (you see two zeros because of the double vision from focusing on the pencil),
then hold a hand over one eye, you see a different number, a smaller value on the number line.
This shows that TV size and viewing distance affect how the stereoscopic pictures function.

If the stereoscopic images for the left and right eye were made for one TV size and one viewing distance, then changing either makes the stereoscopic phenomenon invalid.
And does the 3D TV industry or the movie industry have a standard for TV size and viewing distance? No.
But this is a problem that games and VR headsets can fix: by building games around the fixed display size and viewing distance of the VR headset, the stereoscopic images should always be valid.
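To see why a standard matters, the on-screen offset can be inverted to find where the fused pencil appears. Under the same simple projection model (assumed 6.3 cm eye separation, my own sketch rather than anything standardized), the identical picture viewed from twice the distance puts the pencil at twice the apparent distance:

```python
def apparent_pen_distance(disparity, screen_dist, eye_sep=6.3):
    """Distance at which the fused pencil appears, for crossed offsets.

    Inverting the projection model: disparity = eye_sep * (D/d - 1)
    gives d = D * eye_sep / (eye_sep + disparity). Units are cm.
    """
    return screen_dist * eye_sep / (eye_sep + disparity)

# The same 6.3 cm on-screen offset viewed from two distances:
print(apparent_pen_distance(6.3, 80.0))   # pencil appears 40 cm away
print(apparent_pen_distance(6.3, 160.0))  # same image, now 80 cm away
```

A bigger TV stretches the same picture, growing the offset in centimetres, which shifts the apparent distance again; a headset's fixed screen and eye relief remove both variables.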

How to create 3D for use in a 3D VR headset?

Assume the user taking the photograph is wearing the glasses and looking at the 3D TV; the image on the TV is not in 3D yet, and the TV is showing the same number line as before.

The user holds a pencil between the eyes and the number line and sees a doubled zero.
Then he holds a hand over one eye, and the open eye sees the pencil over a different number on the number line: the first stereoscopic image.

Now, the technology on the glasses is a swivelling laser pointer with a camera attached to the swivel.
Where the laser points is where the camera films.
The camera has the same field of view as the person sees out of the glasses.

The laser for the eye not covered by the hand points to the number on the number line that the eye sees the pencil over.
When the person agrees the laser is on target, the camera takes a photo; then the hand covers the other eye, the process repeats, and the second stereoscopic photo is taken.

Now the two photos are run through 3D photo software to be viewable on the 3D TV in 3D mode. The lasers and camera are turned off, the glasses become 3D glasses, and the person looks at the 3D image the camera took of the pencil on the number line and judges whether it looks 3D when they focus on the pencil.

The moving pencil

Ignoring VR tracking for now.

Suppose the glasses photographing the TV's number line take the two photos and the person agrees the 3D looks fine.
The glasses also have a camera that can view the person's eyes, one camera per eye.
The cameras looking at the eyes photograph the physical position of the eyes at the moment the photos of the pencil are taken.

Now this process of photographing the pencil is repeated, but with the pencil closer to the TV.
And this happens for all distances the pencil can be between the TV and the glasses.

These values are plugged into the virtual environment.
Then, inside VR, the person can sit still and view the pencil between the glasses and the TV while the pencil is moved toward and away from the TV, and the person still sees it in 3D.
The result is the same as if photos of the pencil moving toward and away from the TV had been taken in real life, shown in 3D mode on the 3D TV, and judged for whether the pencil looked 3D.
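"Plugging the values in" could be a small lookup table with interpolation between the measured pencil distances. The sample numbers below are hypothetical calibration readings (they happen to follow the simple projection model with an 80 cm screen), not measurements from the post:

```python
# Hypothetical calibration table: (pencil distance from the glasses in cm,
# measured offset on the number line in cm), built by repeating the
# photo procedure at several pencil positions.
CALIBRATION = [(20.0, 9.45), (40.0, 3.15), (60.0, 1.05)]

def offset_for_distance(d):
    """Linearly interpolate the number-line offset for a pencil at d cm."""
    pts = sorted(CALIBRATION)
    if d <= pts[0][0]:
        return pts[0][1]
    if d >= pts[-1][0]:
        return pts[-1][1]
    for (d0, v0), (d1, v1) in zip(pts, pts[1:]):
        if d0 <= d <= d1:
            return v0 + (v1 - v0) * (d - d0) / (d1 - d0)

print(offset_for_distance(30.0))  # halfway between the first two samples
```

With enough samples, the virtual environment can draw the pencil's two images at the right offset for any distance along its path.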

How to create VR headset tracking part 1

Because the pencil's position on the number line changes depending on whether both eyes focus on the pencil,
there is an 'x' shape where the lines from one eye and the other intersect at the pencil and continue past it.

That x shape is made visible by the two intersecting lines drawn by the two lasers on the glasses.

Hold the pencil between the glasses and the TV at one fixed distance from the glasses.
When the person moves closer to the TV, the laser segments behind the pencil become shorter.
When they move farther from the TV, the laser segments behind the pencil become longer.

When the lasers become shorter, the pencil's marks land closer to zero on the number line.

If this laser length is measured and virtually recreated in VR,
then as the person moves closer to the virtual TV, the lasers behind the pencil become shorter there too.

This recreation makes the virtual environment match the real-world environment.
The shorter laser has a trackable value.
The virtual and real environments agree on the length of the laser behind the pencil.

As the pencil stays one distance from the glasses but a variable distance from the TV, the laser marks on the number line give the value that can be plugged into VR.
If the person moves in a straight line toward or away from the TV, the number line in VR should show the laser marks change as much as they do in real life.

In the real world, the person shines the two lasers from the glasses onto the number line displayed by the TV.
In VR this is recreated, so the virtual glasses beam the two lasers onto the number line the virtual TV displays.
Then, as the person in real life moves in a straight line toward the TV and the lasers change position on the number line,
the lasers in VR change position on the number line and the virtual person moves closer to the TV.
So the exact distance the person moves in reality, which changes the number the two lasers are over, is mirrored in virtual reality.
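Mirroring the real movement amounts to inverting the projection: given where a laser mark lands on the number line, recover the head-to-screen distance. A sketch under the same assumed model (6.3 cm eye separation, pencil fixed 40 cm from the glasses; these numbers are illustrative):

```python
def screen_distance_from_laser(hit_offset, pencil_dist, eye_sep=6.3):
    """Recover the glasses-to-screen distance from one laser mark.

    In the projection model, a beam from one eye through a pencil
    pencil_dist cm away lands hit_offset cm from zero when the screen
    is D cm away, where hit_offset = (eye_sep / 2) * (D / pencil_dist - 1).
    Solving for D gives the head's distance to the number line.
    """
    return pencil_dist * (2 * hit_offset / eye_sep + 1)

# Laser mark 3.15 cm from zero, pencil 40 cm from the glasses:
print(screen_distance_from_laser(3.15, 40.0))  # head is 80 cm from the TV
```

As the marks slide toward zero, the recovered distance shrinks, which is the trackable value the VR side mirrors.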

How to create VR headset tracking part 2

Two different concepts:
in the first, the pencil is a static distance from the glasses and the person changes the distance from the pencil to the TV;
in the second, the person and TV are static and the pencil changes its distance from the TV and the eyes at the same time.

How to create VR headset tracking part 3

Now the person physically uses the static-pencil virtual reality, but what the person sees is the moving-pencil version with a static TV and person.
So the eyes see stereoscopic images that look 3D, but the head position is measured in reality.

The VR headset in the real world must move as the person moves their head.
The person doesn't look at the pencil, but a pencil is still needed for the lasers,
so something like a unicorn horn is mounted in front of the VR helmet to play the role of the pencil.
Lasers on the VR headset then beam past the pencil, creating the x shape I mentioned before.
The distance the lasers land behind the pencil lets the virtual and real worlds agree on measurement.
The left and right cameras see the left and right lasers behind the pencil that is between the glasses and the number line.
The cameras feed this into software, which finds where the lasers are hitting on the number line.

Then the real world and the virtual world put the person's head that far from the number line,
so in both worlds the lasers hit the number line that exact distance behind the pencil in front of the person's glasses.

The person doesn't know what the lasers are touching on the number line; they only look at what's in the virtual environment.

Now, when the person turns their head in the virtual environment of the moving pencil,
the virtual environment using the number line finds its position.
The virtual environment of the static pencil is then used by the virtual environment of the dynamic pencil to find the head position and enable head tracking.

This may mean a number line circling the person, so that as they turn their head the lasers stay focused on the pencil but still hit the number line.

Because the VR environment the person sees uses the eye position, not the laser position,
the lasers can stay focused on one spot while the person still moves their head around.
Edited by forsureman - 8/5/13 at 12:08pm
Playing the free Unity game Angry Bots.

I make the character fire his weapon by pressing the left mouse button,
and by moving the mouse pointer I can rotate him to fire in a circle.
I press the up arrow to move him up, the down arrow to move him down, the right arrow to move him right, and the left arrow to move him left.

What happens when I point the character to look east and then press the up arrow to move him forward?
The character goes to his left, not forward to where his face is pointing.

I think the nausea problem in the Oculus VR is this:
in the game, the player moves the character in one direction, say east, so the player's eyes see east in the VR headset.
Then the player presses the up arrow on the keyboard to move forward,
and the movement is unintuitive: they move to their left, not forward to where they are looking.

As I wander around in Angry Bots, the forward the controls refer to is not the forward the character is facing,
which makes navigating the character wobbly when trying to fire.
It's ridiculous how disoriented I am in that game as I press the arrow keys to get around.

So if the Oculus VR fixes the arrows to move where the person is looking, then maybe the nausea will stop.
Maybe the headset is the parent and the arrow keys are the child.
Wherever the parent faces (north, south, east, west, and all points in between),
when the child presses up, "up" means the direction the parent is facing, and "back" is the opposite of where the parent is facing.

If the arrow keys are the child and the head position the parent, the up arrow will move you to where you are facing.
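The parent/child idea can be sketched as rotating the arrow-key input by the headset's yaw. The axis conventions here (0° = north, 90° = east) are my own assumptions for illustration:

```python
import math

def world_step(forward, strafe, yaw_deg):
    """Turn child input (arrow keys) into a world-space step.

    yaw_deg is the parent (head) facing: 0 = north, 90 = east.
    forward/strafe are +1, 0, or -1 from the arrow keys.
    Returns (east, north) movement components.
    """
    yaw = math.radians(yaw_deg)
    east  = forward * math.sin(yaw) + strafe * math.cos(yaw)
    north = forward * math.cos(yaw) - strafe * math.sin(yaw)
    return east, north

# Facing east and pressing the up arrow now moves the player east:
e, n = world_step(forward=1, strafe=0, yaw_deg=90)
print(round(e, 6), round(n, 6))
```

This is the fix for the Angry Bots complaint: "up" follows the parent's facing instead of a fixed world axis.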

Then head tracking is needed, or the fabled positional tracking, which has not yet been solved AFAIK.

I made a solution to this tracking in my theory above, though.
Edited by forsureman - 8/5/13 at 12:17pm
Side by side, right side first.

Click Quality to reveal the 3D tab.
Edited by forsureman - 8/5/13 at 12:18pm
The tripod holds pole 1.
Pole 1 swivels on the tripod: up, down, and all around.

Pole 1 holds pole 2; pole 2 moves forward and backward.
Pole 2 holds pole 3; pole 3 moves forward and backward.

Pole 3 is attached to the virtual reality helmet,
and the helmet can move in all directions like pole 1.

Pole 3 goes through two doughnut shapes that are stacked vertically.

The player's virtual reality helmet is in the middle of the two doughnuts.

Poles 1, 2, and 3 create a tripod shape.

There is a PlayStation 3 controller on the tripod between pole 1 and the tripod.
The PlayStation 3's left or right stick is touching pole 1.

The PlayStation 3 controller uses the motionjoy PC drivers to read the PS3 stick motion.

When the person moves their head in the middle of the doughnuts,
pole 3, attached to the VR helmet, is moved forward or backward,
or to a different horizontal part of the doughnut.
This motion is relayed to pole 2,
then to pole 1,
then to the PS3 stick on the tripod,
then from the PS3 stick to the motionjoy drivers,
which is reflected as the parent in the game: head tracking.
The child is the arrow keys, which move the player forward or backward, where forward is where the player is looking.

This is a crude way of getting the helmet's position relative to the doughnut to achieve head tracking.
In my theory this is the static pencil in front of the helmet that changes its distance from the pencil to the number line.
The number line in this crude example is the doughnuts.

I figure if you can jimmy this mechanism together and see it working,
you might get the camera and lasers I talked about before in my theory working too.

See the picture for a diagram of the idea.

This is a proof-of-concept design: the head moves farther than the PlayStation 3 stick, so the head will pull on the stick or push it too far as the person looks around in virtual reality. If I knew how to correct this, I would detail that here as well.
That example with the three poles was rather crude, so I thought up a fancier design.

The theory uses a static pencil that changes the distance from the pencil to the TV but keeps the same distance from the pencil to the eyes.

Cut an elastic band so it forms a long string.
One end of the elastic string is attached to the chin; this serves as the pencil in front of the eyes in the theory.
The other end of the elastic string is what shows the distance from the pencil to the TV.

On the person's shoulders sits a neck accessory that looks like a big ring, like the kind you put on donkeys or oxen so they can plow the ground.

It sits on the shoulders so the ring doesn't wobble very much and rests in one spot when the person moves their head.

The other end of the elastic string is attached to the ring sitting on the shoulders.

When the ring is pulled by the elastic string, a key press is made in software to indicate the head moved.

Then you attach multiple rubber bands from the chin to the ring, and when the software reads the key presses it can use if-conditions to decide that a given pattern of key presses means the head moved in a certain direction.

That's how the neck ring and the chin, connected by rubber-band strings, can input movement that software reads; using if-statements, the software can derive head position for head tracking.
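Those if-conditions might look like this. The band names and the mapping from stretched bands to head movements are hypothetical; a real rig would calibrate which bands stretch for which motion:

```python
def head_move(stretched):
    """Map stretched rubber bands to a head movement (hypothetical mapping).

    `stretched` is the set of band names ('front', 'back', 'left',
    'right') whose ring sensors read tension above some threshold.
    """
    if stretched == {"right"}:
        return "turned left"     # chin swings left, so the right band stretches
    if stretched == {"left"}:
        return "turned right"
    if stretched == {"front"}:
        return "tilted back"     # chin lifts, so the front band stretches
    if stretched == {"back"}:
        return "tilted forward"
    if not stretched:
        return "still"
    return "combined move"       # diagonal motions stretch several bands

print(head_move({"right"}))
```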

The chin wears a scuba wetsuit mask, so the rubber bands are attached to the wetsuit.

This means the ring needs some heft so the rubber bands don't jiggle it all about, and the ring has sensors that read when a rubber band is pulling. The ring feeds the rubber-band stimuli to software, so the neck ring needs to be connected to the computer too.
The ring is probably plugged into the Oculus so they share the connection to the computer.

Then the person puts on the wetsuit mask with its dangling rubber bands.
The headset is then put on with the dangling neck ring.
Then the neck ring is put on, and the mask's rubber bands are clipped onto the neck ring.
A bit cumbersome to suit up like that, but it gets you head tracking in one clean design.
That rubber-band solution was more sophisticated than the three poles, but it was still pretty crude, so here is a more advanced solution.

The rubber-band solution uses the scuba wetsuit mask to hold one end of each rubber band and the neck ring to hold the other end.

This creates the difference between the chin end of the rubber band and the end attached to the neck ring.
This follows the theory example where the pencil in front of the eyes is one distance from the eyes but a variable distance from the TV.

The TV in the theory was called the number line, and the number line was said to move from being only on the regular TV screen to circling the person in a doughnut shape.
This is why, in the three-poles design, the person sat in the middle of the doughnut.
This doughnut is static and does not move, just like the neck ring with a rubber band attached, which stays still on the person's shoulders.

I said that to show that the doughnut shape is static, while the distance to the pencil or rubber band on the chin is dynamic.

If the doughnut shape is made of lasers, then that is one half of the rubber-band solution, so the person's head has a laser on it too.

How is the doughnut shape made of lasers?
The 5 mW laser pointer that teachers use to point at projector screens in lectures makes a visible splash where the laser touches the screen, and if you shine a green laser onto a wall there is a visible splash where the laser hits the wall.
Because there is a splash where the laser touches a surface, the splash can mark that a surface is there.

If the person sits on a chair and the chair sends lasers in four directions circling the person's chair, then the doughnut is made, and this serves to show the difference between the head and the doughnut. This is part one of the rubber-band solution.

How does the laser on the head work with the doughnut made of laser splashes?

The laser on the head points toward the area where the four lasers splash on the ground; the four laser splashes stay still while the lasers from the head move.

If the lasers from the head are red and the lasers from the chair are green, then a camera can read the four green laser splashes and also see the red lasers move over the area of the four green lasers.

This way the head can have one initial laser position, and when the head turns, each pose has its own laser position values, which can be read and matched.

The person then does some initial calibration to record the initial and turned positions of the head, so that when the red lasers shine around the four green lasers, the red lasers match a head position: looking forward, left or right, up or down.
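The calibration-and-matching step could be sketched as a nearest-neighbor lookup: record where the red laser lands for each known head pose, then match the live reading to the closest recorded position. The pose names and floor coordinates below are made-up calibration data:

```python
def closest_pose(red_dot, calibration):
    """Match a red laser splash to the nearest calibrated head pose.

    `calibration` maps pose names to the (x, y) ground position the red
    laser hit during the calibration step; `red_dot` is the current
    camera reading. Coordinates are hypothetical floor units.
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(calibration, key=lambda pose: dist2(calibration[pose], red_dot))

CAL = {
    "forward": (0.0, 1.0),
    "left":    (-1.0, 0.0),
    "right":   (1.0, 0.0),
    "down":    (0.0, 0.3),
}
print(closest_pose((0.9, 0.1), CAL))
```

The green splashes (or, in the super-advanced version, the four unique images) anchor the coordinate frame the red dot is read against.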

Now for a super-advanced solution: the four green lasers can be substituted with a camera that recognizes some unique image on the ground, and the red lasers work with those four images surrounding the person the way they worked with the green lasers shining from the chair.

This way, maybe augmented reality can use this camera-and-unique-image method, since anything can serve as the four images surrounding the person. The person can walk around and isn't tied to a chair, and head position can then allow augmented images to be painted into the person's view of the world.
This post is about why some people leave the Oculus VR and still feel the effect of playing.

First the technical reason, then the guess, or rather the deductive assumption.

The eyes get head tracking when the part attached to the head stays one distance from the head while that part is a variable distance from the TV.

I said the person moves his head closer to the TV and the lasers beaming from his head to the TV reach lower values on the number line; the number line is an image on the TV, and the lasers beam onto the number line from the person's head.

This is the same logic as holding a sliding measuring tape, going one distance, and then shortening that distance; the result is that the measuring tape gets shorter.

This is used for head tracking: the thing attached to the head is at a static position on the head but a variable distance from the TV that is showing the number line.

The other way the device attached to the head is used for head tracking is that it moves closer to the TV, but the person does not move and the TV does not move.
The lasers still shoot at the TV and the numbers still get smaller and smaller, not because the person gets closer to the TV but because only the device does.

The person uses this second method to view the 3D virtual reality world. When he throws an object, that thing moves, but what he throws at does not move and he doesn't move. So this uses the principle of the thing moving while he stays still, not the principle of him moving while the object stays still.

Head tracking lets him hold the thing he threw at one distance from his head and then walk to what he tried to hit, and what he sees is the number-line effect I described.

What the Oculus VR does is keep the eyes at one stereoscopic position, or focus: you focus at infinity.

Then you walk around, and the effect is like trying to throw an object versus carrying it; in the Oculus VR, the effect you see is that you hold the object one distance in front of you and walk toward what you were trying to hit.

Next I will talk about what may be happening to the people getting sick.

"
since I played with the Rift two days ago I feel dizzy, I have nauseas, trouble focusing my eyes, I feel exhausted and I have eye strain. I didn't touch the Rift since then, but the effects are not leaving. I am a little scared.
"
I got this quote from somebody who wrote it on the Oculus VR forums.

Trouble focusing his eyes: in the Oculus VR the eyes are focused at infinity, while in real life a held ball may be seen as either carried or thrown to a target some distance away. In the Oculus VR it's similar to being forced to carry the object, because the focus is fixed at one point.

It looks like some way to change from focusing only at infinity is necessary for virtual reality not to create this mental strain.

But why do some people not feel this effect?

Some people can carry things around for hours at a time and never feel the compulsive need to throw things rather than carry them.
But other people have a mental compulsion to try the throw technique as well as the carry technique.

Therefore,
if you force those who compulsively use the throw technique as well as the carry technique to use only the carry technique, it makes them blend the throw technique into the carry technique.
Then, outside VR, when they try the throw technique they use the carry technique instead.
This is unnatural, because in the real world the carry technique is not the throw technique.

So, if people feel queasy in VR, maybe they are compulsively using the throw technique, but the Oculus VR forces the focus to infinity and thus blocks the throw technique; they feel the throw technique being forced into the carry technique, so they feel ill and need to stop the VR.

To fix this, instead of the throw technique being mangled into a focus at infinity, the throw technique needs the eyes' focus to follow the object as it moves to the thing being thrown at.
Cameras on the eye position help do this, as I described in my theory.
Edited by forsureman - 8/10/13 at 4:32pm
Some people want to walk in virtual reality, so they build mechanisms that feed walking into software, so that in virtual reality they see themselves walking.
And they use ball bearings.

The idea of using ball bearings is a crude idea, I think.
What the idea showed me is that the ball bearings are displaced and then reset to an original condition.
In normal day-to-day terms, this is a button press: being displaced, then reset to the original state.

So what if the ball-bearing design is not used, but a keyboard design is?

The idea

- A weight is tied to a cord.

- The cord goes through a hollow ball bead; the hollow ball bead is welded to the interior of the machine.

- What's being built is a mechanism where the cord is attached to what looks like a keyboard key. The person touches this key, it is pressed down, and then it pops right back up when it is released.

- When the key is not pressed in, the key sits at the top of the ball bead and the weight is not lifted toward the bead.

- When the key is pressed in, the key slides to the side of the ball bead; to the user this looks like the key being pressed in.
Then, with the key at the side of the ball bead, the weight tied to the cord is lifted up toward the ball bead, because the cord on the key going to the side of the bead acts to lift the cord with the weight.

- Then, when the key is released, the weight on the cord is pulled down again, the key goes back to the top of the ball bead, and the user sees the key reset so it can be pressed in again.

- Since the key goes straight up and down, it must be attached to a spring that can bend around the ball bead, and the cord is attached to the spring.
When the key is pressed down, the spring moves to the side and down around the ball bead, and the weight on the other end of the cord is lifted up.
So the key goes down and rests on the floor the user sees, but the spring attached to the key goes down and to the side of the hollow in the ball bead.

That's the basic mechanism of the contraption.
Since this uses weights and cords, it doesn't need to be in a curved tub shape anymore; it can be flat.
The curved design was based on ball bearings rolling in a tub; now that weights and cords are used, the curved design is obsolete.

Shoes that slide on the keys would work. Not shoes that slip on the keys, but something like the Omni shoes.

When the weight is lifted, it triggers an LED on the inside of the mechanism.
A camera on the inside of the mechanism reads the LED and gives this to the VR software, which translates what the lights mean and feeds this into the game; this functions as the person walking or running on the contraption and then seeing themselves run or walk in virtual reality.

Since the weight on the other side of the ball bead would swing around if left untethered when lifted quickly, it needs a bungee cord tying it to the floor or base structure of the contraption, so it recoils in a controlled way when it springs up toward the bead.

This was a more advanced idea than ball bearings in a tub.

I then sat down to draw how this would look, and this is what I made.

Here is a description of the drawing:

1. The button, before and after it is pressed.
It has a lip at the top of the button.
The lip rests on the floor the button sticks out of when the button is pressed down.
So the button can be sized so that when it is fully pressed in, its lip rests flat on the floor it sticks out of.

2. The first layer.
The first layer is a floor on the inside of the machine that has an outer ring hollow and a center hollow.
The outer ring hollow is what the button sticks pole 1 into when the button is pressed down.
Pole 1 is attached to pole 2, and pole 2 is attached to the cord.

As pole 1 is pushed into the hollow, the angle the two poles make is squeezed together, because both poles are being pushed into the outer circle on inner floor 1.
As pole 2 is pushed into inner floor 1, the tip of pole 2 lifts the cord closer to the button.

Pole 2 is connected to the cord, and the cord goes through the center hole to the weight.

3. The weight. The weight is connected to the cord that pole 2 is connected to.
And the weight is connected to the bungee cord below it.
The weight is pulled up toward the button when the button is pressed down,
and when the button is released, gravity pulls the weight down, and thus the button connected to pole 1 is lifted back to its original height by pole 1.

4. The bungee cord. The bungee cord is connected to the weight so that when the weight is flicked toward the button it does not swing around wildly.
Instead, when the weight swings up, the bungee cord controls where the weight can swing to.
The bungee cord is connected to the bottom of the contraption.

Also, the bungee cord may provide tension, so when the button is being pressed down the person can feel the button give some resistance to being pressed.

5. The electronics. Now when the bungee cord is lifted, the anchor it is tied to has a trigger that sends an impulse to an electronic recording device, which can translate that impulse into an LED or similar pattern that can be fed into a computer to process where the person stepped.
This will allow the person to step down and have some leg tracking.
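To make the electronics step concrete, here is a minimal sketch of turning button-press impulses into step events a VR layer could consume. All names here (`decode_steps`, the key IDs, the positions) are hypothetical illustrations, not part of the actual design.

```python
def decode_steps(impulses, key_positions):
    """Translate raw impulses (key_id, timestamp) into step events.

    key_positions maps each key's ID to an (x, y) spot on the pad.
    """
    steps = []
    for key_id, t in impulses:
        x, y = key_positions[key_id]  # where on the pad this key sits
        steps.append({"time": t, "pos": (x, y)})
    return steps

# Example: two presses on a hypothetical 2-key pad.
pad = {0: (0.0, 0.0), 1: (0.3, 0.0)}
events = decode_steps([(0, 0.0), (1, 0.5)], pad)
print(events[1]["pos"])  # (0.3, 0.0)
```

The game engine would then treat each step event as one footfall on the contraption.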

The leg-tracking hierarchy would be a child to the head tracking, so when the legs walk or step down, this is based on where the parent, the head tracking, is facing.
[Image: what the walking machine looks like.]

[Image: a diagram that shows how to repair the walking machine.]

That about covers all the important idea points of the walking machine.
I dub it, "kingpin".

The camera can be replaced by just a sensor that can feed the electronic stimuli into a board.

And the board is numbered.

And the numbers light up in software (SW); that SW then lets these numbers be converted into something that lets walking happen in VR.
Edited by forsureman - 8/12/13 at 10:30am
Eye tracking based on the kingpin technology

Before getting into how the eyes are recorded, I will start with how the eyes see something that can be recorded.

Say you have a square grid, and the grid is able to light up.

First you need to measure how far the eye can see (up, down, left, right) and match these extremes to the size of the grid.
So when the eye looks as far up as it can, it sees the top of the grid; as low as it can, the bottom of the grid; and as far side to side as it can, the far left and right of the grid.

Second, the eye sees the grid light up, and here the eye position is matched to the grid.
So the grid lights up, and the eye position that sees the grid light up is recorded; this is the position of the eye that sees that part of the grid.
So a grid cell equals an eye position.
This is done for both eyes, since some eyes are lazy or deformed etc., so it is custom measured per person.
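The calibration loop above can be sketched in a few lines: light each grid cell, read where the eye is, and store the pair. `read_eye_position` is a hypothetical stand-in for whatever sensor the real rig would use; the cell coordinates are illustrative.

```python
def calibrate(grid_cells, read_eye_position):
    """Return a mapping: grid cell -> measured eye position."""
    table = {}
    for cell in grid_cells:
        # In the real rig the cell lights up here and the person
        # looks at it; we just sample the (stand-in) sensor.
        table[cell] = read_eye_position(cell)
    return table

# Fake sensor for illustration: the eye position simply equals
# the cell being looked at.
cells = [(r, c) for r in range(2) for c in range(2)]
table = calibrate(cells, lambda cell: cell)
print(table[(1, 0)])  # (1, 0)
```

Run once per eye, so each eye gets its own custom table.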

Third, the 3D stereoscopic pair displayed to the person is based on the grid.
The left-eye picture is made to be on one part of the grid and the right-eye picture on another part, and then the eyes should see a stereoscopic picture.

Fourth, the grid lines up to what the eyes see; therefore, the eyes line up to the grid and the grid to 3D. It's synchronicity: the line goes one way (3D to grid to eyes), then it can go the other way (eyes to grid to 3D).

Fifth, the eyes stop moving to focus, and then when the eyes focus, the eyes see.
So there is motion, then no motion. And when there is no motion, something becomes visible in 3D.

Also, if you look at your neck motion as you do these eye motions:
- look forward into the distance
- look at a foot in front of you
- look to your side at a distance
- look to your side at a close distance.

1.) your eyes move then stop to see.

Since we made a grid to show how the eyes can see it, and measured the eye's position and matched this to the grid,
the neck is then used to measure the distance you look, so a grid needs to match the neck to a distance grid.

Imagine a grid that can be lit up and lets you see into the distance or close to you as it lights up,
sort of like seeing floor tiles light up.
Then they light up and your neck position is matched to the lit grid tile.

Now the neck position matches a grid tile and the eyes match a lit grid. So when the eyes move to a part of the grid, the neck moves to a part of the grid, and then the stereoscopic picture can be shown to the person's eyes.
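Combining the two measurements can be sketched as a lookup: the eye grid cell gives direction, the neck grid tile gives distance, and together they pick the stereoscopic pair to display. Every name and value below is illustrative, not from a real system.

```python
# (eye_cell, neck_tile) -> (left_image, right_image), illustrative data.
stereo_pairs = {
    ((0, 0), "near"): ("L_near_00", "R_near_00"),
    ((0, 0), "far"):  ("L_far_00",  "R_far_00"),
}

def pick_pair(eye_cell, neck_tile):
    """Look up the stereo pair for a direction + distance combination."""
    return stereo_pairs[(eye_cell, neck_tile)]

print(pick_pair((0, 0), "far"))  # ('L_far_00', 'R_far_00')
```

The same eye cell maps to different pairs depending on the neck tile, which is the point of tracking both.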

Sixth, in the kingpin technology, the weight resets the button after the button is released.
This resetting of the button can be when the stereoscopic picture is shown to the eyes.
- The eyes move: the buttons are pressed in and match a part of the grid for the neck and eyes.
- The eyes stop: the buttons are reset and the eyes see a stereoscopic picture.

Seventh, the way to measure neck motion is to use the difference between the dynamic chin motion and a static position on the neck.
So something static is on the neck, and the chin shows the difference as it moves around. This difference is visible on the lit grid: as the chin moves, the eyes are looking at a certain spot on the grid.

To measure the eyes you need 4 sensors at the four spots the grid is looked at: to the sides of the eye and at the top and bottom of the eye.
Then the grid is lit up, and sensor 1 measures that the eye is at position x,y, sensor 2 shows the eye is at position x,y, and so on for all four sensors.

The neck sensor provides neck tracking and the four sensors around each eye provide eye tracking, but both are used for eye tracking here.
The neck shows the distance being looked at, and the eye sensors show the part of the grid the eyes are looking at.
In my eye tracking idea, I used the kingpin technology.

First I took the grid; that is the "button" in the kingpin technology.
I first matched the eye's range of motion to the grid's dimensions, then had the eye look at the grid lighting up, and this gave the eye's position relative to that lit grid.

Then I showed the stereoscopic picture to the eye based on what part of the grid the eye was looking at.

That was the eye tracking section.
Second was the neck tracking, which let me judge the depth the person was looking at. It used the same grid idea, but the grid looked different from the previous one, so that what it showed could let me judge the distance the person was looking at.

That's about the skinny on what the eye tracking idea was.

In hand tracking, I need two parts:
- the hand motion when the arm is bent and holds the hand as close as it can to the armpit.
This is the same as the eye tracking in the eye tracking idea.
- the arm moving out so the hand is distanced from the armpit it was held close to.
This is the same as the neck tracking in the eye tracking idea.

I found that if you hold your hand by your armpit with the palm facing downwards, so the thumb is beside the chin,
and you wiggle a single finger, starting with your thumb and moving to your pinky,
then the finger next to the wiggling finger, on the side closer to the pinky, will wiggle slightly, like the two are trying to rub together.
And if you wiggle the finger a lot, then the finger beside it that wiggles slightly will wiggle more too.

This shows the wiggling finger is copied by the finger beside it.

Conversely, if you start with the pinky, move toward the thumb, and wiggle one finger at a time,
then the finger beside the wiggling finger, on the side closest to the thumb, will wiggle.

So if you're counting the fingers inwards to outwards, thumb to pinky, the copycat finger will be the finger closer to the pinky.
Or if you're counting the fingers outwards to inwards, pinky to thumb, the copycat finger is the finger closer to the thumb.
This is because you're feeling where you're moving to; it's a compelling feeling that you put your hand where your hand is moving.
This is visible when you imagine an object to your right and then try to touch it while watching how your fingers bend; then try again with the object to your left instead. Your fingers move differently; it's a touch association in the mind, I think.

The grid in this case is the wiggling finger; the eye, in the case of hand tracking, is the copycat finger.
So the grid lights up, then the eye follows the grid, and this has a value: grid lit up by main wiggling finger = copycat finger position.

If you hold your hand below your elbow and then bend your forearm upwards, the amount you bend your forearm affects the amount you bend the arm bone attached to the shoulder, just the same as with the copycat finger.

The forearm is the main wiggling finger, and the arm bone attached to the shoulder is the copycat bone that copies the forearm.
So if you map the amount the copycat bone moves, it moves because the forearm moved.
So the forearm can be the grid, and the copycat bone inside the shoulder joint is what follows the forearm, like the eye, if you see the parallel between eye tracking and this.

So when the arm is held next to the armpit, the forearm is held still against the ribs, so the arm bone attached to the shoulder is still.
Then the copycat fingers are made to equal the main wiggling finger.
Then, using eye-tracking terminology, the forearm moves and the bone attached to the shoulder is made to equal the forearm, where the forearm is the grid that lights up and the bone attached to the shoulder is the eye.

Now that the grid and the tracking of the grid are understood, you just need to figure out how to get the grid to mean something in software, then match the copycat to the grid in software.

This would then let you have hand tracking in virtual reality. This was only theory, not actual implementation. For implementation, I can't say what can read the bones of the hand and forearm to make the grid, then what can read the copycat bones in the arm and hand. Maybe colors on the arms that then show how the fingers will wiggle?

If the arm moves outwards, then the grid finger and copycat finger are understood in SW.
To be honest, I got a bit lost when I was explaining it, so the typing was a bit off, and the words rambled on a bit as I refined what I had understood.

But today is a new day, and I have a better understanding from my previous attempt, and now I will try to detail it again, but more precisely.

Hold your left hand close to your left armpit; the same for the right, so the right hand is close to your right armpit.
This is the starting position to get hand measurements for tracking.
________
The fingers section

Now sway your left hand to the left, and as you sway it, curl your left middle finger in a tapping motion.
You will see the left ring finger wiggle more than the index finger.

Now sway the left hand to the right while curling your middle finger so it is tapping up and down, and as you do, watch how your left index finger wiggles compared to the way it wiggled before.

The left index finger wiggles differently based on which direction you sway your hand.

This is associated with touch in the mind.
That is, if you're going to touch something, your fingers move based on the direction the hand is moving.

If you move your hand up, the fingers will curl one way; if you move your hand down, your fingers will curl a different way than when the hand was moving up.

The deductive conclusion to this phenomenon is:
the direction the hand moves will show how the fingers will typically curl.

Not only that, but wiggling one finger gets one other finger to wiggle as well.

And this copycat wiggling finger is directly beside the wiggling finger.

And if the wiggling finger has one finger on either side of it, the copycat finger is on the side the hand is moving to.

And if the middle finger moves far up and down, the copycat finger moves far up and down too.

I think the copycat finger is understanding what the wiggling finger is doing.
For instance, it's like they are talking to each other: the wiggling finger is talking and the copycat finger is listening.
__________

The arms section

Now the humerus bone is connected to the forearm bones and shoulder.

If the forearm moved the hand close to the armpit, then while holding the humerus in the same position you can swing the forearm around freely: up, down, side to side.

Now hold the left arm straight out in front of you and put your right palm below the left arm's elbow, then bend your forearm upwards slowly.
Then lower the forearm back down, gently.
Then lift the forearm upwards fast, in a jerking motion.
Chances are, the hand below the elbow moved less when the arm was bent slowly than when the arm moved up fast.

That is the principle that shows that the way the forearm moves is how the elbow, and thus the humerus, relates to the forearm.
Thus the forearm is what comes before the humerus in terms of motion.
______________

Conclusion

The arm has the humerus follow the forearm.
The hand has a copycat finger that's based on the direction of the hand's motion.

So, if you can get the motion of the hand known, you can figure out which finger is the copycat finger.

And the forearm moves the hand.

Therefore, if the forearm moves, the hand moves, the copycat finger moves.
And how the forearm moves is how the humerus moves.

So if the forearm and hand and fingers move one way, the humerus follows this logically.

e.g. the forearm pushes outwards, the hand extends along the length of the forearm, and the fingers point straight on the hand; what does the humerus do?
The humerus moves the forearm outwards, is what.

The hierarchy of arm motion is thus:
forearm moves
hand moves
finger moves
copycat finger moves
humerus moves
= arm and thus hand motion.
Based on what I said before, for hand tracking you must track the forearm first, the hand motion second, the humerus third, in that order.
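The parent/child ordering above can be sketched as a tiny scene graph: moving the forearm (the parent) carries the hand and humerus (the children) along. The class, the names, and the offsets are all illustrative assumptions, not a real rig.

```python
class Node:
    """A minimal scene-graph node: a name plus an offset from its parent."""
    def __init__(self, name, offset=(0.0, 0.0)):
        self.name, self.offset, self.children = name, offset, []

    def world_pos(self, parent_pos=(0.0, 0.0)):
        # World position = parent's world position + this node's offset.
        return (parent_pos[0] + self.offset[0],
                parent_pos[1] + self.offset[1])

forearm = Node("forearm", offset=(1.0, 0.0))   # the parent
hand = Node("hand", offset=(0.5, 0.0))          # child of forearm
humerus = Node("humerus", offset=(-0.5, 0.0))   # child of forearm
forearm.children = [hand, humerus]

fp = forearm.world_pos()
print([c.world_pos(fp) for c in forearm.children])  # [(1.5, 0.0), (0.5, 0.0)]
```

If the forearm's offset changes, both children's world positions change with it, which is the hierarchy the text describes.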

Now, in the hierarchy, the humerus and hand are both children to the forearm parent.

The hand is connected to the forearm and follows the forearm's initiation of movement.
And the humerus is connected to the forearm and follows the forearm's initiation of movement.

So you put the sensor onto the forearm to get the parent motion, then you put sensors on the humerus and hand to get the children's positions.

And if you're fancy, you put sensors on the fingers,
and get the primary wiggling finger's position,
then the secondary wiggling finger's position.
These would be children to the hand position parent in the hierarchy.

But for now finger position is too fancy; just strap three sensors to the arm, one on the humerus, one on the forearm, one on the hand, and now you have arms in virtual reality, not just floating hands,

which look weird.
I'm not doing any kinematics and I don't know what a potentiometer is.

Wouldn't they need to measure the shoulder position? Not necessarily.
The reason is the arms are in dynamic motion, so they must relate to a static position.

Then from the way they change their position compared to the static position, they show their position.

Look at the gametrak;

The base stays on the ground, then the person moves the arms, and this movement translates to in-game movement.
This is what I mean by there needing to be a static compared to a dynamic.
The static in the gametrak is the base; the dynamic is the cords moving because they are tied to the hands' gloves.

I thought about doing something like the gametrak and tying the base to the neck using a collar. Then the wires are tied to the chin via a mask covering the chin, and when the head moves this translates to the gametrak sitting on the neck and creates head tracking.

I said this to show the gametrak is tied to the neck and stays still.

If 1 wireless controller is on the humerus,
1 wireless controller on the forearm,
1 wireless controller on the hand,
then the controllers change their position relative to the base on the neck,
and the effect of this is like the gametrak: the wireless controllers on the arm act as the moving cords of the gametrak.

Now the head is found by the neck collar, and the arm and hand positions are found by the neck collar.
The head uses wires, so it's just a gametrak with the head acting as the hands that move around.
And the wireless controllers show how they change position compared to the still neck collar, and this shows a point where the arms originate, and the movement of the arms.
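The static-vs-dynamic idea boils down to a subtraction: each sensor reports a position, and what the software actually uses is that position relative to the still neck collar. The coordinates below are made-up values for illustration.

```python
def relative_to_collar(sensor_pos, collar_pos):
    """Express a dynamic sensor position relative to the static collar."""
    return tuple(s - c for s, c in zip(sensor_pos, collar_pos))

collar = (0.0, 1.5, 0.0)    # the static reference on the neck
wrist = (0.5, 1.0, 0.25)    # a dynamic sensor on the arm
print(relative_to_collar(wrist, collar))  # (0.5, -0.5, 0.25)
```

However the person moves around the room, the arm pose relative to the collar stays meaningful, which is the point of the static reference.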

I think that about covers the idea.
"So your idea uses a form of absolute position tracking, such as the Razer Hydra, for all of the points you mention?"

Yes, I think the Sixense STEM product would work well here; you only need four sensors and the base on the neck.

I say four sensors; the sensor in the shape of a collar tied around the neck touches the top of the humerus bone in the shoulder area.

So the first sensor is on the elbow, and this measures the distance from the top of the humerus touching the neck collar to the elbow.

The second sensor is on the wrist; this measures the length of the forearm.

The sensors on the fingers would come next, but that's too expensive: 10 sensors, 1 per finger.

Then the sensor on the wrist and the sensor on the elbow give a length dimension to the forearm, and the forearm in the hierarchy is the parent.
The hierarchy children would be the wrist to the hand, and the elbow to the top of the humerus.

The wrist to the hand means letting the wrist sensor be the parent to the sensors on the fingers.
But again, there are no sensors on the fingers because ten sensors would be too expensive.

Right now the Sixense can use five sensors or controllers, I think.

So one controller on the head, measured against the collar on the neck; the effect is something like the gametrak I showed previously.

And four controllers for the arms, one per elbow and one per wrist.

Then you have the top of the humerus found by touching the neck collar, and this lets the arms move from this spot.

So the neck collar to mark the head tracking and arm tracking is important.

I said the top of the humerus is what the arm is tied to as it moves, but doesn't this make the top of the humerus the parent? No, because this is the static position, the neck I mean.
By the forearm being the parent, the hands and humerus are tied to a moving part.

So that's how the Sixense STEM can be used for hand and head tracking in VR.

As far as having one STEM controller somehow show the whole arm using "inverse kinematics", I don't know anything about that. But it would be cool to see in action.
First you should read my "arm tracking technology" thread to see what basic idea I'm using here.

Basically, the idea is there is one spot on the body that has a sensor and receiver, and the sensor stays in one spot. This is like lighting up one spot on the ground in a dark room with one flashlight that never moves.

This sensor and receiver are on the neck, like a dog collar or oxen collar, and are held on the neck so they don't shift around when the person moves.
The sensor and receiver are right next to each other, so when the person bends down the flashlight doesn't move, to use the flashlight-in-a-dark-room analogy.

Now the arm technology shows that from sensors you build up a hierarchy that shows where the bones are. So you have a sensor on the elbow and a sensor on the wrist, and these show where the forearm is, and the sensor on the elbow shows the humerus. Then the sensors on the elbow and wrist report to the sensor on the neck, and the SW sees these sensors and estimates the position of the forearm and humerus bones; then the SW shows the forearm and humerus bones moving when the sensors move.
That is arm tracking technology.

Hand and finger tracking technology is the same idea: you need a static position to reference the dynamic positions. The static position is the still flashlight in a dark room; the dynamic positions are the moving flashlights.

The moving flashlights are the sensors on the arm, the ones on the elbow and wrist; the still flashlight is the one on the neck.
So the neck is used to give the still flashlight, but there is:
1 sensor on the wrist = 1 sensor
1 sensor on each knuckle = 5 sensors
1 sensor on each bending part of each finger that is not the knuckle = 9 sensors
= 15 sensors

These 15 sensors report to the receiver on the neck; then the SW sees the sensors on the wrist and the hand's fingers and decides where the bones are.

Come to think of it, the fingertips could use sensors too, couldn't they? Then the length of the bone with the fingernail on it could be found by the SW. So that's 5 more sensors, one per fingertip, and that would bring the total to 20 sensors per hand.
40 sensors for 2 hands, and 41 sensors total if you include the sensor on the neck.
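The sensor arithmetic above checks out; here it is spelled out:

```python
# Per-hand count: wrist + knuckles + non-knuckle finger joints + fingertips.
per_hand = 1 + 5 + 9 + 5
print(per_hand)  # 20

# Two hands plus the one sensor on the neck.
total = per_hand * 2 + 1
print(total)  # 41
```

Adding one elbow sensor per arm later brings the total to 43, as mentioned further on.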

Then the receiver on the neck gets these sensors, which have a unique ID the SW can use to find the location of the bones by seeing how far the sensors are from each other.
E.g. the wrist sensor is this far away from the knuckle sensors, the knuckle sensors are this far from the mid-finger sensors, etc.
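What the SW would do with those unique IDs can be sketched as a distance measurement between two sensor positions, which estimates the bone between them. The positions are made-up values; a real system would read them from the receiver.

```python
import math

def bone_length(pos_a, pos_b):
    """Estimate a bone's length as the distance between its two sensors."""
    return math.dist(pos_a, pos_b)

wrist = (0.0, 0.0, 0.0)
knuckle = (0.0, 0.09, 0.0)  # 9 cm away, an illustrative value
print(round(bone_length(wrist, knuckle), 2))  # 0.09
```

Repeating this for each adjacent sensor pair gives the SW the bone lengths it needs to pose the hand.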

Then, if you want to get fancy, there can be a haptic skin that gives feedback for the hand's touch sensation: in VR the hand touches something, the sensors report to the receiver, the SW decides the hand touched something, and the hand feels that thing.

Unless the hand has some kind of robotics to hold it in position when it touches a VR object, the hand will pass through that object like a ghost.

So 41 sensors to get the hands up and running with VR, and 43 sensors if you want the arms included; you would add one sensor per elbow. Now the entire hand and arm is articulated in VR. And with haptic skin on the glove you can feel the VR world too.

I thought about how to get the hand controller working without wireless sensors and decided that a gametrak method would work.

If you look at the hand when you hold your fingers out straight and stiff, then bend the fingers: if you bend the knuckles, the fingers move too.

So the knuckles are one part of the movement and the finger bones are the other parts.

So to track the hand you need to track the knuckles separately from the finger bones.

By tying one end of a string to a finger's bending point and the other end to a gametrak joystick, you can translate the finger's motion to the gametrak joystick.

The gametrak joysticks are held above the fingers on an angled board, at a 45-degree angle when the fingers are held out flat and stiff. The angled board starts at the knuckles, so the movement of the knuckles doesn't move the joysticks the fingers' bending points are tied to.

Then the knuckles have a string tied to them, with the other end of the string tied to a gametrak joystick; the joystick is held over the knuckles on an angled board that starts at the wrist and goes over the back of the hand at a 45-degree angle.

Then you input the gametrak joysticks into SW and read them in VR. This is inexpensive and has no lag.
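Reading a joystick in SW and mapping its deflection to a finger bend could look like the sketch below. The linear mapping and the ranges are assumptions for illustration, not the real device's behavior.

```python
def deflection_to_bend(deflection, max_deflection=1.0, max_bend_deg=90.0):
    """Map a joystick deflection in [0, max_deflection] to a bend angle.

    Clamps out-of-range readings so a yanked string can't produce an
    impossible finger pose.
    """
    d = max(0.0, min(deflection, max_deflection))
    return (d / max_deflection) * max_bend_deg

print(deflection_to_bend(0.5))  # 45.0
```

One such mapping per joystick (finger bends, knuckles, wrist roll) would give the VR layer the angles it needs.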

Then, to find the roll of the hand, you first find what part of the hand and arm makes the hand roll. If your hand is flat so the fingers are all on the same horizontal plane, and then you tilt the hand so the thumb is on a different horizontal plane than the pinky, that is rolling the hand.

When you roll the hand you move the forearm but not the humerus bone.

So say you had two rods joined at an angle joint, and the angle joint was where the elbow is; one of the rods is tied to the humerus bone and the other rod sits in one spot above the forearm.
When the forearm twists, the rod above the forearm doesn't twist.

Then you can tie one end of a string to the wrist and the other end to the tip of the rod above the forearm. On the tip of the rod above the forearm is a joystick, and the string is tied to this.
So when the wrist turns, the joystick moves.

Now you can track the turn of the wrist, the movement of the knuckles, and the bend of the fingers, all mechanically, using proven technology.

The joysticks then input into SW, and this goes into VR.

And most importantly, the string on the fingers is lightweight and unobtrusive since it's just tied to the fingers with something like a ring.
Edited by forsureman - 9/4/13 at 3:01pm