
When small interaxials just don't cut it! - Page 36

post #1051 of 1087
http://store.sony.com/webapp/wcs/stores/servlet/ProductDisplay?catalogId=10551&storeId=10151&langId=-1&partNumber=NEXVG30#specifications

For wide stereobase shooting and high quality lenses plus pro audio accessories, this could be the next greatest camcorder body. It has the Sony A/V Remote connection to hook up a Lanc Shepherd sync controller and IOS lenses or fisheye wide angle. Imagine two of these in your kit with a complement of lenses.
post #1052 of 1087
Quote:
Originally Posted by Powerplay4 View Post

Quote:
Do you use a controller to sync the two Sonys?
For now I don't use any device to control the two cameras.
Quote:
Originally Posted by Powerplay4 View Post

I made another video in 3D using two Sony HDR-CX130. This time inside and outside the car using a 30cm rig (28cm stereo base). ... http://www.youtube.com/watch?v=jMeUQhu13FM

I downloaded the 1.3GB 1080p version from YouTube and watched the first 20 minutes. To me this was a "proof of concept" video, or a "technical trial".

It proves that stereoscopic 3D with a wide stereo base is feasible from a moving car. Much of the video was taken in a 70 kph zone. The wide stereo base made this seem faster than real life (for my eyes anyway). For my eyes it enhanced the strength of the 3D effect, without seeming too exaggerated.

Watching without 3D glasses, I could see continual minor horizontal and vertical jitter in the separation between the camera images (e.g. of line markings on the road). With 3D glasses on, my vision fused the Left and Right views without difficulty and I was unaware of the slight jitters. My vision was also able to unconsciously correct for the vertical misalignment in the opening section, and at around 9m28s (via rápida Portão). The human brain is quite powerful in its ability to fuse Left and Right!

All of the titles worked ok for my eyes, except for the title for Champagnay (13m 14s), where I found I could not converge on the title at all, as the image of the road required a very different convergence and distracted my eyes.

For some unknown reason the 70 kph signs would become unfused for my vision for an instant just as the cameras passed each sign. And yet the signs are perfectly aligned in the Left and Right images when examined frame by frame!

This was a good test of camera synchronisation. YouTube provided 29.97 fps progressive. The distance the car travelled per frame was significant. Yet the alignment between Left and Right was out by only a small fraction of the original 60fps frames. Here is an animated anaglyph version of frames 11327 and 11328 (at around 6min 17sec) of Milton's video:

70kphFrameComparisonAnaglyph.gif 209k .gif file
Note: it may be necessary to save this file for viewing in a web browser to see the animation.

The red and cyan images of the road lines immediately in front of the vehicle are pretty closely in alignment (in the direction of travel).
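To put that per-frame travel into numbers (my own back-of-envelope arithmetic, not figures measured from the video): at the posted 70 kph the car covers roughly two thirds of a metre in each 29.97fps frame, so even a residual sync error of a few milliseconds corresponds to a measurable shift in the scene.

```python
# Back-of-envelope arithmetic (illustrative, not measured from the video):
# how far a car at 70 km/h moves per frame, and per residual L/R sync error.
speed_kph = 70.0
speed_m_per_s = speed_kph * 1000 / 3600          # ~19.4 m/s

for fps in (29.97, 59.94):
    print(f"{fps} fps: {100 * speed_m_per_s / fps:.1f} cm of travel per frame")

for err_ms in (8.3, 4.2):                        # 1/120 s and 1/240 s
    print(f"{err_ms} ms sync error: {100 * speed_m_per_s * err_ms / 1000:.1f} cm of spurious offset")
```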


Milton,
did you use anything to assist with obtaining a Left Right synchronisation better than 1/120th second? (Or was that just good luck?) Did you have to discard any video that was too far out in its timing?

Cheers
post #1053 of 1087
Quote:
Originally Posted by MLXXX View Post

Milton,
did you use anything to assist with obtaining a Left Right synchronisation better than 1/120th second? (Or was that just good luck?) Did you have to discard any video that was too far out in its timing?
Cheers

Jeff, thanks for the comments.

I always record in 1080p at 60fps, and I record a sound signal (snapping my fingers) at the start of the recording. This helps a lot with time synchronization in Vegas, as a one-frame difference is at most 16ms. It greatly facilitates synchronizing the left and right images. Even when recording for a long period of time, e.g. 15 minutes, I also record a sound signal for synchronizing at the end of the video. This helps to verify that the end of the video is also synchronized. I have not recorded any video longer than 15 minutes, so I do not know if there would be some delay problem at the end. In the video I made, I did not have to discard any footage for this kind of problem, because it did not occur.
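For anyone wanting to automate that finger-snap alignment rather than doing it by ear, here is a minimal sketch (not Milton's actual workflow) that estimates the Left/Right offset by cross-correlating the audio tracks. It assumes mono WAV files have already been extracted from each camera; the file names are placeholders and the sign convention should be checked against a known case.

```python
# Sketch: estimate the L/R offset from a shared finger snap by cross-correlating
# the audio tracks. Assumes WAVs already extracted from each camera
# (hypothetical file names); not Milton's actual workflow.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_l, left = wavfile.read("left.wav")
rate_r, right = wavfile.read("right.wav")
assert rate_l == rate_r
if left.ndim > 1:                       # fold stereo tracks down to one channel
    left, right = left[:, 0], right[:, 0]

n = rate_l * 5                          # analyse the first 5 seconds (the snap region)
l = left[:n].astype(np.float64)
r = right[:n].astype(np.float64)

corr = correlate(l - l.mean(), r - r.mean(), mode="full")
lag = int(np.argmax(corr)) - (len(r) - 1)   # >0 should mean the snap appears later in Left
offset_ms = 1000.0 * lag / rate_l
print(f"Estimated offset: {offset_ms:+.1f} ms "
      f"({offset_ms / (1000 / 60):+.2f} frames at 60p)")
```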

I bought two more "Car Window Suction Cup" mounts to give more stability to the cameras, but they have not yet arrived. When they arrive I will make a new video with these three accessories, in the same way I made this one, to see if there is an improvement in stability.

It's a shame that YouTube converts all videos to 30fps. The MP4 video at 60fps is a little better in fluidity. I tried to upload the 60fps MP4 to Netload, but they blocked the file. I do not understand why that happened.

I'm also buying two Sony RM-AV2 remotes and will interconnect them so they operate as a single remote control. In this topic http://3dvision-blog.com/forum/viewtopic.php?f=8&t=807 there is an explanation of how to do this. I live in Brazil and it is very expensive to import a LANC Shepherd because of the taxes here.

Thanks,
Milton
post #1054 of 1087
Milton- You and I work in a very similar manner and I can confirm all your experience. The technique works not just for video camcorders but also for DSLR video, like my NEX 5n pair.
Also, I have found the claims of the two cameras going out of sync over reasonably short times to be without basis. Like you, I use a sound sync both at the beginning and at the end, and so far there has never been any loss of sync; my longest recorded runs have been 33 minutes. It's just not a problem. Some here have gone to great lengths to prove it is a problem, but since I have never seen the sync go out by even one frame, I have to believe it is a fabricated issue based on imagined theory. If the problem ever shows up, I will worry about it then.

The link to the pairing of the remotes is a very clever idea and one I hadn't considered before. Rather than using a single control to feed to two camcorders, this guy uses two remotes and simply pairs the switch contacts. Very nice idea and I think this could work without the use of buffer circuitry I was trying to design at one time. Even with Lanc Shepherd one has to use sound sync so this approach could be a poor man's Lanc Shepherd. I also like the idea that the zoom will have two speeds, something that the Lanc Shepherd doesn't have. I don't know what the cost of a pair of the Sony Remotes will be in Brazil but here, that dual controller could be fabricated for less than $100. Thank you for posting that link.
post #1055 of 1087
Don, thanks for the comments. The Sony RM-AV2 is not sold in Brazil, so I'm buying via eBay from the U.S. It costs $40 plus $10 shipping to Brazil, so the pair will cost $100. I don't believe I will pay import tax, because I bought the two units in different weeks; up to $50 you do not pay tax. Above that, the tax is more than 60% of the value of the product plus shipping.

There is this other link with a similar solution to interconnect two Sony remote controls http://ledametrix.com/remote/index.html.

Milton
Edited by Powerplay4 - 9/18/12 at 2:52pm
post #1056 of 1087
Milton,
I've looked again at your video, carefully. The timing alignment between Left and Right is consistently very good, certainly better than 1/120th sec (8.3 ms).

I think that, by pure chance, achievable alignment (in the editor) could be expected to be within 1/240th second or better, for 50% of camera starts, for 60p asynchronous capture.
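To back that estimate with a quick, entirely illustrative calculation: the start offset of two free-running 60p cameras, taken modulo one frame, is effectively uniform over 0 to 16.7 ms; after shifting to the nearest frame in the editor the residual is uniform over 0 to 8.3 ms, so about half of all starts should land within 1/240th second. A tiny Monte Carlo sketch agrees:

```python
# Monte Carlo check of the "50% of starts within 1/240 s" estimate for
# asynchronous 60p capture aligned to the nearest frame in the editor.
import random

frame = 1 / 60            # seconds per 60p frame
trials = 100_000
good = 0
for _ in range(trials):
    raw = random.uniform(0, frame)       # start offset, uniform over one frame
    residual = min(raw, frame - raw)     # after nearest-frame alignment
    if residual <= frame / 4:            # i.e. within 1/240 s
        good += 1
print(f"{100 * good / trials:.1f}% of starts land within 1/240 s")   # ~50%
```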
Quote:
Originally Posted by Don Landis View Post

but since I have never seen the sync go out by 1 frame, I just have to believe it is a fabricated issue based on imagined theory. When the problem ever shows up, I will worry over it then
Don,
your comments apply well in the context of video captures where nearest-frame alignment (or nearest half-frame) is good enough, e.g. for slow moving scenes with slow pans. For such a video, a fairly obvious corrective treatment, if there were ever a significant accumulation of drift over a very long duration clip (e.g. an hour), would be as follows (a rough sketch follows the list):
  • If Right slowly gained on Left, a frame could be dropped in the Left clip at the point where Right had drifted more than half a frame ahead.
  • If Left slowly gained on Right, a frame could be dropped from the Right clip at the point where Left had drifted more than half a frame ahead.
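Here is a rough sketch of that correction, using an assumed drift rate (the 25 ms per hour figure is purely illustrative, not a measured value for any real camera pair); it simply lists the points on the timeline where the faster eye has pulled more than half a frame ahead and a frame should be dropped from the slower clip.

```python
# Sketch of the frame-drop correction described above. The drift rate is an
# assumption for illustration (25 ms gained over a one-hour clip), not a
# measured figure.
fps = 60.0
frame = 1.0 / fps                       # ~16.7 ms
clip_minutes = 60
drift_per_minute = 0.025 / 60           # seconds gained by the faster eye per minute (assumed)

drift = 0.0
drop_points = []
for minute in range(1, clip_minutes + 1):
    drift += drift_per_minute
    while drift > frame / 2:            # more than half a frame ahead
        drop_points.append(minute)      # drop one frame from the slower clip around here
        drift -= frame                  # the drop removes a full frame of offset

print("Drop a frame from the slower clip at ~minute(s):", drop_points)
```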

However there is video subject matter where alignment to the nearest half frame would not be good enough. This would be particularly so for a capture at 24fps which even for 2D purposes is a slow frame rate, ill-suited to rapid motion.

I illustrated the distorting effect of timing misalignment, for smooth repeated motion, at post #924:
Quote:
Originally Posted by MLXXX View Post

...
I've prepared a video to show the effects of a mismatch between Left and Right timing on apparent motion. I captured at 60i with my Sony HDR-TD10, extracted Left and Right (using the MVC to AVI converter from 3dtv.at), and with VirtualDub converted to 60p (using odd and even fields). I used VirtualDub again, with its motion interpolation filter, to arrive at 240p. The result was as if I had captured the moving orbs of the anniversary clock at 239.76fps!

I then used VirtualDub to harvest every tenth frame to get to 23.976fps (a frame "decimate" option in VirtualDub). But the point of harvest could be offset by 1, 2, ..., 10 frames, to simulate capture delays ranging from 4.17 ms (1/10th of a frame at 23.976fps) to 41.7 ms (one frame at 23.976fps). Even at 4.17 ms, an effect on the motion of the orbs is apparent for my vision. Here is a link to the YouTube:

The video lasts just under 7 minutes. The smallest mismatch shown (4.17mS), begins at 3m 15 sec.
...
In that video, the timing misalignment of 4.17 ms (1/240th second) leads to a visible discrepancy for my vision, and the misalignment of 8.34 ms (1/120th second) results in a very noticeable distortion to the motion, for my eyes. (This is with an effective frame rate of 23.976fps.)
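A rough way to see why such small timing errors show up is that for anything moving across the frame, an L/R timing error translates directly into a spurious horizontal disparity of (on-screen speed × timing error), which the eye reads as a depth error. The numbers below are purely illustrative and are not measured from the clock video:

```python
# Illustrative only (not measured from the clock video): spurious disparity
# produced by an L/R timing error, for an object crossing a 1920-pixel frame
# in an assumed 4 seconds.
width_px = 1920
crossing_seconds = 4.0                         # assumed on-screen speed
speed_px_per_s = width_px / crossing_seconds   # 480 px/s

for err_ms in (4.17, 8.34, 16.7):              # 1/240 s, 1/120 s, one 60p frame
    disparity_px = speed_px_per_s * err_ms / 1000.0
    print(f"{err_ms:5.2f} ms error -> ~{disparity_px:.1f} px of spurious disparity")
```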


I suspect that coverage of sport could be demanding on stereoscopic synchronisation, e.g. a 100 metre sprint captured at 60fps.

Another relevant factor is the type of viewing device. Most people seem to use shutter glasses operating at 120Hz (or for PAL region material, 100Hz). Shutter glasses introduce a phase discrepancy between the presentation of the Left and Right views. Conceivably that might compound an already noticeable phase discrepancy in the source video - I'm not sure about this.

I personally prefer passive glasses for watching 3D sport. The motion is more solid for my eyes.
Edited by MLXXX - 9/19/12 at 9:50am
post #1057 of 1087
Jeff, we do agree, I think.

In my world of shooting 3D, I use 24p for most projects. It works well and is compatible with BluRay 3D.

In a few projects I plan to do and have done some tests, I will be using 60p. I haven't experimented much yet with how to display this at 60p.

The deciding factor between 24p and 60p is whether the subject contains lots of movement, but not super high speed movement. Recall the movement of Frank's birds feeding. For that I would prefer even higher speed cameras than 60p. They do make them, and within his budget. But I have nothing on the plate that warrants this level of speed and intraframe synchronization accuracy.

High speed like shooting a slomo of a bullet exiting the muzzle of a gun in 3D would be interesting but not anything I'm interested in shooting. I'll leave the detailed motion of squirrels and birds up to Frank. Currently my only interest in 60p projects is flowing water, falls, and fountains and fireworks. At this point the motion and synchronization at 60p is quite satisfactory.

While you have spent lots of time developing ways to measure the lack of synchronization, it doesn't seem to lead to any way to achieve something better, other than traditional video engineering like genlock. In the case of higher speed subjects, the step one must take to improve on the problem is to use genlock. Show me an alternative to genlock for locking the timing of two cameras and then I'm all ears. smile.gif As I said, to be clear, the problem doesn't exist in the scope of what we do.
post #1058 of 1087
Quote:
Originally Posted by Don Landis View Post

While you have spent lots of time developing ways to measure the lack of synchronization,
With Milton's recent video I could see the alignment was better than 1/120th second simply by inspecting selected frames by eye.

With the Anniversary clock video I created differences (I must admit that was time consuming!) and then sat back and observed how noticeable an impact the differences had when played at 23.976fps. The impact was quite noticeable.
Quote:
it doesn't seem to lead to any way to achieve something better other than the traditional video engineering like genlock. In the case of higher speed subject, the step one must take to improve on the problem is to use genlock. Show me an alternative to genlock the timing in two cameras and then I'm all ears. smile.gif As I said the problem doesn't exist in the scope of what we do to clarify.
I think there was a potential for Milton's latest video to go wrong. If he had captured at 24p and if he had been very unlucky in the "camera startup stakes", the achieved discrepancy in the editor would have been 1/48th sec (20.8 ms). Approaching intersections, the 3D appearance of the traffic passing at speed from left to right or from right to left in front of the cameras would, I would say, have been significantly interfered with.

If two cameras lack genlock, and if they do not start consistently when given simultaneous remote control commands to start, then the best method would appear to be a monitoring device (such as the Lanc Shepherd). The cameras would be restarted a few times (if necessary powering down one of the cameras between trials to reset its timing) until fate provided better than a desired maximum discrepancy of 1/4 frame (or even 1/8th frame), depending on how critical the application was. Another option, if multiple takes are feasible for a short critical scene, would be to do 3 or 4 takes without any measuring device. At least one of the takes would be likely to have better than 1/4 frame misalignment!
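That last suggestion holds up statistically, on the same assumption as before (the residual offset after nearest-frame alignment being uniform over 0 to half a frame, so each take independently has about a 50% chance of landing within 1/4 frame):

```python
# Chance that at least one of N independent takes lands within 1/4 frame,
# assuming each take's residual offset (after nearest-frame alignment) is
# uniform over 0..1/2 frame, i.e. a 50% chance per take.
p_single = 0.5
for takes in (1, 2, 3, 4):
    p_any = 1 - (1 - p_single) ** takes
    print(f"{takes} take(s): {100 * p_any:.1f}% chance of a better-than-1/4-frame take")
```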

_________


Does anyone happen to have an answer for the following?

I'd be interested to know what happens synchronisation-wise when two 720p webcams are simultaneously connected to a pc via USB. Is the phase difference in the delivered frames random, or would the camera drivers bring the two cameras into perfect synch? Does it depend on the camera model?
post #1059 of 1087
Quote:
Originally Posted by MLXXX View Post

If two cameras lack genlock, and if they do not start consistently when given simultaneous remote control commands to start, then the best method would appear to be a monitoring device (such as the Lanc Shepherd). The cameras would be restarted a few times (if necessary powering down one of the cameras between trials to reset its timing) until fate provided a better than a desired maximum 1/4 frame (or even 1/8th frame) discrepancy, depending on how critical the application was. Another option if multiple takes are feasible for a short critical scene would be to do 3 or 4 takes without any measuring device. At least one of the takes would be likely to have better than 1/4 frame misalignment!

The behaviour of two AVCHD consumer cameras is different. I have a pair that stays in sync very well, so the sync drift is low. BUT you have to start them up several times until the start point is in sync.
Controllers like the Lanc Shepherd or my stefra-lanc cannot sync the cameras. All they can do is start them up at nearly the same time (more precisely: send the start-up signal at the same time), and measure the sync. If the sync drift has become too large, you have to reset or restart the system.
Quote:
Originally Posted by MLXXX View Post

I'd be interested to know what happens synchronisation-wise when two 720p webcams are simultaneously connected to a pc via USB. Is the phase difference in the delivered frames random, or would the camera drivers bring the two cameras into perfect synch? Does it depend on the camera model?

With a pure USB connection nothing will happen. Why should a driver bring them into sync? That is not what the drivers do at all.
post #1060 of 1087
Quote:
Originally Posted by Wolfgang S. View Post

Quote:
Originally Posted by MLXXX View Post

I'd be interested to know what happens synchronisation-wise when two 720p webcams are simultaneously connected to a pc via USB. Is the phase difference in the delivered frames random, or would the camera drivers bring the two cameras into perfect synch? Does it depend on the camera model?
With a pure USB connection nothing will happen. Why should a driver bring them into sync? That is not what the drivers do at all.

I note it is feasible to run more than one webcam at a time. There are references on the net to achieving that by running two instances of certain capture software.

It's some years since I purchased a webcam. I have no experience with recent models. I recall that my old webcams could be controlled by the pc to deliver different resolutions. (Lower resolutions were capable of higher frame rates.)


I know nothing of the detail of how webcam drivers typically operate. However it occurred to me that if a webcam operates by filling a video buffer and the driver operates by sending a frame by frame signal requesting transfer of one frame of buffer contents, there is a possibility the webcam capture timing would become indirectly synched to those frame requests.

I would be much happier purchasing two webcams for use with high interaxial separation if I knew they could be brought into synch in some way by the pc.
Edited by MLXXX - 9/21/12 at 7:47am
post #1061 of 1087
Jeff- your webcam is usually connected by USB to the PC. The driver software is just a video display software. You would also need some sort of capture software to save the video to a streaming file, typically AVI file. Each webcam would need to be running these programs and saving the avi file. Then the two avi files would need to be aligned in some video editing software that can pair the clips for 3D. That's one way.

Another way is to use webcams that output real video and are connected to the PC via a video capture card, but in this case you feed both camera's video output to V1 and V2 of a frame store synchronizer. The output of the frame store can then be fed to the PC for capture and pairing. You'd still need two capture cards capable of capturing at the same time. This is what Frank did with a special device he bought that does this, although he did not use webcams as his asynchronous sources.

Neither of these is easy, maybe not even possible with current computer speeds. Frank's system, IIRC, only displays the live feed. For saving the files, he still does that in each camera's recorder and pairs them later. I don't recall he bought a remote dual video recorder for that live feed system.
post #1062 of 1087
Thanks Don. I understand that using USB for two webcams can be tricky at higher frame rates, depending on the pc.

An example of simultaneous capture software can be found at http://www.3dtv.at/products/multiplexer/index_en.aspx

What I haven't been able to find anywhere is a reference to the timing alignment (or lack thereof) that can be expected when capturing the video from two webcams, on one pc.
post #1063 of 1087
Thread Starter 
Quote:
Originally Posted by Don Landis View Post

Jeff- your webcam is usually connected by USB to the PC. The driver software is just a video display software. You would also need some sort of capture software to save the video to a streaming file, typically AVI file. Each webcam would need to be running these programs and saving the avi file. Then the two avi files would need to be aligned in some video editing software that can pair the clips for 3D. That's one way.
Another way is to use webcams that output real video and are connected to the PC via a video capture card, but in this case you feed both camera's video output to V1 and V2 of a frame store synchronizer. The output of the frame store can then be fed to the PC for capture and pairing. You'd still need two capture cards capable of capturing at the same time. This is what Frank did with a special device he bought that does this, although he did not use webcams as his asynchronous sources.
Neither of these is easy, maybe not even possible with current computer speeds. Frank's system, IIRC, only displays the live feed. For saving the files, he still does that in each camera's recorder and pairs them later. I don't recall he bought a remote dual video recorder for that live feed system.
Hi Don,
Actually my system that you referred to has 3 options for recording the live 3D video.
As you stated, I can record on the cameras themselves but also on the computer that is doing the streaming using the Avermedia software. I can also record the remote stream over the internet using VLC.
I just watched a large ship cruising through Duluth harbor in 3D while sitting on the couch in my living room. biggrin.gif
I was controlling the remote 3D cameras with my iPhone.
post #1064 of 1087
Thanks, Frank. I forgot that the 3D "Combiner" (a.k.a. 3D frame store synchronizer) that you got can combine the two cameras' HDMI outputs to a single 3D SBS half for recording.
post #1065 of 1087
I had some encouraging results today with two Logitech HD Pro C920 webcams. The C920 comes with a tripod thread socket, or can hang over the top of an LCD pc monitor and be slid from left to right for best positioning. The minimum interaxial distance (mounting the webcams in their usual horizontal orientation) is 3.7" (94mm). Here's what they look like sitting on top of a notepad pc:




The notebook pc wasn't fast enough for smooth motion, but I'm hoping it might be fast enough to capture some very slow frame rate shots for later playback at a normal frame rate, as a special effect.

Using a desktop pc (64 bit i5, 3.1GHz), I obtained 30.00fps at (1280 + 1280) x 720, for concurrently capturing, displaying and saving. (Software used was Stereoscopic Multiplexer. The codec I used for saving was Xvid, set for "real time". The camera feature "RightLight" was disabled.)

This is wide interaxial on the cheap! The webcams can be obtained for less than $100 each, and the Stereoscopic Multiplexer software can be used initially as a trial. The image detail is a little soft. Perhaps by next weekend I'll have some suitable sample shots to upload.


Now the question that had been of great concern and interest to the writer. Did the essentially asynchronous webcams for some reason operate in phase with each other? Well, with the Stereoscopic Multiplexer application manually set for 24fps, no they did not. Synch was erratic. But allowed to operate at the webcams' default of 30.00fps, a good standard of timing alignment was obtained for each and every recording! What a pleasant surprise. Disclaimer: Other pcs may behave differently!

I was able to step through the recorded 2560x720p30 side by side frames using VirtualDub and confirm that hand waving or other fast movements were synchronised to within a small fraction of a frame.

And playing the video at full speed in 3D, the revolving orbs of the anniversary clock looked very similar clockwise and anti-clockwise (though even with the webcams at the minimum interaxial separation, the clock seemed to have more depth than in reality).

Webcams are no substitute for dedicated video cameras but they might allow someone like me to take some interesting wide interaxial still photos, and videos, at moderate cost.
Edited by MLXXX - 9/23/12 at 3:47pm
post #1066 of 1087
Quote:
The minimum interaxial distance (mounting the webcams in their usual horizontal orientation) is 3.7" (94mm).

Using the general 30:1 rule of thumb for a wide stereo base, your minimum distance to the closest object should be about 9 ft.
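Worked through quickly (my own arithmetic, interaxials chosen only for illustration), the 30:1 rule gives roughly these nearest-object distances:

```python
# The 30:1 rule of thumb: nearest object distance ~= 30 x interaxial separation.
# Example interaxials chosen for illustration.
RATIO = 30
for label, ia_mm in [("typical eye spacing", 65),
                     ("C920s side by side", 94),
                     ("wide bracket", 200)]:
    nearest_m = RATIO * ia_mm / 1000.0
    print(f"{label}: {ia_mm} mm IA -> nearest object ~{nearest_m:.1f} m "
          f"(~{nearest_m * 3.28:.0f} ft)")
```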
post #1067 of 1087
Yes, I've had some very unnatural stretching effects by placing objects too near; even after editing to reduce the horizontal displacement of the captured images.

I saw some fireworks this weekend at around 300m distance and the stereoscopic effect was fairly strong for my unaided vision. It made me think that not too great an inter-axial might do the trick for fireworks at that sort of distance.
_____


I've obtained some more promising results with the Logitech C920 webcams for portable use with an old notepad pc. This is not a point and shoot methodology! It requires time at the computer screen to set up and control (including the settings for each camera). But for occasional 3D high interaxial shots, it strikes me as worthwhile experimenting with.

The C920 webcam includes hardware encoders for MPEG-4 AVC and motion JPEG, taking the processing load away from the pc. Using two webcams, and graphedit.exe [1], I found that my notepad pc and my desktop pc could record MJPG streams in correct synch using the following graph I put together:



The notepad could record Left and Right MJPG at 1280x720p30 or 1920x1080p24 without dropping frames. The desktop pc could manage 1920x1080p30 very comfortably. [There is no monitoring facility with this graph. To correctly aim the cameras prior to recording, I suggest using the Stereoscopic Multiplexer software. After running the graph, the saved files -Left.asf and -Right.asf can be renamed so that the next running of the graph doesn't overwrite them. It is likely that the asf files will be out of time alignment by a small whole number of frames. This will be noticeable if playing using Stereoscopic Player. Edit: see footnote [2]. Note that the audio obtained from the camera using this graph may not be of particularly good quality. The graph for the Right file shows another source of pc audio, useful if separate mikes are available.]

I found the MJPG frames showed minimal encoding artifacts. They are simply an individual jpeg image for each frame.

(I tried capturing two MPEG-4 AVC streams, using the following filter in graphedit: GDCL Mpeg-4 Multiplexor, but there were fairly noticeable encoding artifacts, and the streams were asynchronous.)
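The graphedit graph itself can't be reproduced as text here, but as a rough point of comparison the following OpenCV sketch does the same basic job (pull the MJPG stream from two UVC webcams and write each to its own file). The device indices, resolution and codec settings are assumptions, it is not the graph used above, and it offers no guarantee about the sub-frame phase behaviour discussed in this thread:

```python
# Minimal dual-webcam MJPG capture sketch using OpenCV (not the graphedit graph
# described above). Device indices, resolution and codec are assumptions, and
# nothing here guarantees sub-frame phase alignment between the two cameras.
import cv2

def open_cam(index):
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))  # ask for the hardware MJPG stream
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    cap.set(cv2.CAP_PROP_FPS, 30)
    return cap

left, right = open_cam(0), open_cam(1)
fourcc = cv2.VideoWriter_fourcc(*"MJPG")
out_l = cv2.VideoWriter("left.avi", fourcc, 30, (1280, 720))
out_r = cv2.VideoWriter("right.avi", fourcc, 30, (1280, 720))

for _ in range(30 * 10):                 # capture roughly 10 seconds
    ok_l, frame_l = left.read()
    ok_r, frame_r = right.read()
    if not (ok_l and ok_r):
        break
    out_l.write(frame_l)
    out_r.write(frame_r)

for obj in (left, right, out_l, out_r):
    obj.release()
```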

Angle bracket a bit conspicuous

For about $10, I was able to obtain some 1/4" bolts with UNC thread, washers and nuts (for the tripod socket in each camera), and a length of slotted metal angle bracket. Using different slots in the bracket I could obtain various interaxial distances for the webcam lenses, up to 24" (60cm). With the help of Stereoscopic Multiplexer to view the combined camera output, I aligned by eye. This was close enough to be usable. In fact Vegas Pro doing an automatic adjustment reported nil vertical correction required for a number of my test videos! This was ok for use on a car dashboard, but in public the bracket looked conspicuous.

I found it convenient in public simply to place the webcams on top of the notepad screen, and that way could obtain up to 7.16" (182mm) lens separation. That is about 3 times the mean adult interpupillary distance.

Now that I have the basics in place, I should be able to concentrate on getting some interesting footage with my portable notepad pc and the two webcams. So far I've noticed that even 30fps is slow for road traffic on city streets. During the day, the webcams are producing a blur-free image for each frame, so must be using a short exposure time, but the distance moved by the traffic even in 1/30th second is considerable.

____________

[1] Graphedit can be useful for these oddball tasks. One site where 64-bit and 32-bit versions can be found is: http://www.videohelp.com/tools/GraphEdit

[2] Running two instances of VirtualDub can be good for detecting the number of frames of misalignment between the Left and Right files. The delayed file can then be resaved omitting the required number of frames at the beginning, using VirtualDub set for "Direct stream copy". You should then have Left and Right files that are time aligned within a small fraction of the duration of a frame. Disclaimer: I cannot guarantee use with other pcs will result in C920 webcam electronic shutters coming into close synch with each other.

Edited by MLXXX - 9/30/12 at 7:47pm
post #1068 of 1087
Cameras: two Logitech C920 webcams
Interaxial distance: 18” (458mm), parallel, adjusted in Vegas Pro for convergence on the nearest power pole, 20 metres away
Personal computer (notepad) processor: Intel Atom N270 (running Vista)
Graphedit graph: as per my post immediately above

Procedure

The metal bracket supporting the cameras was placed in position. The camera view was checked with Stereoscopic viewer. Graphedit capture pins were set for 1280x720 at 30fps, MJPG format. Stereoscopic viewer was closed before running the graph.

The graphedit graph was run three times over a 15 minute period. After each take, the output files were renamed. For the three takes, the left file was found to lag by 3, 3, and 5 frames, respectively. This was discovered using VirtualDub and comparing the position of the train in the left and right files. The train moved a very significant distance in 1/30th second, making it fairly easy to select the correct frame. VirtualDub was used to resave the left file as a stream copy, using a frame range that omitted the required number of initial frames. Although this frame time alignment could have been done in Vegas Pro, I found it more convenient to adjust beforehand.

After pairing the Left and Right clips in Vegas Pro I used the “Stereoscopic 3D adjust” effect for automatic geometric corrections. I then set the horizontal offset for convergence on the nearest power pole. This meant that the wall on the far left of the camera view was hard to view in 3D but that seemed the best compromise. There was an exception to this. For the last clip, the horizontal offset starts and finishes with the power pole as the point of convergence but uses a nearer point as the train passes (the horizontal offset is “animated” along the time line). I did this as the train was on the nearest track to the cameras and was easier to view with a closer convergence point. This last clip was a very demanding test of camera timing phase as between Left and Right.

The MJPG files (including the muxed in audio streams) are not compact. One of the takes today required 1.9GB for the left file and 2.0GB for the right file. This take lasted three and a half minutes.
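As a rough check on what those file sizes imply for data rate (my arithmetic only, not a measured figure):

```python
# Rough data-rate check for the MJPG takes quoted above (arithmetic only).
size_gb = 1.9                      # one eye's file
duration_s = 3.5 * 60              # three and a half minutes
mbit_per_s = size_gb * 8 * 1000 / duration_s
print(f"~{mbit_per_s:.0f} Mbit/s per eye at 1280x720p30 MJPG")   # ~72 Mbit/s
```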

Conclusions

Examination of the files indicated that the webcams kept in phase for all three takes (despite the number of frames of lag of the left file varying). The methodology of extracting the MJPG stream from the two cameras using graphedit worked successfully!

I could have used a much lesser interaxial distance for the distances involved. [The decision to use the particular camera location was spur of the moment; and the afternoon light was beginning to fade. It would have been better not to have had so much foreground on the left (wall and grass).]

Conditions were a bit overcast but this YouTube video [uploaded as 29.97fps progressive 1920x1080 side by side] should give a broad idea of video quality possible with the simple webcam arrangement:

There is loss of quality with the uploading process. Here is a still frame extracted from one of the MJPG files (appears in the YouTube video as the right hand side of frame 478):
Edited by MLXXX - 10/1/12 at 2:46pm
post #1069 of 1087
These words of Don's come from the thread Vegas-pro-11-released:
Quote:
Originally Posted by Don Landis View Post

One thing I learned in class is if you have to make any adjustments to a paired camera set, then if there is a change in the shot itself where objects move, either by themselves or by moving the camera or zoom lens then you will need to make additional key frame adjustments to compensate for the geometry change. It's just the math. You may recall the classroom exercise used by Sony with the Airplane taxiing on a runway.
The above does raise an interesting question for a situation where the cameras are stationary, or panned: how many frames do you run an adjustment for?

My (intuition-based) tendency has been to do a single auto-correction with the Vegas Stereoscopic Adjust plug-in, choosing a scene with mostly distant content. I figure that should allow the auto-correct algorithm more easily to identify true disparities for vertical misalignment and rotation (due to the mounting of the cameras or slight fixed differences in their lenses), and, if necessary, for zoom.

I haven't done an exhaustive analysis, but when I go back to some of my recent high interaxial shots with fixed zoom webcams and run additional auto-corrections along the time line, I don't notice (for my own vision) any great change in the 3D effect (e.g. reduction in apparent geometrical distortion) other than, oftentimes, a change in convergence. So I think that if I were being "fussy" and doing additional auto-corrections I would also be taking note of the convergence and be ready to do a manual adjustment of it, to keep it stable or to track it to what was considered desirable. Of course it could be that my webcam setup has had relatively gross errors in alignment compared with what others have been doing. (I have remounted the webcams on location, with just a quick alignment by eye using a small monitor.) I note that convergence can become a more critical issue with high IAs if the scene includes close image content, resulting in a large horizontal separation between the raw Left and Right frames.

I can appreciate that if zooming two cameras, a new auto-correction is probably highly desirable immediately at the conclusion of the zoom, or as soon as a stable, representative, scene is available at the new zoom setting (perhaps copying and pasting the new scene stereoscopic adjustment values back to the timeline point at the end of the zoom).
Edited by MLXXX - 10/18/12 at 11:38pm
post #1070 of 1087
Jeff, I do the autocorrection at the beginning of each clip - or event, as you call that in Vegas. Or less often; it depends on the footage.

The convergence adjustment is another story, since that is not really touched by the autocorrection. Here I can do more. Or I use it in a way where I adjust the convergence to a major point in the clip (e.g. if the nearpoint changes during the take), and tend to keep it constant from the beginning to the end of the event.
post #1071 of 1087
Wolfgang, and Jeff-

The stereoscopic 3D adjust FX has three functions that aren't exactly linked to each other except they adjust 3D.

1. The first one is the horizontal adjust that is used to place the contents of a clip forward or back into the scene as referenced by the screen plane. While this can be used to improve the clarity of 3D, its main purpose was to fix the location of 2D images in the 3D stage, like putting titles on the screen in a location that does not occlude with the main 3D scenery. You can set keyframes to animate the titles in 3D space. Most of the time I leave this at zero for 3D cameras, and make use of it on all titles and mattes. The reason is my cameras do a pretty good job of deciding the depth of the image in the original clip for single cameras. For paired cameras, I test and change it if I think the scene can be improved.

2. The 2D correction of the left and right eye images so they match in pairing for the stereoscopic effect. Here, the correction's purpose is to fix the errors in left and right eye alignment. Even with what you might consider a perfect alignment of your two physical cameras, it is most likely they are not "perfect". So a correction keyframe set is necessary for good quality 3D. The problem here is that, because something is not perfectly aligned, and because of the geometry of the misalignment, we can only have one set of correction factors for one image at a given zoom on the lens. If the scene changes and/or the zoom changes (even with the zoom on both cameras in sync), the geometry still changes in many ways. If the first keyframe set in auto correct offers zero correction in every setting, then we might assume the two cameras are "perfectly" aligned for all synced zoom ranges. This would be extremely rare except with single 3D cameras manufactured that way. With our twin rigs, even with careful mechanical construction, in most cases if not all cases there is some degree of misalignment, as indicated by the corrections in this auto correct. So, if we don't zoom or change the image in the scene much, we probably only need the one set of auto correction keyframes in the clip. Where there is scene change or zooming of the lenses, then we would need a minimum of 3 sets of auto corrections: one at the clip beginning, one set at the start of the scene change or zoom, and a third at the end of the scene change or zoom. This will generate a ramp on the keyframe timeline in each setting that needs correction. It is anticipated that if the scene change is linear over time, the two bracketing sets of keyframes would be adequate. If the zoom or scene change is nonlinear then we may require additional autocorrects between the two end points of change. Scene changes that require additional auto correction would be any where objects in the scene change distance from the camera, either closer or farther away.
There is also some creative correction you can do with the check boxes. Normally you would check off both left and right and the program optimizes both against each other. Sometimes I had a zoom that didn't sync well and I ended up with two different framings of the left and right eye. Here I could do the correction individually for the left and then the right, checking off only one camera, and make a match for the zoom. It's nice to have that selection available if needed, but you really have to understand your stereoscopic construction to do the correction here.

3. The third use of stereoscopic 3D adjust is the bottom section that allows you to add a transparency matte to one of the images on the left or right side, where an object shows up in the scene that won't converge since it is too close. Here you can convert that part of the scene from 3D to 2D by masking out one of the left or right eye images for the part of the screen where it appears. You can keyframe this in and out within your clip as well. This is something I have found little use for, but it's nice to know it is there if needed.


I hope the above helps improve the understanding of these adjustments in Vegas Pro 3D and improve your efficiency in editing.
post #1072 of 1087
Quote:
Originally Posted by Wolfgang S. View Post

The convergence adjustment is another story, since that is not really touched by the autocorrection. Here I can do more. Or I use it in a way where I adjust the convergence to a major point in the clip (e.g. if the nearpoint is changed during the take) - and tend to keep it constant from the beginning and the end of the event.
Yes controlling the convergence can help in many ways, e.g. for a non-jarring cut to a different scene, or camera angle. Or to reduce eye-strain and ghosting. But generally, like you, I keep the convergence constant within an "event".

Quote:
Originally Posted by Don Landis View Post

So, if we don't zoom or change the image in the scene much, we probably only need the one set of auto correction Keyframes in the clip.
For my fixed zoom wide stereo base material I have been using only one autocorrect, but choosing a representative point on the timeline for running the autocorrect.
Quote:
Where there is scene change or zooming of the lenses, then we would need at minimum of 3 sets of auto corrections. One in the clip beginning, one set at the beginning of the start of the scene change or zoom, and a 3rd at the end of the scene change or zoom.
If there has been little change between the start of the clip and the start of the zoom/scene change, it may suffice to click the plus symbol to insert a key frame in the time line of the animated "stereoscopic 3D adjust" event effect. (This can avoid a minor, unnecessary, ramp.)

In terms of performing an autocorrect after a major scene change, I like to choose a point in the new scene that is "representative" and "stable" for launching the autocorrect function. I can then drag (or copy and paste) the new parameters back to the point where the new scene started. For example with cameras mounted on a car I may choose a point where the vehicle is waiting at traffic lights (minimal vibration) and the cameras have a clear view (e.g. no vehicles immediately in front of the cameras).
Quote:
3. The third use of 3D stereographic adjust is the bottom section that allows you to add a transparency matte to one of the images on the left or right side where an object shows up in the scene that won't converge since it is too close. Here you can convert the part of the scene from 3D to 2D by masking out one of the left or right eye images for a part of the screen where it appears. You can keyframe this in and out within your clip as well. This is something I have found little use for but it's nice to know it is there if needed.
I may experiment with that, Don. (A method I have used is the Pan/Crop Effect, to zoom into the frame, and thus avoid the edges.)

Quote:
Originally Posted by MLXXX View Post

Examination of the files indicated that the webcams kept in phase for all three takes (despite the number of frames of lag of the left file varying). The methodology of extracting the MJPG stream from the two cameras using graphedit worked successfully!
I've been trying to measure the accuracy of the phase lock. With my notepad pc, there appears sometimes to be a small discrepancy, perhaps 1/8th of a frame. I'll report on this further when I have some more precise measurements.
post #1073 of 1087
Quote:
If there has been little change between the start of the clip and the start of the zoom/scene change, it may suffice to click the plus symbol to insert a key frame in the time line of the animated "stereoscopic 3D adjust" event effect. (This can avoid a minor, unnecessary, ramp.)

True, but IIRC, you will need to do that for each of the adjustments. I simply place the cursor at the beginning of the change and then do auto correct. Normally, each line will be flat if done correctly. However, once in a while I will see a radical change between the keyframes of adjustment. This is a warning to me that I may have missed some change point. Did that make sense?
Quote:
In terms of performing an autocorrect after a major scene change, I like to choose a point in the new scene that is "representative" and "stable" for launching the autocorrect function. I can then drag (or copy and paste) the new parameters back to the point where the new scene started.

At first glance I felt this approach to be unnecessary workflow, but I will play with it to see if it really is, or is indeed a better way, in a real-world, hands-on editing session. Always eager to learn a new trick from friends here on the forum. smile.gif
post #1074 of 1087
Quote:
Quote:
Originally Posted by MLXXX View Post

Examination of the files indicated that the webcams kept in phase for all three takes (despite the number of frames of lag of the left file varying). The methodology of extracting the MJPG stream from the two cameras using graphedit worked successfully!
I've been trying to measure the accuracy of the phase lock. With my notepad pc, there appears sometimes to be a small discrepancy, perhaps 1/8th of a frame. I'll report on this further when I have some more precise measurements.
I have found material discrepancies, of up to 1/4 of a frame. I mounted the two webcams one on top of the other, and shot passing traffic on an arterial road. I measured horizontal discrepancies between the Left and Right images to the nearest pixel (or two). After each take, graphedit was dismissed and the webcams were unplugged. So each take was independent of the preceding one. Here are results I obtained one afternoon:

Take | Rate  | Discrepancy in frames
  1  | 24fps | 1.26
  2  | 24fps | 64.09
  3  | 24fps | 12.02
  4  | 30fps | 13.21
  5  | 30fps | 14.19


So after time aligning to Left and Right clips of each take to the nearest frame, the remaining discrepancies were:

Take | Fractional frame offset
  1  | 26.00%
  2  |  9.00%
  3  |  2.00%
  4  | 21.00%
  5  | 19.00%
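(For anyone following the arithmetic, the second table is just the first with whole frames removed: the residual is the fractional part of the measured discrepancy, folded so it never exceeds half a frame.)

```python
# How the second table follows from the first: take the fractional part of the
# measured discrepancy and fold it so it never exceeds half a frame.
discrepancies = {1: 1.26, 2: 64.09, 3: 12.02, 4: 13.21, 5: 14.19}
for take, frames in discrepancies.items():
    frac = frames % 1.0
    residual = min(frac, 1.0 - frac)
    print(f"Take {take}: {residual * 100:.0f}% of a frame")
```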

Whilst a better result overall than what would be likely (statistically) with fully independent asynchronous cameras, the above results (particularly for takes 1, 4 and 5) were disappointing.

In a further 15 less accurately measured tests, I found the frame timing discrepancies not to exceed around 1/4 of a frame.

In summary, the arrangement has been yielding smaller average timing mismatches than could be expected with fully independent cameras operating at 24fps or 30fps, but the discrepancies can still be high on some occasions.

Quote:
Originally Posted by Don Landis View Post

Always eager to learn a new trick from friends here on the forum. smile.gif
Stereoscopic 3D is still a developing art, with many differences of opinion as to which approaches are "useful". I have not been nearly as involved as Don in these sorts of issues but I occasionally notice something I think might be worth sharing.

I found it interesting recently to play parts of Avatar 3D and some other 3D movies in 3D but without wearing 3D glasses. I looked for which parts of the scenes were free of a double image parallax effect from overlapping Left and Right image content. I noticed that a great amount of the time, the actor speaking would be given nil parallax. So not only must the focus have been pulled to favour that actor (a well established practice), but the stereoscopic convergence must also have been adjusted for that actor (very possibly in post). I personally find this makes for easy viewing in 3D.

Opinions could vary on how much "fine tuning" of the convergence is really worthwhile, for high interaxial video.
Edited by MLXXX - 11/7/12 at 7:44am
post #1075 of 1087
This video was made with a 20cm interaxial, using two Sony HDR-CX130 camcorders.

Thank you!
Milton
post #1076 of 1087
Quote:
Originally Posted by Powerplay4 View Post

This video was made with a 20cm interaxial, using two Sony HDR-CX130 camcorders.
Thanks for this, Milton. In places some very spectacular 3D. But the price to be paid is that some of the foreground was very hard to view, for my eyes anyway.

The greens produced by the Sony HDR-CX130s looked pretty good to me; I find it amazing what relatively inexpensive camcorders can achieve these days!
post #1077 of 1087
Not much activity in the wide stereo base camp for posting but that does not mean we haven't been at work.

Here's a video I just completed.

I planned to use this setup with 3 cameras for a couple of projects last October and December, but other circumstances prevented me from doing the shoot. This time the weather was so cold that I thought the performance would be cancelled.

I shot this with the NEX5n's in 1080 60i mode so the video would be more compatible with the TD10 footage. Not sure how important that is, with Sony Vegas able to handle different video so well. I was pleased with the results. In any shoot like this the technical issues are sometimes overshadowed by location problems, people who are disrespectful or rude, the weather, as well as me just forgetting something. There is a shot of my system at the end that a local volunteered to take for me.

This is my third shoot in this location for 3D and what I would like to do in the future is expand the IA on the NEX 5n's to 20" to see how the depth improves within the water spray. The trick here is the rail along the sidewalk that gets in the way and people who move in and stand just inside my wide angle shot.
post #1078 of 1087
Hi Don, having only joined this forum recently, I seem to be finding fascinating threads that I have just missed. I bought a TD1 and found that thread after about 65 pages, then this one after 36 pages!! I've also read this one from the beginning, so I have a pretty good idea of what you, Frank, Joe, Wolfgang and others have been experimenting with. I have also been exchanging info with Wolfgang on the 3dphoto.net forum, where I have been a long term member.

My twin rig was originally two Panasonic minidv cams finger-synched on a simple aluminium rail. I experimented from scratch with that, finding out what the basics were. It soon became clear that finger-synching tape based cams was quite frustrating, so I changed to a pair of Panasonic SD700 cams. These give me a 1920x1080 HD capability and much quicker startup than tape. Synching was the next problem and, as the cams have an IR remote, I found that one remote would start both, but only from in front of the cams. I then opened one of the remotes and removed the IR led, soldered a twin lead to the led connection and soldered an IR led to the end of each lead. The remote is velcroed to the tripod panhandle so that it can easily be detached, and the leds are velcroed to the front of the slide bar in front of the cam IR receivers.

The sync is surprisingly accurate and even zoom can usually be kept useable. The sync shift between cams seems to be minimal, even on clips up to 50 minutes long. Very occasionally the cams will start half a frame out, which then becomes noticeable, but less than that I find acceptable for the level I am working at.

Like you, I prefer a wider IA than most consumer cams offer, but at the moment 12" is the limit on my slide bar. The sd700 cams have a viewfinder in addition to the screen, and with the screens folded in, the cameras will close up to about 65mm. That enables me to use the viewfinders as a 3D binocular for accurate lining up. Further apart I use a piece of screen protector on the foldouts with cross hairs marked, or the grid overlay on the camera.

I tried a successful experiment for monitoring about 18 months ago, which I intend to refine, and reading through this thread it sounds like the sort of project that Frank could do with his eyes shut. I bought a pair of cheap mini monitors for use with a car reversing cam, which are about 3"x2". They have a composite video input and my SD700 cams have a composite output, so I made a box from thick card with the monitors side by side at one end and the lenses from a Loreo light viewer at the other. With the feed from the cams going into each monitor they can be viewed in 3D the same way as a Holmes card in a viewer. The resolution of the screens is adequate for quite accurate alignment and IA spacing setup. With a rigid case, adjustable lenses and a suitable mount, it should be very simple to use, powered by a small battery pack. There may be smaller screens available, or I also thought of the possibility of wiring a couple of old viewfinders as a permanent binocular style monitor, just as I use my existing viewfinders but on a permanent mount. Either method would give a continuous and cheap 3D monitor which could be used with a big camera separation.

There are other variations that may be possible, such as using a pair of video senders and receivers to send a picture from the cams to the monitors when an IA of many metres may be required. Most video senders have 4 channels available so shouldn't be a problem.

The final monitoring suggestion is again very simple and similar to a suggestion earlier in the thread. I have a 7" screen TV that can also be used as a reversing cam monitor. Many of these, like mine have two line inputs for switching between two cameras, so this would make a very basic way of checking the alignment of a twin rig.

Sorry for coming in with such a long post,

Roger
post #1079 of 1087
Roger- for alignment of the twin cameras, I use the grid lines turned on in the cam monitors. You set this in the menus. I begin with wide angle for a rough adjustment for each camera and align the mounts so the cross hairs are on a common distant object. Then I go to full optic zoom and tweak the alignment, allowing a little horizontal for the IA of the cameras. I sync the recordings with a sound waveform or a clap board. After doing my shoots, I align the left and right files on the timeline, slide them so the audio is as close as possible (frame quantization on) and then pair. Finally I do the auto stereo calibrate with the 3D stereoscopic auto calibrate in Sony Vegas, and it makes a small correction. I verify the video looks good on the 3D monitor in post. If needed I may push or pull the center of the scene with the horizontal adjust as desired for effect. This is especially necessary when I am working with different I.A. in the same project. With this workflow I find I really don't need to use the field 3D monitor, and in some cases, especially after shooting for a long day, my eyes get tired, so I even switch the 3D auto stereo off and just allow the camcorder to do its thing.

There is a really nice 7" 3D auto stereo (parallax barrier) field monitor made by Marshall, if you have deep pockets. Last time I priced it, it was $7000. It will work directly with an HDMI cable from a Sony TD10, and with a 3D stereo frame store combiner, like the one Frank experimented with some time ago, you can connect two 2D cameras and combine the HDMI feeds to view frame-packed 3D. The monitor is also equipped with some nice broadcast diagnostic scopes.
Also there was a company that made a 3D twin monitor setup in a suitcase that used the Loreo 3D light viewer. Possibly you have seen that.
There are so many ways to solve the problems, my trouble with most of them is they take up too much space and cost too much money for my field shooting kits. Often I'm hiking and rock climbing and have to carry everything through crowds or rough terrain.
post #1080 of 1087
Don, your working method is virtually identical to mine, as I also like to keep things lightweight and portable. I use the camera grid from the menu, or just my own cross hairs, for lining up. I work on limited funds, so there is no way I would be interested in a 3D monitor costing ten times the cost of my cameras. I thought during the course of this long thread that there was a lot of discussion on 3D monitoring, so I offered my own experiences and experiments. Once I have set my own rig, I usually just use one 2D screen until I change the IA.

Editing is also the same except that I use Magix MX18. After using an audio cue at the recording stage, I line up the audio for each clip using the NLE audio sync, then auto align the horizontal and rotational alignment. Each scene is depth adjusted as I work, to give the most comfortable viewing.

Roger