Sony HDR-TD10 3D-Capable Camcorder - Page 54 - AVS Forum | Home Theater Discussions And Reviews
post #1591 of 1608 Old 10-19-2015, 05:11 AM
Member
 
Roger Gunkel's Avatar
 
Join Date: Dec 2012
Location: Near Cambridge, UK
Posts: 29
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 4 Post(s)
Liked: 11
Quote:
Originally Posted by MLXXX View Post
Hi Roger, you may be perfectly well aware of this issue. However I suspect that occasional readers of this part of AVS Forum might not appreciate that generally speaking you cannot achieve perfect Left and Right image synch with two separate home consumer video cameras, even if controlling them using a single remote control.


When you wrote "perfect synch", I believe you meant that the two cameras were out of kilter with each other by no more than half a frame. (In any case where the mismatch in the capture time of the raw footage happened to be greater than half a frame, then the Left or Right clip could be advanced or retarded one or more whole frames in the editing so as to reduce the discrepancy between the clips to no more than half a frame.)

With unsynchronized capture, the best that can be done is to minimise the discrepancy to no more than half a frame. Use of a single remote control is no guarantee that the two cameras responding to it will start capturing at the same time (say, to the nearest millisecond). Generally, there will be significant variation in start-up timing.

I note that at 60fps (USA), half a frame is 8.33ms; at 50fps (Europe), half a frame is 10.00ms.
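For readers who want to sanity-check the half-frame figures quoted here, the arithmetic can be reproduced in a few lines (an illustrative sketch, not part of the original post):

```python
# Worst-case residual sync error for two free-running cameras:
# after slipping whole frames in post, the leftover offset is at
# most half a frame period.

def half_frame_ms(fps: float) -> float:
    """Worst-case residual offset in milliseconds after whole-frame alignment."""
    return 1000.0 / fps / 2.0

for fps in (23.976, 25.0, 50.0, 60.0):
    print(f"{fps:>7.3f} fps -> {half_frame_ms(fps):.2f} ms worst case")
```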

For certain fast moving scenes (such as a close-up of a water fountain) a timing mismatch of several milliseconds will markedly compromise the 3D effect for many viewers. (I myself see a mirage effect or a general blurring.) Other viewers may not notice any anomaly.


I think there is a significant limitation in using separate non-synchronised cameras (be they 2D or 3D camera models). The 3D effect from the captures will not be successful for fast motion in the foreground, unless you are lucky enough to find that the two cameras happened to be out of kilter with each other by only a small fraction of a frame for the particular take. For less demanding scenes, you can get by without synchronization at the time of capture if using 50i/50p or 60i/60p. (An unsynchronized capture frame rate of only 24p or 25p would be risky, unless movement of the camera, and in the scene, were sedate.)

There are other issues that can arise too with two independent cameras, such as variations in automatic exposure, focus, and colour balance.

I do regret the disappearance of consumer level 3D video camera models from retailers' shelves. Even with GoPros, I rarely see a 3D kit on display these days.
I have been producing 3D video for a number of years, some of which I get paid to produce as part of my full time video filming business, so am well aware of the synch problems with two ungenlocked cameras. I'm also well aware that if you want to shoot best cinema-quality 3D footage, you need to spend huge amounts of money, and you probably won't be asking questions on this forum.

My post was mainly in response to other posters talking about having two separate 3D cameras for a wider stereo base, which would give exactly the same sync problems as a pair of 2D cameras. When I mentioned 'perfect sync' from the remotes, I meant that there was often no need to adjust the two video streams in post, as they started within half a frame of each other.

Unless you are watching action shots with lots of very fast movement, an accuracy of 1/50th of a second (PAL) between video streams gives a perfectly acceptable 3D image for the type of personal documentary work and contracted wedding work that I do. The more noticeable problems arise when the streams start drifting further apart on longer duration shots. This of course is not really a problem when taking short clips, or if cutaways are taken with a second pair during long shots; the cutaways or different angles will enable the long clips to be cut and resynched if necessary.

All my synching is done to the audio track, using auto-synching to the natural sound or a cue signal if appropriate, although it is pretty straightforward to visually sync from the audio waveform.
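The audio auto-synching described here is typically implemented by cross-correlating the two soundtracks and picking the lag with the strongest match. Below is a minimal sketch of that idea using numpy; the function and the toy noise signal are illustrative assumptions, not the actual tools mentioned in the post:

```python
import numpy as np

def audio_offset_samples(left: np.ndarray, right: np.ndarray) -> int:
    """Estimate the lag of `right` relative to `left` in samples
    (positive means `right` starts later)."""
    corr = np.correlate(right, left, mode="full")
    # Re-centre the peak index so 0 means the tracks are already in sync.
    return int(np.argmax(corr) - (len(left) - 1))

# Toy example: identical noise "soundtracks", with `right` delayed
# by 80 samples (10 ms at an assumed 8 kHz sample rate).
rng = np.random.default_rng(0)
sig = rng.standard_normal(8_000)
delay = 80
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])
print(audio_offset_samples(left, right))  # prints 80
```

In practice the editor then slips one clip by the detected offset (converted to frames) before pairing Left and Right.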

Using a pair of consumer cams is going to make matching the streams more time consuming if you choose to use auto-everything. I always manually set the cameras for white balance, exposure etc, and never use in-camera stabilisation, as that is impossible to get identical on both cameras. I have frequently used Mercalli, though, to stabilise already-synched footage.

I think we need to be clear here that there is a big difference between commercially produced 3D video for broadcast or cinema and video for the use of family and friends. Most reading these threads will be wishing to produce 3D video for their own use and it is perfectly viable to make 3D video using pairs of cameras at almost any price providing you have some control over manual settings and follow basic 3D filming practice.

I would like to encourage more people to try 3D video filming, and pointing out simpler and cost-effective ways to do it seems a good way to go.

Roger
Roger Gunkel is offline  
post #1592 of 1608 Old 10-19-2015, 07:47 AM
AVS Special Member
 
tomtastic's Avatar
 
Join Date: Sep 2012
Location: Wichita, KS
Posts: 1,154
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 420 Post(s)
Liked: 139
Quote:
Originally Posted by Roger Gunkel View Post
I'm just wondering why nobody seems to be recommending using twin 2d cam video rigs, rather than twin 3D cams. It is far cheaper to get a pair of modern HD cams, than trying to get hold of rapidly disappearing and aging 3D cams. By far my most heavily used twin rig for the past few years has been a pair of Panasonic SD700 cams, long since superseded, but still giving great HD performance. They are synched for stop/start, zoom etc with the standard IR remote, which works on both cameras at the same time. Usually they start in perfect sync, but if they are a frame out, they can easily be audio synched in post. They are also small enough to get IA down to about 60mm and I use both viewfinders like a telescope for a full live 3D view.
Roger
I would say they use dual 3D cams because it doubles the arsenal of 3D cameras that user has; also, as you mentioned in your next post, a lot of the usage here isn't professional, but recreation by 3D enthusiasts with various levels of experience. If you're traveling to, say, a National Park and shooting scenery in 3D, one person can only carry so much, so taking two 3D cameras, each of which can shoot 3D by itself if needed, plus shoot wide-I.A. 3D together, works for that person.

Of course the biggest problems with all-in-one 3D cameras are the fixed lens spacing and the performance of the cameras (small sensors and consumer-grade in-camera compression).

I agree; if I was going that route I'd probably use better 2D cameras. The other problem would then be that you'd need a mirror rig for close-up 3D, as the cameras won't be close enough together to shoot anything under 10 to 15 feet. And that's more weight, money and time for most enthusiasts.

I experimented briefly with dual cameras, using a cheap GoPro-like system, but learned that the best outcome was half a frame off sync. I would have to unquantize to frames to get them paired up correctly, but I understand that would result in interlacing issues if it re-renders frames to match. It was only a $130 experiment for the cameras, and I built the case for around $5; a good learning experience. I made the lens spacing adjustable from 1.5 to 3 inches, and with no zoom on these action cams it was perfectly fine. As long as there's not much quick movement up close they work OK.

If I were to move to a better 2D camera I think I'd go the genlock route. I have both the Panasonic Z10K and the Panasonic 3DA1, which has the wider I.A., and genlock and alignment are things that go unappreciated in 3D. They're just something you expect when shooting 3D once you own one of these; moving to something that isn't aligned and synced out of the box gets really frustrating.

If you're shooting long-distance shots, then it's probably not a big deal to have perfect sync, but if you want more range out of your 3D system, using it up close, mid range and at distance, genlock is the better way to go.
MLXXX likes this.

tomtastic is online now  
post #1593 of 1608 Old 10-19-2015, 09:15 AM
Senior Member
 
MLXXX's Avatar
 
Join Date: Jan 2007
Location: Brisbane, Australia
Posts: 407
Mentioned: 3 Post(s)
Tagged: 0 Thread(s)
Quoted: 144 Post(s)
Liked: 69
Just before posting this, I've noticed a post from tomtastic. There will be a degree of overlap in some of my comments below.

Quote:
Originally Posted by Roger Gunkel View Post
My post was mainly in response to other posters talking about having two separate 3D cameras for a wider stereo base, which would give exactly the same sync problems as a pair of 2d cameras. When I mentioned 'perfect sync' from the remotes, I meant that there was often no need to adjust the two video streams in post as they started within half a frame of each other.
Yes a number of the 3D enthusiasts who already had one 3D video camera chose to purchase a matching second 3D camera for hyperstereo, rather than make use of two 2D cameras.

Reasons for that could include:
  • The first 3D camera could be used alone for regular 3D shots.
  • The second 3D camera could be pressed into service for hyperstereo and provide an excellent match of lens characteristics and image sensor characteristics.
  • The two 3D cameras could be used for simultaneous regular 3D shots from different angles.

But yes, someone with no 3D camera could decide to purchase two 2D cameras and mount them in such a fashion that they could be used for regular 3D or hyperstereo. There are a number of challenges here:
  • Achieving basic physical alignment of the lenses of the two cameras
  • If telescopic lenses are to be used at an intermediate extent of zoom, achieving a matching of the zoom [this could prove very difficult]
  • As previously discussed, (for a given aperture) manually setting the exposure time of each camera
  • Having a solution for focus (perhaps allowing auto-focus and accepting there will sometimes be disparities between Left and Right)
  • As previously discussed, avoiding scenes that will highlight lack of synchrony in the capture of the Left and Right images, e.g. a dog running into view in the foreground; a horse race, or an athletics event.
  • Being prepared to slip the Left or Right footage by one or more frames in the post production editing phase where the cameras for some reason were unable to start within half a frame of each other.

I note that for 3D shooting with a normal stereo base, a single dedicated 3D video camera would be considerably more convenient to use than two unsynchronised 2D cameras. With hyperstereo, inconvenience may be unavoidable.

The future

I'm hoping there'll be a new crop of home consumer 3D cameras in 2016. For example, I see real potential in designing a 4K 2D camera for alternative use as a 2K stereoscopic camera with the addition of an adaptor lens. This would provide Full HD 3D.

As for a new dedicated semi-professional 3D camera, I would hope to see the option to vary the lens separation and even the toe-in. I note that the closer the subject is to the camera lenses, the more important it can become to have the option of turning the lenses inwards, mimicking the convergence of human eyes necessary to view very close objects clearly with both eyes.
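The convergence mentioned here is straightforward trigonometry; this sketch (figures are illustrative, not from the post) shows the per-lens toe-in needed to aim both optical axes at the subject:

```python
import math

def toe_in_deg(interaxial_mm: float, subject_distance_mm: float) -> float:
    """Inward rotation per lens so that both optical axes meet at the subject."""
    return math.degrees(math.atan((interaxial_mm / 2.0) / subject_distance_mm))

# Human eyes (~65 mm apart) converging on an object 300 mm away:
print(round(toe_in_deg(65, 300), 1))      # prints 6.2
# The same eyes looking at a subject 10 m away barely converge at all:
print(round(toe_in_deg(65, 10_000), 2))   # prints 0.19
```

This is why toe-in matters far more for close subjects: the required angle falls off roughly in proportion to distance.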
MLXXX is offline  
post #1594 of 1608 Old 10-19-2015, 11:28 AM
Member
 
Roger Gunkel's Avatar
 
Join Date: Dec 2012
Location: Near Cambridge, UK
Posts: 29
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 4 Post(s)
Liked: 11
Quote:
Originally Posted by MLXXX View Post
Just before posting this, I've noticed a post from tomtastic. There will be a degree of overlap in some of my comments below.

...
I think it all depends on what your requirements are and how much work you want to put into the end product, balanced against how much it is all going to cost you.

I have a Fuji W1, a Fuji W3, a JVC GS-TD1, 2x GoPro Hero3s, 2x SJ4000s, 2x Lumix FZ200s, 2x Lumix FZ2000s for 4K, 2x Panasonic SD700/750s with a 3D adapter for a third SD750, and an LG 3D phone, so I am pretty well tooled up for anything I may want to film. I tend to use the JVC for all the general and fairly close video work, with the twin SD700 rig for very quick and easy close to wide shots. The Lumix cams tend to be used more for HD stills and video which requires a bigger imaging chip, but are far less portable than the SD700 rig.

The twin SD700s are easily aligned on a simple base plate, and the zooms match very well with the remote, although any minor variations can be adjusted in post by zooming in on or cropping one of the images slightly (I only zoom for reframing). They also usually start on the same frame, but if not, the chances of getting them exactly half a frame out are remote. Colour matching with two identical cameras is usually not needed if they are set up properly, but is quite straightforward in post if required. The SD700s usually sit on a very simple base plate, aligned and used more like a pair of binoculars, with a bigger base plate for a wider base if needed.

The JVC twin lens is by far the most convenient for instant 3D with very little editing correction, and for the same reason I still love the Fuji W3 for quick stills. Most of my non-movement stills are cha-cha with the Lumix FZ1000, or with both of them twinned for more serious work.

For those that already have a 3D camera, getting a matching one, if you can find it, can be useful, particularly for two angles as mentioned. But with little available new, and used ones holding their price, anyone starting from scratch would find it far more economical to use a matching 2D pair, in my opinion, and it will help with understanding 3D techniques.

Roger
Roger Gunkel is offline  
post #1595 of 1608 Old 10-19-2015, 06:11 PM
Senior Member
 
MLXXX's Avatar
 
Join Date: Jan 2007
Location: Brisbane, Australia
Posts: 407
Mentioned: 3 Post(s)
Tagged: 0 Thread(s)
Quoted: 144 Post(s)
Liked: 69
Quote:
Originally Posted by Roger Gunkel View Post
The twin SD700s are easily aligned on a simple base plate, and the zooms match very well with the remote, although any minor variations can be adjusted in post by zooming in on or cropping one of the images slightly (I only zoom for reframing). They also usually start on the same frame, but if not, the chances of getting them exactly half a frame out are remote.
Half a frame out is the worst case; perfect synch the best case. Indeed, the chances of either exact occurrence are remote.

With 2D cameras more generally, the result will tend to lie somewhere between the two extremes, and vary from take to take. (Some takes will be closer to perfect synch, some closer to a half frame discrepancy [where applicable, after slipping one of the clips along the timeline by one or more full frames before pairing them for 3D].) I have recommended that if shooting a water fountain, several takes be done (powering down one of the cameras between takes if that helps), increasing the chances of a favourable result. Some of the videos uploaded to this forum with unsynched cameras have included fortuitously close to perfect synch footage of critical subject matter in part of the video, and poorer results in other parts. My eyes unfortunately are very sensitive to timing mismatches!

A while back I tried a very cheap solution: two webcams attached to a laptop PC, and controlled by the same software. I had hoped that this might result in good synch. I found that the synch was usually better than one-quarter of a frame out, so there was some benefit in the arrangement, but it wasn't good enough to eliminate anomalies for my eyes for many everyday scenes. The arrangement was useful, though, for hyperstereo of distant scenes.

In case readers of this thread haven't seen this before and might possibly be interested, here is a reference to a video I prepared in mid-2012 to illustrate the effect of relatively small discrepancies in synch on the apparent motion of the balls of an anniversary clock:
Quote:
Originally Posted by MLXXX View Post

...

I've prepared a video to show the effects of a mismatch between Left and Right timing on apparent motion. I captured at 60i with my Sony HDR-TD10, extracted Left and Right (using the MVC to AVI converter from 3dtv.at), and with VirtualDub converted to 60p (using odd and even fields). I used VirtualDub again, with its motion interpolation filter, to arrive at 240p. The result was as if I had captured the moving orbs of the anniversary clock at 239.76fps!

I then used VirtualDub to harvest every tenth frame to get to 23.976fps (a frame "decimate" option in VirtualDub). But the point of harvest could be offset by 1, 2, ..., 10 frames, to simulate capture delays ranging from 4.17ms (1/10th of a frame at 23.976fps) to 41.7ms (one frame at 23.976fps). Even at 4.17ms, an effect on the motion of the orbs is apparent to my vision. Here is a link to the YouTube video: http://www.youtube.com/watch?v=k_m4ETc-ydY

The video lasts just under 7 minutes. The smallest mismatch shown (4.17ms) begins at 3m 15s.

...
As 7 minutes is a long time, it might be convenient to proceed directly to the point 3m 15 sec into the video, where a mismatch of only 4.17mS (1/240th second) is demonstrated.
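The decimation experiment in the quoted post can be sketched as follows (an illustrative reconstruction; the real work was done with VirtualDub's frame-decimate option, and these names are made up):

```python
# From a 239.76 fps interpolated sequence, keeping every 10th frame
# yields 23.976 fps; varying the starting offset simulates capture
# delays in steps of one high-rate frame period (about 4.17 ms).

HIGH_FPS = 239.76
TARGET_FPS = 23.976
STEP = round(HIGH_FPS / TARGET_FPS)  # 10

def decimate(frames, offset):
    """Keep every STEP-th frame, starting `offset` frames in."""
    return frames[offset::STEP]

def simulated_delay_ms(offset):
    """Capture delay simulated by starting the harvest `offset` frames late."""
    return offset * 1000.0 / HIGH_FPS

frames = list(range(100))                # stand-in for 100 high-rate frames
print(decimate(frames, 0)[:3])           # prints [0, 10, 20]
print(round(simulated_delay_ms(1), 2))   # prints 4.17
```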
MLXXX is offline  
post #1596 of 1608 Old 10-20-2015, 05:47 AM
Member
 
Bergj69's Avatar
 
Join Date: Apr 2014
Location: Lage Vuursche, The Netherlands
Posts: 92
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 5 Post(s)
Liked: 23
Quote:
Originally Posted by termite View Post
Do you know if there's a SBE available for TD20?
Has anyone used one?
No, there is not (and probably never will be) a SBE available for the TD20. The lenses are mounted too close together on the TD20 to make the mirror construction possible; the distance on the TD10 is just that wee bit more, which is the minimum spacing required for a SBE to function. In fact, that is the only reason I purchased a TD10 on top of the TD20 that I already had: so that I could equip it with a SBE. With the SBE mounted on my TD10 I cannot increase the 3D range as much as one can with a rig, but working with a rig puts the whole 3D project at a much more advanced level, including the effort and time required both in shooting (setting up the rig) and in editing.

The set of TD20, and TD10 with SBE, was portable enough (albeit a TD10 fitted with a SBE unit is still quite bulky) to carry along on my holidays and use for shooting from the hip. I had the SBE permanently mounted on the TD10 to have it readily available for the long shots, and used the TD20 for the nearby shots. I simply cannot extend my holidays long enough to see as much as I did and still shoot the same spots using rigs for the long shots (setting up, shooting, breaking down, etc). Don Landis has made breathtaking 3D projects with his rig, but it has very likely taken him a lot of time to produce them (figuring out the setup of the rigs, carrying the gear around, setting it up, fine-tuning the hardware, etc). As I said, by using the SBE I managed to increase the 3D depth enough that, for me, it still produced breathtaking shots of the Grand Canyon and Sedona. But footage made using a rig, and a lot of time, will definitely be more impressive.
Bergj69 is offline  
post #1597 of 1608 Old 10-20-2015, 09:28 AM
Senior Member
 
termite's Avatar
 
Join Date: Jan 2003
Posts: 220
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 34 Post(s)
Liked: 13
Quote:
Originally Posted by Bergj69 View Post
No, there is not (and probably never will be) a SBE available for the TD20. ...

Great info. Thanks Bergj69!
termite is offline  
post #1598 of 1608 Old 10-20-2015, 09:47 AM
Advanced Member
 
3DBob's Avatar
 
Join Date: Aug 2014
Location: Southeastern Michigan
Posts: 772
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 268 Post(s)
Liked: 68
Please share some results directly or through links, guys. The proof is in the pudding...err 3D videos that came out of all this experience.
3DBob is online now  
post #1599 of 1608 Old 10-20-2015, 12:29 PM
AVS Special Member
 
Barry C's Avatar
 
Join Date: Oct 2012
Posts: 1,070
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 221 Post(s)
Liked: 134
I have found that using the SBE on a JVC GS-TD1, and then adding additional depth with the Edius stereoscopic filter's horizontal slider, gives a sufficient 3D bump to exceed what the camera and SBE alone can do. Here's an example of some Yosemite footage which was done in this manner. To me, the added depth effect combined with the SBE looks good; however, I realize that some people will disagree and perceive the 3D effect as ineffective with this technique. It's all in the eye of the beholder, no real right or wrong, IMHO! The important thing is that I like the way it looks.

Barry C is online now  
post #1600 of 1608 Old 10-20-2015, 06:42 PM
Senior Member
 
MLXXX's Avatar
 
Join Date: Jan 2007
Location: Brisbane, Australia
Posts: 407
Mentioned: 3 Post(s)
Tagged: 0 Thread(s)
Quoted: 144 Post(s)
Liked: 69
Quote:
Originally Posted by Barry C View Post
To me, the added depth effect combined with the SBE looks good, however, I realize that some people will disagree and perceive the 3D effect as ineffective with this technique. It's all in the eye of the beholder, no real right or wrong, IMHO!
I particularly like a scene near the end (at 5min 13sec) of a mountain peak in the middle to far distance. For my eyes there's a full and interesting 3D effect in the overall composition of that scene.
MLXXX is offline  
post #1601 of 1608 Old 10-21-2015, 02:10 AM
AVS Club Gold
 
Don Landis's Avatar
 
Join Date: Jun 1999
Location: Jacksonville, FL
Posts: 11,928
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 770 Post(s)
Liked: 250
Using the SBE is a tiny step in the right direction, but El Capitan is still flat. To achieve a really solid 3D rendering of the distant mountain, your IA needs to be much wider than the SBE permits.

With an 18mm lens, you would need 1 meter or more of IA for a distance of 2000 meters, while maintaining a minimum distance of 100 meters to the nearest object in the scene.
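Those numbers are close to what a common stereographer's rule of thumb, the Bercovitz approximation IA = (P/f) * (L*N/(L-N)), predicts. The sketch below is illustrative only: the sensor width and the 1/30-of-frame-width deviation budget are assumptions, not figures from the post above:

```python
# Bercovitz approximation for interaxial (stereo base):
#   IA = (P / f) * (L * N / (L - N))
# P = maximum allowed on-sensor deviation, f = focal length,
# L = far distance, N = near distance (all in consistent units).

def interaxial_mm(f_mm: float, near_m: float, far_m: float,
                  sensor_width_mm: float = 4.8,     # assumed small sensor
                  deviation_frac: float = 1 / 30) -> float:
    p_mm = sensor_width_mm * deviation_frac
    near_mm, far_mm = near_m * 1000.0, far_m * 1000.0
    return (p_mm / f_mm) * (far_mm * near_mm / (far_mm - near_mm))

# The example above: 18 mm lens, nearest object 100 m, mountain ~2000 m away.
print(round(interaxial_mm(18, 100, 2000) / 1000, 2), "m")  # prints 0.94 m
```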

What I see is what you were able to achieve with the tools you had. You got an improvement in the 3D of the near trees only, as a result of the SBE. The effect's H slider only pushes the distant mountain back farther. This adds the illusion of more distance, but not depth within the mountain itself; stereographers call this the cardboard cutout look. Personally, I don't mind that in some 3D, but it's no substitute for the look of a real three-dimensional capture of a landscape scene on the small TV screen. Of course this practice also has the artifact of miniaturization. I prefer the latter (more three-dimensional, with some miniaturization) to the cardboard cutout flat look. The background mountain, if it is the subject of the scene, should be optimized for three dimensions with a wide IA; but if the scene's focus is the foreground and the mountain is just background, then the flat look is OK.
Don Landis is offline  
post #1602 of 1608 Old 10-21-2015, 08:10 AM
AVS Special Member
 
Wolfgang S.'s Avatar
 
Join Date: Aug 2011
Location: Vienna/Austria
Posts: 1,206
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 36 Post(s)
Liked: 16
Sure, the hyperstereo aspects are great points, I think. When pairing two TD10 units I used the Ste-Fra LANC controller to measure how long the units stay in sync; it tends to be something between 30 and 45 minutes in my case. And I also use the side-by-side rig with an IO of up to 1.5 meters, which is fine for many shoots (typically I use the 60cm base when I am travelling).
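As a back-of-envelope check on those 30-45 minute figures, one can estimate how well matched the two cameras' clocks must be. The half-frame tolerance of 10ms assumed below is an illustration, not something stated in the post:

```python
# If "out of sync" means the accumulated offset exceeds some tolerance,
# the implied relative clock error between the two cameras is simply
# tolerance / recording time.

def drift_ppm(tolerance_ms: float, minutes: float) -> float:
    """Relative clock mismatch (parts per million) that accumulates
    `tolerance_ms` of offset over `minutes` of recording."""
    return tolerance_ms / 1000.0 / (minutes * 60.0) * 1e6

for m in (30, 45):
    print(f"{m} min -> {drift_ppm(10.0, m):.1f} ppm")
```

A mismatch of only a few parts per million is enough to explain drift on that timescale.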


So sure, I also recommend using two cameras, and I still like the idea of using two TD10 units to do that. For much smaller IOs one can use a single TD10/Z10000 unit, or one has to invest in a beam-splitter rig.


All of that is a question of equipment only.

Kind regards,
Wolfgang
videotreffpunkt.com
Wolfgang S. is offline  
post #1603 of 1608 Old 10-21-2015, 08:17 AM
AVS Special Member
 
tomtastic's Avatar
 
Join Date: Sep 2012
Location: Wichita, KS
Posts: 1,154
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 420 Post(s)
Liked: 139
I think with scenes like mountains, or something large and in the distance, it is easier not to worry about making that the convergence point. As Don said, you'd probably need anywhere from several yards to hundreds of feet of IA to get it right, depending on how far away they are. Just set up shots with something in the foreground, either as positive parallax or set in. It's not as if our eyes see mountains in 3D anyway.
Barry C likes this.

tomtastic is online now  
post #1604 of 1608 Old 10-21-2015, 08:24 AM
AVS Special Member
 
Barry C's Avatar
 
Join Date: Oct 2012
Posts: 1,070
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 221 Post(s)
Liked: 134
Quote:
Originally Posted by Don Landis View Post
Using the SBE is a tiny step in the right direction but, still El Capitan is flat. To achieve a real 3D solid of the distant mountain your IA needs to be much wider than the SBE permits.

...
Don, this is exactly what I meant when I said that some people would disagree with this approach and perceive the 3D effect as ineffective. Or, in your case, perceive the mountain (Half Dome; the narrow FOV with the SBE wouldn't let me get all of El Capitan) as flat. But we all perceive things differently, as I don't see it as flat or cardboard at all, and since I am very familiar with this terrain, in that I'm in Yosemite at least a few times a year, this depth effect is far more crucial than what would be achieved with a wider IA. Again, this is a very subjective thing, and as I said before, there's really no right or wrong here. It's just what works for me as the best way to approximate what I actually see in real life.

For instance, when shooting scenes such as Yosemite Falls, which is basically water coming down a flat cliff with foreground trees, getting some 3D effect in the foreground trees and then pushing the waterfall back to add depth cues works well. I don't feel that anything would be gained here by shooting a waterfall on a flat cliff with a wider IA. Also, I STRONGLY believe (again, just my perspective) that much of what our eyes see and perceive as 3D is all about the depth between us and the subject. When we see distant subjects, mountains for instance, the 3D interpretation has everything to do with the lighting and shading/shadowing of that mountain or other distant object. Considering our eyes are only about 65mm apart, this makes sense. When shooting the ending scenes of Half Dome, artificially adding the correct amount of depth to approximate what I see when I'm there, and then letting the natural lighting and shading do the rest, works well for me. However, I find it perfectly acceptable that you should disagree.

You know I've been trying to get you to come out here so we can join up for a Yosemite trip where you could bring your twin-cam rig. It would be very cool to see what effect you would get that way; I have no doubt it would be fantastic. I'm not sure, though, that it would make for a more realistic presentation of what the naked eye actually sees. So, any chance I can get you out here next year?

Last edited by Barry C; 10-21-2015 at 08:33 AM.
post #1605 of 1608 Old 10-21-2015, 10:51 AM
Don Landis
OK, first, let's keep shading and other aspects of depth illusion out of the discussion, because while they do affect perceived depth, they don't relate to the science of the optics for creating a stereoscopic illusion. Those qualities of the image relate more to depth illusion in a 2D image and only enhance the depth of a stereo image.

The practice of sizing the optics to optimize the 3D stereo illusion is not subjective at all. It is a rather precise set of mathematical relationships that optimize the appearance for what you want to see in the stereo illusion. These have all been well defined in the literature and put into layman's terms by Bernard Mendiburu. In his discussion of the physics he explains that the 65mm interocular distance is not a limiting factor, because we are not displaying the captured image at real size, but rather shrinking it down to display size. Therefore using more than 65mm of interaxial between our cameras is perfectly acceptable to minimize the flatness of distant subjects.
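To make that scaling point concrete, here is a rough first-order sketch (my numbers and function names, not anything from the post): if the displayed scene is effectively a 1/k-scale miniature of the real one, the on-screen parallax shrinks by the same factor, so the capture interaxial can grow to roughly k times the eye separation before the depth reads as exaggerated.

```python
EYE_SEPARATION_MM = 65.0  # average human interocular distance

def scaled_ia_mm(scene_width_m, screen_width_m, eye_sep_mm=EYE_SEPARATION_MM):
    """First-order hyperstereo rule of thumb: a scene shown at 1/k scale
    tolerates an interaxial of about k * eye separation before parallax
    looks unnatural. Purely illustrative; real rigs also have to weigh
    convergence and near-object limits."""
    k = scene_width_m / screen_width_m
    return eye_sep_mm * k

# A 200 m wide vista squeezed onto a 2 m screen (k = 100) tolerates
# an interaxial of up to about 6.5 m:
print(scaled_ia_mm(200.0, 2.0))  # 6500.0
```

At life size (k = 1) the rule collapses back to the 65mm eye separation, which is the intuition behind Mendiburu's point.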

I mentioned 18mm as the focal length of the camera lens, but that is not representative of the human eye either. The focal length of the human eye has never been matched by lens optics: near-180° peripheral vision with no horizontal line distortion, combined with a central view that mimics roughly a 50° lens, has never been achieved in a single lens. Because of that, no such ultra-wide-angle lens has ever been produced. We can get a 6mm fisheye, but then the image looks farther away and is severely curved.

So, in stereoscopic 3D the math dictates that the wider the view angle, the less the 3D effect, as the subject is pushed back. Keeping the same IA, the more we zoom in, the better the 3D effect and the closer to reality the image looks, except that we lose scene width and depth in distant objects or subjects. The latter may be recovered by using a wider IA. The math says that the wider the field of view, the wider the IA needs to be to achieve the same depth in the distant object.
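That trade-off can be sketched with a simplified parallel-rig model (an illustration of the general principle, not Don's exact math): the on-sensor disparity of a distant object is roughly IA x focal length / distance, so halving the focal length (widening the view) halves the disparity, and doubling the IA restores it.

```python
def disparity_mm(ia_mm, focal_mm, distance_mm):
    """Approximate on-sensor horizontal disparity for a parallel
    twin-camera rig. Small-angle model: disparity = IA * f / Z.
    Illustrative only."""
    return ia_mm * focal_mm / distance_mm

# 65 mm IA, 18 mm lens, subject 100 m away:
narrow = disparity_mm(65, 18, 100_000)    # ~0.0117 mm on the sensor
# Halve the focal length (wider field of view): disparity halves.
wide = disparity_mm(65, 9, 100_000)       # ~0.0059 mm
# Double the IA to 130 mm: the original disparity (and depth) returns.
restored = disparity_mm(130, 9, 100_000)  # ~0.0117 mm
print(narrow, wide, restored)
```

This is why a wider field of view calls for a wider camera spread to keep the same apparent depth on a distant subject.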

In the practical example of the waterfall, you zoom in and demonstrate the validity of the science: the waterfall is now seen in front of the rock wall, while in the wider shot it sits almost on the rock wall. What I am saying is that by increasing the IA of the cameras well beyond what the SBE limits you to, you could achieve an appropriate separation of the waterfall from the rock face, and do it with a wider angle of view that would be closer to what you see with your eyes in the real world as you project it onto your small screen at home.

In other words, you bought an SBE to improve your camera's depth at distance, but that is not the limit of the science. You can achieve more depth over greater distances with wider-angle lenses by using a greater spread than the SBE allows. You have nearly all the tools: GoPros with very wide-angle lenses, and two of them. Now just mount them on a precise slide table to achieve the depth at distance.

The widest-angle lens system for my twin rig is 8.8mm on the DSLRs, and I have a 1-meter bench. The trick with this maximum setup is finding a location where I can use it while keeping near objects out of the scene, since near objects violate the stereo convergence. Places like a point on the rim of the Grand Canyon are among the few locations where it can be used. Next year I want to try shooting the NYC skyline in 3D with this setup. I have it now with an 18mm lens and 150mm spacing, shot from a high point on a cruise ship's top deck in the center of NY Harbor. I have a feeling that Yosemite's El Capitan is too compromised by near objects for the techniques I mentioned. This is one reason my bench puts the twin cameras at either end and the Z10K in the center for a tighter shot. My twin 3D system is not capable of zooming in sync.


PS: the waterfall would also appear blurry because the rapid motion isn't genlocked; the mountain, however, would be sharp as a tack. To fix this I would take your genlock cable for the GoPros and extend it to 2 meters in length. You might buy an extra one, as it would no longer fit in your housing. Just cut the GoPro genlock cable in two, match the wires up with some stock multi-wire cable, and splice it in. I've seen wiring diagrams online for doing this, but you really don't need them.

One day we will get out there. I'm pretty much done with everything I wanted to do in Death Valley now. I may make it out there in April if I don't go on a cruise in May. We've also been looking at an Alaskan trip next summer. It all depends on the money. I like paying for trips when I sign up, so that it's not an issue when it's time to go.

Ok enough for now. I need to get back to work on my video.
post #1606 of 1608 Old 10-21-2015, 11:34 AM
Barry C
Don, I think some of our differences are philosophical. I'm more of a seat-of-the-pants shooter and you're more of an engineer/technician. I believe in finding things that work for me, even when that means disregarding the science and orthodoxy. That doesn't mean the science is flawed, just that I don't regard it as gospel. I've found a technique with the SBE that works for me, and I like it. Is it correct from a technical standpoint? No! But it does look like what I see when I'm standing there. With my recent Bahamas underwater projects, which used the GoPro Duals for the first time, the gospel rule was that you can't shoot wide angle underwater through a flat port without causing chromatic aberrations and distortion. Well, I used the flat port, and there were no aberrations or distortion that I could see. Is this optically possible? Theoretically, no. But it wasn't there, in spite of the established theory. So, again, I'll go with what works for me.

I'm hoping to get to Yosemite this winter, assuming we have one. It's been several years since I've been there with snow on the valley floor. It's really quite spectacular, and I'd love to get some 3D of the snow on the mountains and trees, shot from the same vantage points as the summer footage. I recently created a Yosemite YouTube channel to feature content from Yosemite and the surrounding Sierras. As for the next diving trip, I've pretty much decided on Playa Del Carmen.

Looking forward to seeing the project you're working on now.
post #1607 of 1608 Old 10-21-2015, 12:12 PM
Don Landis
The primary reason for using a dome port is that it corrects the 1.33x enlargement factor on any lens, preserving the lens's original angle of view as it was in air. But putting a lens behind a flat port causes the different colors of light passing through the port to the first lens element to refract by different amounts, producing a chromatic blur at the edges. This varies with the focal length of the lens. When shooting in deeper water the color content of the light is reduced, so the artifact is weaker unless you're using a local light source. In a complex scene such as a reef this artifact is often difficult to see, especially when there is nothing to compare it to.
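The field-of-view loss behind a flat port can be sketched with Snell's law (standard optics, not from the post; the ~1.33 index is an approximation for seawater):

```python
import math

N_WATER = 1.33  # approximate refractive index of water relative to air

def underwater_half_angle_deg(half_angle_air_deg, n=N_WATER):
    """Effective half field of view behind a flat port: rays bend at the
    water/port interface per Snell's law, narrowing the view by ~1.33x
    (the apparent magnification Don describes)."""
    theta_air = math.radians(half_angle_air_deg)
    return math.degrees(math.asin(math.sin(theta_air) / n))

# A lens with a 60 degree half-angle in air keeps only about 40.6 degrees
# behind a flat port, which is the familiar 1.33x apparent magnification.
print(round(underwater_half_angle_deg(60.0), 1))  # 40.6
```

The chromatic edge blur Don mentions comes from the same refraction: the index n varies slightly with wavelength, so each color bends by a slightly different angle at the flat interface.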

Quote:
Looking forward to seeing the project you're working on now.
Unlikely, as it's one of those personal travelogues that won't be uploaded to YT. I really didn't shoot much of the last trip in 3D.

I shot the eclipse from the ship and posted a piece edited on board, using 300 still images assembled into a stop-frame animation. It was uploaded while on board. It's 2D, of the moon.

The only thing I did shoot in 3D was the Statue of Liberty and NY Harbor at departure.

I also spent 2 days exploring cemeteries for ancestors' grave markers to add to my ancestor.com family tree.


I'll be in Bonaire next month and plan to shoot some U/W on Klein Bonaire, but time will be short as I only have 4 hours to play. It's more of a revisit of my trips there back in the early '70s. I'll probably take the Nabi 3D rig and maybe also the GoPro4 B to shoot some 4K.
post #1608 of 1608 Old 10-21-2015, 04:05 PM
Barry C
Quote:
Originally Posted by Don Landis View Post
I 'll be in Bonaire next month and plan to shoot some U/W on Kleine Bonaire but time will be short as I only have 4 hours to play. It's more of a revisit from my trips there back in the early 70's. I'll probably take the Nabi 3D rig and maybe also the GoPro4 B to shoot some 4K.
Sounds good