Sony HDR-TD10 3D-Capable Camcorder - Page 54 - AVS Forum | Home Theater Discussions And Reviews
post #1591 of 1623 Old 10-19-2015, 04:11 AM
Member
 
Roger Gunkel's Avatar
 
Join Date: Dec 2012
Location: Near Cambridge, UK
Posts: 29
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 4 Post(s)
Liked: 11
Quote:
Originally Posted by MLXXX View Post
Hi Roger, you may be perfectly well aware of this issue. However I suspect that occasional readers of this part of AVS Forum might not appreciate that generally speaking you cannot achieve perfect Left and Right image synch with two separate home consumer video cameras, even if controlling them using a single remote control.


When you wrote "perfect synch", I believe you meant that the two cameras were out of kilter with each other by no more than half a frame. (In any case where the mismatch in the capture time of the raw footage happened to be greater than half a frame, then the Left or Right clip could be advanced or retarded one or more whole frames in the editing so as to reduce the discrepancy between the clips to no more than half a frame.)

With unsynchronized capture, the effect of any discrepancy can at best be minimised to no more than half a frame. Use of a single remote control is no guarantee that the two cameras responding to it will start recording at the same time (say, to the nearest millisecond). Generally, there will be significant variation in start-up timing.

I note that at 60fps (USA), half a frame is 8.33 ms. At 50fps (Europe), half a frame is 10.00 ms.

For certain fast moving scenes (such as a close-up of a water fountain) a timing mismatch of several milliseconds will markedly compromise the 3D effect for many viewers. (I myself see a mirage effect or a general blurring.) Other viewers may not notice any anomaly.


I think there is a significant limitation in using separate non-synchronised cameras (be they 2D or 3D camera models). The 3D effect from the captures will not be successful for fast motion in the foreground, unless you are lucky enough to find that the two cameras happened to be out of kilter with each other by only a small fraction of a frame for the particular take. For less demanding scenes, you can get by without synchronization at the time of capture if using 50i/50p or 60i/60p. (Using an unsynchronized capture frame rate of only 24p or 25p would be risky, unless movement of the camera and in the scene were sedate.)

There are other issues that can arise too with two independent cameras, such as variations in automatic exposure, focus, and colour balance.

I do regret the disappearance of consumer level 3D video camera models from retailers' shelves. Even with GoPros, I rarely see a 3D kit on display these days.
I have been producing 3D video for a number of years, some of which I get paid to produce as part of my full-time video filming business, so I am well aware of the synch problems with two ungenlocked cameras. I'm also well aware that if you want to shoot the best cinema-quality 3D footage, then you need to spend huge amounts of money and probably won't be asking questions on this forum.

My post was mainly in response to other posters talking about having two separate 3D cameras for a wider stereo base, which would give exactly the same sync problems as a pair of 2D cameras. When I mentioned 'perfect sync' from the remotes, I meant that there was often no need to adjust the two video streams in post, as they started within half a frame of each other.

Unless you are watching action shots with lots of very fast movement, an accuracy in PAL of 1/50th of a second between video streams gives a perfectly acceptable 3D image for the type of personal documentary work and contracted wedding work that I do. The more noticeable problems arise when the streams start drifting further on longer duration shots. This of course is not really a problem when taking short clips or if cutaways are taken with a second pair during long shots. The cutaways or different angles will enable the long clips to be cut and resynched if necessary.

All my syncing is done to the audio track, using auto-sync to the natural sound or to a cue signal where appropriate, although it is pretty straightforward to sync visually from the audio waveform.
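For anyone curious what that kind of audio auto-sync does under the hood, here is a minimal sketch (purely illustrative, not the actual tool described above; the file names and the assumption of mono WAV exports at a common sample rate are mine) that estimates the offset between the two cameras' soundtracks by cross-correlation:

Code:
# Minimal sketch of audio-based sync estimation (illustrative only).
# Assumes both cameras recorded the same scene audio, exported as mono
# WAV files at the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def sync_offset_seconds(left_wav, right_wav):
    rate_l, left = wavfile.read(left_wav)
    rate_r, right = wavfile.read(right_wav)
    assert rate_l == rate_r, "resample first if the sample rates differ"
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    # The peak of the cross-correlation tells us how far one track lags the other.
    corr = correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    # Positive lag: the shared audio occurs later in the Left file,
    # i.e. the Left camera started rolling first.
    return lag / rate_l

offset = sync_offset_seconds("left_cam.wav", "right_cam.wav")  # hypothetical files
print(f"shift one clip by {offset * 1000:.1f} ms to line the soundtracks up")

Whatever offset remains after slipping whole frames in the editor is the sub-frame residual discussed above.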

Using a pair of consumer cams is going to make matching the streams more time-consuming if you choose to use auto everything. I always manually set the cameras for white balance, exposure, etc., and never use in-camera stabilisation, as that is impossible to get identical on both cameras. I have frequently used Mercalli, though, to stabilise already-synched footage.

I think we need to be clear here that there is a big difference between commercially produced 3D video for broadcast or cinema and video for the use of family and friends. Most reading these threads will be wishing to produce 3D video for their own use and it is perfectly viable to make 3D video using pairs of cameras at almost any price providing you have some control over manual settings and follow basic 3D filming practice.

I would like to encourage more people to try 3D video filming, and pointing out simpler, cost-effective ways to do it seems a good way to go.

Roger
Roger Gunkel is offline  
post #1592 of 1623 Old 10-19-2015, 06:47 AM
AVS Special Member
 
tomtastic's Avatar
 
Join Date: Sep 2012
Location: Wichita, KS
Posts: 1,263
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 477 Post(s)
Liked: 165
Quote:
Originally Posted by Roger Gunkel View Post
I'm just wondering why nobody seems to be recommending using twin 2D cam video rigs, rather than twin 3D cams. It is far cheaper to get a pair of modern HD cams than trying to get hold of rapidly disappearing and aging 3D cams. By far my most heavily used twin rig for the past few years has been a pair of Panasonic SD700 cams, long since superseded, but still giving great HD performance. They are synched for stop/start, zoom, etc. with the standard IR remote, which works on both cameras at the same time. Usually they start in perfect sync, but if they are a frame out, they can easily be audio-synched in post. They are also small enough to get IA down to about 60mm and I use both viewfinders like a telescope for a full live 3D view.
Roger
I would say they use dual 3D cams mainly because it doubles that user's arsenal of 3D cameras. Also, as you mentioned in your next post, a lot of the usage here isn't professional; it's recreation by 3D enthusiasts with various levels of experience. If you're traveling to, say, a National Park and shooting scenery in 3D, one person can only carry so much, so taking two 3D cameras that can each shoot 3D on their own if needed, and together shoot wide-I.A. 3D, works for that person.

Of course the biggest problems with all-in-one 3D cameras are the fixed lens spacing and the performance of the cameras (small sensors and consumer-grade in-camera compression).

I agree; if I were going that route I'd probably use better 2D cameras. The other problem would then be that you'd need a mirror rig for close-up 3D, as the cameras won't be close enough together to shoot anything under 10 to 15 feet. And that's more weight, money and time for most enthusiasts.

I experimented briefly with dual cameras using a cheap GoPro-like system but learned that the best outcome was half a frame out of sync. I would have to unquantize to frames to get them paired up correctly, but I understand that would result in interlacing issues if it re-renders frames to match. It was only a $130.00 experiment for the cameras, and I built the case for around $5.00. A good learning experience. The lens spacing I made adjustable from 1.5 to 3 inches, and with no zoom on these action cams it was perfectly fine. As long as there's not much quick movement up close they work OK.

If I were to move to a better 2D camera I think I'd go the genlock route. I have both the Panasonic Z10K and the Panasonic 3DA1 (which has the wider I.A.), and genlock and alignment are things that go unappreciated in 3D. They're just something you come to expect when shooting 3D once you own one of these; moving to something that isn't aligned and synced out of the box gets really frustrating.

If you're shooting long distance shots, then it's probably not a big deal to have perfect sync, but if you want more range out of your 3D system, using it up close, mid range and distance, it's the better way to go.
MLXXX likes this.

tomtastic is online now  
post #1593 of 1623 Old 10-19-2015, 08:15 AM
Senior Member
 
MLXXX's Avatar
 
Join Date: Jan 2007
Location: Brisbane, Australia
Posts: 411
Mentioned: 3 Post(s)
Tagged: 0 Thread(s)
Quoted: 148 Post(s)
Liked: 71
Just before posting this, I've noticed a post from tomtastic. There will be a degree of overlap in some of my comments below.

Quote:
Originally Posted by Roger Gunkel View Post
My post was mainly in response to other posters talking about having two separate 3D cameras for a wider stereo base, which would give exactly the same sync problems as a pair of 2D cameras. When I mentioned 'perfect sync' from the remotes, I meant that there was often no need to adjust the two video streams in post, as they started within half a frame of each other.
Yes a number of the 3D enthusiasts who already had one 3D video camera chose to purchase a matching second 3D camera for hyperstereo, rather than make use of two 2D cameras.

Reasons for that could include:
  • The first 3D camera could be used alone for regular 3D shots.
  • The second 3D camera could be pressed into service for hyperstereo and provide an excellent match of lens characteristics and image sensor characteristics.
  • The two 3D cameras could be used for simultaneous regular 3D shots from different angles.

But yes, someone with no 3D camera could decide to purchase two 2D cameras and mount them in such a fashion that they could be used for regular 3D or hyperstereo. There are a number of challenges here:
  • Achieving basic physical alignment of the lenses of the two cameras
  • If telescopic lenses are to be used at an intermediate extent of zoom, achieving a matching of the zoom [this could prove very difficult]
  • As previously discussed, (for a given aperture) manually setting the exposure time of each camera
  • Having a solution for focus (perhaps allowing auto-focus and accepting there will sometimes be disparities between Left and Right)
  • As previously discussed, avoiding scenes that will highlight lack of synchrony in the capture of the Left and Right images, e.g. a dog running into view in the foreground; a horse race, or an athletics event.
  • Being prepared to slip the Left or Right footage by one or more frames in the post production editing phase where the cameras for some reason were unable to start within half a frame of each other.

I note that for 3D shooting with a normal stereo base, a single dedicated 3D video camera would be considerably more convenient to use than two unsynchronised 2D cameras. With hyperstereo, inconvenience may be unavoidable.
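To make the last challenge in that list concrete, here is a tiny illustrative calculation (the start times are invented) showing why slipping whole frames in the editor can only ever reduce the mismatch to a sub-frame residual of at most half a frame:

Code:
# Illustrative sketch: whole-frame slipping can only reduce a start-time
# mismatch to a sub-frame residual (the numbers below are made up).
FPS = 50.0                    # PAL progressive, 20 ms per frame
frame = 1.0 / FPS

left_start = 0.000            # seconds (hypothetical)
right_start = 0.131           # seconds (hypothetical)

mismatch = right_start - left_start
slip_frames = round(mismatch / frame)        # whole frames to slip in the editor
residual = mismatch - slip_frames * frame    # what remains after slipping

print(f"slip the Right clip by {slip_frames} frames")
print(f"residual offset: {residual * 1000:+.1f} ms "
      f"(worst case is half a frame, i.e. {frame * 500:.1f} ms)")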

The future

I'm hoping there'll be a new crop of home consumer 3D cameras in 2016. For example, I see real potential in designing a 4K 2D camera for alternative use as a 2K stereoscopic camera with the addition of an adaptor lens. This would provide Full HD 3D.

As for a new dedicated semi-professional 3D camera, I would hope to see the option to vary the lens separation and even the toe-in. I note that the closer the subject is to the camera lenses, the more important it can become to have the option of turning the lenses inwards, mimicking the convergence of human eyes necessary to view very close objects clearly with both eyes.
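As a rough illustration of that convergence (the separations and distances below are example numbers only), the toe-in each lens needs is simple trigonometry:

Code:
# Rough illustration of convergence/toe-in: each lens (or eye) rotates inward
# by atan((separation / 2) / subject_distance). Example numbers only.
import math

def toe_in_degrees(separation_mm, distance_mm):
    return math.degrees(math.atan((separation_mm / 2.0) / distance_mm))

for distance_m in (0.6, 2.0, 10.0):
    angle = toe_in_degrees(65.0, distance_m * 1000.0)  # 65 mm, roughly human eye spacing
    print(f"subject at {distance_m:>4} m -> toe-in per lens of about {angle:.2f} degrees")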
MLXXX is offline  
post #1594 of 1623 Old 10-19-2015, 10:28 AM
Member
 
Roger Gunkel's Avatar
 
Join Date: Dec 2012
Location: Near Cambridge, UK
Posts: 29
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 4 Post(s)
Liked: 11
Quote:
Originally Posted by MLXXX View Post
Just before posting this, I've noticed a post from tomtastic. There will be a degree of overlap in some of my comments below.


Yes a number of the 3D enthusiasts who already had one 3D video camera chose to purchase a matching second 3D camera for hyperstereo, rather than make use of two 2D cameras.

Reasons for that could include:
  • The first 3D camera could be used alone for regular 3D shots.
  • The second 3D camera could be pressed into service for hyperstereo and provide an excellent match of lens characteristics and image sensor characteristics.
  • The two 3D cameras could be used for simultaneous regular 3D shots from different angles.

But yes, someone with no 3D camera could decide to purchase two 2D cameras and mount them in such a fashion that they could be used for regular 3D or hyperstereo. There are a number of challenges here:
  • Achieving basic physical alignment of the lenses of the two cameras
  • If telescopic lenses are to be used at an intermediate extent of zoom, achieving a matching of the zoom [this could prove very difficult]
  • As previously discussed, (for a given aperture) manually setting the exposure time of each camera
  • Having a solution for focus (perhaps allowing auto-focus and accepting there will sometimes be disparities between Left and Right)
  • As previously discussed, avoiding scenes that will highlight lack of synchrony in the capture of the Left and Right images, e.g. a dog running into view in the foreground; a horse race, or an athletics event.
  • Being prepared to slip the Left or Right footage by one or more frames in the post production editing phase where the cameras for some reason were unable to start within half a frame of each other.

I note that for 3D shooting with a normal stereo base, a single dedicated 3D video camera would be considerably more convenient to use than two unsynchronised 2D cameras. With hyperstereo, inconvenience may be unavoidable.

The future

I'm hoping there'll be a new crop of home consumer 3D cameras in 2016. For example, I see real potential in designing a 4K 2D camera for alternative use as a 2K stereoscopic camera with the addition of an adaptor lens. This would provide Full HD 3D.

As for a new dedicated semi-professional 3D camera, I would hope to see the option to vary the lens separation and even the toe-in. I note that the closer the subject is to the camera lenses, the more important it can become to have the option of turning the lenses inwards, mimicking the convergence of human eyes necessary to view very close objects clearly with both eyes.
I think it all depends on what your requirements are and how much work you want to put into the end product, balanced against how much it is all going to cost you.

I have a Fuji W1, a Fuji W3, a JVC GS-TD1, 2x GoPro Hero3s, 2x SJ4000s, 2x Lumix FZ200s, 2x Lumix FZ2000s for 4K, 2x Panasonic SD700/750s with a 3D adapter for a third SD750, and an LG 3D phone, so I am pretty well tooled up for anything I may want to film. I tend to use the JVC for all the general and fairly close video work, with the twin SD700 rig for very quick and easy close to wide shots. The Lumix cams tend to be used more for HD stills and video which require a bigger imaging chip, but they are far less portable than the SD700 rig.

The twin SD700s are easily aligned on a simple base plate and the zooms match very well with the remote, although any minor variations can be adjusted in post by zooming in or cropping slightly on one of the images (I only zoom for reframing). They also usually start on the same frame, but if not, the chances of getting them exactly half a frame out are remote. Colour matching with two identical cameras is usually not needed if they are set up properly, but is quite straightforward in post if required. The SD700s are usually on a very simple base plate, aligned, and used more like a pair of binoculars, with a bigger base plate for a wider base if needed.

The JVC twin lens is by far the most convenient for instant 3D with very little editing correction, and for the same reason I still love the Fuji W3 for quick stills. Most of my non-movement stills are cha-cha with the Lumix FZ1000, or with both of them twinned for more serious work.

For those who already have a 3D camera, getting a matching one, if you can find it, can be useful, particularly for two angles as mentioned. But with little available new, and used ones holding their price, anyone starting from scratch would in my opinion find it far more economical to use a matching 2D pair, and it will help with an understanding of 3D techniques.

Roger
Roger Gunkel is offline  
post #1595 of 1623 Old 10-19-2015, 05:11 PM
Senior Member
 
MLXXX's Avatar
 
Join Date: Jan 2007
Location: Brisbane, Australia
Posts: 411
Mentioned: 3 Post(s)
Tagged: 0 Thread(s)
Quoted: 148 Post(s)
Liked: 71
Quote:
Originally Posted by Roger Gunkel View Post
The twin SD700s are easily aligned on a simple base plate and the zooms match very well with the remote, although any minor variations can be adjusted in post by zooming in or cropping slightly on one of the images (I only zoom for reframing). They also usually start on the same frame, but if not, the chances of getting them exactly half a frame out are remote.
Half a frame out is the worst case; perfect synch the best case. Indeed, the chances of either exact occurrence are remote.

With 2D cameras more generally, the result will tend to lie somewhere between the two extremes, and vary from take to take. (Some takes will be closer to perfect synch, some closer to a half frame discrepancy [where applicable, after slipping one of the clips along the timeline by one or more full frames before pairing them for 3D].) I have recommended that if shooting a water fountain, several takes be done (powering down one of the cameras between takes if that helps), increasing the chances of a favourable result. Some of the videos uploaded to this forum with unsynched cameras have included fortuitously close to perfect synch footage of critical subject matter in part of the video, and poorer results in other parts. My eyes unfortunately are very sensitive to timing mismatches!

A while back I tried a very cheap solution: two webcams attached to a laptop PC and controlled by the same software. I had hoped that this might result in good synch. I found that the synch was usually better than one-quarter of a frame out, so there was some benefit in the arrangement, but it wasn't good enough to eliminate anomalies for my eyes in many everyday scenes. The arrangement was useful, though, for hyperstereo of distant scenes.

In case readers of this thread haven't seen this before and might possibly be interested, here is a reference to a video I prepared in mid-2012 to illustrate the effect of relatively small discrepancies in synch on the apparent motion of the balls of an anniversary clock:
Quote:
Originally Posted by MLXXX View Post

...

I've prepared a video to show the effects of a mismatch between Left and Right timing on apparent motion. I captured at 60i with my Sony HDR-TD10, extracted Left and Right (using the MVC to AVI converter from 3dtv.at), and with VirtualDub converted to 60p (using odd and even fields). I used VirtualDub again, with its motion interpolation filter, to arrive at 240p. The result was as if I had captured the moving orbs of the anniversary clock at 239.76fps!

I then used VirtualDub to harvest every tenth frame to get to 23.976fps (a frame "decimate" option in VirtualDub). But the point of harvest could be offset by 1, 2, ..., 10 frames, to simulate capture delays ranging from 4.17 ms (1/10th of a frame at 23.976fps) to 41.7 ms (one frame at 23.976fps). Even at 4.17 ms, an effect on the motion of the orbs is apparent for my vision. Here is a link to the YouTube: http://www.youtube.com/watch?v=k_m4ETc-ydY

The video lasts just under 7 minutes. The smallest mismatch shown (4.17 ms) begins at 3 min 15 sec.

...
As 7 minutes is a long time, it might be convenient to proceed directly to the point 3 min 15 sec into the video, where a mismatch of only 4.17 ms (1/240th of a second) is demonstrated.
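For anyone who wants to reproduce the arithmetic behind those simulated delays, this short sketch maps a harvest offset of 1 to 10 high-rate frames to the millisecond figures quoted above:

Code:
# Arithmetic behind the quoted experiment: harvesting every 10th frame of a
# 239.76 fps sequence gives 23.976 fps, and offsetting the harvest point by
# n high-rate frames simulates an L/R capture delay of n / 239.76 seconds.
HIGH_FPS = 239.76
for n in range(1, 11):
    delay_ms = n / HIGH_FPS * 1000.0
    print(f"offset of {n:>2} frame(s) -> simulated delay of {delay_ms:5.2f} ms")
# offset 1  ->  4.17 ms (the smallest mismatch demonstrated in the video)
# offset 10 -> 41.71 ms (one full frame at 23.976 fps)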
MLXXX is offline  
post #1596 of 1623 Old 10-20-2015, 04:47 AM
Member
 
Bergj69's Avatar
 
Join Date: Apr 2014
Location: Lage Vuursche, The Netherlands
Posts: 96
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 7 Post(s)
Liked: 23
Quote:
Originally Posted by termite View Post
Do you know if there's a SBE available for TD20?
Has anyone used one?
No, there is not (and probably never will be) an SBE available for the TD20. The lenses are mounted too close together on the TD20 model to make the mirror construction possible; the distance on the TD10 is just that wee bit more, which is the minimum required spacing for an SBE to function. In fact, that is the only reason why I purchased a TD10 on top of the TD20 that I already had: so that I could equip it with an SBE. With the SBE mounted on my TD10 I cannot increase the 3D range as much as one can with a rig, but working with a rig puts the whole 3D project at a much more advanced level, including the effort and time required both in shooting (setting up the rig) and in editing.

The set of TD20 and TD10 with SBE was portable enough (although a TD10 fitted with an SBE unit is still quite bulky) to carry along on my holidays and use for from-the-hip shooting. I had the SBE permanently mounted on the TD10 to have it readily available for the long shots, and used the TD20 for the nearby shots. I simply do not have enough time to extend my holidays long enough to see just as much as I did now and shoot footage at the same spots using rigs for the long shots (setting up, shooting, breaking down, etc.). Don Landis has made breathtaking 3D projects with his rig, but it has very likely taken him quite a lot of time to produce them (figuring out the setup of the rigs, carrying the stuff around, setting it up, fine-tuning the hardware, etc.). As I said, by using the SBE I managed to increase the 3D depth such that, for me, it still produced breathtaking shots of the Grand Canyon and Sedona. But footage made using a rig and a lot of time will definitely be more impressive.
Bergj69 is offline  
post #1597 of 1623 Old 10-20-2015, 08:28 AM
Senior Member
 
termite's Avatar
 
Join Date: Jan 2003
Posts: 228
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 36 Post(s)
Liked: 13
Quote:
Originally Posted by Bergj69 View Post
No, there is not (and probably never will be) an SBE available for the TD20. The lenses are mounted too close together on the TD20 model to make the mirror construction possible; the distance on the TD10 is just that wee bit more, which is the minimum required spacing for an SBE to function. In fact, that is the only reason why I purchased a TD10 on top of the TD20 that I already had: so that I could equip it with an SBE. With the SBE mounted on my TD10 I cannot increase the 3D range as much as one can with a rig, but working with a rig puts the whole 3D project at a much more advanced level, including the effort and time required both in shooting (setting up the rig) and in editing.

The set of TD20 and TD10 with SBE was portable enough (although a TD10 fitted with an SBE unit is still quite bulky) to carry along on my holidays and use for from-the-hip shooting. I had the SBE permanently mounted on the TD10 to have it readily available for the long shots, and used the TD20 for the nearby shots. I simply do not have enough time to extend my holidays long enough to see just as much as I did now and shoot footage at the same spots using rigs for the long shots (setting up, shooting, breaking down, etc.). Don Landis has made breathtaking 3D projects with his rig, but it has very likely taken him quite a lot of time to produce them (figuring out the setup of the rigs, carrying the stuff around, setting it up, fine-tuning the hardware, etc.). As I said, by using the SBE I managed to increase the 3D depth such that, for me, it still produced breathtaking shots of the Grand Canyon and Sedona. But footage made using a rig and a lot of time will definitely be more impressive.

Great info. Thanks Bergj69!
termite is offline  
post #1598 of 1623 Old 10-20-2015, 08:47 AM
Advanced Member
 
3DBob's Avatar
 
Join Date: Aug 2014
Location: Southeastern Michigan
Posts: 850
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 293 Post(s)
Liked: 72
Please share some results directly or through links, guys. The proof is in the pudding...err 3D videos that came out of all this experience.
3DBob is online now  
post #1599 of 1623 Old 10-20-2015, 11:29 AM
AVS Special Member
 
Barry C's Avatar
 
Join Date: Oct 2012
Posts: 1,130
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 236 Post(s)
Liked: 150
I have found that using the SBE on a JVC GS-TD1 and then adding additional depth with the Edius Stereoscopic filter's horizontal slider gives a sufficient 3D bump to exceed what the camera and SBE alone can do. Here's an example of some Yosemite footage which was done in this manner. To me, the added depth effect combined with the SBE looks good; however, I realize that some people will disagree and perceive the 3D effect as ineffective with this technique. It's all in the eye of the beholder, no real right or wrong, IMHO! The important thing is that I like the way it looks.

Barry C is online now  
post #1600 of 1623 Old 10-20-2015, 05:42 PM
Senior Member
 
MLXXX's Avatar
 
Join Date: Jan 2007
Location: Brisbane, Australia
Posts: 411
Mentioned: 3 Post(s)
Tagged: 0 Thread(s)
Quoted: 148 Post(s)
Liked: 71
Quote:
Originally Posted by Barry C View Post
To me, the added depth effect combined with the SBE looks good, however, I realize that some people will disagree and perceive the 3D effect as ineffective with this technique. It's all in the eye of the beholder, no real right or wrong, IMHO!
I particularly like a scene near the end (at 5min 13sec) of a mountain peak in the middle to far distance. For my eyes there's a full and interesting 3D effect in the overall composition of that scene.
MLXXX is offline  
post #1601 of 1623 Old 10-21-2015, 01:10 AM
AVS Club Gold
 
Don Landis's Avatar
 
Join Date: Jun 1999
Location: Jacksonville, FL
Posts: 12,132
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 916 Post(s)
Liked: 271
Using the SBE is a tiny step in the right direction, but El Capitan is still flat. To achieve a real 3D solid look for the distant mountain, your IA needs to be much wider than the SBE permits.

With an 18mm lens, you would need 1 meter or more of IA for a distance of 2000 meters, while maintaining a minimum distance of 100 meters to the nearest object in the scene.

What I see is what you were able to achieve with the tools you had. You got improved 3D of the near trees in the scene only as a result of the SBE. The effects H slider only pushes the far distant mountain back farther away. This adds the illusion of more distance, but not depth within the mountain itself. Stereographers call this the cardboard cutout look. Personally, I don't mind that in some 3D, but it's no substitute for the look of a real three-dimensional capture of a landscape scene on the small TV screen. Of course this practice also has the artifact of miniaturization. I prefer the latter (more three-dimensional, with some miniaturization) to the cardboard cutout flat look. The background mountain, if it is the subject of the scene, should be optimized for three dimensions with wide IA, but if the scene's focus is the foreground and the mountain is just background, then the flat look is OK.
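For anyone who wants to see where figures of that order come from, the 1 meter ballpark above can be reproduced with the widely published Bercovitz stereo-base formula; note that the sensor width and the 1/30-of-frame parallax budget in this sketch are illustrative assumptions of mine, not values taken from the post:

Code:
# Stereo base (interaxial) estimate using the widely published Bercovitz
# formula:  B = P * (L*N / (L - N)) * (1/F - (L + N) / (2*L*N))
# P = allowed on-sensor parallax, N/L = near/far distances, F = focal length.
# The sensor width and the 1/30-of-frame parallax budget are assumptions
# made purely for illustration.

def stereo_base_mm(focal_mm, near_mm, far_mm, parallax_mm):
    return parallax_mm * (far_mm * near_mm / (far_mm - near_mm)) * \
           (1.0 / focal_mm - (far_mm + near_mm) / (2.0 * far_mm * near_mm))

sensor_width_mm = 4.8                  # assumed small (roughly 1/3") sensor
parallax_mm = sensor_width_mm / 30.0   # assumed parallax budget: 1/30 of frame width

base = stereo_base_mm(focal_mm=18.0,
                      near_mm=100_000.0,     # 100 m to the nearest object
                      far_mm=2_000_000.0,    # 2000 m to the mountain
                      parallax_mm=parallax_mm)
print(f"suggested interaxial: about {base / 1000:.2f} m")  # comes out near 1 m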
Don Landis is offline  
post #1602 of 1623 Old 10-21-2015, 07:10 AM
AVS Special Member
 
Wolfgang S.'s Avatar
 
Join Date: Aug 2011
Location: Vienna/Austria
Posts: 1,210
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 39 Post(s)
Liked: 16
Sure, the hyperstereo aspects are great points, I think. When pairing two TD10 units I used the Ste-Fra LANC controller to measure how long the units stay in sync - it tends to be something between 30 and 45 minutes in my case. And I also use the side-by-side rig with an IO of up to 1.5 meters, which is fine for many shoots (typically I use the 60cm base when I am travelling).


So sure, I also recommend using two cameras - and I still like the idea of using two TD10 units to do that. For much smaller IOs one can use a single TD10/Z10000 unit, or one has to invest in a beam splitter rig.


All of that is a question of equipment only.

Kind regards,
Wolfgang
videotreffpunkt.com
Wolfgang S. is offline  
post #1603 of 1623 Old 10-21-2015, 07:17 AM
AVS Special Member
 
tomtastic's Avatar
 
Join Date: Sep 2012
Location: Wichita, KS
Posts: 1,263
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 477 Post(s)
Liked: 165
I think with scenes like mountains or something large and in the distance, it's easier not to worry about making that the convergence point. Like Don said, you'd probably need anywhere from several yards to hundreds of feet of separation to get it right, depending on how far away they are. Just set up shots with something in the foreground, either at positive parallax or set in. It's not like our eyes see mountains in 3D anyway.
Barry C likes this.

tomtastic is online now  
post #1604 of 1623 Old 10-21-2015, 07:24 AM
AVS Special Member
 
Barry C's Avatar
 
Join Date: Oct 2012
Posts: 1,130
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 236 Post(s)
Liked: 150
Quote:
Originally Posted by Don Landis View Post
Using the SBE is a tiny step in the right direction, but El Capitan is still flat. To achieve a real 3D solid look for the distant mountain, your IA needs to be much wider than the SBE permits.

With an 18mm lens, you would need 1 meter or more of IA for a distance of 2000 meters, while maintaining a minimum distance of 100 meters to the nearest object in the scene.

What I see is what you were able to achieve with the tools you had. You got improved 3D of the near trees in the scene only as a result of the SBE. The effects H slider only pushes the far distant mountain back farther away. This adds the illusion of more distance, but not depth within the mountain itself. Stereographers call this the cardboard cutout look. Personally, I don't mind that in some 3D, but it's no substitute for the look of a real three-dimensional capture
Don, this is exactly what I meant when I said that some people would disagree with this approach and perceive the 3D effect as ineffective. Or, in your case, perceive the mountain - Half Dome, since the narrow FOV with the SBE wouldn't let me get all of El Capitan - as flat. But we all perceive things differently, as I don't see it as flat or cardboard at all, and since I am very familiar with this terrain (I'm in Yosemite at least a few times a year), this depth effect is far more crucial than what would be achieved with a wider IA. Again, this is a very subjective thing, and as I said before, there's really no right or wrong here. It's just what works for me as the best way to approximate what I actually see in real life.

For instance, when shooting scenes such as Yosemite Falls, which is basically water coming down a flat cliff with foreground trees, getting some 3D effect in the foreground trees and then pushing the waterfall back to add depth cues works well. I don't feel that anything would be gained here by shooting a waterfall on a flat cliff with a wider IA. Also, I STRONGLY believe - again, just my perspective - that much of what our eyes see and perceive as 3D is all about depth between us and the subject. When we see distant subjects, mountains for instance, the 3D interpretation has everything to do with lighting and shading/shadowing of that mountain or other distant object. Considering our eyes are only about 65mm apart, this makes sense. When shooting the ending scenes of Half Dome, artificially adding the correct amount of depth to approximate what I see when I'm there, and then letting the natural lighting and shading do the rest, works well for me. However, I find it perfectly acceptable that you should disagree.

You know I've been trying to get you to come out here so we can join up for a Yosemite trip where you could bring your twin cam rig. It would be very cool to see what effect you would get this way. I have no doubt it would be fantastic. I'm not sure, though, that it would make for a more realistic presentation of what the naked eye actually sees. So, any chance I can get you out here next year?

Last edited by Barry C; 10-21-2015 at 07:33 AM.
Barry C is online now  
post #1605 of 1623 Old 10-21-2015, 09:51 AM
AVS Club Gold
 
Don Landis's Avatar
 
Join Date: Jun 1999
Location: Jacksonville, FL
Posts: 12,132
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 916 Post(s)
Liked: 271
OK, first let's keep shading and other aspects of depth illusion out of the discussion, because while it does affect the perceived depth, it does not relate to the science of the optics for creating a stereoscopic illusion. These other qualities of the image relate more to depth illusion in a 2D image and only enhance the depth of a stereo image.

The practice of sizing the optics for optimizing the 3D stereo illusion is not subjective at all. It is a rather precise set of mathematical relationships that optimize the appearance for what you want to see in the stereo illusion. These have all been well defined in various literature and put in layman's terms by one Bernard Mendiburu. In his discussion of the physics he explains that the interocular distance of 65mm is not a limiting factor, because we are not displaying the captured image at real size, but rather shrinking it to a display size. Therefore the use of greater than 65mm for our interaxial cameras is perfectly acceptable to minimize the flatness of distant subjects.

I mentioned the use of 18mm as the focal length of the camera lens, but that is not representative of the human eye either. The focal length of the human eye has never been matched with lens optics: peripheral vision of nearly 180° with no horizontal line distortion, combined with a forward view that mimics a 50° lens, has not yet been achieved. So, because of that, no such ultra wide angle lens has ever been produced. We can get a 6mm fisheye, but then the image is farther away and severely curved.

So, in stereoscopic 3D the math dictates that the wider the view angle, the less the 3D effect, as the subject is pushed back. But keeping the same IA, the more we zoom in, the better the 3D effect and the closer to reality the image looks, except that we lose scene width and object or subject depth in the distance. The latter may be recovered by using a wider IA. The math says that the wider the field of view, the wider you need to have the IA to achieve the same depth of the distant object.

In the practical example of the waterfall, you zoom in and demonstrate the validity of the science, because the waterfall is now seen in front of the rock wall, while in the wider shot the waterfall is almost on the rock wall. What I am saying is that by increasing the IA of the cameras well beyond what the SBE limits you to, you could achieve an appropriate spread of the waterfall over the rock face, and do that with a wider angle of view that would be closer to what you see with your eyes in the real world as you project it to your small screen at home.

In other words, you buy an SBE to improve the depth of your camera for distance, but that is not the limit of the science. You can achieve more depth over greater distances with wider angle lenses by using a greater spread than the SBE allows. You have nearly all the tools. You have GoPros with very wide angle lenses. You have two of them. Now just mount them on a precise slide table to achieve the depth at distance.

My widest angle lens system for my twin rig is 8.8mm on the DSLRs, and I have a 1 meter bench. The trick in using this maximum system is finding a location where I can use it and keep near objects out of the scene, as near objects violate the stereo convergence. Places like the rim out on a point at the Grand Canyon would be one of the few locations where this can be used. Next year I want to try to shoot the NYC skyline in 3D with this setup. I have it now with an 18mm lens and 150mm, shot from a high point off a cruise ship's top deck in the center of NY Harbor. I have a feeling that Yosemite's El Capitan is too compromised with near objects to use the techniques I mentioned. This is one reason my bench uses the twin cameras at either end and the Z10K in the center for a tighter shot. My twin 3D system is not capable of zooming in sync.


PS - the waterfall would also appear blurry due to the rapid motion not being genlocked; however, the mountain would be sharp as a tack. To fix this I would take your genlock cable for the GoPros and extend it to 2 meters in length. You might buy an extra one, as it would no longer fit in your housing. Just cut the GoPro genlock cable in two, match the wires up with some stock multi-wire cable and splice it in. There are wiring diagrams online I have seen for doing this, but you really don't need them.

One day we will get out there. I'm pretty much done with everything I wanted to do in Death Valley now. I may make it out there in April if I don't go on a cruise in May. We've also been looking at an Alaskan trip next summer too. All depends on the money. I like paying for the trips when I sign up so that is not an issue when it's time to go.





Ok enough for now. I need to get back to work on my video.
Don Landis is offline  
post #1606 of 1623 Old 10-21-2015, 10:34 AM
AVS Special Member
 
Barry C's Avatar
 
Join Date: Oct 2012
Posts: 1,130
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 236 Post(s)
Liked: 150
Don, I think some of our differences are philosophical. I'm more of a seat-of-the-pants shooter and you're more of an engineer/technician. I believe in trying to find things that work for me, even when it means disregarding the science and orthodoxy. That doesn't mean it's flawed; it just means that I don't regard it as gospel. I've found a technique with the SBE that works for me, and I like it. Is it correct from a technical standpoint? No! But it does look like it does when I'm standing there. With my recent Bahamas underwater projects, which utilized the GoPro Duals for the first time, the gospel rule was that you can't shoot wide angle underwater through a flat port without causing chromatic aberrations and distortion. Well, I used the flat port, and there were no aberrations or distortion that I could see. Is this optically possible? Theoretically, no. But nevertheless, it wasn't there in spite of the established theory. So, again, I'll go with what works for me.

I'm hoping to get to Yosemite this winter- assuming we have one. It's been several years since I've been there when there was snow on the valley floor. It's really quite spectacular and I'd love to get some 3D of the snow on the mountains and trees shot from the same vantage points as the summer shots were shot from. I actually recently created a Yosemite YouTube channel to feature content from Yosemite and surrounding areas in the Sierras. As for the next diving trip, I've pretty much decided on Playa Del Carmen.

Looking forward to seeing the project you're working on now.
Barry C is online now  
post #1607 of 1623 Old 10-21-2015, 11:12 AM
AVS Club Gold
 
Don Landis's Avatar
 
Join Date: Jun 1999
Location: Jacksonville, FL
Posts: 12,132
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 916 Post(s)
Liked: 271
The primary reason for using a dome-ported lens is that it permits the correction of the 1.33x enlargement factor of any lens, thus preserving the original angle of view of the lens as it was used in air. But putting a lens behind a flat port will cause the colors of light that pass through the flat port to the first lens element to be bent by slightly different amounts, causing a chromatic blur to appear at the edges. This varies with different focal length lenses. When shooting in deeper water the distribution of color in the light is reduced, so the artifact is less, unless you are using a local light source. In a complex scene such as a reef this artifact is often difficult to see, especially when there is nothing to compare it to.

Quote:
Looking forward to seeing the project you're working on now.
Unlikely, as it is one of those personal travelogues that won't be uploaded to YT. I really didn't shoot much of the last trip in 3D.

I shot the eclipse from the ship and posted a shipboard-edited piece using 300 still images put to stop-frame animation. That was uploaded while on board. It's 2D of the moon.

The only thing I did shoot in 3D was the Statue of Liberty and NY Harbor as we departed.

I also spent 2 days exploring cemeteries for ancestors' grave markers to put with my ancestor.com family tree.


I'll be in Bonaire next month and plan to shoot some U/W on Kleine Bonaire, but time will be short as I only have 4 hours to play. It's more of a revisit from my trips there back in the early 70s. I'll probably take the Nabi 3D rig and maybe also the GoPro4 B to shoot some 4K.
Don Landis is offline  
post #1608 of 1623 Old 10-21-2015, 03:05 PM
AVS Special Member
 
Barry C's Avatar
 
Join Date: Oct 2012
Posts: 1,130
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 236 Post(s)
Liked: 150
Quote:
Originally Posted by Don Landis View Post
I'll be in Bonaire next month and plan to shoot some U/W on Kleine Bonaire, but time will be short as I only have 4 hours to play. It's more of a revisit from my trips there back in the early 70s. I'll probably take the Nabi 3D rig and maybe also the GoPro4 B to shoot some 4K.
Sounds good
Barry C is online now  
post #1609 of 1623 Old 03-01-2016, 08:36 AM
Newbie
 
van Gageldonk's Avatar
 
Join Date: Mar 2013
Posts: 14
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 10
Quote:
Originally Posted by Bergj69 View Post
Yep, still using it. Mostly with the Cyclopital Stereo Base Extender attached. And only twice a year (holidays). I'm lagging behind in the editing though... Something to do next winter.

I had a quick look at some raw footage I shot at the Grand Canyon with the TD10, equipped with the SBE, and it was breathtaking (watched using my Full HD Optoma 3D projector), so I'm puzzled as to why the consumer 3D market has collapsed.
Ah, Bergj69, I've found a fellow Dutchman here, would be nice to compare notes?

A bit further down the thread I saw people distinguishing between professional use and 3D as a hobby. Of course there's a difference; I don't argue with that. But I think there's an area between those two: we are not just shooting for cinema or shooting our kids' birthdays. There's a whole new area of channels out there that's in between professional and 'fun'. It's the area where people experiment, and hopefully find a new way to make some money with this.

I've been experimenting with the TD10 for quite some years now, documenting museums and works of art in a way that nobody in the museum notices me. I combine it with computer graphics and voice-over to make a 'proper' presentation. I intend to get more CG composited into the TD10 3D footage.
The only limitation is the fixed IA distance, but I try to overcome that problem... managing more or less.

Shooting with the TD10 has possibilities a couple of 2D cameras or a large 3D camera don't have... it's small and nobody notices it. I hope I can put it to commercial use some day.
Take a look at my museum presentations and others here (you can even download the S3D .iso file): https://www.youtube.com/user/rnvgrenevg
van Gageldonk is offline  
post #1610 of 1623 Old 03-01-2016, 09:03 AM
AVS Special Member
 
tomtastic's Avatar
 
Join Date: Sep 2012
Location: Wichita, KS
Posts: 1,263
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 477 Post(s)
Liked: 165
Quote:
Originally Posted by van Gageldonk View Post
I hope I can put it to commercial use some day.
Take a look at my museum presentations and others here (you can even download the S3D .iso file): https://www.youtube.com/user/rnvgrenevg
You have to be careful when crossing into the commercial area. Institutions usually have policies regarding filming for commercial use. My local zoo, for instance, has a specific policy for that. You have to be insured and bonded, and get prior approval for everything, from the number of crew working to the equipment and the days of filming.

They may not take notice of you filming, as most people are just there for the museum experience, but if you plan to make money from it, you need to inform those places and find out what their policy is on commercial use of their institution. Better to find out beforehand than face a lawsuit down the line.

The other thing is filming individuals without their consent. For documentaries it's sort of a gray area. Generally, if you film a feature you should have anyone in that feature on your payroll as actors. For YouTube stuff, I don't worry about it. But if I were doing something for money, I wouldn't have anyone in the frame that I didn't put there. Not worth the legal hassles.

tomtastic is online now  
post #1611 of 1623 Old 03-01-2016, 09:15 AM
Member
 
Bergj69's Avatar
 
Join Date: Apr 2014
Location: Lage Vuursche, The Netherlands
Posts: 96
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 7 Post(s)
Liked: 23
You are right about the segment between professional use and "birthday shooting". My Grand Canyon project is almost at the publishing stage; I've shown it to friends (before watching "The Martian") and they were impressed. I did some extensive cutting, and the clip is now reduced to 19 minutes. In my enthusiasm, I deleted so many scenes that my carefully timed music is now misplaced. (On the helicopter ride you hear music and narration during the ride; the helicopter company "Papillon" has this music clip that they let you hear as you approach the rim - a rough 100 feet above the trees, and then the earth falls downward a mile or so. I managed to identify and download this clip; obviously I want that music at that point in the footage, and presently it is not there.)

In my humble opinion the Grand Canyon is precisely a location where 3D is "added value" to your "home video"; you can almost "feel" the depth of the rim on some of those shots when viewing the footage on a 3D projector. As soon as I have the music in place, I will drop a link here.

I'll have a look at your museum project. I tend to visit museums on a regular basis but rarely carry the TD10 or TD20. For indoor shooting I tend to use the TD20; outdoors the TD10 comes into play, with the Cyclopital Stereo Base Extender fitted as standard. I had the opportunity to compare shots taken with the TD20 at its standard IA (smaller than the standard IA of the TD10) and with the TD10 plus SBE. The TD20 does a relatively good job; apparently a lot of the depth is created by on-board software. Nonetheless, the TD10 Cyclopital shots had a more pronounced depth feel to them. The downside of the SBE is the long range you need to have to your subject, since you have to zoom in at least 33% of the zoom travel to prevent gray bars from appearing on the right and left sides (otherwise you see the inside of the mirror construction of the SBE). On top of that, where it can be rather awkward to have something too close by in the shot, this is even more so when shooting with the SBE.

Last edited by Bergj69; 03-01-2016 at 09:18 AM.
Bergj69 is offline  
post #1612 of 1623 Old 03-01-2016, 10:22 AM
Advanced Member
 
3DBob's Avatar
 
Join Date: Aug 2014
Location: Southeastern Michigan
Posts: 850
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 293 Post(s)
Liked: 72
Quote:
Originally Posted by van Gageldonk View Post
Shooting with the TD10 has possibilities a couple of 2D cameras or a large 3D camera don't have... it's small and nobody notices it. I hope I can put it to commercial use some day.
Take a look at my museum presentations and others here (you can even download the S3D .iso file): https://www.youtube.com/user/rnvgrenevg
Very nice. I have to reconsider my TD10 again. I've been using my dual GoPro system for a while now, but the TD10 still looks good for close-up work.
3DBob is online now  
post #1613 of 1623 Old 03-01-2016, 12:25 PM
Newbie
 
van Gageldonk's Avatar
 
Join Date: Mar 2013
Posts: 14
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 10
Quote:
Originally Posted by tomtastic View Post
You have to be careful when crossing into the commercial area. Institutions usually have policies regarding filming for commercial use. My local zoo, for instance, has a specific policy for that. You have to be insured and bonded, and get prior approval for everything, from the number of crew working to the equipment and the days of filming.
.
You are right. I've been in touch with the Rijksmuseum in Amsterdam. When working professionally, I can only work with equipment that's approved for their insurance purposes. And only certain equipment can be used. Up to now I've just been making demos or 'proof of principle' pieces, so I guess no problem there for now. Thanks for pointing that out though.
van Gageldonk is offline  
post #1614 of 1623 Old 03-01-2016, 12:37 PM
Newbie
 
van Gageldonk's Avatar
 
Join Date: Mar 2013
Posts: 14
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 10
Quote:
Originally Posted by 3DBob View Post
Very nice. I have to reconsider my TD10 again. I've been using my dual GoPro system for a while now, but the TD10 still looks good for close-up work.
Thanks, Bob. When shooting close up I dial in depth all the way and get as close as 60 cm (about 2 feet) to get the object at screen level with zero parallax, or a bit negative.
van Gageldonk is offline  
post #1615 of 1623 Old 03-01-2016, 11:23 PM
AVS Club Gold
 
Don Landis's Avatar
 
Join Date: Jun 1999
Location: Jacksonville, FL
Posts: 12,132
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 916 Post(s)
Liked: 271
Professional vs Amateur shooting generally has to do with working for money. It has nothing to do with the equipment.

While what Tom suggested is correct in some limited way, there are other factors to consider as well-

You have to be aware of the laws in your country that limit what you can and cannot do in shooting.

In the US we have a First Amendment that allows us to shoot video journalism without the need to seek approval, get permits, have certificates of insurance, etc. Anyone has a First Amendment right to shoot video as a journalist. There are still limits: that right does not give you permission to breach private property that is not open to the public. On private property that is open to the public, such as a retail store, you can do your journalism until you are asked to leave, and then you must comply. On a public right of way, you may not be required to leave by the government unless there are specific security risks to your being there or your presence disrupts the public good. What you shoot as a journalist is yours to profit from as you please.

If you produce a product to sell such as a fictional video or a documentary, then the production of that falls under certain copyright limits for music, and images contained within your work. People need to sign a model release and professional actors should be paid something to avoid hassle with their agents that are under contract.

Working as an Amateur is simple and easy, but not profitable. Being a journalist in the US, is a protected right with some restrictions. Shooting a production for a business is costly and full of restrictions that cost a lot of money.

Last edited by Don Landis; 03-01-2016 at 11:28 PM.
Don Landis is offline  
post #1616 of 1623 Old 03-02-2016, 12:02 AM
Newbie
 
van Gageldonk's Avatar
 
Join Date: Mar 2013
Posts: 14
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 10
Quote:
Originally Posted by Don Landis View Post
Professional vs Amateur shooting generally has to do with working for money. It has nothing to do with the equipment.

While what Tom suggested is correct in some limited way, there are other factors to consider as well-

You have to be aware of the laws in your country that limit what you can and cannot do in shooting.

In the US we have a First Amendment that allows us to shoot video journalism without the need to seek approval, get permits, have certificates of insurance, etc. Anyone has a First Amendment right to shoot video as a journalist. There are still limits: that right does not give you permission to breach private property that is not open to the public. On private property that is open to the public, such as a retail store, you can do your journalism until you are asked to leave, and then you must comply. On a public right of way, you may not be required to leave by the government unless there are specific security risks to your being there or your presence disrupts the public good. What you shoot as a journalist is yours to profit from as you please.

If you produce a product to sell such as a fictional video or a documentary, then the production of that falls under certain copyright limits for music, and images contained within your work. People need to sign a model release and professional actors should be paid something to avoid hassle with their agents that are under contract.

Working as an Amateur is simple and easy, but not profitable. Being a journalist in the US, is a protected right with some restrictions. Shooting a production for a business is costly and full of restrictions that cost a lot of money.
Thanks for clearing that up. Being a professional certainly comes with a lot of limitations and costs. My intention with these S3D productions is to find a job in this field or some kind of cooperation. I produce ideas, and the productions are the result and the showcase. In the case of these museum documentaries, called "Being There", I want to give people the feeling of being there, with the added realism of S3D. In future this can be enhanced with HDR, HD sound, 8K, smell, VR, interaction, etc. ... but I'm getting ahead of myself.

My S3D presentation downloads are here, or on YouTube.
René van Gageldonk
van Gageldonk is offline  
post #1617 of 1623 Old 03-02-2016, 05:58 AM
AVS Special Member
 
Barry C's Avatar
 
Join Date: Oct 2012
Posts: 1,130
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 236 Post(s)
Liked: 150
I'm sorry to say, but trying to do commercial work with 3D won't be particularly profitable, since very few people have 3D capability - or an interest in it. For instance, for the museum piece, how many people who go to it, or are interested in seeing a documentary on it, have 3D? Probably very few. So that will mean you've cut off 90% of your potential customers if you only offer it in 3D.

There is another very talented gentleman on the forum- Joseph Clark- who, off and on over the past few years, has been working on an excellent documentary of the Missouri Botanical Gardens. However, he's well aware of these limitations of 3D and knows he will have to offer a 2D version as well to make it commercially viable.

Again, I take no joy in pointing this out, but unfortunately, this is the state of things with 3D products. Like Don, I'm fortunate enough to be retired and can produce 3D features for my YouTube channels solely as a labor of love without having to be concerned about their commercial viability.
Barry C is online now  
post #1618 of 1623 Old 03-02-2016, 06:36 AM
AVS Special Member
 
tomtastic's Avatar
 
Join Date: Sep 2012
Location: Wichita, KS
Posts: 1,263
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 477 Post(s)
Liked: 165
Quote:
Originally Posted by Barry C View Post
I'm sorry to say, but trying to do commercial work with 3D won't be particularly profitable,
And along with that, documentaries themselves aren't profitable either; most are self-funded and don't make back what was put into them. It would be far easier to seek employment with an already established stereography crew which has professional 3D rigs and equipment, particularly large-sensor cameras and rigs - basically all the stuff a small crew can't manage on their own.

Along with that: 30 ft. jibs, motion-control sliders, aerial shots, dedicated lighting and sound engineers. Trying to market something shot by one or two individuals on prosumer cameras will appear amateur in comparison. When someone purchases a 3D Blu-ray, they expect a certain level of expertise; falling short of IMAX expectations, they'll leave scathing reviews.

Not to sound negative - I've given this a lot of thought too, and would love to start up my own crew, but I think it would have to be much more than just myself. And getting back to my original point, there's not any profit in it, certainly not from the start.

tomtastic is online now  
post #1619 of 1623 Old 03-02-2016, 07:04 AM
Member
 
Bergj69's Avatar
 
Join Date: Apr 2014
Location: Lage Vuursche, The Netherlands
Posts: 96
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 7 Post(s)
Liked: 23
Quote:
Originally Posted by Barry C View Post
I'm sorry to say, but trying to do commercial work with 3D won't be particularly profitable, since very few people have 3D capability - or an interest in it. For instance, for the museum piece, how many people who go to it, or are interested in seeing a documentary on it, have 3D? Probably very few. So that will mean you've cut off 90% of your potential customers if you only offer it in 3D.

There is another very talented gentleman on the forum- Joseph Clark- who, off and on over the past few years, has been working on an excellent documentary of the Missouri Botanical Gardens. However, he's well aware of these limitations of 3D and knows he will have to offer a 2D version as well to make it commercially viable.

Again, I take no joy in pointing this out, but unfortunately, this is the state of things with 3D products. Like Don, I'm fortunate enough to be retired and can produce 3D features for my YouTube channels solely as a labor of love without having to be concerned about their commercial viability.
Though I personally have the feeling the approachable portion of the market is about to grow on an exponential scale: I think it won't take too long before a large number of consumers have one or more Oculus Rift VR headsets in their households. Though set out to create a different sensation (Virtual Reality: you get to see in 3D what you are "looking at", so the view follows the movement of your head - 3D in an entirely different setting, and beyond the reach of the prosumer standard, since you require an absurd number of cameras to do the shooting and a supercomputer to do the rendering), I believe people will discover the option of simply watching movies in a private setting with that thing. And since it is equipped with two display units, watching 3D movies comes that much closer....

But I might be wrong here...

Last edited by Bergj69; 03-02-2016 at 07:07 AM.
Bergj69 is offline  
post #1620 of 1623 Old 03-02-2016, 07:12 AM
AVS Special Member
 
tomtastic's Avatar
 
Join Date: Sep 2012
Location: Wichita, KS
Posts: 1,263
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 477 Post(s)
Liked: 165
I think VR will be marketed for PR stuff, amusement parks, training programs and of course YT; as for it taking over theater and primetime TV, I won't live to see it. If people balk at putting on shades to watch a movie, those goggles will surely be over the top, and as difficult as it is to get directors to shoot in native 3D, can you imagine how they would react to 360° shooting? Sure, there might be that one James Cameron who will do it, but it will be far, far more niche than 3D.
Don Landis and Barry C like this.

tomtastic is online now  