Vegas Pro 11 released - Page 19 - AVS Forum
post #541 of 564 - 07-24-2012, 09:08 AM - Wolfgang S. (Thread Starter, AVS Special Member, Vienna/Austria)
Quote:
Originally Posted by Wolfgang S.

Have you changed both the prerendered files folder (File/Properties/Video) and the temporary files folder (Options/Preferences/General)? And third, the path to the image in the burning dialog?

It is moved with the temp folder.

Kind regards,
Wolfgang
videotreffpunkt.com
post #542 of 564 - 10-03-2012, 05:42 PM - Powerplay4 (Member, Curitiba PR - Brazil)
I'm quite frustrated with the results of the Blu-ray 3D 1080p24 ISO generated by Sony Vegas (MVC 1920x1080 24p, 25 Mbps video stream). See examples below with MVC 1080p24, MVC 720p60 and 3D SBS 1080p60:

[Embedded example videos: MVC 1080p24, MVC 720p60, 3D SBS 1080p60]

This video was originally recorded with two Sony HDR-CX130 camcorders mounted in a moving car, shot in 1080p60 and edited in Sony Vegas Pro 11.

I wonder if you are having the same fluidity problem when generating 1080p24 Blu-ray 3D ISOs with Sony Vegas. Unfortunately I have no other software that renders Blu-ray 3D 1080p24. What most intrigues me is that I've seen many 3D 1080p24 movies and have not seen this type of problem. I'm using PowerDVD 12 for 3D playback.

Thanks!
Milton
post #543 of 564 - 10-03-2012, 11:42 PM - Wolfgang S. (Thread Starter, AVS Special Member, Vienna/Austria)
You take a moving camera in a car and shoot in 1080 60p. Then you pair the streams and render the 1080 60p footage to 1080 24p - is that what you do?

If that is right, think about what you ask the system to do here: it has to make 24 frames out of every 60 source frames. Since 60/24 = 2.5, only every second 24p frame lands exactly on a 60p frame; the others fall halfway between two 60p frames and have to be interpolated. So many of the original frames are thrown away and replaced with interpolated ones. And you have fast movement, which means the system blends at least two frames to produce the average you asked for - and that blending is what you see.

Since you are not playing that game when you render to 720 60p from your original 1080 60p footage, the result is much better in 720 60p.

In my opinion, you have two possibilities:
- either stick with 720 60p for 3D-BD
- or shoot your original footage in 1080 24p.

By the way, that is not so much an issue of Vegas - more an issue of a conversion that simply cannot be done at really high quality.
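
An illustrative sketch of that timing arithmetic (an editor's illustration in Python, not part of Wolfgang's post): each 24p output frame maps to source position n x 60/24 = n x 2.5 in the 60p stream, so half of the output frames fall exactly between two source frames and must be blended or snapped.

Code:
# Editor's sketch: where each 24p output frame falls in a 60p source stream.
def source_positions(n_frames=8, src_fps=60, dst_fps=24):
    for n in range(n_frames):
        pos = n * src_fps / dst_fps              # 2.5 source frames per output frame
        if pos.is_integer():
            print(f"24p frame {n}: exact match with 60p frame {int(pos)}")
        else:
            print(f"24p frame {n}: falls between 60p frames {int(pos)} and {int(pos) + 1}")

source_positions()
# Frames 0, 2, 4, ... line up exactly; frames 1, 3, 5, ... land at x.5 and
# must be interpolated (smear) or snapped to a neighbour (judder).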

Kind regards,
Wolfgang
videotreffpunkt.com
post #544 of 564 - 10-04-2012, 09:01 AM - TrickMcKaha (AVS Special Member)
Yes, I can never get acceptable rendering of significant pans at 24p. Besides the blurring there is a ridiculous amount of judder and sometimes other distortions that look like excessive ghosting. I'm rendering at 720 60p if the video has much motion. I sure wish displays were capable of showing 1080p 3D at 30 fps - or even better, 60 fps.

I think this problem is primarily a shortcoming in the HDMI specs and is not specific to Sony Vegas. It astounds me how shortsighted the HDMI specs have been in every version since their beginning.
post #545 of 564 - 10-04-2012, 10:39 AM - Powerplay4 (Member, Curitiba PR - Brazil)
OK, I came to the conclusion that it is better to generate Blu-ray 3D in 720p 60Hz, since the two camcorders I use do not record 1080p 24Hz, only 1080p 60Hz or 1080i 60Hz. I will create 3D SBS (side-by-side) content in 1080p 60Hz.

When Sony Vegas can generate 3D Blu-ray in 1080i 60Hz (AVCHD 2.0/MVC), I think that would be a better option. But from what I saw, Vegas 12 still does not generate 3D Blu-ray in 1080i 60Hz, only 1080p 24Hz and 720p 60Hz at the moment. We would also need a Blu-ray player that reads the AVCHD 2.0 (MVC) format.

Thank you!
Milton
post #546 of 564 - 10-04-2012, 10:45 AM - ComPH3D (Member)
Good 3DTVs have no problem displaying 24/48Hz MVC files from 3D Blu-ray discs - no judder. The problem is going the other way: it is difficult to convert to 24Hz because of the non-integer relationship. If you need 24 fps video, the best way is to capture natively at that rate. Wolfgang explained it. It is difficult to do a 3:2 pulldown without judder. A little smear actually helps with the motion, even though it doesn't look great frame by frame.
post #547 of 564 - 10-04-2012, 11:27 AM - TrickMcKaha (AVS Special Member)
Yes, and like you say, "If you need 24 fps video..."

Why do we need 24 fps video?
post #548 of 564 - 10-04-2012, 01:29 PM - ComPH3D (Member)
For movie theater systems it is the standard frame rate, set up a long time ago. Pretty much all movies and projectors are that way. It was a limitation of mechanical movie cameras and projectors moving celluloid - and it stuck. If you ever make a video for movie theater projector use, they'll demand 24 FPS (http://3dff.org/?page_id=52). With 3D Blu-rays it is partially because of that. It also helps to keep the bitrate down for HDMI. It is very demanding for equipment and cables to send serial bits for all the pixels (uncompressed) of two 1080p streams at higher frame rates. Everything is more expensive and harder to make reliable, especially long cables. The high prices make it difficult to sell to consumers for home use. To change, there are legacy content and equipment cost issues - part of the reason why ATSC ended up with 36 formats.

Also see: http://www.projectorpeople.com/resources/pulldown.asp and http://en.wikipedia.org/wiki/Telecine.

Of course, when going from 30 FPS to 24 (reverse telecine), one in every five frames has to be dropped, which causes judder; the smearing helps a little overall.
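
For readers who want to see the cadences in code, here is a minimal sketch (an editor's illustration, not from the post) of the classic 2:3 pulldown that spreads 24 film frames over 60 interlaced fields, and of the 30p-to-24p decimation that drops one frame in five:

Code:
# Editor's sketch of the two cadences discussed above.
def pulldown_2_3(frames):
    """24p -> 60i: repeat each frame in an alternating 2, 3, 2, 3 field cadence."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields                                  # 4 frames -> 10 fields, so 24 -> 60

def decimate_30_to_24(frames):
    """30p -> 24p: drop every fifth frame (the source of the judder)."""
    return [f for i, f in enumerate(frames) if i % 5 != 4]

print(pulldown_2_3(list("ABCD")))                  # A A B B B C C D D D
print(decimate_30_to_24(list(range(10))))          # [0, 1, 2, 3, 5, 6, 7, 8]
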
post #549 of 564 - 10-06-2012, 02:49 AM - MLXXX (Member, Brisbane, Australia)
Quote:
Originally Posted by Powerplay4

I'm quite frustrated with the results of the ISO Blu-ray 3D 1080p24 generated by Sony Vegas (MVC 1920x1080-24p 25Mbps video stream). See examples below with MVC 1080p24, MVC 720p60 and 3D SBS 1080p60:

Traditional cinematography used a 50% exposure time (a 180-degree shutter). This blurred fast motion and pans. Movie directors worked within the limitations of 24fps. What Vegas has produced for you is not unlike traditional 24fps blur!

Road traffic, or scenes shot from a moving car, will look jerky if captured crisply (short exposure time) at 24fps from a distance of a few metres, and displayed on a computer monitor, or TV, without motion smoothing.

If you examine individual frames in your 60p footage you will see the huge distance a car moves over two frames. 60p is a more natural frame rate to use for this type of material.
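
To put rough numbers on that (an editor's illustration with assumed values, not from the post): the size of the jump between frames scales directly with the frame period, so the same pass-by looks far choppier at 24p than at 60p.

Code:
# Assumed example: a car crossing the full 1920 px frame width in 2 seconds.
frame_width_px = 1920
crossing_time_s = 2.0
speed_px_per_s = frame_width_px / crossing_time_s      # 960 px per second

for fps in (24, 60):
    step_px = speed_px_per_s / fps                     # jump between consecutive frames
    blur_px = step_px / 2                              # blur width with a 50% (180-degree) shutter
    print(f"{fps}p: {step_px:.0f} px jump per frame, ~{blur_px:.0f} px motion blur")
# 24p: 40 px jump per frame; 60p: 16 px -- the larger jump is what reads as jerkiness.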

Will the display do motion interpolation for Full HD 3D?

I have a problem with some 1080p24 footage I took recently using two webcams (for 3D purposes). Each frame is quite clear, but if I author it as a Full HD Blu-ray at 24fps, the pans look very jittery, and so does the road traffic.

In 2D mode my TV set is able to do motion interpolation for 1080p24 (Panasonic Intelligent Frame Creation) and this cleans up the pans very nicely, and lessens the jerkiness of the moving traffic. However there is no motion interpolation available on my set for frame packed (MVC) 3D.

I think a good compromise for fast moving 60p 3D source material is to author as frame packed 720p60.
post #550 of 564 - 10-06-2012, 05:23 AM - Don Landis (AVS Club Gold, Jacksonville, FL)
Hey Jeff:

Thanks for your assessment and opinions above. I've been trying to decide how to shoot fireworks and an outdoor moderate-speed motion performance tonight. I want to shoot in 1080 60p but was not sure how to finish it for Blu-ray. I forgot about the 720 60p option. I'll be shooting with 18" IA and the TD10's as I also need zoom. The performance will be in low light too, so slower shutter speed and wide-open iris as well.

I'll try to check in with my smart phone later today to see if you have any additional thoughts on this.
post #551 of 564 - 10-07-2012, 09:06 PM - MLXXX (Member, Brisbane, Australia)
Sorry, just saw your post. Don, all I can do is offer some thoughts. I agree with your decision to shoot at 60p. I think that whenever the action has rapid changes, 60p is safer for 3D using two asynchronous cameras because of the worst case timing disparity of only 1/120th second (rather than 1/48th second).

As for authoring, I have mixed feelings. As far as I know, frame interpolation is generally not available with shutter-glasses 3D flat panel displays and projectors. So if panning is rapid or the action itself is rapid, rendering 1920x1080p60 captured material at 24p will result in jitter that may be uncomfortable to watch, and may be too big a price to pay for the improved visible resolution on static scenes compared with 1280x720p60.

[ I want to experiment with some footage I recently captured in 3D at synchronous 24p that includes a jittery pan and some jittery action. I am going to try rendering it at 720p. If Vegas does a good interpolation job, the result may look better than 1080p24 on my shutter glasses display.]
post #552 of 564 - 10-07-2012, 09:57 PM - Don Landis (AVS Club Gold, Jacksonville, FL)
One of my projects got cancelled so that was a bust, but I did do the fireworks from a great distance (about 2 miles away), at 28" IA and locked down at 1080 60p. Next time I plan to do the fireworks using the NEX5n much closer and with the 12mm wide-angle lenses. None of these use live zoom or pans.
post #553 of 564 - 10-13-2012, 12:56 PM - Don Landis (AVS Club Gold, Jacksonville, FL)
I completed the editing of one project, the Magic Kingdom's Wishes performance. I shot this from two locations using two wide-stereobase TD-10 systems.

One camera set was recording 1080 60p and the other 1080 24p. I mixed the two in editing and rendered three ways:

1. 1080 24p for Blu-ray compatibility
2. 1080 30p WMV for YouTube upload, processing as I post this.
3. 720 60p as a comparison for Blu-ray playback.

I could find no artifacts when dissolving a 24p clip with a 60p clip. Vegas handled the rendering of these dissolved clips surprisingly well. However, the timeline would not play live without crashing at the dissolve. (I'm using V11, not V12.)

So far the motion is quite smooth on 1 and 2; 3 is still rendering. The WMV is a tad softer due to rendering at half-rez SBS and only an 8K bitrate. It is currently uploading and processing on YT.

I decided not to shoot the fireworks using the NEX5n pair because the one project at that location was cancelled. So I ended up shooting from a great distance and needed zoom capability, so I shot with twin TD10's rather than twin NEX5n's.
post #554 of 564 - 10-14-2012, 08:25 AM - JOAT09 (Member)
Don Landis: Congratulations, your work is impressive. It is very difficult to capture fireworks in 3D.

I'd like some help correcting the color of underwater footage in Sony Vegas 11. I recorded coral reefs at 12 meters deep with an unfiltered TD10 and need instructions for correcting the color. Does anyone know of a guide?
post #555 of 564 - 10-14-2012, 10:15 AM - Don Landis (AVS Club Gold, Jacksonville, FL)
My legacy is underwater film and video. I'm also a retired PADI Master Instructor and YMCA Gold Instructor with photo and video instructor specialties.

Here's what I can tell you about underwater color. Water is a selective color filter and the more distance between the light source and the camera, reflecting off the subject, the less reds you will have in your exposure. The simple solution to achieving the best color is to use a local light source, usually a mounted strobe or flood light near the camera. To reduce the back scatter of particulate matter in the shot, locate the light source off to one side or overhead aimed down on your subject. In very clear water such as Tropical warm water or underwater caves, you can even place light sources close to the subject and get back farther away from the subject for a wider shot and still maintain color. But the color in a natural light exposure from the sun will contain a fair amount of red content up to about 15 ft maximum. As you go deeper the sunlight from the surface begins to lose reds first then oranges and then yellows so that at about 60 feet of total light distance traveled under water the greens begin to disappear. This leaves only the blue light in the spectrum illuminating your subject in deeper water beyond 100 ft.

With that explanation, please understand that filters can correct the color balance only if there is something there to correct. If the deep water has filtered out all the warmer colors, then you will have nothing to rebalance (assuming the remaining light is all blue, there are no reds left to build up). Any attempt to rebalance may result in nothing more than a darkening of the blue and give you a darker, browner image as you begin to boost and shift blues toward browns. This is true whether you use a deeper color-correction filter or try to correct the colors using color-correction adjustments in the editing software. You may have some success using colorizing black-and-white methods, where you strip all the color back to a black-and-white image and then build the colors back into the shot one at a time, like they do when they colorize old B&W movies. This is a very laborious process.
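
To put that in concrete terms, here is a minimal, hypothetical sketch (an editor's illustration with assumed sample values, not Don's method) of simple per-channel white balancing. It shows why boosting a channel that the water has already filtered out mostly amplifies noise rather than restoring real color:

Code:
# Editor's sketch: per-channel gains that would neutralise a known gray patch.
def channel_gains(gray_reference_rgb):
    target = sum(gray_reference_rgb) / 3.0
    return [target / max(channel, 1e-6) for channel in gray_reference_rgb]

shallow = (120, 160, 200)    # some red still recorded near the surface
deep    = (2, 60, 180)       # almost no red recorded at depth
for sample in (shallow, deep):
    print(sample, "->", [round(g, 1) for g in channel_gains(sample)])
# Shallow: gains around 1.3 / 1.0 / 0.8 -- a usable correction.
# Deep: the red gain explodes to ~40x, which mainly amplifies sensor noise.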

FWIW - My last U/W video system used two 250-watt flood lights located on each side of the camera, plus one 350-watt flood I would give to my dive assistant to place in the shot near the subject. Underwater lighting is no easy task, but it gives you the best color, far superior to any correction filter except in shallow water, less than 10 feet deep; there the color-correction (rose-colored) filter will be better. Not at 12 meters! Smaller lights are commercially sold to underwater shooters, but these generally have an effective distance of only about 6-8 feet maximum. My 850-watt system would work up to 30 feet in clear water. But it weighed in at 85 pounds!
post #556 of 564 - 10-15-2012, 07:04 AM - MLXXX (Member, Brisbane, Australia)
Quote:
Published on Oct 13, 2012 by DonLandis
This 3D video was shot on two separate dates and edited together for two points of view. The close up shot was recorded on December 17th 2011 and the distant shot was recorded on October 6, 2012. The two performances of Wishes were very closely in sync which is an amazing testimony to the Disney capability.

The video was recorded using Twin Sony HDR TD10 cameras in 2D mode spaced 28 inches apart and paired for stereoscopic 3D in Sony Vegas Pro editing software.
Hi Don,
I downloaded the YouTube 1080p version and obtained a 298MB mp4 at 29.97p, which I imported into Vegas Pro. I saw that there was a lot of Vegas plug-in animated horizontal offset adjustment in the first 4 minutes.

There are so many possible choices for horizontal offset! For my own vision I like to use a setting that keeps the Left-Right separation at a minimum for the prominent parts of the content [subject of course to avoiding extreme separations elsewhere in the picture]. So, with the distant view, I would probably have chosen to set the convergence on the distant building, except for the climactic finale when the water is noticeably illuminated. Early in the video (23 seconds) where the boat passes by in the foreground I'd probably have set the convergence on the boat. My partner's vision can tolerate wider offsets than mine.

When watching the video I didn't notice any glitches with frame rate, apart from a possible anomaly at 1 min 56.583 sec, where the Left view of the tower is brightly illuminated but the Right view isn't. (This might have had to do with angles of reflection. Whatever caused it, it caught my eye.)

Thanks for this entertaining video.
post #557 of 564 - 10-15-2012, 08:59 AM - Don Landis (AVS Club Gold, Jacksonville, FL)
Thanks Jeff.

I made no Horizontal offset corrections in either clip that I recall.

In assembling the edit I used my original stereographic auto corrections from last December for the close clip. But after I uploaded the version you saw, I noticed some problems with the opening scenes where the castle did not converge visibly. It seemed OK on my 32" screen but didn't work on the 105" screen. So I made some new auto corrections and re-rendered. It looks cleaner now. Note: in Vegas the auto-correction does not affect horizontal offset, only vertical, keystone, zoom and rotation.

Normally, when I slice up a long clip to intercut with the distant shot, I would go back in and verify the auto-correct keyframes at the beginning and end of the new, shorter clip, and maybe add additional auto corrects midway as well. These corrections need to be done often, especially when I zoom with the camera or change the IA (which I rarely do).

This whole edit was mostly an experiment to test the ability to mix 60p and 24p video in an edit with dissolves. As far as I can see, that part was a success; it is possible without obvious problems. But in future projects I will try to shoot all video at a single frame rate. In January I plan to shoot 1080 60p on my wide-IA NEX5n wide-angle camera rig, with a single TD10 (1080 60i) in the center to record a close-up view of the same scene, then render to 24p, 60p and 30p as was done for the fireworks.

What is still a huge problem I deal with on every wide-IA project is the compromise on horizontal convergence between different screen sizes - what you described.


The recent movie Prometheus 3D was shot with Red Epic cameras in a beam-splitter configuration, mostly using four twin-camera locations. In this production I saw some scenes where the stereo did not converge on my 105" screen yet looked fine on my 32" screen. But this production also was shot on sets that were huge in size, mixed with normally close-in shots. Based on this observation, I don't think I'm alone in this screen-size conflict. Whether this is human error or a problem with the science of the screen-size compromise remains to be answered. The solution so far seems to be to use less IA so that the separation of left and right images is much less than what I like to see in the depth. In other words, go for more of a flat-object look and less exaggerated roundness in the scene.
post #558 of 564 - 10-15-2012, 10:18 AM - MLXXX (Member, Brisbane, Australia)
Quote:
Originally Posted by Don Landis

Note- In Vegas the auto-correction does not affect Horizontal, only vertical, Keystone, zoom and rotation.
Yes, the manual horizontal offset slider does not move when launching a Vegas stereoscopic auto-adjust. However, the actual horizontal offset (and the z value for convergence) can end up being affected, sometimes by quite a bit. I think this is because Vegas needs to crop and resize the raw Left and Right frames to implement some of its stereoscopic adjustments.

(Talking of large IAs, I recently tried a 70" angle bracket, supported in the middle with a tripod, with a webcam attached at either end. Not for use in a crowded location! Most of the footage was unusable - too much stereo content captured at short distances.)
post #559 of 564 - 10-15-2012, 12:27 PM - Wolfgang S. (Thread Starter, AVS Special Member, Vienna/Austria)
It is very true that the horizontal adjustment is not affected by the autocorrection. The reason is that it is the decision of the editor where he or she wishes to position the zero level in the footage. That is to some extent a technical decision, but to some extent also part of the art of editing s3D. So I think it is very sound that the autocorrection does not touch the horizontal settings.

The only reason why it touches those settings at all is that autocrop is nothing other than zooming into the footage. When I zoom into the footage, I will always increase the horizontal disparity by the zoom factor. That can be an issue if I work close to the edge.

The last point: with s3D we produce for a defined screen size. The positive parallax has to stay within the distance between our eyes, to avoid divergence. In terms of pixels, or disparity as a % of screen width, that means that for a huge screen we have to go for smaller disparities than for a smaller screen. That is well-known practice in s3D today. Besides being linked to the stereo base we use when shooting, this is still one of the major points one has to get right for s3D.
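
A rough back-of-the-envelope version of that screen-size budget (an editor's illustration with assumed screen widths, not from the post): the maximum comfortable positive parallax is roughly the adult eye separation, about 65 mm, regardless of screen size, so the allowed disparity in pixels shrinks as the screen grows.

Code:
# Editor's sketch: divergence limit expressed as a fraction of screen width.
EYE_SEPARATION_MM = 65.0
H_RES = 1920

screens_mm = {"32-inch TV": 710.0, "105-inch screen": 2320.0}   # approximate 16:9 widths
for name, width_mm in screens_mm.items():
    max_pct = 100.0 * EYE_SEPARATION_MM / width_mm
    max_px = H_RES * EYE_SEPARATION_MM / width_mm
    print(f"{name}: max positive parallax ~{max_pct:.1f}% of width (~{max_px:.0f} px)")
# 32-inch TV: ~9.2% (~176 px); 105-inch screen: ~2.8% (~54 px), which is why footage
# that converges comfortably on a small monitor can diverge on a big screen.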

Kind regards,
Wolfgang
videotreffpunkt.com
post #560 of 564 - 10-16-2012, 06:46 AM - MLXXX (Member, Brisbane, Australia)
I'm not sure I've explained myself sufficiently.

I am saying that the autocorrect feature for the Sony Vegas Stereoscopic Adjustment plug-in can change the horizontal separation of Left and Right if the source frames require substantial corrections.

In recent times I've used interaxial distances of around 600mm, 1200mm and 1800mm (24", 48" and 70") for two webcams. My workflow has been as follows:

1. Choose a point in the video where the cameras were static for a second or two (i.e. not in the middle of a pan), and capturing distant objects (nothing too close).

2. Activate autocorrect on the stereoscopic plug-in. [This causes at least a slight change in Left-Right separation, and sometimes a considerable change.]

3. Set the horizontal adjustment for convergence as desired. (Often I do this as an animated adjustment along the time line.) It's easy to see the extent of horizontal disparity if using a shutter glasses monitor operating in 3D mode but without wearing shutter glasses.


Don's recent YouTube fireworks video shows a variety of movement in the convergence in the first 4 minutes. See for example the change in the appearance of the castle from the 1min 36 sec mark (Left and Right images of the castle have nil, or near nil, separation) to the 2min 5 sec mark (Left and Right images of the castle have a large separation). I can't tell whether these changes were somehow associated with panning and zooming in the editing but in that particular time interval there seems to be very little panning or zooming.

I can say that with my own editing I get convergence changes solely through activating autocorrect (no manual change in zoom, etc.). And I find Vegas calculates slightly different correction parameters (with slightly different impacts on convergence), depending on the point in the timeline chosen to launch an autocorrection.

The main thing for me is to minimise eye strain. Where feasible, I like to set the convergence on the main object of interest. Of course that isn't always possible if the scene includes both very close and very distant objects and a high interaxial distance. Such scenes may necessitate a convergence in the middleground, even if the main object of interest is elsewhere (foreground/background).
post #561 of 564 - 10-16-2012, 09:48 AM - Don Landis (AVS Club Gold, Jacksonville, FL)
Quote:
Don's recent YouTube fireworks video shows a variety of movement in the convergence in the first 4 minutes. See for example the change in the appearance of the castle from the 1min 36 sec mark (Left and Right images of the castle have nil, or near nil, separation) to the 2min 5 sec mark (Left and Right images of the castle have a large separation). I can't tell whether these changes were somehow associated with panning and zooming in the editing but in that particular time interval there seems to be very little panning or zooming.

Jeff, repeating, I made some corrections to the closer shot clips that fixed the disparity issues after that video was uploaded. Originally, I was not concerned over the original keyframes in that older clip since my test was mainly to check for issues in the dissolves between clips of different frame rates. I took a look at the specific location you mention and the keyframes did need to be adjusted to correct the changing settings.

The one thing I have observed, is if you set an auto correct for a long clip and then slice it up into smaller clips, you really need to go back and do additional auto correct keyframes as the original settings never seem right after slicing the clip up. I don't think this is a bug, but rather just the nature of the beast.


If the cameras and subjects remain static, there should be no need to do more than one auto correct at the beginning of the clip. If you zoom in or out with the cameras, or have subjects change distance along the depth axis, or pan along a perspective depth line, then you need to do an auto correct at the beginning of the change and at the end of the change. This creates a ramp of correction through the image change. If the rate of image change itself changes, then you may need another keyframe set at that point. These corrections are needed for twin independent cameras because it is virtually impossible to achieve 100% perfect alignment at the huge distances wide-IA cameras are used for. So we do final keyframe alignment in post.

The horizontal offset keyframe, if incorrect, can generate lots of ghosting. For my completed projects, I set all the horizontal offset keyframes to reduce ghosting. Generally this balances the primary subject in positive, negative, or zero parallax based on the lens focal length, distances and I.A. I used for the shot.
post #562 of 564 - 10-17-2012, 01:31 AM - Wolfgang S. (Thread Starter, AVS Special Member, Vienna/Austria)
I tend to use autocorrect at the beginning of every clip. From time to time I also do that within a clip - but there I have seen that my Z10K had a defect that made it necessary (I have sent it in for service for that reason).

In my opinion, the major task in s3D shooting is to get the disparity right. Keeping it small is a nice way to reduce the stress for the viewer and to avoid ghosting.

Kind regards,
Wolfgang
videotreffpunkt.com
post #563 of 564 - 10-17-2012, 08:21 AM - Don Landis (AVS Club Gold, Jacksonville, FL)
I have spot-checked auto stereo adjustment on self-contained 3D clips from my Panny 3D1 and my TD10's, but they never need any adjustment. Only the paired clips from the twin TD10's and the twin NEX5n's require auto stereo adjust.

My old Sony 3D Bloggie also needed adjustment because it suffered a vertical alignment issue, first spotted by Frank right after I got it. However, I haven't used that camera in over a year.

Sorry to hear about your Z10k being defective. You seem to get quite a few defective cameras. I recall you had several TD10's that didn't align properly too.

One thing I learned in class: if you have to make any adjustments to a paired camera set, and the shot itself then changes - objects move on their own, the camera moves, or the lens zooms - you will need to make additional keyframe adjustments to compensate for the geometry change. It's just the math. You may recall the classroom exercise used by Sony with the airplane taxiing on a runway.
post #564 of 564 - 10-17-2012, 08:58 AM - Wolfgang S. (Thread Starter, AVS Special Member, Vienna/Austria)
The TD10 issue was not really a defect - it was more a question of whether the pairing works out or not, and it did not with my first TD10. But I still use that unit, to shoot standalone.

The Z10K defect really is a defect, confirmed by the Panasonic technical service engineer who tested it himself (that is the advantage of the better service: here in Austria the unit is handled by the professional service center and not the consumer guys). The Z10K shows a shift in the vertical position - which should not happen. Sometimes it was a slow drift, but sometimes it was a jump within less than a second. You can correct that with the auto-adjustment, and I have some footage where I have done so, but that is additional work and it is better to repair the defective unit. I hope the problem will be gone when I get the unit back.

Kind regards,
Wolfgang
videotreffpunkt.com