Originally Posted by forsureman
I was hinting at the glasses. So you're saying with the
- lcd at 120 Hz
-using this back light technology
-3D glasses at 120 Hz
-video at 120 fps
= there is no blur
How is the frame stereoscopically fused in the mind if the left and right frame don't fit together stereoscopically?
There's no such thing as "the left and right frame don't fit together stereoscopically" in this context -- the human vision system doesn't work on a discrete, frame-by-frame basis like that. "Persistence of vision" behaves differently -- for the maximum "soap opera" effect (often desired in high-framerate material such as video games and sports, but not always for movies or intentionally low-framerate material), it is better for the frame presentation to mimic real life.
Allow me to explain why it is more proper/natural at 120fps.
The easiest way to explain this: it's like the real world -- wear 120 Hz shutter glasses in real life but watch real life (example: sit in front of the TV with the shutter glasses enabled, then look away from the TV set and watch other real-life objects, such as a person walking across the room near the TV). The person is moving continuously. Your shutter glasses are blocking one eye at a time, so your two eyes never see the moving real-life object in exactly the same position at exactly the same instant in time. Yet your eyes see it without trouble, and the movement of real-life objects in the same room is still more fluid than movements on the TV.
Similar effects can be seen when looking between the slats of a tall picket fence from a moving vehicle or bicycle -- the movement lets you see more of the scene through the "slits" between the slats. Your left eye and right eye combine the views through persistence of vision, and your left eye gets different-angle views at different times than your right eye, yet you're able to assemble a scene; and if you're moving fast enough, you also get 3D depth perception as you whoosh by.
Eyes are always continuously tracking. When you watch a moving object as the shutter switches from left to right, if the object didn't move (in 3D space, in 3D mode), your vision system gets the effect of a repeated frame (even though it's from a different angle): your left eye saw the scene from one angle, then 1/120th of a second later, your right eye sees the same scene (of the exact same instant) from a slightly offset angle. But your eyes keep tracking continuously in 3D, even over a 1/120th-second interval, and if there's no movement in that 1/120th second, your vision system perceives the telltale judder of a frame repeat. That's exactly what happens.
So, to simulate real-life at the maximum fluidity with 3D shutter glasses, one could theoretically do:
- Capture the left-eye frame at even 1/120th-second intervals (T+0/120 s, T+2/120, T+4/120, T+6/120, ...)
- Capture the right-eye frame at odd 1/120th-second intervals (T+1/120 s, T+3/120, T+5/120, T+7/120, ...)
This would work properly only with alternate-frame shutter glasses (where the eyes take turns), but would be temporally mismatched on polarized systems (where both eyes see their images simultaneously). You wouldn't need to capture 120fps pairs, just capture at 120fps one frame at a time: left eye, right eye, left eye, right eye.
(Or render, of course -- as in 3D video games -- but this is not done in actual practice, since games always render both frames of a pair for the same in-game instant.)
This would make it mimic the real-life scenario I explained above, because of the 1/120th-second stepping.
Thus, you get the "soap opera effect". (Extra thought: this is theoretically already compatible with existing 3D systems -- you simply use existing 3D mastering workflows, except you advance the timing of the right-camera shutter by 1/120th of a second relative to the left eye, for 60fps material. So the left eye runs at 60fps on even-numbered 1/120th-second intervals, and the right eye runs at 60fps on odd-numbered 1/120th-second intervals, keeping capture time exactly relative to presentation time -- i.e. keeping everything "perfectly aligned" in capture-scene-time versus shutter-presentation-time, consistent with real life. But you run into problems when presenting on polarized systems, so this isn't something that is ever done in practice: a polarized system would, in this situation, have the unfortunate effect of simultaneously presenting left/right eye frames captured at different times. Just something that's possible to do.)
Another way is to simply capture both left/right eye pairs at 120fps, and present only one eye of each frame pair (e.g. the left eye of one pair, then the right eye of the next pair, then the left eye of the subsequent pair, and so on). This is obviously much less efficient, since half of the captured frames are never presented, but it produces exactly the same "soap opera effect" as the method above. This more-wasteful method would, however, maximize compatibility with multiple types of active and polarized systems (both 60Hz polarized and theoretical future 120Hz-native polarized).
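The more-wasteful method can be sketched the same way (again, the names are mine and purely illustrative): capture full left/right pairs at 120fps, then present only one eye per pair, alternating, so half the captured frames are discarded.

```python
# Sketch of the "capture full pairs, present alternating eyes" method.
# Illustrative names only -- not from any real capture/playback API.

def alternate_presentation(pairs):
    """pairs: list of (left_frame, right_frame) tuples, where each pair
    was captured at the same instant at 120fps. Returns the frames that
    actually reach the viewer: left eye of pair 0, right eye of pair 1,
    left eye of pair 2, and so on. The other half is never shown."""
    shown = []
    for i, (left, right) in enumerate(pairs):
        shown.append(left if i % 2 == 0 else right)
    return shown

pairs = [(f"L{i}", f"R{i}") for i in range(6)]
print(alternate_presentation(pairs))  # → ['L0', 'R1', 'L2', 'R3', 'L4', 'R5']
```

Because each pair was captured at a distinct 1/120th-second instant, the presented sequence ends up with the same 1/120th-second stepping as the first method -- which is why both produce the identical effect.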
None of this is being done in actual practice by filmmakers and videographers (at this time, to my knowledge, due to the incompatibilities it introduces for non-shutter-type 3D systems), except when doing motion interpolation, e.g. 120fps motion interpolation (supported by *some* 3D displays) -- then you're getting the 'real-life-smooth' feel of 120fps @ 120Hz, the soap-opera effect, through 3D shutter glasses.
This technique is doable with PC-based 3D too: games can always render both frames of a pair 120 times a second, even if only one of the two is ever seen by the human eye during high-framerate 120fps moments (until framerate slowdowns give time for the other eye of the same 3D frame pair to be shown).
It's still 60fps per eye, but video captured for the right eye's 60fps would have been photographed by the camera 1/120th of a second offset relative to the left eye. When you begin to think this way (in order to mimic real life, like looking away from the TV set and watching a person walk across the room while wearing 120Hz 3D glasses), it suddenly starts to make a lot more sense, though few people's knowledge of vision physics is this fine-tuned... If you're still confused, I will try to find some references, but one good reference is this one:
"Perceptually-motivated Real-time Temporal Upsampling of 3D Content for High-refresh-rate Displays
It may not cover all the bases, but it certainly explains a lot of this scientifically.