Scott Wilkinson's next episode of Home Theater Geeks will be about Ultra-D. I'm really looking forward to this one. When he announced his guest, Scott noted that he thinks this technology works. Another person I trust on this issue is Don Landis (because he and I are both 3D shooters/editors). He came away from CES with a positive impression of Ultra-D technology. Don't write it off because of one negative review. What I hope Scott can wrangle from his guest is info on release, pricing and availability. Ultimately, I'll reserve judgment until I see Ultra-D for myself.
In another thread I presumed that Don Landis had given Ultra-D two thumbs up based on his impression, but Don stated that he never did. It seems he has some reservations regarding certain aspects of Ultra-D. Then again, Don's standard for 3D is way, way higher than mine.
The following discussion also belongs in this thread:
Originally Posted by Paul H
Would like to know whether Ultra-D delivers unaltered native-3D views of movies on their glasses-free displays when using Blu-ray 3D discs.
On another forum there is a discussion claiming that Philips demonstrations since 2008, Dolby 3D TV, and "other glasses-free TVs" are "using an 'on the fly' conversion algorithm to take the left eye from a 3-D Blu-ray and synthesize a new right eye with interpolated in-between images." There is also a part that is bothersome, for purist enthusiasts at least: "to create the multiple views, the manufacturers are throwing away the original stereo and synthesizing new depth from a single view. In some cases, this means preparing new content ahead of time" for an "in-house conversion of 3-D content within the parameters that work for their TV." Would the part that specifically concerns Dolby ("several of the content creators... are disgusted that Dolby has destroyed their stereoscopic intent without noting that the 3-D has been modified from its original format") be an accurate statement for Ultra-D glasses-free demonstration preparations as well?
With a stereo source, they aren't "throwing away" the native stereo. Rather, that L/R difference is used as the basis for creating the alternate angles.
Think of it more like old-fashioned ProLogic surround sound... you've got a stereo source... but the decoder is able to synthesize center and rear channels from the mix.
This analogy isn't 1:1, since the "stereo" image isn't specifically encoded for Ultra-D decoding. However, in principle their algorithm uses the native L/R signal as the foundation for the additional information.
In essence, they are augmenting/enhancing the original stereo signal... not "destroying" it.
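To make that concrete, here's a minimal toy sketch of the general technique (my own illustration, not Ultra-D's actual, proprietary algorithm): given one view plus a per-pixel disparity map derived from the L/R pair, you can forward-warp pixels a fraction of their disparity to synthesize an intermediate viewpoint. The stereo information isn't discarded; it's exactly what drives the warp.

```python
# Hypothetical sketch of intermediate-view synthesis (NOT Ultra-D's real code):
# shift each pixel of the left view by a fraction of its disparity.
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Forward-warp `left` toward the right view. alpha=0 gives the left
    view back; alpha=1 approximates the right view."""
    h, w = left.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            nx = x - int(round(alpha * disparity[y, x]))  # shift along baseline
            if 0 <= nx < w:
                out[y, nx] = left[y, x]
                filled[y, nx] = True
    # Naive disocclusion handling: copy the nearest filled neighbor to the left
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out

# Toy example: a 1x8 "image" with a bright object at x=2 that has disparity 2
left = np.array([[0, 0, 9, 0, 0, 0, 0, 0]], dtype=float)
disp = np.array([[0, 0, 2, 0, 0, 0, 0, 0]], dtype=float)
mid = synthesize_view(left, disp, 0.5)  # halfway view: the object shifts by 1
```

Real systems are vastly more sophisticated (sub-pixel warping, occlusion reasoning, many output views), but the point stands: the L/R differential is the raw material, not something thrown away.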
I understand, from a purist point of view, the initial negative reaction towards anything that alters the source signal. But in a digital-signal-processing world, smart algorithms can arguably enhance the signal in ways that are genuinely additive rather than destructive.
Think of high-quality upscaling like Sony's RC (Reality Creation), which re-writes the original 1920 x 1080 data points within the upscaled 4K image. Or think of Wadia's resolution enhancement for 16-bit audio, which actually *moved* the original 16-bit word measurements to try to replicate more natural acoustic waveforms, based on an understanding of the limitations of 16/44.1 quantization/sampling. That process changed the original information (it didn't merely add new data between the source data points). Yet it sounded more like the original analog waveform prior to A/D conversion.
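For a sense of how "moving" quantized data points can recover resolution, here's a loose illustration (not Wadia's proprietary algorithm, just the underlying idea): fitting a smooth curve through coarsely quantized samples yields values *between* the original quantization steps, and nudges the original points toward the inferred underlying waveform.

```python
# Toy resolution-enhancement sketch: smoothing quantized samples with a small
# FIR kernel produces output values no longer locked to the coarse step size.
import numpy as np

def resolution_enhance(samples, kernel=(0.25, 0.5, 0.25)):
    """Smooth quantized samples; output amplitudes fall between the
    original quantization steps (finer-than-1-LSB resolution)."""
    x = np.asarray(samples, dtype=float)
    return np.convolve(x, kernel, mode="same")

# A quantized ramp in steps of 1 LSB
quantized = np.array([0, 0, 1, 1, 2, 2, 3, 3], dtype=float)
enhanced = resolution_enhance(quantized)
# enhanced now contains intermediate amplitudes such as 0.25 and 0.75,
# inferred from neighboring samples rather than present in the source.
```

Whether the inferred waveform is *truer* to the original analog signal is exactly the subjective question raised below; the sketch only shows that modifying the data points is how such processing works at all.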
Let's assume that well-employed digital algorithms should be given an opportunity to prove themselves, even if in doing so they "modify" the original signal in some way.
Originally Posted by Paul H
Synthesizing new depth from a single view or original stereo?
Originally Posted by DaViD Boulet
Either scenario still applies to the notion that a "modified" signal, by nature, may not be a bad thing for an audio/video-phile... it depends on the goal of the listener/viewer and on the effectiveness of the processing.
That's why I'm encouraging us not to rule out the possibility of a genuinely high-quality result with Ultra-D (though I'm sure their result is better when they are able to start with a stereo image rather than a mono one... they've said so as well).
Sony's RC creates detail by comparing the image to libraries of objects and textures... recognizing "things" in the picture that can be handled according to known real-world rules and patterns.
Why couldn't "depth" be handled similarly? A sophisticated image analysis... especially one that has the luxury of starting with a stereo image pair and its Left/Right differential... could intelligently interpolate data along the z-axis.
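The stereo pair really does encode the z-axis. As a toy sketch of the general idea (classic block-matching disparity estimation, not Sony's or Ultra-D's actual analysis): the horizontal offset that best aligns a patch between the left and right views is a direct proxy for depth.

```python
# Hypothetical block-matching sketch: find the horizontal shift (disparity)
# that minimizes the patch difference between left and right views.
import numpy as np

def disparity_for_pixel(left_row, right_row, x, patch=2, max_disp=4):
    """Return the disparity (pixels) minimizing squared patch error."""
    best_d, best_err = 0, float("inf")
    target = left_row[x - patch:x + patch + 1]
    for d in range(max_disp + 1):
        if x - d - patch < 0:
            break  # candidate window would fall off the image
        candidate = right_row[x - d - patch:x - d + patch + 1]
        err = float(np.sum((target - candidate) ** 2))
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# A feature at x=6 in the left row appears at x=4 in the right row,
# so its disparity (and hence its nearness to the camera) is 2.
left_row = np.array([0, 0, 0, 0, 0, 0, 9, 0, 0, 0], dtype=float)
right_row = np.array([0, 0, 0, 0, 9, 0, 0, 0, 0, 0], dtype=float)
d = disparity_for_pixel(left_row, right_row, 6)
```

Nearer objects show larger disparity, so a per-pixel map of these offsets is effectively a depth map, the foundation any multi-view synthesizer can build on.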
Again, we benefit from DSP in many forms that we don't criticize for changing the original signal... rather we tend to criticize DSP algorithms that simply don't do a very good job of what they are supposed to do.
I'm a purist too... I'm one of those guys who hears "jitter" and can't stand lossy compressed music even when driving at highway speeds in the car. However, I also appreciate good resolution enhancement with digital audio... Audio Alchemy was my first processor to dare to take 16-bit word lengths and modify the original data points to accommodate 20-bit resolution, with subjectively higher fidelity than the "original" 16-bit source. How can it be judged as "better"? Once you leave the realm of being "100% faithful to the source signal" as the golden rule, I grant you that it becomes subjective. But subjective evaluation is a risk worth taking when your 16-bit audio library starts to sound more like the session master tape or mixing-console signal and less like a 1980s digital-resolution compromise.
So let's be slow to criticize Ultra-D on the basis of a "modified signal" alone. Instead, let's see how successful they are at applying the principles of human vision/perception to create an image that looks more realistic and naturally "3D" than the original parallax-based stereo L/R signal on its own.