Originally Posted by Chuck Anstey
The next standard must be complete, with zero need to ever expand it as display technology improves. That means it should not be defined by what current displays can do in bit depth and color space, but by open-ended resolution and bit depth and the full color space the typical human eye can see. Then create the appropriate standards to convert from that space down to whatever the current display is capable of, possibly with a reasonable set of fixed tiers: for example, resolutions limited to 2K, 4K, and 8K; fixed refresh rates from 24p up to at least 144p (2 eyes x 3 x 24p for 3D); and color spaces of Rec. 709, some expanded color space, and the complete color space, with support for up to 16-bit depth. I think the key to the new standard is that it meets or exceeds what is physically practical for home viewing, even if current displays are nowhere near that capability.
Ideally they would also completely separate the display from the broadcast/encoding standards, so that I could hook up any new source to my display and it would just work. We had this in the old analog days, when many projectors could handle HDTV long before the standard even came out, because the display protocol (analog RGB) was independent of the HDTV standard. Then they broke it by saying that if your device can't talk HDMI with the specific standards, your display is not allowed. So we run into the problem we have right now: every time a new standard comes out, the display itself becomes obsolete. Blu-ray 3D is a good example: if all you have is a 3D disc, you must have a new 3D-capable player and a 3D display, or you cannot use the disc at all. It would have been far superior to plan ahead so the disc worked in 2D players, or at the very least in 3D players connected to 2D displays, since the "conversion" is as simple as displaying only the left eye's image. The same problem exists with Deep Color. Don't require my display to talk the latest transmission protocol, or it cannot be used!
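For what it's worth, the down-conversions described above really are mechanical. Here is a minimal sketch (Python/NumPy) of two of them: mapping a wider gamut down to Rec. 709, assuming linear-light values and plain clipping standing in for proper gamut mapping, and the left-eye-only 3D-to-2D fallback, assuming a side-by-side packed frame. The chromaticities are the published BT.709/BT.2020 primaries with a D65 white point; everything else here is illustrative, not a real pipeline.

```python
import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_w):
    """Build the linear RGB -> CIE XYZ matrix from chromaticity coordinates."""
    def xyz(xy):
        x, y = xy
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    prim = np.column_stack([xyz(xy_r), xyz(xy_g), xyz(xy_b)])
    scale = np.linalg.solve(prim, xyz(xy_w))  # scale primaries to hit the white point
    return prim * scale

# Chromaticities from the published specs; both use a D65 white point.
M2020 = rgb_to_xyz_matrix((0.708, 0.292), (0.170, 0.797), (0.131, 0.046), (0.3127, 0.3290))
M709 = rgb_to_xyz_matrix((0.640, 0.330), (0.300, 0.600), (0.150, 0.060), (0.3127, 0.3290))

def bt2020_to_bt709(rgb_linear):
    """Down-convert linear-light BT.2020 RGB to BT.709, clipping out-of-gamut values."""
    m = np.linalg.inv(M709) @ M2020
    return np.clip(rgb_linear @ m.T, 0.0, 1.0)

def stereo_to_2d(frame):
    """3D-to-2D fallback: keep only the left eye of a side-by-side packed frame."""
    return frame[:, : frame.shape[1] // 2]
```

A real converter would also handle the transfer functions and do perceptual gamut mapping instead of clipping, but the point stands: the fallback paths could have been specified from day one.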
I agree with your desire, but this is impossible, not just because technology evolves, but because technology evolves in unpredictable ways: the questions we're trying to answer and the problems we're trying to solve with current video paradigms won't necessarily be the same questions and problems in 10, 20, or 30 years.
For instance, right now we take for granted that video media captures and reproduces flat images (3D being defined as two flat stereo images in parallel) in the shape of a rectangle (16:9, 21:9, 1.33:1, etc.; there are a few variations, but they're all flat rectangular boxes). Given those assumptions, we make further assumptions, such as that images are recorded as a matrix of square pixels, produced by sampling and quantizing light along a neat, grid-like array of sample points.
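As a toy sketch of that sample-and-quantize model (Python/NumPy; the scene function, resolution, and bit depth are made-up examples, and real sensors integrate light over each photosite rather than point-sampling):

```python
import numpy as np

def capture(scene, width, height, bit_depth=8):
    """Point-sample a continuous scene on a square grid and quantize each sample.

    `scene` is any function mapping (x, y) in [0, 1) x [0, 1) to intensity in
    [0, 1]; the grid of sample centers is the "neat array" described above.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    values = scene((xs + 0.5) / width, (ys + 0.5) / height)
    levels = 2 ** bit_depth - 1
    return np.round(values * levels).astype(np.uint16)  # the stored pixel matrix

# Example: a smooth diagonal gradient captured at 1920x1080, 10-bit.
frame = capture(lambda x, y: (x + y) / 2, 1920, 1080, bit_depth=10)
```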
From there, we argue about what pixel density and color depth are necessary for those grid-arrayed pixels to make images appear lifelike from a given viewing distance, and about how many frame captures per second we need to represent natural-looking motion.
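That viewing-distance argument is just geometry. Here is a rough calculator, assuming the common rule of thumb that 20/20 acuity resolves about one arcminute per pixel, i.e. roughly 60 pixels per degree; the 65-inch screen width and 2.5 m couch distance are arbitrary examples:

```python
import math

def pixels_per_degree(h_pixels, screen_width_m, distance_m):
    """Angular pixel density at the center of a flat screen."""
    width_deg = 2.0 * math.degrees(math.atan(screen_width_m / (2.0 * distance_m)))
    return h_pixels / width_deg

# A 65" 16:9 panel is about 1.44 m wide.
for name, h_pixels in [("2K", 1920), ("4K", 3840), ("8K", 7680)]:
    ppd = pixels_per_degree(h_pixels, 1.44, 2.5)
    print(f"{name}: {ppd:.0f} px/deg")
```

By that yardstick, 1080p already sits right at the ~60 px/deg acuity limit in this setup and 8K is far past it, which is why "meets or exceeds what the eye can use" is a definable target for flat rectangles; the trouble starts when the rectangle assumption itself goes away.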
What if, 20 years from now, we abandon this "linear sampling" methodology in favor of fractal algorithms that represent the original image and can be rendered at a theoretically infinite output resolution? What if we abandon the whole idea of flat rectangular image capture and go with holographic capture that measures a depth axis in addition to the other image parameters, so we can recreate real holographic playback, viewable from any angle in true 3D? Or what if we develop cameras that record 360 degrees horizontally and vertically, giving us "dome-shaped images" for the ultimate immersion experience? Imagine the video game or nature documentary in dome video. :-)
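The fractal idea is less exotic than it sounds as a toy: any image defined by a formula is already resolution-independent. A tiny illustration (Python/NumPy, using the Mandelbrot set purely as a stand-in for whatever representation a future codec might actually use):

```python
import numpy as np

def render_fractal(width, height, max_iter=100):
    """Render the same mathematical description at any requested resolution.

    The "master" is a formula rather than a pixel grid, so nothing is being
    upscaled: every output size is a fresh, full-detail evaluation.
    """
    xs = np.linspace(-2.0, 1.0, width)
    ys = np.linspace(-1.2, 1.2, height)
    c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
    z = np.zeros_like(c)
    escape = np.zeros(c.shape, dtype=np.uint16)  # iteration count per sample
    for i in range(max_iter):
        mask = np.abs(z) <= 2.0
        z[mask] = z[mask] ** 2 + c[mask]
        escape[mask] = i
    return escape

thumbnail = render_fractal(192, 108)    # preview-sized render
uhd_frame = render_fractal(3840, 2160)  # "4K" render from the same description
```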
Whatever the next generation of cutting-edge displays can deliver, it won't be long before ultra-realism breaks out of the flat 16:9 box.