Originally Posted by coderguy
That's too long for me to respond to in a detailed fashion, let's keep it simple, as I have forum fatigue from this thread.
You would almost be correct except there is a very serious problem with what you said above, and that is that (a) the pixels are no longer in the same place once shifted, (b) video content and objects aren't always moving between frames, and (c) the root characteristic of flashing a displaced pixel grid every other frame is too fast to interfere with a+b to the point where you could objectively say we never see enough extra detail to be somewhat representative of the original non-interpolated 4k source. See flicker fusion...
It's the same regurgitated arguments we have seen throughout this thread. I will just say, it doesn't matter how you add detail, there is a complex chaotic relationship between added detail and what the lens shows spatially and temporally.
The problem is "what the lens has the potential to show" and what the lens actually is showing. With digital data, the potential changes of what the lens can show, especially if you start messing with stuff in the hardware.
These analog limitations people keep envisioning in their minds do not exist with digital data that is being altered in both physical and virtual forms both spatially and temporally.
You're not quite getting what I'm referring to.
Nothing that can be done with digital data by any form of software can overcome the physical resolution limitations of the lens system. A given lens system can only resolve a specific maximum resolution at a defined MTF value using a defined measurement procedure.
Let us assume for the moment that the pass/fail limit for MTF is 20 percent, just to pick a number.
Let us assume a specific 1080p projector, with an imaging device whose effective diagonal is 0.8 inches, is evaluated and found to achieve its rated 1080p resolution at a measured MTF of 20 percent:
a pass, but just barely.
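To put rough numbers on the geometry (a sketch only: the 0.8-inch diagonal is from the example above, and the 16:9 aspect ratio is my assumption; the lp/mm figures simply follow from pixel pitch):

```python
# Back-of-the-envelope spatial frequencies the lens must pass, for the
# hypothetical 0.8" 16:9 imager in the example above.
import math

diag_in = 0.8                        # imager diagonal, inches (from the example)
diag_mm = diag_in * 25.4
# 16:9 aspect: width = diagonal * 16 / sqrt(16^2 + 9^2)
width_mm = diag_mm * 16 / math.hypot(16, 9)

def nyquist_lp_per_mm(h_pixels, width_mm):
    """Finest line-pair frequency the pixel grid can represent (Nyquist)."""
    pitch_mm = width_mm / h_pixels   # pixel pitch
    return 1 / (2 * pitch_mm)        # one line pair spans two pixels

f_1080p = nyquist_lp_per_mm(1920, width_mm)
f_4k    = nyquist_lp_per_mm(3840, width_mm)

print(f"imager width: {width_mm:.1f} mm")      # ~17.7 mm
print(f"1080p Nyquist: {f_1080p:.0f} lp/mm")   # ~54 lp/mm
print(f"4K Nyquist:    {f_4k:.0f} lp/mm")      # exactly double: ~108 lp/mm
```

The point of the numbers: on the same imager, a 4K pixel pitch asks the lens to hold its MTF at exactly twice the spatial frequency that 1080p does.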
Now let us remove that lens and place it on a similar projector equipped with e-shift or an equivalent technology (I'm not focusing only on JVC's implementation). This projector also has a native 1080p imager with a 0.8-inch diagonal, and in every respect except the wobulation system (e-shift, or whatever) it is optically identical.
When fed 4K content, e-shift does its thing to create a quasi-simulated, 4K-ish picture.
Capture the output of the optical engine as it enters the lens, over two consecutive frames: one non-shifted, one shifted. The theoretical result is two 1080p images superimposed on each other with a diagonal offset of half a pixel. This can be thought of as "virtual" 4K. How many distinct picture elements that amounts to is a question worthy of its own discussion: each frame carries only 1080p's pixel count, so the two frames (shifted and not) together amount to 2x 1080p. But the "virtual" subdivision of the pixels takes the boundary grid up to 4K, minus two missing corner pixels.
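The bookkeeping above, in plain numbers (nothing here beyond the counts already stated):

```python
# Pixel counts for one e-shift cycle vs. native 4K UHD.
native_1080p = 1920 * 1080        # 2,073,600 addressable pixels per frame
two_frames   = 2 * native_1080p   # unshifted + shifted frame pair: 4,147,200
native_4k    = 3840 * 2160        # 8,294,400 distinct pixel positions

# The half-pixel diagonal shift subdivides each 1080p pixel into a 2x2
# grid of "virtual" sub-pixels, so the boundary grid has 4K's pitch even
# though only half of 4K's worth of independent samples is delivered.
print(two_frames)                 # 4147200
print(native_4k)                  # 8294400
print(native_4k // two_frames)    # 2
```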
And here's the thing: that virtual 4K image, which carries all the ANGULAR information of a 4K image, is beyond this lens's capacity to resolve at the defined MTF value of 20 percent. It can resolve the non-shifted and the shifted frames individually, since each is 1080p, but it does not have the capacity to display 4K at the same MTF value at which it displays 1080p.
Nobody would argue that this lens is capable of displaying native 4K at the same MTF value.
But the combined, e-shifted, simulated-4K image has, apart from the two missing corner pixels, exactly the same amount of ANGULAR information to be RESOLVED as a native 4K image. And this lens does not have that resolution capacity at the defined MTF value.
The lens does not care at all if it's being fed the 4K pixels one pixel at a time or all together at once. The lens has no temporal transfer function. Regardless, THIS lens doesn't have the resolution needed to achieve the target MTF value for a 4K pixel pitch.
You can't get something for nothing and you can't resolve a 4K image, native or e-shifted, through a lens that doesn't have the angular resolution capacity to RESOLVE the smaller angular variation of 4K pixels.
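One way to see that the same lens necessarily has less contrast to give at a 4K pixel pitch: even an ideal, aberration-free, diffraction-limited lens (a best case no real projector lens reaches; the f-number and wavelength below are arbitrary assumptions for illustration) has an MTF that falls monotonically with spatial frequency, so halving the pixel pitch always costs contrast:

```python
import math

def diffraction_mtf(f_lp_mm, f_number=2.5, wavelength_mm=550e-6):
    """Textbook MTF of a diffraction-limited lens with a circular aperture
    at spatial frequency f (line pairs per mm). Illustrative best case only."""
    f_cutoff = 1 / (wavelength_mm * f_number)   # ~727 lp/mm with these values
    x = f_lp_mm / f_cutoff
    if x >= 1:
        return 0.0                              # beyond cutoff: no contrast at all
    return (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

# Approximate Nyquist frequencies for 1080p and 4K pitches on a 0.8" imager
mtf_1080p = diffraction_mtf(54)    # ~0.91
mtf_4k    = diffraction_mtf(108)   # ~0.81: less contrast at the finer pitch
print(round(mtf_1080p, 2), round(mtf_4k, 2))
```

A real lens's MTF curve sits well below this ideal one and falls faster, which is exactly why a lens can pass 20 percent at a 1080p pitch yet fail it at a 4K pitch.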
Ultimately, lens resolution is all about ANGULAR resolution. That's the point I've been trying to make all along, and I've finally found the right words for it: ANGULAR RESOLUTION.
YES, you can get both 1080p frames, shifted and non-shifted, through the lens, and each will individually resolve as well as a standard 1080p frame without any form of e-shift. Even moved slightly (half a pixel's worth), a 1080p frame will still resolve. But the lens doesn't have the angular resolution for a 4K pixel pitch, no matter how that pitch is produced.
The difference is that the shifted image creates a "virtual" 4K image in this respect: each pixel in the non-shifted frame is overlaid by the intersection of four pixels in the shifted frame, creating four smaller pixels (as far as the boundaries are concerned) in place of a single unshifted 1080p pixel. Those "virtual" 4K pixels are the problem: they require twice the angular resolution capacity from the lens. The shift ITSELF is picture information.
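The overlay geometry can be sketched numerically. This is a toy model, not any vendor's actual wobulation algorithm: two tiny frames stand in for the unshifted and shifted 1080p images, each source pixel is expanded to a 2x2 block of sub-pixels, and the second frame is offset one sub-pixel (half an original pixel) diagonally before the two are averaged where they overlap:

```python
import numpy as np

def eshift_superpose(frame_a, frame_b):
    """Superimpose frame_b over frame_a with a half-pixel diagonal shift,
    on a 2x-upsampled "virtual" sub-pixel grid. Toy sketch only."""
    h, w = frame_a.shape
    canvas = np.zeros((2 * h + 1, 2 * w + 1))
    weight = np.zeros_like(canvas)
    # each source pixel covers a 2x2 block of sub-pixels
    up_a = np.kron(frame_a, np.ones((2, 2)))
    up_b = np.kron(frame_b, np.ones((2, 2)))
    canvas[:2 * h, :2 * w] += up_a
    weight[:2 * h, :2 * w] += 1
    # diagonal shift by one sub-pixel = half an original pixel
    canvas[1:2 * h + 1, 1:2 * w + 1] += up_b
    weight[1:2 * h + 1, 1:2 * w + 1] += 1
    # average where the two frames overlap; the two opposite corner
    # sub-pixels are reached by neither frame (the "missing corners")
    return canvas / np.maximum(weight, 1)

a = np.random.rand(4, 4)   # stand-in for the unshifted frame
b = np.random.rand(4, 4)   # stand-in for the shifted frame
virtual = eshift_superpose(a, b)
print(virtual.shape)       # (9, 9): the sub-pixel grid plus the shifted row/column
```

Note that the interior sub-pixels are each the average of one pixel from each frame, so the fine grid is real geometry but not four independent samples per original pixel, which is exactly the distinction being argued.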