Originally Posted by Stereodude
I don't think a single person in the thread has argued what you're trying to claim they have. I have said over and over that if the lens is adequately sharp for 1080p it is adequate for realizing the MTF rolling off extended resolution of 1080p e-shift. We don't need a better lens as Darin claims. And by adequately sharp I don't mean has an MTF that is just barely distinguishable from 0 with a 1080 TV line pattern as you seem to think. I mean it has sharp & clearly defined 1080p pixels.
Your version of "adequate" is subjective, no matter how many times you say it.
And, from the beginning many have agreed that if you have a lens that "has sharp and clearly defined pixels" (which brings with it a well-defined grid/SDE) then that lens will show most of the benefit available in the e-shift system, because in that case, the lens is a minor influence on the overall system MTF.
This discussion is not about the question of "how well do currently supplied lenses work?" These are thought experiments, and the OP question contemplated, in Darin's words:
"There was one camp ... who felt that lenses only need to be good enough for the resolution of one of the e-shift frames and not for the finer elements of the full image frames that include 2 e-shift frames, since those frames don't pass through the lens at the same time."
That proposition must mean that some believe something different happens to the subframe images when they are "in the lens at the same time" that doesn't happen when they "go through separately", because the eye/brain will receive the information the same way in either case.
Originally Posted by Dave Harper
As soon as you ...
A. Flash the two native sub-frames at the same exact moment in "time"...
Then you must move to a higher resolution lens with greater MTF,
Here's a thought experiment, just since we're having so much fun here.
Forget about actual projectors and actual lenses and assertions about them, for this thought experiment.
Imagine a full-resolution 4k DMD chip with such a high fill factor (e.g. 98%) that we can barely see the grid even with a super-duper diffraction-limited lens, so we can ignore the grid for the discussion below; we just see pixels with sharp edges just touching. (Edit: they are getting closer.)
We can use a somewhat worse lens (than a diffraction limited one) and still not detect deterioration of the image from the viewing distance because the increased blur at the pixel edges is so small compared to the size of the pixels that the edges still appear sharp - your criterion. The information from each pixel does not contaminate any of its neighbours.
If we use progressively worse lenses we will come to a point where viewers notice/perceive the degradation (like at the optician: "Is 1 sharper than 2?" while switching lenses and viewing letters on the chart). Let's call this the lens "threshold" for this 4K chip at our viewing distance - i.e. the quality where a "better lens" would not be noticeable. Let's call it 100 on some arbitrary (and non-linear) scale. The scale combines 1) the digital input signal MTF and 2) the lens MTF into the system MTF, and 3) includes our perception of the information. Holistic, even, as Highjinx might say.
It might be a little different depending on the image we choose - test patterns versus real-world images because of eye/brain processing etc, but let's accept the principle.
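If it helps, here's that combined scale in toy form - a minimal numpy sketch where the system MTF (before perception) is just the chip/signal MTF times the lens MTF. The Gaussian lens shape, the sinc pixel response and every number are my own illustrative assumptions, not measurements of any real lens, and the perception step isn't modelled at all:

Code:
import numpy as np

# Toy model: pre-perception system MTF = chip/signal MTF x lens MTF.
# The shapes and numbers below are illustrative assumptions only.
f = np.linspace(0.0, 0.5, 6)            # spatial frequency in cycles/pixel (0.5 = Nyquist)
chip_mtf = np.abs(np.sinc(f))           # idealised square-pixel (sample-and-hold) response

def lens_mtf(f, quality):
    # "quality" stands in for the arbitrary, non-linear 0-100 scale in this post
    return np.exp(-(f / (0.01 * quality)) ** 2)

for q in (100, 70, 50):                 # the threshold lenses discussed here and below
    print(q, np.round(chip_mtf * lens_mtf(f, q), 2))

The only point of the toy is that the lens is one multiplicative factor in the chain, so how much it matters depends on what the signal and our eyes are doing at those frequencies.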
Now, I bin pixels together in squares of 4 in the processor to project a 1080p input image (each 1080p pixel created by 4 mirrors from the 4k device). It will not look as sharp as the 4k image, obviously, as everyone agrees, because of the decreased information content in the same screen area and the bigger pixels. So I can start making the lens worse and doing comparisons, asking "is 1 (our 4k threshold lens) better than 2 (a worse lens)?" We will be able to use a worse lens before people are able to say "yes" to our question. As expected, a 1080p image doesn't require as good a lens as a 4k image. Let's call that lens a 50 on our scale. So far, we're at "D'oh", right?
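In case the binning step isn't clear, a tiny numpy sketch of what I mean (sizes and names are just placeholders): each 1080p pixel is replicated onto a 2x2 block of 4k mirrors.

Code:
import numpy as np

frame_1080 = np.random.rand(1080, 1920)            # stand-in 1080p input image
binned_4k = np.kron(frame_1080, np.ones((2, 2)))   # each pixel drives a 2x2 block of mirrors
assert binned_4k.shape == (2160, 3840)             # 4k chip showing only 1080p's worth of detail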
Now for the third part. I take the data from two subframes (those an e-shift processor would give me from a 4k source to be displayed by a 1080p chip with a shift device) and shift and superimpose them in the processor into a 4k array. I now have a 4k data array carrying only the superimposed image data from the 1080p subframes, going to the chip and through the lens "at the same time". It obviously doesn't have 4k's worth of detail information, but it projects all the information (from the 4k -> 2x 1080p processor) that the combination of the two subframes can carry. People here tell us that it IS significantly better than a single 1080p image, so it must be carrying more detail information, right? When we do the lens testing, the same people who say "e-shift is better" will be able to tell the difference and will set the threshold at a 70 lens, for example, to resolve that extra information they perceive from shift projectors.
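Roughly what I mean by "superimpose them in the processor", as a sketch - I'm assuming a diagonal offset of one 4k pixel (half a 1080p pixel) between the two subframes:

Code:
import numpy as np

sub_a = np.random.rand(1080, 1920)                    # stand-in e-shift subframe A
sub_b = np.random.rand(1080, 1920)                    # stand-in e-shift subframe B

up_a = np.kron(sub_a, np.ones((2, 2)))                # place each subframe on the 4k grid
up_b = np.kron(sub_b, np.ones((2, 2)))
shifted_b = np.roll(up_b, shift=(1, 1), axis=(0, 1))  # diagonal shift of one 4k pixel
composite_4k = (up_a + shifted_b) / 2                 # both subframes "at the same time" in one array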
Now we take those same subframes and send each to our 4-pixel-binned chip so it projects 1080p images, projecting them sequentially by adding an e-shift device so the superimposition happens on the screen, after the lens. Which lens will people pick as the threshold? A 50 or a 70?
I think the piece missing from Stereodude's argument is the eye/brain processor in the assessment of the overall MTF of the system AS WE SEE IT.
The eye/brain never sees a single subframe on its own; it sees the successive superimpositions, so considering a subframe in isolation doesn't make sense to me when discussing how sharp a picture appears.
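And for what it's worth, as far as the optics go the two cases can't differ, as long as the lens blur is linear and shift-invariant and the eye integrates the successive subframes - a quick numpy/scipy sketch, where the Gaussian filter is just a stand-in for whatever blur a real lens has:

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

up_a = np.kron(np.random.rand(1080, 1920), np.ones((2, 2)))
up_b = np.roll(np.kron(np.random.rand(1080, 1920), np.ones((2, 2))), (1, 1), (0, 1))

lens = lambda img: gaussian_filter(img, sigma=1.5)   # stand-in for a linear, shift-invariant lens blur

sequential = (lens(up_a) + lens(up_b)) / 2           # subframes through the lens one at a time, eye integrates
simultaneous = lens((up_a + up_b) / 2)               # superimposed array through the lens at once
print(np.allclose(sequential, simultaneous))         # True - convolution is linear

Whether the eye/brain really integrates the subframes that cleanly is of course the real question, which is exactly my point above.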