Resolution requirements for lenses - Page 31 - AVS Forum
post #901 of 1307 Old 07-23-2017, 10:57 PM
darinp
In case it isn't clear why I include C in this example:



I'll try to explain.

It seems like there are 2 main groups in this thread.

I'm simplifying some, but there are those who believe that "optical resolution" is all that is needed to determine the lens requirements. Then there are those who believe that "perceived resolution" is what determines the lens requirements, where "perceived resolution" is what a viewer thinks was displayed at once; essentially the composite images I have mentioned.

In my example, A has the same optical resolution as B, but different perceived resolution than B. B has different optical resolution than C, but the same perceived resolution.

So, if a person is in the "optical resolution rules" camp then they should believe that A and B have the same lens requirements, but B and C have different lens requirements.

If a person is in the "perceived resolution rules" camp then they should believe that A has lower lens requirements than B, while B and C have essentially the same lens requirements.

If a person says that A, B, and C have the same lens requirements then they can't really be in the camp that says "optical resolution rules", even if they say they are. If optical resolution really does rule they should say that B has lower lens requirements than C, since B has lower optical resolution.

And the same kind of thing for "perceived resolution rules" people.

So, C is a good check to see if a person is being consistent about what they say matters, and whether they fit in one of those 2 groups, or fit into a third group that says something other than "optical resolution" or "perceived resolution" is the driving factor.
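
To make that consistency check concrete, here is a minimal sketch (my own toy encoding, nothing more) of what each camp should predict for A, B, and C:

Code:
# Hypothetical encoding (labels mine) of what each camp should predict.
optical   = {"A": "2.7k grid", "B": "2.7k grid", "C": "composite"}  # C: both at once
perceived = {"A": "2.7k grid", "B": "composite", "C": "composite"}

def same_requirements(camp, x, y):
    # a camp predicts equal lens requirements iff its chosen metric matches
    metric = optical if camp == "optical" else perceived
    return metric[x] == metric[y]

for camp in ("optical", "perceived"):
    print(camp, "camp:",
          "A==B" if same_requirements(camp, "A", "B") else "A!=B",
          "| B==C" if same_requirements(camp, "B", "C") else "| B!=C")
# optical camp:   A==B | B!=C
# perceived camp: A!=B | B==C
# Anyone answering "A, B and C are all the same" matches neither row.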

I honestly can't figure out which camp Dave's expert is in. He seems to say that perceived resolution matters more than optical resolution when he says that with totally incoherent light it makes no difference whether the sub-frames go through the lens at the same time or at different times (that is B vs C), but then he turns around and says A and B have the same lens requirements because they have the same optical resolution, even though they have different perceived resolutions. Hopefully at some point we will find out what he truly believes is the deciding factor, since at this point it looks like it can't be optical resolution or perceived resolution, given the things he has said so far.

--Darin
post #902 of 1307 Old 07-23-2017, 11:51 PM
Javs
Quote:
Originally Posted by darinp View Post
In case it isn't clear why I include C in this example: ...
If Dave would show him my YouTube videos, which don't even contain yellow pixels yet he will clearly see them, both in the sharp version and the 'lower MTF' version, he may well change his tune a little.

Since the yellow pixel exists and is affected by the parent pixels' MTF, you have no choice but to include the smaller pixel's requirements, since it is directly affected.

post #903 of 1307 Old 07-24-2017, 02:08 AM
Highjinx
I feel people are being misled by JVC marketing diagrams. This has thrown the whole discussion down the wrong path.

We start with image detail addressed across 3840x2160 pixels. If we do not want to lose image data we need to address this to 3840x2160 pixels, or 8,294,400 pixels.

However, we only have 1920x1080 pixels, or 2,073,600 per sub-frame, projected at any given time (x 2). Since these sub-frames are projected in a manner that overlaps, the available pixel area for new image data/detail is less than 2,073,600 x 2.

So what has JVC/NHK got to do? The software has to 'look' at the 3840x2160 master image, minus shift position, and create a COMPRESSED data map to suit the 1920x1080 carrier pixels; then the software needs to side-step, look at the 3840x2160 master image again, and create another COMPRESSED data map to suit the second sub-frame of the same size, ensuring overlapping areas contain similar/same data and non-overlapping areas contain new data.

These, as we know, are projected sequentially to create the full image. The end result contains nowhere near the original 3840x2160 image data, but from the appropriate viewing distance it does just fine. Each sub-frame can be manipulated differently to synthesize a desired positive outcome, but the entire pixels that overlap have to be manipulated; we can't just address that 1/4 pixel area. New image data can only be addressed to non-overlapping areas.

JVC marketing speak has 'misled' us into believing no compression takes place. Compression has to take place; they cannot cram the image detail of 8,294,400 pixels into 2 overlapped sets of 2,073,600 pixels.

At no time is data for an area of 1/4 of a 1080 pixel ever created or addressed by the projector. That is an illusion created by the corner overlap of full-sized 1080 pixels, clever compression/enhancement, and human visual psychology.

Never is the lens called upon to deliver more detail than what a 1920x1080 chip can provide. The addressed image data HAS to stay within the data limits/capacity the chip has. If each flash needs to carry more detail, the pixel count has to increase. Each sub-frame carries similar detail in overlapping areas and dissimilar detail in non-overlapping areas, but always within the data limitation of what 2,073,600 pixels can be addressed with.

The algorithm is a very clever one.

Original UHD image data is being thrown away; no way around that, unless there is a 1080 4-flash with no overlap.

Think these diagrams should be discarded and the think tank do a re-boot......
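
To put rough numbers on the compression step, a toy numpy sketch (box-filter resampling is my assumption; the real JVC algorithm is proprietary):

Code:
import numpy as np

# Toy model of the compression step: a "4K" master (8x8 here) is resampled
# into two half-pixel-shifted "1080p" sub-frames (4x4 each).
rng = np.random.default_rng(0)
master = rng.random((8, 8))              # stands in for the 3840x2160 master

def subframe(img, dy, dx):
    # average each 2x2 block of the master at a fine-pixel offset (dy, dx)
    shifted = np.roll(img, (-dy, -dx), axis=(0, 1))
    return shifted.reshape(4, 2, 4, 2).mean(axis=(1, 3))

sub_a = subframe(master, 0, 0)           # first set of carrier pixels
sub_b = subframe(master, 1, 1)           # second set, shifted half a pixel

print(master.size, "master values ->", sub_a.size + sub_b.size, "sub-frame values")
# 64 -> 32: half the degrees of freedom are gone, which is exactly the point
# about not cramming 8,294,400 pixels' detail into 2 x 2,073,600.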
post #904 of 1307 Old 07-24-2017, 02:28 AM
 
Dave Harper
Quote:
Originally Posted by Highjinx View Post
I feel people are being misled by JVC marketing diagrams. ...

I think you're on the right track, Highjinx. You can clearly see what you're referring to in this diagram. See the smaller box in each pixel, with a caption and an arrow pointing to it that says "sub-frame a image content" and "sub-frame b image content":



I was going to mention this in my earlier post, but I guess I forgot. I've been extremely busy with work lately (Thank God!)
post #905 of 1307 Old 07-24-2017, 10:23 AM
Stereodude
Quote:
Originally Posted by AJSJones View Post
So your comments on MTF, roll-off and Nyquist are no longer relevant to the lens discussion. If you can resolve the individual pixels on screen (as shown in countless images of 2K and 4K pixel arrays, and see the scans across such images), the lens is demonstrating substantial MTF for transmission of the analog image of the chip (which has physical dimensions, not digital samples any more), wherever it came from.
That's precisely relevant to the discussion, because that is the entire discussion: whether the extra detail from a UHD source can be seen (or not) with e-shift without a better lens.

Quote:
Is there more blur, proportionally, around the smaller areas that delineate the new "pixels" than there is around the areas that delineate the big pixels in each subframe????? Does that affect the perception of increased detail? See post # 889 above. Of course it does. All you need to do to appreciate that is de-focus the image slightly - you'll lose the shift detail well before losing the 1080 detail.
Well, yeah. The extra e-shift created detail has a lower MTF by the nature of how it's created, and it will get lost first if you start to apply a more aggressive low-pass filter to the optical system by defocusing. However, even with a perfect, ideal lens, the MTF of the JVC/Epson e-shift system at a 2160 TV line pattern is still 0, because those mythical 1/4 size pixels everyone is hung up on don't actually exist and can't be rendered (even if we pretend the two subframes are one).
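
As a toy illustration of that low-pass behaviour (a Gaussian blur standing in for defocus; not a model of any real lens):

Code:
import numpy as np

# 1-D toy: a native-resolution grating vs. the finer shift-created grating.
def blur(s, sigma):
    x = np.arange(-15, 16)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    return np.convolve(s, k, mode="same")

def contrast(s):
    s = s[40:-40]                            # ignore edge effects
    return (s.max() - s.min()) / (s.max() + s.min() + 1e-12)

n = 400
coarse = (np.arange(n) // 8) % 2 * 1.0       # "1080-class" grating, period 16
fine   = (np.arange(n) // 4) % 2 * 1.0       # shift-created grating, period 8

for sigma in (1.0, 2.0, 3.0):
    print(f"sigma={sigma}:",
          f"coarse {contrast(blur(coarse, sigma)):.2f},",
          f"fine {contrast(blur(fine, sigma)):.2f}")
# The finer grating's contrast collapses first as the blur grows: defocus is
# a low-pass filter, and the shift-created detail is the first casualty.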

Quote:
Originally Posted by Highjinx View Post
JVC marketing speak has 'misled' us into believing no compression takes place. Compression has to take place, they cannot cram image detail of 8,294,400 pixels into 2 overlapped 2,073,600 pixels.
They can, but with a steadily declining MTF. So the detail is less perceptible than lower-res detail, but there is more detail than straight 1920x1080. How much more depends on the MTF threshold you want to use as the definition of "resolve". The TI "4K" e-shift chips should have a better MTF out to higher resolutions; they should not have an effective MTF of 0 at 3840x2160 (ignoring the lens).
post #906 of 1307 Old 07-24-2017, 12:47 PM
darinp
Quote:
Originally Posted by Highjinx View Post
I feel people are being misled by JVC marketing diagrams. This has thrown the whole discussion up the incorrect path.
...
Original UHD Image data is being thrown away, no way around that, unless there is a 1080 4 flash with no overlap.
I think we all agree that you can't display every 8-million-pixel image with 4 million pixels, especially when those overlap. One thing I noticed about your post is that the issue you are discussing has nothing to do with the timing of the 2 sub-frames.

If people want to talk about whether JVC processes the frames in such a way that, even if the sub-frames went through the lens at the same time, the lens requirements would not be any higher than native 1080p, then we could discuss that. It wouldn't really be relevant to the reason I started this thread, which was that people said eShift doesn't require any better lens than native 1080p because the sub-frames don't go through the lens at the same time. I started this thread to show that you have to consider the whole task the lens has, not just an instant in time. Your subject would be related to lens requirements between native and eShift, but not to whether it matters when the photons go through the lens.

As I said, I have been assuming that JVC would retain certain detail above 1080p, but below 4k, from 4k sources, but they wouldn't have to.
Quote:
Originally Posted by Stereodude View Post
... because those mythical 1/4 size pixels everyone is hung up on don't actually exist and can't be rendered (even if we pretend the two subframes are one).
What is your position about what JVC (or others) would do with a 4K test pattern like the one on the left, where I showed how a 1:1 rendering on a 4K display would look:



Is it your position that my depiction is wrong, and JVC wouldn't actually render that as I showed with eShift? In other words, do you think they would deviate from the original test pattern when making the sub-frames, even though they could actually render that particular pattern accurately?

We know that they cannot do every 4K test pattern, but are you saying that even if they could do a particular one, and thus display a 1/4 sized 1080p pixel accurately to the 4K source, that they wouldn't?
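
Here is a minimal numpy sketch of that rendering (my own toy construction, not JVC's processing): two shifted coarse pixels, like the red and green ones in the pattern, whose corner overlap is a quarter-pixel-sized detail in the time-averaged composite.

Code:
import numpy as np

# Two half-pixel-shifted sub-frames on a 2x-fine grid; the viewer integrates
# the two exposures over time. Toy sizes; the 50/50 time split is assumed.
def upsample(sub, off):
    # place each coarse pixel onto the fine grid at a half-pixel offset
    h, w = sub.shape
    fine = np.zeros((2 * h + 1, 2 * w + 1))
    for i in range(h):
        for j in range(w):
            fine[2*i + off:2*i + off + 2, 2*j + off:2*j + off + 2] = sub[i, j]
    return fine

a = np.zeros((4, 4)); a[1, 1] = 1.0      # say, the red pixel in sub-frame A
b = np.zeros((4, 4)); b[1, 1] = 1.0      # the green pixel in sub-frame B
comp = 0.5 * (upsample(a, 0) + upsample(b, 1))
print(comp[2:5, 2:5])
# [[0.5 0.5 0. ]
#  [0.5 1.  0.5]
#  [0.  0.5 0.5]]
# The centre element is the 1/4-pixel corner overlap (the "yellow pixel"):
# a detail one quarter the area of a native pixel, present in the composite.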

--Darin
Attached: redAndGreen4ka.jpg
post #907 of 1307 Old 07-24-2017, 01:09 PM
AJSJones
I get it that the system MTF is made up of basically 3 parts: 1) the lens used on the camera; 2) the density/oversampling/colour array of the sensor and the signal encoding for 1080p etc. (those parts are limited by sampling theory, Nyquist, etc.); and then 3) the projection system MTF. Both lenses are ultimately limited by diffraction, Airy disk, etc., but before that by "quality" and the PSF, which lead to blur. The system MTF is the combination.

Quote:
Originally Posted by Stereodude View Post
Well, yeah. The extra e-shift created detail has a lower MTF by the nature of how it's created and it will get lost first if you start to apply a more aggressive low pass filter to the optical system by defocusing.
Now you're talkin'

The same effect will happen when you use a poorer (i.e. with a worse MTF curve) but still focused lens. So a lens that is (only just) good enough for 1080p will not project as good an image of these "features with lower MTF" as a better lens could, to maximize system MTF.
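
A quick sketch of that system-MTF idea, treating the system MTF as the product of stage MTFs (the Gaussian curves are placeholders, not measured data):

Code:
import numpy as np

# System MTF as the product of stage MTFs, the standard treatment for
# cascaded incoherent stages. All curves below are illustrative only.
f = np.linspace(0, 2160, 7)                # spatial frequency, TV lines
chip   = np.exp(-(f / 1500.0) ** 2)        # chip + e-shift processing roll-off
lens_a = np.exp(-(f / 2500.0) ** 2)        # "barely adequate for 1080p" lens
lens_b = np.exp(-(f / 4000.0) ** 2)        # better lens

for name, lens in (("adequate", lens_a), ("better", lens_b)):
    print(name, np.round(chip * lens, 2))
# The better lens raises the system MTF most in exactly the band where the
# shift-created detail lives - the point about maximizing system MTF.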

post #908 of 1307 Old 07-24-2017, 01:11 PM
darinp
Dave Harper,

I'll try to explain in simple terms why it is so hard to figure out what your expert's position actually is. In his initial responses to you he said:
Quote:
Originally Posted by Dave Harper View Post
The important part is, "But they could pass through the lens at the same time so long as the light was perfectly incoherent."

Seems very clear to me that he is saying the timing of the 2 sub-frames doesn't matter if the light is perfectly incoherent. I've already mentioned that we are not dealing with perfectly incoherent light, but we are normally dealing with light that is pretty incoherent. So, there is an effect from not being perfect, but I don't think that is a large effect given how many wavelengths of light we do actually get with most of these projectors.

Then later from this:
Quote:
Originally Posted by Dave Harper View Post
Here you refer to what I have told you with:

"Then he says, "time doesn't matter, it can go through separate or at the same time"."

and he responds:

"There isn't a composite image, it is just perceived to be composite"

Given those claims from your expert I'm not sure how anybody could figure out whether his true belief is that with perfectly incoherent light the timing of the sub-frames doesn't matter, or that it does. Not sure why he would imply they could pass through at the same time without changing things if he didn't believe that.

--Darin

post #909 of 1307 Old 07-24-2017, 01:35 PM
darinp
Quote:
Originally Posted by Tomas2 View Post
Darin please stop saying this. No one is suggesting photons are bumping into each other. The interaction is a result of the absorption and re-emission process through the lens. As someone said, "it's unavoidable".
It happens within sub-frames too, where there are millions of points (even more than the number of pixels if we consider points within single pixels, since those pixels have real physical sizes well above the atomic level) and billions upon billions of photons. So, why do you think it would matter if both sub-frames were displayed at the same time?

Maybe I just missed it, but it seems like you haven't said whether you agree or disagree with this Wikipedia page even though it has been posted many times and you keep making the same claim about interactions:

Quote:
The degree of spreading (blurring) of the point object is a measure for the quality of an imaging system. In non-coherent imaging systems such as fluorescent microscopes, telescopes or optical microscopes, the image formation process is linear in power and described by linear system theory. This means that when two objects A and B are imaged simultaneously, the result is equal to the sum of the independently imaged objects. In other words: the imaging of A is unaffected by the imaging of B and vice versa, owing to the non-interacting property of photons.
Obviously, microscopes, etc. have lenses, and the "unavoidable" you mentioned applies to them too. So, how can they just add up image A and image B to get the whole distribution, if eShift can't do that with the sub-frames (other than a small effect from the light not being perfectly incoherent)?

Do you disagree with what that Wikipedia page says?
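
For what it's worth, the linearity that Wikipedia passage describes is easy to check numerically (toy OTF, my own construction):

Code:
import numpy as np

# Incoherent imaging is object-convolved-with-PSF, so imaging A and B
# together equals imaging them separately and summing.
n = 32
rng = np.random.default_rng(1)
A = rng.random((n, n))                   # sub-frame A as an incoherent object
B = rng.random((n, n))                   # sub-frame B

fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
otf = np.exp(-(fy**2 + fx**2) * 200.0)   # arbitrary smooth transfer function

def image_of(obj):
    # "project" the object through the toy lens
    return np.fft.ifft2(np.fft.fft2(obj) * otf).real

together = image_of(A + B)               # case C: both sub-frames at once
separate = image_of(A) + image_of(B)     # case B: one at a time, summed
print(np.max(np.abs(together - separate)))   # ~1e-15: same distribution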

--Darin
post #910 of 1307 Old 07-24-2017, 01:58 PM
Dave in Green
Our perception of data is affected by the quality of the lens of opinion through which it's been projected. For the opinion of Dave's expert to have full weight here he should be in direct communication with Darin to get the full details of Darin's position without it being projected through the lens of someone with an opposing opinion who may not be providing the expert with a well-focused image of Darin's position.
post #911 of 1307 Old 07-24-2017, 04:51 PM
R Johnson
Quote:
Originally Posted by Dave in Green View Post
... a well-focused image of Darin's position.
It might help many of us to see a short, clear post from Darin which states the question he wants to debate, along with his answer or position.
post #912 of 1307 Old 07-24-2017, 05:58 PM
Dave in Green
Quote:
Originally Posted by R Johnson View Post
It might help many of us to see a short, clear post from Darin which states the question he wants to debate, along with his answer or position.
I don't have a technical background but Darin's point was pretty obvious to me. I've stated my understanding in my own simple, non-technical words several times in this thread and Darin has agreed that I got the gist of it. So I don't understand why people who seem to have more of a technical background than I do don't seem to get it and keep wandering off onto side issues.
post #913 of 1307 Old 07-24-2017, 07:16 PM
Tomas2
I believe we all have given our perspective on the subject and I see no reason to label any member here a "troll" or the like.

If we are lucky Dave will be able to throw us a technical bone when he hears back from the PhD in optics. I also believe the B vs C hypothetical will be clearly resolved and we will all collectively agree with the details.

Cheers
post #914 of 1307 Old 07-24-2017, 07:28 PM
coderguy
Except that people are arguing in their own worlds, bypassing the most important part of this, and the arguments are too black and white.

The anti-aliasing effect at most seating distances is actually more important than the ability to address pixels in their algorithm, and the anti-aliasing effect is greatly affected by the lens. It does not matter if "the lens is called upon"; you don't need to call upon a lens or address individual pixels exactly for the edges to still be affected by frames coming together (even temporally over time).

The fact that the pixels aren't addressed purely, and are instead extrapolated by a compression-based algorithm, is of little consequence to whether it affects lens requirements through the AA effect as a whole. Sure, individually addressing pixels may raise the requirements compared to AA alone, but that doesn't mean you can ignore needing a better lens for the AA effects either (hence why this argument has turned into black-and-white nonsense).
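
A 1-D sketch of that edge effect (Gaussian blur as a stand-in for the lens PSF; my own toy numbers):

Code:
import numpy as np

# Neither sub-frame addresses a finer pixel, but the temporal average of two
# shifted edges contains a half-intensity anti-aliasing step, and the lens
# blur acts on that composite edge.
def blur(s, sigma):
    x = np.arange(-12, 13)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    return np.convolve(s, k, mode="same")

n = 64
edge_a = (np.arange(n) >= 32).astype(float)   # edge in sub-frame A
edge_b = (np.arange(n) >= 34).astype(float)   # shifted edge in sub-frame B
composite = 0.5 * (edge_a + edge_b)           # time average: a 2-sample ramp

for sigma in (1.0, 4.0):
    c = blur(composite, sigma)
    width = int(np.sum((c > 0.05) & (c < 0.95)))
    print(f"sigma={sigma}: transition width ~{width} samples")
# The sharper "lens" keeps the half-step narrow and distinct; the softer one
# smears it into the same ramp a plain un-shifted edge would give.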

Why I even commented on this never-ending thread, I don't know.

post #915 of 1307 Old 07-24-2017, 09:48 PM
Stereodude
Quote:
Originally Posted by darinp View Post
What is your position about what JVC (or others) would do with a 4K test pattern like the one on the left ... are you saying that even if they could do a particular one, and thus display a 1/4 sized 1080p pixel accurately to the 4K source, that they wouldn't?
You're the man of a million shifting arguments. You keep trying to find places where you can argue. Just because you can make a "4K" test pattern with some 1/4 sized pixels (relative to HD) that e-shift can fully render doesn't make your assertion about lens quality right. Your assertion has been wrong, still is wrong, and will continue to be wrong. The temporal aspect you've based this thread on is irrelevant and a waste-of-time distraction, just like this new test pattern you're talking about. A better lens is not required to make use of the extra resolution that a 1080p e-shift system can deliver if the lens is adequately sharp for 1080p. Your newly discovered slides from JVC clearly demonstrate this.
post #916 of 1307 Old 07-24-2017, 10:25 PM
AJSJones
Quote:
Originally Posted by Stereodude View Post
A better lens is not required to make use of the extra resolution that a 1080p e-shift system can deliver if the lens is adequately sharp for 1080p.
But you also said above: "Well, yeah. The extra e-shift created detail has a lower MTF by the nature of how it's created"

Those comments illustrate one key issue: what we mean by "adequate for 1080p". If it's more than adequate (as I've suggested often above, because we can usually see the screen-door effect and demonstrably resolve features much smaller than pixels on screen), then the additional challenge of the smaller pixels will probably be met, especially since the benefit of e-shift is modest in terms of perceived detail. So for most practical purposes the (more than adequate for 1080p) lens will not need to be upgraded if the chip is converted to the e-shift type. It's only when the lens is barely adequate for 1080p that the lower MTF you correctly identify will be a challenge, and then the benefit is likely to be small. For the 2.7K, 4-million-pixels-x-2 system, the benefit may be larger.

As for the "existence" of the "yellow pixel", that's a philosophical issue, because the eye/brain sees it.

The contention that has generated the most discussion (I think) is the notion that there is a difference in sharpness in what the brain sees as a yellow pixel depending on whether the sub-frames go through at different times or at the same time. Some have suggested that there is interaction within the lens that causes the loss of resolution if they go at the same time. No one who has made this claim has presented any evidence for the basis of the claim - a simple physics text, an optics link, Wikipedia, or a published article. Zip. Nada. If there were any support for this idea it should be easy to find out there on the interwebs.

post #917 of 1307 Old 07-25-2017, 10:51 AM
darinp
Quote:
Originally Posted by Stereodude View Post
You're the man of a million shifting arguments. ... A better lens is not required to make use of the extra resolution that a 1080p e-shift system can deliver if the lens is adequately sharp for 1080p. Your newly discovered slides from JVC clearly demonstrate this.
Talk about changing positions: this whole time you have been saying that the temporal separation between the 2 sub-frames is the reason that eShift doesn't require a better lens than the native resolution requires. So, which is it? Have you changed your mind, and are you now saying that the temporal separation isn't the reason eShift doesn't have higher lens requirements?

Going back for more detail, people can look at the whole series of posts that started this, but here are some relevant parts from back in 2015 when Seegs said:
Quote:
Originally Posted by Seegs108 View Post
8K through eshift wouldn't need a lens that can resolve 8K resolution, the same way 4K through eshift doesn't need a lens that can resolve 4K. The current JVC eshift units only flash a 1080p image through the lens at any given time, so your point is a moot one (and would apply to 8K from eshift 4K panels), and if JVC were to bring us an 8K eshifting unit next year, yes, processing would be a big problem if they wanted to bring this unit out at an economical price point like they do with their current line-up.
One of the things I brought up at that time was this:
Quote:
Originally Posted by darinp2 View Post
They are overlapped spatially, just not temporally.

Not sure what this lack of temporal overlap has to do with anything.
Quote:
Originally Posted by darinp2 View Post
I know how JVC's e-shift works and knew it when I brought the original point up. You still haven't explained why the lack of temporal overlap matters.
Quote:
Originally Posted by darinp2 View Post
The fact that the system uses temporal separation to create something that looks like the image on the lower right is irrelevant to that point.
Here is your first post on the subject matter back on 11/14/15:
Quote:
Originally Posted by Stereodude View Post
No it doesn't because the image on the bottom right is never sent through the lens. That's just what the viewer percieves.

What if eshift worked by vibrating/shifting the whole projector. Would that also require a lens with extra resolving power? What if we vibrate/shift the viewer?

Eshift is not spatial, it uses a temporal "trick" to gain percieved spatial resolution. I can't follow why you think the instantaneous image going through the lens at any given moment has increased spatial resolution.
Over 18 months later (just this month) you clarified that your position was still that the temporal separation between the sub-frames was the reason you thought that eShift doesn't have any higher lens requirements, with this exchange:
Quote:
Originally Posted by Stereodude View Post
Quote:
Originally Posted by darinp View Post
Isn't your whole position based on the lens requirements being different if the two sub-frames go through the lens at different times than at the same time?
Yes, absolutely.
Did you change your position from that post, even as you try to act like I am the one who has changed my position?

Not sure why you are now saying that the temporal aspect is irrelevant, while also claiming I am wrong, when that was what I said right from the beginning. That is, separating them temporally doesn't mean that you don't judge on resolving the whole image that is displayed over time.

If you think the JVC slides support your position for the last 18+ months (that if the eShift sub-frames went through the lens at the same time the lens requirements would be higher than for native 1080p, but if they go through at different times then the lens requirements are the same as for native 1080p), then you are wrong. If you think the JVC slides support your original position that the temporal separation between the sub-frames is the driving issue for the lens requirements, then please explain how they support that A and B in my example have the same lens requirements, but C has higher lens requirements than B, which you clearly stated was your position just 3 weeks ago.

Or have you now changed your position so that A and C have the same lens requirements?

How do you think JVC even measured a 2700-line test pattern, given that a native 1080p chip can never send a 2700-line pattern through the lens at a moment in time? The answer is that they measured the whole image over time, which is what I have said from the beginning is the important factor. The whole image in that case has higher spatial resolution than 1920, which is what I said from the beginning about eShift being able to create more spatial resolution than the native resolution can, even though that spatial resolution is temporally separated.

--Darin

post #918 of 1307 Old 07-25-2017, 11:51 AM
darinp
Quote:
Originally Posted by AJSJones View Post
Some have suggested that there is interaction within the lens that causes the loss of resolution if they go at the same time.
I may regret commenting on this since it is kind of a side issue and I think this interaction between the sub-frames averages out to about zero. The reason I think it averages out to essentially zero is that interactions within the sub-frames between billions and billions of points on the chips (smaller than the pixels) and billions and billions of photons are already accounted for whether the sub-frames go at the same time or not. Any extra interactions between waves by sending the sub-frames at the same time would add in some cases and subtract in others, to average out to basically zero overall. Kind of like how high lamp and low lamp should average out to basically the same diffraction problem, even though high lamp sends photons at a faster rate.

However, if people were right that C in the following has different lens requirements than B because of interactions between the sub-frames:



then I think there would be something interesting.

There are different ways to define lens requirements, but one way to go is by what Dave Harper's expert said, about a lens being considered a perfect lens if its anomalies are small enough that the problem from them is less than the diffraction problem. If we use that definition, and case C has worse diffraction than case B, then case C actually has lower lens requirements to achieve a "perfect lens" than case B does. If A and B have the same lens requirements, then C would have lower lens requirements than A.

I don't recall anybody saying that they thought C had lower lens requirements than A, even though many people say that in case C there would be a problem with the light between the sub-frames interacting that is avoided by doing case B.

My impression was that those who say C has different lens requirements than B were saying that C had higher lens requirements, not lower lens requirements.

--Darin

post #919 of 1307 Old 07-25-2017, 12:08 PM
Dave in Green
Part of the reason why the discussion tends to wander into debating side issues is the use of terms such as "adequate." Adequate only means satisfactory or acceptable quality, which can vary wildly depending on who's judging the quality.

That's why I continue to use the very specific wording that images with greater levels of fine detail are more demanding of lens quality. That's something that everyone agrees on, so no need to use alternative terms that cause the discussion to wander.

Similarly, everyone agrees that neither 1080p alone nor 1080p+e-shift is as demanding of lens quality as 4K, so there's no reason for 4K to be brought into the discussion. There's no reason to debate any point that everyone already agrees on.

My specific wording is meant to focus on the core issue of whether or not 1080p+e-shift is more demanding of lens quality than 1080p alone. What needs to be addressed is whether or not 1080p+e-shift produces a finer level of detail than 1080p alone and is therefore more demanding of lens quality.

I continue to believe that 1080p+e-shift does in fact produce an image with a finer level of detail than 1080p alone and that how or when that finer level of detail is created does not change the fact that it's more demanding of lens quality. I understand that others disagree with that and I appreciate when the focus is on making counterpoints that specifically address the core questions:

* 1080p+e-shift either does or doesn't produce an image with a finer level of detail than 1080p alone.

* How or when a finer level of detail is created either has an impact on whether the image is more demanding of lens quality or it doesn't.
post #920 of 1307 Old 07-25-2017, 12:32 PM
AJSJones
Quote:
Originally Posted by darinp View Post
I may regret commenting on this since it is kind of a side issue and I think this interaction between the sub-frames averages out to about zero. ...

Think about it - if photons did interact in the lens (or even outside it) in any detectable way, we wouldn't be able to see! Those interactions you mention (for any natural, i.e. incoherent, light) are about as small and as relevant as the spontaneous generation and annihilation of particle/antiparticle pairs that happen in quantum foam. You are correct that we can ignore them, however.
post #921 of 1307 Old 07-25-2017, 12:57 PM
darinp
Quote:
Originally Posted by R Johnson View Post
It might help many of us to see a short, clear post from Darin which states the question he wants to debate, along with his answer or position.
Sorry that this isn't going to be short.

My main issue right from the beginning has been whether it is true that by separating images in time the lens requirements are based only on the light that goes through the lens at an instant, not the light that goes through the lens over time.

My position from the beginning is that it is the system requirements that matter, which include performance over time, not the "instant in time" requirements.

I'm still confused by the people who say that you judge a lens for a single chip DLP partially by how well it lines up the primary colors, which means that the lens requirements cannot just be for an "instant in time", but also say that with eShift you only judge by an "instant in time" and not by the whole images the projector delivers to humans, light meters, and normal speed cameras. We couldn't even test the color performance of a single chip DLP projector if we didn't take into account the whole image over time.

My point that the system requirements are what matter applies even if you use 10 projectors to create images on the same surface. If adding up the individual requirements doesn't meet the system requirements, then you need to go back and change the individual requirements until they do. If the system requirements are more demanding than all those individual requirements added up, then you need to increase the individual requirements; but if the system requirements are low, you can actually decrease the individual requirements when designing the whole system.

My point also isn't just about projectors. It applies to many things in life.

As an example, if you want to 3D print a block that is 1" tall so that it looks straight, then there are requirements for that. If you want to print 72 of these so that they interlock and create a tower 72" tall that looks straight, the requirements for each of the blocks you print are affected by the system requirements. If all you do is print a 3D block so that it looks adequately straight all by itself, that does not mean it is straight enough that 72 of the exact same block stacked on top of each other would look straight and not fall over due to too much lean.
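
The arithmetic of that tower example (illustrative numbers only):

Code:
# The tower analogy as plain arithmetic. All numbers are made up for scale.
blocks = 72
system_lean_limit_deg = 1.0      # the assembled 72" tower must look straight
per_block_alone_deg = 0.5        # a lean that still "looks straight" on 1 block

worst_case = blocks * per_block_alone_deg     # leans all in one direction
per_block_needed = system_lean_limit_deg / blocks
print(f"worst-case stacked lean: {worst_case:.0f} deg (limit {system_lean_limit_deg} deg)")
print(f"per-block tolerance derived from the system: {per_block_needed:.4f} deg")
# A block that passes on its own (0.5 deg) blows the system budget 36x over:
# the per-unit requirement has to come from the whole tower, not one block.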

I am very confident in this position that it is the system requirements that matter, even if things are separated temporally in ways that we don't even detect.

I am more flexible on my secondary position, which is that since the temporal differences don't matter, and eShift can create smaller details than the native resolution, eShift has higher lens requirements than the native resolution alone. I could be convinced that even if eShift sent both sub-frames at the same time the lens requirements wouldn't be any higher than for the native resolution, if somebody can show that eShift processes the images in such a way that it won't properly display smaller details in 4K images even when it could. For instance, if somebody could show that eShift would never actually display something like this:



properly, either because it would require too much intelligence, or they have decided to filter in a way that won't display that 4K test pattern properly, then the lens wouldn't need to resolve that case. If they would display that as I have shown then the lens would be tasked with displaying it properly.

--Darin

post #922 of 1307 Old 07-25-2017, 01:23 PM
Stereodude
Quote:
Originally Posted by darinp View Post
Talk about changing positions: this whole time you have been saying that the temporal separation between the 2 sub-frames is the reason that eShift doesn't require a better lens than the native resolution requires. ...
You're not going to suck me into this strawman you've been arguing for 18+ months. You're wrong. The projector only has a 1080p image going through it at any given time. That's completely indisputable. However, even if we were to accept your argument that there is no temporal aspect to this, you're still wrong. It changes nothing. The perceived resolution doesn't require a better lens. The perceived MTF still rolls off to 0 above 1080 TV lines as 2160 is approached, regardless of the lens. The area under the curve after 1080 TV lines is realizable without swapping in a better lens.
post #923 of 1307 Old 07-25-2017, 01:25 PM
Stereodude
Quote:
Originally Posted by AJSJones View Post
But you also said above: "Well, yeah. The extra e-shift created detail has a lower MTF by the nature of how it's created"
Because the 1/4 sized "4k" pixels aren't individually addressable. That's why the MTF rolls off at the chip level regardless of the lens.
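
A toy 1-D version of that addressability limit (my own construction, not JVC's actual mapping):

Code:
import numpy as np

# Coarse pixels are 2 fine samples wide, sub-frame B is shifted by 1 fine
# sample, each exposure gets half the time. Columns of M are the fine-grid
# footprints of the coarse pixels (arranged on a ring to avoid edge cases).
n = 16                                       # fine samples around the ring
A = np.zeros((n, n // 2))
B = np.zeros((n, n // 2))
for p in range(n // 2):
    A[2 * p, p] = 0.5
    A[2 * p + 1, p] = 0.5
    B[(2 * p + 1) % n, p] = 0.5
    B[(2 * p + 2) % n, p] = 0.5
M = np.hstack([A, B])                        # composite = M @ [a_vals, b_vals]

target = np.tile([0.0, 1.0], n // 2)         # finest alternating pattern
vals, _, rank, _ = np.linalg.lstsq(M, target, rcond=None)
best = M @ vals
print("rank:", rank, "of", M.shape[1])
print("best composite:", np.round(best[:6], 2))   # flat 0.5: zero modulation
print("residual:", round(float(np.linalg.norm(target - best)), 2))
# The best the chips can do at this frequency is flat gray - the modulation
# is gone before the lens is even involved, which is the MTF-of-0 point.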
post #924 of 1307 Old 07-25-2017, 02:00 PM
darinp
Quote:
Originally Posted by Stereodude View Post
You're not going to suck me into this strawman you've been arguing for 18+ months. ... The area under the curve after 1080 TV lines is realizable without swapping in a better lens.
You are being a real piece of work.

Seems like you are now stating that it doesn't matter whether you have been wrong for 18 months that the temporal separation is what matters, or whether I have been right for 18 months that the temporal separation doesn't matter, which I made clear was the main subject of this thread right from the original post.

Reading between the lines, it seems pretty clear that you now realize you were wrong about the main subject matter (whether the temporal separation matters or not) right from day one, when Seegs first made the initial post about the timing mattering and then cleared it up that the timing was what mattered, and from your own first post, where you clearly stated:
Quote:
Originally Posted by Stereodude View Post
No it doesn't because the image on the bottom right is never sent through the lens. That's just what the viewer perceives.
Seems like what you really don't want to get "sucked into" is admitting that you've been wrong this whole time. You have been happy to tell me for 18 months that I was wrong about the temporal separation not mattering, and now you seem to be conveniently changing that to something like, "It doesn't matter if you have been right about the temporal separation not mattering." Have you lost confidence that you were right about the temporal aspect mattering?

I guess we shouldn't expect you to actually answer what you now believe the relationship is between A, B, and C in the following:



Quote:
With totally incoherent light do A, B, and C all have the same lens requirements:

A: One 2.7k image.
B: Two different 2.7k images with half pixel offset and going through the lens at different times.
C: Two different 2.7k images with half pixel offset and going through the lens at the same time.
D. One 4k image.
I would be happy to discuss whether eShift has higher lens requirements than native, if you will be honest and say whether you changed your position just in the last week about whether the temporal separation between the sub-frames matters to the lens requirements. But I'm not holding my breath, given how hard you seem to be trying to keep from admitting that you learned something and now realize you were wrong in your very first sentence about the subject matter 18 months ago, when you said:
Quote:
Originally Posted by Stereodude View Post
No it doesn't because the image on the bottom right is never sent through the lens. That's just what the viewer perceives.
--Darin

post #925 of 1307 Old 07-25-2017, 02:21 PM
AJSJones
Quote:
Originally Posted by Stereodude View Post
Because the 1/4 sized "4k" pixels aren't individually addressable. That's why the MTF rolls off at the chip level regardless of the lens.
That comment is relevant to the algorithm and the e-shift processing of input data, but not to the lens. I have made it explicit that I'm just referring to the MTF curve of the lens when discussing how the images from the physical chip get to the screen. The MTF constraints of the digital/chip side are finished once that image has been created, and the discussion is (or started out as) about the requirements of the lens to project the (extra) individual pixels without blurring them too much.
post #926 of 1307 Old 07-25-2017, 02:23 PM
darinp
Looks like this post from a month ago is pretty prophetic to me:
Quote:
Originally Posted by coderguy View Post
I don't think it's about people getting it, I think it's about people not wanting to admit they were wrong in a forum.
I think even when most people find out they are wrong, they either just move on, or just keep arguing.
Looks to me like Stereodude has figured out that he was wrong in the very first post he made about the subject matter, and about the main thing I created a demonstration for in the original post of this thread (whether temporal separation means the lens requirements are based only on what goes through the lens in an instant). But he wants to continue saying I am wrong over and over, to the point of claiming I changed my position, while doing everything he can to keep from admitting that he was wrong, as he continues to argue.

--Darin
post #927 of 1307 Old 07-25-2017, 05:46 PM
Ruined
I agree Darin, that quote is very prophetic. After all, this thread is still going...
post #928 of 1307 Old 07-25-2017, 07:49 PM
darinp
Quote:
Originally Posted by Ruined View Post
I agree Darin, that quote is very prophetic. After all, this thread is still going...
Of course, you can't come up with a single thing I am wrong about. All you can do is claim I said something I never said and then argue against that. I understand that it would make you, Dave, and Stereodude feel a lot better if you guys could come up with something I was wrong about, but you actually have to come up with something I've been wrong about for that.

If you can come up with a rational argument that I am wrong about something I said (not something I never said), you are welcome to. One of the guys in your group claiming I was wrong (Stereodude) now seems to realize he has been wrong for 18 months about whether the temporal separation mattered.

This thread has been going on for a long time because you and those who agree with you have been posting misinformation here.

--Darin

post #929 of 1307 Old 07-26-2017, 10:31 AM
darinp
Quote:
Originally Posted by R Johnson View Post
It might help many of us to see a short, clear post from Darin which states the question he wants to debate, along with his answer or position.
I'm going to give it one more shot.

My main point of this thread is, "If a projection system separates images temporally at a fast enough speed that a human cannot detect the temporal separation, then the lens requirements do not change to be based only on instants in time instead of on how the lens delivers the whole image over time."

To put that in pictures:

The threshold for an adequate lens to display this image:



is lower than the threshold for an adequate lens to display this image:



even if you separate the second image temporally at 120Hz like this:



or even if you display the E with one projector and the 3 with a different projector.

One thing that is difficult for me is that this seems so simple to understand, yet a whole group of people keeps claiming that if you separate images temporally then the lens only has to be good enough for what goes through it at an instant in time.

It seems like some people want to just keep repeating that if you pick a lens that is adequate for:



then it will be adequate for:



As they say, "No ____, Sherlock". But that doesn't address what I said right from the beginning.
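
A 1-D miniature of the E-plus-mirrored-3 example (Gaussian blur as a toy lens; illustrative sizes):

Code:
import numpy as np

# Each frame is a wide bar (easy for any lens), but the temporal composite
# leaves a 1-sample gap between them - a feature far finer than anything in
# either frame alone.
def blur(s, sigma):
    x = np.arange(-8, 9)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    return np.convolve(s, k, mode="same")

n = 32
frame_a = np.zeros(n); frame_a[4:15]  = 1.0     # the "E" strokes
frame_b = np.zeros(n); frame_b[16:27] = 1.0     # the mirrored "3" strokes
composite = 0.5 * (frame_a + frame_b)           # thin dark gap at sample 15

for sigma in (0.5, 2.0):
    c = blur(composite, sigma)
    print(f"sigma={sigma}: gap depth {(c[10] - c[15]) / max(c[10], 1e-9):.2f}")
# Each frame alone survives sigma=2 easily (its bars are 11 samples wide),
# but the gap between them nearly vanishes: the composite image over time,
# not the instant, sets the lens requirement.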

--Darin
Attached: eshiftE.jpg, eshiftE3.jpg
post #930 of 1307 Old 07-26-2017, 06:39 PM
R Johnson
Quote:
Originally Posted by darinp View Post
... It seems like some people want to just keep repeating that if you pick a lens that is adequate for ... then it will be adequate for...
The "E" and backward "E" are quite large elements, seemingly easy to display. The combination is a squarish "8" with a very thin black gap in the three horizontals. Are you contending that a lens suitable for displaying the "E" would not likely be capable of resolving the three gaps? Probably true, but that seems rather an apples versus oranges comparison with little relevance to lens quality needed for an eShift projector..