Resolution requirements for lenses - Page 17 - AVS Forum | Home Theater Discussions And Reviews
post #481 of 1307 Old 07-03-2017, 12:17 PM
 
Dave Harper
Quote:
Originally Posted by RLBURNSIDE View Post
Incorrect. (IMO)




No, I am correct, because I qualified it by saying "if physical pixels". One physical pixel can only be in one particular state at one time: on, off, or some intensity in between.

You mention "wobulators" in there too, but XPR doesn't use wobulation (moot anyway, since I've been talking about eShift LCoS, LCD, etc. when mentioning 1920x1080 eShift). XPR uses standard DLP mirrors with only two angles, towards and away from the lens, and an eShift optical actuator glass for eShift duties, just like its LCoS/LCD variants. So I'm not sure where you're going with the whole "wobulator" thing.

Quote:
Originally Posted by darinp View Post
Dave,



Feel free to ask your expert about this latest scenario if you want. I will try to shorten it to make it simpler:







--Darin

No thank you. I'm done with condescending answers that belittle everyone who disagrees with you.

And for the record, I'm still of the same position until I am able to do further research myself and gather more facts.

And no, I'm not answering any more questions until then, mystical, hypothetical or otherwise.

Also for the honesty record, I DID ask him all your questions, darinp, long ago, and I told you that. He hasn't replied back to me since I sent him the link to this and the XPR thread where this started. I don't blame him. I'm sure he read some, saw how certain individuals were responding, and decided to get out while he could so he wouldn't also be belittled and ridiculed for possibly having a counter view. Sounds like college campuses nowadays, doesn't it?

Last edited by Dave Harper; 07-03-2017 at 01:45 PM.
 
post #482 of 1307 Old 07-03-2017, 12:49 PM
darinp
Quote:
Originally Posted by Dave Harper View Post
And for the record, I'm still of the same position until I am able to do further research myself and gather more facts.
You choose not to get more facts. You could get them if you cared to.
Quote:
Originally Posted by Dave Harper View Post
Also for the honesty record, I DID ask him all your questions darinp, long ago and I told you that.
Yep. You clearly said that you would not post his answer here. You say I treat you differently than Javs, but Javs doesn't play games like that. Javs clearly wanted to get to the truth, while your behavior has clearly shown that you don't want readers to get the truth about what your expert actually believes when asked a question that would clearly clarify their position.

--Darin
post #483 of 1307 Old 07-03-2017, 01:01 PM
Stereodude
Dave, just bow out. Darin's wrong. He's been wrong from the start of the thread. He's not going to be convinced. He claims to have magic beans. Anyone who doubts his magic beans doesn't understand physics. Strangely, he can't prove his beans are magic using physics and math, though. Instead we get thought experiment after thought experiment after thought experiment, and diagrams of E's and 3's and flashing E's and 3's, like that's going to prove something. Somehow, how your brain interprets the projected image dictates how good the lens must be, not what's actually going through the lens. You might have missed it, but he even claimed that stacking two separate 1080p projectors so they project on top of each other, but with a half-pixel offset vertically and horizontally, required a better lens than if they were set up with perfect alignment.

Extraordinary claims require extraordinary proof. So far Darin and his crew have only provided the former and none of the latter. A few hundred posts have been made since I was last here. No one has convinced him of anything. Darin's strategy seems to be one of outlasting anyone who wants to disagree with him and then declaring victory. Maybe he could better divert his efforts at InfoComm and PISCR where he's right and accomplish something useful.
 
post #484 of 1307 Old 07-03-2017, 01:27 PM
darinp
Quote:
Originally Posted by Stereodude View Post
Dave, just bow out. Darin's wrong. He's been wrong from the start of the thread.
Nope. Still not wrong.

Just for fun, do you disagree with Dave's own expert that with totally incoherent light the lens requirements are the same whether the two 1080p (or 2.7k) sub-frames go through the lens at different times or at the same time? Seems like you want to act like you and Dave agree, yet even Dave agreed that it made no difference whether the sub-frames go through the lens at the same time or at different times.

Isn't your whole position based on the lens requirements being different if the two sub-frames go through the lens at different times rather than at the same time?

As I've said, it would probably take less than 5 minutes for me to straighten things out with Dave's expert. He got some stuff right and some stuff wrong. Only somebody who is very confused would think that A, B, and C in the following have the same lens requirements:

Code:
With totally incoherent light do A, B, and C
all have the same lens requirements:

A: One 2.7k image.
B: Two different 2.7k images with half pixel offset
    and going through the lens at different times.
C: Two different 2.7k images with half pixel offset
    and going through the lens at the same time.
D: One 4k image.
So, why are you acting like you agree with Dave when Dave clearly said that he thought A, B, and C all have the same lens requirements and his expert's views as posted say the same thing?

Are you saying you agree with Dave? You really think A and C have the same lens requirements? If so, then you are even more confused about how things work than I thought.

--Darin
post #485 of 1307 Old 07-03-2017, 01:45 PM
darinp
Quote:
Originally Posted by Stereodude View Post
You might have missed it, but he even claimed that stacking two separate 1080p projectors so they project on top of each other, but with a half-pixel offset vertically and horizontally, required a better lens than if they were set up with perfect alignment.
As you indicated, you are clearly confused. This shouldn't be hard for people who understand light and lenses to understand. If you set up two 1080p projectors where you could extract and feed them different 1080p frames from 4k content, and lined them up so that when you displayed a single red pixel with one and a single green pixel with the other, you would get what Javs put in his picture:



Then the lenses would need to be good enough to properly show the yellow area that is 1/4th of a pixel size in area. Is it really that hard to understand that if the red and green pixels are 1 mm^2 each on screen and you set something up to display this 0.25 mm^2 yellow sub-pixel between them, then the lenses need to be good enough for the 0.25 mm^2 item and not just for displaying individual 1 mm^2 items?
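The 0.25 mm^2 figure is just the geometry of two squares offset by half a pixel. A minimal sketch (mine, not from the thread; the `overlap_area` helper and the millimeter numbers are illustrative):

```python
# Two 1 mm x 1 mm pixels, one from each projector, offset by half a
# pixel in x and y. The region lit by both (the "yellow" area) is the
# intersection of the two squares.
def overlap_area(size, dx, dy):
    """Area of intersection of two axis-aligned squares of side `size`,
    the second shifted by (dx, dy)."""
    ox = max(0.0, size - abs(dx))
    oy = max(0.0, size - abs(dy))
    return ox * oy

pixel = 1.0   # mm, each projector's pixel on screen
shift = 0.5   # mm, half-pixel eShift-style offset
print(overlap_area(pixel, shift, shift))  # 0.25, i.e. 1/4 of a pixel
```

With zero offset the overlap is the full 1 mm^2 pixel; the half-pixel shift is what creates the quarter-size element the lens is asked to render.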

If you line them up perfectly, so that you can't have two different 1080p eShift sub-frames showing smaller details, then the smallest item the lenses and projectors would need to display to humans would be 1 mm^2, not 0.25 mm^2.

Seems pretty simple to me.

Maybe somebody else here can help me out. Is this example hard to understand?

--Darin

Last edited by darinp; 07-03-2017 at 02:05 PM.
post #486 of 1307 Old 07-03-2017, 03:10 PM
Stereodude
Quote:
Originally Posted by darinp View Post
Just for fun, do you disagree with Dave's own expert that with totally incoherent light the lens requirements are the same whether the two 1080p (or 2.7k) sub-frames go through the lens at different times or at the same time? Seems like you want to act like you and Dave agree, yet even Dave agreed that it made no difference whether the sub-frames go through the lens at the same time or at different times.
Dave and I agree that you're wrong. I don't agree that B & C are the same. They're not.

Quote:
Isn't your whole position based on the lens requirements being different if the two sub-frames go through the lens at different times rather than at the same time?
Yes, absolutely.
post #487 of 1307 Old 07-03-2017, 03:19 PM
darinp
Quote:
Originally Posted by Stereodude View Post
Dave and I agree that you're wrong.
Funny, you guys don't agree with each other, but there is that "the enemy of my enemy is my friend" kind of thing you seem to have.
Quote:
Originally Posted by Stereodude View Post
I don't agree that B & C are the same. They're not.
So, you disagree with the one expert here. Of course, that expert is wrong about one thing, so I'm not saying they are all that great with their expertise anyway, and they would probably change their position within 5 minutes if I talked to them. Both you and he are kind of like the InfoComm "experts" who don't understand contrast ratio, except that you don't get paid to know this stuff, while he and the other ones do, which is worse.

I'm used to large numbers of people thinking that I'm wrong when I am right, just like happened for years with contrast ratio. So, I can handle when a bunch of people are wrong, like you are about this one. If you don't understand the red and green pixel example then I may be at the end of how much I can simplify things for you.
Quote:
Originally Posted by Stereodude View Post
Yes, absolutely.
Do you understand the post just above with the red and green pixel from two different 1080p machines? Do you think that the lenses only have to be able to resolve 1 mm^2 items on screen if the yellow part is 0.25 mm^2?

--Darin

Last edited by darinp; 07-03-2017 at 03:23 PM.
post #488 of 1307 Old 07-03-2017, 03:30 PM
Stereodude
Quote:
Originally Posted by darinp View Post
Then the lenses would need to be good enough to properly show the yellow area that is 1/4th of a pixel size in area. Is it really that hard to understand that if the red and green pixels are 1 mm^2 each on screen and you set something up to display this 0.25 mm^2 yellow sub-pixel between them, then the lenses need to be good enough for the 0.25 mm^2 item and not just for displaying individual 1 mm^2 items?

If you line them up perfectly, so that you can't have two different 1080p eShift sub-frames showing smaller details, then the smallest item the lenses and projectors would need to display to humans would be 1 mm^2, not 0.25 mm^2.

Seems pretty simple to me.

Maybe somebody else here can help me out. Is this example hard to understand?
Why do the lenses have to resolve what isn't going through them? There is no 1/4th pixel sized yellow pixel going through either of them. The 1/4th sized yellow pixel is the result of external projector alignment of the larger pixels. If the lens can adequately display a defined square pixel, offsetting the two pixels will yield a 1/4th pixel sized yellow pixel with no changes to either lens.

Please explain with math, not a thought experiment. You keep claiming it's simple physics. Let's see the mathematical proof that connects the perception of two rapidly flashed offset 1080p images by our human vision to the requirement of the lens. You keep trying to tell everyone that because the whole is perceived as greater than the sum of its parts, the lens(es) really need to be capable of the perceived sum. I'd like to see the math that backs that up.
post #489 of 1307 Old 07-03-2017, 03:35 PM
Stereodude
Quote:
Originally Posted by darinp View Post
I'm used to large numbers of people thinking that I'm wrong when I am right, just like happened for years with contrast ratio. So, I can handle when a bunch of people are wrong, like you are about this one. If you don't understand the red and green pixel example then I may be at the end of how much I can simplify things for you.
I'm not looking for a simplification. I want a mathematical proof. You keep insisting it's simple physics. Simple physics has simple formulas behind it. Show us the math.
post #490 of 1307 Old 07-03-2017, 03:57 PM
darinp
Quote:
Originally Posted by Stereodude View Post
I'd like to see the math that backs that up.
I think that simple examples like I just posted are even easier to understand.

If you are so hooked on telling me that I am wrong and you want the math so much, why don't you post what you think the math is? If you don't know the math then why would you tell me I am wrong? In that case you could tell me that you don't know if I am right or wrong, but if the math is the answer and you don't know the math, then it should be pretty clear that you don't have much of a leg to stand on to claim that I am wrong.

With the red and green pixel from Javs I think I've gotten it down to a level where a good high school physics student should be able to follow it, if they want to understand things and not just tell somebody they are wrong.
Quote:
Originally Posted by Stereodude View Post
Why do the lenses have to resolve what isn't going through them?
Because their job is to display things with proper retention of spatial properties for whole frames, not just for instants. You claim that I have some magic, yet you are the one who thinks that splitting the lens's job in half temporally means you can cut the spatial requirements in half. Physics doesn't work that way. It would require magic to change the spatial requirements just by sending the light at different times (where those different times are imperceptible to humans).

As Dave's expert said, if you have all wavelengths of light it does not matter whether the 2 sub-frames go through the lens at the same time or at different times. The interference will average out to zero. You seem to think that splitting the frame into 2 pieces means the lens can blur more than if you send it all at once. Light doesn't work that way.
Quote:
Originally Posted by Stereodude View Post
If the lens can adequately display a defined square pixel, offsetting the two pixels will yield a 1/4th pixel sized yellow pixel with no changes to either lens.
So, you think that the MTF requirements for the lenses are built around 1080p line pairs in this case instead of 2160p line pairs?

On the sliding scale from horrible to perfect for lenses, wherever you set your "acceptable" line, it will be lower for one 1080p projector showing pixels the size of the red and green pixels than it would be for showing that 1/4-sized yellow element. The same goes if you draw a line for where a lens qualifies as having "great" spatial resolution. You will hit "great" sooner with 1080p than with what in this case is an element the size of 1 pixel in 2160p.

Do you agree that, all else being equal, 2160p projectors have higher lens requirements than 1080p projectors? They do, and the reason is that they are tasked with showing elements that are 1/4th the size. In this example the system is tasked with displaying an element 1/4th the native pixel size, and since the system is tasked with that, so are the lenses.
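The "elements 1/4th the size" point can be put in rough numbers. A back-of-envelope sketch (mine; the 1 m image height and the `line_pairs_per_mm` helper are assumptions for illustration), showing that a 2160p panel asks the lens for line pairs at twice the spatial frequency of a 1080p panel:

```python
# Finest vertical line-pair frequency a panel can ask of the lens,
# measured on screen, for an image of a given height.
def line_pairs_per_mm(rows, image_height_mm):
    # one line pair = two rows of pixels (one on, one off)
    return (rows / 2) / image_height_mm

h = 1000.0  # mm, assumed image height on screen
print(line_pairs_per_mm(1080, h))  # 0.54 lp/mm
print(line_pairs_per_mm(2160, h))  # 1.08 lp/mm, twice the frequency
```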

Before somebody jumps in and says that I claim eShift projectors have the same lens requirements as 4k projectors: no, I am not saying that. That is a more complicated discussion, and people should understand some of the simpler stuff before getting into that one.

I thought that some of Javs's stuff with blurring showed things pretty well too.

You don't get to pick a lens with more blurring just because you split the 2 eShift sub-frames in time instead of sending them through the lens at the same time. Either way requires lens quality with basically the same MTF. As I said, it is those who think they can choose a lens with lower MTF just by sending the eShift sub-frames at different times who are coming up with some magical MTF requirement reduction scheme.
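The timing claim here amounts to linearity: with incoherent light, intensities add, and lens blur acts like a convolution, which is linear. A toy 1-D sketch (my illustration, not darinp's math; the kernel and sub-frames are arbitrary) showing that blurring two sub-frames separately and adding them gives exactly the same result as blurring their sum:

```python
import numpy as np

def blur1d(signal, kernel):
    """Circular convolution standing in for lens blur."""
    n = len(signal)
    out = np.zeros(n)
    for i in range(n):
        for j, k in enumerate(kernel):
            out[(i + j) % n] += signal[i] * k
    return out

kernel = np.array([0.25, 0.5, 0.25])         # toy lens point-spread function
a = np.array([0, 1, 0, 0, 0, 0], float)      # sub-frame 1
b = np.array([0, 0, 1, 0, 0, 0], float)      # sub-frame 2, shifted

together = blur1d(a + b, kernel)                   # both at the same time
separate = blur1d(a, kernel) + blur1d(b, kernel)   # at different times
print(np.allclose(together, separate))             # True
```

Because the two sides are identical, a lens that blurs too much for the simultaneous case blurs exactly as much for the time-split case; the timing cannot relax the MTF requirement.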

--Darin
post #491 of 1307 Old 07-03-2017, 04:00 PM
darinp
Quote:
Originally Posted by Stereodude View Post
Show us the math.
Again, you claim that Dave's expert and I are wrong about being able to send both sub-frames through at the same time with totally incoherent light, instead of at different times, without increasing the lens requirements. So why don't you show us your math that says that sending them at the same time increases the lens MTF requirements?

Do you need me to provide you with math about how waves interact? You are the one who claims that these light waves going to the screen will need a better lens if all you do is send all the photons at once instead of sending them half at a time.

--Darin
post #492 of 1307 Old 07-03-2017, 04:01 PM
AJSJones
Quote:
Originally Posted by darinp View Post



Then the lenses would need to be good enough to properly show the yellow area that is 1/4th of a pixel size in area. Is it really that hard to understand that if the red and green pixels are 1 mm^2 each on screen and you set something up to display this 0.25 mm^2 yellow sub-pixel between them, then the lenses need to be good enough for the 0.25 mm^2 item and not just for displaying individual 1 mm^2 items?

If you line them up perfectly, so that you can't have two different 1080p eShift sub-frames showing smaller details, then the smallest item the lenses and projectors would need to display to humans would be 1 mm^2, not 0.25 mm^2.

Seems pretty simple to me.

Maybe somebody else here can help me out. Is this example hard to understand?

--Darin
Darin - imagine a printer that can print squares with little gaps between them - this is the analogy of one pixel and the pixel-gap grid, where the lens/printer can actually resolve objects - such as the grid lines (1 in the picture below) - that are substantially smaller than the imaging device's pixels. First I print one image, let's say a single red square. Then I come back with a second pass with the same printer head, but using the green ink and offset by ½ square in each dimension. Both squares are the same size and have sharpish edges (the pixel-gap grid) (2). In the printer output, instead of yellow I'll get brown - this is ink, not light - but I'll only get brown in the area where the squares overlap. (Whether the inks go down at the same time or in two passes, the eye sees the print as the composite of the two images.)

If I have a crummy lens that is so bad I don't see sharp edges to each pixel/square, but rather a blurry edge for each (and no visible pixel-gap grid; 3 in the picture), when I come back with the second pass with the green ink I'll still get the brown, but only in the middle area of the overlap, because the edges will have been printed with a soft blurred edge. The size of the "pure brown" square will be smaller than the one with the "good lens" (4 in the picture).

Now, with the same printer I try to print smaller squares. The one with the sharp edges on the big pixels will do a better job than the one with soft edges on the big pixels. So, if the lens/printer can only just keep the big pixels/squares from overlapping, it's going to have a tough time printing the small squares. (That's the analogy with a lens that "is barely adequate for 1080p" - we get the image detail from the big squares because the overlap is small compared to the size of the 2k mode pixels.)
Now, getting back to the good lens - the common type where we see a well-defined (although not necessarily broad-lined) grid in 2K mode - not only will it do well printing the overlapping squares as above (the case of the shifted pixels), it will be able to print the small squares with perhaps some blurring of the gridlines, but still keeping the squares separate - the case of the "native 4K" array. Thus, if the lens is pretty good already (evidenced by a good sharp gridline/screendoor in 2k) it will do well with 4K - that's not to say that a "better" lens would not show slight/moderate improvement, but that's the MTF game above.
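The printer analogy can be mimicked numerically. A toy 1-D sketch (mine; the coverage profile and the edge-softness parameter are invented for illustration) showing that softening the edges of the big squares shrinks the "pure brown" overlap region:

```python
import numpy as np

x = np.linspace(0.0, 3.0, 301)  # position along one row, arbitrary units

def coverage(x, lo, hi, soft):
    """Ink coverage of one printed square; soft > 0 widens the edges."""
    if soft == 0.0:
        return ((x >= lo) & (x <= hi)).astype(float)
    return np.clip((x - lo) / soft, 0.0, 1.0) * np.clip((hi - x) / soft, 0.0, 1.0)

def pure_overlap_points(soft):
    """How many sample points get full coverage from BOTH passes."""
    red = coverage(x, 1.0, 2.0, soft)    # first pass
    green = coverage(x, 1.5, 2.5, soft)  # second pass, half-square offset
    return int((np.minimum(red, green) > 0.99).sum())

print(pure_overlap_points(0.0))  # sharp edges: a wide "pure brown" band
print(pure_overlap_points(0.4))  # soft edges: the pure band shrinks away
```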



The separate issue of the performance of the algorithm is not addressable with this kind of logic. The green square above is needed to create the yellow "detail", and it works well in the simple illustration, but it will put green on the other three "sub-pixels", and there may be a cost to that: perhaps they should be white - they'll now be green. Perhaps the algorithm adjusts the first pass to decrease the green in those three sub-pixels, etc. I'm not putting any bets down on how well the processor trickery will succeed on typical video material, but I suspect that challenging test patterns will reveal the limitations of two sets of 4 million pixels compared to "the real discrete pixels of 4k" for stills, but probably less so for motion tests. Until I see those, I won't dismiss the possible cost/benefits of the "cheaper" approach to getting more detail than 2k on the screen.

Last edited by AJSJones; 07-03-2017 at 04:10 PM.
post #493 of 1307 Old 07-03-2017, 04:11 PM
darinp
Quote:
Originally Posted by AJSJones View Post
Thus, if the lens is pretty good already (evidenced by a good sharp gridline/screendoor in 2k) it will do well with 4K - that's not to say that a "better" lens would not show slight/moderate improvement, but that's the MTF game above.
Thanks. I'm trying to make sure I understand what you are saying. It seems like what you are saying could be used to say that native 4K projectors don't really need better lenses than native 2K projectors. Is that right?

I've mentioned before that I think of 1080p+eShift as kind of like 3K. It can display some patterns from 4K signal perfectly, but not all of them. There is that correlation you mentioned, where the green would affect other pixels if there were any.

It seems like this is a little bit of sidetracking though, since I think the disagreement in this thread is mostly whether it matters whether the eShift sub-frames go through the lens at the same time or at different times.

I think everybody except for Dave and his expert knows that 2 eShift sub-frames sent at the exact same time require a better lens than just the native resolution, where the disagreement is then whether the timing of the 2 eShift sub-frames matters.

It seems like in your example with the printer the issues would be the same whether the red and green were printed at the same time or at different times, other than that ink is a physical substance that interacts in ways that light doesn't.

--Darin
post #494 of 1307 Old 07-03-2017, 04:25 PM
darinp
AJSJones,

I decided to reduce the size of your image for a test. Here is the smaller version:



I then moved away from my screen. The yellow in the 4th image disappeared for me, where it was pretty easy to see it in the 2nd image, and to see the green and red in the 3rd image.

--Darin
post #495 of 1307 Old 07-03-2017, 04:38 PM
Javs
Quote:
Originally Posted by darinp View Post
Thanks. I'm trying to make sure I understand what you are saying. It seems like what you are saying could be used to say that native 4K projectors don't really need better lenses than native 2K projectors. Is that right?

I've mentioned before that I think of 1080p+eShift as kind of like 3K. It can display some patterns from 4K signal perfectly, but not all of them. There is that correlation you mentioned, where the green would affect other pixels if there were any.

It seems like this is a little bit of sidetracking though, since I think the disagreement in this thread is mostly whether it matters whether the eShift sub-frames go through the lens at the same time or at different times.

I think everybody except for Dave and his expert knows that 2 eShift sub-frames sent at the exact same time require a better lens than just the native resolution, where the disagreement is then whether the timing of the 2 eShift sub-frames matters.

It seems like in your example with the printer the issues would be the same whether the red and green were printed at the same time or at different times, other than that ink is a physical substance that interacts in ways that light doesn't.

--Darin
No, I think his post was seriously spot on in illuminating why you would benefit from a sharper lens, period, E-Shift or not. It does show that you want as sharp a lens as possible with 1080p anyway before moving on to things like e-shift.

I think you are the one who keeps fixating on the time thing. Yes, it was periodically an issue at the start, but we moved past that IMO. The simple graphics above show what it looks like when you have a lens which, while it can resolve 1080p, could stand to be a lot sharper, and you will see benefits from the shifted subframes.

It doesn't matter when they go through the lens, or if your brain makes the composite itself. It's important because if your brain has sharp, defined information to work with from the outset, it's going to construct a sharper composite.

This whole conversation is seriously over. I thought I covered this in my post yesterday with the 8k pixels etc, but you seemed to kind of skim over it, to be honest. There is no disputing from any single person that a better lens benefits not just the resolutions you want to show, but every resolution under it, all the way back to a solid two-tone contrast pattern.

People here seem to have issue with what is an acceptable sharpness for the native resolution in the first place, and are not understanding that it's impossible to say something is fully resolved, since 100% MTF is impossible in optics at the diffraction limit. Therefore it's a never-ending loop of discussion.
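The diffraction-limit point can be made concrete: for incoherent light, a lens's MTF declines from 1 at zero frequency to 0 at a cutoff spatial frequency of roughly 1/(wavelength x f-number), and sits below 100% everywhere short of that. A rough sketch (the wavelength and f-number are assumed values, not from the thread):

```python
# Incoherent diffraction-limited cutoff frequency of a lens:
# roughly 1 / (wavelength * f-number), in the lens's image plane.
wavelength_mm = 550e-6   # green light, 550 nm expressed in mm
f_number = 2.5           # assumed projection-lens aperture
cutoff_lp_per_mm = 1.0 / (wavelength_mm * f_number)
print(round(cutoff_lp_per_mm))  # ~727 line pairs per mm
```

Any real lens performs below this ideal, which is why "fully resolved" is never a yes/no question, only a question of how much contrast survives at a given frequency.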

How can there be any dispute at all? As far as AJSJones's image shows, number 4 is blurry, an indicative example of what a low-spec lens might look like, compared with mine, which is sharp; there is clearly a large difference in blur between the two. Even though both subframes travel through the lens at different times, your brain combines them: you are either going to construct a blurry mental image or a sharp one. This is the key here. Of course it pays to have a sharper lens in this case.

post #496 of 1307 Old 07-03-2017, 04:39 PM
Highjinx
Quote:
Originally Posted by Stereodude View Post
Why do the lenses have to resolve what isn't going through them? There is no 1/4th pixel sized yellow pixel going through either of them. The 1/4th sized yellow pixel is the result of external projector alignment of the larger pixels. If the lens can adequately display a defined square pixel, offsetting the two pixels will yield a 1/4th pixel sized yellow pixel with no changes to either lens.

Please explain with math, not a thought experiment. You keep claiming it's simple physics. Let's see the mathematical proof that connects the perception of two rapidly flashed offset 1080p images by our human vision to the requirement of the lens. You keep trying to tell everyone that because the whole is perceived as greater than the sum of its parts, the lens(es) really need to be capable of the perceived sum. I'd like to see the math that backs that up.
This is my thinking as well. The lens needs to be able to pass the reflected image of the pixels (and thus the content), including pixel edges and inter-pixel gaps, through faithfully. If the lens does this, my logic tells me there is no need for the lens to be 'better' to function in e-shift mode, since each of the flashes is projecting the same size pixels at different times. As mentioned above, the 1/4-size pixel area is created in our minds.

However, if a new chip with smaller pixels is substituted, then a better lens may be needed, if the existing lens is not up to the task of displaying what is required faithfully.

The question is: if a lens of quality 'A' can faithfully resolve the required pixel size 'X' perfectly, will using lens A++ help? It wouldn't hurt, but I doubt it would help, unless the pixel size got smaller to the point where the original lens came up short. Not to mention we are talking about moving images here.

Not a physics guy, maths guy, nor a lens guy! Simply relying on logic to make sense of this.

Would love to see a 4K image e-shifted, with a group of pixels with image data from flash 1 and flash 2 displayed/captured separately.


post #497 of 1307 Old 07-03-2017, 04:47 PM
darinp
Quote:
Originally Posted by Javs View Post
No, I think his post was seriously spot on in illuminating why you would benefit from sharper lenses period. E-Shift or not, but that does show that you want a sharp as possible lens with 1080p anyway to move on to things like e-shift.

I think you are the one who keeps fixating on the time thing. Yes, it was an issue at the start, but we moved past that IMO, and the simple graphics above show what it looks like when you have a lens which, while it can resolve 1080p, could stand to be a lot sharper, and you will see benefits from the shifted subframes.

It doesn't matter when they go through the lens, or if your brain makes the composite itself; it's important because if your brain has sharp, defined information to work with from the outset, it's going to construct a sharper composite.
You and I may have gotten past it, but Dave Harper posts that I must think that the 2 eShift sub-frames are combined before the lens and Stereodude said that this is in fact his position:
Quote:
Isn't your whole position based on the lens requirements being different if the two sub-frames go through the lens at different times than at the same time?
So, how are we past it?
Quote:
Originally Posted by Javs View Post
This whole conversation is seriously over, I thought I covered this in my post yesterday with the 8k pixels etc, you seemed to kind of skim over it to be honest, but there is no disputing from any single person that a better lens benefits not just the resolutions you want to show, but every resolution under it all the way back to a solid two tone contrast pattern.
Yes they can, but even so higher native resolutions still call for better lenses and the eShift thing fits in there too as a higher effective resolution than just the native panel it uses.

Still looks to me like the disagreement here is whether it matters that the 2 sub-frames go through the lens at the same time or not, where you pretty clearly agree with what I said about that. That is, other than Ruined who uses arguments against eShift requiring better lenses that would work as effectively against native 4k requiring better lenses than native 1080p.
Quote:
Originally Posted by Javs View Post
How can there be any dispute at all that, as far as ASJones's image shows, number 4 being blurry is an indicative example of what a low-spec lens might look like when compared with mine, which is sharp? There is clearly a large blurry difference between the two. Even though both subframes travel through the lens at different times, your brain combines them; you are either going to construct a blurry mental image or a sharp one. This is the key here. Of course it pays to have a sharper lens in this case.
I can agree with you about "how can there be any dispute at all", yet Dave and Stereodude keep posting that all that matters is what goes through the lens in an instant, and it seems that HighJinx agrees with them.

--Darin
post #498 of 1307 Old 07-03-2017, 04:47 PM by Dave Harper
I said:
Quote:
Originally Posted by Dave Harper
...No thank you. I'm done with condescending answers that belittle everyone who disagrees with you....
So thanks for proving my point with your reply below, but here goes (which I know I will regret again)


Quote:
Originally Posted by darinp View Post
You choose not to get more facts. You could get them if you cared to....
And just what do you think I have been doing this whole time while not posting here wasting my valuable time with you and coderguy? Who is the dishonest one here, darinp? Are you following me around, knowing what I am doing every minute of every day? How do you have even ONE CLUE what I have been doing lately?



Quote:
Originally Posted by darinp View Post
...Yep. You clearly said that you would not post his answer here. You say I treat you different than Javs, but Javs doesn't play games like that. Javs clearly wanted to get to the truth, while your behavior has clearly shown that you don't want readers to get the truth about what your expert actually believes when asked a question that would clearly clarify their position.

--Darin
I clearly said that after discussing with him and reading your responses about coherent and incoherent light that I tentatively and hesitantly switched my opinion and stance on your completely irrelevant "C" scenario of the two sub-frames going through the lens at the same time. I also said that I have asked him further questions and gave him links to this and the DLP XPR thread where all this is happening and he has not gotten back to me. Would you like me to stalk him and follow him to work and home and force him to respond to me darinp? I said I invited him to come over for a movie and some optics discussions, which he also hasn't replied with an answer. He is away on a short staycation, should I drive over there and hunt him down and knock on his vacation house door and force myself in, tie him up in a chair, shine a bright light (an incoherent light I might add! ) in his eyes and force him to answer the questions from you?

I am working towards truth, but I am no longer going to waste my time here debating you and coderguy, going in circles and getting nothing accomplished, all the while being demeaned, belittled, condescended to and a whole host of other insults, like your stance that just because I disagree with you, it means I am being "dishonest". Do you blame me, darinp??? You mention "this wouldn't happen on college campuses with physics professors", but I ask you, would that professor talk to one of his colleagues or students the way you are, who may have a differing opinion or maybe just misunderstands the subject matter? Actually, now that I think about it, that IS kind of what's happening there at colleges to folks with differing views. They're being screamed at and shouted down, with the opposers even resorting to violence, so I guess this is what the world is now, huh? If so, I will not waste my time in debating it with folks like that, so I am not.


Now I will ask you and coderguy a very simple question that I would like a short, simple answer to, without some long-winded hypothetical where you turn it around and form your answer into a question to me......

If I took a picture of the screen where a lab grade pro camera (with more than enough resolution capabilities and MTF quality for the lens) snaps the image at 1/120th of a second and it is genlocked to the projector to snap that picture at the very instant that the A or B sub-frame is presented to the imaging chip (we will say the one in a 1920x1080 LCoS JVC eShift projector), then which image will the camera capture from the choices below:



A. Image 1 (Original 4K image)
B. Image 2 or 3 (A and B sub-frames created from original 4K image in the projector's eShift/Video processor)
C. Image 4 (the composite image of the two sub-frames that are created in our brains)

EDIT:
Oh, and where would you place the lens in that graphic?

Last edited by Dave Harper; 07-03-2017 at 04:53 PM.
post #499 of 1307 Old 07-03-2017, 04:57 PM by darinp
Quote:
Originally Posted by Dave Harper View Post
I also said that I have asked him further questions and gave him links to this and the DLP XPR thread where all this is happening and he has not gotten back to me.
You posted days ago that you had asked him my question a while before that, but you would not post his response. My question should have taken less than 5 minutes, so I'm skeptical that you actually asked that A, B, C, D question a week or so ago and didn't get any response, especially since what you said was that you wouldn't post his response.
Quote:
Originally Posted by Dave Harper View Post
If I took a picture of the screen where a lab grade pro camera (with more than enough resolution capabilities and MTF quality for the lens) snaps the image at 1/120th of a second and it is genlocked to the projector to snap that picture at the very instant that the A or B sub-frame is presented to the imaging chip (we will say the one in a 1920x1080 LCoS JVC eShift projector), then which image will the camera capture from the choices below:



A. Image 1 (Original 4K image)
B. Image 2 or 3 (A and B sub-frames created from original 4K image in the projector's eShift/Video processor)
C. Image 4 (the composite image of the two sub-frames that are created in our brains)
With that fast camera synced you would of course get image A or image B. With a slow shutter like human vision you would get the last image.

--Darin

Last edited by darinp; 07-03-2017 at 05:00 PM.
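The fast-shutter/slow-shutter distinction in the answer above can be sketched in a few lines (hypothetical 1-D "frames"; nothing here models a real camera or projector): a one-frame exposure genlocked to the 120 Hz sequence returns a single native-resolution sub-frame, while a four-frame exposure returns the temporal average, which contains structure at half-pixel pitch.

```python
import numpy as np

# Two hypothetical eShift sub-frames: each native pixel spans two
# samples, and sub-frame B is shifted by one sample (half a pixel).
sub_a = np.array([0, 0, 1, 1, 0, 0, 1, 1], dtype=float)
sub_b = np.roll(sub_a, 1)
sequence = [sub_a, sub_b, sub_a, sub_b]   # 120 Hz presentation order

def capture(frames, shutter_frames, start=0):
    """Average the frames shown while the shutter stays open."""
    return np.mean(frames[start:start + shutter_frames], axis=0)

fast = capture(sequence, shutter_frames=1)   # genlocked 1/120 s camera
slow = capture(sequence, shutter_frames=4)   # ~1/30 s: eye-like integration

print(fast)   # one sub-frame: native pixel structure only
print(slow)   # temporal average: intermediate levels at half-pixel pitch
```

The fast capture is bit-identical to one sub-frame, while the slow capture contains intermediate intensity levels that neither sub-frame has on its own, which is the "composite" both sides of the thread are arguing about.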
post #500 of 1307 Old 07-03-2017, 05:00 PM by Javs
Quote:
Originally Posted by Dave Harper View Post

Dave

I see where you are coming from totally, you and I agree about the time thing, however this is a technicality to this which is glaring.

Let me ask you something,

Do you agree that even though the subframes travel through the lens at different times, this lens would be considered 'good enough' to resolve the individual 1080p pixels, and barely, but nonetheless, good enough for the e-shift pixels?



And do you also now agree that since we have a higher-MTF lens on the same device, able to more sharply resolve the 1080p pixels and even the e-shift pixels, we now have a sharper resultant composite image?



Both cases are legitimately good enough for 1080p, and in both cases the native 1080p images are just that, 1080p. Yet by moving to a higher-MTF lens, we now have a sharper composite. Your brain combines both later, yes, but if the subframes are not as sharp as they can be in the first place, if they can be sharper, then there is merit to a sharper lens.

The problem is, this does not end: scientifically we can't get to 100% sharpness in any pixel, as has been stated, so this whole thread is getting hung up on a pretty simple technicality, when the reality is, MTF and sharpness affect all pixels equally.

Even more so since I kind of proved, with the state of today's projectors clearly resolving the screen door at 1080p, that you actually would also be able to resolve 8K pixels and screen door! So this is seriously just a technical, philosophical discussion at the end of the day, and it's going to go on forever until we are on the same page about how MTF affects all pixels equally and that we can NEVER reach 100% MTF at any given pixel size in a display device.
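The roll-off being described, where MTF falls continuously with spatial frequency and never reaches 100%, can be illustrated with a Gaussian blur-spot model. All numbers below are hypothetical (a real projection lens is not Gaussian, and the panel dimensions are rough illustrations), but the monotonic roll-off is generic:

```python
import math

def gaussian_mtf(freq, sigma):
    """MTF of a Gaussian blur spot: exp(-2*pi^2*sigma^2*f^2),
    with freq in cycles/mm and sigma (blur radius) in mm."""
    return math.exp(-2.0 * math.pi ** 2 * sigma ** 2 * freq ** 2)

# Illustrative numbers: a ~0.7" 1080p panel is roughly 15.5 mm wide,
# so one native pixel is ~8 um and its Nyquist frequency ~62 cyc/mm.
# eShift/4K detail sits at twice that, 8K at four times.
SIGMA = 0.003   # 3 um blur radius: a hypothetical "good" lens
for name, freq in [("1080p", 62.0), ("eShift/4K", 124.0), ("8K", 248.0)]:
    print(f"{name:>9}: MTF = {gaussian_mtf(freq, SIGMA):.3f}")
```

Under this toy model the same lens that looks fine at the 1080p pitch is visibly down at the eShift pitch and nearly gone at an 8K pitch, and a smaller blur radius (a sharper lens) raises MTF at every frequency at once, which is the "affects all pixels equally" point.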

JVC X9500 (RS620) | 120" 16:9 | Marantz AV7702 MkII | Emotiva XPA-7 | DIY Modular Towers | DIY TPL-150 Surrounds | DIY Atmos | DIY 18" Subs
-
MadVR Settings | UHD Waveform Analysis | Arve Tool Instructions + V3 Javs Curves
post #501 of 1307 Old 07-03-2017, 05:04 PM by Dave Harper
Quote:
Originally Posted by darinp View Post
You posted days ago that you had asked him my question a while before that, but you would not post his response....
Yes, and that was him explaining the same thing you did about coherent and incoherent light, so why would I basically copy what you said and explained anyway?

Quote:
Originally Posted by darinp View Post
...With that fast camera synced you would of course get image A or image B. With a slow shutter like human vision you would get the last image.

--Darin
OK, argument over, thanks for playing and proving my point all along! So why in the world would you need a higher-quality/MTF lens just to resolve the exact same-sized 1080p pixels that you get with regular 1080p projectors, as shown in that example you just agreed is what goes through the lens in each 1/120th of a second?

The "human vision" part comes after the lens, does it not?

When I can, I am going to reply to your other posts from after my last quotes, where you have gotten my position all wrong this whole time.
post #502 of 1307 Old 07-03-2017, 05:06 PM by darinp
Quote:
Originally Posted by Javs View Post
Even more so since I kind of proved, with the state of today's projectors clearly resolving the screen door at 1080p, that you actually would also be able to resolve 8K pixels and screen door! So this is seriously just a technical, philosophical discussion at the end of the day, and it's going to go on forever until we are on the same page about how MTF affects all pixels equally and that we can NEVER reach 100% MTF at any given pixel size in a display device.
I tried to address some of this in the very first post I put in this thread.
Quote:
Originally Posted by darinp2 View Post
In one of the JVC threads there was a discussion about whether better lenses are required with e-shift on than off to show all the detail that the rest of the projector is producing, much like 4K panels requiring better lenses than 2K panels to resolve all the resolution of each, all else being equal. Just to be clear these are entertainment devices and no level of quality is really required, but hopefully people get the idea about how "required" is being used in this case.
The whole reason this thread is here is because some people said that by sending the eShift sub-frames one at a time the lens requirements are only the same as for the native resolution.

Seems like you agree with me that these people were wrong.

Are you now saying that since better lenses are always better I am wrong in correcting this misconception from people that sending the eShift frames one at a time means they can use a worse lens than if they were sent at the same time?

Thanks,
Darin
post #503 of 1307 Old 07-03-2017, 05:09 PM by Dave Harper
Quote:
Originally Posted by Javs View Post
Dave

I see where you are coming from totally, you and I agree about the time thing, however this is a technicality to this which is glaring.

Let me ask you something,

Do you agree that even though the subframes travel through the lens at different times, this lens would be considered 'good enough' to resolve the individual 1080p pixels, and barely, but nonetheless, good enough for the e-shift pixels?



And do you also now agree that since we have a higher-MTF lens on the same device, able to more sharply resolve the 1080p pixels and even the e-shift pixels, we now have a sharper resultant composite image?



Both cases are legitimately good enough for 1080p, and in both cases the native 1080p images are just that, 1080p. Yet by moving to a higher-MTF lens, we now have a sharper composite. Your brain combines both later, yes, but if the subframes are not as sharp as they can be in the first place, if they can be sharper, then there is merit to a sharper lens.

The problem is, this does not end: scientifically we can't get to 100% sharpness in any pixel, as has been stated, so this whole thread is getting hung up on a pretty simple technicality, when the reality is, MTF and sharpness affect all pixels equally.

Even more so since I kind of proved, with the state of today's projectors clearly resolving the screen door at 1080p, that you actually would also be able to resolve 8K pixels and screen door! So this is seriously just a technical, philosophical discussion at the end of the day, and it's going to go on forever until we are on the same page about how MTF affects all pixels equally and that we can NEVER reach 100% MTF at any given pixel size in a display device.
Yes, but I have ALWAYS prefaced it by saying that we have a lens that "clearly resolves" the 1080p pixels, have I not??? To me that means looking as close as possible to your second image (the copy of mine), NOT the blurred one, because that would suck even for 1080p, and I doubt any video/optics engineer would choose THAT lens even for their 1080p projector. That, and ONLY that, was my preface and assumption to begin with.
post #504 of 1307 Old 07-03-2017, 05:10 PM by Javs
Quote:
Originally Posted by darinp View Post
I tried to address some of this in the very first post I put in this thread.
The whole reason this thread is here is because some people said that by sending the eShift sub-frames one at a time the lens requirements are only the same as for the native resolution.

Seems like you seem to agree with me that these people were wrong.

Are you now saying that since better lenses are always better I am wrong in correcting this misconception from people that sending the eShift frames one at a time means they can use a worse lens than if they were sent at the same time?

Thanks,
Darin
Nope. You are fixating on the time aspect and laying out convoluted examples, TBH, rather than trying to educate them on why sharper subframes mean sharper composite frames, no matter what the time aspect is. I must say, it took me a little while to latch on to that too. I think I just super clearly illustrated it to Dave in my post too; once people grasp that concept, it's easy to see by extension that this is a simple technicality and that sharper lenses will always be able to yield smaller detail when comped together.

post #505 of 1307 Old 07-03-2017, 05:12 PM by darinp
Quote:
Originally Posted by Dave Harper View Post
Yes, and that was him explaining the same thing you did about coherent and incoherent light, so why would I basically copy what you said and explained anyway?
Are you now saying that he answered my A, B, C, D question and that it agreed with what I said and so you didn't post it? It is unclear to me what you are saying here about what question he answered.
Quote:
Originally Posted by Dave Harper View Post
OK, argument over, thanks for playing and proving my point all along! So why in the world would you need a higher quality/MTF lens to just resolve the exact same sized 1080 pixels that you get with regular 1080 projectors, as shown in that example that you just agreed would be what is shown through the lens at each 1/120th of a second?
Because, as I already explained at least once, human vision is like a camera with a slow shutter speed, not a camera with a high shutter speed. Seems like I addressed this multiple times.

If you want to know why it matters you have to use a camera with a slow shutter speed, otherwise you aren't answering the question for human vision. I think it has been clear since the first post in this thread that I was talking about lens requirements when displaying images for human viewers.
Quote:
Originally Posted by Dave Harper View Post
The "human vision" part comes after the lens, does it not?
It does, and it doesn't matter.

According to your expert it doesn't matter, so why are you bringing this up? Your expert himself said that the lens requirements are the same whether the eShift images are combined before the lens or after the lens, so why are you contradicting him again?

--Darin
post #506 of 1307 Old 07-03-2017, 05:13 PM by Dave Harper
Quote:
Originally Posted by darinp View Post
I tried to address some of this in the very first post I put in this thread.
The whole reason this thread is here is because some people said that by sending the eShift sub-frames one at a time the lens requirements are only the same as for the native resolution.

Seems like you seem to agree with me that these people were wrong.

Are you now saying that since better lenses are always better I am wrong in correcting this misconception from people that sending the eShift frames one at a time means they can use a worse lens than if they were sent at the same time?

Thanks,
Darin
Why are you so hung up on this whole "sending the sub-frames at the same time" when it isn't even a thing in reality darinp???

eShift doesn't work that way. There is only ONE imaging chip in these projectors, so therefore the sub-frames MUST be sequentially sent one at a time. And each of those sub-frames are ONLY the same sized 1920x1080 pixels as their non eShift counterpart projectors. If they get cut up into 4 slices afterwards, that doesn't matter because that is ALL happening outside and AFTER the lens, in your brain!
post #507 of 1307 Old 07-03-2017, 05:15 PM by darinp
Quote:
Originally Posted by Javs View Post
Nope, you are fixating on the time aspect ...
Just to defend myself a little bit. You are right, but this whole thread was created because of the time aspect and the time aspect is what some people have been claiming for 18 months I was wrong about.
Quote:
Originally Posted by Javs View Post
I think I just super clearly illustrated it to Dave in my post too, once people grasp that concept, its easy to see by extension this is a simple technicality and that sharper lenses will always be able to yield smaller detail when comped together.
I will applaud you greatly if you can get Dave and Stereodude to understand that and also why it doesn't matter whether the sub-frames are sent at the same time, or at different times.

Thanks,
Darin
post #508 of 1307 Old 07-03-2017, 05:15 PM by Javs
Quote:
Originally Posted by Dave Harper View Post
Yes, but I have ALWAYS prefaced it by saying that we have a lens that "clearly resolves" the 1080p pixels, have I not??? To me that means looking as close as possible to your second image (the copy of mine), NOT the blurred one, because that would suck even for 1080p, and I doubt any video/optics engineer would choose THAT lens even for their 1080p projector. That, and ONLY that, was my preface and assumption to begin with.
I know you have, Dave; even I did in the first few pages of this thread. The problem is, while you and I know that 'clearly resolves' means pretty damn sharp (JVC RS600-level lens), Darin is getting hung up on that final 0.5% of sharpness, and THUS, technically, you can/will benefit from a sharper lens. Since his early examples hinged on illustrating the point of failure with 1080p sharpness, I showed that's nothing that even exists in today's projectors. Still, it's a technicality nonetheless, and a damned stupid discussion IMO, because everyone is on the same page about it at the end of the day.

I have said it before, and I will say it again: everybody in this thread IS ON THE SAME PAGE. But we never DEFINED what 1080p sharpness is acceptable for this discussion. Is it 90% MTF? Even if it was, the resulting composite image will be sharper with a 92%-MTF 1080p lens, get what I mean?

post #509 of 1307 Old 07-03-2017, 05:20 PM by darinp
Quote:
Originally Posted by Dave Harper View Post
Why are you so hung up on this whole "sending the sub-frames at the same time" when it isn't even a thing in reality darinp???
Because this is how intelligent people figure things out.

And once again, you are being extremely disingenuous. When you thought that hypothetical supported you, you used it. It was only after you realized that it supported me that you started claiming that hypotheticals don't matter.

The worst part is you just asked me a hypothetical because you thought it was to your advantage. There is no system with a camera genlocked at 120Hz to exactly catch each sub-frame. It isn't even a thing in reality. So why does it matter?

This is part of posting honestly Dave. Using your argument against any hypothetical that you think isn't to your advantage, while using other hypotheticals that don't exist isn't being intellectually honest.

I'm guessing your expert could actually straighten you out with an answer to my A, B, C, D question.

--Darin
post #510 of 1307 Old 07-03-2017, 05:22 PM by Dave Harper
Quote:
Originally Posted by darinp View Post
Are you now saying that he answered my A, B, C, D question and that it agreed with what I said and so you didn't post it? It is unclear to me what you are saying here about what question he answered.
Because, as I already explained at least once, human vision is like a camera with a slow shutter speed, not a camera with a high shutter speed. Seems like I addressed this multiple times.

If you want to know why it matters you have to use a camera with a slow shutter speed, otherwise you aren't answering the question for human vision. I think it has been clear since the first post in this thread that I was talking about lens requirements when displaying images for human viewers.
It does, and it doesn't matter.

According to your expert it doesn't matter, so why are you bringing this up? Your expert himself said that the lens requirements are the same whether the eShift images are combined before the lens or after the lens, so why are you contradicting him again?

--Darin
I am done with you. All this diatribe you're throwing out still doesn't take into account that your eyes and brain are AFTER the lens, and you even said and PROVED, with a camera capturing exactly one 1/120th instant in time (which shows you EXACTLY what the projector itself is doing at that instant!), that it only shows ONE single 1920x1080, .67"-pixel-sized frame! So it's easy to conclude that that is all the lens needs to resolve (yes, already assumed clearly resolved, Javs) in that instant of time, and since we ALL agreed lenses have no persistence (like the human eye/brain has), there is NOTHING further it needs to resolve!

I am not even getting into your whole spin where you're trying to talk about me and my "expert's" views as you have that so distorted it isn't even funny anymore.