Resolution requirements for lenses - Page 40 - AVS Forum
post #1171 of 1307, 08-04-2017, 09:11 AM
Stereodude (AVS Forum Addicted Member)
Quote:
Originally Posted by cmjohnson View Post
Nope, still wrong, stereodude, because the "virtual" pixels, which are 1/2 size (H and V) and carry real picture information, require the lens to have sufficient angular resolution to resolve pixels of that size REGARDLESS of the fact that the virtual pixels are not actually discretely projected through the lens system.

If that were not the case, then you could get extra performance out of any lens by shifting it slightly relative to the imaging device on a dynamic basis, giving "free" bandwidth in direct violation of Nyquist.

The shift is angular spatial information and it must be resolvable to a defined standard. As feature size decreases, the sharpness of the lens has to increase in order to achieve the same MTF value.

A 4K projector requires a better lens than a 1080p projector. A 4K e-shift projector requires a better lens than a 1080p projector.

If this were not true then nothing would ever require a better lens, according to the way you're thinking. So why not use a lens built for an early-generation VGA projector at 640x480 resolution? Well, why not? Because if a 1080p lens is good enough for 4K, then a VGA lens should be good enough for 1280x960 or so, according to your (defective) reasoning.
I am most certainly not wrong. You are wrong in assuming that any projector ships with a lens that is only just barely good enough to resolve the target resolution. If the MTF of a 1080p projector for a 1080 TV line test pattern were 0, then yes, it would need a better lens to resolve the extra resolution of an e-shift system. However, no one sells an HT-centric 1080p projector with a lens anywhere near that bad. The typical HT-centric 1080p projector has a lens that is good enough to make out the gaps between the pixels. As such, the MTF-limited extra resolution gained from an e-shift system can be realized without a better lens.
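(A quick way to put numbers on that "you can see the gaps" criterion; a sketch in normalized units, not a measurement of any real projector: the inter-pixel grid repeats at 1/pitch, which is exactly twice the 1080p Nyquist frequency and exactly the Nyquist frequency of the half-pitch e-shift sample grid, so a lens that renders the gaps visibly already has usable MTF at the frequency where e-shift's finest detail lives.)

```python
p = 1.0                       # native pixel pitch (normalized units)
f_nyquist_1080 = 1 / (2 * p)  # finest native line pair: one on/off pixel pair
f_grid = 1 / p                # repeat frequency of the inter-pixel grid
f_eshift = 1 / (2 * (p / 2))  # Nyquist of the half-pitch e-shift sample grid

print(f_grid / f_nyquist_1080)  # 2.0: the visible grid sits at twice native Nyquist
print(f_eshift == f_grid)       # True: resolving the grid ~ resolving e-shift detail
```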
post #1172 of 1307, 08-04-2017, 09:21 AM
Stereodude (AVS Forum Addicted Member)
Quote:
Originally Posted by cmjohnson View Post
Let's reduce this to a very, VERY simple way of looking at it.

There are those who persist in believing that:

If a given lens is good enough to resolve a specific pixel size when projected, but that pixel is of a size that is the limit of the resolution of the lens,

then that lens can resolve a detail that is SMALLER than the FIRST pixel,

BY PROJECTING IT AGAIN IN A SLIGHTLY DIFFERENT LOCATION.



Jaw hits floor. How can anyone be that DUMB?
I don't think a single person in the thread has argued what you're trying to claim they have. I have said over and over that if the lens is adequately sharp for 1080p, it is adequate for realizing the MTF-limited, rolling-off extended resolution of 1080p e-shift. We don't need a better lens, contrary to what Darin claims. And by adequately sharp I don't mean an MTF that is just barely distinguishable from 0 with a 1080 TV line pattern, as you seem to think. I mean it has sharp, clearly defined 1080p pixels.
post #1173 of 1307, 08-04-2017, 09:32 AM
Stereodude (AVS Forum Addicted Member)
Quote:
Originally Posted by Javs View Post
Today, the real advantage of e-shift is not to erase the pixel grid (that is what you get when you enable e-shift on a standard 1080p input). With a UHD input you get the super-resolution effect, whereby the e-shift system is able to stack unique subframes, and the end result is an image with higher density and resolution than that of 1080p: effectively 4 million pixels or so, or roughly 3K.
The point of e-shift is absolutely to get rid of the pixel grid. The abrupt drop of MTF to 0 at the Nyquist limit of the pixel grid creates an artificial, unnatural sharpness in the image. With e-shift the abrupt MTF drop is removed and replaced by a slow roll-off to 0 above the Nyquist limit. The image doesn't look as sharp at first glance, but it does contain higher-frequency detail.
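(A minimal numpy sketch of that roll-off claim; illustrative only, assuming an ideal square pixel aperture and normalized units, not any particular projector. The display-side aperture MTF is |sinc(f·w)| for pixel width w; a native grid carries nothing above its Nyquist frequency 1/(2p), a cliff, while e-shift halves the sample pitch but keeps the same pixel width, so detail between 1/(2p) and 1/p survives at gradually falling contrast.)

```python
import numpy as np

w = 1.0                     # pixel width (unchanged by e-shift)
p = 1.0                     # native sample pitch; e-shift samples at p/2
f = np.linspace(0, 1.2, 7)  # spatial frequency, in cycles per pixel pitch

aperture_mtf = np.abs(np.sinc(f * w))  # np.sinc(x) = sin(pi*x)/(pi*x)

for fi, m in zip(f, aperture_mtf):
    native = m if fi <= 1 / (2 * p) else 0.0  # hard cutoff at the native Nyquist
    eshift = m if fi <= 1 / p else 0.0        # slow roll-off, reaching 0 at 1/p
    print(f"f={fi:4.2f}  native={native:4.2f}  e-shift={eshift:4.2f}")
```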
post #1174 of 1307, 08-04-2017, 11:55 AM
cmjohnson (AVS Forum Special Member)
https://www.avsforum.com/forum/24-dig...l#post38912713

Stereodude, you contradict yourself.

Incidentally, at no point have I made any claim that any given projector ships with a lens that is "just adequate" for its rated resolution. I do expect that most units come equipped with a lens that is good enough to provide a good MTF value and would provide an acceptable MTF value at some undefined higher resolution.

You're just not getting it. The added information provided by e-shift requires a higher spec lens than the minimum defined requirement to resolve the non-shifted base resolution. It can be drawn, it has an angular component, thus it requires a lens that has the angular resolution required to actually resolve it. You may establish a performance standard, and in fact that should be done.

If a given lens achieves 50 percent MTF at 1080p then at e-shifted quasi-4K or full 4K it will still pass the image but its MTF value may be down to, oh, I don't know, 25 percent or so for the quasi-4k pixels. (Guessing.)
post #1175 of 1307, 08-04-2017, 01:56 PM
dovercat (Advanced Member)
Quote:
Originally Posted by cmjohnson View Post
https://www.avsforum.com/forum/24-dig...l#post38912713

Stereodude, you contradict yourself.

Incidentally, at no point have I made any claim that any given projector ships with a lens that is "just adequate" for its rated resolution. I do expect that most units come equipped with a lens that is good enough to provide a good MTF value and would provide an acceptable MTF value at some undefined higher resolution.

You're just not getting it. The added information provided by e-shift requires a higher spec lens than the minimum defined requirement to resolve the non-shifted base resolution. It can be drawn, it has an angular component, thus it requires a lens that has the angular resolution required to actually resolve it. You may establish a performance standard, and in fact that should be done.

If a given lens achieves 50 percent MTF at 1080p then at e-shifted quasi-4K or full 4K it will still pass the image but its MTF value may be down to, oh, I don't know, 25 percent or so for the quasi-4k pixels. (Guessing.)
Home Theater Geeks 338: All About JVC Projectors (at 38:54) has an MTF graph, measured with the lens:

Native 4K: MTF at 1080 TV lines about 0.90, at 2160 TV lines about 0.68
Native 2K: MTF at 1080 TV lines about 0.55
VS2400/2500 E-shift: MTF at 1080 TV lines about 0.55, at 2160 TV lines about 0.05

Which implies JVC native 1080p e-shift to 4K projectors do not use better lenses than other 1080p native projectors. Just the minimum recommended for 1080, which is an MTF of 40% minimum for video, with 50% to 60% being the norm for projectors, at 1080 TV lines.

post #1176 of 1307, 08-04-2017, 01:58 PM
AJSJones (Advanced Member)
Quote:
Originally Posted by Stereodude View Post
I don't think a single person in the thread has argued what you're trying to claim they have. I have said over and over that if the lens is adequately sharp for 1080p, it is adequate for realizing the MTF-limited, rolling-off extended resolution of 1080p e-shift. We don't need a better lens, contrary to what Darin claims. And by adequately sharp I don't mean an MTF that is just barely distinguishable from 0 with a 1080 TV line pattern, as you seem to think. I mean it has sharp, clearly defined 1080p pixels.
Your version of "adequate" is subjective, no matter how many times you say it. And, from the beginning, many have agreed that if you have a lens that "has sharp and clearly defined pixels" (which brings with it a well-defined grid/SDE), then that lens will show most of the benefit available in the e-shift system, because in that case the lens is a minor influence on the overall system MTF.

This discussion is not about the question of "how well do currently supplied lenses work?"; these are thought experiments, and the OP question contemplated, in Darin's words:
"There was one camp ... who felt that lenses only need to be good enough for the resolution of one of the e-shift frames and not for the finer elements of the full image frames that include 2 e-shift frames, since those frames don't pass through the lens at the same time." This proposition must mean that some believe something different happens to the subframe images "in the lens at the same time" that doesn't happen when they go through separately, because the eye/brain will receive the information the same way in either case.
Quote:
Originally Posted by Dave Harper View Post
As soon as you ...

A. Flash the two native sub-frames at the same exact moment in "time"...

Then you must move to a higher resolution lens with greater MTF,
Here's a thought experiment, just since we're having so much fun here. Forget about actual projectors, actual lenses, and assertions about them, for this thought experiment.

Imagine a full-resolution 4k DMD chip with such a high fill factor (e.g. 98%) that we can barely see the grid even with a super-duper diffraction-limited lens, so we can ignore the grid for the discussion below; we just see pixels with sharp edges just touching.

We can use a somewhat worse lens (than a diffraction-limited one) and still not detect deterioration of the image from the viewing distance, because the increased blur at the pixel edges is so small compared to the size of the pixels that the edges still appear sharp: your criterion. The information from each pixel does not contaminate any of its neighbours.

If we use progressively worse lenses we will come to a point where the viewers notice/perceive the degradation (like at the optician: "Is 1 sharper than 2?" when switching lenses while viewing letters on the chart). Let's call this the lens "threshold" for this 4K chip at our viewing distance, i.e. the quality where a "better lens" would not be noticeable. Let's call it 100 on some arbitrary (and non-linear) scale. The scale combines 1) the digital input signal MTF and 2) the lens MTF into the system MTF, and 3) includes our perception of the information. Holistic, even, as Highjinx might say. It might be a little different depending on the image we choose (test patterns versus real-world images, because of eye/brain processing etc.), but let's accept the principle.


Now, I bin pixels together in squares of 4 in the processor to project a 1080p input image (each 1080p pixel created by 4 mirrors from the 4k device). It will not look as sharp as the 4k image, obviously, as everyone agrees, because of the decreased information content in the same screen area and the bigger pixels. So I can start making the lens worse and doing comparisons, asking "is 1 (our 4k threshold lens) better than 2 (a worse lens)?" We will be able to use a worse lens before people are able to say "yes" to our question. As expected, a 1080p image doesn't require as good a lens as a 4k image. Let's call that lens a 50 on our scale. So far, we're at "D'oh", right?

Now for the third part. I take the data from two subframes (those an e-shift processor would give me from a 4k source to be displayed by a 1080p chip with a shift device) and shift and superimpose them in the processor into a 4k array. I now have a 4k data array carrying only the superimposed image data from the 1080p subframes, going to the chip and through the lens "at the same time". It obviously doesn't have 4k's worth of detail information, but it projects all the information (from the 4k-to-2x-1080p processor) that the combination of the two subframes can carry. People here tell us that it IS significantly better than a single 1080p image, so it must be carrying more detail information, right? When we do the lens testing, the same people who say "e-shift is better" will be able to tell the difference and will set the threshold at a 70 lens, for example, to resolve that extra information they perceive from shift projectors.

Fourth part.
We take those subframes and send each to our 4-pixel-binned chip so it projects 1080p images, and project them sequentially by adding an e-shift device, so the superimposition happens on the screen after the lens. Which lens will people pick as the threshold? A 50 or a 70?

I think the piece missing from Stereodude's argument is the absence of the eye/brain processor in the assessment of the overall MTF of the system AS WE SEE IT. The eye/brain never sees a single subframe on its own; it sees the successive superimpositions, so considering a subframe in isolation doesn't make sense to me when discussing how sharp a picture appears.
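(To make the third and fourth parts concrete, here is a minimal numpy sketch of the superimposition; a toy model with an 8x8 stand-in for the 4k frame, wrap-around edges, and plain box binning, none of which is any vendor's actual processing. On a typical random image the composite of two shifted subframes lands between plain 1080p and the 4k original in fidelity.)

```python
import numpy as np

rng = np.random.default_rng(0)
hi = rng.random((8, 8))  # stand-in "4k" frame (tiny, for illustration)

def subframe(img, dy=0, dx=0):
    """Box-bin 2x2 blocks at offset (dy, dx) into one 1080p-like subframe,
    then paint each binned value back over its 2x2 footprint on the 4k grid."""
    shifted = np.roll(img, (-dy, -dx), axis=(0, 1))  # wrap-around toy edges
    binned = shifted.reshape(4, 2, 4, 2).mean(axis=(1, 3))
    up = np.repeat(np.repeat(binned, 2, axis=0), 2, axis=1)
    return np.roll(up, (dy, dx), axis=(0, 1))

plain_1080 = subframe(hi)                           # one subframe alone
eshift = 0.5 * (subframe(hi) + subframe(hi, 1, 1))  # two half-pitch-shifted subframes

print(np.abs(hi - plain_1080).mean())  # error vs the 4k original
print(np.abs(hi - eshift).mean())      # usually smaller: the composite carries extra detail
```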

post #1177 of 1307, 08-04-2017, 03:37 PM
CherylJosie (AVS Forum Special Member)
Quote:
Originally Posted by AJSJones View Post
The smaller the aperture, the greater the diffraction and the lower the resolution. (Many photographers here may be aware of this: on a DSLR the captured image gets softer and softer as one stops down to narrower than f/8 or f/11.) Other than that aperture effect, there's no diffraction going on in a lens. Only refraction.
Not a photographer, not experienced, but aware (better now). I knew about depth of field vs. detail.

Quote:
Hence the 2160p MTF 0 comments.
Lost me there. Not a physicist.

The wiki on MTF describes 'modulation transfer function' as the optical transfer function neglecting phase (magnitude only). The phase component is zero, so they are by definition neglecting phase; does that mean no coherent light, or is it just a simplification to concentrate on the focal length? Unsure what that means. I was fascinated by how it can be directly captured and plotted simply by translating the imaging surface on the z axis above and below the focal plane, because it is actually a 3D function in space.

By MTF 0 you mean the peak at zero spatial frequency?
post #1178 of 1307, 08-04-2017, 10:49 PM
darinp (AVS Forum Special Member)
For those who say that all that matters is the pixel size that goes through the lens at any given point in time and the composite image is irrelevant as far as lens requirements, how about this one?

Research money has gone to digital instead of CRT, but let's say a company spent money making CRT projectors and they created projectors like the following.

They have 2 models, the CRT1080 and the CRT540. Both use fast phosphor so that one sub-frame is done before the other starts. Both only display 1080i120 images. That is one 1080i image each 8 milliseconds.

Both can accept 1080p60. The difference between the 2 models is this. The CRT1080 will display all the data in the 1080p60 images, by displaying all the even lines in one 1080i frame for 8 milliseconds, then the odd lines in the next 1080i frame.

The cheaper model, the CRT540, will display all the even lines in one 1080i frame for 8 milliseconds, then will repeat all of the even lines again in the next 1080i frame, throwing away the data for the odd lines.

The size of the pixels going through the lens for any one given point in time is exactly the same between the two projectors.

The only difference is in the composite images the projectors create over time. The CRT540 maxes out at being able to display a horizontal line pattern with 540 lines. The CRT1080 maxes out at 1080 lines, but that is only when considering the composite images. When considering only the moment in time it cannot do 1080 lines, although if you just looked at the height of the pixels for a given point in time they would be 1/1080th of the height. The key is that they would also be 1/1080th of the height for the CRT540, even though it cannot actually deliver 1080 lines to human viewers because it just repeats half of the input signal.

So, what determines the lens requirements for vertical resolution? Is it the pixels at any given moment, meaning both projectors have exactly the same lens requirements for vertical resolution? Or are those lens requirements based on the composite images, meaning that the requirements are different between the two projector models?
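(A toy numpy illustration of the distinction; the CRT540 and CRT1080 are the post's hypothetical models, not real products. Both emit lines of identical height at identical instants; only the composites built over time differ, and only the CRT1080's composite can carry a 1080-line pattern.)

```python
import numpy as np

pattern = np.tile([1.0, 0.0], 540)        # a 1080-line alternating test pattern

even, odd = pattern[0::2], pattern[1::2]  # the two 540-line fields of the source

def composite(field_a, field_b):
    """Interleave two 540-line fields into the 1080-line composite seen over time."""
    out = np.empty(1080)
    out[0::2], out[1::2] = field_a, field_b
    return out

crt1080 = composite(even, odd)   # even lines, then the odd lines
crt540 = composite(even, even)   # even lines, then the even lines again

print(np.array_equal(crt1080, pattern))  # True: the 1080-line pattern survives
print(crt540.min(), crt540.max())        # 1.0 1.0: the pattern collapses to flat
```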

--Darin
post #1179 of 1307, 08-05-2017, 06:42 AM
dovercat (Advanced Member)
Quote:
Originally Posted by darinp2 View Post
In one of the JVC threads there was a discussion about whether better lenses are required with e-shift on than off to show all the detail that the rest of the projector is producing, much like 4K panels requiring better lenses than 2K panels to resolve all the resolution of each, all else being equal. Just to be clear these are entertainment devices and no level of quality is really required, but hopefully people get the idea about how "required" is being used in this case.

There was one camp that at least at first I would say was strongly in the majority who felt that lenses only need to be good enough for the resolution of one of the e-shift frames and not for the finer elements of the full image frames that include 2 e-shift frames, since those frames don't pass through the lens at the same time.

I thought there was some good discussion and progress, but some readers would be left being misinformed...

...Does anybody still believe that if a lens is good enough to show the E and good enough to show the 3 this means it is good enough to show the full image of those next to each other properly as long as the E and 3 never go through the lens at the same time?

--Darin
JVC appear to believe that if a lens is good enough for 1080, it's good enough for 1080 native with E-shift to 4K.

Home Theater Geeks 338: All About JVC Projectors (at 38:54) has an MTF graph, measured with the lens:

Native 4K: MTF at 1080 TV lines about 0.90, at 2160 TV lines about 0.68
Native 2K: MTF at 1080 TV lines about 0.55
VS2400/2500 E-shift: MTF at 1080 TV lines about 0.55, at 2160 TV lines about 0.05

Which implies JVC native 1080p e-shift to 4K projectors do not use better lenses than other 1080p native projectors. Just the minimum recommended for 1080, which is an MTF of 40% minimum for video, with 50% to 60% being the norm for projectors, at 1080 TV lines.
post #1180 of 1307, 08-05-2017, 07:52 AM
AJSJones (Advanced Member)
Just to finish the thought experiment above: I should have created the scale to measure blur. Think of it as a measure of how wide the image of a pure black/white transition edge, where pixels touch, shows up on our screen; the screen will show gray between the black and the white. (Check out the commercial product Imatest as an example.)

Diffraction limited lens has a blur of 20
Case 1 4k - our audience can only detect a degradation of the image when blur is increased to 50 - based on edge blur for the size of the pixels.
Case 2 1080p - our audience can only detect a blur when increased to 100
Case 3 E-shift composite - audience can detect blur at 70
Case 4 E-shift sequential - ?

In the sequential situation the proponents seem to say we can get away with a lens that causes a blur of 100, but if the superimposition occurs before the lens we need the lens to decrease the blur to 70. So the question is why does the eye not detect the greater 100 blur we know (from case 2) to be present on the edges of the pixels in the sequential 1080p subframes? What is the physics or eye/brain phenomenon that is the cause of the difference?
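(One standard way to put numbers on such a blur scale is an edge-spread-function measurement, which is roughly what tools like Imatest do with a slanted edge. A simplified sketch; the Gaussian lens model and the 10-90% rise criterion are illustrative assumptions, not anyone's measured data.)

```python
import numpy as np
from math import erf, sqrt

x = np.linspace(-5, 5, 2001)  # position across the edge, in pixel units

def edge_profile(sigma):
    """ESF: a perfect black/white edge blurred by a Gaussian lens of width sigma."""
    return np.array([0.5 * (1 + erf(xi / (sigma * sqrt(2)))) for xi in x])

def rise_10_90(profile):
    """Blur number: the distance over which intensity rises from 10% to 90%."""
    return x[np.searchsorted(profile, 0.9)] - x[np.searchsorted(profile, 0.1)]

for sigma in (0.2, 0.5, 1.0):  # better-to-worse hypothetical lenses
    print(sigma, round(float(rise_10_90(edge_profile(sigma))), 2))
```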

post #1181 of 1307, 08-05-2017, 03:27 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by dovercat View Post
Which implies JVC native 1080p e-shift to 4K projectors do not use better lenses than other 1080p native projectors.
I didn't watch it, but that doesn't sound right to me. It sounds like they just measured the same projector for 1080p and 2700p. Was there any indication that they used a non-JVC projector for one of the tests?

Besides the fact that no company is obligated to actually change anything for the kind of requirement I talked about in the first post, if they did tighten up the lens area in thinking about eShift, there likely wouldn't be enough difference to keep them from saving on volume pricing and putting the same lens in a non-eShift version of the same projector.

--Darin

post #1182 of 1307, 08-05-2017, 04:00 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by AJSJones View Post
In the sequential situation the proponents seem to say we can get away with a lens that causes a blur of 100, but if the superimposition occurs before the lens we need the lens to decrease the blur to 70. So the question is why does the eye not detect the greater 100 blur we know (from case 2) to be present on the edges of the pixels in the sequential 1080p subframes? What is the physics or eye/brain phenomenon that is the cause of the difference?
Not sure any of the proponents will ever tell us any physics reason. Just intuition.

Some were telling us before that the sub-frames would interact. After over 18 months it seemed like Stereodude finally changed his position from firmly believing that the temporal separation was important, to saying it was irrelevant (while trying to act like it was my fault for starting a thread to tell people like him that the temporal separation didn't matter). So, those physics reasons for his beliefs of over 18 months seem to have magically disappeared.

Highjinx has seemed to move from saying the sub-frames would interact to saying that somehow sending the sub-frames has a problem with higher data density, as if that is a reason. He now seems to say that the extra photons won't change each other, but that something will happen from having 2 sub-frames going through the lens at the same time. I'm not sure what that "something" is supposed to be. Is it supposed to be more blur per sub-frame?

Not sure, since whatever blur the first sub-frame has can be added to whatever blur the second sub-frame has, to get the blur for the whole frame. The only way the blur for the final frame can be changed is to change the blur for the sub-frames. Yet, it seems like Highjinx's current position is that sending the sub-frames at the same time doesn't change the blur of each sub-frame, but does change the blur of the whole frame. Not sure though, since it seems like Highjinx wants to stay away from any actual property of physics.

Dave Harper keeps repeating over and over again that the yellow sub-pixel is combined in your brain and not in the lens, but I don't recall him coming up with a single good reason why he thinks this matters. Since it should be obvious to anybody that the final frame (blur and all) is just the addition of sub-frame A and sub-frame B, I am not sure if Dave believes sub-frame A changes sub-frame B and vice versa if they are sent through the lens at the same time, or if he has some other reason for thinking it matters when they go through the lens.

As you basically say, the eye should see the blur for the red pixel and it should see the blur for the green pixel. The addition is the blur it sees for the yellow sub-pixel too.

Dave may never really tell us any physics reason that it matters whether the yellow sub-pixel passes through at a moment in time though. Especially since Dave's expert knows the temporal separation doesn't matter given that they said B=C and Dave is contradicting his own expert every time he repeats that it matters that the yellow sub-pixel is combined in your brain. Even as Dave tells me that I need to take everything his expert said as final (even as the guy contradicts himself), I think Dave has contradicted his own expert at least 10 times.

If Dave still believes that it matters when the yellow sub-pixel goes through the lens, he should be able to come up with a physics reason that a human would see a different final image if the yellow sub-pixel goes through the lens at one moment in time vs. if yellow is displayed by projecting the red and the green with temporal separation (like a single-chip DLP).

--Darin

post #1183 of 1307, 08-05-2017, 04:39 PM
darinp (AVS Forum Special Member)
Does anybody here disagree with these statements:

When sub-frames A and B are sent sequentially at very high speed, the images humans see are sub-frame A plus sub-frame B.

When sub-frames A and B are sent simultaneously, the images humans see are sub-frame A plus sub-frame B.
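(In optics terms those two statements amount to linearity of incoherent image formation: lens blur acts as a convolution, and convolution distributes over addition, so blurring the sub-frames one at a time and summing gives exactly the same image as summing first and blurring once. A minimal numpy check, with toy 1-D frames and an arbitrary kernel standing in for a lens PSF:)

```python
import numpy as np

rng = np.random.default_rng(1)
sub_a, sub_b = rng.random(32), rng.random(32)  # two arbitrary toy sub-frames
psf = np.array([0.25, 0.5, 0.25])              # stand-in lens blur kernel

blur = lambda img: np.convolve(img, psf, mode="same")

sequential = blur(sub_a) + blur(sub_b)  # sub-frames pass the lens one at a time
simultaneous = blur(sub_a + sub_b)      # sub-frames pass the lens together

print(np.allclose(sequential, simultaneous))  # True: linearity of convolution
```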

--Darin
post #1184 of 1307, 08-05-2017, 04:50 PM
Javs (AVS Forum Special Member)
Sigh... this thread is a big fat merry-go-round...

I reckon we all cut our losses and /end thread
post #1185 of 1307, 08-05-2017, 05:39 PM
AJSJones (Advanced Member)
Quote:
Originally Posted by AJSJones View Post
In the sequential situation the proponents seem to say we can get away with a lens that causes a blur of 100, but if the superimposition occurs before the lens we need the lens to decrease the blur to 70. So the question is why does the eye not detect the greater 100 blur we know (from case 2) to be present on the edges of the pixels in the sequential 1080p subframes? What is the physics or eye/brain phenomenon that is the cause of the difference?
Quote:
Originally Posted by darinp View Post
Not sure any of the proponents will ever tell us any physics reason. Just intuition.
...
--Darin
That's the problem in this thread, in a nutshell. No scientific/optical/physics explanation has been provided for very strongly stated assertions and that means they just don't cut it.

(The yellow pixel doesn't exist in a DLP device either, but that's not relevant to the edge sharpness issue.)

post #1186 of 1307, 08-05-2017, 08:15 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by AJSJones View Post
That's the problem in this thread, in a nutshell. No scientific/optical/physics explanation has been provided for very strongly stated assertions and that means they just don't cut it.
One of the worst parts, IMO, is that one person even went to somebody who is supposed to be a physics expert to get their position. Yet while they can keep telling us that this person gets paid to know about lenses and I don't, and can repeat over and over again that the yellow pixel is combined in our brains, they can't provide any reason why it really matters where the yellow sub-pixel is combined.

Dave Harper has access to a "physics expert", so why doesn't he have a physics answer for why it matters when the yellow pixel is formed?

Dave Harper's continued insistence that his expert must be right (even as he contradicts said expert) and I must be wrong simply because the expert is paid to work on this stuff makes me think he hasn't paid attention enough to see how many times in life the people who are supposed to be the most expert in the field turn out to be wrong, where somebody less expert could figure it out.

The double slit experiment mentioned earlier is something where a student figured out some stuff about it that the best physics minds hadn't figured out.

I've had experiences where I've figured things out where others were wrong, yet they could have pointed to their resumes as if they couldn't be wrong given what those said. One related to home theater was when I had to explain some math about ANSI CR to Joe Kane, even though he has a math degree. Should I have just rolled over and started repeating his false beliefs about the room effect on ANSI CR just because he gets paid to know home theater things, he has a math degree, and I have an engineering degree?

One reason I think I am able to figure things out sometimes better than people who are supposed to be experts is because I think many of them learn rules that work 99% of the time, but when they get a problem that falls outside those rules they have forgotten, or never knew, the underlying principles that led to those rules, at least not well enough.

For example, back in the CRT days many people in the field were told that on/off CR was irrelevant and ANSI CR is all that matters. This worked well for CRT vs CRT. However, when digitals came along the rule didn't work anymore. Yet, 15+ years later there are people still trying to stick to their rule.

I've had enough experience with some of them to know that they will use one of the same tactics Dave Harper keeps using: instead of comparing actual knowledge, let's compare resumes. As if that will tell us the answer.

Dave kind of touched on one of the problems with eShift and lens experts. That is, eShift asks them to think outside the box. They've been able to use their rule that you just have to look at the pixel size for so long, but eShift adds a wrinkle. It isn't like those other projectors.

So, even while Dave spouts his confidence that his expert is right, the expert himself seems less than 100% confident in his answers, especially since he contradicts himself. I think he does that because he has been asked to apply his expertise to a problem that requires thinking outside the box.

As I've said, I'm open to believing that this expert is right that A=C. I don't think it is the case, but could be convinced. The reason I started the thread was to say B=C (that the temporal separation between the sub-frames doesn't matter) and I believe Dave's expert is right on that one.

I haven't seen anybody come up with any good physics reason that sending the 2 frames at the same time would be different than sending them in sequence at high speed, other than minutiae. Just the same intuition over and over again, it seems.

--Darin

post #1187 of 1307, 08-05-2017, 10:25 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by dovercat View Post
I would assume the bottom image has a better lens as the smallest detail I can clearly see is the lines that make up the pixel squares, the screen door effect.
Just in case it wasn't clear, those pictures are from the exact same projector. Just eShift with a 4K source vs native 1080p.
Quote:
Originally Posted by dovercat View Post
How good are they at displaying a 4K 1 pixel checker board, 1 black pixel, one white pixel.
They can't. While not exactly the same, eShift is more like 2.7K than 4K.
Quote:
Originally Posted by dovercat View Post
In comparison to non pixel shifting display displaying a checker board of their native image chip resolution?
They can't make things as sharp above 1080p as they can with native 1080p mode because they have overlap between sub-frames.
Quote:
Originally Posted by dovercat View Post
Is not the real advantage of pixel shift systems that they eliminate the visibility of pixel structure by overlaying the pixels and cause MTF contrast to fall off with resolution rather than drop off a cliff to zero at the native image chip resolution. So in theory creating a more natural looking image. At the expense of a less artificially sharp looking image.
If the only goal was to eliminate pixel structure they could have saved a lot of money by just using the method described in the paper that Tomas2 linked to: put a physical element in that splits the image in 2 and adds a shift between the 2 sub-frames. The method JVC chose can actually deliver detail that native 1080p on its own cannot, besides the less artificially sharp-looking image.
Quote:
Originally Posted by dovercat View Post
Though I get your point that the lens has to be good enough to display a half pixel difference otherwise you would not clearly see the half pixel difference when its created in one image following another that is half a pixel different. But, the lenses for the native chip projectors are easily capable of that as on a projected image you can see the native pixel structure, the native pixels do not all blur into one another due to poor lens focus.
Even if current lenses are good enough that doesn't change the discussion about where lens requirements actually lie, especially as we consider 4K+eShift for the future.
Quote:
Originally Posted by dovercat View Post
And I am unsure if they would bother improving lens quality much for pixel shift systems when in any event contrast/sharpness is I think going to be falling away for the smaller pixel shift created details due to the blurring caused by overlaying pixel shifted images.
It is possible, but JVC already went through a phase of improving their lenses. Companies don't have to wait until they have actually delivered something to improve an area they think will help. For example, years before JVC released their first native 4K home theater projector they improved the fill ratio for their 1080p chips in anticipation of this helping when they started delivering 4K chips.

While most 1080p projectors do deliver the pixels pretty sharply, I don't think it is as important for native 1080p images as it is when they add eShift. For example, in the pictures above I think a little bit more bleed from the 1080p pixels to a little outside the area that belongs to them would have less detrimental effect on the native 1080p image than on the eShifted image.

--Darin
post #1188 of 1307, 08-05-2017, 10:31 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by Stereodude View Post
Even the mythical scenario of C does not need a better lens.
Quote:
Originally Posted by Stereodude View Post
I am most certainly not wrong.
To be clear about your current position, it is that the following images have identical lens requirements, and that they would even if the top image was created by sending both 1080p frames at the same time, right?



--Darin
post #1189 of 1307, 08-06-2017, 02:54 AM
dovercat (Advanced Member)
Quote:
Originally Posted by darinp View Post
They can't.
Why not even if it is at low MTF?
And how about DLP native 2560x1600 pixel shifted to 4K?
Quote:
Originally Posted by darinp View Post
While not exactly the same, eShift is more like 2.7K than 4K.
Would you say the marketing is misleading?
Quote:
Originally Posted by darinp View Post
The method JVC chose can actually deliver detail that native 1080p on its own cannot, besides the less artificially sharp-looking image.
I agree. But I think the main benefit is a more natural-looking image, at the cost of a less artificially sharp-looking image.
Quote:
Originally Posted by darinp View Post
Even if current lenses are good enough that doesn't change the discussion about where lens requirements actually lie,
It makes the discussion redundant.
Quote:
Originally Posted by darinp View Post
especially as we consider 4K+eShift for the future.
Native 4K projectors, assuming the same or smaller chip size, will need higher lens quality than native 1080 projectors to achieve the 50-60% MTF norm for projectors at full native resolution line pairs.

Quote:
Originally Posted by darinp View Post
It is possible, but JVC already went through a phase of improving their lenses.
Going by the graph in the Home Theater Geeks video, JVC image MTF including the effect of the lens is bog-standard, middle of the road for 1080. Going by the graph, JVC are at about 55% at full native resolution line pairs.
Quote:
Originally Posted by darinp View Post
Companies don't have to wait until they have actually delivered something to improve an area they think will help. For example, years before JVC released their first native 4K home theater projector they improved the fill ratio for their 1080p chips in anticipation of this helping when they started delivering 4K chips.
Increasing fill factor enables a smaller pixel size on the chip (which would otherwise decrease fill factor). Smaller pixel size on the chip equals a smaller chip, which equals cheaper production costs and more profit.

Smaller pixel size on the chip also requires better lenses. These can be made; I doubt better lenses are some dark art requiring new R&D. But better lenses are more expensive.

The lens cost factor, I think, renders it pointless to have better MTF at full resolution line pairs than needed to achieve the norm for projectors, unless the product is a premium, cost-is-no-object device.
Quote:
Originally Posted by darinp View Post
While most 1080p projectors do deliver the pixels pretty sharply, I don't think it is as important for native 1080p images as it is when they add eShift. For example, in the pictures above I think a little bit more bleed from the 1080p pixels to a little outside the area that belongs to them would have less detrimental effect on the native 1080p image than on the eShifted image.

--Darin
The need for the sharpness of the edge of a detail is the same regardless of resolution: split the screen down the middle, half black, half white, and resolution is irrelevant to how clear and sharp the transition appears. And overall perception of focus and sharpness of an image is also not dictated by how in focus the smallest detail you can see in the image is.

What changes with more resolution is how good the lens needs to be to retain high MTF at the smallest detail size that can be displayed, so that detail can be clearly seen. The minimum recommended for video, I think, is 40% at full resolution line pairs, with the norm for projectors being 50-60%.

A standard high-native-resolution display would have high 50-60% MTF at full resolution line pairs, then fall off a cliff to zero, as no smaller detail can be displayed. E-shift instead has high 50-60% MTF at full native resolution, with MTF then falling off at the higher e-shift-created resolutions, so smaller and smaller details become less clear. That creates a more natural image that does contain smaller details but does not necessitate the higher quality lens the higher native resolution projector would have.

It may be that e-shift is not capable of maintaining high MTF for higher-than-native-resolution details, due to those smaller details being created by overlaying images. That might inherently cause blurring. So what a higher quality lens might do is increase how easily noticeable the smaller details are, and how noticeable it is that they are blurry. Meanwhile, having image processing contrast and sharpness enhancement enables the user to adjust to personal preference and viewing conditions.

post #1190 of 1307, 08-06-2017, 09:05 AM
AJSJones (Advanced Member)
Quote:
Originally Posted by dovercat View Post
The need for the sharpness of the edge of a detail is the same regardless of resolution, ...
I find this a little cryptic. Are you referring to a detail as, for example, an "edge" within an image as depicted by an array of pixels, or as the edge of a single pixel? I have been viewing the lens quality debate as one which depends on how much blur the lens causes around each individual pixel (i.e., the size regime where good lenses have good MTF at much higher spatial frequencies than the dimensions of the details a digital signal can carry). The degradation of the image by using worse lenses would be a combination of the decreased sharpness of the boundaries between pixels and the increasing overlap of "information" from adjacent pixels.
Quote:
Originally Posted by dovercat View Post
A standard high-native-resolution display would have high 50-60% MTF at full resolution line pairs, then fall off a cliff to zero, as no smaller detail can be displayed. E-shift instead has high 50-60% MTF at full native resolution, with MTF then falling off at the higher e-shift-created resolutions, so smaller and smaller details become less clear. That creates a more natural image that does contain smaller details but does not necessitate the higher quality lens the higher native resolution projector would have.
So the lens does not need to be so high a quality as needed for a "real 4K" chip, but there will be a need for a somewhat better lens to extract the most benefit from the "smaller details" the image now contains.
Quote:
Originally Posted by dovercat View Post
It may be that e-shift is not capable of maintaining high MTF for higher-than-native-resolution details, due to those smaller details being created by overlaying images. That might inherently cause blurring. So what a higher quality lens might do is increase how easily noticeable the smaller details are, and how noticeable it is that they are blurry.
The extra detail that can be added by the e-shift process is, as you suggest, not going to achieve the same MTF range as in a single subframe, due to the inevitable superimposition of information from neighbouring pixels. However, it seems that many feel there is not only a (pleasing) decrease in SDE and an increase in "smoothness" you expect by shifting a 2k source, but also, when processing a 4k source to two subframes, a (somewhat) more detailed image than available from a 1080p chip, suggesting the processing does allow the chip to present an image with a higher "information bandwidth" to the lens. The JVC guy in the video says they can present 3K's worth of information but that may be a bit of a (JVC-centric) exaggeration. So, numbers of 2.7k are thrown around and maybe it's only 2.4 in some more rigorous measurement. In other words, processing a 4k source to two subframes for a 1080p projector is partially/modestly successful in increasing detail. Thus in practical terms, the increase in lens quality required to regain some specific "quality" parameter is modest (and that with the same lens, some but not all the improvement will be seen). All of these issues eventually get to the point where the human eye/brain/vision system has to be considered in the assessment of success in getting the detail to the screen to be "perceived". It seems you would agree with the notion that for an e-shift projector where extra detail is available from processing, a better lens will be of benefit, but that it does not need to be "4k quality". So, regardless of how much benefit can actually be achieved by the e-shift with a 4k source, lens requirements can be ranked: 1080p < e-shift 1080p < 4k. Most people seem to agree with this concept but...

That brings us to:

The issue that remains contentious (and why this thread is close to 1200 posts) is whether the lens requirement for projecting the extra detail is decreased by the fact that the two subframes are not projected "at the exact same time".

post #1191 of 1307, 08-06-2017, 01:31 PM
dovercat (Advanced Member)
Quote:
Originally Posted by AJSJones View Post
I find this a little cryptic. Are you referring to a detail as, for example, an "edge" within an image as depicted by an array of pixels, or as the edge of a single pixel?
That equates to the same thing. The lens helps determine how sharp the edge of each pixel is. Resolution, the smallest displayable detail, which is dictated by the imaging chip, does not define perceived image sharpness or focus. What defines image sharpness is the transition at edges, and in particular how sharp the larger details are. A good lens improves image quality regardless of resolution.
Quote:
Originally Posted by AJSJones View Post
I have been viewing the lens quality debate as one which depends on how much blur the lens causes around each individual pixel (i.e., the size regime where good lenses have good MTF at much higher spatial frequencies than the dimensions of the details a digital signal can carry). The degradation of the image by using worse lenses would be a combination of the decreased sharpness of the boundaries between pixels and the increasing overlap of "information" from adjacent pixels.
So the lens does not need to be so high a quality as needed for a "real 4K" chip, but there will be a need for a somewhat better lens to extract the most benefit from the "smaller details" the image now contains.
Not necessarily. Why bother having a better lens for e-shift if the small details are blurry anyway, due to being created by overlaying two images? Why not do as JVC appears to do and let the MTF drop off below the 50-60% that 1080 enjoys, down to as low as 5% in the smallest e-shift-created details?
Quote:
Originally Posted by AJSJones View Post
The extra detail that can be added by the e-shift process is, as you suggest, not going to achieve the same MTF range as in a single subframe, due to the inevitable superimposition of information from neighbouring pixels. However, it seems that many feel there is not only a (pleasing) decrease in SDE and an increase in "smoothness" you expect by shifting a 2k source, but also, when processing a 4k source to two subframes, a (somewhat) more detailed image than available from a 1080p chip, suggesting the processing does allow the chip to present an image with a higher "information bandwidth" to the lens.
I agree e-shift should produce a more pleasing image. No pixel structure and more image details at the cost of less artificial sharpness. For a more natural and more film like image.
Quote:
Originally Posted by AJSJones View Post
The JVC guy in the video says they can present 3K's worth of information but that may be a bit of a (JVC-centric) exaggeration. So, numbers of 2.7k are thrown around and maybe it's only 2.4 in some more rigorous measurement. In other words, processing a 4k source to two subframes for a 1080p projector is partially/modestly successful in increasing detail.
Marketing calls it 4K.
Quote:
Originally Posted by AJSJones View Post
Thus in practical terms, the increase in lens quality required to regain some specific "quality" parameter is modest (and that with the same lens, some but not all the improvement will be seen).
The smallest image detail e-shift should be able to produce is a quarter of a pixel in area (half a pixel in each dimension). That is full 4K resolution. While such small details may not be displayable as a checkerboard, they should show up in, say, the edge of a circle or a sloped line. If the aim was to make such small details as visible as possible then a lens equal to that used in a 4K projector, or better, would be needed.
Quote:
Originally Posted by AJSJones View Post
All of these issues eventually get to the point where the human eye/brain/vision system has to be considered in the assessment of success in getting the detail to the screen to be "perceived". It seems you would agree with the notion that for an e-shift projector where extra detail is available from processing, a better lens will be of benefit, but that it does not need to be "4k quality". So, regardless of how much benefit can actually be achieved by the e-shift with a 4k source, lens requirements can be ranked: 1080p < e-shift 1080p < 4k. Most people seem to agree with this concept but...
No, my view is that while in theory e-shift small-detail visibility could benefit from a lens equal to or better than that of a native 4K projector, in practice they do not appear to use better lenses than a 1080 projector. If true, that is by choice, and probably because the smaller details created by e-shift may be inherently blurry or of low MTF due to the overlaying of two images.
Quote:
Originally Posted by AJSJones View Post
That brings us to:

The issue that remains contentious (and why this thread is close to 1200 posts) is whether the lens requirement for projecting the extra detail is decreased by the fact that the two subframes are not projected "at the exact same time".
Display one image and then another, half a pixel different, so rapidly that persistence of vision overlays them, and the lens has to be capable of showing the half-pixel gap for you to see the difference. So I think it's irrelevant that only one image is displayed at a time. In theory a 1080 e-shift to 4K projector could benefit from having the same or better lens than a native 4K projector, as far as improving visibility of the smallest details it can display. If they are not using such lenses then presumably it's because the smaller details produced by e-shift may be blurry/low-contrast due to the nature of e-shift's two overlaid images, making the use of better lenses pointless or even possibly counterproductive to perceived picture quality.

post #1192 of 1307, 08-06-2017, 02:00 PM
AJSJones (Advanced Member)
Thanks
Quote:
Originally Posted by dovercat View Post
A good lens improves image quality regardless of resolution.
A lesson learned long ago in the DSLR drawn-out discussions of "Do our newer sensors "outresolve" our lenses?" - the combination of the two is always improved by the improvement of one
Quote:
Originally Posted by dovercat View Post
Not necessarily. Why bother having a better lens for e-shift if the small details are blurry anyway, due to being created by overlaying two images? Why not do as JVC appears to do and let the MTF drop off below the 50-60% that 1080 enjoys, down to as low as 5% in the smallest e-shift-created details?
This is the benefit that e-shifting a simple 1080p source gains from the e-shift: everything except the production of "new, smaller details". With the current lens quality of a 1080p system, not upgrading the lens will not deprive the viewer of much of that new detail.

Quote:
Originally Posted by dovercat View Post
I agree e-shift should produce a more pleasing image. No pixel structure and more image details at the cost of less artificial sharpness. For a more natural and more film like image.

Marketing calls it 4K.
Marketing and engineering often see the world differently.
Quote:
Originally Posted by dovercat View Post
The smallest image detail e-shift should be able to produce is a quarter of a pixel in area (half a pixel in each dimension). That is full 4K resolution. While such small details may not be displayable as a checkerboard, they should show up in, say, the edge of a circle or a sloped line. If the aim was to make such small details as visible as possible then a lens equal to that used in a 4K projector, or better, would be needed.
Indeed; as the engineer said, the things it does best are making lines straighter and circles more circular.


Quote:
Originally Posted by dovercat View Post
Display one image and then another, half a pixel different, so rapidly that persistence of vision overlays them, and the lens has to be capable of showing the half-pixel gap for you to see the difference. So I think it's irrelevant that only one image is displayed at a time. In theory a 1080 e-shift to 4K projector could benefit from having the same or better lens than a native 4K projector, as far as improving visibility of the smallest details it can display.
post #1193 of 1307, 08-06-2017, 02:47 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by dovercat View Post
Why not do as JVC appears to do and let the MTF drop off below the 50-60% that 1080 enjoys, down to as low as 5% in the smallest e-shift-created details?
Do you have any MTF numbers for 1080p with any other projectors? I don't know what the norm is. In JVC's case, no matter how good a lens was the curve would drop off some beyond 1080p. Of course, the less degradation from the lens at 1080p, the less there should be at 2700p.
Quote:
Originally Posted by dovercat View Post
Marketing calls it 4K.
Does JVC marketing still? With TI they start with 2.7K and claim 4K resolution, while seemingly trying to make people believe it is true 4K. That is why I think the best acronym for XPR is eXaggerated Pixel Resolution.
Quote:
Originally Posted by dovercat View Post
So I think it's irrelevant that only one image is displayed at a time.
Which makes you one more person who agrees with the reason I started this thread over 18 months ago; to say that the temporal separation between the sub-frames doesn't matter, after pretty much the whole forum who weighed in said it did.

--Darin
post #1194 of 1307, 08-06-2017, 04:39 PM
Javs (AVS Forum Special Member)
Quote:
Originally Posted by darinp View Post
Do you have any MTF numbers for 1080p with any other projectors? I don't know what the norm is. In JVC's case, no matter how good a lens was the curve would drop off some beyond 1080p. Of course, the less degradation from the lens at 1080p, the less there should be at 2700p.
Does JVC marketing still? With TI they start with 2.7K and claim 4K resolution, while seemingly trying to make people believe it is true 4K. That is why I think the best acronym for XPR is eXaggerated Pixel Resolution.
Which makes you one more person who agrees with the reason I started this thread over 18 months ago; to say that the temporal separation between the sub-frames doesn't matter, after pretty much the whole forum who weighed in said it did.

--Darin
When they test the lenses it won't be with an image from the projector's panels; it would be measured against an MTF chart, much as a camera lens is measured.

An MTF of 50 at 1080p is actually pretty good; I don't see a manufacturer targeting higher than that for a given resolution.

JVC X9500 (RS620) | 120" 16:9 | Marantz AV7702 MkII | Emotiva XPA-7 | DIY Modular Towers | DIY TPL-150 Surrounds | DIY Atmos | DIY 18" Subs
-
MadVR Settings | UHD Waveform Analysis | Arve Tool Instructions + V3 Javs Curves
Javs is offline  
post #1195 of 1307 Old 08-06-2017, 06:29 PM
Advanced Member
 
AJSJones's Avatar
 
Join Date: Dec 1999
Location: Sonoma County, CA, US
Posts: 886
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 100 Post(s)
Liked: 24
Quote:
Originally Posted by Javs View Post
When they test the lenses it wont be with an image from the projectors panels, it would be done differently measured against an MTF chart much how a camera lens is measured.

An MTF of 50 at 1080p is actually pretty good, I dont see a manufacturer targeting higher than that for a given resolution.
Indeed, that is pretty good, especially if it's maintained to the edge of the screen.

If they are ~8µm pixels, that's ~125 pixels/mm, or ~62 lp/mm on the chip. Luckily the chips aren't too big, so the lenses are not too much of a problem. These MTF plots are from the lens wide open, and lenses will usually perform significantly better when stopped down (compare f/2 with f/1.4 here).
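For anyone who wants to check that arithmetic (the 8µm pixel size is an assumption, not a datasheet figure):

Code:
pixel_pitch_um = 8.0                      # assumed pixel size
pixels_per_mm = 1000.0 / pixel_pitch_um   # 125 px/mm
lp_per_mm = pixels_per_mm / 2.0           # a line pair spans 2 pixels

print(f"{pixels_per_mm:.0f} px/mm -> {lp_per_mm:.1f} lp/mm at the panel")
# 125 px/mm -> 62.5 lp/mm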
AJSJones is offline  
post #1196 of 1307 Old 08-07-2017, 08:09 AM
Advanced Member
 
Tomas2's Avatar
 
Join Date: Jan 2017
Location: New Orleans,LA
Posts: 728
Mentioned: 2 Post(s)
Tagged: 0 Thread(s)
Quoted: 453 Post(s)
Liked: 691
Quote:
Originally Posted by AJSJones View Post
...So where are the slits in a lens to cause this interference?
Did you find the physics text that says this happens to incoherent light going through a normal lens?
Sure did

http://labman.phys.utk.edu/phys222co...ving_power.htm

"In geometrical optics we assume that an ideal, aberration-free lens focuses parallel rays to a single point one focal length away from the lens.  But the lens itself acts like an aperture with diameter D for the incident light.  The light passing through the lens therefore spread out.  This yields a blurred spot at the focal point.  Light near the focal point exhibits an Airy Disc pattern."

https://en.m.wikipedia.org/wiki/Angular_resolution

"The lens' circular aperture is analogous to a two-dimensional version of the single-slit experiment. Light passing through the lens interferes with itself creating a ring-shape diffraction pattern, known as the Airy pattern, if the wavefrontof the transmitted light is taken to be spherical or plane over the exit aperture.

http://www.math.ubc.ca/~cass/courses...rojects/krzak/

"When the light encounters the (single) slit, the pattern of the resulting wave can be calculated by treating each point in the aperature as a point source from which new waves spread out."

Quote:
...about what would happen to our ability to see things if all these light waves around us interfered the way you are suggesting:
The difference being a "lens": the wiki PSF reference you and Darin are banking on was referring to the non-interaction of photons en route to the lens. It was not an example of discrete point sources overlapping in a lens like the infamous hypothetical 'C'.

SAMSUNG QLED | ROTEL | MOREL | M&K | HAFLER | TECHNICS SP-25
Tomas2 is offline  
post #1197 of 1307 Old 08-07-2017, 09:29 AM
Advanced Member
 
Tomas2's Avatar
 
Join Date: Jan 2017
Location: New Orleans,LA
Posts: 728
Mentioned: 2 Post(s)
Tagged: 0 Thread(s)
Quoted: 453 Post(s)
Liked: 691
Quote:
Originally Posted by Highjinx View Post
We could look at each pixel, separated by the inter-pixel gap, as an individual shaft of light, not interacting with the others (mostly). But if we superimpose these shafts of light through this less-than-perfect transmissive medium then we will have some undesirable interaction.

It's not that the photons aren't playing nice; it's the glass that's not 100% transmissive that will cause an anomaly... deflecting here, deviating there.

If we had a 100% transmissive lens, then undesirable interaction could be discounted.
Agreed, it's not photon interaction prior to the lens, rather an interaction within the lens itself.

SAMSUNG QLED | ROTEL | MOREL | M&K | HAFLER | TECHNICS SP-25
Tomas2 is offline  
post #1198 of 1307 Old 08-07-2017, 10:34 AM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by Tomas2 View Post
The difference being a "lens", the wiki PSF reference you and Darin are banking on was referring to the non interaction of photons in route to the lens. It was not an example of discrete point sources overlapping in a lens like the infamous hypothetical 'C'
How do you figure that? Here is what the wiki page says:
Quote:
The degree of spreading (blurring) of the point object is a measure for the quality of an imaging system. In non-coherent imaging systems such as fluorescent microscopes, telescopes or optical microscopes, the image formation process is linear in power and described by linear system theory. This means that when two objects A and B are imaged simultaneously, the result is equal to the sum of the independently imaged objects. In other words: the imaging of A is unaffected by the imaging of B and vice versa, owing to the non-interacting property of photons.
Two things: it is clearly about non-coherent systems, and it is clearly talking about devices that have lenses.

Even after reading that, are you going to stick with your position that it was not referring to simultaneous imaging of more than one object using lenses? If so, how do you figure that, given that it clearly refers to non-coherent imaging with more than one device type that uses lenses?

Not sure how anybody could read that to mean what you are saying it means. I assume we all know that those microscopes and telescopes use lenses.

--Darin

Last edited by darinp; 08-07-2017 at 10:45 AM.
darinp is offline  
post #1199 of 1307 Old 08-07-2017, 10:40 AM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by Tomas2 View Post
Agreed, it's not photon interaction prior to the lens, rather an interaction within the lens itself.
To be clear, is your current position that, in the method referred to in the paper you linked to, where the original pixels go through the lens simultaneously, the image on the right has more blurring than it would if the two white pixels were sent sequentially, like e-shift:

[attached image: the two white pixels imaged through the lens, shown side by side for comparison]
Is that your position?

Do you see more blurring on the right than on the left? Is the blurring you attribute to photon interactions in the lens supposed to be visible to humans?
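Here is a small sketch of the linearity that wiki passage describes (the Gaussian PSF, grid size and pixel positions are all arbitrary assumptions; requires numpy and scipy): blurring both pixels in one pass versus blurring each alone and summing gives the same image to floating-point precision, which is why simultaneous and sequential display come out identical.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

# Two point-like "pixels" on otherwise black frames.
a = np.zeros((64, 64)); a[30, 30] = 1.0
b = np.zeros((64, 64)); b[30, 33] = 1.0

sigma = 1.5  # arbitrary stand-in for the lens PSF width

together = gaussian_filter(a + b, sigma)         # imaged simultaneously
sequential = (gaussian_filter(a, sigma)
              + gaussian_filter(b, sigma))       # sequential, summed by the eye

print(np.abs(together - sequential).max())  # ~0: no extra blurring either way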

--Darin
darinp is offline  
post #1200 of 1307 Old 08-07-2017, 10:53 AM
Advanced Member
 
AJSJones's Avatar
 
Join Date: Dec 1999
Location: Sonoma County, CA, US
Posts: 886
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 100 Post(s)
Liked: 24
Quote:
Originally Posted by Tomas2 View Post
Agreed, it's not photon interaction prior to the lens, rather an interaction within the lens itself.
So these are smart photons? They can tell when they are in glass but not in air (also a fluid)? Is the lens causing this just because it's denser? What if I put the lens in a denser fluid like water before the photons go into the lens; would they interact then?
AJSJones is offline  