Resolution requirements for lenses - Page 37 - AVS Forum

post #1081 of 1307 | 08-01-2017, 08:32 PM | Highjinx (AVS Forum Special Member)
Quote:
Originally Posted by darinp
If you understood the answer to whether you can image 2 headlights at the same time without it being a problem compared to imaging them sequentially, then you would understand why you are wrong in this post.

The individual lens elements are not aware whether the sub-frames go through the elements at the same time or not. The elements just deal with photons.

I get the feeling you think the images go through the lens elements in some ordered manner.

Here is a basic lens question:

If you take a picture of 2 white lamps, how much of the outside lens surface is used for each lamp? Put another way, for a single light source (like a lamp), do all the photons go through just one spot in the outside lens elements, or do they go through the whole lens surface?

If you don't think that photons are getting bounced off each other like bowling balls, what is it you think is happening to the photons to make the images blurrier if both sub-frames go through the lens elements at the same time? Is this where magic pixie dust comes in?

--Darin
The photons will travel uninterrupted.

It's the lens's ability to resolve the additional data density that is the issue, unless we have a perfect lens.

post #1082 of 1307 | 08-01-2017, 09:26 PM | darinp (AVS Forum Special Member)
Quote:
Originally Posted by Highjinx
The photons will travel uninterrupted.

It's the lens's ability to resolve the additional data density that is the issue, unless we have a perfect lens.
Sorry, but you are contradicting yourself.

If the photons in sub-frame A don't disturb the photons in sub-frame B when sent at the same time, and vice versa, then the math for the whole frame is very simple. The distribution of photons for the whole frame is the distribution of photons for sub-frame A plus the distribution of photons for sub-frame B, whether they go through the lens at the same time or not. Yet you claim one method will magically cause the overall images to change (with more blur).

Also, the data density for the lens is the same either way, since the data density for the lens elements is in photons, not in pixels.

If each sub-frame with eShift as we know it is 1x10^17 photons per 1/120th of a second, then if you split the light in half to send both sub-frames at the same time, the data density for the lens is still 1x10^17 photons per 1/120th of a second.

The only way you could be right is if the sub-frames interfered with each other, but you may now understand that the interference factor between the sub-frames when sent at the same time is essentially zero.

BTW: This was another post you made that you would recognize as wrong if you understood how you can take a picture of two light bulbs and have it come out the same whether they were on at the same time or not.

You are missing the basic understanding of light and lenses needed to answer the questions you are answering; you are going on instinct about how light and lenses work instead of on facts.

--Darin
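
[Editor's sketch, not from the thread: a minimal numpy illustration of the linearity argument above. The array sizes, random sub-frames, and the Gaussian stand-in for the lens PSF are all assumptions.]

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Two sub-frames as 2-D intensity (photon-count) maps.
A = rng.random((256, 256))
B = rng.random((256, 256))

def blur(img):
    # Stand-in for the lens; any linear blur kernel behaves the same way.
    return gaussian_filter(img, sigma=1.5)

# "Simultaneous": both sub-frames pass through the lens together.
simultaneous = blur(A + B)

# "Sequential": each sub-frame is blurred on its own and the eye sums them.
sequential = blur(A) + blur(B)

print(np.allclose(simultaneous, sequential))  # True: intensities just add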
post #1083 of 1307 | 08-01-2017, 09:30 PM | AJSJones (Advanced Member)
Quote:
Originally Posted by Highjinx
If one shifts and overlays image information simultaneously, one is increasing the 'Pixel Count' and thus the image unit area density, requiring a better lens that can deal with the increase.

No such increase, if the images are projected sequentially.
You keep failing to understand that the lens doesn't "see" pixels or "data density", so that argument is meaningless. The lens IS AN ANALOG DEVICE and blurs everything that goes through it, regardless of any "information content" we might perceive in it. That's why the question I posed is key to understanding the situation.
post #1084 of 1307 | 08-01-2017, 09:44 PM | Highjinx (AVS Forum Special Member)
Quote:
Originally Posted by AJSJones
You keep failing to understand that the lens doesn't "see" pixels or "data density", so that argument is meaningless. The lens IS AN ANALOG DEVICE and blurs everything that goes through it, regardless of any "information content" we might perceive in it. That's why the question I posed is key to understanding the situation.
OK, forget pixels. Let's go with line pairs and the lens's ability to resolve them. The denser the line pairs, the more resolving capability the lens needs to have. Projecting side-shifted simultaneous images is akin to increasing line-pair density.

post #1085 of 1307 | 08-01-2017, 09:49 PM | darinp (AVS Forum Special Member)
I get the feeling that some people might need to read the Ray Tracing section of the Optics 4 Kids page:

http://www.optics4kids.org/home/cont...trical-optics/

When you take a picture of a scene, photons from every point pass through all places on the outside lens element surface. So photons are already passing through each other at many angles even with a single picture.

The first lens element doesn't see the whole image laid out spatially like we see it. Every point in the image sends photons to every point on the lens face.

And a lens doesn't care whether you send a sub-frame for 1/120th of a second and then send the same sub-frame again for 1/120th of a second (like native 1080p), or whether you send 2 different sub-frames with a 1/2 pixel offset. It will blur the same amount either way spatially. And since the blur is the same, the blur size relative to a 1080p pixel is only 1/4th as big as the blur size relative to a 1080p+eShift sub-pixel.

The only way for the blur size relative to the smallest element displayed in the image to be the same between native 1080p and 1080p+eShift would be for the blur size to go to 1/4th of the size spatially when eShift is turned on.

There is that magic pixie dust again. Anybody think the spatial blur size of the lens relative to the chip size goes down when eShift is turned on?

--Darin
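
[Editor's sketch with assumed numbers: the chip width is hypothetical, and the 10% blur factor is the example figure darinp uses again in post #1088. The point is that a fixed spatial blur looks larger relative to smaller image elements.]

Code:
chip_width_mm = 13.0            # assumed active width of a 1080p chip
pixel = chip_width_mm / 1920    # 1080p pixel pitch (~0.0068 mm)
sub = pixel / 2                 # eShift half-pixel shift: elements are half
                                # the linear size, one quarter the area
blur = 0.10 * pixel             # assumed lens blur, fixed spatially

print(f"blur vs 1080p pixel:    {blur / pixel:.0%}")  # 10% of linear size
print(f"blur vs eShift element: {blur / sub:.0%}")    # 20% of linear size
# Reckoned by area instead of linear size the ratio is 4x, which is the
# sense in which the blur relative to a 1080p pixel is "1/4th as big".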
post #1086 of 1307 | 08-01-2017, 09:54 PM | darinp (AVS Forum Special Member)
Quote:
Originally Posted by Highjinx
OK, forget pixels. Let's go with line pairs and the lens's ability to resolve them.
Good idea. How do you think JVC measured the MTF for a 2700 line pattern with a 1080p+eShift machine?

Are you claiming that the lens wasn't judged on how well it could do 2700 line images?

Again, the lens doesn't care whether it has to display all 2700 lines at an instant or over time. Either way the lens has the same spatial requirements.

How would the 2700 line image that JVC measured differ if they decided to send both sub-frames at the same time instead of at separate times? Would the MTF differ between those? If so, by what force given that you now seem to understand that the interaction between the sub-frames is zero?
Quote:
Originally Posted by Highjinx
Projecting side-shifted simultaneous images is akin to increasing line-pair density.
What you don't seem to understand is that sending the sub-frames sequentially does so too. The overall images that the lens is judged by have the same line-pair density either way.

The same amount of blur from a real lens will affect the 2700 line pattern the same whether the sub-frames are sent at the same time or not.

The only way you could be right is if the lens spatial blur properties magically changed depending on when the photons were sent.

--Darin
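
[Editor's sketch: an assumed Gaussian PSF applied to an arbitrary line-pair pattern, with a hypothetical split of the bars into two "sub-frames". The measured contrast (a proxy for MTF at that line frequency) is identical whether the pattern is blurred in one pass or as the sum of two separately blurred sub-frames.]

Code:
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(4096)
pattern = ((x // 8) % 2).astype(float)   # square-wave line pairs, period 16

blurred_once = gaussian_filter1d(pattern, sigma=3.0)

# Hypothetical split: two interleaved sub-frames that sum to the pattern.
subA = pattern * ((x // 16) % 2)
subB = pattern - subA
blurred_sum = gaussian_filter1d(subA, sigma=3.0) + gaussian_filter1d(subB, sigma=3.0)

def michelson(im):
    return (im.max() - im.min()) / (im.max() + im.min())

print(michelson(blurred_once), michelson(blurred_sum))  # identical contrast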

post #1087 of 1307 | 08-01-2017, 10:17 PM | Highjinx (AVS Forum Special Member)
Quote:
Originally Posted by darinp
Good idea. How do you think JVC measured the MTF for a 2700 line pattern with a 1080p+eShift machine?

Are you claiming that the lens wasn't judged on how well it could do 2700 line images?

Again, the lens doesn't care whether it has to display all 2700 lines at an instant or over time. Either way the lens has the same spatial requirements.

How would the 2700 line image that JVC measured differ if they decided to send both sub-frames at the same time instead of at separate times? Would the MTF differ between those? If so, by what force given that you now seem to understand that the interaction between the sub-frames is zero?
What you don't seem to understand is that sending the sub-frames sequentially does so too. The overall images that the lens is judged by have the same line-pair density either way.

--Darin
Wasn't that a simulated plot?

The point is that it does not have to display 2700 lines if done sequentially, just 1920; if done simultaneously it would be 2700, or whatever the higher number than 1920 is.

The overall pseudo-composite can be judged to have a higher density, but not the individual sub-frames; they will have a 1920 limit, and the lens does not have to exceed that limit.

post #1088 of 1307 | 08-01-2017, 10:26 PM | darinp (AVS Forum Special Member)
Quote:
Originally Posted by Highjinx
Wasn't that a simulated plot?
I believe they had a simulated line for an ideal lens, but actual measurements for the actual system results.
Quote:
Originally Posted by Highjinx
The point is that it does not have to display 2700 lines if done sequentially, just 1920; if done simultaneously it would be 2700, or whatever the higher number than 1920 is.
The lens is tasked with doing 2700 lines either way and that is why JVC measured the MTF for 2700 lines. The lens doesn't care whether that is instantaneous or not. Much like a single chip DLP lens is tasked with lining up red and green pixels even though it is never tasked with doing that in an instant.

Let me go back to pixels for a second. Let's say that a lens has a 10% blur factor relative to the size of a 1080p pixel when a native 1080p image is shown. What would the lens blur factor be relative to the size of a 1080p pixel if two 1080p eShift sub-frames were sent through the lens at the same time? Are you claiming it would be something different than 10% of a 1080p pixel? If so, by what mechanism would it change from 10% and which direction would it go?

--Darin
post #1089 of 1307 | 08-01-2017, 10:30 PM | AJSJones (Advanced Member)
Quote:
Originally Posted by Highjinx
OK, forget pixels. Let's go with line pairs and the lens's ability to resolve them. The denser the line pairs, the more resolving capability the lens needs to have. Projecting side-shifted simultaneous images is akin to increasing line-pair density.
OK, let's take a line pair (one black line and the other white, or a "cycle") representing the edge of a white pixel on a black background in the image coming off a chip. I have a line pair in the composite image (the edge of a small "pixel") and I have a line pair in a sub-frame (the edge of a big pixel). Will they be blurred the same or not? The lens will not "know" whether the edge is from a composite or a separate sub-frame. Note that the line pairs we are discussing are not made by pixels*; they are far smaller than pixels, so "data density" is irrelevant again. We are talking about the "sharpness" of the pixels as perceived by the blur at their edges.

*We know this because the lenses clearly resolve the SDE grid (i.e., they do not blur it away)

post #1090 of 1307 | 08-01-2017, 10:43 PM | darinp (AVS Forum Special Member)
Highjinx,

You seem to believe that the amount the lens blurs the images will be higher if the sub-frames go through the lens at the same time than at different times.

The guys who wrote that article Tomas2 linked to modified a projector to send the same image 2 or 3 times with offset, at the same time.

Why do you think the composite image from them on the right doesn't have more blur relative to the pixel size on the left?



Your "data density" is higher on the right when both white pixels are sent through the lens at the same time, like they did for those pictures, so where is this missing blur increase you claim?

--Darin
post #1091 of 1307 | 08-01-2017, 11:21 PM | Highjinx (AVS Forum Special Member)
Consider the whole 1920x1080 pixel canvas and the increased image data density if side-shifted, overlaid and flashed simultaneously vs flashed sequentially.

Please get Rod Sterling to clear this up.
[Attached image: Capture.jpg]

post #1092 of 1307 | 08-02-2017, 06:39 AM | AJSJones (Advanced Member)
Quote:
Originally Posted by Highjinx
OK, forget pixels. Let's go with line pairs and the lens's ability to resolve them. The denser the line pairs, the more resolving capability the lens needs to have. Projecting side-shifted simultaneous images is akin to increasing line-pair density.
Quote:
Originally Posted by AJSJones
OK, let's take a line pair (one black line and the other white, or a "cycle") representing the edge of a white pixel on a black background in the image coming off a chip. We have a line pair in the composite image (the edge of a small "pixel") and we have a line pair in a sub-frame (the edge of a big pixel). Will they be blurred the same or not?
Quote:
Originally Posted by Highjinx
Consider the whole 1920x1080 pixel canvas and the increased image data density if side-shifted, overlaid and flashed simultaneously vs flashed sequentially.
So you won't even answer what the lens does to the line pairs that you suggested we consider? Sounds a bit like a cop-out, if you ask me.
post #1093 of 1307 | 08-02-2017, 07:24 AM | Tomas2 (Advanced Member)
Quote:
Originally Posted by AJSJones
That interaction they portray has nothing to do with the lens performance - it's the Fourier transform of the image of the two overlapping pixels used as a PSF - see the attachment in #1048 - no lens characteristics were used to calculate the frequency response (and its symmetry or asymmetry - sensitivity to gradients in the image's pixel array) from the PSF. It may help to consider the PSF they refer to as the Pixel Spread Function. The lens is only involved in blurring the edges of the image(s) that eventually go through the lens.
I understand it's a Fourier transform representing its frequency distribution. The example image (two diagonal pixels) has a PSF applied (lens blur) similar to a Gaussian blur in Photoshop. The associated frequency response, IMO, is apparently derived from the blurred pixel pair.

Also, Darin is right about what they are referring to as a re-shaped pixel being "non-square". They illustrated that with a picture of the composite pixel shape bolded vs the overlaid pixel in gray.

post #1094 of 1307 | 08-02-2017, 07:51 AM | AJSJones (Advanced Member)
Quote:
Originally Posted by Tomas2
I understand it's a Fourier transform representing its frequency distribution. The example image (two diagonal pixels) has a PSF applied (lens blur) similar to a Gaussian blur in Photoshop. The associated frequency response, IMO, is apparently derived from the blurred pixel pair.
So, apparently, do the single pixels from the low and high resolution cases (resolution referring there to pixel size, not lens quality). All their work addresses the performance of their pre-processing algorithm with respect to the geometry options of creating multiple images and overlay patterns to increase the apparent resolution of the image presented to the lens. That's why they are concerned about the (asymmetry of the) frequency response expected from those overlay patterns (in Fig. 5) in their simultaneous projection conditions.

We still have no comparison between simultaneous vs sequential effects on lens performance. The question I asked about line pairs, as Highjinx suggested, is the key - maybe you can check that question out?
post #1095 of 1307 | 08-02-2017, 08:14 AM | Tomas2 (Advanced Member)
We all agree that, 4K vs 2K with the imager the same size, 4K will have a higher lens requirement, right? In the case of modified eShift, where two 1920x1080 (pixel density) sub-frames are imaged simultaneously, the image will now have a pixel density of 3840x2160? Ignoring the pixel gaps, because there will be combinations where the area is even smaller than 1/4 pixel. With this hypothetical, the lens requirements are essentially the same as 4K... right?

Will follow up after you respond...thanks

post #1096 of 1307 | 08-02-2017, 08:18 AM | Tomas2 (Advanced Member)
It doesn't matter that they are not all addressable.

post #1097 of 1307 | 08-02-2017, 08:37 AM | AJSJones (Advanced Member)
Quote:
Originally Posted by Tomas2
We all agree that, 4K vs 2K with the imager the same size, 4K will have a higher lens requirement, right? In the case of modified eShift, where two 1920x1080 (pixel density) sub-frames are imaged simultaneously, the image will now have a pixel density of 3840x2160?
(I think you have been confused by Highjinx's concern about data density with respect to the action of the analog lens and how/when it blurs the edges of pixels)

The reason the 4K needs a better lens is that we can see the blurring effect (at the edges of the pixels) of a poor lens when we send the smaller pixels through it. That's not because there are more pixels but because the blur at the edges of the pixels is more obvious relative to the size of the pixels (greater relative blur), and we can see the pixels begin to merge sooner. The edges (lines, or black-to-white transitions) of the pixels, no matter what their size, will be equally blurred whenever they pass through the lens. You need to separate the limiting resolution of the pixel array itself (pixels per mm, Nyquist, the frequency-response realm the paper deals with, etc.) from the resolution of the lens (the resolution of the data array is far lower than that of the lens) and its effect on the size of transitions at the edges of the pixels. That blur at the edges is what degrades the image we see, and it depends on the lens quality no matter when the edges go through. Look back at some of my and other posts for pictures of different blur effects caused by the lens.
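
[Editor's sketch of the edge-blur point: an edge spread function with an assumed 1.0 um Gaussian PSF and assumed pixel sizes. The blurred width of a black-to-white transition depends only on the lens, not on the size of the pixel whose edge it is.]

Code:
import numpy as np
from scipy.ndimage import gaussian_filter1d

# One sample = 0.1 um. A black-to-white transition (a pixel edge).
edge = np.zeros(4000)
edge[2000:] = 1.0

esf = gaussian_filter1d(edge, sigma=10)  # PSF sigma = 1.0 um (assumed)

# 10-90% rise distance of the blurred edge, converted to micrometres.
rise_um = (np.searchsorted(esf, 0.9) - np.searchsorted(esf, 0.1)) * 0.1
print(rise_um)  # ~2.6 um, the same for every edge the lens passes

# ~2.6 um is roughly 38% of an assumed 6.8 um 1080p pixel but roughly 76%
# of a 3.4 um eShift element: equal spatial blur, very different relative blur.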
post #1098 of 1307 | 08-02-2017, 08:47 AM | Tomas2 (Advanced Member)
I'm not confused; all I asked was if the lens requirements were the same... I give up.

post #1099 of 1307 | 08-02-2017, 09:41 AM | AJSJones (Advanced Member)
Quote:
Originally Posted by Tomas2
I'm not confused; all I asked was if the lens requirements were the same... I give up.
For eShift systems, the lens requirements are the same regardless of simultaneous versus sequential sub-frames (I think that's B vs C in some earlier post), because in each case the edges of the pixels are blurred the same amount by the lens. Why wouldn't they be? Didn't we do this a couple of hundred posts ago?

post #1100 of 1307 | 08-02-2017, 10:01 AM | Tomas2 (Advanced Member)
Insert expletive here

post #1101 of 1307 | 08-02-2017, 10:18 AM | AJSJones (Advanced Member)
Quote:
Originally Posted by Tomas2
Will follow up after you respond...thanks
Quote:
Originally Posted by Tomas2
Insert expletive here
Good follow-up -

You still think they would be blurred differently? Really? (These aren't sound waves we're talking about, where you do get cancellation, etc.)
post #1102 of 1307 | 08-02-2017, 10:49 AM | darinp (AVS Forum Special Member)
Quote:
Originally Posted by Tomas2
We all agree that, 4K vs 2K with the imager the same size, 4K will have a higher lens requirement, right?
Yes.
Quote:
Originally Posted by Tomas2
In the case of modified eShift, where two 1920x1080 (pixel density) sub-frames are imaged simultaneously, the image will now have a pixel density of 3840x2160? Ignoring the pixel gaps, because there will be combinations where the area is even smaller than 1/4 pixel. With this hypothetical, the lens requirements are essentially the same as 4K... right?
My position is that, since the eShift projector can display more of the detail from a 4K source than native 1080p can on its own, but not as much of the detail across all 4K source frames as a native 4K projector, the lens requirements for 1080p+eShift are between those for native 2K and native 4K. Put another way, there is some correlation between the 1/4 size sub-pixels and the 1080p sized pixels with 1080p+eShift, but not with native 4K.

However, I do not know the exact eShift algorithm for converting 4K source frames into 2 1080p sized sub-frames.

When this discussion started, everybody agreed that if both 1080p sub-frames went through the lens at the same time then the lens requirements were higher than for native 1080p, so I didn't pursue that issue. I am open to reasoned arguments that, due to the way eShift processes the 4K source image, even sending both sub-frames at the same time doesn't have higher lens requirements than native 1080p. I doubt it, but could be convinced that we were all wrong on that subject from the beginning.

The part I'm not flexible on is whether it really matters whether the sub-frames go through the lens at the same time or at different times. That was the original disagreement over 18 months ago and the main reason I started this thread.

In that document you linked to, the PSF pictures for 4(c), 4(d), and 4(e) would have the same amount of blur even if the 2 identical sub-frames had passed through the lens at different times. The authors choosing a method that sent the sub-frames at the same time didn't increase the blur.

--Darin
post #1103 of 1307 | 08-02-2017, 11:04 AM | darinp (AVS Forum Special Member)
Quote:
Originally Posted by Highjinx
Consider the whole 1920x1080 pixel canvas and the increased image data density if side-shifted, overlaid and flashed simultaneously vs flashed sequentially.

Please get Rod Sterling to clear this up.
You need Rod Sterling to tell you that the extra blur you predicted for the pixel shapes in 4(c), 4(d), and 4(e) over 4(b) when the authors sent the sub-frames simultaneously in this:



isn't there? Why? You can't see that with your own eyes?

--Darin
post #1104 of 1307 | 08-02-2017, 11:10 AM | Tomas2 (Advanced Member)
If an eShift composite pixel is 1/4 the size... that is equivalent to 4K pixel density, right?

It does not matter if they're all addressable; the eShift lens assembly is physically propagating 4K pixel density?

post #1105 of 1307 | 08-02-2017, 11:17 AM | cmjohnson (AVS Forum Special Member)
Go back to those four words: Rayleigh Criterion/Angular Resolution.

Ultimately that's all that matters. E-shift = spatial movement = change in angle of a feature = need for increased spatial resolution capacity out of the optical system to achieve equal MTF values.

All the rest of the discussion is, frankly, a distraction and a way for people to try to either skirt the truth or alter it or try to explain it better.

The lens has no temporal functionality. Light from one frame does not interact with light from the next or previous frame.

Just as a moving picture is constructed in your brain from a series of still images (the images themselves are of limited quality and, by themselves, are not moving, only projected in sequence), the effect of e-shift is to create, when the sub-frames are temporally added together, a higher resolution picture than any single source frame. But that temporal addition occurs at the point of view.

The added information is real, but it is also fair to consider it a "processing artifact", because the added information (the difference between the two frames, shifted and non-shifted) is not and cannot be displayed as its own individual entity in this system, freed of its parent frames.
post #1106 of 1307 | 08-02-2017, 11:25 AM | darinp (AVS Forum Special Member)
Quote:
Originally Posted by Tomas2
If an eShift composite pixel is 1/4 the size... that is equivalent to 4K pixel density, right?
For an image that displays something different for that 1/4 sized element and still displays the rest of the 4K image properly, it is equivalent. However, I still think it matters that the 1/4 sized object is not completely independent. For instance, a true 4K projector can display a single white pixel surrounded by black, or at least should be able to. A 1080p+eShift projector cannot. However, it could display the red and green 1080p sized pixels with a 1/4 sized yellow pixel if the eShift algorithm allows that.
Quote:
Originally Posted by Tomas2
It does not matter if they're all addressable; the eShift lens assembly is physically propagating 4K pixel density?
This is where I think it gets complicated. The lens is propagating some 4K pixel densities, but will never see images that match many of the possibilities for 4K images.

I don't think it would be bad to have a lens capable of 4K for a 1080p+eShift projector, I just don't think the higher lens quality is as necessary.

To use an extreme example, if a native 4K chip were only fed images where 2x2 pixel blocks were identical, then the lens would only need to be good enough for 2K. eShift overlaps large pixels in a way that complicates the matter. It isn't as extreme as the 4K native chip that can only display 2K images, or as extreme as the 4K native chip that can display 4K images from the sources properly.

We can look at the 2700 line test pattern as one example. The native 4K projector can display that with white and black lines. The eShift projector uses multiple images, blended together over time, to display the 2700 line pattern, and I think it can only really do the 2700 line test pattern as white and gray, or maybe gray and black, although I haven't thought through the exact sequence of images they use to display the 2700 line test pattern. However, they do have a theoretical MTF for the 2700 line test pattern, and it is lower than what the 4K projector can do. In that 2700 line case, with the eShift projector there isn't as much detail for the lens to retain as with the 4K projector.

--Darin
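
[Editor's toy sketch of the addressability point: a 4x4 "1080p" chip rendered on a supersampled raster with a hypothetical half-pixel diagonal shift. This is an illustration of the geometry, not JVC's actual eShift algorithm.]

Code:
import numpy as np

def render(subA, subB):
    # Upsample each sub-frame 2x, shift B diagonally by one raster sample
    # (half a chip pixel), and sum, as the eye does over the two flashes.
    up = lambda f: np.kron(f, np.ones((2, 2)))
    a = np.zeros((9, 9)); a[:8, :8] = up(subA)
    b = np.zeros((9, 9)); b[1:, 1:] = up(subB)
    return a + b

subA = np.zeros((4, 4)); subA[1, 1] = 1.0  # one big pixel lit in sub-frame A
subB = np.zeros((4, 4)); subB[1, 1] = 1.0  # the overlapping pixel in sub-frame B

print(render(subA, subB))
# The overlap is a brighter quarter-area region (a "4K sized" element), but
# the lit area always spans at least a full 1080p pixel: the quarter-size
# element exists yet is never independently addressable, as argued above.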

post #1107 of 1307 | 08-02-2017, 12:07 PM | AJSJones (Advanced Member)
Just in case there was any doubt here, we all agree (I think) that lens requirements increase as we move up: 1080p < 1080p + e-shift < 2.7K + e-shift (DMD chip) < 4K discrete (assuming similar chip size and viewing position for simplicity of discussion), because the "details" we see in those images get finer and finer, the edges of each pixel being well defined for their size. Let's say we found, from our broad array of lenses, one that just kept the details of a 1080p source discrete (i.e., one where a slightly worse lens would be detectable to our audience). If we used that same lens for each chip, we would see that the 1080p + e-shift suffers some loss of its possible increase in detail and the 4K would lose much of its advantage, all due to too much edge blur, softening, detail overlap, etc. So far so good?

I'll post the rest as a follow-up if everyone agrees
post #1108 of 1307 | 08-02-2017, 12:21 PM | Tomas2 (Advanced Member)
I understand the limitations of the algorithm, but the debate has focused on the potential for 1/4 sized elements.

Now, on to the possible interaction within the lens with simultaneous projection. According to Mr Jones the lens is an "analog" device. Not really; the entire chain:

input > absorption > re-emission > absorption > re-emission > ... > exit

consists of quantum events. There's always an advantage to having information in component form vs mixing the information with, say, 90-degree isolation. Also, in this case the light is not completely incoherent, being filtered RGB. Many times like-colored pixels will overlap (a lot, in fact).

But let's say, for example, they (the pixels) are all 100% incoherent. In the original famous double-slit experiment, raw sunlight was used and the first slit was there to create a point source for the second pair of slits. There was an interference pattern produced. Now include a less-than-ideal lens and there should be some additional level of scatter/interaction. Based on the impurities alone, two simultaneously aligned point sources should interact.

The wiki-based PSF reference was highlighting two discrete point sources with separation between them and, IMO, was qualifying that the bundles of photons (or waves) from each source to the destination (the lens) would not interact en route. Put another way, the point A and point B PSFs will not crosstalk in the lens.

post #1109 of 1307 | 08-02-2017, 12:34 PM | Tomas2 (Advanced Member)
"...The experiment was conducted by Thomas Young and is known as Young's Double Slit Experiment. ... In a completely darkened room, Young allowed a thin beam of sunlight to pass through an aperture on his window and onto two narrow, closely spaced openings (the double slit)."

post #1110 of 1307 | 08-02-2017, 12:41 PM | AJSJones (Advanced Member)
Quote:
Originally Posted by Tomas2
I understand the limitations of the algorithm, but the debate has focused on the potential for 1/4 sized elements.

Now, on to the possible interaction within the lens with simultaneous projection. According to Mr Jones the lens is an "analog" device. Not really; the entire chain:

input > absorption > re-emission > absorption > re-emission > ... > exit

consists of quantum events. There's always an advantage to having information in component form vs mixing the information with, say, 90-degree isolation. Also, in this case the light is not completely incoherent, being filtered RGB. Many times like-colored pixels will overlap (a lot, in fact).

But let's say, for example, they (the pixels) are all 100% incoherent. In the original famous double-slit experiment, raw sunlight was used and the first slit was there to create a point source for the second pair of slits. There was an interference pattern produced. Now include a less-than-ideal lens and there should be some additional level of scatter/interaction. Based on the impurities alone, two simultaneously aligned point sources should interact.

The wiki-based PSF reference was highlighting two discrete point sources with separation between them and, IMO, was qualifying that the bundles of photons (or waves) from each source to the destination (the lens) would not interact en route. Put another way, the point A and point B PSFs will not crosstalk in the lens.
Oh boy - out into territory unfamiliar to you. The double-slit experiment shows that the light interacts with the slits - it is the basis for the diffraction problem: if a beam of light goes through an aperture, its resolution is limited as a result of that interaction. Go look up the Airy disk.
Your absorption/re-emission point is only "digital" if you think of photons as particles - it is useless and not digital if you consider electromagnetic waves. The only "photons" that are relevant in the re-emission are those forward-scattered and in phase with the absorbed one (after 10^-8 sec or so), so the ray only continues in a straight line - and that is how physics and optics treat it. It has no bearing on (non-existent) incoherent ray interactions in a lens. Please go and read about this duality in a textbook somewhere and see if you can find any text that says such rays interact with each other when passing through a lens. We've been asking someone to find such evidence - perhaps there's a Nobel prize in it for you. If this is your basis for saying there is degradation when the two sub-frames go through together and that it doesn't happen when they go through separately, then I can understand why you persist, but I can assure you it is not correct.
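
[Editor's sketch of incoherent superposition: two sources modeled with a uniformly random relative phase; the amplitudes are arbitrary. The time-averaged intensity is the sum of the individual intensities, since the interference cross term averages to zero, which is the sense in which incoherent sub-frames cannot degrade each other in the lens.]

Code:
import numpy as np

rng = np.random.default_rng(1)
E1, E2 = 1.0, 0.7                          # field amplitudes (arbitrary)
phi = rng.uniform(0, 2 * np.pi, 100_000)   # random relative phase samples

# Instantaneous combined intensity, averaged over the random phase.
I_avg = np.mean(np.abs(E1 + E2 * np.exp(1j * phi)) ** 2)

print(I_avg, E1**2 + E2**2)  # ~1.49 vs 1.49: the cross term averages away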
