Resolution requirements for lenses - AVS Forum
post #1 of 1307 - 11-18-2015, 01:21 AM - darinp2 (Thread Starter)
Resolution requirements for lenses

In one of the JVC threads there was a discussion about whether better lenses are required with e-shift on than off to show all the detail the rest of the projector is producing, much like 4K panels requiring better lenses than 2K panels to resolve their full resolution, all else being equal. Just to be clear, these are entertainment devices and no level of quality is really required, but hopefully people get the idea of how "required" is being used in this case.

One camp, which at least at first I would say was strongly in the majority, felt that lenses only need to be good enough for the resolution of one of the e-shift sub-frames, and not for the finer elements of the full image frames that combine 2 e-shift sub-frames, since those sub-frames don't pass through the lens at the same time.

I thought there was some good discussion and progress, but some readers would still be left misinformed.

So, I came up with a demonstration, although I am limited in frame rate, so hopefully people understand enough about how high speed video works: the same image can be flashed multiple times without human vision ever realizing it was off, since human vision has strong persistence. That is, hopefully people understand enough to know that if I could play the gif I attached at 240Hz, both sides would look completely solid, like the third image I attached.

Since human eyes have lenses we can use images like these to see what happens with our eyes at different resolving powers (which we can change by moving closer and further away or by putting glasses on and taking them off).

I've attached 3 bmps. The first 2 are for images that are to be sent through a lens, alternating at high frequency.

We can consider how much resolution is required in order to be able to detect that the first one is an E and the second one a 3. I expect that people can move quite a distance from their screen and still be able to see these as an E and a 3.

The third attachment is the image I really want to project; I just happen to be displaying it half at a time for this example.

For this 3rd image what does it take for a viewer to realize that these are 2 separate characters and not connected lines? People can try moving further away from their screen and see if at some point they can still detect the large black blocks inside the E and 3, but now see the 2 sides as connected. That is, the larger detail from each image is still present, but the detail for the entire image is not being resolved properly.

Here is a low speed gif showing a slow motion version of what the theoretical display puts out. I have an mp4 that goes faster, but I don't know of any way to show these sides toggling at 120Hz or 240Hz.

[slow motion E/3 gif]

Does anybody still believe that if a lens is good enough to show the E and good enough to show the 3 this means it is good enough to show the full image of those next to each other properly as long as the E and 3 never go through the lens at the same time?

I view this as like test questions in school where the teacher would include irrelevant information and part of the class would change their answer based on it. In this case the irrelevant information is whether the left and right sides are shown at the same time or at different times, when each is being shown at a higher frequency than the viewing system can perceive.

If people don't want to consider human vision they can consider what a camera would capture when taking a picture with a long exposure and how much smearing of the E and the 3 are allowed before a picture taken in such a way would show them as one object instead of 2, then how that amount of smearing compares with how much the E can be smeared when shown by itself and still be recognized as an E.
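
For anyone who wants to check this numerically rather than visually, here is a minimal sketch of the long-exposure idea (my own toy rectangles stand in for the E and the 3, and a Gaussian blur stands in for the lens; assumes Python with numpy and scipy installed):

Code:
# Two shapes with a gap between them, each shown half the time, blurred by
# a "lens" modeled as a Gaussian. Toy stand-ins, not the attached bmps.
import numpy as np
from scipy.ndimage import gaussian_filter

left = np.zeros((100, 200));  left[20:80, 30:95] = 1.0     # the "E" field
right = np.zeros((100, 200)); right[20:80, 105:170] = 1.0  # the "3" field
# the 10-cell gap between columns 95 and 105 is supposed to stay black

sigma = 6.0   # mild enough that each shape alone is still obvious
fused = 0.5*gaussian_filter(left, sigma) + 0.5*gaussian_filter(right, sigma)
# 0.5 weights: each field is lit half the frame (long exposure / eye at 240Hz)

gap, body = fused[50, 100], fused[50, 60]
print(f"gap is {gap/body:.0%} as bright as the shape interiors")  # about 40%

Even though the two shapes never go through the lens at the same moment, the time-averaged gap is lifted well above black; shrink sigma and it stays dark.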

--Darin
Attached: e3Short30secSmall.gif (390.1 KB); eshiftE.bmp, eshift3.bmp, eshiftE3.bmp (443.6 KB each)

post #2 of 1307 - 11-18-2015, 01:31 AM - darinp2 (Thread Starter)
Reserved for copying some things from the other thread.

post #3 of 1307 - 11-18-2015, 07:14 AM - cischico
Since the speed of e-shift is a constant, that is one variable we don't have to worry about, so human vision, lens quality, and content are the only three factors of real consideration. The better the quality of the lens, the farther away you can still see the difference, with significant diminishing returns.

Our vision is what creates the faux 4K image, and if the lens can resolve 1080p cleanly the 4K image will appear cleaner, though at a certain point how well it resolves 1080p becomes less relevant as our distance from it increases.

I can cleanly see the pixels on a VW1100ES and I can cleanly see most pixels on the 5XX/6XXES, but as I watch content the lens resolution doesn't matter as much anymore. Still images such as text and test patterns allow our vision to focus on and spot differences between lens qualities, as long as our own vision is able to resolve the details at the distance you're observing from.

So like an e-shifted pixel it's not exactly black and white (though it literally is), it's grey.
 
post #4 of 1307 - 11-18-2015, 01:22 PM - rak306
Simulated eshift

As I said in a PM to you on 11/15, ...
... how good a lens is necessary for a given resolution? IMO, this is very much at the heart of the issue. Put aside the eshift for the moment.

Take a projector with a given resolution, and use a "perfect lens" (i.e. one with flat MTF over the spatial frequencies given by the projector's pixel spacing). Move the viewer to a distance just beyond where you can see the pixels (i.e. you can't see any pixel structure). This will result in the best perceptible image. Now change to real lenses. The MTF typically is not flat, but peaks and then falls off more and more at higher spatial frequencies. As you go to lenses with MTF that falls off sooner, the image will degrade, but exactly where the line is for "acceptable" is debatable. The lens in the Sony 600 is by all accounts lesser than the lens in the Sony 1100. But both are 4K lenses. Could a lens lesser than the Sony 600's be considered good enough for 4K? Probably.
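
To put rough numbers on that falloff, here is a toy model (illustrative sigmas, not measurements of any real lens): treat the lens PSF as a Gaussian with standard deviation sigma in panel pixels, so the MTF at f cycles/pixel is exp(-2*pi^2*sigma^2*f^2).

Code:
import numpy as np

def gaussian_mtf(f_cy_per_px, sigma_px):
    # MTF of a Gaussian PSF with std sigma_px, at frequency f cycles/pixel
    return np.exp(-2 * np.pi**2 * sigma_px**2 * f_cy_per_px**2)

for sigma in (0.2, 0.5, 1.0):   # sharper -> softer lens
    print(f"sigma={sigma:.1f}px  MTF@0.25cy/px={gaussian_mtf(0.25, sigma):.2f}"
          f"  MTF@0.50cy/px={gaussian_mtf(0.50, sigma):.2f}")

The sharp model keeps about 82% contrast at the panel Nyquist frequency (0.5 cy/px) while the soft one keeps essentially none, which is the "falls off sooner" case above.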

----------------------------

So now I will give you my thought experiment.

Take the JVC with eshift, and use 3 sets of lenses.

Lens 1 is poor - with a single-pixel-wide checkerboard test pattern, there is some blurring between adjacent pixels. You can see the checkerboard pattern, but it looks more like a sine wave than a square wave.

Lens 2 is the standard JVC lens, which shows some detail within a pixel. The checkerboard would look square-wave-like with only a little blurring.

Lens 3 is a near perfect lens. The checkerboard would look precisely square, with no blurring, and the ability to see fine detail structure within the pixel.

1) I believe you would see resolution improvement with the better lenses, regardless of eshift position (no surprise here).
i.e.
-------- eshift off, picture resolution order: lens 3 > lens 2 > lens 1
-------- eshift on, picture resolution order: lens 3 > lens 2 > lens 1

2) Most importantly, I also believe that with all 3 lenses, you would see improvement with eshift on vs off.
i.e.

Lens 1: eshift on > eshift off
Lens 2: eshift on > eshift off
Lens 3: eshift on > eshift off

Which of these is better: "Lens 1 eshift on" or "Lens 3 eshift off"?
I don't know, but I would suggest that could be the criterion for the necessary lens quality for a given resolution.

----------------end of PM -------

I have made a simulation of eshift. Fig 1 shows an input image that contains only 1 (sine wave) spatial frequency (but a different frequency in each direction). It is sampled at over 60 times the Nyquist rate. So keep in mind - given it's only 1 frequency, it is supposed to be fuzzy.

[Fig 1]

Figure 2 shows the same image, but with a 30x lower sample rate. At this sample rate, the sine wave is 0.24 cycles/pixel (in the vertical direction).

[Fig 2]

Note: the hard "sharp" edges you see are interference, not part of the original image. You should not see them if you are trying to be faithful to the original image. And this is only 0.24 cycles/pixel. Theory says we should be able to represent a frequency of up to 0.5 cycles/pixel.

I made 2 sub-images from the pixels of Fig 2; each has 1/2 the number of pixels, and the 2 images are offset by 1 pixel horizontally and vertically. These are shown in Figs 3 and 4 below:

[Figs 3 and 4]

Fig 7 shows the sub-images of Figs 3 and 4 up-sampled by 2 (pixel replicated 2x2), offset from each other by 1 pixel in both X and Y, then added. This represents my simulation of the eshift process. The image of Fig 7 is not as good as Fig 2, but it is much better than the sub-images.

[Fig 7]

Now let's take the 2 sub-images of Figs 3 and 4 and low-pass filter them, representing a less sharp lens. These are shown in Figs 12 and 13.

[Figs 12 and 13]

Now look at Fig 14, which is the sum of Figs 12 and 13, and represents (my take on) the eshift process through a lower quality lens.

[Fig 14 - Eshift simulated result through a lower quality lens]

Fig 15 shows Fig 7 and Fig 14 side by side to allow direct comparison.

[Fig 15]

Left image: My "eshift" simulation with "perfect" lens. Right image: Filtered with a cut off at 0.5 cycles/pixel.

Stand back at a distance where the left image shows no sign of jaggies. At this distance (to my eyes) both look similar, with the one on the left having a bit more contrast.

So from this, I conclude that while a better lens is better for the picture, even a lesser lens can show the increased detail using 2 shifted lower resolution images.

Caveats:

1) I only used a test signal of 0.24 cy/pixel. This signal should be representable in the lower sample rate sub-image. I really should use a higher frequency, e.g. 0.42. However, at that high a frequency, the signal does not look much like the original even without simulating eshift.

2) My simulation of eshift is simplistic, and undoubtedly not what JVC does, but I do think it is adequate to illustrate the point.
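
For anyone who wants to reproduce something like this, here is a rough numpy re-creation of the pipeline described above (my own parameters and a quincunx split, so it will not match the figures exactly; assumes numpy and scipy):

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

N = 256
y, x = np.mgrid[0:N, 0:N]
# oversampled input; 0.12 cy/fine-pixel = 0.24 cy/coarse-pixel vertically
fine = 0.5 + 0.5*np.sin(2*np.pi*0.12*y)*np.sin(2*np.pi*0.06*x)

# two coarse fields sampled on grids offset diagonally by one fine pixel,
# each pixel-replicated 2x2 ("up-sampled"), the second rolled by (1, 1)
field_a = np.kron(fine[0::2, 0::2], np.ones((2, 2)))
field_b = np.roll(np.kron(fine[1::2, 1::2], np.ones((2, 2))), (1, 1), (0, 1))

eshift = 0.5*(field_a + field_b)   # what the eye integrates over a frame

# less sharp lens: low-pass each field, then sum (as in Figs 12-14)
sigma = 1.5
soft = 0.5*(gaussian_filter(field_a, sigma) + gaussian_filter(field_b, sigma))
# by linearity this is identical to low-passing the fused image directly
assert np.allclose(soft, gaussian_filter(eshift, sigma))
print("sharp contrast:", round(eshift.max() - eshift.min(), 3),
      " soft contrast:", round(soft.max() - soft.min(), 3))

The assert is worth noticing: filtering the two sub-images and then summing is, by linearity, identical to filtering the fused image, which is really the crux of this whole thread.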

post #5 of 1307 - 11-18-2015, 01:26 PM - darinp2 (Thread Starter)
Anybody willing to weigh in on what they think would happen if the following gif was played at 240Hz on a scanning projector (say a fast phosphor CRT or a scanning laser), with respect to how clear the lens would have to be for a person to fairly accurately describe the image(s) being displayed? To be clear, the white blocks do not represent single pixels and the black lines are more than a pixel wide. I am referring to playing this gif at the size it is.

[fourblocks gif]

Would the lens requirements change as the video is playing at 240Hz between the portion of the video where each block is displayed one at a time and the portion of the video where all 4 blocks are shown at once?

One thing to keep in mind is that with the projector types I mentioned (CRT and scanning laser) none of those blocks ever go through the lens at the same moment in time. Even the parts with single blocks have to be painted over a finite period of time. So, if somebody wants to claim that the lens only has to resolve whatever is going through it at a single moment in time, that cannot be a whole white block in this case.


--Darin
Attached: fourblocks.gif (29.4 KB)

post #6 of 1307 - 11-18-2015, 02:26 PM - rak306
Quote:
Originally Posted by darinp2 View Post
Anybody willing to weigh in on what they think would happen if the following gif was played at 240Hz on a scanning projector (say fast phosphor CRT or scanning laser), with respect to how clear it would have to be in order for a person to be able to fairly accurately describe the image(s) being displayed?


--Darin
I think this whole temporal human perception thing is a red herring, and not related to the issue. As to your question, once you get the speed fast enough, you will see one image, and the details will be no different than if you used a camera with an exposure time long enough to see all of the images. And if you used a less sharp lens, the sharp edges would be less sharp.

But I also believe you can get extra detail using an eshift mechanism, even if the sub-images have fuzzy pixels.

Let's look at an inkjet printer. (For the sake of argument, it's in a backlit display, so the colors add, not subtract.)

1) Say the inkjet printer has a 0.004 in dot size, and prints 250 dots/inch (the dots just fit side by side).

2) The source image is over-sampled for the size of the print, e.g. 10 inches wide (printed as 2500 dots) with the image 10000 pixels wide.

3) Now here is a question: what if I make 2 print passes on the vellum? The first is standard, just like I would do normally, and the second pass is with the print head shifted 0.002 in left and 0.002 in down, using an appropriately generated different set of dots. Would the 2-pass printed image be improved over just the first-pass image?

I think yes, even though it would not be as good as an inkjet printer with a 0.002 in dot size capable of 500 dots/in.
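
A quick 1-D toy of the two-pass idea (the 0.002 in grid step and the 2-cell dot follow the numbers above; the target pattern itself is made up, and the passes are averaged so the scales match):

Code:
import numpy as np

target = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0], dtype=float)

def print_pass(starts, dot=2):
    out = np.zeros_like(target)
    for i in starts:             # each fired dot covers `dot` cells
        out[i:i+dot] += target[i]
    return out

pass1 = print_pass(range(0, len(target), 2))   # plain 250 dpi pass
pass2 = print_pass(range(1, len(target), 2))   # head shifted one cell
two_pass = 0.5*(pass1 + pass2)

err = lambda a: float(np.sum((a - target)**2))
print("sq err, 1 pass:", err(pass1), " 2 passes:", err(two_pass))

The second pass doesn't reach the fidelity of a true 500 dpi head, but the error against the target drops, which is the printed analog of eshift.
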
rak306 is offline  
post #7 of 1307 - 11-18-2015, 05:23 PM - Dr.Evazan
Coming from a background in cinema production, I will say that if the old 1080p JVC lenses were not capable of resolving UHD then yes, there will be a definite benefit to better lenses. The fact that eshift only sends out one half of the image at a time is ALMOST irrelevant. Most digital cinema cameras still only capture one line of pixels at a time, going down row by row, and yet they still need very good lenses to capture 4K. (The Red Epic uses this method, and it is the camera Peter Jackson used to shoot The Hobbit.)

I think it's very likely that the old JVC lenses are capable of resolving UHD in the center but not so much in the corners, and that would also change at different zoom ratios.

Considering that one must spend around $15k to get a 24mm Zeiss Ultra Prime capable of resolving the full potential of 4K out to the corners, I doubt the ($11,999) DLA-X900R's ZOOM lens is capable of squeezing every bit out of 4K. However, keep in mind one can buy a $1,800 Canon 24-70 2.8L and get 90% of the benefit of 4K.

I know I'm comparing camera lenses to projector lenses, but I think the comparison is warranted, unless there's something about projector lenses that makes them considerably cheaper to manufacture at high quality.
post #8 of 1307 - 11-18-2015, 06:10 PM - R Harkness
When I was first thinking this through, the idea that the JVC lens only needed to resolve 1080p for e-shifting made intuitive sense. But the more I thought about it, and read Darin, the more his case started to make sense as I tried to visualize my own thought experiments. With the pattern Darin has just presented it finally "clicks" and I get it. Thanks.
post #9 of 1307 - 11-19-2015, 01:11 AM - Highjinx
I would say/think this is more of an issue where the throw is short and the beam spot (on the lens) is large, but possibly not a consideration if the throw is long and the beam spot (on the lens) is small, as the 1/4-pixel shift will still be within the sweet spot of the lens?

post #10 of 1307 - 11-19-2015, 07:03 AM - ader42
Aren't image 1 and 2 only using a quarter of the lens and image 3 using all the lens?

When the first eshift projector came out (the X70), no one (not even JVC) said the lenses had been improved from the previous non-eshift generation. And didn't the non-eshift X30 use the same lens?

What do you think would happen if someone put an X30 lens in an X70? I suspect nothing would change. The 1080p X30 lens would be fine for eshift.
post #11 of 1307 - 11-19-2015, 07:12 AM - ader42
By images 1, 2 & 3 I in fact meant that images 1, 2, 3 & 4 use a quarter of the lens area and image 5 uses all the lens. So image 5 is the sum of 1 to 4.

E.g. your original images 1 & 2 have half the number of pixels, so only use half the lens?
post #12 of 1307 - 11-19-2015, 07:13 AM - darinp2 (Thread Starter)
Quote:
Originally Posted by ader42 View Post
Aren't image 1 and 2 only using a quarter of the lens and image 3 using all the lens?
Or half and full depending on how you want to look at it. What relevance do you think that has?

If you are only using half the lens then that half still needs to handle the light going through it. The point here is that the lens requirements for a high speed video are not just the requirements for moments in time. I understand that people have a gut feeling that they are, but gut feelings are often wrong. Mother Nature doesn't care about our gut feelings.
Quote:
What do you think would happen if someone put an x30 lens in an x70? I suspect nothing would change. The 1080p x30 lens would be fine for eshift.
I don't believe anybody said otherwise. Just like the Marantz lens they used on their 720p DLPs may have been fine for 1080p, just at a different scale. A screen can be good enough for both 1080p and 4K also.

Just because something requires a better lens for the same signal-to-noise ratio doesn't mean the old lens wasn't up to more than was required of it before.

Also, just because a lens is good enough that doesn't mean the images couldn't get even better with a better lens. It is diminishing returns though.

--Darin

post #13 of 1307 - 11-19-2015, 09:36 AM - ader42
Darin, I thought your argument was that you need a better lens for eshift? Now you imply a 1080p x30 lens is good enough?

I can't see how a better lens is needed to send a 1080p stream at twice the speed, and that is what eshift does, albeit with every other half-frame positioned slightly diagonally. I don't see the lens requirements being different from a 120Hz static image: it's never more than 1080p of data at once.

I'm happy to accept that I have major flaws or gaps in my understanding though.

I've never said a better lens won't provide a better image; of course it will. The question is whether eshift will work with a good 1080p lens, and whether eshift of 4K will allow a pseudo-8K image (assuming the lens is good enough for real 4K).
post #14 of 1307 - 11-19-2015, 01:16 PM - darinp2 (Thread Starter)
Quote:
Originally Posted by ader42 View Post
Darin, I thought your argument was that you need a better lens for eshift? Now you imply a 1080p x30 lens is good enough?
And it might be good enough for 4K. Just because you need a better lens for higher resolution doesn't mean that a lens shipped on a lower resolution projector isn't good enough for higher resolution chips.

As an example, if we were to grade lenses based on resolution with a score of 0 to 100 and decide that for a certain level of "good enough" we need an 80 for 1080p and 90 for 4k, then I would say the 4k e-shift requires something between there for the same level of "good enough", like maybe 83. That doesn't mean that an 80 lens vs an 83 lens would look dramatically different on a 4K e-shift projector, but there is some finer detail with 4k e-shift. Not as much as true 4k overall, but more than straight 1080p.

Same thing going to 8K e-shift and 8K. If we decide 8K requires a lens that is a 95 then maybe 8K e-shift requires a 92. Again, as I mentioned, nothing is actually required, but we are talking about retaining detail that is being presented to the lens (even though it isn't all presented at the same point in time).
Quote:
Originally Posted by ader42 View Post
I can't see how a better lens is needed to send a 1080p stream at twice the speed and that is what eshift does albeit every other half frame is positioned slightly diagonal. I don't see the lens requirements being different to a 120hz static image : it's never more than 1080p data at once.
It isn't more than 1080p of data at once, but the fallacy is thinking that this is relevant. It doesn't matter how much is shown at once; it matters what you perceive as being shown at once.

Would you say that all 3 of these pictures require the same resolution to show by themselves?

[gray block images attached below]

What if the first 2 represent 4 gray pixels in 4K space (1 pixel in 1080p space) and the last picture represents 7 blocks? Would you need me to tell you whether the 3rd picture was being displayed with 4K panels or with an e-shift projector with 1080p panels before you could tell me what the lens requirements would be to show that image to a human viewer with enough accuracy to have them figure out what the picture is?

--Darin
Attached: grayBlock1.jpg, grayBlock2.jpg, grayBlocks.jpg

post #15 of 1307 - 11-19-2015, 02:58 PM - ader42
Yes I would need to know if it was 4k or eshift.

Because image 3 is never "shown" through the lens.

Image 1 is shown and then image 2 is shown.

Image 3 only exists in the viewers brain.

It's never even on screen at a single point in time.
post #16 of 1307 - 11-19-2015, 03:50 PM - darinp2 (Thread Starter)
Quote:
Originally Posted by ader42 View Post
Yes I would need to know if it was 4k or eshift.

Because image 3 is never "shown" through the lens.

Image 1 is shown and then image 2 is shown.

Image 3 only exists in the viewers brain.

It's never even on screen at a single point in time.
It makes no difference. Flash those first 2 at 120Hz each and a person couldn't even tell you when things were lit up and when they weren't.

How is it you think a poor lens degrades images spatially? It smears them out, and that smearing shows in the photons your eye receives regardless of when they arrive during the frame time.

Let's go back to the E3 demo in the first post. Do you think that if a lens smeared the E and the 3 out, but by a small enough amount that you could still see the E and the 3, it would mean you could still see that they were 2 separate characters when played at 240Hz, but not when displayed the way your current display does them?

Your eye doesn't know when the photons hit it on this kind of timescale, and that is true whether the smearing happens when both things are displayed or when just one is displayed.

I can understand how people's original intuition was that it mattered whether they went through at the same time, but I have a hard time understanding why people think a lens smearing the photons for one of the blocks at one moment is different from smearing them when both blocks are displayed together. The photons don't interact with each other.

I guess I could make some pictures of what those gray blocks look like with a poor lens, but it seems like people should be able to picture that.

If a lens scatters the light from a scanning laser, it doesn't matter that only a small amount of light goes through the lens at any one time. What matters is the scattering of all the light for a whole frame; whether pixels are put up 1/10th at a time, 1/2 at a time, a whole pixel at a time, 5 pixels at a time, ..., or all pixels at once is irrelevant.
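
"The photons don't interact" is just linearity: blurring each sub-frame and summing gives exactly the same long-exposure result as blurring the combined frame. A short check, with a Gaussian blur standing in for the lens (any linear blur behaves the same):

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
A, B = rng.random((64, 64)), rng.random((64, 64))   # any two sub-frames

separately = gaussian_filter(A, 2.0) + gaussian_filter(B, 2.0)
together = gaussian_filter(A + B, 2.0)
print(np.allclose(separately, together))   # True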

--Darin

post #17 of 1307 - 11-20-2015, 12:37 AM - darinp2 (Thread Starter)
Quote:
Originally Posted by ader42 View Post
Because image 3 is never "shown" through the lens.

Image 1 is shown and then image 2 is shown.

Image 3 only exists in the viewers brain.

It's never even on screen at a single point in time.
What is it you think happens with the contamination light from the poor lens when displaying one of the frames? I'm guessing you understand that when we talk about how good projector lenses need to be, we are talking about for human vision and not for high speed cameras.

Going back to the E3 example in the first post in this thread, what would happen if the lens smeared a lot of light from the E all around the E, then smeared a lot of light from the 3 all around the 3, and this was played at, say, 240Hz? Do you think a human would be able to tell that the full image was made up of an E and a 3?

As an example, let's say that the light put out is at such an intensity that if one character were shown constantly, the white would be 40 cd/m2. We will just have the display show each character for half the time, so the luminance will be 20 cd/m2.

Now let's say the lens is pretty poor and it creates a halo around each of these large letters that is about half that bright and extends out as shown in this gif.

[e3WSmear gif]

With each letter shown individually, can you tell what they are? If so, then this lens is good enough for those letters. Now, if that gif were played at 240Hz, do you think you would be able to tell from any reasonable distance that the image was made up of 2 characters and wasn't just a white block with 2 black blocks?

If the black gap between the E and the 3 on the chip face is smeared by the lens at half the luminance of the letters, and the smear extends all the way between the characters, then the characters will be 20 cd/m2 (from being displayed only half the time) and the parts that are supposed to be black will also be 20 cd/m2, because they receive halo light 100% of the time. That is, half the luminance for twice as long gets back to the same luminance.

Put another way: given that if the lens smears the E and the 3 too far into each other that little black gap becomes as bright as the letters themselves, how could a lens creating that much of a problem be considered adequate just because it was adequate enough to see the E and the 3 when slowed down enough (which is of course not what human vision sees at 240Hz or 120Hz)?
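
Worked through as a snippet, using only the numbers stated in this example (40 cd/m2 letters, each lit for half the frame, halo at half the letter level):

Code:
letter_on = 40.0                # cd/m^2 while a letter's field is displayed
letter_avg = letter_on * 0.5    # each letter is lit half the frame -> 20
halo_on = letter_on / 2.0       # halo level in the gap while either field is up
gap_avg = halo_on * 1.0         # the gap gets halo light during BOTH fields -> 20
print(letter_avg, gap_avg)      # 20.0 20.0: the "black" gap matches the letters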

Does that make sense?


In reality the lens smearing would be fuzzier and not blocky like my example, but I don't have a program handy for making things fuzzier, while it is pretty easy to show a sharp gray "halo" using Microsoft Paint. The idea is the same either way. The concept to grasp is that detail can be finer in a full frame than in a sub-frame (which it clearly is in my E3 example), so retaining the detail in individual sub-frames most definitely does not tell you how good a lens has to be to show all the detail in full frames to humans.


--Darin
Attached: e3WSmear.gif (13.0 KB)

post #18 of 1307 - 11-20-2015, 02:50 AM - ader42
Darin, it feels like your stance is that a better-than-1080p lens is needed for eshift, but that you're trying to make the case by illustrating that a sub-1080p lens is not good enough. Not the same thing.

I'm happy to accept that I'm not bright enough to understand your explanations though.

I'm a computer programmer who thinks in terms of bandwidth e.g. x amount of data over n amount of time.

I guess I am saying that I think a 1080p lens is good enough for 2x eshift (4K) if:

1) the data rate is doubled again (240Hz) and
2) four 1080p images are used (one for each corner) and
3) the lens or image is shifted accordingly for each sub-frame

or that it will be good enough for 8K if 16 x 1080p images are used at 960Hz and the lens or images are shifted appropriately.

I feel that eshift doubles the data rate (2x 1080p) in place of doubling the data (1x 1620).

Maybe we need to ignore the screen and human perception of the image and purely concentrate on what happens in the lens if that is where my understanding falls apart?
post #19 of 1307 - 11-20-2015, 06:03 AM - Mike Garrett
Quote:
Originally Posted by darinp2 View Post
And it might be good enough for 4k. Just because you need a better lens for higher resolution doesn't mean that a lens put on a lower resolution isn't good enough for higher resolution chips.

As an example, if we were to grade lenses based on resolution with a score of 0 to 100 and decide that for a certain level of "good enough" we need an 80 for 1080p and 90 for 4k, then I would say the 4k e-shift requires something between there for the same level of "good enough", like maybe 83. That doesn't mean that an 80 lens vs an 83 lens would look dramatically different on a 4K e-shift projector, but there is some finer detail with 4k e-shift. Not as much as true 4k overall, but more than straight 1080p.

Same thing going to 8k e-shift and 8k. If we decide 8k requires a lens that is a 95 then maybe an 8k e-shift requires 92. Again, as I mentioned nothing is actually required, but we are talking about retaining detail that is being presented to the lens (but isn't necessary to be presented all at the same point in time).
It isn't more than 1080p data at once, but the fallacy is that this is relevant. It doesn't matter how much is shown at once, it matters what you perceive as being shown at once.

Would you say that all 3 of these pictures require the same resolution to show by themselves?

[gray block images]

What if the first 2 represent 4 gray pixels in 4K space (1 pixel in 1080p space) and the last picture represents 7 blocks? Would you need me to tell you whether the 3rd picture was being displayed with 4K panels or with an e-shift projector with 1080p panels before you could tell me what the lens requirements would be to show that image to a human viewer with enough accuracy to have them figure out what the picture is?

--Darin
That is a nice, simplified explanation and should make it easy to understand. The last image is just that: an image. It could have been created by e-shift or it could have been created by a 4K projector, but it shows the same thing. So looking at the image by itself, forgetting how it was produced, you have to admit it requires higher resolution.
post #20 of 1307 - 11-20-2015, 06:48 AM - stanger89
Quote:
Originally Posted by ader42 View Post
Darin it feels like your stance is that a better than 1080p lens is needed for eshift but that you're trying to make a case for that by illustrating that a sub 1080p lens is not good enough. Not the same thing.
You guys are arguing two different things.

If I could try to summarize Darin's point:

Despite the fact that e-Shift only sends "1080p" images through the lens at any one time, it requires a lens capable of resolving at least 1/2-pixel detail, since it needs to accurately pass pixels that are offset by half a pixel. Arguably this means that e-Shift has the same lens requirements as native 4K, because both produce information at the same scale (1/2 of the 1080p pixel scale). It doesn't matter that half of that information is provided in one moment and the other half in another; there is still important information that must be retained at the 1/2 1080p-pixel scale.



It seems a lot of other folks are latching onto the e-Shift part and interpreting this as saying the lenses on the current JVCs aren't good enough for e-Shift, which is not what is being argued.



I will repeat myself and say again that I think the issue is there's no definition of, or agreement on, what it means to be a "1080p lens". Most high-end 1080p projectors these days have lenses capable of resolving the gaps between pixels. Those gaps are on the scale of 1/20th of a pixel. That means most current 1080p lenses on the higher-end projectors are capable of resolving detail upwards of "40K", 38400x21600. Such a lens wouldn't resolve the gaps between pixels at a resolution that high, but it should be able to resolve a 1-pixel on/off test pattern even at that resolution.
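
One way to see what that claim amounts to: put a row of "1080p pixels" with 1/20-pixel gaps on a fine grid and blur it with Gaussian lens models of varying sharpness (the sigmas are illustrative, not measurements):

Code:
import numpy as np
from scipy.ndimage import gaussian_filter1d

cells = 20                                     # fine cells per 1080p pixel
row = np.ones(40 * cells); row[::cells] = 0.0  # 1/20-px gaps between pixels
checker = np.tile([1.0]*cells + [0.0]*cells, 20)  # 1-pixel on/off pattern

for sigma_px in (0.05, 0.25, 0.5):             # lens PSF std, in 1080p pixels
    g = gaussian_filter1d(row, sigma_px * cells)
    c = gaussian_filter1d(checker, sigma_px * cells)
    print(f"sigma={sigma_px:.2f}px  gap depth={1 - g.min():.2f}"
          f"  on/off contrast={c.max() - c.min():.2f}")

A lens can keep most of the contrast in a 1-pixel on/off pattern while barely hinting at the inter-pixel gaps, which is why "resolves the pixel gaps" and "resolves a 1-pixel pattern" are very different bars.
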
post #21 of 1307 - 11-20-2015, 08:18 AM - darinp2 (Thread Starter)
Quote:
Originally Posted by ader42 View Post
I'm a computer programmer who thinks in terms of bandwidth e.g. x amount of data over n amount of time.

I guess I am saying that I think a 1080p lens is good enough for 2x eshift (4k) if :

1) the data rate is doubled again (240hz) and
2) four 1080p images are used (one for each corner) and
3) the lens or image is shifted accordingly for each sub-frame

or that it will be good enough for 8K if 16 x 1080p images are used at 960Hz and the lens or images are shifted appropriately.

I feel that eshift doubles the data rate (2x 1080p) in place of doubling the data (1x 1620).
I think here you just got thrown off by the frequency-of-sub-frames thing. For lens requirements that isn't really part of the equation. Speeding up the sub-frames doesn't change the lens requirements.

Lens requirements in this case are about contamination percentage. So, if you run the same thing through for half as long you still have the same signal to noise ratio, but just half of each.

In the e-shift case there are twice as many discrete elements going through the lens per film frame as 1080p and half as many as 2160p. However, they are overlapped so that they land on the same element positions as 2160p, but with some correlation. The 4 million pieces of information are a subset of the 8 million, but not a superset of the 2 million.

Going with the data rate thing, I think we can agree that if I put up the E for half the time and put up the 3 for half the time the data rate per second is not affected by whether I choose to put them both up at the same time period within the frame time, or give one the first half of the frame time with the other getting the second half, or give them a cycle time of 1 nanosecond, whether toggling sides or putting them up together. Do you agree with that?

Maybe it depends on how you count the data rate though. With photons you are still getting more information as the E stays up. It is the same E, but the fact that it is still being transmitted is data. There is nothing that tells you that the E is still up other than the data (photons) showing that the E is still up.

The temporal frequency that applies to light in the lens is relative to the speed of light, and changing the frequency of the sub-frames doesn't change that. Every nanosecond the lens has to do something with the photons that go through it in that nanosecond, so in computer terms it might be more like a status that you are monitoring at a constant rate you cannot change, where the thing you are monitoring has its own update rate that is faster than your monitoring rate and you are just reading an average every time you get a status.

Not sure if that helps.

--Darin
post #22 of 1307 - 11-20-2015, 08:43 AM - Dionyz
Quote:
Originally Posted by Mike Garrett View Post
That is a nice simplified explanation and should make it easy to understand. The last image is just that an image. It could have been created by E-shift or it could have been created by a 4K projector, but it shows the same thing, so looking at the image by it's self, forgetting how it was produced and you have to admit it requires higher resolution.
I think the confusion is the result of conflating what is coming from the lens with what we view on the screen.
You need to follow the steps in the process of generating an e-shift 4K image:

Step 1 - the projector projects the source 1080p image (not a 4K image) - Image 1 - which passes through the lens for 1/120th of a second.

Step 2 - the projector shifts Image 1, together with the eshift algorithms, to generate Image 2, which is still a 1080p image, and this 1080p image passes through the lens for 1/120th of a second.

The two images, Image 1 and Image 2 are NEVER passing through the lens at the same time.
The lens is NEVER required to pass a 4K image.
The speed of light is 186,000 miles per second, which is almost a billion feet per second, or about 8 million feet in the 1/120th of a second that each image is passing through the lens.
Thus it is impossible for the light from the two images to overlap.

This is the key - it is on the screen, and NOT in the lens, where the shifted images overlap, creating the perceived increase in pixels and resolution in our brains.

Image 1 and Image 2 are NOT recombined in the electronics (with the pixel offset) before being projected through the lens - the lens NEVER sees anything more than 1080p, thus does NOT need 4k resolving capability.

Only if Image 1 and Image 2 were shifted in the electronics and recombined into a single image prior to going through the lens would you need a 4K lens. However, since JVC panels are only 1080p, you would not get any perceived improvement in resolution, and it would more likely result in a worse-than-original image.

Darin's logic and explanations all lead one to believe that the two images somehow overlap in the lens, which would necessitate more resolving capability, which is simply NOT the case - remember the speed of light, and that the JVC panels NEVER generate more than a 1080p image!!!

However, I do agree with other posters that a better lens will improve the image on the screen.
A perfect lens has zero degradation of the image produced by the panels.
A less-than-perfect lens can only degrade the image generated by the panels.
post #23 of 1307 - 11-20-2015, 09:46 AM - stanger89
Quote:
Originally Posted by Dionyz View Post
Image 1 and Image 2 are NOT recombined in the electronics (with the pixel offset) before being projected through the lens - the lens NEVER sees anything more than 1080p, thus does NOT need 4k resolving capability.
But what does "4k resolving capability" mean? For that matter what does 1080p capability mean?

Quote:
Darin logic and explanations all lead one to believe that the two images somehow are overlapping in the lens, which would necessitate more resolving capability, which is simply NOT the case -remember speed of light and JVC panels NEVER generating more than 1080p image!!!
But e-Shift requires being able to accurately render detail that is half the size of a 1080p pixel.

If you have a lens/system that can't clearly, accurately resolve a pixel, e-Shift causes a blurry mess:

[simulated image: e-Shift through a lens that can't cleanly resolve a pixel]

For e-Shift to work, and to effectively simulate 4K resolution, you first need to be able to clearly resolve individual pixels; then your e-Shift process can convincingly simulate 4K:

[simulated image: e-Shift through a lens that cleanly resolves pixels]

So the question is, is the lens in the first example good enough for 1080p? It can resolve a white pixel next to a black pixel, so I would say yes. In which case I would say it's clear that for e-Shift to be convincing you need a lens that is better than one that's just good enough for 1080p (one that can just resolve a black pixel next to white).
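
A 1-D sketch of the two cases pictured above (my own rendering, not the original simulated images): one lit 1080p pixel plus its half-pixel-shifted copy, pushed through a sharp and then a soft Gaussian "lens":

Code:
import numpy as np
from scipy.ndimage import gaussian_filter1d

cells = 10                        # fine cells per 1080p pixel
row = np.zeros(20 * cells)
row[100:100+cells] += 0.5         # field A: one lit pixel, up half the frame
row[105:105+cells] += 0.5         # field B: same pixel shifted 1/2 px

for sigma_px in (0.1, 1.0):       # sharp vs soft lens, in 1080p pixels
    out = gaussian_filter1d(row, sigma_px * cells)
    # the half-pixel end "steps" read as extra detail only if the lens keeps
    # them distinct from the full-brightness center plateau
    print(f"sigma={sigma_px}px  step={out[102]:.2f}  plateau={out[107]:.2f}")

With the sharp model the half-pixel step sits clearly below the plateau; with the soft one the two levels collapse into the blurry-mess case.
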
post #24 of 1307 - 11-20-2015, 10:19 AM - R Harkness
Dionyz,


You are repeating the same basic intuitions that began the whole conversation, and which Darin is showing to be misleading.

Darin is pointing out that the resolution of a lens is very much tied to SPATIAL resolution, not temporal resolution. Lens resolution is a problem of AREA of SPACE, and how much can be resolved within that given space - it's not an issue of sequential information. If it were, then any old lens only capable of passing 720p images could be considered of virtually "unlimited resolution", because the amount of info you can pass through it sequentially (e.g. every movie or video ever made) is almost unlimited. After all, films and videos are made of sequential information that our brains put together, just like e-shift. But that doesn't tell us anything about the resolving power of a lens. So the idea that adding up image info sequentially solves the resolution limitation problem - the idea your argument relies on - is a misunderstanding of the issues of lens resolution.

Hence it really matters that tiny pixels are trying to be moved around within the space of a single lens, whether this happens sequentially or not.

Try for a moment to put aside whether you are right or wrong, and think about the following:

Say you have a lens that CAN resolve down to 4K pixel sizes. Now, you have a blank field and you turn one of those teeny 4K pixels on. You can see the pixel resolved on screen through the lens. Now turn the pixel next to it on. Now you have two pixels beside each other, clearly resolved. Now turn the first pixel off. You still see the new pixel resolved. Turn the original pixel back on. Both pixels are there.

Notice this is the difference between both 4K pixels being resolved simultaneously (both on) vs sequentially (turning one off, then on). You could turn all the millions of pixels in the 4K pattern on either simultaneously, or one pixel after the other, or with only one pixel on at a time.

Through this process, is the resolution of the lens changing? Does it NEED to change? Surely not. It's the same lens; it can resolve 4K pixels spatially within its dimensions. It doesn't matter whether you are showing those pixels all at once or one at a time. Each time you turn on another pixel, you are presenting NEW information through the lens... SPATIALLY... that it has to resolve.

If the lens couldn't resolve the first 4K pixel in the first place, you wouldn't see it, nor would you see the second one turned on sequentially either. Time has nothing to do with it: spatial resolving power does.

This is because the resolution of the lens isn't tied to time; it's about the spatial resolution possible.

So now imagine projecting a 1080p pixel grid. Now keep that grid there while SIMULTANEOUSLY projecting another 1080p pixel grid and E-shifting it. What happens? If the resolution is there, you can see how the fine grid structure has created new, smaller pixel sizes than the 1080p pixels. In other words, image information finer than the original 1080p pixel grid is now being displayed. A reminder of how this looks:

[e-shift overlapping-grid image]

So there is now SMALLER SPATIAL INFORMATION being conveyed through the lens, the smaller grid areas creating smaller pixels. Again, this is presuming SIMULTANEOUS offset of the pixel grid, not sequential.

Now think back on the example of resolving the 4K pixels. It did not matter whether the 4K pixel pattern was created sequentially, or simultaneously. The lens either had the spatial resolution to resolve the 4K pixels, together or separately - or it didn't.
If it didn't, you couldn't distinguish the 4K pixels whether you presented them simultaneously or sequentially within the space of the image.

It's the same now for the E-shift pixels. Looking at the SIMULTANEOUS projection of the E-shifted pixels, you can either see (resolved by the lens) the SMALLER-THAN-1080P grid structure and pixels, or you can't. You can make out how the grid has cut the original larger pixels into smaller ones...or you can't. If your lens couldn't resolve that fine information, then when you simultaneously display the E-shifted 1080p grid, it would not be resolving the new finer line structure. If the lens was ONLY capable of resolving something as large as a 1080p pixel, then overlaying a new shifted 1080p image wouldn't show the new, smaller visible pixel structure created by the intersection of the grid lines, it would simply create a more blurry image, with the big pixels simply WIPING OUT the finer pixel structure.

Now, just like the 4K pixel example, turn the second E-shift grid on and off sequentially, and remember the 4K pixel example: it didn't matter whether the pixels were on simultaneously or sequentially; what mattered was whether the lens had the spatial resolution to show information that small. It's the same with the new, smaller grid structure created by the E-shifted 1080p grid. The lens either has the spatial resolution to resolve the finer grid/pixels appearing between the shifted bigger pixels, or it doesn't. Like the 4K pixels, it doesn't matter whether you are asking the lens to show those finer pixels at the same time or sequentially. In terms of spatial resolution, they put the same demands on the lens. The new finer grid structure of an E-shifted image puts the same spatial resolution demands on the lens, whether shown simultaneously or sequentially.

Yes, our brains put the E-shifted image together sequentially. But this can only happen in the first place if the spatial resolution of the lens would allow us to actually see the finer grid lines/pixel structure that occur when subtly off-setting the grid spatially. By subtly shifting the pixel grid via E-shift, this is a SPATIAL movement of the information as well as a sequential issue, and a lens has limited spatial resolution in which you can shift around teeny image information such that the effects would be visible to our eyes.

post #25 of 1307 - 11-20-2015, 10:47 AM - rak306
Quote:
Originally Posted by stanger89 View Post
But what does "4k resolving capability" mean? For that matter what does 1080p capability mean?
Agreed

Quote:
Originally Posted by stanger89 View Post

But e-Shift requires being able to accurately render detail that is half the size of a 1080p pixel.

If you have a lens/system that can't clearly, accurately resolve a pixel, e-Shift causes a blurry mess
Disagree. It only requires that the picture look better than it did without eshift.
Your example is extreme, and valid only for computer-generated characters. No movie would have that level of detail; it would be filtered so as not to alias.

(Of course live video may have that level of content, as some video camera makers are sloppy about how they filter.)

Quote:
Originally Posted by stanger89 View Post
For e-Shift to work, and to effectively simulate 4k resolution, you first need to be able to clearly resolve individual pixels, then your e-Shift process can convincingly simulate 4K:

So the question is, is lens in the first example good enough for 1080p? It can resolve a white pixel next to a black pixel, so I would say yes. In which case I would say it's clear that for e-Shift to be convincing you need a lens that it better than one that's just good enough for 1080p (can just resolve a black pixel next to white).
You are contending that a lens that is just good enough for 1080p is not good enough for eshift. I believe my example (post 4) shows otherwise. I believe the proper question should be: if I add eshift to a 1080p projector, will the image improve? I believe the answer is yes, regardless of how good the lens is (assuming it was judged OK for 1080p). The better the lens the better, but the image will improve even with a marginal lens.

Rick

post #26 of 1307 - 11-20-2015, 11:27 AM - Dionyz
Darin is confusing what the lens needs to handle and what appears on the screen.

Simply put - the JVC panels are 1080p and NEVER produce more than 1080p, thus that is all that the lens will ever see.
Now I am willing to believe the JVC lens is not as good as it should be to resolve 1080p perfectly.

However, it makes absolutely NO difference whether you are projecting a 1080p source with or without e-shift, since the lens NEVER sees anything more than 1080p coming into it. And it is impossible for the lens to see anything other than a 1080p image, since that is all the panels are capable of producing.

If you believe otherwise then you must believe that JVC panels are capable of producing more than 1080p.
post #27 of 1307 - 11-20-2015, 11:28 AM - darinp2 (Thread Starter)
Quote:
Originally Posted by Dionyz View Post
This is the key -It is on the screen and NOT the lens where the shifted images overlap ...
No, it is not. Anybody with a high speed camera can show that you are wrong. You didn't make a screen out of phosphor, did you? The images don't overlap on the screen any more than they overlap in the lens.
Quote:
Originally Posted by Dionyz View Post
Image 1 and Image 2 are NOT recombined in the electronics (with the pixel offset) before being projected through the lens - the lens NEVER sees anything more than 1080p
And neither does the screen. By this argument the screen doesn't need to resolve 4K either. That claim would be as ridiculous as this one, but I wonder if you claim the screen doesn't need any more resolution for the same signal-to-noise ratio with e-shift as without. Is that your claim: that a screen barely good enough for 1080p is just as good for 4K e-shift, but not good enough for true 4K?
Quote:
Originally Posted by Dionyz View Post
Only if Image 1 and Image 2 were shifted in the electronics and recombined into a single image prior to going through the lens would you need a 4K lens.
No, that wouldn't matter. Just like I proposed with a 6-chip solution where both e-shift images go through the lens at the same time.
Quote:
Originally Posted by Dionyz View Post
Darin logic and explanations all lead one to believe that the two images somehow are overlapping in the lens ...
Only if they don't read what I said or don't understand it. The problem here is that you don't understand enough about how this works to understand that you have reached a conclusion based on information that is irrelevant to the answer. As I said:
Quote:
Originally Posted by darinp2 View Post
It isn't more than 1080p data at once, but the fallacy is that this is relevant. It doesn't matter how much is shown at once, it matters what you perceive as being shown at once.
You are welcome to show other readers that you understand enough of the basics to explain what happens if, for the E and 3 example I gave, the E and 3 are each smeared into the gap between them when shown individually and then viewed quickly in sequence. For instance, suppose you have something like the attached image, which is a highly zoomed-in representation of the upper right part of the E, with the lens putting a halo around it and the gray on the right side extending all the way to where the 3 is supposed to start half a frame later. Specifically, for what is shown in the attached image:

1. If the E is shown at what would be 40 cd/m2 if shown continuously, but is only shown for the first half of a frame, how bright will the pixels in the E be?
2. If the halo created by the lens is half the value within the E at any moment in time, how much brightness does the lens add to the gap while the E is up, and how much is that averaged over the whole frame?
3. Given the same halo from the 3, how bright will the pixels in the gap be once this is shown at high speed?

If you don't think the questions are clear I can clarify.

BTW: I saw your post in another thread telling somebody to act like a grown up. I would hope that if you are proven wrong you will be a grown up and admit to that, but from what I've seen of your posts I would be surprised if you would ever admit it even if you learned enough about the physics of light here to understand that you have been wrong this whole time.

--Darin
Attached: partOfEWithSmear.jpg (6.0 KB)

post #28 of 1307 Old 11-20-2015, 11:33 AM - darinp2 (Thread Starter)
Quote:
Originally Posted by Dionyz View Post
Darin is confusing what the lens needs to handle and what appears on the screen.

Simply put - the JVC panels are 1080p and NEVER produce more than 1080p, thus that is all that the lens will ever see.
Do you believe the same thing about screens? If you think the screen ever reflects more than 1080p when e-shift is on, then you really should learn some more about screen materials.

How about addressing the example of the E and the 3 that I provided specific questions about in my previous post?

Your position is that if a system is good enough to show the E from the original post in this thread and good enough to show the 3, then it must be good enough to show the gap between them when the E and 3 are shown alternating at a much higher framerate than a human can detect, right? If not, please explain how that is different from what you have been claiming.

--Darin

This is the AV Science Forum. Please don't be gullible and please do remember the saying, "Fool me once, shame on you. Fool me twice, shame on me."
post #29 of 1307 Old 11-20-2015, 11:57 AM - R Harkness
I just want to riff a tiny bit more on this, because it took me a bit of working through to find where the intuitions were going wrong (helped, of course, by Darin's posts).

Intuitively it can seem that a projector doing E-shift only needs to be able to resolve a 1080p pixel grid, because that's all it's ever showing at one point in time. But, as Darin points out, the "point in time" part, the sequential aspect, really misses what lens resolution is all about, which is spatial resolution, not temporal. If lens resolution had to do simply with adding up what can be shown through it temporally, put together over time in our brains, then any crappy lens could show almost infinite information, insofar as you could pass as many low-res movies and videos through the lens as you want, ad infinitum.

But what lens resolution really has to do with is spatially-oriented information. You have a limited space - the lens area - and lens resolution has to do with how much fine information you can show within that area.

When people think of the E-shifted image, they tend to just imagine a 1080p grid in their mind, think "that's resolved," move it slightly, think "that's resolved too," and conclude that's all that is needed. Looking at the pictures provided by JVC of how E-shift works also sets up this same problem. You see a 1080p pixel grid, then another one simply shifted, then the two overlaid. So it seems you can say "ok, all I need is a 1080p grid to do this; I just take it and shift it slightly over another 1080p grid - voila!"

Except what's missing is that this process is not GOING THROUGH A LENS. It has abstracted the lens out of the equation. Once you actually have to show those grids offset through a lens, you are working within the spatial resolution constraints of the lens. And that is fixed. It's not moving around, like the pixel grid is, to help you show everything. Spatial resolution is, after all, why lenses HAVE differing resolutions to begin with.

To grasp this idea of spatial constraint, you could replace the lens with a fine grid. The grid represents the spatial resolution of a lens.

Let's say you had the image of a map of North America, with all its super-fine lines and information. The map is made of tiny pieces - pixels - that fit right into the squares of the grid. THE GRID IS IN FRONT OF THESE MAP PIXELS, with the map pixels inset. But they line up perfectly. From the right distance, this all comes together and you see the map finely resolved.

Now, take those tiny pixels of the map, and slightly shift them diagonally beneath that grid (like an E-shifted image). What you've done is shift the image spatially, slightly, within the constraints of the FIXED resolution of the grid. Unfortunately, though you have moved the map pixels, you haven't moved the grid with them (<<<<-----the important part). Therefore, every map pixel is now obscured somewhat by the grid. In other words, you've now lost some resolution of the map, because you've shifted it slightly in space, but the grid limits just how much you can fit there, spatially, and still have everything be seen without being obscured. This same problem of obscured detail occurs whether you shift the same map pixels sequentially to the next position under the grid, or you add another layer of those offset pixels simultaneously. Either way, the offset pixels end up in the same position "behind" the grid, partially obscured just as if you did it sequentially. That's the limiting of the resolution.

Now imagine a different scenario. In this one, the grid size is enlarged (this is equivalent to a higher resolution lens, which allows more information through). Now each map pixel takes up only the bottom 1/4 of its grid space. And now you do the shift again, diagonally. But this time there is room within each grid area to continue to show the whole pixel; the pixel has simply moved to the top-right corner of the grid and is not obscured at all.

Stepping back from the grid to view the map image, you have lost no information due to the grid; all map detail is finely resolved, the whole picture is there. This is what you get when you alter the grid so it can handle MORE information spatially.

You have more area of spatial resolution in which to shift an image around. And it doesn't matter if you show the map pixels first at the bottom left of the larger grid squares and then sequentially shifted to the top right, or whether you show the original and shifted map pixels simultaneously. Now you can see BOTH of them, all the pixel information, unobscured by the limitations of the smaller grid. (Of course, whether you are flashing the new map pixels sequentially or simultaneously, to make the map look the same the image would have to be re-mapped/scaled within the space created by the two new pixel positions - the same way it is done in E-shift.)

So, again: the smaller grid size = the lens of lower (spatial) resolution, which limits your ability to shift the same image information around the same area. It's a fixed spatial limit you have to work within, and so it limits what size of picture information you can shift within it without paying the consequence of losing information. If you are showing an image that is already at the limit, filling what the grid can show at one time, then shifting the image around within that space, whether sequentially or not, you are going to lose image information, obscured by the limits of that grid. And no matter how you try to re-map the image, you never get beyond the original problems of that grid.

The larger grid size = a lens with higher (spatial) resolution, allowing you more resolution space in which to shift around the same pixel information without it being lost or obscured. So long as you can shift the pixels within that larger resolution space, you can flash them sequentially in that grid space with nothing being obscured, so our eyes can see all the pixel information being flashed, and then you can start re-mapping the original signal into those new visible pixels.

So I think the key is to keep in mind that lens resolution is a spatial issue. It's a limited amount of space in which you can show fine information, sequentially or simultaneously. That's why shifting a 1080p pixel grid around behind the lens is only part of the equation. Just like in the grid example, the GRID - that is, THE LENS - is not itself shifting along with the 1080p pixel grid. You are sending changing spatial information through a lens that is FIXED IN SPACE and fixed in available spatial resolution, like the grid, and you are working through that limitation, which limits what you can do within that space and still see it on screen.
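
For anyone who wants to poke at this grid analogy numerically, here is a minimal 1-D sketch (a toy model of my own, not anything from JVC): the lens is modeled as a fixed blur kernel, each sub-frame passes through it alone, and the eye's time-average is just the mean of the two blurred sub-frames. The feature sizes and kernel widths are arbitrary stand-ins:

Code:
import numpy as np

# Model the lens as a fixed 1-D blur; "lens resolution" = kernel width.
def lens(signal, width):
    """Convolve with a box kernel of the given (odd) width in samples."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# Two sub-frames on a fine spatial axis: a stroke on the left, a stroke
# on the right, one dark sample between them (stand-ins for the E, the 3,
# and the gap in the full e-shifted image).
n = 16
sub1 = np.zeros(n); sub1[4:7] = 1.0    # "E" stroke, first sub-frame
sub2 = np.zeros(n); sub2[8:11] = 1.0   # "3" stroke, second sub-frame

for width in (1, 5):  # 1 = sharp lens, 5 = soft lens
    # Each sub-frame goes through the lens ALONE; the eye then averages
    # them because they alternate faster than vision can follow.
    perceived = 0.5 * lens(sub1, width) + 0.5 * lens(sub2, width)
    print(f"kernel width {width}: stroke = {perceived[5]:.2f}, "
          f"gap = {perceived[7]:.2f}")

The sharp kernel leaves the gap dark (stroke 0.50, gap 0.00), while the soft kernel makes the gap (0.40) come out brighter than the strokes (0.30), so the characters merge - even though only one sub-frame was ever in the lens at a time.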

post #30 of 1307 Old 11-20-2015, 02:35 PM - ader42
Rich/Darin

If the 2 images were sent simultaneously, I would agree with you. To send the images simultaneously you would have to use two lenses or a higher quality lens. But the images in e-shift are never simultaneous.

The 2160p image doesn't exist in the projector panels, in the lens, or on the screen. It only exists in the mind of the beholder, a la persistence of vision.

It's the same way a one-eyed person watching 3D through 3D glasses sees a single, non-blurry 2D image. They only see half the data, so the brain is not fooled.

I think the lens has to be good enough for one 1080p image, and also good enough for a second 1080p image sent afterwards that is not in the exact same spatial position. But a 1080p lens is good enough for both 1080p images. I don't think a lens's structure maps spatially to a 1080p or 2160p grid.

Let's say you had a projector that rotated its lens 90 degrees between each sub-frame and sent two 1080p images at 120Hz, with the alternating images offset by half a pixel. Each screen wall would, I believe, show a 50%-brightness image, but each would look fine resolution-wise.

If you then stopped the lens rotation and did fancy processing to choose select pixels from a 4K source, you would end up with an e-shift display - still using a 1080p-resolving lens.


I think if you had a normal e-shift device and display running and then switched to sending only one of the 1080p images (with each sub-frame lasting twice as long), you would see a 1080p image just fine.

And if you then switched to the alternate 1080p stream (again at twice the duration, to take up the slack of the missing 1080p image), that would look fine too.

And if you went back to alternating (e-shift), you would perceive a non-existent 2160p image, just like a flip book makes it look like something is moving - the brain is being fooled.

E-shift is an illusion.


Maybe I'm short on gray matter lol
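
However one frames the illusion, the time-averaging everyone in this thread is appealing to is easy to check numerically: because light adds linearly and persistence of vision integrates over time, alternating two sub-frames faster than the eye can follow yields the same perceived image as showing both at once at half power. A minimal sketch with random stand-in sub-frames (my own toy example, not JVC's actual processing):

Code:
import numpy as np

# Two hypothetical post-lens sub-frames (random stand-ins for the two
# e-shift fields).
rng = np.random.default_rng(0)
img1 = rng.random((4, 4))
img2 = rng.random((4, 4))

sequential = 0.5 * img1 + 0.5 * img2    # eye's average of fast alternation
simultaneous = 0.5 * (img1 + img2)      # both at once at half power
print(np.allclose(sequential, simultaneous))  # True

Which is why the sequential-vs-simultaneous distinction does not change what is perceived: whatever smear the lens adds to each sub-frame lands in the time-average either way.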