Resolution requirements for lenses - Page 2 - AVS Forum | Home Theater Discussions And Reviews
post #31 of 1307 Old 11-20-2015, 02:45 PM - Thread Starter
darinp2
Quote:
Originally Posted by ader42 View Post
Rich/Darin

If the 2 images were sent simultaneously I would agree with you. To send the images simultaneously you would have to use two lenses or a higher quality lens. But the images in eshift are never simultaneous.
Would you please address what you think happens in the E and 3 example I gave, where each character is smeared into the gap between them when shown individually and they are viewed quickly in sequence. For instance, suppose you have something like the attached image, which is a highly zoomed-in representation of the upper right part of the E, with the lens putting a halo around it and the gray on the right side extending all the way to where the 3 is supposed to start half a frame later. What happens? Specifically, for what is shown in this image:



To be clear, that is to represent the upper right arm of the E when it is smeared instead of displayed correctly like this:



1. If the E is shown at what would be 40 cd/m2 if shown continuously, but is only shown for the first half of a frame, how bright will the pixels in the E be?
2. If the halo created by the lens is half the luminance within the E at any moment of time, how bright is the lens's contribution to the gap while the E is up, and what does that contribution average out to per frame, in cd/m2?
3. Given the same halo from the 3, how bright will the pixels in the gap be once both halos are shown at high speed, in cd/m2?

To be clear, for this example the E and the 3 could be displayed just by cutting holes in paper and using a flashlight and a cheap lens. Does that mean that if the flashlights are toggled quickly a viewer could still see the E and 3 as separate entities, just because the E and 3 were never actually on the screen at the same time (even though it would take a high-framerate camera to even know whether they were being displayed at the same time or not)?

I'm still unclear as to what you believe happens with the smeared light when the E is up and when the 3 is up that differs from what happens with the smeared light when they are put up simultaneously for half a frame at a time instead. The photons don't interact. Do you think human eyes see the smeared photons in one case and not the other?

If you don't think the questions are clear I can clarify.
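For reference, the temporal-averaging arithmetic behind questions 1-3 can be worked through numerically. The sketch below is a toy calculation using only the numbers stated above (40 cd/m2 white, half-frame display, halo at half the instantaneous luminance); the values are illustrative, not measurements:

```python
# Toy calculation of the E/3 luminance questions above.
# All values are illustrative, taken from the stated assumptions.

white = 40.0                 # cd/m2 if a character were shown continuously

# Q1: the E is only up for the first half of each frame, so its pixels
# average to half the continuous luminance over a full frame.
e_pixels = white * 0.5                    # 20.0 cd/m2

# Q2: the halo is half the E's instantaneous luminance, and the E is up
# half the time, so the halo's per-frame average in the gap is:
halo_from_e = (white * 0.5) * 0.5         # 10.0 cd/m2

# Q3: the 3 contributes an identical halo during its half of the frame,
# so the gap integrates both halos:
gap = halo_from_e + (white * 0.5) * 0.5   # 20.0 cd/m2

# The gap ends up as bright as the characters themselves, so the E3
# symbol smears toward an 8 even though the E and 3 never overlap in time.
print(e_pixels, gap)   # 20.0 20.0
```

The last line is the point of the questions: the gap reaches the same time-averaged luminance as the letters themselves.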

Thanks,
Darin
Attached Thumbnails: partOfEWithSmear.jpg

This is the AV Science Forum. Please don't be gullible and please do remember the saying, "Fool me once, shame on you. Fool me twice, shame on me."

Last edited by darinp2; 11-20-2015 at 02:52 PM.
post #32 of 1307 Old 11-20-2015, 03:06 PM - Thread Starter
darinp2
Quote:
Originally Posted by rak306 View Post
You are contending that a lens that is just good enough to be ok for 1080p is not good enough for eshift. I believe my example (post 4) shows otherwise.
You started out with something that you wanted to be blurry in 1080p space and ended up with something blurry with e-shift on. I think you could "prove" the same thing to claim that a 4k panel doesn't require any more resolution than a 1080p panel. Just start with something blurry in 1080p.

In case it isn't clear, most of us are talking about starting with a 4K source, where JVC extracts 2 images with some intelligence applied. I don't believe every edge in all movies is soft in the actual encodings, even if we can argue that they should be.

I am unclear on your current position regarding something you seemed to claim earlier. That is, that lenses only have to have enough resolution for the light that passes through them at a moment in time. For example, for my E and 3 example, if the lens is good enough to show the E and good enough to show the 3, then it will be good enough to show the E and 3 when they are alternately shown at 120Hz or higher. Is that your current position?

For computer sources do you believe that it matters whether things are shown at the same time or at different times during a frame time as far as spatial resolution requirements for lenses?

--Darin

post #33 of 1307 Old 11-20-2015, 06:11 PM
rak306
Quote:
Originally Posted by darinp2 View Post
You started out with something that you wanted to be blurry in 1080p space and ended up with something blurry with e-shift on. I think you could "prove" the same thing to claim that a 4k panel doesn't require any more resolution than a 1080p panel. Just start with something blurry in 1080p.
I chose a sine wave as any picture can be decomposed into a sum of sine waves at frequencies of: DC, 1 cycle/frame, 2 cycles/frame, ... number-of-pixels-per-frame/2 cycles/frame. So if you know the response to a set of sine waves (e.g. the MTF), you can (in theory) calculate what the output would look like.

I showed the response to a sine wave at 0.24 cycles/pixel, which is about 1/2 the maximum resolution that can theoretically be passed (the Nyquist frequency is 0.5 cycles/pixel). So it is not some fuzzy signal; it is at about 1/2 the maximum theoretical resolution.
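The decomposition described above can be sketched with a discrete Fourier transform. The snippet below (my illustration, assuming NumPy) builds a sine at 0.24 cycles/pixel and confirms it occupies a single spatial frequency well below the 0.5 cycles/pixel Nyquist limit:

```python
import numpy as np

# A 100-pixel line containing a sine at 0.24 cycles/pixel - about half
# the 0.5 cycles/pixel Nyquist limit, matching the example in the post.
n = 100
x = np.arange(n)
signal = np.sin(2 * np.pi * 0.24 * x)

# The FFT expresses the line as a sum of sinusoids at 0, 1/n, 2/n, ...
# up to 0.5 cycles/pixel; an MTF curve tells you how much of each of
# these frequencies a lens passes.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n)            # 0 .. 0.5 cycles/pixel

print(freqs[np.argmax(spectrum)])     # 0.24 - a single spatial frequency
```

Knowing how much a lens attenuates each of these frequencies (its MTF) is, in principle, enough to predict the output image.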

I fully agree with you that the time of when you send stuff doesn't matter.

But it doesn't matter whether they are sharp squares, blobs from a fuzzy lens, or tiny points. When viewed at the proper distance (one where you don't see any jaggies from the pixels, or points if the pixels were point sources), I contend there still will be improvement in the picture using eshift for a given lens. I'm not saying a better lens won't improve things, just that eshift will improve the image - regardless of the lens (so long as it was 'good enough' for 1080p).

Quote:
Originally Posted by darinp2 View Post

...
I don't believe every edge in all movies is soft in the actual encodings, even if we can argue that they should be.
I do. Go back to the DVD time frame. You have a movie film negative, with arguably 4k+ resolution, then you make a 2k digital scan. You must apply an anti-aliasing filter during the scan or you will get aliasing. Now take that 2k digital master and make a 720x404 DVD, and you will get aliasing all over again unless you apply a (digital) low-pass filter. I have never seen aliasing in a (DVD or Blu-ray) movie (outside of computer-generated logos).



Quote:
Originally Posted by darinp2 View Post


I am unclear on your current position regarding something you seemed to claim earlier. That is, that lenses only have to have enough resolution for the light that passes through them at a moment in time.
I was wrong.

Quote:
Originally Posted by darinp2 View Post
For example, for my E and 3 example if the lens is good enough to show the E and good enough to show the 3, then it will be good enough to show the E and 3 when they are alternately shown at 120Hz or higher. Is that your current position?
Depends on how close the E and 3 are to each other. It doesn't matter if they are shown at the same time or alternating; the results are the same. But you certainly can increase the resolution requirement of the lens (in order to see a space between the two letters) by shrinking the gap between them (without letting them touch) to an ever smaller amount; otherwise they will merge into an 8.

Now take that E (with sharp edges), and project it on a 4k projector. Move the E slowly across and up the screen. Those sharp edges of the E will wiggle as it moves through the pixels. So you don't have the resolution that you think you have by lining up an edge on a pixel boundary. To eliminate that wiggling, you need to filter the image so that it looks the same as it moves. When you do, those edges will not be as sharp. Look at any anti-aliased character set. It takes several pixels for an "edge".
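That wiggle is easy to reproduce in a toy model (my own sketch, not any projector's or font renderer's actual processing): a sliding edge rendered with a hard threshold snaps to whole-pixel positions, while area-sampled (anti-aliased) rendering tracks subpixel positions at the cost of a softer edge:

```python
import numpy as np

def render_edge(pos, n=10, antialias=False):
    """Render a white-to-black edge at subpixel position `pos` onto an
    n-pixel row (hypothetical toy model)."""
    px = np.arange(n)
    if not antialias:
        # Hard threshold: the edge snaps to the nearest pixel boundary.
        return (px < np.round(pos)).astype(float)
    # Area sampling: each pixel gets the fraction of it left of the edge.
    return np.clip(pos - px, 0.0, 1.0)

# Slide the edge from position 4.0 to 5.0 in tenths of a pixel and track
# the total lit area (a proxy for where the edge appears to be).
hard = [render_edge(4.0 + t / 10).sum() for t in range(11)]
soft = [render_edge(4.0 + t / 10, antialias=True).sum() for t in range(11)]

print(hard)  # stays put, then jumps a whole pixel at once: the wiggle
print(soft)  # 4.0, 4.1, ..., 5.0: smooth subpixel motion, softer edge
```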


Quote:
Originally Posted by darinp2 View Post


For computer sources do you believe that it matters whether things are shown at the same time or at different times during a frame time as far as spatial resolution requirements for lenses?

--Darin
Nope.


Keep in mind - I agree you want the best lens you can afford, with a flat spatial frequency response all the way up to the Nyquist frequency. I'm just suggesting that a picture can be improved with the eshift method (over what it is without eshift) while keeping the lens the same, even if the lens is 'just adequate' without eshift.
Dionyz and Tomas2 like this.
post #34 of 1307 Old 11-21-2015, 04:52 AM
ader42
Quote:
Originally Posted by darinp2 View Post
Going back to the E3 example in the first post in this thread, what would happen if the lens was smearing a lot of light from the E all around the E, then smearing a lot of light from the 3 all around the 3, and this was played at say 240Hz? Do you think a human would be able to tell that the full image was made up of an E and a 3?
Isn't the point that the intent is for humans to not be able to detect the separate images? My point is that the lens's ability is fixed, so the software needs to adapt and choose the pixels and their luminance to overcome any lens limitation - this is the e-shift process creating two sub-frames to present alternately, aimed at giving the false impression that a single image is being presented.

I think you think a better lens is required but I think only the software pixel selection and image offsetting is required. I'm happy to accept it might be that both are required and that I can't get my head around something though.

Do you think e-shift simply uses pixels 1,3, 5 etc. for image 1 and pixels 2, 4, 6 etc for image 2?
I think there is far more to it than that.


Quote:
Originally Posted by darinp2 View Post
As an example, let's say that for this example the light put out is at such intensity that if the one character was shown constantly the white would be 40 cd/m2. We will just have the display show each character for half the time, so that means the luminance will be 20 cd/m2.

Now let's say the lens is pretty poor and it creates a halo around each of these large letters that is about half that bright and extends out like shown in this gif.



With each letter shown individually can you tell what they are? If so, then this image is good enough for those letters.
Yes.

Quote:
Originally Posted by darinp2 View Post

Now, if that gif was played at 240Hz do you think you would be able to tell from any reasonable distance that the image was made up of 2 characters and wasn't just a white block with 2 black blocks?
I think the e-shift process would choose pixels and adjust their luminance so that the viewer could not see an E and a 3 - that is the whole point.

Quote:
Originally Posted by darinp2 View Post

If the black lines between the E and 3 on the chip face are smeared by the lens at half the luminance of the letters and the smear extends all the way between the characters, then the characters would be 20 cd/m2 from only being displayed half the time and the parts that are supposed to be black would also be 20 cd/m2 because they are displayed 100% of the time. That is, half the luminance for twice as long gets back to the same luminance.

Put another way, given that if the lens skews the E and 3 too much into each other that little black gap becomes as bright as the letters themselves, how would a lens creating that much of a problem be considered adequate just because it was adequate enough to see the E and 3 when slowed down enough (which is of course not what human vision sees at 240Hz or 120Hz)?

Does that make sense?
If the lens is smearing to that level then it's not good enough for 1080p let alone e-shift?


I think you believe a better lens is being used but I think the limitations of the lens are being overcome by the e-shift process (pixel selection/creation & the offsetting) and exploitation of the limitations of our visual system.

I think you would have a point if all e-shift was doing was choosing alternate pixels from a 4k source and shifting them diagonally. I think there is much more to the generation of the two 1080p images though, and that is why JVC has been able to improve e-shift year on year without lens changes (even though there may have been lens changes also).

E-shift effectively provides a 3k image from a 4k source (half the 4k data).
Surely e-shift only exists because it's cheaper than using a 3k panel and a 3k lens.

You're saying that a 3k lens is required anyway. So the only possible "saving" is that it's cheaper to use an e-shift offsetting device and two 2k panels in place of a single 3k panel.

That sounds wrong to me. I would guess that using two 2k panels and developing and implementing a new e-shift offsetting device would cost more than just using a 3k panel. I could be wrong about that too though.

Might a 3k image created from a 4k source be better than two offset 1080p images?
I'd like to see a comparison with an e-shift 3k versus a downscaled 3k (both coming from a 4k source) to test that.
post #35 of 1307 Old 11-21-2015, 10:13 AM - Thread Starter
darinp2
Quote:
Originally Posted by rak306 View Post
Keep in mind - I agree you want the best lens you can afford, with a flat spatial frequency response all the way up to the Nyquist frequency. I'm just suggesting that a picture can be improved with the eshift method (over what it is without eshift) while keeping the lens the same, even if the lens is 'just adequate' without eshift.
Seems like we are pretty much in agreement. I'm guessing you would feel the same way about true 4K chips even if the lens was one considered just good enough for 1080p. Is that right?

I find it interesting that Dionyz liked your post even though it contradicts his position that all you have to look at is what comes through the lens at a moment of time. I wonder if he understood that.


You may be right about sources not containing sharp edges. CRTs would smooth these things out naturally some and so I don't think it was as critical then. Joe Kane gave a presentation years ago for his Samsung single chip DLP projector where he mentioned getting a call from somebody using one for mastering who said that his projector was adding artifacts. Joe looked into it and said it was just that his projector was showing the detail that the CRTs couldn't, but that it was in the source. I asked Joe whether detail that sharp was really supposed to be in the source and he had an answer that I think was something along the lines that the reality is that it was there, but I don't recall his exact words. I know that the DLP chips would multiply some image noise more than was in the source, so some people would blame the source completely because it only happened when there was noise in the image, but that doesn't mean the chips were handling that noise correctly.


--Darin


Last edited by darinp2; 11-21-2015 at 10:18 AM.
post #36 of 1307 Old 11-21-2015, 10:29 AM - Thread Starter
darinp2
Quote:
Originally Posted by ader42 View Post
Isn't the point that the intent is for humans to not be able to detect the separate images?
The point of this one is that the image is supposed to be something like a company symbol that is an E next to a 3, not an 8. Here is a zoomed-in version of the symbol to be displayed:




I provided this example to address the assumption that many have made that the lens only has to be able to resolve the detail that goes through it at a moment of time. In this case, even if the E is shown at one moment and the 3 at another, smearing that is inconsequential to showing an E on its own, and inconsequential to showing a 3 on its own, is not necessarily inconsequential to properly displaying this symbol as E3 instead of as an 8.

Is your position that a bad lens will show this improperly even if the E is flashed and then the 3, but the software can just make the black lines between the E and 3 bigger to keep it from incorrectly showing an 8 to viewers? If so, that would be true if the E and 3 were shown sequentially or at the same time. The time lag between them in the same frame wouldn't make any difference to that.
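Because lens blur is, to a good approximation, linear in light, the claim that the time lag makes no difference can be checked in a one-dimensional toy model. Everything below is hypothetical (the two bars stand in for the facing edges of the E and the 3, and the PSF weights are made up):

```python
import numpy as np

# Two bright bars with a black gap between them, shown either together
# at half drive or alternately at full drive for half the frame each.
e_bar = np.array([1, 1, 1, 0, 0, 0, 0], float)   # the E's edge alone
three = np.array([0, 0, 0, 0, 1, 1, 1], float)   # the 3's edge alone

# A wide point-spread function standing in for a poor lens.
psf = np.array([0.25, 0.5, 0.25])
blur = lambda row: np.convolve(row, psf, mode="same")

# Simultaneous: both characters up at half luminance for the whole frame.
simultaneous = blur(0.5 * e_bar + 0.5 * three)

# Sequential (the e-shift situation): each up at full luminance for half
# the frame; the eye integrates the two sub-frames.
sequential = 0.5 * blur(e_bar) + 0.5 * blur(three)

# Convolution is linear, so the time-averaged images are identical:
# the gap fills in by exactly the same amount either way.
print(np.allclose(simultaneous, sequential))   # True
```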
Quote:
Originally Posted by ader42 View Post
Do you think e-shift simply uses pixels 1,3, 5 etc. for image 1 and pixels 2, 4, 6 etc for image 2?
No, I believe they choose the 4 million things to display out of 8 million in the source carefully.
Quote:
Originally Posted by ader42 View Post
That sounds wrong to me. I would guess that using two 2k panels and developing and implementing a new e-shift offsetting device would cost more than just using a 3k panel.
They don't use two 2k panels instead of a 3k panel. They still use 1 panel per primary color and put an element in front of it that shifts the view of that panel by 1/2 a pixel horizontally and 1/2 a pixel vertically. I believe they have exactly one e-shift element per 3 panels, so I do believe the cost is significantly less than three 3k panels would be, even if 3k panels were equivalent to 4k e-shift.

--Darin
Attached Thumbnails: e3Zoomed.jpg


Last edited by darinp2; 11-21-2015 at 10:37 AM.
post #37 of 1307 Old 11-21-2015, 10:47 AM - Thread Starter
darinp2
Going back to something from earlier.
Quote:
Originally Posted by ader42 View Post
And if you went back to alternating (eshift), you would perceive a non-existent 2160p image just like using a flip book makes it look like something is moving - the brain is being fooled.

Eshift is an illusion.
This stuff is all illusions. Maybe part of the problem is that people aren't familiar with how even non-eshift displays go about displaying the images we see as solid.

I used a 1000 fps camera on a JVC back in 2009 and it isn't like the JVC just lights up a pixel and leaves it there. I recall something like 4 cycles of different luminance levels within individual frames: dark, then bright, then dark, then dim, ...

And spatially I don't believe the pattern ended up being what we see either, until you averaged all 4 of the different cases together (which our visual systems do). Even showing just the E in my example wouldn't necessarily have that whole E going through the lens as a whole at one moment of time. Unfortunately the videos I posted here in 2009 for this were on a site where they are no longer available. I can think of someplace they might be and could look, but there is only some chance they would be there.

Considering CRTs and scanning lasers, if they were going to show the E3 symbol they wouldn't show it all at once. They would generally show a little bit of the very top of each character, then a little bit lower, then a little bit lower, all the way down until they have painted the whole symbol once considering persistence. They may do this multiple times per frame.

Displaying movies is largely about displaying illusions and that was true long before e-shift was added. Just because one thing looks solid with a non-eshift projector doesn't mean it all went through the lens at the same moment in time.

--Darin

post #38 of 1307 Old 11-21-2015, 11:59 AM - Thread Starter
darinp2
Somebody provided me a link to an interesting looking paper about shifting pixels:

http://www.ics.uci.edu/~majumder/docs/iccp13.pdf

I haven't looked through the whole thing yet, but it looks like they used 2 lenses to display the shifted images at the same time. Not that displaying both at once increases the lens requirements compared to using just one chip and showing the non-shifted image and the shifted image separated in time, once the frequency between them is high enough. With the method used in JVC's e-shift the 2 frames can have different content besides the shift (or the same content, if JVC chooses).

--Darin


Last edited by darinp2; 11-21-2015 at 12:05 PM.
post #39 of 1307 Old 11-21-2015, 11:07 PM
rak306
I decided to repeat the example of post 4, but this time with a real image.

I took a still from LOR that was 720x404 pixels into Photoshop. I purposely lowered the image size by a factor of 2 to 360x202. I saved this and am pretending that this is the "4k" image. (The reason I did this was to be sure that there would be detail changes per pixel. I am not sure how much detail was used in those early days of DVD.)

I then further reduced the image size by a factor of 2 to 180x101. This represents my "2k" image.

Figure 1 shows the 360x202 input "4k" image. Figure 2 is the 180x101 "2k" image. The reductions in resolution were done in Photoshop. Notice the reduced detail in Fig 2 in the beard.

Fig 1 - "4k" source



Fig 2 - "2k" reduced by factor of 2 from fig 1 in photoshop.

Figure 7 shows the results from simulating eshift, by taking 2 half resolution images from fig 1, shifting one of them, and adding the results. Notice the improvement over Fig 2.


Fig 7 - simulated eshift.

Figure 8 takes the 2 separate images that made up Figure 7, and filters them - representing a lesser quality lens. Then these 2 images were added and are shown in Figure 8. This represents eshift through a lower quality lens than that of Figure 7.



Fig 8 - Eshift using a lesser quality lens

Figure 9 shows Figure 2 using the same filtering as Figure 8.



Figure 9 - "2k" image from Fig 2 filtered the same as Figure 8

Finally, the animated gif below shows Figs 8 and 9 in a loop.

Neither the filtered "2k" version nor the filtered eshift version is as good as the "perfect lens" 4k or eshift versions. However, for the filtered (simulated lesser lens) versions, adding eshift improved the resolution over the 2k version. Below is an animated gif that alternates between Figs 8 and 9 to better see the difference.

One final note: these images are shown at 2x size. That is, they are displayed 720 pixels wide even though the images are only 360 pixels wide. When I displayed them at 360, it was harder to tell the difference.



Fig 8 - simulated eshift with lesser lens vs Fig 9 - "2k" image from Fig 2 with lesser lens
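This Photoshop experiment can also be mimicked numerically. The sketch below is my own toy model of the shift-and-add idea (not rak306's exact filters, and not JVC's actual sub-frame processing): two native-resolution sub-frames, the second sampled on a lattice offset by one "4k" pixel diagonally, averaged as the eye would. A single bright "4k" pixel shows the resolution gain: plain native resolution smears it into an ambiguous 2x2 block, while the second lattice localizes it back to the correct position:

```python
import numpy as np

def downsample2(img):
    """Average 2x2 blocks: the native-chip view of a '4k' image."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    """Pixel-replicate back to '4k' space (each native pixel covers 2x2)."""
    return np.kron(img, np.ones((2, 2)))

def eshift(img4k):
    # Sub-frame A: native-resolution sampling on the unshifted lattice.
    a = upsample2(downsample2(img4k))
    # Sub-frame B: the same chip seen through a one-'4k'-pixel (half a
    # native pixel) diagonal offset.
    shifted = np.roll(img4k, (1, 1), axis=(0, 1))
    b = np.roll(upsample2(downsample2(shifted)), (-1, -1), axis=(0, 1))
    # The eye averages the two rapidly alternating sub-frames.
    return 0.5 * (a + b)

img = np.zeros((8, 8))
img[3, 3] = 1.0                       # a single bright '4k' pixel

native = upsample2(downsample2(img))  # flat 2x2 block: position ambiguous
es = eshift(img)                      # unique peak at the true position

print((native == native.max()).sum())  # 4 - four equally bright cells
print((es == es.max()).sum())          # 1 - one brightest cell
print(tuple(int(i) for i in np.unravel_index(np.argmax(es), es.shape)))  # (3, 3)
```

Under this model the e-shifted result still is not a true "4k" image (the peak is surrounded by half-level spill), which matches the observation that eshift improves on "2k" without reaching Fig 1 quality.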

EDIT: Add more comparisons in a 2 frame animated gif.



Fig 1 -"4k" original vs Fig 2 - "2k" lower res via Photoshop





Fig 1 -"4k" original vs Fig 7 - simulated eshift




Figs 2 - "2k" vs Fig 7- simulated Eshift




Figs 2 - "2k" vs Fig 8- Eshift with lesser lens


When comparing Figs 8/9 you see improvement of a "4k eshift" over just "2k" even with the lesser lens, but clearly the best picture is with the best lens.

Fig 7 ("eshift with good lens") is almost as good as Fig 1 ("4k"). Figs 2/8 show that there is slight improvement with "eshift with lesser lens" over "2k with good lens".

But when it is all said and done, I think no one would go to the expense of adding eshift without also making sure the lens was up to the increased resolution. So I've come pretty much full circle.

Last edited by rak306; 11-22-2015 at 12:22 PM. Reason: Add more comparison gifs
post #40 of 1307 Old 11-22-2015, 07:25 AM
 
RLBURNSIDE
Quote:
Originally Posted by darinp2 View Post
Going back to something from earlier.
This stuff is all illusions. Maybe part of the problem is that people aren't familiar with how even non-eshift displays go about displaying the images we see as solid.

I used a 1000 fps camera on a JVC back in 2009 and it isn't like the JVC just lights up a pixel and leaves it there. I recall something like 4 cycles of different luminance levels within individual frames: dark, then bright, then dark, then dim, ...

And spatially I don't believe the pattern ended up being what we see either, until you averaged all 4 of the different cases together (which our visual systems do). Even showing just the E in my example wouldn't necessarily have that whole E going through the lens as a whole at one moment of time. Unfortunately the videos I posted here in 2009 for this were on a site where they are no longer available. I can think of someplace they might be and could look, but there is only some chance they would be there.

Considering CRTs and scanning lasers, if they were going to show the E3 symbol they wouldn't show it all at once. They would generally show a little bit of the very top of each character, then a little bit lower, then a little bit lower, all the way down until they have painted the whole symbol once considering persistence. They may do this multiple times per frame.

Displaying movies is largely about displaying illusions and that was true long before e-shift was added. Just because one thing looks solid with a non-eshift projector doesn't mean it all went through the lens at the same moment in time.

--Darin
Good post.

Sounds like the first bolded part is talking about frame rate control (FRC), which is a type of temporal dithering used to increase perceived bit depth (if you didn't know that already). If it was in 2009 I wonder if it was to increase from 8 to 10 bits, but more likely 6 to 8. Who knows. It could be related to the dynamic iris tech that boosts the brightness of each pixel and then closes the iris to increase dynamic contrast in black areas on a frame-by-frame basis, analysing the APL of each frame and adapting accordingly. I read a link to a tech PDF once about some older DLPs that did this and achieved good results in reducing banding and increasing shadow detail.
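As I understand FRC, the idea can be sketched as error-carrying temporal dithering. This is a generic illustration of the technique, not any specific projector's algorithm; the function name and numbers below are my own:

```python
def frc_sequence(target, frames=4):
    """Emit `frames` integer (e.g. 8-bit) levels whose average over time
    approximates `target`, a level with fractional precision.
    (Hypothetical sketch of temporal dithering, not a real panel driver.)"""
    out, err = [], 0.0
    for _ in range(frames):
        level = round(target + err)   # nearest level the panel can show
        err += target - level         # carry the quantization error forward
        out.append(level)
    return out

# A '10-bit' level of 401 maps to 401/4 = 100.25 in 8-bit terms; the
# panel flashes nearby 8-bit levels whose time average hits it exactly.
seq = frc_sequence(100.25, frames=4)
print(seq, sum(seq) / 4)   # the four levels average to 100.25
```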

Second bolded part, doing this multiple times per frame: yes, definitely. Strobed displays need to be shown at at least 60Hz to avoid flickering; this is why Tesla chose 60Hz for street lamps. The higher the luminance levels in theaters (such as lasers enable), the more important a higher refresh rate is. Not only that, but the more important a higher actual frame rate is. HDR will start to force people to look more into higher frame rates, because the brighter the image is, the more apparent the various gaudy motion artifacts of a 24Hz frame rate become.

But aside from the Sony ultra-short-throw 4K laser projector and the LG one, as well as the pico laser projectors, I'm not aware of any strobed laser projectors available. Most of the laser tech now is just used as a dumb "whole scene" light source, which is then fed into a DLP or whatever. Kind of missing the point of lasers IMO. But of course there are serious safety issues with higher-lumen direct laser projection. Lots of new VR headsets are using strobing too now, like the Oculus, to reduce sample-and-hold persistence as well as input latency. If your display receives its frame line-by-line in order, it can display lines right away, instead of waiting for the entire frame to arrive and then doing a buffer flip. This is another reason why CRTs are often used as the gold standard for low input latency, as well as low persistence-based motion blur. Strobing is superior, although I don't relish the notion of a return to 60Hz displays; let's hope they at least get the memo that 72Hz / 96Hz / 120Hz is better. 120Hz is ideal, obviously, since you can do 24p and 60p easily without any processing.
post #41 of 1307 Old 01-08-2016, 01:50 AM
Elix
Now with the announcement of the first consumer 4K DLP projectors using 'wobulating' 0.67" DMDs with 4 million active pixels (creating 8 million addressable pixels), I'd like to ask Darin a question. What are the new resolution requirements for lenses? Let's compare to projectors with 0.65" DMDs with 2 million active pixels. And let's ignore for simplicity the difference between chip sizes (0.65" vs. 0.67"). The simple question is, if I may put it like this: does the requirement double or quadruple in order to maintain the same MTF?

Last edited by Elix; 01-08-2016 at 01:53 AM.
post #42 of 1307 Old 04-26-2016, 08:48 AM
Ted99
@darinp2 : In reading through the "[email protected] thread, I just found the reference to this thread, which seems right on point for a question I have. This last post by @Elix sets up my question, which is: will the new .67" DMD from TI, using a technique similar to JVC's e-shift, allow the same on-screen resolution as the 3-chip D-ILA e-shift with a lower-cost lens? As I understand things, a smaller chip allows for a smaller lens. The .67" DMD is about the same size as the .7" LCOS chips in the JVC, but there are 3 of them projecting through the lens at the same time, so I am wondering if it requires a higher quality lens to transmit these three colors simultaneously for the same resolution as a single DMD with sequential colors regulated by a color wheel?

post #43 of 1307 Old 06-15-2017, 04:42 PM
darinp
Some may remember that a while ago on this forum there were a bunch of people who thought that since eShift images only go through the lens as native images at any one point in time, the lens requirements are based on the native images and not the composite eShift images. For a while it was pretty much me against a large group of people, but with time many of them realized that they had been wrong.

Now there is another group on the under-$3.5k forum claiming the same thing about the TI XPR projectors that are 2.7k native plus eShift. That discussion was bogging down one of the threads, so I'm putting a response here:
Quote:
Originally Posted by Dave Harper View Post
So what you're saying, is that even though the native resolution is all that is ever going through the lens at any one given point in time
, that it magically needs to resolve more than that for our human eyes and brains, which don't see said images until AFTER they leave the lens, one frame at a time, then bounce off a screen and into said eyes which is then, and only then, processed by said "human brain"?.....Gotcha!
It is not magic at all. It is just like the eyeglasses: if they obscure the detail in the final composite image too much, your brain will not be able to perceive the composite image properly. It makes no difference that at each instant of time only a native image is passing through the lens or eyeglasses.
Quote:
Originally Posted by Dave Harper View Post
Any studies, tests, links, white papers, double blind peer reviews that can back up that claim?
I doubt there is any of that on something this basic and this specific. I am an engineer who works on medical products where imaging is very important, but that isn't why I understand something like this. Unlike a lot of people who are considered experts in this industry, I haven't forgotten all the physics I learned in school.

Do you believe that on/off CR matters? If I hadn't done a lot of work, here and behind the scenes, to get people to understand that on/off CR matters while ISF and others were telling everybody otherwise, you might have been fooled by those people too.
Quote:
Originally Posted by Dave Harper View Post
... please don't call us wrong, until you can prove your point scientifically, which you haven't.
Not trying to hurt your feelings, but I don't need a white paper saying you are wrong to know that you are wrong, any more than I would have needed a white paper from somebody else to explain many things about contrast ratio, screens, and projectors over the last 15 years.
Quote:
Originally Posted by Dave Harper View Post
OK, so you're telling me, that given the eye chart below (represents native rez of the panel), if I can read the very bottom line with the very best possible lenses that the optometrist can give me and that human technology has created and allows for that sized letter/number(which is the key point here where lenses are concerned!), that if I then flash each of the letters alternately odd then even, as eShift is doing, then all of the sudden that lens isn't adequate to still resolve each letter independently or together?
I never said that at all. You conveniently chose letters that are spaced so that the finest detail is in the letters and not in the gap between the characters. Spreading them out changes what is needed from the glasses.

I also find it strange that you think I am the one who thinks it matters how the composite images are made. You are the one whose claim means that if those letters you showed on the eye chart were created by a display that only showed one pixel in any instant of time, then the glasses would "magically" only need to resolve those single pixels instead of having to resolve the whole letters.

To be clear: if you went to the optometrist and the following picture was the only thing shown for an eye test, would you need to ask whether the display was showing both characters at the same time or at different times before you could figure out how good the glasses would need to be?



Is that really still your position? You think the glasses required would differ between a display that showed both the left and right halves of the image every even millisecond and one that showed the left half every even millisecond and the right half every odd millisecond?

You wouldn't even be able to tell which of those the display was doing, yet you seem to think the glasses required to see the E3 would magically change due to something you couldn't even perceive.
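This linearity argument can be sketched numerically. The following is a toy 1-D model (the point-spread function and sub-frame contents are made up, not real projector data): because lens blur is linear, the eye's time-average of two separately blurred flashes equals the blur of the composite frame, so *when* each sub-frame passes through the lens cannot change what the viewer receives.

```python
import numpy as np

# Toy 1-D model: psf is a made-up lens point-spread function; sub_a/sub_b
# stand in for the two e-shift sub-frames. Convolution is linear, so the
# time-average of the two blurred flashes equals the blur of the composite.
rng = np.random.default_rng(0)
psf = np.array([0.25, 0.5, 0.25])        # hypothetical lens blur kernel
sub_a = rng.random(64)                   # sub-frame A (arbitrary content)
sub_b = rng.random(64)                   # sub-frame B (arbitrary content)

def blur(frame):
    """Apply the toy lens blur to one frame."""
    return np.convolve(frame, psf, mode="same")

sequential = 0.5 * blur(sub_a) + 0.5 * blur(sub_b)  # flashed one after the other
simultaneous = blur(0.5 * (sub_a + sub_b))          # both through the lens at once

print(np.allclose(sequential, simultaneous))        # True
```

Whatever kernel you substitute, the equality holds: the blur the viewer perceives is set by the composite image, not by which instant each sub-frame went through the lens.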

--Darin
Attached: e3s3.jpg

Last edited by darinp; 06-15-2017 at 04:50 PM.
darinp is offline  
post #44 of 1307 Old 06-15-2017, 08:31 PM
 
Dave Harper's Avatar
 
Join Date: Feb 2000
Location: Paradise on Earth
Posts: 6,554
Mentioned: 62 Post(s)
Tagged: 1 Thread(s)
Quoted: 3159 Post(s)
Liked: 1723
One caveat here before I reply. This is my only post here until I am given or have found some real science regarding this, not the junk science, or lack thereof, we have been given so far..............

Quote:
Originally Posted by darinp View Post
Some may remember that a while ago on this forum there were a bunch of people who thought that since eShift images only go through the lens as native at any one point in time then the lens requirements are based on the native images and not the composite eShift images. For a while it was pretty much me against a large group of people, but with time many of them realized that they had been wrong...
I suspect they just didn't want to argue it anymore, or didn't know enough to make a truly informed, scientific fact based reply.


Quote:
Originally Posted by darinp View Post
...It is not magic at all. It is just like the eyeglasses. If they obscure the detail in the final composite image too much your brain will not be able to perceive the composite image properly. It makes no difference whether the lens or eyeglasses has low enough obscuring for the instances of time....
The "final composite image" doesn't happen until after it hits your eyes and then, picoseconds later, your brain (which you yourself have already said and agreed is the case), which is long after that one 2.7K XPR image has left the lens, relatively speaking. You're making it seem like there's magic in the lens, though, when there's nothing magic about it. A lens just passes what's shown through it at any given moment in time, based on its manufactured capability to resolve what's behind it, which in this case is a 1920x1080 or 2.7K imaging chip flashing two sub-frames at separate moments in time, shifted up and to the right slightly.

So by your logic here, I would need a better lens to show 1920x1080 at 60fps than at 30fps, since 60p (60 frames per second) has twice as many frames being flashed through the lens in the same amount of time (1 second) as 30p (30 frames per second)? This is the exact same thing that Ruined and I are contending, because that is all eShift is doing: flashing a 1920x1080p image twice through the same lens sequentially (one after the other, not at the same time). It is the 1920x1080 resolution part that the lens cares about, needs to resolve, and is designed around, NOT the 60p, 30p, 120p, eShift, whatever part. Remember, this all takes into account that we have already established we are using the best lens we can for the 1920x1080 part, WITHOUT these so-called obscurities you keep bringing up as if they matter, when they were already rendered moot once we designed our projector with the best lens to resolve the native resolution of 1920x1080.

So a 30p eShift image is sending 60 separate flashes per second: the odd 1920x1080 pixels first, then the even 1920x1080 pixels shifted up and to the right (simplified example), creating one whole "pseudo 4K" frame. This is essentially the same as 1920x1080 60p, so why would I need a better lens for the same resolution at 60p than at 30p again?


Quote:
Originally Posted by darinp View Post
...I doubt there is any of that on something basic, and unique. I am an engineer who works on medical products where imaging is very important, but that isn't why I understand something like this. Unlike a lot of people who are considered experts in this industry I haven't forgotten all the physics I learned in school...
OK, I see. So they use eShift techniques and not native resolutions in something as important as medical equipment for things like seeing tumors, cancer cells, brain scans, MRIs, PET scans, SPECT scans and the like, then?


Quote:
Originally Posted by darinp View Post
...Do you believe that on/off CR matters? If I hadn't done a lot of work here and behind the scenes to try to get people to understand that while ISF and others were telling everybody that on/off CR matters you might have been fooled by those people too.

Not trying to hurt your feelings, but I don't need a white paper saying you are wrong to know that you are wrong, anymore than I would have needed a white paper from somebody else to explain many things with contrast ratio, screens, and projectors over that last 15 years.
I am not getting into CR, because that is a separate subject. Let's answer this question first before moving on to other things, please.

You're not hurting my feelings in the least, but actually, yes, you do need white papers and scientific test data, engineering info and links; otherwise it's just your word against ours, isn't it? If not, then you are assuming that you are the only one here who has this type of experience.


Quote:
Originally Posted by darinp View Post
...I never said that at all. You conveniently choose letters that are spaced so that the finest detail is in the letters and not the gap between the characters. Spreading those out changes what is needed in the glasses...
If I choose a lens that totally and completely resolves the native resolution, and hence totally and completely resolves the letters and the spacing between said letters, then it doesn't matter one iota whether they then get spaced closer together or further apart, get flashed separately or together, or any other trick you or anyone else decides to come up with.

The absolute best lens in the world aims to resolve down to the tiniest of particles (pixels) clearly and visibly, without artifacts. Once that lens is used, it will resolve all like particles within its lens surface clearly and visibly without artifact, as well as any other image that is less detailed and larger, at the same focus distance. (Taking into account Ruined's real-life example that a lens's resolving power decreases as you get further from the center, of course.)

Quote:
Originally Posted by darinp View Post
...I also find it strange that you think I am the one that thinks it matters how the composite images are made. You are the one who's claim mean that if those letters you showed on the eye chart were created by a display that only showed one pixel in an instant of time then the glasses would "magically" only need to resolve those pixels instead of having to resolve the whole letters.
I am not claiming that. I said in essence, "one 1920x1080 pixel or one 2.7K XPR pixel frame" in an instant of time. So did Ruined.

Once you can resolve down to a single pixel with the lens, then yes, it will resolve all 1920x1080 of the single pixels in that frame, be they letters or spaces or what have you.

So are you saying that if I had a 1x1 pixel array, where that one pixel was a .0001" square, and I paired it with a lens that could fully resolve that pixel without artifacts and with complete detail and sharpness, then if I added more .0001" pixels and made a 1920x1080 array of those same .0001" pixels, my same lens from the 1x1 array suddenly wouldn't resolve all of them (as long as the lens was wide enough to fit the array within its operating range)?

Quote:
Originally Posted by darinp View Post
...To by clear if you went to the optometrist and the following picture was the only thing shown for an eye test, you would need to ask whether the display was showing both characters at the same time or at different times before you could figure out how good the glasses would need to be?

...
No, it would depend on how many and how small the pixels were that made up those letters. Then once the proper lens was found to resolve those pixels and their size, it wouldn't matter one hoot when or how those letters were shown, either one at a time flashed fast sequentially, near or far from each other, or both together at the same time.

Quote:
Originally Posted by darinp View Post
...Is that really still your position? You think the glasses required would differ between a display that showed both the left and right halves of the image every even millisecond than one that showed the left half every even millisecond and the right half every odd millisecond?...
I'm not sure I am following what you're asking here, but I think I just answered that question just above your quote here, and I am sure elsewhere too. You are acting as if those Es and 3s are a pixel or something, or are they a group of pixels making up the "E" and the "3"? You can't answer your question without knowing that. Does your "E" represent one half frame of a 1920x1080 eShift frame, and the "3" represents the other half that is shifted up and to the right slightly? Is it a group of pixels making up the E and 3 somewhere on that 1920x1080 frame that we need to base our lens requirements on?

I think your grey box analogy image works better for this scenario, IMO.

Quote:
Originally Posted by darinp View Post
...You wouldn't even be able to tell which the display was doing, yet you seem to think the glasses required to see the E3 would magically change due to something that you couldn't even perceive.

--Darin
That statement there says you don't understand the stance that I have with this. That actually seems like what you have been saying, at least to me anyway.


This quote from the other thread sums it up perfectly I think.............

Quote:
Originally Posted by Dave in Green View Post
I disagree. This discussion is an example of what makes AVS Forum so great. Each side seems to be frustrated that the other side can't see what to them seems obvious, which is understandable. But the discussion is staying focused on a technical issue and is not devolving into a name-calling contest. I think in some ways both sides are in agreement on many points, so perhaps it would be productive to try to narrow it down a little and keep the focus on the primary area of disagreement.
Thanks @Dave in Green , I will myself try to indeed do that, for everyone's sake.

See you after we have some real science! TTFN
coug7669 likes this.

Last edited by Dave Harper; 06-15-2017 at 08:36 PM.
Dave Harper is offline  
post #45 of 1307 Old 06-15-2017, 11:13 PM
AVS Forum Special Member
 
Javs's Avatar
 
Join Date: Dec 2012
Location: Sydney
Posts: 7,849
Mentioned: 475 Post(s)
Tagged: 0 Thread(s)
Quoted: 6791 Post(s)
Liked: 6418
Time is not a factor in this; a complete frame is the only factor. E-shift displays two frames sequentially, and whether the end result is 4K or 8K or 10K after our brain comps the images together is actually irrelevant.

What is important is that each frame that flashes at any one time in the E-shift process is only a 1080p frame according to the lens glass element itself and this is a physical real world limitation based on panel size.

Having said that MTF is EVERYTHING here.

You can have lenses which resolve 1080p pixels to many different levels of MTF; just seeing the pixel does not mean it's fully resolved. The picture on page one showing the word YOU is the best example of this: all are legible, yet all have drastically different levels of MTF, if we pretend for a moment that those are purely examples of a lens's resolving quality.

It all comes down to a lens's ability to resolve line pairs per millimetre (the spatial frequencies at which MTF is specified) in relation to the input resolution at the inner lens elements.

1080p as a resolution has 540 possible line pairs vertically, if the frame were made of alternating horizontal black and white lines, which we can use to discern detail, contrast, or what not.

Now for this example, let's keep it simple and pick a panel whose size in millimetres is the native resolution divided by an even factor of 100: say we have a 1080p panel which measures 19.2mm across by 10.8mm high. This would be incredibly small in the real world, but I'm keeping the math simple here.

So with 540 possible line pairs, we need a lens which can resolve 50 line pairs per millimetre at the glass element: for every 1mm of the chip there are 100 pixels, which means the lens MUST, for every 1mm of glass on the INNER element, be able to clearly define and resolve 50 line pairs by the time the image is enlarged and hits the screen.
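That arithmetic can be jotted down as a quick sketch (using the hypothetical 19.2mm x 10.8mm panel from above; the function name is mine):

```python
def required_lp_per_mm(vertical_pixels: int, panel_height_mm: float) -> float:
    """Finest line-pair density a panel can present to the lens."""
    line_pairs = vertical_pixels / 2      # one black + one white row per pair
    return line_pairs / panel_height_mm

# Simplified 1080p panel: 540 line pairs over 10.8 mm -> 50 lp/mm.
print(required_lp_per_mm(1080, 10.8))
# A native 4K panel of the same physical height doubles that to 100 lp/mm.
print(required_lp_per_mm(2160, 10.8))
```

Note the requirement scales with pixel pitch at the panel, not with how many frames per second pass through the lens.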

Now, this is actually the limit of 1080p; it's not possible to render more lines than this, because the panel is digital and comprised of pixels, not analogue grain. Given that e-shift is technically just flashing a number of frames at us very quickly in sequence, like I said, TIME is not a factor here. As long as each 1080p frame is clear and the line-pairs-per-millimetre REQUIREMENT on the INPUT side of the lens element is unchanged (i.e. the panel size itself is not changing), it is irrelevant what end-result resolution our brain pieces together; it could be 8K for all our brain cares. The fact is, the lens will be flashing clear 1080p frames multiple times to get there, and never once sees the pixel density, in terms of line pairs per millimetre, change at the inner lens element.

Regarding 1080p and MTF, though, there are completely different factors at play above all of this, which is also why the images of the word YOU on page one could potentially look so different with different quality lenses.

Lenses are rated in MTF. To have 100% MTF resolution at 1080p, due to Nyquist, and not have the very high frequency information in the frame fall apart and begin to heavily alias, we need a lens which can resolve down to 100 line pairs per millimetre, effectively double the 1080p requirement. This gives high-frequency information at the limit a nice gentle roll-off to zero (read this as blur to infinity); as such, this is a completely different conversation.

I suggest you guys have a read of the ARRI Whitepaper on Resolution and MTF, its very enlightening.

https://hopa.memberclicks.net/assets...gyBrochure.pdf

In it they use the example of extracting the full resolution limit from a frame of Super 35mm film. In the test, the smallest resolvable detail is 0.006 mm on the film; thus, across the full film width, there are 24.576 mm / 0.006 mm = 4096 details, or points, for 35mm film. Which, you guessed it, is where we got 4096 x 2160 DCI resolution from.

However, interestingly, in order to scan this into digital, or equally to project it, you need to scan the image at twice that resolution in order to prevent aliasing.



As you can see here, even though there is technically only 4K worth of information on the negative when viewed under a microscope, they had to scan it at upwards of 10K in order to cleanly replicate that information digitally.
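The underlying Nyquist point is easy to reproduce with idealized point sampling (a toy sine pattern, not the ARRI scan itself): sampling a pattern at its own frequency destroys the modulation entirely, while sampling well above twice that frequency preserves it.

```python
import numpy as np

# Toy demonstration of the Nyquist criterion with idealized point sampling.
cycles = 4                                   # fine detail: 4 cycles across the frame

def sample(n_samples):
    """Point-sample the sine pattern with n_samples across the frame."""
    x = np.arange(n_samples) / n_samples
    return np.sin(2 * np.pi * cycles * x)

at_rate = sample(4)        # one sample per cycle: at the pattern frequency
oversampled = sample(32)   # 8 samples per cycle: well above Nyquist

print(np.ptp(at_rate))     # ~0: the detail aliases away completely
print(np.ptp(oversampled)) # ~2: full peak-to-peak modulation survives
```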



In my view, a lens which can properly resolve 1080p actually needs to be able to resolve double that, in order to keep the high-frequency spatial details present and delineated.

JVC X9500 (RS620) | 120" 16:9 | Marantz AV7702 MkII | Emotiva XPA-7 | DIY Modular Towers | DIY TPL-150 Surrounds | DIY Atmos | DIY 18" Subs
-
MadVR Settings | UHD Waveform Analysis | Arve Tool Instructions + V3 Javs Curves
Javs is online now  
post #46 of 1307 Old 06-15-2017, 11:19 PM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by Dave Harper View Post
One caveat here before I reply. This is my only post here until I am given or have found some real science regarding this, not the junk science, or lack thereof, we have been given so far..............
You can ask for links a thousand times, but I will not be providing them. If you need links, you can find them. I have enough knowledge of physics from high school and college courses, plus other things, to figure some things out on my own. One nice thing about college was that I didn't have to argue with people like Ruined: they could claim whatever qualifications they wanted, but when it came to the tests, and I did well and they didn't, there really wasn't much need for an argument.

I'm not going to teach you everything about physics, but I will point out when you make mistakes, like thinking light through a lens is like fluid through a spigot. Do you need a scientific link to prove that they are not the same thing, or do you know enough about light waves to know they aren't?
Quote:
Originally Posted by Dave Harper View Post
I suspect they just didn't want to argue it anymore, or didn't know enough to make a truly informed, scientific fact based reply.
I think Seegs and Mike of AVScience understood it at least.
Quote:
Originally Posted by Dave Harper View Post
So by your logic here, I would need a better lens to show 1920x1080 60Fps than I would 1920x1080 30Fps, since 60p (60 frames per second) has twice as many frames being flashed in the same amount of time (1 second) through the lens, as 30p (30 frames per second)?
You clearly don't understand what I said. I thought I was clear that it is the spatial offset (1/2 a pixel diagonally) that matters, not the temporal offset. You and Ruined are the ones who have said it is the timing of when those frames go through the lens that matters. I am the one saying the timing doesn't matter, as long as your eyes don't detect the difference in timing.

Is it not your position that if both 2.7k frames went through the lens at the same time (this could be done with 2 chips offset by half a pixel) then you would need a better lens than you need when the 2.7k frames go through at different times? If that isn't your position then I don't believe you actually agree with Ruined and we've wasted a lot of time with me thinking you agreed with him.
Quote:
Originally Posted by Dave Harper View Post
Remember, this is all taking into account that we have already established that we are using the best lens we can for the 1920x1080 part, WITHOUT these so called obscurities you keep bringing up as if it matters when they were already rendered moot when we designed our projector to have the best lens to resolve the native resolution of 1920x1080.
I believe I clearly said that I was never disagreeing that a lens that is a 10 out of 10 for sharpness on 2.7k images would be good enough for 2.7k+eShift. 8/10 might also be good enough. This disagreement has always been about whether it matters when the 2.7k images go through the lens, and you and Ruined have seemed clear that you think it does, even though the human vision system has no clue when the light went through the lens of the projector, the glasses a person is wearing, or the lenses of the person's eyeballs.
Quote:
Originally Posted by Dave Harper View Post
OK, I see. So they use eShift techniques and not native resolutions in something as important as medical equipment for things like seeing tumors, cancer cells, brain scans, MRIs, PET, scans, SPECT scans and the like, then?
Stupid question. I think I was clear that it isn't from working in medical imaging that I understand things like the properties of light and whether it matters which microsecond the photons hit the back of your eyes. It is because I have paid attention to physics for a long time and can figure some things out on my own, even seeing the flaws in some white papers that others have used as the basis for claims that aren't actually true (like INFOCOMM), thinking they were being scientific when they were being ignorant.
Quote:
Originally Posted by Dave Harper View Post
If not, then you are assuming that you are the only one who has this type of experience here.
Not at all. It is clear that some people don't get it though. Like Ruined. Again, I'm not trying to hurt your feelings, but your example of the spigot showed that you don't really have the background for this, so I can understand why you want some scientific papers. If there are some then great. If there aren't, then there are some people with scientific backgrounds who are able to figure out certain things based on the knowledge they have gained.
Quote:
Originally Posted by Dave Harper View Post
No, it would depend on how many and how small the pixels were that made up those letters.
If you ever talk to an ophthalmologist, you can ask them whether the glasses required to see the E by itself, and to see the E3, depend on what display was used to make those images. The displays don't even have to have a single pixel: the E could be made with one light box containing a light that flashes at a certain rate, and the 3 with another light box flashing at its own rate (maybe displaying at the same time as the other light box, maybe at different times). Honestly, the ophthalmologist wouldn't care about any of that for the question of how good the glasses need to be for the E, for the 3, and for the E3.
Quote:
Originally Posted by Dave Harper View Post
Then once the proper lens was found to resolve those pixels and their size, it wouldn't matter one hoot when or how those letters were shown, either one at a time flashed fast sequentially, near or far from each other, or both together at the same time.
"Proper" in this case seems to be code for better than necessary to actually make out the object. That doesn't address the question of whether eShift projectors can have worse lenses if the images go through the lens at different times. An eShift projector could be made where all the light went through at the same time; that wouldn't change the resolution required, and your example doesn't address that case, which is what is in dispute in my view. If you define "resolve" as perfectly resolving, then of course the lens can resolve smaller detail. If you define "resolve" as just good enough to be able to see the E (which could be one big pixel), then that does not mean it can resolve the space between the E and the 3 properly.

Is it your position that if the E was one big E-shaped pixel and the 3 was one big 3-shaped pixel, and the glasses were good enough to see the E by itself, they would be good enough to see the E3? To be clear, I didn't say glasses way better than are actually needed to see the E, which is the argument you seem to have moved to.
Quote:
Originally Posted by Dave Harper View Post
I'm not sure I am following what you're asking here, but I think I just answered that question just above your quote here, and I am sure elsewhere too. You are acting as if those Es and 3s are a pixel or something, or are they a group of pixels making up the "E" and the "3"? You can't answer your question without knowing that.
The ophthalmologist wouldn't need that information to figure out which glasses were just good enough for each; that information is irrelevant. That is where it seems you have gone off the rails: you think that information matters, and it doesn't, for which glasses are necessary for each. As I think I made clear, if you get glasses that are way better than needed for the E, then you might be able to make out the E3, but that was never the disagreement. The disagreement was whether it matters when the light passes through the glasses.

If you find a scientist, you can ask them the same thing: do they need any of the information you claimed is necessary to figure out how good the glasses need to be to see the E, and how good they need to be to see the E3?

I know this is getting long, so I am going to close it by making sure I understand one of the things you have been arguing and that I have been disagreeing with.

Is this one of your positions:

"If the 2 images for an eShift projector went through the lens at the same time then the lens would need to be higher quality than if the 2 images go through the lens at different times."
Quote:
Originally Posted by Dave Harper View Post
See you after we have some real science! TTFN
You are welcome to find any science you want. Nay or Yea. I'm not stopping you.

Thanks,
Darin

Last edited by darinp; 06-15-2017 at 11:26 PM.
darinp is offline  
post #47 of 1307 Old 06-15-2017, 11:31 PM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Javs,

I want to make sure I understand your position. Do you agree with the following statements:

"If the 2 images for an eShift projector went through the lens at the same time then the lens would need to be higher quality than if the 2 images go through the lens at different times."

"If the 2 images for an eShift projector went through a person's glasses at the same time then the glasses would need to be higher quality than if the 2 images go through the glasses at different times."

These seem to be what Ruined and Dave are claiming, but I want to make sure I have that right for Dave.

Since you say time doesn't matter and then seem to say that since the images are only 1080p at a time, is it your position that even if the two 1080p images went through the lens at the exact same time with 1/2 pixel offset the lens would only need to be good enough for 1080p?

Thanks,
Darin

Last edited by darinp; 06-15-2017 at 11:49 PM.
darinp is offline  
post #48 of 1307 Old 06-15-2017, 11:34 PM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
I should add one thing. Since eShift projectors have some extra spatial abilities I think there are probably some things they could do to try to counter a low MTF lens for certain images. However, that is separate from whether it matters whether the 2 eShift images pass through the lens at the same time or at different times.

--Darin
darinp is offline  
post #49 of 1307 Old 06-16-2017, 12:53 AM
AVS Forum Special Member
 
Javs's Avatar
 
Join Date: Dec 2012
Location: Sydney
Posts: 7,849
Mentioned: 475 Post(s)
Tagged: 0 Thread(s)
Quoted: 6791 Post(s)
Liked: 6418
Quote:
Originally Posted by darinp View Post
Javs,

I want to make sure I understand your position. Do you agree with the following statements:

"If the 2 images for an eShift projector went through the lens at the same time then the lens would need to be higher quality than if the 2 images go through the lens at different times."

"If the 2 images for an eShift projector went through a person's glasses at the same time then the glasses would need to be higher quality than if the 2 images go through the glasses at different times."

These seem to be what Ruined and Dave are claiming, but I want to make sure I have that right for Dave.

Since you say time doesn't matter and then seem to say that since the images are only 1080p at a time, is it your position that even if the two 1080p images went through the lens at the exact same time with 1/2 pixel offset the lens would only need to be good enough for 1080p?

Thanks,
Darin
I feel the second statement is not relevant to a lens or projector, and it's too similar to the first statement.

I don't really agree with the first statement, because if you were to project both e-shift images at precisely the same time, they would just be a blur of two 1080p images: still essentially 1080p, but larger by 1/2 a pixel diagonally, since all the detail within the 4-pixel grid created by the e-shift process would be lost and would instead be a large block of smeared information.

If you are trying to set up a scenario where you could actually see ALL the detail, if it were somehow possible for both e-shifted images to be projected at precisely the same time, then we are indeed expanding the MTF requirements, since in the space of what used to be one pixel we now have more simultaneous detail to resolve. But that is not what happens in real life, and my points are not based on this.

The only thing that matters IMO, and feel free to disagree here, is the pixel pitch/size hitting the rear element of the lens before it is enlarged through the elements. A pixel is a certain size on the 1080p panels in the JVCs, or in any projector with e-Shift, and it will always be that size no matter how much pixel shifting is happening, which is why I say time is not relevant. The MTF level needed to resolve that particular pixel size is fixed by the panel being 1080p; we are never going to increase the MTF requirements for fully resolving a 1080p panel by using e-shift or any other process, since they are dictated by panel size and native resolution.

Now, if you pixel shift and display 2, or even 4, of those images to create whatever end-result resolution you want, then because they MUST be shown alternated back and forth, in reality you are still dealing with the very same 1080p MTF requirements for each individual pixel on the panel itself. The lens does not now see smaller pixels; your brain takes over and smears them together because it happens so fast. Yes, you move the pixels physically by 1/2 a pixel diagonally, but a black pixel on Shift A (part of a black line in the image, for example) will still be the same size black pixel no matter what, and if the same pixel were to appear on Shift B, the lens only has to be of a certain MTF performance level to resolve that black pixel fully. Furthermore, the lens itself does not discriminate about where on the glass plane that pixel resides.

Now, if you were to look at a 4K panel with the same physical overall size as the 1080p panel, you have 4 individually addressable pixels in the space of each 1080p pixel. That single black pixel is now a quarter of the area (half the width), and it hits the rear element at that smaller size in both Shift A and Shift B. Here the lens MUST resolve twice the spatial frequency at the inner lens element compared with the e-shift device, so the MTF requirements really do shoot through the roof and we need far better lenses.
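To put rough numbers on the panel-geometry argument, here is a toy calculation (my own sketch, assuming a hypothetical 0.7-inch 16:9 panel; the actual JVC panel dimensions are not stated in this thread) comparing the pixel pitch and the corresponding Nyquist frequency a lens must pass for a 1080p panel versus a 4K panel of the same physical size.

```python
import math

# Hypothetical 0.7-inch 16:9 panel (illustrative only, not a quoted spec)
DIAG_IN = 0.7
ASPECT_W, ASPECT_H = 16, 9

def pixel_pitch_um(h_pixels):
    """Pixel pitch in microns for a panel of given horizontal resolution."""
    width_in = DIAG_IN * ASPECT_W / math.hypot(ASPECT_W, ASPECT_H)
    return width_in * 25.4e3 / h_pixels  # inches -> microns

def nyquist_lp_per_mm(pitch_um):
    """Highest spatial frequency the panel presents: one line pair = 2 pixels."""
    return 1.0 / (2 * pitch_um * 1e-3)

p1080 = pixel_pitch_um(1920)
p4k = pixel_pitch_um(3840)
print(f"1080p pitch: {p1080:.1f} um -> {nyquist_lp_per_mm(p1080):.0f} lp/mm")
print(f"4K pitch:    {p4k:.1f} um -> {nyquist_lp_per_mm(p4k):.0f} lp/mm")
```

On these assumptions the 4K pitch is exactly half the 1080p pitch, so the lens must pass double the spatial frequency, which matches the "far better lens for native 4K" conclusion above.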

I didn't actually think this through until I first read this thread today. My knee-jerk assumption would have been that e-Shift requires a better lens, but after thinking in detail about what is actually happening during the process, I am not of the opinion that a sharper lens is needed.

Both shifted pixels are always going to be 1080p-sized to the lens; as such, there is only so much sharpness the edge of those pixels can possibly contain, and that's 100% MTF. This is as relevant to a pure 1080p projector as it is to any associated display technology.



What happens, though, is we begin to get into discussions of MTF in relation to the lenses. If we have a lens that can resolve 100% MTF at 1080p, because, let's say, it's a lens so good it would actually be great for 4K anyway, then it doesn't matter in this discussion. If the lens only manages 60% MTF at 1080p, then I say it's a poor lens, full stop, and there was room for improvement at 1080p, let alone with e-shift.

What they need to do is work on making e-Shift itself incredibly sharp and stable, that is, with NO hint of blur in the process. The e-Shift process itself, even without the projector lens involved, has its own MTF losses due to the vibration.

JVC X9500 (RS620) | 120" 16:9 | Marantz AV7702 MkII | Emotiva XPA-7 | DIY Modular Towers | DIY TPL-150 Surrounds | DIY Atmos | DIY 18" Subs
-
MadVR Settings | UHD Waveform Analysis | Arve Tool Instructions + V3 Javs Curves
Javs is online now  
post #50 of 1307 Old 06-16-2017, 02:26 AM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by Javs View Post
I don't really agree with the first statement, because if you were to project both e-shift images at precisely the same time they would just be a blur of two 1080p images: still essentially 1080p, but larger by 1/2 a pixel diagonally. All the detail within the 4-pixel grid that the e-shift process creates would be lost, and instead you would get a large pixel block of smeared information.
I'm not sure if the eShift unit really causes much blurring.

If you had a 2-chip 1080p DLP you could send both images through the lens at the same time, and with a close to perfect lens you could get something like the following, where the first 2 images are single pixels in 1080p space and the third image is what passes through the lens at a moment in time. In this case the lens would never see just a 1080p image. It would see the overlap, where the "pixels" the lens sees are 1/4th the size of the pixels in 1080p space (using an idealized 100% fill ratio).
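The overlap geometry can be sketched numerically. In this toy model (my own illustration, not from the post), each native pixel is drawn as a 2x2 block of half-pixel cells, and the second image is offset by one cell, i.e. half a native pixel diagonally; the region where the two images overlap works out to 1/4 of a native pixel's area.

```python
import numpy as np

# Grid of half-pixel cells: each native 1080p pixel covers a 2x2 block of cells
grid = np.zeros((6, 6))

def draw_native_pixel(img, top, left):
    """Light one native-sized pixel (a 2x2 block of half-pixel cells)."""
    img[top:top + 2, left:left + 2] += 1

a = grid.copy()
draw_native_pixel(a, 2, 2)   # sub-frame A pixel
b = grid.copy()
draw_native_pixel(b, 3, 3)   # sub-frame B pixel, shifted half a pixel diagonally

both = a + b                 # what the lens sees if both go through at once
overlap_cells = np.sum(both == 2)
print(overlap_cells)         # 1 cell, i.e. 1/4 of a native pixel's 4-cell area
```

So the smallest feature presented to the lens in the simultaneous case is a quarter-area sub-pixel, not a native 1080p pixel.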






--Darin
darinp is offline  
post #51 of 1307 Old 06-16-2017, 03:22 AM
AVS Forum Special Member
 
Javs's Avatar
 
Join Date: Dec 2012
Location: Sydney
Posts: 7,849
Mentioned: 475 Post(s)
Tagged: 0 Thread(s)
Quoted: 6791 Post(s)
Liked: 6418
Quote:
Originally Posted by darinp View Post
I'm not sure if the eShift unit really causes much blurring.

If you had a 2-chip 1080p DLP you could send both images through the lens at the same time, and with a close to perfect lens you could get something like the following, where the first 2 images are single pixels in 1080p space and the third image is what passes through the lens at a moment in time. In this case the lens would never see just a 1080p image. It would see the overlap, where the "pixels" the lens sees are 1/4th the size of the pixels in 1080p space (using an idealized 100% fill ratio).






--Darin
How could you do that, though, and not get 'junk crosstalk pixels' in a sense? How can two 1080p frames create new content at the very same time? That's like expecting Active 3D to work using the same system we use now, but projecting both eyes at the very same time instead of alternating. You couldn't do that without polarisation, could you?

If Shift A is supposed to be black, and Shift B is another colour, and the tiny new pixel we get from the two overlapping is supposed to contain a 3rd colour, it's just not going to work, no?

JVC X9500 (RS620) | 120" 16:9 | Marantz AV7702 MkII | Emotiva XPA-7 | DIY Modular Towers | DIY TPL-150 Surrounds | DIY Atmos | DIY 18" Subs
-
MadVR Settings | UHD Waveform Analysis | Arve Tool Instructions + V3 Javs Curves
Javs is online now  
post #52 of 1307 Old 06-16-2017, 03:56 AM
Advanced Member
 
Join Date: Jun 2014
Posts: 946
Mentioned: 2 Post(s)
Tagged: 0 Thread(s)
Quoted: 485 Post(s)
Liked: 125
How fast and how far can they actually make the shifting at this time? What is the theoretical limit?

IF we prove definitively that we don't need a better lens, we should be able to create cheap 8K projectors with this chip size and tech once upsampling/interpolation-capable chips are reasonably priced.
AJSJones likes this.
TheronB is offline  
post #53 of 1307 Old 06-16-2017, 03:56 AM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Javs,

This is mostly a theoretical discussion since I wouldn't expect anyone to actually make one.

As far as how you do it, much like they do with 3-chip DLPs, where all 3 images are sent through the lens together (mostly), except that you don't have to split by color. The 2 DLP chips could send the same color, whatever color the wheel is currently on. I don't see any reason a 2-chip DLP like this would require polarization.

As far as not getting every option for the sub-pixels, that is just like eShift now. The smallest element the lens is supposed to display without obscuring it too much is 1/4th the size of a native pixel, but there is correlation between the sub-pixels. For instance, eShift projectors can't make one of the sub-pixels white without making at least some of the sub-pixels around it gray.
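That correlation between sub-pixels can be shown with a trivial model (my own sketch, treating brightness as additive and ignoring temporal averaging): the overlap region always carries the sum of the two native pixel values that cover it, so it can never be darker than either of them.

```python
def regions(a, b):
    """Brightness of the A-only, overlap, and B-only areas for one pair of
    half-pixel-shifted native pixels (additive toy model, not a real spec)."""
    return {"A_only": a, "overlap": a + b, "B_only": b}

# Making the Shift A pixel white forces the overlap sub-pixel at least as
# bright; the sub-pixels are not independently addressable.
print(regions(1.0, 0.0))
```

This is exactly the constraint described above: a white sub-pixel drags its neighbouring sub-pixels up with it.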

--Darin
darinp is offline  
post #54 of 1307 Old 06-16-2017, 04:07 AM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by TheronB View Post
How fast and how far can they actually make the shifting at this time?
I don't know, but if a company were willing to pay enough money I'm sure they could get JVC to make a 4K+eShift projector, if they don't already sell those to the simulation market.

Wouldn't surprise me if JVC released a high end consumer model that does that within 2 to 4 years.

--Darin
darinp is offline  
post #55 of 1307 Old 06-17-2017, 02:51 AM
 
Dave Harper's Avatar
 
Join Date: Feb 2000
Location: Paradise on Earth
Posts: 6,554
Mentioned: 62 Post(s)
Tagged: 1 Thread(s)
Quoted: 3159 Post(s)
Liked: 1723
Resolution requirements for lenses

Wow, I can't even believe how many ways and how wrong you are on the position that I and Ruined were arguing. You actually agreed with me on more than one occasion in all your "in the weeds" ramblings above.

Thank you Javs for clearly stating what I have been trying to convey all this time.

Last edited by Dave Harper; 06-17-2017 at 04:15 AM.
Dave Harper is offline  
post #56 of 1307 Old 06-17-2017, 08:12 AM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by Dave Harper View Post
Wow, I can't even believe how many ways and how wrong you are on the position that I and Ruined were arguing. You actually agreed with me on more than one occasion in all your "in the weeds" ramblings above.
Your original position that started this whole discussion was that the important factor was that Sony projectors send all 8M pixels through the lens at the same time and XPR projectors only send 4M pixels through the lens at the same time.

Now it seems that you have an expert who you say confirms you were right, but it wouldn't surprise me if they were actually confirming something we agree about.

I just noticed that you said above that it wouldn't matter when the E and the 3 went through the glasses. I missed that. I thought your position the whole time was that the important thing was when the photons went through the lens. It seems to me that you changed your view from your original speculation that it was the time factor that mattered, since now you seem to be saying you agree with Javs, who said time is not a factor in this.

Here is your original speculation that started this:
Quote:
Originally Posted by Dave Harper View Post
If this is an eShift projector, which it is, then at any given point in time there's only half the image shining through the lens, unlike true native 4K like the Sonys, which shoot all 8.3 million out of it at the same time.
Then Ruined said that the time separation between the 2 images mattered, and you said you believed his argument.

--Darin

Last edited by darinp; 06-17-2017 at 08:48 AM.
darinp is offline  
post #57 of 1307 Old 06-17-2017, 12:33 PM
 
Dave Harper's Avatar
 
Join Date: Feb 2000
Location: Paradise on Earth
Posts: 6,554
Mentioned: 62 Post(s)
Tagged: 1 Thread(s)
Quoted: 3159 Post(s)
Liked: 1723
Quote:
Originally Posted by darinp View Post
Your original position that started this whole discussion was that the important factor was that Sony projectors send all 8M pixels through the lens at the same time and XPR projectors only send 4M pixels through the lens at the same time....
Yep.

Quote:
Originally Posted by darinp View Post
...Now it seems that you have an expert who you say confirms you were right,....
Yep

Quote:
Originally Posted by darinp View Post
... but it wouldn't surprise me if they were actually confirming something we agree about...
Don't think so, but maybe.

Quote:
Originally Posted by darinp View Post
...I just noticed that you said above that it wouldn't matter when the E and the 3 went through the glasses. I missed that. I thought your position the whole time was that the important thing was when the photons went through the lens. It seems to me that you changed your view from your original speculation that it was the time factor that mattered, since now you seem to be saying you agree with Javs, who said time is not a factor in this....
Yes, because I clearly said that the eye chart represented the native resolution of the projector's panel, so once you got the E correct (in the first sub-frame), then if the 3 flashed on the panel (in the next sub-frame), it would be just as sharp and delineated as the E was. So after we established the lens is of sufficient quality for the native rez to show the E and the 3, it wouldn't then matter how fast they flashed from one sub-frame to the next, or how close they were to each other (in the resultant composite image that is formed in our eyes and brains, AFTER the lens!).

To be fair though, I may have gotten so confused with all that E and 3 garbage and those useless graphics that I guess made sense to you, that I probably replied to something I "thought" you were saying, and you probably did the same with me.

Quote:
Originally Posted by darinp View Post
....Here is your original speculation that started this:....
Yep


Quote:
Originally Posted by darinp View Post
...Then Ruined said that the time separation between the 2 images mattered and you said you were believing his argument.

--Darin

Yep, didn't you just write the word "time" a bunch of times in the very first quote of yours, above?

I said that because each 1920x1080 sub-frame goes through the lens at a different "time", doesn't it? Therefore the lens only needs to fully and totally, without aberration, resolve 1920x1080. If they went through at the same "time", then you'd need a better lens, just as you do with a higher rez like 4K.

Here is my FINAL post on this, which was confirmed by a more expert person on the subject than any of us....

A lens that totally resolves, without aberration, the native resolution of whatever is behind it, in this case eShift projectors with either 1920x1080 (LCD/DiLA/LCoS) or 2.7K (DLP XPR), is the proper lens to choose and use for either:

A. A non-eShift projector, where the end resultant image on screen, as processed in our eyes and brains, is the same as the native resolution of the projector (2K or 2.7K in this case).

B. The same native-rez projector, but with eShift, where the native rez is flashed twice in quick succession as two sub-frames, each with half the image (for simplicity's sake), the second sub-frame being shifted up and to the right a fraction of "time" later, to be merged into a pseudo faux-K image in the viewer's brain by persistence of vision, which happens after the lens.

As soon as you either:

A. Flash the two native sub-frames at the same exact moment in "time"....

Or

B. Use a higher resolution native imager (4K/UHD) that uses the same relatively sized surface area (.69", .74")....

Then you must move to a higher resolution lens with greater MTF, as Javs said way more eloquently than I.
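The "merged in the brain" step described in B can be modelled as simple time-averaging (a toy sketch of persistence of vision, my own illustration, not from the thread): each sub-frame is on screen for half the frame period, so the perceived frame is roughly the mean of the two.

```python
import numpy as np

# Hypothetical shift-A and shift-B sub-frames (random values stand in for
# real image data in this toy model)
rng = np.random.default_rng(0)
sub_a = rng.random((4, 4))
sub_b = rng.random((4, 4))

# The eye integrates over the frame period; each sub-frame contributes half,
# so the perceived frame is the time-weighted average of the two.
perceived = 0.5 * sub_a + 0.5 * sub_b
print(perceived.shape)
```

Note this averaging happens after the lens, which is the crux of the "time doesn't raise the lens requirement" position; the counter-position is about what spatial detail the lens must preserve in each sub-frame for that average to carry extra information.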
Dave Harper is offline  
post #58 of 1307 Old 06-17-2017, 12:50 PM
Advanced Member
 
Eternal_Sunshine's Avatar
 
Join Date: Jul 2005
Posts: 512
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 32 Post(s)
Liked: 33
Above, someone in the "lens must only resolve 1080" camp said the following: the (e-shift) picture only needs to be "shifted up and to the right slightly".

I think this is the point where the misconception occurs: precisely to do this, to even be able to shift the picture slightly, i.e. in increments smaller than the original pixel width/height, the lens needs the extra resolution.

Hope this helps.
Eternal_Sunshine is offline  
post #59 of 1307 Old 06-17-2017, 12:53 PM
 
Seegs108's Avatar
 
Join Date: Jul 2007
Location: Upstate New York
Posts: 10,827
Mentioned: 48 Post(s)
Tagged: 0 Thread(s)
Quoted: 5167 Post(s)
Liked: 2593
Quote:
Originally Posted by Eternal_Sunshine View Post
Above, someone in the "lens must only resolve 1080" camp said the following: the (e-shift) picture only needs to be "shifted up and to the right slightly".

I think this is the point where the misconception occurs: precisely to do this, to even be able to shift the picture slightly, i.e. in increments smaller than the original pixel width/height, the lens needs the extra resolution.

Hope this helps.
Remember though, at any given time, only a 1080p image is being sent through the lens and the pixels and pixel gaps remain the same size.
Dave Harper likes this.
Seegs108 is offline  
post #60 of 1307 Old 06-17-2017, 01:02 PM
Advanced Member
 
Eternal_Sunshine's Avatar
 
Join Date: Jul 2005
Posts: 512
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 32 Post(s)
Liked: 33
Quote:
Originally Posted by Seegs108 View Post
Remember though, at any given time, only a 1080p image is being sent through the lens and the pixels and pixel gaps remain the same size.
The point is that the resolution of the lens needs to be "finer" than the shifted pixels, because otherwise you couldn't shift those pixels in such small increments. The time when this happens is indeed irrelevant.
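One way to see this point (a toy numpy sketch of my own, not from the thread): a pixel shifted by half a pixel lives on a grid twice as fine as the native one, and if you force it back onto the native grid the half-pixel position is lost, smeared at 25% over four native pixels.

```python
import numpy as np

NATIVE = 4           # native pixels per side (toy example)
FINE = 2 * NATIVE    # half-pixel sampling grid

# A single lit pixel at a half-pixel position, drawn on the fine grid;
# it straddles four native pixels, covering a quarter of each.
fine = np.zeros((FINE, FINE))
fine[3:5, 3:5] = 1.0

# Downsample to the native grid: the half-pixel position cannot be
# represented, so the pixel smears across four native pixels at 25% each.
native = fine.reshape(NATIVE, 2, NATIVE, 2).mean(axis=(1, 3))
print(native)
```

In other words, an optical system (or grid) that can only resolve native-pixel features cannot cleanly carry a half-pixel-shifted image, which is the "finer than the shifted pixels" requirement stated above.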
Eternal_Sunshine is offline  