Resolution requirements for lenses - Page 29 - AVS Forum | Home Theater Discussions And Reviews
post #841 of 1307 Old 07-21-2017, 12:48 PM
AVS Forum Special Member
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Quote:
Originally Posted by Tomas2 View Post
You keep repeating this but your hypothetical is flawed. There are multiple reasons for keeping the subframes discrete. Take for example the sequence A,B,C,D,E,F,G....etc

Subframes B&C will have the same perceived interaction as A&B. So making a composite A+B frame followed by a composite C+D frame would not allow for the B+C combo that would result perceptually.
They do not want you to see B and C as the same frame. That is a side effect that showing both sub-frames at the same time would help avoid.

With 24p, if they use 96Hz then the sequence would be:

A, B, A, B, C, D, C, D, E, F, ...

With A and B representing one frame of 4K content, showing A and B at the same time for 1/24th of a second would be most true to the source.

If there was enough processing time and lumens wouldn't take a hit, then when forced to show the sub-frames at different times it would probably be best if they put a short delay between B and C, between D and E, etc.
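The 96Hz cadence above can be sketched in a few lines of Python (purely illustrative; the letters just label the sub-frames as in this post):

```python
# Hypothetical sketch of the 96 Hz sub-frame cadence for 24p content.
# Each source frame is split into two e-shift sub-frames (A and B),
# and the pair is repeated to fill four 96 Hz slots per 24p frame.

def eshift_sequence(num_frames, repeats=2):
    """Return the displayed sub-frame order for `num_frames` of 24p content."""
    order = []
    for f in range(num_frames):
        a = chr(ord('A') + 2 * f)      # first sub-frame of this source frame
        b = chr(ord('A') + 2 * f + 1)  # second (shifted) sub-frame
        order += [a, b] * repeats      # A, B, A, B at 96 Hz
    return order

print(eshift_sequence(3))
# ['A', 'B', 'A', 'B', 'C', 'D', 'C', 'D', 'E', 'F', 'E', 'F']
```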

--Darin
darinp is offline  
post #842 of 1307 Old 07-21-2017, 01:54 PM
AVS Forum Special Member
 
Quote:
Originally Posted by Mike Garrett View Post
The JVC graph in the video just answered the question. It shows 1080P, Current E-shift and theoretical E-shift limit if better glass (higher MTF) is used. Also shows 4K graph.
Yep. Rod explained much of what I have been saying for the last 18 months plus, although I would say he did a better job in some ways.

One thing intelligent people who listen to that section of the podcast will probably notice is that in the discussion about image MTFs between 1080p, 1080p+eShift, and 4K, there wasn't even a hint of anything about how the sub-frames going through the lens at different times is a factor at all in the image MTFs.

The reason is what I stated long ago: it has basically nothing to do with the MTFs of the images or lens requirements from an MTF standpoint.

It is a fact that the sub-frames go through the lens at different times. It is just basically one of those irrelevant facts. That is why Rod didn't mention it.

Some people have been holding on to that red herring for dear life.

Somewhere around the 35 or 36 minute mark Rod mentioned that with native 1080p chips the highest MTF test you could even do is 1920 lines horizontally. There is no MTF test you can run beyond that, because if you try to do 1921 lines then the chips can't do it.

It is as shown in the graph in the video, where the image MTF drops to 0 beyond 1920 lines.



As Rod mentioned, once you add eShift you can now run an MTF test for 3000 horizontal lines. That is also shown in the graph for the eShift simulation projector.

As I have been saying, and as shown in the graph, the lens with a 1080p+eShift projector is tasked with doing things that a lens on a native 1080p projector is never tasked with. That is clear on the graph by looking to the right of the 50% mark on the bottom of the graph.

As the graph also shows, a lens on a native 4K projector is tasked with retaining higher MTFs than the 1080p+eShift projector is tasked with for 4K sources.
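To make the graph's hard cutoffs concrete, here is a tiny illustrative sketch. The line counts are the figures quoted in this post; nothing else is from JVC:

```python
# A sketch of the hard ceiling being discussed: a pattern finer than the
# panel's addressable line count simply cannot be displayed, so the
# measurable image MTF drops to zero there.

PANEL_LIMITS = {
    "native 1080p": 1920,    # highest horizontal line count testable
    "1080p + eShift": 3000,  # Rod's figure for the eShift simulator
    "native 4K": 3840,
}

def displayable(lines, panel_limit):
    """True if a test pattern of `lines` lines is within the panel's limit."""
    return lines <= panel_limit

assert displayable(1920, PANEL_LIMITS["native 1080p"])
assert not displayable(1921, PANEL_LIMITS["native 1080p"])  # MTF = 0 here
assert displayable(3000, PANEL_LIMITS["1080p + eShift"])
```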

Also, I mentioned before that they could do 4K+eShift. Rod mentioned that they have been selling those to the simulation market for a while.

--Darin
Attached Thumbnails: IMG_2417.PNG (386.0 KB)

Last edited by darinp; 07-21-2017 at 02:13 PM.
darinp is offline  
post #843 of 1307 Old 07-21-2017, 02:17 PM
AVS Forum Special Member
 
Anybody looking at that graph that I just posted from Rod Sterling (chief engineer at JVC, which was the first company to bring eShift to the consumer market), still think that Dave Harper, Stereodude, Highjinx, Tomas2, and R Johnson are right that lens requirements are exactly the same between native 1080p projectors and 1080p+eShift projectors?

--Darin
darinp is offline  
post #844 of 1307 Old 07-21-2017, 02:21 PM
AVS Forum Special Member
 
Quote:
Originally Posted by Tomas2 View Post
I already provided a paper that illustrated the contribution of the lens when pixels are offset and overlayed simultaneously. Darin's hypothetical is not an optical equivalent.
It doesn't seem like you really understand the contents of that paper. It described a way to send both sub-frames through the lens at the same time, with offset. Are you claiming that those 2 sub-frames will interact and cause spatial blurring on screen that wouldn't be there if the 2 sub-frames were separated in time?
Quote:
Originally Posted by Tomas2 View Post
Maybe Dave Harper will get some feedback from his experts on this
If he does, hopefully he will find a smarter "expert" than the last guy he found. The last guy couldn't even answer a few questions from Dave without contradicting himself. Dave claims that he knew what questions to ask this "expert", yet if he really did then Dave should have pointed out the problem with all of his "expert's" claims when put together right from the beginning.

Dave could have immediately gotten the contradiction straightened out, but instead Dave spent weeks posting misinformation and even multiple times outright contradicted what this "expert" had told him.

If I had been asking the questions this all likely would have been straightened out pretty quickly. If Dave actually cares about the truth then hopefully he has figured out what questions to ask and to not just let a contradiction go as the final answer.
Quote:
Originally Posted by Tomas2 View Post
+1 an empirical fact !
Both you and Highjinx seem to be confused about what that fact means. Those lens anomalies occur whether you send one photon at a time or all photons at once. Did you think the blurring described in that paper you linked to was caused by sending too many photons at once? If you understand the point spread functions described in that paper, then you should understand that you can take the blurring from each sub-frame in that paper and add it to the blurring of the other sub-frame, even though they go through the lens at the same time. With incoherent light there is basically no difference in the spatial blurring for a frame of video between having the sub-frames go at the same time or at different times.
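For anyone who wants to check the linearity claim numerically, here is a toy sketch (the PSF and intensities are made up; the point is only that convolution is linear, so blur-then-add equals add-then-blur):

```python
# Minimal sketch of the point above: with incoherent light the lens acts
# as a linear system on intensity, so blurring each sub-frame and summing
# gives the same result as summing first and blurring once. Plain 1-D
# convolution with a toy point spread function; no real lens data assumed.

def convolve(signal, psf):
    out = [0.0] * (len(signal) + len(psf) - 1)
    for i, s in enumerate(signal):
        for j, p in enumerate(psf):
            out[i + j] += s * p
    return out

psf = [0.25, 0.5, 0.25]   # toy point spread function (sums to 1)
sub_a = [0, 1, 0, 0, 0]   # sub-frame A intensities
sub_b = [0, 0, 0, 1, 0]   # sub-frame B (shifted) intensities

together = convolve([a + b for a, b in zip(sub_a, sub_b)], psf)
separate = [x + y for x, y in zip(convolve(sub_a, psf), convolve(sub_b, psf))]
assert together == separate  # temporal order doesn't change the summed blur
```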

These photon interactions between sub-frames that you and Highjinx keep thinking will make a difference are basically fantasies.

--Darin
darinp is offline  
post #845 of 1307 Old 07-21-2017, 02:50 PM
Advanced Member
 
Join Date: Jan 2017
Location: New Orleans
Posts: 740
Quote:
Originally Posted by darinp View Post
They do not want you to see B and C as the same frame. That is a side effect that showing both sub-frames at the same time would help with.
You are underestimating the temporal (adaptation) complexity of each sub-frame. Motion is considered such that each sub-frame contains contiguous elements.

Your hypothetical A+B followed by C+D is an example of oversimplifying.

SAMSUNG QLED | ROTEL | MOREL | M&K | HAFLER | TECHNICS SP-25
Tomas2 is offline  
post #846 of 1307 Old 07-21-2017, 03:11 PM
AVS Forum Special Member
 
Quote:
Originally Posted by Tomas2 View Post
Your hypothetical A+B followed by C+D is an example of over simplifying.
Are you claiming that they couldn't build an eShift projector that does A and B sub-frames at the same time for frame 1, then C and D sub-frames at the same time for frame 2?

What do you think the sub-frame sequence is for eShift on the JVCs with 24p movies?

--Darin
darinp is offline  
post #847 of 1307 Old 07-21-2017, 03:16 PM
Advanced Member
 
Join Date: Dec 1999
Location: Sonoma County, CA, US
Posts: 886
Quote:
Originally Posted by Tomas2 View Post
You are underestimating the temporal (adaption) complexity of each subframe. Motion is considered such that each subframe contain contiguous elements.

Your hypothetical A+B followed by C+D is an example of over simplifying.
Still addressing how, and how well, eShift might work to present more information than non-shift?
How about addressing the issue of lens quality and seeing blur in different places in different subframes -> more blur...?
AJSJones is offline  
post #848 of 1307 Old 07-21-2017, 03:40 PM
 
Join Date: Jul 2007
Location: Upstate New York
Posts: 10,827
Here are some single pixel (and one "multi-pixel") test patterns taken from the UHD65 (an XPR DLP model) compared against the reference pattern:

http://screenshotcomparison.com/comparison/214550

http://screenshotcomparison.com/comparison/214551

http://screenshotcomparison.com/comparison/214558

I tried my best to match the photo to the reference pattern (R.Masciola HDR UHD Test Pattern Suite). As you can see in the first two shots, XPR does not faithfully reproduce single pixel information well at all. The checkerboard pattern with single pixel black-on-white "dots" doesn't even show up; it's just a grey patch on screen. In the first shot you can really see how XPR works. It almost looks like it takes advantage of mirror tilt to aid in the XPR process. You can make out an almost 45 degree angle on some of the pixels. In the last shot you can see it does do multi-line horizontal pixel information much better, but then again so do eShift projectors from JVC and Epson. Why are people calling this "native" 4K again? The JVC DLA-RS4500 shows these patterns perfectly:

Seegs108 is offline  
post #849 of 1307 Old 07-21-2017, 04:39 PM
AVS Forum Special Member
 
Thanks Seegs.

It would be really interesting to see XPR added to a graph like this from JVC:



What we should see if that was added is that with XPR off the graph would go to zero at the native 2.7K, just like without eShift the native 1080p goes to zero at 1920 lines. With XPR on, the graph should continue beyond 2.7K, but I'm not sure how high it would be with the lenses they actually use.

With the BenQ I saw it looked like the lens was great, but it seemed like such a waste to put a lens like that on a product with horrible on/off CR.

--Darin
darinp is offline  
post #850 of 1307 Old 07-21-2017, 05:33 PM
Advanced Member
 
Quote:
Originally Posted by Seegs108 View Post
Here are some single pixel (and one "multi-pixel") test patterns taken from the UHD65 (an XPR DLP model) compared against the reference pattern. [...]
Thanks for the reminder of the mush these systems can generate when trying to do anything other than wide horizontal or vertical lines

I took part of one of those images and lined it up with its test pattern to show how well it does that - as you noted. (I didn't even try with any of the others!)


The main reason to post that image is to show that the doubled pixel grid is still quite sharp/distinct/visible - indicating the presence of quite a good lens
RLBURNSIDE likes this.
AJSJones is offline  
post #851 of 1307 Old 07-21-2017, 05:37 PM
AVS Forum Special Member
 
Join Date: Dec 2002
Location: Australia
Posts: 3,461
Current e-shift is understood as a 2-stage event, with the 3840x2160 'Master Frame' being sampled twice, each sampling producing a sub-frame. Considering image integrity, this does not make sense: we cannot pack 3840x2160 into 1920x1080 x2 unless we downscale or discard 50% of the image data.

What if each Master Frame could be sampled 4 times, but in two stages, where the Master Frame has two positions to compensate for the e-shift element's 2-position limitation? At each Master Frame position, 2 sub-frames are created/extracted, for a total of 4 sub-frames, enabling the 1920x1080 chip to be addressed with the full 3840x2160 image data sequentially: 30 four-sub-frame cycles over a 24-frame duration.

This could work quite well in low-motion sequences, where the benefit would be more noticeable, and to a lesser degree where fast panning/motion exists.

Contrast enhancement and other relevant processing is applied to compensate for the shortfalls of the process. No image data greater than what 1920x1080 pixels can be addressed with needs to pass through the lens for any single sub-frame projection.

Of course, that is not what is happening currently: they are sampling the 3840x2160 Master Frame as is, extracting only the image data volume that can be addressed to the 1920x1080 x2 pixels, and image processing to compensate for the 50% data throwaway.
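The 50% figure is easy to sanity-check (back-of-envelope only):

```python
# Back-of-envelope check of the data budget discussed above: two 1920x1080
# sub-frames carry exactly half the samples of a 3840x2160 master frame,
# while four sub-frames would carry all of them.

master_samples = 3840 * 2160
per_subframe = 1920 * 1080

print(2 * per_subframe / master_samples)  # 0.5 -> 50% must be discarded
print(4 * per_subframe / master_samples)  # 1.0 -> four sub-frames cover it
```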

Just a thought.

Perhaps we may see the 4 sub frame cycle method in the next incarnation of e-shift!

Off to do something practical.....my yard has an Oxalis weed infestation; butane torch wand, tool of choice. Winter over here.

May the success of a Nation be judged not by its collective wealth nor by its power, but by the contentment of its people.
Hiran J Wijeyesekera - 1985.
Highjinx is offline  
post #852 of 1307 Old 07-21-2017, 05:53 PM
Advanced Member
 
Quote:
Originally Posted by Highjinx View Post
Current e-shift is understood as a 2-stage event, with the 3840x2160 'Master Frame' being sampled twice, each sampling producing a sub-frame. [...] Perhaps we may see the 4 sub-frame cycle method in the next incarnation of e-shift!
See post 847 above

Oxalis flowers are a pretty red/green - um I mean yellow - but you have my sympathy for their taking over some part of your yard
Highjinx likes this.

Last edited by AJSJones; 07-21-2017 at 05:56 PM.
AJSJones is offline  
post #853 of 1307 Old 07-21-2017, 06:39 PM
 
Quote:
Originally Posted by AJSJones View Post
[...] I took part of one of those images and lined it up with its test pattern to show how well it does that - as you noted. [...] The main reason to post that image is to show that the doubled pixel grid is still quite sharp/distinct/visible - indicating the presence of quite a good lens
The issue is that JVC and Epson 1080p projectors can do a very similar level of "resolving" with those horizontal and vertical line tests with eShift engaged, and that's with half the resolution that these XPR DMDs have. I wouldn't use the last comparison as a useful tool to measure much, as those are not single pixel width lines and that's the reason why they're easier to resolve. This is also the reason why the JVC/Epson eShift units have an easier time resolving them too. The issue is that TI and the CTA are calling these XPR DMDs "native" 4K, and these single pixel test patterns show that they clearly cannot do all the same things a true native 4K projector or display can do. The RS4500 photo I posted shows how that first pattern should look, and it does it perfectly on a 1:1 pixel mapped basis. For the XPR DMD to truly be "native" 4K it should be able to do the same thing the RS4500 can do, and it cannot.
Seegs108 is offline  
post #854 of 1307 Old 07-21-2017, 06:50 PM
Advanced Member
 
Join Date: Sep 2002
Location: San Clemente
Posts: 717
Wow, a lot of serious conversation on this thread. So I thought I'd add my two cents. Myself, I'm no engineer (understatement). I've been reviewing projectors professionally for more than a decade, but I primarily review them from a subjective standpoint. I've also been an avid photographer (I got my first film SLR just over 50 years ago, to date myself), and for my regular photography and my reviews I use a Canon 60D. I use the standard zoom, and know that Canon sells better optics, similar zoom lenses (in throw ratios) that cost more than my already pricey dSLR with lens.

OK.
First, despite spending 20+ minutes on this page, reading, scanning, and looking at images, I do not see any way that having e-shift has a significant effect on how good a lens is needed. That's not to say it doesn't call for slightly better resolving optics. But it definitely wouldn't need to be as good a lens as a true 4K would need.

1. "A pixel is a pixel." Whether e-shift is on or off, the smallest pixel any 1080p JVC pixel shifter can produce is always the same size. Firing twice in different positions does not change the amount of detail in either the first or the delayed pixel.
2. When e-shift is on, that delayed pixel carries different info (color, saturation, etc.) than the first, thanks to "fancy processing."
3. Perhaps most important: no projector I know of, pixel shifter or not, can produce a single pixel that has variation within it. Every part of that pixel will be exactly the same color, saturation, etc.
4. Because of processing, pixel shifting allows something "new." Right now we have a single pixel which is X in size. Because of the overlap (and non-overlap) between the original pixel and its delayed partner, we end up with a "double pixel" which is not square (it looks like the usual diagram for pixel shifting) and is probably close to 50% larger than a single pixel. So we have a "combined" pixel that has more than one color/saturation, but it is larger. Now we can have variation, but in a larger pixel. Still, that variation can be done in a smaller area than with two discrete pixels (non-pixel shifter).
5. Each is still a 1080-sized pixel. If a lens can perfectly resolve one 1080p pixel it should perfectly resolve two overlapping ones. But we're not talking perfect.
6. I think one could argue that if one's lens isn't high enough resolution it might blur the transitions between the overlapping pixels (let's say they are different colors or saturations, due to neighboring pixel information). But if that were the case, I would argue that the lens would also be too soft to properly do 1080p pixels to begin with. Imagine the first firing is a medium red, and the second firing of the same pixel (but shifted) is yellow. There will be three colors visible in our double pixel: one area (call it the lower left) would be red, the upper right would be yellow, and the area where they overlap would be orange. But the area that is orange will be smaller than a single pixel. So there is the argument for needing more resolution.
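Point 6's geometry is easy to put numbers on, assuming an exact half-pixel diagonal shift (an assumption; real shift optics may differ):

```python
# Toy geometry for the overlap ("orange") region described in point 6,
# assuming the second firing is offset by exactly half a pixel in both
# x and y. The overlap is then a quarter of a pixel in area - detail
# finer than any single un-shifted pixel, hence the lens argument.

PIXEL = 1.0          # side of one 1080p pixel (arbitrary units)
SHIFT = 0.5 * PIXEL  # assumed e-shift offset along each axis

overlap_side = PIXEL - SHIFT
overlap_area = overlap_side ** 2          # area of the overlap region
union_area = 2 * PIXEL**2 - overlap_area  # footprint of the "double pixel"

print(overlap_area)  # 0.25 -> the overlap is 1/4 of a pixel
print(union_area)    # 1.75 -> combined footprint in pixel areas
```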

Hmm. Am I still making any sense?

Either way, though, the JVC does not need a lens that's "4K." Those overlapping pixel segments are each smaller than a 1080p pixel, but each original pixel starts at four times the size of a true 3840x2160 4K pixel. So each of our three segments is still larger than a single 3840x2160 pixel.

Now with the new DLP UHD projectors and their native 2716x1528 chips, applying the same logic, one could argue that they need to be much closer to needing "4K" optics.

Generally, most projectors, unfortunately could use better glass, regardless. Less pincushioning, less barrel distortion, less blooming, and more even uniformity, and it sure would be nice if such lenses also rolled off brightness around 5% in the corners instead of the usual 10 - 15% or more.

One definitely can see the difference between the old VW1100 (or the 5000ES) and the lower end Sony 4Ks in terms of sharpness, and overall clarity. Not huge, but visibly superior. What that tells us, is that there are $15K true 4K projectors that still don't have "4K" optics (that can fully resolve 4K), because if they did, there wouldn't be a visible difference between those Sony's and their top of the lines. Also as is the nature of lenses, and as folks have pointed out, lenses can resolve more in the center than the edges of the image.

The sad part is, what we all really need is true 8K, so we can have proper, "big time" immersion. I look forward to the day when I can sit 5 feet from a 124" diagonal screen and have everything razor sharp. (BTW, true 8K is still only 80 pixels per inch on a 100-inch-wide screen - not exactly laser printer sharpness, but then we don't sit 18 inches from our screens.)

Overall, I'd say most projectors - could use better lenses. I've got three different 4K UHD DLP's here, with one seeming to have a bit better lens than the other two. Two are $2500 projectors, so you know there are limits to what they can provide in terms of lens, unless of course, they can rack up the kind of sales volume that would allow the better economies of scale when it comes to manufacturing.

OK, my ramble is over. Perhaps I'm right, but at least I tried to be a little less technical, more subjective, for those like me who might have had a problem understanding some of the comments and demonstrations. -art

Art Feierman
"Reviewing projectors is fun, but watching the 5th Element for the 500th time..."
Which means I'm really looking forward to Valerian: City of 1000 Planets
www.projectorreviews.com
presenter is offline  
post #855 of 1307 Old 07-21-2017, 06:52 PM
Advanced Member
 
Quote:
Originally Posted by Seegs108 View Post
[...] The issue is that TI and the CTA are calling these XPR DMDs "Native" 4K and these single pixel test patterns show that they clearly cannot do all the same things a true native 4K projector or display can do. [...] For the XPR DMD to truly be "native" 4K it should be able to do the same thing the RS4500 can do and it cannot.
Agreed - I wasn't commenting on the ability of e-shift to do 4K, but rather that the lens they used can still resolve the doubled gridlines. The lens is probably much better than the imaging system, indicating useful MTF well above the 1080p resolution and probably above the requirements of 4K, even if the shift system can't display it.
AJSJones is offline  
post #856 of 1307 Old 07-21-2017, 07:12 PM
Advanced Member
 
Quote:
Originally Posted by presenter View Post
[...] But the area that is orange will be smaller than a single pixel. So there is the argument for needing more resolution. [...]
You have the general idea when you say
"But the area that is orange, will be smaller than a single pixel. So there is the argument for needing more resolution,"
AJSJones is offline  
post #857 of 1307 Old 07-21-2017, 07:59 PM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by AJSJones View Post
You have the general idea when you say
"But the area that is orange, will be smaller than a single pixel. So there is the argument for needing more resolution,"
Yep. To visualize it we have the example that shows this, but just with different colors:



--Darin
darinp is offline  
post #858 of 1307 Old 07-21-2017, 08:50 PM
AVS Forum Special Member
 
Highjinx's Avatar
 
Join Date: Dec 2002
Location: Australia
Posts: 3,461
Mentioned: 6 Post(s)
Tagged: 0 Thread(s)
Quoted: 1246 Post(s)
Liked: 514
Isn't what this is showing the combined effect of both sub-frames 'interwoven'? In reality they pass through the glass at different times. This graph is there to illustrate the end result, not the lens requirement per singular sub-frame projection.

No argument a better lens will help all stages.

Dave Harper and Tomas2 like this.

May the success of a Nation be judged not by its collective wealth nor by its power, but by the contentment of its people.
Hiran J Wijeyesekera - 1985.
Highjinx is offline  
post #859 of 1307 Old 07-21-2017, 09:09 PM
AVS Forum Special Member
 
Dave in Green's Avatar
 
Join Date: Jan 2014
Location: USA
Posts: 8,099
Mentioned: 147 Post(s)
Tagged: 0 Thread(s)
Quoted: 3670 Post(s)
Liked: 2874
The simple truth remains that 1080p+e-shift is more demanding of lens quality than 1080p without e-shift and less demanding of lens quality than 4k. All the side commentary is irrelevant to that core issue.
AJSJones likes this.
Dave in Green is offline  
post #860 of 1307 Old 07-21-2017, 09:39 PM
Advanced Member
 
AJSJones's Avatar
 
Join Date: Dec 1999
Location: Sonoma County, CA, US
Posts: 886
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 100 Post(s)
Liked: 24
Quote:
Originally Posted by Highjinx View Post
Isn't what this is showing the combined effect of both sub-frames 'interwoven'? In reality they pass through the glass at different times. This graph is there to illustrate the end result, not the lens requirement per singular sub-frame projection.

No argument a better lens will help all stages.
Once and for all, please cite some (any) supporting evidence for your assertion that the two subframes will be altered in a perceptible way by going through the lens "at the same time". That assertion is simply bogus, but that's all it is: an assertion made with no foundation in fact and with no evidence to support it. You have also refused to answer any of the questions about how to assess lens blur/quality.
AJSJones is offline  
post #861 of 1307 Old 07-21-2017, 09:53 PM
AVS Forum Special Member
 
Highjinx's Avatar
 
Join Date: Dec 2002
Location: Australia
Posts: 3,461
Mentioned: 6 Post(s)
Tagged: 0 Thread(s)
Quoted: 1246 Post(s)
Liked: 514
Quote:
Originally Posted by AJSJones View Post
Once and for all, please cite some (any) supporting evidence for your assertion that the two subframes will be altered in a perceptible way by going through the lens "at the same time". That assertion is simply bogus, but that's all it is: an assertion made with no foundation in fact and with no evidence to support it. You have also refused to answer any of the questions about how to assess lens blur/quality.
The two sub frames are not going through the lens at the same time in the e-shift process, each sub frame projection is a reflection off a 1920x1080 pixel chip/canvas.

I don't understand the relevance. The graph is a simulation of the end result of two combined sub-frames in real time; this never happens in the e-shift process. The e-shift demands on the lens do not exceed the requirements of a 1920x1080 chip.
Dave Harper likes this.

Highjinx is offline  
post #862 of 1307 Old 07-21-2017, 10:00 PM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by Highjinx View Post
Isn't what this is showing the combined effect of both sub-frames 'interwoven'? In reality they pass through the glass at different times. This graph is there to illustrate the end result, not the lens requirement per singular sub-frame projection.
You are right that the graph shows the MTFs for the entire composite frame and not for the individual sub-frames. So, why would the chief engineer for the company that brought us eShift first show a graph that grades the system, including the lens, on how well it delivers a frame of video, even though the video frames are made up of multiple flashes?

The reason is very simple. It is that what I have been saying for over 18 months (that the lens requirements are based on the composite frame and not just a sub-frame) is true and what you, Dave Harper, Stereodude, R Johnson, Tomas2, and maybe Ruined, have been saying (that the lens requirements are only based on what goes through the lens in an instant) is not true.

There is good reason that the graph shows the MTF even at 2700 lines for 1080p+eShift, when it is impossible to even do a 2700 line test pattern with a sub-frame of native 1080p. The reason is that a 1080p+eShift projector can show a frame with 2700 lines, even though it is impossible to show 2700 lines with a sub-frame.
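The point about the composite carrying more detail than any sub-frame can be illustrated with a toy model of the two grids. Two half-pixel-diagonally-shifted 1080p grids together address twice as many distinct sample positions as either grid alone (scaled down to 192x108 here so the sets stay small; the ratio is the same at 1920x1080):

```python
# Toy model: count the distinct pixel-center positions addressed by the
# two half-pixel-diagonally-shifted eShift sub-frame grids. The composite
# frame has twice the sample sites of a single sub-frame, which is why the
# lens is graded against the composite. (Illustrative geometry only.)

def grid_centers(cols, rows, dx=0.0, dy=0.0):
    """Set of pixel-center coordinates for a grid offset by (dx, dy)."""
    return {(c + dx, r + dy) for c in range(cols) for r in range(rows)}

sub_a = grid_centers(192, 108)            # first sub-frame's grid
sub_b = grid_centers(192, 108, 0.5, 0.5)  # second, shifted half a pixel diagonally

composite = sub_a | sub_b
print(len(sub_a), len(composite))  # 20736 41472: twice the sites in the composite
```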

It is clear that JVC knows that you judge the lens and system by what it can do for frames of video. As some of us have said over and over, splitting the frames temporally does not change those spatial requirements.

This graph supports that position by not including the temporal separation that so many people think is important, but is just a red herring some people are holding onto for dear life. It isn't part of the graph because it isn't really relevant.

If you ever get to the point of understanding why lenses are judged by how well they transmit the whole frame of video they are tasked with, you will be far ahead of where you are. At that point we could start discussing some of the more difficult subtleties of this subject matter.

--Darin
darinp is offline  
post #863 of 1307 Old 07-21-2017, 10:13 PM
Advanced Member
 
AJSJones's Avatar
 
Join Date: Dec 1999
Location: Sonoma County, CA, US
Posts: 886
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 100 Post(s)
Liked: 24
Quote:
Originally Posted by Highjinx View Post
The two sub frames are not going through the lens at the same time in the e-shift process, each sub frame projection is a reflection off a 1920x1080 pixel chip/canvas.

I don't understand the relevance. The graph is a simulation of the end result of two combined sub-frames in real time; this never happens in the e-shift process. The e-shift demands on the lens do not exceed the requirements of a 1920x1080 chip.
There's that bogus assertion again. Any evidence to provide that it is true? A link to a simple optics text describing this effect would suffice. If it's so basic to how lenses work, it's bound to be there.

You just avoid the questions and keep asserting a falsehood. You won't even say what you mean by "lens demands". I have to conclude you are a troll.

Last edited by AJSJones; 07-22-2017 at 12:09 PM.
AJSJones is offline  
post #864 of 1307 Old 07-21-2017, 10:15 PM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by Highjinx View Post
The two sub frames are not going through the lens at the same time in the e-shift process, each sub frame projection is a reflection off a 1920x1080 pixel chip/canvas.
Everybody already knows that. The problem is that you keep thinking it is relevant.
Quote:
Originally Posted by Highjinx View Post
I don't understand the relevance. The graph is a simulation of the end result of two combined sub frames in real time, this never happens in the e-shift process.
The graph shows actual measurements, and those measurements are taken the way your brain sees things: whole frames, not just sub-frames.

I'm losing faith that you will ever understand that the lens's job is to deliver video to human beings, and so it is judged by how well it does that, which is just what the graph shows. It seems like you wanted JVC to deliver a graph for something other than what the lens's job is. Sorry, they did it right. You would have done it wrong.
Quote:
Originally Posted by Highjinx View Post
The e-shift demands of the lens do not exceed the requirements of a 1920x1080 chip.
It was one thing when you believed this a month ago, but you have had to ignore a lot of information and refuse to answer a lot of our questions in order to continue believing something that is false. Honestly, it's like trying to discuss something with a member of the flat earth society while they ignore even simple facts about physics that don't support their belief that the earth is flat.

Your belief that the lens requirements are exactly the same between 1080p and 1080p+eShift is a lot like flat earth society members' belief that the earth is flat.

When you outright refuse to address any physics that doesn't support your claims, even basic stuff like the non-interacting properties of photons, why should we keep answering your questions? At this point you have to know that you've refused to answer some pretty simple questions in order to keep posting your (false) belief.

--Darin

Last edited by darinp; 07-21-2017 at 10:18 PM.
darinp is offline  
post #865 of 1307 Old 07-21-2017, 10:19 PM
AVS Forum Special Member
 
Highjinx's Avatar
 
Join Date: Dec 2002
Location: Australia
Posts: 3,461
Mentioned: 6 Post(s)
Tagged: 0 Thread(s)
Quoted: 1246 Post(s)
Liked: 514
Quote:
Originally Posted by darinp View Post
You are right that the graph shows the MTFs for the entire composite frame and not for the individual sub-frames. So, why would the chief engineer for the company that brought us eShift first show a graph that grades the system, including the lens, on how well it delivers a frame of video, even though the video frames are made up of multiple flashes?

The reason is very simple. It is that what I have been saying for over 18 months (that the lens requirements are based on the composite frame and not just a sub-frame) is true and what you, Dave Harper, Stereodude, R Johnson, Tomas2, and maybe Ruined, have been saying (that the lens requirements are only based on what goes through the lens in an instant) is not true.

There is good reason that the graph shows the MTF even at 2700 lines for 1080p+eShift, when it is impossible to even do a 2700 line test pattern with a sub-frame of native 1080p. The reason is that a 1080p+eShift projector can show a frame with 2700 lines, even though it is impossible to show 2700 lines with a sub-frame.

It is clear that JVC knows that you judge the lens and system by what it can do for frames of video. As some of us have said over and over, splitting the frames temporally does not change those spatial requirements.

This graph supports that position by not including the temporal separation that so many people think is important, but is just a red herring some people are holding onto for dear life. It isn't part of the graph because it isn't really relevant.

If you ever get to the point of understanding why lenses are judged by how well they transmit the whole frame of video they are tasked with, you will be far ahead of where you are. At that point we could start discussing some of the more difficult subtleties of this subject matter.

--Darin
I would agree with you if its task was to ACTUALLY create a 'Single Page' composite.

It's not; it's creating a pseudo-composite by relying on our visual system to blend two separate images, not unlike creating apparent motion by projecting a series of still frames with differences.

Next you guys will be saying we need a higher-MTF lens to project 24 frames than to project 1 frame.

BTW E-Shift was developed by NHK and JVC Japan.
Dave Harper and Tomas2 like this.

Highjinx is offline  
post #866 of 1307 Old 07-21-2017, 10:34 PM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by Highjinx View Post
I would agree with you if its task was to ACTUALLY create a 'Single Page' composite.

It's not; it's creating a pseudo-composite by relying on our visual system to blend two separate images, not unlike creating apparent motion by projecting a series of still frames with differences.
It's more like how single-chip DLPs show only one of 4 different things for a pixel at a moment in time, but create colorful pictures for humans. eShift is not trying to create motion between movie sub-frames, since the source doesn't have motion within a frame, so no, it is not like your motion example.

Why did you ignore my questions about whether a lens that misconverged red and green horribly could still be considered a great lens for a single-chip DLP, since convergence of the colors never happens at an instant? If you only grade by an instant, then convergence of colors is irrelevant. Are you going to ignore that question forever?
Quote:
Originally Posted by Highjinx View Post
Next you guys will be saying, we need a higher MTF lens to project 24 frames than it would be to project 1 frame.
I already told you that separating things temporally doesn't change the spatial lens requirements, so you would be the one claiming the temporal separation matters. It is only if our eyes see something as separate that it matters. If our eyes see something as happening at the same time (whether it is or not) then proper measurements are done the same way. JVC did it the right way. You would have done it the wrong way.
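The claim that temporal separation doesn't change the spatial requirements can be illustrated with the linearity of convolution: if lens blur is modeled as a convolution with a fixed point-spread function, then blurring two sub-frames separately and letting the eye sum them is mathematically identical to blurring the simultaneous composite. A toy 1-D sketch (idealized linear model, ignoring screen and eye nonlinearities):

```python
# Linearity argument: blur(A) + blur(B) == blur(A + B) for a linear lens
# model, so sequential sub-frames integrated by the eye face the same
# spatial demands as a simultaneous composite. Toy 1-D NumPy model.

import numpy as np

rng = np.random.default_rng(0)
sub_a = rng.random(64)             # intensities of sub-frame A
sub_b = rng.random(64)             # intensities of sub-frame B
psf = np.array([0.25, 0.5, 0.25])  # toy lens point-spread function

blur = lambda img: np.convolve(img, psf, mode="same")

eye_sum_of_blurs = blur(sub_a) + blur(sub_b)  # sequential flashes, eye integrates
blur_of_composite = blur(sub_a + sub_b)       # both sub-frames at once

print(np.allclose(eye_sum_of_blurs, blur_of_composite))  # True
```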
Quote:
Originally Posted by Highjinx View Post
BTW E-Shift was developed by NHK and JVC Japan.
And yet here you are claiming JVC did the measurements with it the wrong way. Do you think they don't know better than you how to measure the performance of an eShift system?

I have talked to Rod Sterling in person at least once a year going back quite a ways. We sometimes talk about misinformation in this industry. I also get to hear some of his stories about going to Japan and training people over there. Rod is one of the good guys in the industry and is very knowledgeable.

--Darin

Last edited by darinp; 07-21-2017 at 10:41 PM.
darinp is offline  
post #867 of 1307 Old 07-22-2017, 07:15 PM
AVS Forum Special Member
 
Highjinx's Avatar
 
Join Date: Dec 2002
Location: Australia
Posts: 3,461
Mentioned: 6 Post(s)
Tagged: 0 Thread(s)
Quoted: 1246 Post(s)
Liked: 514
The fact that e-shift creates 2 separate frames from the original master frame and projects each sequentially IMO makes those individual images temporal. E-shift depends on human visual psychology to pull this off.

The graph is a simulation of the lens requirement if both e-shift sub-frames were projected simultaneously. In reality these frames are never projected simultaneously.
JVC did not do the measurements the wrong way; the graph is a marketing tool to keep the information at an easy-to-interpret level.

No, a lens that created chromatic aberration would not be considered a great lens. BTW, on the angular-resolution stance, the e-shift element is located very close to the back element of the lens and the angle is minuscule; sure, it's mathematically measurable, but is it visually apparent? And again: sequential projection and reflection off the screen. The sought result is not a single-page print with a well-blended image, aka a blended photographic print.

Hmmm, I wonder how the end result would look if we flashed each of these sub-frames sequentially and used magical MTF-100 3D glasses; perhaps the perceived result would be the same or similar.

The process depends on human visual psychology to sample-and-hold and blend together the two temporal sub-frames, with their key dissimilar information sets, into the full frame. In the 120Hz cycle, 24 unique 3840x2160 temporal movie frames with dissimilar information subsets have to be split into 2 sub-frames each and projected sequentially.
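A minimal sketch of the split described above, assuming a simple diagonal interleave (real eShift processing derives the sub-frames more cleverly): each sub-frame samples the master frame on one of the two diagonally offset grids, so together they carry half of the master's samples. Scaled-down sizes for illustration:

```python
# Toy decomposition of one master frame into two half-pixel-shifted
# sub-frames. A simple even/even and odd/odd interleave stands in for
# the real processing; sizes are scaled down from 4K for readability.

import numpy as np

master = np.arange(8 * 8, dtype=float).reshape(8, 8)  # stand-in "4K" frame

# Sub-frame A takes the even-row/even-column samples, sub-frame B the
# odd/odd ones, mimicking the diagonal half-pixel offset between grids.
sub_a = master[0::2, 0::2]
sub_b = master[1::2, 1::2]

print(sub_a.shape, sub_b.shape)               # (4, 4) (4, 4)
print(sub_a.size + sub_b.size == master.size // 2)  # True: half the samples survive
```

The last line is the crux of both camps' arguments: the sub-frames carry only dissimilar subsets of the master, yet the lens still ends up delivering all of them to the viewer's eye within one frame time.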

Perhaps the cycle is not set at 120Hz, but up to 120Hz: 48Hz for 24fps material, up to 120Hz for 60fps material.

Yes, there are two camps: one sees each of these sub-frames extracted from the master frame as temporal 1920x1080 images with key areas of dissimilar information, projected sequentially, and therefore not requiring a better lens than what is considered sufficient to delineate and project those pixels and the image data they carry. The other camp sees things differently.

No matter, all good fun.

Don't mind me, I'm a feeble minded fool!....I'll bow out now. Thanks for the education Dr. darinp!

Have a wonderful summer's day! .......you too Dr.Jones!
Dave Harper likes this.

Highjinx is offline  
post #868 of 1307 Old 07-22-2017, 08:39 PM
AVS Forum Special Member
 
Dave in Green's Avatar
 
Join Date: Jan 2014
Location: USA
Posts: 8,099
Mentioned: 147 Post(s)
Tagged: 0 Thread(s)
Quoted: 3670 Post(s)
Liked: 2874
The fundamental fact is that if more actual fine detail is displayed in a projected image, then that image passing through the lens is more demanding of lens quality, no matter how or when the added fine detail is created or passed through the lens. All the side commentary remains irrelevant to that core issue.
Dave in Green is offline  
post #869 of 1307 Old 07-22-2017, 08:48 PM
AVS Forum Special Member
 
Highjinx's Avatar
 
Join Date: Dec 2002
Location: Australia
Posts: 3,461
Mentioned: 6 Post(s)
Tagged: 0 Thread(s)
Quoted: 1246 Post(s)
Liked: 514
Quote:
Originally Posted by Dave in Green View Post
The fundamental fact is that if more actual fine detail is displayed in a projected image then that image passing through the lens is more demanding of lens quality no matter how or when the added actual fine detail is created or passed through the lens. All the side commentary remains irrelevant to that core issue.
There is NO finer detail passing through the lens than what the 1920x1080 pixel count can deliver. There is new information in the non-overlapping areas, but again, the detail level is limited to the pixel count.

The two sub-frames projected sequentially at speed essentially create an animation, one cleverly crafted to be indistinguishable from a 4K chip at the appropriate viewing distance, where the additional detail provided by a 4K pixel count would not be visible to a person with 20/20 vision. Move closer and the story changes.
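The "appropriate viewing distance" part of that claim can be sanity-checked with the usual rule of thumb that 20/20 vision resolves about 1 arcminute (a rough model; real acuity and screen sizes vary):

```python
# Rough estimate of the distance at which a 20/20 eye (~1 arcminute of
# resolution) can no longer separate individual 4K pixels on a
# 100-inch-wide screen. Back-of-the-envelope model only.

import math

screen_width_in = 100.0
h_pixels = 3840
pixel_pitch = screen_width_in / h_pixels   # inches between pixel centers
one_arcmin = math.radians(1 / 60)          # ~2.9e-4 radians

distance_in = pixel_pitch / math.tan(one_arcmin)
print(f"~{distance_in / 12:.1f} feet")     # roughly 7.5 feet
```

Inside that distance the extra detail of true 4K starts to matter; beyond it, the pseudo-composite argument gets easier to make.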


Last edited by Highjinx; 07-22-2017 at 09:28 PM.
Highjinx is offline  
post #870 of 1307 Old 07-22-2017, 09:22 PM
Advanced Member
 
AJSJones's Avatar
 
Join Date: Dec 1999
Location: Sonoma County, CA, US
Posts: 886
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 100 Post(s)
Liked: 24
Quote:
Originally Posted by Highjinx View Post
There is NO finer detail passing through the lens than what the 1920x1080 pixel size can deliver.
.
Until you say what that means in terms of the effect of lens quality on what comes out of the lens, that assertion remains meaningless.

Tell us what you think happens to any image of the 1080p chip on its way through the lens: does it get blurred a bit, for example? Still waiting for some independent support for your assertions.

(In any case, the screenshots Seegs provided show that it doesn't work very well, and the MTF curves are consistent with that.)

Last edited by AJSJones; 07-22-2017 at 09:35 PM.
AJSJones is offline  