Resolution requirements for lenses - Page 16 - AVS Forum | Home Theater Discussions And Reviews
post #451 of 1307 Old 07-01-2017, 08:03 PM
Mike Garrett (AVS Forum Addicted Member)
Quote:
Originally Posted by Dave Harper View Post
OK, I swear this is the last post here until such time that I have further info to go beyond this stage that we are going in circles with. I just felt the need to reply to these few posts, then I'm done until such time.......



You and I and everyone knows they implied that the reason a higher quality lens is needed for eShift was because of the (spatial?) "composite image" that's formed from the two sub-frames. That therefore implies that they believe the "composite image" itself has to be resolved by the lens, so therefore that means that the "composite image" must be presented to the rear element of the lens, i.e. - before the lens, inside the projector. That is exactly what I was depicting with those graphics. I apologize if I didn't do a good job of it.

That is what I truly and firmly believed they were saying, so therefore when I replied there was no dishonesty whatsoever from my perspective, and no falsehoods whatsoever.

Now if they're saying something else completely, which I can't see how since they're involving the lens in something happening beyond it in space and time, then this is also another reason I am bowing out here until I'm able to reconnect with my guy (thinking of having him over for a movie and refreshments!) and we can talk face to face and he can go over each point with me one by one, showing where either I or darinp and coderguy or whomever was wrong and right.

I've attached a photo of his bumper sticker just for fun and to show he lives this stuff, so I certainly think and will believe his stance before anyone here until they can prove without a doubt their credentials and equality with his as far as optics are concerned. He didn't just learn this stuff in college and then try to apply it on an enthusiast forum. He actually applies it in the real world designing real, amazing things like commercial telescopes.

[image: bumper sticker photo]

Exactly, and this is what we are saying. The native 1080 non-eShift projector would of course be spec'd with a lens whose MTF is good enough to resolve the pixel gaps, would it not? That's why you can see the "screen door", as Javs put it. So now that we have that established, nothing in the eShift process, as far as the lens is concerned, would make that lens MTF requirement any higher.

So what you're saying with the bolded sentences above is that they used a much better lens on the LS9600e just because the higher models, the LS10000/10500, will need it? If so, I find it very hard to believe that a manufacturer would spend that extra money on the lens for that reason. They would use a cheaper one, like one from their other native 1080 non-eShift projectors. Someone else said the same thing earlier in the thread, right after I asked that question for the first time.

The reason it's the same is because they have the same requirements. Now when/if they use that chassis with native 4K imagers, then they will need to upgrade the lens.


Is that taking into account the imager actually blanking totally between sub-frames, while the eShift actuator glass angles into place for the next B sub-frame milliseconds later, which will go through the lens slightly shifted up and to the right 1/2 pixel?

Or is that talking about the light being constant and it "sliding" so to speak up and into its new position, across the rear element of the lens?
Yes, your graphics clearly show that you think Darin and Coderguy are saying the combined image is going through the lens. That is why Darin is upset: that is not even close to what they said or implied. The two images go through the lens separately and are combined by your eye when they hit the screen.

post #452 of 1307 Old 07-01-2017, 08:53 PM
cmjohnson (AVS Forum Special Member)
In this discussion I have never addressed or given consideration to any factors other than lens resolution and MTF, which can be restated correctly as the ANGULAR resolution capacity of a given lens.

While it is certainly possible to pass information through a lens that can be combined by the viewer (or viewing device) to create an image with more information in it than has passed through the lens at any given moment, such as sequential 3D as an easy example, even that does not have any bearing on the fact that angular resolution of the lens places a hard limit on its information bandwidth capacity.

I am simply choosing not to address any form of temporal pre- or post-processing of the image.

You can pass any information you want, and as much as you want, through the lens, but only up to the limit of its resolution and the speed that you can generate and capture images. There's no reason at all that you couldn't theoretically slam a billion 4K frames per second through a lens that can resolve 4K. If you had the means to generate that framerate, that is. Or receive it.

Putting information through the lens that has to go through sequentially requires the receiving device to use a certain amount of time for acquisition and integration. The information passing through this system will effectively suffer data loss if the transfer rate is faster than the receiver system can handle. Including, of course, the eye.
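As a back-of-the-envelope sketch of that bandwidth limit (a Python illustration with made-up numbers, not measurements of any real lens):

Code:

# Illustrative only: a lens that resolves a given number of line pairs caps
# the spatial information per frame; frame rate then caps total throughput.
resolvable_lp_h = 1920            # assumed lens limit across the image width
resolvable_lp_v = 1080            # assumed limit across the image height
fps = 1_000_000_000               # the hypothetical billion 4K frames per second

# One line pair needs two pixels, so the lens passes at most this many pixels:
pixels_per_frame = (2 * resolvable_lp_h) * (2 * resolvable_lp_v)
throughput = pixels_per_frame * fps   # pixel-samples per second through the lens

print(f"{pixels_per_frame:,} resolvable pixels per frame")
print(f"{throughput:.3e} pixel-samples/s, if you could generate and receive them")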
post #453 of 1307 Old 07-01-2017, 09:32 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by Dave Harper View Post
You and I and everyone knows they implied that the reason a higher quality lens is needed for eShift was because of the (spatial?) "composite image" that's formed from the two sub-frames. That therefore implies that they believe the "composite image" itself has to be resolved by the lens, so therefore that means that the "composite image" must be presented to the rear element of the lens, i.e. - before the lens, inside the projector. That is exactly what I was depicting with those graphics. I apologize if I didn't do a good job of it.

That is what I truly and firmly believed they were saying, so therefore when I replied there was no dishonesty whatsoever from my perspective, and no falsehoods whatsoever.
By that logic, do you believe that your expert thought that the eShift sub-frames get combined before the lens, since they said the same thing I did: that it doesn't really matter whether the eShift sub-frames are combined before the lens or not?

You seem so intent on saying that I am wrong and your expert is right that even when your expert agrees with me (that it doesn't matter if the eShift sub-frames go through the lens at different times or at the same time), you claim that I am wrong and your expert is right. That isn't possible, since your expert agreed on that point. If you want to claim I am wrong on that then you have to claim your expert is wrong too. If you actually believe that we are both wrong then just say so, or go get the clarification from your expert that you so far have out-and-out refused to get and post.
Quote:
Originally Posted by Dave Harper View Post
Amazing how differently you talk to Javs, when his position is virtually the same as mine. That says it all right there.
Javs doesn't play the games that you play. I am confident that if Javs had an expert at his disposal he would go ask the expert a reasonable followup question to clarify their position and post the answer here. You outright refuse to do that. If you cared about whether what you have posted so far was true or not you would get that clarification and post it. To do otherwise is to show that you are more interested in claiming I am wrong than finding out the truth.
Quote:
Originally Posted by Dave Harper View Post
... I'm able to reconnect with my guy (thinking of having him over for a movie and refreshments!) and we can talk face to face and he can go over each point with me one by one, showing where either I or darinp and coderguy or whomever was wrong and right.
It sounds to me like you want to set up a situation where you would be interpreting what your expert said and then posting that, instead of posting their exact words. We saw how well that worked out last time. You came here and completely misrepresented something your expert said. It was only because you posted their exact words that it became clear they thought B and C in my example had the same lens requirements, when your interpretation of what your expert had said was exactly the opposite.

Everybody makes mistakes, but since you already showed that reinterpreting what your expert claims doesn't necessarily work, why not text them my question and post their exact answer? Then people here can see what your expert says when asked for clarification instead of your interpretation of what your expert says.

You have a chance to show people that you care about the truth instead of just claiming that you do.

--Darin
post #454 of 1307 Old 07-01-2017, 09:54 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by Javs View Post
This conversation will go on in eternity because of the fact that we can never resolve 100% of a 1080p pixel in the first place, thus we never hit the limits of 1080p, thus the lens can always be better.
That is one reason. I think the other reason it continues on is that Dave outright refuses to ask his expert my clarification question and post his expert's response.
Quote:
Originally Posted by Javs View Post
Why is this even an argument, who says this?
I am referring to things like Ruined's continued argument that if a lens has blurring that affects 1 mm^2 entities (pixels in 1080p space) it will similarly affect 0.25 mm^2 entities (sub-pixels in eShift space). That is offered as an argument that eShift doesn't require any better lenses than 1080p, yet that exact argument would also go into the list of arguments against higher native resolutions having higher lens requirements.
Quote:
Originally Posted by Javs View Post
Sorry, but this now has nothing to do with a lens and everything to do with my vision and distance; I don't get how this example is relevant whatsoever to our discussion, as I don't take issue with it. How does this support your argument for resolution requirements in lenses? Let's stick to that, please?
Your vision has 2 lenses. If the argument that lenses only need to resolve what goes through them at one time were true, then it would apply to the lenses in your eyes too. That is what this was trying to address: the fundamental question of whether separating the photons in time changes the spatial requirements of a lens, whether it be in a projector or in an eyeball.
Quote:
Originally Posted by Javs View Post
Now, based on that, if the lens we have can ONLY resolve 1080 line pairs horizontally and that's it (this is not the current state of projectors, btw, so it's nothing we see in HT right now), indicating the lens is only good enough for 1080p, then you have resultant new sub-pixels with too much MTF loss to be resolved. I won't dispute this, and yes, even though our brain makes up this composite in our minds, the fact is, unless the MTF is high enough over the minimum limit for 1080p, our brain won't have enough information to create a clean composite...
I believe we agree there.

I know you had other posts after this and I don't want to respond to every one, but will say that it seems like you and I agree on some things:

1. Current lenses on the 1080p projectors we discuss here may be well up to the task of handling eShift or handling 4k chips of the same size.
2. Although these lenses may be more than up to the task, that does not change the fact that the minimum lens requirements for acceptable spatial resolution are higher for both eShift and native 4K than they are for 1080p with no eShift. And whether eShift is done by sending the sub-frames through the lens at different times or at the exact same time is pretty much irrelevant to those lens requirements. The fine detail in the composite eShift images increases the requirements for the lenses to retain that detail in whole frames (where whole frames include both eShift sub-frames overlaid) that humans perceive as being displayed all at the same time, even if the 3 primary colors are never displayed at the same time or the eShift sub-frames are never displayed at the same time.

Do those seem right to you?

Thanks,
Darin

post #455 of 1307 Old 07-01-2017, 10:12 PM
darinp (AVS Forum Special Member)
One more thing, Javs.
Quote:
Originally Posted by Javs View Post
It's another trick on the human brain: the brain needs free space around things to recognise detail in things such as letters; that's why we have kerning rules in fonts.
By the argument some people are making here, if there was a problem where somebody put 2 letters too close together for the lenses in the front of your eyes to resolve them so that your retina could see them properly, you wouldn't actually have to separate the letters in space at all. All you would have to do is flash one of the letters for one millisecond and then flash the letter next to that one in the next millisecond, toggling back and forth.

That is, by the arguments of Dave and some others, all you would have to do is separate those consecutive letters in time and that would fix the spatial problem the lenses in your eyeballs had with resolving those letters.

Anybody think that would actually work to lower the requirements for the lenses in your eyeballs if somebody wanted to display a font with the letters crammed together?
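To make that concrete, here is a minimal numpy sketch (my own illustration, using a crude box blur as a stand-in for the eye lens's MTF and assuming incoherent light, so intensities add linearly):

Code:

import numpy as np

def blur(img, k=9):
    # Crude separable box blur standing in for a lens's finite MTF.
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

letter_a = np.zeros((64, 64)); letter_a[20:44, 18:22] = 1.0  # crude letter stroke
letter_b = np.zeros((64, 64)); letter_b[20:44, 24:28] = 1.0  # crammed next to it

together = blur(letter_a + letter_b)  # both letters shown at the same time
# Flashed alternately at double brightness; the retina averages over time:
alternating = 0.5 * blur(2 * letter_a) + 0.5 * blur(2 * letter_b)

print(np.allclose(together, alternating))  # True: time-slicing changed nothing

Because incoherent imaging is linear in intensity, the time-averaged result of the flashed letters is mathematically identical to the blurred simultaneous image.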

--Darin
post #456 of 1307 Old 07-01-2017, 11:02 PM
Dave in Green (AVS Forum Special Member)
It seems as if clarification of the original concept is resulting in a consensus starting to develop among those who have sincerely tried to understand and have rational dialog about the other side's point of view. This is how collaboration is supposed to work and everyone who has contributed to that process is to be commended.
post #457 of 1307 Old 07-01-2017, 11:31 PM
Javs (AVS Forum Special Member)
Quote:
Originally Posted by darinp View Post

I know you had other posts after this and I don't want to respond to every one, but will say that it seems like you and I agree on some things:

1. Current lenses on the 1080p projectors we discuss here may be well up to the task of handling eShift or handling 4k chips of the same size.
2. Although these lenses may be more than up to the task, that does not change the fact that the minimum lens requirements for acceptable spatial resolution are higher for both eShift and native 4K than they are for 1080p with no eShift. And whether eShift is done by sending the sub-frames through the lens at different times or at the exact same time is pretty much irrelevant to those lens requirements. The fine detail in the composite eShift images increases the requirements for the lenses to retain that detail in whole frames (where whole frames include both eShift sub-frames overlaid) that humans perceive as being displayed all at the same time, even if the 3 primary colors are never displayed at the same time or the eShift sub-frames are never displayed at the same time.

Do those seem right to you?

Thanks,
Darin
1, 2. Yeah, for sure. My long post with graphics shows it extremely clearly; nobody could contest the data there when talking about minimum requirements. But minimum is a bloody low bar to set: as I showed, you can't even see the pixel grid at that level, yet it's enough to make out a line pair, and thus technically that is the minimum requirement, so getting anything other than more blur out of higher-resolution information is going to be impossible. I never once disputed this, though. We needed to set a limit on what we call acceptable 1080p MTF, but anyway, as the graphics show, 100% MTF is impossible, so no matter the case you can always go better.

And whether eShift is done by sending the sub-frames through the lens at different times or at the exact same time is pretty much irrelevant to those lens requirements.

I think the best way to illustrate this, though, and to show people, is that although it's technically a composite of two 1080p frames, if you have the lowest acceptable MTF for 1080p it's not going to be possible to create a higher-resolution composite past that point regardless. I think Dave can appreciate this, and I doubt he would take issue with it, simply because it's logical.

[image]
I say again that I really do think a lot of this thread has been people with different concepts of the requirements for a given resolution. This is really a philosophical conversation at the end of the day, since in reality there is NO projector on the market that I know of with MTF this bad by the time the image hits the screen, and I also showed that, in fact, pretty much all projectors meet minimum MTF requirements for 4K as well, since I have never seen a 1080p projector this bad:

[image]
So, I think we can appreciate that in the real world this is a pretty academic conversation at best. "Minimum requirements" is a VERY loose term.

I would even say, being quite conservative again, and given that I believe the MTF of my JVC RS620 is actually better than this, that current 1080p projectors with decent lenses could actually display 8K with no problem on the same panel size. The SDE would probably want to be smaller, but I still think you would see it.

[image]
This would be close to the MTF limit for 8K; again, I don't believe there is any reasonable 1080p projector out there with MTF this bad...

post #458 of 1307 Old 07-01-2017, 11:45 PM
darinp (AVS Forum Special Member)
Javs,

While hopefully none of these will fail to meet the minimum requirements, there is one case where a company that already had good lenses went even better when they went to native 4K.

From what I understand, just the parts cost for the lens in the RS4500 would make it impossible to put that lens in one of the lower priced projectors and have them sell for close to what they sell for now.

Although, I don't recall if the JVC 4K chips are the same size as their 1080p chips.

As you said, a sharper lens is always better. However, and I think we agree on this, I think you can take advantage of a better lens more when you have finer detail to show, whether that be through the eShift technology or higher native resolution.

Our eyeballs will blur things some, so the better the lens can convey that fine detail the more chance we will have of actually seeing it.

--Darin
post #459 of 1307 Old 07-01-2017, 11:55 PM
Javs (AVS Forum Special Member)
Quote:
Originally Posted by darinp View Post
Javs,

While hopefully none of these will fail to meet the minimum requirements, there is one case where a company that already had good lenses went even better when they went to native 4K.

From what I understand, just the parts cost for the lens in the RS4500 would make it impossible to put that lens in one of the lower priced projectors and have them sell for close to what they sell for now.

Although, I don't recall if the JVC 4K chips are the same size as their 1080p chips.

As you said, a sharper lens is always better. However, and I think we agree on this, I think you can take advantage of a better lens more when you have finer detail to show, whether that be through the eShift technology or higher native resolution.

Our eyeballs will blur things some, so the better the lens can convey that fine detail the more chance we will have of actually seeing it.

--Darin
Yes, of course, and on the cycle goes. It's one of the reasons upgrading to the JVC X9500 (RS620) floored me with its increased lens sharpness: the resolution didn't change, but the lens MTF performance sure as hell did.

The 4500 has a 0.69-inch panel; I believe the current 1080p panels are about the same size, but the pixel gap was significantly reduced too. So the pixel pitch went from 8µm to 3.8µm.

What they probably refer to as minimum requirements is hopefully something like 50% MTF, which should really be where the "minimum" bar is set for a given resolution if you are building a lens for it.

So I am sure the 4500 lens can resolve 120 line pairs, which IIRC is what is required of its panel at something like 50% MTF. That would be far more impressive than what the current lenses in the 1080p line can do; it's also why the Sony 1100ES looks a heck of a lot sharper than the lower 3/6XX Sony 4K units, among the other optical aberrations and such that they have.
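For anyone wanting to sanity-check those pitch figures, the Nyquist line-pair number falls straight out of the pixel pitch (a quick sketch; one line pair spans two pixel pitches):

Code:

def nyquist_lp_per_mm(pixel_pitch_um):
    # One line pair = one "on" pixel + one "off" pixel = two pitches.
    return 1.0 / (2.0 * pixel_pitch_um / 1000.0)

print(nyquist_lp_per_mm(8.0))   # ~62.5 lp/mm for the 8 micron 1080p pitch
print(nyquist_lp_per_mm(3.8))   # ~131.6 lp/mm for the 3.8 micron native-4K pitch

That puts the native-4K panel in the same ballpark as the 120 line pair figure quoted above.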

post #460 of 1307 Old 07-02-2017, 06:48 AM
rak306 (Advanced Member)
Quote:
Originally Posted by Javs View Post

This would be close to the MTF limit for 8K; again, I don't believe there is any reasonable 1080p projector out there with MTF this bad...

+1, I agree with everything you said, and I believe you have stated it the clearest.

Just one comment. When they quote a spec like "film can resolve 80 lp/mm" or "lens resolution: 150 lp/mm", the lines resolved will look like the fuzzy ones in this image, not sharp and distinct ones.

Make no mistake about it: for 4K movies, the raw movie frames are low-pass filtered to remove any frequency content above 1920 line pairs. (I have tried, unsuccessfully, to find out the shape/cutoff of that filter.) That means there is no content in the movie that requires a lens better than one that would pass those blurry lines.

Of course, what matters is the entire shape of the MTF curve (as you have pointed out before). And lenses don't just have a flat MTF up to a cutoff and then fall like a rock. So to get good MTF where you need it, the lens does resolve well beyond a pixel.
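To illustrate that roll-off shape, here is the textbook diffraction-limited MTF of an ideal circular aperture (just an example of a curve that declines gradually rather than cliff-dropping; real projector lenses have aberrations on top of this):

Code:

import numpy as np

def diffraction_mtf(f, f_cutoff):
    # Diffraction-limited MTF of an ideal circular aperture, incoherent light.
    x = np.clip(f / f_cutoff, 0.0, 1.0)
    return (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x * x))

for f in np.linspace(0, 150, 7):  # spatial frequency in lp/mm
    print(f"{f:6.1f} lp/mm -> MTF {diffraction_mtf(f, 150.0):.2f}")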
post #461 of 1307 Old 07-02-2017, 11:49 AM
 
RLBURNSIDE
Quote:
Originally Posted by coderguy View Post
Here it is again for the people that want to continue arguing:
https://www.semanticscholar.org/pape...0ad506766d4b94
Good link, I'll read that paper later.

I don't need to be convinced of super-resolution; I read about it ten years ago when Mitsubishi was using it to smooth out shaky-cam video footage while simultaneously boosting static resolution through multi-sampling across frames (by matching picture elements that have shifted from one frame to the next).

Videogames often use temporal supersampling to double or triple the effective number of samples. This gets you, e.g., 8x MSAA for 4x cost (making it viable for real-time). It's not perfect, but it does produce very visually pleasing results at low cost, and filtering can avoid most of the jittered temporal aliasing artifacts.
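A toy sketch of that accumulation idea (an entirely hypothetical render loop, ignoring the reprojection and history clamping a real engine needs to fight ghosting):

Code:

import numpy as np

rng = np.random.default_rng(1)
truth = rng.random((8, 8))  # stand-in for a fully supersampled result

def render_one_sample():
    # Hypothetical 1-sample-per-pixel render: noisy but unbiased around truth.
    return truth + 0.05 * rng.standard_normal(truth.shape)

history = render_one_sample()
alpha = 0.1  # exponential blend weight toward each new jittered frame
for _ in range(500):
    history = (1 - alpha) * history + alpha * render_one_sample()

print(np.abs(history - truth).max())  # small: samples accumulate across frames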
post #462 of 1307 Old 07-02-2017, 12:32 PM
Tomas2 (Advanced Member)
Quote:
Originally Posted by darinp View Post
...Our eyeballs will blur things some, so the better the lens can convey that fine detail the more chance we will have of actually seeing it.
The eShift algorithm is blurring the original 4K image. If you take a look at Javs's post where he included that image from ARRI, you will see exactly what is happening with this process.

The first image from the ARRI camera (FIG. 6) has twice as many pixels, but the black level is elevated, and color saturation and contrast are lowered as well. The second image is filtered to half resolution, with contrast and color saturation emphasized (coarse detail) and overall black level optimized. In this special case the second image is perceptually more appealing. That only happened because the first image was exposed in a way that benefits from the side-by-side comparison.

4K content is not graded in this way...the image is optimized and still retains 4K pixel density. It also turns out that your hypothetical E3 would actually suffer from eShift, because the detail is NOT diagonal but rather vertical...a weakness of this process.

post #463 of 1307 Old 07-02-2017, 12:34 PM
 
RLBURNSIDE
Quote:
Originally Posted by Javs View Post
Talking about temporal anomalies is not moving the discussion forward, frankly. Sorry, but I didn't take physics in high school, I did film studies instead, so sue me. Yet I can still talk logic and clearly lay out things which are easy to digest; is that so hard? I think that's one reason Dave was getting frustrated: it's just throwing massive words at people and claiming you are correct without attempting to find a middle ground.
In science, the truth isn't necessarily half-way between two extremes.

One side could be 100% right and the other completely wrong, or both could be completely wrong. Trying to find consensus isn't really what science is about; that would be an argumentum ad populum, which is a logical fallacy.

To really get to the bottom of this particular issue though, a background in physics is pretty important, I think. So I'll wait until I see an actual optics engineer chime in before changing my mind (assuming of course they make a compelling, reasoned argument).
post #464 of 1307 Old 07-02-2017, 12:35 PM
coderguy (AVS Forum Addicted Member)
Quote:
Originally Posted by cmjohnson View Post
While it is certainly possible to pass information through a lens that can be combined by the viewer (or viewing device) to create an image with more information in it than has passed through the lens at any given moment, such as sequential 3D as an easy example, even that does not have any bearing on the fact that angular resolution of the lens places a hard limit on its information bandwidth capacity.

I am simply choosing not to address any form of temporal pre- or post-processing of the image.
It became an issue because people were saying earlier that e-shift was being built only off a 1080p frame even for 4K content, which to me did not really make sense. We can see how much better the 4K e-shift image looks in Javs's post compared to the 1080p e-shift image; there is a huge difference, showing they have to be interpolating from the actual 4K data.

The e-shift process and supersampling are related; the e-shift technology was derived from the principles of supersampling. In this case the "supersampled" frame is the higher-res 4K frame, which is then broken into sub-frames that are enhanced by interpolation of the 4K data. The net effect in Javs's images is quite amazing, working even better than a regular supersampling algorithm generally would (guessing because the shifted pixels help the interpolation in e-shift's case, and also because instead of just supersampling deficiencies of a frame from another frame, e-shift is essentially supersampling a higher-res frame).

I would agree that if the interpolation algorithm weren't specifically designed to coordinate with the shifted pixels, the net effect would likely be very close to the same as far as lens requirements go, but e-shift combines multiple processes. It's splitting a 4K image into sub-frames to create smaller pixels, with interpolation assisting the process, and Javs's images speak louder than words.
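A rough sketch of the splitting step (my own guess at the geometry, not JVC's actual algorithm, which interpolates each sub-frame from the full 4K data rather than point-sampling like this):

Code:

import numpy as np

frame_4k = np.random.default_rng(2).random((2160, 3840))  # stand-in 4K frame

# Sub-frame A: the 4K frame sampled on the unshifted 1080p grid.
sub_a = frame_4k[0::2, 0::2]
# Sub-frame B: sampled on a grid offset diagonally by half a 1080p pixel.
sub_b = frame_4k[1::2, 1::2]

# The projector flashes sub_a, optically shifts by half a pixel, then flashes
# sub_b; the viewer's eye integrates the overlaid pair into one image.
print(sub_a.shape, sub_b.shape)  # both (1080, 1920)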

post #465 of 1307 Old 07-02-2017, 12:56 PM
coderguy (AVS Forum Addicted Member)
Quote:
Originally Posted by Tomas2 View Post
The eShift algorithm is blurring the original 4K image. If you take a look at Javs's post where he included that image from ARRI, you will see exactly what is happening with this process.

4K content is not graded in this way...the image is optimized and still retains 4K pixel density. It also turns out that your hypothetical E3 would actually suffer from eShift, because the detail is NOT diagonal but rather vertical...a weakness of this process.
It does not matter how it's graded, because supersampling also "blurs" (anti-aliases) the image, yet imaging scientists still describe supersampling as increasing resolution, and in the abstracts they specifically refer to the anti-aliasing effect as increasing the spatial data, which would mean a higher-spec lens in e-shift's case since the pixels are split and the source is 4K.

The reason regular supersampling hits its resolution limits quicker is that it doesn't get a higher-res source to sample from like e-shift does; with regular supersampling you only get higher res by analyzing an accumulation of multiple frames (it's like rebuilding a poor frame from a reference frame, except that the reference frame is actually a composite of all the frames' differences, estimated and combined). With e-shift, JVC can just interpolate from a higher-res source directly, which bypasses a lot of the weaknesses of regular supersampling.

I think you're cherry-picking, considering only the part that is relevant to your argument.

post #466 of 1307 Old 07-02-2017, 01:59 PM
Tomas2 (Advanced Member)
I'm not "cherry-picking"; in fact, I'm using the very hypothetical that Darin is putting forward to demonstrate said lens requirements.

And if you followed the point I was making: if the image already has significant contrast (which is practically 100%) and saturated color, then there's no benefit from enhancement. The process is blurring detail and then emphasizing contrast, etc.

Standard rescaling of 4K or 2K would be more transparent than eShift could deliver. It's only in some special cases that eShift would shine. It's actually a deviation from the original intent: images that were intended to be soft, contrast-wise, would be reproduced with a lower global black level, higher contrast and more color saturation.

Earlier I posted someone's observations of his JVC eShift projector, and they aligned nicely with my conjecture. Also, IMO this whole "lens needs to be better" debate is based on a false premise to begin with.
post #467 of 1307 Old 07-02-2017, 02:18 PM
coderguy (AVS Forum Addicted Member)
Quote:
Originally Posted by Tomas2 View Post
I'm not "cherry-picking"; in fact, I'm using the very hypothetical that Darin is putting forward to demonstrate said lens requirements.

And if you followed the point I was making: if the image already has significant contrast (which is practically 100%) and saturated color, then there's no benefit from enhancement. The process is blurring detail and then emphasizing contrast, etc.
This is contradictory to the point of anti-aliasing. You're speaking in absolutes instead of in actual balancing traits of the image. The license plate image is not losing detail compared to its 1080p counterpart image; it's not even close. There were 3 stages: the 1080p non-shifted image, the 1080p e-shifted image, and the 4K e-shifted image. The UHD one looked so much better and had so much more AA that I don't think any argument can be made that it is losing any detail. The abstracts specifically say that increasing anti-aliasing is a form of increasing spatial resolution, even if it is not 1:1 mapped.

UHD (E-Shift): [image]

Here is 1080p: [image]
post #468 of 1307 Old 07-02-2017, 02:27 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by Tomas2 View Post
Also, IMO this whole "lens needs to be better" debate is based on a false premise to begin with.
What premise is that?

Do you agree or disagree that, all else being equal, native 4K projectors have higher lens requirements than 1080p projectors?

Do you agree or disagree with Dave's expert and myself that with totally incoherent light the lens requirements are just as high when sending the eShift sub-frames through the lens at different times as when sending them through at the same time? That is, with totally incoherent light the interference between the light from the sub-frames averages to zero, so sending the sub-frames through the lens at the exact same time doesn't require a better lens than sending them at different times. Do you disagree with that?

Thanks,
Darin
post #469 of 1307 Old 07-02-2017, 02:39 PM
coderguy (AVS Forum Addicted Member)
Their argument is just repeated over and over again without substance or substantiation.

The fact is simple: adding AA by deriving it from a 4K source, even if it adds no extra intrinsic detail, requires a higher resolution (or smaller simulated pixels) to represent that AA. This comes straight from the abstracts, since at some point even temporally combined AA goes beyond the original spatial resolution requirement, and that was working with source frames at the SAME resolution, let alone a 4K frame. Their claims contradict the imaging scientists.

Here is why:
AA interpolation of 4K sources is best represented at a finer level of detail, with smaller pixels. If you did not have the smaller simulated pixels you'd have edgier borders, so the AA is best represented at an actually higher output resolution (which is another thing supersampling is used for: upscaling).

They don't just use supersampling at the same resolution all the time; they also sometimes have to output it at a higher resolution to maintain the increased spatial data transformation, because the AA is better represented at a higher res. If you cannot understand how that invalidates the theory that the lens can be exactly the same (since almost all of us can agree that the lens requirement is at least partly based on resolution spec), then there is no point continuing this discussion, because you are skirting over the facts of how AA works with e-shift.

post #470 of 1307 Old 07-02-2017, 02:47 PM
Dave in Green (AVS Forum Special Member)
Quote:
Originally Posted by RLBURNSIDE View Post
... Trying to find consensus isn't really what science is about, that would be an argumentum ad populum which is a logical fallacy. ...
I hope you aren't referring to my earlier reference to a consensus forming. I was strictly referring to what seemed to be a growing recognition that some of the disagreement was based on a misunderstanding and distortion of the original premise in the first post as opposed to science directly applicable to that premise. In fact scientists do reach consensus (agreement) when they agree on scientific fact determined by scientific methods.

It strikes me as somewhat ironic that a discussion involving focus can so easily lose focus. The owner of the definition of the original premise is @darinp2 and I think it's productive when he steers the conversation back to that original premise and away from the many blind alley side paths that have strayed away from it.

Perhaps it would be beneficial for him to more frequently re-state the original premise in simplest terms to narrow down the scope and refocus the discussion as opposed to allowing the discussion to be pulled down some of those blind alley side paths that seem to do nothing but generate unnecessary animosity.
post #471 of 1307 Old 07-02-2017, 02:51 PM
Javs (AVS Forum Special Member)
Quote:
Originally Posted by Tomas2 View Post
I'm not "cherry-picking"; in fact, I'm using the very hypothetical that Darin is putting forward to demonstrate said lens requirements.

And if you followed the point I was making: if the image already has significant contrast (which is practically 100%) and saturated color, then there's no benefit from enhancement. The process is blurring detail and then emphasizing contrast, etc.

Standard rescaling of 4K or 2K would be more transparent than eShift could deliver. It's only in some special cases that eShift would shine. It's actually a deviation from the original intent: images that were intended to be soft, contrast-wise, would be reproduced with a lower global black level, higher contrast and more color saturation.

Earlier I posted someone's observations of his JVC eShift projector, and they aligned nicely with my conjecture. Also, IMO this whole "lens needs to be better" debate is based on a false premise to begin with.

Do you have a JVC yourself, or have you spent any good amount of time with one viewing UHD input signals?

I can't help but think that you speak from a point of inexperience; the shifted UHD images are not even close to lacking sharpness or contrast, and there is a considerable fine-detail delta over the 1080p counterparts.

post #472 of 1307 Old 07-02-2017, 03:49 PM
Javs (AVS Forum Special Member)
Quote:
Originally Posted by rak306 View Post
+1, I agree with everything you said, and I believe you have stated it the clearest.

Just one comment. When they quote a spec like "film can resolve 80 lp/mm" or "lens resolution: 150 lp/mm", the lines resolved will look like the fuzzy ones in this image, not sharp and distinct ones.

Make no mistake about it: for 4K movies, the raw movie frames are low-pass filtered to remove any frequency content above 1920 line pairs. (I have tried, unsuccessfully, to find out the shape/cutoff of that filter.) That means there is no content in the movie that requires a lens better than one that would pass those blurry lines.

Of course, what matters is the entire shape of the MTF curve (as you have pointed out before). And lenses don't just have a flat MTF up to a cutoff and then fall like a rock. So to get good MTF where you need it, the lens does resolve well beyond a pixel.
Make no mistake about it: for 4K movies, the raw movie frames are low-pass filtered to remove any frequency content above 1920 line pairs.

Horizontal or vertical line pairs? 4K would have 2160 horizontal line pairs, and they are not low-passed at 1920 (unless you mean vertical).

It's done to prevent moiré on high-frequency information; sometimes the camera itself will actually do this. Or they use a higher-resolution sensor to record internally but spit out a lower-resolution file. My Canon C100 does this: it's actually a 4K sensor, but it outputs 1080p totally free of moiré, as that is what is required to capture data which does not exhibit this issue.




post #473 of 1307 Old 07-02-2017, 06:33 PM
Tomas2 (Advanced Member)
Quote:
Originally Posted by Javs View Post
...I can't help but think that you speak from a point of inexperience; the shifted UHD images are not even close to lacking sharpness or contrast, and there is a considerable fine-detail delta over the 1080p counterparts.
That is true; I'm basing my comments on what I know about image processing and on comments made in this forum and others.

The consensus seems to be that sharpness increases when eShift is disabled (technically it has to) in most cases. I'm not saying it lacks contrast...just the opposite: it is actually emphasizing contrast as part of its MO.

Per my previous post, this process is basically encoding ~1.5K per subframe that can perceptually combine for a pseudo 3K composite. This explains why I think a standard 2K lens is more than adequate to transport my hypothetical payload(s).
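For what it's worth, the "pseudo 3K" arithmetic roughly checks out if you scale linear resolution by the square root of the doubled sample count (a back-of-envelope sketch, not a claim about perceived sharpness):

Code:

import math

samples_1080p = 1920 * 1080            # one sub-frame
total = 2 * samples_1080p              # two half-pixel-offset sub-frames
scale = math.sqrt(total / samples_1080p)
print(f"{1920 * scale:.0f} x {1080 * scale:.0f}")  # ~2715 x 1527, roughly "3K"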

For the record, I'm not saying anyone is wrong; this is pure conjecture on my part. That's why I always use "IMO". This stuff is not that important to me and, like I posted above, I may be 100% wrong.

post #474 of 1307 Old 07-02-2017, 09:32 PM
darinp (AVS Forum Special Member)
I'm sure there are still some holdouts who think that all that matters for lens requirements is what goes through the lens at one instant.

So, here is one more example. If somebody built a 1080p projector where only 1/4th of the area of each pixel was shown at once and this ran at 1000Hz, would the lens requirements go up?

For instance, if single pixels in 1080p space were 1 mm^2 on screen and this projector only sent 0.25 mm^2 worth of info for each pixel through the lens at once, would the lens MTF requirements change from what is required for 1080p to what is required for 2160p?

--Darin
post #475 of 1307 Old 07-02-2017, 10:47 PM
 
Dave Harper
Quote:
Originally Posted by darinp View Post
I'm sure there are still some holdouts who think that all that matters for lens requirements is what goes through the lens at one instant.

So, here is one more example. If somebody built a 1080p projector where only 1/4th of the area of each pixel was shown at once and this ran at 1000Hz, would the lens requirements go up?

For instance, if single pixels in 1080p space were 1 mm^2 on screen and this projector only sent 0.25 mm^2 worth of info for each pixel through the lens at once, would the lens MTF requirements change from what is required for 1080p to what is required for 2160p?


--Darin

I'm going to regret this, but......

Are these physical pixels, like on an imaging chip, or derived ones that happen afterwards?


If physical pixels, and by "this projector" you mean a 1920x1080 eShift projector, then it can't physically display anything smaller than one of its 1920x1080 pixels. If those pixels are 1 mm^2 each, then they can't show anything smaller than that, be it 0.25 mm^2, 0.5 mm^2 or anything under 1 mm^2. One pixel in that entire 1920x1080 imager's display is the smallest detail it is able to present. IOW, it can only be in one state at a time, i.e. anywhere between its black state and its full capacity (for each color, of course, and dependent on the tech we are talking about).

Or did I miss something you're trying to convey?


Quote:
Originally Posted by coderguy View Post
It became an issue because people were saying earlier that e-shift was being built only off a 1080p frame even for 4k content, which to me did not really make sense.......

For the record, I never once said or implied that.
post #476 of 1307 Old 07-02-2017, 10:58 PM
coderguy (AVS Forum Addicted Member)
Quote:
Originally Posted by Dave Harper View Post
I'm going to regret this, but......
For the record, I never once said or implied that.
You didn't, someone else did.
post #477 of 1307 Old 07-02-2017, 11:05 PM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by Dave Harper View Post
Are these physical pixels, like on an imaging chip, or derived ones that happen afterwards?
If it makes it easier let's say a company makes a projector with a 4K chip, but if you don't buy an expensive license key then the projector will only display 1080p images.

The way that it will display 1080p images with a 4K chip is to use 4 pixels in 4K space to represent one 1080p pixel. Every block of 4 pixels will be set to the same value, so that the final composite image can only have 2 million different values, even though the chip used has 8 million pixels.

Does it matter whether the projector lights up all 4 pixels in each block (which are sub-pixels in 1080p space) at the same time, or lights up just 1 pixel per 2x2 block at a time?

You can imagine what would happen with a single 1080p pixel wide white and black line test for MTF on this projector versus one that uses a native 1080p chip. Would it really matter for that MTF test that the 4K chip basically has pixels that are sub-pixels in 1080p space?


There are other ways I could propose to do this, like with shutters over DLP mirrors that block 3/4ths of the mirror at a time, but for the moment I'll use the above design.
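A one-liner makes the 2x2-block setup concrete (np.kron replicates each 1080p value into a 2x2 block on the hypothetical 4K chip):

Code:

import numpy as np

img_1080 = np.random.default_rng(3).random((1080, 1920))

# Drive the 4K chip as a 1080p device: each 2x2 block shares one value.
img_on_4k_chip = np.kron(img_1080, np.ones((2, 2)))
print(img_on_4k_chip.shape)  # (2160, 3840), yet only ~2.07 million distinct values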

--Darin

post #478 of 1307 Old 07-03-2017, 10:04 AM
darinp (AVS Forum Special Member)
Dave,

Feel free to ask your expert about this latest scenario if you want. I will try to shorten it to make it simpler:

"If somebody built a 1080p projector that used incoherent light and only displayed 1/4th of the size of each pixel at a time (changing which 1/4 each millisecond), would the lens MTF requirements be based on 1080p line pairs, or 2160p line pairs?"

--Darin
post #479 of 1307 Old 07-03-2017, 10:32 AM
 
RLBURNSIDE
Quote:
Originally Posted by Dave Harper View Post
One pixel in that entire 1920x1080 imager's display is the smallest detail it is able to present.
Incorrect. (IMO)

The smallest detail that the entire system can project is smaller than the distance between static pixels, because the wobulator's offset counts too.

Showing a pixel at position A then offsetting it slightly to show something different at position B counts as a "detail". This is the logical gap you guys are glossing over. This does result in a smaller distance which must be resolvable through the lens.

You keep ignoring the effect of the wobulator but offer no logical reason why you are allowed to do that. Please explain the rationale for this. Optical systems are evaluated as a whole; you can't ignore stuff in the optical path because it makes it simpler to prove a point.

The "smallest detail" is therefore in fact governed by the distance between wobulated and unwobulated frames, not the actual pixel distances themselves (because the wobulation offset is smaller, on purpose).

If a lens can't resolve something that small statically, it can't resolve it dynamically either (given incoherent light with extents much wider than the diffraction limit, this is what makes static = dynamic here).

The impact on the image due to moving the pixels over 1/2 a pixel-width needs to be visible through the lens (IMO), so if the lens isn't good enough, then those two sub-images will look the same, as seen on the screen. You can't get around this by ignoring the existence of the wobulator as if it weren't part of the entire optical system.

So this entire problem can be reduced thus: what are the lens requirements to perfectly resolve the differences between nearest sample offsets emitting distinct light information (whatever that may be)?

Static vs dynamic is a red herring. Wobulation works and effectively doubles the spatial sampling frequency (or increases resolution or reduces wavelength, all interchangeable terms). This is known and not up for debate, really.

The only question is: do people here accept that, at any given position, on average, the distance between neighbouring (thus distinct) spatially arrayed sample coordinates is 1/2 pixel instead of a whole pixel? I think we can reduce the entire thread to this question, more or less (ignoring diffraction effects, which Wikipedia tells us we can, because we're not trying to see details on the order of hundreds of nanometers but rather a couple of microns; perhaps this is incorrect).
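One way to see the halved sample spacing (a sketch of the combined sample lattice, assuming an exact half-pixel diagonal shift):

Code:

import numpy as np

pitch = 1.0  # one 1080p pixel pitch, arbitrary units
grid_a = np.array([(x, y) for x in range(4) for y in range(4)], float) * pitch
grid_b = grid_a + 0.5 * pitch  # wobulated sub-frame, shifted diagonally

samples = np.vstack([grid_a, grid_b])
d = min(np.linalg.norm(p - q) for i, p in enumerate(samples)
        for q in samples[i + 1:])
print(d)  # ~0.707 * pitch centre-to-centre; 0.5 * pitch along each axis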
post #480 of 1307 Old 07-03-2017, 10:43 AM
darinp (AVS Forum Special Member)
Quote:
Originally Posted by RLBURNSIDE View Post
Incorrect. (IMO)

The smallest detail that the entire system can project is smaller than the distance between static pixels, because the wobulator's offset counts too.

Showing a pixel at position A then offsetting it slightly to show something different at position B counts as a "detail". This is the logical gap you guys are glossing over. This does result in a smaller distance which must be resolvable through the lens.
Yep. Just to add Javs's picture one more time.

[image]
If this was the only image a projector was ever supposed to show, would the lens requirements be different between these 3:

1. The large green and red "pixels" are sent sequentially through the lens at 1000Hz.
2. The large green and red "pixels" are sent at the same time through the lens.
3. Seven pixels are sent through the lens: 3 red, 3 green, and 1 yellow?

Other than some very minor stuff, those 3 different cases would have the same spatial requirements for the lens. It seems like some people keep gravitating to "But they go through the lens at different times," but they have never really explained why they think this matters for the lens requirements, beyond that being just how they feel light should work.

--Darin