Resolution requirements for lenses - Page 39 - AVS Forum
post #1141 of 1307 Old 08-03-2017, 10:34 AM
darinp - AVS Forum Special Member
Quote:
Originally Posted by CherylJosie View Post
Until we agree on basic physics, we are nowhere. Wikipedia:

In physics, two wave sources are perfectly coherent if they have a constant phase difference and the same frequency.
Thanks. I did a little refresher and I had either missed (or forgotten) the part about the light needing to come from a point source.

Is it your position that the individual mirrors for each pixel don't count as enough of a point source for the light the lens sees from them to be fairly coherent if it were all one wavelength?

Thanks,
Darin
post #1142 of 1307 Old 08-03-2017, 10:50 AM
darinp - AVS Forum Special Member
Quote:
Originally Posted by Dave Harper View Post
I will leave you with this from TI's XPR Training Modules:
Quote:
...Another general design tip is that the projection optics must be able to resolve the pixel size of the DMD. The DLP660TE contains an array of 5.4 micron mirrors. The lenses must be able to see these small feature sizes and magnify them to create sharp 4K UHD...
Do they make the same statement about their non-XPR projectors?

As I've said before, you can look at just the native frame to get a very good idea of how well the lens will do with the eShifted images.

A lot of this comes down to where you draw the line for adequate. I don't believe the line for an adequate lens for just the native resolution is as high as the line for adequate eShifted images (whether they go through the lens at the same time or not), but I am willing to consider that A and C have the same lens requirements. From the beginning over 18 months ago the disagreement was whether the temporal separation of the sub-frames matters, and this statement from TI does not address that.

It wouldn't surprise me at all if TI explained it the way they did because that is easiest for people to understand: if you put up a native image and it is sharp enough, then it should be sharp enough for eShift. That does not mean that the line for "adequate" is the same, and if it were, then TI should have made the same statement about their non-XPR projectors.

--Darin
post #1143 of 1307 Old 08-03-2017, 12:59 PM
cmjohnson - AVS Forum Special Member
Nope, still wrong, stereodude, because the "virtual" pixels, which are 1/2 size (H and V) and which are real picture information, require the lens to have sufficient angular resolution to resolve pixels of that size REGARDLESS of the fact that the virtual pixels are not actually discretely projected through the lens system.

If that were not the case, then you could get extra performance out of any lens by shifting it slightly relative to the imaging device on a dynamic basis, giving "free" bandwidth in direct violation of Nyquist.

The shift is angular spatial information and it must be resolvable to a defined standard. As feature size decreases, the sharpness of the lens has to increase in order to achieve the same MTF value.

A 4K projector requires a better lens than a 1080p projector. A 4K e-shift projector requires a better lens than a 1080p projector.

If this were not true then nothing would ever require a better lens, according to the way you're thinking. So why not use a lens built for an early-generation VGA projector at 640x480 resolution? Well, why not? Because if a 1080p lens is good enough for 4K, then a VGA lens should be good enough for 1280x960 or so, according to your (defective) reasoning.
post #1144 of 1307 Old 08-03-2017, 01:42 PM
darinp - AVS Forum Special Member
Quote:
Originally Posted by Dave Harper View Post
There is no new or smaller detail where the lens is concerned.
This being a reason for your position contradicts what you say later about B and C having the same lens requirements, since in the C case there is smaller detail that goes through the lens at a given point in time. To say that B and C have the same lens requirements is to say that it doesn't matter whether smaller detail goes through the lens at a moment in time, or over multiple moments in time.
Quote:
Originally Posted by Dave Harper View Post
I think the confusion is coming in because I am talking "time" in the reply above, which in my mind when talking about eShift assumes we know we are talking about the native sized 1080p pixels, not smaller 4K sized pixels, be they spatially, temporally or otherwise created after the lens elements.

Looking at it now after all this time has passed, it seems darinp focused on the time part of it more so than the fact that it was more about the size of the pixels, and that eShift used the same resolution and same sized pixels as its 1080p non-eShift counterpart, so it HAS TO send them separately in order to even display anything resembling a higher resolution like faux-K, 3K or 4K.
I did take your "at any given point in time" as time like on a precise clock, and took your position to be that the lens only needed to resolve the native resolution because at any given point in time only 4 million pixels (for XPR) are going through the lens, not 8 million. Since then it seems like you have tried to say that I was wrong about what your position was (I can look for links on that one if you disagree), but IMO you cleared it up pretty quickly when, one day after your original post, you followed it up by posting:
Quote:
Originally Posted by Dave Harper View Post
Exactly, which is exactly why the lens only needs to resolve the imager's native resolution at any one given time, as its native resolution is all that's actually passing through the lens.
...
But think of the image's light path in time. It doesn't reach your eyes until AFTER it goes through the lens (one flash of its native resolution at a time, in super fast sequence), bounces off the screen and back into your eyes, which then allows your brain to merge the images due to persistence of human vision.
and a day later you posted:
Quote:
Originally Posted by Dave Harper View Post
Your eyes are ONLY ever seeing one half of the XPR frame at a time, but they're flashed fast enough that your brain merges them later. This happens AFTER the image, containing one frame at a time, goes through the lens sequentially, NOT simultaneously. Therefore, you only need the lens to resolve fully the detail of that one native resolution frame.
It seems like at one point we went round and round, with me saying that the position you took was that the temporal separation between the sub-frames was the reason B only had the same lens requirements as A. You would tell me that I was misrepresenting your position, then you would again claim that it is the fact that the sub-frames are separated in time that makes B have the same lens requirements as A. You even did it again in this post, when you pretty much said that it is the fact that the yellow pixel is combined after the lens and not before the lens that means the lens requirements are lower. Do you disagree with this representation of how things have gone? If so, I would like to know how, because I think you have indicated close to 20 times that what matters is that the smaller sized sub-pixels do not go through the lens at any given moment in time.

If your position from the beginning was really about the size of the pixels on the chip, why have you repeated over and over again for more than a month that what matters is that the smaller detail doesn't go through the lens at a single moment in time, and only looks like it does to our vision?
Quote:
Originally Posted by Dave Harper View Post
I do disagree with him still that..."The lens requirements are built around the final composite image..." though.
And yet you say that B and C have the same lens requirements, which seems kind of interesting to me. In the case where JVC creates 2700 lines by using temporal differences, they still measure the performance of the lens on the 2700-line composite image.

How about for a DLP? Do you judge how well the lens lines up the 3 primary colors by how well it displays the composite image that has white, or only the images for a given moment in time that do not contain any white? How about when you calibrate the grayscale for a single chip DLP? Do you calibrate by the composite images, or by the individual images that are separated by maybe 1 millisecond?

Also, looking back at the beginning of this subject coming up again in the Optoma thread, one of the confusions was likely a semantic one with Ruined's "fully resolved". As I explained, no lens is perfect. For the E3 example, if a person can read the E when shown all by itself, is the E fully resolved, or maybe less than fully resolved? Of course, if you choose a lens that resolves down to 1/100th the size of one of the branches on the E, then that lens will be able to resolve things much smaller than the E. However, resolving the E to the point that a person can see it when shown by itself does not mean that the same person would be able to see the E3 as E3 when shown at 120Hz. Adequate for one is not adequate for the other, even though at any given moment in time there is no extra detail going through the lens. It is only after considering the composite image that the extra detail is present and needs to be accounted for.

A truly perfect lens works for everything, but I tried to touch on what "needs" for lens requirements means in the first post in this thread.
Quote:
Originally Posted by Dave Harper View Post
My expert said the pixels DO matter, because they are what is creating the photons you keep harping on darinp.
Yet he also said that both sub-frames can go through the lens at the same time with totally incoherent light and it wouldn't matter to the lens requirements. If you send both sub-frames at the same time how would the lens even know that the pixels are 4 times the size of the smallest element going through the lens at a given moment in time? If you had 100% fill ratio and sent both sub-frames at the same time, is there any way the lens would know that the red, yellow, and green image is made up of 2 pixels in 1080p space instead of 7 pixels in 4K space?
Quote:
Originally Posted by Dave Harper View Post
They are in the shape of each pixel, going through the lens, so the smaller each one is, the better lens is what's required. Since eShift projectors only have 1920x1080 .67" imager sized pixels creating those photons, then that is all they need to resolve at any one given point in time.
Your expert made some fairly contradictory claims when he seemed to say both that the temporal separation between the sub-frames didn't matter (other than a minor factor of the light now being totally incoherent) and also that the temporal separation was important, when he used the fact that the sub-frames don't go through the lens at the same time as part of his reasoning. Did he ever weigh in on A vs C in my example?
Quote:
Originally Posted by Dave Harper View Post
By using TEMPORAL dithering, just like the JVC guy said in that HTG video darinp.
Exactly. And when they measured the 2700 lines that they had created through temporal dithering they measured the whole composite image that had 2700 lines. Do you disagree with that?
Quote:
Originally Posted by Dave Harper View Post
Oh, so you're saying the lens has some "persistence" then?
Of course not. When JVC uses that temporal dithering to create 2700-line images, the lens is judged by how well it displays them, and the fact that it takes time to display them doesn't matter for human vision or for the cameras or meters JVC uses to measure that 2700-line pattern. I'm not sure why you think that would require any persistence in the lens, especially since you say that B and C in my example have the same lens requirements. Taking that position is taking the position that the temporal separation between the individual sub-frames doesn't really matter.
Quote:
Originally Posted by Dave Harper View Post
Yep, which is why in darinp's scenario, lens A=B=C.
To say that B is the same as C is to agree with basically the whole reason I started this thread 18+ months ago. I can provide a link again if anybody wants, but if people look back at where this whole thing started, the original premise was that eShift didn't require any better lens than non-eShift. In my first post on the subject I asked for the reason, as the reason mattered to the discussion. If they had said A=C then that is what I would have considered, but they made it clear that their reason for the claim was the temporal separation between the sub-frames. I explained why this didn't really matter, and many people repeated over and over again that the sub-frames don't go through the lens at the same time and therefore the lens requirements are the same. I even got responses, like here, acting like I must not know that the sub-frames go through the lens at different times.

I lost track of how many times I had to explain that I wasn't saying the sub-frames go through the lens at the same time, I was saying it was irrelevant.

Now by saying that B=C you are agreeing with my main point right from the beginning over 18 months ago. Are you sure you want to do that?

The part about whether A=C was not in dispute 18 months ago. I believe everybody agreed that C had higher lens requirements than A. I didn't pursue that one because we were all in agreement.

As I've said, I'm flexible on A vs C. I'm not to the point of believing that those really are equal, but I think there are arguments to be made and it could turn out that way, especially if the eShift algorithms don't actually take advantage of some smaller detail, even when they can.
Quote:
Originally Posted by Dave Harper View Post
But the lens never sees the "yellow pixels". Only our brains do because we have persistence of vision, the lens does not. The lens only sees a RED 1080 sized pixel in the 1st/120th of a second, and then it sees the shifted GREEN 1080 sized pixel (no different than if we shifted the image using the lens shift feature to move it around the lens surface.) in the 2nd/120th of a second.
I'm sure you don't realize it, but you are again contradicting your position that A=B=C. The reason I came up with those scenarios is that they keep people honest about what they think the real causes are here. Since the only difference between B and C is the temporal separation, once you say that B=C you have agreed that the temporal separation doesn't matter, and so you are contradicting yourself if you again argue that A=B because of the temporal separation in B.

If you truly believe that B=C then you are agreeing with my main original (and current) position that the temporal separation doesn't matter (at least leaving out minutia). In that case your A=C would be what disagrees with what I and many others believed right from the beginning and so that is where we should concentrate our efforts if you want to stick with B=C.

As I've tried to say, I'm not really very flexible on whether the temporal separation matters. I believe it is basically irrelevant, as I have pointed out from the beginning and I am very confident in that position. But as far as A=C, I think there are arguments to be made for that and I'm willing to consider that I and everybody else were wrong from the beginning about that one.

However, if you really want to stick with your position that B=C then you will have to give up your positions like this:
Quote:
Originally Posted by Dave Harper View Post
Yes, it is "just light", and that light is only a 1920x1080 .67" imager sized pixel at any given point in time, so that's all it needs to resolve.

There is no "increased resolution of the image" as far as the lens is concerned. That happens in our brains after it merges them together with our persistence of vision.
because C does have "increased resolution in the image" as far as the lens is concerned and if B=C then it doesn't matter whether that increased resolution has temporal separation or not.

If you want to keep using your expert as proof then you should tell us what his actual position is on A vs C (or B vs C, since it is the same thing for anybody who says A=B). For somebody to be the final word on a subject they should be able to answer without contradicting themselves. To do otherwise is to ask people to do something smart people shouldn't do: consider only the part of what your expert said that you want them to believe, while ignoring the part where your expert said that both sub-frames could go through the lens at the same time without a problem, with totally incoherent light.

--Darin

post #1145 of 1307 Old 08-03-2017, 03:18 PM
CherylJosie - AVS Forum Special Member
Quote:
Originally Posted by Tomas2 View Post
Sorry, but that was not what you were asking me about in your original question

i.e. the temporal "compression" used with interlaced formats vs progressive and whether it may impact or possibly lessen a given lens requirements.

CRT persistence has nothing to do with the premise of your question.
Interlacing is not a form of temporal compression, it is a form of temporal multiplexing. The flow of information occurs at the same rate regardless of whether the frame is composed of two interlaced fields or not because the frame rate is constant and so is the resolution.

Now take the case where both of the interlaced fields comprising any given frame are sampling an identical static image. The information contained in the 1080i and 1080p formats is then exactly identical, and there is no need to deinterlace with motion-detecting algorithms; the fields can be directly combined into a single frame. So as long as the CRT is capable of either interlaced or progressive scan at 30fps, we can ignore motion artifacts and flicker entirely and concentrate on the lens instead.

CRT phosphor persistence means that, in the region of the scan line where phosphor continues to glow after the electron beam is already several lines down the raster, the phosphor may as well be illuminating not just two adjacent lines in the interlaced field but the entire field simultaneously. Some of the videos I posted demonstrated that it actually is doing exactly that to some extent, because the entire screen is lit at varying levels of brightness at any instant in time.

As illustrated in the last slow-motion CRT video I posted, one scan has already largely decayed by the time the next scan lights up, so we can assume for the purposes of this discussion that the two fields of interlaced video are transmitted through the lens in total but at separate times, halving the peak information rate through a projection lens.

If your claim is that more information can be transmitted through a given lens when the fields are time-multiplexed because eShift halves the peak information rate, this should apply to both eShift and interlaced raster because other than the overlapping of pixels, these two cases are essentially identical.

Why would this information benefit occur with diagonal half-pixel shift of overlapping pixels, but not with vertical whole-pixel shift of a non-overlapping interlaced raster? Are you claiming that by similarly crowding the scan lines of 1080i until they partially overlap, the maximum limit for the information being transmitted through a given lens would also increase? How?

There are only two possible cases. Either that partial diagonal overlap conveys additional information, or it does not. The images posted here prove that it works.

If pixel overlap conveys additional information, then either the lens must be capable of resolving that overlap for the additional information to be preserved at the display, or it must be true that a CRT 1080i raster scan imposes fewer restrictions on the optics than 1080p. It has to be one or the other.

Taking this argument to its logical conclusion, the highest resolution through any given lens would occur with only one pixel lit at any given time, regardless of the display technology, which is obviously an absurd conclusion. That principle applies to audio A/D and D/A converters and their anti-alias filters, but not to optical lenses, which have no DSP capability to take advantage of.
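To put rough numbers on the peak-versus-total point, here is a quick back-of-envelope tally (a Python sketch; the 30fps figure matches the static-image case above, and the eShift line is my illustrative assumption, not a measurement):
Code:
def rates(label, px_per_slice, slices_per_frame, fps=30.0):
    # A "slice" is whatever passes the lens at one instant:
    # a whole frame, one field, or one eShift sub-frame.
    total_flow = px_per_slice * slices_per_frame * fps
    print(f"{label:10s} peak: {px_per_slice / 1e6:4.2f} Mpx at once, "
          f"flow: {total_flow / 1e6:6.1f} Mpx/s")

rates("1080p/30", 1920 * 1080, 1)   # whole frame at once
rates("1080i/30", 1920 * 540, 2)    # two interleaved fields per frame
rates("eShift/30", 1920 * 1080, 2)  # two shifted full-res sub-frames per frame
Interlacing halves what the lens passes at any instant while the total flow stays put; eShift keeps the per-sub-frame peak at native 1080p while doubling the total flow. That is why the two cases make a fair comparison.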

That is why the fact of a lens being analog was raised. The objection that lenses have quantum effects does not negate the fact that they are analog devices rather than sampled. Now if you want to invent a lens that does quantum sampling, there might be a Nobel in THAT for you.
post #1146 of 1307 Old 08-03-2017, 04:20 PM
CherylJosie - AVS Forum Special Member
Quote:
Originally Posted by darinp View Post
Thanks. I did a little refresher and I was either missing (or forgot) the part about the light needing to come from a point source.

Is it your position that the individual mirrors for each pixel don't count as enough of a point source to say the light from them that the lens sees would be pretty coherent if it was all one wavelength?

Thanks,
Darin
So I have yet again talked myself into the position of ignorant resident expert. There is probably a physicist somewhere laughing his ass off at me.

My position is that I am not nearly smart or educated (let alone experienced) enough to make definitive statements on complicated matters of physics and stake my life on them, but yes, the pixels are not coherent light sources for the purposes of this discussion because the phase of filtered light from a lamp is random.

Pixels are point sources, but there is no two-slit or other tightly constrained aperture to induce that constant phase shift down the two discrete paths that each photon takes in parallel either. A lens is neither an array of slits nor an array of pinholes, except maybe it can be modeled as such across its surface down to a given center-center spacing and thus resulting in a particular angular resolution.

My understanding is that the diffraction and related effects in a typical projection lens affect all light equally regardless of whether it is coherent or not, and the same quantum/wave effects we see in the two-slit experiment are responsible for the blurring of incoherent light images through a lens too (at least according to my admittedly naive reasoning and total lack of relevant background).

Quote:
Originally Posted by cmjohnson View Post
Nope, still wrong, stereodude, because the "virtual" pixels which are 1/2 size (H and V) which are real picture information requires the lens to have sufficient angular resolution to resolve pixels of that size REGARDLESS of the fact that the virtual pixels are not actually discretely projected through the lens system.
Concise.
post #1147 of 1307 Old 08-03-2017, 04:40 PM
darinp - AVS Forum Special Member
Quote:
Originally Posted by CherylJosie View Post
My position is that I am not nearly smart or educated (let alone experienced) enough to make definitive statements on complicated matters of physics and stake my life on them, but yes, the pixels are not coherent light sources for the purposes of this discussion because the phase of filtered light from a lamp is random.
I realize that we aren't dealing with pinholes, and this is kind of off track since we have many wavelengths of light in most of the applications we are discussing, but here is an article I was looking at and thought was interesting, where a lamp was used with a pinhole and then a wavelength filter in figure 3:

http://amasci.com/miscon/coherenc.html

The author also had what I thought was an interesting list of increasing coherence:
Quote:
Sources in increasing coherence

Bright cloudy sky (least spatially coherent)
Fluorescent tube lamp
Frosted incandescent bulb
Sun during clear weather
Clear incandescent bulb
Clear incandescent bulb w/noncoil filament (aquarium bulb)
LED
Electric welding arc 50ft away
Laser (coherence length in mm, up to a few meters)
Starlight (coherence length 1000s of km)
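That list is mostly about spatial coherence. On the temporal side, coherence length goes roughly as wavelength squared over bandwidth (L_c ~ lambda^2 / delta-lambda). A rough sketch in Python, where the bandwidth figures are my own illustrative guesses:
Code:
# Temporal coherence length: L_c ~ lambda^2 / delta_lambda
for name, lam_nm, dlam_nm in [
    ("filtered lamp (10 nm filter)", 550, 10),
    ("LED", 550, 30),
    ("laser diode", 650, 0.01),
]:
    lc_um = lam_nm ** 2 / dlam_nm / 1000  # nm -> micrometers
    print(f"{name:30s} L_c ~ {lc_um:,.0f} um")
The laser comes out around 4 cm here, consistent with the "mm up to a few meters" entry; the lamp and LED are tens of microns, which is why none of these projector sources behave like laser-coherent light at the lens.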
--Darin
post #1148 of 1307 Old 08-03-2017, 05:14 PM
Highjinx - AVS Forum Special Member
Some interesting discussions... Darinp, there should be some financial compensation for the starter of threads that get so many views! Surely the increased traffic increases the site's value.

Anyway, it's interesting that JVC's 4K native Pro projector (DLA-SH4KNL) and the e-shifted 8K Pro projector (DLA-VS4800) use the same lenses.

Attached Images: JVC.jpg (351.3 KB)

post #1149 of 1307 Old 08-03-2017, 05:18 PM
Highjinx - AVS Forum Special Member
Dave Harper, what a detailed post. You sir have my utmost admiration.......!
post #1150 of 1307 Old 08-03-2017, 05:21 PM
Tomas2 - Advanced Member
Quote:
Originally Posted by CherylJosie View Post
Interlacing is not a form of temporal compression, it is a form of temporal multiplexing. The flow of information occurs at the same rate regardless of whether the frame is composed of two interlaced fields or not because the frame rate is constant and so is the resolution.
The format effectively doubles the time resolution (also called temporal resolution) as compared to progressive formats (for frame rates equal to field rates).

Interlaced video is a technique for doubling the perceived frame rate without consuming extra bandwidth, AKA compression: a trade-off of spatial resolution for temporal resolution.

Per uncompressed 4:2:2 serial digital interface (SDI), 1080i @ 59.94 requires ~1.5 Gb/s, where 1080p @ 59.94 is ~3 Gb/s. I say approximately because each includes 16 channels of embedded PCM audio plus other metadata.
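For anyone who wants to check those figures, the payload arithmetic is simple. A Python sketch, assuming 10-bit 4:2:2 and the standard 2200x1125 total raster (treat the constants as assumptions, not a spec quote):
Code:
TOTAL_SAMPLES = 2200 * 1125  # active 1920x1080 plus blanking intervals
BITS_PER_SAMPLE = 10
WORDS_PER_PIXEL = 2          # one luma word plus alternating Cb/Cr

def sdi_gbps(full_frames_per_sec):
    return (TOTAL_SAMPLES * full_frames_per_sec
            * WORDS_PER_PIXEL * BITS_PER_SAMPLE / 1e9)

print(f"1080i/59.94 (29.97 full frames/s): {sdi_gbps(30000 / 1001):.3f} Gb/s")
print(f"1080p/59.94 (59.94 frames/s):      {sdi_gbps(60000 / 1001):.3f} Gb/s")
This lands on ~1.48 and ~2.97 Gb/s, matching the nominal 1.5G and 3G SDI rates.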

post #1151 of 1307 Old 08-03-2017, 05:24 PM
Javs - AVS Forum Special Member
Quote:
Originally Posted by Highjinx View Post
Some interesting discussions... Darinp, there should be some financial compensation for the starter of threads that get so many views! Surely the increased traffic increases the site's value.

Anyway, it's interesting that JVC's 4K native Pro projector (DLA-SH4KNL) and the e-shifted 8K Pro projector (DLA-VS4800) use the same lenses.
Not really. Even the lens in the RS620 is enough to annihilate the minimum requirements to discern a proper 8K line pair; I showed this pages and pages back.

The lens in that 4K projector would be good enough to discern probably up to a 32K line pair, but we get back into that same old argument of what is defined as the minimum.

post #1152 of 1307 Old 08-03-2017, 05:26 PM
Highjinx - AVS Forum Special Member
Quote:
Originally Posted by Javs View Post
Not really. Even the lens in the RS620 is enough to annihilate the minimum requirements to discern a proper 8K line pair; I showed this pages and pages back.

The lens in that 4K projector would be good enough to discern probably up to a 32K line pair, but we get back into that same old argument of what is defined as the minimum.
So true!

post #1153 of 1307 Old 08-03-2017, 05:29 PM
Javs - AVS Forum Special Member
Here it is again...

The legal minimum MTF requirements...

Legal MTF limit to discern a 1080p line pair.
[image]

Legal MTF limit to discern a 4K line pair.
[image]

Here is around where I think my RS620 lies...
[image]

This would be close to the MTF limit for 8K; again, I don't believe there is any reasonable 1080p projector out there with MTF this bad...
[image]
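For a sanity check on how much headroom an ideal lens has at these pitches, here is a diffraction-limited MTF sketch in Python (my assumptions: f/2.4 aperture, 550nm green light, a 0.67" 16:9 chip; real lenses with aberrations sit below this curve):
Code:
import numpy as np

wavelength_mm = 550e-6                   # 550 nm green light
f_number = 2.4                           # assumed lens aperture
cutoff = 1 / (wavelength_mm * f_number)  # ~758 cycles/mm

def dl_mtf(nu):
    # MTF of an aberration-free lens with a circular aperture
    x = np.clip(nu / cutoff, 0.0, 1.0)
    phi = np.arccos(x)
    return (2 / np.pi) * (phi - np.cos(phi) * np.sin(phi))

chip_h_mm = 0.67 * 25.4 * 9 / np.hypot(16, 9)  # 0.67" diagonal -> ~8.3 mm tall
for label, rows in [("1080p", 1080), ("4K", 2160), ("8K", 4320)]:
    pitch_mm = chip_h_mm / rows                # pixel pitch on the chip
    nu = 1 / (2 * pitch_mm)                    # one line pair = two pixels
    print(f"{label}: {nu:4.0f} lp/mm on chip, ideal-lens MTF = {dl_mtf(nu):.2f}")
Even at the 8K pitch an ideal f/2.4 lens keeps MTF well above zero, which fits the point: the argument is about where "adequate" sits, not about a hard cutoff.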


post #1154 of 1307 Old 08-03-2017, 05:38 PM
AJSJones - Advanced Member
Quote:
Originally Posted by CherylJosie View Post
So I have yet again talked myself into the position of ignorant resident expert. There is probably a physicist somewhere laughing his ass off at me.

My position is that I am not nearly smart or educated (let alone experienced) enough to make definitive statements on complicated matters of physics and stake my life on them, but yes, the pixels are not coherent light sources for the purposes of this discussion because the phase of filtered light from a lamp is random.

Pixels are point sources, but there is no two-slit or other tightly constrained aperture to induce that constant phase shift down the two discrete paths that each photon takes in parallel either. A lens is neither an array of slits nor an array of pinholes, except maybe it can be modeled as such across its surface down to a given center-center spacing and thus resulting in a particular angular resolution.

My understanding is that the diffracting etc. effects in a typical projection lens affect all light equally regardless if it is coherent or not, and the same quantum/wave effects we see in the two-slit experiment are responsible for the blurring of incoherent light images through a lens too (at least according to my admittedly naive reasoning and total lack of relevant background).
Coherent light is very rare, while highly collimated light with a very narrow bandwidth (spread of wavelengths) from, say, an interference filter is not difficult to obtain. It is still incoherent. The light from a laser is coherent because of the way it is generated (in a resonant cavity of some sort), so all the photons/waves necessarily come out in phase. So let's put the interference of light rays/waves/photons issue to bed right there.

Diffraction occurs at edges. In the two-slit experiment it happened at both slits, and the diffracted waves were responsible for the pattern of cancellation and reinforcement. In the projector there is a lens with an aperture; the light is diffracted at the edge of the aperture, and this limits the possible resolution of the lens. The smaller the aperture, the greater the diffraction and the lower the resolution. (Many photographers here may be aware of this: on a DSLR the captured image gets softer and softer as one stops down to narrower than f/8 or f/11.) Other than that aperture effect, there's no diffraction going on in a lens, only refraction.

As for the pixels being point sources, that's only true in digital space and goes to stereodude's comments. Those are based on sampling theory and the ability of a digitally sampled array to carry only a limited resolution (or MTF at a certain frequency, up to the limit of the array). Hence the 2160p MTF 0 comments. Two things: one, the whole idea of the eshift (using a 4K source at least) is to extend the bandwidth beyond the limit of either subframe (I think that was the point of the JVC MTF graph a while back). It would be analogous to sending two different audio signals, each limited to some Nyquist-limited upper frequency but different in content, to allow reconstruction of signals beyond that limit when appropriately combined. The video version of that is the pre-process into two subframes with slightly different content, so that when superimposed they contain more information than allowed by the Nyquist limit of either. So we can go beyond the 2160 limit because the eye does the processing into "perceived" pixels that recombine to realize the effect of the pre-processing. A toy simulation of this follows below.
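Here is a 1-D toy of that superposition idea (this is not JVC's or TI's actual algorithm; the box-shaped pixels, block-average subframes, and 8 fine samples per coarse pixel are all my assumptions for illustration):
Code:
import numpy as np

UP, N_COARSE = 8, 64                 # fine samples per coarse pixel, coarse pixels
x = np.arange(N_COARSE * UP) / (N_COARSE * UP)
target = np.sin(2 * np.pi * 40 * x)  # 40 cycles: above one sub-frame's 32-cycle Nyquist

def subframe(shift):
    # Average the target over coarse pixel apertures on a grid offset by
    # `shift` fine samples, then hold each value across its aperture,
    # which is what the projected pixel does.
    t = np.roll(target, -shift)
    coarse = t.reshape(N_COARSE, UP).mean(axis=1)
    return np.roll(np.repeat(coarse, UP), shift)

composite = 0.5 * (subframe(0) + subframe(UP // 2))  # half-pixel-shifted pair
for name, img in [("single sub-frame", subframe(0)), ("shifted composite", composite)]:
    corr = np.corrcoef(img, target)[0, 1]
    print(f"{name}: correlation with the 40-cycle detail = {corr:+.2f}")
The composite recovers (attenuated) detail beyond either sub-frame's own Nyquist limit because the half-pixel offset cancels the first alias; neither sub-frame alone carries it cleanly. That is the modest bandwidth extension described here.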

The fact that the final 1/4 pixels are not addressable and cannot be expected to carry the full 4K info because they are not "pure" (they are necessarily affected by the content of the subframe pixels they come from) is why the bandwidth extension is modest. Barco's processor for the 2.7k DMD creates 16MP of addressable elements during the processing for enhancement, interpolation, sharpening, scaling if needed, etc., everything all in one go, and then backs out the subframes that will give the most perceived enhancement. The MTF graph and test pattern shots appear to show it's not great, but the pictures of real-world screens and the feedback in general suggest there's more than, well, meets the eye. I would put that down to the nature of the algorithm using knowledge of human vision/perception, and to decreased SDE. So I think there is more bandwidth coming through, and a better lens will keep the system MTF (the combination of image and lens) at its optimum. Once we define an "objective", "quantitative" definition of adequate for 1080p, the lens will need to be better (in all probability not that much) to maintain the benefit of the extra bandwidth.

That was covering things from the digital sampling theory side and why I think the e-shift's improvement will benefit from a better lens than the 1080p subframes alone would need (the sequential/simultaneous issue I have stopped considering).

From the point of view of the pixels being point sources: they are not, in real life in the analog world. They are squares where image elements are formed and they have discrete boundaries; image quality depends on the information from adjacent pixels not mixing/overlapping, so we don't get softness/blur etc. An image pixel with perfectly distinct edges on the chip (such as a DMD element, ignoring any scatter from the substructure for now) will be seen as a square with blurred edges on the screen, as a result of the limitations of the lens. The least amount of blur will occur with a "diffraction-limited" lens, where the optical quality is no longer the limiting factor but rather the aperture of the lens, which results in an Airy disk that cannot be made smaller (a basic law of physics that we cannot break: https://en.m.wikipedia.org/wiki/Airy_disk). It's unlikely that projectors use such high quality lenses, but the amount of blur seen in many systems, as we have all noted by seeing good SDE, is pretty small as a % of the size of a pixel.

The "objective/quantitative" threshold we used above to "measure" the system MTF now shows up in the analog world - how much do I have to increase the lens blur before people perceive a degradation in quality for the 1080p system. Done by audience participation, of course, since we are now "perceiving not measuring" This will create pixels with "X blur per pixel width" as the threshold we can perceive as "inadequate". (Chances are that most lenses are actually significantly better than this so this "threshold" situation may only be a demonstration case). Nonetheless, any pixel perceived on screen from the e-shift process through that lens as enclosed by the edges of the 1/4 pixel (1/2 width) will now have "2x blur per pixel width" - and we just established that as "inadequate" - so we need a better lens to keep the blur down, if we want to be able to "perceive" the improvement available.

How much better will depend on the success of the e-shift strategy. On the other hand, if the lens was already pretty good (i.e. more than adequate, beyond "requirements"), then the benefit may be trivial or even imperceptible by humans.

I think I'm done here

post #1155 of 1307 Old 08-03-2017, 05:42 PM
darinp - AVS Forum Special Member
Quote:
Originally Posted by Highjinx View Post
Dave Harper, what a detailed post. You sir have my utmost admiration.......!
It was very detailed. I hope he will clarify whether he believes the temporal separation between the sub-frames matters.

People who believe the temporal separation is irrelevant would believe that B=C.

People who believe that it matters whether the 1/4 sized yellow sub-pixel goes through the lens at a given moment in time, or in 2 pieces at different times, are the ones who believe the temporal separation between the sub-frames matters.

Dave seems to have taken both positions in the same post.

--Darin
post #1156 of 1307 Old 08-03-2017, 06:41 PM
cmjohnson - AVS Forum Special Member
Whether information passes through the lens one quantum at a time or all at once, the resolution capacity of the lens remains the same. By what mechanism could the lens possibly interact differently with photons passing through it at different times?

The single/double slit experiment, while interesting, doesn't apply, because the lens system bears absolutely no resemblance to the double slit experimental setup. Even if you were to say that the imaging device could be made to simulate that by displaying spaced pairs of lines, the diffractive effects of such an experiment would be negligible. I'm quite sure of that.

Interesting that JVC is now talking about an 8K e-shift already. I believe the exact same terms and conditions for 8K e-shift would apply as they do for 4K, except with everything being, presumably, half scale. If the native 4K lens is good enough to resolve 4K well, then it should be enough to provide some benefit with an e-shifted 8K system. However, I question whether there's really much point in 8K unless you're shooting at a really stupendously large screen.

Now, how good can a lens actually be? The Rayleigh limit can be described as the point at which the wavelength of light and the spacing of two discrete light sources are the same. In the case of red light, at about 650 nanometers, that translates to 0.00065 millimeters, or 0.00002559055 inches. So a perfect lens would be able to resolve two red pinpoints spaced that far apart.

I'll go out on a limb here and say that no projection lens you'll ever see closely approaches the Rayleigh limit.
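For scale, the same arithmetic expressed at the image (chip) side of a projection lens, as a Python sketch (the 650nm wavelength, the f/2.4 aperture, and the 5.4 micron mirror pitch quoted from TI earlier in the thread are all assumed inputs):
Code:
# Rayleigh criterion at the image plane: spot radius ~ 1.22 * lambda * N
wavelength_um = 0.65  # red light
f_number = 2.4        # assumed projection-lens aperture

airy_radius_um = 1.22 * wavelength_um * f_number
print(f"Airy radius:   {airy_radius_um:.2f} um")
print(f"Airy diameter: {2 * airy_radius_um:.2f} um (vs 5.4 um DMD mirror pitch)")
Even a perfect f/2.4 lens has a diffraction spot around 3.8 microns across, a meaningful fraction of a 5.4 micron mirror, so the Rayleigh limit is not that far above the feature sizes being discussed even though real lenses stop well short of it.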
post #1157 of 1307 Old 08-03-2017, 07:33 PM
Dave Harper
OK, here is how far I got today in the limited time I could actually concentrate on this thread. It goes up through post #1080 of 1147 as of now:

This is fun!


Quote:
Originally Posted by Dave in Green View Post
...........Repeating an irrelevant point doesn't make it any more relevant. It only contributes to the merry-go-round effect ..........
You mean like darinp's irrelevant hypotheticals that he keeps throwing out here to confuse the weak minded?

Quote:
Originally Posted by darinp View Post
While having an intermediary is never ideal, there are some people who will act in a trustworthy way while acting as an intermediary, while others will not. For instance, some will prioritize skewing information so they can say, "You're wrong" higher than actually getting to the truth.

I think the posts by Dave Harper about his expert's opinions show that Dave cares less about getting to the truth than he does about whether he can say that I am wrong. Dave told us straight out that he didn't want to post his expert's answers to my questions when over a month ago he said:
Dave has lived up to his word there. Dave claims that he talked to his expert for an hour, but outright refuses to tell us even whether his expert weighed in on whether it matters when the sub-frames go through the lens (the B vs C in my example), let alone what his expert said about that.

Dave even tried to bargain with me to forget what was really the main subject matter of this thread (whether it matters whether the sub-frames go through the lens at the same time or at different times), as if I would step aside and let Dave misinform readers here by leaving out relevant information.

Dave wants readers to believe some of what his expert claimed, as if this guy's word is final, while ignoring part of what his expert said, since the guy couldn't even answer the questions without contradicting himself. Sorry, you can't be the final source on a subject matter if you are contradicting even your own claims. True experts on a subject matter don't contradict themselves.

And for those who want to claim they are trying to get to the truth, a position like, "You'll make fun of my expert's position if I actually tell you what it is", is a pretty poor excuse for trying to keep readers from the truth.

Dave has proven that if information would show that he was wrong then he will not provide it, instead only providing pieces of information that he thinks "prove" he was right.

A person would have to be pretty gullible to take part of what Dave's expert said as the final word on the matter while Dave outright refuses to tell us whether the guy really believes that the temporal separation matters, given that this expert stated both that it didn't matter and that it did, with incoherent light.

When a person claims both of these:

1: A and B have the same lens requirements because the sub-frames for B don't go through the lens at the same time.
2: B and C have the same lens requirements because the temporal separation between the sub-frames is irrelevant.

Only somebody not too swift or whose goal is to be able to say they were right no matter what the truth is, would say that #1 must be true because the person who made those 2 claims gets paid to know about lenses.

If people will pay attention they will see that Dave needs to hide his expert's views on whether the temporal separation matters in order to be able to continue to claim that I am wrong.

Maybe someday Dave will care enough about the truth to tell us what the truth is about his expert's views on whether the temporal separation between sub-frames matters. Until then, Dave has shown himself to be a very untrustworthy intermediary.

A stand up person who cares about the truth will post answers from their expert even if they show they were wrong and even if that is embarrassing. Dave has made it pretty clear that if it would be embarrassing to tell us every position from his expert then he will purposely leave important information from the expert out.

This expert of Dave's didn't seem very confident in his claims given that he needed to go check with his mentor, not even counting not being able to answer without contradictions. I have little faith that Dave would actually post here if it came back from this mentor that Dave was wrong, given what I have seen so far where it looks like Dave will pass on answers that he thinks help him, but not answers that he thinks don't help him.

--Darin
Thanks for proving to the world who you truly are, darinp. You have just shown the reasons WHY I did not want to post those things. What expert in their right mind would come on here and listen to an arrogant weekend-warrior enthusiast who thinks he knows it all and challenges people who use these things on a daily basis in their jobs, actually DESIGNING the systems that you tout yourself to be the expert in? Exactly what system like this have you designed, darinp? Telescopes? Cameras? Projectors? When is the last time you used all this physics in a real-world scenario? I believe you said college, right? So how long ago was that exactly?

The ENTIRE reason I didn't post was mainly so I COULD wait to find the real truth before I did post. It isn't my fault that these experts don't reply to me, but as I said I don't blame them given the situation. Would YOU?

In fact, I have posted what he said, that A=B=C, and at the time I said "I have my doubts", so I waited until I could alleviate some of those "doubts", darinp. Isn't the TRUE measure of a man and his trustworthiness and integrity that he can be open and listen to new information that differs from his initial beliefs, and be MAN ENOUGH to say that with the new information he may have changed his viewpoint? Or are you of the camp that thinks a "real man" is someone who is so stubborn and bullheaded that he will not change his ways and beliefs all the way to the grave, even when new evidence is shown to prove the contrary? It actually appears, by the way you write here, that that may be the case. I feel sad if that is so. I would think close relationships would be very hard for someone who acts like that.

Quote:
Originally Posted by AJSJones View Post
....... I've only just put the replacement bulb in my RS1 so I think I'll take my time to find and view the candidates in person and not rush things ......
Quote:
Originally Posted by cmjohnson View Post
One of my best friends sold me his RS45 (1080p native, no e-shift) when he upgraded to an RS500 (I think that's the model) with e-shift.

Yes, the image is more detailed than with the RS45 but I'll say the difference is not huge. It is an intermediate step toward native 4K and is properly advertised as being just that.

On my RS45 the lens will focus well enough to show screen door effect on the screen at close range. On the RS500, I got close to the screen and I suspect that the lens had not been focused to maximum sharpness because I couldn't really see ANY pixel structure, so since I suspect that lens focus was not maximized, I am unable to render judgement on how ultimately sharp the RS500 can be.
So you guys have an RS1 and an RS45, yet somehow you're experts on eShift? Have you ever owned one, or even seen or tested one for an extended period of time?

Quote:
Originally Posted by AJSJones View Post
........
They used the same lens throughout the experiment. The different shifts were accomplished by varying the half-lens combinations:

Quote:
Note that achieving general shifts does not require any change in the optical design and only needs the adjustment of the shift between the lenses
.........
Thanks for reposting the proof that I posted a while ago showing that "general shifts" (such as eShift) do not require any change in the optical design, just as we have been saying all along! :up:

The "half lens combinations" were the precursor to what is now the eShift Optical Actuator in eShift/XPR projectors, and they also went from a 60Hz 1080p signal which was duplicated and shifted, to what eShift is now at 120Hz discrete 1080p sub-frames that are shifted. That pdf was just an initial proof of concept to what the actual eShift technology is now.

Quote:
Originally Posted by darinp View Post
Spoiler!
Simply answer the question about whether 2 single white pixel eShift sub-frames would look like this:



and if not, tell us what they would look like with an eShift projector.

Is there any reason you won't tell us what 2 overlapping white pixels would look like with an eShift projector?

Seriously, is this really that hard? You have to keep ignoring simple questions in order to continue your misrepresentation of what is in that document.

--Darin
Well, at a 1/120th of a second exposure it would look like one 1920x1080 .67" sized pixel. Then in the next 1/120th of a second it would look the same, just shifted slightly down (or up, whichever they choose) and to the right. This is why that is all the lens needs to resolve, because that is all that is going through it at one instant in time, i.e. temporal dithering. Refer back to my TI training link, which shows this to be the case.

Quote:
Originally Posted by darinp View Post
Obviously. Everybody knows that.

What you don't seem to get is that their results would have been the same if they temporally separated the pixels.

It doesn't seem like the truth matters a whole lot to you given that you outright refuse to say how the image on the right would be different than I said here:



If you understood this subject matter as well as you claim it would be super easy for you to tell us what 2 single white pixels with 1/2 pixel overlap would look like, if you think it isn't the same as what is shown in 4(c).

What possible reason could you have for thinking the same 2 pixels in 4(c) wouldn't look like what is represented in 4(c), just because the pixels were sequential?

Again, you are making the part up about sequential being different than simultaneous. The document says no such thing. It is a figment of your imagination.

You can prove me wrong. Just tell us how 2 eShift pixels would look different than they represented in 4(c). Is that hard for you to do?

--Darin
Obviously, you don't. See my above answer.

Quote:
Originally Posted by AJSJones View Post
Spoiler!
ALL THEIR EXPERIMENTS WERE SIMULTANEOUS and used quite a good lens (as you can see by the small blur on the edges.
Why do you keep stating this when I have corrected this on many occasions? They did NOT use good lenses. They used cheap, COTS (Commercial Off The Shelf) lenses. Please see this excerpt from the pdf you're referencing:
Quote:
...Since the lenses were chosen from COTS components, the images suffered from considerable chromatic aberration. Therefore, we have only used the green channel of the projector in our experiments and then converted the captured image to grayscale...
Quote:
Originally Posted by darinp View Post
Spoiler!
I get the feeling you think the images go through the lens elements in some ordered manner.

Spoiler!

--Darin
In eShift they do, actually. Think of it like when you used to (hopefully!) draw stick figures on the pages of a book and then flip through it to create a pseudo-cartoon of the figure running or doing some other crazy crap. That was all an optical illusion, right? Did each page (lens) of the stick figure image (sub-frame) have to contain and resolve the entire cartoon's images at one time? Of course the answer is NO! So if eShift does basically the same thing and flashes one 1920x1080 .67" sized pixel image per 1/120th of a second, then why does the lens (page) need to resolve any more than that? Guess what, it doesn't, and the illusion is ALL created in your brain based on each interpolated sub-frame (stick figure drawing) flashed sequentially (each page as you flip through), just like the cartoon! Each page only needs to contain (resolve) the information needed to do its part of the illusion.
post #1158 of 1307 Old 08-03-2017, 07:39 PM
Dave in Green - AVS Forum Special Member
Quote:
Originally Posted by Javs View Post
... but we get back into that same old argument of what is defined as the minimum.
We wouldn't need to get back into that same old argument if, instead of trying to define a minimum, we simply worded it the way I choose to, i.e. the finer the level of detail in a projected image, the more demanding it is of lens quality. End of argument, or at least that part of it.

But, no, I expect the same old argument will return some time in the next few pages because those who do not learn from history are doomed to repeat it.
post #1159 of 1307 Old 08-03-2017, 07:45 PM
Dave in Green - AVS Forum Special Member
Quote:
Originally Posted by Dave Harper View Post
... You mean like darinp's irrelevant hypotheticals that he keeps throwing out here to confuse the weak minded? ...
No.
post #1160 of 1307 Old 08-03-2017, 08:10 PM
AJSJones - Advanced Member
Quote:
Originally Posted by Dave Harper View Post
Thanks for reposting the proof that I posted while ago that shows that "general shifts" (such as eShift) does not require any change in the optical design, just as we have been saying all along!
I guess you didn't notice their use of the word "design" as opposed to "quality" of the optics. That comment does not support your case, let alone constitute "proof" . In any case their work is irrelevant to the sequential vs simultaneous transmission of subframes.


Quote:
Originally Posted by Dave Harper View Post
Why do you keep stating this when I have corrected this on many occasions? They did NOT use good lenses.
The quality of the lens they used was quite good enough for their purposes (otherwise they'd have used a better one), but that is still not relevant to the discussion here. The key point of that post was what you ignored in your response: "ALL THEIR EXPERIMENTS WERE SIMULTANEOUS", i.e., not at all relevant to a discussion of simultaneous vs sequential.
Quote:
Originally Posted by Dave Harper View Post
So you guys have an RS1 and and RS45, yet are somehow you're experts on eShift? Have you ever owned one, or even seen or tested one for an extended period of time?
Where did I claim to be "an expert"??? Sounds like you think someone has to own one of these things to qualify to participate in a discussion of the physics and optics of the system?????? WTF. The discussion is not about what I've seen or own, but about silly claims that the data density or photons etc. interact when they go through the lens, and that sending pieces separately means you can use a worse lens. Still waiting for any real evidence for this claim from optics or physics texts or papers. Even Tomas is having a hard time finding evidence in support. Can you provide some, perhaps?

post #1161 of 1307 Old 08-03-2017, 09:46 PM
CherylJosie - AVS Forum Special Member
Quote:
Originally Posted by Tomas2 View Post
The format effectively doubles the time resolution (also called temporal resolution) as compared to progressive formats (for frame rates equal to field rates).
I specifically kept the frame rate constant at 30fps to avoid dragging any signal format ambiguities into this discussion.

You never answered: does a 1080i/30 raster display have a spatial resolution advantage over a 1080p/30 raster display when transmitted through a lens? Why, or why not?
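
To make the question concrete, here is a small numpy sketch (my own construction, not an answer on anyone's behalf). The two interlaced fields are temporally separated halves of one raster, much like eShift sub-frames, and the eye's time-average of them is the full progressive raster at half duty:

Code:
# Toy model: two interlaced fields vs. one progressive frame.
import numpy as np

rng = np.random.default_rng(1)
frame = rng.random((8, 8))               # stand-in for one 1080p/30 frame

odd_field = frame.copy();  odd_field[1::2, :] = 0.0   # keeps lines 0, 2, 4, ...
even_field = frame.copy(); even_field[0::2, :] = 0.0  # keeps lines 1, 3, 5, ...

integrated = (odd_field + even_field) / 2             # eye's time-average
print(np.allclose(integrated, frame / 2))             # True: full raster, half duty

The question on the table is whether the lens only ever needs to resolve the line pitch of a single field, or the line pitch of the composite raster.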
CherylJosie is offline  
post #1162 of 1307 Old 08-03-2017, 11:12 PM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
Quote:
Originally Posted by Dave Harper View Post
You mean like darinp's irrelevant hypotheticals that he keeps throwing out here to confuse the weak minded?
They are not irrelevant at all, and they are not meant to confuse anybody, weak minded or strong minded. Was your hypothetical about a 16mm film version meant to confuse people? I highly doubt that. It was a good hypothetical for people to check whether their theories would hold up to small changes, IMO.

My E3 example was to show that temporal differences so small that a person can't detect them don't matter. The lens has to be good enough for the E and the 3 right next to each other, whether they are shown at the same given point in time or 1 millisecond apart.

The threshold for an adequate lens, for most viewers with 20/20 vision, is lower for the E by itself than for the E3. The line for an adequate lens is the same whether the E3 is shown at exactly the same given point in time or across different milliseconds. Now even your expert seems to agree with me that the temporal differences don't matter in something like the E3 or the eShift sub-frames, since they say that B=C.
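
To put a toy number on that threshold, here is a little 1D sketch (my own construction, with a simple Gaussian blur standing in for lens/eye MTF). A blur level at which a lone stroke is still easily detected can almost completely fill the small gap between two adjacent strokes:

Code:
import numpy as np

def gaussian_blur(signal, sigma):
    # Normalized Gaussian kernel, then direct convolution.
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

# Intensity profile across the chart: a 10-unit stroke (the "E"),
# a 4-unit gap, then a second 10-unit stroke (the "3").
profile = np.zeros(60)
profile[15:25] = 1.0
profile[29:39] = 1.0

for sigma in (1.0, 2.0, 4.0):
    blurred = gaussian_blur(profile, sigma)
    peak = blurred[15:25].max()
    gap = blurred[25:29].min()
    print(f"sigma={sigma}: gap modulation = {(peak - gap) / (peak + gap):.2f}")
# As sigma grows, the gap modulation collapses and "E3" reads as one glyph,
# while each stroke on its own is still easy to detect.

The stroke by itself survives every one of those blur levels; it is the gap, the finest feature of the composite, that sets the threshold.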

My hypothetical about sending the eShift sub-frames is a good check of whether a person's position is strong enough to stand up to small changes. It is not meant to confuse, but to help get to positions that are actually true, instead of positions that immediately fail if you try to apply them to a new situation.
Quote:
Originally Posted by Dave Harper View Post
Exactly what system like this have you designed, darinp? Telescopes? Cameras? Projectors? When is the last time you used all this physics in a real world scenario? I believe you said college, right? So how long ago was that exactly?
Interesting that you are using this argument, since it seems like you don't even realize you are contradicting your own expert over and over again. If you really feel your expert must be right because he works on lenses, and your expert says that B=C, why do you keep saying that the temporal separation matters? If your expert was really the final word then you shouldn't keep contradicting him.

Of course, there is the part about your expert not being able to answer the questions without contradicting himself.

I've gotten this stuff about how much experience people have over and over in the contrast ratio debates, and many of these "experts" have been wrong over and over again about the subject matter. They can tell me how great their resumes are, but they can't really answer the questions without making mistakes.

They could tell me that the right thing to do was to just take their answers as fact because of their resumes, but that would have been a stupid thing for me to do. It isn't about stubbornness; it is about caring about the truth, and it would be ridiculous for me to take a person's word as the final word even as they are contradicting themselves. Too many of them think that "How much experience do you have?" is proof that they were right and I was wrong. Many of those who have paid attention here over the last 15 years have seen that time showed I was right.

If somebody had good points about CR I would listen, just like with this subject matter, but I won't take things as proof that aren't.

If you want your expert to be taken seriously here you should quit telling us how much experience he has, and instead get him to quit contradicting himself. And you should most definitely stop contradicting him yourself, as telling me that I have to take his word as final while you contradict him is hypocritical.

If your expert came here we could probably have an intelligent conversation about this, provided they didn't play games. Javs and I disagreed and had a reasonable discourse. He didn't misrepresent what I said or tell me he hadn't said things that he actually said. I think if you go back and look at your posts from the first 3 days, you did claim that it was the fact that the sub-frames went through the lens at different points in time that meant the lens requirements were lower, yet later you accused me of misrepresenting what you said, even though you had said it more than once.
Quote:
Originally Posted by Dave Harper View Post
In fact, I have posted what he said, that A=B=C ...
Really? Where? It has felt like trying to get blood out of a stone to find out what your expert's position was on B vs C. If you show me a post where you stated your expert's position (not yours, but theirs) on A vs C or B vs C, then I will apologize for missing it. I do not remember you ever telling me that your expert said A=B=C.
Quote:
Originally Posted by Dave Harper View Post
Isn't the TRUE measure of a man and his trustworthiness and integrity that he can be open and listen to new information that differs from his initial beliefs and be MAN ENOUGH to say that with the new information he may have changed his viewpoint?
Yep. Real information, but you seem to think I am supposed to take just part of contradictory claims as final proof, even as you contradict your expert yourself. Bring real proof and I will say I was wrong, but what you have brought from your expert is not real proof. Your own expert can't even seem to figure out whether he thinks the temporal separation matters. He says that B=C, which means that the time separation between the sub-frames doesn't matter, yet when you tell him that I said the timing separation doesn't matter, he says I am wrong because the sub-frames don't go through the lens at the same time. He is confused. How am I supposed to know his true position when he doesn't even know what it is?

As I've said, I am somewhat flexible on A vs C, but I am confident that the temporal separation between the sub-frames doesn't really matter, which is why B=C. Your expert seems to think I am both right and wrong about that. What is their real answer?

And do you think they were right when they indicated that the temporal separation doesn't matter? If so, why do you keep contradicting them about that?
Quote:
Originally Posted by Dave Harper View Post
In eShift they do actually.
They hit the screen in an ordered manner, since the lens focuses the chip onto the screen, but they do not go through the lens in an ordered manner. Photons go all over the place. Have you looked at what the light pattern looks like as the photons pass through the lens? I have taken the front half of a lens off and put a white piece of paper in the middle to see what the light looks like there. It doesn't look like the image.
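
If you want a feel for why the mid-lens pattern looks nothing like the frame, here is a toy model (my own construction, not real diffraction math): by the middle of the lens, light from every pixel has spread over a large area, so the intensity there is roughly the frame convolved with a very wide spread function:

Code:
import numpy as np

image = np.zeros((32, 32))
image[8:24, 15:17] = 1.0                 # a thin vertical bar as the "frame"

# Assume each pixel's light has spread widely by the mid-plane.
yy, xx = np.mgrid[-16:16, -16:16]
spread = np.exp(-(xx**2 + yy**2) / (2 * 12.0**2))
spread /= spread.sum()

# Circular 2D convolution via FFT (edge wraparound is fine for a toy model).
mid = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(spread))))
print(f"frame contrast:     {image.max() - image.min():.2f}")   # 1.00
print(f"mid-plane contrast: {mid.max() - mid.min():.2f}")       # close to zero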
Quote:
Originally Posted by Dave Harper View Post
So if eShift does basically the same thing and flashes the 1920x1080 image from the .67" chip one 1/120th of a second at a time, then why does the lens (page) need to resolve any more than that? Guess what, it doesn't, and the illusion is ALL created in your brain based on each interpolated sub-frame (stick figure drawing) flashed sequentially (each page as you flip through), just like the cartoon does! Each page only needs to contain (resolve) the information needed to do its part of the illusion.
Why are you contradicting your expert again? You just told us that he said B=C, so claiming that A=B due to the temporal separation contradicts your expert. If your claim that A=B because of the temporal separation were true, then B would not equal C, since C doesn't have that temporal separation. If A=C then there must be a different reason than the one you have given. If A=C then the yellow sub-pixel going through the lens all at once doesn't raise the lens requirements; in case C the lens doesn't know that the yellow sub-pixel isn't a pixel.

--Darin
darinp is offline  
post #1163 of 1307 Old 08-04-2017, 01:17 AM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
One thing I find interesting is that for a lot of people, when they hear how the eShift technology works, their intuition tells them that since the sub-frames have temporal separation, the lens requirements must be exactly the same as for whatever goes through the lens at a given moment in time, and that the lens requirements are not affected at all by the fact that the images humans see are not the same as what goes through the lens at a given moment in time.

I wonder if people started looking at the problem from a different angle if their intuition would lead them to a different answer.

I think most of us can agree that the main reason we say the lens for one projector has higher requirements than the lens for another, assuming the same chip size, is that there is finer detail for that lens to deliver, or retain. Of course, people could argue that this isn't why we want better lenses, but I'm not sure what the grounds for that would be.

If a person knew nothing about what projector model or mode was used to create these images, where the top image in each pair is a cutout from one large image and the bottom image in each pair is a cutout from a different larger image:





What would people's intuition be? Would it be:

1: Both of these have the exact same lens requirements.
2: The top images have higher lens requirements.
3: The bottom images have higher lens requirements.

I wonder if Dave's expert would say that those images have the exact same lens requirements. Is the line where a lens crosses from being inadequate to adequate exactly the same for those images?

--Darin
Attached Thumbnails: imageAB_1.png, imageAB_2.png

Last edited by darinp; 08-04-2017 at 02:12 AM.
darinp is offline  
post #1164 of 1307 Old 08-04-2017, 01:44 AM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
One more on this:
Quote:
Originally Posted by Dave Harper View Post
Think of it like when you used to (hopefully!) draw stick figures on pages of a book and then flip through it to create a pseudo cartoon of him running or doing some other crazy crap. That was all an optical illusion, right? Did each page (lens) of the stick figure image (sub-frame) have to have and resolve the entire cartoon's images at one time? Of course the answer is NO!
Nope, they didn't, because you were seeing each page as one image, and the very fact that you called it a cartoon means you didn't perceive the whole thing as one image. eShift is fast enough that you should not be able to tell that the sub-frames are not shown at the same time.

I like your flip book analogy, so let's go with that.

Let's say the first couple of pages are not for flipping, but just looking at static images one at a time. The first page has:

[image: a lone "E"]

Now let's say the optometrist holds the book out and shows this first page to a patient. This patient can just make out that the page shows an E. So, the lenses in their eyeballs and their foveas are just adequate to see the E.

Next the optometrist flips to the 2nd page and holds the book out and shows this page to the same patient. This page has:

[image: "E3"]

Would you claim that this patient is likely to be able to identify the E3? Or is it more likely that a person who could just make out an E by itself would have trouble identifying the E3?

If you say they are just as likely to identify the 2nd page properly as the 1st page, then I don't know what to say, since I think it is obvious that the majority of patients would have more trouble with the 2nd page; but if you think exactly the same percentage of patients would be able to identify the 2nd page as the 1st, then we can talk about it.

Now page 3 and beyond just have a single character. The same E from the 2nd page is on the 3rd page in exactly the same spot and the same 3 from the 2nd page is on the 4th page in exactly the same spot. Then the E is on every odd page and the 3 is on every even page.

The optometrist then flips the pages starting from page 3 at 120Hz. So fast that the patient doesn't even realize that the E and 3 are not on the same page. When slowed down it looks like this:

[image: alternating pages, "E" on one page, "3" on the next]

but what the patient sees is this:

[image: "E3" seen as one image]

So, would the E3 at 120Hz have the same eyeball lens and fovea quality requirements as the lone E on page 1, the E3 on page 2, or something else? If a person could just make out the E on page 1 would that mean they would be able to make out the Es and 3s from page 3 on shown at 120Hz?

And no, this is not meant to confuse people at all. It is meant to get people to think about whether temporal separation matters when it is so fast that the viewer doesn't even know the images are separate. If there is temporal separation then the lenses in a person's eyeballs never get the whole image at once, and those lenses have requirements too in order to make out certain detail. If your argument is that the lens requirements are lower because the whole image never goes through the projector lens at one time, then the same thing should apply to human eyeball lenses when the whole image doesn't go through them at any one moment in time.

As I believe I have said before, if you were filming with a 1000 fps camera then you would judge based on what you capture each millisecond. If the camera only captured one page at a time then the gap between the E and the 3 would not need to be resolved from page to page. But if you are watching with your eyes, and the E and the 3 look like they are up at exactly the same time, then the lenses and foveas in your eyes have to be up to the task of resolving that small gap between the E and the 3 well enough that you don't see an 8. That is true whether the E3 is on every page, or the E and the 3 are on alternating pages but look like they are on every page because the book is flipped so fast (120Hz, to emulate eShift).

If you want to move from flip books to projectors, then the projector lens plays its part in not obscuring that gap between the E and the 3 too much, which is a different threshold than just showing the E well enough that a viewer recognizes it as an E. And your expert, in taking the position that the temporal separation doesn't matter when they said B=C, would agree that it doesn't matter whether you show the E and the 3 at the exact same moment in time, or whether it just looks like it.
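
And the flip book is easy to check numerically. A minimal sketch (my own construction, assuming only that the eye integrates over something like 1/60 of a second): alternating an E-only page and a 3-only page at 120Hz time-averages to exactly the same retinal image as putting both characters on every page at half brightness, so nothing about the gap between them is relaxed:

Code:
import numpy as np

E = np.zeros((8, 16)); E[2:6, 2:6] = 1.0           # stand-in for the "E"
THREE = np.zeros((8, 16)); THREE[2:6, 9:13] = 1.0  # stand-in for the "3"

# Case B: pages alternate, one per 1/120 s, for one second.
alternating = [E, THREE] * 60

# Case C: both characters on every page, each at half duty.
simultaneous = [(E + THREE) * 0.5] * 120

avg_B = np.mean(alternating, axis=0)    # what the eye integrates
avg_C = np.mean(simultaneous, axis=0)
print(np.allclose(avg_B, avg_C))        # True: identical integrated images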

--Darin

Last edited by darinp; 08-04-2017 at 02:09 AM.
darinp is offline  
post #1165 of 1307 Old 08-04-2017, 02:20 AM
Advanced Member
 
dovercat's Avatar
 
Join Date: Apr 2008
Posts: 739
Mentioned: 3 Post(s)
Tagged: 0 Thread(s)
Quoted: 103 Post(s)
Liked: 54
Quote:
Originally Posted by darinp View Post
One thing I find interesting is that for a lot of people, when they hear how the eShift technology works their intuition tells them that since the sub-frames have temporal separation, the lens requirements must be exactly the same as what they are for whatever goes through the lens at a given moment in time, and the lens requirements are not affected at all by the fact that the images humans see are not the same as what goes through the lens at a given moment in time.

I wonder if people started looking at the problem from a different angle if their intuition would lead them to a different answer.

I think most of us can agree that the main reason we say a lens for one projector has higher requirements than the lens for a different projector, assuming the same chip size, is because there is finer detail for that lens to deliver, or retain. Of course, people could argue that this isn't why we want better lenses, but not sure what the grounds would be for that.

If a person knew nothing about what projector model or mode was used to create these images, where the top image in each is for a cutout from the same large image and the bottom image for each is for a cutout from a different larger image:





What would people's intuition be? Would it be:

1: Both of these have the exact same lens requirements.
2: The top images have higher lens requirements.
3: The bottom images have higher lens requirements.

--Darin
I would assume the bottom image comes from a better lens, as the smallest detail I can clearly see is the lines that make up the pixel squares: the screen door effect. I would assume the top image has a higher resolution but a blurry image chip or a blurry lens, or is projecting onto a fine, grainy surface like a glass bead screen.

The MTF contrast/sharpness of the extra detail produced by the various e-shift, 4K enhancement, and pixel shift systems looks like it falls off drastically. How good are they at displaying a 4K one-pixel checkerboard (one black pixel, one white pixel), compared to a non-shifting display showing a checkerboard at its native image chip resolution?

Isn't the real advantage of pixel shift systems that they eliminate the visibility of pixel structure by overlaying the pixels, and cause MTF contrast to fall off gradually with resolution rather than drop off a cliff to zero at the native image chip resolution? So in theory they create a more natural looking image, at the expense of a less artificially sharp looking one.

Though I get your point that the lens has to be good enough to display a half pixel difference, otherwise you would not clearly see that difference when it's created by one image following another that is offset by half a pixel. But the lenses on native chip projectors are easily capable of that, as on a projected image you can see the native pixel structure; the native pixels do not all blur into one another due to poor lens focus. And I am unsure they would bother improving lens quality much for pixel shift systems when, in any event, contrast/sharpness is, I think, going to fall away for the smaller pixel-shift-created details due to the blurring caused by overlaying pixel shifted images.
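
To put a rough number on my checkerboard question, here is a 1D sketch (my own construction, assuming two half-resolution sub-frames whose double-width pixels overlap by one sample, averaged by the eye, with the drive levels solved as a bounded least-squares fit). Even the best physically possible drive collapses to almost no modulation at the 4K grid's one-pixel pitch, versus full modulation for a native panel:

Code:
import numpy as np
from scipy.optimize import lsq_linear

n = 64                                       # samples on the target "4K" grid
target = (np.arange(n) % 2).astype(float)    # one-pixel checkerboard 0,1,0,1,...
m = n // 2                                   # pixels per half-resolution sub-frame

# displayed[j] = 0.5 * (sub-frame A pixel over j + sub-frame B pixel over j).
# A pixel i covers samples 2i and 2i+1; B is shifted right by one sample.
C = np.zeros((n, 2 * m))
for j in range(n):
    C[j, j // 2] += 0.5
    if j >= 1:
        C[j, m + (j - 1) // 2] += 0.5

# Best drive levels that physics allows (no negative light, no >100%).
res = lsq_linear(C, target, bounds=(0.0, 1.0))
displayed = C @ res.x
interior = displayed[8:-8]                   # ignore edge effects
print(f"displayed modulation: {interior.max() - interior.min():.3f}")  # ~0, vs 1.0 native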

Last edited by dovercat; 08-04-2017 at 02:29 AM.
dovercat is offline  
post #1166 of 1307 Old 08-04-2017, 03:12 AM
AVS Forum Special Member
 
Javs's Avatar
 
Join Date: Dec 2012
Location: Sydney
Posts: 7,888
Mentioned: 476 Post(s)
Tagged: 0 Thread(s)
Quoted: 6819 Post(s)
Liked: 6445
Quote:
Originally Posted by dovercat View Post
I would assume the bottom image comes from a better lens, as the smallest detail I can clearly see is the lines that make up the pixel squares: the screen door effect. I would assume the top image has a higher resolution but a blurry image chip or a blurry lens, or is projecting onto a fine, grainy surface like a glass bead screen.

The MTF contrast/sharpness of the extra detail produced by the various e-shift, 4K enhancement, and pixel shift systems looks like it falls off drastically. How good are they at displaying a 4K one-pixel checkerboard (one black pixel, one white pixel), compared to a non-shifting display showing a checkerboard at its native image chip resolution?

Isn't the real advantage of pixel shift systems that they eliminate the visibility of pixel structure by overlaying the pixels, and cause MTF contrast to fall off gradually with resolution rather than drop off a cliff to zero at the native image chip resolution? So in theory they create a more natural looking image, at the expense of a less artificially sharp looking one.

Though I get your point that the lens has to be good enough to display a half pixel difference, otherwise you would not clearly see that difference when it's created by one image following another that is offset by half a pixel. But the lenses on native chip projectors are easily capable of that, as on a projected image you can see the native pixel structure; the native pixels do not all blur into one another due to poor lens focus. And I am unsure they would bother improving lens quality much for pixel shift systems when, in any event, contrast/sharpness is, I think, going to fall away for the smaller pixel-shift-created details due to the blurring caused by overlaying pixel shifted images.
My screen is a matte white, perfectly smooth surface. No texture at all. The fine texture you are seeing is the e-shift pixel grid, which is still quite visible in real life.

They fail spectacularly at displaying 4K single pixel test patterns, but thankfully content doesn't actually have single pixel details that happen to fall precisely where a single pixel exists on a 4K frame; the only ones that do are test patterns showing an exact single-pixel line. A human hair, for example, is almost always curved in some fashion on a frame of film, and the e-shift system has no problem picking this up.

Today, the real advantage of e-shift is not to erase the pixel grid; that is what you get when you enable e-shift on a standard 1080p input. With a UHD input you get the super resolution effect, whereby the e-shift system stacks unique sub-frames, and the end result is an image with higher density and resolution than 1080p: effectively 4 million pixels or so, or roughly "3K".

https://www.extremetech.com/extreme/...gment-yeeaaaah
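
The arithmetic behind that "3K" figure is simple enough (my numbers; this just counts addressable sample positions and ignores the MTF loss from the overlapping shifted pixels discussed above):

Code:
native = 1920 * 1080                   # one e-shift sub-frame
eshift = 2 * native                    # two diagonally shifted sub-frames
uhd = 3840 * 2160                      # a true UHD panel
width = (eshift * 16 / 9) ** 0.5       # equivalent 16:9 grid width
print(f"unique addressable positions: {eshift:,}")                    # 4,147,200
print(f"fraction of a UHD grid:       {eshift / uhd:.2f}")            # 0.50
print(f"equivalent 16:9 grid:         {width:.0f} x {width * 9 / 16:.0f}")  # ~2715 x 1527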






JVC X9500 (RS620) | 120" 16:9 | Marantz AV7702 MkII | Emotiva XPA-7 | DIY Modular Towers | DIY TPL-150 Surrounds | DIY Atmos | DIY 18" Subs
-
MadVR Settings | UHD Waveform Analysis | Arve Tool Instructions + V3 Javs Curves

Last edited by Javs; 08-04-2017 at 03:21 AM.
Javs is online now  
post #1167 of 1307 Old 08-04-2017, 03:28 AM
AVS Forum Special Member
 
darinp's Avatar
 
Join Date: Jul 2002
Location: Seattle, WA
Posts: 4,735
Mentioned: 17 Post(s)
Tagged: 0 Thread(s)
Quoted: 543 Post(s)
Liked: 749
I don't recall if anybody has answered this one.

For those who say that the lens requirements are based only on the sizes of the elements going through the lens at a given moment in time: if a 1080p projector only illuminated half of each pixel each millisecond, so that your eyes saw a native 1080p image, you would claim that the lens requirements are higher than for native 1080p, since at any given moment the lens would see elements smaller than a native 1080p pixel, right?

To be clear, the composite images would be native 1080p, since each pixel would have to be consistent for each full frame of video, but the light source would only illuminate half of each pixel at a time. The lens wouldn't even know what the size of a native pixel was, since it would only see half of each at a time, and lenses don't have memories.
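
This one is also trivial to sanity-check numerically. A toy sketch (my own construction): light each half of every pixel on alternating milliseconds, and the time-averaged frame the viewer integrates is exactly the native 1080p frame, even though every instantaneous frame contains features half a native pixel wide:

Code:
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((4, 4))               # a tiny stand-in for a 1080p frame
halves = np.kron(frame, np.ones((1, 2))) # each pixel -> two half-pixels

left = np.tile([1.0, 0.0], frame.shape[1])   # left halves only
odd_ms = halves * left                   # what the lens sees at t = 0 ms
even_ms = halves * (1.0 - left)          # what the lens sees at t = 1 ms

averaged = (odd_ms + even_ms) / 2        # what the eye integrates
print(np.allclose(averaged, halves / 2)) # True: the native image, at half duty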

--Darin

Last edited by darinp; 08-04-2017 at 03:34 AM.
darinp is offline  
post #1168 of 1307 Old 08-04-2017, 05:10 AM
Advanced Member
 
dovercat's Avatar
 
Join Date: Apr 2008
Posts: 739
Mentioned: 3 Post(s)
Tagged: 0 Thread(s)
Quoted: 103 Post(s)
Liked: 54
Quote:
Originally Posted by Javs View Post
My screen is a matte white, perfectly smooth surface. No texture at all. The fine texture you are seeing is the e-shift pixel grid, which is still quite visible in real life.

They fail spectacularly at displaying 4K single pixel test patterns, but thankfully content doesn't actually have single pixel details that happen to fall precisely where a single pixel exists on a 4K frame; the only ones that do are test patterns showing an exact single-pixel line. A human hair, for example, is almost always curved in some fashion on a frame of film, and the e-shift system has no problem picking this up.

Today, the real advantage of e-shift is not to erase the pixel grid; that is what you get when you enable e-shift on a standard 1080p input. With a UHD input you get the super resolution effect, whereby the e-shift system stacks unique sub-frames, and the end result is an image with higher density and resolution than 1080p: effectively 4 million pixels or so, or roughly "3K".

https://www.extremetech.com/extreme/...gment-yeeaaaah





But the images in that post do not appear to be a comparison of native 1080p vs. native 1080p with pixel shifting to create higher resolution, as used by JVC e-shift 4K, Epson 4K enhancement, and, on DLP, native 1528p pixel shifting to 4K.

I would expect pixel shifting to inherently cause a drop-off in the MTF contrast/sharpness of the extra, smaller details it creates, since I would expect blurring from overlaying two lower resolution pixel-shifted images to create one higher resolution image.

I would expect native 1080p to give a lower resolution but artificially sharper image: MTF contrast/sharpness would be high at the single pixel size and then drop off a cliff to nothing, as there can be no details smaller than one pixel. Pixel shifting enables the display of smaller details, but with MTF contrast/sharpness falling off in those smaller details due to the overlaying of two images to create one. So pixel shifting creates a more detailed and more natural looking image at the cost of losing artificial sharpness.
dovercat is offline  
post #1169 of 1307 Old 08-04-2017, 07:20 AM
AVS Forum Special Member
 
cmjohnson's Avatar
 
Join Date: Nov 2003
Location: Sharply focused on sharper focus.
Posts: 6,140
Mentioned: 2 Post(s)
Tagged: 0 Thread(s)
Quoted: 104 Post(s)
Liked: 34
Let's reduce this to a very, VERY simple way of looking at it.

There are those who persist in believing that:

If a given lens is good enough to resolve a specific pixel size when projected, but that pixel is of a size that is at the limit of the resolution of the lens,

then that lens can resolve a detail that is SMALLER than the FIRST pixel,

BY PROJECTING IT AGAIN IN A SLIGHTLY DIFFERENT LOCATION.



Jaw hits floor. How can anyone be that DUMB?
cmjohnson is offline  
post #1170 of 1307 Old 08-04-2017, 08:39 AM
AVS Forum Special Member
 
Dave in Green's Avatar
 
Join Date: Jan 2014
Location: USA
Posts: 8,099
Mentioned: 147 Post(s)
Tagged: 0 Thread(s)
Quoted: 3669 Post(s)
Liked: 2874
Quote:
Originally Posted by Javs View Post
... Today, the real advantage of e-shift is not to erase the pixel grid, that would be what you get when you enable e-shift on a standard 1080p input, but by using UHD input you get the super resolution effect, whereby the e-shift system is able to stack unique subframes and the end result is an image with higher density and resolution than that of 1080p, effectively 4 million pixels or so, or, 3k. ...
This is precisely what makes me believe that the degree of demand placed on projector lens quality is determined by the level of actual fine detail displayed in the final assembled projected image, regardless of how or when the image is produced and projected through the lens.
AJSJones likes this.
Dave in Green is offline  