Resolution requirements for lenses - Page 13 - AVS Forum | Home Theater Discussions And Reviews
post #361 of 1307, 06-30-2017, 01:04 PM - GGA (AVS Forum Special Member)
Originally Posted by stanger89
...That's not relevant, it's your brain that integrates the two fields, not the projector.......

I have casually been following this thread trying to reconcile all the statements. I am not sure what @stanger89 is replying to, but I would like to post my observations along his lines to see if I am going wrong.

e-Shift starts with a single 1080p frame (F). It creates a new 2160p frame (Fn), which it splits into two new 1080p frames (Fn1 and Fn2). Fn1 and Fn2 are then projected onto the screen at 120Hz, i.e., there is a short temporal separation between Fn1 and Fn2. Our brain then combines Fn1 and Fn2 to form Fnc, which ideally is the same as Fn, but is of course only an approximation.

Fnc is an optical illusion. We see it on the screen only because of the way our brain processes Fn1 and Fn2. Fnc does not exist in reality. If our brains processed fast enough we would see Fn1 and Fn2 separately and not see Fnc at all. A camera with a fast enough shutter would never show Fnc, only Fn1 and Fn2.

Fn1 and Fn2 are 1080p and go through the lens at different times. Assuming there is no interference between the two, they should be equally sharp. If their images were captured on the screen by a fast enough camera, their sharpness would be identical. It would thus seem the lens needs to resolve only 1080p, since Fn1 and Fn2 are only 1080p. It is our brain that is taking the two 1080p frames and creating the e-Shift pseudo-2160p image.

I do not know how true any of the above is. I am pretty confident I have made some wrong assumptions but don't know where. I post it here as on the surface it appears reasonable to me and would welcome corrections.
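To make the splitting step concrete, here is roughly how I picture it in code (just a toy sketch using a naive 2x2 average; JVC's actual correlation-detection algorithm is proprietary and surely smarter):

Code:

import numpy as np

def split_eshift(fn):
    """Toy split of a 2160x3840 frame (Fn) into two 1080p sub-frames (Fn1, Fn2).

    Fn1 averages each 2x2 block of 4K pixels (one 1080p-sized aperture).
    Fn2 does the same after shifting the frame diagonally by one 4K pixel,
    i.e. half a 1080p pixel (edges wrap here just to keep the sketch short).
    """
    h, w = fn.shape
    fn1 = fn.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    fn2 = np.roll(fn, (-1, -1), axis=(0, 1)).reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return fn1, fn2

fn = np.random.rand(2160, 3840)     # stand-in for the internally created 4K frame
fn1, fn2 = split_eshift(fn)
print(fn1.shape, fn2.shape)          # (1080, 1920) (1080, 1920)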
Dave Harper and Stereodude like this.
post #362 of 1307, 06-30-2017, 02:26 PM - cmjohnson (AVS Forum Special Member)
Another way to look at it is this: if simply offsetting the image slightly allowed you to recover more effective resolution through the lens system, then astronomers would not need to invest large amounts of money in newer, higher-resolution telescopes to see more detail on a subject of interest. They would simply shift the central point of view slightly, capture a photo before the shift, capture a photo after the shift, and then correlate the two into one higher-resolution image.

But they don't do that because THAT DOES NOT WORK.

The lens does not CARE about WHEN. There is no temporal function in optical mathematics. Get that idea out of your head. It's nonsense.

Believe me, I WISH that you could get a 4K level of detail out of e-shifted 1080p, because if that were possible it would allow at least a theoretical solution to the question of how to push CRT projection into 4K territory. But though the lenses are huge, and hugely expensive, and quite good, they're apparently not THAT good. I've done a few basic experiments and think that 4K is unfortunately too much to ask of even the best CRT projection lenses, even if there were a way to get 4K on the phosphor face.

And, no, the phosphor face does not retain the image. It decays quite rapidly, with the scanned electron beam being the brightest point in a VERY short intensity modulated line, and this is easy to verify if you were to photograph the CRT face in operation with a camera that is both very sensitive and has an insanely fast shutter speed. It is the phenomenon of visual persistence which turns the flying spot on the CRT face into an image in your eye. The actual amount of phosphor that is lit up at any given moment is a tiny fraction of a single scan line.
post #363 of 1307, 06-30-2017, 03:10 PM - rak306 (Advanced Member)
Quote:
Originally Posted by stanger89 View Post
The problem for me is we keep referencing "requirements" without ever defining what that requirement that a lens must meet is. That really seems to be the key issue here.
+1. You have really gotten to the heart of the issue, IMO.

Let's take it one step further. Imagine a 1920x1080 LCOS chip where 3/4 of each pixel is black surround and the remaining 1/4 is active. Now, instead of e-shift doing 2 shift positions per frame, imagine it does 4, each one shifted 1/2 pixel up/down and 1/2 pixel left/right, with each sub-image being the corresponding 1/4 of the full 4K frame. Let's say the lens is capable of resolving a 16K image, so that there is minimal degradation of the image from the lens.

This would be a true 4k display, with just as much capability as any 4k display. (Of course, no one would do this, as it throws away 3/4 of the light).
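A tiny sketch of why that 4-position scheme really does rebuild a full 4K frame (an idealized toy model, not any shipping product; each shift position lights a quarter-size aperture on its own 4K cell):

Code:

import numpy as np

frame_4k = np.random.rand(2160, 3840)        # the full 4K frame we want to show
composite = np.zeros_like(frame_4k)

# 4 shift positions, each offset by half a 1080p pixel (= one 4K pixel)
for dy in (0, 1):
    for dx in (0, 1):
        sub_1080p = frame_4k[dy::2, dx::2]   # the 1920x1080 sub-image for this position
        composite[dy::2, dx::2] = sub_1080p  # its quarter-size aperture lands on its own 4K cell

print(np.array_equal(composite, frame_4k))   # True: the four flashes tile the 4K grid exactly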

Now imagine the same scheme, but this time replace the chip with a standard 1920x1080 version. While this still projects a 4K image with all of the original 4K content, the resulting image will not resolve as much detail, because the pixels are 2x the size they should be. So it is blurred more than the true 4K image (but it will still have more detail than a downscaled 1920x1080 image).

And Stanger89 said it perfectly: what does it mean to say a lens is good enough to display an image of resolution X? Say you have a lens just like Stanger89 showed, with a blur such that it just barely passes an alternating light/dark pattern. Then compare it to a lens where we can see the fine detail of the underlying pixel structure. We would probably prefer the higher-quality lens image, even though the lesser lens passes all the frequencies. The reason is (likely) that its MTF doesn't fall off as fast (and that we like fine details, even if they come from the internal pixel structure and not from the image). But where is the line between just good enough and unnecessarily sharp?

So now get back to the lens quality needed. I would suggest that as you go from 1920x1080, to JVC-type 4K e-shift, to the 4-shift e-shift I described above (four shifted 1920x1080 sub-images), to true 4K, the lens quality needed increases in each case, because the resulting resolution capability increases.
post #364 of 1307, 06-30-2017, 03:13 PM - coderguy (AVS Forum Addicted Member)
Quote:
Originally Posted by GGA View Post
e-Shift starts with a single 1080p frame (F). It creates a new 2160p frame (Fn), which it splits into two new 1080p frames (Fn1 and Fn2).
Not with 4K content; this is not what JVC says. If you are correct, then JVC is not telling the truth.

@cmjohnson
The reason you cannot gain more from a telescope is that the lens is limited to what it can receive: in a telescope the lens captures the input and displays it to your eyes as the output. You guys are mixing up analog input/output with resolving tricks on digital data. Even a non-4K algorithm can add data to a lower-res source over time, because with e-shift we are dealing with both a physical component (the mirror) and a software component modifying the source, not just one or the other. With a digital telescope they could add data to what you see, but there is no way people in the world of telescopes would put up with seeing stars that don't exist (or at least detail that doesn't exist). They have to be very, very careful about the algorithms they use to enhance the image, because you want to see what's really there. In video, we are not trying to see something exactly; we are just trying to get the best picture.

With e-shift, the input can be an actual 4K source, but that's not even the point; it doesn't have to be, because the algorithm can add detail. The reason extra detail can be created is that a digital component can create things over a temporal space differently than an analog system trying to mimic it.

Inside a digital algorithm it can do things in space and time that no normal image trickery can do to itself.

Resolving tricks that increase resolution temporally, and thereby increase spatial resolution in the final composite, absolutely do work. There are also a few temporal tricks in the analog world, but they are not the same; it's a different class of tricks. I've done some of this stuff in code, so it's like telling the guy who takes out the garbage that he cannot carry that heavy a bag of garbage, right after he just finished doing it.

post #365 of 1307, 06-30-2017, 03:25 PM - R Johnson (AVS Forum Special Member)
Quote:
Originally Posted by GGA View Post
.... Fn1 and Fn2 are 1080p and go through the lens at different times. Assuming there is no interference between the two, they should be equally sharp. If their images were captured on the screen by a fast enough camera, their sharpness would be identical. It would thus seem the lens needs to resolve only 1080p, since Fn1 and Fn2 are only 1080p. It is our brain that is taking the two 1080p frames and creating the e-Shift pseudo-2160p image. ...
This makes sense to me.
Dave Harper likes this.
post #366 of 1307, 06-30-2017, 03:27 PM - coderguy (AVS Forum Addicted Member)
I don't understand how people don't understand the printer example, which can cause aberrations even without trickery. You can replicate the example yourself, which disproves much of the malarkey being peddled in here. I should probably opt out of this conversation, because it's very frustrating to read people telling you what is and is not possible on stuff you've already done before.

post #367 of 1307, 06-30-2017, 03:30 PM - darinp (AVS Forum Special Member)
Quote:
Originally Posted by GGA View Post
e-Shift starts with a single 1080p frame (F).
It can, but it is better if it starts with a single 4K frame and intelligently extracts two 1080p frames that, when put together with one shifted by half a pixel, will best represent that 4K frame. This displays a composite image that can't show all the detail in every 4K frame, but can display finer detail than extracting just one 1080p frame from the 4K frame could.
Quote:
Originally Posted by GGA View Post
Fnc is an optical illusion.
You could say that, much like it is an illusion that single chip DLPs show colors other than black, 100% red, 100% green, and 100% blue at any instant.

Even without eShift the composite frames the JVCs produce could also be said to be an illusion, because if you film the screen with a 1000fps camera you will see that the JVCs don't really put the image up at any instant. They display single pixels at multiple values during a frame that average out to what we see. For instance, 10% luminance red can be displayed by showing 40% luminance red for 1/4th of the time.
Quote:
Originally Posted by GGA View Post
It would thus seem the lens needs to resolve only 1080p, since Fn1 and Fn2 are only 1080p. It is our brain that is taking the two 1080p frames and creating the e-Shift pseudo-2160p image.
No lens is perfect, so the question is: how sharp does a lens need to be to show those 1080p pixels and have people still be able to make out the detail, and then how sharp does it need to be to show the smaller 1/4-pixel detail and have people be able to make out that detail?

Not everybody likes this example, but it is meant to address the question of whether the lens only needs to resolve what goes through it at one time or does it sometimes need to resolve detail that never went through it at one moment in time. That is, does the lens need to be able to display the finest detail in the composite image.

It would probably be better if I found the MP4 version I made a couple of years ago, but this version from Jav's will have to do for now.



Those who display the forum on their projectors can display this gif and defocus their lens just enough that if the E and 3 were playing at 120Hz it would look like an 8 instead of E3. Then use a piece of paper over the 3. Does that amount of defocus cause the E to disappear just like the E3 disappeared? No, the line where the lens quality becomes bad enough that the image detail is no longer conveyed to human viewers is much lower for making the E disappear than making the E3 disappear. That is despite the fact that the black detail never even passed through the lens at any instant.

Those viewing on other displays can cover and uncover the 3 and see if they can imagine the different scenarios.
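For those who prefer math to eyeballs: to a good approximation the lens is a linear blur, so blurring the two sub-frames separately and letting the eye sum them gives exactly the same result as blurring the composite. A minimal sketch (using a Gaussian filter purely as a stand-in for lens defocus; the sigma is arbitrary):

Code:

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
sub1 = rng.random((1080, 1920))      # sub-frame with the "E"
sub2 = rng.random((1080, 1920))      # sub-frame with the "3"

def lens(img, sigma=1.5):
    """Stand-in for a slightly defocused lens: a linear Gaussian blur."""
    return gaussian_filter(img, sigma)

seen_alternating = lens(sub1) + lens(sub2)   # eye integrates the two blurred flashes
seen_all_at_once = lens(sub1 + sub2)         # same detail pushed through the lens together

print(np.allclose(seen_alternating, seen_all_at_once))   # True

So the composite your eye builds is blurred exactly as if the E3 had gone through the lens in one shot; the 120Hz timing doesn't buy the lens anything.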

--Darin

post #368 of 1307, 06-30-2017, 03:36 PM - coderguy (AVS Forum Addicted Member)
Quote:
Originally Posted by darinp View Post
You could say that, much like it is an illusion that single chip DLPs show colors other than black, 100% red, 100% green, and 100% blue at any instant.
Exactly, and it's not an illusion because we can measure the composite color similarly to a still image with actual calibration equipment.

An optical illusion essentially means something is occurring in our brain/eyes that conflicts with what is really there according to the science of it; it doesn't mean it conflicts with a final composite. It means seeing something that doesn't really exist at all, even beyond the basic and common rules of how our eyes relate to imaging and video.

Seeing things over time, whether through a refresh rate or temporally across frames, is not considered an optical illusion; otherwise all video would be an optical illusion, since it is just still frames being flashed across the screen. I've never heard people classify optical illusions like that.

post #369 of 1307, 06-30-2017, 04:06 PM - cmjohnson (AVS Forum Special Member)
Look, e-shift is not just a 1080p frame shifted half a pixel horizontally and vertically. The image is also processed so that the information in the shifted frame is DIFFERENT, and aimed specifically at adding detail that is not present in the previous frame.

But that's not what I'm talking about when talking about lens limitations.

The fact remains that you can't get extra resolution out of a lens system just by shifting its point of aim by half a pixel diagonally, BECAUSE the shift itself is image information and it all counts toward the system's optical resolution bandwidth.

For a moment, let me go back to my idea of an e-shifted CRT system, because it's something I wish existed.

Yes, it's possible that such a system, if set up correctly, might actually yield some subjectively visible detail enhancement over non-shifted 1080p, but the fact remains that the actual SIZE of a specific "enhanced" feature is still going to be a single 1080p pixel at the very minimum. The location of that "new detail" will be diagonally offset by half a pixel from the non-shifted frame, and while that WILL be displayed, it is perhaps best thought of as an image artifact rather than as an actual resolution enhancement.

So separate the two phenomena entirely: processed detail enhancement applied on a per-frame basis, and system optical resolution. They're different things, and one is not at all equivalent to the other. In fact, that detail-enhancement processing could be applied in software on a frame-by-frame basis without even using an e-shift system. At a guess, the difference it would make would probably manifest as a shimmering effect in the overall image: half the frames would carry information not present in the other half. This would be analogous to an interlaced video system where field A and field B were derived from slightly different video feeds.


Once again, there's a resolution limit to the lens system itself. If you were to superimpose a pair of shifted and non-shifted images and treat that as your enhanced resolution image, the fact remains that the lens can't resolve all that detail even if the two images are displayed one after the other and SLOWLY. UNLESS that lens has the resolution required to fully resolve the higher resolution image you are simulating with the e-shift system.

Look at it this way: do you think that a 3D movie, projected in 3D on your projector, has more detail than the non-3D version of the same movie? After all, you're getting two different perspectives, so you'd have to be getting more effective detail, right?

But no, nobody claims that 3D movies have more resolution than non-3D movies. Because it's just not so.

And you can't force image detail above the resolution limit of the lens through the lens and retain that detail, whether you do it in 4K60 or sequentially, one pixel per frame. The spatial resolution capacity of the lens is a fixed value regardless of the nature of the imagery being transmitted through it. You can't trick it into resolving more by feeding it the image more slowly, or split into fields or frames, or interlaced. There is no temporal component to lens-system resolution, and that is what you will be taught once you get into chapter 2 of your Optics 101 textbook. In fact it may be IN chapter 1.


Now, it IS possible to transmit a higher-bandwidth signal through the lens in a FUNCTIONAL sense if you post-process the signal, just as you can transmit high-resolution audio and video over a bandwidth-limited connection, but this requires the signal to be integrated over time. You can send a full HD movie over a 56K dial-up connection, but you won't watch it in real time.
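(Quick arithmetic on that, assuming a roughly 4 GB HD movie file just for illustration:)

Code:

movie_bits = 4e9 * 8        # ~4 GB movie, in bits (assumed size)
link_bps = 56e3             # 56 kbit/s dial-up
print(movie_bits / link_bps / 86400)   # about 6.6 days to transfer a roughly 2-hour movie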

But we don't have that capacity when watching a movie. We have to live with the visual and temporal limitations of ourselves, human being 1.0, operating at a fixed rate of 1 subjective second per second. We can't integrate higher-resolution information over time via our eyes, so the potential for transmitting UHD information through a limited lens system is irrelevant; it is beyond our natural capacity.

Actually, we DO do that via the visual persistence phenomenon, but that phenomenon is sharply limited in duration, to just milliseconds. It has to happen quickly enough that motion is perceived as motion and not a series of still pictures; decades of experience indicate that sequential frames integrate into smooth motion in our brains only at frame rates of 20-something frames per second or more.
post #370 of 1307, 06-30-2017, 04:13 PM - coderguy (AVS Forum Addicted Member)
That's too long for me to respond to in a detailed fashion, let's keep it simple, as I have forum fatigue from this thread.

You would almost be correct, except there is a very serious problem with what you said above: (a) the pixels are no longer in the same place once shifted, (b) video content and objects aren't always moving between frames, and (c) flashing a displaced pixel grid every other frame is too fast for (a) and (b) to be disrupted, so you cannot objectively say we never see enough extra detail to be somewhat representative of the original non-interpolated 4K source. See flicker fusion...

It's the same regurgitated arguments we have seen throughout this thread. I will just say: it doesn't matter how you add detail; there is a complex, chaotic relationship between the added detail and what the lens shows spatially and temporally.

The problem is "what the lens has the potential to show" and what the lens actually is showing. With digital data, the potential changes of what the lens can show, especially if you start messing with stuff in the hardware.

These analog limitations people keep envisioning in their minds do not exist with digital data that is being altered in both physical and virtual forms both spatially and temporally.

post #371 of 1307, 06-30-2017, 04:30 PM - coderguy (AVS Forum Addicted Member)
Quote:
Originally Posted by cmjohnson View Post
But we don't have that capacity when watching a movie. We have to live with the visual and temporal limitations of ourselves, human being 1.0, operating at a fixed rate of 1 subjective second per second. We can't integrate higher resolution information over time via our eyes
so the potential for transmitting UHD information thru a limited lens system is irrelevant as it is beyond our natural capacity.
I see where you are going with this, but this is a different argument entirely: how close do we have to be to the screen to see a difference in detail or any adjustment to the image in a video? I have personally never tested this, as I've never had a pure 4K projector, as many of us have not. Also, it's not like the shifted pixels are flashed just once in 10 frames; it's every other frame. As noted before, when it goes this fast, flicker fusion overrules much of this point about what we can or cannot see in temporal space. For the sake of our existing discussion, we are of course assuming someone is close enough to the screen to see small details.

Remember that video isn't always moving; some things, like the background of a room, are basically perfectly still across frames. Our table lamps don't generally move from the air flow...

Motion resolution does kill the benefits of 4k pretty quickly I imagine, but that's a different discussion and only applies to certain parts of a movie.

post #372 of 1307, 06-30-2017, 05:09 PM - darinp (AVS Forum Special Member)
Quote:
Originally Posted by R Johnson View Post
This makes sense to me.
Many people think that, but if it were true then a system would only have to be able to resolve the E to be able to resolve the E3 when played at 120Hz.



Would be nice if it were true, but it is false. Cover the 3 with your finger and see how easy it is for the system to resolve the E well enough to see it. Not even close to the same requirement as conveying the E3 to your brain at 120Hz.

--Darin

post #373 of 1307, 06-30-2017, 05:28 PM - GGA (AVS Forum Special Member)
Quote:
Originally Posted by GGA
e-Shift starts with a single 1080p frame (F). It creates a new 2160p frame (Fn), which it splits into two new 1080p frames (Fn1 and Fn2).
Coderguy Reply:
Not with 4k content, this is not what JVC says. If you are correct, then JVC is not telling the truth.

GGA: I think it is correct. I copied it straight from the JVC website:

http://usjvc.com/faq/index.php?actio...268&artlang=en

The process involves evaluating each 1920x1080 video frame using a correlation detection algorithm and then creating a new 3840x2160 video frame internally. During this process it enhances edge transitions, increases contrast in detailed areas, and nearly eliminates aliasing and stair-stepping. This enhanced 3840x2160 frame is then separated into two new 1920x1080 sub-frames, which are then alternately projected to the screen at 120Hz.

They mention that the FAQ has info on native 4K input, but I could not find any. It really should not make much difference for my analysis: with 1080p input the JVC creates two 1080p sub-frames from the 1080p frame; with 4K input it also creates two 1080p sub-frames, but from a 4K frame.

From Coderguy:
Quote:
An optical illusion essentially means something is occurring in our brain/eyes that conflicts with what is really there by the science of it, it doesn't really mean it conflicts with a final composite. It means seeing something that doesn't really exist at all, even beyond the basic and common rules of how our eyes relate to imaging and video.

Seeing things over time whether by a refresh rate or temporally across frames is not considered an optical illusion, that means all video is an optical illusion since it is just still frames being flashed across the screen. I've never heard people classify optical illusions like that.


Well, we could probably debate this all day and the only thing accomplished would be tickling our brains.

I always considered movies to be optical illusions, but they are so widespread now I doubt anyone else thinks this way except for hardcore physicists. A quick search turns up this quote from Wikipedia (more can be found). Would you consider 3D movies to be optical illusions?

The phi phenomenon is the optical illusion of perceiving a series of still images, when viewed in rapid succession, as continuous motion. Max Wertheimer, one of the three founders of Gestalt psychology, defined this phenomenon in 1912.[1] The phi phenomenon and persistence of vision together formed the foundation of Hugo Münsterberg's theory of film[2] and are part of the process of motion perception.

Thanks for the info, all very helpful.
post #374 of 1307, 06-30-2017, 05:39 PM - coderguy (AVS Forum Addicted Member)
An optical illusion (also called a visual illusion) is an illusion caused by the visual system and characterized by visually perceived images that differ from objective reality.

I agree with the definition I just posted, and I differ with that cherry-picked use of "optical illusion" you found, because we are already speaking about video. The key is objective reality: no one considers the video we see to be objectively the same as a still image.

This is what I would call an optical illusion, because we clearly see A and B as different shades, yet they measure the same...
[Attached image: Grey_square_optical_illusion_proof2.svg.jpg]
rak306 likes this.

post #375 of 1307, 06-30-2017, 05:45 PM - Javs (AVS Forum Special Member)
Quote:
Originally Posted by darinp View Post
It can, but it is better if it starts with a single 4K frame and intelligently extracts two 1080p frames that when put together with one shifted by half a pixel will best represent that 4K frame. This displays a composite image that can't show all the detail in every 4K frame, but can display finer detail than extracting just one 1080p frame from the 4K frame could do.
You could say that, much like it is an illusion that single chip DLPs show colors other than black, 100% red, 100% green, and 100% blue at any instant.

Even without eShift the composite frames the JVCs produce could also be said to be an illusion, because if you film the screen with a 1000fps camera you will see that the JVCs don't really put the image up at any instant. They display single pixels at multiple values during a frame that average out to what we see. For instance, 10% luminance red can be displayed by showing 40% luminance red for 1/4th of the time.
No lens is perfect, so the question is how sharp does a lens need to be to show those 1080p pixels and have people still be able to make out the detail, then how sharp do they need to be to show the smaller 1/4 pixel detail and have people be able to make out that detail.

Not everybody likes this example, but it is meant to address the question of whether the lens only needs to resolve what goes through it at one time or does it sometimes need to resolve detail that never went through it at one moment in time. That is, does the lens need to be able to display the finest detail in the composite image.

It would probably be better if I found the MP4 version I made a couple of years ago, but this version from Jav's will have to do for now.



Those who display the forum on their projectors can display this gif and defocus their lens just enough that if the E and 3 were playing at 120Hz it would look like an 8 instead of E3. Then use a piece of paper over the 3. Does that amount of defocus cause the E to disappear just like the E3 disappeared? No, the line where the lens quality becomes bad enough that the image detail is no longer conveyed to human viewers is much lower for making the E disappear than making the E3 disappear. That is despite the fact that the black detail never even passed through the lens at any instant.

Those viewing on other displays can cover and uncover the 3 and see if they can imagine the different scenarios.

--Darin
I don't think your E3 gif is a good example. You need to have the E and 3 sharp, indicating the source is sharp; otherwise you start with something blurry and then blur it even more, which would not be acceptable MTF for 1080p in the first place. Read "acceptable" as being able to resolve 1080 line pairs horizontally in the first place. That is what an acceptable MTF for 1080p is. An acceptable MTF for 4K is 2160 line pairs horizontally. If you cannot make out the individual line pairs in both examples, you don't have an acceptable lens for the resolution you are viewing. I don't think anybody is going to dispute this though.

Here is a video I just made from a sharp E and 3... they alternate frame by frame at 60p... this is as close as you are going to get on 60p screens to the effect of e-shift alternating frames A and B.

I can't imagine any scenario where that black line will become invisible due to distance or de-focusing. If we do reach a scenario where that black line is lost, we are NOT at an acceptable level of MTF for 1080p either.


Here is another thing nobody on this forum is going to disagree with.

You can't achieve 100% MTF in real life; that's too perfect. So let's say we have a lens that resolves a 1080p pixel to a realistic standard, with a certain amount of blur on it to represent a lens, and then let's look at a 4K pixel right next to it with the same blur.



Then let's look at a 4K-level lens, same thing.



Nobody here is going to disagree that the better lens pretty much equally improves both scenarios. Since we cannot hit 100% MTF at 1080p in real life at all, the answer to this whole damn thread, as I said, is: a lens can always be better!
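To put rough numbers on that, treat the lens blur as a Gaussian and compare its MTF at the 1080p and 4K Nyquist frequencies (the sigma values here are made up purely to show the trend):

Code:

import numpy as np

def gaussian_mtf(f_cycles_per_px, sigma_px):
    """MTF of a Gaussian blur of width sigma (both in 4K-pixel units)."""
    return np.exp(-2 * np.pi**2 * sigma_px**2 * f_cycles_per_px**2)

f_1080 = 0.25   # Nyquist of the 1080p grid, in cycles per 4K pixel
f_4k = 0.50     # Nyquist of the 4K grid

for name, sigma in [("softer lens", 1.2), ("sharper lens", 0.6)]:
    print(name,
          "MTF @ 1080p Nyquist: %.2f" % gaussian_mtf(f_1080, sigma),
          "MTF @ 4K Nyquist: %.3f" % gaussian_mtf(f_4k, sigma))

The sharper lens lifts the MTF at both frequencies, which is the whole point: whatever you feed it, a lens can always be better.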

post #376 of 1307, 06-30-2017, 05:48 PM - GGA (AVS Forum Special Member)
Originally Posted by GGA
It would thus seem the lens needs to resolve only 1080p, since Fn1 and Fn2 are only 1080p. It is our brain that is taking the two 1080p frames and creating the e-Shift pseudo-2160p image.

darinp Reply: No lens is perfect, so the question is how sharp does a lens need to be to show those 1080p pixels and have people still be able to make out the detail, then how sharp do they need to be to show the smaller 1/4 pixel detail and have people be able to make out that detail.

GGA: Would this be a correct simplistic analysis? The smallest thing the JVC can project is 1 pixel; it cannot project 1/4 pixel. The 1/4-pixel detail is created when our brain combines the two 1-pixel sub-frames. The sharper the 1-pixel sub-frames are (up to a point), the sharper the 1/4-pixel detail created by the brain will be.
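A toy picture of what I mean, drawn on a 4K grid where each 1080p pixel covers a 2x2 block of cells (purely illustrative):

Code:

import numpy as np

canvas = np.zeros((6, 6))        # a small patch of the 4K grid
canvas[2:4, 2:4] += 1.0          # one lit 1080p-sized pixel from sub-frame Fn1
canvas[3:5, 3:5] += 1.0          # the same-sized pixel from Fn2, shifted half a 1080p pixel

print(canvas[2:5, 2:5])
# [[1. 1. 0.]
#  [1. 2. 1.]
#  [0. 1. 1.]]
# The central "2" is a quarter-pixel-sized region: structure finer than either
# sub-frame's pixel, and it only exists in the summed (perceived) composite.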
post #377 of 1307, 06-30-2017, 05:49 PM - coderguy (AVS Forum Addicted Member)
No one is actually thinking about the points Darin and I have made; they are skipping over them and regurgitating the same arguments that have already been disproven in about 1000 different ways with objective examples.

post #378 of 1307, 06-30-2017, 06:10 PM - coderguy (AVS Forum Addicted Member)
During temporal manifestations = At the lens, the spatial profile does not change, but it does cause a spatially dependent momentum shift which allows for refocusing and manipulation of beams.

What the above sentence means is even with the same spatial lens profile, you still get spatially dependent momentum shifts from temporal manifestations.

Sensor resolution (temporal)
(bunch of unrelated stuff goes here), and then finally ... Therefore, sensor sensitivity and other time-related factors will have a direct impact on spatial resolution.

A temporal-imaging system can magnify time waveforms in the same manner as conventional spatial-imaging systems magnify scenes. We analyze this space-time duality and derive expressions for the focal length and f-number of a time lens. In addition, the principles of temporal imaging are developed and we derive time-domain analogs for the imaging condition, magnification ratio, and impulse response of a temporal-imaging system separately than from a spatial system.

post #379 of 1307, 06-30-2017, 07:06 PM - cmjohnson (AVS Forum Special Member)
Quote:
Originally Posted by coderguy View Post
That's too long for me to respond to in a detailed fashion, let's keep it simple, as I have forum fatigue from this thread.

You would almost be correct except there is a very serious problem with what you said above, and that is that (a) the pixels are no longer in the same place once shifted, (b) video content and objects aren't always moving between frames, and (c) the root characteristic of flashing a displaced pixel grid every other frame is too fast to interfere with a+b to the point where you could objectively say we never see enough extra detail to be somewhat representative of the original non-interpolated 4k source. See flicker fusion...

It's the same regurgitated arguments we have seen throughout this thread. I will just say, it doesn't matter how you add detail, there is a complex chaotic relationship between added detail and what the lens shows spatially and temporally.

The problem is "what the lens has the potential to show" and what the lens actually is showing. With digital data, the potential changes of what the lens can show, especially if you start messing with stuff in the hardware.

These analog limitations people keep envisioning in their minds do not exist with digital data that is being altered in both physical and virtual forms both spatially and temporally.

You're not quite getting what I'm referring to.

Nothing that can be done with digital data by any form of software can overcome the physical resolution limitations of the lens system. A given lens system can only resolve a specific maximum resolution at a defined MTF value using a defined measurement procedure.

Let us assume for a moment that the pass/fail limit for MTF value is 20 percent. Just to pick a number.

Let us assume that a specific 1080p projector featuring an imaging device with an effective diagonal size of 0.8 inches is evaluated, and it is found that it achieves its rated 1080p resolution at a measured MTF value of 20 percent, which is a pass, but barely.

Now let us remove that lens and place it on a similar projector equipped with e-shift or an equivalent technology (not focusing only on JVC's implementation). This projector also has a native 1080p imager with a diagonal size of 0.8 inches, and in every respect but the wobulation system (e-shift, or whatever) it is optically the same.

When fed 4K content, e-shift does its thing to create a quasi-simulated 4K-ish picture.

Capture the output of the optical engine as it enters the lens. Capture two consecutive frames, one non-shifted, one shifted, and display the theoretical result: two 1080p images superimposed over each other with a diagonal shift of 50 percent of the size of a pixel. This can be thought of as "virtual" 4K. The number of distinct elements in the picture to be displayed is a question worthy of its own discussion, since the number of pixels displayed in each frame is 1080p's pixel count, so two frames (shifted and not) amount to 2x 1080p. But the "virtual" subdivision of the pixels takes it up to 4K, minus two missing corner pixels.

And here's the thing: That virtual 4K image, which has all the ANGULAR information of a 4K image, is beyond the capacity of this lens to resolve at the defined MTF value of 20 percent. While it can resolve both the non-shifted and the shifted frames which are individually 1080p, it does not have the resolution capacity to display 4K at the same MTF value as it can display 1080p.

You would not dispute that this lens is incapable of displaying native 4K at the same MTF value.

But the e-shift combined, simulated 4K image has, other than the two missing corner pixels, exactly the same amount of ANGULAR information to be RESOLVED as the 4K image. And this lens does not have that resolution capacity at the defined MTF value.

The lens does not care at all if it's being fed the 4K pixels one pixel at a time or all together at once. The lens has no temporal transfer function. Regardless, THIS lens doesn't have the resolution needed to achieve the target MTF value for a 4K pixel pitch.

You can't get something for nothing and you can't resolve a 4K image, native or e-shifted, through a lens that doesn't have the angular resolution capacity to RESOLVE the smaller angular variation of 4K pixels.

Ultimately, lens resolution is all about ANGULAR resolution. That's the point I've been trying to make all along, and finally I've come up with the right words for it: ANGULAR RESOLUTION.

YES, you can get both 1080p frames, shifted and non-shifted, through the lens and they'll both individually resolve as well as a standard 1080p frame without any form of e-shift. Even moving the frame slightly (half a pixel's worth) that 1080p frame will still resolve. But the lens doesn't have the angular resolution for 4K pixel pitch no matter how it's displayed.

The difference is that the shifted image creates a "virtual" 4K image in the sense that each pixel in the non-shifted frame is overlapped by the intersection of four pixels in the shifted frame, creating four smaller regions (as far as the boundaries are concerned) in place of a single unshifted 1080p pixel. Those "virtual" 4K pixels are the problem: they require twice the angular resolution capacity out of the lens. The shift ITSELF is picture information.
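To attach numbers to the hypothetical 0.8-inch imager above (my own arithmetic, using the usual convention of one line pair per two pixels):

Code:

import math

diag_in = 0.8                                        # assumed 0.8" 16:9 imager
width_mm = diag_in * 25.4 * 16 / math.hypot(16, 9)   # about 17.7 mm wide

for cols, label in [(1920, "1080p pitch"), (3840, "4K / e-shift composite pitch")]:
    pitch_um = width_mm / cols * 1000
    nyquist_lp_mm = cols / (2 * width_mm)
    print("%s: %.1f um pixels, Nyquist %.0f lp/mm" % (label, pitch_um, nyquist_lp_mm))

# ~54 lp/mm for 1080p vs ~108 lp/mm for the 4K pitch: the lens has to hold a usable
# MTF at twice the spatial frequency, whether that detail arrives in one frame or as
# two half-pixel-shifted sub-frames.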
RLBURNSIDE likes this.
post #380 of 1307, 06-30-2017, 07:08 PM - coderguy (AVS Forum Addicted Member)
Quote:
Originally Posted by cmjohnson View Post
Nothing that can be done with digital data by any form of software can overcome the physical resolution limitations of the lens system. A given lens system can only resolve a specific maximum resolution at a defined MTF value using a defined measurement procedure.

And here's the thing: That virtual 4K image, which has all the ANGULAR information of a 4K image, is beyond the capacity of this lens to resolve at the defined MTF value of 20 percent. While it can resolve both the non-shifted and the shifted frames which are individually 1080p, it does not have the resolution capacity to display 4K at the same MTF value as it can display 1080p.
Again, regurgitated stuff that is overly wordy; you are over-complicating it when it does not need to be, and again leaving out the shifted pixels. You are not understanding how a temporal manifestation, related to spatial requirements over time, can directly increase the spatial resolution required to reproduce that manifestation in its original form. Many of your examples are true only if you leave (a), (b), or (c) from my previous post out of the explanation, when all three are applicable.

post #381 of 1307, 06-30-2017, 07:12 PM - cmjohnson (AVS Forum Special Member)
It IS as simple as understanding that the lens resolution limit is defined by angular resolution and MTF values.

If you want the super simple version of it, here it is: 4K, real or simulated, demands double the angular resolution capacity of the lens as compared to 1080p. You can't get around this limit with any software processing trick.

A lens that is adequate for 1080p but not adequate for native 4K is not going to be adequate for simulated e-shifted 4K, as the angular resolution requirement is the SAME as that of 4K.

Simple enough?
rak306 and RLBURNSIDE like this.
post #382 of 1307, 06-30-2017, 07:14 PM - coderguy (AVS Forum Addicted Member)
Quote:
Originally Posted by cmjohnson View Post
If you want the super simple version of it, here it is: 4K, real or simulated, demands double the angular resolution capacity of the lens
as compared to 1080p. You can't get around this limit with any software processing trick.
I always agreed that simulated 4K raises the lens requirement, and so does native 4K.
I'm not sure where this is going; my original disagreement was with your telescope example, which is backwards from this situation (an interpretive algorithm operating on virtual data, as compared to the analog capture of a lens)...

Going back to my printer example from earlier: in your telescope analogy the printer would be sending data to the screen, and we know the reverse is true.

You also stated that there are no temporal functions or considerations in optical mathematics; that is not true, there are considerations. Generally we measure a lens only spatially, but even the sensor data has to be accounted for temporally. Also, just because a lens has a certain value from a spatial measurement does not mean that value always applies to pixels that are shifted temporally.

They didn't build lenses and measure these things with the idea of 1/2-pixel shifts moving around temporally; that wasn't the original intended goal.

post #383 of 1307, 06-30-2017, 08:40 PM - Dave Harper
What darinp and coderguy want you to believe:...............
[Attached image: Slide1.JPG]
Stereodude likes this.
post #384 of 1307, 06-30-2017, 08:41 PM - Dave Harper

............and Reality: (*EDIT: The "Video Processor" label should say "eShift Video Processor")
[Attached image: Slide2.JPG]
Stereodude likes this.

post #385 of 1307, 06-30-2017, 09:36 PM - coderguy (AVS Forum Addicted Member)
We've broken this down into temporal manifestations, time waveforms, and space time dualities.
And you're still going on about when the image is manifested by our brains.

I've had one too many interdimensional temporal manifestations of my own after this thread, so time to grab some pepto and retire.
When I move too fast, I feel like I'm floating; must be a paraphysical cosmic coincidence resulting from too much hyper-space buggy travel.

post #386 of 1307, 06-30-2017, 09:39 PM - Dave Harper

Quote:
Originally Posted by coderguy View Post
We've broken this down into temporal manifestations, time waveforms, and space time dualities.
And you're still going on about when the image is manifested by our brains.
I've had one too many interdimensional temporal manifestations of my own after this thread, so time to grab some pepto and retire.
When I move too fast, I feel like I'm floating, must be a paraphysical cosmic coincidence resulting from too much hyper-space buggy travel.

Exactly! You're making temporal manifestations, time waveforms, and space time duality mountains out of simple eShift mole hills!

You're still acting as if that composite image is happening before the lens, like I put in those graphics, and it is not. Simple as that.

Maybe study those graphics a little more.
post #387 of 1307, 06-30-2017, 09:44 PM - darinp (AVS Forum Special Member)
Quote:
Originally Posted by Dave Harper View Post
What darinp and coderguy want you to believe:...............
Dave,

Please don't post falsehoods. I believe there is a forum rule against out and out lies, which that is.

I think the evidence is building that you are actually trolling. I have a hard time believing that you don't realize you are taking 2 positions that are completely opposite from one another.

You claim both of these things:

1: With completely incoherent light, sending the 2 eShift sub-frames through the lens at different times has the same lens requirements as sending them through the lens at the same time.
2: Sending the 2 eShift sub-frames through the lens at different times means much lower lens requirements than combining them before the lens and sending them through at the same time.

You said #1 was true because that was what your expert said. You said #2 was true because it is what your intuition told you.

How about being honest and telling readers here whether you and your expert are wrong about #1 being true or you are wrong about #2 being true?

If you continue to refuse to address why you are taking 2 opposite positions it will be pretty clear that you are trolling at this point.

--Darin

post #388 of 1307, 06-30-2017, 09:44 PM - coderguy (AVS Forum Addicted Member)
@Dave Harper's Baloney Sandwich
I never thought it was happening before the lens; my point was only that it is irrelevant when it happens, because the spatial requirements are affected by the temporal shifts.

post #389 of 1307, 06-30-2017, 09:48 PM - darinp (AVS Forum Special Member)
Quote:
Originally Posted by coderguy View Post
I never thought it was happening before the lens,
Dave knows that. He is trying to deceive readers about what you said. Dave already agreed that with completely incoherent light it makes no difference whether the sub-frames are combined before the lens or after. That is B and C in my question from before and Dave posted that B and C have the same lens requirements. To say otherwise would be to say that his expert is wrong.

--Darin
post #390 of 1307, 06-30-2017, 10:13 PM - RLBURNSIDE
Quote:
Originally Posted by cmjohnson View Post
It IS as simple as understanding that the lens resolution limit is defined by angular resolution and MTF values.

If you want the super simple version of it, here it is: 4K, real or simulated, demands double the angular resolution capacity of the lens
as compared to 1080p. You can't get around this limit with any software processing trick.

A lens that is adquate for 1080p but not adequate for native 4K is not going to be adequate for simulated e-shifted 4K as the angular resoltion requirement is the SAME as that of 4K.

Simple enough?
Yep. You gotta pay the piper! There's no free lunch in physics, i.e. there's no way to project an effectively 4K image on a wall through a lens that isn't good enough for a true 4K image, because those two things are effectively the same. The fact that the sub-frames are separated in time doesn't matter from the point of view of the optics of the lens, only to our perceptual system, via super-resolution.