
1920x1080 vs 1280x720 - your eye can't tell the difference - Page 2  

post #31 of 100
Thread Starter 
This is going to sound like I am repeating myself because that's exactly what I am doing.

The analysis takes into account the worst possible situation, that is, that we have two adjacent pixels with the greatest possible color/brightness/contrast difference. If I am unable to detect that situation, i.e., an alternating pattern of black and white pixels (they appear gray), then surely, even under the worst circumstances, I am never going to be able to perceive the screen door.

Some interesting speculations have been presented, but to me my reasoning seems stronger than the speculation.

It's not that I think you are lying about seeing screen door, but my reasoning points to two possible explanations:

1. There is a factor we are failing to account for in how the eye perceives images
2. You are not seeing screen door, but another phenomenon you are attributing to screen door.
post #32 of 100
You can see stars at night even though they are much smaller than the image size we are talking about, can't you? SDE isn't some conspiracy theory. It is very real and obvious in the right situation. Alternating black and white completely hides the SDE, since it blends into the black squares. It is on a bright white field that it is most obvious. White text especially, e.g., movie credits, shows SDE from any reasonable distance.
Warren.
post #33 of 100
Thread Starter 
I feel like a broken record.

Multiple people have pointed out that the brightness and contrast difference between the screen door and the pixel could be a factor in being able to perceive screen door. I am not going to argue against that.

However, let's write down what we know:

1. For all technologies I know of, pixels are larger than the lines or screen door between them
2. A black pixel is as black, or nearly as black, as the screen door between pixels.

So, let's consider the worst case. I have two adjacent pixels, one the brightest white and one the blackest black. According to the analysis, in some situations, the viewer should not be able to perceive there is a difference between the two pixels - the viewer will see a single pixel that is an "average" of the two indistinguishable pixels.

Remember that we wrote down what we knew. We know that the pixel is always larger than the screen door between pixels. Therefore, the worst case for screen door is a white background. Now the visibility of the black screen door lines is at a maximum. However, since the width of the screen door is always less than that of a pixel, we can conclude that if I cannot resolve two adjacent pixels with all other parameters being the same (contrast, etc.), then surely I cannot resolve the much smaller width of the screen door between the pixels!
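
To put rough numbers on this, here is a quick Python sketch. The screen width, resolution, fill factor, and seating distance are assumptions picked purely for illustration:

Code:
import math

# Assumed setup (illustrative only): 2.0 m wide screen, 1280x720 panel,
# 90% fill factor, viewer at 1.5 screen widths.
screen_width = 2.0                                # meters
h_pixels = 1280
fill_factor = 0.90
distance = 1.5 * screen_width                     # meters

pixel_pitch = screen_width / h_pixels             # full pixel cell width
lit_width = pixel_pitch * math.sqrt(fill_factor)  # lit portion, per dimension
gap_width = pixel_pitch - lit_width               # inter-pixel gap

def angle_deg(size, dist):
    # Angle subtended at the eye, in degrees.
    return math.degrees(2 * math.atan(size / (2 * dist)))

print(f"pixel cell: {angle_deg(pixel_pitch, distance):.4f} deg")  # ~0.030
print(f"gap:        {angle_deg(gap_width, distance):.4f} deg")    # ~0.0015

With these numbers, the gap subtends a far smaller angle than the whole pixel cell, which is exactly the point: if a pixel pair is below the eye's threshold, the gap should be far below it.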
post #34 of 100
Thread Starter 
I see a difference between the case of a star and the case of being able to distinguish between two pixels or a pixel and screen door.

In one case, the pixel case, we are questioning whether one can detect the difference between two adjacent pixels.

In the other case, the case of the star, we are not questioning whether one can detect the difference between two adjacent items.

As one poster already said, being able to detect a point source is not analogous. Being able to detect that a star is a binary star would be much more analogous.

For example, in complete darkness the human eye can tell that a candle is burning from very far away. However, if the distance is great enough, I cannot detect whether that light source is one candle or two.

In conclusion, I don't find the analogy fitting.
post #35 of 100
Thread Starter 
I have (hopefully) corrected the math errors and reposted the updated PDF file.
post #36 of 100
Quote:
Originally Posted by kingstud
Basically, for most people, and even for some who use front projection, the difference between 1080i and 720p is essentially non-existent because the eye is not capable of telling the difference between the two. Read the attached PDF if you are curious. All the equations are spelled out, so you can substitute in numbers for your own situation and see what the difference between the two resolutions is.

Comments/suggestions/feedback welcome!
I think the distance assumptions are correct for the average family with the average TV in the den. Not so for a dedicated room trying to capture the experience of the theater.

In my theater, I prefer a viewing angle of about 38.5 degrees. That translates to seating at about 1.45x screen width. At this distance, one can see pixel structure on most 720p displays, which is why I'm using a CRT PJ at 768p.
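
A one-line check of that geometry, in Python (38.5 degrees is my preference above; the small difference from 1.45x is rounding):

Code:
import math

# Seating distance implied by a chosen horizontal viewing angle.
viewing_angle = 38.5  # degrees
ratio = 1 / (2 * math.tan(math.radians(viewing_angle / 2)))
print(f"seating distance = {ratio:.2f} x screen width")  # ~1.43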

Dave
post #37 of 100
Quote:
Originally Posted by kingstud
In conclusion, I don't find the analogy fitting.
Agreed. The analogy is flawed and not relevant to the discussion.

This isn't that complicated. I have a PJ with 960x540 resolution and approximately 60% fill factor. I can easily see (not merely detect; see) the screen door from 1.5x screen widths. So at the very least I can resolve 1920x1080 from that distance, and probably a bit more. (Think of a PJ with 100% fill factor and 1920x1080 resolution. Now draw a grid of black lines every other pixel and presto! It looks just like my 960x540 PJ, except with a 25% fill factor.)
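
A tiny numpy check of that grid argument, with the panel shrunk to 8x8 purely for illustration:

Code:
import numpy as np

# Start from a full-fill panel, black out every other row and column,
# and measure the resulting fill factor.
panel = np.ones((8, 8))
panel[1::2, :] = 0   # horizontal black lines every other pixel
panel[:, 1::2] = 0   # vertical black lines every other pixel
print(panel.mean())  # 0.25 -> the 25% fill factor mentioned above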

If you can see the screen door on your PJ, then a bump up in resolution will provide a visual benefit. Period.

Mike
post #38 of 100
Quote:
Originally Posted by Erik Garci
This is equivalent to doubling the pixel count (both horizontally and vertically) and doubling the fill factor's difference from 100%. For example, a 960x540 panel with 90% fill factor becomes a 1920x1080 panel with 80% fill factor.

Correct?
Almost. The example above sampled the simulated mirrors onto a sub-grid with inter-mirror pitch, P, of 40 points and inter-mirror gap, G, of 4 points. Each "point" represents a screen pixel in the simulation (except there was a final 4X sub-sampling in each dimension for the web pictures). So, the normal fill factor is ((P-G)/P)^2 = 81%.

The "masked mirror" image simulates what would happen if each mirror of a conventional DMD is etched with a "+"-shaped black mask with line width G. This creates 4 "sub-mirrors" per physical mirror. The fill factor is reduced disproportionately (i.e., more than doubling the fill factor's difference from 100%) to ((P/2-G)/(P/2))^2 = 64% in this example.

The sub-mirror pitch is half the inter-mirror pitch, but since the 4 sub-pixels on each mirror all display the same brightness, one might argue that the pixel pitch has not changed (depending on how one chooses to define pixel pitch). However, in terms of screen door pitch, it is definitely halved. This is what makes it less visible.
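
In code form, the two fill factors work out like this (P and G in simulation points, as above):

Code:
# Inter-mirror pitch P and gap G, in simulation "points".
P, G = 40, 4

normal_ff = ((P - G) / P) ** 2            # conventional mirror
masked_ff = ((P / 2 - G) / (P / 2)) ** 2  # "+"-masked mirror, 4 sub-mirrors

print(f"normal fill factor: {normal_ff:.0%}")  # 81%
print(f"masked fill factor: {masked_ff:.0%}")  # 64%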

Brent
post #39 of 100
Quote:
Originally Posted by kingstud
The analysis takes into account the worst possible situation, that is, that we have two adjacent pixels with the greatest possible color/brightness/contrast difference. If I am unable to detect that situation, i.e., an alternating pattern of black and white pixels (they appear gray), then surely, even under the worst circumstances, I am never going to be able to perceive the screen door.
It turns out that screen door generates more high spatial frequency content than an alternating black/white (checkerboard) pattern. Why? Because the thin black lines are small and therefore contain more "energy" at higher spatial frequencies. The eye clearly does not have a sharp cut-off and is still responsive to these higher spatial frequencies. This is why screen door can be detected at closer distances than a checkerboard pattern.
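
A rough 1-D illustration of that point in Python (the pitch and gap sizes here are assumptions, not measurements of any real panel):

Code:
import numpy as np

# Compare the spectrum of a 50/50 black/white line pattern with a
# thin-gap "screen door" profile of the same pitch.
pitch, n_periods = 40, 32
n = pitch * n_periods
x = np.arange(n)

checker = (x % pitch < pitch // 2).astype(float)  # 50% duty cycle
sde = (x % pitch >= 4).astype(float)              # white with thin black gaps

for name, sig in [("checker", checker), ("screen door", sde)]:
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    fund = n_periods  # FFT bin of the pattern's fundamental
    frac = (spec[3 * fund:] ** 2).sum() / (spec ** 2).sum()
    print(f"{name}: {frac:.1%} of AC energy at/above the 3rd harmonic")

The thin-line profile puts a much larger share of its energy into the higher harmonics, which is the sense in which screen door is "richer" in high spatial frequencies.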

I do agree that at the limit of detectability we may not be seeing screen door per se (in the sense of resolving it), but the perceived "digitalness" is caused by the screen door grid.

Brent
post #40 of 100
Hi, Kingstud,

As one of the very first owners of the D-ILA, as someone who sold modifications to these projectors for several years (and with ~3,000 posts on the AVS Forum, to boot), I'm quite familiar with "SDE", and no, I'm not attributing it to the wrong display artifact. Furthermore, I'm not claiming that I would be able to visualize screen door on your hypothetical "2-pixel display". However, regardless of what you feel are the merits of your analysis, under the proper circumstances I can quite clearly see a visual grid formed by the inter-pixel gaps on the D-ILA at distances greater than you claim my eyes can resolve. In fact, contrary to your 2-pixel setup, this is not comprised of two adjacent pixels at opposite luminance levels. Rather, it is visible over a relatively large uniform field of the display.

One of the most common cases where I've seen this is with a Windows HTPC display: a white window on top of a blue background. Obviously, in the blue regions of the screen, SDE isn't visible (blue receptor cones are quite poor at angular differentiation). However, in this display environment, it's possible to see the SDE grid within a large portion of the white window when projected and ambient luminance levels are right. While I haven't analyzed this carefully enough to state it for certain, it does appear that the predominantly blue background may be a contributing factor in achieving the level of sensitivity needed to see the inter-pixel gap grid in the white region of the screen.

Just to address a couple of expected concerns ahead of time... No, this isn't caused by noise from the HTPC output, which has been well scoped. It's also a pure white window without dithering. No, it's not induced by the eight-bar drive scheme in the D-ILA, which looks totally different. And no, I'm not hallucinating. It's real!

Cheers!
MarkF
post #41 of 100
Thread Starter 
Quote:
It turns out that screen door generates more high spatial frequency content than an alternating black/white (checkerboard) pattern. Why? Because the thin black lines are small and therefore contain more "energy" at higher spatial frequencies. The eye clearly does not have a sharp cut-off and is still responsive to these higher spatial frequencies. This is why screen door can be detected at closer distances than a checkerboard pattern.
I have no idea what that means. Can you please elaborate or point to an explanation?

Also, I am not necessarily talking about a checkerboard pattern. What if the pattern is alternating vertical lines of black and white pixels? That's very much analogous to screen door.
post #42 of 100
Thread Starter 
I don't doubt that you can see screen door on your D-ILA. I am merely searching for an explanation. I threw some explanations out there - nothing personal.

No matter what special circumstances you can come up with (color, image, contrast, etc.), I can simply counter that I could replicate those circumstances on a grander scale using the much larger pixels rather than the more diminutive lines between them. If the circumstances are the same, it stands to reason, in my mind at least, that it would be impossible to see screen door if I could not see pixelation under the same circumstances.
post #43 of 100
How come I can tell a difference between 1280x1024 and 1600x1200? Two resolutions actually closer together than 720p and 1080p? And on a smaller screen? Either way, I'll be able to tell a difference driving the projector via HTPC some day with a 1920x1080 desktop resolution. Here's to the HD-DVD format!
post #44 of 100
Hi, Kingstud!

Perhaps you might be mixing up others' comments with mine? Of course - I can definitely see pixelation long before it's possible to see the inter-pixel grid! In order from long distance to short, here's what I see:
  • See a single pixel on a black background (pretty much from the neighbor's house!)
  • Identify significant distortion in single-pixel-width near-horizontal or near-vertical lines due to pixelation (at ~2.5-3 screen widths).
  • Clearly identify the shape of individual pixels as rectangular (~1.5 screen widths)
  • See the inter-pixel grid (between ~0.8 and 1.2 screen widths, depending on conditions)
All this applies to the current fleet of 1365x1024 D-ILAs, which have a 92% fill factor.

Cheers!
MarkF
post #45 of 100
You guys are much better at geometry than I am. All I can tell you is my experience. I've seen 1080i on a 960x540 pj and on a 1280x720 pj and I couldn't discern a noticeable difference on a 106" diagonal screen from 10 feet away. But I've seen 1080p source on a 1080p pj on a 100" screen from 10 feet away and it was jaw-dropping. Definitely a difference.

Will folks with 42-50" plasmas, lcds, rptvs be able to see the difference with 1080p? I doubt it seriously. Will projector owners? I guarantee it.

Now back to the hypotenuse of the angle theorem radius tan gobbledy gook :)
post #46 of 100
The math leaves a lot to be desired. Highly localized contrast is irrelevant in measuring the resolving acuity of the human visual system, which is a stereoscopic, motion-adaptive system that infers from cues. Binocular vision tremendously improves the ability to resolve small detail; it is part of the reason it often is easier to see the starry sky in detail with both eyes open. (Try it.) Then there's the whole motion-energy thing. I can resolve structures from a field of view that is in motion, however slight, as a direct result of binocular vision. And finally there's that whole gradient-inference thing... I may not be able to pick out that thin line between two adjacent pixels, but take a few hundred of them and stack them vertically, and they're going to become a LOT more obvious due to what effectively is a neurophysiological "convolution" that tends to emphasize visual cues such as vertical lines.

Lastly, as an aside, I've been using 1920x1200 laptop displays for a few years now, and I only need to open an Excel spreadsheet to make out a heck of a difference... and I can do that at 2x, 3x or 4x the distance. Yeah, I may not make out much difference between a picture of a tree at 1920x1200 vs. 1280x800 from more than a foot or two away. Other kinds of content easily reveal the benefits of resolutions beyond 1280x720.
post #47 of 100
So, what resolution will it take, and what fill factor, for most people to get 1 screen width away with no bad screen door or pixelation?

Doesn't look like the 7700 I was contemplating is gonna make it there.
post #48 of 100
nice discussion... can't follow almost any of it... but it all seems very clever.....

krlock2
post #49 of 100
To the thesis:

Erm, tell that to Opera (my web browser). More pixels = more screen real estate = me a much happier camper.

I think the world would be a far different place if LCD monitor manufacturers got their heads out of their ar$e and started building higher definition LCD monitors. My 19" CRT purrs along at 1600x1200; hell, it's only 18" visible. Accepting less fidelity is not an option. Less screen real estate is bunk. When you start scaling your apps smaller to run at lower res, the problem becomes clear, the need becomes real.
post #50 of 100
As sphinx99 has said, a bit of math won't do here.
The eye is not a camera that takes snapshots; it delivers a constant stream of signals, and the details are added up, in a sense, by the visual cortex (I wonder why they don't use that to enhance the resolution of film, btw, as this process is mathematically reproducible to some extent). This and many other processes are complex and cannot be calculated with a few simple formulas. If you have ever worked with the primitive neural networks in computer science, you'll know they are completely 'chaotic' and have a practically infinite number of parameters that influence each other all the time. We're talking about several hundred KBytes just to describe the neuron weight constants for a simple network of a few hundred neurons used for character recognition at a resolution of 10x16 pixels.
Thus, taking all possible effects into account is impossible; their number in a brain is virtually infinite.

Anyhow, I forgot which one it was, but I once read about some animal that tilts its head right and left in certain situations for that reason, and humans can sometimes be observed doing the same thing if they try to decipher something.
Also, just pause a movie during progressive playback. Watch the details. Start the movie again. Detail will increase, probably mostly because adjacent frames reveal different parts of the picture due to the grain. If we were unable to "combine" frames, detail would not always be higher in a moving picture.
This is only one of the many factors to take into consideration.

Personally, I can easily see screen door at 2.5x screen width on a 720p LCD, at least when there is green in the picture; red and blue are pretty much SDE-free even at <1.5x.
The difference between 1080 and 720 is so huge, I believe I could actually see it on my 21" TFT monitor from a few meters away. On a 2-meter-wide screen, the difference should be visible from a greater distance than the average room even lets you get away from it. And regarding that TV scenario, the analysis seems too pessimistic by an order of magnitude to me.
720p is just a compromise; resolution only starts to get interesting at 2K. BTW, I would take part in an A/B test anytime to prove what I said above.
post #51 of 100
Quote:
Originally Posted by audiophobe
Personally, I can easily see screen door on 2.5x screen width on a 720p LCD
I think some people would not even be able to see alternating black and white pixels in that situation, much less screen door.
post #52 of 100
Thread Starter 
There have been many comments that the analysis is not mathematically rigorous enough. You are correct that I don't account for, and don't attempt to quantify, other factors such as motion or how the brain may manipulate the image. I am not a specialist in optics. In fact, I have no formal training in optics. I only know some basic geometry and some of the physics behind the optics. That was the whole point of starting this discussion - to learn! If someone wants to point me in the right direction, I will gladly research any factors that may influence the resolving capabilities of the eye.

Yet, I would like to point out that the first part of the analysis produces an angular resolution of approximately 0.02 degrees under low-light viewing conditions. This is simply the limit of the ability of an optical system to resolve objects as defined by physics. Does the eye somehow, through image manipulation, violate such a law? I tend to think not. Regardless of how many neurons are present or processing, I am inclined to think that the eye, just like all other objects in our universe, is constrained by the laws of physics.
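
For reference, the standard physics limit here is the Rayleigh criterion, theta = 1.22 * lambda / D. A sketch, where the 550 nm wavelength and the 2 mm pupil are assumptions on my part (the result scales inversely with pupil diameter):

Code:
import math

wavelength = 550e-9  # meters (green light, assumed)
pupil = 2e-3         # meters (assumed pupil diameter)
theta = 1.22 * wavelength / pupil            # radians
print(f"{math.degrees(theta):.4f} degrees")  # ~0.019, near the 0.02 figure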

Many people have presented interesting theories as to why the eye may be able to better perceive pixelation or screen doors than what the optical limits dictate. These are all interesting discussions and theories. I am especially intrigued, as mentioned, by the influence binocular vision might have on the resolving ability of the eye.

To those of you who point out the simplicity of the analysis and its failure to quantify many factors: the analysis clearly stated that there are many factors unaccounted for that I did not attempt to quantify, and that the goal was a worst-case analysis. You aren't telling me anything new by telling me I left something out. Instead of telling me it's missing, do something productive; show me the math! Speculation and hypothesizing are interesting, and I appreciate the insights; however, they do not do a bit of good unless we can quantify them and express them formally.
post #53 of 100
Another way to look at the problem is in terms of line pairs. That is, instead of saying the angular resolution is 0.02 degrees per line, say that the angular resolution is 0.04 degrees per line pair.

Let's consider these two patterns of alternating black and white lines:
1. Each black line is 0.02 degrees, and each white line is 0.02 degrees.
2. Each black line is 0.004 degrees, and each white line is 0.036 degrees. (This is similar to a screen door effect with 0.90 fill factor.)

In both cases the angular resolution is 0.04 degrees per line pair.

So the question is...
How much is the visibility of the black lines reduced as you increase the proportion of white in relation to black?

For example...
In the case of black=0.019 and white=0.021 degrees, is black totally invisible? Or is it just slightly less visible, even though its own angular width is less than 0.02 degrees? I suppose that black is just slightly less visible, compared to black=0.02 and white=0.02.
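
One hedged way to frame the question: near threshold, visibility is driven largely by the amplitude of the pattern's fundamental frequency (one cycle per 0.04-degree line pair). For a black line of width w in a pair of pitch p, that relative amplitude goes as sin(pi*w/p). A sketch, not a psychophysical model:

Code:
import math

p = 0.04  # degrees per line pair
for w in [0.020, 0.019, 0.004]:
    amp = math.sin(math.pi * w / p)
    print(f"black line {w:.3f} deg: relative fundamental amplitude {amp:.2f}")

# 0.020 -> 1.00, 0.019 -> 1.00 (barely reduced), 0.004 -> 0.31

If this model is roughly right, the 0.019 black line is only very slightly less visible, while the screen-door-like 0.004 line is much less visible but not invisible.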
post #54 of 100
Quote:
Originally Posted by kingstud
Yet, I would like to point out that the first part of the analysis produces an angular resolution of approximately 0.02 degrees under low-light viewing conditions. This is simply the limit of the ability of an optical system to resolve objects as defined by physics. Does the eye somehow, through image manipulation, violate such a law? I tend to think not. Regardless of how many neurons are present or processing, I am inclined to think that the eye, just like all other objects in our universe, is constrained by the laws of physics.
Of course it is constrained by the laws of physics. But that doesn't mean it has to follow only things that can be determined with simple math. I can't do the math as the human optical system is very complicated and experimentation is the only way that we can determine some of these things at this point. At least as far as I know. If anybody has created a mathematical model that determines these things with our binocular system where the eyes can move and the brain takes a lot of cues and puts them together, then it wouldn't surprise me if that person won a Nobel Prize. Our center vision is also much better for detail than vision to the sides. I have heard that if our whole vision had consistent processing at the level of the center vision, the amount of brain mass it would take just for this would not fit inside our heads. Instead we have a relatively large amount of brain mass dedicated to one small spot (the center).

I applaud what you have done to try to determine this; I just don't think you are going to find simple math that works. Instead, I think you would have more luck finding results of experiments that have been done. I believe that the HD resolution of 1920x1080 was determined partially by experimenting with what people can resolve at different distances (or using results from somebody else's experiments).

--Darin
post #55 of 100
Thread Starter 
Quote:
For example...
In the case of black=0.019 and white=0.021, is black totally invisible? Or is it just slightly less visible, even though its own angular resolution is less than 0.02? I suppose that black is just slightly less visible, compared to black=0.02 and white=0.02.
Again, I am not an expert in optics. But let's think about what would happen if the black and white lines have equal widths right at the threshold. One should effectively see gray. That is, if we break this down into RGB values ranging from 0-255, one would perceive a perfectly "gray" image with RGB values in the exact middle of the range.

In your case, since white is more predominant, you would in general see a lighter gray than in the above case, and you might also be able to perceive some pixelation, since the width of the white line is right at the theoretical threshold.
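
As a crude model (ignoring gamma, and assuming the eye simply averages whatever it cannot resolve):

Code:
def perceived_gray(black_w, white_w, black=0, white=255):
    # Area-weighted average of an unresolved black/white line pattern.
    return (black * black_w + white * white_w) / (black_w + white_w)

print(perceived_gray(0.020, 0.020))  # 127.5 -> mid gray
print(perceived_gray(0.019, 0.021))  # ~133.9 -> slightly lighter
print(perceived_gray(0.004, 0.036))  # 229.5 -> near white (screen-door case)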

Quote:
I applaud what you have done to try to determine this, I just don't think you are going to find simple math that works. Instead, I think you would have more luck finding results of experiments that have been done. I believe that the HD resolution of 1920x1080 was determined partially by experimenting with what people can resolve at different distances (or using results from somebody else's experiments).
I remain unsatisfied with mere observations. If such experiments exist, then quantifications and mathematics should also exist that extrapolate the experimental data and attempt to map it to formulas. I want empirical, mathematical evidence. The experimental data, in my opinion, is nearly useless unless it can be quantified mathematically.

Quote:
Of course it is constrained by the laws of physics. But that doesn't mean it has to follow only things that can be determined with simple math. I can't do the math as the human optical system is very complicated and experimentation is the only way that we can determine some of these things at this point. At least as far as I know. If anybody has created a mathematical model that determines these things with our binocular system where the eyes can move and the brain takes a lot of cues and puts them together, then it wouldn't surprise me if that person won a Nobel Prize. Our center vision is also much better for detail than vision to the sides. I have heard that if our whole vision had consistent processing at the level of the center vision, the amount of brain mass it would take just for this would not fit inside our heads. Instead we have a relatively large amount of brain mass dedicated to one small spot (the center).
Does it really matter where the effective resolution of the human eye is highest - middle, sides, etc.? All portions are still constrained by the laws of physics.
post #56 of 100
One thing you might want to look at is, as alluded to above, the "binocular effect"; it would seem logical that our two eyes work like an interferometer. Just look at what's going on with telescopes these days: they are starting to stop building "bigger" ones and starting to build binocular ones. Why? Because two smaller telescopes in a binocular configuration can replace one very large monocular telescope. Also consider the Very Large Array. Here's an applicable quote from the VLA site:
http://www.vla.nrao.edu/genpub/overview/
Quote:
The VLA is an interferometer; this means that it operates by multiplying the data from each pair of telescopes together to form interference patterns. The structure of those interference patterns, and how they change with time as the earth rotates, reflect the structure of radio sources on the sky: we can take these patterns and use a mathematical technique called the Fourier transform to make maps.
It would be interesting to see what kind of results your calculations would produce if you were to take this into account.

Another quote
http://astrosun2.astro.cornell.edu/a...rferometer.htm
Quote:
Interferometer

An interferometer consists of two or more separate telescopes that combine their signals almost as if they were coming from separate portions of a telescope as big as the two telescopes are apart.

The resolution of an interferometer approaches that of a telescope of diameter equal to the largest separation between its individual elements (telescopes). However, not as many photons are collected by the interferometer as would be by a giant single telescope of that size.
-edit

So what would the result be? Well to borrow from kiwishred:
Quote:
- Used 1 mm instead of 6 mm for pupil diameter. This under-estimates the resolving power by a factor of 6.

- The angular resolution formula gives its result in radians, but the tan was taken using the argument in units of degrees (thereby over-estimating the resolving power by a factor of about 57).
If we assume ~3.5 in, say 90 mm, of eye spacing, that would imply that the true resolving power of the human visual system could be on the order of about 2x (~90/57) kingstud's results. I'm rushing here (and lazy), so I've probably missed things in my "calculations", but the visual system being an interferometer would seem to explain a lot of the issues raised:
"I can 'see' much finer stuff...." - An ideal interferometer would support that
"Some can't even see that...." - Perhaps those people's brains aren't as good at doing the combination.

Food for thought. :)
post #57 of 100
TrickMcKaha, I sit at one screen width from my 7700 and love it. Only a few times in some movies do I see the SDE, and I just don't mind. 99% of the time it just looks awesome. I don't think it is the resolution that is the problem, but the gap between pixels. I never notice the pixelation in movies. I also run my computer monitor at 1024x768 a lot. Of course, different people have different sensitivities to it.
Warren.
post #58 of 100
Quote:
Originally Posted by kingstud
I have no idea what that means. Can you please elaborate or point to an explanation?
Best thing would be to bone up on a digital signal processing (or image processing) textbook to understand sampled data systems. The Fourier Transform & Its Applications, R. Bracewell, or Multidimensional Digital Signal Processing, D. Dudgeon and R. Mersereau, would tell you more than you would ever want to know.

Basically, the samples on, say, a DVD represent brightness/colour values at infinitely small points on a regular rectangular grid. But these samples, if displayed as tiny points, will generate not only the desired fundamental signals (the desired image), but also unwanted harmonics of the fundamental signals. The harmonics are responsible for image artifacts such as pixellation and SDE. Remove the harmonics using a high quality low-pass filter and you get a nice smooth, artifact-free image. Unfortunately, a DMD mirror makes a rather poor low-pass filter. And using the eye as a low-pass filter is inefficient because (a) it forces you to sit back from the screen further than necessary and (b) different eyes have different frequency responses (visual acuity) and, possibly, differences in the other "higher order" psycho-visual responses that others have described.

I may have overstated the amount of high frequency content that screen door generates. If the spatial sample rate (1/mirror pitch) is f, then the first unwanted harmonic of the screen door occurs at f, whereas the first unwanted harmonic of a checkerboard (or line pair) pattern occurs at 1.5f (and is therefore more highly attenuated by the frequency response of the eye). While in theory the fundamental of a grid pattern should be considerably stronger than the first unwanted harmonic of the screen door, I just did some experiments looking at printouts of a checkerboard grid pattern and a 72% fill factor screen door grid (white background). The checkerboard merged to grey at 2.5X (distance/screen width) whereas, using the same subjective criteria, the SDE grid disappeared (merged to grey) at 2.1X. However, the spatial frequency associated with the SDE grid is double that of the checkerboard. So, these distances are far closer to each other than simple optical resolution theory would predict. When theory conflicts with empirical evidence, I question the theory, not the experiment.
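
For anyone who wants to see where those harmonics land, a small numpy sketch (the pitch and gap sizes are illustrative, not measured):

Code:
import numpy as np

# With spatial sample rate f = 1/pixel pitch, a line-pair pattern has its
# fundamental at 0.5f and first unwanted harmonic at 1.5f, while the
# screen-door grid itself sits at f.
pitch, n = 20, 2000
x = np.arange(n)
line_pairs = ((x // pitch) % 2).astype(float)  # alternating on/off pixels
screen_door = (x % pitch >= 2).astype(float)   # lit pixels with thin gaps

f = n // pitch  # sample rate f, expressed in FFT bin units
for name, sig in [("line pairs", line_pairs), ("screen door", screen_door)]:
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    peaks = sorted(np.argsort(spec)[-2:] / f)  # two strongest components
    print(name, "strongest components at", peaks, "x f")

# line pairs  -> [0.5, 1.5] x f
# screen door -> [1.0, 2.0] x f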

Quote:
Originally Posted by sphinx99
And finally there's that whole gradient-inference thing... I may not be able to pick out that thin line between two adjacent pixels, but take a few hundred of them and stack them vertically, and they're going to become a LOT more obvious due to what effectively is a neurophysiological "convolution" that tends to emphasize visual cues such as vertical lines.
I think that may well be what is going on, and it is what I have been trying to say about SDE being related to visual perception over large patches of the screen (as opposed to eye charts that measure visual acuity in a very small region). I think the psycho-visual process would be one of correlation rather than convolution, though.

Brent

Edited for typos/clarity
post #59 of 100
Quote:
Originally Posted by kingstud
In your case, since white is more predominant you would, in general see a lighter gray than the above case, and you might also be able to perceive some pixelation since the width of the white line is right at the theoretical threshold.
In this image...
http://home1.gte.net/res18h39/images/lines1.gif
the bottom row indeed appears to be a lighter gray, when viewed from far away. To my eyes, the top row is slightly easier to resolve, however.
post #60 of 100
Quote:
Originally Posted by kingstud
Does it really matter where the effective resolution of the human eye is highest - middle, sides, etc.?
It means that we should take the highest resolution into account, and it also means that we should account for the effects of a lot of processing power, moving images, the ability of the eyes to move, and the fact that our visual systems take cues and make assumptions from them.
Quote:
All portions are still constrained by the law of physics.
You keep saying this, but then you seem to want to constrain it to very simple physics. Our systems are not constrained to the physics of fixed images, no guessing, no aberrations around things, etc. I think the reason that optical illusions can be made is largely that our visual systems take a lot of data and make best-guess kinds of images out of it. But this also means that we can detect things that wouldn't be possible with just a few single points with static images and no movement, etc. Our visual systems can take multiple snapshots and make guesses based on only seeing a portion of something (like straight lines) or variations that would most likely be representative of a straight line. In other words, I don't think you are going to find a simple mathematical limit, with simple assumptions like static reception and images, that the processing over a large amount of data and estimating can't defy in this situation.

Just like how our brains will actually create fake images. Like if we see someone walking toward a banana peel and then slip, our brains can create an image of the person slipping on the peel even though we never saw it. So, it can be wrong. But in the case of resolution and SDE, it means it can be right based on quite a few cues also. The physics question is whether it can get those cues in this complicated system, not whether it can get those cues in an extremely simple system.

--Darin