Having trouble explaining why tv's cause lag to gamers. - AVS Forum
post #1 of 35 Old 01-24-2013, 09:37 AM - Thread Starter
AVS Special Member
 
tgm1024's Avatar
 
Join Date: Dec 2010
Posts: 6,130
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 264 Post(s)
Liked: 667
I'm hunting for a good way to explain display lag to gamers who feel burned because they don't understand what's going on inside a TV (as opposed to a CRT or a computer LED monitor), and this is the best I've come up with so far.

I'll appreciate any input, because I'm not sure yet how else to describe this.

Note: This is not meant to be perfect. It's meant to explain *roughly* why, if frames are coming out at X per second, it can take longer than 1/Xth of a second from input to output.

Note2: I'm (unfortunately) collapsing together the notions of response time and time spent per frame, because although they're different, perhaps it's close enough not to warrant a divergence here? Tell me.

Note3: It's also not allowing for an understanding of frames being created midstream (via tweening/interpolation) within the display. It's strictly trying to describe the difference between the frame rate displayed and the total time spent working on each frame.

Here's my analogy so far. (The numbers are stretched to make a point.)

Imagine you're designing a very high-end mythical car wash. You're trying to get as many cars washed as you can in a day. But you realize that it still takes an hour to really wash/dry/wax a car properly.

What do you do?

You set up a 12-stage wash. There are 12 crews of people (say 4 people each), one crew after another in a long line.

A car comes in to the first stage and gets 5 minutes of attention and then moves to the next stage. As it moves to the next stage, another car comes into the first stage.

Each car still takes an hour to complete from going in to going out, but you're still able to have 12 cars an hour exiting the wash.

Each car is a frame.
The hour is the lag.
The 5 minutes is the response time. (<----broken idea)
12 cars an hour is the frame rate.
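For anyone who wants to see the arithmetic, the car wash maps onto a few lines of Python (the numbers are the analogy's; `exit_time` is just an illustrative name):

```python
# Hypothetical 12-stage car wash: each stage takes 5 minutes, and a new
# car enters as soon as the first stage frees up (every 5 minutes).
STAGES = 12
MINUTES_PER_STAGE = 5

def exit_time(car_index):
    """Minute at which car number `car_index` (0-based) leaves the wash."""
    enter = car_index * MINUTES_PER_STAGE       # cars enter 5 minutes apart
    return enter + STAGES * MINUTES_PER_STAGE   # each spends a full hour inside

# The first car exits at minute 60 (the "lag"), but after that a car
# exits every 5 minutes: 12 cars an hour (the "frame rate").
print([exit_time(i) for i in range(4)])  # [60, 65, 70, 75]
```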

It's reminding me of the difference between "ping" and "data rate" that I had an endless problem explaining to gamers in the past. I had to resort to depicting a gigabit per second data trunk from here to the sun to explain the difference.

WARNING: You have now entered a no @#$%tard zone. Please gather your anti-vaccine propaganda nonsense and slowly back out the way you came in.
post #2 of 35 Old 01-24-2013, 10:37 AM
Chronoptimist
Do you really need an analogy like that? I find that analogies actually tend to complicate matters, rather than make things easier to understand. I understand exactly what the problem is, and I don't understand your analogy at all.


All you need to do is explain that CRTs were analogue displays that basically displayed an image instantly via a simple electrical connection.
Modern displays need to process the image before it can be displayed, and this introduces a delay. The length of the delay depends on the display and the amount of processing being done.

The delay usually doesn't matter for watching video - though it can cause lip-sync problems if there is not an equal delay applied to the audio - but is a big problem with games because you have to react to things happening on-screen, and you expect the game to react to your inputs immediately, rather than being shown a delayed image.


P.S. Please don't use colored text. Not everyone is using the default theme, and blue on black is hard to read.
post #3 of 35 Old 01-24-2013, 11:20 AM - Thread Starter
tgm1024
Quote:
Originally Posted by Chronoptimist 
All you need to do is explain that CRTs were analogue displays that basically displayed an image instantly via a simple electrical connection.
Modern displays need to process the image before it can be displayed, and this introduces a delay. The length of the delay depends on the display and the amount of processing being done.

You still don't see how that fails to explain how the processing can take longer than the time between frames?

Perhaps you're making assumptions that your knowledge of what's going on is intuitive---or you've lost what it's like to know next to nothing about displays. Your CRT explanation would not reach as many people as you think, because you're not addressing where the stumbling block is, and *only* that stumbling block.

The reason is because the following question, and only the following question, seems so counter intuitive that any further explanation falls on deaf ears.

How is it possible that a TV that displays X frames a second takes longer than 1/Xth of a second from input to output?

On its surface to someone never dealing with this before, it seems as if there would be a backlog of frames because the output would never "catch up" to the input. It's THAT concept that needs to be cleared up. Once they have a clearer picture of how that could be in a non-display real world example, then all the other technology explanations will fall right into place.

In fact a Best Buy employee put the disconnect perfectly: "well, lag can never exceed the time between frames, or else the TV would have to throw frames out."

post #4 of 35 Old 01-24-2013, 01:32 PM
Chronoptimist
Quote:
Originally Posted by tgm1024 View Post

You still don't see how that fails to explain how the processing can take longer than the time between frames?
Any delay impacts gameplay, whether it's below 17ms or not. (assuming 60fps)

Quote:
Originally Posted by tgm1024 View Post

Perhaps you're making assumptions that your knowledge of what's going on is intuitive---or you've lost what it's like to know next to nothing about displays. Your CRT explanation would not reach as many people as you think, because you're not addressing where the stumbling block is, and *only* that stumbling block.
I'm not sure what's hard to understand when you say that a CRT has no delay because it's an analogue display and the signal is essentially a direct electrical connection requiring no processing.

Flat panels require the image to be processed before it can be displayed, which adds a delay. The length of the delay can vary depending on the type or amount of processing being done.


Making it into an analogy only serves to complicate matters, and if you're wanting a basic explanation, there's no need for anything more than that.

Quote:
Originally Posted by tgm1024 View Post

The reason is because the following question, and only the following question, seems so counter intuitive that any further explanation falls on deaf ears.

How is it possible that a TV that displays X frames a second takes longer than 1/Xth of a second from input to output?
Answer: because flat panels need to process the image before it can be displayed, which takes time, and different models of TV can have different delays depending on the type or amount of processing applied.


Talking about frames just complicates matters, and frames aren't all that relevant - delays are in milliseconds, not frames.
For a gamer, you may want to divide the delay by ~17ms (1/60th of a second) to calculate how many frames of delay there are, but as I said, it just complicates things. ~17ms assumes the game is 60fps, rather than 30fps (console) or 120fps (high-end PC).

For example, 40ms delay would be two frames at 30fps, three frames at 60fps, and five frames at 120fps, because you must always round up when counting frames.

Or if you had two displays, one with a 1ms delay and one with a 7ms delay, even at 120fps (8.33ms per frame) the 1ms display would still be faster, though both technically have less than one frame of delay.
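The round-up arithmetic in that example can be checked with a few lines of Python (a sketch; `frames_of_delay` is a made-up helper name):

```python
import math

def frames_of_delay(delay_ms, fps):
    """Whole frames of lag: any partial frame still delays the next one, so round up."""
    frame_time_ms = 1000.0 / fps
    return math.ceil(delay_ms / frame_time_ms)

# The same 40ms of lag costs more frames the faster the game runs:
print(frames_of_delay(40, 30))   # 2
print(frames_of_delay(40, 60))   # 3
print(frames_of_delay(40, 120))  # 5
```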
Quote:
Originally Posted by tgm1024 View Post

On its surface to someone never dealing with this before, it seems as if there would be a backlog of frames because the output would never "catch up" to the input. It's THAT concept that needs to be cleared up. Once they have a clearer picture of how that could be in a non-display real world example, then all the other technology explanations will fall right into place.
If you simply explain that there's a fixed time delay, I'm not sure why someone would think that there could be a backlog that builds up. If you start talking about frames etc. then it complicates matters and perhaps someone could think that.
post #5 of 35 Old 01-24-2013, 01:35 PM
rogo
Lag can exceed the time between frames.

Lag can be as long as it takes for the first frame to appear once input is received.

To take your carwash analogy (which you want to replace, by the way): "How long is the carwash?"

On a CRT, you opened your garage and started driving. No lag.

On a spiffy LCD, you opened your garage, some people buffed the car, then you started driving.

The "buffing" is your measure of lag.

The reason the car wash analogy is bad is that it doesn't really help show why the next frame is delayed.

Maybe plumbing?

In the old days, the water supply was on a short pipe under the sink. Turn it on, the water flowed. You could use each drop immediately.

Now, due to nasty stuff in the water supply, we need to purify each and every drop. We turn it on, each drop gets UV, reverse osmosis, filtering. Then we get it. Each drop gets these processes. so each drop is delayed before we can use it.

The problem for gaming (and again the analogy sucks), is we need to react to each "frame" quickly -- as Chron says -- because the next drop is wriggling out before we've even seen the previous one. And it's "programmed" to be hot, cold, whatever. We might know something about it based on how hot or cold the previous drop is, but we haven't even seen the previous drop.

But, again, the processing time could be 100 drops (or frames) long. It's simply not true that lag cannot exceed the time between frames (drops). Sure, any TV or monitor that did that would suck for gaming (or really for video), but it would very, very much be possible, TGM. And I think your problem with the question stems from a misapprehension of that fact.

The "pipeline", the "carwash", whatever, is entirely independent of the frame length. That pipeline is "lag". It could be a partial frame or many frames long. So long as it can hold all the frames as "work in progress", it won't lose any of them.

A factory does this with ease. It might be capable of producing 60 cars per hour (60 frames per hour, i.e. time between "frames" = one minute) but take 30 minutes to produce a single car ("lag" of 30 minutes, because a car needs welding, parts installed, painting, etc.). That's like having lag that is 30x the frame time. This would suck for a first-person shooter, but it's probably really good for an auto plant.

Oh, and before you say "that's not realistic": there are absolutely auto plants that can manage 60 cars per hour (about 1,200 per day, >300K per year assuming multiple shifts, but at least one shutdown day per week). There's a plant like that not far from me, although it's currently only configured for 20,000 cars per year because it was sold by Toyota to Tesla. Now, I'm guessing the "lag" is actually far longer than 30 minutes, because I suspect no car on earth is assembled that quickly, but if anything that just enhances the point, doesn't it?

There is no difference in HDMI cables. If you can see the picture without visible dropouts or sparklies, the cable is working at 100%. No other cable will display a better version of that picture. You're simply wrong if you think there is a better digital cable than one that is already working.
post #6 of 35 Old 01-24-2013, 03:48 PM
Whatstreet
Gamers might need an analogy to help them understand propagation delay for an image. But what they really want to know is whether displays can be designed to drastically reduce that propagation delay.
post #7 of 35 Old 01-24-2013, 04:02 PM - Thread Starter
tgm1024
Quote:
Originally Posted by Chronoptimist View Post

If you simply explain that there's a fixed time delay, I'm not sure why someone would think that there could be a backlog that builds up.


Ok, let's start with your explanation, and I'll show you why it's not enough for some. Let's stretch the numbers to extremes, because extremes always make a great illustration, and hopefully the hidden assumptions will shake out.

Let's keep it simple, and have a mythical TV do no interpolation, take 60Hz in, and display 60Hz out, but have a large 100ms lag because of God-knows-what processing. Don't get hung up on "why"; bring the interpolation concepts in later if you like. I'm just picking a number greater than 16.666ms (1/60th sec).

Given their understanding (from what they're used to):


Frame 1 comes in.
Processing starts.
16.66 ms later another Frame 2 shows up at the input
Frame 1 isn't done yet, it has 83.33 ms more work to do. It can't go out.

Frames start to collect.

Without a visualization of how it might be that lag is not the same as the time between frames, an explanation such as yours (that makes perfect sense to you and I) will fall on deaf ears.
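For what it's worth, the no-backlog behaviour can be shown numerically. Assuming the 100ms of processing is pipelined (the TV works on several frames at once, each at a different stage), every frame simply exits 100ms after it enters, and the output spacing never changes:

```python
# 60Hz in, 60Hz out, with a fixed 100ms of pipelined processing.
FRAME_TIME_MS = 1000.0 / 60   # ~16.67ms between frames
LAG_MS = 100.0

arrivals = [n * FRAME_TIME_MS for n in range(6)]
departures = [t + LAG_MS for t in arrivals]   # every frame shifted by the same lag

# The gaps between outputs equal the gaps between inputs, so nothing piles up.
gaps = [b - a for a, b in zip(departures, departures[1:])]
print(all(abs(g - FRAME_TIME_MS) < 1e-6 for g in gaps))  # True
```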

post #8 of 35 Old 01-24-2013, 04:05 PM - Thread Starter
tgm1024
Quote:
Originally Posted by Whatstreet View Post

Gamers might need an analogy to help them understand propagation delay for an image. But, what they really want to know is can displays be designed to drastically reduce propagation delay.

Yep. But until they understand (visually) how that might work, they'll never ask the right questions, nor understand that a 120Hz monitor isn't naturally lower-lag than a 60Hz one, and so on. I even had to talk some folks down who had concluded that the lag must be within the cable itself, as a result of not understanding this.

post #9 of 35 Old 01-24-2013, 04:15 PM - Thread Starter
tgm1024
Quote:
Originally Posted by rogo View Post

But, again, the processing time could be 100 drops (or frames) long. It's simply not true that lag cannot exceed time between frames (drops). Sure, any TV or monitor that did would suck for gaming (or really for video), but it would very, very much be possible TGM. And I think your problem with the question stems from a misapprehension of that fact.

I've been saying precisely that. I've been talking about their perception, not mine. And their perception is exactly this: if a frame comes in and one comes out 1/120th of a second later, then the lag couldn't possibly be longer than 1/120th of a second.
Quote:
The "pipeline", the "carwash", whatever, is entirely independent of the frame length. That pipeline is "lag".

Like I said. (You understand that, right?)

Quote:
It could be a partial frame or many frames long. So long as it can hold all the frames as "work in progress", it won't lose any of them. A factory does this with ease. It might be capable of producing 60 cars per hour (60 frames per hour, i.e. time between "frames" = one minute) but take 30 minutes to produce a single car ("lag" of 30 minutes, because a car needs welding, parts installed, painting, etc.). That's like having lag that is 30x the frame time. This would suck for a first-person shooter, but it's probably really good for an auto plant. Oh, and before you say "that's not realistic": there are absolutely auto plants that can manage 60 cars per hour (about 1,200 per day, >300K per year assuming multiple shifts, but at least one shutdown day per week). There's a plant like that not far from me, although it's currently only configured for 20,000 cars per year because it was sold by Toyota to Tesla. Now, I'm guessing the "lag" is actually far longer than 30 minutes, because I suspect no car on earth is assembled that quickly, but if anything that just enhances the point, doesn't it?

It almost sounds like you're agreeing with me, or disagreeing without realizing that you're agreeing, or telling me how it works when I already know how it works, or something, but honestly I'm not sure. But yes to your points. And yes, that's what I'm trying to give a visualization for.

post #10 of 35 Old 01-24-2013, 04:26 PM - Thread Starter
tgm1024
Quote:
Originally Posted by rogo View Post

But, again, the processing time could be 100 drops (or frames) long. It's simply not true that lag cannot exceed time between frames (drops). Sure, any TV or monitor that did would suck for gaming (or really for video), but it would very, very much be possible TGM.

On second thought, no, you didn't realize I understood this from the start, did you? Look at my post in a new light: I'm trying to give a visualization as to how lag can exceed the time between frames. I'm not one of the gamers doubting or confused by that fact.

post #11 of 35 Old 01-24-2013, 04:39 PM
Whatstreet
Quote:
Originally Posted by tgm1024 View Post

Yep. But until they understand (visually) how that might work, they'll never ask the right questions, nor completely not understand that the 120Hz monitor isn't naturally lower lag than the 60Hz, or or or or or. I even had to talk some folks down who had concluded that the lag must be within the cable itself as a result of not understanding this.

I realize it’s caused by an apparatus and not propagation time of a cable, but a delay time is still perceived the same.

Would UHD upscaling (needed given the lack of native content) increase lag due to image processing? Are the array processors used for image processing made more powerful to compensate for the increased pixel count?
post #12 of 35 Old 01-24-2013, 04:53 PM - Thread Starter
tgm1024
Quote:
Originally Posted by Whatstreet View Post

I realize it’s caused by an apparatus and not propagation time of a cable, but a delay time is still perceived the same.

Would UHD upscaling (needed given the lack of native content) increase lag due to image processing? Are the array processors used for image processing made more powerful to compensate for the increased pixel count?

Any processing that needs to be done takes time. It's up to the manufacturer to put in the hardware to mitigate the impact that causes.

It makes me chuckle thinking of the scale of things over the years. In the past I designed a few DSP imaging pipeline systems making use of fairly complicated geometry engines. In a couple of cases the software was as @#$%ing hairy as you could imagine: controlling multiple DSPs, each with the architecture of a floating-point processor supported by satellite integer-only parallel processors, each only able to see a 4K window at a time. Getting that design to produce real-time FFTs... now that was a true nightmare. Back then I could only whimsically dream of the CPU power that we take for granted today and that can do it all entirely without DSPs.

post #13 of 35 Old 01-24-2013, 05:08 PM
rogo
TGM, I gave you a visualization.

I didn't intend to imply that you don't understand the problem, but I'm not sure how these people have concocted such a stupid false equivalency that an analogy is required.

Regardless, I gave you a solution.

And this, quite frankly, is moronic:
Quote:
"Given their understanding (from what they're used to:)

Frame 1 comes in.
Processing starts.
16.66 ms later another Frame 2 shows up at the input
Frame 1 isn't done yet, it has 83.33 ms more work to do. It can't go out.

Frames start to collect."
Where are they "used to" this from?

What they should be "used to" is: frame 1 comes in, it goes down the assembly line... later, frame 2 comes in... frame 1 is far enough down the line that frame 2 is at Station A... frame 3 comes in... Frame 1 is at Station C... Frame 2 is at Station B... Frame 3 is at Station A... Frame 4 comes in... Frame 1 exits... Frame 2 goes to Station C... Frame 3 goes to Station B... Frame 4 goes to Station A...

Lag = A + B + C.... 3 frames...
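The station walk-through above can be written out as a tiny Python sketch (station names from the post; the function name is illustrative):

```python
STATIONS = 3  # Stations A, B, C, one frame-time each

def station_of(frame, tick):
    """Where frame number `frame` is at time `tick`, if frame n enters at tick n."""
    age = tick - frame
    if age < 0:
        return "not yet in"
    if age < STATIONS:
        return "ABC"[age]   # one station per frame-time
    return "done"

# At tick 3: frame 0 has just exited, and frames 1-3 fill stations C, B, A.
# Lag = A + B + C = 3 frames, yet one frame still exits per tick.
print([station_of(f, 3) for f in range(4)])  # ['done', 'C', 'B', 'A']
```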

Anyone who can't understand this is too dumb to be wasting your time on. I'm sorry.

post #14 of 35 Old 01-24-2013, 05:46 PM
Chronoptimist
Quote:
Originally Posted by Whatstreet View Post

Would UHD upscaling (needed given the lack of native content) increase lag due to image processing? Are the array processors used for image processing made more powerful to compensate for the increased pixel count?
It shouldn't, if manufacturers actually care to put in a decent game mode. 4K is exactly 9x 720p or 4x 1080p, and nearest-neighbour resampling would give you the appearance of a 1:1 mapped image (but with less pixel structure).
Quote:
Originally Posted by rogo View Post

Anyone who can't understand this is too dumb to be wasting your time on. I'm sorry.
I agree. The concept of having a buffer to collect frames for processing (i.e. queuing) shouldn't be that hard to get your head around.

If it's a 60Hz display, and it takes 150ms (9 frames) to process the image, then the display will have enough memory to store 10 frames. (9 plus the one it's currently working on)
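That buffer sizing is easy to sanity-check (a Python back-of-the-envelope; the constant names are mine):

```python
import math

REFRESH_HZ = 60
PROCESS_MS = 150

# Frames in flight at any instant: total processing time over frame time,
# rounded up (the multiplication keeps the division exact, avoiding float error).
in_flight = math.ceil(PROCESS_MS * REFRESH_HZ / 1000)
buffer_frames = in_flight + 1   # plus the one currently being worked on

print(in_flight, buffer_frames)  # 9 10
```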
post #15 of 35 Old 01-25-2013, 05:47 AM - Thread Starter
tgm1024
Quote:
Originally Posted by Chronoptimist View Post

The concept of having a buffer to collect frames for processing (i.e. queuing) shouldn't be that hard to get your head around.

If it's a 60Hz display, and it takes 150ms (9 frames) to process the image, then the display will have enough memory to store 10 frames. (9 plus the one it's currently working on)

A buffer is an easy concept, and very simple to implement. But a buffer doesn't help much here. "plus the one it's currently working on"?

If the processing time for a frame is greater than the time between frames, then there is no way for what you suggest to work unless the frames are worked on incrementally, in stages. And if they are, there is no need for 9 in the queue in the first place.

We are two seasoned professionals, albeit probably from different disciplines in technology, and we're still talking past each other. This discussion by itself is now what's of interest to me. Let's see if we can iron this small part out, because I am truly interested in this disconnect though you likely are not.

From what you said alone, (remember, in this device, there's one "currently being worked on", as you say, and 9 queued up)

Frame 1 shows up. Processing starts on it. During that processing, 9 frames queue up behind it. Frame 1 is output.
Frame 2 is now worked on. What then? Do 9 additional frames queue up? This queue keeps growing.

post #16 of 35 Old 01-25-2013, 06:06 AM - Thread Starter
tgm1024
Quote:
Originally Posted by rogo View Post

TGM, I gave you a visualization.

I didn't intend to imply you don't understand the problem,

Ok. I thought that at first, but when I saw you try to explain this to me:
Quote:
Originally Posted by rogo 
It's simply not true that lag cannot exceed time between frames (drops). Sure, any TV or monitor that did would suck for gaming (or really for video), but it would very, very much be possible TGM.
...it really seems like you were trying to tell me something you thought I didn't know. Thanks for re-clarifying.


Quote:
And this, quite frankly is moronic:
Let's keep this calm.
Quote:
Where are they "used to" this from?

What they should be "used to" is: frame 1 comes in, it goes down the assembly line... later, frame 2 comes in... frame 1 is far enough down the line that frame 2 is at Station A... frame 3 comes in... Frame 1 is at Station C... Frame 2 is at Station B... Frame 3 is at Station A... Frame 4 comes in... Frame 1 exits... Frame 2 goes to Station C... Frame 3 goes to Station B... Frame 4 goes to Station A...

Lag = A + B + C.... 3 frames...

Anyone who can't understand this is too dumb to be wasting your time on. I'm sorry.
Stay calm. You've diagrammed the car wash analogy again, but no, they're not dumb.

Among the biggest problems in engineering, designing products, and especially in teaching is how to set aside your own intuitions. It comes up routinely in software, particularly in UI design; in the past I raised it endlessly as an issue with my team. It's a strategy of removing your own knowledge base. In this particular case, I say "what they're used to" because prior to high-lag devices, their world was deceptively simple: they increased the frame rate of their computer monitor, and the frames flew out. To them it was all about the time between frames. A high-speed first-person shooter, for example, benefited greatly from increasing the frame rate (at the game level), and there was no sense of those frames being processed in incremental stages at all.

post #17 of 35 Old 01-25-2013, 06:24 AM
jdc_connor
This is really weird.
post #18 of 35 Old 01-25-2013, 06:26 AM - Thread Starter
tgm1024
Quote:
Originally Posted by Chronoptimist View Post

It shouldn't, if manufacturers actually care to put in a decent game mode. 4K is exactly 9x 720p or 4x 1080p and nearest neighbour resampling would give you the appearance of a 1:1 mapped image. (but with less pixel structure)
That's all an upscale is attempting to do? They're claiming nearest-neighbor resampling (or super-sampling) of some kind? I would have thought they'd attempt some form of edge and shape identification beyond the surrounding pixels... to make sure the shape comes out right. Are they using convolutions to find the edges, and then using those to fill in the gaps?

post #19 of 35 Old 01-25-2013, 06:33 AM - Thread Starter
tgm1024
Quote:
Originally Posted by jdc_connor View Post

This is really weird.

LOL. I have to admit, I just had the exact same thought myself. I ended up in a discussion where, trying to make a point about how something doesn't work, I'm fighting an explanation of how it doesn't work in order to establish a clearer picture of how it doesn't work. I'm deeply regretting even bringing this up.

post #20 of 35 Old 01-25-2013, 09:40 AM
Chronoptimist
Quote:
Originally Posted by tgm1024 View Post

That's all an upscale is attempting to do? They're claiming a nearest neighbor resampling (or super-sampling) of some kind? I would have thought they'd be attempting some form of edge and shape identification beyond the surrounding pixels....to make sure the shape was right. Are they using convolutions to find the edges, and then using those to fill in the gaps?
If you are designing a game mode, you want the fastest processing possible to reduce lag - nearest neighbour is as fast as you can get.
Because 4K is exactly 9x 720p, and exactly 4x 1080p, you can use it without introducing any artifacts into the image - it will look the same as it would on a 1280x720 or 1920x1080 native display.

You can't do that with today's displays because they are either 1366x768 or 1920x1080 native, so 720p doesn't scale nicely. 1080p can be 1:1 mapped on a 1920x1080 display. You just increase the size of the pixels so that a 720p pixel is now a 3x3 square on a 4K display, and a 1080p pixel is now a 2x2 square.

With games, you are actually much better off using nearest neighbour resampling if you can. Using anything more "advanced" than that blurs the image. It's different when you are dealing with the perfectly anti-aliased, much softer images that you get with filmed content - but even then I would prefer the option of being able to use nearest neighbour resampling.
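The pixel replication described above (a 720p pixel becoming a 3x3 block, a 1080p pixel a 2x2 block) can be sketched in a few lines of Python; `upscale_nn` is a hypothetical helper, not anything a real TV exposes:

```python
def upscale_nn(image, factor):
    """Nearest-neighbour integer upscale: each pixel becomes a factor x factor block."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]  # repeat each pixel horizontally
        out.extend([wide[:] for _ in range(factor)])      # repeat the widened row vertically
    return out

# A 2x2 "image" scaled 2x: every pixel value is preserved exactly,
# which is why no new artifacts (or blur) are introduced.
print(upscale_nn([[1, 2], [3, 4]], 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```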
post #21 of 35 Old 01-25-2013, 09:52 AM
borf
Quote:
Originally Posted by tgm1024 View Post

Frame 1 comes in.
Processing starts.
16.66 ms later another Frame 2 shows up at the input
Frame 1 isn't done yet, it has 83.33 ms more work to do. It can't go out.

To dispel the backlog idea here, you could emphasize that frame 2 is still 16.66ms behind frame 1 (simultaneous processing - not buffering) so everything just comes out 83.33ms later. This never explained to me how some 60Hz (16ms) LCDs used to have > 80ms response times (a pixel can't have simultaneous values). The color reproduction must have been terrible.
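The simultaneous-processing point can be made concrete with a toy model (plain Python; the six-stage count is a made-up number chosen so the total matches the 100ms example above): every stage works on a different frame at once, so lag is the whole pipeline depth while output spacing stays one frame period.

```python
FRAME_PERIOD_MS = 1000 / 60   # ~16.67 ms between frames at 60Hz
STAGES = 6                    # hypothetical number of processing stages

def output_time_ms(frame_index: int) -> float:
    """Time at which a given frame leaves the pipeline. It arrives at
    frame_index * FRAME_PERIOD_MS and spends one frame period in each
    stage; all stages process different frames simultaneously."""
    arrival = frame_index * FRAME_PERIOD_MS
    return arrival + STAGES * FRAME_PERIOD_MS

lag = output_time_ms(0)                          # 100 ms of input lag
spacing = output_time_ms(1) - output_time_ms(0)  # frames still 16.67 ms apart
print(round(lag, 2), round(spacing, 2))  # 100.0 16.67
```

No frame is buffered up behind another: each one is 16.67ms behind its predecessor at every stage, and the whole stream is simply shifted later by the pipeline depth.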
borf is offline  
post #22 of 35 Old 01-25-2013, 10:15 AM - Thread Starter
AVS Special Member
 
tgm1024's Avatar
 
Join Date: Dec 2010
Posts: 6,130
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 264 Post(s)
Liked: 667
Quote:
Originally Posted by Chronoptimist View Post

If you are designing a game mode, you want the fastest processing possible to reduce lag - nearest neighbour is as fast as you can get.
Because 4K is exactly 9x 720p, and exactly 4x 1080p, you can use it without introducing any artifacts into the image - it will look the same as it would on a 1280x720 or 1920x1080 native display.

You can't do that with today's displays because they are either 1366x768 or 1920x1080 native, so 720p doesn't scale nicely. 1080p can be 1:1 mapped on a 1920x1080 display. You just increase the size of the pixels so that a 720p pixel is now a 3x3 square on a 4K display, and a 1080p pixel is now a 2x2 square.

With games, you are actually much better off using nearest neighbour resampling if you can. Using anything more "advanced" than that blurs the image. It's different when you are dealing with the perfectly anti-aliased, much softer images that you get with filmed content - but even then I would prefer the option of being able to use nearest neighbour resampling.

Well, it's certainly guaranteed to blur the image somewhat if you use standard bilinear interpolation or similar. I was referring to a far more advanced style of "upscaling" which is not merely yet another "scaling up". We experimented a lot with some of this stuff in the pre-press world in the 80's. People forced to use older bitmapped fonts found it useful to have them scale cleanly to larger sizes: normally a very disgusting proposition. The long and short of it is that something going 1:2 like this:

Original:
OM
MM


To:
OOMM
OOMM
MMMM
MMMM


(as you're talking about)

Is actually better as:
OOOM
OOMM
OMMM
MMMM

Depending on what was around the original 2x2. In a game mode taking in a mere HD signal, I'd certainly expect only pixel replication. For other modes, I'd be surprised if the scaling weren't more edge-based. Otherwise, what's the point? (EDIT: no, I could also see myself getting dismayed with the results of an HD movie and wanting to flip a switch back to pixel replication.)
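The edge-aware doubling sketched above is similar in spirit to the Scale2x (AdvMAME2x) algorithm from the emulation world, which picks each output sub-pixel from matching neighbours rather than blindly replicating. A minimal sketch (plain Python, borders clamped; this is Scale2x specifically, not a claim about what any TV's scaler implements, and its output on the tiny example differs slightly from the hand-drawn diagram because of border handling):

```python
def scale2x(img):
    """Scale2x: double an image while smoothing diagonal edges.
    `img` is a list of rows of comparable pixel values."""
    h, w = len(img), len(img[0])
    # Clamp out-of-range coordinates to the nearest edge pixel.
    get = lambda y, x: img[max(0, min(h - 1, y))][max(0, min(w - 1, x))]
    out = [[None] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            b, d = get(y - 1, x), get(y, x - 1)       # above, left
            e, f, hh = get(y, x), get(y, x + 1), get(y + 1, x)  # self, right, below
            e0 = e1 = e2 = e3 = e
            if b != hh and d != f:  # only smooth across a clear diagonal edge
                if d == b: e0 = d
                if b == f: e1 = f
                if d == hh: e2 = d
                if hh == f: e3 = f
            out[2 * y][2 * x], out[2 * y][2 * x + 1] = e0, e1
            out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = e2, e3
    return out
```

On a flat region every sub-pixel equals the source pixel, so solid areas come out as pure pixel replication; only diagonal boundaries get the staircase filled in.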

tgm1024 is online now  
post #23 of 35 Old 01-25-2013, 10:59 AM
AVS Special Member
Chronoptimist
Quote:
Originally Posted by tgm1024 View Post

Depending on what was around the original 2x2. In a game mode I'd be certainly expecting only pixel replication. For other options, I'd be surprised if it wasn't more edge based. Otherwise, what's the point?
Some people think upscaling is detrimental to image quality, and much of the 4K image scaling we've seen so far also includes image sharpening in their algorithms. Nearest neighbour allows the image to look exactly the same as it did on a 1080p native display, but without the pixel grid over the image, due to the increased pixel density. E.g. 1080p native display, 4K native display. (note: examples are actually photographs of iPads with & without a retina display, but it would be the same)

If you are using more advanced interpolation, hard-edged parts of the image are softened and can be more difficult to read.
For video content, I would definitely be wanting to use good edge-adaptive interpolation, but not for games, where the appearance of a 1:1 mapped image is far better.

And for the sake of comparison, here's that same image scaled 3x with:
I would say that example was a good candidate for an edge-adaptive algorithm like Jinc, but Jinc scaling is probably better than what most displays use. The image definitely appears softer when using it as well.
Chronoptimist is offline  
post #24 of 35 Old 01-25-2013, 11:21 AM - Thread Starter
AVS Special Member
tgm1024
Quote:
Originally Posted by Chronoptimist View Post

Some people think upscaling is detrimental to image quality, and much of the 4K image scaling we've seen so far also includes image sharpening in their algorithms. Nearest neighbour allows the image to look exactly the same as it did on a 1080p native display, but without the pixel grid over the image, due to the increased pixel density. E.g. 1080p native display, 4K native display. (note: examples are actually photographs of iPads with & without a retina display, but it would be the same)

If you are using more advanced interpolation, hard-edged parts of the image are softened and can be more difficult to read.
For video content, I would definitely be wanting to use good edge-adaptive interpolation, but not for games, where the appearance of a 1:1 mapped image is far better.

And for the sake of comparison, here's that same image scaled 3x with:
I would say that example was a good candidate for an edge-adaptive algorithm like Jinc, but Jinc scaling is probably better than most displays are using. The image definitely appears softer when using it as well.

Basically for most static images, I prefer Lanczos or Box. You've got me wondering about Jinc though. Standard Lanczos is based on sinc functions.
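For reference, the Lanczos kernel is just the normalized sinc windowed by a wider sinc, zero outside its support of ±a taps; Jinc is its circularly symmetric 2D analogue built on the first-order Bessel function instead of sine. A quick stdlib-only sketch of the Lanczos kernel (a=3 is the common choice):

```python
import math

def sinc(x: float) -> float:
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x: float, a: int = 3) -> float:
    """Lanczos kernel: sinc windowed by a wider sinc, support (-a, a).
    It is zero at every nonzero integer, which is why it preserves the
    original samples exactly when the scaling ratio lines up."""
    return sinc(x) * sinc(x / a) if abs(x) < a else 0.0

print(lanczos(0.0))  # 1.0
print(lanczos(3.0))  # 0.0 (outside the support)
```

The negative lobes of that kernel are what produce Lanczos's characteristic sharpness, and also its ringing.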

tgm1024 is online now  
post #25 of 35 Old 01-25-2013, 11:44 AM
AVS Addicted Member
 
rogo's Avatar
 
Join Date: Dec 1999
Location: Sequoia, CA
Posts: 30,112
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 89 Post(s)
Liked: 464
Quote:
Originally Posted by tgm1024 View Post


Stay calm. You've diagrammed the car wash analogy again, but no, they're not dumb.

Among the biggest problems in engineering, designing products, and especially teaching is how to set aside your own intuitions. It comes up routinely in software, particularly in UI design; in the past I raised it endlessly with my team as a strategy for removing your own knowledge base. In this particular case, I say "what they're used to" because prior to high-lag devices, their world was deceptively simple: they increased the frame rate to their computer monitor, and the frames flew out. To them, it was all about the time between frames. A high-speed first-person shooter, for example, benefited greatly from increasing the frame rate (at the game level), and there was no sense of incremental stages for those frames at all.

They sound dumb.

This "deceptively simple" world they come from is irrelevant to the question, "How can lag be greater than the time between frames?"

I don't have any interest in rehashing the past of low-frame-rate FPS games (although it might be instructive in how they developed such a completely irrelevant worldview).

The "car wash analogy" explains perfectly how lag can be 10000000000000000 frames long. I believe I could explain this to a 5-year-old. I am confident I could explain it to a Best Buy employee.

So I have no idea what they are "intuiting" here because it really doesn't make any sense at all. What possible logic would there be to intuit, "You can't have lag higher than the time between frames without dropping frames"? Why wouldn't you instead intuit, "We'll hold onto the frames somewhere"?

I mean there apparently is software development going on here and this concept is hard to grasp?

To me, that's impossible to grasp.

There is no difference in HDMI cables. If you can see the picture without visible dropouts or sparklies, the cable is working at 100%. No other cable will display a better version of that picture. You're simply wrong if you think there is a better digital cable than one that is already working.
rogo is offline  
post #26 of 35 Old 01-25-2013, 11:59 AM - Thread Starter
AVS Special Member
tgm1024
Quote:
Originally Posted by rogo View Post

The "car wash analogy" explains perfectly how lag can be 10000000000000000 frames long. I believe I could explain this to a 5-year-old. I am confident I could explain it to a Best Buy employee.

I thought your phrasing was "The analogy sucks".

But ok. Dead horse and all that....

tgm1024 is online now  
post #27 of 35 Old 01-25-2013, 12:20 PM
AVS Special Member
Chronoptimist
Quote:
Originally Posted by tgm1024 View Post

Basically for most static images, I prefer Lanczos or Box. You've got me wondering about Jinc though. Standard Lanczos is based on sinc functions.
Lanczos has too much ringing and aliasing.
Jinc's benefits are more obvious in other images though, I'm only using that one because there are also photographs that illustrate how Nearest Neighbour looks on actual displays.
Chronoptimist is offline  
post #28 of 35 Old 01-25-2013, 01:39 PM - Thread Starter
AVS Special Member
tgm1024
Quote:
Originally Posted by Chronoptimist View Post

Lanczos has too much ringing and aliasing.
Jinc's benefits are more obvious in other images though, I'm only using that one because there are also photographs that illustrate how Nearest Neighbour looks on actual displays.

What are you using for a front end? The ImageMagick suite?

tgm1024 is online now  
post #29 of 35 Old 01-25-2013, 02:07 PM
Member
 
Whatstreet's Avatar
 
Join Date: Sep 2010
Posts: 51
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 13
A bucket brigade is a simple analogy or visualization of delay caused by a process.

There are fifty men standing in a line. Someone hands the first man a bucket full of water, and he hands it to the next man in the line. Someone hands the first man another bucket full of water, while the second man is passing his bucket to the third man in line. The first man passes his bucket to the second man, and the second man passes his bucket to the third man and the third man to the fourth man.
The bucket passing process continues, but it takes time to pass each bucket to the next.

The time it takes for the first bucket to reach the end of the line is the sum of the individual passing times of all fifty men. After that first bucket arrives, each subsequent bucket reaches the end of the line after only the time of a single pass.

Throughput can be increased by adding lines of fifty men. Given a second line of fifty men, twice as many buckets will be processed. But the time it takes for an individual bucket to reach the end of a line is still the sum of the individual pass times.

I don’t know the context of the presentation, but a video of a Rube Goldberg-like contraption would be a fun way to show the apparatus being loaded, the delay before the first arrival at the output, and the shorter time between subsequent arrivals once the process path is loaded.
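The bucket-brigade arithmetic is worth making explicit. With hypothetical numbers (fifty men, two seconds per pass), the first delivery takes the full sum of passes, every later bucket arrives one pass apart, and a second line doubles throughput without shortening the delay:

```python
def first_arrival_s(men: int, pass_s: float) -> float:
    """Delay before the first bucket exits: one pass per man in the line."""
    return men * pass_s

def buckets_per_minute(lines: int, pass_s: float) -> float:
    """Steady-state throughput: each full line delivers one bucket per pass."""
    return lines * 60 / pass_s

print(first_arrival_s(50, 2.0))    # 100.0 seconds of "lag" for the first bucket
print(buckets_per_minute(1, 2.0))  # 30.0 buckets/minute thereafter
print(buckets_per_minute(2, 2.0))  # 60.0 -- double the throughput, same lag
```

This is the same latency-versus-throughput split as the car wash: adding parallel lines (or pipeline stages) changes how many buckets come out per minute, never how long any one bucket spends in transit.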
Whatstreet is offline  
post #30 of 35 Old 01-25-2013, 02:12 PM
AVS Special Member
Chronoptimist
Quote:
Originally Posted by tgm1024 View Post

What are you using for a front end? The ImageMagick suite?
madVR
Chronoptimist is offline  