AVS › AVS Forum › Display Devices › Flat Panels General and OLED Technology › Will Future Flat Panel Technology Reduce Judder Or Eliminate It Completely?

# Will Future Flat Panel Technology Reduce Judder Or Eliminate It Completely? - Page 2

It will still be linear. It can scan ahead and plot the points in advance to reduce the error rate, but the calculation is still linear. And if the pixel displacement is too high, the motion will appear even more artificial.


Quote:
Originally Posted by Nielo TM

It will still be linear. It can scan ahead and plot the points in advance to reduce the error rate, but the calculation is still linear. And if the pixel displacement is too high, the motion will appear even more artificial.

Are you saying it would have to interpolate a straight path between all the points? Why couldn't it plot a curved path if it thinks it looks like it's moving in a curve? Surely in real-world video imagery that would be the most likely path an object would be taking between a set of >2 points? Also, if it looks like it's accelerating, it wouldn't have to make the interpolated positions equal distances apart.
MCFI is like differentiation: it uses linear interpolation to approximate non-linear motion. As with audio, with enough sampling it can approximate the real thing.

Hence you can MCFI a 120 fps source to death and I doubt the brain can see the difference, due to visual limitations. But when you MCFI 24 fps by adding 96 fake frames, the brain is not so easily fooled.
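The sampling point can be put in numbers. Here's a small Python sketch (the trajectory and figures are hypothetical, not from any actual TV's MCFI) comparing the worst-case error of piecewise-linear interpolation along a curved path when the source is sampled at 24 fps versus 120 fps:

```python
import math

def max_linear_interp_error(true_path, fps, duration=1.0, steps=200):
    """Worst-case distance (in pixels) between a curved trajectory and its
    piecewise-linear reconstruction from frames sampled at `fps`."""
    worst = 0.0
    for k in range(int(duration * fps)):
        t0, t1 = k / fps, (k + 1) / fps
        x0, y0 = true_path(t0)
        x1, y1 = true_path(t1)
        for i in range(steps + 1):
            a = i / steps                      # position along the chord
            xl, yl = x0 + a * (x1 - x0), y0 + a * (y1 - y0)
            xt, yt = true_path(t0 + a * (t1 - t0))
            worst = max(worst, math.hypot(xl - xt, yl - yt))
    return worst

# Hypothetical: an object sweeping an elliptical arc across a 1080p frame.
arc = lambda t: (960 + 800 * math.cos(math.pi * t),
                 540 + 400 * math.sin(math.pi * t))

print(max_linear_interp_error(arc, 24))    # worst chord error at 24 fps
print(max_linear_interp_error(arc, 120))   # far smaller at 120 fps
```

For a smooth path, the chord error shrinks roughly with the square of the frame interval, so five times the source frame rate leaves roughly 25 times less error for the interpolator to cover. That's one way to see why a 120 fps source is far more forgiving than 24 fps.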
Quote:
Originally Posted by specuvestor

MCFI is like differentiation. It is using linear interpolation

You mean straight-line motion assumed between points in each source frame instead of curved path motion? Are you saying it wouldn't be possible to check for things like acceleration/deceleration and interpolate based on that?
A curved path is also built from straight-line interpolation; that's calculus, and it's what Nielo is saying. It is much more error-prone if you try to guess the curvature from only 2 points.

And sure, they can adjust for acceleration, but you are trying to adjust a 2 MP picture within 8 ms on the fly, and power consumption is an important consideration in a TV, which limits processor capability. That's why studio MCFI is much more effective.
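As a minimal Python sketch of what "adjusting for acceleration" could mean (the positions are hypothetical): fit a constant-acceleration model to three consecutive object positions using finite differences, so the interpolated positions are no longer equally spaced:

```python
def accel_interp(x0, x1, x2, steps):
    """Interpolate positions between samples x1 and x2 under a
    constant-acceleration model fitted to three consecutive samples."""
    v = (x2 - x0) / 2        # central-difference velocity at x1
    a = x2 - 2 * x1 + x0     # second difference = per-frame acceleration
    return [x1 + v * t + 0.5 * a * t * t
            for t in (i / steps for i in range(1, steps))]

# A decelerating object: 0 -> 60 -> 100 px across three source frames.
print(accel_interp(0.0, 60.0, 100.0, 4))  # -> [71.875, 82.5, 91.875]
```

Note the interpolated steps shrink (11.875, 10.625, 9.375, 8.125 px) as the object slows down, instead of four equal 10 px steps. The arithmetic is trivial; the expensive part is doing reliable motion estimation for the whole frame within the frame period.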
Exactly, and modern TVs are nowhere near powerful enough to process 2.1M pixels on the fly, as spec stated, and won't be for a while unless they decide to embed a GPU.
Quote:
Originally Posted by Nielo TM

Exactly, and modern TVs are nowhere near powerful enough to process 2.1M pixels on the fly, as spec stated, and won't be for a while unless they decide to embed a GPU.

At the moment, don't they just check for motion of 'macroblocks' (not the MPEG signal, but the TV picture split into blocks)? If so, surely it wouldn't need a lot more processor power to check how far each of those blocks of pixels has moved between source frames (and therefore how much it has accelerated/decelerated)? It wouldn't be as good as pixel-based motion (and even that would be inaccurate), but at least it should slightly improve the accuracy of interpolation? Also, don't some Toshiba TVs have a Cell processor (like in the PS3), which is supposed to be quite powerful?
ATM that's exactly what consumer-grade MCFI does, and it's why you see blocking-type artifacts around the edges of moving objects.
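Block-based motion estimation of this kind can be sketched in a few lines of Python (a deliberately naive brute-force search over a toy frame; real chips use fast hierarchical searches in dedicated hardware):

```python
import numpy as np

def block_motion(prev, curr, block=8, search=4):
    """Brute-force block matching: for each block in `curr`, find the
    offset into `prev` (within +/- `search` pixels) with the lowest sum
    of absolute differences (SAD). Returns one (dy, dx) per block."""
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by+block, bx:bx+block].astype(int)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y+block, x:x+block].astype(int)
                        sad = np.abs(target - cand).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors

# Toy example: a bright square shifts 3 px right between two 16x16 frames.
prev = np.zeros((16, 16), dtype=np.uint8)
prev[4:8, 6:10] = 255
curr = np.zeros((16, 16), dtype=np.uint8)
curr[4:8, 9:13] = 255
print(block_motion(prev, curr))
```

The block containing the square reports a vector of (0, -3): its contents sat 3 pixels to the left in the previous frame. The blocking artifacts come from assigning one vector to a whole block, so pixels near object edges get dragged along with the wrong vector.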
We need technology that analyzes and processes at the pixel level. The analysis alone needs a powerful LSI: it has to keep track of each pixel by analyzing its frequency.

As for the Tosh, it's just a one-off. Everyone is trying to lower power consumption; the last thing they want to do is add a 50 W LSI.

Btw the hardware is just one part of the story. We need good algorithms as well.
Hopefully this explains why 24p converted to 60p appears synthetic.

Are you saying the image on the left (showing more of a curve) is more like what the motion should look like than the very angular motion in the image on the right? If so, they could just use automatic "bezier" spatial interpolation instead of linear spatial interpolation, or would that need a lot more processor power in the TV?

eg:

(red dotted curve shows interpolated object positions based on those 3 original points for 2 seconds of motion at 60 fps using auto-bezier spatial interpolation - and linear temporal interpolation).
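If "auto-bezier" spatial interpolation means fitting a smooth curve through the sampled object positions, a minimal Python sketch could look like this (a quadratic Bézier whose control point is chosen so the curve passes through the middle sample; all coordinates are hypothetical):

```python
def bezier_through_3_points(p0, p1, p2, t):
    """Quadratic Bezier evaluated at t in [0, 1], with the control point
    chosen so the curve passes through p1 exactly at t = 0.5."""
    # Control point c = 2*p1 - (p0 + p2)/2 makes B(0.5) == p1.
    c = tuple(2 * b - (a + d) / 2 for a, b, d in zip(p0, p1, p2))
    return tuple((1 - t) ** 2 * a + 2 * (1 - t) * t * m + t ** 2 * d
                 for a, m, d in zip(p0, c, p2))

# Three object positions sampled from consecutive source frames.
p0, p1, p2 = (0.0, 0.0), (50.0, 40.0), (100.0, 0.0)

# Interpolated positions for the in-between frames of a 24p -> 120p step.
for i in range(1, 5):
    print(bezier_through_3_points(p0, p1, p2, 0.5 + i / 10))
```

The arithmetic itself is cheap, a handful of multiply-adds per interpolated position. The expensive and error-prone part is deciding reliably, per object and per frame, whether the underlying motion really is curved, since forcing a curve fit onto genuinely linear motion adds errors rather than removing them.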
Quote:
Originally Posted by Joe Bloggs

Are you saying the image on the left (showing more of a curve) is more like what the motion should look like than the very angular motion in the image on the right?

Actually, both are valid. As you know, 24p is sufficient for capturing slow-moving objects but not fast-moving ones.

On the left, there's more information to plot a curve (even though the curve is made up of linear segments). You can shorten the segments by interpolating from already-interpolated frames, but that increases the error rate (similar to converting lossy to lossy).

On the right, because the image moved so fast, there isn't enough information to plot a curve, so we end up with two linear segments.

If the right-hand example were captured at 60 fps, we could use the additional data to produce a curve, which we could then convert to 120p.

Quote:
Originally Posted by Joe Bloggs

If so, they could just use automatic "bezier" spatial interpolation instead of linear spatial interpolation, or would that need a lot more processor power in the TV?

eg:

(red dotted curve shows interpolated object positions based on those 3 original points for 2 seconds of motion at 60 fps using auto-bezier spatial interpolation - and linear temporal interpolation).

That could yield more errors, because it could interpolate linear motion as non-linear, and that can affect the whole frame negatively.

Interpolation is basically guesswork. When it comes to automated interpolation, it is better to keep it simple than to try to compensate for absent data.

However, in the studio it's a different matter, because we can manually control how the motion is interpolated, and we can render multiple times to obtain error-free frames (or frames with the least amount of error).
Quote:
Originally Posted by Nielo TM

That could yield more errors, because it could interpolate linear motion as non-linear, and that can affect the whole frame negatively.

I thought that in real (photographed) video, that would be more likely to be how things actually move: along a curve between a set of points, rather than as a series of linear motions between those points. So I'd say that if it produces more accurate, more natural-looking motion for real-world video most of the time (and they could test which was most accurate using real-world footage), surely they should use it instead of linear (always straight-line) interpolation, if they can do so without needing too much processing power, or at least offer it as an option.
Video is basically stills frozen in time. In order to create convincing motion, you need a sufficient number of frames.

You simply can't plot a curve if the original frames lack sufficient data. It's just not possible. And as I've said before, if you force the MCFI to treat linear motion as non-linear, the interpolated image will be full of errors.
Forgot to add: Mirillis has the best MCFI ATM, but it's on PC. You can download the player (Splash Pro) from the link below. But be warned, you need a powerful CPU (quad-core or above for the best performance).

I’m still waiting for them to offload it to the GPU.

http://mirillis.com/en/products/splashpro.html

It still has a long way to go, but its quality is much better than any MCFI found in consumer goods.