Status
Not open for further replies.
1 - 9 of 9 Posts

#### oferlaor

·
Joined
·
8,098 Posts
Discussion Starter · ·
Hi,

This posting is from TAW's forum. It's Phil explaining how deinterlacing works. I have trouble understanding some of this stuff.

First the quote:

Quote:
 Eric: The algorithms in all hardware-based processors use gates (hardware) to accomplish their goals, which are exactly as you say: a bit of math and a bit of human innovation to paint a picture as closely resembling the original source as possible. The most primitive algorithms are non-motion-compensated, such as those found in the Silicon Image, Genesis, Miranda and yes, even the ROCK's ClearMatrix. They use typical bob, weave and combo techniques and follow Moore's law. It's how and when the combo is used that makes one slightly better than the next.

The better algorithms use motion compensation, such as those found in S&W. They use phase correlation, a means of measuring how much motion has occurred between frames by comparing the amplitudes and angles of phase vectors. The more motion between two frames, the larger the phase vector. Looking at the resultant output of two compared frames, the phase "map" would look like a series of hills: some steep, some rolling, some high and some low. These peaks represent how fast objects are moving (steepness), how far each object has moved (height) and how large the moving image is (area under the curve) between the two frames being sampled. The deinterlacing algorithm then takes this information and applies it to its variables (gains), changing them dynamically based on this phase "map". The nice thing about phase correlation is that it doesn't care how many objects are moving, or what the contrast or size of an object is in relation to its background. Other techniques can be fooled, since they depend on the object being contrasted against its background, and the object needs to be bigger than the block being observed, or the object would magically disappear and reappear again going through the sampled block. So a football traveling in the air may look fine against a light blue sky, but when the background changes to dark green grass, the ball's characteristics would change, or the ball may disappear temporarily altogether.
By far, phase correlation was the best method of motion compensation, since it is immune to color and contrast changes. Others use block techniques, breaking the image into many blocks; these are primitive compared to phase correlation, since artifacts will usually occur at the block transitions. Software, versus hardware, is the best method given infinite processing power; unfortunately, massively parallel processing needs to be done, as is the case with the Teranex. Multiple CPUs are paralleled, effectively giving up to 16 GHz of processing speed. This means brute force. Since the image is digital, it goes through dozens and even hundreds of filters to shape the picture to as close to what the original source was supposed to look like as possible. Since it's digital, no loss of detail occurs. This method is expensive and inefficient; many times the sound needs to be delayed to compensate for the video delay caused by refiltering the image dozens and possibly hundreds of times. The output can look spectacular, though, if done correctly. It can also look too digital when done incorrectly. TAW has a Teranex on its DigiLink here running in Orlando as we speak.

The hardware algorithms have big advantages in that they do everything super fast; no code processing is necessary, since the code is in the way the gate array is set up in the chip. The disadvantage of hardware-based deinterlacing is that it's pretty much set in stone: if someone were to devise a better way to do it, the entire chip would need to be replaced. As an inventor, it's required to look at things differently, step out of the norm, get abstract. Sorry, I cannot divulge how TAW approached its Pixel Perfect De-Interlacing. I can say we didn't take the easy way out by stealing someone else's chip or copying someone else's code. That would take the fun, challenge and self-respect out of it. If you can't invent something better, don't bother... That's my motto. Phil
1. I thought ClearMatrix, Sil504 and DCDi were motion adaptive. Are they not?

2. Does 16 GHz sound right to you?

3. How does the Teranex apply here?

4. I don't understand the phase correlation stuff. Would appreciate some help.

I want a serious discussion, no TAW bashing, please!

#### Alexander

Quote:
 Originally posted by oferlaor 1. I thought ClearMatrix, Sil504 and DCDi were motion adaptive. Are they not?
Motion adaptive does not mean motion compensated.

It would be useful to look at even more primitive deinterlacing algorithms first: bob and weave.

Interlaced video is made up of two fields per frame, slightly offset in space and time.

Bob just drops one of the fields, and uses the other to create an interpolated image. The result is artifact-free, even with high motion, but half of the resolution is lost.

Weave just interlaces the two fields together into one image. The result keeps all of the resolution, and yields a "perfect" image when there is no motion, but severely combs when there is substantial motion.

Motion-adaptive deinterlacing uses both of these techniques in a single frame. Areas that have little or no motion use weave, which maintains a high resolution, and areas with significant motion use bob. There are other little tricks you can do to smooth out the image, but that's the basic principle behind what most scalers are doing. The point is that some data is being thrown away to reduce artifacting.

To do motion-adaptive deinterlacing, the deinterlacer only needs to know where in the frame the motion is occurring.
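To make the bob/weave/adaptive distinction concrete, here is a toy sketch (my own illustration with made-up names and a made-up threshold, not any shipping scaler's code). Each "line" is a single luma value, and the motion detector simply compares a field against the same field one frame earlier; real scalers work per pixel and interpolate rather than line-double.

```python
# Toy deinterlacing sketch: one number per scanline, for illustration only.

def weave(top_field, bottom_field):
    """Interleave two fields line by line: full resolution, combs on motion."""
    frame = []
    for t, b in zip(top_field, bottom_field):
        frame.append(t)
        frame.append(b)
    return frame

def bob(field):
    """Line-double one field: artifact-free on motion, half the resolution."""
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line)  # real scalers interpolate rather than repeat
    return frame

def motion_adaptive(top_field, bottom_field, prev_top_field, threshold=10):
    """Per-line choice: weave where the picture is static, bob where the
    field differs from the same field one frame earlier (crude motion map)."""
    woven = weave(top_field, bottom_field)
    bobbed = bob(top_field)
    out = []
    for i in range(len(woven)):
        line = i // 2  # which top-field line this output line came from
        moving = abs(top_field[line] - prev_top_field[line]) > threshold
        out.append(bobbed[i] if moving else woven[i])
    return out
```

The point of the sketch is the last function: it never moves a pixel anywhere, it only picks (per line) between the woven and bobbed versions, which is exactly why motion-adaptive processing throws resolution away wherever it detects motion.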

Motion compensation is much more complex, because it actually determines where, in what direction, and how fast the motion is.

Knowing this, as well as the difference in time and space between the two fields, allows you to physically shift pixels to where they would be at a single point in time. This keeps the resolution and doesn't cause significant artifacting.

The problem is that motion determination is a complex problem, and requires serious processing power to do well. Hence the cost and complexity of high-end MPEG encoders, the Teranex, and presumably the S&W box.
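For a feel of why motion determination is expensive, here is a minimal exhaustive block-matching sketch (my own illustration, the block technique Phil contrasts with phase correlation, not any product's algorithm): it finds where a block from the previous frame reappears in the current frame by minimising the sum of absolute differences over every candidate offset.

```python
# Toy exhaustive block-matching motion estimation, illustration only.

def sad(frame, top, left, block):
    """Sum of absolute differences between `block` and the same-sized
    region of `frame` anchored at (top, left)."""
    total = 0
    for r, row in enumerate(block):
        for c, value in enumerate(row):
            total += abs(frame[top + r][left + c] - value)
    return total

def find_motion(prev_frame, cur_frame, top, left, size, search=2):
    """Exhaustively search a +/-`search` pixel window in cur_frame for the
    best match to the size x size block at (top, left) in prev_frame.
    Returns the (dy, dx) motion vector with the lowest SAD."""
    block = [row[left:left + size] for row in prev_frame[top:top + size]]
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if 0 <= t and 0 <= l and t + size <= len(cur_frame) \
                    and l + size <= len(cur_frame[0]):
                score = sad(cur_frame, t, l, block)
                if best is None or score < best[0]:
                    best = (score, (dy, dx))
    return best[1]
```

Even this toy is O(blocks x search-window x block-area) per frame, which is why real-time motion estimation ends up in parallel hardware; and because it matches raw pixel values, it is exactly the kind of method that gets fooled by the contrast changes Phil describes.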

I have only a passing knowledge of Faroudja's techniques, but my understanding is they do a highly simplified form of motion compensation. Someone correct me if I'm wrong.

The 16GHz is pretty meaningless (and wrong), as you're outside the realm of traditional CPUs here. What you really have is the equivalent of thousands to millions of VERY simple, relatively slow custom processors. Highly parallel, as Teranex will tell you. The point is that for this particular task, it's MUCH faster and MUCH easier to do well with this kind of parallel setup than a traditional monolithic CPU. If you were to try and do exactly what the Teranex is doing with a standard CPU (which would be dumb), I'd say you'd need MUCH more than 16GHz. But that's really not the point, because if you're intent on doing very-high-quality motion compensation, you'd use a highly parallel architecture.

That said, it would probably be pretty easy to do somewhat less intense motion compensation on today's high-end computers. (After all, they can do MPEG compression, just more slowly.) Now that I think about it, in fact, I'm surprised DScaler doesn't have it. Extra bonus points to dual processor machines (in fact, that might be required) and those with good SIMD architectures (cough...G4 with its Altivec...cough).

And a quick comment on one of the lines in the quote ("The disadvantage of hardware based deinterlacing is that its pretty much set in stone"): This is not necessarily so. In the case of the Teranex and I believe the higher-end Faroudja products (and possibly other expensive machines), the "hardware" is actually FPGAs: programmable logic. It's basically hardware that can be physically reconfigured by software. Pretty nifty stuff, available off-the-shelf, and used in lots of applications other than deinterlacing. (Disadvantages: expensive in volume and substantially lower density than traditional chips)

Alex

#### oferlaor

Strange, I wasn't aware of the distinction between "motion adaptive" and "motion compensated".

Both algorithms are clear to me, but I didn't know what each one was "called".

As I understand it, these algorithms (PureProgressive by SI, ClearMatrix by KD and DCDi by FJ) are all motion compensated, to a degree. Is that correct?

##### Registered
Quote:
 Originally posted by oferlaor As I understand it, all of these algorithms (PureProgressive by SI, ClearMatrix by KD and DCDi by FJ) are all motion compensated, to a degree. is that correct?
I think all these are better described as motion adaptive; they differ in how they make up the bobbed pixels when there is motion, and in what they define as motion. I would say all of these methods use some combination of nearby pixels to guess the missing moving ones; none will move pixels from elsewhere on the screen to fill in the gaps, as would happen with a truly motion-compensated method.

John

#### JeffY

It sounds like TAW are going to be bundling DScaler with the Rock.

#### trbarry

I am impressed. I don't know Phil, but from only the comments here I hadn't realized he was that technical. Cool.

I agree with most of Alex's and John's descriptions above. And I think that phase correlation approaches are those that do motion compensation by looking at data in the frequency domain, that is, evaluating the results of a Fourier transform of the data. But I can't do the math, and I know about this approach only through readings in motion compensation for MPEG-4 compression studies. I think it still has some limitations, though.

I think most PC-based scalers have previously used motion-adaptive methods (at best) due to limitations of CPU power. But the next release of DScaler should have my new TomsMoComp plugin. This should be able to do a simple, scalable (variable-effort) motion-compensated deinterlace on CPUs of maybe 800 MHz or higher. I'm still tinkering with it, but it was used for the flag pic I posted over in the Poor Man's SDI thread. (But please ignore the somewhat torch-mode colors from my calibrated-for-text dev/test machine.)

www.trbarry.com/DScaler_VE_Flag.jpg

And why does everyone add smileys whenever they talk about DScaler MCDI these days?

- Tom

##### Registered
Tom

Can't wait to see your new plug-in; it sounds really cool. Also time for a CPU upgrade, I think...

Phil's description of phase correlation sounds about right; it is the way to do it properly, but it will be a while before Moore's law allows us to do that in a PC.

Ofer

Phase correlation involves transforming the picture into the frequency domain (using a Fourier transform) and then doing a search to find the direction objects are moving. For example, say one frame contains:

```
000000
011100
011100
011100
000000
000000
```

and the next one is

```
000000
000000
001110
001110
001110
000000
```

In the frequency domain, these will look like two splodges whose only difference is in phase, i.e. where the wave starts.
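To see this example through, here is a from-scratch sketch in Python (my own illustration, certainly not how S&W implement it): naive DFTs for clarity where real systems would use FFTs. The inverse transform of the normalised cross-power spectrum peaks exactly at the shift between the two grids, which here is one row down and one column right.

```python
# Phase correlation on 6x6 images using naive DFTs, illustration only.
import cmath

N = 6

def dft2(img):
    """Naive 2-D discrete Fourier transform of an N x N image."""
    out = [[0j] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0j
            for y in range(N):
                for x in range(N):
                    s += img[y][x] * cmath.exp(-2j * cmath.pi * (u * y + v * x) / N)
            out[u][v] = s
    return out

def phase_correlate(a, b):
    """Return the (dy, dx) cyclic shift that maps image a onto image b."""
    A, B = dft2(a), dft2(b)
    # Normalised cross-power spectrum: dividing by the magnitude throws away
    # amplitude (hence the immunity to contrast) and keeps only the phase
    # difference between the two images.
    cross = [[0j] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            p = B[u][v] * A[u][v].conjugate()
            m = abs(p)
            cross[u][v] = p / m if m > 1e-9 else 0j
    # Inverse DFT of the phase map; its real part peaks at the translation.
    best, shift = None, (0, 0)
    for dy in range(N):
        for dx in range(N):
            s = 0j
            for u in range(N):
                for v in range(N):
                    s += cross[u][v] * cmath.exp(2j * cmath.pi * (u * dy + v * dx) / N)
            if best is None or s.real > best:
                best, shift = s.real, (dy, dx)
    return shift
```

Note that the normalisation step is where the "doesn't care about contrast" property comes from: only the phase (where the wave starts) survives, so a dim object against grass correlates just as strongly as a bright one against sky.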

Hope this helps

John

#### Michael Grant

·
##### Registered
Joined
·
10,201 Posts
I have heard of this distinction between motion compensated and motion adaptive as well. As already stated, the key difference is that motion-adaptive deinterlacing does no shifting of pixels based on motion information; each pixel is a product of either a simple weave or a simple bob, or some mixture of both. Motion-compensated deinterlacing actually shifts around pieces of an image in order to match the motion that is occurring.

#### dschmelzer

·
##### Registered
Joined
·
2,379 Posts
"The disadvantage of hardware based deinterlacing is that its pretty much set in stone. If someone were to decide a better way to do it, the entire chip would need to be replaced."

Actually, this isn't the case. Or in any event, is no longer the case. The Teranex allows for reprogramming the FPGAs that they use to process the video.

This is a very nifty way to go about things. I think there are some valuable lessons there.

John: Tom's new plug-in is downright "fabu".
