
Reverse Filtering

Hi all--


Does anybody know of a good on-line source that explains pre-filtering for interlaced material (DVD, VHS, etc.)? I've been told that it cuts down on effective resolution by about 25%.


I'm wondering whether this process can be mathematically "reversed" to get to the original resolution.


Thoughts?
Nifty idea. (DScaler DFilter option ;) )



I don't know what technique is used for the filtering, but I have heard that it is possible to reverse it somewhat.


Maybe they just use some sort of vertical low-pass filter. But if we can find out what is typically done, then I'll bet we could add an optional poor man's filter reverser that can run in real time. It might end up being something like vertical edge enhancement.



- Tom
I'm nearly positive that it's just a low-pass filter. The prefilter removes frequencies high enough to produce alternating light-dark regions on adjacent vertical lines, which would 'jitter' severely due to the interlacing.
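
Purely for illustration, a guess at what such a vertical low-pass might look like (a [1, 2, 1]/4 kernel in Python/numpy -- the real prefilters are surely fancier):

Code:
# Illustrative sketch only: a [1, 2, 1]/4 vertical low-pass, one guess at
# the kind of prefilter that might be applied.  'frame' is a 2-D luma array.
import numpy as np

def vertical_lowpass(frame):
    f = frame.astype(np.float32)
    up = np.roll(f, 1, axis=0)     # line above (wraps at the border; a real
    down = np.roll(f, -1, axis=0)  # filter would clamp the edge lines)
    out = (up + 2.0 * f + down) / 4.0
    return np.clip(out, 0, 255).astype(np.uint8)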


Unfortunately, there's no going back. Once you've lost the information, you've lost it. Otherwise, we would have detail-recreating scalers, not to mention full-resolution internet-streamed video on a 56k modem :)


Andy K.
Darn. That's upsetting.


I'm sure I've heard at least a couple of sources I believed credible say that you could reclaim some of it. The idea being that whatever they were filtering out, they'd never get all of it, so you could amplify the residue or something like that.


Of course the artifacts you create might look suspiciously like edge enhancement too. ;)


But anyway, I have no idea where I heard this.


- Tom
Is it really that simple? That seems like a shame...


Andy: if they can make perfect streaming on 56K, I would buy it! :D
In theory, a perfectly clean signal fed through a perfect low-pass filter would eliminate all of the information above the cutoff frequency. However, there is no implementable perfect low-pass filter. Instead, the high-frequency information can be attenuated, but not eliminated. In truth, there may be total information loss at certain specific notch frequencies, but that can happen only at an infinitesimally small subset of the band, so you won't miss it. So an appropriately designed reversing filter could recover effectively all of the useful information.


In practice, this breaks down because noise enters the picture. The quantization of the output of the low-pass filter will introduce noise in the signal. Now if done right, the noise will be low enough that you won't notice it if you leave the filtering alone. But if you try to reverse the filtering, then those places that were severely attenuated will now be severely noisy.


So what I think all of this means from a practical standpoint is that you can do a limited amount of reversal, but if you try to be too aggressive with it you'll just end up with a lot of noise and you'll quickly see that it's not worth it.


I'd start by designing the theoretically near-perfect reversing filter, and follow it with a new lowpass filter. By adjusting the cutoff of the new lowpass filter, you can trade noise for high frequency content to your liking.
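
To make that concrete, here is a rough sketch of the two-stage idea along one image column, assuming (purely for illustration) that the prefilter was a [1, 2, 1]/4 vertical kernel. Nothing here is from a real product:

Code:
# Rough sketch: invert an assumed [1, 2, 1]/4 vertical prefilter in the
# frequency domain with a capped gain, then apply a new, adjustable low-pass.
# 'max_gain' and 'new_cutoff' are the noise-versus-detail knobs.
import numpy as np

def reverse_filter_column(col, max_gain=4.0, new_cutoff=0.40):
    n = len(col)
    freqs = np.fft.rfftfreq(n)                     # 0 .. 0.5 cycles/line
    H = 0.5 + 0.5 * np.cos(2 * np.pi * freqs)      # assumed prefilter response
    inverse = 1.0 / np.maximum(H, 1.0 / max_gain)  # near-perfect inverse, gain-capped
    new_lp = np.clip((0.5 - freqs) / (0.5 - new_cutoff), 0.0, 1.0)  # gentler low-pass
    spectrum = np.fft.rfft(col.astype(np.float64))
    return np.fft.irfft(spectrum * inverse * new_lp, n)

Raising new_cutoff lets more of the high end (and the noise) back through; lowering it hides the noise again.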


(Edit) Oops, I forgot one important wrinkle. Sometimes the low-pass filtering will be accompanied by some sort of upsampling or downsampling (interpolation, chroma resampling, etc.). In those cases, thanks to aliasing effects, it's much harder (and sometimes impossible) to reverse the filtering process. For example, in a downsampling stage, high-frequency information is actually folded back into the low frequencies as aliasing noise, and cannot be separated out.
Quote:
I'd start by designing the theoretically near-perfect reversing filter, and follow it with a new lowpass filter. By adjusting the cutoff of the new lowpass filter, you can trade noise for high frequency content to your liking.
Michael -


Any idea what a poor man's not-at-all-perfect reversing filter might look like as a software algorithm?


- Tom
This sounds very tricky to me. To figure out the best deconvolution, you'd need to know how the image was filtered. And I'm pretty sure that varies from source to source.


Some searching on the web tells me that it generally involves a spatial and/or a temporal noise filter. So I'm not even sure whether you'd want to get that information back -- if it's mostly noise, separating out the signal will be difficult. And since prefiltering is (purposefully) lossy, and the video image and the MPEG data have limited resolution, there will be less signal after reconstruction than before the prefiltering.


So it would be difficult. But maybe there's some telltale of prefiltering which can be identified without trying to figure out the whole process.


My hunch is that the most practical way to improve the resolution will be from an edge-detecting sharpness filter. In other words, take advantage of prior knowledge about the image (rather than the image processing) in order to improve it.
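
A toy example of the kind of thing I mean (my own sketch, not taken from any real scaler): detect edges with a simple gradient test and only sharpen there, so flat areas and their noise are left alone.

Code:
# Toy edge-detecting sharpener: apply an unsharp mask only where a gradient
# test says there is an edge, leaving flat (noisy) areas untouched.
import numpy as np

def edge_adaptive_sharpen(frame, amount=0.5, edge_threshold=12.0):
    f = frame.astype(np.float32)
    gy, gx = np.gradient(f)                  # vertical, horizontal gradients
    edges = np.sqrt(gx * gx + gy * gy) > edge_threshold
    blur = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) + 4.0 * f) / 8.0
    sharpened = f + amount * (f - blur)      # plain unsharp mask
    out = np.where(edges, sharpened, f)      # touch only the edge pixels
    return np.clip(out, 0, 255).astype(np.uint8)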


[Or if you were writing for VirtualDub, you could try to combine multiple images of a moving picture to increase your sampling frequency, but that would be extremely hard and very slow.]
I've had a chance to think about this a bit more.


I've pretty much convinced myself that trying to do this on top of deinterlacing video source could be an exercise in futility, just amplifying the same deinterlace artifacts we go to such lengths to avoid.


But film source is a horse of an entirely different color (when done badly ;) ).


I might just take the GreedyHM horizontal edge enhancement code and make a vertical version of it, for pulldown only. If it doesn't work well then we can leave it turned off but a prototype might be fun to play with.


But I wanted to revisit that code anyway because it can also be a smoothing filter with (mostly) a change of sign.
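
Just to make the idea concrete, something along these lines is what I'm picturing (a sketch in Python for discussion, not the actual GreedyHM code):

Code:
# Sketch of a vertical edge adjuster: positive strength enhances vertical
# detail, negative strength smooths it, so one slider could cover both.
import numpy as np

def vertical_edge_adjust(frame, strength=0.3):
    f = frame.astype(np.float32)
    neighbors = (np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)) / 2.0
    out = f + strength * (f - neighbors)   # strength < 0 turns it into smoothing
    return np.clip(out, 0, 255).astype(np.uint8)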


- Tom
I've come across a reference for this sort of thing.

http://www.owlnet.rice.edu/~elec539/.../blind/bd.html


The relevant section is a little more than halfway down the page, entitled "blind deconvolution". (The methods further up the page look better, but they assume that the parameters of the blurring process are known.) The most effective method (from Jain) reduced mean squared error by a factor of about 30, but I'm not sure the picture looks better than the original.
Lindsey -


Good find!


I only skimmed through it and didn't take the time to understand all the math, but a couple of things come to mind from a first reading.


It's encouraging that we could somewhat reverse the effects of low pass and some other filters in certain cases. OTOH, as we've already discussed above, it appears this process would amplify noise.


In an interactive product like DScaler, if we could actually do this in real time, we might not have to write all the code to estimate the noise level. Just let the user have a slider like the one already available for Edge Enhancement. Users can rapidly optimize things containing only one or maybe even two variables if they can see and easily judge the results.


Drifting OT somewhat ...


There is a depressing implication of the article that I didn't really want to see. It seems to show that measuring the Mean Squared Error (MSE) of the first and final images is not really a very good measure of image quality. Visually, some of the best and worst images didn't have the best and worst MSE.


This has implications for some things I'd hoped DScaler could do in the direction of Auto Tuning & processing in the future.


For instance, I've already got a half dozen or so hand tuned parameters in Greedy (High Motion). This is already approaching the limit of things we could manually optimize when we can't separate out all the effects. But there are another dozen things I could throw into the pot and I'd hoped to maybe use some AI techniques like Genetic Algorithms to tune them. But this becomes harder if MSE is not a good enough measure of quality and can't be adjusted somehow. You can't automatically tune parms if you can't easily measure the result.
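
Roughly, the batch tuning I'm picturing looks like this (Python just for discussion -- my real GA code is in Visual Basic, and this sketch is stripped down to selection and mutation with no crossover). The weak link is the fitness function, which here is just MSE against a reference clip:

Code:
# Stripped-down evolutionary tuner for a handful of filter parameters.
import numpy as np

rng = np.random.default_rng(0)

def fitness(params, test_clip, reference_clip):
    # Placeholder: the real thing would run the deinterlacer with 'params'
    # over the test clip.  Here we just score the unprocessed clip with MSE.
    processed = test_clip
    return -np.mean((processed - reference_clip) ** 2)

def tune(test_clip, reference_clip, n_params=6, pop=30, generations=250):
    population = rng.uniform(0.0, 1.0, size=(pop, n_params))
    for _ in range(generations):
        scores = np.array([fitness(p, test_clip, reference_clip)
                           for p in population])
        parents = population[np.argsort(scores)][pop // 2:]          # keep best half
        children = parents[rng.integers(0, len(parents), pop - len(parents))]
        children = children + rng.normal(0.0, 0.05, children.shape)  # mutate
        population = np.vstack([parents, children])
    scores = np.array([fitness(p, test_clip, reference_clip) for p in population])
    return population[np.argmax(scores)]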


So we need a better objective measure of picture quality than just MSE.


- Tom
Quote:
Originally posted by trbarry

It's encouraging that we could somewhat reverse the effects of low pass and some other filters in certain cases...
Yeah, it's pretty cool that you can simultaneously estimate the convolution and the original image. I'm worried about hidden assumptions about the nature of the prefilter, though.

Quote:
In an interactive product like DScaler, if we could actually do this in real time, we might not have to write all the code to estimate the noise level. Just let the user have a slider like the one already available for Edge Enhancement. Users can rapidly optimize things containing only one or maybe even two variables if they can see and easily judge the results.
Well, the Wiener filter does explicitly model the noise via Snn, the noise spectrum. So noise inference is still necessary. Besides, an estimate (or likelihood curve) of the noise level would probably be very useful for deinterlacing and (especially) pulldown detection.
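
For reference, the core of the Wiener approach is tiny (a sketch; H is the assumed-known prefilter response at the rfft frequencies, and the flat 'snr' constant stands in for a real Sxx/Snn estimate):

Code:
# Wiener deconvolution along one column: G = conj(H) / (|H|^2 + Snn/Sxx).
import numpy as np

def wiener_reverse_column(col, H, snr=100.0):
    spectrum = np.fft.rfft(col.astype(np.float64))
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # 1/snr plays the role of Snn/Sxx
    return np.fft.irfft(spectrum * G, len(col))

Push snr toward infinity and it collapses to the plain inverse filter, which is exactly where the noise blows up.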

Quote:
There is a depressing implication of the article that I didn't really want to see. It seems to show that measuring the Mean Squared Error (MSE) of the first and final images is not really a very good measure of image quality. Visually, some of the best and worst images didn't have the best and worst MSE.
That's well known. For example, noise is much more visible on a flat background. The best reference I've found on the mathematics of the perception of color is
http://kiptron.psyc.virginia.edu/ste...orVision2.html

Quote:
This has implications for some things I'd hoped DScaler could do in the direction of Auto Tuning & processing in the future.


For instance, I've already got a half dozen or so hand tuned parameters in Greedy (High Motion). This is already approaching the limit of things we could manually optimize when we can't separate out all the effects. But there are another dozen things I could throw into the pot and I'd hoped to maybe use some AI techniques like Genetic Algorithms to tune them. But this becomes harder if MSE is not a good enough measure of quality and can't be adjusted somehow. You can't automatically tune parms if you can't easily measure the result.
My leaning from statistics is to use the simplest reasonable model, and to add parameters only where there's a compelling reason. That tends to be good for robustness and efficiency.


Before going all out with genetic algorithms, "neural" nets, or other fancy stuff, I'd suggest a look at something simpler like EM. There are a bunch of multiparameter optimization techniques which are worth a try before getting into the mock-biological methods.
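
A sketch of the simpler route: hand the parameters to an off-the-shelf optimizer (Nelder-Mead here, standing in for EM or whatever fits the problem) with a squared-error objective. 'run_deinterlacer' is a made-up placeholder:

Code:
# Generic multiparameter tuning with scipy's Nelder-Mead simplex method.
import numpy as np
from scipy.optimize import minimize

def run_deinterlacer(params, test_clip):
    return test_clip                     # placeholder for the real filter

def objective(params, test_clip, reference_clip):
    processed = run_deinterlacer(params, test_clip)
    return np.mean((processed - reference_clip) ** 2)

def tune_simple(test_clip, reference_clip, initial_params):
    result = minimize(objective, np.asarray(initial_params, dtype=float),
                      args=(test_clip, reference_clip), method="Nelder-Mead")
    return result.x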

Quote:
So we need a better objective measure of picture quality than just MSE.
We could certainly use one, but squared error (or absolute error) is really easy to calculate, and probably good enough for most purposes. For real time filters, speed of evaluation is probably more important than accuracy.
Quote:
Before going all out with genetic algorithms, "neural" nets, or other fancy stuff, I'd suggest a look at something simpler like EM. There are a bunch of multiparameter optimization techniques which are worth a try before getting into the mock-biological methods.
Lindsey -


I mostly said genetic algorithms because I've already written all that (in Visual Basic) and am comfortable with it as a result of some of my unfortunate stock market explorations. They work quite well for non-linear spaces but are subject to the overfitting that bites many data miners. And this is something that would run in batch for days or weeks, not in real time.

Quote:
"So we need a better objective measure of picture quality than just MSE."


We could certainly use one, but squared error (or absolute error) is really easy to calculate, and probably good enough for most purposes. For real time filters, speed of evaluation is probably more important than accuracy.
Again, the tuning I was talking about was batch, just as I spent weeks manually tweaking the Greedy/HM parms. There were actually a few more parms in my original test Greedy/HM that I took out for performance reasons. For the Genetic A's I'd figured on using mean absolute error but am still wondering if it can be tuned somehow to more closely mimic pretty pictures. But I don't yet have any good ideas here, so absolute error may have to do, if I ever do this at all.


But I guess we are drifting further from the idea of the Inverse Filter here. If I did that I'd just start with Vertical Edge Enhancement and a slider. Then tinker.


It sort of reminds me of an old Dilbert cartoon where his boss asked him for anti-gravity or some such but settled for a screen saver with fish. ;)


- Tom
Tom -


Okay -- I was thinking entirely in terms of real time calculations. If you're using long runs to choose constants, then an expensive evaluation does make sense. You clearly need "Videophile in a box", which would give carefully considered opinions on each frame. ;)


More realistically, you could use squared error for automatic evaluation, then tweak things the last little bit by hand.


P.S.: To heck with anti-gravity. Fish screensavers rule!
Genetic routines are really cool, but aren't we talking months of processing time for HD, rather than hours or days?
Someone earlier asked what a poor man's reversing filter might look like... perhaps some sort of parametric high-pass "pre-emphasis" filter would do. Something with 0 dB DC gain, adjustable high-frequency gain, and an adjustable crossover.
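
One possible shape for it, operating per image column in the frequency domain (a sketch; the constants are arbitrary):

Code:
# Parametric pre-emphasis: unity (0 dB) gain at DC, rising smoothly to
# 'hf_gain' above the crossover.  Both knobs would map naturally to sliders.
import numpy as np

def preemphasis_column(col, hf_gain=2.0, crossover=0.25, width=0.08):
    n = len(col)
    freqs = np.fft.rfftfreq(n)                  # 0 .. 0.5 cycles/line
    t = np.clip((freqs - crossover) / width + 0.5, 0.0, 1.0)
    gain = 1.0 + (hf_gain - 1.0) * t * t * (3.0 - 2.0 * t)   # smoothstep ramp
    spectrum = np.fft.rfft(col.astype(np.float64))
    return np.fft.irfft(spectrum * gain, n)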


It also seems to me that the best place to apply this filter would be on the 480p signal, after deinterlacing but before scaling. Putting it after scaling runs the risk of amplifying interpolation noise or aliasing. Before scaling avoids this, and it will also take less CPU time.


You would also want to experiment with two different kinds of designs: radially symmetric and horizontal/vertical separable. If you choose the latter, then the CPU load is even lower, because it can be implemented first by filtering horizontally, and then by filtering vertically. A radially symmetric filter would have a full two-dimensional kernel but it might achieve better results visually.
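
To illustrate the separable trick (scipy here purely for illustration, not anyone's inner loop):

Code:
# Two cheap 1-D passes give exactly the same result as one full 2-D
# convolution with the outer-product kernel, at far fewer multiplies per pixel.
import numpy as np
from scipy.signal import convolve, convolve2d

k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0            # 1-D kernel
frame = np.random.rand(480, 720)

horiz = convolve(frame, k[np.newaxis, :], mode="same")    # horizontal pass
sep = convolve(horiz, k[:, np.newaxis], mode="same")      # then vertical pass

full = convolve2d(frame, np.outer(k, k), mode="same")     # 25-tap 2-D kernel

assert np.allclose(sep, full)                             # identical output

Most radially symmetric kernels cannot be factored this way (the Gaussian being the notable exception), which is where the extra cost comes from.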


I agree with the other posters who suggested that genetic algorithms, neural nets, etc. are just too CPU intensive for real-time processing. If you try anything more ambitious than a simple linear 2-D FIR filter kernel then you need a Teranex! :)
Quote:
It also seems to me that the best place to apply this filter would be on the 480p signal, after deinterlacing but before scaling. Putting it after scaling runs the risk of amplifying interpolation noise or aliasing. Before scaling avoids this, and it will also take less CPU time.


You would also want to experiment with two different kinds of designs: radially symmetric and horizontal/vertical separable. If you choose the latter, then the CPU load is even lower, because it can be implemented first by filtering horizontally, and then by filtering vertically. A radially symmetric filter would have a full two-dimensional kernel but it might achieve better results visually.
Michael -


All scaling in DScaler is actually done by the video card anyway. Also, I was most interested in just the vertical LP filtering because somehow I got the impression that more of this was being used because of the interlace issue. I hear this is even true on DVDs, but maybe less so for the SuperBit DVDs. Any suggestions for sample code for a very efficient but simple vertical-only un-filter?

Quote:
Genetic routines are really cool, but aren't we talking months of processing time for HD, rather than hours or days?
Dan -


My own experience is you can achieve decent results with a population of maybe 30 virtual deinterlacer individuals, each of whom would watch and be tested on your sample test material once (one at a time) per test generation. And usually you can converge in a few hundred generations. They could watch and be tested more or less in real time, so it would mostly depend upon how long your test movie was.


If it was only a two-minute test movie then it might take only about 2 * 30 * 250 minutes = maybe 15,000 minutes, or a little over 10 days. But I'm being sloppy here and it could easily go many times that. But I could live with that.
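
For anyone checking my arithmetic:

Code:
# Back-of-the-envelope run-time estimate, same assumptions as above.
test_minutes, population, generations = 2, 30, 250
total_minutes = test_minutes * population * generations
print(total_minutes, total_minutes / (60 * 24))   # 15000 minutes, ~10.4 days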


I left one of my stock market tests going for about 3 weeks once on a slower machine a few years ago. You can usually tell when they stop making progress. But it's in the range you can do on a single dedicated processor if you are patient. (and I made it restartable ;) )


- Tom