Originally Posted by KHarper
But for source material that is 1080i, that means that there are two "frames" in succession, the first with odd-numbered lines the second with even-numbered lines. If my TV has a native resolution of 720p, do I get a series of frames that are basically 540 lines progressively scanned, or does the tv take two frames, put the odds and evens back together into a single frame, and then basically downconvert from 1080 lines to 720 lines? I'm not sure it matters--I like what I'm getting either way; but I'm curious.
If your TV has progressive scan capabilities, it will have a circuit designed to deinterlace SDTV or other interlaced signals. However, it isn't entirely accurate to say that it "puts the odds and evens back together".
If the original source of the signal is film, then each frame was originally split into your "odd" and "even" scans for broadcasting (sort of), so the best method really is to put the odds and evens back together, as you said. The deinterlacing circuit is designed to recognize this and will use that method accordingly. I said "sort of" because even here it's not entirely accurate. Remember, film is 24fps, while NTSC video (TV in the Americas and Japan) is 30fps (29.97, strictly speaking), so there's a process called "telecine," also known as 3:2 pulldown, that splits the frames across fields, and it isn't QUITE the clean odd/even split I simplistically refer to here. If you're interested in how that works, Google "telecine."
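To give you a rough feel for the cadence, here's a toy Python sketch of 3:2 pulldown. This is just an illustration of the pattern, not anyone's actual broadcast chain, and the function name telecine_3_2 is made up for this example:

```python
# A minimal sketch of 3:2 pulldown ("telecine"): 4 film frames (24 fps)
# become 10 video fields (~60 fields/s, i.e. 30 interlaced frames/s).
# "t" = top (odd-line) field, "b" = bottom (even-line) field.

def telecine_3_2(film_frames):
    """Expand film frames into the 2-3-2-3... field cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        count = 2 if i % 2 == 0 else 3          # alternate 2 fields, 3 fields
        for _ in range(count):
            parity = "t" if len(fields) % 2 == 0 else "b"
            fields.append((frame, parity))
    return fields

print(telecine_3_2(["A", "B", "C", "D"]))
# [('A','t'), ('A','b'), ('B','t'), ('B','b'), ('B','t'),
#  ('C','b'), ('C','t'), ('D','b'), ('D','t'), ('D','b')]
```

Notice that some of the resulting interlaced frames pair fields from two different film frames (a B field with a C field, for example), which is exactly why it isn't the simple odd/even split.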
However, if the original source was NOT film, but was in fact created through interlaced scanning, then there's a slight time difference between those odd and even scans, about 1/60 of a second for NTSC. That's only a fraction of a second, but with motion video it's enough to produce visible artifacts if you simply weave the fields back together, so instead a deinterlacing circuit will typically use each existing scan on its own and "fill in" the blanks based on the available data.
In other words, it GENERATES those missing lines from scratch, based on the information in the surrounding lines. On the "odd scan," it generates the missing even lines from the odd lines around them; on the "even scan," it fills in the odd lines the same way. The actual algorithm varies from company to company, which is why some manufacturers' pictures will look better than others after the conversion.
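Here's a deliberately crude Python sketch of the simplest version of this idea (line-averaging "bob" deinterlacing). Real TVs use far more sophisticated, often motion- and edge-adaptive, proprietary algorithms; the function name bob_deinterlace is made up for this example:

```python
# A toy sketch of intra-field ("bob") deinterlacing: given one field
# (say, the odd lines of a frame), generate each missing line by
# averaging the real lines above and below it.

def bob_deinterlace(field_lines):
    """field_lines: list of scan lines (each a list of pixel values)."""
    frame = []
    for i, line in enumerate(field_lines):
        frame.append(line)                       # keep the real line
        if i + 1 < len(field_lines):
            nxt = field_lines[i + 1]
            # invent the missing line from its neighbours
            frame.append([(a + b) // 2 for a, b in zip(line, nxt)])
        else:
            frame.append(line[:])                # last line: just repeat
    return frame

odd_field = [[0, 0, 0], [100, 100, 100]]         # two real scan lines
for row in bob_deinterlace(odd_field):
    print(row)
# [0, 0, 0]
# [50, 50, 50]      <- generated
# [100, 100, 100]
# [100, 100, 100]   <- generated (edge repeat)
```

The quality differences between manufacturers come down to how cleverly they replace that naive averaging step, e.g. following edges or falling back to weaving when nothing is moving.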
When upconverting or downconverting the resolution, the deinterlacing is done first; the result is then upscaled or downscaled using more manufacturer-specific algorithms.
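So to answer the original 1080i-on-a-720p-set question concretely, the pipeline looks something like the sketch below. It reuses the toy bob_deinterlace() from above; nearest_scale() and display_1080i_on_720p() are hypothetical names, and a real set would use a much better scaling filter than nearest-neighbour:

```python
# Sketch of the processing order: deinterlace first, then scale the
# full progressive frame to the panel's native resolution.

def nearest_scale(frame, out_h, out_w):
    """Crude nearest-neighbour scaler; real sets use better filters."""
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

def display_1080i_on_720p(field_lines):
    # field_lines: one 540-line field of a 1080i signal
    progressive = bob_deinterlace(field_lines)    # 1) deinterlace to 1080 lines
    return nearest_scale(progressive, 720, 1280)  # 2) downscale to 720p
```

In other words: not 540 progressive lines, and not a simple 1080-to-720 squeeze of woven fields, but deinterlace-then-scale.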
By the way, since most DVD content is interlaced, a progressive-scan DVD player does the same thing, so the actual deinterlacing (and, in fact, scaling) may happen there rather than in the TV. This is why, when you have a progressive-scan TV, it sometimes looks better to turn off the progressive-scan circuit on your DVD player. If the deinterlacing circuit in your TV is far superior to the one in your DVD player, letting the TV perform the deinterlacing can result in a better picture. This is even more true of scaling: some DVD upconverters produce an absolutely horrid picture, while others are amazing (the same goes for DVRs and cable boxes that deinterlace/scale). The TVs that can upconvert tend to be better at it, but that's not always true either, so you just have to try it both ways and go with whatever looks better to you.