Originally Posted by ctreesh
I would like to thank those who have contributed to this post. It has indeed helped me get a better understanding.
The one part that is still a tough cookie to crack is the fact that even if you go from 1080i to 1080p, the actual resolution of the picture does not increase, yet the scan rate doubles.
Does doubling the scan rate actually double the resolution?
No. However, as DVD makes apparent, it can obscure less and reveal more of the detail that is present in the source.
Most interlaced displays overlap scan lines slightly to reduce the artifacts caused by scanning at a low rate. As such, they tend to blur the fields together, obscuring detail present in the image.
With progressive scanning, especially on progressive sources (i.e., film-sourced content), you can bypass this blurring and get a significantly better result; the sketch below makes the difference concrete.
As for why there are progressive-scan DVD players when displays have deinterlacers, it's primarily because it's much easier to do the inverse telecine (IVTC) in the digital domain, where you have all the information (i.e., pulldown flags), versus after analog conversion, where all you have is fields and you have to evaluate ("guess") which fields go together; a toy illustration of that guessing follows.
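The uniform-brightness frames and the crude pixel-difference score in this sketch are assumptions of mine; real deinterlacers use far more sophisticated motion and cadence analysis.

```python
# With no pulldown flags, a display-side deinterlacer can only score
# candidate field pairings and guess which fields share a film frame.

def mismatch(field_a, field_b):
    """Crude field-fit score: total pixel difference between the lines
    of two fields. Fields scanned from the same film frame score low."""
    return sum(abs(a - b)
               for line_a, line_b in zip(field_a, field_b)
               for a, b in zip(line_a, line_b))

# Two toy film frames with different content, each split into two fields.
frame1_top, frame1_bot = [[10] * 4] * 2, [[12] * 4] * 2
frame2_top, frame2_bot = [[200] * 4] * 2, [[205] * 4] * 2

# After analog conversion, the flags are gone; the display sees only the
# bare field stream: frame1_top, frame1_bot, frame2_top, frame2_bot, ...
print(mismatch(frame1_top, frame1_bot))  # 16: correct pairing scores low
print(mismatch(frame1_bot, frame2_top))  # 1504: fields from different frames
```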
Also, just so it's not missed, probably 90% of DVDs (i.e., those from Hollywood) are actually 480p. They contain true progressive content that is stored as fields and have metadata (flags) included for NTSC output. Note that being stored as fields is different from being interlaced; a sketch of the flag-driven playback follows.
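Here is a hedged sketch of that output path. The repeat-first-field idea mirrors the MPEG-2 flags, but the data structures and function names are my own toy illustration, not a real decoder's API.

```python
# Toy "soft telecine": progressive 24p frames plus flags become the
# 60-field NTSC cadence at playback time. Nothing interlaced is stored;
# the flags just tell the player how to emit fields.

def telecine_3_2(frames_24p):
    """Emit fields in a 3:2 cadence: alternating frames contribute three
    and two fields, so 24 frames/s becomes 60 fields/s."""
    fields = []
    for i, frame in enumerate(frames_24p):
        top, bottom = frame[0::2], frame[1::2]
        fields += [top, bottom]
        if i % 2 == 0:          # stand-in for the repeat_first_field flag
            fields.append(top)  # third field repeats the first
    return fields

frames = [[[n] * 4 for _ in range(4)] for n in range(4)]  # four toy frames
print(len(telecine_3_2(frames)))  # 10 fields from 4 frames: the 3:2 pattern

# A progressive-scan player skips this step entirely and hands the
# stored progressive frames straight to the display as 480p.
```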
It's probably an argument of terminology.
No, it does not double the "resolution", yet it's still twice the amount of data in the same amount of time.
Not necessarily. In the case of film content, interlaced and progressive transmission are just two different ways of carrying the same data, as the round trip below shows.
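A small sketch of that point, using the same toy rows-of-numbers frames as above: splitting a progressive frame into fields for transmission and weaving them back reproduces the frame exactly.

```python
# For film content, interlacing can be a lossless repackaging: the two
# fields together contain every line of the original frame.

def to_fields(frame):
    """Package a progressive frame as a top field and a bottom field."""
    return frame[0::2], frame[1::2]

def from_fields(top, bottom):
    """Weave the two fields back into the original progressive frame."""
    return [line for pair in zip(top, bottom) for line in pair]

frame = [[10 * row + col for col in range(4)] for row in range(4)]
assert from_fields(*to_fields(frame)) == frame  # the round trip is exact
print("lossless")
```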
The real-world question is: does it make the picture look twice as good?
That's a value judgment, and I think I disagree with many here on the magnitude of the differences.
Thanks again for all the input and feedback. It's a very complex topic indeed.
It's complex, but I think you're making it harder than it needs to be.
For example, as Mauney noted, there are really three distinct issues: source format, transmission format, and display format.
None of them is tied to the others, and you can come up with a huge number of combinations among them.
What I have learned is this:
In 1080i, only 540 lines of video are on a CRT display at any one time.
In 1080p, all 1080 lines of video are on the CRT display at the same time.
True, but I'm not sure what bearing it has on anything.
On fixed-pixel displays, all 1080 lines are on the screen at the same time, but there is a difference between 1080i and 1080p sources. 1080p will have more "content".
Not necessarily. Actually, in all reality, 1080i sources probably have more real data, since 1080p60 sources are basically unheard of (it's not part of ATSC, AFAIK). Basically, the industry has standardized on 1080i60 and 1080p24 for source formats, the former being for video (documentaries, live TV, DiscoveryHD-type stuff) and the latter being for movies and many TV shows. The former actually has the higher data rate; the arithmetic below shows why.
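A quick back-of-the-envelope check on that data-rate claim. These are raw, uncompressed sample rates (one luma sample per pixel); actual broadcast bitrates depend on compression, but the ratio makes the point.

```python
# Raw sample rates for the two standard 1080-line source formats.
width, height = 1920, 1080

rate_1080i60 = width * (height // 2) * 60  # 60 fields/s of 540 lines each
rate_1080p24 = width * height * 24         # 24 full frames/s

print(f"1080i60: {rate_1080i60:,} samples/s")  # 62,208,000
print(f"1080p24: {rate_1080p24:,} samples/s")  # 49,766,400
```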
As far as transmission goes, the industry has so far standardized on just 1080i60, but 720p60 is a close second, with 1080p24 gaining traction slowly.
Like I said, I really think you're making this too hard.
1080p is the pinnacle of display technology at the moment (especially with things like the RS1 coming out).
Essentially all movies and a large portion of TV content are 1080p24 (even if encapsulated in 1080i60), and a 1080p display is the only way to display them optimally.
For the remaining 1080 content, i.e., 1080i60, video processing has reached the point (Realta HQV, Gennum VXP) where a 1080p display is probably the best way to view HD video as well.
For everything else, 1080p displays mean upconverting, i.e., they can display any other format without loss.
And the big thing: 1080i displays (especially front projectors) are a dying breed these days, and the latest crop of digitals is rapidly closing (if it hasn't already closed) the gap with CRT in image quality.