Originally Posted by Fudoh
for movie material there is no difference, since the 1080i60 stream contains all the full-frame image information. For live video material, you get double the vertical resolution per 1/60 sec. Think of the 1080i60 stream as 540p60 instead. Good video deinterlacers (external processors) will calculate an amazing amount of detail out of this signal, while TVs themselves often just upscale the 540p fields without adding any detail - that's likely what your sales reps meant in the first place.
To clarify: for progressive source material shot on film or with progressive digital cameras, each 1920x1080 progressive camera frame is split into an odd field (half frame) containing rows 1, 3, 5, ..., 1079 and an even field containing rows 2, 4, 6, ..., 1080. These fields are displayed sequentially every 1/60 of a second as 1080i60 - so the same instant in time originally captured by the camera (film or progressive digital cam) is now split into two instants in time.
These odd and even "540p" fields in the 1080i60 signal can be inverse telecined back to progressive frames. The original progressive frames are reconstructed by recombining the two 540p fields (odd and even rows of pixels) that were split apart when the material was telecined, or interlaced. Recombining fields like this is known as "weave" deinterlacing.
Simply upscaling a single 540p field to full height, without weaving, is known as "bob" deinterlacing.
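The split/weave/bob relationship above can be sketched in a few lines. This is just a toy model - a "frame" is a list of row labels rather than real pixel data, and the function names (`split_fields`, `weave`, `bob`) are my own, not anything from an actual video pipeline:

```python
def split_fields(frame):
    """Split a progressive frame into its odd field (rows 1,3,5,...)
    and even field (rows 2,4,6,...)."""
    odd = frame[0::2]    # 1-indexed rows 1, 3, 5, ...
    even = frame[1::2]   # 1-indexed rows 2, 4, 6, ...
    return odd, even

def weave(odd, even):
    """Recombine two fields taken from the same source frame:
    a perfect reconstruction of the original."""
    frame = []
    for o, e in zip(odd, even):
        frame.append(o)
        frame.append(e)
    return frame

def bob(field):
    """Upscale one field to full height by doubling every row:
    full-size picture, but no new vertical detail."""
    frame = []
    for row in field:
        frame.append(row)
        frame.append(row)
    return frame

frame = ["row%d" % r for r in range(1, 9)]   # toy 8-row frame
odd, even = split_fields(frame)
assert weave(odd, even) == frame             # weave restores the original
print(bob(odd))                              # each odd row simply appears twice
```

The assert is the whole point: when both fields came from the same progressive frame, weaving loses nothing, while bob throws half the rows away and fakes them.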
For 1080i60 signals recorded with interlaced 1080i cameras, you need a motion adaptive deinterlacing chip to extract maximum detail and avoid falling back to plain bob deinterlacing.
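The core idea of motion adaptive deinterlacing can be illustrated per missing line: compare the same line in two fields of the same parity one frame apart; where nothing moved, weave the real line back in, and where something moved, interpolate vertically (bob-like) to avoid comb artifacts. A minimal sketch, with rows as lists of luma values and a made-up `threshold` - real chips work per pixel with far more sophisticated motion detection:

```python
def deinterlace_row(above, below, prev_opposite, next_opposite, threshold=8):
    """Fill in one missing line of a deinterlaced frame.

    above / below:   the lines just above and below, from the current field
    prev_opposite:   the missing line as it appeared in the previous
                     opposite-parity field
    next_opposite:   the same line one frame later (same parity as prev)
    """
    # Motion test: if the line is unchanged across a full frame period,
    # the scene is static there.
    motion = max(abs(a - b) for a, b in zip(prev_opposite, next_opposite))
    if motion < threshold:
        return prev_opposite                 # static: weave, keep full detail
    # Moving: interpolate within the current field (bob-like).
    return [(a + b) // 2 for a, b in zip(above, below)]

# Static area: the real line is woven back in.
print(deinterlace_row([0, 0], [50, 50], [100, 100], [100, 100]))   # [100, 100]
# Moving area: interpolate instead of weaving, avoiding combing.
print(deinterlace_row([0, 0], [50, 50], [0, 0], [200, 200]))       # [25, 25]
```

This is why motion adaptive sets keep full 1080-line detail in static picture areas while only moving areas drop toward bob-level resolution.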
As for the specific questions:
1. I talked to 3 different Best Buy reps and they all told me that no LCD TV will take a 1080i tv signal and de-interlace it to 1080p. Is that true? Because for the longest I thought Samsung's, Sony's etc. video processors in their newer LCD TVs were able to do so.
>> First mistake- BB is the WORST place to ask for advice. Common knowledge around here.
All 1080p displays MUST deinterlace somehow to display 1080i60 fullscreen. Most sets probably just bob each 1920x540 field to create a 1920x1080 frame, basically doubling every row of pixels. Better sets will do 3:2 pulldown detection/weave to re-create progressive frames from progressive source (shot with progressive/film cameras) and do motion adaptive deinterlacing on 1080i source (shot with interlaced cameras). The best sets will also remove judder by eliminating the 3:2 cadence of inverse telecined (weaved) progressive-sourced material, displaying at an exact multiple of 24fps (48Hz, 72Hz or 120Hz).
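For reference, the 3:2 cadence mentioned above comes from how 24fps film is fitted into 60 fields per second: alternate frames are held for 3 fields and 2 fields. A toy sketch (frame letters plus 't'/'b' for top/bottom field; `telecine_32` is my own name for illustration):

```python
def telecine_32(frames):
    """3:2 pulldown: 4 film frames -> 10 fields (24 fps -> 60 fields/s).
    Every other frame is held for 3 fields instead of 2."""
    fields = []
    for i, f in enumerate(frames):
        pattern = ["t", "b", "t"] if i % 2 == 0 else ["t", "b"]
        fields.extend(f + p for p in pattern)
    return fields

print(telecine_32(["A", "B", "C", "D"]))
# ['At', 'Ab', 'At', 'Bt', 'Bb', 'Ct', 'Cb', 'Ct', 'Dt', 'Db']
```

A good deinterlacer spots that repeating 3-2-3-2 field pattern, weaves the fields back into the original A, B, C, D frames, and (in the best sets) displays them at 48/72/120Hz so every frame is held for the same amount of time - that's the judder removal.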
2. When de-interlacing 1080i 30 fps to 1080p, does it result in 1080p30fps, or by combining the separate odd and even fields, do you get 1080p 60fps?
>> Already answered- 1080i30 doesn't exist in normal commercially available video sources. 1080i is always 60 (or, in 50Hz countries, 50) *fields* per second: 1920 columns x 540 odd rows for the first 1/60 of a second, then 1920 columns x 540 even rows of pixels for the next 1/60 of a second, and so on.
3. If it does de-interlace to 1080p 30fps, what do you need to have to change the scan type from 30fps to 60fps.
>> Again, don't confuse *fields* (interlaced) per second with *frames* (progressive) per second.
4. Is there a difference between sourced 1080p compared to 1080i that has been deinterlaced to 1080p?
>> Again, if the material is shot with an interlaced camera, you need motion adaptive deinterlacing to improve detail beyond 1920x540 (for moving images). If the material is shot with a film or progressive digital camera, weave deinterlacing is all that is needed (plus 3:2 pulldown handling for material originating at 24 frames per second).