Interesting topic. Nothing in practice does this, and for good reason, but theoretically you can. In addition to the many practical issues, reversing the steps also imposes a lower ceiling on quality.
Consider a simple (mathematically convenient) case: 640x480i30 -> 1280x960p60
And the two block diagrams which switch the order of operations:
1) Input -> Deinterlace -> Resize(2x,2x) -> Output
2) Input -> Resize(2x,2x) -> Deinterlace -> Output
Where H & V in Resize(H,V) are the scale factors in the horizontal and vertical directions respectively.
Modern deinterlacers have three main modes of operation:
1) Video: Intra-Field (Interpolation)
2) Video: Inter-Field (Merging)
3) Film (Inverse Telecine)
When operating purely in intra-field mode, you can think of the input as 640x240p60. The deinterlacing operation is then effectively a 2x resize in the vertical direction only: Resize(1x,2x). You can simplify the block diagrams by combining the resizing blocks, and both diagrams become the same thing:
Input -> Resize(2x,4x) -> Output
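Here's a toy demonstration of that combination. With a nearest-neighbour kernel (chosen only because it composes exactly; the helper names and the tiny 2x2 "field" are illustrative, not any real pipeline), intra-field deinterlacing is just a vertical 2x resize, and it folds into the following resize either way:

```python
def resize_nn(img, h_factor, v_factor):
    """Nearest-neighbour resize: repeat each column h_factor times,
    each row v_factor times. img is a list of rows of pixel values."""
    out = []
    for row in img:
        stretched = [px for px in row for _ in range(h_factor)]
        for _ in range(v_factor):
            out.append(list(stretched))
    return out

def deinterlace_intra(field):
    """Intra-field ('bob') deinterlace = vertical 2x resize of one field."""
    return resize_nn(field, 1, 2)

field = [[1, 2],
         [3, 4]]   # one tiny field (think 640x240 in the post's example)

a = resize_nn(deinterlace_intra(field), 2, 2)   # Deinterlace -> Resize(2x,2x)
b = resize_nn(field, 2, 4)                      # single Resize(2x,4x)
assert a == b   # identical outputs: the two blocks collapse into one resize
```

Real deinterlacers and scalers use much better kernels, but the composition argument is the same: a pure vertical operation followed by a uniform resize is just one bigger resize.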
So in this case, there is no difference.
Motion-adaptive deinterlacing is now the norm for video content. Intra-field and inter-field modes are selected on a pixel-by-pixel basis depending on whether the pixel is judged to be moving or stationary. When a pixel is deemed stationary, inter-field mode is engaged and the fields are merged to reconstruct that point at full resolution (full detail). A moving pixel triggers intra-field mode. In film mode, entire fields are merged to undo the Telecine process.
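A minimal sketch of the per-pixel selection, with an assumed (and deliberately crude) motion detector; function and variable names are mine, not any shipping algorithm:

```python
def deinterlace_motion_adaptive(cur_field, prev_opposite_field, threshold=10):
    """cur_field holds the even lines of the frame; prev_opposite_field holds
    the odd lines, captured one field-time earlier. Returns a full frame.
    Crude per-pixel detector: if the old opposite-field pixel still agrees
    with its vertical neighbours, treat it as stationary and weave it in;
    otherwise interpolate within the current field."""
    h, w = len(cur_field), len(cur_field[0])
    frame = []
    for y in range(h):
        frame.append(list(cur_field[y]))              # even line: kept as-is
        missing = []
        for x in range(w):
            above = cur_field[y][x]
            below = cur_field[y + 1][x] if y + 1 < h else above
            interp = (above + below) // 2             # intra-field estimate
            weave = prev_opposite_field[y][x]         # inter-field candidate
            moving = abs(weave - interp) > threshold
            missing.append(interp if moving else weave)
        frame.append(missing)
    return frame

# On fully static content the weave path wins everywhere, so the original
# frame comes back exactly:
static = deinterlace_motion_adaptive([[0, 0], [20, 20]], [[10, 10], [30, 30]])
assert static == [[0, 0], [10, 10], [20, 20], [30, 30]]
```

Real motion detectors compare across several fields rather than against a vertical interpolation, but the structure (per-pixel weave-vs-interpolate choice) is the point.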
Long story short, the takeaway is: by scaling the image prior to deinterlacing, you throw away the opportunity to perfectly reconstruct stationary portions of video content, and entire frames of film content. The lowpass filtering inherent in resizing destroys this advantage, and that's why you don't reverse the steps.
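A toy illustration of why, using assumed helper names and a simple two-tap average as a stand-in for the resizer's lowpass: on stationary content, weaving the two fields reconstructs the original frame bit-exactly, but once the fields have been filtered, exact reconstruction is impossible:

```python
# A static frame with fine vertical detail (one value per row, 2 px wide):
frame = [[0, 0], [10, 10], [100, 100], [30, 30]]

even = frame[0::2]   # field 1 (lines 0, 2, ...)
odd  = frame[1::2]   # field 2 (lines 1, 3, ...)

def weave(even, odd):
    """Inter-field merge: re-interleave the two fields line by line."""
    out = []
    for e, o in zip(even, odd):
        out.append(e)
        out.append(o)
    return out

assert weave(even, odd) == frame   # stationary content: perfect reconstruction

def lowpass(field):
    """Stand-in for a resizer's vertical lowpass: average adjacent field lines
    (last line repeated at the edge)."""
    return [[(a + b) // 2 for a, b in zip(r0, r1)]
            for r0, r1 in zip(field, field[1:] + field[-1:])]

# Filter the fields first (i.e. scale before deinterlacing) and the weave can
# no longer recover the original detail:
assert weave(lowpass(even), lowpass(odd)) != frame
```

The averaging has mixed lines that were never vertical neighbours in the frame, so no later step can undo it; that's the quality ceiling the reversed order imposes.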
Originally Posted by DonoMan
Yes, scaling must be done in progressive form. Interlacing is always line-by-line, so scaling an image with interlacing (before deinterlacing) will result in >1 pixel lines (and most likely not even an integer multiple, which makes it even worse) which will be essentially impossible to feed through a deinterlacer since deinterlacers expect 1-pixel vertical weaves, not >1pixel.
Great example of one of the practical issues you run into when reversing the operations.
Originally Posted by DonoMan
And even if you did scale them, you better hope it's a bilinear scaler (which is not the best quality scaler, though it's a lot better than nearest-pixel) and not a bicubic or better, because bicubic would take information from the lines adjacent in the field, which is incorrect, since adjacent lines in a field are not adjacent lines in a frame. So bicubic scaling, while usually better than bilinear/NP, will make things even worse.
I don't understand. Why would a bicubic filter do that while bilinear and nearest-pixel don't? They are the same type of filter; they just have different coefficients. I can easily make a bilinear filter that uses lines adjacent in the field. Are you talking about specific implementations?