Originally Posted by markr041
Pixelization occurs when the bitrate is too low.
What you're calling pixelization is probably visible DCT blocks... not exactly the same thing. Yes, you see visible DCT blocks when the bitrate falls too low without compensation. That's true of any DCT encoder: MPEG-1, MPEG-2, AVC, etc.
What normally happens in a frame is that the video is broken up into blocks, 16x16 pixels for example. After the DCT, the information is now frequency-space, not pixel-space. Some of this information is tossed out... the higher-frequency stuff... and what's left is losslessly compressed.
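To make that concrete, here's a minimal sketch of the block DCT and quantization step in Python. The 8x8 block size and the ramp-shaped quantization matrix are illustrative choices on my part, not the actual MPEG matrices:

```python
import numpy as np
from scipy.fft import dctn, idctn

# One 8x8 pixel block (macroblocks are 16x16, with the DCT run on 8x8 pieces)
block = np.random.randint(0, 256, (8, 8)).astype(float)

coeffs = dctn(block - 128, norm='ortho')  # pixel-space -> frequency-space

# Crude quantization: coarser steps at higher frequencies, so most of the
# high-frequency coefficients round to zero and compress to almost nothing.
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
qmatrix = 16 + 8 * (u + v)                # hypothetical ramp, not a real MPEG matrix
quantized = np.round(coeffs / qmatrix)

print(f"nonzero coefficients kept: {np.count_nonzero(quantized)} of 64")

# Decoder side: dequantize and inverse-DCT back to pixels (lossy round trip)
reconstructed = idctn(quantized * qmatrix, norm='ortho') + 128
```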
When there is too much difference between adjacent blocks for the given bitrate, you see the blocks... the edges no longer line up, due to the differences in filtered information. A DVD compression engineer will selectively apply low-pass filtering (e.g., some blur) to a high-motion scene to counteract the DCT blocking effect. But your camcorder, HDV or AVC, can't do that. Yet.
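For the curious, that idea looks roughly like this... blur a frame slightly before encoding when motion is high, so there's less high-frequency detail to fight over. The motion metric and both parameter values here are hypothetical, just to show the shape of it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prefilter(frame, prev_frame, motion_threshold=12.0, sigma=1.0):
    """Blur a frame only when frame-to-frame motion is high.

    The mean-absolute-difference motion metric and both parameter values
    are illustrative guesses, not what a real compression engineer uses.
    """
    motion = np.mean(np.abs(frame.astype(float) - prev_frame.astype(float)))
    if motion > motion_threshold:
        return gaussian_filter(frame.astype(float), sigma=sigma)
    return frame.astype(float)
```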
So the problem of fast motion IS a problem of bitrate, for any DCT-based encoder. Here's what happens. You record one I-frame... the next 14 or so are going to be predicted from that frame. The next frame is captured, and the video processor runs its motion detection algorithms to determine where things moved between frames. Along with those vectors, it stores the "error" in compressed form... the difference between the reference frame with vectors applied and the current frame. That should be a very small amount of information, but in fast motion, it might well be a nearly full frame.
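Here's a toy version of that motion search for a single 16x16 block... exhaustive SAD (sum of absolute differences) matching over a small window. Real encoders use much faster heuristic searches, but the output is the same idea: a vector plus a residual that still has to be DCT-coded:

```python
import numpy as np

def motion_search(ref, cur, by, bx, block=16, radius=8):
    """Find the (dy, dx) offset into ref that best predicts cur's block at (by, bx)."""
    target = cur[by:by + block, bx:bx + block].astype(int)
    best, best_sad = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            sad = np.abs(target - ref[y:y + block, x:x + block].astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    # The residual is the "error" that gets compressed for a P frame; with
    # fast motion it stays large even under the best vector, and that's
    # exactly where the bits run out.
    dy, dx = best
    residual = target - ref[by + dy:by + dy + block, bx + dx:bx + dx + block].astype(int)
    return best, residual
```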
And thus, the P and B frames, past a certain speed of motion, look horrible. There's too much moving from frame to frame for the MPEG algorithm to really work properly, and you wind up with heavily overcompressed video. Based entirely on motion.
This is one place a 60p mode is handy... each frame covers half the time, and thus half the physical motion, of a frame at 30p or 60i.
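The arithmetic is simple enough to show (the pan speed is just a made-up number):

```python
pan_speed = 600  # pixels per second, purely illustrative
for fps in (30, 60):
    print(f"{fps}p: {pan_speed / fps:.0f} px of motion per frame")
# 30p: 20 px per frame; 60p: 10 px -> smaller residuals for each P/B frame
```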
Originally Posted by markr041
Bad de-interlacing can result in things like mice teeth, tearing, etc. But that is the fault of the software/hardware player.
Well, kind of. The problem on a PC, versus a television, is that the PC's display isn't likely to be synched to the video playback. When you play from your camcorder to an HDMI monitor or TV, it's that video signal driving the TV's horizontal and vertical syncs. The TV can count on a real 59.94Hz refresh of the display as well, and the better sets also have very sophisticated techniques for adaptive de-interlacing, if they need to de-interlace (obviously, CRTs and some plasmas will display interlaced video directly).
On a PC, the video display is synched to the graphics card's video refresh. Even if that's exactly some multiple of your video's frame rate, there's no hard synchronization, and no way these days to actually display interlaced video. There's also less CPU power available for de-interlacing magic than the custom video DSPs you find in the better televisions these days... particularly since much of the PC's CPU horsepower is spent decoding the AVC.
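For reference, the crudest software approach... a "bob" de-interlace that splits the fields and line-doubles each one... looks like this. It's a sketch of the simplest case, nowhere near the motion-adaptive processing a good TV's DSP does:

```python
import numpy as np

def bob_deinterlace(frame):
    """Split an interlaced frame into two line-doubled progressive frames.

    Even lines = top field, odd lines = bottom field. Plain line doubling,
    no interpolation; real players at least blend or interpolate fields.
    """
    top = np.repeat(frame[0::2], 2, axis=0)
    bottom = np.repeat(frame[1::2], 2, axis=0)
    return top, bottom
```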
Bottom line is, video on a PC is sketchy at best. Yes, some tools improve it, just as some televisions handle interlaced video or SD upscaling better than others. But you're at an inherent disadvantage watching video on a PC... for any real testing, you need to get the computer out of the way.