Originally Posted by DonoMan
Well, the weird cadences are almost never used, even in animation, and certainly not in new animation. The bigger problem is that in a lot of telecined material the pattern will switch from scene to scene (bad editing), and sometimes (particularly in animation, though not all of it) different elements that were overlaid onto the same scene carry different patterns, so you can't get a clean picture back just by inverse telecining. Also, when there is only a little movement on screen, like a still scene where just someone's mouth is moving, it is very hard to tell accurately which field match is best. Processors will try to lock onto a pattern once they detect one, but again, detecting one can be a real pain, and it can take a while to detect a new pattern and correct for it when the pattern changes.
I know it seems straightforward and easy, but a lot of material is far from either of those.
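For anyone unfamiliar with the terms above, an AVISynth IVTC is normally done in two steps: field matching (pairing fields back into progressive frames) followed by decimation (dropping the duplicate frame that 3:2 pulldown leaves behind). The sketch below uses the TIVTC filters TFM and TDecimate purely as an illustration, not necessarily the plugin being discussed, and the source name and parameter values are assumptions:

# Illustrative IVTC chain (TIVTC filters; file name and values are assumptions)
MPEG2Source("movie.d2v")   # telecined 29.97 fps interlaced source
TFM(order=1)               # field matching: pick the best field pairing per frame
TDecimate(cycle=5)         # decimation: drop 1 duplicate in every 5 frames -> 23.976 fps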
I made an IVTC plugin for AVISynth. I am not going to claim that I know the most about the subject, nor that my filter does a better job than any other decent IVTC filter (actually, these days I often use someone else's over mine), but it is a real pain to get the thing working well, especially when you have to rely on default parameters. In AVISynth we can design our filters with parameters and then tweak them for each specific video (or, if you want to spend a lot of time, each scene), but we can't do that in consumer products. Even I wouldn't want to mess with combing-detection thresholds and whatnot when I just want to watch a DVD.
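To make the per-video (or per-scene) tweaking concrete, here is a rough sketch. cthresh is TFM's combing-detection threshold; the frame ranges and threshold values are made up for illustration only, and this is again TIVTC rather than the plugin mentioned above:

# Per-scene tuning by trimming and splicing (ranges and values are assumptions)
src = MPEG2Source("movie.d2v")
part1 = src.Trim(0, 2999).TFM(order=1, cthresh=9).TDecimate()   # default-ish combing threshold
part2 = src.Trim(3000, 0).TFM(order=1, cthresh=6).TDecimate()   # more sensitive detection for a near-static scene
return part1 ++ part2

A consumer player has to make this kind of decision on the fly with one fixed set of defaults, which is exactly why it struggles on awkward sources.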