HD 1080i Test Pattern to determine Vector Adaptive Deinterlacing + others incl. Ticker - Page 8
BTW, the MS decoder has problems with 720p TV livestreams on ATI cards. Or better said: ATI cards have problems with 720p + the MS decoder due to the cards' (UVD) bad error concealment.
But most hosts store files only 30-40 days without a download, ugh.
I don't want to re-upload the rarely downloaded files every month.
Someone said www.nakido.com stores files longer, I hope he is right... and it seemed fast when I tried it out.
And they never tell us anything about traffic limits, boo!
Anyway the links should work again now.
- 117 Posts. Joined 8/2011
Windows Media Player also works correctly, but has banding like EVR/Haali.
Side note: under Options > Internal Filters, you must uncheck all transform filters (if not, you must make sure the MPEG filter is outputting interlaced flags, otherwise it won't work either... another hour of my life wasted there)
Now, what I don't understand is why CCC deinterlacing isn't detecting the 2:2 cadence of the 1080i50 pattern and just weaving it? Perhaps it only supports 2:2 detection for i60? I would very much like the i60 version of the pattern, but the link is not working for me, it wants to install some bloatware downloader which is unacceptable.
If anybody could please upload the i60 version of the cheese slices pattern to mediafire.com (it's free) that would be much appreciated.
You AVS people may have known this for a long time...
I don't deal with these things daily, so I was surprised a few days ago, wow...
VC-1 interlaced 1080i Slices:
Since LAV Filters (v0.50.1) everything has finally changed:
LAV Splitter can directly clean-connect both the .ts and .mkv Slices (VC-1 interlaced) in MPC-HC to LAV Video Decoder or the CyberLink Video Decoder (PDVD10) using DXVA!
Thank you to the authors!
Edited by blaubart - 12/1/12 at 1:54am
- 937 Posts. Joined 3/2010
That makes sense. Thanks.
I'm sorry about this, but I have just found out that the 1080i Cheese Slice test patterns are *not* really useful for judging video deinterlacing quality.
I always thought that the 1080i Cheese Slice test patterns were encoded as natively interlaced video. But instead I've just found out that they're encoded as *film* with a 2:2 cadence in both 60i and 50i variants. Practically that means that every 2 fields can be weaved together to form a perfect progressive frame. So the proper way to deinterlace these test patterns is to use a good IVTC algorithm which is able to detect the correct 2:2 cadence.

If we want to judge video mode deinterlacing, we need to use natively interlaced test patterns, which means there needs to be motion between every single field. The current 1080i Cheese Slice test patterns were encoded by simply taking the progressive source and then encoding it interlaced with a 2:2 cadence. The proper way to create video mode test patterns would have been to simply drop half of the lines of every progressive frame of the source (alternating odd and even lines for every consecutive frame).
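The difference between the two encodings can be sketched in a few lines of NumPy (a hypothetical illustration, treating each frame as a 2-D array of lines; these function names are mine, not from any real tool):

```python
import numpy as np

def telecine_2_2(progressive_frames):
    """How the Cheese Slices were apparently made: both fields of
    each interlaced frame are taken from the SAME progressive
    frame, giving a 2:2 film cadence with no motion inside a pair."""
    fields = []
    for frame in progressive_frames:
        fields.append(frame[0::2])   # top field (even lines)
        fields.append(frame[1::2])   # bottom field, same time instant
    return fields

def native_interlace(progressive_frames):
    """How a true video-mode test pattern would be made: each field
    comes from a DIFFERENT progressive frame (alternating parity),
    so there is motion between every single field."""
    fields = []
    for i, frame in enumerate(progressive_frames):
        fields.append(frame[i % 2::2])  # drop half the lines per frame
    return fields
```

With a 1080p/50 source, `native_interlace` yields 50 fields per second, i.e. genuine 1080i content, while `telecine_2_2` produces two time-identical fields per source frame - film, not video.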
So what does this mean? In contrast to what I originally thought (and probably many other people) the 1080i Cheese Slice test patterns do not (at all) say anything about whether video mode deinterlacing uses motion compensation or not. Motion compensation tries to "follow" the motion between all the fields, and it only works if the motion is relatively constant from one field to the next. This is not the case here. Since these test patterns are encoded as film, there is no motion between 2 fields, then there's a jump to the next field, then again there's no motion etc. No motion compensation algorithm sees this as something it can work with.
The only real use these 1080i test patterns have is to check if the IVTC algorithm is capable of detecting the 2:2 cadence and weaving the correct fields back together. Judging quality differences between different video mode deinterlacing algorithms with these test patterns is not a good idea because video mode deinterlacing is not the right way to handle these test patterns in the first place. If you do select different video mode deinterlacing algorithms for these test patterns, all you're seeing is how they handle *film* content, and that's not what they were made for.
Edit: If you want proof, just play the 1080i Cheese Slices with madVR with film mode forced on. Also disable the madVR option "only look at pixels in the frame center", so madVR cadence detection will also include the bottom scroll band. This way playback with madVR film mode will be 100% identical to the original progressive source.
Edited by madshi - 1/18/13 at 1:28am
..is true playing the 1080i MPEG-2 Slices, but not the H.264 and VC-1 ones. Maybe I could change it for MPEG-2, but I'm not sure it's important:
Cheese Slices was made in 2009, a time when nobody was talking about madVR and many people didn't even have a chance of Vector Adaptive deinterlacing on their machines.
The point was to show whether it's really present or not on different setups (using DXVA). That worked reliably all these years.
Hardware deinterlacers in other devices - TVs, stand-alone players and more (another test area) - also do not use madVR.
The problem is that if the GPU detects the cadence and applies proper IVTC then it really doesn't matter at all whether it uses VectorAdaptive deinterlacing or not, because video mode deinterlacing won't be in use at all if proper IVTC is done. So we don't even know for sure if good image quality means that VectorAdaptive deinterlacing is used. It could also simply mean that IVTC does its job. I believe that both NVidia and AMD do things "per pixel". So it's quite possible that good looking parts of the image look good because the film cadence was properly detected.
And this has really nothing to do whatsoever with madVR. I've only mentioned madVR as a quick way to check that these test patterns are really *film* and not *native interlaced video*. There's a world of a difference between the two, from the view point of a deinterlacer. Every (reasonably good quality) hardware deinterlacer in TVs, stand-alone players and more would behave quite differently if these test patterns were natively interlaced instead of telecined film.
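A toy sketch of that cadence-detection idea (assumed names; real GPU detectors work per pixel and are far more robust): try both possible 2:2 field pairings and keep the one that combs less.

```python
import numpy as np

def weave(top, bottom):
    """Interleave a top and a bottom field into one full frame."""
    frame = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
    frame[0::2], frame[1::2] = top, bottom
    return frame

def comb_score(frame):
    """Combing metric: how strongly each line disagrees with the
    average of its two neighbours. Near zero for a clean weave."""
    f = frame.astype(np.int64)
    return np.abs(2 * f[1:-1] - f[:-2] - f[2:]).mean()

def detect_2_2_phase(fields):
    """Return 0 if pairs (0,1),(2,3),... weave cleanly,
    1 if the 2:2 cadence is offset by one field."""
    a = sum(comb_score(weave(t, b)) for t, b in zip(fields[0::2], fields[1::2]))
    b = sum(comb_score(weave(t, b)) for t, b in zip(fields[1::2], fields[2::2]))
    return 0 if a <= b else 1
```

If the source really is telecined film, one of the two pairings reconstructs the progressive frames perfectly and the detector can simply weave; on native interlaced video both pairings comb, which is exactly why a different (video mode) algorithm is needed there.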
That's a mystery I can't explain
And there is indeed a difference from a 1080i .ts recorded in Europe (H.264) - that kind of 1080i .ts is decoded to ugly weave by madVR (all in MPC-HC, on two different AMD graphics cards).
Disabling madVR's internal decoding and using other DXVA decoders (PDVD10/12, LAV, MS) results in the normal DXVA deinterlacing as shown in post #1.
PDVD10/12 playing the H.264 Slices fall back to SkipField with madVR as renderer;
LAV on one PC output progressive, the other decoders did not.
It seems this Inverse Telecine (IVTC) is an NTSC thing and has something to do with 2:3 pulldown? Do you know where to download such a video? Here in Europe we have a different DVB standard.
Edited by blaubart - 1/18/13 at 10:30am
Getting weave with madVR on some European 1080i content can have multiple reasons. Does the madVR OSD say deinterlacing is off? In that case the detection that deinterlacing is needed simply didn't work correctly. You can then press Ctrl+Alt+Shift+D twice to manually enable deinterlacing in madVR. Or maybe you still left madVR on forced film mode? If you then try to play natively interlaced video content, you *must* see combing artifacts as a result, simply because film mode can't handle native video content properly.
IVTC means undoing the Telecine. Telecine applies to both NTSC and PAL, see here:
Your Slices (both 50i and 60i) were encoded with 2:2 pulldown. Which is normal for PAL film content, but quite unusual (though "legal") for 60i.
> Do you know where to download such a Video?
What do you mean with "such a Video" exactly? You mean a sample with 2:3 pulldown? Pretty much every USA movie broadcast and NTSC DVD is like that. Although some are soft-telecined (24 progressive frames or 48 interlaced fields with "repeat_first_field" flags) while others are hard-telecined (60 interlaced fields per second). And some are a mixture of both. HD DVDs were always soft-telecined.
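The two variants can be sketched like this (illustrative names, not any real API; "T"/"B" stand for top/bottom fields): hard telecine bakes the repeated fields into the stream, soft telecine only flags them.

```python
def hard_telecine_2_3(frames):
    """Hard 2:3 pulldown: emit alternately 2 and 3 fields per film
    frame, flipping parity so the stream stays T,B,T,B,...
    4 film frames -> 10 fields, i.e. 24p becomes 60i."""
    fields, parity = [], 0
    for i, frame in enumerate(frames):
        for _ in range(2 + i % 2):            # 2, 3, 2, 3, ...
            fields.append((frame, 'TB'[parity]))
            parity ^= 1
    return fields

def soft_telecine_2_3(frames):
    """Soft 2:3 pulldown: store the progressive frames untouched and
    set repeat_first_field on every other picture; the decoder
    repeats a field at playback time instead."""
    return [(frame, i % 2 == 1) for i, frame in enumerate(frames)]
```

Note how in the hard case some frames contribute three fields - that extra repeated field is what produces the characteristic 2:3 cadence an IVTC algorithm has to lock onto.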
- 74 Posts. Joined 1/2007
True interlaced content, which would properly test deinterlacing, would have motion between each field. To create this, you'd need to get a 1080p/50 video and convert it to 1080i/25 by removing half of the lines from each frame (i.e. turning each frame into a field). The same with 1080p/60 to 1080i/30. What you appear to have done is convert 1080p/25 to 1080i/25 (and 1080p/30 into 1080i/30), which is a non-destructive process and doesn't actually require deinterlacing.
Believe it or not, it's not so easy to get NTSC 1080i content in Europe. Blu-rays are all progressive, DVDs all 720x576, no US TV in HD at all... Anyway, I'd only need it for some aimless testing.
I am somehow a passionate "post-war child": most of my computers can't run madVR anyway, and the gaming PC is for gaming. I LOVE DXVA - simple setup, runs fluently and cool under any circumstances - and having compared even identical frames rendered by madVR and the "underdog" EVR, the differences were so tiny...
And I'm also not the guy to proclaim VA deinterlacing a "world saver" at all. If you read here you will see it. VA is only a tiny extra for sports freaks sitting 1 meter away from really big screens...
And the "madVR hysteria" all around makes me nutty - everybody who doesn't have it is immediately drowning in the desert..? Lately I like to tell them "some like it hot, me not"
Sorry sorry, please have mercy...
(1) Native interlaced video content (music concerts, sports) has 50 or 60 interlaced fields per second, where each field was recorded at a different point in time.
(2) Telecined film content (movies, newer sitcoms) has 50 or 60 interlaced fields per second where always at least 2 fields come from the same point in time and can simply be weaved together to get a perfect progressive output.
(3) For native interlaced video content you can use Bob, Motion Adaptive Deinterlacing, Vector Adaptive Deinterlacing, Motion Compensated Deinterlacing, YADIF etc etc. You can't use IVTC. If you do, you'll get heavy combing.
(4) For telecined film content, the only proper algorithm to use is IVTC. Using a video mode deinterlacer like Motion/Vector Adaptive Deinterlacing is the wrong algorithm to use and will produce subpar quality compared to IVTC.
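For case (4) with a 2:2 cadence like the Cheese Slices, the whole IVTC "algorithm" (once the cadence phase is known) is just weaving field pairs back together. A minimal NumPy sketch (my own function name, assuming fields as 2-D arrays of lines):

```python
import numpy as np

def ivtc_2_2(fields, tff=True):
    """Undo a 2:2 telecine: weave each consecutive field pair back
    into the original progressive frame. Only correct for telecined
    film; on native interlaced video this produces heavy combing."""
    frames = []
    for first, second in zip(fields[0::2], fields[1::2]):
        top, bottom = (first, second) if tff else (second, first)
        frame = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
        frame[0::2] = top      # even lines from the top field
        frame[1::2] = bottom   # odd lines from the bottom field
        frames.append(frame)
    return frames
```

Because the two fields of each pair come from the same time instant, this reconstructs the progressive source losslessly - which is why any interpolating video-mode deinterlacer can only do worse here.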
The Cheese Slices have been used by many people to judge the quality of video mode deinterlacers. But fact is that the Cheese Slices are actually telecined film content, and as a result using video mode deinterlacers on the Cheese Slices is completely the wrong algorithm. That doesn't mean that the Cheese Slices are useless. They certainly can be used for several things. What they can *not* be used for, though, is to check if a video mode deinterlacer uses motion compensation, or how the quality of different video mode deinterlacers ultimately compares.
Basically I just want everybody to know what the Cheese Slices really are (I believe most people had mistaken them as native interlaced video test patterns), so that everybody knows how to properly interpret the test results.