
HD 1080i Test Pattern to determine Vector Adaptive Deinterlacing + others icl. Ticker - Page 9

post #241 of 283
Thread Starter 
Quote:
The Cheese Slices have been used by many people to judge the quality of video mode deinterlacers.

Really? Where? I haven't seen anything of that. The last time I saw the Cheese Slices used in public was AnandTech testing whether a new AMD graphics card can do VA deinterlacing or only motion adaptive.
Maybe 1-2 years ago? That's exactly what the Slices are made for, and it works 100%. Meanwhile they can all do VA..? Even AMD is using it.
.
Edited by blaubart - 1/18/13 at 3:39pm
post #242 of 283
Quote:
Originally Posted by blaubart View Post

Really? Where? I haven't seen anything of that.

The very reason why I investigated the Cheese Slices is that someone on another forum claimed that the Cheese Slices would prove that AMD and NVidia do Motion Compensated Deinterlacing. Ok, but let's ignore that. You mentioned AnandTech, so let's look there:

(1) http://www.anandtech.com/show/2931/4

Quote:
When doing the research for this review we came across these complaints, and also a very interesting test for the issue: a specially crafted heavily interlaced 1080i MPEG-2 file called Cheese Slices, made by blaubart of the AV Science Forum. Cheese Slices is what amounts to an stress test that has more noise and interlacing artifacts in it than any real video would have, and is more than deinterlacers today can handle. It’s an unfair test – but that’s by design

All of that is factually wrong. It is not heavily interlaced, it's just telecined, which is a relatively mild form of interlacing. It doesn't have *any* interlacing artifacts in it. By using IVTC you can restore the full progressive images. It is also not an unfair test. It just requires a good film mode and cadence detection.

Quote:
For the sake of reference, the Radeon 5670 and higher pass this test.

Totally wrong. Both utterly fail, because the only way to pass this test is by fully (pixel-by-pixel) reproducing the original progressive frames. What the AnandTech screenshots actually show is that neither card is able to detect that the Cheese Slices are telecined film. Judgement: Failure.

(2) http://www.anandtech.com/show/4479/amd-a83850-an-htpc-perspective/5

Here AnandTech compares image quality of AMD vs. Intel video mode deinterlacers, and comes to the conclusion: "Except for a certain segment in the noise response section, it looks like the AMD algorithm works much better than Intel's for the Cheese Slices test". Which is exactly what I claimed and which you called into question: AnandTech was using the Cheese Slices here to judge/compare quality of different video mode deinterlacing algorithms. What AnandTech should *really* have done is post the original progressive frame in addition to the Intel and AMD DXVA output, and then write the conclusion: "Both AMD and Intel fail to detect that the Cheese Slices are telecined film content".

I do not blame AnandTech (at all) for coming to wrong conclusions, because they thought what everyone else thought (including me), namely that the Cheese Slices were natively interlaced video. But it's not. And I believe it's important to let everyone know that fact.

Or why don't we look at the first post in this thread. There you write:

Quote:
First, don't get shocked - the pattern will NEVER look totally clean and "peaceful". I had to build in points (noise), lines and movements that cannot be handled by today's deinterlacers. And some that will be almost deinterlaced. But it is just in these parts that the different video processors and deinterlacing options show determinable effects.

And please do not try to compare the 1080i vs. 1080p Slices in order to get the right answer, because it's irrelevant: not even the fastest computer, video processor or GPU in the world will be able to deinterlace it into progressive!

That's all wrong. madVR can "deinterlace" (IVTC) the 1080i Slices into perfect progressive. Which proves that the very description you wrote is wrong and was misleading everyone into believing that these were natively interlaced test patterns. Because if they were natively interlaced, everything you wrote would be true.



Now I don't want to sound too negative. As I said before, the 1080i Cheese Slice test patterns are still useful. People just need to know what they really are and how to judge results. It is possible that DXVA output with truly interlaced video mode Cheese Slices wouldn't look much different from these telecined Cheese Slices, but it's hard to know for sure. The current DXVA output with the Cheese Slices could partially use IVTC. We can't know that because DXVA doesn't tell which algorithm it uses, and what makes it even more complicated is that DXVA might change the algorithm (IVTC vs. video mode deinterlacing) per pixel. So only natively interlaced Cheese Slices would allow us to make sure IVTC is definitely not involved and that we're really judging video mode deinterlacing quality. And then, really good deinterlacing algorithms would work better with natively interlaced content as opposed to telecined content.
post #243 of 283
Hmm, interesting arguments, and madshi is right. Cheese Slices was created from a 30fps *film* by 2-2 pulldown:

F1 F2 F3 .... (30fps) --> T1 B1 T2 B2 T3 B3 ...., where F1 = T1 + B1, etc. (F = frame, T = top field, B = bottom field)

(Some NTSC videos were created in this way, e.g. Friends and Oklahoma! (1955), according to wiki). So the true purpose of Cheese Slice is testing the graphics card's ability to detect 2-2 cadence. If the graphics card detects the 2-2 cadence correctly, then the original pattern will be restored perfectly. MadVR can inverse telecine it just fine when film-mode deinterlacing is forced, but can't detect the cadence automatically. None of the current graphics cards can detect it either under DXVA, so video-mode deinterlacing is always applied.
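For the curious, the inverse telecine can also be done by hand in AviSynth. A minimal sketch - the file name is only a placeholder, and both the assumed field order and the one-field Trim are guesses that may need flipping if the output still shows combing:
Code:
FFmpegSource2("Slices NTSC 1080i 29.97.ts")  # placeholder name for the original 1080i Slices
AssumeBFF()        # assumption: the bottom field of each frame is the temporally earlier one
SeparateFields()   # one half-height frame per field, in temporal order
Trim(1, 0)         # drop the orphan first field so that matching fields pair up again
Weave()            # every pair now comes from the same source picture -> progressive frames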

The correct way to create a test pattern for video-mode deinterlacing is to start with a 60fps sequence (= the reference pattern) and discard the top or bottom field of each frame (that should not be difficult for blaubart?):

F1 F2 F3 F4 F5 .... (60fps) --> T1 B2 T3 B4 T5 B6 ....

B1, T2, B3, T4, B5, T6, etc. are discarded and we want to see how close to the original the graphics card can create a 60fps pattern.
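As an AviSynth sketch of that recipe (the 59.94p reference clip name is just a placeholder), this is roughly what it would look like; the scripts posted a few posts below do the same thing, only starting from the 29.97p Slices instead of a 59.94p reference, which is why their result runs twice as fast:
Code:
FFmpegSource2("Slices reference 59.94p.mp4")  # placeholder: a 59.94 fps progressive reference
AssumeTFF()
SeparateFields()       # T1 B1 T2 B2 T3 B3 ...
SelectEvery(4, 0, 3)   # keep T1 B2 T3 B4 ... and discard B1, T2, B3, T4, ...
Weave()                # 29.97i frames whose two fields come from different moments in time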

Nevertheless, Cheese Slices is still useful (for me) to test video-mode deinterlacing quickly. It is Cheese Slices that I run first when something looks wrong.

Yeah, madshi said everything clearly. smile.gif
Edited by renethx - 1/19/13 at 6:38am
post #244 of 283
reneThx.

So, does anybody have enough avisynth knowledge to create natively interlaced video versions of the 1080i Cheese Slices (just h264 1080i60 would be enough, IMHO)? I think that would be really interesting. I'd love to see if the DXVA results would be any different. Maybe yes, maybe no, I'm not sure.

I think it would be ok if they ran twice as fast (that would be the easiest solution, I guess). Maybe a workaround would be to concatenate the Slices to make sure they have the same runtime despite the faster movements. I have to admit, I personally don't have enough knowledge/experience to do this without losing too much time.

(@renethx, JFYI: madVR currently has no automatic film vs. video detection built in, that's why you have to manually force film mode. I plan to add auto film vs video detection in a future version. Hopefully it will work better than what DXVA does...)
post #245 of 283
Code:
AssumeTFF()
SeparateFields()      # split each progressive frame into its two fields
SelectEvery(4, 0, 3)  # keep the top field of frame N and the bottom field of frame N+1
Weave()               # weave the kept fields back into interlaced frames

This should first split the progressive frames into their fields, then select the top field of the first frame and the bottom field of the second, and weave them together again.

If I can figure out how to properly encode something like this, maybe I'll do it. tongue.gif
post #246 of 283
Ah, the wonders of avisynth. I guess at some point I should actually learn it... tongue.gif
post #247 of 283
Here is the final script I used:
Code:
FFmpegSource2("D:\Encoding\Slices\Slices NTSC 1080p 29.97.ts")  # load the progressive 29.97p Slices
AssumeTFF()
SeparateFields()       # split each frame into its two fields
SelectEvery(4, 0, 3)   # keep the top field of frame N and the bottom field of frame N+1
Weave()                # each output frame now contains fields from two different pictures
AssumeFPS(30000,1001)  # the clip would otherwise be ~14.985 fps; forcing 29.97 fps makes motion run twice as fast
Loop(2)                # play the clip twice to restore the original running time

Encoded in MeGUI with x264 using the "very slow" preset and crf 18, and here is the result:
http://files.1f0.de/samples/Slices NTSC 1080i60 h264 TFF.mp4

As expected, it runs faster now, because half of the fields have been dropped and the FPS increased. On madshi's suggestion, I looped the video once to give it back its original playtime.
Let me know what you think of the result.
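A quick way to double-check that the new file really carries motion in every field (i.e. that it can't simply be inverse-telecined any more) is to bob it and step through the output. A minimal sketch, using the file name from above:
Code:
FFmpegSource2("Slices NTSC 1080i60 h264 TFF.mp4")
AssumeTFF()
Bob()   # one output frame per field; with true video content every single frame should show movement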
post #248 of 283
Quote:
Originally Posted by Nevcairiel View Post

Encoded in MeGUI with x264 using the "very slow" preset and crf 18, and here is the result:
 

I was trying to use EVR/DXVA via LAV Splitter but it won't decode properly.

It also didn't play natively on my Mac, but it does decode properly in MPlayer.

Is it because the encoding profile was High@L5.2?

post #249 of 283
Thread Starter 
Thank you guys for the great interest.

@ mashdi
Yeah, I remember this AnandTech article now, and I commented on it as "fantasy with wrong conclusions" long ago, but in German, so it's pointless to link to it?
There I wrote that AnandTech never understood how to read the Slices right, but I would say you never did either, mashdi, I would even say you never tried... All those extras like 'here and there a little bit more edgy' blabla are not what I wrote in Post#1 a long long time ago. Cheese Slices is nothing more and nothing less.

How do you think I came to do the test? How many absurd fantasies came up in 2009 about DXVA and VA, when only a few people had it? And don't you think that I compared everything I wrote against original broadcasts, to check whether it's really visible there? Long before the Slices I discovered missing VA in football broadcasts, and after a long time of unsuccessfully searching for good test patterns I decided to build my own to show the same effects better. Initially only for me and some friends.

Do you really think nobody over the years ever compared the 'Slices effects' to what they see on real TV, or to other patterns like W6RZ or Joe Kane, or while changing CCC's deinterlacing options? Just read this thread.
Anyway, here comes the newest garbage..


@ Nevcairiel

...you were faster, but your mp4 only seems to run in madVR?

Here's my work of today; it runs with any decoder and DXVA. If it passes everybody's checks I'll call it "Speedy IVTC_Slices".. tongue.gif
To slow it down I would have to completely redo the source pics, which would take weeks. But I will double it later.


Doubled version -> click*
Edited by blaubart - 1/26/13 at 9:51am
post #250 of 283
Well, I used x264's default settings, which apparently include 16 reference frames, which may cause issues on some crappy systems when using DXVA.
Works fine for me in LAV Splitter + LAV Video, renderer doesn't matter. smile.gif
post #251 of 283
Thanks, Nevcairiel, I appreciate that!

When frame-stepping through the DXVA output frames with the original telecined Cheese Slices, I've noticed something interesting: There are always two DXVA output frames with the same "motion position". But they look quite different. I guess this does make a lot of sense, because the telecined stream always has 2 fields from the same motion position. So I've taken screenshots of both DXVA output frames, when using the telecined Cheese Slices, for comparison with the new video mode Cheese Slices screenshots:

Progressive:
progressive

Vector Adaptive:
video -|- telecined 1 -|- telecined 2

Motion Adaptive:
video -|- telecined 1 -|- telecined 2

Adaptive:
video -|- telecined 1 -|- telecined 2

Bob:
video -|- telecined 1 -|- telecined 2

A few observations:

(1) It seems that with Vector Adaptive, Motion Adaptive and Bob deinterlacing, "video" results are quite similar to the *2nd* frame of the "telecined" results, while the *1st* frame of the "telecined" results is quite different. Practically this means when comparing video mode deinterlacing algorithm quality by using the "telecined" Cheese Slices, a lot depends on whether you choose to screenshot the 1st or the 2nd frame with identical motion position. This problem is gone when using the "video" test pattern.

(2) Generally, "video" results are quite similar to the "telecined" results, at least with my ATI card, which is probably good news because it means that the old Cheese Slice tests are still "somewhat" valid. It would be interesting to double check this with NVidia and Intel, just to be safe. The "video" results do show quite a bit more deinterlacing artifacts, though, compared to the "telecined" results, especially in the center section of the image.

(3) Very interesting is the 2nd output frame when using "Adaptive" deinterlacing with the telecined Cheese Slices. Here the 2nd output frame looks almost progressive! The 1st output frame looks horrible, though. So again the person making screenshots could randomly pick either the "good" or "bad" output frame, which is not good.
post #252 of 283
Quote:
Originally Posted by blaubart View Post

@ mashdi
[...] but I would say you never did either, mashdi, I would even say you never tried... All those extras like 'here and there a little bit more edgy' blabla are not what I wrote in Post#1 a long long time ago.

I'm not sure why you're so defensive about this. One thing is sure: The description in your first post is factually wrong, when talking about the original Cheese Slices (but correct when talking about the new encoding you made today). And I believe every single user thought that these test patterns were video mode. Nobody even had the idea it could be telecined content, simply because it makes no sense to create telecined test patterns to test video mode deinterlacing algorithms. It was a total accident that I discovered this. Btw, my nick is "madshi", not "mashdi".

Quote:
Originally Posted by blaubart View Post

Here's my work of today; it runs with any decoder and DXVA. If it passes everybody's checks I'll call it "Speedy IVTC_Slices".. tongue.gif
To slow it down I would have to completely redo the source pics, which would take weeks. But I will double it later.

Thanks. Looks good. Actually it looks better to me than nevcairiel's encoding because with nevcairiel's file I have some weird chroma artifacts that don't occur with yours. Guess x264 screwed up something there.

It would be great if you could rename the original 1080i Slices to something like "Telecined 2:2 Cheese Slices" and the new ones "Video Cheese Slices" or something like that, to make things clear to everybody. The SD Slices were always alright, I think.
post #253 of 283
I don't think interlaced encoding is a primary focus of the x264 developers, so who knows if it has some problems.
It was also my first time encoding interlaced, tbh. smile.gif
post #254 of 283
Thread Starter 
Before you, I had never realized anything like telecine, believe me! tongue.gif

Comparing the new Speedy Slices to the telecined ones on DXVA, they show exactly the same effects when changing CCC's deinterlacing options as described in Post#1.

So please, would you point out visible differences in concrete tests? I am really interested now.
If something is missing in the pattern - I have some options here to alter the encoding.
post #255 of 283
Quote:
Originally Posted by Nevcairiel View Post

Well i used x264s default settings, which apparently include 16 reference frames, which may cause issues on some crappy systems when using DXVA.
Works fine for me in LAV Splitter + LAV Video, renderer doesn't matter. smile.gif

"Crappy DXVA system 1" = AMD HD4600

"Crappy DXVA system 2" = Intel HD4000

post #256 of 283
Quote:
Originally Posted by tsanga View Post

"Crappy DXVA system 1" = AMD HD4600
"Crappy DXVA system 2" = Intel HD4000

The first is clearly crappy, I don't think the old 4xxx series could do 16 ref frames.
The second's hardware is capable, you just need good software, like my decoders. I have an HD4000 myself and it works flawlessly on that one, and on an HD3000 too.

But this is going OT now.
post #257 of 283
Ha, when using the new interlaced video, the diagonal lines in the centre of the test image turn to a chequerboard mush on my HD4000. The rest of the image looks roughly the same as in the old IVTC test video.
post #258 of 283
Thread Starter 
Quote:
Originally Posted by blaubart View Post

Comparing the new Speedy Slices to the telecined ones on DXVA, they show exactly the same effects when changing CCC's deinterlacing options as described in Post#1.

That was too short.. my poor English, you know..

Before doing the Slices I had to test over and over which line form, in which thickness, relative position and angle, causes the strongest missing-VA reaction when moving at which angle and speed. Each one of these had an enormous influence on how a line looks while moving! So changing just one of them, like the speed in Speedy Slices, will of course cause relatively heavy differences. The right interpretation is really not easy, and I will try to show some effects here.

First, read the "Deinterlacing Response" part in Post#1 again. Due to this VA 'weakness', AMD and Nvidia both show movement shadows on some line parts in the Slices. The faster the lines move, the longer the shadows. Here are two screenshots to show what happens when the moving speed is doubled:

-> The lines in the following pics are very tiny, so it's important to view them at original size. The browser must not be zoomed in any direction. In Firefox and Internet Explorer just press Ctrl + 0 (zero).
-> If after Ctrl+0 you still see waves instead of 1-pixel lines, just click on the pictures. They should then open at original size.






"shadow narrower" - because in this case the deinterlacer has less time to calculate. The other shadows are more (or less) speed-delayed atefacts.


The most important thing is the reference part (the green sports field), which you should always keep an eye on: whatever may happen in the pattern, as long as these lines are clean you will see almost none of the other effects in 'normal' 1080i video!



That's the whole 'secret' that lets me stay cool while other testers hit the ceiling (<- dictionary). I've mentioned that often enough, but nobody is listening?



For me a more interesting part - what does Intel do? Wow, it seems they created something like VA XL? Watch this (HD 3000):





AnandTech didn't notice - nearly no weak "Deinterlacing Response" at all! Only the 'Fluxus' (round lining) shows a narrower (deinterlaced) shadow with Speedy Slices. Also, these shadows are nearly static, not bouncing around as on AMD + Nvidia. Nevertheless, whether that has any visible effect in 'average' 1080i video may again be doubted wink.gif but fantastic deinterlacing!

What's happening at the white arrows is most likely also speed-related.

It is night here and I'm tired now; more another time.. Don't believe anything the Slices tell you until you have tried them a thousand times on different setups! Then maybe you get a bit familiar with them. You know what? It's just a piece of really heavy deinterlacing.. tongue.gif
.
Edited by blaubart - 2/10/13 at 7:42am
post #259 of 283
Thread Starter 
Quote:
Originally Posted by renethx View Post

Hmm, interesting arguments, and madshi is right. Cheese Slices was created from a 30fps *film* by 2-2 pulldown:

F1 F2 F3 .... (30fps) --> T1 B1 T2 B2 T3 B3 ...., where F1 = T1 + B1, etc. (F = frame, T = top field, B = bottom field)

No, Cheese Slices was created from pictures which I drew, every single one, before letting TMPGEnc render them to video.

So how about this:
- In TMPGEnc I chose nothing like 2-2 pulldown or telecine - just NTSC interlaced.
- TMPGEnc is not interested in the content of the pictures at all; I could draw Mickey Mouse into the 2nd pic and TMPGEnc would still keep on deleting the even and odd lines of the pictures, alternating, and render them to interlaced 1920*1080 video.

I must confess I didn't care enough about those things 3 years ago; the surrounding things to take care of were just too complex, and moreover I would have had to paint twice as many pictures!
And mind you - pictures have no frequency (frames per second) or field order at all, they are just pictures. The encoder is completely free to decide what to do with them.

What TMPGEnc did in Cheese Slices was
- delete the 2-4-6... lines from the first pic and put the rest in frame one
- then delete the 1-3-5... lines from the first pic and put the rest in frame two
- then delete the 2-4-6... lines from the 2nd pic and put the rest in frame three
- ... and so on

In Speedy Slices I simply told TMPGEnc to treat the 874 pics as a sequence:

- delete the 2-4-6... lines from the first pic and put the rest in frame one
- then delete the 1-3-5... lines from the 2nd pic and put the rest in frame two
- then delete the 2-4-6... lines from the 3rd pic and put the rest in frame three
- ... and so on
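The two recipes can be mimicked in AviSynth, purely as an illustration, with the 1080p Slices standing in for the source pictures. The first recipe keeps both fields of every picture (here repackaged with a one-field shift, which is the layout madshi and dr1394 found in the original Cheese Slices); the second recipe, which keeps only one field per picture, is essentially Nevcairiel's script from a few posts above:
Code:
FFmpegSource2("Slices NTSC 1080p 29.97.ts")  # stand-in for the 874 source pictures
AssumeTFF()
SeparateFields()   # T1 B1 T2 B2 T3 B3 ... - both fields of every picture survive (recipe 1)
Trim(1, 0)         # shift the field stream by one field before repackaging...
Weave()            # ...so every frame holds B(n) + T(n+1), i.e. the layout seen in the original Cheese Slices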

So if 'telecined' is some kind of flag the video has to contain - in order to tell the graphics card how to handle the video - I doubt that TMPGEnc wrote that flag, because the source was not video but only pictures.

Only if it's exclusively the content of the lines that lets a graphics card decide whether it's telecined or not could what madshi said take effect. Is that so?
And even then - is it really certain that the lines TMPGEnc cuts out of single pictures and renders anew for each frame contain something a graphics card can match 'as usual' against the other frames in order to detect telecine or not?

Anyhow, hundreds of different screenshots from different AMD graphics card series, and even Nvidia, have shown quasi-identical results over the last years (-> even with madVR - on the faster cards wink.gif)...
Considering now that Speedy Slices shows no new effects (except speed-caused ones) using DXVA + EVR (also madVR) - isn't it time to give the 'all-clear signal' (<- dictionary) to those who tested with Cheese Slices in the past?


@ madshi

How about telling the madVR people simply NOT to push the 'film mode forced on' button, and then everything works right?

Nevertheless, I will soon have time to finish the long version of Speedy Slices.
Edited by blaubart - 1/21/13 at 9:41pm
post #260 of 283
The pattern looks interlaced to me. Here's the upper left hand corner of frame 140.

cheese.png

Ron
post #261 of 283
@Ron, which pattern do you mean? The *original* Cheese Slices? It's 2:2 telecined, but every frame contains the bottom field of frame A and the top field of frame A + 1.
post #262 of 283
Quote:
Originally Posted by madshi View Post

@Ron, which pattern do you mean? The *original* Cheese Slices? It's 2:2 telecined, but every frame contains the bottom field of frame A and the top field of frame A + 1.

I'm looking at the 1080i@29.97 version from the first post of this thread. I think I see what you're getting at. Here's a pic of two consecutive frames (first frame on top with red line at the bottom and second frame below and motion to the left).

cheese1.png

If you weave the bottom field from frame A with the top field from frame B, you'll get a solid vertical line. I don't think that's a 2:2 pulldown issue, but more an incorrect depiction of constant-velocity motion. Since the line moves 2 pixels between the top field and the bottom field, the line in same-parity fields should move by 4 pixels (not 2 pixels as shown).

In other words the field motion velocity is twice the frame motion velocity (I think, this is one of those things that makes my head hurt).

Ron
post #263 of 283
I'm not sure I understand what you said. In any case, this definitely is 2:2 telecine. I have a PAL DVD movie sample which is encoded exactly the same way. Basically if you weave the correct fields together (= IVTC) you get perfect progressive output.
post #264 of 283
Thread Starter 
dr1394 did not weave with madVR's 'film mode forced on' activated - but did just any kind of 'normal' weaving - like setting CCC (AMD graphics card) to 'weave' and playing Cheese Slices with:

- any decoder + madVR's 'film mode forced on' deactivated (if the computer is fast enough)
- any decoder + EVR

See the "weave" screenshot in post#1
.
Edited by blaubart - 1/22/13 at 12:43pm
post #265 of 283
Thread Starter 
Correction: madVR's progressive output on AMD graphics works solely with

LAV Video Decoder -> set to DXVA2 (copy-back) + madVR's 'film mode forced on' activated

It's enough to just set LAV to DXVA (native), or to use any other video decoder + madVR's 'film mode forced on' activated = no more progressive output!
post #266 of 283
Quote:
Originally Posted by madshi View Post

I'm not sure I understand what you said. In any case, this definitely is 2:2 telecine. I have a PAL DVD movie sample which is encoded exactly the same way. Basically if you weave the correct fields together (= IVTC) you get perfect progressive output.

With PAL movies, the correct fields to weave are always in the same frame. If we weave the two fields of a frame with motion in "Slices", we get the first image I posted. It has motion from field to field, therefore it is interlaced.

BTW, I'm using the ancient (1996) MSSG reference decoder. It just outputs to files, so there is no graphics system involved. The decoder has an option to weave the two fields of a frame into one full size image, which is what I'm using.

http://www.mpeg.org/pub_ftp/mpeg/mssg/mpeg2v12.zip

Ron
post #267 of 283
Here's a sequence of 4 fields (temporal order is top to bottom). The first two fields are from frame A and the second two fields are from frame B. There is motion from field 1 to field 2, but there is no motion from field 2 to field 3. We're (madshi and I) on the same page, we're just using different semantics. Madshi wants to invoke 2:2 pulldown (which I consider confusing and which doesn't help blaubart fix the issue), but I say it's just an incorrect depiction of smooth (constant-velocity) motion.

field1.png
field2.png
field3.png
field4.png

Ron
post #268 of 283
Quote:
Originally Posted by dr1394 View Post

With PAL movies, the correct fields to weave are always in the same frame.

Is there a specification which defines that? Or an EBU recommendation or something? Or is that just your personal experience?

In my experience at least 99% of all PAL content has the correct fields to weave in the same frame. But to every rule there is an exception. As I said, I have a sample of a PAL DVD where the correct fields to weave together are *not* in the same frame, similar to the original Cheese Slices. Also, an IVTC algorithm which claims to support "any cadence" (e.g. for Anime sources) has to be very flexible, and should have no problem finding the correct fields to weave, even if they're spread over two frames, like with the original Cheese Slices. With hard-telecined 3:2 content it happens all the time that the correct fields to weave together are not in the same frame. So supporting this for PAL, too, is an automatic side effect of an "any cadence" IVTC algorithm.
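For what it's worth, on the AviSynth side this kind of cross-frame field matching is what the TIVTC plugin's TFM filter does. Shown here only as a sketch of the concept, assuming the plugin is installed and using a placeholder file name - it is of course a separate tool, not madVR's internal IVTC:
Code:
LoadPlugin("TIVTC.dll")                      # assumed plugin location
FFmpegSource2("Slices NTSC 1080i 29.97.ts")  # placeholder name for the original 1080i Slices
AssumeTFF()
TFM(order=1)   # field matching: tests fields from the previous/current/next frame and weaves the best pair
# no TDecimate() afterwards - with 2:2 material the frame count is already correct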
Edited by madshi - 1/23/13 at 1:08am
post #269 of 283
Thread Starter 
It's not so much about the technical details, but madshi is trying to tell us every test with Cheese Slices in the past was crap! Without having any idea what he's talking about! He didn't really try it at all before shouting out of his divine madVR corner... I believe 'telecine' or whatever he says, no problem, but *hello?* Cheese Slices worked how it should for years on hundreds of graphics cards! I could not see any exceptions there!
Theory vs. practice, you know - madshi has not brought a single solid example where the telecining has a visible disadvantage in Cheese Slices! He's only dreaming.. smilie_les_069.gif And he's not dreaming friendly...

Technically I know exactly what he wants; I already did it in Speedy Slices. No considerable difference to Cheese Slices. At the moment I'm playing with a short, 1 pixel slower, 'every-frame-changing' Slices; again no differences worth dreaming of.. Will upload later.
post #270 of 283
Quote:
Originally Posted by blaubart View Post

Correction: madVR's progressive output on AMD graphics works solely with

LAV Video Decoder -> set to DXVA2 (copy-back) + madVR's 'film mode forced on' activated

It's enough to just set LAV to DXVA (native), or to use any other video decoder + madVR's 'film mode forced on' activated = no more progressive output!

That is incorrect. It works with any *software* decoder. It currently just doesn't work with native DXVA decoding, as I already told you before.

Quote:
Originally Posted by blaubart View Post

madshi is trying to tell us every test with Cheese Slices in the past was crap!

That is not true. I said that the original Cheese Slices should not be used to judge video mode deinterlacing quality, but that they might still be useful for other purposes. You actually agreed with me there. And you said nobody ever did use the Cheese Slices to judge video mode deinterlacing quality, anyway. Then I pointed you to AnandTech doing it in the past. Then you said, ooops, now you remember, but you always said they used the Cheese Slices wrong.

Furthermore after having compared telecined and video Cheese Slices DXVA deinterlacing results, I wrote:

> Generally, "video" results are quite similar to the "telecined" results, at least
> with my ATI card, which is probably good news because it means that the
> old Cheese Slice tests are still "somewhat" valid.

Which directly contradicts what you just said.

Quote:
Originally Posted by blaubart View Post

Theory vs. practice, you know - madshi has not brought a single solid example where the telecining has a visible disadvantage in Cheese Slices! He's only dreaming..

Am I? How about this:

http://madshi.net/cheese/filmVA2.png
http://madshi.net/cheese/filmA2.png

The first is Vector Adaptive Deinterlacing, the second is just Adaptive Deinterlacing (which in theory is much lower quality). In these screenshots it looks like Adaptive Deinterlacing is better than Vector Adaptive Deinterlacing. But that's not actually true in real life. With the telecined Cheese Slices, Adaptive Deinterlacing produces one good output frame, then one bad output frame etc. If you look at the good frames, they look better than Vector Adaptive Deinterlacing output, but the bad frames look worse. This is one negative side effect of the telecined encoding. This doesn't happen with proper video encoding.

I've already mentioned this before, but it seems you missed it. Or chose to ignore it...

Quote:
Originally Posted by blaubart View Post

He didn't really try it at all before shouting out of his divine madVR corner...

I had been using Cheese Slices several times in the past, they've been in my "test patterns" folder for years. I've not the faintest idea how you came to the conclusion that I didn't try them at all. But then that's only one of many wrong conclusions you came to recently...


Anyway, I'm done replying to you, until you do your homework and learn the deinterlacing & IVTC basics (you really have some catching up to do there) and until you come out of your defensive corner and argue based on facts, instead of posting false claims about my intentions and making fun of madVR.
Edited by madshi - 1/23/13 at 1:30am