
Nvidia GeForce GTX 460 (GF104 GPU) supports full audio bitstreaming - Page 52

post #1531 of 1812
Quote:
Originally Posted by madshi View Post

non-BD content? What kind of content and playback situation do you mean exactly? Depending on the media player and renderer implementation, sometimes pixel shader power is needed for scaling or for calibration / color correction or for post processing. But this is all so very much dependent on what setups and settings users are using. Because of that it's IMHO almost impossible to make general judgements about which GPU power is needed for "video playback". There's simply too much variation. I think maybe you could prepare 3 different typical usage scenarios (one very simple with straight playback, one with deinterlacing + scaling etc, and maybe one with lots of additional post processing), and then test which GPUs can handle which. That's the only thing that comes to my mind right now. Don't know if that helps, just brainstorming here...

The most challenging clips for HTPC GPUs are from AVCHD camcorders (1080p60). I was just talking to Andrew about a Blu-Ray M2TS file being played back with DXVA on a 6570. I enabled the MPC-HC denoise shader. The file (1080p24 VC-1) continued playback normally at the expected frame rate. The same GPU was also able to decode a 1080p60 H264 clip using DXVA, but turning on the denoise shader made a slideshow out of the video file.

I am wondering if there is any way to get an idea of the UVD engine usage (for ATI cards) and VPU load (I think GPU-Z provides it for NVIDIA cards) / current GPU load for post-processing. With a set of standard test clips (say, from the HQV benchmark Blu-ray), we could enable various shaders in MPC-HC and get an idea of how much the GPU gets loaded. Again, not sure if this makes a lot of sense, as I am not even sure the MPC-HC shaders use the GPU for performing their 'magic'.
post #1532 of 1812
Quote:
Originally Posted by madshi View Post

non-BD content? What kind of content and playback situation do you mean exactly?

The most obvious differences I've seen b/w GPUs are with recorded TV content.

Quote:
Originally Posted by madshi View Post

Depending on the media player and renderer implementation, sometimes pixel shader power is needed for scaling or for calibration / color correction or for post processing. But this is all so very much dependent on what setups and settings users are using. Because of that it's IMHO almost impossible to make general judgements about which GPU power is needed for "video playback". There's simply too much variation. I think maybe you could prepare 3 different typical usage scenarios (one very simple with straight playback, one with deinterlacing + scaling etc, and maybe one with lots of additional post processing), and then test which GPUs can handle which.

AFAIK, the most common applications for consuming recorded TV content (and non-Blu-ray disc for that matter) use the EVR w/ DXVA so that would be a good starting point, but I think a custom mixer/presenter might be required to really get at the GPU's capabilities.

Quote:
Originally Posted by madshi View Post

That's the only thing that comes to my mind right now. Don't know if that helps, just brainstorming here...

It does, thanks. It would probably help if I were better able to define the problem.
post #1533 of 1812
Quote:
Originally Posted by Andy o View Post

BTW thanks for linking to that interpolation filter. Just tried it, and I'm getting dropped frames with madVR and my C2Q Q9450 and 5770. It needs power indeed. Had to disable ReClock's resampling and still doesn't run completely smooth. I can overclock from 2.66 to 3.2 GHz without much of a power hit, but yeah, that's one example for why you would need a powerful setup. Modern CPU/GPUs are great at handling efficiency though, so even a more powerful card doesn't draw much power if you're not hitting it hard.

I would say the minimum is Core i5-2500K (overclocked).
post #1534 of 1812
Although, for a totally smooth experience, you still need a display that can accept a refresh rate that is a multiple of 24. Not many TVs can take 48Hz or 72Hz input, I think. Maybe projectors? I guess the advantage of "120Hz" TVs is that they take the 24p input and process it internally. For actual 120Hz LCDs, I'm guessing much more processing power is required, which would justify a sweet hexa-core Sandy Bridge for an HTPC.

Anyway, this was only a test for me, I don't have a lot of interest for this feature, but I don't hate it either.
post #1535 of 1812
There are lots of choices: 24 fps to 30/48/60/72 (48 fps at a 60Hz desktop refresh rate is generally good), with all sorts of minor changes (greatly affecting CPU usage/quality) in the AviSynth script. Some scripts work fine on an older quad-core. Have fun.
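For anyone who hasn't looked inside one of these scripts, below is a minimal sketch of the usual MVTools2-based interpolation chain (the same technique SVP builds on). The MSuper/MAnalyse/MFlowFps calls are standard MVTools2; the source filter and file name are only placeholders, not anyone's actual setup.

Code:
# Hypothetical 24p -> ~60p interpolation, minimal MVTools2 chain
LoadPlugin("mvtools2.dll")                    # assumes MVTools2 is installed
FFVideoSource("movie.mkv")                    # placeholder source filter (FFMS2) and file
AssumeFPS(24000, 1001)                        # treat the source as 23.976 fps
super = MSuper(pel=2)                         # multi-resolution clip for motion search
bw    = MAnalyse(super, isb=true)             # backward motion vectors
fw    = MAnalyse(super, isb=false)            # forward motion vectors
MFlowFps(super, bw, fw, num=60000, den=1001)  # synthesize intermediate frames -> 59.94 fps

Changing num/den gives the 48 or 72 fps variants mentioned above; the motion-search parameters (block size, pel, and so on) are where most of the CPU cost and quality differences come from.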
post #1536 of 1812
Dunno, I tried 48Hz (doubling) played at 60Hz and the slight stutter is still visible when panning. What does the script do when setting it to 24->60? It can't be much different, but I didn't have enough power for that to run smoothly, even overclocked at 3.2 GHz.
post #1537 of 1812
24 -> 60 requires a huge amount of calculation, like this:

1 x x x x 3 x x x x 5 x x x x ...

1, 2, 3, etc. are the first, second, third frames of the original 24 fps stream. "x" is an interpolated frame. 24 -> 48/72 at 60Hz is a lot easier.

24 -> 48 at 60Hz:

1 1 x 2 x 3 3 x 4 x ...

24 -> 72 at 60Hz:

1 x x 2 x 3 x x 4 x ...

post #1538 of 1812
I see. I'll see if some more OC'ing gives me better results.
post #1539 of 1812
Quote:
Originally Posted by Andy o View Post

Dunno, I tried 48Hz (doubling) played at 60Hz and the slight stutter is still visible when panning. What does the script do when setting it to 24->60? It can't be much different, but I didn't have enough power for that to run smoothly, even overclocked at 3.2 GHz.

I have a Q9650@3GHz and it struggles with 24p->60. The Core i7 is much better.
I have mine running at 4GHz.

25p->60 @ 1080 is the one that makes the i7 hit the wall. I use SVP + madVR + CoreAVC.
The i7 setup is still CPU bound. There is also a pair of GTX 460s in the machine.
CUDA and OpenCL divide the work between the GPUs quite nicely.
GPU-Z tells me that the load on each of the GPUs seldom exceeds 40% doing the 25p->60 conversion.

24p->60 @1080 runs the i7 at around 60% CPU and 30% GPU. There is still a lot of
headroom left in GPU processing. The SVP options are set to perform full
frame interpolation. With SVP it is quite easy to see if interpolated frames
are being dropped: the image shimmers before stutter sets in, and then
it finally degrades to the kind of image you see on a motion-flow TV (ugh).
post #1540 of 1812
Tong Chia,

Have you tried LAV CUVID + ffdshow Raw (for SVP) + madVR? I have a color space conversion issue.
post #1541 of 1812
Quote:
Originally Posted by renethx View Post

Tong Chia,

Have you tried LAV CUVID + ffdshow Raw (for SVP) + madVR? I have a color space conversion issue.

Yes, no go due to color space incompatibilities.

LAVCUVID outputs YV12. SVP wants NV12. The workaround is ffdshow
post-processing, but the cure is worse than the disease and eats up all
the CPU headroom on the i7 (24p->60 with ffdshow post-processing
is about 80% CPU and drops frames).

If LAVCUVID is used without ffdshow post-processing, 24p->60 conversion CPU use drops
to about 50%, but the output colors are messed up.
post #1542 of 1812
I got smooth 1080p 24->60 at 3GHz by switching from madVR to another renderer like Haali or EVR-Sync. Even doing high-quality RGB conversion w/dithering with ffdshow and using the ffdshow decoder, I got smooth results.
post #1543 of 1812
Quote:
Originally Posted by babgvant View Post

besides visual inspection, do you know of a way to quantify if a gpu has enough processing power?

Get LAV CUVID, run a 60i Movie with Adaptive Deinterlacing, Double Frame Rate mode, and DXVA Interop active, then run the whole thing on madVR with some typical scaling options.

The decoding engines in all recent cards are the same, but the deinterlacing performance comes down to raw 3D power, as do the pixel shaders used in madVR.

Some Benchmarks from LAV CUVID development (pure decoding, no rendering, using the settings mentioned above)
GT 240: AVC1: 57fps, WVC1: 59fps
GTS 450: AVC1: 109fps, WVC1: 121fps

I don't have measurements for any low-end 4xx/5xx series card, but if you throw madVR into the mix, using up more GPU power, I would guess you need at least 70+ fps output from LAV CUVID to still get smooth playback at 60fps.

And like madshi pointed out, it's better to buy with some overhead for the future; you never know what will come!

Quote:
Originally Posted by Tong Chia View Post

LAVCUVID outputs YV12

LAV CUVID's native output format is NV12; YV12 output was just recently added to work with DirectVobSub, which does not support NV12.
It can still output both, and will always prefer NV12, as that does not require conversion.
post #1544 of 1812
Quote:
Originally Posted by Nevcairiel View Post


LAV CUVID's native output format is NV12; YV12 output was just recently added to work with DirectVobSub, which does not support NV12.
It can still output both, and will always prefer NV12, as that does not require conversion.

I am using v0.6 posted on Doom9. Is there a newer version?

As for the color space problems, is there a way to force NV12 with
LAVCUVID? The 20% lower CPU demand vs CoreAVC is very appealing if I can get the
color space sorted out.
post #1545 of 1812
Quote:
Originally Posted by Andy o View Post

I got smooth 1080p 24->60 at 3GHz by switching from madVR to another renderer like Haali or EVR-Sync. Even doing high-quality RGB conversion w/dithering with ffdshow and using the ffdshow decoder, I got smooth results.

I see the same with the EVR custom presenter. madVR's image
quality is in a class by itself.
post #1546 of 1812
Quote:
Originally Posted by Tong Chia View Post

I am using v0.6 posted on Doom9. Is there a newer version?

0.6 is still the latest publicly available.

Quote:
Originally Posted by Tong Chia View Post

As for the color space problems, is there a way to force NV12 with
LAVCUVID? The 20% lower CPU demand vs CoreAVC is very appealing if I can get the
color space sorted out.

It will output whatever format the downstream filter will request. It exposes two media types, the first with NV12, the second with YV12. If you connect with NV12, it'll output NV12.

There is no way to "force" it otherwise. The next version might have an option to disable pixel formats, so you could disable YV12 if you never want it.

There might be something else going wrong. Is there an easy way I can reproduce the problem you're seeing?
post #1547 of 1812
Quote:
Originally Posted by renethx View Post

Tong Chia,

Have you tried LAV CUVID + ffdshow Raw (for SVP) + madVR? I have a color space conversion issue.

If SVP is off, there is no issue. If SVP is on, there is an issue. It's not just SVP, the issue appears with every AviSynth script.
post #1548 of 1812
Quote:
Originally Posted by Nevcairiel View Post

It will output whatever format the downstream filter will request. It exposes two media types, the first with NV12, the second with YV12. If you connect with NV12, it'll output NV12.

There is no way to "force" it otherwise. The next version might have an option to disable pixel formats, so you could disable YV12 if you never want it.

Something is getting lost in translation if LAVCUVID is sending NV12
to the downstream filter. A quirk in the ffdshow raw filter perhaps?
It plays nice with CoreAVC and the ffdshow video decoder.

LAVCUVID is one of the best video decoders in recent times.
The MPEG-4/XVID and high quality MPEG-2 decode capability is much appreciated.
post #1549 of 1812
Quote:
Originally Posted by Nevcairiel View Post

There might be something else going wrong. Is there an easy way I can reproduce the problem you're seeing?

This is appreciated.

Install SVP and select the ffdshow raw filter as a preferred filter in MPC-HC.

Play back video in MPC-HC with SVP running in the background. I have seen this with H.264, Xvid, and MPEG-2 files.

The colors are garbled (blue faces, for example). I see this with madVR and EVR.

The culprit might be the ffdshow raw filter; it sometimes causes this behavior
even without SVP, for example when I enable frame doubling in LAVCUVID's
control panel. I have only tested this with madVR.

Disabling the ffdshow raw filter eliminates the problem.

Versions:
ffdshow: ffdshow_rev3828_20110426_clsid_icl10
MPC-HC: mpc-homecinema.1.5.2.3058.x86.msvc2010
madVR: v0.61
LAVCUVID: v0.6
SVP: SVP_3.0.2
post #1550 of 1812
Quote:
Originally Posted by renethx View Post

If SVP is off, there is no issue. If SVP is on, there is an issue. It's not just SVP, the issue appears with every AviSynth script.

For SVP to work reliably, I had to set the ffdshow raw filter as a preferred
filter in MPC-HC. I see the garbled colors (blue or purple faces, for example) with LAVCUVID even without SVP.

Disabling the ffdshow raw filter in MPC-HC makes the problem go away.
post #1551 of 1812
Quote:
Originally Posted by babgvant View Post

besides visual inspection, do you know of a way to quantify if a gpu has enough processing power?

I'm not sure if this is really what you are looking for, but I found this link:

http://www.nvidia.com/object/graphic...s_buy_now.html

It's on NVIDIA's site and ranks their most recent cards by computing power.
post #1552 of 1812
Quote:
Originally Posted by Nevcairiel View Post

Get LAV CUVID, run a 60i Movie with Adaptive Deinterlacing, Double Frame Rate mode, and DXVA Interop active, then run the whole thing on madVR with some typical scaling options.

The decoding engines in all recent cards are the same, but the deinterlacing performance comes down to raw 3D power, as do the pixel shaders used in madVR.

Some Benchmarks from LAV CUVID development (pure decoding, no rendering, using the settings mentioned above)
GT 240: AVC1: 57fps, WVC1: 59fps
GTS 450: AVC1: 109fps, WVC1: 121fps

I don't have measurements for any low-end 4xx/5xx series card, but if you throw madVR into the mix, using up more GPU power, I would guess you need at least 70+ fps output from LAV CUVID to still get smooth playback at 60fps.

What are you using to measure FPS? Also, why do you want to double the frame rate?

I have two concerns with using an approach like this:

1) Because of the niche nature of the decode/render steps the results are academic. Most users rely on DXVA decode and GPU DI (not sure if this is different) exposed through an EVR interface for interlaced content playback.

2) Can only be used to rank NVIDIA's cards

Quote:
Originally Posted by Nevcairiel View Post

And like madshi pointed out, it's better to buy with some overhead for the future; you never know what will come!

Personally, I prefer a "buy middle, replace often" approach.

My interest here is more to help a wider audience quantify the costs/benefits of selecting a specific GPU and of moving up within a GPU family. For example, using my current methodology I can't demonstrate a benefit in selecting a GTS 450 over a GT 430 for "mainstream" video playback, whereas there is a clear set of costs in price, size, heat, and power use.
post #1553 of 1812
Quote:
Originally Posted by babgvant View Post

What are you using to measure FPS? Also, why do you want to double the frame rate?

GraphStudio in this case.
"Double Framerate" is a bit mis-named, in this case it just means that a 60i video produces 60p (proper video deinterlacing), and not 30p.
If the source is video material, and not film, you'll always want a 60p output for smooth movement. It doesn't actually "double" the frames, it deinterlaces the fields separately instead of only producing one frame out of both fields.
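To make the single-rate vs. double-rate distinction concrete, here is a rough software analogue in AviSynth terms (only a hedged sketch; LAV CUVID does this in hardware, and the source filter and file name are placeholders):

Code:
# 60i source: 29.97 interlaced frames, i.e. 59.94 fields, per second
MPEG2Source("recording.d2v")   # placeholder source filter (DGDecode)
AssumeTFF()                    # declare top-field-first; adjust to match the source
# Single-rate deinterlacing: one output frame per field pair -> 29.97 fps
# Yadif(mode=0)
# Double-rate ("Double Framerate") deinterlacing: one output frame per field -> 59.94 fps
Yadif(mode=1)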

Quote:
Originally Posted by babgvant View Post

I have two concerns with using an approach like this:

1) Because of the niche nature of the decode/render steps the results are academic. Most users rely on DXVA decode and GPU DI (not sure if this is different) exposed through an EVR interface for interlaced content playback.

2) Can only be used to rank NVIDIA's cards

Its not meant to be a "generic" approach. You asked how to quantify the performance of the cards, and thats how i would do it. Thats my playback setup, and i need a card with enough performance to use it. (and i can highly recommend it, you won't get any more quality out of a HTPC any other way).
madVR alone is fine for progressive content, but with interlaced content, nothing can beat hardware deinterlacing, and only way to get it with NVIDIA and madVR is my decoder (and CoreAVC, but thats for H264 only) - that i know of.
AFAIK, its not possible at all right now with ATI, unless the Cyberlink HAM decoder does HW deinterlacing, but i don't believe it does. (unsure, however). So you're either stuck with EVR if you want DXVA+HW Deint, or with software decoding+yadif+madVR.

Also, this is a NVIDIA thread, and i will always use NVIDIA myself (seeing how i also developed that decoder, would be silly to go ATI), so i don't really care how to compare an ATI card to this.

Btw, when letting EVR do the deinterlacing, it'll automatically reduce deinterlacing quality if it notices that the hardware is too slow. I'm not even sure however if it actually produces 60p out of 60i, thats only really detectable with visual inspection when running through EVR. If it doesn't do it, it needs far less performance, as most cards can produce 30fps..
post #1554 of 1812
Quote:
Originally Posted by Nevcairiel View Post

GraphStudio in this case.
"Double Framerate" is a bit mis-named, in this case it just means that a 60i video produces 60p (proper video deinterlacing), and not 30p.
If the source is video material, and not film, you'll always want a 60p output for smooth movement. It doesn't actually "double" the frames, it deinterlaces the field separately instead of only producing one frame out of both fields.

Maybe I'm misunderstanding something, but doesn't 60i only contain 30 FPS (2 fields = 1 frame) worth of information?

Quote:
Originally Posted by Nevcairiel View Post

Its not meant to be a "generic" approach. You asked how to quantify the performance of the cards, and thats how i would do it. Thats my playback setup, and i need a card with enough performance to use it. (and i can highly recommend it, you won't get any more quality out of a HTPC any other way)

Yes, but the approach requires serious trade-offs in usability. If the differences in GPUs can only be demonstrated by selecting this method, it might not be a practical data point outside of its context. We could just tell everyone to buy a Lumagen and call it...

Quote:
Originally Posted by Nevcairiel View Post

Also, this is an NVIDIA thread

Yes, but we can't pretend that other solutions don't exist. The purpose of threads like this shouldn't be to proselytize; they should explain the "Ws" for a specific solution, which for completeness should at the very least acknowledge the existence of alternatives and help make the component selection process easier.

Quote:
Originally Posted by Nevcairiel View Post

and I will always use NVIDIA myself (seeing how I also developed that decoder, it would be silly to go ATI), so I don't really care how to compare an ATI card to this.

I understand that our goals don't align exactly in this area, but at some point you must have decided to use an NVIDIA card over another option for some reason.
post #1555 of 1812
Quote:
Originally Posted by babgvant View Post

Maybe I'm misunderstanding something, but doesn't 60i only contain 30 FPS (2 fields = 1 frame) worth of information?

Video content contains 60 fields, which were initially constructed from 60 individual frames (or directly filmed as 60 fields). You will want to re-create those 60 frames.
The fields are from individual points in time; you cannot recombine two to make up one frame. The "Weave" deinterlacing algorithm does exactly that, and as you will see, any movement will produce nasty artifacts.

The proper way is to deinterlace those 60 fields into 60 individual frames, reproducing the original 60 frames as closely as possible.

Now, "film" content is filmed at 24p and may then be telecined into 60 fields. This is not interlacing, but when you play it back without proper processing it looks pretty similar. The difference is that you can reproduce the original content 100% perfectly from telecined content; that's why you want it to produce 24 frames out of the 60 fields.
Sadly, LAV CUVID at this time can only produce 30 frames out of it, still containing the duplicates from the telecining process.
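For comparison, proper inverse telecine looks roughly like this as an AviSynth sketch (assumes the TIVTC plugin; this is the software route, not something LAV CUVID does today, and the file name is a placeholder):

Code:
# 29.97 fps hard-telecined film source (3:2 pulldown already applied)
MPEG2Source("telecined_film.d2v")   # placeholder source filter (DGDecode)
TFM()        # field matching: reassembles the original progressive frames from the field pattern
TDecimate()  # drops the duplicate frame in each cycle of five -> 23.976 fps, the original film cadence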

Quote:
Originally Posted by babgvant View Post

Yes, but the approach requires serious trade-offs in usability. If the differences in GPUs can only be demonstrated by selecting this method, it might not be a practical data point outside of its context.

I just stated my opinion that the 520 might not be fast enough to do everything in HTPC-land. And I stated how I would measure whether it's fast enough to do what I was thinking about. I don't understand what we're arguing about.


Quote:
Originally Posted by babgvant View Post

Yes, but we can't pretend that other solutions don't exist. The purpose of threads like this shouldn't be to proselytize; they should explain the "Ws" for a specific solution. Which for completeness should at the very least acknowledge the existence of alternatives and help make the component selection process easier.

Since there is no software solution for ATI similar to what I can do with my NVIDIA card (at least that I know of), which I outlined above as well, I think this is a clear factor to help the selection process.

It's not pretending; no other solution to my specific requirements exists.
Hardware decoding, hardware deinterlacing, madVR. Show me the ATI solution to this.

(Intel is already out, too slow for madVR processing)
post #1556 of 1812
Quote:
Originally Posted by Nevcairiel View Post

Video content contains 60 fields, which were initially constructed from 60 individual frames (or directly filmed as 60 fields). You will want to re-create those 60 frames.
The fields are from individual points in time; you cannot recombine two to make up one frame. The "Weave" deinterlacing algorithm does exactly that, and as you will see, any movement will produce nasty artifacts.

But isn't a field 1/2 a frame of information? If so, where does the other 1/2 come from?

Quote:
Originally Posted by Nevcairiel View Post

I just stated my opinion that the 520 might not be fast enough to do everything in HTPC-land. And I stated how I would measure whether it's fast enough to do what I was thinking about. I don't understand what we're arguing about.

It's not my intent to argue, but to understand the context of your statement and hopefully produce a metric that would help people buying an HTPC choose the right components for optimal* video playback. If (as it seems) the context is limited to meeting your specific goals, and not applicable in a broader sense, that's OK - I'll just have to keep looking.

Quote:
Originally Posted by Nevcairiel View Post

Since there is no software solution for ATI similar to what I can do with my NVIDIA card (at least that I know of), which I outlined above as well, I think this is a clear factor to help the selection process.

It's not pretending; no other solution to my specific requirements exists.
Hardware decoding, hardware deinterlacing, madVR. Show me the ATI solution to this.

You clearly thumbed the scale on that. Before LAV CUVID it was pretty even b/w the two, no?

Quote:
Originally Posted by Nevcairiel View Post

(Intel is already out, too slow for madVR processing)

Would I be able to see this in the information MPC-HC provides? I'm also looking for a way to measure the difference b/w SNB's HD2000/3000 SKUs on an HTPC.

* we probably define "optimal" differently; mine would include usability and form factor considerations
post #1557 of 1812
Quote:
Originally Posted by babgvant View Post

But isn't a field 1/2 a frame of information? If so, where does the other 1/2 come from?

That's where the deinterlacing algorithms come in. With a plain "Bob" algorithm, every line simply gets doubled, which is where the name comes from: you can see the image bob up and down if you look closely.

More advanced algorithms create that missing information by looking at the previous and next fields, and combine it all into one reconstructed frame.
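In AviSynth terms the difference looks roughly like this (a hedged sketch only; the source filter and file name are placeholders):

Code:
AviSource("60i_clip.avi")   # placeholder interlaced source
AssumeTFF()                 # declare field order; adjust to match the source
# Plain bob: split the fields apart and stretch each half-height field back to a
# full frame, so fine detail appears to bob up and down between fields.
Bob()
# A motion/edge-adaptive deinterlacer instead reconstructs the missing lines from
# neighbouring fields where it can, e.g.:
# Yadif(mode=1)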


Quote:
Originally Posted by babgvant View Post

You clearly thumbed the scale on that. Before LAV CUVID it was pretty even b/w the two, no?

NVIDIA did a lot of catching up with the 4xx series, adding HD bitstreaming and the big improvements in VP4; before that, ATI was probably the HTPC choice.
Personally, I never had any problems with NVIDIA since I got my first card back in the GeForce 2 days, and after reading about the constant struggle with ATI drivers, and factoring in that I use Linux once in a while (at which ATI really sucks), I always stuck with NVIDIA.

I only built my first HTPC about a year ago; before that, everything was running through my desktop PC, with analog speaker connections.

Quote:
Originally Posted by babgvant View Post

Would I be able to see this in the information MPC-HC provides? I'm also looking for a way to measure the difference b/w SNB's HD2000/3000 SKUs on an HTPC.

Just run some video with madVR; the default scalers should be good, at least something higher than bilinear. Now run a 60fps movie, maybe 720p on a 1080p screen so it actually has something to upscale, and look at madVR's dropped frames.
As an alternative, you can probably do the same with EVR-CP and some custom pixel shader scaler.
post #1558 of 1812
Quote:
Originally Posted by Andy o View Post

I got smooth 1080p 24->60 at 3GHz by switching from madVR to another renderer like Haali or EVR-Sync. Even doing high-quality RGB conversion w/dithering with ffdshow and using the ffdshow decoder, I got smooth results.

That's the great point of the script approach: even a dual-core processor can do 24->60 smoothly with a *proper* script. A better processor simply means better interpolation quality.

I tested SVP for 1080p24 MKV with a Core i3-2100 + IGP system (no OpenCL support by SVP): LAV Splitter + ffdshow + EVR + SVP 24 -> screen refresh rate (60). It's almost perfectly smooth (SVP index = ~1.0; 1 means "smooth", I guess).

BTW, IGP + madVR is horrendous at upscaling 480 -> 1080, even without SVP.

And I see exactly the same color space issue when I use Microsoft decoders + ffdshow raw filter + SVP + EVR (not madVR).
post #1559 of 1812
Quote:
Originally Posted by Nevcairiel View Post

Just run some video with madVR; the default scalers should be good, at least something higher than bilinear. Now run a 60fps movie, maybe 720p on a 1080p screen so it actually has something to upscale, and look at madVR's dropped frames.
As an alternative, you can probably do the same with EVR-CP and some custom pixel shader scaler.

Thanks
post #1560 of 1812
Quote:
Originally Posted by Nevcairiel View Post


Video content contains 60 fields, which were initially constructed from 60 individual frames (or directly filmed as 60 fields). You will want to re-create those 60 frames.
The fields are from individual points in time; you cannot recombine two to make up one frame. The "Weave" deinterlacing algorithm does exactly that, and as you will see, any movement will produce nasty artifacts.

The proper way is to deinterlace those 60 fields into 60 individual frames, reproducing the original 60 frames as closely as possible.

Now, "film" content is filmed at 24p and may then be telecined into 60 fields. This is not interlacing, but when you play it back without proper processing it looks pretty similar. The difference is that you can reproduce the original content 100% perfectly from telecined content; that's why you want it to produce 24 frames out of the 60 fields.
Sadly, LAV CUVID at this time can only produce 30 frames out of it, still containing the duplicates from the telecining process.

I just stated my opinion that the 520 might not be fast enough to do everything in HTPC-land. And I stated how I would measure whether it's fast enough to do what I was thinking about. I don't understand what we're arguing about.

Since there is no software solution for ATI similar to what I can do with my NVIDIA card (at least that I know of), which I outlined above as well, I think this is a clear factor to help the selection process.

It's not pretending; no other solution to my specific requirements exists.
Hardware decoding, hardware deinterlacing, madVR. Show me the ATI solution to this.

(Intel is already out, too slow for madVR processing)

Are you going to fix the issue of producing only 30 frames out of the 60, and make it 24, in a future release?