
Nvidia GeForce GTX 460 (GF104 GPU) supports full audio bitstreaming

#1 ·
 http://www.anandtech.com/show/3809/n...the-200-king/4

Quote:
With a $199 MSRP and 150W TDP, the 768MB GTX 460 is also the first card to be of a suitable design for HTPC use. Although we don't expect very many GTX 460s to be used for that (rather it would be for the unannounced GF106), NVIDIA is already putting plans into motion for HTPC cards. The GTX 460 will offer full bitstreaming audio capabilities, something the GF100 GPU powering the other GTX 400 series cards could not do. This means that the GTX 460 will be able to bitstream DTS Master Audio and Dolby TrueHD along with the 8 channel LPCM audio capabilities supported by the previous GTX 400 series cards. This brings NVIDIA up to par with AMD, who has offered bitstreaming on the entire range of Radeon HD 5000 series cards.
Quote:
Much like the launch of 3D Vision Surround, however, this feature is late. It is not supported in the initial shipping drivers for the GTX 460 and will be made available at a later, unknown date. We'll be sure to test it along with the rest of the GTX 460's HTPC capabilities once it's available.

Looks like all future Nvidia GPUs will support full audio bitstreaming; the GF100 GPUs (GTX 480/470/465) are the exception, of course.
 
#1,552 ·

Quote:
Originally Posted by Nevcairiel /forum/post/20452877


Get LAV CUVID, run a 60i movie with Adaptive Deinterlacing, Double Frame Rate mode, and DXVA Interop active, then run the whole thing through madVR with some typical scaling options.


The decoding engines in all recent cards are the same, but the deinterlacing performance comes down to raw 3D power, as do the pixel shaders used in madVR.


Some Benchmarks from LAV CUVID development (pure decoding, no rendering, using the settings mentioned above)

GT 240: AVC1: 57fps, WVC1: 59fps

GTS 450: AVC1: 109fps, WVC1: 121fps


I don't have measurements for any low-end 4xx/5xx series card, but if you throw madVR into the mix, using up more GPU power, I would guess you need at least 70+ fps output from LAV CUVID to still get smooth playback at 60 fps.

What are you using to measure FPS? Also, why do you want to double the frame rate?


I have two concerns with using an approach like this:


1) Because of the niche nature of the decode/render steps, the results are academic. Most users rely on DXVA decode and GPU DI (not sure if this is different) exposed through an EVR interface for interlaced content playback.


2) It can only be used to rank NVIDIA's cards.

Quote:
Originally Posted by Nevcairiel /forum/post/20452877


And as madshi pointed out, it's better to buy with some overhead for the future; you never know what will come!

Personally, I prefer a "buy middle, replace often" approach.



My interest here is more to help quantify, for a wider audience, the costs/benefits of selecting a specific GPU and moving up a GPU family. For example, using my current methodology, I can't demonstrate a benefit in selecting a GTS 450 over a GT 430 for "mainstream" video playback, where there is a clear set of costs in price, size, heat, and power use.
 
#1,553 ·

Quote:
Originally Posted by babgvant /forum/post/20453697


What are you using to measure FPS? Also, why do you want to double the frame rate?

GraphStudio in this case.

"Double Framerate" is a bit mis-named, in this case it just means that a 60i video produces 60p (proper video deinterlacing), and not 30p.

If the source is video material, and not film, you'll always want 60p output for smooth movement. It doesn't actually "double" the frames; it deinterlaces the fields separately instead of only producing one frame out of both fields.
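
As a rough illustration of the terminology (a sketch in plain Python, not code from any post): single-rate deinterlacing collapses each field pair into one frame, while double-rate keeps every field as its own output frame.

```python
# Illustrative sketch only: output frame counts for single-rate vs.
# double-rate ("Double Frame Rate") deinterlacing of 60i material.

fields_per_second = 60     # 60i: 60 fields per second, each from a distinct moment

# Single-rate: two fields are merged into one output frame -> 30p.
single_rate_output = fields_per_second // 2

# Double-rate: every field is deinterlaced into its own frame -> 60p,
# preserving all 60 distinct points in time for smooth video motion.
double_rate_output = fields_per_second

print(single_rate_output, double_rate_output)   # 30 60
```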

Quote:
Originally Posted by babgvant /forum/post/20453697


I have two concerns with using an approach like this:


1) Because of the niche nature of the decode/render steps, the results are academic. Most users rely on DXVA decode and GPU DI (not sure if this is different) exposed through an EVR interface for interlaced content playback.


2) It can only be used to rank NVIDIA's cards.

It's not meant to be a "generic" approach. You asked how to quantify the performance of the cards, and that's how I would do it. That's my playback setup, and I need a card with enough performance to use it. (And I can highly recommend it; you won't get any more quality out of an HTPC any other way.)

madVR alone is fine for progressive content, but with interlaced content nothing can beat hardware deinterlacing, and the only way to get it with NVIDIA and madVR is my decoder (and CoreAVC, but that's for H.264 only), that I know of.

AFAIK, it's not possible at all right now with ATI, unless the Cyberlink HAM decoder does HW deinterlacing, but I don't believe it does (unsure, however). So you're either stuck with EVR if you want DXVA + HW deint, or with software decoding + yadif + madVR.


Also, this is an NVIDIA thread, and I will always use NVIDIA myself (seeing how I also developed that decoder, it would be silly to go ATI), so I don't really care how an ATI card compares to this.


BTW, when letting EVR do the deinterlacing, it'll automatically reduce deinterlacing quality if it notices that the hardware is too slow. I'm not even sure, however, whether it actually produces 60p out of 60i; that's only really detectable with visual inspection when running through EVR. If it doesn't, it needs far less performance, as most cards can produce 30 fps.
 
#1,554 ·

Quote:
Originally Posted by Nevcairiel /forum/post/20453732


GraphStudio in this case.

"Double Framerate" is a bit mis-named, in this case it just means that a 60i video produces 60p (proper video deinterlacing), and not 30p.

If the source is video material, and not film, you'll always want 60p output for smooth movement. It doesn't actually "double" the frames; it deinterlaces the fields separately instead of only producing one frame out of both fields.

Maybe I'm misunderstanding something, but doesn't 60i only contain 30 FPS (2 fields = 1 frame) worth of information?

Quote:
Originally Posted by Nevcairiel /forum/post/20453732


It's not meant to be a "generic" approach. You asked how to quantify the performance of the cards, and that's how I would do it. That's my playback setup, and I need a card with enough performance to use it. (And I can highly recommend it; you won't get any more quality out of an HTPC any other way.)

Yes, but the approach requires serious trade-offs in usability. If the differences between GPUs can only be demonstrated by selecting this method, it might not be a practical data point outside of its context. We could just tell everyone to buy a Lumagen and call it...

Quote:
Originally Posted by Nevcairiel /forum/post/20453732


Also, this is a NVIDIA thread

Yes, but we can't pretend that other solutions don't exist. The purpose of threads like this shouldn't be to proselytize; they should explain the "Ws" for a specific solution, which for completeness should at the very least acknowledge the existence of alternatives and help make the component selection process easier.

Quote:
Originally Posted by Nevcairiel /forum/post/20453732


and i will always use NVIDIA myself (seeing how i also developed that decoder, would be silly to go ATI), so i don't really care how to compare an ATI card to this.

I understand that our goals don't align exactly in this area, but at some point you must have decided to use an NVIDIA card over another option for some reason.
 
#1,555 ·

Quote:
Originally Posted by babgvant /forum/post/20453812


Maybe I'm misunderstanding something, but doesn't 60i only contain 30 FPS (2 fields = 1 frame) worth of information?

Video content contains 60 fields, which were initially constructed from 60 individual frames (or directly filmed as 60 fields). You will want to re-create those 60 frames.

The fields are from individual points in time; you cannot recombine two to make up one frame. The "Weave" deinterlacing algorithm does exactly that, and as you will see, any movement produces nasty artifacts.


The proper way is to deinterlace those 60 fields into 60 individual frames, reproducing the original 60 frames as closely as possible.


Now, "Film" content is filmed at 24p, and may then be telecined into 60 fields. This is not interlacing, but when you play it back without proper processing it looks pretty similar. The difference is that you can reproduce the original content 100% perfectly from telecined content, thats why you want it to produce 24 frames out of the 60 fields.

Sadly, LAV CUVID at this time can only produce 30 frames out of it, still containing the duplicates from the telecine process.
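
For readers following along, here is a small Python sketch (illustrative only, simplified so each "field" just records its source frame rather than carrying real top/bottom lines) of how 24p film becomes 60 fields via 3:2 pulldown, and why inverse telecine can recover the original 24 frames exactly:

```python
# Illustrative sketch only: 3:2 pulldown turns 24 film frames into 60 fields,
# and inverse telecine removes the duplicates to recover the original frames.

def telecine_32(frames):
    """Repeat frames in a 3-2-3-2 cadence: 4 frames -> 10 fields."""
    fields = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2
        fields.extend([frame] * repeats)
    return fields

def inverse_telecine(fields):
    """Skip the duplicate fields to get the original frames back."""
    frames, i = [], 0
    while i < len(fields):
        frames.append(fields[i])
        i += 3 if len(frames) % 2 == 1 else 2
    return frames

film = list(range(24))                    # one second of 24p film frames
fields = telecine_32(film)                # 60 fields, as broadcast/stored
assert len(fields) == 60
assert inverse_telecine(fields) == film   # perfect reconstruction of 24 frames
```

A decoder that only deinterlaces (like LAV CUVID in the limitation described above) outputs 30 frames from those 60 fields and keeps the duplicates, rather than detecting the cadence and returning the clean 24.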

Quote:
Originally Posted by babgvant /forum/post/20453812


Yes, but the approach requires serious trade-offs in usability. If the differences in GPUs can only be demonstrated by selecting this method, it might not be a practical data point outside of its context.

I just stated my opinion that the 520 might not be fast enough to do everything in HTPC-land, and I stated how I would measure whether it's fast enough to do what I was thinking about. I don't understand what we're arguing about.


Quote:
Originally Posted by babgvant /forum/post/20453812


Yes, but we can't pretend that other solutions don't exist. The purpose of threads like this shouldn't be to proselytize; they should explain the "Ws" for a specific solution. Which for completeness should at the very least acknowledge the existence of alternatives and help make the component selection process easier.

Since there is no software solution for ATI similar to what I can do with my NVIDIA (at least that I know of), which I outlined above as well, I think this is a clear factor to help the selection process.


It's not pretending; no other solution to my specific requirements exists.

Hardware decoding, hardware deinterlacing, madVR. Show me the ATI solution to this.


(Intel is already out, too slow for madVR processing)
 
#1,556 ·

Quote:
Originally Posted by Nevcairiel /forum/post/20453974


Video content contains 60 fields, which were initially constructed from 60 individual frames (or directly filmed as 60 fields). You will want to re-create those 60 frames.

The fields are from individual points in time; you cannot recombine two to make up one frame. The "Weave" deinterlacing algorithm does exactly that, and as you will see, any movement produces nasty artifacts.

But isn't a field 1/2 a frame of information? If so, where does the other 1/2 come from?

Quote:
Originally Posted by Nevcairiel /forum/post/20453974


I just stated my opinion that the 520 might not be fast enough to do everything in HTPC-land, and I stated how I would measure whether it's fast enough to do what I was thinking about. I don't understand what we're arguing about.

It's not my intent to argue, but to understand the context of your statement and hopefully produce a metric that would help people buying an HTPC choose the right components for optimal* video playback. If (as it seems) the context is limited to meeting your specific goals, and not applicable in a broader sense, that's OK - I'll just have to keep looking.

Quote:
Originally Posted by Nevcairiel /forum/post/20453974


Since there is no software solution for ATI similar to what I can do with my NVIDIA (at least that I know of), which I outlined above as well, I think this is a clear factor to help the selection process.


It's not pretending; no other solution to my specific requirements exists.

Hardware decoding, hardware deinterlacing, madVR. Show me the ATI solution to this.

You clearly thumbed the scale on that. Before LAV CUVID it was pretty even between the two, no?

Quote:
Originally Posted by Nevcairiel /forum/post/20453974


(Intel is already out, too slow for madVR processing)

Would I be able to see this in the information MPC-HC provides? I'm also looking for a way to measure the difference between SNB's HD2000/3000 SKUs on an HTPC.


* we probably define "optimal" differently; mine would include usability and form factor considerations
 
#1,557 ·

Quote:
Originally Posted by babgvant /forum/post/20454074


But isn't a field 1/2 a frame of information? If so, where does the other 1/2 come from?

That's where the deinterlacing algorithms come in. With a plain "Bob" algorithm, every line simply gets doubled, which is where its name comes from, as you can see the image bob up and down if you look closely.


More advanced algorithms create that info by looking at the previous and next fields, and combine all that information into one reconstructed frame.
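
To make that concrete, here is a tiny Python sketch (illustrative only, not any driver's actual implementation) of what "bob" does to a field:

```python
# Illustrative sketch only: "bob" deinterlacing stretches one field back to
# full height by repeating each of its lines, which is why the picture
# appears to bob up and down between top and bottom fields.

def split_fields(frame):
    """A frame is a list of scanlines; even lines form the top field,
    odd lines the bottom field."""
    return frame[0::2], frame[1::2]

def bob(field):
    """Double every field line to rebuild a full-height frame."""
    out = []
    for line in field:
        out.extend([line, line])
    return out

frame = ["line%02d" % n for n in range(8)]   # a tiny 8-line "frame"
top, bottom = split_fields(frame)
print(bob(top))      # full height, but only the even lines' information
print(bob(bottom))   # full height, but only the odd lines' information

# A motion-adaptive deinterlacer would instead fill the missing lines from
# neighbouring fields where there is no motion, and interpolate where there is.
```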


Quote:
Originally Posted by babgvant /forum/post/20454074


You clearly thumbed the scale on that. Before LAV CUVID it was pretty even between the two, no?

NVIDIA did a lot of catching up with the 4xx series, adding HD bitstreaming and the big improvements in VP4; before that, ATI was probably the HTPC choice.

Personally, I never had any problems with NVIDIA since I got my first card back in the GeForce2 days, and after reading about the constant struggle with ATI drivers, and factoring in that I use Linux once in a while (at which ATI really sucks), I always stuck with NVIDIA.


I only built my first HTPC about a year ago; before that, everything was running through my desktop PC with analog speaker connections.

Quote:
Originally Posted by babgvant /forum/post/20454074


Would I be able to see this in the information MPC-HC provides? I'm also looking for a way to measure the difference between SNB's HD2000/3000 SKUs on an HTPC.

Just run some video with madVR; the default scalers should be good, at least something higher than bilinear. Now run a 60 fps movie, maybe 720p on a 1080p screen so it actually has something to upscale, and look at madVR's dropped frames.

As an alternative, you can probably do the same with EVR-CP and some custom pixel shader scaler.
 
#1,558 ·

Quote:
Originally Posted by Andy o /forum/post/20452872


I got smooth 1080p 24->60 at 3GHz by switching from madVR to another renderer like Haali or EVR-Sync. Even doing high-quality RGB conversion w/dithering with ffdshow and using the ffdshow decoder, I got smooth results.

That's the great point of the script: even a dual-core processor can do 24->60 smoothly with a *proper* script. A better processor simply means better interpolation quality.


I tested SVP for 1080p24 MKV with a Core i3-2100 + IGP system (no OpenCL support by SVP): LAV Splitter + ffdshow + EVR + SVP, 24 -> screen refresh rate (60). It's almost perfectly smooth (SVP-index = ~1.0; 1 means "smooth", I guess).


BTW, IGP + madVR is horrendous at upscaling 480 -> 1080, even without SVP.


And I see exactly the same color space issue when I use Microsoft decoders + ffdshow raw filter + SVP + EVR (not madVR).
 
#1,559 ·

Quote:
Originally Posted by Nevcairiel /forum/post/20454103


Just run some video with madVR; the default scalers should be good, at least something higher than bilinear. Now run a 60 fps movie, maybe 720p on a 1080p screen so it actually has something to upscale, and look at madVR's dropped frames.

As an alternative, you can probably do the same with EVR-CP and some custom pixel shader scaler.

Thanks
 
#1,560 ·

Quote:
Originally Posted by Nevcairiel /forum/post/0



Video content contains 60 fields, which were initially constructed from 60 individual frames (or directly filmed as 60 fields). You will want to re-create those 60 frames.

The fields are from individual points in time; you cannot recombine two to make up one frame. The "Weave" deinterlacing algorithm does exactly that, and as you will see, any movement produces nasty artifacts.


The proper way is to deinterlace those 60 fields into 60 individual frames, reproducing the original 60 frames as closely as possible.


Now, "Film" content is filmed at 24p, and may then be telecined into 60 fields. This is not interlacing, but when you play it back without proper processing it looks pretty similar. The difference is that you can reproduce the original content 100% perfectly from telecined content, thats why you want it to produce 24 frames out of the 60 fields.

Sadly, LAV CUVID at this time can only produce 30 frames out of it, still containing the duplicates from the telecine process.


I just stated my opinion that the 520 might not be fast enough to do everything in HTPC-land, and I stated how I would measure whether it's fast enough to do what I was thinking about. I don't understand what we're arguing about.


Since there is no software solution for ATI similar to what I can do with my NVIDIA (at least that I know of), which I outlined above as well, I think this is a clear factor to help the selection process.


It's not pretending; no other solution to my specific requirements exists.

Hardware decoding, hardware deinterlacing, madVR. Show me the ATI solution to this.


(Intel is already out, too slow for madVR processing)

Are you going to fix the issue of only 30 frames out of the 60 and make it 24 in a future release?
 
#1,561 ·
Get ready for a really stupid question (I admit I didn't search the whole thread for the answer).


If I connect a mini-HDMI to HDMI cable from a GTX 460 to my Onkyo HT-S5400 I should be able to play PC games like BFBC2 in full 5.1 surround sound, correct?


I'm asking this dumb question because I recently got my first surround setup in years, and after years of casually playing PC games, and even working in IT, it was a complete revelation to me that PC games don't produce surround sound over basic optical out. It was driving me nuts that I could watch Blu-rays from my HTPC in full surround sound, but playing a game of BFBC2 dropped the audio to stereo. I started googling the issue today and found that this has been common knowledge for some time. I asked my IT co-workers and they were as clueless as I was about the subject, so that made me feel a little better.


As a side question to my main dumb question... how does a video card extract and output audio from the motherboard to begin with? I don't understand how that works. Oh, and just in case it's important in answering question number 1, this is my current motherboard: GIGABYTE GA-MA785GM-US2H.


Thanks for any answers/advice anyone can offer.
 
#1,562 ·
Yes, with HDMI you can get full 5.1 sound. (even 7.1 if you want)


The GTX 460 has its own sound card on board. Once you connect an HDMI device that accepts sound, it'll show up in your audio configuration.


Some motherboards also come with a Dolby encoder that allows you to pass 5.1 through the optical link, but not all. It's mostly a licensing issue, really. If yours does, you can choose a "Dolby Digital 5.1" output format in your audio device configuration.
 
#1,563 ·

Quote:
Originally Posted by Nevcairiel /forum/post/20456038


Yes, with HDMI you can get full 5.1 sound. (even 7.1 if you want)


The GTX 460 has its own sound card on board. Once you connect an HDMI device that accepts sound, it'll show up in your audio configuration.

Terrific! Thank you.

Quote:
Some motherboards also come with a Dolby encoder that allows you to pass 5.1 through the optical link, but not all. Its mostly a licensing issue, really. If yours does, you can choose a "Dolby Digital 5.1" output format in your audio device configuration.

Yeah, I don't see anything with the 5.1 designator in the output properties or anything, just plain DTS and DD (which, again, seems to work fine with everything except games).
 
#1,564 ·
You should have bought the GA-MA785G(P)MT-UD2H, which supports Dolby Home Theater (including Dolby Digital Live).
 
#1,565 ·

Quote:
Originally Posted by renethx /forum/post/20452781


Tong Chia,


Have you tried LAV CUVID + ffdshow Raw (for SVP) + madVR? I have a color space conversion issue.

Selecting ffdshow raw > Codecs > Raw video: YV12 fixed the issue. Both video in and out are YV12 in the ffdshow raw filter. This combination works beautifully for every HD/SD video format (AVC, VC-1, MPEG-2).



Edit


This was already answered in the FAQ:

Quote:
I see wrong colors; for example, the skin is blue like in the Avatar movie.


The simplest solution is to turn on "Postprocessing" in ffdShow.

This is another solution.
 
#1,566 ·
Quote:
Originally Posted by renethx
You should have bought the GA-MA785G(P)MT-UD2H, which supports Dolby Home Theater (including Dolby Digital Live).
Oh wow. Yeah. Well, I bought the board a year ago, before I knew I was going to use the PC as an HTPC; I just needed a new board, and it had decent reviews and a decent price at the time. At least now I have a somewhat reasonable excuse to upgrade my ancient (two-and-a-half-year-old) video card.
 
#1,567 ·
All these SVP color issues are weird. It sounds like it's interpreting NV12 data as YV12, or vice versa. I don't know if it's an ffdshow issue, but it certainly doesn't seem to be an issue with my decoder.


Possibly it's only with AviSynth in ffdshow. You give it NV12 data, and it tells the script it's actually YV12? That would surely explain totally wrong colors.


The future version will allow you to force a specific pixel format, so if only YV12 works, then you can just force YV12 in LAV CUVID.

Of course that adds a small overhead converting the NV12 to YV12, but I wrote some assembler-optimized code for that, so it should be pretty fast.
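
For anyone wondering what that conversion actually involves, here is a minimal Python sketch (illustrative only, nothing like the assembler-optimized code mentioned above): NV12 and YV12 hold the same 4:2:0 data; the only difference is whether the chroma is interleaved or stored as separate planes.

```python
# Illustrative sketch only: NV12 stores Y, then one interleaved UVUV... plane;
# YV12 stores Y, then a full V plane, then a full U plane.

def nv12_to_yv12(y_plane, uv_interleaved):
    u = uv_interleaved[0::2]    # even bytes are U samples
    v = uv_interleaved[1::2]    # odd bytes are V samples
    return y_plane, v, u        # YV12 puts the V plane before the U plane

# Tiny 4x4 image: 16 luma samples, 2x2 chroma (4:2:0), i.e. 4 U and 4 V samples.
y = bytes(range(16))
uv = bytes([10, 20, 11, 21, 12, 22, 13, 23])    # U, V, U, V, ...
_, v_plane, u_plane = nv12_to_yv12(y, uv)
assert u_plane == bytes([10, 11, 12, 13])
assert v_plane == bytes([20, 21, 22, 23])

# A filter that reads NV12 memory as if it were YV12 (or vice versa) mixes up
# the chroma planes, which is exactly the kind of wrong-color result above.
```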
 
#1,568 ·

Quote:
Originally Posted by renethx /forum/post/20458339


Selecting ffdshow raw > Codecs > Raw video: YV12 fixed the issue. Both video in and out are YV12 in the ffdshow raw filter. This combination works beautifully for every HD/SD video format (AVC, VC-1, MPEG-2).

That worked well, the colors are back to normal. Thank you for that.


LAV CUVID has surprisingly low CPU utilization; most of the work is now pushed to the GPU. I am getting better than 60% peak utilization on the GPUs (i7 + GTX 460 pair), up from less than 40% with CoreAVC.

This, in turn, allows for more aggressive motion vector search and prediction.
 
#1,569 ·

Quote:
Originally Posted by Nevcairiel /forum/post/20453974



...


Since there is no software solution similar to what i can do with my NVIDIA for ATI (at least that i know of), which i outlined above as well, i think this is a clear factor to help the selection process.


Its not pretending, no other solution to my specific requirements exist.

Hardware decoding, hardware deinterlacing, madVR. Show me the ATI solution to this.

Actually there is. I've just discovered the Korean software player PotPlayer and can get DXVA + madVR with my ATI 5750. This is an evolution of KMPlayer, and by the same author AFAIK; it's incredibly customizable in every conceivable way that I can imagine. Highly recommended!
 
#1,570 ·

Quote:
Originally Posted by Tulli /forum/post/20468475


Actually there is. I've just discovered the Korean software player PotPlayer and can get DXVA + madVR with my ATI 5750. This is an evolution of KMPlayer, and by the same author AFAIK; it's incredibly customizable in every conceivable way that I can imagine. Highly recommended!

I don't need another player. I would love it if that code were in a standalone decoder, but that doesn't seem likely, does it?
 
#1,571 ·

Quote:
Originally Posted by SamuriHL /forum/post/20468482


I don't need another player. I would love it if that code were in a standalone decoder, but that doesn't seem likely, does it?

Well, it's good to know that there is a DXVA + madVR solution for ATI cards, and I want one!
 
#1,572 ·
I tried PotPlayer just because of that feature, and it was a massive fail for me, but since Tulli is using a 5750 (I have a 5770), I guess there's something wrong in my system. Man, I can't wait to upgrade to Ivy Bridge and a Radeon HD 7000.
 
#1,573 ·

Quote:
Originally Posted by Tulli /forum/post/20468475


Actually there is. I've just discovered the Korean software player PotPlayer and can get DXVA + madVR with my ATI 5750. This is an evolution of KMPlayer, and by the same author AFAIK; it's incredibly customizable in every conceivable way that I can imagine. Highly recommended!

Actually, hardware deinterlacing (i.e., AMD's VA) never works, and Ranpha didn't claim it does:

Quote:
Then go to 'Deinterlacing' filter section and configure it exactly like what the picture below suggests. This filter is optional if you use LAV CUVID Decoder as your video decoder (the only decoder that can do decoder-level deinterlacing).

and I couldn't get it to work either, for Cheese Slices. Hardware decoding is not important for today's processors, but software can't do deinterlacing properly.


Or am I missing something?
 
#1,574 ·

Quote:
Originally Posted by Tulli /forum/post/20468475


Actually there is. I've just discovered the Korean software player PotPlayer and can get DXVA + madVR with my ATI 5750. This is an evolution of KMPlayer, and by the same author AFAIK; it's incredibly customizable in every conceivable way that I can imagine. Highly recommended!

The player itself actually kinda sucks. Customizability isn't always good; heck, in most cases it ends up in bloat and option overload, like in that one. I find the player unintuitive and really unnatural to use/configure.

Plus, I really wouldn't want to switch players.


Anyhow, I actually know how that DXVA mode works, and I nearly wrote a codec to do that, but I decided to write my CUVID filter instead, because it seemed so much easier (and was!).

All non-DirectShow players use that mode for DXVA, like VLC and XBMC and whatnot. It'll only work on Vista/7, and is still severely limited.


Anyhow, like others pointed out, that one does not use hardware deinterlacing, which is a crucial feature for me; decoding can be done on any CPU of today's generation.
 
#1,575 ·
Quote:
Originally Posted by renethx
Actually, hardware deinterlacing (i.e., AMD's VA) never works, and Ranpha didn't claim it does:




and I couldn't get it to work either, for Cheese Slices. Hardware decoding is not important for today's processors, but software can't do deinterlacing properly.


Or am I missing something?
No, you're missing nothing, Rene. After more checking, I found that the player is in fact only doing (ffmpeg) software deinterlacing. So Nevcairiel's solution for Nvidia is the only true/complete hardware-accelerated one that works with madVR.


Sorry, guys, for getting OT somehow, but maybe this will add to the clarification of the very interesting situation we have now, where Nvidia, thanks to Nevcairiel's masterful work, enjoys a clear, neat advantage over AMD for a graphics card recommendation.
 
#1,576 ·
Quote:
Originally Posted by Tulli
No, you're missing nothing, Rene. After more checking, I found that the player is in fact only doing (ffmpeg) software deinterlacing. So Nevcairiel's solution for Nvidia is the only true/complete hardware-accelerated one that works with madVR.


Sorry, guys, for getting OT somehow, but maybe this will add to the clarification of the very interesting situation we have now, where Nvidia, thanks to Nevcairiel's masterful work, enjoys a clear, neat advantage over AMD for a graphics card recommendation.
I think Nvidia's clear, neat advantage over ATI has always been their drivers. With my GTS 450, I was able to correct the card's refusal to output Full Range simply by creating a custom resolution of 1920x1080 to output to my TV, which the card accepted without complaint. I was then also able to create a custom resolution of 1920x1080 with a 23.976 Hz refresh rate, which it again accepted without complaint, and with MPC-HC using EVR Sync I am able to get true 24 fps playback of content. It's just so much easier when Nvidia gives you all these options in their drivers to create your own custom profiles when the default ones don't work. My main gaming PC is the same way: I'm able to create custom AA and SLI profiles for games which don't have a default profile (the vast, vast majority already do), something ATI owners have been complaining about for something like 5 years now with absolutely no plans for resolution.
 