Thank you! I am using madVR test build 113, and after a long period with NGU Sharp very high I lately switched to NGU AA very high, and I have to admit I like it very much… nice and sharp :)
I only use NGU AA for upscaling after lots of comparing. I found NGU Standard and NGU Sharp to be ultimately lacking in quality. On some titles I will add a bit of upscaling refinement; LumaSharpen with AR, no AB, is nice to my eye and doesn't seem to introduce artifacts.
 
When I compare I find NGU AA to be soft and Sharp to be Sharp 😂
So you are sane, good.
The issue is not the sharpness, it is the lines being bad.
Lanczos 8 is also massively sharper than Lanczos 3, but no one sane uses it because it has problems (ringing, mainly).
This is GT -> 4:2:0, with the chroma scaler as named and Jinc to UHD; for the GT it's just Jinc. Jinc is used because it is a safe scaler that doesn't make stuff up.
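Roughly, that comparison chain looks like the sketch below (illustrative only; the box-average downsample and nearest-neighbour upsample are crude stand-ins for the real 4:2:0 filter and for Jinc/NGU, just to show where the chroma detail is lost):

Code:
import numpy as np

def subsample_420(chroma):
    """Halve a chroma plane in both directions with 2x2 box averaging
    (stand-in for the real 4:2:0 downsampling filter)."""
    h, w = chroma.shape
    c = chroma[:h - h % 2, :w - w % 2]
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_2x(chroma):
    """Nearest-neighbour 2x upscale (stand-in for Jinc / NGU AA / super-xbr)."""
    return np.repeat(np.repeat(chroma, 2, axis=0), 2, axis=1)

# GT chroma plane -> 4:2:0 -> back up again: whatever the upscaler has to
# "make up" here is exactly what the comparison shots are judging.
gt = np.random.rand(1080, 1920)
roundtrip = upsample_2x(subsample_420(gt))
print("mean abs error after the 4:2:0 round trip:", np.abs(gt - roundtrip).mean())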

Sharp has terrible lines, worse than super-xbr, and super-xbr has other advantages not shown here; it is doing badly here, but it is still very, very cheap.
NGU AA has perfect lines here; it is scary how accurate it is. Sharp, as always, tries so hard that the Y channel and Cb/Cr now differ too much, and you can see the steps.

I can also show you why NGU AA is a better image scaler than Sharp.

Everything is very obvious if you think about what subsampling does: it adds aliasing. Super-xbr was made to remove aliasing from pixel art, which makes it terrible for pixel art. The AA in NGU AA stands for anti-aliasing. Downscaling creates aliasing.
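A tiny demo of that point (numbers made up purely for illustration): decimating a fine pattern without a proper low-pass filter produces a new, lower-frequency pattern that was never in the source.

Code:
import numpy as np

# 3 cycles per 8 samples -- fine detail, like sharp chroma edges.
x = np.arange(16)
fine = np.cos(2 * np.pi * 3 * x / 8)

# Naive 2:1 decimation with no filtering, the crude version of what 4:2:0 does.
subsampled = fine[::2]

# The result looks like 1 cycle per 8 samples: an alias that was never there.
print(np.round(subsampled, 2))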

You can still pay for XBR...
BTW, Anime4K is the worst thing ever made for upscaling anything; I only use it to show how horribly it can go.
Obviously they sell you that too, and both are free...
 
I only use NGU AA for upscaling after lots of comparing. I found NGU Standard and NGU Sharp to be ultimately lacking in quality. On some titles I will add a bit of upscaling refinement; LumaSharpen with AR, no AB, is nice to my eye and doesn't seem to introduce artifacts.
Subjective evaluation is quite useless; I'd recommend running tests with test patterns built for the purpose of evaluating what you're testing, at least if you want to comment on quality.
And remember to take your display's shortcomings into account.
 
Good day! A question.
I’m considering upgrading my GPU — currently on an RTX 2060. I’m very interested in the 5060 Low Profile — it would allow me to switch to a 2U-height case, which looks very interesting...
A couple of questions:
  1. How does the new 5000 series platform with the latest NV drivers behave with MadVR 208? 113? I’ve read the thread, but there aren’t many reviews.
  2. SVP... Theoretically, should the RTX 5060 have enough power to handle RIFE + NVIDIA TensorRT + MadVR DTM (in 4K SSIM2-100 downscaling mode) for my goal? I want to achieve 1080p@60 output to my JVC X35 projector. Lately, motion handling on it has been bothering me a bit... Has anyone tried something similar on these cards? Or is it underpowered?
 
Good day! A question.
I’m considering upgrading my GPU — currently on an RTX 2060. I’m very interested in the 5060 Low Profile — it would allow me to switch to a 2U-height case, which looks very interesting...
A couple of questions:
  1. How does the new 5000 series platform with the latest NV drivers behave with MadVR 208? 113? I’ve read the thread, but there aren’t many reviews.
  2. SVP... Theoretically, should the RTX 5060 have enough power to handle RIFE + NVIDIA TensorRT + MadVR DTM (in 4K SSIM2-100 downscaling mode) for my goal? I want to achieve 1080p@60 output to my JVC X35 projector. Lately, motion handling on it has been bothering me a bit... Has anyone tried something similar on these cards? Or is it underpowered?
I would aim for a 5060 Ti 8 GB card. I'm running a 4060 Ti on a JVC X500 and it's a very nice match with decent render times. However, when outputting 1080p23, Nvidia has frame-drop issues, so to get around that you need to output the signal via a CPU with an iGPU and the motherboard HDMI out, preferably an AMD CPU. I don't see any purpose in 60 Hz output for movie content, as it messes up motion more.
 
I would aim for a 5060 Ti 8 GB card. I'm running a 4060 Ti on a JVC X500 and it's a very nice match with decent render times. However, when outputting 1080p23, Nvidia has frame-drop issues, so to get around that you need to output the signal via a CPU with an iGPU and the motherboard HDMI out, preferably an AMD CPU. I don't see any purpose in 60 Hz output for movie content, as it messes up motion more.
Could you talk a little more about this? In particular, how to make sure the rendering is done by the GPU and the output is passed to the iGPU in Windows 11.

If you would have a HTPC/gaming/desktop hybrid type system, would you need to run 2 HDMI cables from the PC and switch to the GPU's HDMI for gaming, or can you force that through the iGPU out as well?

I have had frame sync problems for a while due to issues with S/PDIF out on my display, so I must use the S/PDIF on my motherboard, as passing it via HDMI isn't an option.
 
S/PDIF has a different clock than the GPU, so good luck with that.
Using the iGPU to output from the dGPU has a massive performance penalty because it is a copy-back operation.
Just use 120 Hz if available.
 
S/PDIF has a different clock than the GPU, so good luck with that.
Using the iGPU to output from the dGPU has a massive performance penalty because it is a copy-back operation.
Just use 120 Hz if available.
Not everybody runs a crappy gaming monitor at 120 Hz with massive frame drops and thinks that's the way to enjoy movies; some actually want a reference image at the native frame rate with no frame drops.
 
Pain, just pain.

Let's try this one more time.
The OSD is not the present, it is the past.
I was setting up tone mapping because you asked me to for some reason; I do not need tone mapping, so I had to set it up, and later 1080p, which... WHATEVER.

With the wrong settings I drop frames, and the OSD is going to remember that: it is the past.

And again, rendering 23p at 119p costs literally the same as rendering 23p at 23p, because it is still 23p.
You assuming that madVR is so utterly terrible that it would render every identical frame 5 times instead of repeating them is just insulting. There is no excuse; this is the most basic of basic math. And yes, if you tell madVR to present a frame on every v-sync it will cost something, but good luck measuring that.

1000/24 ≈ 41.6 ms. To render this in real time with madVR you need render times below that; the refresh rate of the screen is completely irrelevant.
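A back-of-the-envelope version of that math (the numbers are purely illustrative):

Code:
content_fps = 24000 / 1001            # 23.976p source
refresh_hz = 119.88                   # "23p at 119p": each frame is shown 5 times

budget_ms = 1000 / content_fps        # time available to render each unique frame
repeats = round(refresh_hz / content_fps)

print(f"render budget per frame: {budget_ms:.1f} ms")   # ~41.7 ms
print(f"each rendered frame is presented {repeats}x")   # 5x; the repeats are nearly free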

And now the 120 Hz gaming screen, with someone saying: "If you would have a HTPC/gaming/desktop hybrid type system, "
Because 120 Hz has nothing to do with gaming; it is a default feature of every more expensive TV.
An LG C9 has UHD 4:4:4 120 Hz, and that screen is from 2019...
That's 6 years of mainstream 120 Hz. Low-end phones have 120 Hz, it is that basic; it doesn't need to be a gaming monitor... I do not even know a single TV worth buying from the past 4 years that is limited to 60 Hz...

Because you live in a bubble where your results automatically apply to everyone else.
To this day I doubt that you even understand why my clock is fine with Nvidia, because you couldn't even understand that I have a different clock generator. That would mean you would have to actually understand what is happening, and that is work! You are not capable of getting a custom refresh rate to work, so NO ONE CAN.

You told someone to disable v-sync; this is the worst advice I have seen in my life... this is beyond-words-bad "advice".

I'm so utterly done with this...
 
Pain, just pain.

Let's try this one more time.
The OSD is not the present, it is the past.
I was setting up tone mapping because you asked me to for some reason; I do not need tone mapping, so I had to set it up, and later 1080p, which... WHATEVER.

With the wrong settings I drop frames, and the OSD is going to remember that: it is the past.

And again, rendering 23p at 119p costs literally the same as rendering 23p at 23p, because it is still 23p.
You assuming that madVR is so utterly terrible that it would render every identical frame 5 times instead of repeating them is just insulting. There is no excuse; this is the most basic of basic math. And yes, if you tell madVR to present a frame on every v-sync it will cost something, but good luck measuring that.

1000/24 ≈ 41.6 ms. To render this in real time with madVR you need render times below that; the refresh rate of the screen is completely irrelevant.

And now the 120 Hz gaming screen, with someone saying: "If you would have a HTPC/gaming/desktop hybrid type system, "
Because 120 Hz has nothing to do with gaming; it is a default feature of every more expensive TV.
An LG C9 has UHD 4:4:4 120 Hz, and that screen is from 2019...
That's 6 years of mainstream 120 Hz. Low-end phones have 120 Hz, it is that basic; it doesn't need to be a gaming monitor... I do not even know a single TV worth buying from the past 4 years that is limited to 60 Hz...

Because you live in a bubble where your results automatically apply to everyone else.
To this day I doubt that you even understand why my clock is fine with Nvidia, because you couldn't even understand that I have a different clock generator. That would mean you would have to actually understand what is happening, and that is work! You are not capable of getting a custom refresh rate to work, so NO ONE CAN.

You told someone to disable v-sync; this is the worst advice I have seen in my life... this is beyond-words-bad "advice".

I'm so utterly done with this...
So you're recommending a frame rate that introduces massive frame drops, as you're not able to render the image fast enough?
 
So you're recommending a frame rate that introduces massive frame drops, as you're not able to render the image fast enough?
You know what, I will now stand up, go to my 360 Hz monitor, run a YT file or BBB or something like that, and show you an OSD with zero dropped frames. The frame interval is 2.77 ms; I will try to push my card to 32 ms, I hope I can...

And after that I will have to call in a mod, because you are literally pretending to be unable to do the most basic of basic math. I hope, for your mental health, that you are just a troll and not that delusional or bad at simple logic.

edit:
Image

Magic: presenting 24 frames, whether at 24p or 360p, you still have 41 ms of time to render each frame.
If you will now excuse me, I have to fix my madVR settings...

Yes, I know that this is still not enough to prove the obvious...
 
Could you talk a little more about this? In particular, how to make sure the rendering is done by the GPU and the output is passed to the iGPU in Windows 11.

If you would have a HTPC/gaming/desktop hybrid type system, would you need to run 2 HDMI cables from the PC and switch to the GPU's HDMI for gaming, or can you force that through the iGPU out as well?

I have had frame sync problems for a while due to issues with S/PDIF out on my display, so I must use the S/PDIF on my motherboard, as passing it via HDMI isn't an option.
There have been a lot of words since I asked this, but nobody has actually addressed my questions.

  1. How precisely do you make madVR use the GPU for rendering and then have the system use the iGPU for output? I'm asking for myself as well as others.
  2. As a lot of iGPU HDMI outputs seem to cap out at 4K@60 Hz, is running 2 HDMI cables feasible for a multi-use case?

Context: C8 OLED from 2018, so HDMI 2.0 (60 Hz HDMI). I am unable to use optical out from the TV, Bluetooth from the TV, or ARC with an optical out, because any time the TV touches the digital audio it creates pops and clicks every few minutes.

More context: I'm considering purchasing a G5/S95F (or their successors) in the near future, as well as possibly upgrading my PC (new chipset, new motherboard, etc.), and I don't want to be in the same situation I am in now. A custom resolution in the AMD control panel for 48 Hz is better than 23.976 Hz (which, for some reason, it won't let me create/alter a custom resolution for), but there are still frame "glitches" (not drops, not repeats, but "glitches" in the madVR OSD) every few minutes.
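For reference, the classic periodic repeat/drop behaviour (as opposed to the madVR "glitches" counter) comes from clock drift between the content rate and the effective presentation rate; a rough sketch with assumed numbers:

Code:
content_fps = 24000 / 1001        # 23.976... fps source
display_hz = 48.000               # assumed custom 48 Hz mode, each frame shown twice
effective_fps = display_hz / 2    # 24.000 fps presentation rate

drift = abs(effective_fps - content_fps)     # frames of error accumulated per second
if drift:
    print(f"one repeated/dropped frame roughly every {1 / drift:.0f} s")  # ~42 s here
else:
    print("clocks match exactly: no periodic repeats or drops")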
 
You can run 2 cables, yes, but again this has a huge penalty; this is pretty much Nvidia Optimus and similar issues.
They changed it a bit in Win 11 compared to Win 10.

It currently doesn't work on my system, so I cannot show you an example.
These are the settings:
Image

You select the high-performance GPU and connect the screen to the iGPU. In theory it will use the selected GPU to render the image and the other GPU to present the frame, and frames are big, very big, so to get them to the iGPU it needs to copy them back over PCIe.
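To put a rough number on that copy-back (assumed surface format and refresh rate, not measured on any particular system):

Code:
width, height = 3840, 2160
bytes_per_pixel = 8          # assuming a 16-bit-per-channel RGBA presentation surface
refresh_hz = 60

frame_bytes = width * height * bytes_per_pixel
bandwidth_gb_s = frame_bytes * refresh_hz / 1e9
print(f"per frame: {frame_bytes / 1e6:.0f} MB, sustained copy-back: {bandwidth_gb_s:.1f} GB/s")
# ~66 MB per frame, ~4 GB/s crossing PCIe that would otherwise stay on the card.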

madVR glitches are currently very, very common and there is no general fix; there are many, many reports on the bug tracker about that. You can try restarting Explorer in the hope of resetting the error. You can use FSE; if madVR isn't entering FSE, that is the current bug.
It also limits DFS to 8 bit and it will not enter 10 bit.

If you are in that state you will experience many, many issues; it's a very bad state.
 
You know what, I will now stand up, go to my 360 Hz monitor, run a YT file or BBB or something like that, and show you an OSD with zero dropped frames. The frame interval is 2.77 ms; I will try to push my card to 32 ms, I hope I can...

And after that I will have to call in a mod, because you are literally pretending to be unable to do the most basic of basic math. I hope, for your mental health, that you are just a troll and not that delusional or bad at simple logic.

edit: View attachment 3812097
Magic: presenting 24 frames, whether at 24p or 360p, you still have 41 ms of time to render each frame.
If you will now excuse me, I have to fix my madVR settings...

Yes, I know that this is still not enough to prove the obvious...
It proves everything: v-sync 2.7 ms, render time 27 ms, totally messed up.
 
It's a factor of 15... 41.666666666666666666

Even classic projectors use a factor of 2-4, playing 24 at 48, 72, or 96 Hz (96 is rare)...
And you're running frame rates higher than your GPU's capabilities, over and over again, and telling everybody to do the same, and it's always excuses; why not just show a screenshot with a shorter render time than your v-sync?
 