
2421 - 2440 of 2866 Posts

· Registered · Joined · 24 Posts
So! Let's summarize some of the results.
Two years of "hard work" have already passed.
An uncountable amount of money, received from the investor, has been spent.
And so far, zero sales, because there is still no MadEnvy :)
Maybe we misunderstood?
Wasn't it "Mad"?
Maybe it was "VaporEnvy" :)
Or "NullResultEnvy", or
"Very_Expensive_But_No_One_Needs_ENVY"?
 

· Registered · Joined · 1,942 Posts
So! Let's summarize some of the results.
Two years of "hard work" have already passed.
An uncountable amount of money, received from the investor, has been spent.
And so far, zero sales, because there is still no MadEnvy :)
Maybe we misunderstood?
Wasn't it "Mad"?
Maybe it was "VaporEnvy" :)
"All good things come to those who wait" :)
 

· Registered · Joined · 24 Posts
"All good things come to those who wait" :)
While we wait ...
Civilization does not stand still.
A cheap neural network on the $200 Nvidia Shield already looks no worse than NGU!
Not to mention the much more serious, yet free, CNNs.

Soon it won't even need a video card, even a very cheap one.
It seems that, a few years from now, any "HDMI-USB" stick, if properly CNN-programmed,
will give better video quality than madVR, which has long been frozen in place, years behind the world's CNN development :)
 

· Registered · Joined · 1,942 Posts
While we wait ...
Civilization does not stand still.
A cheap neural network on the $200 Nvidia Shield already looks no worse than NGU!
Not to mention the much more serious, yet free, CNNs.

Soon it won't even need a video card, even a very cheap one.
It seems that, a few years from now, any USB stick, if properly CNN-programmed,
will give better video quality than madVR, which has long been frozen in place, years behind the world's CNN development :)
So, you currently use madVR?
 

· Registered · Joined · 987 Posts
- Of course yes!
Because I need to stay aware of its capabilities.

- Of course not!
Because, today, it is a very backward solution in terms of quality.
In this area there are revolutions every month :)

Monthly revolutions? Can you provide examples?


 

· Registered · Joined · 7,947 Posts
I'm not fully across the constraints imposed by using the GPU AI - if things have to operate per pixel it is of course hard to implement stuff that looks at a number of pixels, but I guess many algorithms operate outside the bounds of the immediate pixel of concern? Or are those not able to be implemented with GPU AI?
You can do almost anything, using a framework such as CUDA, for example. However, if you want to do things with a good speed, you have to carefully design the algos to make use of the heavily parallel hardware design of today's GPUs. Usually typical video processing algorithms look into the direct neighborhood of each pixel, but not very far from the direct neighborhood.
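The "direct neighborhood" point is why these filters parallelize so well: every output pixel depends only on a tiny window around it, so all pixels can be computed independently. A minimal NumPy sketch of such a per-pixel filter (purely illustrative, not madVR's actual code):

```python
import numpy as np

def sharpen(frame: np.ndarray) -> np.ndarray:
    """Toy 3x3 sharpening filter. Each output pixel is a function of
    only its 3x3 neighborhood, so every pixel could be computed by a
    separate GPU thread with no dependencies between them."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float32)
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")  # replicate borders
    out = np.zeros((h, w), dtype=np.float32)
    # Accumulate the 9 shifted copies; this is the whole convolution.
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(out, 0, 255)
```

Since the kernel sums to 1, flat areas pass through unchanged and only edges get boosted, which is the usual sanity check for a sharpener.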

For what it is worth, I think there are a lot of avenues that could be explored for making such things (more?) robust (enough?), if you can overcome such fundamental issues. For example, ways of working out exactly what the player is (user input, or automated identification based on fingerprinting), so that the unpicking of whether or not the scaler is in use can be very specific to the scaler known to be used by that device, improving certainty. It strikes me that once you know the player, you need to work out where the content edges are, look at those edges for evidence of the scaler in operation, and then come to some conclusion on the likelihood that this image has been subject to a given scaler, based on a very high percentage of edges exhibiting the same processing traits, with sufficient confidence (which probably has to be established over a number of frames) to ensure you don't flip-flop between processing and not processing.

The trigger for entering such a processing mode becomes "Significant number of processed edges detected over multiple frames". The trigger for leaving such a mode is "Significant number of non-processed edges detected over multiple frames". Frames which have no significant edges (not sure how many of them in real content, but anyway) become "don't cares" and indeed that seems pretty legit as for those frames where there are no edges - you probably don't care either way. Obvious traps of course are I guess OSD overlays which may well be generated at 4K and have lots of non-processed edges in them...

Given the way things are going with physical media, and the likelihood of a "videophile" source direct player happening seems small, such features aimed at improving playback on such devices seem really attractive from a consumer point of view. Of course there's lots of stuff we'd all like that just isn't possible...! :)
Things like that may be possible, but would require a *lot* of work, and then when a new firmware of the source device comes out which may change the upscaling algorithm, you have to start all over again. I'm not sure how realistic it is to do algos like that. Maybe I'll look into things like that once I've run out of other algo ideas. But ideally, my priority will be to make Envy shine for "good" source device, which don't upscale, but properly passthrough all content in its native form.

That said, since most devices only support 4:2:0 for 4K60, I may implement some sort of chroma upscaling detection to try to undo whatever bad chroma upscaling algorithms the source device has applied.
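The trigger scheme a few posts up ("enter the mode after many processed edges over multiple frames, leave it after many non-processed edges, treat edge-free frames as don't-cares") boils down to a small hysteresis state machine. A hypothetical sketch, with made-up names and thresholds, of how that anti-flip-flop logic could look:

```python
class ScalerDetector:
    """Hysteresis over per-frame verdicts: only switch modes after
    `frames_needed` consecutive frames agree; frames with no
    significant edges are "don't care" and leave the state alone."""

    def __init__(self, frames_needed: int = 24):
        self.frames_needed = frames_needed
        self.active = False  # currently treating input as pre-scaled?
        self.streak = 0      # consecutive frames voting to switch

    def update(self, verdict: str) -> bool:
        # verdict is one of "scaled", "native", "no_edges"
        if verdict == "no_edges":
            return self.active  # don't care: keep current mode
        wants_active = (verdict == "scaled")
        if wants_active != self.active:
            self.streak += 1
            if self.streak >= self.frames_needed:
                self.active = wants_active
                self.streak = 0
        else:
            self.streak = 0  # agreement resets any pending switch
        return self.active
```

With `frames_needed=24` (roughly one second of 24p), a single noisy frame can never toggle the mode, which is exactly the flip-flop protection described above.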

Maybe the Envy reads the KSV which is a unique device identifier sent over HDMI.
Since we recommend to place the Envy in between AVR and display, Envy would just see the KSV of the AVR, not the KSV of the source device. So that wouldn't help identify which source device the AVR is currently set to.

Likely the same way the HDfury devices know the player name even though the HDfury is placed after the AVR. The Source Product Description Infoframe (SPD).
Yep, that's the plan.

We're not fully sure yet how reliably this will work, though, because not all source devices output SPD InfoFrames, and not all AVRs pass them through properly (though probably newer ones will). Only time will tell. But of course if all else fails, Envy can be remote controlled via IP control.
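For reference, the SPD InfoFrame payload defined in CTA-861 carries an 8-byte ASCII vendor name, a 16-byte ASCII product description, and one source-information byte. A rough sketch of decoding such a payload (header and checksum handling omitted; the example device strings are hypothetical and this is not Envy's code):

```python
def parse_spd_payload(payload: bytes) -> dict:
    """Decode the data bytes of an HDMI Source Product Description
    (SPD) InfoFrame: vendor name (8 bytes), product description
    (16 bytes), source information code (1 byte), per CTA-861."""
    if len(payload) < 25:
        raise ValueError("SPD payload too short")
    return {
        "vendor": payload[0:8].rstrip(b"\x00").decode("ascii", "replace"),
        "product": payload[8:24].rstrip(b"\x00").decode("ascii", "replace"),
        "source_info": payload[24],  # e.g. device category code
    }
```

This is presumably how HDfury-style devices recover the player name even when sitting behind an AVR: as long as the AVR passes the InfoFrame through, the identity of the original source survives.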

Will Envy process a video stream (whatever that is) with exactly the same “possibilities” as madVR/HTPC does? Sounds like scaling could be an issue. DTM might not be a problem. But if the source doesn’t let Envy have the final say on scaling, how does Envy have any chance? After all, that is one of madVR’s strongest algos (NGU).

Not that I don't love HSTM/DTM, also! But scaling is very important to my setup.
Picking a good source device will be key, of course. Ideally one which has a properly working "passthrough" mode. Oppo UHD Blu-Ray players come to mind. IIRC they do have such a mode. Only problem is that they don't support 4:2:0 output for anything but 4K60.

We may also try to work together with source device companies to optimize their source devices for best quality. We already have contact to one such company. They may consider adding 4:2:0 support for 1080p, as well. But it's all in the early stages right now, so I can't really say any more about it at this point.

In any case, Envy has all the same algos available as madVR. I might not make every algo available, for the simple reason that I want to keep the Envy menu as easy to use as possible. But if Envy misses any algo compared to madVR, it's only for ease-of-use reasons, not for any other reasons. And Envy already has a few tricks up its sleeve which madVR can not do, and more things are coming.

That's actually great to hear. Out of curiosity, did you generate the test frames on the PC side on Windows (10?) or Linux?
Windows. Nvidia drivers are one of those things where Windows actually has an edge over Linux. For the simple reason that most gamers use Windows. So Nvidia invests most of its driver development resources into the Windows drivers.

Any news ?
I've just made a new firmware available for our private beta testers which adds:

1) full 3D support (only for 1080p24)

Needs testing to confirm it works reliably. But looks good so far, and it will mean Envy can do full frame packed 3D, after all (!).

2) motion adaptive deinterlacer, using NGU Anti-Alias for interpolation

This will be mostly useful for EU users, I guess, where 1080i50 is still commonly used, e.g. for soccer. Not sure if it will be of any use to USA users? Ric tells me interlaced content has mostly disappeared in USA.

I'm a long time madVR and SVP user. Currently using an i9-7900x for interpolation and RTX 2080 Ti for madVR. My GPU is also overclocked as far as I can stably push it. Even though the CPU is relatively dated compared to the new AMD processors, it can run SVP at high settings and frame rates at 1080p / high bitrate sources.

Since increasing the frame rate with SVP also increases the amount of work for madVR, I need a balance between target resolution, madVR upscaling quality, and frame rate.

When upscaling 1080P content to 1440p, I have madVR chroma and image upscaling set to NGU antialias high which gives noticeably better results than NGU medium. SVP interpolates up to 120 FPS. The television handles the final upscaling from 1440P to 4K. Frames are rarely dropped.

If I want to upscale from 1080P to 4K using madVR alone, I would need to reduce the svp interpolation frame rate to 60 or less. SVP interpolation is much better than the TV and I find the trade-off better than the difference in upscaling from 1080P to 1440p vs 1080P to 4K.

The PC consumes 500 to 600 watts under load and sounds like a small hurricane, so I need to locate the PC in another room and run a long HDMI cable.

Anyway, I have several questions.

1. How does the madVR ENVY compare to the performance I just described? I'd like to know my general level of envy at such a device.
2. Does the madVR ENVY have customizable settings?
3. What software is being used for interpolation?
4. Is the madVR ENVY modular in any way e.g. swapping out the CPU or GPU.
5. How does the madVR ENVY handle cooling and fan noise?
1. In the long run I plan to make use of the Tensor cores to speed upscaling up, and to implement high quality motion interpolation (hope to beat SVP in quality, but we'll have to wait and see).
2. Sure. It's optimized for ease of use, though. So it doesn't have as many tweak options as madVR. It offsets some of that by choosing automatically and intelligently for you. E.g. you don't have to manually select the NGU quality levels. Envy will do that automatically for you, in such a way that the GPU has a good usage percentage but doesn't drop any frames.
3. You mean SVP like? Not available yet, but it's planned. Will be my own software, using neural networks.
4. Yes and no. The hardware is modular, and we do plan to make upgrades available (e.g. HDMI 2.1). But we can't allow the user access, due to HDCP restrictions. And also due to business reasons. And the cheaper Pro model will probably only get an HDMI 2.1 upgrade, and no further upgrades after that.
5. We've done comparison tests and carefully picked the best GPU cooling solution we could find. Beta testers seem fairly happy with the noise (or lack thereof).

It's interesting that you can see a noticeable difference between NGU Medium and High quality for chroma upscaling. Most users (probably including me) would have a hard time seeing a difference there. Might depend on the content, I guess. Maybe you're using a lot of Anime? I could imagine it making a bigger difference for Anime, compared to other content types.
 

· Registered · Joined · 2,742 Posts
1) full 3D support (only for 1080p24)

Needs testing to confirm it works reliably. But looks good so far, and it will mean Envy can do full frame packed 3D, after all (!).
Will you make this possible on the PC version too, if users are willing to do whatever I assume is needed to make it work?
2) motion adaptive deinterlacer, using NGU Anti-Alias for interpolation

This will be mostly useful for EU users, I guess, where 1080i50 is still commonly used, e.g. for soccer. Not sure if it will be of any use to USA users? Ric tells me interlaced content has mostly disappeared in USA.
Envy only? I would love to test it. AMD thinks frame adaptive deint is bilinear bob...

Old concerts on BD and DVD aren't going anywhere.

Edit: since you are (or were) working on deint, is it possible to talk about some "advanced" deint techniques which will be possible in the near future, now that native 120 Hz seems to be a "basic" feature?
 

· Registered · Joined · 455 Posts
I've just made a new firmware available for our private beta testers which adds:

1) full 3D support (only for 1080p24)

Needs testing to confirm it works reliably. But looks good so far, and it will mean Envy can do full frame packed 3D, after all (!).
I assume that is just passthrough, correct? Since there is no opportunity to upscale. Also, I doubt there would ever be support for Left/Right 4K upscaled over two DisplayPort connections, which enables 4K 3D on some displays. I would imagine that is too limited a market.
 

· Registered · Joined · 7,947 Posts
Will you make this possible on the PC version too, if users are willing to do whatever I assume is needed to make it work?
Yes. The next madVR HTPC build will probably make 3D work again for Nvidia. But limited to windowed mode and fullscreen windowed mode. It crashes in FSE mode, for reasons I currently don't understand. I hope I didn't break anything for AMD.

Envy only? I would love to test it. AMD thinks frame adaptive deint is bilinear bob...
Envy only for now.

Edit: since you are (or were) working on deint, is it possible to talk about some "advanced" deint techniques which will be possible in the near future, now that native 120 Hz seems to be a "basic" feature?
What do you mean? I think nobody uses interlacing for anything but 50i and 60i content. So the best way to deint seems to be to go to 50p/60p first, and then maybe use motion interpolation to go up to 100p or 120p (if that's really needed)?

I assume that is just passthrough, correct? Since there is no opportunity to upscale. Also, I doubt there would ever be support for Left/Right 4K upscaled over two DisplayPort connections, which enables 4K 3D on some displays. I would imagine that is too limited a market.
It's mostly passthrough. Though, stuff like aspect ratio detection, anamorphic stretch, debanding etc should all work. And yes, outputting over two DisplayPort connections seems a bit exotic, at least for now. In the long run maybe we could consider exotic ideas, but definitely not soon.
 

· Registered · Joined · 2,742 Posts
Envy only for now.
welp poor me.

What do you mean? I think nobody uses interlacing for anything but 50i and 60i content. So the best way to deint seems to be to go to 50p/60p first, and then maybe use motion interpolation to go up to 100p or 120p (if that's really needed)?
I'm the last person to use interpolation, but I'm talking about the very common mixed content, and the fact that switching between modes is not user friendly.
Sony for a long time, and Samsung and LG (more brands can do this now; it's a relatively basic feature these days), can all do deint and field matching on the fly if the panel is natively 120 Hz. And it's relatively simple in theory: can you field match it? Yes: do it and arrange the frames for 120 Hz output. Can you not field match it? Deint it.

If you field match a source to 24p, you can't display it at 60 Hz without judder, but you can at 120 Hz, and you can also display interlaced content at 120 Hz. So in other words you can do both, which madVR is currently not able to do. Even at 60 Hz, madVR has other tools available to make this work: smooth motion. Not optimal, but better than the current solution.
 

· Registered · Joined · 7,947 Posts
I'm the last person to use interpolation, but I'm talking about the very common mixed content, and the fact that switching between modes is not user friendly.
Sony for a long time, and Samsung and LG (more brands can do this now; it's a relatively basic feature these days), can all do deint and field matching on the fly if the panel is natively 120 Hz. And it's relatively simple in theory: can you field match it? Yes: do it and arrange the frames for 120 Hz output. Can you not field match it? Deint it.

If you field match a source to 24p, you can't display it at 60 Hz without judder, but you can at 120 Hz, and you can also display interlaced content at 120 Hz. So in other words you can do both, which madVR is currently not able to do. Even at 60 Hz, madVR has other tools available to make this work: smooth motion. Not optimal, but better than the current solution.
Ah, I see what you mean. I had already thought about this myself. My idea on how to solve this is to use "smooth motion" frame blending for field matched pixels. So truly interlaced pixels would be properly deinterlaced to 60p and telecined pixels would be IVTCed to 24p and then frame blended to 60p.
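The "frame blended to 60p" step is ordinary resampling arithmetic: each 60p output frame falls at source time `out_idx * 24/60` in the 24p stream, and the two nearest source frames are blended with linear weights. A toy sketch of just the weight computation (madVR's actual smooth motion logic may well differ):

```python
def blend_weights_24_to_60(out_idx: int):
    """For a 60p output frame index, return the two 24p source frame
    indices to blend and their linear weights. Every 5th output frame
    lands exactly on a source frame (weight 1.0), the rest are mixes."""
    src_time = out_idx * 24.0 / 60.0   # position in source-frame units
    a = int(src_time)                  # earlier source frame
    w = src_time - a                   # fractional weight of the later frame
    return (a, 1.0 - w), (a + 1, w)
```

For example, output frame 0 shows source frame 0 unmixed, while output frame 1 is roughly a 60/40 blend of source frames 0 and 1; that blending is what trades the 3:2 judder for a slight double-image softness.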
 

· Registered · Joined · 2,742 Posts
The 120 Hz version of this would be error "free", because instead of recreating 24p you are not dropping fields; you are just moving fields, or just deinterlacing frames.

So if your frame adaptive NGU AA is really good, like Nvidia's and Intel's (and AMD's in the past), you get a very high quality progressive picture with 3:2 judder in it, just by detecting 3 very similar frames followed by 2 very similar frames (multiply this by a lot for anime): you show the 3 frames for 5 refreshes by repeating 2 of them, and the 2 frames for another 5 refreshes by repeating 3. So even if the algorithm makes a misdetection, the error would be very small.

The issue with madVR is you really have to move this repeating part to presentation, to spare processing power.

For 60 Hz with smooth motion, you could detect the 3 repeated frames and drop one of them, which is still very safe, and this should work wonders for the not-that-rare 4:2:2:2 cadence, by just finding the 4 and dropping two of them. For smooth motion it doesn't really matter if it is 48p with two repeated frames or 24p; it would repeat these frames anyway when the target is 60 fps.
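The "3 very similar frames followed by 2 very similar frames" idea amounts to matching a repeat pattern against per-frame difference scores. A deliberately naive sketch (a real detector would need adaptive thresholds, phase locking over time, and tolerance for noisy frames):

```python
def detect_32_cadence(diffs, threshold=1.0):
    """Given per-frame difference scores versus the previous frame
    (small score = near-duplicate), check whether they fit the 3:2
    telecine cadence. Returns (found, phase). Only exact pattern
    matches are accepted; this is an illustration, not production code."""
    # Over each group of 5 frames of 3:2 pulldown, the "new frame" (1)
    # vs "repeat of previous" (0) flags follow the pattern 1,0,0,1,0.
    pattern = [1, 0, 0, 1, 0]
    flags = [1 if d > threshold else 0 for d in diffs]
    for phase in range(5):
        if all(flags[i] == pattern[(i + phase) % 5] for i in range(len(flags))):
            return True, phase
    return False, None
```

This also shows why the poster's point about misdetections holds: even if one near-duplicate frame is misclassified, at worst a single repeated frame lands one slot early or late within the 5-frame group.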
 

· Registered · Joined · 7,947 Posts
Yeah, but to be honest, interest in deinterlacing has been on the decline for a long time, so it's not a priority atm. The algo I implemented now is quite simple and straightforward, so it didn't cost me a lot of time to implement, while I still think it should look pretty decent. I would like to look into deinterlacing again in the future in more detail, but it will take some time to get there because frankly other algos seem more important atm.
 

· Registered · Joined · 2,742 Posts
Yes. The next madVR HTPC build will probably make 3D work again for Nvidia. But limited to windowed mode and fullscreen windowed mode. It crashes in FSE mode, for reasons I currently don't understand. I hope I didn't break anything for AMD.
Just to be sure: you didn't forget to change the "disable fullscreen optimizations" Win 10 compatibility setting?
Maybe it helps for FSE; maybe it's the reason it crashes.

And in the end, FSE is pretty much hated anyway.
 

· Registered · Joined · 894 Posts
Hi Madshi. I've been reading about madVR for years and am happy to learn that you've managed to turn it into a high quality real-time video processor. Well done!

I have a few questions:-

  1. I'm just wondering how the Envy would perform with DLP projectors such as the Optoma UHZ65 in terms of tone-mapping, noise reduction, upscaling and calibration (3D LUT)? I know the entry-level Envy would cost more than the projector itself, but if the end results are exceptional then it begins to make sense from a total-cost point of view.
  2. Could Envy simulate a true RGBRGB or P3 filter, to create an image that is similar or close to what a properly tone-mapped projector with a real P3 filter would produce, without necessarily losing luminance? I think the highlight recovery could form part of this, in conjunction with shadow recovery and intelligent colour saturation algorithms. A real-time, frame-by-frame dynamic contrast ability would also be nice.
  3. As a personal preference, I have always preferred DLP over LCoS because of the sharpness and colour pop (colour contrast) of DLP engines. However, I am wondering if the same results can be achieved with LCoS in conjunction with the Envy. It would be nice to see what Envy can do with projectors such as JVC's current 4K range: NX5, NX7 and NX9.

Please let me have your thoughts....
 