Originally Posted by bobof
I'm not fully across the constraints imposed by using the GPU AI - if things have to operate per pixel it is of course hard to implement stuff that looks at a number of pixels, but I guess many algorithms operate outside the bounds of the immediate pixel of concern? Or are those not able to be implemented with GPU AI?
You can do almost anything, using a framework such as CUDA, for example. However, if you want to do things at good speed, you have to carefully design the algos to make use of the heavily parallel hardware design of today's GPUs. Typical video processing algorithms do look at the direct neighborhood of each pixel, but usually not very far beyond it.
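To make that a bit more concrete: a per-pixel GPU kernel is perfectly free to read neighboring pixels, it just can't cheaply depend on far-away pixels or on results computed by other threads in the same pass. A minimal CUDA sketch of a 3x3 box filter (purely illustrative, not an actual madVR/Envy kernel):

Code:
// Illustrative 3x3 box filter: each thread produces one output pixel,
// but reads a small neighborhood around it. This maps very well to GPUs;
// what gets expensive is needing pixels far away, or results computed
// by other threads in the same pass.
__global__ void box3x3(const unsigned char* src, unsigned char* dst,
                       int width, int height, int pitch)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int sum = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
        {
            int sx = min(max(x + dx, 0), width  - 1);   // clamp at image borders
            int sy = min(max(y + dy, 0), height - 1);
            sum += src[sy * pitch + sx];
        }
    dst[y * pitch + x] = (unsigned char)(sum / 9);
}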
Originally Posted by bobof
For what it is worth I think there are a lot of avenues that could be explored for making such things (more?) robust (enough?) if you can overcome such fundamental issues. Such as ways of working out exactly what the player is (user input or automated identification based on fingerprinting), so that the unpicking of whether or not the scaler is in use can be very specific to the scaler known to be used by that device, improving certainty. It strikes me that once you know the player, you need to work out where the content edges are, look at those edges for evidence of the scaler in operation, and then come to some conclusion on the likelihood that this image has been subject to a given scaler, based on a very high percentage of edges exhibiting the same processing traits, with sufficient confidence (which probably has to be established over a number of frames) to ensure you don't flip flop between processing and not processing.
The trigger for entering such a processing mode becomes "Significant number of processed edges detected over multiple frames". The trigger for leaving such a mode is "Significant number of non-processed edges detected over multiple frames". Frames which have no significant edges (not sure how many of those there are in real content, but anyway) become "don't cares", and that seems pretty legit: for frames where there are no edges, you probably don't care either way. Obvious traps, of course, are OSD overlays, which may well be generated at 4K and have lots of non-processed edges in them...
Given the way things are going with physical media, and how small the likelihood of a "videophile" direct-output source player seems, features aimed at improving playback from such devices look really attractive from a consumer point of view. Of course there's lots of stuff we'd all like that just isn't possible...!
Things like that may be possible, but would require a *lot* of work, and then when a new firmware for the source device comes out which changes the upscaling algorithm, you have to start all over again. I'm not sure how realistic algos like that are. Maybe I'll look into things like that once I've run out of other algo ideas. But ideally, my priority will be to make Envy shine for "good" source devices, which don't upscale, but properly pass through all content in its native form.
That said, since most devices only support 4:2:0 output for 4K60 (so for everything else they upscale the chroma themselves), I may implement some sort of chroma upscaling detection to try to undo whatever bad chroma upscaling algorithm the source device has applied.
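Purely to illustrate the kind of hysteresis bobof describes (this is not something Envy implements), the frame-level decision could look roughly like this. The per-frame edge classification is the actual hard part and is simply assumed to exist here; all names and thresholds are made up:

Code:
// Hypothetical sketch: decide per frame whether the image looks "pre-scaled",
// and only flip the global mode after enough consistent frames, so we don't
// flip-flop. edgesTotal / edgesScaled would come from some real edge analysis.
struct ScalerDetector
{
    bool undoScaling = false;           // current processing mode
    int  agree = 0;                     // consecutive frames suggesting a switch
    static const int MIN_EDGES  = 500;  // below this the frame is a "don't care"
    static const int MIN_FRAMES = 60;   // frames of evidence needed before switching

    void onFrame(int edgesTotal, int edgesScaled)
    {
        if (edgesTotal < MIN_EDGES) return;                    // "don't care" frame
        bool looksScaled = edgesScaled * 10 > edgesTotal * 9;  // >90% of edges look processed
        if (looksScaled != undoScaling)
            agree++;                                           // evidence for switching
        else
            agree = 0;                                         // evidence against, reset
        if (agree >= MIN_FRAMES)
        {
            undoScaling = looksScaled;                         // switch modes
            agree = 0;
        }
    }
};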
Originally Posted by SirMaster
Maybe the Envy reads the KSV, which is a unique device identifier sent over HDMI.
Since we recommend placing the Envy between the AVR and the display, Envy would just see the KSV of the AVR, not the KSV of the source device. So that wouldn't help identify which source device the AVR is currently set to.
Originally Posted by claw
Likely the same way the HDfury devices know the player name even though the HDfury is placed after the AVR: the Source Product Description (SPD) InfoFrame.
Yep, that's the plan.
We're not fully sure yet how reliably this will work, though, because not all source devices output SPD InfoFrames, and not all AVRs pass them through properly (though probably newer ones will). Only time will tell. But of course if all else fails, Envy can be remote controlled via IP control.
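For what it's worth, extracting the name from an SPD InfoFrame is pretty simple, if I remember the CTA-861 layout correctly (data bytes 1-8 = vendor name, 9-24 = product description, 25 = source device information code). A rough sketch, assuming "payload" already holds the 25 checksum-validated data bytes:

Code:
// Rough sketch of pulling the device name out of an SPD InfoFrame payload.
// Layout per CTA-861 (from memory): bytes 1-8 vendor name, 9-24 product
// description, byte 25 source device information code.
#include <string>

struct SpdInfo
{
    std::string   vendor;     // e.g. "OPPO"
    std::string   product;    // e.g. "UDP-203"
    unsigned char deviceType; // e.g. 0x0A should be "Blu-ray Disc player"
};

SpdInfo parseSpd(const unsigned char* payload)  // 25 data bytes, checksum already verified
{
    SpdInfo info;
    info.vendor.assign ((const char*)payload,      8);   // data bytes 1-8
    info.product.assign((const char*)payload + 8, 16);   // data bytes 9-24
    info.deviceType = payload[24];                        // data byte 25
    // many devices pad with spaces or NULs, so trim those
    while (!info.vendor.empty()  && (info.vendor.back()  == ' ' || info.vendor.back()  == '\0')) info.vendor.pop_back();
    while (!info.product.empty() && (info.product.back() == ' ' || info.product.back() == '\0')) info.product.pop_back();
    return info;
}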
Originally Posted by catav
Will Envy process a video stream (whatever that is) with exactly the same "possibilities" as madVR/HTPC does? Sounds like scaling could be an issue. DTM might not be a problem. But if the source doesn't let Envy have the final say on scaling, how does Envy have any chance? After all, that is one of madVR's strongest algos (NGU).
Not that I don't love HSTM/DTM, also! But scaling is very important to my setup.
Picking a good source device will be key, of course. Ideally one which has a properly working "passthrough" mode. Oppo UHD Blu-ray players come to mind. IIRC they do have such a mode. Only problem is that they don't support 4:2:0 output for anything but 4K60.
We may also try to work together with source device companies to optimize their devices for best quality. We're already in contact with one such company. They may consider adding 4:2:0 support for 1080p, as well. But it's all in the early stages right now, so I can't really say any more about it at this point.
In any case, Envy has all the same algos available as madVR. I might not make every algo available, for the simple reason that I want to keep the Envy menu as easy to use as possible. But if Envy is missing any algo compared to madVR, it's only for ease-of-use reasons, not for any other reason. And Envy already has a few tricks up its sleeve which madVR cannot do, and more things are coming.
Originally Posted by robl2
That's actually great to hear. Out of curiosity, did you generate the test frames on the PC side on Windows (10?) or Linux?
Windows. Nvidia drivers are one of those things where Windows actually has an edge over Linux. For the simple reason that most gamers use Windows. So Nvidia invests most of its driver development resources into the Windows drivers.
Originally Posted by *Mori*
Any news?
I've just made a new firmware available for our private beta testers which adds:
1) full 3D support (only for 1080p24)
Needs testing to confirm it works reliably. But looks good so far, and it will mean Envy can do full frame packed 3D, after all (!).
2) motion adaptive deinterlacer, using NGU Anti-Alias for interpolation
This will be mostly useful for EU users, I guess, where 1080i50 is still commonly used, e.g. for soccer. Not sure if it will be of any use to US users? Ric tells me interlaced content has mostly disappeared in the USA.
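To give a rough idea of what "motion adaptive" means here (a very crude sketch, nothing like the actual implementation, which uses NGU Anti-Alias and much smarter motion detection): for every missing pixel the deinterlacer checks whether that area of the image is moving. If it isn't, it can safely weave the pixel in from the opposite field; if it is, it has to interpolate from within the current field.

Code:
// Very crude motion adaptive deinterlacing sketch (luma only). Assumption:
// the current field holds the even lines, so the odd lines are missing.
// If the surrounding lines haven't changed vs. the previous same-parity
// field, the area is treated as static and the missing line is woven in from
// the previous opposite-parity field; otherwise it is interpolated vertically.
__global__ void deintMissingLines(const unsigned char* curField,  // even lines valid
                                  const unsigned char* prevOpp,   // previous field, odd lines valid
                                  const unsigned char* prevSame,  // field before that, even lines valid
                                  unsigned char* out,
                                  int width, int height, int pitch)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = (blockIdx.y * blockDim.y + threadIdx.y) * 2 + 1;      // odd (missing) lines only
    if (x >= width || y + 1 >= height) return;

    int above  = curField[(y - 1) * pitch + x];
    int below  = curField[(y + 1) * pitch + x];
    int motion = abs(above - prevSame[(y - 1) * pitch + x])
               + abs(below - prevSame[(y + 1) * pitch + x]);

    out[y * pitch + x] = (motion < 10)
        ? prevOpp[y * pitch + x]                            // static area -> weave
        : (unsigned char)((above + below + 1) / 2);         // moving area -> interpolate
}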
Originally Posted by Interpolation
I'm a long time madVR and SVP user. Currently using an i9-7900X for interpolation and an RTX 2080 Ti for madVR. My GPU is also overclocked as far as I can stably push it. Even though the CPU is relatively dated compared to the new AMD processors, it can run SVP at high settings and frame rates with 1080p / high-bitrate sources.
Since increasing the frame rate with SVP also increases the amount of work for madVR, I need a balance between target resolution, madVR upscaling quality, and frame rate.
When upscaling 1080p content to 1440p, I have madVR chroma and image upscaling set to NGU Anti-Alias high, which gives noticeably better results than NGU medium. SVP interpolates up to 120 FPS. The television handles the final upscaling from 1440p to 4K. Frames are rarely dropped.
If I want to upscale from 1080p to 4K using madVR alone, I would need to reduce the SVP interpolation frame rate to 60 or less. SVP's interpolation is much better than the TV's, and I find that trade-off worth more than the difference between upscaling 1080p to 1440p vs 1080p to 4K.
The PC consumes 500 to 600 watts under load and sounds like a small hurricane, so I need to locate the PC in another room and run a long HDMI cable.
Anyway, I have several questions.
1. How does the madVR ENVY compare to the performance I just described? I'd like to know my general level of envy at such a device.
2. Does the madVR ENVY have customizable settings?
3. What software is being used for interpolation?
4. Is the madVR ENVY modular in any way, e.g. swapping out the CPU or GPU?
5. How does the madVR ENVY handle cooling and fan noise?
1. In the long run I plan to make use of the Tensor cores to speed up upscaling, and to implement high-quality motion interpolation (hope to beat SVP in quality, but we'll have to wait and see).
2. Sure. It's optimized for ease of use, though, so it doesn't have as many tweak options as madVR. It offsets some of that by choosing automatically and intelligently for you. E.g. you don't have to manually select the NGU quality levels; Envy does that automatically for you, in such a way that the GPU has a good usage percentage but doesn't drop any frames (see the rough sketch below).
3. You mean something SVP-like? Not available yet, but it's planned. It will be my own software, using neural networks.
4. Yes and no. The hardware is modular, and we do plan to make upgrades available (e.g. HDMI 2.1). But we can't allow the user access, due to HDCP restrictions, and also for business reasons. And the cheaper Pro model will probably only get an HDMI 2.1 upgrade, and no further upgrades after that.
5. We've done comparison tests and carefully picked the best GPU cooling solution we could find. Beta testers seem fairly happy with the noise (or lack thereof).
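To illustrate the general idea behind 2. (a deliberately simplified, hypothetical sketch, not Envy's actual logic; all names and thresholds are made up, and a real implementation would need proper hysteresis so it doesn't oscillate):

Code:
// Hypothetical sketch of automatic quality selection: measure how long the GPU
// needs per frame at the current NGU level and step up/down so rendering stays
// comfortably inside the frame interval. Names and thresholds are made up.
enum NguLevel { NGU_LOW, NGU_MEDIUM, NGU_HIGH, NGU_VERY_HIGH };

struct AutoQuality
{
    NguLevel level = NGU_VERY_HIGH;   // start optimistic, step down if needed

    void onFrameRendered(double renderMs, double frameIntervalMs)
    {
        if (renderMs > frameIntervalMs * 0.90 && level > NGU_LOW)
            level = (NguLevel)(level - 1);   // too close to dropping frames -> step down
        else if (renderMs < frameIntervalMs * 0.60 && level < NGU_VERY_HIGH)
            level = (NguLevel)(level + 1);   // plenty of headroom -> step up
    }
};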
It's interesting that you can see a noticeable difference between NGU Medium and High quality for chroma upscaling. Most users (probably including me) would have a hard time seeing a difference there. Might depend on the content, I guess. Maybe you're watching a lot of Anime? I could imagine it making a bigger difference for Anime, compared to other content types.