Originally Posted by madshi
In theory true, but we don't want a test video to only have 1 video frame, do we? I don't think that would work well. Some media players might not even show that 1 frame at all, if that's all that's contained in the video. So we would have to encode the same test pattern image for a duration of maybe 5 seconds or so. But if we do that, each of those frames in those 5 seconds might be slightly different, due to lossy encoding.
There is CU-lossless coding in H.265, so with careful work it should be possible to create a lossless video inside a lossy encode, e.g. a very small video with a giant letterbox.
Hmmmm... But even talking about it now, I guess it would be possible to stamp a frame counter into each frame, that way the Envy should be able to wait for a specific frame number and compare only that. That should fix the lossless problem.
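The frame-counter idea could be sketched roughly like this: stamp the frame number as a row of black/white blocks, then read it back with a mid-gray threshold so mild lossy-encoding noise doesn't flip the bits. This is a minimal numpy sketch for illustration only; the block size, bit count, and video levels (16/235) are my assumptions, not anything the Envy actually does.

```python
import numpy as np

BLOCK = 16   # size of each counter block in pixels (assumption)
BITS = 16    # frame numbers up to 65535 (assumption)

def stamp_counter(frame, n):
    """Stamp frame number n as BITS black/white blocks in the top-left corner."""
    out = frame.copy()
    for i in range(BITS):
        bit = (n >> i) & 1
        val = 235 if bit else 16   # full white / black in limited-range video levels
        out[0:BLOCK, i * BLOCK:(i + 1) * BLOCK] = val
    return out

def read_counter(frame):
    """Recover the frame number; threshold at mid-gray so lossy noise survives."""
    n = 0
    for i in range(BITS):
        block = frame[0:BLOCK, i * BLOCK:(i + 1) * BLOCK]
        if block.mean() > 125:
            n |= 1 << i
    return n
```

With this, the display side only needs to wait until `read_counter` returns the expected number and then compare that single frame against the reference.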
Yeah, that's part of the idea; one complex frame should be enough.
In theory 4:2:0 is also available for other frame rates and resolutions. The Envy's EDID actually reports 4:2:0 capability for a lot of resolutions and frame rates, and fully supports receiving all those formats losslessly. But whether the source device is actually able to send it that way is another question, of course. From what I recall, the Oppo 203 actually might!
As far as I know the spec only requires 4:2:0 for UHD 50/60 Hz; a PC clearly can send it for other modes too. But you are the one developing this, not me, and most importantly that doesn't change the fact that most users will use a device that is not going to use 4:2:0.
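The reason the spec leans on 4:2:0 specifically at UHD 50/60 Hz is bandwidth. A back-of-the-envelope sketch, assuming the standard CTA-861 timing for 3840x2160p60 (a 4400x2250 total raster):

```python
# 4K60 at 4:4:4 needs a 594 MHz pixel clock, well beyond HDMI 1.4's 340 MHz cap.
# 4:2:0 packs two pixels per TMDS clock, halving the required clock rate.
total_h, total_v, fps = 4400, 2250, 60          # CTA-861 total raster for 2160p60
pixel_clock_444 = total_h * total_v * fps        # Hz, for 4:4:4 / 4:2:2
pixel_clock_420 = pixel_clock_444 // 2           # Hz, for 4:2:0
print(pixel_clock_444, pixel_clock_420)
```

That halving is what lets 4K60 squeeze through older 300/340 MHz links, which is why the mandate exists only there; at lower resolutions and frame rates 4:2:0 saves nothing you actually need.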
But even if the source devices converts to 4:2:2 or 4:4:4, I can still check if the Y channel is untouched or not. And I can try to check how much damage was done to the 4:2:2 and 4:4:4 channels, as well.
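Checking whether the Y channel survived untouched while the chroma channels took damage is conceptually simple: compare per-channel maximum error against the reference pattern. A minimal numpy sketch (function names and the HxWx3 YCbCr layout are my assumptions for illustration):

```python
import numpy as np

def channel_damage(reference, received):
    """Per-channel max absolute error between the reference pattern and what arrived.
    Both arrays are HxWx3 YCbCr, uint8; channel 0 is luma."""
    diff = np.abs(reference.astype(np.int16) - received.astype(np.int16))
    return [int(diff[..., c].max()) for c in range(3)]

def luma_is_lossless(reference, received):
    """True if the Y channel round-tripped bit-exactly."""
    return channel_damage(reference, received)[0] == 0
```

This gives exactly the split the next post describes: errors showing up only in Cb/Cr while luma stays bit-exact.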
Some users are already testing for signal integrity, and if I remember correctly some see issues only in chroma, not in luma.
And I personally would not consider bilinear or bicubic a damaging algorithm; they have to upsample the chroma somehow. I simply have no clue how to work around it.
Ah yes, you're right, of course. Thanks for the heads-up. I've modified my post with the tech specs accordingly. I believe we should be able to use 16bit input/output with the TensorCores. It might require re-training our neural networks, though.
Still plenty of bits.
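"Plenty of bits" can be made concrete for FP16: with a 10-bit stored mantissa (11 significant bits including the implicit one), every integer up to 2048 is represented exactly, so 10-bit video levels (0..1023) round-trip losslessly. A quick check with numpy's `float16`:

```python
import numpy as np

# 10-bit video code values fit exactly in FP16.
assert np.float16(1023) == 1023
# Integers remain exact up to 2**11 = 2048 ...
assert np.float16(2048) == 2048
# ... but 2049 is the first integer that no longer round-trips (rounds to 2048).
assert np.float16(2049) == np.float16(2048)
print("FP16 integer precision checks passed")
```

So for 10-bit (or even 12-bit, with a little headroom management) video data, FP16 in the TensorCores leaves precision to spare.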
I'm not sure if I can do that within D3D9. I might have to switch to D3D11 to make that work, but I'm not sure.
Not a programmer, but according to AMD it needs DX11, PS 5.0 and Win 8 or newer. I didn't expect the Envy to be DX related, and I have no clue what API you are going to use for the tensor cores; my guess would be CUDA, and that can do FP16 for general computing too.
Maybe go directly to DX12? I'm not a fan of Win 10, but if you want to use cutting-edge tensor cores and such things, DX11 isn't the new kid anymore, and if you are already making such a drastic change, giving yourself more possibilities may be better in the long term. Microsoft really doesn't stop adding new features to DX12.