Originally Posted by ARROW-AV
Here's a titbit... ANSI contrast is increased by circa 50%
I was secretly hoping for that, but not really believing in it. So that sounds like great news! However, IIRC JVC has promised ANSI contrast improvements in the past which didn't really pan out. So I'll hold off until measurements actually confirm this. But I'm hopeful!
Hmmmm... As Manni asked, though, is this only for the NX9? I can imagine the better lens helping with ANSI contrast, for example. Or does this also apply to the N5 and N7?
Originally Posted by bobof
I have had Darbee courtesy of various Lumagen units (and also the PrismaVue processing from the Prisma LUT box) for the last 3 years or so; and I always go through the same process with it. I like the effect as I turn it on - it clearly changes the image in a way which at that point of switching it in circuit I seem to like. But it only seems to survive for a little while before I get tired of the effect and decide that while it is doing "something" I've no indication it is getting me closer to the original intent and so it gets turned off again.
I've seen some good pics out of MadVR and some that look like a dog's dinner thanks to too much processing (there is one (in)famous poster with an old JVC whose name I shall not mention as they're a bit like Beetlejuice... lol... but he'll probably be along as soon as a screenshot emerges :-p).
FWIW, "processing" as in sharpening is something that definitely is a matter of taste. However, there are some sorts of processing which you cannot technically avoid. For example, consumer video is encoded as YCbCr 4:2:0, but displays are RGB 4:4:4, so someone somewhere has to upscale chroma from 4:2:0 to 4:4:4 and then convert that to RGB. Furthermore, if you play a 1080p Blu-Ray on a 4K display, again someone has to upscale the whole video from 1080p to 4K. So the question is not *if* you do that kind of processing, the question is only how high the quality of the algorithm you're using is. There are good and bad ones.
Originally Posted by bobof
An example of testing that would be really interesting (in the vein of the video I linked to): take some >4K images, process both direct to 4K and also as if via a 2K DI; pack it for transmission (i.e. encode at 4:2:0 with UHD compression). Then process via choice of image processing algorithms (e.g. MPC/Reality Creation/MadVR) and then compare the processed and unprocessed results with the original pre-compression masters sent to the display at full resolution 4K 4:4:4 and see how they fare. This would be really interesting and would further the understanding of the capabilities and limitations of such processing no end.
This is actually very near to how madVR's "NGU Sharp" algorithm was designed: It tries to undo/revert a 4K -> 2K downscale in the best possible way. There's zero artificial sharpening going on. The algo just looks at the 2K downscale and then takes a best guess at what the original 4K image might have looked like, by throwing lots and lots of GFLOPS at the task. The core of the whole algo is a neural network (AI) which was carefully trained to "guess" the original 4K image, given only the 2K image. The training of such a neural network works by feeding it both the downscaled 2K image and the original 4K image; the training then automatically analyzes what the neural network does, measures how much its output differs from the original 4K image, and applies small corrections to the neural network to get nearer to the ideal result. This training is done hundreds of thousands of times, over and over again.
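For illustration only, this is roughly what such a training loop looks like in code. The tiny network, the L1 loss and the random training data are made-up stand-ins; madVR's actual network, loss and training material are of course not public:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """Deliberately tiny stand-in for a 2x super-resolution network:
    it predicts a 2x-upscaled image from the low-res input."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),   # 4 = 2x2 upscale factor
        )

    def forward(self, lowres):
        # rearrange the extra channels into 2x spatial resolution
        return F.pixel_shuffle(self.body(lowres), 2)

model = TinySRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(200_000):                      # "hundreds of thousands" of iterations
    hires = torch.rand(8, 3, 128, 128)           # stand-in for crops from real 4K masters
    lowres = F.avg_pool2d(hires, 2)              # simulate the 4K -> 2K downscale
    predicted = model(lowres)                    # the network's guess at the original
    loss = (predicted - hires).abs().mean()      # how far the guess is from the true 4K
    optimizer.zero_grad()
    loss.backward()                              # the "small corrections" to the network
    optimizer.step()
```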
Sadly, if a video wasn't actually downscaled from 4K -> 2K but is a native 2K source, the algorithm doesn't produce quite as good results, but it's usually still noticeably better than conventional upscaling algorithms.
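And to tie this back to bobof's suggested test: comparing any of these upscalers against the pre-compression master boils down to something like the sketch below. The file names are purely hypothetical, and a proper test would also include the 4:2:0 encode step and perceptual metrics rather than just PSNR:

```python
import numpy as np
from PIL import Image

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB for 8-bit images; higher = closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Hypothetical files: the original 4K master and two processed candidates
master = np.asarray(Image.open("master_4k.png"))
candidates = {
    "conventional_upscale": np.asarray(Image.open("lanczos_from_2k.png")),
    "ngu_style_upscale":    np.asarray(Image.open("ngu_from_2k.png")),
}

for name, img in candidates.items():
    print(f"{name}: {psnr(master, img):.2f} dB")
```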