Originally Posted by Nalleh
Wait, what! Am I getting this right: no more HDR? You are now preferring SDR conversion?
It's always been HDR to SDR conversion. Our displays are not HDR displays. The question is where the conversion happens (player, video processor or display), and what the consequences are.
With gamma D, the conversion took place in the display. It was crap. The dark gamma control either raised the black floor, killing contrast, or crushed blacks. It was unusable. And to add insult to injury, the DI was disabled with HDR content.
For a while I preferred the HDR to SDR BT2020 conversion in the UB900 using the Integral. The main positive was being able to use the DI and get our black levels back; the main negative was that it expected an SDR display, so the highlights were limited to about 100 nits real peakY, otherwise reference white would be too high. Until a slider was added later, there was also a significant compromise in the highlights, which were clipped fairly low. Still, I and others recommended it for a long time, because for a while it was better to do the conversion in the player, as long as the player was the UB900 (the Oppo conversion was buggy for a long time and crushed blacks).
Then I found a way to use Calman to calibrate to HDR10 manually, at about the time Chad B started to experiment as well. We actually shared tricks and opinions at the very beginning of this. This gave better results than a gamma D calibration, with peakY as high as the display was capable of, and no raised black floor or crushed blacks. This was the beginning of custom curves, which ideally have to be designed for each set-up according to various factors and taste. I shared a few of them to show people that we could do better than gamma D, but the further you were from the target of the curve (or from a complete bat cave), the worse the results. So that's what I recommended: going back to doing the conversion in the display.
Then Arve dropped the A bomb, which transformed a 30-minute process (the time needed to create a custom curve manually) into a one-minute process, allowing us to experiment with design and roll-off and create even better custom curves. I immediately embraced the process, as the result was better than what I could get with my manual curves, and I recommended that people use the tool to create their own custom curve, or ask someone competent to design one for their set-up and their taste. I created and shared a first Dolby Cinema Emulation curve, so that people could target one single peakY (107 nits) and know they were getting a fairly accurate conversion for that peakY.
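For those curious about what such a curve actually computes, the core math can be sketched in a few lines of Python: decode the PQ (ST.2084) signal to absolute nits, then roll off the highlights toward the display's peak with a BT.2390-style hermite spline. This is a simplified illustration (no black-level lift, no gamut handling), not the actual implementation of Arve's tool or of any product:

```python
# ST.2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_to_nits(e):
    """Decode a PQ-encoded signal value (0..1) to absolute luminance in nits."""
    ep = e ** (1 / M2)
    num = max(ep - C1, 0.0)
    return 10000.0 * (num / (C2 - C3 * ep)) ** (1 / M1)

def nits_to_pq(y):
    """Encode absolute luminance in nits (0..10000) to a PQ signal value (0..1)."""
    yp = (y / 10000.0) ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

def bt2390_rolloff(e, peak_nits):
    """BT.2390-style EETF: pass low/mid PQ values through unchanged, and
    roll off the highlights toward the display peak with a hermite spline.
    (The black-level lift term is omitted for simplicity.)"""
    max_lum = nits_to_pq(peak_nits)   # display peak, in PQ signal space
    ks = 1.5 * max_lum - 0.5          # knee start
    if e < ks:
        return e                      # below the knee: untouched
    t = (e - ks) / (1 - ks)
    return ((2*t**3 - 3*t**2 + 1) * ks
            + (t**3 - 2*t**2 + t) * (1 - ks)
            + (-2*t**3 + 3*t**2) * max_lum)

# Example: a 1000-nit highlight mapped for a ~107-nit display
e_in = nits_to_pq(1000.0)
e_out = bt2390_rolloff(e_in, 107.0)
print(f'{pq_to_nits(e_in):.0f} nits in -> {pq_to_nits(e_out):.1f} nits out')
```

The knee start is the key design choice: below it the signal tracks the source exactly, and only the top of the range gets compressed, which is exactly the trade-off we keep tuning to taste.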
I then worked with HD Fury to implement a feature in the Linker (then the Vertex) to allow us to get DI back with HDR content.
Then Lumagen implemented the Intensity Mapping LUT in the Radiance Pro, and Oppo made some progress. Someone I entirely trust (@KrisDeering), who I knew loved custom curves because we had discussed the process extensively when I started using them, and with whom I have privately shared lots of information, reported that he was getting better results in a specific area (low-light color saturation) with the Radiance Pro intensity mapping LUT and the new beta firmware in the Oppo (not yet public). I investigated using MadVR and found that he was entirely correct. The time had come, again, to let the source or the video processor do the conversion, simply because it was better there. The big difference from the SDR BT2020 we could get from the UB900 is that these solutions use the full brightness available (up to 200 nits and more if your display can achieve it) and are far more accurate, beyond the gain in low-light saturation. So as I said, it's not "going back to SDR BT2020", it's going back to doing the SDR BT2020 conversion in the source (MadVR, Oppo) or the VP (Radiance Pro) to get better results with HDR content than when doing the conversion in the display, even with a custom curve.
There might be a point in the future where the conversion is better done in the display again, and if that's the case I'll be happy to recommend doing that.
I'm not attached to the present way of doing things, whether I came up with it or not. When Arve dropped the A bomb, I was the first to experiment with his tool and recommend it. When Kris dropped the K bomb, I was the first to experiment (with MadVR, as I don't have access to a Radiance Pro or an Oppo yet) and recommend a new way. Simply because I'm after the best possible quality and the most accurate representation, not after "being right". No one is "right" for very long in this field. It keeps changing at light speed, and will probably keep changing for a couple of years.
So if you're after the best PQ and the most accurate representation, be open to change and ready to change your ways.
Originally Posted by stanger89
So I've been looking into how to get madVR to select the right picture mode. According to madshi, the latest madVR (just released, 0.92.11) can call a program/script based on criteria (I'm still looking into that).
Based on that, I decided to look into what it would take to write something to put my RS600 into a specific mode. It turns out it was really easy; Arve actually did all the hard work already, as his jvcprojectortools already implements the whole JVC command set (or appears to). So after a few minutes and a bit of Python, I have a file I can run that puts the projector in the User1 picture mode:
"""JVC projector module to select User1 Picture Mode"""
from jvc_command import *
from jvc_protocol import CommandNack
"""JVC select User1 mode class"""
print('Set User1 Picture Mode')
with JVCCommand(print_all=False) as jvc:
model = jvc.get(Command.Model)
picture_mode = jvc.get(Command.PictureMode)
print('Picture Mode:', picture_mode)
print('Failed to set Picture Mode User1')
except CommandNack as err:
except jvc_protocol.jvc_network.Error as err:
if __name__ == "__main__":
Now off to figure out how to get madVR to call it...
Well done, but unfortunately the external command call in MadVR's custom profiles is buggy (broken), so there is currently no way to call a batch file when entering/exiting a custom profile [EDIT: I misread your sentence, it looked like the feature was new, when it was indeed fixed in the latest release, so that's a good thing!]. I'm having a discussion about this and other things with Madshi in the Doom9 thread, which is really where this should be discussed until we can come back and recommend settings to the general user; otherwise it's going to be super technical and boring for most. Please contribute to the discussion on Doom9 if you can, it will help to get results faster. I asked Madshi yesterday about the JVC command format in a batch file, so it's great to have the answer in Python, but I'd like to know how to do it from a batch file, so if you know how to do that, please let us know in the Doom9 thread, thanks.
What I'd like to do is get it to work first, then, when the experimentation is over, share the best settings and the batch files in one post for those who want to experiment with MadVR, instead of posting each step of the experimentation here. There are lots of complex issues to resolve first regarding how to get the most accurate calibration (using MadVR's 64x64x64 3D LUTs), not only for the HDR to SDR conversion, but also for Rec-709, PAL and SECAM content.
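For anyone wondering what a 3D LUT actually does: it's a lattice of corrected RGB output values, and each incoming pixel is corrected by trilinear interpolation between the 8 nearest lattice points. A toy sketch in pure Python (using a tiny identity LUT purely to show the mechanics; MadVR's real 64x64x64 LUT and processing are of course far more sophisticated):

```python
def make_identity_lut(n):
    """Identity 3D LUT: lut[r][g][b] == (r/(n-1), g/(n-1), b/(n-1))."""
    s = n - 1
    return [[[(r / s, g / s, b / s) for b in range(n)]
             for g in range(n)] for r in range(n)]

def apply_3dlut(rgb, lut):
    """Look up one RGB triplet (components 0..1) in a 3D LUT using
    trilinear interpolation between the 8 nearest lattice points."""
    n = len(lut)
    idx, frac = [], []
    for c in rgb:
        p = c * (n - 1)           # position in lattice coordinates
        i = min(int(p), n - 2)    # lower lattice index (clamped at the edge)
        idx.append(i)
        frac.append(p - i)        # fractional offset along this axis
    out = [0.0, 0.0, 0.0]
    # weight each of the 8 surrounding lattice points by its distance
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((frac[0] if dr else 1 - frac[0])
                     * (frac[1] if dg else 1 - frac[1])
                     * (frac[2] if db else 1 - frac[2]))
                entry = lut[idx[0] + dr][idx[1] + dg][idx[2] + db]
                for k in range(3):
                    out[k] += w * entry[k]
    return tuple(out)

print(apply_3dlut((0.25, 0.5, 0.75), make_identity_lut(17)))
# prints (0.25, 0.5, 0.75) since the identity LUT leaves colors unchanged
```

With a real LUT, the lattice entries come from profiling the display; the interpolation step is why 64 points per axis is enough to correct smoothly across millions of colors.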
Also, I'm trying to find the best way to get it to work with the Vertex, so that we can get MadVR to switch to the best HDR calibration automatically, but can still use the Vertex to switch calibrations with other sources. When MadVR does the conversion, there is no metadata sent, so the Vertex can't switch to SDR BT2020 automatically. Initially, we had the same issue with passthrough mode, but I reported it and it was corrected.
Finally, I'm trying to get another option in the software so that we don't have to choose between MadVR's excellent pixel shader math HDR to SDR conversion and using a 3D LUT as well, so that we can get super accurate results (far better than the JVC Autocal). All this is going to take time, and again, the best place to learn and/or contribute is the Doom9 thread.
One of the reasons why I use MadVR for all my SD/HD content is its fantastic 3D LUT, which gives me reference quality (far better than Autocal) with all content. You can still get this with BT2020, but at the moment only if you use a 3D LUT to do the tone mapping. As Calman doesn't support BT2390 yet (I'm talking to Tyler at the moment to try to get them to support it, or at least to offer better ways to calibrate HDR for projectors, as quickly as they can), that's not a viable option. As Lightspace supports BT2390, I might try that if MadVR doesn't/can't implement what I'm asking for, but I'm not sure the results will be as good as with the pixel shader math, which does a lot of real-time frame-by-frame analysis that a static 3D LUT won't be able to do.
At the moment, I have MadVR apply a different 3D LUT for Rec-709, PAL and SECAM based on the same Rec-709 calibration/profile, so I get reference quality for all that content. But we can't do this with SDR BT2020 (it has to be the largest gamut that's profiled and used to generate the other 3D LUTs), because that would mean keeping the iris fully open, possibly high lamp and the filter engaged when displaying Rec-709 and lower, with lower native on/off contrast, more fan noise and shorter lamp life. Simply not an option.
I'm not going to post more about this here because 1) it's off topic until we have practical recommendations/tools to share with JVC users, 2) it takes a lot of time to repeat what's going on in MadVR's thread, and 3) it's going to take a while before we can actually come up with a clear recommendation for the best way to achieve the best results, not only in HDR but also in Rec-709/PAL/SECAM. I'm happy to get better HDR with MadVR, but not if it means worse SDR.