To address some of the more general points in here: my criticism specifically never went the "HDR at 5000 or 10000 nits is not necessary" route, because that's entirely open to interpretation.
Do I trust the Dolby "experimental results"? No, I do not. They weren't independently verified, so they are not science. It's as simple as that. That 10000 nits is a "target" at all (it's not as if we are going to reach it anytime soon) probably had more to do with the influence of the marketing department than anything else.
Please don't use this thread to drive the discussion towards "it all depends on the artistic intent" - this is the same BS line we currently get fed by every participant in this industry. "We don't know how artists will use the capabilities."
Fine - but we know when and how we can't reproduce original intent in our living rooms, when the actual targets shift, when the way we deal with above-max-brightness information shifts, or when Samsung can go out and pull an entirely BS certification of "color volume matched!" out of a German certification outlet - and NONE of you is able to interpret what that actually means.
There is no standard for DCI-P3 HDR - so what color volume at "100%" are they getting certified? The 65 nits we usually see in commercial cinemas? And again: if they want to proclaim that they are orienting themselves towards the mastering displays used - first, they are not, because they don't deliver those capabilities in the home user segment, and second, even mastering "targets" are shifting all over the place, with every new nits increase studios are willing to spend money on. Why? Because presumably they have some dope colorists who want to try their hands at being artists.
But let's now turn to the current episode of AVS Forum's Home Theater Geeks, and let me give you another summary of what's becoming harder and harder to swallow as a somewhat intelligent person watching this field.
You have four "industry influencers/calibrators/journalists" who don't know a thing about what they are talking about and instead exchange generalized set phrases to seem like they know what they are doing.
2017 LG OLEDs are supposed to show more black detail out of the box. According to the jaunty panel, that's because LG purposefully crushed black detail on their 2016 OLEDs to hide impurities coming out of black. None of them offers up any information on the gamma curves used. The entire industry has no concept at all of what gamma to stick to, thanks to BT.1886, and the "impurities" aren't specified in any way - also, you have some people on the panel indicating that they "knew this all along and it's great that the issue finally gets addressed".
Here is what happened for real. None of the people on the panel has probably ever looked at the curve that dictates what out-of-black looks like, because they are using Calman workflows and Calman tends to hide that setting stage from you - resulting in entirely botched display shootouts in the past, where all the panels on display were calibrated to different gamma targets.
None of them had noticed the near-black crush issue in the past. And none of them put it in their "reviews" of previous-generation panels.
None of them has played around with different gamma presets to see how they can adjust out-of-black (a pure power-law gamma of 2.4 is not what looks right, regardless of what BT.1886 tells you about "perfect black level" devices).
None of them had ever seen artefacting near black on the previous generation.
None of them is able to quantify it, because the dE 2000 formula tells us that all of those changes are "below the visible threshold" - which of course isn't true.
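To make that concrete, here is a minimal sketch (Python, using the colour-science package; the patch values are made up for illustration) of how dE 2000 waves a clearly visible near-black tint through as "below threshold":

```python
import numpy as np
import colour  # colour-science package

# Relative XYZ (display white Y = 1.0); both patches sit at ~0.1% of peak white.
XYZ_neutral = np.array([0.00095, 0.00100, 0.00109])  # neutral near-black
XYZ_tinted  = np.array([0.00135, 0.00100, 0.00080])  # same Y, visibly reddish

Lab_neutral = colour.XYZ_to_Lab(XYZ_neutral)  # D65 white point by default
Lab_tinted  = colour.XYZ_to_Lab(XYZ_tinted)

dE = colour.delta_E(Lab_neutral, Lab_tinted, method="CIE 2000")
print(f"dE 2000 = {dE:.2f}")  # roughly 2.3 - "invisible" by the usual dE < 3 rule

# The chromaticity shift that dE 2000 just waved through:
print(colour.XYZ_to_xy(XYZ_neutral))  # ~(0.313, 0.329) - sits on D65
print(colour.XYZ_to_xy(XYZ_tinted))   # ~(0.429, 0.317) - plainly red
```

Because the formula compresses chroma differences at very low luminance, a tint that jumps out in a dark room scores as "imperceptible". That is the whole problem with using dE 2000 as the referee near black.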
And none of them has been able to quantify it, because BT.1886 is constantly shifting targets around as well - so as an industry, you literally have to pry calibration monitors out of color correction facilities and look at the gamma levels they are set to.
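For reference, the BT.1886 EOTF is parameterized by each display's measured white and black level, so the near-black "target" genuinely moves from panel to panel. A minimal sketch (Python/NumPy; the luminance values are picked for illustration):

```python
import numpy as np

def bt1886(V, Lw=100.0, Lb=0.0, gamma=2.4):
    """ITU-R BT.1886 reference EOTF: L = a * max(V + b, 0) ** gamma,
    with a and b derived from the display's white (Lw) and black (Lb) levels."""
    a = (Lw ** (1 / gamma) - Lb ** (1 / gamma)) ** gamma
    b = Lb ** (1 / gamma) / (Lw ** (1 / gamma) - Lb ** (1 / gamma))
    return a * np.maximum(V + b, 0.0) ** gamma

V = np.array([0.02, 0.05, 0.10])  # near-black signal levels (~2, 5, 10 IRE)

# "Perfect black" OLED (Lb = 0): BT.1886 collapses to a plain 2.4 power law.
print(bt1886(V, Lw=100.0, Lb=0.0))   # ~[0.008, 0.075, 0.398] nits
# Typical LCD black (Lb = 0.05 nits): the same code values target far more light.
print(bt1886(V, Lw=100.0, Lb=0.05))  # ~[0.123, 0.310, 0.862] nits
```

Calibrate both displays "to BT.1886" and near-black comes out an order of magnitude apart at 2% signal. That is the shifting target in one picture, and why you would have to go look at what the mastering monitors are actually set to.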
"i noticed a slight green tint on the 2016 OLEDs" is driven by aggregated criticism, that the out of the box calibration on LG Oleds was botched - sometimes with no way to correct for it (Dolby Vision preset is locked down) -- and what Scott Wilkonson is doing there is called mirroring or pandering, depending on your standpoint. The "green tint" out of the box always was above dE 10 on most devices, so everyone with a keen eye was able to spot it instantly - even in showroom conditions, if they had to.
But then, Scott Wilkinson didn't, until it was thrown around in forums and he read it as part of criticism he hadn't expected. Probably. The good news is that this is all on tape, because they taped an entire calibration where "perfect, oh so great" was thrown around while Calman showed them dEs exceeding 15.
None of the bunch except Robert Heron seems interested in ColorChecker values with the new 3D LUT (meaning they haven't conceptualized at all what this is about); instead they are very, very grateful for being able to adjust the "brightness" slider more gradually.
For all I care, it could be taped to the TV, because you don't mess with "true black" once you've got it. No one cares if 3 or 7 IRE look "better" if true black is gone. More granularity in this slider is almost useless: if you want to adjust low-IRE values independent of the black level, you adjust the gamma curve. As there is no real target for gamma in BT.709 anyhow, you are practically free to do anything in there.
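A quick sketch of that point (Python; assuming a pure power-law transfer function, which is itself a choice, since BT.709 specifies no display-side gamma): the exponent moves the low-IRE levels while black stays pinned at zero, which is exactly why the gamma curve is the right tool and the black level slider is not.

```python
# Pure power law: L = V ** gamma (relative luminance, display peak = 1.0).
for gamma in (2.2, 2.3, 2.4):
    V = 0.05  # ~5 IRE
    print(f"gamma {gamma}: 5 IRE -> {V ** gamma:.5f}, 0 IRE -> {0.0 ** gamma:.5f}")
# gamma 2.2: 5 IRE -> 0.00137, 0 IRE -> 0.00000
# gamma 2.4: 5 IRE -> 0.00075, 0 IRE -> 0.00000
# Shadow output nearly doubles between 2.4 and 2.2; true black never moves.
```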
But the four celebrate a more granular brightness control as if it would bring any improvement - it doesn't. At all. At least not for people who can calibrate a gamma curve. And that those four aren't able to talk about how gamma looked on the LG 2017 models, we've already discussed.
The next highlight is how all of a sudden viewing angle stability has improved because of the filter layer on the new TVs (that's probably correct), and it is used again, without any attempt to objectify it, to divert from the fact that they have a hard time conceptualizing that there can be a green tint overall and a red tint on black at the same time - better to move in a plausible deniability excuse of "oh, it must have been the viewing angle at that point".
Oh, and by the way - the red tint on black on 2016 OLEDs? It is closer to neutral black (chroma-wise, in the CIE triangle) than on any other TV I've ever measured. Granted, other TVs usually tend to move "panel black" more towards blue than towards red - and that is no remedy if you can actually see the tint in a lit room (can you?) - but then, why has no one ever talked about those things in the past?
Answer: because according to the dE 2000 formula, none of those issues exist (they all fall below the visually perceptible threshold).
But the overarching point still stands: they get fed their list of "improvements" by the manufacturer, then proclaim that they have always known about the "issue" in the past, although none of them has ever written about it. (Presumably, but with a close-to-100% likelihood.)
SpectraCal trying to gain interpretive dominance over the "golden reference value" is at least an interesting move (because up until now, that value guaranteed that Dolby Vision had "secret knowledge" for calibrating a display, which came into being through private talks with a display manufacturer - so there was no comparability, and you couldn't look at the math and criticize it for what it was doing). So maybe that's a positive.
But then comes the part where three of them talk about being fed a 10000 nits test pattern (on OLEDs, right...) and being able to make out details up to 5000 nits (2016 model) or 7000 nits (2017 model), without realizing that both of those are transformations. Because none of the TVs can display content that bright.
The subsequent discussion where "clipping is bad" is so full of openly displayed ignorance that it physically pains me - I had to pause watching to write this. If you are watching a 10000 nits test pattern, guess what: it clips on an OLED. AND IT'S A FREAKING GOOD THING IT DOES - because if you try to cram in the bright details, you either have to do so non-uniformly (not according to the EOTF), or move all other colors away from target by so much that you finally get the entire "contrast range" mapped onto a display a tenth as bright.
So no, clipping is not bad, considering that that's what your eyes would do in the first place (until they accommodate) and that the TV's capabilities aren't there.
Yes, you probably want to scale a little before you clip (that's the "we saw details up to 5000 nits" (no, you didn't) or "7000 nits" (no, you didn't)), but at some point you probably want to clip. And the point you are choosing is "magic" - meaning either the Dolby "golden reference value" we can't look at, or whatever Calman tries to carve out of it as their "trademarked magic point" in the future.
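To put numbers on that, here is a minimal sketch (Python/NumPy) of a knee-then-clip tone map from a 10000 nits PQ signal onto a ~700 nits OLED. The PQ constants are from SMPTE ST 2084; the knee point and roll-off shape are made-up placeholders - exactly the kind of "magic" the golden reference value actually decides:

```python
import numpy as np

# SMPTE ST 2084 (PQ) EOTF constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(E):
    """PQ signal (0..1) -> absolute luminance in nits (0..10000)."""
    p = np.power(np.clip(E, 0.0, 1.0), 1 / M2)
    return 10000 * np.power(np.maximum(p - C1, 0.0) / (C2 - C3 * p), 1 / M1)

def tone_map(nits, peak=700.0, knee=0.75):
    """Track the EOTF 1:1 up to knee*peak, roll off above, hard-clip at peak."""
    k = knee * peak  # below this point we are EOTF-accurate
    rolled = k + (peak - k) * (1 - np.exp(-(nits - k) / (peak - k)))
    return np.minimum(np.where(nits <= k, nits, rolled), peak)

signal = np.array([0.5, 0.75, 0.9, 1.0])  # PQ code values from a test pattern
scene = pq_eotf(signal)                   # ~[92, 982, 3900, 10000] nits
print(tone_map(scene))                    # ~[92, 687, 700, 700] nits
# Everything above ~1500 nits lands within half a nit of the 700 nits clip.
```

"We saw detail up to 7000 nits" just means the roll-off hadn't quite flattened at the code values they were looking at - a transformation, not 7000 nits of light leaving the panel. And the gentler you make the roll-off to preserve that detail, the further you drag every mid-tone away from the EOTF.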
To then jump to "director's intent" when you are talking about peak brightness detail is laughable. Tell me that peak brightness detail ("you have to reach 7000 nits to appreciate my intent") is used as a stylistic device, and I will laugh at you. Currently it's used for effect - and if you can't see the sunspots on the sun directly after exiting a cave, "oh my": let's say the director most likely won't obsess over it when the movie gets screened on a Stewart Filmscreen at 65 nits tops.
Clipping high-brightness detail (above a reasonable threshold (= "planes shouldn't vanish out of the sky")) is what EVERYONE DOES. And if they don't, your entire scene's color representation gets out of whack quickly. The golden reference dictates where clipping starts and how aggressive it is in practice. Calman is trying to move in on that and take the dominance of interpretation away from Dolby. But on the panel we have four people who agree that they would rather see 5000 or 7000 nits detail on a 700 nits display, even if it means that the entire luminance range has to be compressed just to make that possible.
The idiocy is close to boundless.
And I'm not even speaking about the fact that current movies are mastered on 2000 and 4000 nits displays and then (ideally) go through a process where they are viewed again on consumer-grade devices to spot whether any "significant image detail" gets lost - but leave it to our panel of four morons (I think that's fair) to be impressed by a 10000 nits test pattern on some OLED TVs, and under the influence of that, proclaim that "clipping is always bad" - because that's what they learned with SDR.
Honestly, what do you do IF none of the journalists out there has the smarts to even fathom what they ought to be reporting on? You can't put them in school, because this stuff gets made up as we go along - and watching them try to deduce valuable information from set pieces made up by a marketing department is painful.
Heck, they apparently had hours to "calibrate those TVs" but were unable to come up with a gamma curve that shows us what black detail they actually saw.
And the only statement any one of those four would make about color accuracy was "that the ColorChecker colors were all below the visible threshold" - no numbers, no screenshots, nothing. So maybe, instead of just hawking the PR line of how much more granular this year's 3D LUT in some TVs is, show us some representation of that in action. You were let loose in a room of TVs with measuring equipment, and all you have to tell us are the same marketing lines the PR department fed you?
Another week, another "great" showcase of where this industry is lacking in transparency, knowledge, and the ability to trust the "influencers" (they talk about displays being perfect until there is a public consensus that they are not - then they jump on that without knowing what it indicates, or jump on the next improvement to a thing they hadn't noticed in the past - that's your "Home Theatre journalism of 2017"). Also, the sheer lack of knowledge about the stuff they are supposedly reporting on "objectively" is still outrageous.
And the same pundits then try to condense the information down even further to give people a sense of what to buy. It's a joke, really. In reality, they are happy for all the pointers manufacturers can give them to copy, because they wouldn't be able to navigate "what changed" without them. They aren't even able to sort or weigh those changes afterwards. (Heron at least tries - he is at least able to apply some knowledge about calibration to the aspects he is supposed to test - but the majority of "HiFi journalists" have seemingly long abandoned any notion of trying to understand what's going on.)
Wilkinson, for all that's worth, is still stuck in the cognitive dissonance loop of "last time they said it was perfect, and this time they said it was more perfect - I don't know what to tell my peers... haha... ha", which prompts the question: are you a panelist on a format like "The View", or are you an actual journalist who at some point isn't primarily "shocked" that marketing does its job? Get your act together. Please.