Improving Madvr HDR to SDR mapping for projector - Page 192 - AVS Forum | Home Theater Discussions And Reviews
post #5731 of 6935 Old 03-24-2019, 03:49 AM - Thread Starter
AVS Forum Special Member
 
Soulnight's Avatar
 
Join Date: Dec 2011
Location: Germany
Posts: 1,409
Mentioned: 216 Post(s)
Tagged: 0 Thread(s)
Quoted: 1044 Post(s)
Liked: 1646
Quote:
Originally Posted by Manni01 View Post
Hi Flo,

Sure but that was with measurements files, there is no real problem getting good results in that scene with measurements files, as reported in the other thread.

How would you try to resolve the issue with madVR in the live algo, until madVR starts to see into the future? Putting a non-zero value in the minimal scene duration causes other issues, so in the live algo I find that a zero value is usually a better compromise.
Minimum scene duration is evil, as we all found out with the tool in the past. That's why the tool now uses the concept of a "sandwich scene" instead.

Well, for the live algo, what if you just never cut, as a first try?

If you deactivate scene cuts in the live algo, madVR will still roll smoothly from one camera change to the next. Probably with a less ideal value than with knowledge of the future and a centered rolling average, but still better than hurting the director's intent of relative brightness every few seconds?

Plus, I thought madshi was going to look into the future very soon, and then the live algo will be better prepared to go smoothly from one camera cut to the next without resetting the target nits.
stevenjw and Manni01 like this.

Last edited by Soulnight; 03-24-2019 at 03:54 AM.
Soulnight is offline  
post #5732 of 6935 Old 03-24-2019, 03:57 AM
AVS Forum Special Member
 
Manni01's Avatar
 
Join Date: Sep 2008
Posts: 9,128
Mentioned: 331 Post(s)
Tagged: 0 Thread(s)
Quoted: 5452 Post(s)
Liked: 5656
Quote:
Originally Posted by Soulnight View Post
Minimum scene duration is evil as we all found out with the tool in the past. That's why we now use in the tool the concept of sandwich scene instead.

Well, for the live algo, what if you just never cut in a first try?

MadVR will still roll smoothly from one camera change to the next. Probably with a less ideal value than with knowing the future and centered rolling avg, but still better than hurting the director intent of relative brightness every few seconds?

Plus, I thought madshi was going to look in the future very soon, and the live algo will be better prepared to smoothly go from one camera cut to the next without resetting the target nits.
Sure, madVR will look into the future at some point (I've already maxed my CPU and GPU queues to the max acceptable value in anticipation!) but we don't know when that will land...

I'm not sure I understand which setting you're suggesting I change to "never cut in a first try". Let's stick to the live algo parameters in this thread, because referencing the measurements tool here is confusing.

If I keep the min scene duration at zero (as we agree it's not good to use with the live algo), which setting are you suggesting I change to address this issue in The Meg with the live algo?

Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders
Manni01 is offline  
post #5733 of 6935 Old 03-24-2019, 04:06 AM - Thread Starter
Quote:
Originally Posted by Manni01 View Post
Sure, madVR will look into the future at some point (I've already maxed my CPU and GPU queues to the max acceptable value in anticipation!) but we don't know when that will land...

I'm not sure I understand which setting you suggest to change to "never cut in a first try". Let's stick to the live algo parameters in this thread because it's confusing to reference the measurements tool here.

If I keep the min scene duration to zero as we agree it's not good to use it with the live algo, which setting are you suggesting to change to address this issue in The Meg with the live algo?
Can you deactivate target reset at each scene cut in the live algo? That's what I was proposing.

Also madVR live algo could look at a threshold of Fall change.

If Fall change from frame (i-1) and frame (i) is less than a factor 5 (just an example), do not cut, otherwise cut and reset the target.

We need to move away completely from resetting the target at each camera change and be very selective about it, in order to respect relative brightness between consecutive frames as often as possible/needed, to respect the director's intent and avoid issues like the little girl in The Meg.
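The FALL-ratio rule proposed above could be sketched like this (a minimal illustration; `should_reset_target` and the factor-5 default are assumptions from this post, not anything madVR actually exposes):

```python
def should_reset_target(fall_prev, fall_curr, ratio_threshold=5.0):
    """Decide whether a camera change is a 'real' cut.

    Compare the frame average light level (FALL) across the boundary;
    only reset the target nits when the ratio exceeds the threshold
    (factor 5 here, per the post -- the exact value is an open question).
    """
    if fall_prev <= 0 or fall_curr <= 0:
        return True  # degenerate frame: fall back to resetting
    ratio = max(fall_prev, fall_curr) / min(fall_prev, fall_curr)
    return ratio >= ratio_threshold
```

Below the threshold the algo would keep rolling smoothly; above it, it would cut and reset the target.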
Manni01 likes this.
Soulnight is offline  
post #5734 of 6935 Old 03-24-2019, 04:29 AM - Thread Starter
We should really distinguish the 2 logics within the LIVE algo:

1) tone mapping logic to frame peak.

Using a different peak while keeping the same target nits only slightly changes the gamma curve and the knee start.
It does not (heavily) change the global picture brightness.

This can be reset at each camera cut since it does not change global brightness.

Here we can use the great scene detection algo(s) that madVR now has built in to reset the tone mapping peak at each camera cut and optimize picture quality.

2) Dynamic target nits

The target nits is directly controlling the frame brightness.

Changing the brightness at each camera cut during what a human considers a single "chapter" can hurt the director's intent heavily (see the little girl example in The Meg).

This cannot be reset at each camera change. It should only be reset when the "chapter" is over (if needed).

Smooth variation during a chapter is welcome, however.

Here we can use the variable "target change speed logic" implemented in the live algo to never change too quickly, so that the change is invisible to the naked eye.
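As a sketch of point 2, a speed-limited target adjustment could look like the following (hypothetical function name and nits-per-frame limit; madVR's real speed parameters are configured differently):

```python
def step_target_nits(current, ideal, max_change_per_frame=0.5):
    """Move the dynamic target nits toward the ideal value, but never
    faster than max_change_per_frame (nits per frame), so that the
    brightness drift within a 'chapter' stays invisible."""
    delta = ideal - current
    if abs(delta) <= max_change_per_frame:
        return ideal
    return current + max_change_per_frame * (1 if delta > 0 else -1)
```

Called once per frame, this converges on the ideal target while capping the visible rate of change.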
Manni01 likes this.

Last edited by Soulnight; 03-24-2019 at 04:34 AM.
Soulnight is offline  
post #5735 of 6935 Old 03-24-2019, 04:38 AM
AVS Forum Special Member
 
Neo-XP's Avatar
 
Join Date: Jun 2018
Location: Switzerland
Posts: 1,017
Mentioned: 155 Post(s)
Tagged: 0 Thread(s)
Quoted: 723 Post(s)
Liked: 921
Quote:
Originally Posted by Soulnight View Post
No, you don't need a new formula.
I'm not sure. The calculated "ideal" target is making the first image too dark here for me.

It shouldn't be that dark in the first place IMO, even taken out of context.

Quote:
Originally Posted by Soulnight View Post
The solution is to "not reset" the target nits when it should not be resetted.
Sometimes there is more than one solution. Just because we haven't found it doesn't mean it doesn't exist.

Quote:
Originally Posted by Soulnight View Post
Starting with a new target nits at each single camera cut goes clearly against the director's intent, since you lose the correct relative brightness.
It doesn't necessarily have to be (?). But it has to be, for very rare cases like the one from The Meg, with the current FALL/BT2390 formula implementations.

Quote:
Originally Posted by Soulnight View Post
It's ok to cut but only when the content is very different.
How do you know it's very different content? Only based on FALL and avgFALL changes? You will probably end up merging everything just to get one sequence right in the end.

The content could be very different and part of the same "chapter".

Quote:
Originally Posted by Soulnight View Post
See example from @Fer15 who got the proper relative brightness within the whole scene when it did not cut at each scene and simply rolled:
It worked for this scene, only because everything is merged and the scenes are very short. It wasn't working for Fer15 with a rolling average of 120 in the first place.

Merging scenes like this causes a lot of visible brightness adaptation with other titles, even if the adaptation is slow. Not the ideal solution for me.
Neo-XP is online now  
post #5736 of 6935 Old 03-24-2019, 05:03 AM - Thread Starter
Quote:
Originally Posted by Neo-XP View Post
I'm not sure. The calculated "ideal" target is making the first image too dark here for me.

It shouldn't be that dark in the first place IMO, even taken out of context.



Sometimes, there is more than one solution It is not because we didn't find it that it doesn't exist.



It doesn't necessarily have to be (?). But it has to be, for very rare cases like the one from The Meg, with the current FALL/BT2390 formulas implementations.



How do you know it's a very different content? Only based on FALL and avgFALL changes? You will probably end up merging everything just to get one sequence right in the end.

The content could be very different and part of the same "chapter".



It worked for this scene, only because everything is merged and the scenes are very short. It wasn't working for Fer15 with a rolling average of 120 in the first place.

Merging scenes like this causes a lot of visible brightness adaptation with other titles, even if the adaptation is slow. Not the ideal solution for me.
You're mixing everything together.

1) yes, sure, we can always find a better formula. Agree with you here.

Or maybe something smarter based on a neural network which recognizes what a sky is and what a person is. It could then put a higher weighting factor for the target nits calculation on the person than on the sky?

2) If you are still in the same "chapter" of people talking back and forth and you reset the target nits to a very different target/brightness at each camera cut, you do hurt the director's intent every single time. This is also the issue with your example from Lucy. You said the target nits should be close together at the cut; I say you are right, and ideally the target should even be identical on both sides of the camera change to respect relative brightness.

3) How do you know whether it belongs together is the key question. But just because the answer is not obvious doesn't make cutting and resetting the target nits at each camera change any more acceptable.

But I do believe that looking at a max change in Fall could work well as I said before. A factor 5 maybe. If less than that, roll. Otherwise cut.

Or maybe again we could train a neural network by telling it what we consider a "real cut" and what is not, and let it do the recognition for us

4) If it works for this very difficult scene in the Meg, it can work anywhere else.

In the live algo you have the speed limit to ensure you don't see brightness adaptation. There should therefore be no visible adaptation. Or are you saying you still see brightness adaptation with your settings for max speed change in the live algo?

In the tool, you need to use at least 240 frames for the same reason so that the change is not visible. It's a similar control to the speed change.

But merging those scenes has nothing to do with the "rolling avg duration" in the tool. It only has to do with the "chapter merge" and "scene merge", which both look only at the relative change in FALL value (something the live algo could easily do).

5) There is no reason why we should see brightness adaptation if we control the maximum speed change in the Live algo.

Same with the tool.
Manni01 and Neo-XP like this.

Last edited by Soulnight; 03-24-2019 at 05:17 AM.
Soulnight is offline  
post #5737 of 6935 Old 03-24-2019, 05:13 AM
I agree with all the above.

So with the *currently available* settings in the live algo, how do you tune the maximum speed change to avoid the situation in The Meg with the little girl?

Thanks to Neo-XP, we've established that these values should be linked to the peak nits (hopefully madshi will do this automatically in a future build), so if we take my peak of 115 nits, how do you suggest we specify the speed values at each level?

My settings were fairly close to Neo-XP's, although adapted to 115 nits. Maybe that's where we're wrong. Do you have a better suggestion for these speed settings specifically?

Please be specific or explain how this works, because I might have misunderstood the way to set these.

I'll be happy to try your suggestion and report back.

Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders
Manni01 is offline  
post #5738 of 6935 Old 03-24-2019, 05:20 AM - Thread Starter
Quote:
Originally Posted by Manni01 View Post
I agree with all the above.

So with the *currently available* settings in the live algo, how do you tune the maximum speed change to avoid the situation in The Meg with the little girl?

Thanks to Neo-XP, we've established that these values should be linked to the peak nits (hopefully madshi will do this automatically in a future build), so if we take my peak nits of 115nits, how do you suggest with specify the speed values at each level?

My settings were fairly close to Neo-XP's although adapted to 115nits. Maybe that's where we're wrong. Do you have a better suggestion for these speed settings specifically?

Please be specific or explain how this work, because I might have misunderstood the way to set these.

I'll be happy to try your suggestion and report back.
The max speed change only works within what the madVR live algo calls a scene. At each scene cut, it does an instantaneous target nits adaptation by itself.

So first, we need to find a good way to deactivate target reset within this "chapter" with the live algo. Then we can work on the max speed adaptation.
Manni01 likes this.
Soulnight is offline  
post #5739 of 6935 Old 03-24-2019, 05:30 AM
Okay, so we agree there is no fix for these situations at the moment with the current implementation in the live algo.

Can't wait for madVR to be able to "look into the future" with the live algo. Although I'm not sure a few seconds will be enough to handle these situations. It will help with very short gunshots, flares, etc., but if a shot is longer than the GPU queue, I'm not sure how that will help madVR avoid the unwanted target change.

Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders

Last edited by Manni01; 03-24-2019 at 05:40 AM.
Manni01 is offline  
post #5740 of 6935 Old 03-24-2019, 05:35 AM - Thread Starter
Quote:
Originally Posted by Manni01 View Post
Okay, so we agree there is no fix for these situations at the moment with the current implementation in the live algo.
Not sure.

What if you put some high value in "scene threshold" 1 & 2? That should cut much less often, no?
Soulnight is offline  
post #5741 of 6935 Old 03-24-2019, 05:39 AM
Quote:
Originally Posted by Soulnight View Post
Not sure.

What if you put some high value in the "scene threshold " 1 & 2 ? This should cut much less often, no?
Sure, it might help with this scene, but then we create other situations where we do want a target change and it's missed.

Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders
Manni01 is offline  
post #5742 of 6935 Old 03-24-2019, 06:00 AM
Quote:
Originally Posted by Soulnight View Post
1) yes, sure, we can always find a better formula. Agree with you here.

Or maybe something smarter based on a neural network which recognizes what a sky is and and what a person is. It could then put a higher weighting factor for the target nits calculation on the person that on the sky?
Without going too crazy with a neural network, for those kinds of cases I noticed there is always a big brightness curve/peak at the end of the luminance graph, which is separated from the rest of the other pixels.

It happens with The Meg's famous scene, and with this sequence from Lucy too:


Lucy - Frames 116574 & 116575

If we can find a way to ignore this separated curve/peak somehow, it should already improve a lot, probably completely fix the issue for me.

Quote:
Originally Posted by Soulnight View Post
2) If you are still in the same "chapter" of people talking back and forth together and you reset the target nits to a very different target/ brightness at each camera cut, you do hurt the director intend every single time. This is also the same issue for your example in Lucy. You said the target nits nits should be close together at the cut and I say that you are right, the target should be even ideally identical on both side of the camera change to respect relative brightness.
Maybe not the same, but definitely closer (maybe the same, I don't know for sure).

Quote:
Originally Posted by Soulnight View Post
3) How do you know it belongs together or does not is the key question. But it's not because the answer is not obvious that it makes cutting and resetting the target nits at each camera change any more acceptable.
Quote:
Originally Posted by Soulnight View Post
But I do believe that looking at a max change in Fall could work well as I said before. A factor 5 maybe. If less than that, roll. Otherwise cut.
The sequence from The Meg already needs a FALL factor of ~8, so probably a factor of 10 would be "safe".

Quote:
Originally Posted by Soulnight View Post
4) If it works for this very difficult scene in the Meg, it can work anywhere else.
Yes, it should

Quote:
Originally Posted by Soulnight View Post
In the live algo you have the speed limit to ensure you don't see brightness adaptation. There should therefore be no visible adaptation or are you saying you still see brightness adaptation with your settings in the live algo for max speed change?
Not with my settings, but if I don't reset at every cut, it's awful.

I have to decrease the brightness speeds a lot not to notice the brightness adaptation, and then it doesn't work when fast brightness adaptations are needed (BvS scene for instance at 02:30:55).

Quote:
Originally Posted by Soulnight View Post
5) There is no reason why we should see brightness adaptation if we control the maximum speed change in the Live algo.

Same with the tool.
For the live algo, unfortunately, if you make it slow enough not to notice the brightness adaptations caused by merging the scenes, you will also go against the director's intent by displaying images at the wrong target for too long (like the BvS scene mentioned above, which is so dark you can't see anything for a few seconds).

I'm sure you or madshi will find something to fix this little remaining issue. It has already improved a lot with the FALL algo compared to the first "Flo" algo.

Time to watch some HDR movie
Manni01, Soulnight and Fer15 like this.
Neo-XP is online now  
post #5743 of 6935 Old 03-24-2019, 06:22 AM - Thread Starter
Quote:
Originally Posted by Neo-XP View Post
Without going too crazy with a neural network , for that kind of cases, I noticed there is always a big brightness curve/peak at the end of the luminance graph, which is separated from the rest of the other pixels.

It happens with The Meg's famous scene, and with this sequence from Lucy too:


Lucy - Frames 116574 & 116575

If we can find a way to ignore this separated curve/peak somehow, it should already improve a lot, probably completely fix the issue for me.
You are right.
In this example, but also in your first Lucy example in the hotel, the sky through the window in the 2nd frame is the reason why the target goes so high.

Same in The Meg... and when the sky is gone, as for the little girl, the FALL and thus the target drop a lot.

I already thought of something like that in the past but did not have time to try it yet. But I like the idea

The idea would be similar to the algo for our smart dynamic clipping with highlights knee detection.

We would have to detect the nits level which separates the bright area (most often sky) from the rest of the picture. We could then give the two areas different weighting factors in the calculation of this "FALL-feeling" value.

Maybe we also need a threshold so that we don't apply a low weighting factor to, say, 90% of the picture if the sky takes 90% of the picture. Maybe limit the lower weighting factor to a maximum of 50% of the picture.

At the same time, we still need to consider "enough" of the sky so that it does not get overblown.


Also, everything below 100 nits should keep a normal weighting factor of 1. So we would have to limit this weighting logic to the highlights above 100 nits.
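A rough sketch of this weighted "FALL-feeling" average, under the assumptions stated above (weight 1 below 100 nits, a lower weight for highlights, at most 50% of the pixels down-weighted); the function name and the 0.25 highlight weight are illustrative only:

```python
def weighted_fall(nits, knee=100.0, highlight_weight=0.25,
                  max_downweighted_frac=0.5):
    """Weighted average luminance where pixels above the knee (100 nits
    here, per the post) are down-weighted, but at most half the picture
    may be down-weighted so a dominant sky is not ignored entirely."""
    bright = sorted((v for v in nits if v > knee), reverse=True)
    dark = [v for v in nits if v <= knee]
    limit = int(len(nits) * max_downweighted_frac)
    downweighted = bright[:limit]        # brightest pixels, up to the cap
    normal = dark + bright[limit:]       # everything else keeps weight 1
    total_w = len(normal) + highlight_weight * len(downweighted)
    total = sum(normal) + highlight_weight * sum(downweighted)
    return total / total_w if total_w else 0.0
```

With a frame that is 80% shadows at 50 nits and 20% sky at 2000 nits, the weighted average lands far below the plain FALL, which is the stabilizing effect being discussed.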

----

But for the crazier idea... I am pretty sure there is already some efficient neural network available somewhere which recognizes a person from the background, or even the sky.
It may be easier to implement than we think if one is already available.
Manni01 and Neo-XP like this.

Last edited by Soulnight; 03-24-2019 at 06:29 AM.
Soulnight is offline  
post #5744 of 6935 Old 03-24-2019, 06:37 AM - Thread Starter
I directly found something when I googled "sky detection" which could help handle the sky brightness differently from the rest and get a more stable target nits as a result.

https://pdfs.semanticscholar.org/b6a...604.1553434288

"Sky mask"
https://www.semanticscholar.org/pape...06110/figure/0


Or here:

https://www.google.com/amp/s/www.res...l_Networks/amp

Quote:
Nowadays, the wide range of approaches for sky segmentation and horizon line detection are mainly based on three different methods. Methods based on the classification of sky and non-sky regions of the image using machine learning techniques [3], [4], [5], [6],[7], [8], [9], [10], methods based on edge detection techniques, [11], [12], [13], [14], and methods based on pixel classification for belonging to the horizon line [15]. Todorovic et al. [3] proposed a Hidden Markov Tree model trained with Expectation Maximization using color and texture features, giving acceptable results for real-time constrains (120 ms per frame of 640 × 480 on an Athlon 1.8 GHz). ...
Manni01, Fer15 and Neo-XP like this.

Last edited by Soulnight; 03-24-2019 at 06:50 AM.
Soulnight is offline  
post #5745 of 6935 Old 03-24-2019, 06:42 AM
Senior Member
 
Join Date: Jan 2018
Posts: 364
Mentioned: 71 Post(s)
Tagged: 0 Thread(s)
Quoted: 203 Post(s)
Liked: 245
Yeah, the sky is the big problem in The Meg

The picture of the guy has a frame FALL of approximately 833 with a measured peak around 2600 in the live algo

All of that brightness is in the sky: there is a large zone with a constant 2000 nits or so.







By contrast, the picture of the girl has a frame FALL of approximately 116 with a measured peak around 4000 in the live algo






That 4000+ peak is the reflection of light on the hair clip (if you zoom in on the scopes, there's a faint part that goes above 4000 nits, just above 896 in scope units). There are other faint traces around the 768 mark (1000 nits), but most of the image is under 1000 nits.
Manni01 and Soulnight like this.
Fer15 is offline  
post #5746 of 6935 Old 03-24-2019, 07:40 AM - Thread Starter
Also another idea for the Live algo to jump less at a camera cut would be to only PARTIALLY reset the target nits if the Fall change is not crazy big (less than factor 10 for example).

To take the example of the Meg little girl scene/ chapter:

When we have sky in the background, the FALL/BT2390 algo can ask for 1500 to 2000 nits, and then the little girl scene only asks for around 500 nits.

What we could do is say "ok, camera cut detected, and the target nits should now go down, so let's reset the value" to:
max(ideal calculated target nits of frame i, 75% of the target nits of frame i-1).

In our case, we would go from the scene with the background sky at, let's say, 1500 to the little girl at max(500, 0.75×1500) = 0.75×1500 = 1125 nits

And then madVR live algo could roll from there.

Same thing in reverse when we go back up.

This would better respect the director's intent of relative brightness by jumping less, while still making life easier for the LIVE algo than completely rolling over the cut.

Combined with a sky "fix", this should make things much better!

Edit:
And if we create a separate parameter for this "partial reset strength", we can vary it anywhere
from:
0%, which would be a complete reset, entirely disregarding the scene before the camera cut,

to:
100%, which would basically ignore the camera cut entirely and just roll.

This really seems like a nice parameter to have as a compromise between the ideal target nits and respecting the relative brightness of the director's intent.
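The partial-reset idea could be sketched as follows (hypothetical helper; the mirrored upward rule, dividing by the strength, is my assumption since the post only works through the downward example):

```python
def partial_reset_target(prev_target, ideal_target, strength=0.75):
    """Partially reset the target nits at a detected camera cut.

    strength = 0.0 -> full reset to the ideal target (current behaviour)
    strength = 1.0 -> ignore the cut and keep rolling from prev_target
    In between, the jump is clamped relative to the previous target.
    """
    if strength <= 0.0:
        return ideal_target          # 0%: full reset
    if strength >= 1.0:
        return prev_target           # 100%: no reset at all
    if ideal_target < prev_target:   # brightness should go down
        return max(ideal_target, strength * prev_target)
    return min(ideal_target, prev_target / strength)  # going back up
```

For the post's example, `partial_reset_target(1500, 500)` gives 1125 nits, and the live algo's speed limit would roll the rest of the way from there.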
Manni01 likes this.

Last edited by Soulnight; 03-24-2019 at 08:26 AM.
Soulnight is offline  
post #5747 of 6935 Old 03-24-2019, 08:06 AM
Not skies, but same problem:


2001 - From 00:41:06 to 00:45:35
Manni01 and Fer15 like this.
Neo-XP is online now  
post #5748 of 6935 Old 03-24-2019, 08:19 AM - Thread Starter
Quote:
Originally Posted by Neo-XP View Post
Not skies, but same problem:
Yes, another good example.
It should be no different than a sky or a window to detect with the histogram.

Last edited by Soulnight; 03-24-2019 at 08:27 AM.
Soulnight is offline  
post #5749 of 6935 Old 03-24-2019, 09:36 AM - Thread Starter
Maybe this can help us differentiate what a scene is compared to what a shot is:

Quote:
Video scene detection is the task of temporally dividing a video into semantic scenes. A scene is defined as a series of consecutive shots which depicts some high-level concept or story (where each shot is a series of frames taken from the same camera at the same time). In Hollywood films, for example, different scenes may consist of a car chase scene or a relaxed picnic on a beach. In a talk show or news broadcast, a scene might be defined by a specific topic that was discussed.
Quote:
The method first performs shot boundary detection to detect the shots in the video. Then we extract an audio and visual representation for each shot, and perform optimal sequential grouping of the intermediate-fusion of this multimodal representation.
https://www.research.ibm.com/haifa/projects/imt/video/

https://www.research.ibm.com/haifa/p..._DataSet.shtml

https://www.ibm.com/blogs/research/2...ene-detection/

So the madVR live algo's current "scene detection" is actually a "shot detection" algo instead.

And what I called a "chapter" (a collection of scenes which belong together) should actually be called a "scene" instead.

And there is quite a lot of literature online for both.

But shot detection seems much easier than real scene detection. And I believe real scene detection can only be performed before the movie starts, by analyzing the full movie to regroup shots with the same semantics.
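For contrast, the "easy" half, shot (camera cut) detection, is commonly done with a per-frame luminance histogram difference; a toy sketch (the bin count and threshold are arbitrary choices for illustration, not what madVR uses):

```python
def luma_histogram(frame_lum, bins=16, max_val=255):
    """Normalised luminance histogram of one frame (values 0..max_val)."""
    h = [0] * bins
    for v in frame_lum:
        h[min(bins - 1, v * bins // (max_val + 1))] += 1
    n = len(frame_lum)
    return [c / n for c in h]

def is_shot_boundary(prev_lum, curr_lum, threshold=0.5):
    """Classic shot-boundary test: L1 distance between the normalised
    luminance histograms of two consecutive frames."""
    hp, hc = luma_histogram(prev_lum), luma_histogram(curr_lum)
    diff = sum(abs(a - b) for a, b in zip(hp, hc))
    return diff > threshold
```

Grouping those shots into semantic scenes is the genuinely hard part, which is why the literature linked above treats them as separate problems.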
Manni01 and Fer15 like this.

Last edited by Soulnight; 03-24-2019 at 10:20 AM.
Soulnight is offline  
post #5750 of 6935 Old 03-24-2019, 10:29 AM
Quote:
Originally Posted by Soulnight View Post
Maybe this can help us differentiate what a scene is compared to what a shot is:





https://www.research.ibm.com/haifa/projects/imt/video/

https://www.research.ibm.com/haifa/p..._DataSet.shtml

https://www.ibm.com/blogs/research/2...ene-detection/

So madVR live algo current "scene detection" is actually a "shot detection " algo instead.

And what I called "chapter" as a collection of scene which belongs together should actually be called "scenes" instead.

And there is quite a lot of litterature online for both.

But shot detection seems much easier than real scene detection. And I believe real scene detection can only be performed before movie start when analyzing the full movie to regroup shots with the same semantics together.



I don't know if either of these would also be of any use (I came across them a while ago)? One of them has over 800 Google Scholar citations (the other has 74). They also refer to what you classify as scene detection (both links call it segmentation), and give an overview of the different methods in the literature at the time of publication.

https://www.researchgate.net/profile...7bf4dacbed.pdf

http://citeseerx.ist.psu.edu/viewdoc...=rep1&type=pdf
Manni01, Colozeus and Neo-XP like this.

Last edited by Fer15; 03-24-2019 at 11:55 AM.
Fer15 is offline  
post #5751 of 6935 Old 03-25-2019, 04:57 AM
Advanced Member
 
ddgdl's Avatar
 
Join Date: Mar 2007
Posts: 623
Mentioned: 6 Post(s)
Tagged: 0 Thread(s)
Quoted: 337 Post(s)
Liked: 311
Poor madshi goes away (or works on other projects) for a few days, and when he comes back he will be buried in requests for neural network sky and scene detection 😛
ddgdl is offline  
post #5752 of 6935 Old 03-25-2019, 05:27 AM
Batch Utility V3.31 with support for measurements files in BD Folders

[EDIT: This version has been replaced by V4.0. Please go here for more details and to download the latest version.]

As we can now get full UHD Bluray menu support with jRiver, I've started ripping my discs to BD Folders.
[EDIT: to specifically measure BD Folders, I recommend using @3ll3d00d 's utility posted here. I'll update my batch utility when I have a chance.]

I'm not ready to lose the improvements brought by measurements files (yet!) so I've done some work to get these supported, as pandm1965's tool doesn't support network shares.

It's a work in progress, as for now measurements files are only detected when playing the main movie in jRiver, not when using the menu. But I'm hoping that @madshi can fix this at some point. [EDIT: it looks like Nevcairiel has made some changes in LAV that should help with this]

At least this allows us to not have to rip to both BD Folders to get menus and mkvs to get measurements files.

I'll explain how it works so that others can implement in their tools or do it manually if they wish [below EDITED following debugging with jRiver]:

1) It's pointless to measure all the .m2ts files in the STREAM folder: it's unnecessary and time-consuming.
2) The file that needs to be measured is the main .mpls file in the PLAYLIST folder. This is the case for all players (tested with jRiver and MPC-BE).
3) However, the file name varies from title to title, even if it's often one of a few common ones (00800.mpls, for example).
4) To get around this, one can measure the index.bdmv file in the BDMV folder. This will create a measurements file equal to the one you'd get measuring the main playlist directly.
5) Then you have to copy the index.bdmv.measurements file into the PLAYLIST folder [EDIT: done automatically from V3.31], and rename it to match the main playlist (00800.mpls.measurements in our example).
6) To identify the main playlist, set your player to display the whole path in the title bar or seek bar and drop the whole folder in it. That way, you can see which file is the main playlist.
7) With some titles (those with different versions of the movie, TV series, etc.) you might have to measure more than one .mpls file manually; if you just use index.bdmv you will usually only get the longest one (so the extended version, not the theatrical cut). The batch file doesn't handle this yet. [EDIT: now possible, see point 9 below].
8) Different players will occasionally identify different playlists as the main playlist. For example, on American Assassin, parsing index.bdmv will lead MPC-BE to detect 00802.mpls as the main playlist, but jRiver will play 00801.mpls. In that case, index.bdmv.measurements should be copied to 00802.mpls.measurements and you have to remeasure 00801.mpls manually to create the necessary 00801.mpls.measurements.
9) For this reason, I've added in V3.31 the possibility to use a fast mode that only measures the main .mpls, and a slower but automatic mode that measures all the .mpls files. That mode works with all titles (TV series, more than one version of the film, etc.) but it can increase measurement time considerably (think one hour or more per title instead of 15 minutes in fast/manual mode). Note that until madVR supports measurements files when using the menus, measurements won't be used when using menus to select the file to play in jRiver.
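Steps 4 and 5 above can be sketched as follows. This is a minimal illustration, not part of the batch utility itself; the folder layout follows the standard BDMV structure, and the playlist name (e.g. 00800.mpls) is an example you'd determine per title.

```python
# Sketch of steps 4-5: copy the measurements file produced for index.bdmv
# into the PLAYLIST folder, renamed to match the main playlist.
# Paths and the playlist name are examples, not fixed values.
import shutil
from pathlib import Path

def publish_measurements(bdmv_root: str, main_playlist: str) -> Path:
    """Copy BDMV/index.bdmv.measurements to BDMV/PLAYLIST/<main_playlist>.measurements."""
    root = Path(bdmv_root)
    src = root / "BDMV" / "index.bdmv.measurements"
    dst = root / "BDMV" / "PLAYLIST" / (main_playlist + ".measurements")
    shutil.copyfile(src, dst)
    return dst
```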

If Anna and Flo were kind enough to add .mpls support to the filetypes they recognize for optimization (bdmv doesn't seem to be necessary), we would be able to optimise these measurements files as well (I'll post in the other thread to suggest a few things to @Soulnight ).

I've added another feature in V3.20 of the batch file: if you provide the registry files, it can now switch automatically to D3D11 native for better performance during measurements, and back to D3D11 copyback (if you want to) to restore software-only features such as black bars detection in madVR or UHD Bluray menus in jRiver, as these don't work with native; they need copyback.

Please don't ask questions about this batch file here in order not to derail the thread. This is provided as is, to be used at your own risk. It should be very straightforward to adapt for anyone familiar with batch files. If you're not, then it's not a tool for you.

[EDIT: I updated the file to V3.21 as V3.20 was measuring unnecessary .bdmv files in some titles]
[EDIT: I updated the file to V3.30 to add the fast/slow option for BD Folders measurements and also deleted an unnecessary PAUSE command that was only there for testing.]
[EDIT: I updated the file to V3.31 to correct the scan of files in the BACKUP folder, automatically copy the index.bdmv.measurements file to the PLAYLIST folder, and automatically create an empty index.bdmv.measurements file in the BDMV folder to prevent re-measuring when the folder is scanned next time. I also deleted the Auto Fast/Slow variable so that you can use one method or the other with different shares, for example index.bdmv only with movies and all .mpls files with TV series. Examples are provided in the batch file to illustrate this.]

Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders

Last edited by Manni01; 05-15-2019 at 04:36 PM.
Manni01 is offline  
post #5753 of 6935 Old 03-25-2019, 11:10 AM
AVS Forum Special Member
 
Manni01's Avatar
 
Join Date: Sep 2008
Posts: 9,128
Mentioned: 331 Post(s)
Tagged: 0 Thread(s)
Quoted: 5452 Post(s)
Liked: 5656
I edited my explanations in the post above following some debugging with jRiver. It looked like index.bdmv was needed, but it's not unless you use additional rules (not sure if it's a bug or not, discussing with Nevcairiel at the moment).

[EDIT: it is a bug, it will be corrected in an upcoming version of MC25, hopefully the same one that should give madshi the ability to detect a playlist name change when using menus, which hopefully will enable measurements files with full menus].

It looks like index.bdmv is only needed as a proxy to automate the file measurements, but the only files for which the measurements need to be present during playback are the .mpls ones.

I'm going to post a new version soon with two options: a (very) slow but automatic one that measures all the .mpls files, which will be useful for TV series and titles with more than one version of the film, and a fast but manual one that measures only the main playlist (as detected when parsing index.bdmv) but requires a manual step to copy it into the PLAYLIST folder with the correct name.
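The "slow but automatic" mode boils down to enumerating every playlist in the PLAYLIST folder and measuring each one that doesn't already have a measurements file next to it. A minimal sketch of that enumeration step (folder layout per the standard BDMV structure; file names are examples):

```python
# Sketch of the slow/automatic mode: list every .mpls file in the
# PLAYLIST folder that still needs a measurements file.
from pathlib import Path

def playlists_to_measure(bdmv_root: str) -> list[str]:
    playlist_dir = Path(bdmv_root) / "BDMV" / "PLAYLIST"
    # Skip playlists that already have a .measurements file alongside them.
    return sorted(
        p.name for p in playlist_dir.glob("*.mpls")
        if not p.with_name(p.name + ".measurements").exists()
    )
```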

[EDIT: As promised, V3.30 uploaded in the link above to add a FAST manual / SLOW automatic option for measuring BD Folders.]
SamuriHL and Neo-XP like this.

Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders

Last edited by Manni01; 03-26-2019 at 05:10 AM.
Manni01 is offline  
post #5754 of 6935 Old 03-25-2019, 12:31 PM
Member
 
Join Date: Sep 2018
Posts: 49
Mentioned: 8 Post(s)
Tagged: 0 Thread(s)
Quoted: 27 Post(s)
Liked: 14
With MPC-BE, when you play an ISO file, if MPC-BE is configured to show the "full path" in the title bar text, you can see which .mpls it plays. So you can (I do) script this to retrieve the name and create the right measurements file. To save it, I rename it to: name of iso file[name of mpls file].measurements, and only have to rename it just before playing it (obviously, everything is automatic).
xxxx5 is offline  
post #5755 of 6935 Old 03-25-2019, 01:42 PM
AVS Forum Special Member
 
Manni01's Avatar
 
Join Date: Sep 2008
Posts: 9,128
Mentioned: 331 Post(s)
Tagged: 0 Thread(s)
Quoted: 5452 Post(s)
Liked: 5656
Quote:
Originally Posted by xxxx5 View Post
With MPC-BE, when you play an ISO file, if MPC-BE is configured to show the "full path" in the title bar text, you can see which .mpls it plays. So you can (I do) script this to retrieve the name and create the right measurements file. To save it, I rename it to: name of iso file[name of mpls file].measurements, and only have to rename it just before playing it (obviously, everything is automatic).
Yes, that's what I mention in point 6 above: set your player to display full path to identify the main playlist

As I said though, two players may not identify the same main playlist.

Please could you post your script that automatically retrieves the correct playlist name to create the correct measurement file? I haven't found a way to automate this apart from measuring index.bdmv and renaming manually.

Thanks!

Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders
Manni01 is offline  
post #5756 of 6935 Old 03-26-2019, 12:43 AM
Member
 
Join Date: Sep 2018
Posts: 49
Mentioned: 8 Post(s)
Tagged: 0 Thread(s)
Quoted: 27 Post(s)
Liked: 14
The script is an extract from my whole home theater script.
It is written with AutoIt.
You can find the function that searches for the right .mpls in madMeasureHDR4iso.au3 => func searchplaylist($in)
Attached Files
File Type: zip pack.zip (10.3 KB, 36 views)
Manni01 likes this.
xxxx5 is offline  
post #5757 of 6935 Old 03-26-2019, 01:52 AM
AVS Forum Special Member
 
markmon1's Avatar
 
Join Date: Dec 2006
Posts: 6,223
Mentioned: 106 Post(s)
Tagged: 0 Thread(s)
Quoted: 5227 Post(s)
Liked: 3400
Quote:
Originally Posted by xxxx5 View Post
The script is an extract from my whole home theater script.
It is written with AutoIt.
You can find the function that searches for the right .mpls in madMeasureHDR4iso.au3 => func searchplaylist($in)
Wow, my entire theater system is scripted with a series of AutoIt scripts as well: from controlling my projector and receiver to detecting the true video aspect ratio, activating masking, marking videos watched in Plex, etc.

Video: JVC RS4500 135" screen in pure black room no light, htpc nvidia 1080ti.
Audio: Anthem mrx720 running 7.1.4, McIntosh MC-303, MC-152, B&W 802d3 LR, B&W HTM1D3 center, B&W 805d3 surround, B&W 702S2 rear, B&W 706s2 x 4 shelf mounted for atmos, 2 sub arrays both infinite baffle: 4x15 fi audio running on behringer ep4000 + 4x12 fi audio running on 2nd ep4000.
markmon1 is offline  
post #5758 of 6935 Old 03-26-2019, 01:58 AM
Advanced Member
 
chros73's Avatar
 
Join Date: Jan 2015
Posts: 526
Mentioned: 12 Post(s)
Tagged: 0 Thread(s)
Quoted: 315 Post(s)
Liked: 140
Quote:
Originally Posted by markmon1 View Post
Wow my entire theater system is scripted with a series of autoit scripts as well from control of my projector to control of my receiver to detecting true video aspect, activating masking, marking video watched in plex, etc.
Guys, you are really professional! And I thought I was an advanced user (scripting/automating my AVR, smart bulbs, etc.)

Ryzen 5 2600,Asus Prime b450-Plus,16GB,MSI GTX 1060 Gaming X 6GB(v385.28),Win10 LTSB 1607,MPC-BEx64+LAV+MadVR,Yamaha RX-A870,LG OLED65B8(04.10.25+PC4:4:[email protected]/24/25/29/30/50/59/60Hz)
chros73 is offline  
post #5759 of 6935 Old 03-26-2019, 11:43 AM
AVS Forum Special Member
 
Manni01's Avatar
 
Join Date: Sep 2008
Posts: 9,128
Mentioned: 331 Post(s)
Tagged: 0 Thread(s)
Quoted: 5452 Post(s)
Liked: 5656
Quote:
Originally Posted by xxxx5 View Post
the script is an extract from my whole home theater script
it is writen with autoit
you can find the function searching for the right mpls in madMeasureHDR4iso.au3 => func searchplaylist($in)
Thanks a lot for sharing this!

I had a look, and unfortunately as you're using autoit that's not something I can call from my batch command line utility.

But capturing the main .mpls title in the title bar of MPC-BE is very clever and I hope it will help @Soulnight and others to implement this in their windows utility, should they find it useful/relevant for their users.

Merci beaucoup

Batch Utility V4.02 May 16 2019 to automate measurements files for madVR with support for BD Folders

Last edited by Manni01; 03-26-2019 at 01:53 PM.
Manni01 is offline  
post #5760 of 6935 Old 03-26-2019, 09:39 PM
Member
 
Join Date: Jan 2018
Posts: 78
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 79 Post(s)
Liked: 82
Quote:
Originally Posted by Neo-XP View Post
I just finished watching Aquaman with these settings:



No problem found, the IMAX sequences were astonishing

PS: I lowered the dynamic tuning value to 50, because 75 was going too high on some scenes, and also to get less target differences between scenes. It's close to perfection to me now.

Yeah, I can confirm Aquaman didn't show any issues with these settings, thanks for posting them. I did end up changing dynamic back to 75, but I think that all depends on preference and setup.

Tonight I watched Bumblebee, which is a 1.85:1 movie; on my scope screen I zoom the image in for 1.85:1 movies, so the target nits needed to be increased. Is there any way to save different HDR settings for 1.85 vs 2.35 movies? Maybe not in these testing builds, but is that something that could be there when this goes to release?
Neo-XP likes this.
gigq is offline  