Improving Madvr HDR to SDR mapping for projector - Page 199 - AVS Forum | Home Theater Discussions And Reviews
post #5941 of 6029 Old 05-21-2019, 03:49 PM
Javs (AVS Forum Special Member; Join Date: Dec 2012; Location: Sydney)
Quote:
Originally Posted by madshi View Post
Great, thanks! So we have now officially completed Metric2 tuning. No more Excel sheet tests.

Tomorrow I'll release a new test build with additional options to combine Metric1 and Metric2 in new ways.
Hurrah!

JVC X9500 (RS620) | 120" 16:9 | Marantz AV7702 MkII | Emotiva XPA-7 | DIY Modular Towers | DIY TPL-150 Surrounds | DIY Atmos | DIY 18" Subs
-
MadVR Settings | UHD Waveform Analysis | Arve Tool Instructions + V3 Javs Curves
post #5942 of 6029 Old 05-22-2019, 04:59 AM
madshi (AVS Forum Special Member; Join Date: May 2005)
Quote:
Originally Posted by Neo-XP View Post
It doesn't work for me with the fade in at the beginning of HP and the Deathly Hallows part 2 (http://www.mediafire.com/file/qtb3v8..._jump.mkv/file ).

It would also be nice to have it for fades out I think.
For instance, for the BvS scene at 02:11:13 to 02:11:19. When the green spear goes in/out of frame, a fade in/out is detected, but the target is too slow to adapt.
Quote:
Originally Posted by Neo-XP View Post
Edit: This sample contains the brightness speeds issue (not adapting immediately) during fades in, followed by multiple false positives caused by Metric1 and "ignored" by Metric2: http://www.mediafire.com/file/i11tmq...tives.mkv/file
Well, it's an important question how we define "fade in" and "fade out". I know, the OSD currently reports "fade in" and "fade out" sometimes, even in the middle of a scene. And that might lead us to believe that madVR could/should react faster to brightness changes in such a situation. However, the big problem is that such a fade in/out detection could produce false alarms. E.g. if there's an explosion, it could be detected as a fade in. Or if a lamp slowly dims down and then goes off, it could be detected as a fade out. But are these really fade in/out situations? Not really.

In the stricter definition of "fade in/out", a "fade in" would be from a completely black screen to the final scene brightness. And a "fade out" would be from the normal scene brightness to a completely black screen. I suppose the same could also happen with a completely white screen, although that's much rarer in movies.

Now all 3 of the scenes you listed above are not classic "fade in" scenarios. They have valid image content throughout. There's no full black screen at any point. So it's not really a "fade in". It's more of a "the scene gets brighter for some reason in a fluid way". madVR detects that (and reports it in the OSD) as a "fade in", but actually madVR also currently ignores it - unless it really started from a completely black screen. That's why the logic works for the other scene you sent me a while ago, but not for these 3 new scenes.

Of course it's a valid question if the limitation to true black screens as a fade in starting point is necessary or not. Maybe we can consider any fluid brightening of the image to be some sort of "fade in" and then immediately adjust image brightness? However, I wonder if that wouldn't produce visible artifacts in some scenes. Also, currently the fade in/out detection just looks at the overall image APL. What happens if one half of the image gets much brighter, causing a fade in to be detected, and the other half of the image actually gets a bit darker at the same time? Do we still want to adjust to the increased APL immediately?

I'll modify the OSD logic in the next build to reflect the current behaviour that a "fade in" is only detected if it starts from a black screen. But of course we can still discuss if we should change the behaviour. Maybe we should react to fluid multi-frame APL increases/decreases faster? I'm not sure.

However, maybe this is also a topic we should postpone until we look at brightness reaction speeds? I had the idea that brightness reaction speeds could generally be increased if Metric1 is high. This would affect not only fade ins/outs, but brightness changes in general. So maybe such a logic would already suffice to solve fade ins/outs completely?

FWIW, the fade in mis-detections in those 2 samples you uploaded are probably easy to fix by the new Metric1+Metric2 combination options available in the next build.

Edit: Just retested a sample you sent me earlier called "WB fades.mkv" (not sure if I named it that way or you). It's the Warner Brothers logo fading in and then out again. At some point I had made some changes to fix this, but with the latest build there's a brightness jump again during the fade out. Not sure how to fix this. In an older build I had increased the Metric1 threshold when I had detected a fade-out. Maybe I should do that in the new build, as well? But should I also increase the Metric2 threshold in that situation? I'm not sure...

Last edited by madshi; 05-22-2019 at 05:20 AM.
post #5943 of 6029 Old 05-22-2019, 05:44 AM
madshi (AVS Forum Special Member)
Here comes the next test build:

http://madshi.net/madVRhdrMeasure85.zip

There are new options, let me explain them:

1) There are now 3 "lines" of options for how to combine Metric1 and Metric2.

2) First, in line 1 of the new options, madVR checks whether Metric1 or Metric2 is below its specified threshold. If either of them is, madVR considers the frame to definitely *not* be a scene change, and lines 2 and 3 of the new options are not looked at at all.

3) In case Metric1 and Metric2 are both above the thresholds of line 1, madVR checks whether either Metric1 or Metric2 is above the thresholds defined in line 2. If either of them is, madVR considers the frame to definitely be a scene change, and line 3 of the new options is not looked at at all.

4) If neither line 1 nor line 2 applies, madVR uses line 3 to decide whether we have a scene change or not. Line 3 allows you to choose which Metric1 and Metric2 thresholds suggest that there "probably" was a scene change. You can also choose which Metric you consider more important here. A Metric1 weight of 50% means that both Metrics are considered equally important.

5) You can disable any of the options in line 1 or 2 by setting them to 0.

6) If you want to disable e.g. Metric2 completely in line 3, you can either set the Metric2 threshold to 0, or you can set the Metric1 weight to 100%.

7) There are 3 OSD histogram numbers: Metric1, Metric2 and "weighted average of Metric1+2".

I think/hope the purpose and behaviour of lines 1 and 2 are clear to everyone now? But line 3 might still be a bit unclear, so let me explain how line 3 works, based on a couple of examples:

-------

Example 1)

M1 threshold: 9; M2 threshold: 8; M1 weight: 50%
measured Metric1: 9.5
measured Metric2: 7.5

Now in the first step madVR "normalizes" Metric1/2, which means madVR divides the Metric1/2 measurements by the threshold you've chosen.

normalized Metric1: 9.5/9 = 1.0555
normalized Metric2: 7.5/8 = 0.9375

The "normalization" has the effect that any value above (or equal to) 1.0 suggests a scene change, and any value below 1.0 suggests no scene change. So looking at the numbers in this example, Metric1 suggests a scene change, and Metric2 suggests that there's no scene change.

Now the two Metrics are combined according to your chosen weight of 50% each:

final metric: 0.5 * 1.0555 + 0.5 * 0.9375 = 0.9965

So the final metric says this is not a scene change. The final metric would have to be 1.0 or higher to suggest a scene change.

Example 2)
M1 threshold: 9; M2 threshold: 8; M1 weight: 90%
measured Metric1: 9.5
measured Metric2: 7.5
normalized Metric1: 9.5/9 = 1.0555
normalized Metric2: 7.5/8 = 0.9375
final metric: 0.9 * 1.0555 + 0.1 * 0.9375 = 1.0437
result: scene change

Example 3)
M1 threshold: 9; M2 threshold: 8; M1 weight: 75%
measured Metric1: 9.5
measured Metric2: 6.5
normalized Metric1: 9.5/9 = 1.0555
normalized Metric2: 6.5/8 = 0.8125
final metric: 0.75 * 1.0555 + 0.25 * 0.8125 = 0.99475
result: no scene change
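Putting points 1)-7) and the three examples together, the decision logic reads roughly like this in code. This is an illustrative reconstruction, not madVR source: the function and parameter names are made up, the line-1 floor values are placeholder assumptions, and only the line-2/line-3 defaults mirror the examples above.

```python
# Illustrative reconstruction of the 3-line scene-change decision (not madVR
# code). Names are assumptions; line-1 floors are placeholder values.

def is_scene_change(m1, m2,
                    line1_thr=(2.0, 2.0),    # "definitely NOT a scene change" floors (assumed)
                    line2_thr=(20.0, 20.0),  # "definitely a scene change" ceilings
                    line3_thr=(9.0, 8.0),    # line-3 thresholds, also used for normalization
                    m1_weight=0.5):          # 50% = both metrics equally important
    # Line 1: if either metric is below its floor, it's definitely no scene
    # change; lines 2 and 3 are skipped. A threshold of 0 disables the check.
    t1, t2 = line1_thr
    if (t1 and m1 < t1) or (t2 and m2 < t2):
        return False
    # Line 2: if either metric exceeds its ceiling, it's definitely a scene
    # change; line 3 is skipped.
    t1, t2 = line2_thr
    if (t1 and m1 > t1) or (t2 and m2 > t2):
        return True
    # Line 3: normalize each metric by its threshold (>= 1.0 suggests a scene
    # change), then combine with the chosen Metric1 weight.
    t1, t2 = line3_thr
    final = m1_weight * (m1 / t1) + (1.0 - m1_weight) * (m2 / t2)
    return final >= 1.0
```

Run against the three examples above, this returns no scene change (final metric 0.9965), scene change (1.0437), and no scene change (0.99475), respectively.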

-------

open questions:

1) "disable sub of prev frame": Currently the previous frame is subtracted before evaluating any of the 3 new lines of options. Is that the right way to handle this? Or maybe the previous frame should only be subtracted when evaluating line 3 of the new options? I'm not sure.

2) "disable sub of prev frame": I assume this should apply to both Metric1 and Metric2, correct? Or should this maybe only apply to one of them?

3) There can be a "scene change" false positive when there are strong brightness changes multiple frames in a row, e.g. during a fade out. The "sub of prev frame" feature probably solves that. But maybe we should do something extra to make sure there's no false positive scene change detection during fade ins and fade outs, to avoid a sudden brightness jump? If so, what's the best approach? E.g. should we increase the thresholds in line 3 of the new options by 25% during a fade in/out? Should we also change the line 1 and 2 thresholds?

-------

Considering that this is not an Excel-sheet-style test build anymore, I'll give you guys some time to test this. Maybe a week would make sense? Afterwards I'd like to drop at least lines 1 and 2 of the new options and hard-code those values.
post #5944 of 6029 Old 05-22-2019, 01:06 PM
Neo-XP (Advanced Member; Join Date: Jun 2018; Location: Switzerland)
Quote:
Originally Posted by madshi View Post
Of course it's a valid question if the limitation to true black screens as a fade in starting point is necessary or not. Maybe we can consider any fluid brightening of the image to be some sort of "fade in" and then immediately adjust image brightness? However, I wonder if that wouldn't produce visible artifacts in some scenes. Also, currently the fade in/out detection just looks at the overall image APL. What happens if one half of the image gets much brighter, causing a fade in to be detected, and the other half of the image actually gets a bit darker at the same time? Do we still want to adjust to the increased APL immediately?
I don't see why it would be a problem to immediately adapt to the "ideal" target in this case or why it would produce any bad effect, but I admit I also have a hard time imagining what it would look like.

Finding a real-world scenario would help us check whether there is any issue and how it could be improved IMO.

Quote:
Originally Posted by madshi View Post
However, maybe this is also a topic we should postpone until we look at brightness reaction speeds? I had the idea that brightness reaction speeds could generally be increased if Metric1 is high. This would affect not only fade ins/outs, but brightness changes in general. So maybe such a logic would already suffice to solve fade ins/outs completely?
I don't think the brightness speeds can be fast enough for fade ins/outs, and it looks to me like adapting the speeds based on the Metric1 value could easily cause issues elsewhere.

Quote:
Originally Posted by madshi View Post
FWIW, the fade in mis-detections in those 2 samples you uploaded are probably easy to fix by the new Metric1+Metric2 combination options available in the next build.
Yes, we can easily fix the brightness jumps by telling madVR there is no scene change if Metric2 is below 2 for instance.

However, the brightness speeds are extremely slow for the HP and the Deathly Hallows Part 2 fade in. If we adapted immediately on fade ins, the scene would look very different:

[screenshots: "normal" speeds / immediate adaptation]
But what kind of result do we want here?

Quote:
Originally Posted by madshi View Post
Edit: Just retested a sample you sent me earlier called "WB fades.mkv" (not sure if I named it that way or you). It's the Warner Brothers logo fading in and then out again. At some point I had made some changes to fix this, but with the latest build there's a brightness jump again during the fade out. Not sure how to fix this. In an older build I had increased the Metric1 threshold when I had detected a fade-out. Maybe I should do that in the new build, as well? But should I also increase the Metric2 threshold in that situation? I'm not sure...
It wouldn't be a problem if we adapted immediately on fade outs.

Also, if it wasn't a fade to black, but a fast fade to a very dark image, the brightness adaptation would be either too slow (the image would stay too dark for too long) or too fast (visible brightness adaptation).

But if we adapt at the same time as the APL changes, during fade ins/outs only of course, it should be smooth (same as the first scene of the Samsung Wonderland Demo, which is now handled perfectly).

Time to play with build madVRhdrMeasure85
post #5945 of 6029 Old 05-22-2019, 03:43 PM
madshi (AVS Forum Special Member)
Quote:
Originally Posted by Neo-XP View Post
I don't see why it would be a problem to immediately adapt to the "ideal" target in this case or why it would produce any bad effect, but I admit I also have a hard time imagining what it would look like.

Finding a real-world scenario would help us check whether there is any issue and how it could be improved IMO.

I don't think the brightness speeds can be fast enough for fade ins/outs, and it looks to me like adapting the speeds based on the Metric1 value could easily cause issues elsewhere.
Well, one big problem that comes to mind right now is that a "fade in" is only a "fade in" if it lasts for several frames, and I don't know that in advance if I don't look several frames ahead. So if I don't know that a fade in or fade out is going on, I can't adjust the APL immediately. But if I only detect the fade in/out after maybe 5 frames, and at that moment start adjusting the APL immediately, we'd get a brightness jump once more. I'd have to adjust the APL immediately with the very first frame of the fade in/out to avoid a brightness jump.

The situation is different if a fade-in starts with a black screen, because as long as we have a black screen, any non-black frame after that must either be a scene change or the start of a fade-in, so I can just assume that the first non-black frame after a black frame is the start of a fade-in, and stop fading in only after the APL stops going up. So that's easy. But in the middle of a scene, it's hard to find the exact starting point of a fade-in, if I don't look many frames ahead.
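That black-screen rule can be sketched as follows; this is a toy illustration, not madVR code, and the APL scale, the "effectively black" cutoff, and all names are assumptions:

```python
# Toy illustration of the rule above: the first non-black frame after a black
# frame is assumed to start a fade-in; the fade-in ends once APL stops rising.
# BLACK_APL and all names are assumptions, not madVR internals.
BLACK_APL = 0.5  # APL at or below this counts as "effectively black"

def track_fade_in(apl_per_frame):
    """Yield (frame_index, in_fade_in) for a sequence of per-frame APL values."""
    in_fade = False
    prev = None
    for i, apl in enumerate(apl_per_frame):
        if prev is not None and prev <= BLACK_APL and apl > BLACK_APL:
            in_fade = True    # black -> non-black: fade-in starts right here
        elif in_fade and apl <= prev:
            in_fade = False   # APL stopped going up: fade-in is over
        yield i, in_fade
        prev = apl
```

For example, the APL sequence `[0, 0, 2, 5, 9, 9]` is flagged as a fade-in on frames 2 through 4. This only works because black frames make the starting point unambiguous; as the post notes, a mid-scene brightening has no such anchor without looking frames ahead.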

Quote:
Originally Posted by Neo-XP View Post
I don't think the brightness speeds can be fast enough for fade ins/outs, and it looks to me like adapting the speeds based on the Metric1 value could easily cause issues elsewhere.
Well, my thinking was that we wouldn't actually increase brightness speed for high Metric1 values, but rather slow them down for low Metric1 values. But anyway, that's a test for later...
post #5946 of 6029 Old 05-23-2019, 02:12 PM
Neo-XP (Advanced Member)
Quote:
Originally Posted by madshi View Post
Well, one big problem coming to my mind right now is that a "fade in" is only a "fade in" if it lasts for several frames, and I don't know that in advance, if I don't look several frames ahead.
Quote:
Originally Posted by madshi View Post
But in the middle of a scene, it's hard to find the exact starting point of a fade-in, if I don't look many frames ahead.
It looks like it could be interesting, but only safe if we can look ahead. So maybe we can get back to this in some time, when madVR is capable of seeing into the future.

Then, it could be even better not to adapt immediately, but to smooth out the brightness adaptation during the fade ins/outs to be sure it doesn't produce any artifacts.

Quote:
Originally Posted by madshi View Post
1) "disable sub of prev frame": Currently the previous frame is subtracted before evaluating any of the 3 new lines of options. Is that the right way to handle this? Or maybe the previous frame should only be subtracted when evaluating line 3 of the new options? I'm not sure.
We can totally block some false positives right at Line 1 if "sub of prev frame" is enabled before any evaluation is done.

Quote:
Originally Posted by madshi View Post
2) "disable sub of prev frame": I assume this should apply to both Metric1 and Metric2, correct? Or should this maybe only apply to one of them?
Applying it to both removes more false positives without hurting anything more, so... yes for both.

Quote:
Originally Posted by madshi View Post
3) There can be a "scene change" false positive when there are strong brightness changes multiple frames in a row, e.g. during a fade out. The "sub of prev frame" feature probably solves that. But maybe we should do something extra to make sure there's no false positive scene change detection during fade ins and fade outs, to avoid a sudden brightness jump? If so, what's the best approach? E.g. should we increase the thresholds in line 3 of the new options by 25% during a fade in/out? Should we also change the line 1 and 2 thresholds?
If you don't want any sudden brightness jump during fade ins/out, why don't you just ignore the metrics when a fade in/out is detected?

I didn't notice any issue with scene detection with these for now:



Line 1 removes all the previous brightness jumps I encountered before (besides very big flashes), so I have to find new problematic cases.

Does anyone have some example(s) where a brightness jump is clearly visible?
post #5947 of 6029 Old 05-23-2019, 03:01 PM
madshi (AVS Forum Special Member)
May I ask why you're using a threshold of only 4 at line 3 for Metric2? It seems very low. Shouldn't you use a threshold there which really hits the mark where Metric2 properly separates real scene changes from non-scene-changes? If you don't like Metric2 for line 3 too much, probably it would be better to choose a higher weight for Metric1 instead of low-balling the threshold of Metric2? I don't really know, though, just wondering...
post #5948 of 6029 Old 05-23-2019, 03:23 PM
Fer15 (Senior Member; Join Date: Jan 2018)
I've arranged the results in descending order for both false positives and real scene cuts in this image (including whether they were a miss or not in the final round of Metric2 tests):







I think(?) the current line 2 thresholds will have to be changed, and/or the line 2 logic might be better served by setting the logic to "and" instead of "or".

For instance, at present it is 20 *or* 20 as default, which is indicated by the blue line running horizontally.

As you can see, with those defaults most of the scenes we tested for false positives would automatically be flagged as scene changes (Row 2: 47 false positives). That can't be good.

If it were 20 *and* 10 for instance, then only 9 would automatically be flagged as scene changes (which would be better than the 10 that were flagged as misses based on metric 2 alone from Neo's results).


For real scene cuts

Instead of all of those above the blue line being "automatically" picked up as a scene cut (higher than 20), we would automatically miss out on another 17.

However, that's not to say that these remaining 17 will be misses; rather, they're just not automatically flagged as a real scene change.

Metric 2 seems to be a bit higher for those real scene cuts that are missed (i.e. not automatically flagged) than the corresponding metric 2 stats for false positives, so maybe decrease metric 1 weighting to compensate, and hopefully that would sweep up the remaining misses and correctly flag them as real scene cuts?


The red lines indicate a threshold of 30 for metric 1 and 10 for metric 2 for the second line, and the red thick lines on the right of the image just show those real scenes that would not be automatically flagged as a real scene cut (I was just experimenting to see the difference between 20 and 10, vs 30 and 10 for line 2).


(I picked those numbers to experiment with because the average for metric 1 is 27.8 for false positives / and 33.67 for real scene cuts and the average for metric 2 is 5.7 for false positives / and 14.1 for real scene cuts. So I wanted to see the effect of something around the average).


Of course, if you want to be more cautious for line 2, then you could raise the thresholds further in addition to switching to *and*


I'm sure that there must be some fancy logic formula that could be used in Excel to calculate these, including weighted thresholds. I've just no idea how to do it.
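The AND-vs-OR point above can be shown in a tiny sketch. The thresholds and sample values come from the averages quoted in this post; the function names are made up:

```python
# Line-2 "definitely a scene change" check: OR (current) vs AND (suggested).
# Thresholds are the examples discussed in the thread, not final values.
def line2_or(m1, m2, t1=20.0, t2=20.0):
    return m1 > t1 or m2 > t2       # current behaviour: either metric suffices

def line2_and(m1, m2, t1=20.0, t2=10.0):
    return m1 > t1 and m2 > t2      # suggestion: both metrics must agree

# Average false positive from the tests: Metric1 = 27.8, Metric2 = 5.7.
# OR auto-flags it as a scene change; AND defers it to the line-3 check.
```

With those averages, `line2_or(27.8, 5.7)` fires while `line2_and(27.8, 5.7)` does not, whereas the average real scene cut (33.7 / 14.1) still passes the AND check.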

Last edited by Fer15; 05-23-2019 at 03:48 PM.
post #5949 of 6029 Old 05-23-2019, 06:19 PM
Neo-XP (Advanced Member)
Quote:
Originally Posted by madshi View Post
May I ask why you're using a threshold of only 4 at line 3 for Metric2? It seems very low. Shouldn't you use a threshold there which really hits the mark where Metric2 properly separates real scene changes from non-scene-changes? If you don't like Metric2 for line 3 too much, probably it would be better to choose a higher weight for Metric1 instead of low-balling the threshold of Metric2? I don't really know, though, just wondering...
Yes, you're right. It's a bit better like this already:

post #5950 of 6029 Old 05-23-2019, 11:04 PM
Icaro (Senior Member; Join Date: Jul 2007; Location: Italy)
Quote:
Originally Posted by Neo-XP View Post

Does anyone have some example(s) where a brightness jump is clearly visible?
Yes, in "Inferno", during the conversation at the airport (while they talk against a white background, some passengers pass in front of the camera, causing sudden brightness changes). I can't remember the timestamp right now; I'll check tonight when I get home.

Sorry for my bad English
Epson LS10500 - AMD RX580 - LightSpace CMS
post #5951 of 6029 Old 05-24-2019, 12:00 AM
madshi (AVS Forum Special Member)
Quote:
Originally Posted by Fer15 View Post
I've arranged the results by descending order for both false positives and real scene cuts in this image (including whether they were a miss or not from the final round of metric 2 tests):




[...]
Thanks for your feedback!

Hmmmm... I wonder if it makes sense to use the thresholds we found via the Excel sheet as the thresholds for "line 3" of the madVR options? And it seems you're suggesting to use a higher weight for M2 because it produces fewer misses in the Excel sheet?

@Neo-XP, what are your thoughts on that? I know you like M1 more, but maybe it would be worth giving Fer15's suggestion a try to see how it would work?

Quote:
Originally Posted by Neo-XP View Post
Yes, you're right. It's a bit better like this already:

Cool!
post #5952 of 6029 Old 05-24-2019, 04:58 AM
Fer15 (Senior Member)
Generally speaking, I put more stock in metric 2 than metric 1.


The average metric 1 value for false positives is 27.8

The average metric 1 value for real scene cuts is 33.7

So that's nearly a 1:1 ratio


The average metric 2 value for false positives is 5.7

The average metric 2 value for real scene cuts is 14.1

So that's nearly a 1:3 ratio, so there are more grounds to distinguish the two imo (at least based on the averages)


Also, for false positives the metric 1 relative to the metric 2 value is nearly 5:1

For real scene cuts the metric 1 relative to the metric 2 value is nearly 2:1


So I'd put more weight in favour of metric 2.


However, Metric1 values are quite a bit higher than Metric2 values (numerically speaking), and even weighting them at, say, 25% (Metric1) vs 75% (Metric2) might not be enough from a numeric calculation (as the latter is just a large piece of a smaller pie). So I don't know if this would work or not... but maybe doubling the Metric2 stats would help? To give them more weight in the actual calculation? I'm not sure, just throwing an idea out there...





Also more thoughts on the line 2 entry.

The previous chart that I posted was arranging both false positives and real scene cuts for metric 1 from high to low.

Here's a chart comparing both false positives and real scene cuts for metric 2 from high to low.





The red line indicates a threshold of 10 for metric 1 *and* 13 for metric 2


So if you were to set it as metric 1 over 10 *and* metric 2 over 13, then only three of the false-positive scenes would automatically be flagged as scene changes, but a majority of the real scene changes would be automatically flagged correctly.


Just by illustration:

The blue line indicates a threshold of 10 for metric 1 *and* 10 for metric 2

If you were to set it as metric 1 over 10 *and* metric 2 over 10, then nine of the false-positive scenes would automatically be flagged as scene changes, but a large majority of the real scene changes would be automatically flagged.

Or if you wanted to ensure that no scenes were incorrectly automatically flagged as a real scene change, then you could enter a threshold of 10 for metric 1 *and* 14 for metric 2 (none of our false-positive test scenes are higher than 14 in metric 2, and you would still pick up correct real scene changes).

Last edited by Fer15; 05-24-2019 at 05:01 AM.
post #5953 of 6029 Old 05-24-2019, 05:00 AM
madshi (AVS Forum Special Member)
Interesting thoughts.

About your worry about M1 and M2 having a different value range (M1 having higher values): That's not an issue because I normalize M1 and M2 by dividing them by the threshold. That automatically takes care of the value range differences.
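For illustration, a quick numeric check of that point (all values here are made-up examples, not measured data):

```python
# Toy numbers only: two metrics on very different raw scales end up on the
# same normalized scale once each is divided by its own threshold.
m1, m2 = 30.0, 12.0        # raw values; M1 runs numerically higher than M2
t1, t2 = 20.0, 8.0         # per-metric thresholds (illustrative)
n1, n2 = m1 / t1, m2 / t2  # normalized: 1.0 means "exactly at threshold"
assert n1 == n2 == 1.5     # equal pull in the weighted average despite different raw scales
```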
post #5954 of 6029 Old 05-24-2019, 10:10 AM
Neo-XP (Advanced Member)
Quote:
Originally Posted by madshi View Post
@Neo-XP, what are your thoughts on that? I know you like M1 more, but maybe it would be worth giving Fer15's suggestion a try to see how it would work?
I don't work with the Excel sheet anymore, because everything was tested with "disable sub of prev frame" enabled.

Also, it may be a good method for knowing which algo produces fewer misses, but it reflects neither the numbers after the sub of prev frame, nor whether a given false positive is actually important or not, and likewise for real cuts.

To set appropriate thresholds, I think we should now concentrate more on real issues (visible artifacts) and work from there, rather than just looking at (mostly) non-representative values.
post #5955 of 6029 Old 05-24-2019, 10:38 AM
madshi (AVS Forum Special Member)
Quote:
Originally Posted by Neo-XP View Post
I don't work with the Excel sheet anymore, because everything was tested with "disable sub of prev frame" enabled.

Also, it may be a good method for knowing which algo produces fewer misses, but it reflects neither the numbers after the sub of prev frame, nor whether a given false positive is actually important or not, and likewise for real cuts.

To set appropriate thresholds, I think we should now concentrate more on real issues (visible artifacts) and work from there, rather than just looking at (mostly) non-representative values.
Fair enough. I kinda forgot about the "disable sub of prev frame" option. Obviously, unchecking the "disable sub of prev frame" option will generally lower Metric1 and Metric2 values. So the current Excel thresholds are clearly too high.

Still, I kinda like the idea of using a somewhat "scientific" approach to find optimal thresholds, or at least an approximation of that. Would it maybe make sense to retest all the Excel scenes with Excel once more, but with "disable sub of prev frame" unchecked? That might give us a means to do some number crunching to find a good starting point for optimal thresholds? And then we could fine tune the thresholds from there?

If we don't do that, all we can do instead is trial-and-error, and that might also lead to good results, but I assume it will take much longer to optimize, because finding new scenes which still fail with the wip settings is tedious work?

Thoughts?

Oh yes, and how do you like Fer15's suggestion to change "line 2" of the new options to use a logical AND instead of OR? Meaning, we would judge a situation to be a clear scene change only if both Metric1 and Metric2 are above the line 2 thresholds at the same time. That said, if that happens, isn't it likely that "line 3" will come to the same conclusion? So maybe line 2 is not all that useful at all?
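To make the "line 1/2/3" discussion easier to follow, here is a rough sketch of the decision flow. To be clear, this is not madVR's actual code: every threshold and the line-3 combination formula below are made-up placeholders, purely to illustrate the OR-vs-AND question for line 2.

```python
# Illustrative only -- madVR's real thresholds and line-3 formula are not
# public; every number below is a placeholder.

def is_scene_change(m1, m2, line2_use_and=False):
    # Line 1: definitely NOT a scene change if either metric is very low.
    if m1 < 2.0 or m2 < 1.5:
        return False
    # Line 2: definitely a scene change if the metrics are very high.
    # Current behavior: OR. Fer15's suggestion: AND.
    if line2_use_and:
        if m1 > 60.0 and m2 > 8.0:
            return True
    elif m1 > 60.0 or m2 > 8.0:
        return True
    # Line 3: borderline case -- combine both metrics into one score.
    return (m1 / 10.0 + m2 / 4.0) / 2 >= 1.0
```

With AND on line 2, any frame that passes line 2 (both metrics very high) would almost certainly pass the line-3 test as well, which is the redundancy concern raised above.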
Neo-XP likes this.
madshi is offline  
post #5956 of 6029 Old 05-24-2019, 12:17 PM
Advanced Member
 
Neo-XP's Avatar
 
Join Date: Jun 2018
Location: Switzerland
Posts: 835
Mentioned: 134 Post(s)
Tagged: 0 Thread(s)
Quoted: 572 Post(s)
Liked: 685
Quote:
Originally Posted by madshi View Post
Would it maybe make sense to retest all the Excel scenes with Excel once more, but with "disable sub of prev frame" unchecked? That might give us a means to do some number crunching to find a good starting point for optimal thresholds? And then we could fine tune the thresholds from there?

Thoughts?
It can't hurt to try I guess

Quote:
Originally Posted by madshi View Post
Oh yes, and how do you like Fer15's suggestion to change "line 2" of the new options to use a logical AND instead of OR? Meaning, we would judge a situation to be a clear scene change only if both Metric1 and Metric2 are above the line 2 thresholds at the same time. That said, if that happens, isn't it likely that "line 3" will come to the same conclusion? So maybe line 2 is not all that useful at all?
It would be kind of redundant IMO, not useful (and maybe doing more harm than good) if we find good values for "line 3".

I also don't find it useful at all as it is now.

Last edited by Neo-XP; 05-24-2019 at 12:23 PM.
Neo-XP is online now  
post #5957 of 6029 Old 05-24-2019, 12:58 PM
madshi
Quote:
Originally Posted by Neo-XP View Post


It can't hurt to try I guess
I'm sorry to put you through this, I'm feeling bad already...
Neo-XP likes this.
madshi is offline  
post #5958 of 6029 Old 05-24-2019, 01:06 PM
Neo-XP
@madshi @Fer15



Doc: https://www.mediafire.com/file/rut1q..._V18.xlsx/file

When taking them separately, Metric1 does best at ~10 and Metric2 at ~4.

I'll let you play with the numbers if you wish to combine them (or add more examples).

I added the type of false positives, because I don't find flashes that important to remove (they don't produce disturbing artifacts. Edit: unless it's a flash lasting only one or two frames, which we can remove later when looking ahead).

We can clearly see here that setting a low value (1.5-2.5) at "line1" for Metric2 is very useful to remove a lot of false positives if we combine it with Metric1.
stevenjw, madshi, Manni01 and 2 others like this.

Last edited by Neo-XP; 05-24-2019 at 02:02 PM.
Neo-XP is online now  
post #5959 of 6029 Old 05-24-2019, 01:37 PM
madshi
Oh cool. So basically this new Excel sheet confirms that your original thresholds for Metric1 + Metric2 were already very good...
RioBar4U and Neo-XP like this.
madshi is offline  
post #5960 of 6029 Old 05-24-2019, 01:50 PM
Neo-XP
Quote:
Originally Posted by madshi View Post
Oh cool. So basically this new Excel sheet confirms that your original thresholds for Metric1 + Metric2 were already very good...
Yes, I'm torn between these two now:


They are very similar, but the first one seems safer, based on some tests I did.
stevenjw, SOWK, madshi and 1 others like this.
Neo-XP is online now  
post #5961 of 6029 Old 05-24-2019, 02:08 PM
Senior Member
 
Join Date: Jan 2018
Posts: 350
Mentioned: 67 Post(s)
Tagged: 0 Thread(s)
Quoted: 188 Post(s)
Liked: 219
Quote:
Originally Posted by Neo-XP View Post
@madshi @Fer15



Doc: https://www.mediafire.com/file/rut1q..._V18.xlsx/file

When taking them separately, Metric1 does best at ~10 and Metric2 at ~4.

I'll let you play with the numbers if you wish to combine them (or add more examples).

I added the type of false positives, because I don't find flashes that important to remove (they don't produce disturbing artifacts. Edit: unless it's a flash lasting only one or two frames, which we can remove later when looking ahead).

We can clearly see here that setting a low value (1.5-2.5) at "line1" for Metric2 is very useful to remove a lot of false positives if we combine it with Metric1.

Nice! Those new metric stats with sub enabled are much better, much more clear-cut!
Neo-XP likes this.
Fer15 is offline  
post #5962 of 6029 Old 05-24-2019, 02:41 PM
madshi
FWIW, here's a slightly extended sheet that allows you to play with the various new thresholds and weights to see how doing so modifies the overall misses:

http://madshi.net/NeoV18.xlsx
Manni01, Fer15 and Neo-XP like this.
madshi is offline  
post #5963 of 6029 Old 05-24-2019, 03:13 PM
madshi
Hmmmm... Looking at the extended Excel sheet, I get the best results with the following config:

line 1 M1: disabled (0)
line 1 M2: 3.6
line 2 M1: disabled (0)
line 2 M2: disabled (0)
line 3 M1: 10.12913152
line 3 M2: 4.075388612
line 3 M1 weight: 50%

This way I get 13 overall misses. If I set line 1 M2 to 2.0, then the number of overall misses increases to 18.

Of course this is just with the data in the Excel sheet. In real life, the situation might be different.

(Interestingly, with the config above the result is *exactly* the same as when using just Metric2 on its own without Metric1 at all. However, with the config above the actual line 3 thresholds are much less important. I can vary them a lot without any changes to the overall misses.)
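The "number crunching" itself is simple enough to sketch. The code below is not the actual Excel sheet, just the same idea in Python with made-up sample rows: label each measured (Metric1, Metric2) pair as a real scene change or not, count how many samples a candidate threshold pair misclassifies, and keep the pair with the fewest misses.

```python
from itertools import product

# (metric1, metric2, is_real_scene_change) -- made-up rows for
# illustration, NOT the measurements from the actual sheet.
samples = [
    (95.0, 9.0, True),
    (12.0, 4.5, True),
    (11.0, 0.3, False),   # high Metric1, near-zero Metric2: false positive
    (3.0,  1.2, False),
    (25.0, 6.0, True),
    (8.0,  3.9, False),
]

def misses(t1, t2):
    """Count samples the AND-combined threshold test classifies wrongly."""
    return sum((m1 >= t1 and m2 >= t2) != real for m1, m2, real in samples)

# Brute-force a small threshold grid and keep the best pair.
best = min(product([5, 10, 15, 20], [2, 3, 4, 5]), key=lambda t: misses(*t))
print("best thresholds:", best, "misses:", misses(*best))
```

On real data you would feed in the hundreds of rows from the sheet and a much finer threshold grid, but the principle is the same.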
Manni01, Fer15 and Neo-XP like this.

Last edited by madshi; 05-24-2019 at 03:16 PM.
madshi is offline  
post #5964 of 6029 Old 05-25-2019, 12:52 AM
Neo-XP
@madshi @Fer15 Here's how it goes after adding more scenes with static/flat background:



Doc: https://www.mediafire.com/file/jt8px...oV19.xlsx/file

I see two solutions to improve this:

1) Switch "line2" with "line1" and then set Metric1 on "line1" to ~75.

This way Metric2 won't block some real scenes right at "line1".

2) (Better I think) Stop evaluating at "line1" with Metric2, but only if Metric1 is not too high.

For instance, if Metric2 is < 4 but Metric1 is > 20, then we decide if it's a real scene or not at "line3".

Thoughts?
madshi, Manni01 and Fer15 like this.
Neo-XP is online now  
post #5965 of 6029 Old 05-25-2019, 01:12 AM
madshi
Wow, adding more scenes with a static/flat background really changes things! But it's a very good thing because we can now see that combining Metric1+Metric2 is really *very* beneficial. With your new Excel sheet, we can clearly see that combining both Metrics achieves a *much* lower number of overall misses, compared to using only one (doesn't even matter which) of the Metrics alone! In your new sheet both Metric1 and Metric2 alone achieve 42 misses, but combined it's down to 22. Also, it's interesting to see that after adding those new scenes, Metric1 is now just as good as Metric2 - but at different scenes. Which shows that Metric2 has its own very specific weaknesses.

FWIW, I managed to improve results slightly by:

A) setting the Metric2 (with sub enabled) threshold to 3.2 instead of 4.
B) setting the line 1 M1 threshold to 0.

The number of overall misses goes from 24 down to 22 this way.

I'm fine with implementing your suggestion 2). So I would remove current line 2 completely, and instead add an option which disables line 1 if Metric1 is above a certain threshold. Correct?

I wonder if we really need M1 in line 1? With your last 2 Excel sheets, I got better results by disabling the line 1 M1 option (setting it to 0).

So if you agree, the new line 1 would work like this: If Metric2 is below a specific threshold (suggested value 1.5) *and* Metric1 is below a specific threshold (suggested value 20.0?) then we definitely do not have a scene change. Otherwise we decide by using line 3. So basically line 1 would actually stay the same, except that the current "OR" logical operator would be replaced by "AND". Thoughts?
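Using the thresholds suggested in this post, the proposed new line 1 could be sketched like this (the line-3 part is again a placeholder, since the real combination formula isn't public):

```python
# Sketch of the proposed AND-based line 1; the line-3 weighted test is a
# hypothetical stand-in, not madVR's actual formula.

def is_scene_change_v2(m1, m2, line1_m1=20.0, line1_m2=1.5):
    # New line 1: definitely no scene change only if BOTH metrics are low
    # (AND replaces the previous OR).
    if m2 < line1_m2 and m1 < line1_m1:
        return False
    # Otherwise decide via line 3 (placeholder 50/50 weighted test).
    return 0.5 * (m1 / 10.0) + 0.5 * (m2 / 4.0) >= 1.0
```

With the AND, a frame with Metric2 near zero but a very high Metric1 is no longer ruled out at line 1, which is exactly the static/flat-background case the new Excel scenes exposed.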
Manni01 likes this.
madshi is offline  
post #5966 of 6029 Old 05-25-2019, 01:29 AM
madshi
> For instance, if Metric2 is < 4 but Metric1 is > 20, then
> we decide if it's a real scene or not at "line3".

P.S.: I just tested this new approach via Excel, but it doesn't work well at all with the 4|20 thresholds you suggested: the number of misses goes from 22 to 35. However, with 1.5|65 thresholds, the number of misses goes down from 22 to 21.

http://madshi.net/NeoV19.xlsx
Manni01 likes this.
madshi is offline  
post #5967 of 6029 Old 05-25-2019, 01:32 AM
Neo-XP
Quote:
Originally Posted by madshi View Post
I'm fine with implementing your suggestion 2). So I would remove current line 2 completely, and instead add an option which disables line 1 if Metric1 is above a certain threshold. Correct?
Yes, but even better would be, I think:

Line1: There is definitely no scene change if metric2 is below [1] (I've never seen a real scene < 1, but I will add some false positives with high metric1 values and metric2 near 0)

Line2: There is definitely no scene change if metric1 is below [20] and metric2 is below [4] (values to be defined)

Line3: Same as now.

Quote:
Originally Posted by madshi View Post
I wonder if we really need M1 in line 1? With your last 2 Excel sheets, I got better results by disabling the line 1 M1 option (setting it to 0).
It's better at 0, so it's not useful (I've never seen a case where it would really help or hurt the detection).
Manni01 likes this.
Neo-XP is online now  
post #5968 of 6029 Old 05-25-2019, 01:43 AM
madshi
Quote:
Originally Posted by Neo-XP View Post
Yes, but even better would be, I think:

Line1: There is definitely no scene change if metric2 is below [1] (I've never seen a real scene < 1, but I will add some false positives with high metric1 values and metric2 near 0)

Line2: There is definitely no scene change if metric1 is below [20] and metric2 is below [4] (values to be defined)

Line3: Same as now.
Yeah, adding some false positives with high metric1 and metric2 near 0 might help. Or any other scenes with interesting new metric1/2 measurements combinations.

I've just tried your above suggestion with Excel, but it doesn't seem to help, from what I can see. For your suggested Line2 above, I *have* to use an M2 value of 1.5, otherwise the misses go through the roof. But if I already have a 65|1.5 combo for Line2, then Line1 becomes mostly superfluous, because there's barely any difference between 1.0 and 1.5.
Manni01 likes this.
madshi is offline  
post #5969 of 6029 Old 05-25-2019, 01:47 AM
Neo-XP
Quote:
Originally Posted by madshi View Post
Yeah, adding some false positives with high metric1 and metric2 near 0 might help. Or any other scenes with interesting new metric1/2 measurements combinations.

I've just tried your above suggestion with Excel, but it doesn't seem to help, from what I can see. For your suggested Line2 above, I *have* to use an M2 value of 1.5, otherwise the misses go through the roof. But if I already have a 65|1.5 combo for Line2, then Line1 becomes mostly superfluous, because there's barely any difference between 1.0 and 1.5.

That's interesting. Is it not possible to get fewer than 21 misses with higher values at Line2 with my suggestion? I'll try...
Manni01 likes this.
Neo-XP is online now  
post #5970 of 6029 Old 05-25-2019, 01:51 AM
madshi
No, I tried, but the best I can achieve is 21, using 65|1.5. Your suggested line 1 with 1.0 threshold doesn't change anything. But maybe I missed something. Or things could change if you add more "weird" false positive and/or real scenes to the sheet.
madshi is offline  

Tags
dynamic tone mapping, hdr, madvr, sdr, tone mapping
