
21 - 40 of 212 Posts

· Registered · 5,585 Posts · Discussion Starter #21
If you moved to a better filter kernel, something edge-adaptive, you should get even better results. I have been pushing Silicon Optix and Anchor Bay Technologies for a few years now to switch to linear light. The problem for them is cost.


As it stands, there is still a lot of room for improvement in consumer, and professional, products when it comes to image resizing.
 

· Registered · 2,175 Posts
Impressive.


Stacey,


Is a Lanczos-based resizing algorithm also supported by this tool, or is bicubic preferred?


How much more processing power does linear light require compared to standard gamma-corrected resizing? Is it "just" the difference between, e.g., 8-bit and 14-bit processing?
 

· Registered · 5,585 Posts · Discussion Starter #23
We have two filter kernels to choose from, Lanczos and Catmull-Rom. Catmull-Rom is our favorite. While it is an interpolation filter by design, we stretch it to make it work as a low-pass filter. With Lanczos we allow you to specify the number of lobes. We found that more than two lobes introduces ringing (a ring per additional lobe) on sharp edges, so we prefer two lobes, which is virtually identical to Catmull-Rom. When you see Lanczos2 and Lanczos3, the digit usually means the number of lobes. I know several people seem to like Lanczos3.
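For anyone who wants to experiment, here is a minimal sketch of the two kernels (NumPy; the function names are illustrative, and the stretching for low-pass use during downscaling is not shown):

Code:

import numpy as np

def lanczos(x, lobes=2):
    # Windowed sinc: sinc(x) * sinc(x / lobes), zero outside +/- lobes.
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < lobes, np.sinc(x) * np.sinc(x / lobes), 0.0)

def catmull_rom(x):
    # Keys bicubic with a = -0.5 (Catmull-Rom); support is [-2, 2].
    a = -0.5
    x = np.abs(np.asarray(x, dtype=float))
    near = (a + 2) * x**3 - (a + 3) * x**2 + 1   # |x| <= 1
    far = a * (x**3 - 5 * x**2 + 8 * x - 4)      # 1 < |x| < 2
    return np.where(x <= 1, near, np.where(x < 2, far, 0.0))

Plot lanczos(x, 2) against catmull_rom(x) and the two curves nearly overlap, which is why two-lobe Lanczos and Catmull-Rom look virtually identical.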

Quote:
How much more processing power does linear light require compared to standard gamma-corrected resizing?
A lot more processing! The big performance hit is the conversion to and from linear light because it is a power function. I have attached an .xls to convert between video levels, percent (gamma corrected), and linear percent. If you look at any of the linear cells, you will see the linearize function.
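For illustration, here is roughly what such a linearize function looks like in code (a sketch assuming a pure power law with gamma 2.4 and 8-bit video levels; the spreadsheet may use different constants):

Code:

import numpy as np

GAMMA = 2.4  # assumed display gamma for this sketch

def video_to_linear(code, black=16.0, white=235.0):
    # 8-bit video level -> linear-light fraction: normalize, then apply gamma.
    v = np.clip((np.asarray(code, dtype=float) - black) / (white - black), 0, 1)
    return v ** GAMMA

def linear_to_video(lin, black=16.0, white=235.0):
    # Inverse power function back to gamma-corrected video levels.
    v = np.clip(np.asarray(lin, dtype=float), 0, 1) ** (1.0 / GAMMA)
    return v * (white - black) + black

Note how non-linear this is: a 50%-drive gray works out to only about 19% linear light at gamma 2.4.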

 

Attachment: LevelConversion.zip (5.6 KB)
 


· Registered · 5,585 Posts · Discussion Starter #24
AVIA, and other test discs, contain patterns that allow you to visually determine the gamma of a display. The patterns usually consist of single-pixel black and white lines against a specific gray background. These patterns only work if there is no image resizing being done - unless you are resizing in linear light!
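To make the math behind those patterns concrete, a small sketch (drive levels normalized 0..1): the line pattern averages to 50% linear light, so the gray that visually matches it pins down the gamma.

Code:

import math

def display_gamma(matching_gray):
    # Alternating full-black/full-white lines average to 0.5 in linear light.
    # If a flat gray at normalized drive level `matching_gray` looks the same,
    # then matching_gray ** gamma = 0.5, so:
    return math.log(0.5) / math.log(matching_gray)

print(display_gamma(0.73))  # ~2.2

A gamma-domain resizer averages the black and white code values to 50% drive instead of 50% light, which shifts the pattern and breaks the comparison; a linear-light resizer preserves the 50% linear average, so the pattern survives.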
 

· Registered · 2,175 Posts

Quote:
Originally Posted by sspears /forum/post/13481983


We have two filter kernels to choose from, Lanczos and Catmull-Rom. Catmull-Rom is our favorite. ...

A lot more processing! The big performance hit is the conversion to and from linear light because it is a power function. ...

Thank you, interesting indeed.


Personally I have always favored Lanczos3 - although I agree slight ringing becomes visible - but with Lanczos4 and up it becomes way too distracting IMO.


What's your personal take on supersampling/oversampling? In my experience it can be quite a useful concept to apply certain filters (e.g. DNR, sharpening) not in the native source or output resolution, but in one much higher than that (a multiple), although it may sound unintuitive to add an additional resizing step.


Do you expect SO, Gennum and/or ABT to use linear processing in one of their upcoming generations anytime soon?
 

· Registered · 4,658 Posts
Stacey, this is a great thread. Thanks for the examples.


You brewing up anything specific for debayering and then down-rezzing 4K RAW (as opposed to RGB) data into 1080p 4:2:0 for HD media use?


You know... in case you happen to end up with some 4K source any time soon.
 

· Registered · 6,948 Posts
Interesting how much difference there can be in the conversion of the source.


And now that we have more and more 4K sources, this is getting very important.
 

· Registered · 5,585 Posts · Discussion Starter #28
Steve,


I believe there is always a better way to debayer. One just needs direct access to the RAW bits - e.g., the CineForm SDK allows you to plug in your own debayer algorithm.


TheLion,


I believe that once you start processing a source, you should stay in the highest-quality format until you are done. I would not necessarily resize an image just to perform NR or sharpening, but I would do it at a higher bit depth, like 32-bit float. If I am resizing, then I would probably perform that operation before NR or sharpening.
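As a sketch of that ordering (assuming a pure power-law gamma; resize, denoise and sharpen are placeholders for whatever filters are actually used):

Code:

import numpy as np

def process(frame_u8, resize, denoise, sharpen, gamma=2.4):
    # Linearize once, into 32-bit float, and stay there for the whole chain.
    img = (frame_u8.astype(np.float32) / 255.0) ** gamma
    img = sharpen(denoise(resize(img)))            # resize first, then NR/sharpness
    img = np.clip(img, 0.0, 1.0) ** (1.0 / gamma)  # back to gamma-corrected
    return np.round(img * 255.0).astype(np.uint8)  # quantize only at the very end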
 

· Registered · 9,884 Posts

Quote:
Originally Posted by sspears /forum/post/13498182


I believe there is always a better way to debayer. One just needs direct access to the RAW bits. ...

Stacey -


I was wondering about Bayer patterns the other day. It seemed to me that RGB-type (but not YUV-type) Bayer patterns might need the green filtered down to only the resolution of the red and blue components to avoid (mild) chroma-shift artifacts. Is that true, or a known problem?


- Tom
 

· Registered · 4,658 Posts

Quote:
Originally Posted by trbarry /forum/post/13503415


I was wondering about Bayer patterns the other day. It seemed to me that RGB-type (but not YUV-type) Bayer patterns might need the green filtered down to only the resolution of the red and blue components to avoid (mild) chroma-shift artifacts. ...


I've wondered a bit about the sensor Bayer color filter distribution (2 G's for each R & B) as well.


I had always assumed that the additional green was useful in that the greatest amount of green energy is present in white light. My thinking was that with a green advantage to start with, you would have to throw away less of the R & B samples to get the correct balance.


Or is that completely non-applicable here?
 

· Registered · 9,884 Posts

Quote:
Originally Posted by scaesare /forum/post/13517981


I've wondered a bit about the sensor Bayer color filter distribution (2 G's for each R & B) as well. ...


Or is that completely non-applicable here?

Dunno. It would seem to me that to avoid aliasing in the R & B samples you would have to filter in the lens to remove all frequencies greater than the Nyquist limits for those samples, thus also filtering the green.


However, when I first thought of it I was playing with the idea of full-screen Fourier transforms, and it occurred to me you could probably do that diagonally and phase-shift the R & B samples over and in between the green, thus increasing that limit by about 40% (presumably the √2 ≈ 1.41 factor from the diagonal sample spacing of the green quincunx lattice). I'm guessing Stacey was alluding to something like that, but using one of Don's filters to do the same sort of thing to convert Bayer patterns.


But I still don't think it's possible to get the full rez many people think.


Stacey?



- Tom
 

· Registered · 4,658 Posts

Quote:
Originally Posted by trbarry /forum/post/13525215


Dunno. It would seem to me that to avoid aliasing in the R & B samples you would have to filter in the lens to remove all frequencies greater than the Nyquist limits for those samples, thus also filtering the green. ...


But I still don't think it's possible to get the full rez many people think.


Yes, cameras employing Bayer-pattern sensors typically have an optical low-pass filter (OLPF) for exactly that reason: to avoid excessive aliasing. Between that and the interpolation that demosaicing necessarily implies, observed actual resolutions seem to be ~80% of the nominal sensor resolution.


But, and I certainly could be wrong here, isn't the issue of resolution and/or aliasing orthogonal to that of the spectral intensity of the individual RGB channels needed to make up a sample? Thus for each "cell" of 4 sensors, the demosaic algorithm has more G to work with than R or B, which approximates the real-life distribution of the individual channels.


Interesting stuff.
 

· Registered · 9,884 Posts

Quote:
Originally Posted by scaesare /forum/post/13527789


...


But, and I certainly could be wrong here, isn't the issue of resolution and/or aliasing orthogonal to that of the spectral intensity of the individual RGB channels needed to make up a sample? ...

Yes, that is what I was first asking Stacey about, since it seems that with the simpler algorithms you could end up with more resolution or detail for G than for R and B.


But maybe you don't dare do that, since in the places where the G component had more detail it seems it could change the apparent hue, creating chroma-shift patterns even for a very detailed black and white image. So (again, just speculating) you might have to limit the G resolution down to whatever you could debayer/extract for R & B, by whatever method.


Though I think there is another YUV (grey-U-V?) Bayer pattern that might not have this problem.


I don't even know if this IS a problem, since I haven't really heard people talking about it. But it seemed possible to me.


- Tom
 

· Registered · 4,658 Posts

Quote:
Originally Posted by trbarry /forum/post/13533208


Yes, that is what I was first asking Stacey about, since it seems that with the simpler algorithms you could end up with more resolution or detail for G than for R and B. ...


I don't even know if this IS a problem, since I haven't really heard people talking about it. But it seemed possible to me.

Ah, I see... yes, I would be interested to know that too.


I do believe that for video acquisition, green screen for keying is supposed to give a superior result, and I think this may be due to this channel having additional samples and/or a greater "percentage" of the intensity value contributing to each sample.


We need to go ask Graeme this on the RED forum.
 

· Registered · 62 Posts
I tend to disagree with doing noise reduction after resizing or color-space conversion.


Removing noise before you clip, dither, or resize gives you the best chance of detection, especially if you are doing so across multiple frames.


When I'm working with filters, I often weigh how much I'm going to need of each filter, and create masks to make sure that when I apply each filter I'm only applying it where it needs to be used.
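A minimal sketch of that mask-weighted blending (img, filter_fn and mask are hypothetical placeholders; mask holds weights in 0..1 from whatever detector selects the regions):

Code:

import numpy as np

def apply_where_needed(img, filter_fn, mask):
    # Run the filter on the whole frame, then keep the result only where the
    # mask says it is needed; elsewhere the original pixels pass through.
    # mask must broadcast against img (e.g. H x W x 1 for a color frame).
    filtered = filter_fn(img)
    return img * (1.0 - mask) + filtered * mask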


You'd have to do a resize and average the single-pixel CCD noise into the pixel's neighbors.


I despeckled Stacey's fish before reducing their color, and clipped the color based on its dynamic range rather than just doing a straight clip. I think that you will agree that even without dithering there is very little banding compared with the other methods.




I am a strong believer in order of operations and in discarding information only at the last minute. But it should also be noted that I am willing to have pre-processing take 20 hours for an hour of video before I pass it to the encoder to chew on it for another 15 hours.
 

· Registered · 8,065 Posts
I'm a bit late to the party, but I have a question for Stacey:


Why is dithering from 10-bit to 8-bit done *before* the encoding? As far as I can see this makes life more difficult for the encoder: instead of reproducing true color gradations, the encoder has to reproduce colored dither dots! Wouldn't it increase compression efficiency to feed the full 10-bit master into the encoder and let it find its own best way to drop 2 bits much later in the encoding chain?
 

· Registered · 6,948 Posts

Quote:
Originally Posted by madshi /forum/post/13967679


Why is dithering from 10-bit to 8-bit done *before* the encoding? ...


If you do it before, you can QC the movie before the encode.
 

· Registered · 8,065 Posts
Hey Stacey & Don,


I'm working on implementing a new DirectShow video renderer with the highest possible image quality (making use of GPU shaders to do all the dirty work). I've just released a first beta version here, in case anybody is interested:

http://forum.doom9.org/showthread.php?t=146228


In order to achieve the best quality I've researched topics like dithering, scaling algorithms and, finally, linear-light processing. After having read this thread (again) I have a couple of questions:


(1) I'm currently using simple TPDF dithering to bring the floating-point RGB data (coming from YCbCr -> RGB conversion) down to e.g. 8-bit integer. I'm aware that when going to very low bit depths (e.g. 2-bit like in your screenshots above) random dithering looks bad. But what is your opinion when going down to 8-bit RGB, or 10-bit, 12-bit? Is error diffusion really visibly better in this situation, too? Random dithering looks just fine to me at these bit depths. Btw, is the algorithm you're using very different from Floyd-Steinberg?
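For reference, TPDF dithering as described above amounts to adding triangular noise spanning +/- 1 LSB before rounding (a sketch; names are mine):

Code:

import numpy as np

rng = np.random.default_rng()

def tpdf_dither(x, bits=8):
    # x: float array in [0, 1]. The sum of two uniform variates has a
    # triangular PDF over +/- 1 LSB, added before quantization.
    scale = (1 << bits) - 1
    noise = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(x * scale + noise), 0, scale).astype(np.uint16)

(uint16 so the same sketch covers target bit depths up to 16.)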


(2) I've implemented chroma upsampling using Gaussian scaling, which seems to produce the best (very smooth) results. But I've done that in gamma-corrected light. I'm wondering whether upsampling chroma in linear light would be even better? But of course the problem is how to get 4:2:0 chroma into linear light without upsampling it first...



(3) Can I directly convert Y'CbCr to YCbCr without going through RGB? I've been told that isn't possible. But if that isn't possible, then why is Y'CbCr named Y'CbCr and not Y'Cb'Cr'? I mean, the name Y'CbCr suggests that the only difference when going Y'CbCr -> YCbCr is the luma channel. Is CbCr not changed at all? Sorry, this might be a stupid question...
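For what it's worth, the Rec. 709 definition shows why: Cb and Cr are built directly from gamma-encoded R'G'B', so linearizing is not a luma-only change (a sketch with normalized 0..1 signals):

Code:

def rgb_prime_to_ycbcr709(rp, gp, bp):
    # All three outputs are functions of the *gamma-encoded* R'G'B' values,
    # which is why there is no chroma-preserving Y'CbCr -> YCbCr shortcut.
    y = 0.2126 * rp + 0.7152 * gp + 0.0722 * bp
    cb = (bp - y) / 1.8556
    cr = (rp - y) / 1.5748
    return y, cb, cr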


(4) I've stumbled over your 2007 patent. How are you handling it? I mean, can I implement linear-light scaling in my renderer without getting into trouble, or do I need to get a license from you somehow? (Of course I'd use my own algorithms/code/logic; I'd just use the idea of doing scaling in linear light.)


Sorry for the very in-depth questions and thanks for any reply you can give!


FYI, I've developed a (mostly) ringing-free new scaling algorithm based on 4-lobe Lanczos. Is that of any interest to you? If so, PM me.
 