
· Registered · 9,884 Posts
DVD and HDTV use MPEG-2. While MPEG-2 has a 4:2:2 studio profile, I believe that all commercial material you are likely to see on either DVD or HDTV will be encoded in 4:2:0. However, depending on your decoder, graphics card, drivers, or DirectX level, it may be converted to 4:2:2 on output prior to display.
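
For the curious, the 4:2:0-to-4:2:2 step is conceptually just vertical chroma upsampling. Here is a rough sketch in Python, using naive line averaging; a real decoder would honor the MPEG-2 chroma siting rules and use a longer filter:

Code:
def upsample_chroma_420_to_422(chroma_rows):
    """chroma_rows: rows of one 4:2:0 chroma plane, one row per 2 luma lines."""
    out = []
    for i, row in enumerate(chroma_rows):
        out.append(row)  # keep the original chroma line
        # interpolate a new line halfway to the next one (repeat the last line)
        nxt = chroma_rows[min(i + 1, len(chroma_rows) - 1)]
        out.append([(a + b) // 2 for a, b in zip(row, nxt)])
    return out

cb = [[100, 110], [120, 130]]
print(upsample_chroma_420_to_422(cb))
# [[100, 110], [110, 120], [120, 130], [120, 130]]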


- Tom
 

· Registered · 1,005 Posts
Sorry for the delay... on vacation.

Quote:
Originally posted by Charles Black
Have you actually done a comparison between gamma corrected video and linear-light video scaling in a streaming system?
Yes.

Quote:
If so what was the hardware you used? What would be the data rate of the linear-light stream. How much computing power is needed? How did you evaluate the results?
Ordinary PCs. The other answers I either don't know or can't say.

Quote:
Do you know of any specific problems processing Y'CbCr where the result is bad and not fixable with careful coding?
It's always bad working in Y'CbCr because, as mentioned before, it's not a linear space. Any normal linear math you might want to do, like interpolation, completely fails, because the final R, G, and B values each depend on more than one of Y', Cb, and Cr. Interpolated Y'CbCr values do not map to the same colors as the equivalent RGB values interpolated the same way.


In R'G'B' it's a little better, but not much. Again, linear operations (like interpolation) produce incorrect results. The short version of the problem is that all edges get pulled toward black.


The "careful coding" solution is simple: convert to linear.

Quote:
Do you have an issue with 4:4:4 other than data rate?
No, not at all. I meant to type "4:2:2." Chroma subsampling is not the free lunch people seem to think it is. However, it may be that it's a decent tradeoff, given the problems caused by overquantizing. If we didn't subsample the chroma channels, we'd have to quantize them more severely to get the same compression ratios, and that might look even worse.
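
Just to put numbers on the data-rate side of that tradeoff:

Code:
# Raw samples per pixel, before the codec quantizes anything.
samples_444 = 1 + 1 + 1        # Y', Cb, Cr all at full resolution
samples_422 = 1 + 0.5 + 0.5    # chroma halved horizontally
samples_420 = 1 + 0.25 + 0.25  # chroma halved both ways
print(samples_444, samples_422, samples_420)  # 3 2.0 1.5

So 4:2:0 halves the raw data before the codec even starts, which is exactly what it would otherwise have to claw back with coarser chroma quantization.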

Quote:
That's funny since you just recommended doing exactly that (Y'CbCr->R'G'B'->RGB...) in your best pipeline for scaling scenario.
Not at all. I said you can't linearize Y'CbCr, and you can't. You can convert it to R'G'B', and then linearize that to RGB. Y', for example, cannot be converted directly to Y. You can convert Y'CbCr to R'G'B', then R'G'B' to RGB, and then calculate Y from that, but there is no simple calculation to get directly from Y' to Y. This is what Poynton is getting at when he says that luma (Y') isn't luminance (Y). Luma isn't even gamma-corrected luminance, which is why the abbreviation Y' is misleading.
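
Here's a sketch of that pipeline, if it helps (full-range BT.601 matrix and a plain 2.2 gamma for simplicity; real video is normally limited-range with a piecewise transfer curve):

Code:
def ycbcr_to_rgb_prime(yp, cb, cr):
    # Inverse of the BT.601 luma/chroma matrix (Cb and Cr centered at zero).
    r = yp + 1.402 * cr
    g = yp - 0.344136 * cb - 0.714136 * cr
    b = yp + 1.772 * cb
    return r, g, b

def luminance(yp, cb, cr, gamma=2.2):
    rp, gp, bp = ycbcr_to_rgb_prime(yp, cb, cr)
    r, g, b = rp ** gamma, gp ** gamma, bp ** gamma  # linearize first...
    return 0.299 * r + 0.587 * g + 0.114 * b         # ...then the weighted sum

# Same luma Y' = 0.5, but different luminance Y once chroma is nonzero:
print(luminance(0.5, 0.0, 0.0))   # gray:    ~0.218
print(luminance(0.5, -0.1, 0.2))  # colored: ~0.257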


Don
 

· Registered · 681 Posts
Many years ago I remember watching a CGI clip from a CD-ROM on a 486, and the computer froze for a second on a frame. I remember thinking that the static image looked like junk, but when animated the image looked sharper. This was with a clip below 320x240.


This was probably uncompressed video (imagine the bandwidth), and I suppose that to get the most quality each frame was made uniquely. I noticed how different anti-aliasing methods were used on a bar that wasn't moving in the video. Where aliasing would occur, instead of inserting a grey relative to the neighbouring pixels, that pixel would flicker between different shades of grey, and in motion it certainly acted as a sharpening technique.


So has anybody examined the inherent benefits of temporal interpolation? How does 854x480 at 48 Hz bicubic compare to 1280x720 at 24 Hz bicubic?


Or even better, why not alternate frames: linear (1280x720) + Lanczos (1280x720) + bicubic (1280x720) = 72 frames/sec?



This would certainly ramp up the noise, but personally I don't mind noise. It's a lot like the crackle and hiss on a record. However, I do despise any sort of EE.


While I'm here, are there any kinds of metrics for measuring video sharpness? Shouldn't renderers analyze the video and apply more appropriate filters? Personally, I find that very soft transfers (mainly first runs from China) look worse when I apply any sort of filtering.
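
For illustration only, one simple proxy I could imagine a renderer using (a sketch, not something any shipping renderer is known to do) is the variance of a Laplacian-filtered frame; soft transfers would score low:

Code:
def laplacian_variance(img):
    """img: 2-D list of grayscale values; variance of the 4-neighbor Laplacian."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            vals.append(img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                        - 4 * img[y][x])
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp = [[0, 0, 255, 255]] * 4  # hard edge
soft = [[0, 85, 170, 255]] * 4  # gradual ramp
print(laplacian_variance(sharp) > laplacian_variance(soft))  # True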
 

· Registered · 23,130 Posts
Trimension DNM does exactly what you're talking about (I think), but it is VERY processor-intensive, and has received mixed reviews, working wonders on some things and destroying others.
 


· Registered · 2,164 Posts
@cyberbri


AviSynth resizing does not reduce CPU usage; in fact, it increases it, as it is not optimised the way the DirectShow version is.


Cheers...

Duy-Khang Hoang
 

· Registered · 5,344 Posts
"Why not use a mathematical simulation of an optically perfect lens (raytracing) to enlarge an image?"


nice big, fat pixels :D
 

· Registered · 9,884 Posts
"nice big fat pixels"


Some fixed-pixel displays have well-defined pixel borders, like squares on a checkerboard. I think this in turn may require some extra filtering to avoid motion dithering on highly detailed moving scenes.


But the simplest solution I could think of would indeed be "nice fat pixels", overlapping somewhat. This could probably be done at home on RPs by just defocusing them a tiny bit. I think even some DLP movie theaters might look better from the front rows if they defocused the projector a little, just enough so adjacent pixels slightly overlapped.


Though higher-resolution displays would likely solve the problem as well.


I wonder if there are known video sizing & filtering algorithms that adjust for these issues with fixed-pixel displays?


- Tom
 

· Registered · 263 Posts
Quote:
Originally Posted by dmunsil
Lobes vs. Taps. The short version. :)

Any signal-processing filter has "taps" which are the discrete input values that are used to calculate each output value. In audio processing it's pretty straightforward - a 20-tap FIR resampling filter takes 20 audio samples, multiplies each by one value in a 20-element filter "kernel," adds all the results together, and spits out a single output sample. Lather, rinse, repeat. In video it's a little different, but that's the rough idea.

...


For upsampling (making the image larger), the filter is sized such that the entire equation falls across 4 input samples, making it a 4-tap filter. It doesn't matter how big the output image is going to be - it's still just 4 taps.


...

Don
I might have to design, or at least recommend the architecture for, a hardware video scaler, so to resurrect an old thread: I noticed ATI's AVIVO whitepaper ( http://www.ati.com/technology/Avivo/...Whitepaper.pdf ) claims "10x6" output filtering. Based on what you said, for upscaling, are the additional taps (beyond 4x4) totally superfluous?
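
To convince myself of the 4-tap point, I sketched a 1-D Catmull-Rom (cubic) upscaler; every output sample reads exactly 4 input samples no matter how large the output is, so presumably the extra taps in a "10x6" filter are there for downscaling or for wider kernels (e.g., Lanczos3), not plain cubic upscaling:

Code:
def catmull_rom(p0, p1, p2, p3, t):
    # Standard Catmull-Rom cubic between p1 and p2, with 0 <= t <= 1.
    return 0.5 * (2 * p1 + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

def upscale_row(src, out_len):
    out = []
    for j in range(out_len):
        x = j * (len(src) - 1) / (out_len - 1)  # map output coord into the source
        i, t = int(x), x - int(x)
        # the 4 taps around x, clamped at the borders
        p = [src[max(0, min(len(src) - 1, k))] for k in (i - 1, i, i + 1, i + 2)]
        out.append(catmull_rom(p[0], p[1], p[2], p[3], t))
    return out

print(upscale_row([0.0, 1.0, 0.0, 1.0], 9))  # any output length: still 4 taps each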
 