
Video pre-processing - Page 3

post #61 of 212
^^^Expert dr1394's reply here, in a thread about DCI-color displays, might offer some related information. Noticed his home-page thread in the calibration forum just introduced a test pattern for extended color. -- John
post #62 of 212
OK, if I keep waiting until I get a chance to build graphs I'm never going to get to this, so let's see if we can continue where we left off, sans visual aids.

Quote:
Originally Posted by madshi View Post

If you were right shouldn't Bilinear scaling produce the best results (apart from not doing sharpening)?

No, because the whole point is that the quality of the scaling is determined by how natural-looking the interpolation is. There's no particular reason to think that a straight-line interpolation will look good, and there are multiple theoretical reasons to think it wouldn't look very good. And in practice it doesn't look good. It's better than nearest neighbor, but still not that great. Sharpening is only part of it.

Quote:


You seem to think that using more than 2 pixels (more than Bilinear) only helps sharpness. I'm sorry to say, but I think you're wrong here. Using more than 2 pixels helps sharpness, but it also allows a smoother (less jaggied) interpolation of curves.

I absolutely grant you that in the specific samples you posted, some of the curves looked better using 4 lobe filters. But there's no specific theoretical reason to think that a 4-lobe would produce smoother curves across the board. They do in many of the cases in the specific image that you provided and in the lighthouse, but I'd want to see a lot more images before I'd make a blanket claim that 4-lobe in general produces smoother curves.

But even the ones that looked better, looked better because of a specific change in the filter curve at the edge - the specific curve interpolated in between a black pixel and a white pixel. Using energy gathered from 4 pixels away to decide what shape the interpolation curve should have makes no physical sense. It makes sense when you analyze interpolation in frequency space, but our eyes don't analyze edges in frequency space.

Moreover, if you decided you liked that interpolation curve better, there's no reason to go look at the pixels far away to get that curve. Just design a shorter filter kernel that produces that specific curve. Done.

The thing is, if we gather information from many taps, then we end up with energy migrating across the image in a completely non-physical way. No optical or imaging system causes the kinds of ripples that long sinc filters produce.

Here's a thought experiment. Imagine a curved edge on a black field sitting on a white field. The interpolated curve is awesome-looking. I drop a black pixel 4 pixels away from the edge. Should the edge interpolation change? Of course not. In reality, I can stick a dot in the image, and that doesn't tell you anything about an edge 4 pixels away. What if the pixel is bright red? Should the edge interpolation incorporate some red? Or the complementary color? If I move the red pixel left or right, it changes whether the edge is slightly red or slightly cyan. Why? Does that make physical sense?

Say I'm interpolating in one dimension and I have as my original pixels this line (in a single channel):

1 1 1 1 0 0 0 0

The interpolation between the middle pixels (the 1 and 0) is going to be some curve. Depending on our algorithm, it'll be some shape which will look better or worse. If you believe the Lanczos4 is a good idea, then you're arguing that I should use a subtly different curve between that middle 1 and 0 if my original data is:

0 1 1 1 0 0 0 1

Or

0 0 1 1 0 0 1 1

That makes no sense, physical or otherwise. Changing a value four pixels away cannot possibly have any physical effect on the local image area. If the local interpolation gets better, it's a coincidental effect of Lanczos4 happening to have an interpolation curve that is "smoother looking," but there's no particular need to gather all the extra data in order to have that smoother looking curve.
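To make that concrete, here is a minimal numpy sketch (my own illustration using the textbook Catmull-Rom and Lanczos kernel formulas, not anyone's production code) that interpolates at a point between the middle 1 and 0 of the three rows above. The 2-lobe Catmull-Rom result is identical for all three rows because it only sees the four nearest samples; the 4-lobe Lanczos result shifts as the distant samples change.

```python
import numpy as np

def catmull_rom(x):
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

def lanczos4(x):
    # np.sinc(x) is sin(pi*x)/(pi*x)
    return float(np.sinc(x) * np.sinc(x / 4.0)) if abs(x) < 4.0 else 0.0

def interpolate(samples, x, kernel):
    w = np.array([kernel(x - i) for i in range(len(samples))])
    return float(np.dot(w, samples) / w.sum())   # renormalize the weights

rows = [
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0, 0, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
]

x = 3.25   # a point between the middle "1" (index 3) and "0" (index 4)
for row in rows:
    print(row,
          " Catmull-Rom: %.4f" % interpolate(row, x, catmull_rom),
          " Lanczos4: %.4f" % interpolate(row, x, lanczos4))
```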

Quote:


My "non-ringing" algorithm is based on allowing ringing in some areas of the image and not allowing it in others. So I think it could be tweaked a little to accommodate your likings, but I haven't tried that yet, because personally I can't stand any ringing, no matter how small it is. The only reason that my algorithm allows ringing in some parts of the image is that if I suppressed any and all ringing, the image would lose all its smooth curves for whatever reason.

I think your algorithm is a huge improvement, but I still think it's sub-optimal to use a ringing filter, then suppress the ringing. You can easily design a short filter that will produce the exact same curve on the edges you care about, but doesn't ring in the first place. Of course, maybe that's what you're doing, in which case: I approve.

But look at the specific curve generated between a black and a white pixel (or dark gray and light gray, etc.) by your favorite filter and ask yourself: how could I design a short filter that would produce exactly that curve on that same edge? And would that be the optimal curve across the board? The answers should be interesting.

Quote:


Please check out especially the up/downscale. If you look at the ringing Lanczos4 up/downscale, you should notice that it looks almost identical to the original, while the Catmull-Rom result looks a lot softer.

Going up then back down to the same size isn't actually the best way to test a filter. Lousy filters do seriously degrade the image when doing it, but as you note, doing upsampling to a simple integer ratio followed by downsampling to the original size makes it seem like long sinc filters are optimal, because they preserve more of the original image. Since the filter lobes during the downscale exactly line up with the ringing in the image, the ringing you get on the upscale gets largely erased on the downscale. That doesn't tell you whether the upscale looks good or not, and I think we all agree that many-lobe filters have issues (though maybe not so much with your improvements).
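For anyone who wants to try the round-trip experiment at home, here is a rough sketch using Pillow 9.1+ (an assumption on my part; note that Pillow's LANCZOS is a 3-lobe Lanczos, not Lanczos4, and "lighthouse.png" is just a placeholder filename). The up-then-down RMSE flatters ringing filters, because the downscale taps land exactly on the upscale's ringing:

```python
import numpy as np
from PIL import Image

src = Image.open("lighthouse.png").convert("L")   # placeholder test image
w, h = src.size
ref = np.asarray(src, dtype=np.float64)

for name, flt in [("bilinear", Image.Resampling.BILINEAR),
                  ("bicubic",  Image.Resampling.BICUBIC),
                  ("lanczos",  Image.Resampling.LANCZOS)]:
    up   = src.resize((2 * w, 2 * h), resample=flt)   # upscale 2x
    down = up.resize((w, h), resample=flt)            # back to the original size
    err  = np.asarray(down, dtype=np.float64) - ref
    print("%-8s round-trip RMSE: %.3f" % (name, np.sqrt(np.mean(err ** 2))))
```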

Anyway, this came out sounding negative and I didn't intend that. I love what you're doing, and I hope you take my arguments as encouragement to continue. Keep it up! Dig deeper! Prove me wrong!

Best,
Don
post #63 of 212
Quote:
Originally Posted by dmunsil View Post

I absolutely grant you that in the specific samples you posted, some of the curves looked better using 4 lobe filters. But there's no specific theoretical reason to think that a 4-lobe would produce smoother curves across the board. They do in many of the cases in the specific image that you provided and in the lighthouse, but I'd want to see a lot more images before I'd make a blanket claim that 4-lobe in general produces smoother curves.

Well, if you have some good image samples, I can scale and upload them for you. That way you wouldn't have to do the work yourself, you'd just have to provide the samples...

Quote:
Originally Posted by dmunsil View Post

Say I'm interpolating in one dimension and I have as my original pixels this line (in a single channel):

1 1 1 1 0 0 0 0

The interpolation between the middle pixels (the 1 and 0) is going to be some curve. Depending on our algorithm, it'll be some shape which will look better or worse. If you believe the Lanczos4 is a good idea, then you're arguing that I should use a subtly different curve between that middle 1 and 0 if my original data is:

0 1 1 1 0 0 0 1

Or

0 0 1 1 0 0 1 1

That makes no sense, physical or otherwise. Changing a value four pixels away cannot possibly have any physical effect on the local image area.

You are discussing this from a logical, non-scientific point of view. I like that. But here comes the problem: if you were consistent with what you wrote above, shouldn't you strictly only use 1-tap filters? Why is the filter of your choice a 2-tap filter, then? You say a value 4 pixels away cannot have any physical effect. Then why would a value two pixels away be so important? And even stranger, from a logical, non-scientific point of view: why does it make any sense to actually *subtract* the values which are 2 pixels away? Which is what both Catmull-Rom and Lanczos are doing! I'm curious to hear your logical explanation of why negative lobes make sense. Can you explain that without mentioning frequency space?
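For concreteness, this is what the "subtraction" looks like in numbers: a small numpy sketch (standard kernel formulas, not anyone's actual implementation) of the normalized weights each filter applies when interpolating exactly halfway between two pixels. The negative entries are the lobes in question.

```python
import numpy as np

def catmull_rom(x):
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

# Offsets of the nearest samples when interpolating exactly between two pixels
cr_off = np.array([-1.5, -0.5, 0.5, 1.5])          # 2 taps per side
lz_off = np.arange(-3.5, 4.0, 1.0)                 # 4 taps per side

cr_w = np.array([catmull_rom(o) for o in cr_off])
lz_w = np.sinc(lz_off) * np.sinc(lz_off / 4.0)     # Lanczos4 kernel
print("Catmull-Rom weights:", np.round(cr_w / cr_w.sum(), 4))
print("Lanczos4 weights:   ", np.round(lz_w / lz_w.sum(), 4))
```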

Quote:
Originally Posted by dmunsil View Post

Here's a thought experiment. Imagine curved edge on a black field sitting on a white field. The interpolated curve is awesome-looking. I drop a black pixel 4 pixels away from the edge. Should the edge interpolation change? Of course not. In reality, I can stick a dot in the image, and that doesn't tell you anything about an edge 4 pixels away. What if the pixel is bright red? Should the edge interpolation incorporate some red? Or the complementary color? If I move the red pixel left or right, it changes whether the edge is slightly red or slightly cyan. Why? Does that make physical sense?

What you describe above sounds more like computer graphics and less like real life image content to me. I'm coming from audio resampling. In that area there are test sounds like a simple pulse. If you use a typical audio resampling filter to resample such a pulse, the final resampled curve looks really bad (compared to the original). If you want to faithfully resample a pulse, you should use a different audio resampling filter, which however would produce awful results for real audio content. I think the same is true for video content: If you want to faithfully resample computer graphics, you should use a different resample filter than for real life images/photos.
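A small sketch of that pulse experiment in code (an assumption about the setup: scipy's polyphase resampler stands in for the "typical" windowed-sinc audio filter, plain linear interpolation is the contrast):

```python
import numpy as np
from scipy.signal import resample_poly

pulse = np.zeros(64)
pulse[32] = 1.0                                   # single-sample test pulse

up_sinc = resample_poly(pulse, 4, 1)              # 4x upsample, windowed-sinc filter
up_lin  = np.interp(np.arange(64 * 4) / 4.0, np.arange(64), pulse)

print("sinc resampler:   min %.3f  max %.3f" % (up_sinc.min(), up_sinc.max()))
print("linear resampler: min %.3f  max %.3f" % (up_lin.min(), up_lin.max()))
# The sinc result overshoots and goes negative around the pulse (ringing);
# the linear one doesn't, but it rolls off real, band-limited content far more.
```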

Quote:
Originally Posted by dmunsil View Post

Going up then back down to the same size isn't actually the best way to test a filter. Lousy filters do seriously degrade the image when doing it, but as you note, doing upsampling to a simple integer ratio followed by downsampling to the original size makes it seem like long sinc filters are optimal, because they preserve more of the original image. Since the filter lobes during the downscale exactly line up with the ringing in the image, the ringing you get on the upscale gets largely erased on the downscale. That doesn't tell you whether the upscale looks good or not, and I think we all agree that many-lobe filters have issues

I tested various combinations of up/down filters. E.g. upscaling with Lanczos4, downscaling with Catmull-Rom and vice versa. Or Mitchell/Mitchell, or Catmull-Rom/Catmull-Rom. Etc... To my eyes, every time I threw in Lanczos4, the final image was nearer to the original than when not using Lanczos. The simple reason was that using any other filter blurred the image quite noticeably (compared to the original).

I've also seen a test (can't seem to find the URL right now, unfortunately) where they rotated an image 360° in 1° steps. Using a sinc filter, the final image was quite close to the original. Using Catmull-Rom, the result was a blurry mess.
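Something along those lines can be reproduced with scipy (an assumption about the exact methodology of that test; scipy.ndimage only offers spline interpolation, not windowed sinc, so this only shows the shorter-vs-longer-kernel trend rather than the sinc case):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.random((256, 256)), 2.0)   # band-limited test image

for order in (1, 3, 5):          # spline order: 1 = linear, 3 = cubic, 5 = quintic
    out = img.copy()
    for _ in range(360):
        out = ndimage.rotate(out, 1.0, reshape=False, order=order, mode="nearest")
    rmse = np.sqrt(np.mean((out - img) ** 2))
    print("spline order %d: RMSE after 360 x 1 degree = %.4f" % (order, rmse))
```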

Quote:
Originally Posted by dmunsil View Post

Keep it up! Dig deeper! Prove me wrong!

I'll try... Unfortunately I'm far from a science guy. So I feel that I don't have the knowledge to prove you wrong on a scientific level. I can only do tests and compare the results. So that's what I'll be doing.

Quote:
Originally Posted by dmunsil View Post

Moreover, if you decided you liked that interpolation curve better, there's no reason to go look at the pixels far away to get that curve. Just design a shorter filter kernel that produces that specific curve. Done.

[...]

I think your algorithm is a huge improvement, but I still think it's sub-optimal to use a ringing filter, then suppress the ringing. You can easily design a short filter that will produce the exact same curve on the edges you care about, but doesn't ring in the first place. Of course, maybe that's what you're doing, in which case: I approve.

Here is a new comparison (upscaled 425%) based on your suggestions to shorten the Lanczos4 filter to less taps. I've also thrown in Lanczos64, just for fun.

Please note that my resampling algorithm is probably overextended by the Lanczos64 calculations because I'm using simple integer math. So some of the artifacts might be caused by the math not being exact enough.



When comparing Bilinear to Catmull-Rom, I notice the following changes:

(1) more taps
(2) less aliasing
(3) form of the fonts is better reconstructed
(4) sharper
(5) ringing is added

When I compare Catmull-Rom to Lanczos4 (4 taps), I notice *exactly* the same changes as listed above!! That tells me that going from 1 tap to 2 taps has a similar effect to going from 2 taps to 4 taps. Remember above, where I asked why you were using 2 taps instead of 1? I think whatever reason you can give me for that, the same reason will probably also apply to using 4 taps instead of 2 taps.

The "Lanczos4 (2 taps)" screenshot uses the normal Lanczos4 resampling coefficients - but shortened to 2 taps. Same with "Lanczos64 (2 taps)". To me the shortened images look less smooth compared to the full taps images. Furthermore ringing is actually worse with Lanczos4 (2 taps) compared to Lanczos4 (4 taps). To my eyes Lanczos4 (4 taps) looks clearly better than Lanczos4 (2 taps). Lanczos64 (64 taps) looks awful due to excessive artifacts, but I think that the fonts are formed most natural of all.

There are three areas which I find especially interesting in the screenshots:

(1) Look at the "t" in "meter". In the "Bilinear" image it almost looks like having two vertical lines. That is still the case (but less so) in the Catmull-Rom image. The Lanczos64 (64 taps) image gets it best: There it actually looks like a normal "t". The shortened filters do quite well here, too, though.

(2) Look at the first "a" in "No parking at". I like the "a" best in the Lanczos64 (64 taps) image compared to all other images. Unfortunately the extremely excessive ringing hurts image quality so much that it's hardly worth talking about Lanczos64 at all.

(3) Look at the separation between the two letters "OU" in "OUT OF ORDER". In the Lanczos64 (64 taps) image we can see that the letters are nicely separated. In all other images they are more or less connected.

All screenshots above were made without using my anti-ringing tweaks.

Edit: Obviously 64 taps is totally overkill, and I think most probably the coefficients of the first few taps are what makes the fonts look nicer. So shortening e.g. the 64-tap coefficients would be an option. However, the shortened results look over-sharpened to me compared to the unshortened filters. So shortening doesn't really look good to me, either.
post #64 of 212
This is a very interesting thread!

Maybe my knowledge about scaling algorithms is not advanced enough. But in my understanding (and to my eyes) the frequency response of the algorithm is a very important parameter. Real-world images are always filtered so as not to violate the Nyquist-Shannon sampling theorem and to avoid aliasing. So in my eyes it is no problem to use a scaling algorithm like Lanczos 8, which has a very good frequency response and very little beating/aliasing, because the introduced ringing occurs at very high frequencies which a filtered DVD/Blu-ray barely contains. And the inherent ringing of the medium is usually much stronger.
There are still a few parts (like letters) where the introduced ringing is visible, but on the other hand the extra detail and reduced aliasing improve the picture a lot. Of course it is just a compromise and a matter of taste. I'm very sensitive to aliasing, so I prefer Lanczos 8 over Lanczos 3 or Catmull-Rom.

Btw., I measured the frequency response of a few algorithms using an H-sweep from the AVIA disc scaled to 1920x1080 and a waveform monitor.



Bilinear: bad frequency response and much aliasing. No ringing.


Bicubic: better frequency response. Still some beating and aliasing, and very slight ringing.


Lanczos 10: very good frequency response, nearly no aliasing/beats. But very strong ringing on the step (left).


Again, no real-world DVD contains such a step, because it would violate Shannon's sampling theorem. Computer graphics may be a different story (but Shannon's theorem applies there too). I hope the visualizations of the frequency response can help a bit in this discussion.
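A rough sketch of how such a sweep measurement can be approximated in software (assumptions: a synthetic horizontal frequency sweep instead of the AVIA pattern, Pillow for the scaling, and per-band peak-to-peak swing as a crude stand-in for the waveform monitor reading):

```python
import numpy as np
from PIL import Image

W_SD, W_HD, H = 720, 1920, 64
freq  = np.linspace(0.0, 0.5, W_SD)                       # 0 .. Nyquist, cycles/pixel
sweep = 0.5 + 0.5 * np.sin(2 * np.pi * np.cumsum(freq))   # horizontal frequency sweep
img   = Image.fromarray(np.uint8(np.tile(sweep, (H, 1)) * 255))

for name, flt in [("bilinear", Image.Resampling.BILINEAR),
                  ("lanczos",  Image.Resampling.LANCZOS)]:
    row = np.asarray(img.resize((W_HD, H), resample=flt), dtype=float)[H // 2]
    bands = np.array_split(row, 8)                        # low -> high frequency bands
    swing = [round((b.max() - b.min()) / 255.0, 2) for b in bands]
    print("%-8s response, low -> high: %s" % (name, swing))
```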
post #65 of 212
^^^Far less scaling-algorithm know-how here, but your mention of 'real-world' video and plots showing 1920-line test-pattern falloff reminded me of sspears' OP image regarding picture details, plus sspears' '03 archived AVS post:
Quote:


A spectrum analyzer was used to look at high frequency information. This was done on the Restaurant scene and on several motion pictures. This is how the 1300 vs. 800 was calculated.

AIUI, those measurements were from HD-D5 pro tapes (~250 Mbps?) and showed typical limits for optically filtered movies (cameras) as well as motion CGI. Hope it's not OT to inquire whether Blu-ray discs and masters, say using HDCAM-SRs, and even 4k downconversions, now exceed these maximum effective horizontal resolutions. -- John
post #66 of 212
Quote:
Originally Posted by FoLLgoTT View Post

This is a very interesting thread!
Maybe my knowledge about scaling algorithms is not advanced enough. But in my understanding (and to my eyes) the frequency response of the algorithm is a very important parameter. Real-world images are always filtered so as not to violate the Nyquist-Shannon sampling theorem and to avoid aliasing. So in my eyes it is no problem to use a scaling algorithm like Lanczos 8, which has a very good frequency response and very little beating/aliasing, because the introduced ringing occurs at very high frequencies which a filtered DVD/Blu-ray barely contains. And the inherent ringing of the medium is usually much stronger.

Certainly not correct for all the widescreen top and bottom edges going from black to bright material. Properly done, there is no ringing there. Using a ringing filter, there will be.
post #67 of 212
Quote:
Originally Posted by mhafner View Post

Certainly not correct for all the widescreen top and bottom edges going from black to bright material. Properly done, there is no ringing there. Using a ringing filter, there will be.

I never thought about that, because usually there is inherent ringing at the border to the black bars. But you are right. In fact, padding the picture of a Cinemascope movie with black pixels to the full 480/576 results in an image that violates Shannon's theorem and leads to ringing when upscaled.
post #68 of 212
Quote:
Originally Posted by FoLLgoTT View Post

I never thought about that, because usually there is inherent ringing at the border to the black bars. But you are right. In fact, padding the picture of a Cinemascope movie with black pixels to the full 480/576 results in an image that violates Shannon's theorem and leads to ringing when upscaled.

Usually it would be downscaled a bit, from 2K to 1080p, or a lot from 4K to 1080p. I suspect the best results happen when the 2K is from 4K scanning and then simply a bit cropped to get 1080p. No resampling applied.
post #69 of 212
Quote:
Originally Posted by mhafner View Post

Usually it would be downscaled a bit, from 2K to 1080p, or a lot from 4K to 1080p. I suspect the best results happen when the 2K is from 4K scanning and then simply a bit cropped to get 1080p. No resampling applied.

And even when resampling, I always lay a Mod16 black matte over the scaled image to make sure that the matte precisely overlays block boundaries. That should eliminate ringing extending into the matte area at least.

One test of a scaling algorithm is that there shouldn't be any obvious "sweet spot" at 2x, 3x, etcetera.
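A minimal sketch of the mod-16 matting idea above (my reading of it, in plain numpy on a luma plane; the 138/942 letterbox rows are just an example): snap the matte edges to the nearest multiple of 16 lines and hard-matte everything outside to black, so the matte covers any ringing that leaked into the bars and its edges sit exactly on macroblock boundaries.

```python
import numpy as np

def mod16_matte(frame, top, bottom):
    """Black out everything outside the active rows [top, bottom), with the
    matte edges snapped to the nearest multiple of 16 lines."""
    top16    = int(round(top / 16.0)) * 16
    bottom16 = int(round(bottom / 16.0)) * 16
    out = frame.copy()
    out[:top16, :] = 0        # top bar, covering any ringing that leaked into it
    out[bottom16:, :] = 0     # bottom bar
    return out, top16, bottom16

frame = np.full((1080, 1920), 128, dtype=np.uint8)        # dummy 1080p luma plane
matted, t, b = mod16_matte(frame, top=138, bottom=942)    # ~2.39:1 letterbox example
print("matte edges at rows %d and %d, active area %d lines" % (t, b, b - t))
```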
post #70 of 212
Quote:
Originally Posted by benwaggoner View Post

Downsampling and upsampling are different beasts, of course. I like to arrange my affairs so that I'm never upsampling on either axis. For downsampling, I've grown quite to like the Super Sampling implementation in Expression Encoder 2 SP1.

Do you know (and are you allowed to say) how that Super Sampling implementation works technically? Is it a completely different technique compared to e.g. Bicubic or Lanczos resampling?
post #71 of 212
Are there any Linux tools to convert 10 bit DPX RGB to optimally dithered 8 bit YUV 4:2:0?
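Not an answer on the tools front, but here is a sketch of the steps such a conversion involves (assumptions: the 10-bit values are already gamma-corrected video RGB (DPX is usually log, so a real pipeline would transform that first), a Rec. 709 matrix, limited-range 8-bit output, simple uniform-noise dither, and 2x2 box averaging for the 4:2:0 subsampling; a real tool would use better dithering and chroma filtering):

```python
import numpy as np

def rgb10_to_yuv420_8bit(rgb10, rng):
    r, g, b = [rgb10[..., i] / 1023.0 for i in range(3)]       # 10-bit code values -> 0..1
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b                  # Rec. 709 luma
    cb = (b - y) / 1.8556
    cr = (r - y) / 1.5748

    def subsample_420(c):                                      # average each 2x2 block
        return (c[0::2, 0::2] + c[0::2, 1::2] + c[1::2, 0::2] + c[1::2, 1::2]) / 4.0

    def quantize(plane, scale, offset):
        dither = rng.uniform(-0.5, 0.5, plane.shape)           # simple 1-LSB dither
        return np.clip(np.round(plane * scale + offset + dither), 0, 255).astype(np.uint8)

    y8  = quantize(y,                 219.0, 16.0)             # limited range 16..235
    cb8 = quantize(subsample_420(cb), 224.0, 128.0)            # limited range 16..240
    cr8 = quantize(subsample_420(cr), 224.0, 128.0)
    return y8, cb8, cr8

rng = np.random.default_rng(0)
rgb = rng.integers(0, 1024, size=(1080, 1920, 3))              # stand-in for a DPX frame
y, cb, cr = rgb10_to_yuv420_8bit(rgb, rng)
print(y.shape, cb.shape, cr.shape)                             # (1080,1920) (540,960) (540,960)
```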
post #72 of 212
Quote:
Originally Posted by mhafner View Post

Usually it would be downscaled a bit, from 2K to 1080p, or a lot from 4K to 1080p. I suspect the best results happen when the 2K is from 4K scanning and then simply a bit cropped to get 1080p. No resampling applied.

From some comparisons I've done, it would appear that most 1080p transfers are simply cropped from 2048 to 1920 rather than resized. Good news.

BAD NEWS

However there is still some sort of resize going on for reasons that are beyond me and frankly infuriating. I've ballparked it at about 0.03 rescale (seemingly not centred) in addition to the simple 1920 crop.

It's definitely not 1:1 pixel with the 2k scans, but there seems to be no good reason for it not to be.

End result... what I'm seeing on BD isn't even close to the 2k in terms of sharpness. I'd estimate it's closer to 1k. I'm comparing 2k with frame grabs from TMT, which are unfortunately JPEGs, but even so it's a shocking difference.

Always possible that the compression and JPEG conversion are getting in the way, but I find it impossible to believe that's entirely to blame.

These are big budget films I'm talking about... I can't post the comparisons and I won't divulge the titles of the films, so please don't ask.

They are generally regarded as having excellent transfers though.
post #73 of 212
Quote:
Originally Posted by Mr.D View Post

From some comparisons I've done, it would appear that most 1080p transfers are simply cropped from 2048 to 1920 rather than resized. Good news.

BAD NEWS

However there is still some sort of resize going on for reasons that are beyond me and frankly infuriating. I've ballparked it at about 0.03 rescale (seemingly not centred) in addition to the simple 1920 crop.

It's definitely not 1:1 pixel with the 2k scans, but there seems to be no good reason for it not to be.

End result... what I'm seeing on BD isn't even close to the 2k in terms of sharpness. I'd estimate it's closer to 1k. I'm comparing 2k with frame grabs from TMT, which are unfortunately JPEGs, but even so it's a shocking difference.

Always possible that the compression and JPEG conversion are getting in the way, but I find it impossible to believe that's entirely to blame.

These are big budget films I'm talking about... I can't post the comparisons and I won't divulge the titles of the films, so please don't ask.

They are generally regarded as having excellent transfers though.

Is the 2K scan done at 4:4:4?
post #74 of 212
Quote:
Originally Posted by Lee Stewart View Post

Is the 2K scan done at 4:4:4?

10bit log RGB.
post #75 of 212
Quote:
Originally Posted by Mr.D View Post

I'm comparing 2k with frame grabs from TMT, which are unfortunately JPEGs, but even so it's a shocking difference.

I've never been able to get TMT working on my own computer to see for sure, but many caps that I've seen from others seem inaccurate vs the DirectShow method.
post #76 of 212
That's why they (all major studios) should be doing 4K DI so they can supersample down.
post #77 of 212
Quote:
Originally Posted by mhafner View Post

Are there any Linux tools to convert 10 bit DPX RGB to optimally dithered 8 bit YUV 4:2:0?

No more than what's available for Windows.
post #78 of 212
Quote:
Originally Posted by ChuckZ View Post

That's why they (all major studios) should be doing 4K DI so they can supersample down.

2k scans are already downsampled from 4k these days. They need to crop 2048 to 1920 and not do any resampling.
post #79 of 212
Cropping 2048 to 1920 doesn't sound like a good idea. Wouldn't you lose a good portion of the image and screw up the aspect ratio?
post #80 of 212
Quote:
Originally Posted by Kram Sacul View Post

Cropping 2048 to 1920 doesn't sound like a good idea. Wouldn't you lose a good portion of the image and screw up the aspect ratio?

No. The ratio would not change. You lose a bit on top and bottom too. If that looks much sharper, I can do without all the extra edge information, which is practically never relevant (neither for content nor aesthetics).
post #81 of 212
The problem is that they're doing too many steps. They go from 4K scan to 2K. Then they go from 2K to Blu-Ray. Instead they should go directly from 4K to Blu-Ray. That would remove any need to crop and still maintain perfect sharpness.
post #82 of 212
Quote:
Originally Posted by Mr.D View Post

However there is still some sort of resize going on for reasons that are beyond me and frankly infuriating. I've ballparked it at about 0.03 rescale (seemingly not centred) in addition to the simple 1920 crop.
It's definitely not 1:1 pixel with the 2k scans, but there seems to be no good reason for it not to be.
End result... what I'm seeing on BD isn't even close to the 2k in terms of sharpness. I'd estimate it's closer to 1k. I'm comparing 2k with frame grabs from TMT, which are unfortunately JPEGs, but even so it's a shocking difference.

Even when resampling is confirmed, the loss of sharpness can also be partly due to AVC encoder settings (deblocking filters at too low bit rates (WB, are you listening? What's with this crap encoding? http://comparescreenshots.slicx.com/comparison/10911/), etc.) and other processing steps in these long chains. If BDs were made directly from the 2K DPX files with no resampling, an optimised conversion to YUV 4:2:0 Rec. 709, and high bit rates with minimal deblocking filtering, there should be no big loss of sharpness compared to the DPX files. There are some BDs around that went this route (for example the Tamil film Sivaji).
Quote:
These are big budget films I'm talking about ..I can't post the comparisons and I won't divulge the titles of the films so please don't ask.
They are generally regarded as having excellent transfers though.

No surprise here. Big budget does not necessarily correlate with a high-quality BD. If it were so easy... Lots of not-so-high-budget RED/D21... shot stuff is likely to look technically better on BD than many high-budget projects with too many cooks spoiling the long-winded broth...
post #83 of 212
Quote:
Originally Posted by Kram Sacul View Post

Cropping 2048 to 1920 doesn't sound like a good idea. Wouldn't you lose a good portion of the image and screw up the aspect ratio?

It's tiny. Probably less intrusive than the gate inaccuracies on your average film projector: 64 pixels either side out of 2048. These are shots I created, and I'm not in the least bit fussed about losing a sliver off each side if it preserves the sharpness.
post #84 of 212
Quote:
Originally Posted by madshi View Post

The problem is that they're doing too many steps. They go from 4K scan to 2K. Then they go from 2K to Blu-Ray. Instead they should go directly from 4K to Blu-Ray. That would remove any need to crop and still maintain perfect sharpness.

I've done tests with 2k and 4k to 1080p and there is little to no perceptible difference, and that's with no encoding stage other than a film-to-video color correction. You can even find them on here if you search.

If you do 2k properly I see no reason that you'd need 4k, certainly not for 1080p generation.
The practicalities of working at 4k are still nightmarish even for the top 5% of post production facilities... for everyone else it's pretty much unthinkable.

So whilst it's nice to point at the blue sky and say 4k is the Occam's razor, the reality is that 2k works and should be completely effective in generating 1080p IF it's not screwed up, i.e. it should be pixel for pixel with the 2k (color sampling notwithstanding).

The softness I'm seeing in my tests I would say is primarily down to the weird, unnecessary resize; the grain structure looks to be preserved (4:2:0 mushing it a fair bit, but certainly the luminance looks to be free of DNR).

The weird resize may be down to a lot of things:

Maybe it's a rescan off the filmed-out DI: the gates on scanners are always a little bit different.

Maybe the director asked for a slight repo on certain shots (I've looked at more than one shot on more than one film and they all exhibit this resize).

Maybe the DI had some resize left on by mistake.

Maybe they rescaled to hide a filtering artifact off the edges... but one of the edges is within a pixel of being the result of a 1920 crop.

Usually there is a camera lineup test slate that shows the working area of the film shot with the principal cameras. It's really more of a guide for the camera guys than gospel, as it's not terribly accurate (I often get ones that aren't even straight to the camera), but perhaps they built in some default correction to account for this, unnecessarily, for every shot in the film.

Really I don't know why it's there. I only know two things: I think it's completely unnecessary, and it's knocked somewhere in the region of 33% of the resolution out of the image.

What's really weird is that they seem to have gone down the correct route by cropping to 1920 (I'm 99.99% sure I'm seeing a 1920 crop), but then it has this weird non-centred rescale... and it's really too small to have been something that was deliberately dialed in.
post #85 of 212
Quote:
Originally Posted by mhafner View Post

No. The ratio would not change. You lose a bit on top and bottom too. If that looks much sharper, I can do without all the extra edge information, which is practically never relevant (neither for content nor aesthetics).

When you crop 2048 to 1920 you make a 2.35:1 film 2.2:1 or so and you zoom in about 10% on a 1.78:1 frame. IMO it's not worth it just to bypass some scaling which shouldn't be too damaging.
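For reference, a quick check of those numbers (assuming square pixels and a full 2048-wide source):

\[
\frac{2048 - 1920}{2048} = 6.25\%, \qquad 2.35 \times \frac{1920}{2048} \approx 2.20, \qquad \frac{2048}{1920} \approx 1.067
\]

so a scope frame does land at roughly 2.20:1 as stated, while on a 1.78:1 frame the crop works out to a zoom of about 6-7%.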
post #86 of 212
Quote:
Originally Posted by Kram Sacul View Post

When you crop 2048 to 1920 you make a 2.35:1 film 2.2:1 or so and you zoom in about 10% on a 1.78:1 frame. IMO it's not worth it just to bypass some scaling which shouldn't be too damaging.

It's somewhat irrelevant. From what I've seen, cropping to 1920 would represent no more inaccuracy than is already present in the camera viewfinders, film gates, scanners etc. For example, if you scan the same material on the same scanner twice, the two scans never exactly line up with each other (Northlight and other pin-registered scanners are more consistent, though).

The camera line-up charts on every film I've ever looked at always crop in at the sides anyway (they are not accurate most of the time, as I've said).

The films I've looked at all appear to have been cropped to 1920 anyway; they just have this ridiculous sub-pixel resize on top. Any sub-pixel resize is going to soften compared with maintaining 1:1 pixels. In direct comparison to the 2k, the 1080p is shockingly soft. It's more like looking at 1k, i.e. half the resolution of the 2k.

When I'm creating shots I always ensure that any resizing or repo-ing is kept to the absolute minimum necessary and that it's done in such a way as to impact resolution as little as possible. (I usually request 4k plates for these shots but I don't always get them... some of the examples I've looked at were originated with 4k plates... I tend to request them for bluescreen work... even though the end delivery is 2k I like to key at 4k... signal-to-noise ratios and all that.)

I've had artists given video rushes as a line-up guide who've matched the 2k material a little too literally to the video and unfortunately matched the inaccuracies in the telecine gate and even the non-square pixels.
I then have to go in and tell them to take off all the rescaling and just use integer pixel offsets to re-rack the plate... magically the resolution all comes back.

So when you go to the trouble of preserving as much resolution as possible, it grates a bit when I see that the 1080p has needlessly sacrificed it.

If I get time I'll post some tests using "marcie".
post #87 of 212
Slightly OOT, but I'm wondering right now:

The 2K pixels are supposed to be square, just like with 1080p, right? So 2K gives us an AR of 1.8963. That's quite a weird aspect ratio. How did they end up using this AR?
post #88 of 212
Quote:
Originally Posted by madshi View Post

Slightly OOT, but I'm wondering right now:

The 2K pixels are supposed to be square, just like with 1080p, right? So 2K gives us an AR of 1.8963. That's quite a weird aspect ratio. How did they end up using this AR?

It's to give the greatest level of compatibility with common 2k aspect ratios and 1080p video.

When I play 1080p video on a 2k Barco DP100 I just lose 128 pixels off the panel and keep the 1:1 pixel map.
post #89 of 212
Quote:
Originally Posted by Mr.D View Post

The films I've looked at all appear to have been cropped to 1920 anyway; they just have this ridiculous sub-pixel resize on top. Any sub-pixel resize is going to soften compared with maintaining 1:1 pixels. In direct comparison to the 2k, the 1080p is shockingly soft. It's more like looking at 1k, i.e. half the resolution of the 2k.

This would only affect the horizontal domain, right?

I'm wondering if this is what happened with Paramount's release of Watchmen (I should be posting screencaps tonight in the Blu-ray Software forum). It appears to have the same vertical resolution as the Warner disc, but it's been robbed of the finest 1-pixel vertical-running lines and even has very slight ringing around those areas.
post #90 of 212
Quote:
Originally Posted by Kram Sacul View Post

When you crop 2048 to 1920 you make a 2.35:1 film 2.2:1 or so and you zoom in about 10% on a 1.78:1 frame. IMO it's not worth it just to bypass some scaling which shouldn't be too damaging.

Nobody is forced to change the aspect ratio. You crop on all 4 sides so the ratio stays the same, if you want that. Top and bottom are often cropped anyway to mask image edge issues (bad edits, hairs etc.).