Originally Posted by Mr. Hanky
Since the end-use format will be 1080p, the required Mpix will inherently be constrained. By definition of making a 1080p rendition, you explicitly agree to give up any detail or grain that exists beyond that bandwidth.
You made a faulty assertion here. If the grain is smaller than the pixel, the pixel will be the sampled average of the grains in that locality. Nothing about that process dictates that extra noise will be generated.
Exactly, so all is well, as long as adequate bitrate is in hand. Also note that grain isn't strictly one size, nor is it completely random. So there may very well be grain contributions all along the bandwidth of the media, not just 1 for every 4 pixels. So averaging isn't necessarily going to make all individual grains disappear or blend nicely.
The end result could turn out any number of ways. So it is a bit arrogant for someone to dictate how it should look in the end or that one way is categorically right which makes anything different wrong.
"arrogant", now? Why can't you have discussions without making personalisations?
I think it's clear that you may have missed the meaning of what I posted.
What I am saying applies to the telecine process, whereby the film stock is scanned to a digital master. It has nothing to do with any particular codec; it is about preventing random bits of grain from being overemphasized to the extent that they compromise the frame.
There is a lot of material on the web describing this, but here is a "rough" portrayal:
Remember, for the sake of this example, that the grains (of each color) are randomly distributed within the area to be sampled...
Let's say that a particular area covered by a 1920x1080 "pixel" has 15 specks of grain on it, and the "color" of the pixel should be a particular "shade". I'm going to show the Red, Green, and Blue specks with the letters RGB. Spaces without pigment are transparent and show white light.
This is an oversimplification, so don't everyone get twitchy
- in reality there will be "bits" or "specks" that have multiple colors filtered out to create blacks, etc, and they are not lined up pretty like this...
. . R . . . . R . . . . .B
. . B. . . . . . G. . . . . . G
. . B. . . . G. . . . . . R
. . . R. . . . . . . G. . . . . . B
. . . . G. . . . R. . . . B
I'd love to draw a fancy diagram with a little "scanner" that doesn't cover the whole "clump", but if you visualise a circle covering a portion of the image, you can see that it will return a slightly different overall value depending on where it lands.
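In lieu of that fancy diagram, here is a minimal sketch of the idea (hypothetical code, not anything a real telecine actually runs): random specks scattered over one pixel's footprint, sampled by a circular window whose reading depends on where it happens to land.

```python
import random

# Hypothetical illustration only: scatter opaque "grain" specks at
# random positions inside one pixel's footprint, then measure how much
# of the sampling window they cover. Moving the window over the same
# clump of grain yields a different averaged value each time.

def sample_window(specks, cx, cy, radius):
    """Fraction of the specks falling inside a circular sampling window."""
    covered = sum(1 for (x, y) in specks
                  if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2)
    return covered / max(1, len(specks))

random.seed(42)  # arbitrary seed, just to make the sketch repeatable
specks = [(random.random(), random.random()) for _ in range(15)]

# Two placements of the same-sized window over the same grain pattern:
a = sample_window(specks, 0.4, 0.4, 0.3)
b = sample_window(specks, 0.6, 0.6, 0.3)
print(a, b)  # two different averages from one unchanged clump of grain
```

The point isn't the numbers themselves, only that the sampled value is a function of where the scanner lands relative to the randomly placed grain.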
Not only that, but picture that a lot of those values above could be "clumps" of grain from different colors that stuck together during the film processing.
Not only that, but figure that most masters are not made from the original film, but from copies. Anyone who has made photographic copies of negatives (I used to run a developing lab) knows that the "grain" from the original mostly doesn't line up with the particles on the new film, so the copy becomes much more "grainy" than the original.
Being able to average the grains or specks which describe the image is crucial to getting a smooth representation of the image to the end viewer. Grain is NOT good, most of the time - it is simply an artifact of the medium.
Spatial averaging is pretty standard - but I suspect that there are differences in how particular encoders handle averaging the grain TEMPORALLY - meaning, perhaps, that some encoders track a particular spot in a scene through multiple frames of motion. Some of the superior Video Processors on the market do this already to improve the picture on playback - so it would not be far-fetched to assume that the competing encoders may have a few tricks up their sleeves as well.
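To make the temporal idea concrete, here is a toy sketch (again hypothetical - I'm not claiming any shipping encoder does exactly this): if the same scene spot can be tracked across a few frames, a per-pixel temporal median keeps the stable image detail while rejecting the frame-to-frame grain flicker.

```python
# Hypothetical sketch of temporal grain averaging. Each "frame" is a
# list of rows of pixel values; the tracked spot is assumed to be
# motion-aligned already, so the same (x, y) refers to the same scene
# detail in every frame.

def temporal_median(frames, x, y):
    """Median of one tracked pixel's values across consecutive frames."""
    values = sorted(frame[y][x] for frame in frames)
    mid = len(values) // 2
    if len(values) % 2:
        return values[mid]
    return (values[mid - 1] + values[mid]) / 2

# Three frames of the same static spot: the true value is 100, and the
# grain adds a different random offset each frame.
frames = [
    [[100, 112]],
    [[ 96, 100]],
    [[104, 140]],
]
print(temporal_median(frames, 0, 0))  # 100 - the grain outliers are rejected
```

A spatial average can only blur neighbouring detail together; the temporal median above recovers the underlying value because the grain, unlike the image, doesn't repeat from frame to frame.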
Is the more "desirable" nature of the VC1 encodes (as expressed by some users here) a direct result of the "codec" or the "encoder"? We don't know.
Right now, one can be forgiven for assuming it is the codec, since most of us agree that the VC1 encodes (on the whole) look better.
But it is also possible that the MS encoder itself is doing a few tricks in the preprocessing to improve the structure of the picture before it is encoded.
All a guess - we can only speculate - but the end result is a very subjective matter. I prefer what I am seeing in the VC1 encodes at this time...