
Registered · 3,873 Posts · Discussion Starter · #1
 http://www.spectracal.com/downloads/...alibration.pdf


This link is to the 16th installment in the 'Poynton's Vector' series. The man who literally "wrote the book" on digital video comments on consumer display calibration and the importance of viewing-environment conditions in reproducing reference images.


Best regards and beautiful pictures,

G. Alan Brown, President

CinemaQuest, Inc.

A Lion AV Consultants Affiliate


"Advancing the art and science of electronic imaging"
 

Registered · 3,458 Posts

Quote:
A studio-grade reference display has... a 2.4-power electro-optical conversion function (EOCF)

Again he effectively equates gamma with the video decode. Given how much many displays vary their light output with the image being displayed, the usual method of measuring window patterns to look at gamma does not necessarily capture that relationship on a number of displays. The practical upshot is that an image can look noticeably different depending on whether you calibrate to the sort of thing he's talking about or to however a particular window happens to measure on an individual display. Personally I think the whole "recreate the experience of the director" talk is a bit silly, considering that gamma changes the look of an image and gamma calibration is basically not defined.
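
A quick way to see the distinction: a 2.4-power EOCF defines a target luminance for every signal level, whereas a window measurement only gives you the point gamma at that one stimulus, and a display whose light output shifts with picture content can report something quite different. A minimal sketch (the 22 cd/m² reading is a made-up illustrative number, not a measurement):

```python
import math

# Minimal sketch (not from the article): the 2.4-power reference EOCF versus
# the "point gamma" you would infer from one window measurement. The measured
# value below is an assumed, illustrative number.

REF_WHITE = 100.0  # cd/m^2, the studio reference white Poynton cites


def reference_luminance(signal: float, power: float = 2.4) -> float:
    """Target luminance for a normalized (0..1) signal under a pure power-law EOCF."""
    return REF_WHITE * signal ** power


def point_gamma(signal: float, measured: float, measured_white: float) -> float:
    """Gamma implied by a single window reading: log(Y/Yw) / log(signal)."""
    return math.log(measured / measured_white) / math.log(signal)


print(reference_luminance(0.5))        # ~18.9 cd/m^2 is the 2.4-power target
print(point_gamma(0.5, 22.0, 100.0))   # ~2.18 if the display actually puts out 22 cd/m^2
```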
 

Registered · 3,314 Posts

Quote:
I have been describing what you could call the scientific aspects of calibration. If you are running a small business producing computer-generated imagery (CGI) or visual effects (VFX) serving the movie business, then your own viewing preferences aren't relevant; what matters is bringing your equipment into conformance with the appropriate standards, and straight science and engineering suffices.

Actually it's a little more complicated than that.


VFX pipelines can usually be characterised by three color-pipeline attributes, in decreasing order of priority: Transparency, Robustness, Accuracy.


Film VFX are usually carried out to what is sometimes referred to as "full negative density". The easiest way to describe this is that the final VFX shots have the same latitude as any other bit of negative that's been exposed in the camera, i.e. the non-VFX shots.


This is to simplify the grading (color correcting) stage, when the final subjective look is applied to the negative material according to the production's wishes and to a specific end rendering intent (usually some sort of print stock model).


However, VFX shots are usually executed in parallel with the DI stage (the colorist is given the same scans at the same time as, or even earlier than, the VFX guys and in all likelihood has already set a grade look for those shots).


So the VFX company has to ensure that what they are handing back to the client for their DI has not changed in any way relative to the original scan, except in those areas of the scan where they are expected to effect a change, and those changes must tally with the original scan photographically. The input/output process itself has to be Transparent. So that is the top priority in a color pipeline for VFX.
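
To make "transparent" concrete, here's a rough sketch of the kind of sanity check this implies (my own illustration, not a studio tool): every pixel outside the area where VFX were asked to effect a change should come back exactly as it went out.

```python
import numpy as np

# Rough illustration of a transparency check. Assumes the scan and the
# delivered comp are loaded as arrays of identical shape and bit depth,
# and that work_matte is nonzero wherever VFX were allowed to change pixels.
# The read_plate() helper and the file names are hypothetical.


def is_transparent(original: np.ndarray, delivered: np.ndarray,
                   work_matte: np.ndarray) -> bool:
    """True if every pixel outside the work matte is bit-for-bit unchanged."""
    untouched = work_matte == 0
    return bool(np.array_equal(original[untouched], delivered[untouched]))


# scan  = read_plate("scan_0001.dpx")
# comp  = read_plate("comp_0001.dpx")
# matte = read_plate("comp_matte_0001.dpx")
# assert is_transparent(scan, comp, matte), "delivered plate is not transparent to the scan"
```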


Next we come to Robustness. Any created imagery added into a film scan has to exhibit the same latitude performance as the original scan itself. This is generally more information than will actually make it to the end print target, digital or physical (with modern DIs it is actually possible to transfer everything from black to the top of the neg into the print target, though it doesn't always look nice, and that is secondary and done somewhat unconsciously as part of the subjective creative process of the DI stage).


In a nutshell, the VFX guys don't want to have to tweak their CGI if the end look of the shot changes, so all the imagery is created so that it will stand up to the grading stage in the same way a non-VFX shot would. VFX get on with their job, DI gets on with theirs, without constantly getting in each other's way. This is what Robustness entails, and so it's the next priority. It results from the artists knowing what they are doing and from a color pipeline that lets them appraise the data in the scan in a clear, meaningful way on its own terms (not in terms of a given end look, which they know will be in flux anyway).


Then we come to accuracy. An image gets its ultimate colorspace from the rendering intent of the system used to appraise it during its creation. VFX are (mostly) not responsible for the end subjective creative look of a given shot; they are, however, responsible for ensuring that any imagery they create matches up with the film scan in terms of overall dynamic range. So they are mostly involved in a technical, objective color-matching process relative to the original scan. What this means is that they are only concerned with how the imagery matches the scan itself photographically, not with what the end grading stage will do to it. (Conscientious artists will grade their shots all over the place to make sure they can appraise the necessary intensity and color detail, and confirm they have a match both photographically and mathematically, by the numbers.)


A simple example: if my scan is heavily biased towards green and I have to add something in that's green, I'll apply a neutral grade (or even some crazy magenta grade) to the scan to better see where the green has to sit to balance up with the full range of the scan. However, this is just an investigative grade for my own use; the shot goes out with the same "grade" as the original scan once I'm happy with the robustness.
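
The point being that the investigative grade lives only in how I view the data, never in the data itself. A toy illustration of that separation (the gain values are arbitrary):

```python
import numpy as np

# Toy illustration: an investigative grade is applied only for monitoring;
# the plate that ships back to the client is untouched. Gains are arbitrary.


def investigative_view(plate: np.ndarray, rgb_gain=(1.0, 0.8, 1.0)) -> np.ndarray:
    """Return a temporarily rebalanced copy purely for on-monitor appraisal."""
    return plate.astype(np.float32) * np.asarray(rgb_gain, dtype=np.float32)


# linear_scan = ...                           # linearised scan data, shape (H, W, 3)
# monitor = investigative_view(linear_scan)   # knock the green bias back while matching
# deliver(linear_scan)                        # hypothetical: what ships is the original data
```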


In some ways it's actually technically more complex than the final grading stage at the DI (indeed, one of the last things VFX artists get good at is color matching, because you have to consciously employ mechanisms that break the way the human visual system clouds the issue, and understand a few things about the tolerances in the software and your display chain). What they don't need, however, is notional accuracy to a given print model (especially if they don't ultimately know what it is), as most accurate print models will hide some of the information that VFX need to appraise.


So notional "accuracy" to a given print standard is somewhat down the list of priorities for VFX. They are primarily interested in achieving an objective color match to the data set from a film scan, in the full knowledge that the data will be warped, twisted and truncated further down the line at the DI, where the colorist is making subjective creative decisions against a notionally accurate end rendering intent.


For the DI, rendering-intent accuracy is critical; for VFX, interrogation of the initial data is critical.


And a few other things to think about. CGI lighting models work with linear-light calculations, so a CGI lighter is usually given a linearised version of the film scan to reference. The way lighting models work means that lighters generally light the blacks and midtones visually and let the lighting model itself handle and scale the superwhite areas of the image (they tend not to visually appraise them to the same level). This makes perfect sense if you understand how human beings see things and how energy stimulus gives rise to increasing intensity levels in the real world (the real world is linear, by the way).


However, most material shot on film is overexposed to a degree (one or two stops, if not more) to push the exposure up above the toe of the film, where it starts to chemically fail to record variation in the blacks. For this very reason the negative is designed to record a lot of white detail variation above what would normally map to the print. This is called headroom. And this is something a lot of people don't get: the white headroom on film is actually there for the benefit of the DARK detail variation.


The overexposed negative has recorded detail variation in the blacks more effectively than a nominal exposure would manage; however, this would give the image a thin, milky, washed-out look if it were transferred as a one-light print. To compensate, the neg is "printed" back down again to restore the blacks to a lower level, and the white headroom then becomes necessary as it gets dragged down into a recordable area of the print (the superwhites become visible). So we have nice subtle dark variation into black and nice smoothly increasing white detail into the peak whites.


If the white headroom wasn't there, we would have flat, clipped-looking white detail (video, anyone?). This paradigm is fundamental to modern film shooting (and has been for decades).
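
A back-of-the-envelope illustration of the overexpose-then-print-down idea in linear terms (all of these numbers are assumed for illustration, not taken from the post):

```python
# All values assumed for illustration. Scene values are linear, with 1.0 as
# nominal diffuse white; the neg keeps recording detail well above that (headroom).
overexposure_stops = 2
gain = 2.0 ** overexposure_stops        # two stops over: 4x exposure onto the neg
print_down = 1.0 / gain                 # print timing pulls it back down again

neg_headroom = 16.0                     # where the neg finally stops recording detail

for scene in (0.01, 0.18, 0.9):         # deep shadow, mid grey, bright highlight
    on_neg = scene * gain               # shadow lifted clear of the toe; highlight sits in the headroom
    recorded = min(on_neg, neg_headroom)
    printed = recorded * print_down     # printed down: 0.01, 0.18, 0.9 again
    print(scene, on_neg, printed)

# With no headroom (recording clipped at 1.0), the 0.9 highlight would come back as
# min(3.6, 1.0) * 0.25 = 0.25 -- the flat, clipped-looking whites described above.
```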


However, what this means is that most film scans are not "nominally" exposed, and so linearising them with the standard Kodak log/lin values (google them) produces a reference image that is not always that useful for the artists to light to (most film scans are never nominal; they are always under- or over-exposed).
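
For anyone who doesn't want to google: the commonly published Kodak/Cineon 10-bit log-to-linear conversion looks roughly like the sketch below, with reference white at code 685, reference black at code 95, 0.002 density per code value and a 0.6 negative gamma (treat the exact constants as the usual published defaults rather than gospel). A scan shot a couple of stops over will not sit "nominally" against these values.

```python
# Sketch of the commonly published Cineon/Kodak 10-bit log-to-linear conversion.
# Constants are the usual published defaults; an over- or under-exposed scan
# will not line up "nominally" against them.


def cineon_to_linear(code: float, ref_white: float = 685.0, ref_black: float = 95.0,
                     density_per_code: float = 0.002, neg_gamma: float = 0.6) -> float:
    black_offset = 10.0 ** ((ref_black - ref_white) * density_per_code / neg_gamma)
    gain = 1.0 / (1.0 - black_offset)
    return gain * (10.0 ** ((code - ref_white) * density_per_code / neg_gamma) - black_offset)


print(cineon_to_linear(95))     # ~0.0  : reference black
print(cineon_to_linear(685))    # ~1.0  : nominal reference white
print(cineon_to_linear(1023))   # ~13.5 : superwhite headroom above nominal white
```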


Again, this is another reason why VFX pipelines are concerned with issues a little more complex than notional accuracy: theirs is an interrogative process as much as a creative one. Most colorists do not worry about this stuff; they just assume that their end rendering intent is accurate, make the on-screen depiction look the desired way interactively, and push the gamma about to make sure they haven't crushed things.


So VFX often use synthetic display targets that are not based on any real-world colorspace but serve as a tool to help the artists appraise for robustness more easily. They can use accurate print models, and usually have them to hand, but in all honesty in 15 years of VFX work I've required an accurate gamut environment once, for one type of shot (it was Jimi Mistry's head in the book gag in Ella Enchanted... nothing too fancy, just artificial VFX-generated color that was falling out of gamut and outside the tolerance of my display system to actually show the effect; the solution was to employ an accurate 3D LUT print model, but that was useless for normal appraisal as it crushed the blacks, albeit notionally accurately).


(I have to say most colorists do not know their ass from their elbow when it comes to this stuff. Thankfully reality caught up with them a few years back: they no longer demand the ridiculously over-inflated pay packets they once did, or go around with an over-inflated idea of their own creative genius. There are only about four in the world that are any good anyway; the rest are cowboys, to be honest, and will utter things like "linear video", which usually sends me guffawing out of the DI suite.)
 

Registered · 3,314 Posts
A slightly more succinct way of putting it.


I've got a bucket of water and I know that eventually it's going to be poured into a scientific beaker with exact markings on the side. I don't know exactly how much water is going to end up in the beaker; all I care about is that my bucket is clean: it doesn't taint the water, doesn't leak, and contains the same amount of water I started with.


I give the bucket to the client, and they pour out the amount they want into the measured, standardised container.


Then we all party.
 

Registered · 2,250 Posts

Quote:
Originally Posted by Charles Poynton

A studio-grade reference display has BT.709 primary chromaticities,
CIE D65 white reference, a 2.4-power electro-optical conversion function (EOCF),
and 100 cd·m⁻² reference white luminance.
Is the size and viewing distance of the reference display standardized?

... the Rec709 standard looks just right on a 42" monitor, it looks dull on a 120" screen.

http://www.projectorcentral.com/pana...ge=Performance

Quote:
Originally Posted by Evan Powell, ProjectorCentral.com

The AE7000 has a pre-calibrated mode for Rec709, the industry standard for ideal video that specifies a color temperature of 6500K and a gamma curve of 2.2. In theory this should be the ideal viewing mode for home theater. However, while the Rec709 standard looks just right on a 42" monitor, it looks dull on a 120" screen. The reason is that the human eye perceives color values differently in large scale. If you look at a red circle that is 30" diameter, and the same precise value of red in a circle that is three times the size, the smaller one will appear to the eye to be more saturated, even though it isn't. Rec709 standards do not take this perceptual phenomenon into account.

Panasonic addresses this issue with the Cinema 1 mode on the AE7000. Cinema 1 is based on Rec709, but it is color enhanced to compensate for perceptual differences on a large screen. When you flip back and forth between Cinema 1 and Rec709 modes, the latter looks decidedly flat and dull, while Cinema 1 is more vibrant. It is clearly the best choice for the display of film and video in a dark home theater.
 