White level and contrast display vs mastering and video dynamic range? - AVS Forum
post #1 of 40 Old 05-25-2011, 03:19 PM - Thread Starter
dovercat
If one of the aims of calibration is to preserve artistic intent, showing what the director or the person doing the video mastering saw, are modern flat panels and projectors too good for the job? They exceed what may have been used during mastering in contrast, and often in brightness.

What is the dynamic range of video, and what is the connection between the dynamic range of the source and the display's contrast and white level? Does the source limit how it should be displayed? High dynamic range displays do not use consumer video sources, as they are unsuitable. So is there a point where display brightness or contrast gets so great that consumer video is no longer good enough, no longer fit for display? What would that point be?
post #2 of 40 Old 05-26-2011, 08:24 AM
rickardl
Quote:
Originally Posted by dovercat View Post

If one of the aims of calibration is to preserve artistic intent, showing what the director or the person doing the video mastering saw, are modern flat panels and projectors too good for the job? They exceed what may have been used during mastering in contrast, and often in brightness.

THX says:
Motion pictures are mastered in a room that's moderately dark. White levels (contrast) are adjusted to 35 foot-lamberts using a light meter.
So, should we limit/calibrate our displays to 35 fL?
post #3 of 40 Old 05-26-2011, 09:24 AM
sotti
The target luminance is determined by ambient light.
Theaters use 14 fL; mastering rooms are usually 80-120 cd/m².
But having more light for a different environment could be entirely appropriate.

As for black level, the holy grail is OLED, where each pixel can switch off, effectively giving an infinite contrast ratio. Increasing the contrast ratio is always a good thing.

Joel Barsotti
SpectraCal
CalMAN Lead Developer
post #4 of 40 Old 05-26-2011, 11:59 AM - Thread Starter
dovercat
Thanks for the replies.
The thing is, flat panels and projectors now exceed the displays that seem to be used to master the content they display.

Projectors can do 100,000:1 native contrast, and several hundred thousand to one dynamic (sequential); flat panels can do millions to one. White levels on projectors are set at 23 fL if you follow Joe Kane's example, and flat panels are capable of over 100 fL. But mastering seems to be done at 2,000:1 or less, and 35 fL or less.

Are there any problems in displaying video on displays with much higher white levels and/or contrast than was used to master it? Is that maintaining artistic integrity?
Video is called a low dynamic range format, and high dynamic range displays do not use material mastered or stored in the same format as domestic consumer video. Is this because video is too limited? Is that down to limits in how it can be displayed, brightness or contrast or both? Does it become unfit for purpose?

The standards seem pitiful in comparison to what consumer displays claim to be capable of.

Commercial Cinema

SMPTE 196M-1995
Target 16 fL open gate, which is about 13.9 fL with 2.35:1 aspect ratio film and 11.7 fL with 1.85:1 film, due to film transparency and different lenses
Theatrical presentation tolerance 12-22 fL open gate, which is about 10.4-19.4 fL with 2.35:1 films, 8.7-16 fL with 1.85:1
I have read that an ideally set-up 35mm film projector with a pristine first-generation print is ~2,000:1 on/off

DCI Digital Cinema Initiatives, LLC. Digital Cinema Systems Specification Version 1.2 March 07, 2008
Nominal projector 14fL 2000:1 on/off
Review room screen 13.3-14.7fL 1,500+:1
Theatrical presentation 11-17fL 1,200+:1 but no more than the contrast ratio used during mastering
Screen black level with the projector off for theatrical presentation is encouraged to be <0.01 fL but may be higher; 14 fL : 0.01 fL = 1,400:1

Consumer Video

Monitors for mastering for TV
Europe: EBU Tech 3320 version 2.0, Oct 2010
70 to at least 100 cd/m² (20.43-29.19+ fL) white. The old version 1.1 (May 2008) used 80 cd/m² (23.35 fL) as an example of a reference white point, while version 2.0 uses 70 cd/m² as full-screen white in its contrast requirements; displays go over reference white to 109% super-white. The 1% patch contrast ratio is 2,000+:1, the full-screen white contrast ratio 1,400+:1.
USA: NTSC, SMPTE RP 166 'Critical Viewing Conditions for Evaluation of Color Television Pictures'
35 foot-lamberts.

Domestic Viewing

ITU-R BT.500-11 requires monitor brightness of up to at least 200 cd/m² (58.37 fL) for tests simulating domestic viewing conditions. EBU Tech 3321 recommends that consumer displays of 50" diagonal or smaller be capable of a white level of at least 200 cd/m². I have read that SMPTE had a target for CRTs of about 50 fL.
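For reference, the cd/m² and foot-lambert figures quoted above interconvert with a single constant (1 fL = 3.4262591 cd/m²). A minimal sketch in Python (the function names are mine, just for illustration):

```python
# Unit conversion used throughout this thread: 1 fL = 3.4262591 cd/m^2.
CDM2_PER_FL = 3.4262591

def cdm2_to_fl(cdm2: float) -> float:
    """Luminance in cd/m^2 -> foot-lamberts."""
    return cdm2 / CDM2_PER_FL

def fl_to_cdm2(fl: float) -> float:
    """Luminance in foot-lamberts -> cd/m^2."""
    return fl * CDM2_PER_FL

# Figures quoted in the standards above:
print(round(cdm2_to_fl(70), 2))   # EBU full-screen white: 20.43 fL
print(round(cdm2_to_fl(100), 2))  # EBU reference white: 29.19 fL
print(round(cdm2_to_fl(200), 2))  # ITU-R BT.500 domestic: 58.37 fL
print(round(fl_to_cdm2(35), 1))   # SMPTE RP 166 white: ~119.9 cd/m^2
```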
post #5 of 40 Old 05-26-2011, 12:08 PM
sotti
Well, if you have a reference viewing environment with controlled lighting, then you should use parameters similar to the established mastering recommendations. That factors the luminance out: regardless of what the display is capable of, you would use the reference luminance recommendations.

So basically your question boils down to: "Do ultra-low black levels adversely affect content?"
The answer to that question is no, they don't.

Also, many of those large contrast numbers are due to local-dimming or auto-dimming features. ANSI contrast figures in the viewing environment are much smaller.

post #6 of 40 Old 05-26-2011, 01:25 PM - Thread Starter
dovercat
Do you think displays with overly high white levels might be a problem?
The standards mention up to 400 cd/m² (116.75 fL) in adverse conditions, so I guess displays are still not brighter than the brightest they were ever envisioned to be in the standards. Is the only reason high dynamic range displays need content mastered specifically for them that they can go far brighter than 400 cd/m²?

You say very high contrast is fine. But both increased contrast and increased white level have an obvious effect on the picture. I would have thought altering them would result in seeing something different from what the director or the person doing the video mastering saw. So is artistic integrity, seeing what the director saw, not as important as maximizing contrast for increased perceived picture quality?


Commercial cinema DCI standards state "Reference Image Parameters and Tolerances. In order to eliminate unwanted detail or discoloration in near blacks, it is critical that Mastering Projectors have an equal or higher sequential contrast than all exhibition projectors."

DCI tracks gamma down to 5% of peak white; the last 5% is determined by the display's black level.


Video standards state contrast in terms of minimums, and the minimums for the grade 2 and 3 monitors used to replicate domestic viewing are substantially lower than for grade 1 monitors. But since they are minimums, I would guess higher contrast is always better, with no undesired effects?

Is there anything to see at the bottom of the greyscale, or is video mastered so that black is actually higher than reference black, so no information unseen during mastering shows up on higher-contrast displays?

Video, Europe (EBU): tracks gamma from 90% down to 10%; above 90% and below 10% it is determined by the display's white level and black level. Greyscale is accurate from 100 cd/m² (the monitor's white level) down to 1 cd/m², with anything below 1 cd/m² requiring no visible deviation.


Video calibration, I think, is done to make 2% above black just visible on a black screen in dim surroundings, or 4% above black just visible on black with a higher-APL picture or in bright surroundings. Doing calibration to make 2% or 4% above black just visible on a display capable of 100,000:1 native contrast would, I expect, make the 2% or 4% level darker than it would be if it followed the gamma curve. (With European EBU gamma of 2.35, 100% white to 2% white is about 9,831:1; 100% white to 4% white is about 1,928:1.) So do you track gamma down to 10% of peak white like a video monitor, or to 5% like a cinema projector, and then do the last bit according to the display's black level?
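Those gamma-derived ratios are easy to check. A quick sketch, assuming a pure power-law display response (the function name is mine):

```python
# Ratio of peak white to a small near-black stimulus under a pure
# power-law display gamma (EBU gamma 2.35, as in the post above).
def contrast_vs_white(stimulus_fraction: float, gamma: float = 2.35) -> float:
    """Peak-white : stimulus luminance ratio for L = V**gamma."""
    return (1.0 / stimulus_fraction) ** gamma

print(round(contrast_vs_white(0.02)))  # 2% video level: ~9831:1
print(round(contrast_vs_white(0.04)))  # 4% video level: ~1928:1
```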

Vision is more sensitive in dark areas, hence the way gamma curves work, with smaller contrast steps at the bottom of the curve. I would have thought that at some point, as contrast is increased, the steps at the bottom of the greyscale would become visible rather than a smooth gradient. Or is it so dark down there that it all just looks black?


Consumer display ANSI checkerboard contrast is also higher than the standards for displays used to master cinema and video.

Consumer projectors reach up to 750:1 ANSI contrast, and plasma flat panels can be over 1,000:1. I do not know about LCD, but with LED backlighting I would guess it is pretty high.

Cinema
SMPTE 196M only requires a 400:1 contrast relative to stray light.
Typical figures are
original sequential contrast 30 stops, simultaneous contrast 13 stops
capture sequential contrast 24 stops, simultaneous contrast 12 stops 4000:1
post sequential contrast 12 stops 4000:1, simultaneous contrast 10 stops 2000:1
mastering sequential contrast 11 stops 2000:1, simultaneous contrast 9 stops 500:1
distribution sequential contrast 9 stops 500:1, simultaneous contrast 8 stops 200:1
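Since the figures above mix stops and ratios: a stop is a doubling of light, so an N-stop range spans 2^N:1, which shows the quoted ratios are rounded. A sketch:

```python
# One photographic stop is a doubling of light, so an N-stop range
# spans 2**N : 1. The ratios quoted above are rounded versions of these.
def stops_to_ratio(stops: float) -> float:
    return 2.0 ** stops

for stage, stops in [("post, sequential", 12),
                     ("mastering, sequential", 11),
                     ("distribution, sequential", 9),
                     ("distribution, simultaneous", 8)]:
    print(f"{stage}: {stops} stops = {stops_to_ratio(stops):.0f}:1")
# 12 stops = 4096:1 (quoted as 4000:1), 11 = 2048:1 (2000:1),
# 9 = 512:1 (500:1), 8 = 256:1 (200:1)
```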

I think SMPTE is 100+:1 ANSI contrast
DCI is intra-frame checkerboard contrast 150+:1 nominal projector, 100+:1 for the review room screen
Video
Europe EBU contrast with EBU box pattern, grade 1 monitor 200+:1, grade 2 and 3 monitors 100+:1
post #7 of 40 Old 05-26-2011, 02:35 PM
ADU
Quote:
Originally Posted by dovercat View Post

Does the source limit how it should be displayed. High dynamic range displays do not use consumer video sources as they are unsuitable. So is there a point where display brightness or contrast gets so great that consumer video is no longer good enough, fit for display? What would be that point?

It depends on your definition of "suitable".

If you're asking when the contrast or brightness of a display exceeds the limits of a JND (just noticeable difference) for 8-bit consumer video, that's a difficult question to answer.

If you try to calculate the contrast ratio of a display with a JND for 220 codes (8-bit video, nonlinearly encoded), the number may be quite low, perhaps on the order of 100:1 or less according to Poynton. If you look closely at the left-hand square with no dithering below, you can probably detect some banding at regular intervals between the different shades of gray, which run from 120 to 130 on a 256-code scale.

(The banding may be a little easier to see if you turn up the contrast on your display and/or switch to the AVS Black Skin in your User Profile, and give your eyes a few moments to adjust to the lower light levels.)



Add some dithering to the image, though, and those subtle "steps" between the shades should disappear completely, as in the square on the right (provided your monitor is doing its job correctly).

The addition of dithering and noise (in the video) and other spatial considerations helps get around the implied JND limit of the 220-code consumer video system, and significantly extends the range of what might be considered "acceptable".
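The effect is easy to reproduce numerically. A minimal sketch (a coarse 10-level quantizer stands in for closely spaced gray codes; the helper names are mine): quantizing a smooth ramp produces wide flat bands, while adding noise of about one quantizer step before quantizing breaks the bands into fine grain.

```python
import itertools
import random

random.seed(1)
LEVELS = 10  # coarse quantizer standing in for closely spaced gray codes

def quantize(v: float) -> int:
    """Map v in [0, 1] to one of LEVELS output codes."""
    return min(int(v * LEVELS), LEVELS - 1)

ramp = [i / 999 for i in range(1000)]  # smooth 0-to-1 gradient

banded = [quantize(v) for v in ramp]
# Dither: add noise of about one quantizer step before quantizing.
dithered = [quantize(min(max(v + random.uniform(-0.5, 0.5) / LEVELS, 0.0), 1.0))
            for v in ramp]

def longest_flat_run(codes):
    """Length of the longest run of identical consecutive codes."""
    return max(len(list(group)) for _, group in itertools.groupby(codes))

print(longest_flat_run(banded))    # wide flat bands (~100 samples): visible steps
print(longest_flat_run(dithered))  # much shorter runs: steps broken into grain
```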
 

Quote:
Originally Posted by dovercat View Post

Vision is more sensitive to dark areas, hence the way gamma curves work, smaller contrast steps at the bottom of the curve. I would of thought at some point the steps are going to become visible rather than a smooth gradient, as contrast is increased.

On linear technology displays (e.g. plasma), the steps between display codes are perceptually farther apart at the lower luminance levels. To keep those steps below a JND threshold, you need to increase the bit depth of the display. The issue here is not so much in the video (which is non-linear), it's in the linear technology used to display it.


post #8 of 40 Old 05-26-2011, 11:21 PM
Mr.D
Quote:
Originally Posted by ADU View Post


On linear technology displays (e.g. plasma), the steps between codes are perceptually farther apart at the lower luminance levels. To keep those steps below a JND threshold, you need to increase the bit depth of the display. The issue here is not so much in the video (which is non-linear), it's in the linear technology used to display it.


I'm not so sure I would describe a plasma as inherently linear. The response of the phosphor, as a chemical stimulus response, is unlikely to be linear, I would imagine.

Digital display engines themselves are linear. LCD pixels behave in a linear fashion, and the increase in bit depth is necessary to apply LUTs (curves/gammas) internally in the hardware, to produce an end display response that is compatible with gamma-encoded video. However, as domestic video is inherently 8-bit, you can't really bend 8-bit native content about without introducing some banding, regardless of how high it is inflated in the display hardware.
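Mr.D's point about bending 8-bit content can be sketched in a few lines: push all 256 codes through a gamma-tweaking curve and re-quantize to 8 bits, and some output codes collide while others go unused. (The 1.2 adjustment exponent here is an arbitrary assumed example, not a figure from this thread.)

```python
# Why re-mapping 8-bit content in the display loses shades: push all 256
# codes through a gamma-adjustment curve and re-quantize back to 8 bits.
# The mapping stays monotonic, but rounded outputs collide near black and
# leave gaps above, both of which show up as banding.
GAMMA_TWEAK = 1.2  # assumed example adjustment exponent

lut = [round(255 * (code / 255) ** GAMMA_TWEAK) for code in range(256)]

used = set(lut)
print(len(used) < 256)   # True: fewer than 256 distinct output codes survive
print(256 - len(used))   # number of shades lost to the remap
```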

As to the contrast and white reference issue: digital displays (at least domestic ones) are on the whole incapable of modulating at the actual screen with precision greater than about 6 bits (and that's usually the best ones). Correspondingly, if you limit the display hardware in terms of its peak white output, you normally start to impact the precision (i.e., it starts to show worse banding).

As such, with digital displays towards the lower end, I tend to let the hardware hit a white point as high as it will go without inducing clipping (254).

I then use other offboard hardware (such as an HTPC, for example) to map 240 to 254 on the display (I don't see any need to map domestic video higher than that), rather than use the often far-from-transparent color engine in the display.

The only displays I tend to limit to a specific standardised white point are CRTs (as precision is not normally an issue) and high-end displays in DI suites (which usually have more advanced internal engines), in conjunction with higher-end color management systems (usually with a 3D LUT). And that's usually because they are trying to simulate a projected-print type environment rather than video (which I generally regard as a bit like soggy tissue paper in image terms anyway).

digital film janitor
post #9 of 40 Old 05-27-2011, 01:34 AM - Thread Starter
dovercat
Quote:
Originally Posted by ADU View Post

It depends on your definition of "suitable".

If you're asking when the contrast or brightness of a display exceeds the limits of a JND (just noticeable difference) for 8-bit consumer video, that's a difficult question to answer.

Yes, in part.
I am asking whether 8-bit consumer video breaks, becomes unfit for display, on high-white-level or high-contrast displays. If the picture were noticeably worse due to visible greyscale steps rather than a smooth greyscale gradient, then despite the higher white level or contrast the picture would be worse, because the 8-bit video is not up to the task.

Quote:
Originally Posted by ADU View Post

The addition of dithering, noise and other spatial considerations help to get around the implied JND limit in a 220-code system, and significantly extend the range of what might be considered "acceptable".

I thought dithering was undesirable image noise, caused by displays that cannot show an 8-bit greyscale on a per-pixel, per-frame basis and so must use temporal or spatial dithering. Modern displays seem to be trying to reduce their reliance on dithering; for example, DLP projectors tried ND green segments, then pulsed lamps, and now LED lamps, all of which increase the bit depth the display can show without dithering.

You seem to be saying dithering image noise is desirable, to hide a limit of the video format: banding.

Quote:
Originally Posted by ADU View Post

On linear technology displays (e.g. plasma), the steps between codes are perceptually greater apart at the lower luminance levels. To keep those steps below a JND threshold, you need to increase the bit depth of the display. The issue here is not so much in the video (which is non-linear), it's in the linear technology used to display it.

You say increasing the bit depth of the display removes banding. But if banding is caused by the source being 8-bit, rather than the display being lower than 8-bit, how does the display extrapolate the extra grey-step gradations? Is it not going to smooth away picture detail? Sharpness is a key factor in perceived picture quality.
post #10 of 40 Old 05-27-2011, 01:50 AM - Thread Starter
dovercat
Quote:
Originally Posted by Mr.D View Post

Digtal display engines themselves are linear , LCD pixels behave in a linear fashion and the increase in bit depth is necessary to apply LUTS (curves/gammas) internally in the hardware to produce an end display response that is compatable with gamma encoded video. However as domestic video is inherently 8bit you can't really bend 8bit native content about without introducing some banding regardless of how high its inflated in the display hardware.

Apologies for being a bit dense; I am not sure I follow what you mean. Are you saying that non-CRT displays do not naturally track the gamma curve of video, so they use a best-fit approach to produce the greyscale, and best fit is not a perfect fit?
What do you mean by "bend 8-bit native content" and "inflated in the display hardware"? Are you talking about trying to display 8-bit video at a higher bit depth to remove banding resulting from increasing the contrast?

Quote:
Originally Posted by Mr.D View Post

As to the contrast and white reference issue: digital displays (at least domestic ones) are on the whole incapable of modulating at the actual screen with precision greater than about 6 bits (and that's usually the best ones). Correspondingly, if you limit the display hardware in terms of its peak white output, you normally start to impact the precision (i.e., it starts to show worse banding).

As such, with digital displays towards the lower end, I tend to let the hardware hit a white point as high as it will go without inducing clipping (254).

OK, I think you are saying that non-CRT displays need to use their high white point to give them the room, black to white, to display all the steps, because they are incapable of making small steps in greyscale. So if the black-to-white room is too small, they have to skip steps, which causes more banding issues.

Apologies again for being a bit dense, if I appear to be simply restating or misinterpreting what you posted.

You appear to be recommending, for consumer displays, the opposite of what sotti said to do.

Quote:
Originally Posted by sotti View Post

Well, if you have a reference viewing environment with controlled lighting, then you should use parameters similar to the established mastering recommendations. That factors the luminance out: regardless of what the display is capable of, you would use the reference luminance recommendations.

post #11 of 40 Old 05-27-2011, 11:04 AM
ADU
Quote:
Originally Posted by dovercat View Post

You seem to be saying dithering image noise is desirable to hide limits of the video format, banding.

That's precisely what I'm saying.

Let me back up a bit, though, because I think I may have misconstrued what you were asking about in the following comment, and it may reflect a slight misunderstanding of how perceptual coding works. (Hopefully this will address a few of your other questions above as well.)
 

Quote:
Originally Posted by dovercat View Post

Vision is more sensitive to dark areas, hence the way gamma curves work, smaller contrast steps at the bottom of the curve. I would have thought at some point the steps are going to become visible rather than a smooth gradient, as contrast is increased.

While it's true that the codes in video are closer together toward the darker end when measured on a linear scale of luminance, on a nonlinear scale of lightness the video codes appear more or less perceptually equal across the entire luminance range, thanks to nonlinear encoding/decoding.

Nonlinear encoding/decoding does a pretty good job of maintaining this perceptual uniformity in video regardless of the display's brightness or contrast ratio.

However, it's the dithering, noise, etc. (in the video, not the display), in combination with this perceptual coding, that essentially keeps the contouring/banding in video below the JND threshold over a fairly wide range of brightness and contrast levels, on different types of displays and in different viewing conditions. The two processes (dithering and perceptual coding) work hand in hand to produce a more satisfying result. (See the picture below as an example.)

You may be able to see the difference between the dithered and non-dithered images a little better in the picture below, with the gradients running left-to-right rather than top-to-bottom.



The "steps" between shades should be a little more visible on the no-dithering gradient here, because your perception of detail is slightly better in the horizontal direction than in the vertical.

You may notice a little unevenness on the dithered side as well. Some (or perhaps all) of that unevenness may be the result of less-than-perfect color processing/reproduction on your monitor.

As you state in your post above, spatial and temporal dithering can also be used in the processing on a linear-technology display to help smooth the transitions between display codes in the lower luminance range. I made a couple of minor changes in my first post above to hopefully make the distinction between "video codes" and "display codes" a little clearer there as well.
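The claim that gamma coding roughly equalizes perceptual step size can be sketched by comparing the relative luminance jump between adjacent 8-bit codes at the same luminance under linear and gamma coding (a pure 2.4 power law is assumed here; the helper names are mine):

```python
# Relative luminance jump to the next 8-bit code at a given luminance,
# under linear coding vs a pure gamma-2.4 power law (assumed example).
# Gamma coding keeps the jumps far more uniform across the range, which
# is what holds the steps near the JND threshold.
GAMMA = 2.4

def step_ratio_linear(lum: float) -> float:
    """Fractional luminance jump to the next code, linear coding."""
    code = round(lum * 255)
    return (code + 1) / code - 1

def step_ratio_gamma(lum: float) -> float:
    """Fractional luminance jump to the next code, gamma coding."""
    code = round(255 * lum ** (1 / GAMMA))
    return ((code + 1) / code) ** GAMMA - 1

for lum in (0.01, 0.1, 0.5):  # 1%, 10%, 50% of peak white
    print(f"L={lum}: linear step {step_ratio_linear(lum):.1%}, "
          f"gamma step {step_ratio_gamma(lum):.1%}")
# Near black the linear step is a ~33% luminance jump; the gamma-coded
# step stays within an order of magnitude of the ~1% Weber threshold.
```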


post #12 of 40 Old 05-27-2011, 12:56 PM
sotti
Quote:
Originally Posted by dovercat View Post

OK, I think you are saying that non-CRT displays need to use their high white point to give them the room, black to white, to display all the steps, because they are incapable of making small steps in greyscale. So if the black-to-white room is too small, they have to skip steps, which causes more banding issues.

Apologies again for being a bit dense. If I appear to be simply restating or misinterpreting what you posted.

You appear to be recommending for consumer displays doing the opposite to what sotti said to do.

You have to understand the tech at a lower level.

I was referring mostly to lamp-based techs, the ones that can blow white-level targets out of the water. LCDs (CCFL and LED) and projectors (front and rear) are bulb-based: you set the white level by turning the backlight, lamp setting, or iris up and down, not with the contrast control. You don't lose digital steps; you just meter the output.

Plasma is a different animal, but it usually only does around the target light output anyway, very similar to CRT.

post #13 of 40 Old 05-27-2011, 02:13 PM - Thread Starter
dovercat
ADU
OK, so you mean dithering is encoded in the video source to prevent banding.

Perceptual coding = gamma, used to define the size of the greyscale steps, so that perceptually the steps are a similar size across the greyscale.

But digital cinema and video monitors do not faithfully follow the perceptual coding/gamma at the bottom of the greyscale. Digital cinema is only faithful down to 5% of peak white, video monitors down to 10%.
Video monitors are also not faithful at the top of the greyscale, above 90%.

If consumer displays work the same way as video monitors, the greyscale step sizes at the bottom and top depend on the display's black level and white level.

Does that not cause problems? Hence DCI cautioning against theatrical projectors with higher contrast (lower black levels) than the projector used during mastering.

Reading more, I would guess not, because not only are the DCI nominal projector (2,000:1) and the video mastering monitor (2,000+:1) in that range, but so is the footage being mastered.



35mm film is a lazy S-curve: in the middle, linear at about 100:1 (6.5 stops), and at each end, non-linearly, another 5:1 (2.2 stops), for a total of around 2,000:1. At the extremes the contrast is crushed, making details difficult or impossible to extract. During mastering, the white balance point is fixed at the onset of white crushing, leaving about 1,000:1 (10 stops).

Traditional film cameras can over- or under-expose and then adjust when transferring to the film print. Film print has a higher gamma/contrast than the real world: to compensate for the display's lower peak white compared with the real world, to compensate for the dark-surround effect lowering perceived contrast, to make the image more appealing, and for artistic intent. This contrast expansion is about 50%, so 1,000:1 becomes 1,500:1.

The DCI mastering review room's on-screen 1,500+:1 contrast makes sense.

DCI commercial theaters are still within spec as long as they are 1,200+:1, but DCI uses relative luminance encoding below 5%, so all levels are displayed, with luminance levels dictated by (relative to) the projector/screen black level. When the master is encoded, a high-contrast projector is used and the lowest code value is assigned to the lowest luminance (black level) it can display. When the content is displayed with a higher black level (lower contrast) than the master, the bottom of the curve is made gentler (compressed) so all the shadow detail is still displayed.
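For context, the DCI spec defines an absolute code-value encoding for the distribution master (12-bit code values, gamma 2.6, luminance normalized to 52.37 cd/m²). A sketch of that encoding, with helper names of my own:

```python
# Sketch of the DCDM code-value encoding from the DCI spec: 12-bit codes,
# gamma 2.6, luminance normalized to 52.37 cd/m^2. Every code decodes to a
# fixed target luminance, so a lower-contrast projector must compress the
# bottom end rather than clip it.
PEAK_NORM = 52.37  # cd/m^2, normalizing constant from the DCI spec

def dcdm_encode(luminance: float) -> int:
    """Luminance in cd/m^2 -> 12-bit DCDM code value."""
    return round(4095 * (luminance / PEAK_NORM) ** (1 / 2.6))

def dcdm_decode(code: int) -> float:
    """12-bit DCDM code value -> luminance in cd/m^2."""
    return PEAK_NORM * (code / 4095) ** 2.6

white_14fl = 14 * 3.4262591  # the 14 fL nominal white, ~48 cd/m^2
print(dcdm_encode(white_14fl))                         # code for nominal white
print(round(dcdm_decode(dcdm_encode(white_14fl)), 2))  # round-trips to ~47.97
```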

Color film print is also contrast-limited: pre-1997 to about 400:1 intra-scene and 1,600:1 sequential, and from 1997 just under 4,000:1 sequential. Kodak Vision Color Print Film is in theory capable of a density range of over 4.0 (10,000:1), and Kodak Vision Premier Color Print Film of over 5.0 (100,000:1); these immense ranges are limited by the dynamic range of the negative and further reduced by projection flare.

SMPTE 196M-1995: 35mm film print reference white of 16 fL open gate at the center of the screen.
Due to film transparency and the different lenses used, that is about 13.92 foot-lamberts with 2.39:1 aspect ratio film and 11.66 foot-lamberts with 1.85:1, which I assume is where the often-quoted 14 and 12 foot-lambert figures come from.

The 0.0029 fL black level with the projector off, for a cinema review room, makes sense: with 14 fL white that gives up to 4,827:1, and with 12 fL up to 4,138:1.

Screen black level with the projector off for theatrical presentation is encouraged to be <0.01 fL but may be higher; 14 fL : 0.01 fL = 1,400:1 and 12 fL : 0.01 fL = 1,200:1. (The DCI minimum for digital theatrical presentation makes sense in comparison to film print theatrical presentation.)



Standard video gamma-correction curves have an exposure contrast range of about 30:1 linearly, and about 1,000:1 (10 stops) in total including the non-linear portions. Using knee and black stretch at 85% and 5% of peak white expands contrast to 10,000:1, but only 500:1 of that is noise-free. Camera noise limits exposure contrast to about 2,000:1.

The EBU grade 1 video monitor for mastering, with its 2,000+:1 1%-patch contrast, makes sense.



So is all the picture info within a 2,000:1 contrast range?

Setting up a display uses:

A PLUGE pattern, so that 4% above black can be distinguished from black in an average-APL image.
100% white to 4% above black, at a video gamma of 2.35 (I am European), is 1,928:1.
That makes sense: it ensures you are seeing all the picture information.

A PLUGE pattern, so that 2% above black can be distinguished from black in an otherwise black image in the dark.
100% white to 2% above black at video gamma 2.35 is 9,830:1.
I think this might be to ensure the display is tracking properly, so the details down to 4% above black are displayed correctly, with the greyscale sloping down to black, and to set the black level of the display. It also ensures that in dark scenes the details 4% above black stand out.
I assume this means a display black level giving about 10,000+:1 is desired, or was expected with CRT.
Other displays are designed to emulate CRTs and to be set up with the PLUGE patterns, so I guess doing so makes sense regardless of the display's contrast ratio.



Is there any detail below 4% above black that is not a test pattern or computer-generated?
Does higher than 2,000:1 contrast just mean black is blacker? The picture detail, since it should follow the gamma curve down from white, is not affected by having higher contrast. Higher contrast is just contrast to black? No disadvantage, because there is no picture info down there to see; you are just making the step between image and black bigger.

Assuming a contrast to black of 10,000+:1 is desired (hence the 2% PLUGE pattern for setting black level), what is the advantage of a massively higher contrast (lower black level) with video, apart from fade-to-black scenes displayed in rooms with no ambient lighting, and making demo images on a digital-16 black background appear to float in space, like, say, a fish or a spaceship?



Coming at the issue of contrast from the other direction: human vision.
Adaptation over time: up to 5 minutes to full light adaptation, 30 minutes to dark adaptation; 100,000,000:1 to 10,000,000,000:1 depending on who you cite.
Rapid contrast gain control, redone whenever the eye moves (including after saccades), takes about 100 ms: 100,000:1 perceived in a single image without iris adaptation.
The eye's photoreceptors: 1,000:1 perceived at a single focus point in the image.
Output of the retina: 100:1 to 300:1 depending on who you cite.
(The maximum local boundary contrast before glare blurs detail on the dark side into invisibility is about 150:1; higher contrast is perceivable, but detail is lost.)
Visual pathway, retina to brain: 32:1 signal-to-noise ratio.
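Restating those figures in stops (log base 2 of the ratio) makes them comparable with the film and camera stop counts earlier in the thread. A quick sketch:

```python
# The human-vision contrast figures above, restated in stops
# (one stop = a doubling, so ratio 2**stops).
import math

def ratio_to_stops(ratio: float) -> float:
    return math.log2(ratio)

for label, ratio in [("full dark adaptation", 10_000_000_000),
                     ("single image, contrast gain control", 100_000),
                     ("single fixation (photoreceptors)", 1_000),
                     ("retinal output (upper figure)", 300)]:
    print(f"{label}: {ratio:,}:1 = {ratio_to_stops(ratio):.1f} stops")
# 10,000,000,000:1 is ~33.2 stops; 100,000:1 ~16.6; 1,000:1 ~10.0; 300:1 ~8.2
```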

So I guess 100,000,000:1 to 10,000,000,000:1 dynamic range, if you give the audience long enough to adapt; currently films are not edited with the viewer's adaptation time from light to dark scenes in mind.
100,000+:1 intra-frame concurrent contrast should look great (1,000+:1 should look OK, 100-300:1 watchable, 32+:1 poor but you can tell what is happening).
150+:1 local boundary contrast, black pixel next to white pixel.
dovercat is offline  
post #14 of 40 Old 05-27-2011, 02:25 PM - Thread Starter
Advanced Member
 
dovercat's Avatar
 
Join Date: Apr 2008
Posts: 574
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 13
Quote:
Originally Posted by sotti View Post

You have to understand the tech at a lower level.

OK that is what you meant.

I assumed you were making a general comment on how displays should be set up. For most displays, setting the white level is done with the contrast control and does not lower the black level, so it reduces display contrast ratio. I think this is true of the CRT monitors the standards were originally written for, so I assumed you were referring to doing that. Most projectors do not have variable fixed irises and only have high/low lamp mode, so you are mostly referring to LCD displays with non-dynamic fluorescent backlights, or with dynamic contrast turned off.

Would you opt for the correct lower white level at the cost of a lower display contrast ratio, or only when it does not adversely affect contrast ratio?

With projectors, switching dynamic lamp mode off and using low lamp mode, or disabling the dynamic iris and fixing it at a smaller aperture, reduces display contrast ratio.
With LED zone lighting, since the LEDs can switch off or near off regardless, I would assume lowering the white level does not lower the black level, so it reduces contrast ratio.
dovercat is offline  
post #15 of 40 Old 05-27-2011, 02:30 PM
AVS Special Member
 
sotti's Avatar
 
Join Date: Aug 2004
Location: Seattle, WA
Posts: 6,585
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 12 Post(s)
Liked: 164
Quote:
Originally Posted by dovercat View Post

OK that is what you meant.
I assumed you were making a general comment on how displays should be set up. For most displays, setting the white level is done with the contrast control.

For most displays, lowering the white level does lower the black level.
The exception would be plasma.

Joel Barsotti
SpectraCal
CalMAN Lead Developer
sotti is offline  
post #16 of 40 Old 05-27-2011, 09:22 PM
AVS Special Member
 
Mr.D's Avatar
 
Join Date: Dec 2001
Location: UK
Posts: 3,307
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 15
Quote:
Originally Posted by sotti View Post

LCD (CCFL and LED) and projectors (front and back) are bulb-based; you set the white level by turning the backlight, lamp setting, or iris up and down, not with the contrast control. You don't lose digital steps, you just meter the output.


I'm aware of how LCD works. I can only tell you that locking the white level to something a lot lower than the maximum the display can reach increases banding.

Letting it scale to max and then using an offboard LUT (with much more precision... properly transparent) to correct and target a standardised white point results in less banding, but is still usually too much of a precision loss on the display.

The only thing that works is to sacrifice the rigid white point standard, let it scale to the maximum the hardware will manage, and correct everything else with the offboard LUT. Maximise precision at the screen.

In practice, having the screen optimised in precision terms is more important than hitting the standard's defined peak output target. The accurate output tends to be completely unnoticeable visually as well as technically... whilst the precision loss is immediately a problem.

As to why the backlight should impact precision when it's just a light behind the panel... maybe it's not entirely independent of the image engine in the display. I can tell you I have had this behaviour on LCDs from Dell, Apple, HP, and various others... for years.

For example, turn the backlight down on an Apple display and it will bring the RGB channels out of clip... spectral properties of the light panel? I don't know. All I can tell you is how a lot of them behave.

digital film janitor
Mr.D is offline  
post #17 of 40 Old 05-27-2011, 09:31 PM
AVS Special Member
 
sotti's Avatar
 
Join Date: Aug 2004
Location: Seattle, WA
Posts: 6,585
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 12 Post(s)
Liked: 164
Quote:
Originally Posted by Mr.D View Post

For example turn the backlight down on an apple display and it will bring the RGB channels out of clip...spectral properties of the light panel???? I don't know. All I can tell you is how a lot of them behave.

I would assert that any LCD that shows increased banding isn't actually turning down the backlight.

The variability with all the different implementations of controls makes a universal answer impossible.

I can tell you that on the Dell U2410 and Dell U2408 I have, I do not see increased banding when adjusting the backlight.

Joel Barsotti
SpectraCal
CalMAN Lead Developer
sotti is offline  
post #18 of 40 Old 05-28-2011, 03:08 PM
ADU
AVS Special Member
 
ADU's Avatar
 
Join Date: Apr 2003
Posts: 6,135
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 51 Post(s)
Liked: 128
Might also have something to do with ambient light sensors or dynamic contrast on the display. Disabling those features, or switching to a different picture mode without D/C might make a difference (or not).

ADU
ADU is offline  
post #19 of 40 Old 05-28-2011, 04:28 PM
ADU
AVS Special Member
 
ADU's Avatar
 
Join Date: Apr 2003
Posts: 6,135
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 51 Post(s)
Liked: 128
Quote:
Originally Posted by Mr.D View Post

I'm not so sure I would describe a plasma as inherently linear. The response of the phosphor as a chemical stimulus response is unlikely to be linear, I would imagine.

Digital display engines themselves are linear; LCD pixels behave in a linear fashion, and the increase in bit depth is necessary to apply LUTs (curves/gammas) internally in the hardware to produce an end display response that is compatible with gamma-encoded video.

I've always heard that LCDs have an irregular response. From wikipedia...

Quote:


In LCDs such as those on laptop computers, the relation between the signal voltage VS and the intensity I is very nonlinear and cannot be described with gamma value. However, such displays apply a correction onto the signal voltage in order to approximately get a standard γ = 2.5 behavior.

However, looking around on the web, it appears the electro-optical response curve of an LCD is roughly sigmoidal in shape. The twisted nematic (TN) panels are more linear (dashed line below), while the super-twisted (STN) panels are more S-shaped (solid line below).



[add'l text retracted]

ADU
ADU is offline  
post #20 of 40 Old 05-29-2011, 12:47 AM
AVS Special Member
 
Mr.D's Avatar
 
Join Date: Dec 2001
Location: UK
Posts: 3,307
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 15
Quote:
Originally Posted by ADU View Post
Might also have something to do with ambient light sensors or dynamic contrast on the display. Disabling those features, or switching to a different picture mode without D/C might make a difference (or not).
These are film workstation displays in a totally dark environment. I've also done tests with complementary APL patterns to mitigate dynamic contrast mechanisms (although I don't believe these particular displays implement it).

Also, apart from the visual impact of the precision loss being somewhat obvious, I use about three different measurement systems to appraise the displays, some of which have confidence tests which will generally fail the display due to precision issues (levels unable to be read) if you attempt to target a peak output lower than the display will natively produce.

In fact, one place I walked into (color pipeline is not my main role, it's just something I can do... quite unusual, as I'm in a senior creative role) had every single Apple Cinema Display failing calibration, and all I needed to do was bring the white level out of clip by lowering the backlight (Apples have no other picture controls) and not target the peak output of a given standard (film and Rec. 709 in this case). In fact, with a Cinema Display I'm pretty good at visually setting the backlight to get it out of clip while still being high enough not to induce precision issues.

If it were just a simple backlight mechanism, I don't see how it could induce clipping and "crushing".

digital film janitor
Mr.D is offline  
post #21 of 40 Old 05-29-2011, 12:59 AM
AVS Special Member
 
Mr.D's Avatar
 
Join Date: Dec 2001
Location: UK
Posts: 3,307
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 15
Quote:
Originally Posted by ADU View Post

However, looking around on the web, it appears the electro-optical response curve of an LCD is roughly sigmoidal in shape. The twisted nematic (TN) panels are more linear (dashed line below), while the super-twisted (STN) panels are more S-shaped (solid line below).
I would say both those curves are far closer to "linear" than I would have imagined a real-world LCD would manage, even with the LUTing turned off!

I quite often have to calibrate LCDs to be truly linear (usually done with offboard LUTing rather than the display hardware itself). This is mainly to undo the LUTing the display itself does.

Quote:
Originally Posted by ADU View Post

Re plasma... my understanding is that the short bursts of light they use to generate different levels of shading roughly follow Stevens' power law for a point source briefly flashed, which is 1.0 (or linear). Still looking for an example of a plasma EO curve to confirm this.
I find plasmas more problematic in terms of output behaviour than just about any other display. Most (all) have dynamic contrast mechanisms; again, complementary APL patterns can mitigate this (see John Adcock's Upsilon Mixer). This is ironic, as I do find a good plasma kills pretty much any other direct-view display stone dead in visually pleasing picture terms.

I must say, though, with the dithering, precision and DC issues, not once have I ever felt plasmas were close to linear. In the real world, we see but through a glass darkly when it comes to figuring out what is actually happening inside most displays.

digital film janitor
Mr.D is offline  
post #22 of 40 Old 05-29-2011, 01:22 PM
ADU
AVS Special Member
 
ADU's Avatar
 
Join Date: Apr 2003
Posts: 6,135
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 51 Post(s)
Liked: 128
Quote:
Originally Posted by Mr.D View Post

I would say both those curves are far closer to "linear" than I would have imagined a real-world LCD would manage, even with the LUTing turned off!

I quite often have to calibrate LCDs to be truly linear (usually done with offboard LUTing rather than the display hardware itself). This is mainly to undo the LUTing the display itself does.

Interesting.

From the other sites I looked at, I believe that LCD voltage-to-intensity graph does approximately represent the inherent (to borrow your word) physical response of the liquid crystal panel before any corrective processing/lutting is added into the mix.

Some of the other graphs I looked at were a bit less regular and less linear toward the bottom of the curve, but they all pretty much had that roughly sigmoid shape, oddly reminiscent of a Hurter-Driffield curve. Perhaps the curved "toe" near the bottom of the LCD curve may make them a little less problematic in terms of shading in that region than plasma.

I retracted my earlier comment, btw, about plasma following Stevens' power law for a briefly flashed point source. After thinking that over some more, I don't think that was a correct statement because it would mean that the physical display codes on the plasma panel would be perceptually uniform.

I'm (fairly) sure though that the inherent "voltage-to-intensity" characteristics* of a plasma panel are linear (or pretty close to it), and that the physical display codes/steps are perceptually non-uniform... hence the need for higher bit depths in the panel's physical color coding to keep banding/contouring and other unpleasant shading artifacts from becoming too noticeable at the lower luminances. Maybe you can prove me wrong on that though.

The non-linearity in CRTs btw is not a function of the phosphors. It's a result of the electron guns. So there's no particular reason to expect that a plasma display would have the same type of physical response curve as a CRT.

From A Technical Introduction to Digital Video:

Quote:


The nonlinearity in the voltage-to-intensity function of a CRT originates with the electrostatic interaction between the cathode and the grid that controls the current of the electron beam. Contrary to popular opinion, the CRT phosphors themselves are quite linear, at least up to an intensity of about eight-tenths of peak white at the onset of saturation.

[*Or the plasma equivalent, namely the physical method it uses to produce different shades/levels/steps of brightness by varying voltages.]
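As an illustrative sketch of that bit-depth argument (my own numbers, not from the quoted text): compare the relative luminance jump of one code step, at the same low luminance, under linear and gamma-2.2 panel coding.

```python
# Illustrative sketch: why linear panel coding needs more bits.
# Codes are assumed to be spaced as (code/n)**gamma, an idealisation.
def rel_step_at(Y, bits, gamma):
    """Relative luminance change of one code step, evaluated at the
    code nearest to luminance Y (peak white = 1.0)."""
    n = 2 ** bits - 1
    code = round(n * Y ** (1 / gamma))
    lo = (code / n) ** gamma
    hi = ((code + 1) / n) ** gamma
    return (hi - lo) / lo

# At 1% of peak luminance, one 8-bit linear code step is a ~33% jump,
# well above the contrast-detection threshold; the same step under
# gamma-2.2 coding is only ~7%. Extra linear bits shrink the jump.
print(f"linear 8-bit    @ 1% luminance: {rel_step_at(0.01, 8, 1.0):.1%}")
print(f"gamma-2.2 8-bit @ 1% luminance: {rel_step_at(0.01, 8, 2.2):.1%}")
print(f"linear 12-bit   @ 1% luminance: {rel_step_at(0.01, 12, 1.0):.1%}")
```

This is the usual motivation for giving a linear-responding panel more physical bits than the 8-bit gamma-coded source: spent linearly, codes are far too coarse near black and wastefully fine near white.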

ADU
ADU is offline  
post #23 of 40 Old 06-01-2011, 07:43 PM
ADU
AVS Special Member
 
ADU's Avatar
 
Join Date: Apr 2003
Posts: 6,135
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 51 Post(s)
Liked: 128
Quote:
Originally Posted by dovercat View Post

Ok so you mean dithering is encoded in the video source to prevent banding.

Correct.

Dithering is automatic in most image processing and video authoring applications, so most of the time it works transparently without the user even being aware of it. In the examples posted above, some extra work had to be done to create the images without any dithering because the Gradient tool in Photoshop automatically applies dithering.

Most good IP and video apps will perform all their image processing at higher bit depths, or "in float" as Mr.D likes to say, meaning in floating point. The image is then resampled down to 8 bits of nonlinear information per color component for display, or when saving to a 24-bit* file. Intelligent dithering techniques are applied in the resampling process to produce the best approximation to the original higher bit depth or floating point image data.

(* "24-bits" (8-bits x 3 colors) in desktop imaging lingo is more or less the same as "8-bits" in video lingo, though the color palette in consumer video is slightly reduced from 256 steps/codes per color down to 220 studio-swing levels to leave some foot and head room in the data.)

Small sidenote: if you're authoring B&W content, it may be best to downsample your final FP/higher-bit-depth B&W data to a full 24-bit color palette, and leave it that way, rather than converting to an 8-bit B&W palette. If you downsample or convert to a true B&W palette, you lose many of the additional potential "virtual shades" that are available in a full-color dithered image to produce a smoother, more "accurate" version of your original B&W data. That may seem a little counter-intuitive, but if you run an eyedropper over the dithered gradients posted above, you'll notice that all three RGB color components are varied by small amounts in the image, even though it still appears B&W to your eye.
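The mechanics can be sketched in a few lines of NumPy (the ramp values here are made up for illustration; real authoring tools use more sophisticated noise shaping than plain TPDF dither):

```python
# Minimal sketch (illustrative values, not from any authoring tool) of
# dithered vs plain quantization of floating-point data to 8-bit codes.
import numpy as np

rng = np.random.default_rng(0)

# A shallow float ramp that spans only about four 8-bit code values.
ramp = np.linspace(0.100, 0.115, 4096)

# Plain rounding: the ramp collapses into a few flat bands.
banded = np.round(ramp * 255).astype(np.uint8)

# TPDF dither: add triangular noise of +/-1 code before rounding, so
# in-between shades are represented by mixing adjacent codes.
noise = rng.random(ramp.size) - rng.random(ramp.size)
dithered = np.clip(np.round(ramp * 255 + noise), 0, 255).astype(np.uint8)

# Per pixel the dithered version is noisier, but its local average
# tracks the original float ramp instead of stair-stepping.
box = np.ones(64) / 64
mid = slice(64, -64)  # ignore filter edge effects
err_banded = np.abs(np.convolve(banded / 255.0, box, "same")[mid] - ramp[mid]).mean()
err_dither = np.abs(np.convolve(dithered / 255.0, box, "same")[mid] - ramp[mid]).mean()
print("codes used:", np.unique(banded).size, "vs", np.unique(dithered).size)
print("smoothed error:", err_banded, "vs", err_dither)
```

Plain rounding collapses the 4096-sample ramp into a handful of flat bands, while the dithered version alternates between neighbouring codes so its local average follows the original float data, which is what the eye fuses into smooth "virtual shades".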


ADU
ADU is offline  
post #24 of 40 Old 06-02-2011, 04:31 PM
ADU
AVS Special Member
 
ADU's Avatar
 
Join Date: Apr 2003
Posts: 6,135
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 51 Post(s)
Liked: 128

You folks like difficult math problems, right? Hold onto your propellered beanies then, cuz here comes some...
 

Quote:
Originally Posted by dovercat View Post

Perceptual coding = gamma used to define the size of grey scale steps. So perceptually the steps are a similar size across the grey scale.

 

Basically, though I'd probably turn your statement around a bit and say that nonlinear encoding/decoding (aka gamma compression/expansion) is a form of perceptual coding.

If you want to split hairs, dithering could also be considered a form or aspect of "perceptual coding" in that it takes advantage of our eye's tendency to fuse small patches of colors together into new virtual shades*, allowing more colors to be represented with fewer bits; and it's also part of the overall perceptual coding scheme or strategy in consumer video which more broadly includes such things as nonlinear coding (aka gamma correction/compression), nonlinear decoding (aka "display gamma"), dithering, MPEG compression, chroma subsampling, and "picture rendering" (aka compensation for the surround effect).

 

 

(*Seurat, Monet and other impressionists/pointillists also use this phenomenon to nice effect in their paintings.)

 

The last item, picture rendering, or "compensation for the surround effect" is actually part of the nonlinear encoding/decoding process. Poynton does a fairly good job of explaining the concept here. (There's a particularly nice graph at the top of page 12 that gives an indication how well different display gammas do in terms of perceptual uniformity in relation to L*.) There are couple of points that are somewhat glossed over or not fully explored there though that I think are important or at least pertinent to a better understanding of this subject. (This also has relevance to your questions re artistic intent and the robustness of consumer video in different viewing conditions, or I probably wouldn't bother going into it here.)

To put it in a nutshell, "picture rendering" is the slight bit of nonlinearity that's left in the final image on the screen after all the other nonlinearities in the video pipeline are multiplied out. A simple example would be...

 

.50 camera encoding gamma * 2.4 display gamma = 1.2 final screen gamma.

 

The 1.2 screen gamma here is used to compensate for "the surround effect", which is basically a change in the perceived lightness of an image when it's viewed in a dim or dark surround.
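As a numerical sketch of that composition (pure power laws; real transfer functions such as Rec. 709's also have a linear segment near black):

```python
# The picture-rendering arithmetic above, written out numerically
# (idealised pure power-law model).
encode_gamma = 0.50    # camera encoding exponent
display_gamma = 2.4    # display decoding exponent

def screen_luminance(scene_luminance):
    """Scene luminance -> encoded signal -> displayed luminance."""
    signal = scene_luminance ** encode_gamma
    return signal ** display_gamma

# The pipeline composes into one power function with exponent
# 0.5 * 2.4 = 1.2: mid-tones are darkened slightly, compensating
# for the lightness boost a dim surround produces.
print(screen_luminance(0.18))                  # ~0.128 instead of 0.18
print(0.18 ** (encode_gamma * display_gamma))  # same composition
```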

Picture rendering is one of the other tools in the video calibration toolkit that can be used to help preserve the "fidelity" or "integrity" of video content in different viewing conditions. It allows an image to be adapted to different surround lighting conditions, while still maintaining reasonably good perceptual similarity to the original mastered image. The farther away you get from the conditions in the mastering environment however, the less well the original "artistic intent" will probably hold up. In a knowledgeable calibrator's hands though it can work pretty well.

All of that you may already know. But here's what you may not know (and what Poynton doesn't really delve into too deeply in his proposal). We'll title this section...

How Our Current Home Video Paradigm Is Somewhat Perceptually Flawed or...
How Perceptual Uniformity in Video Could Be Improved... But Probably Won't Be, Cuz It Just Ain't Practical at the Mo'

As alluded to above, the "surround effect" is essentially a change in the magnitude or nonlinearity of lightness perception.

In the PDF above, Poynton estimates the average (or "best fit") exponent for L* (lightness) as ~.42. L* represents perceived lightness in an average surround. It corresponds approximately with Munsell's value scale, where a perceptually middle gray has 18% reflectance. Note the similarity in exponent between middle gray on the Munsell scale and Poynton's .42 estimate.

 

.18 reflectance ^ .4042 = .50 (or 1/2) "lightness"

 

Most of the experts (including Stevens) agree that lightness perception in an average surround falls between a cubed and squared root. So for simplicity's sake, what we'll do here is use the geometric mean of those two roots (which falls nicely in the middle of Munsell, Stevens and Poynton's estimates)...

 

(1/3 * 1/2) ^ 1/2 = .4082

 

This value is almost exactly 1/2.45. (Technically it's 1/2.44948974278317809819728407470589.)

IMO, we can't really come to a more accurate estimate of lightness in an average surround than this without more extensive research and investigation. This value is also very close to the implicit exponent at L* 50, which is .4097 (or ~1/2.44).

Assuming these values are more or less in the ballpark, then in a system that uses simple power-law functions, perceptual uniformity should be best achieved in an average surround by encoding with an ~.4082 exponent and viewing on a display with roughly the inverse of that value, or ~2.45 gamma.

 

.4082 encoding gamma * 2.45 display gamma = 1.0 screen gamma

 

In the dim to dark surround typical of home viewing though the average exponent of perceived lightness drops somewhat closer to a cubed root. (See Stevens, Bartleson & Breneman, etc.) IOW, our perception of lightness gets less linear the darker the surround is, and the brighter the image is by comparison. You can see this influence of the surround in L* as well.
 

Y/Yn (Relative Luminance)   L* ("% Lightness" in Average Surround)   Equivalent Exponent
.0113                       10                                       .5136
.0442                       25                                       .4444
.1842                       50 (Middle Gray in Average Surround)     .4097
.4828                       75                                       .3951
.7630                       90                                       .3895
.8762                       95                                       .3881

 

The brighter the L* shade is in relation to an average surround, the lower (ie less linear) the equivalent exponent becomes.
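As a sketch, the table can be reproduced from the CIE 1976 L* definition (cube-root branch only, valid for L* above ~8):

```python
# Sketch reproducing the table above from the CIE 1976 definition:
# L* = 116*(Y/Yn)**(1/3) - 16. Invert it for Y/Yn, then solve
# (Y/Yn)**x = L*/100 for the "equivalent exponent" x.
import math

def y_from_lstar(lstar):
    """Relative luminance Y/Yn for a given L* (cube-root branch)."""
    return ((lstar + 16) / 116) ** 3

for lstar in (10, 25, 50, 75, 90, 95):
    y = y_from_lstar(lstar)
    exponent = math.log(lstar / 100) / math.log(y)
    print(f"L* {lstar:2d}:  Y/Yn = {y:.4f}   equivalent exponent = {exponent:.4f}")
```

The L* 10 row comes out as ~.5132 here rather than .5136, presumably because the table's exponent was computed from the already-rounded .0113 figure; the other rows match to four places.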

To achieve better perceptual uniformity in a dim to dark surround, ideally you'd want your image/video to have an encoded gamma that's somewhat closer to a cubed root (ie 1/3 or .3333), and your display gamma closer to 3.0. There are a couple problems with this arrangement though.

Problem #1

No "picture rendering". The steps between codes may be more perceptually uniform in a darker surround with this setup, but the image still appears too bright because the final screen gamma is only 1.0.

 

 

1/3 encoding gamma * 3.0 display gamma = 1.0 screen gamma

 

That's easily rectified though by simply using the encoding exponent for an average surround of ~.4082 in place of a cubed root.

 

.4082 encoding gamma * 3.0 display gamma = 1.2247 screen gamma

 

(Bartleson & Breneman would probably like that ~1.225 screen figure because it's the geometric mean between their suggested values of 1.0 to 1.5 for image adjustment in "bright" and "dark" surrounds. )

In this setup with ~.41 encoding and ~3.0 display gamma, both perceptual uniformity and picture rendering might be closer to the optimum range for a darker surround, based on what's generally understood on these subjects.

Problem #2

Most displays currently in use have a much lower gamma than 3.0, and are more in the ballpark of the 2.45 gamma suggested above for an average surround. With ~.41 encoding, that basically takes us back to square one and an ~1.0 final screen gamma, which means no picture rendering.

Enter Rec. 709 with its ~.50 encoding to save the day!!

 

.50 encoding gamma * 2.45 display gamma = 1.225 screen gamma

 

In this setup, picture rendering is accomplished with only a slight hit to perceptual uniformity on most current displays.

The moral to this story is that our current system of ~.50 encoding and ~2.20 to 2.50 display gamma may not necessarily be ideal, but it's probably about the best that can be hoped for, under the current circumstances, in terms of both perceptual uniformity and picture rendering. And it probably does a pretty decent job in most situations.

If you haven't already guessed what my recommendation for reference display gamma would be in this current ~.50 Rec. 709 encoding paradigm, it's the square root of six or 2.45, for the reasons outlined above (which is fairly close to Poynton's suggested ~2.4).

The advantage to using something closer to 2.45 is that it makes the math a little easier (from an engineering standpoint anyway), and it offers some clearer points of reference (e.g. Stevens' cubed & square roots, Bartleson & Breneman's 1.0 to 1.50 ratio, and the exponent at L* 50) than the "best fit" approach suggested by some other experts. And it's more consistent with Munsell's classic 18% gray. On a CRT with 2.45 gamma, for example...

 

.50 source voltage ^ 2.45 ∝ .1831 intensity

 

Or more generally...

 

.50 stimulus ^ 2.45 = .1831 relative luminance
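These figures are easy to verify numerically (just a check of the arithmetic above, nothing display-specific):

```python
# Numerical check of the sqrt(6) figures quoted above.
import math

display_gamma = math.sqrt(6)           # 2.449489742783178...
encode_gamma = (1/3 * 1/2) ** 0.5      # geometric mean of the cubed-
                                       # and square-root estimates
print(display_gamma)                   # ~2.4495
print(encode_gamma)                    # ~0.4082, and 1/encode = display
print(0.50 ** display_gamma)           # ~0.1831, close to Munsell 18% gray
```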

 

Research is ongoing in all of these areas though, so it's quite possible that the models and numbers above might need some tweaking to better fit results "in the field" as they say.

While working this up, I also ran across this white paper. It relates more to desktop imaging than video, but thought some here might find it an interesting read. (The author btw repeatedly makes the "error" of referring to L* as "luminance". Since L* is a perceptual scale, it more correctly refers to "lightness". Given that he's talking about encoding with L* though, his use of the term "luminance" is most likely shorthand for the "nonlinearly encoded representation of luminance".)


ADU
ADU is offline  
post #25 of 40 Old 06-03-2011, 03:23 AM - Thread Starter
Advanced Member
 
dovercat's Avatar
 
Join Date: Apr 2008
Posts: 574
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 13
Thanks for the post. I will reread it and read the white paper when I have a bit more time.

I have read that one of the problems with gamma is that it also affects display color.

I think dark surround increases colorfulness, chroma and saturation as well as lightness. Art galleries use the effect to enhance paintings by using tightly vignetted spotlights in otherwise dimly lit rooms.

My limited understanding of video gamma is that it is increased to compensate for the dark surround effect (which makes black and white look lighter, with more effect on black, so lowering perceived contrast). But increasing gamma also affects color saturation and hue. It increases color saturation, since the lowest of the RGB components determines the amount of white (R+G+B) de-saturating the color. Hue is determined by the ratio between the colors in the mix; changing gamma alters those ratios, since the individual colors sit at different points on the gamma curve, and tints the hue towards the color largest in the mix.
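A hypothetical RGB triplet (my own made-up numbers, not from any source) illustrates the ratio shift:

```python
# Illustration with a hypothetical triplet: raising gamma on an RGB mix
# increases saturation (the common "white" component shrinks) and skews
# the channel ratios toward the dominant primary.
rgb = (0.80, 0.40, 0.20)  # a desaturated orange, linear light

for gamma in (1.0, 1.2, 1.5):
    r, g, b = (c ** gamma for c in rgb)
    # min(r, g, b) is the neutral (white) part of the mix; the R:B
    # ratio shows the hue-relevant shift between channels.
    print(f"gamma {gamma}: white component = {min(r, g, b):.3f}, "
          f"R:B ratio = {r / b:.2f}")
```

At gamma 1.0 the R:B ratio is 4.0; at gamma 1.5 it grows to 8.0, exactly the ratio raised to the gamma power, which is the hue tint toward the dominant primary described above.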

Since perception of color-ratio (hue) errors decreases when the color mix is de-saturated, maybe it makes sense that mastering is done in dim surroundings while most consumer viewing is done in brighter surroundings, as lowering gamma or decreasing the dark surround effect is less likely to result in the color hues looking wrong.

I have read that separating color and gamma, especially the color hue mix ratio, would enable displays to retain better artistic integrity when used with a different gamma than that used during mastering.
dovercat is offline  
post #26 of 40 Old 06-04-2011, 12:55 PM
ADU
AVS Special Member
 
ADU's Avatar
 
Join Date: Apr 2003
Posts: 6,135
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 51 Post(s)
Liked: 128
Quote:
Originally Posted by dovercat View Post

Thanks for the post. I will reread it and read the white paper when I have a bit more time.

My pleasure. I'm still doing a bit of fine-tuning, so the longer you wait, the more intelligible some of my ramblings there may be. I think most of the major wrinkles have been ironed out though.

Since you expressed some concern re the robustness of consumer video especially in the darker shades, I wanted to draw your attention in particular to the graph at the top of page 12 in this PDF, which compares the perceptual performance of different display gammas using L*. There are a couple things to notice there. First, the relatively good performance (as indicated by the straightness of the curves) of gammas in the 1.8 to 2.6 range typical of off-the-shelf consumer displays.

The other interesting feature is the slight bend (or "hockey stick shape" as he describes it) at the bottom of those curves. That bend indicates a slightly denser (rather than sparser) perceptual distribution of video codes in that region, making it somewhat less susceptible to source-based banding or contouring issues than the rest of the luminance range, which should reinforce my point that the problems some people see in darker shades on their displays are more likely display technology-related than video source-related.

You can also get an idea of the challenge that display mfrs. face in terms of the processing and physical display coding on linear technology displays from the 1.0 curve.

As you might surmise from my previous post above though, I have some "issues" with using L* as a yardstick (not hockey stick ) for perceptual performance in the dim to dark surround typical of home viewing, because L* is really designed more to represent perception of lightness in an average surround. If the graph was redrawn using a model of lightness similar to L* but with an average exponent closer to a cubed root (ie something more representative of perception in a darker surround), then I suspect that the higher display gammas would begin to appear even more "ideal" in terms of perceptual uniformity than some of the lower display gammas.

ADU
ADU is offline  
post #27 of 40 Old 06-04-2011, 01:04 PM
ADU
AVS Special Member
 
ADU's Avatar
 
Join Date: Apr 2003
Posts: 6,135
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 51 Post(s)
Liked: 128
Quote:
Originally Posted by dovercat View Post

I have read that one of the problems with gamma is that it also affects display color.

I think dark surround increases colorfulness, chroma and saturation as well as lightness.

It's easy to get hung up on terms like these. Depending on the model of color you're using, these terms can refer to different things. In generic layman's lingo though, I'd say that colors appear "brighter", but not necessarily "richer" or "more deeply saturated" when looking at an image with 1.0 screen gamma in a darker surround (quite the opposite in fact). It probably depends on your exact definition of things like "saturation" though.

Quote:


Art galleries use the effect to enhance paintings by using tightly vignetted spotlights in otherwise dimly lit rooms.

If you increase the gamma on a photo and look at it in a dark surround, the image will have more "pop" and a greater sense of depth to it. It's basically the same principle as "picture rendering" in video.

Quote:


My limited understanding of video gamma is that it is increased to compensate for the dark surround effect (which makes black and white look lighter, with more effect on black, so lowering perceived contrast). But increasing gamma also affects color saturation and hue. It increases color saturation, since the lowest of the RGB components determines the amount of white (R+G+B) de-saturating the color. Hue is determined by the ratio between the colors in the mix; changing gamma alters those ratios, since the individual colors sit at different points on the gamma curve, and tints the hue towards the color largest in the mix.

Sounds like you have a better understanding of gamma than I do.

I believe you're correct that there is some color distortion associated with gamma compression/expansion. That should be taken into account in the video mastering/color correction process though, so it's not something I worry about too much. (The way I see it, the cinematographer and video colorist are pretty much gonna do what they want to the color anyway.)

However, the increase in "saturation" that's associated with a greater than 1.0 screen gamma is a desirable thing IMO, which compensates for the loss in perceived richness or depth of saturation that occurs in a dark surround.

Color distortion, and increased camera sensor noise for that matter could also be considerations in keeping the video encoding/display paradigm approximately where it currently is, rather than going to something more extreme. So you could potentially also add those as problems #3 and #4 to my 3.0 display gamma scenario above.

Quote:


Since perception of color-ratio (hue) errors decreases when the color mix is de-saturated, maybe it makes sense that mastering is done in dim surroundings while most consumer viewing is done in brighter surroundings, as lowering gamma or decreasing the dark surround effect is less likely to result in the color hues looking wrong.

I think the objective of the video mastering environment is to mitigate the influence of the surroundings on the final product as much as possible. IOW, to make the final product "environment neutral". While that sounds like a good idea in theory, I'm not sure it always works so well in practice. (See my remarks here for an example.)

Quote:


I have read separating color and gamma, especially color hue mix ratio, would enable displays to retain better artistic integrity when being used with different gamma than that used during mastering.

Interesting idea.

ADU
ADU is offline  
post #28 of 40 Old 06-06-2011, 12:46 AM
AVS Special Member
 
Mr.D's Avatar
 
Join Date: Dec 2001
Location: UK
Posts: 3,307
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 15
Seems to me guys what you are talking about is a linear delivery format.

The whole reason we have to deal with gammas and curves and the like nowadays is purely a mechanical legacy hangover. Everything could be linear: linear capture, linear display. And it probably will be in the future, as there is really no practical reason why delivery couldn't go this way.

Film and other inherently non-linear material can be linearised before delivery (it pretty much is in post these days anyway). So if you imagine everything linear, then it's just a case of baking in the required rendering intent as a LUT (again, this is pretty much what happens these days).

The end display is what is holding this back, and ironically most displays these days are closer to linear than not, but use internal LUTs to make the end response behave like an old video display.
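As a rough illustration of the kind of 1-D LUT pass being described (my own sketch, assuming a simple 2.4 power-law response rather than any particular display's actual curve):

```python
# A minimal sketch (my own, not Mr.D's actual pipeline) of a 1-D LUT
# that linearises gamma-encoded 8-bit code values, assuming a simple
# 2.4 power-law display response.

GAMMA = 2.4

# Precompute the LUT once: 256 code values -> linear light (0.0-1.0).
linearize_lut = [(code / 255.0) ** GAMMA for code in range(256)]

def to_linear(codes):
    """Apply the LUT to a sequence of 8-bit code values."""
    return [linearize_lut[c] for c in codes]

print(to_linear([0, 128, 255]))  # black, mid-grey, peak white
```

A player serving a legacy video display would simply apply the inverse LUT on the way out, which is the "change the rendering intent at source" idea mentioned below.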

Mark my words....in the future everything will be linear.

digital film janitor
Mr.D is offline  
post #29 of 40 Old 06-07-2011, 10:46 AM
ADU
AVS Special Member
 
ADU's Avatar
 
Join Date: Apr 2003
Posts: 6,135
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 51 Post(s)
Liked: 128
Quote:
Originally Posted by Mr.D View Post

...in the future everything will be linear.

Based on how dark some transfers appear these days, there may be some folks who believe we've already transitioned to a linear system. I can assure those folks we haven't though. (All you have to do is look at the gamma graphs posted in this forum to see that.) Nor is it in our interest to do so in the near future from either a PQ or delivery standpoint. A linear system would only compound most of the problems in both those areas.
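To put a number on why linear delivery hurts at 8 bits (my own back-of-envelope, not from any spec): a linear encode spaces its code values evenly in light, so its first step above black is already about 0.4% of peak, while a 2.2 gamma encode spends roughly twenty codes covering that same shadow range.

```python
# Back-of-envelope comparison (my own numbers, not from the thread):
# linear-light values of the darkest 8-bit code values in a linear
# encode vs. a gamma encode decoded with a 2.2 power.

DECODE_GAMMA = 2.2

def linear_light(code, linear_encode):
    """Linear light (0-1) represented by an 8-bit code value."""
    v = code / 255.0
    return v if linear_encode else v ** DECODE_GAMMA

# First step above black in each scheme:
lin1 = linear_light(1, True)    # linear encode: ~0.39% of peak already
gam1 = linear_light(1, False)   # gamma encode: orders of magnitude darker

# How many gamma-encoded codes sit below the linear encode's first step?
codes_below = sum(1 for c in range(1, 256)
                  if linear_light(c, False) < linear_light(1, True))

print(lin1, gam1, codes_below)
```

On a high-contrast display, that first linear step is a visible jump out of black, which is exactly the kind of shading problem an 8-bit linear system would compound.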

Re the color distortion thingie though... I haven't really delved into that issue as deeply (so take anything I have to say on it with a grain of salt), but for the few folks who may have some concerns about it, I suspect there are probably ways of dealing with it now on the front end, via linear capture and scanning systems, that would not (or do not) require any changes to our nonlinear delivery system on the back end.

Judging by the content that I've seen lately though, most directors and cinematographers have little or no interest in accurate color reproduction anyway. I can hardly remember the last new film I saw that didn't have some major color tweakage. The exception would probably be nature shows made for PBS or Imax.

Both are valid choices artistically. But I think folks are missing out on the opportunities that new technology offers by over-tweaking their color sometimes. What's the point in spending the time and effort to accurately adjust color on your TV if the content has most of the color filtered out of it anyway, leaving just an orange and cyan picture to watch?! Makes no sense to me... But I digress.

As far as upgrading our current system is concerned, I think there'd be more benefits (more bang to the bit, if you like) in going to something like a 10-bit or 12-bit wide gamut nonlinear system. The easiest upgrade path right now though would be lossless or less-lossy systems of delivery, because the weakest link in our current 8-bit system is probably the high level of data compression (MPEG) used by most providers and broadcasters. That's the first problem that needs to be tackled (and a linear system would only multiply the difficulties in that area).

Going back to the distortion and sensor noise issues... There are precedents for using encoding values as low as the ~.41 value suggested above as being more "perceptually ideal". Encodes in that range were fairly routine in the "bad ole days" of NTSC. Given current viewing habits though, and the ~2.2 to 2.5 range of most calibrated displays, there probably isn't much benefit to using encoding values that low right now.

There is a potential downside though to encoding with values much higher than ~.50. The higher you go in terms of encoding, the worse the perceptual performance gets, and the greater chance there is of introducing various shading issues. Your picture may look "more dynamic" on some of the insanely bright displays out there now, but you'll sacrifice quality in other areas to achieve that. IMO, there is really nothing to gain in terms of PQ for most viewers by going higher than the implicit ~.50 in Rec. 709. If folks really need that add'l dynamism in their images, then they should be adding it via gamma controls on their own displays rather than bringing the quality of video down for everybody else.
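For reference, here is the Rec. 709 OETF (the camera-side encoding curve) as published in the standard: a 0.45 power segment with a linear toe near black, whose overall effect approximates a pure exponent of roughly 0.5.

```python
# The Rec. 709 OETF as specified: V = 4.5*L for L < 0.018,
# else V = 1.099*L^0.45 - 0.099, with L as scene-linear light (0-1).

def rec709_oetf(L):
    """Encode scene-linear light L (0-1) to a Rec. 709 signal value."""
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

print(round(rec709_oetf(0.18), 3))  # mid-grey encodes to ~0.41
print(round(rec709_oetf(1.0), 3))   # peak white encodes to 1.0
```

Note that mid-grey landing at ~0.41 is in the same neighborhood as the "~.41" encoding value discussed above, even though the nominal power segment is 0.45.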

I don't expect anyone in the industry to heed my advice on this though, so expect more dark, dingy, muddy-looking transfers... especially from mastering facilities that continue to cling to the notion that "good ole 2.2" is best for their displays.

In terms of recent live-action films, one of the better transfers I've seen lately is Babylon A.D. There's some color filtering evident on that as well, but it's not as heavy-handed, monochromatic, or dark & muddy-looking (generally) as some other stuff I've seen lately. (Not sayin it's a great movie, but it's a decent watch for color junkies.)

ADU
ADU is offline  
post #30 of 40 Old 06-07-2011, 10:13 PM
AVS Special Member
 
Mr.D's Avatar
 
Join Date: Dec 2001
Location: UK
Posts: 3,307
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 15
Quote:
Originally Posted by ADU View Post

Based on how dark some transfers appear these days, there may be some folks who believe we've already transitioned to a linear system. I can assure those folks we haven't though. (All you have to do is look at the gamma graphs posted in this forum to see that.) Nor is it in our interest to do so in the near future from either a PQ or delivery standpoint. A linear system would only compound most of the problems in both those areas.

90% of the throughput in a modern post production chain happens with natively linear or linearised material. There will be no practical difference in look: a linear image on a linear display will look exactly the same as the same image in video on an accurate video display. The advantage is in standardising the rendering intent and optimising the maths processing.

It's very rare these days that even video material is processed in its native colorspace; it's common practice to linearise it.

For legacy video displays you would be able to change the rendering intent at source and, for example, have a player apply a LUT to the outgoing material.

digital film janitor
Mr.D is offline  