Go-to Guide for Source Options - Page 2  

post #31 of 160
The trick here is that you are fiddling with both ends of the signal chain, and so you may be causing problems at one end that you are then trying to band-aid at the other end.

Normally for people with commercial players I'd recommend you keep the commercial player set at its factory defaults (except possibly for having to change 0 IRE vs 7.5 IRE for analog video output). The factory default settings for most modern DVD players really are pretty likely to produce the best signal that player is capable of -- particularly if you pick a "picture mode" on the player described as doing the LEAST to the image. This is quite unlike the case with TVs where the default factory settings are likely to be truly awful.

Then, trusting that your player is already doing the best it can, you now make ALL calibration adjustments at the display end.

What you are doing, on the other hand, is adjusting your CRT to show its full dynamic range and then separately adjusting your computer-based player to calibrate the signal from the DVD against that CRT range.

Just to give you an idea of what might go wrong here, suppose the 100% = "white" level on your CRT happens to be driving the CRT a bit too hard so that you are getting beam focus "blooming" and perhaps even modest levels of geometry distortion as the CRT's power supply tries to keep up. Or perhaps the CRT will start to show color shift if you drive the phosphors that hard for very long.

Now when you play Avia test charts via your computer, you will see such problems and will turn down the top end of the output range of the computer player so as to eliminate these problems -- effectively driving the CRT at less than the 100% you previously set up. But that means you have just compressed the dynamic range of the signal coming out of your player. That may in fact be the best way to image perfection, but another possibility is to adjust the top end of your CRT to a lower light output level and leave the player's end at a wider dynamic range, so that "white" from the player is a higher voltage but only drives the display at, say, 95% of its maximum light output.

Since neither your player nor display is dependably calibrated to begin with, you might need to play around to find which combo works best.

But on most TVs, for example, you would be very badly served if you set up the TV to show "white" from the DVD as the maximum possible light output level.

A "safer" starting point would be to set the CRT such that the 100% signal level generates an image which is just beyond the point where you perceive it as "gray". I.e., "white" but only just barely so -- as opposed to maximal white. This would be the equivalent on a home TV of lowering Contrast with the intention of watching the TV in a dimmed room.

Ideally you would need a video signal scope and a light sensor to do both ends of this calibration. First making sure your computer-based player is sending the best signal it can, with proper linearity of the gray steps, and then adjusting your CRT to accurately reflect that signal as steps of light output.

I don't know enough about the nature of your computer based setup to know if it has dependable defaults. But if it does, then you might also try leaving the player end at the default settings and doing all calibration adjustments at the display end, just as if you were using a commercial player.


Also note that if both your computer based player and your display are equally happy with analog video signals at 0 IRE and 7.5 IRE voltages, then after you calibrate you will be unable to distinguish on the basis of any test pattern or image exactly which voltage you ended up using. The images will look identical once properly calibrated each way.

In particular, any given test pattern off a DVD can't know what voltage standard you are using on the analog video cable. So a test chart that identifies "Black" as "0", for example, doesn't necessarily mean that the signal for black on the analog video cable is 0 IRE (voltage). It could just as easily be 7.5 IRE (voltage).

So when you say you have set up your component connections to use 0 IRE, you can only know this for sure if your test image generator is driven by the 0 vs 7.5 IRE selection, and thus changes according to which voltage you select for output -- which won't be the case for any DVD calibration disc.
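As an aside, the voltage arithmetic behind those IRE figures can be sketched in a couple of lines (assuming the common NTSC convention of 140 IRE peak-to-peak per 1 V, so 1 IRE is about 7.14 mV -- that convention is my assumption, not something stated in the posts above):

```python
# IRE-to-millivolt sketch, assuming the common NTSC convention:
# 140 IRE peak-to-peak (sync tip to peak white) spans 1 V,
# so 1 IRE = 1000/140 mV, roughly 7.14 mV.

MV_PER_IRE = 1000.0 / 140.0

def ire_to_mv(ire):
    """Convert an IRE level to millivolts above blanking."""
    return ire * MV_PER_IRE

# With a 0 IRE setup, black sits at 0 mV; with a 7.5 IRE setup,
# black is raised to about 53.6 mV. White is 100 IRE either way.
black_0_setup = ire_to_mv(0.0)     # 0.0 mV
black_75_setup = ire_to_mv(7.5)    # ~53.6 mV
white = ire_to_mv(100.0)           # ~714.3 mV
```

This is only meant to show why a test disc can't tell you which setup you're using: the disc encodes "black", and the player's output stage decides which of those two voltages that becomes.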

The only problem to watch out for is that not all players and not all displays work equally well with both voltages. The player or display might clip BTB data at one voltage and not the other for example, or the display might not have enough calibration range to be adjusted properly at one voltage when it works fine with the other.

The most important thing is to pick an output voltage standard that allows you to get the display into proper calibration at both ends of the gray scale -- both blacks and whites. If only one voltage level allows for that, then that's the one you've got to use unless you can find a setting in your display that lets you use the other. Even if that means you lose BTB data.

Having found a voltage that lets you calibrate blacks and whites, then you should also check that BTB data and Peak White data is getting through properly. If it isn't there may be some setting on either the player or display which enables it -- without forcing you to use a voltage level that won't calibrate.


The black levels test chart in the THX Optimizer on some commercial DVDs is OK to use to double check that BTB data is being passed, but it shouldn't be used for actually setting calibration levels because it is not as dependably accurate as what you'll get off of Avia or DVE. On some commercial DVDs the THX images have been massaged as part of the final editing process that created the DVD, and thus inaccurate levels are introduced.
post #32 of 160
I realize I'm sort of thread jacking here since this isn't the HTPC forum, but hey, I got here from there so perhaps others have, so perhaps this will help them.

I think the boys at ATI have set up the component output pretty well. My calibration procedure was as follows:

1. Video card set to defaults
2. Set contrast and brightness using PC test patterns.
3. Check contrast and brightness of AVIA.

As it turns out, I didn't need to change anything in AVIA. After setting up using the PC tests, I was bang on.

I see what you're saying with respect to not knowing whether or not I'm actually sending 0 IRE or 7.5 IRE from the computer. I haven't actually connected a scope to the HTPC output, so I could in fact be sending either. Looks like I'm going to have to bring a scope home one of these days and check it out.

I've heard that the THX optimizers can be "off" so I haven't ever used them before. I only just checked this one out for the BTB test.

One more question: I believe the IRE values are for RGB. What are the corresponding voltage levels for YPbPr?
post #33 of 160
Yes, that sounds OK. If you are going to spend money on test equipment, you'd probably be better off getting a light sensor first so that you can check gray scales precisely and verify things like gamma correction. [I believe Ovation has a "such a deal" on a combo of Avia Pro and a good light sensor if you need to lighten your wallet.]

Also be aware that all CRT based displays drift in settings and geometry both as they warm up and as they age. Give the CRT a good 20-30 minutes warm up time before you adjust it. You might also want to get in the habit of double checking your calibration periodically (at least once a month) until you get a feel for how rapidly your particular display drifts. Re-check image geometry and color convergence at the same time. Critical studio monitors are always re-adjusted on a regular schedule for just this reason.

I don't think there's a variation in voltage standards between the two analog formats, but I'm not sure on that one. Chris?
post #34 of 160
I wasn't planning on buying a scope. We've got plenty around the office that I can borrow for the weekend. I was eyeballing some light sensors just the other week. Maybe next Christmas after the reno / new TV.

Only once a month? My wife/kids would be happy if I was in service mode that infrequently. :)

I'm one of those guys who cringes when my set is out of convergence, so it's checked pretty regularly. Up here in the great white north (-40C today with the windchill), my basement gets pretty cold overnight, so I get a fair amount of drift in the winter from thermal cycling of the set, and I've gotten pretty good at reconverging it.
post #35 of 160
Sorry to be so blunt, but unless you follow the directions and have a player that acts as I have described, the demonstration will not work: you're adding variables to the comparison and therefore will learn nothing from the experiment.

Phat Phreddy
>>>"If I am still misunderstanding then just say so, but if a display is perfectly calibrated, would all those values not be outside of the dynamic range of the display.. E.g., all that below black would be 'clipped' at the display ??

That's why I don't understand the advantage of below black in the outputted signal.. If I am supposed to tune black so that below black is not visible, this information is then clipped at the display ??"

Not "clipped" at the display, just not "visible" to your eyes. The signal is still there and is part of the entire image processing chain in order to "support" what you are supposed to see.

The side effects of clipping signals in digital video are more or less the same as in digital audio. When an audio signal clips in digital, it's usually a rather obvious and detrimental sound. Besides the frequencies clipping, those around them feel the shockwave that is digital clipping. Same with digital video: when you clip a signal, the signals around it (above and below) are negatively affected as well -- think of the "pebble dropped in water" scenario. This is why clipping black doesn't just mean that digital signals from 1 - 15 are lost. It also means that the signal above the clip (the darker and, to some extent, the lighter grays) feels the effect of that loss. This is why that scene I described looks different on a capable display between the unclipped vs the clipped image.

The range for human hearing is said to be 20 Hz - 20 kHz. Music goes below 20 Hz and the range of instruments/harmonics may extend as high as 100 kHz. Do you think you could hear the difference between these two presentations:

A) a full-range SACD or DVD-A recording of a classical symphony
B) the same recording, but with hard truncation of the frequency extremes at 40 Hz and 15 kHz?

I think if anyone actually takes the time to read through the topic post in this thread and the links that were provided, all questions should be answered. This FAQ was an effort to assemble the information in one place, rather than having it scattered around the forum. If that doesn't do it, then I have to say that Bob said it best -- "TRUST US ON THIS!".
post #36 of 160
Joe, I thought I had it pinned down more or less, but the audio analogy you presented actually confused me. So, please correct any mistakes in the following:

My SACD setup reproduces frequencies that are outside my hearing range but do exist in reality. Fortunately, the recording and playback systems were able to capture them and so the overall feeling I get from the music is closer to reality.

In the visual domain, however, our recording (and reproduction) media are not able to cover our range of vision. As a result, there is real-world information that does not make it to the film, and any BTB information has nothing to do with it. BTB, introduced further down the path, is just an engineering trick that serves to counter display-technology and image-processing artifacts. Moreover, the PC does not need such tricks, because it (usually) creates information rather than reproducing it.

If we ever make a film with a dynamic range outside our seeing range, and if the processing and display media are able to reproduce it, we will have no need for BTB information, because any artifacts will go undetected by our eyes.
post #37 of 160
Thread Starter 

Computers don't really follow the 0 IRE/7.5 IRE output standards in quite the same way. Guy covered this in a thread that I linked; here is an excerpt that may help overall:

Computer RGB puts black at RGB 0,0,0 and absolute max white at 255,255,255. No blacker than black footroom nor whiter than white headroom is provided. The output is set up such that black RGB 0,0,0 corresponds to 0 mV and white RGB 255,255,255 is at 700 mV. Unfortunately, actual video material also requires footroom and headroom data in the signal to be represented or else image information is lost. This need was provided for by allocating the bottom and top of the digital signal range to permit "blacker than black" and "whiter than white." Digital video encodes a wider dynamic range that goes from blacker than black to whiter than white. Black is at digital 16 and white is at digital 235. Blacker than black data is allocated digital 1 to 15 and whiter than white is 236 to 254.

This creates a problem when it comes time to implement how digital video should be represented on a computer video card. One could clip both ends of the digital signal range and then expand the remaining range to map black at RGB 0,0,0 and white at 255,255,255. This irretrievably clips video information and can create banding artifacts because a smaller range [16..235] is expanded to [0..255]. The value mapping isn't completely monotonic so banding can be induced. Another way to handle this would be to simply shift the digital data downward by 16 so no range rescaling is done, but this still cuts off blacker than black info and makes white on video much dimmer than white in computer video. The preferred solution for Media Center Edition PC's -- computers specifically designed for multimedia use -- is one that preserves video signal integrity over computer graphics. Black and white are kept at the values 16 and 235 and the MCE-qualified displays are adjusted to properly display digital 16 as black and digital 235 as white. This avoids banding issues and displays video at full range. Computer RGB is less accurately displayed (unless it is in studio RGB with black = 16,16,16 and white = 235,235,235), but since the MCE's primary function is to provide high-fidelity multimedia playback, the tradeoff is a reasonable one. Some older displays (esp. LCD panels) may lack the controls needed to adjust black level to make digital 16 true black. Displays with such limitations are not considered to be MCE compatible.

In HTPC's, the end user is free to make their own choices about how video signal levels are to be ranged and offset to fit within their video card output range. The degree of signal clipping and banding will vary depending on the user choices. The tradeoff between computer graphics and video graphics fidelity is certainly grounds for debate in HTPC's and no one right answer will satisfy all owners. For that reason it is not possible to state for HTPC's what mV is "correct" for black or white. It depends on the user goals and display capabilities. For MCE's the choice has already been set and both MCE and display manufacturers will very likely come to follow or allow for Microsoft's standard for the operating system. Black is at digital 16 and white is at digital 235.

If you really need to think of this in mV on an MCE....

Video Black = digital 16 = 16/255 * 700 mV = 43.9 mV
Video White = digital 235 = 235/255 * 700 mV = 645 mV

On such a system the Avia and Avia PRO 7.5 IRE labeled patterns would be at Video Black or 43.9 mV
and 100 IRE labeled patterns would be at Video White or 645 mV.

On an HTPC, your guess is as good as mine because the video signal scaling and offsets are not standardized. At any rate, I hope people adjust their HTPC and display COMBINATION to make black be displayed as black and white displayed as white.

Guy Kuo
Director - Imaging Science Foundation Research Lab
Video Test Design - Ovation Multimedia / Home of OpticONE Colorimeter, AVIA and Avia PRO
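The millivolt arithmetic in Guy's excerpt can be sketched directly (the function name is mine; the 0 mV / 700 mV full-range convention is the one he describes):

```python
# Map an 8-bit digital code to output millivolts, using the
# full-range convention from the excerpt above:
# code 0 -> 0 mV, code 255 -> 700 mV.

def code_to_mv(code):
    return code / 255.0 * 700.0

video_black = code_to_mv(16)   # ~43.9 mV (Video Black)
video_white = code_to_mv(235)  # ~645.1 mV (Video White)
```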
post #38 of 160
The fault is that you are, in essence, saying that the only signals that need to be recorded and reproduced are the ones that we can actually see. The fact is, recording and reproducing signals below reference black and above reference white support and serve as a safety net, if you will, for the entire film-to-display chain. The decision where to put/reference black and white was not up to the DVD, the display, or you. That decision was made by someone further back in the chain. All you can do is try to get your display to reproduce those reference levels based on agreed-upon guidelines (e.g., a calibration disc). In the film-to-disc chain, if someone got sloppy -- it does happen -- and encoded black a bit low or white a bit high (hence the headroom at both ends), you want your display to get all of the signal, not a clipped/compromised version.

I thought the audio analogy would actually help: I apologize if it made things more complicated. What it was meant to show was that extra bandwidth is needed to properly convey the defined bandwidth. Nasty things happen in digital very fast, unlike analog where things happen slower (distortion, clipping, etc), and those nasty things affect things around them. How's that for a technical explanation? :D You want to keep those nasties as far away from the meaningful data/signal as possible.

If it helps, don't think of digital 16 and digital 235 as black and white (i.e., colors): think of these as reference points for signals (i.e., intensity levels).
post #39 of 160
Thread Starter 
I am usually careful to call the two terms "Reference Black," with data below this point as "Blacker-than-Black." First, this point is fairly concrete at digital 16 (except for some float in the 'real-world' value of black to compensate for CRT black-level float, as explained). BTB data will not be visible when correctly calibrated (again, except for this float), but the presence of the BTB data through the video chain will affect the final visible image.

I call 235 "Nominal Reference White" and values above this "Peak Whites." I think this is clearer than calling them "whiter than white." Peak Whites will normally be visible on any CRT display that is properly calibrated. They don't just affect values below reference white: peak whites are *directly* visible as detail in bright objects.

Note the difference between Reference Black and 'Nominal' Reference White. You won't directly see BTB data (usually). You *will* directly see peak white data. This ends up providing video a *larger* dynamic range than graphics, since graphics does not allow for any data outside the bounds of black and "white."
post #40 of 160
Chris, how is a digital display device supposed to handle peak whites? Should they be visible there, too?
post #41 of 160
Thread Starter 
Displays with hard limits on on/off contrast, like current digital displays, are a little bit more difficult to deal with. In my opinion, you should calibrate so that peak whites are included in the final image without colorshifting or clipping them. Others do this as well, but I also understand that some may want to increase the white levels slightly, to increase perceived contrast a little bit. I would still hope that at least a portion of peak whites is being maintained, and this choice is a very subjective one. I won't be so stringent as to advocate that the full range of whites *must* be seen in the final image, but I would certainly feel that it is preferred. It also depends on the display. If a display hard-clips, I would find this slightly more acceptable than colorshifting. You don't want your whites to be off-white. If your display is colorshifting with peak whites, I would probably calibrate so that all the peak whites are maintained correctly as white (below the max white that starts to colorshift) to avoid this problem.
post #42 of 160
Btw, the Radeon's overlay correctly maps the 16-235 range to 0-255. VMR9 does the same except for YV12 or NV12 input modes (I guess this is just a bug).
And as quoted above, the 7.5 IRE setup is irrelevant for digital and VGA outputs.

This means that you usually don't have to calibrate a HTPC.
post #43 of 160
That's going to need some more explanation.

There IS NO correct mapping from 16-235 to 0-255. If you shift "black" from its proper location at 16 for home theater video to the PC-style encoding where "black" is represented as 0, then (1) you have eliminated any possibility of properly passing Blacker than Black data (there's no space below digital 0) and (2) you have inserted additional steps in the gray scale which may very well result in banding. Now there are better and worse ways to conceal that last problem, but the loss of BTB data should not be tolerated.
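To put a number on the banding risk: 220 video-range codes (16 through 235) cannot cover 256 PC-range codes, so some output levels are simply never produced, and those gaps show up as visible steps in smooth gradients. A minimal sketch (the rounding scheme here is an illustrative assumption, not how any particular driver implements the expansion):

```python
# Expand video-range codes [16..235] to PC-range [0..255] and count
# the output codes that can never occur -- those gaps are what
# appear as banding in smooth gradients.

def expand(code):
    return round((code - 16) * 255 / 219)

outputs = {expand(c) for c in range(16, 236)}
missing = sorted(set(range(256)) - outputs)

# 220 inputs can produce at most 220 distinct outputs, so at least
# 36 of the 256 output codes are unreachable.
print(len(missing))  # 36
```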

Furthermore, saying that typical HDTV-ready displays don't need to be calibrated simply because the source is an HTPC setup is also false. Most such displays ship with manufacturer's default settings -- even for digital video input -- that are just ghastly: The justifiably disparaged "torch" mode settings. At the very least you need to calibrate to extinguish the torch.
post #44 of 160
There IS NO correct mapping from 16-235 to 0-255. If you shift "black" from its proper location at 16 for home theater video to the PC-style encoding where "black" is represented as 0, then (1) you have eliminated any possibility of properly passing Blacker than Black data (there's no space below digital 0) and (2) you have inserted additional steps in the gray scale which may very well result in banding. Now there are better and worse ways to conceal that last problem, but the loss of BTB data should not be tolerated.
BTB should not be displayed. This is part of the DVD/HDTV standard. If you calibrate your chain so that it will pass BTB, then not only do you not view the movie as it is intended, but the majority of movies will also have too high a black level, causing a "washed out" picture.

Banding may have been an issue with older graphics cards, but with the introduction of internal 10-bit precision this problem is nearly obsolete nowadays.

Furthermore, saying that typical HDTV-ready displays don't need to be calibrated simply because the source is an HTPC setup is also false.
I didn't say that! ;)

I stated there is no need to calibrate the HTPC. The display is a completely different story. In fact, every display should be properly calibrated with Colorfacts or similar measuring equipment, because "out of the box" most displays are not calibrated or are calibrated wrong.
post #45 of 160
Thread Starter 

Again, we've just been over this, and you are not correct. Expanding to PC levels is not desired. Have you read my explanations in the guide, the linked threads, and the subsequent posts I've made explaining why?

I will not continue to argue this. You have a pm.


***I will NOT argue this again on this thread. Do not post about expanding to PC levels as being ok. It's been covered for 20 pages before, I will not repeat that here. If this keeps getting dragged down to newbie debates about this, I will close the thread, something that I really don't want to do.***
post #46 of 160
OK, we agree on the calibration. I wanted to make sure we didn't end up confusing people.

If you trust that your HTPC video output stage (or DVD player for that matter) is made properly by people who care about quality, then it is usually best to ASSUME its default settings will produce the best image it is capable of. For commercial DVD players (unlike TV sets), the factory default settings really are likely to be the correct ones to use. At least as a starting point. If you discover persistent problems after doing the best you can achieve with display calibration, then you need to revisit what your HTPC (or DVD player) is doing.


As for the BTB data, I'll have to suggest you re-read the earlier posts in this thread.

BTB data should not be DISPLAYED but, nevertheless, it should be PRESENT so as to keep any signal processing in the TV, panel, or projector from generating artifacts in the Black and Above data that IS supposed to be seen. This is all explained in detail in this thread.

If your display exhibits "floating black levels" as is typical with CRT-based systems, it is even MORE important that the BTB data get to the display because it will float up into visibility depending upon (usually) the average light level of what's currently being displayed.

However, if you generate a PC-style digital video signal where "Black" is represented as digital 0, there is no possibility of passing BTB data from the DVD to the display so that the display can take proper advantage of it to protect the quality of the Black and Above data.

So you should generate a video signal that INCLUDES BTB data (and check that it is really there by *TEMPORARILY* raising Brightness on your display until you can see it), but then calibrate your display so that the BTB data is *NOT* visible for normal viewing.

Even though it is not visible, the portions of the image that ARE visible will be improved because the signal processing in the display can take advantage of that "guard" data recorded below the light level which was arbitrarily selected by the producer to be represented as "black".
post #47 of 160
If you calibrate your chain so that it will pass BTB, then not only do you not view the movie as it is intended, but the majority of movies will also have too high a black level, causing a "washed out" picture.
In theory this is true.

Films are processed using CRT displays. CRTs can't hold black at black; they float. As the APL in the image changes, some BTB info will be displayed.

You do lower your CR when you calibrate to have all values above 235 visible. I went from 1500:1 down to 1000:1 when I did that. To me it is worth the loss in CR to not have blown-out whites.

The gamma of the display is critical. A lot of new digital displays have a gamma of 2.2, but this will reproduce an image that is not correct. The CRTs I mentioned above have a gamma of ~2.5. In order to see what the filmmakers intended, your display must have a gamma that matches.
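A rough sketch of why the gamma mismatch matters, assuming a simple power-law display model (an idealization, not a full characterization of any real display):

```python
# Idealized power-law display: relative luminance L = V ** gamma
# for a normalized drive level V in [0, 1].

def luminance(v, gamma):
    return v ** gamma

# A mid-gray drive level comes out noticeably brighter on a
# gamma-2.2 display than on the ~2.5-gamma CRT the material
# was mastered on, lifting shadows the filmmakers never intended.
v = 0.5
on_2_2 = luminance(v, 2.2)  # ~0.218
on_2_5 = luminance(v, 2.5)  # ~0.177
```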
post #48 of 160
Big apology, guys. I didn't read carefully enough. Now I understand why you suggest to use the whole 0-255 range.

I will try this on my 1292 tomorrow.
post #49 of 160
There's another issue besides the "CRT floating black level" that applies to all technologies, not just CRT: the floating black level of your visual system.

The SMPTE standard for black level calibration is to put up a 75% color bars+pluge pattern, increase the black level (brightness) control until the below-reference-black bar is visible, then back it off until the bar just disappears. In most cases, this will not be exactly when 16 is at the minimum light level. Because of the light washout effect of the 75% color bars, there will still be a little room under 16 that is visible when the average picture level of the whole screen is lower. This effect is by design.

Your visual system calibrates its sensitivity range based on the brightest object in view. Thus when you have a bright pattern (the color bars) on screen, your black sensitivity goes down, partially because your iris closes and partially because of other biochemical processes in your retina. So your perceived extinction point on the monitor will generally not match the physical extinction point on the monitor. This partially explains why your calibration will be different if you use a pure pluge pattern instead of a pluge pattern combined with color bars or a white field. Given the SMPTE recommendations, it's clear you should be setting black level with a pattern that includes either color bars or a white field.

In practice, code 16 from a digital video source like DVD will be calibrated at close to the physical extinction level (the physical minimum black level) of the display. But not exactly - when the SMPTE procedure is used, the perceived extinction level will not match the physical extinction level. Again, remember that the telecine operators are using a CRT monitor, and using the SMPTE calibration procedure. If you want to see what they see, you'll want to use a similar procedure.

Thus I am uncomfortable with statements that suggest that the below-16 values will never be seen. The telecine operators absolutely do see some below-16 values, as well as seeing the indirect effect of below-16 values (the aforementioned image processing and analog issues). Again, if you want to see what they see, you should calibrate your system accordingly. Are you going to see, say, a pixel at 8 surrounded by pixels at 9? No. But you'll almost certainly be able to see the difference between a dark suit with details that vary from 13 to 16 versus a suit that has all of the pixel values clamped to 16.

The upshot is this: toeroom is not just for the pluge pattern. There are good technical reasons for it, and it's not supposed to be clipped off at any stage of the video pipeline.
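Don's dark-suit example can be sketched in a few lines: clamping everything below reference black flattens shadow detail that was distinguishable before (the pixel values here are illustrative):

```python
# Pixel values 13..16 carry visible shadow detail in a dark suit,
# but clamping everything below reference black (16) collapses
# them all to a single level.

suit = [13, 14, 15, 16, 15, 14, 13]
clamped = [max(v, 16) for v in suit]

print(sorted(set(suit)))     # four distinct levels: [13, 14, 15, 16]
print(sorted(set(clamped)))  # one level: [16] -- the detail is gone
```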

post #50 of 160
Apologies in advance, Chris..

There are good technical reasons for it, and it's not supposed to be clipped off at any stage of the video pipeline.
Now we are back to where I began.. If the video chain results in an image that resolves blacker than black at the screen, then exact black (16) will be an elevated black level.. If you use that image as a reference, then all your programming's 'black' will be higher than the best black of the device and you will have continually compromised CR. If you include that toe room (and the head room) you DRASTICALLY reduce CR when measuring the 16 - 235 range.. You can't help it..

While it may be a desirable compromise in a CRT-based system, I can't see that it would be a good idea in the digital realm where CR is such an issue..

Also could someone explain why above white data is different from below black data ?? Why is above white desirable to be seen and below black calibrated out ?? Why is the reduced CR of above white a good idea (surely a correctly mastered title should have all intended data in the 16 - 235 range)..

Again sorry Chris...
post #51 of 160
Thread Starter 
Phreddy, the BTB data is confusing because as we've seen in my explanations and those by Don and Stacey, the "real" level for black that is encoded and seen can float around a bit. If, as Don mentions, you calibrate to make the BTB bars *not* visible in a high APL pattern, in low APL material you probably will see data that is ostensibly "below black."

Whiter-than-white material should be preserved in the final image, and will easily be seen. The BTB detail is more complicated, because you are *not* calibrating for 0 (really 1) to be black. You are calibrating 16 to be black, but data below 16 can and will become visible in a CRT setting, or a display that properly mimics the behavior of a CRT. Meanwhile, you would ideally want to place 254 just at the range of the maximum white of your display, so that peak whites are not clipped off or colorshifted. I can foresee and understand that some digital users may clip off *some* peak whites for a touch more CR, but I think going so far as to put nominal reference white (235) at the absolute max white of your display and clipping or shifting all values above that is too damaging to the picture.
post #52 of 160
Phat Phreddy,
I'll just add one point to what Chris said, without trying to get too technical:

What you are trying to achieve is not some technical standard of data reproduction perfection -- some law that says this pixel will be visible and that one won't. Rather what you are trying to achieve is to see WHAT THE DIRECTOR WANTED YOU TO SEE.

This is much tougher than it might appear, particularly in the face of the numerous human factors that arise as choices are made during the reproduction chain, as well as the expected mangling that occurs during signal processing.

But correct transmission of BTB data throughout the reproduction chain makes it easier.


For Whites, think of how you calibrate for loud sound. You want to calibrate the loudest sounds coming out of your system to a reference sound level -- again matching what the director wanted you to hear in a theater environment as the sustained level of loudest sound. But there are PEAK sounds that go above that to a certain degree. Good sound systems have a reserve capacity to go beyond the reference sound level, even though that higher sound level should not be sustained -- for comfort reasons if no other.

By calibrating "Reference White" properly, you give your display the best chance of showing typical brightly lit scenes properly, and without blurring, color bias, or geometry distortion. However highlights will go beyond that, and the BEST displays will be able to give that extra 110% performance for small sections of the image for short periods of time, even though calibrating the display to show White as at that level on a sustained basis might be beyond what the display technology can accomplish without introducing other problems.

Of course the display can only do this if the Peak White data MAKES IT to the display in the first place.


So Blacks and Whites really are different, just as inaudible sounds are distinct from peak sounds above the reference maximum sound level.
post #53 of 160
But you are approaching this from a CRT background
showing typical brightly lit scenes properly, and without blurring, color bias, or geometry distortion.
On my DLP, whites do none of the above..

Any exposure of BTB data wrecks my black level and causes dithering for the majority of 'correctly' leveled black data at 16.

Peak white vs. nominal studio white is less of an issue for me, as I have a bright DLP, so I already have a pretty bright image even at 235. Also, raising whites by 16 points at the upper end of the scale causes far less CR decrease than having black higher than reference (to me, 16).

Black float etc. is not an artifact I see with digital. Dithering and CR are more of a problem, and the only 'float' I can imagine in black levels would be due to room reflections.
post #54 of 160
As an audio analogy, I would consider studio white to be simply running a tiny bit short of peak dB, but leaving data below studio black would create a constant hiss.

What would be worse in an audio system? A little bit under absolute peak sometimes, or a constant hiss?
post #55 of 160
Thread Starter 
Phreddy, you're missing the whole point. You calibrate 16 to be black on your DLP. The fact that a DLP does not emulate a CRT in this regard, and holds a very constant and precise level for black, may be a weakness, but you would still calibrate black (no dithering) to be 16. Some will suggest that you bump your brightness up a couple of clicks to maintain some detail a little below 16; this is a subjective preference. But we are *not* saying that you calibrate 0 (or 1) to be black on your DLP. Indeed, black (16) would then be extremely elevated, and this is *not* what we're after.

I think the audio analogy may serve to obscure things, since we can't really talk about the white or black levels in an audio system quite the same way.

But you're misunderstanding the calibration: maintaining BTB data through to the display does NOT mean that you calibrate your black levels up so that you *see* the BTB data. As Don pointed out, with a CRT you will end up seeing some data that is ostensibly below 16. However, a DLP that does not emulate a CRT in this regard, and maintains a fixed black level, should be calibrated so that black is at 16, or ever so slightly below it. Either setting is a compromise. But you *don't* calibrate so that you see the full range of values. You shouldn't see the PLUGE bar. If you decide that you want a little bit of the BTB data, you may just barely see the BTB PLUGE.
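As a rough illustration of the two compromises being debated here, assume an idealized fixed-black-point display: every code at or below the chosen black point renders as pure black, and everything above it shows as shadow detail. The function and the sample codes are hypothetical, not any real display's behavior:

```python
# Toy model of a fixed-black-point digital display (illustrative only).

def render_level(code, black_code):
    """Codes at or below `black_code` render as pure black (0);
    above it, output rises with the code value."""
    return max(code - black_code, 0)

sample_codes = (10, 16, 17, 20)  # a BTB code, black, and two shadow codes

# Compromise 1: clamp exactly at 16 -- deepest black, no BTB visible.
hard = [render_level(c, 16) for c in sample_codes]
# Compromise 2: black point a couple of codes lower -- a hint of BTB
# survives, at the cost of a slightly elevated floor for code 16.
soft = [render_level(c, 14) for c in sample_codes]

print(hard)  # [0, 0, 1, 4]
print(soft)  # [0, 2, 3, 6]
```

Note how in the second setting code 16 itself is no longer at the display's minimum, which is exactly the CR/black-level cost Phreddy is objecting to.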

Got it?
post #56 of 160
I am not misunderstanding what you say at all.

What I am pointing out is that while you advocate a 'hard' cutoff at 16 (I agree; it's the only way to keep a solid low black level for most content), Don is saying that 'some' below-black info should be seen. This is why I pointed out that some are arguing this point with a purely CRT mindset (blooming, color shifts, distortion) and not considering the digital side.

If you show the below-black info, you raise black, destroy the 'average' black level, and hence destroy CR on correctly mastered titles.

I personally would rather have the vast majority of titles look good, with high CR and a low black level, than sacrifice this to see some below-black data that arguably should not be seen if correctly mastered.
post #57 of 160
Thread Starter 
Phred, the point is that the standards, mastering chains, etc. are all based on the behaviors of CRTs. A display that does not mimic a CRT will not perform in the same way, and thus will not quite match what the engineers intended. Some may call this a weakness for now; some may call it an improvement. But in any case, you have to compromise on this setting, since your display doesn't behave like a CRT does. You can fix black (16) at the lowest black level of your display and possibly lose some visible shadow detail, OR you can bump up your brightness a couple of clicks and maintain a little bit of BTB data. *BOTH* settings are a compromise. You have to decide for yourself how to deal with this weakness, as I've explained. I don't have a recommendation here, since I'm not using a DLP. I know some users who keep some data below black, and I know some who set 16 as the blackest they can go on their DLP (no dithering). I have my own predictions about what setting I'd settle on for a DLP, but I don't live with one, so I haven't spent a lot of time fiddling with this compromise. I know more about setting up my own CRT and dealing with a different set of problems in coming up with a black level setting.
post #58 of 160
however, a DLP that does not emulate a CRT in this regard
Depends on the DLP. The Samsung HP700AE was designed to emulate a CRT broadcast monitor.

Most DLPs don't do this, however.
post #59 of 160
I have an NEC 50XM4 plasma and a Denon 2910 DVD player. DVDs are connected via DVI, HD satellite through component. In general, the default display settings look quite nice. I changed the NEC's DVI setting to STB/DVD from PC and black level to high; per the manual, black level should be set to low for PC and high for STB/DVD. The Denon's DVI black level is set to normal. Here is my conundrum: when I calibrate with DVE (DVD PLUGE with gray scale), these default settings are clearly passing BTB; all 3 bars are quite visible. When I throttle back brightness so that the 3rd bar just disappears, the performance of DVDs with dark scenes looks terrible; a lot of detail in these dark areas just disappears. Any thought as to why this occurs?
post #60 of 160
If you are correct that near-black detail actually disappears, then I can only assume your player or display is rounding near-black levels incorrectly.

Are you sure that you are not seeing noise instead? I ask this because the below-black bar in DVE is reportedly encoded as digital 7, where black (the background on that chart) is digital 16. If you adjust black levels (Brightness control) so that the third black bar just barely vanishes when compared to black, then in fact you have adjusted to make a portion of the Blacker than Black data (8 to 15 in this case) VISIBLE. Since Blacker than Black data tends to be of lower quality, this can show up as noisy low-black regions of the image. Try lowering black levels (Brightness control) another notch or two and see if that improves the apparent image quality. Be sure to re-check white levels (Contrast control), since these two settings often interact. Remember that if you have black levels adjusted properly, apparent "details" recorded in the Blacker than Black data WON'T be visible, and this is CORRECT -- i.e., it's what the film-maker intended.
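The arithmetic above can be sketched like this (a toy model only, assuming the DVE levels reported in this thread: a BTB bar at digital 7 on a digital-16 background):

```python
# Toy model: treat the Brightness setting as an effective "black floor"
# code; any code at or below the floor renders as black, anything above
# it becomes visible. Levels assume DVE's BTB bar at digital 7.

def visible_btb_codes(floor):
    """Return the blacker-than-black codes (1-15) that would render
    above black when the display's effective black floor is `floor`."""
    return [code for code in range(1, 16) if code > floor]

# Brightness raised until the code-7 bar just barely disappears:
print(visible_btb_codes(7))    # [8, 9, 10, 11, 12, 13, 14, 15] -- visible BTB
# Brightness set so that 16 is the first level above black:
print(visible_btb_codes(15))   # [] -- no BTB shows, as intended
```

This is why calibrating "to the third bar" on that chart leaves codes 8-15 visible: the noisy BTB region the post above describes.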