All guides and how-tos explain that the white level should be low enough not to clip, but high enough to fit all the whites. I would like to see some comments on the latter.
My understanding is that by adjusting the white level I set the maximum displayable white, and all other whites should be remapped proportionally inside the range from the current black to the current white. Technically, I should not lose anything, especially in an analog system. In a digital system I might lose something if the native pixel depth is too low, so that not all input values can be converted into distinct native values.
If the native pixel depth is deep enough, then I should not lose information. Most modern TVs have at least 10 bits per color, some have 12, and some even 16. Therefore the white range should simply scale like a rubber band as I increase or decrease the white level, as sketched below.
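To make the rubber-band argument concrete, here is a rough Python sketch (the function name, the numbers, and the whole pipeline are made up for illustration, not how any real TV actually works): reference white is scaled to wherever the white level puts it and then quantized to the panel's native depth. Detail only disappears when that depth is too shallow.

```python
# A minimal sketch of the "rubber band" idea (all numbers and names are
# hypothetical, not any real TV's pipeline): scale an 8-bit video code so
# that reference white (235) lands at the chosen white level, then quantize
# to the panel's native bit depth.

def map_to_panel(code, white_level, panel_bits):
    """Map an 8-bit video code to a panel drive value.

    white_level is the fraction of full panel drive that reference white
    (code 235) should reach; panel_bits is the panel's native depth.
    """
    black, ref_white = 16, 235                     # video black / reference white
    frac = max(0.0, (code - black) / (ref_white - black))
    frac *= white_level                            # rubber-band scaling
    levels = 2 ** panel_bits - 1
    return min(round(frac * levels), levels)       # clip at panel maximum

# Two adjacent near-white codes stay distinct on a 10-bit panel...
print(map_to_panel(230, 0.5, 10), map_to_panel(231, 0.5, 10))   # 500 502
# ...but collapse into the same step on an 8-bit panel at the same
# (exaggeratedly low) white level, which is the only way I can see to
# actually lose near-white detail by scaling.
print(map_to_panel(230, 0.5, 8), map_to_panel(231, 0.5, 8))     # 125 125
```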
Yet I see recommendations to make sure the white level is high enough to distinguish near-white tones, but low enough not to show whiter-than-white tones. This implies there is only one "proper" white level, which does not seem to hold true.
Case in point: I am adjusting the white level on a 50" Panasonic plasma using the THX Optimizer. Regardless of the white level, I keep seeing the whiter-than-white as well as the almost-white tones. This seems proper to me.
Therefore, it seems that the actual reason for adjusting the white level is adjusting the gamma curve so that all gray tones throughout the grayscale remain discernible to the human eye. But I think this is a different concept from the idea of losing bright whites at the top end of the scale (only the brights, not the midtones or dark tones!) because of the white level setting.
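To separate the two ideas, here is a second self-contained sketch (again with made-up numbers and the same hypothetical mapping as above) of the top-end clipping case the guides seem to warn about: if reference white is already driven to the panel maximum, everything above it collapses into a single value, whereas backing the level off keeps those steps distinct and merely rescales everything below.

```python
# A self-contained sketch (hypothetical numbers): what happens to
# whiter-than-white codes (236-254) at two white-level settings on a
# 10-bit panel.

def panel_code(code, white_level, panel_bits=10):
    frac = max(0.0, (code - 16) / (235 - 16)) * white_level
    levels = 2 ** panel_bits - 1
    return min(round(frac * levels), levels)

wtw = (235, 240, 245, 250, 254)
# Reference white driven to 100% of the panel: everything above it clips to
# the maximum, so whiter-than-white detail disappears at the top end only.
print([panel_code(c, 1.0) for c in wtw])   # [1023, 1023, 1023, 1023, 1023]
# Backed off to 90%: the same codes remain distinct; midtones and dark tones
# are merely rescaled, not lost.
print([panel_code(c, 0.9) for c in wtw])   # [921, 942, 963, 984, 1001]
```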