Originally Posted by Bear5k
Since you found some of the links, unless you want to get into the matrix math itself, you want to begin reading this thread here: http://www.avsforum.com/avs-vb/showthread.php?t=877191
(post 969) for a discussion about going from a broader to a narrower color gamut. The math is the same and the intuition is the same, but the specific example is slightly different. Bruce Lindbloom has the raw matrix math on his site if you want to look at that. The punchline is that the primary mixes are governed by a combination of the chromaticities of the primaries and the white point. Since the white point is the same between SD and HD, the changes in the mix derive from the differences in the primary chromaticities.
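To make the quoted point concrete, here is a rough Python sketch (the function name and the use of numpy are mine) of the construction on Lindbloom's site: the RGB-to-XYZ matrix is built from the primary chromaticities, and the "primary mixes" are the column scales forced by the white point.

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """RGB->XYZ matrix from primary chromaticities (x, y) and a white
    point, following the derivation on Bruce Lindbloom's site."""
    # Each primary becomes an XYZ column, provisionally scaled to Y = 1.
    M = np.array([[x / y, 1.0, (1.0 - x - y) / y]
                  for x, y in primaries]).T
    xw, yw = white
    W = np.array([xw / yw, 1.0, (1.0 - xw - yw) / yw])
    # The column scales (the "primary mixes") are pinned down by
    # requiring RGB = (1, 1, 1) to reproduce the white point.
    S = np.linalg.solve(M, W)
    return M * S   # scale each column by its mix

# Rec.709 primaries with a D65 white point, as (x, y) chromaticities.
M709 = rgb_to_xyz_matrix([(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)],
                         (0.3127, 0.3290))
print(M709[1])   # middle row ~ (0.2126, 0.7152, 0.0722), the 709 luma weights
```

Feeding in the SD primaries with the same D65 white gives the SD matrix; since the white point is shared, every difference between the two matrices traces back to the primaries, which is the point made above.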
Thanks Bill. I'll read that post; I have already been through Bruce Lindbloom's site a few times as well. What I don't quite understand is whether an upscaling DVD player somehow tries to compensate for differences in the primary chromaticities by adjusting the luminances, or if it just ignores the primary chromaticities. Let me give an example:
Take a film frame that is "pure red" according to the 601 standard. The RGB for that frame would be (I think):
RGB: 235, 0, 0
Now I don't know the coefficients for the 709 YPbPr<->RGB conversion, so I'll do it "air code" style. We convert the RGB to YPbPr using the 601 definitions to get (these numbers are fake):
Y = 1
Pb = 0.4
Again, those are just made up numbers. That is how the luminance info is stored on the DVD. Now, if we decode that signal using 601 we get:
RGB: 235, 0, 0
But if we decode it using 709 we get something different, say (again made up numbers):
RGB: 215, 0, 0
In order for this not to happen, we have to scale the 709 equations against the 601 equations, which is simple enough, so the DVD player changes the YPbPr signal before sending it. It alters the YPbPr signal so that, when it is decoded by the TV using 709, we get:
RGB: 235, 0, 0
Which is what we should get. The problem with this scheme is that it ignores the fact that the primary chromaticities in the TV are different, so the same RGB of 235,0,0 actually produces a slightly different color than the original film frame. This is because RGB 235,0,0 just tells the set to turn the RED on at 100%, with the BLUE and GREEN at 0%. It does not tell the TV what RED, GREEN, and BLUE should be in terms of chromaticity.
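The mismatch in scenario 1 is easy to show with real coefficients instead of my fake Y/Pb numbers. In this sketch the function names and structure are mine, and I use normalized 0..1 levels rather than the 16..235 digital codes: encode "pure red" with the 601 luma weights, then decode that identical YPbPr triple with the 709 weights.

```python
# (Kr, Kb) luma weights for red and blue; green's weight is 1 - Kr - Kb.
REC601 = (0.299, 0.114)
REC709 = (0.2126, 0.0722)

def encode(rgb, coeffs):
    """Nonlinear R'G'B' -> Y'PbPr using the given luma weights."""
    kr, kb = coeffs
    r, g, b = rgb
    y = kr * r + (1 - kr - kb) * g + kb * b
    return y, (b - y) / (2 * (1 - kb)), (r - y) / (2 * (1 - kr))

def decode(ypbpr, coeffs):
    """Y'PbPr -> R'G'B' using the given luma weights."""
    kr, kb = coeffs
    y, pb, pr = ypbpr
    r = y + 2 * (1 - kr) * pr
    b = y + 2 * (1 - kb) * pb
    g = (y - kr * r - kb * b) / (1 - kr - kb)
    return r, g, b

red = encode((1.0, 0.0, 0.0), REC601)   # "pure red" as stored per 601
print(decode(red, REC601))   # round-trips to (1.0, 0.0, 0.0)
print(decode(red, REC709))   # roughly (1.086, 0.097, -0.014)
```

So an unconverted 601 stream decoded as 709 over-drives red and leaks in some green. That is the luma-coefficient half of the problem, which a YPbPr rescaling in the player can fix; the primary-chromaticity half is the part it cannot.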
2) The second scenario I can think of is that, since the SD gamut falls inside the HD gamut, you try to adjust the signal to reproduce the correct SD chromaticity by altering the luminances (I'm not even sure this is possible). For instance:
The DVD player recognizes that (same fake numbers as before)
Y = 1
Pb = 0.4
is decoded in SD as:
RGB: 235, 0, 0
The DVD player knows that, since the HD primary chromaticities are different, RGB 235,0,0 is going to produce the wrong color for the upconverted HD signal. So it adjusts the RGB, adding small BLUE and GREEN components that shift the color reproduced by the TV from 100% 709 primary red to some point inside the color gamut that actually corresponds to the 601 primary red (again, not sure that even works).
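Scenario 2 amounts to a matrix conversion through XYZ, so here is a hedged numpy sketch of it, under two assumptions I should flag: the SD primaries are SMPTE-C (one common choice for 601 material), and we operate on linear (de-gammaed) RGB. The RGB-to-XYZ matrices are rounded from the D65 tables on Bruce Lindbloom's site.

```python
import numpy as np

# RGB -> XYZ for SMPTE-C (SD) and Rec.709 (HD), both D65, rounded
# from Bruce Lindbloom's tables.
SMPTEC_TO_XYZ = np.array([[0.3936, 0.3652, 0.1916],
                          [0.2124, 0.7010, 0.0865],
                          [0.0187, 0.1119, 0.9582]])
REC709_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

# SD RGB -> XYZ -> HD RGB collapsed into one matrix.
sd_to_hd = np.linalg.inv(REC709_TO_XYZ) @ SMPTEC_TO_XYZ

sd_red = np.array([1.0, 0.0, 0.0])   # 100% SD red, linear light
print(sd_to_hd @ sd_red)             # roughly (0.940, 0.018, -0.002)
```

The small positive green is exactly the kind of shift described above. Interestingly, the blue comes out slightly negative, which suggests (if I've copied the matrices right) that SMPTE-C red sits just outside the 709 triangle, so "SD falls inside HD" isn't quite exact and that component would have to be clipped.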
It seems like scenario 2 is considerably more complicated and would yield only marginally better results, considering that the 601 and 709 primaries are already pretty close to each other. Do you know if either of these scenarios is actually what happens, or is it something else?
Off to read the post you linked to...