Is this because of the different weights placed upon the red, green, and blue sensitivities in the eye with YCC? Despite its ease of understanding, RGB by itself is not a data-efficient format because it places equal color resolution on all three channels when green should be given the lion's share, and blue about 1/6th of green's share. No?
If the YCC starts off as 8-bit integers, you then use floating point to convert them to RGB in the floating-point domain and then downcast them back to 8-bit integers. I'm not sure where you lose precision, unless the ranges themselves (not precision) for each of the R, G, and B don't map to the YCC ranges 1:1.
No, the displays don't approach our eyes, but that's why I'm asking what YCC is doing. I'm not talking about chroma subsampling but was asking if YCC was weighting the bits where the sensitivities lie (roughly a 30/59/11 split). Regardless of how overall sensitive our eyes are, it'd make better use of the colors if more depth (gradations) were allowed for green than for blue. If 24 bits is all we've got, it makes no sense to give only 8 of them to the most sensitive cone in our eyes and a full 8 of them to blue.
Never mind, I just looked up the equations. It is a range issue, and I understand what you were getting at now, because when the range is squashed you lose precision....I think that's what you meant anyway. As a result, the YCC color model can produce many YCC values that translate to invalid RGB values when using the same nominal range of 0...1.0 for Y, Cb, and Cr and expecting 0...1.0 for RGB.
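For example, with everything normalized to 0...1.0 and the common 1.402 weight on the Cr term of the red equation: Y = 0.5 and Cr = 0.9 gives R = 0.5 + 1.402*(0.9 - 0.5) ≈ 1.06, which is already outside the RGB range before any rounding or precision question even comes up.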
I'll have to think on this unless you can correct me one way or the other.
The YCbCr color space is used for component digital video and was developed as part of the ITU-R BT.601 Recommendation. YCbCr is a scaled and offset version of the YUV color space.
The Intel IPP functions use the following basic equations [Jack01] to convert between R’G’B’ in the range 0-255 and Y’Cb’Cr’ (this notation means that all components are derived from gamma-corrected R’G’B’):
Y' = 0.257*R' + 0.504*G' + 0.098*B' + 16
Cb' = -0.148*R' - 0.291*G' + 0.439*B' + 128
Cr' = 0.439*R' - 0.368*G' - 0.071*B' + 128
R' = 1.164*(Y'-16) + 1.596*(Cr'-128)
G' = 1.164*(Y'-16) - 0.813*(Cr'-128) - 0.392*(Cb'-128)
B' = 1.164*(Y'-16) + 2.017*(Cb'-128)
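As a rough illustration only (the helper names and the saturating clamp below are mine, not the actual Intel IPP function signatures), a direct C transcription of the two equation sets above might look like this:

/* Saturate a floating-point result to the 8-bit range and round to nearest. */
static unsigned char clamp_u8(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (unsigned char)(v + 0.5);
}

/* Gamma-corrected R'G'B' (0-255) -> Y'Cb'Cr' per the BT.601-style equations above. */
static void rgb_to_ycbcr601(unsigned char r, unsigned char g, unsigned char b,
                            unsigned char *y, unsigned char *cb, unsigned char *cr)
{
    *y  = clamp_u8( 0.257*r + 0.504*g + 0.098*b +  16.0);
    *cb = clamp_u8(-0.148*r - 0.291*g + 0.439*b + 128.0);
    *cr = clamp_u8( 0.439*r - 0.368*g - 0.071*b + 128.0);
}

/* Y'Cb'Cr' -> gamma-corrected R'G'B', again saturated to 0-255. */
static void ycbcr601_to_rgb(unsigned char y, unsigned char cb, unsigned char cr,
                            unsigned char *r, unsigned char *g, unsigned char *b)
{
    double luma = 1.164*(y - 16.0);
    *r = clamp_u8(luma + 1.596*(cr - 128.0));
    *g = clamp_u8(luma - 0.813*(cr - 128.0) - 0.392*(cb - 128.0));
    *b = clamp_u8(luma + 2.017*(cb - 128.0));
}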
The Intel IPP color conversion functions specific for the JPEG codec use different equations:
Y = 0.299*R + 0.587*G + 0.114*B
Cb = -0.16874*R - 0.33126*G + 0.5*B + 128
Cr = 0.5*R - 0.41869*G - 0.08131*B + 128
R = Y + 1.402*Cr - 179.456
G = Y - 0.34414*Cb - 0.71414*Cr + 135.45984
B = Y + 1.772*Cb - 226.816
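Tying this back to the range question raised earlier, here is a small self-contained C demo (my own sketch, not IPP code) that runs a perfectly legal 8-bit YCbCr triple through the JPEG-variant equations: the green result comes out negative, has to be clamped, and the round trip back to YCbCr therefore no longer reproduces the original values.

#include <stdio.h>

/* Saturate to 0-255 with rounding. */
static unsigned char clamp_u8(double v)
{
    return (unsigned char)(v < 0.0 ? 0.0 : (v > 255.0 ? 255.0 : v + 0.5));
}

int main(void)
{
    /* A legal YCbCr triple that lies outside the RGB cube. */
    double Y = 50.0, Cb = 220.0, Cr = 220.0;

    /* JPEG-variant inverse; 179.456, 135.45984, and 226.816 are just the
       corresponding coefficients multiplied by the 128 chroma offset.     */
    double r = Y + 1.402*Cr - 179.456;
    double g = Y - 0.34414*Cb - 0.71414*Cr + 135.45984;
    double b = Y + 1.772*Cb - 226.816;
    printf("raw RGB:          %8.3f %8.3f %8.3f\n", r, g, b);  /* g is about -47 */

    unsigned char R = clamp_u8(r), G = clamp_u8(g), B = clamp_u8(b);

    /* Forward conversion of the clamped pixel: not (50, 220, 220) anymore. */
    double Y2  =  0.299*R   + 0.587*G   + 0.114*B;
    double Cb2 = -0.16874*R - 0.33126*G + 0.5*B     + 128.0;
    double Cr2 =  0.5*R     - 0.41869*G - 0.08131*B + 128.0;
    printf("round-trip YCbCr: %8.3f %8.3f %8.3f\n", Y2, Cb2, Cr2);
    return 0;
}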
The YCCK model is specific to JPEG image compression. It is a variant of the YCbCr model with an additional K (black) channel. A JPEG codec performs more effectively when the luminance and color information are decoupled; therefore, a CMYK image should be converted to YCCK before JPEG compression (see the description of the function ippiCMYKToYCCK_JPEG for more details).
Possible RGB colors occupy only part of the YCbCr color space (see Figure "RGB Colors Cube in the YCbCr Space") limited by the nominal ranges; therefore, there are many YCbCr combinations that result in invalid RGB values.
There are several YCbCr sampling formats, such as 4:4:4, 4:2:2, 4:1:1, and 4:2:0, which are supported by the Intel IPP color conversion functions and are described in Image Downsampling.
Figure: RGB Colors Cube in the YCbCr Space
Since the PhotoYCC model attempts to preserve the dynamic range of film, decoding PhotoYCC images requires selection of a color space and range appropriate for the output device. Thus, the decoding equations are not always the exact inverse of the encoding equations. The following equations [Jack01] are used in Intel IPP to generate R’G’B’ values for driving a CRT display and require a unity relationship between the luma in the encoded image and the displayed image:
R' = 0.981*Y + 1.315*(C2 - 0.537)
G' = 0.981*Y - 0.311*(C1 - 0.612) - 0.669*(C2 - 0.537)
B' = 0.981*Y + 1.601*(C1 - 0.612)
The equations above are given on the assumption that the source Y, C1, and C2 values are normalized to the range [0..1], and the display primaries have the chromaticity values in accordance with the [ITU709] specifications.
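As a minimal sketch only (the helper name is mine, not the IPP signature), a literal C transcription of that decoding step could look like the following; inputs are assumed to be already normalized to [0..1], and the outputs are deliberately left unclamped so out-of-gamut results stay visible to the caller:

/* PhotoYCC -> nonlinear R'G'B' for a BT.709-primary display, per the
   equations above. Inputs normalized to [0..1]; outputs may leave [0..1]. */
static void photoycc_to_rgb(double y, double c1, double c2,
                            double *r, double *g, double *b)
{
    *r = 0.981*y                      + 1.315*(c2 - 0.537);
    *g = 0.981*y - 0.311*(c1 - 0.612) - 0.669*(c2 - 0.537);
    *b = 0.981*y + 1.601*(c1 - 0.612);
}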
The possible RGB colors occupy only part of the YCC color space (see Figure "RGB Colors in the YCC Color Space") limited by the nominal ranges; therefore, there are many YCC combinations that result in invalid RGB values.
Aside from the 4K hardware issues, the H.264 vs. H.265 issues are completely solvable by firmware updates, no?
Not that you were strictly implying this, but the one thing I might raise in response is that (in my opinion) the adoption of OTA-HD last time around is a poor analogy for the adoption of further standards.
Last time around, there were two things going on at once:
Now that we've finally bitten the digital transmission bullet, ever-increasing formats (along any axis---resolution, color, frame rate) make the backward-compatibility problem far easier to solve. I think.
Aside from "where to put it" point you made below, I'm still not sure why this is a tough issue. Firmware updates are nearly everywhere. Even TVs that aren't connected to the internet have had SD card slots allowing updates. And asside from bandwidth, MPEGanything is a software issue. Unless the raw processing power required suddenly went up, which isn't out of the question.
And this is an astoundingly good point! Makes me wonder though: I'm not 100% sure we can't mathematically supply a single format that has 2K-MPEG2 in it, plus an incremental amount of data bringing it to 4K-MPEG4/5, such that the sum total is close to what 4K-MPEG4/5 would be alone. That is, having the MPEG4/5 information make use of the MPEG2 data. I'm trying to think back to JPG, how the DCT coefficients were managed, and whether data could be added to make a "better JPG" such that the original JPG wasn't wasted. Some of the craziest formats have been incrementable---I remember how ingenious YIQ was and how it didn't waste the black & white signal previously sent.
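Just to pin down the kind of layering I have in mind (a toy sketch of the general idea only, not how any actual MPEG profile does it): send a 2K base picture, then send only the difference between the true 4K picture and an upscaled copy of that base, so the base-layer bits are never wasted.

/* Toy enhancement layer: base[] is the bw x bh base picture, full[] the
   (2*bw) x (2*bh) target. The "extra" data to transmit is the residual
   full - upscale(base); the receiver rebuilds full = upscale(base) + residual. */
static void make_residual(const unsigned char *base, int bw, int bh,
                          const unsigned char *full, short *residual)
{
    for (int y = 0; y < 2*bh; ++y)
        for (int x = 0; x < 2*bw; ++x) {
            unsigned char up = base[(y/2)*bw + (x/2)];   /* nearest-neighbour upscale */
            residual[y*2*bw + x] = (short)(full[y*2*bw + x] - up);
        }
}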
But again: we're up against the firmware update argument again....which is another point you might be right on but I don't yet agree with.
This might be cart before the horse (?) The reason we had to have such a plan on the books was the monumental leap we were attempting. And by the way, that plan wasn't around for long beforehand because we were forward-thinking that many years in advance; AFAICT, it only seems that way because we kept delaying the adoption over and over and over, scared silly that Mom & Pop would wake up one day with broken TVs and vote their congressman out. Does it ever suck having congress in the way of everything. This is why we're constantly looking over the lake at Japan and saying "aw.......@#$%, I want that."