Yes yes, shifting from RGB to YUV and back is still a good idea, and back in the '50s, simply throwing away most of the bandwidth of the two chroma channels (blue-difference and red-difference) was (like interlaced scan) a pretty clever compromise.
But nowadays? We don't worry much about the bandwidth impact of resolution climbing to 1080p60 and beyond, even over the internet. Since the flexibility of lossy compression at variable bitrates lets us choose far more precisely what to sacrifice, HD is typically streamed well below the bitrate of an SD DVD without looking too shabby. Heck, if fractal compression ever catches on, the notion of frame resolution will mostly go out the window in favor of raw bandwidth consumption.
In spite of this, every consumer codec from JPEG to H.264 is chroma subsampled (and almost always at godawful 4:2:0), producing increasingly huge blocky multicolored jaggies that absolutely can't be removed.
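To see why those jaggies are unremovable, here's a minimal sketch of what 4:2:0 does to a sharp color edge. It uses the BT.601 full-range RGB-to-YCbCr conversion; the pipeline is illustrative, not any particular codec's exact implementation.

```python
# Why 4:2:0 ruins sharp color edges: a 2x2 block gets full-resolution
# luma but only ONE Cb and ONE Cr sample (the average of all four).
# Conversion below is BT.601 full-range, for illustration only.

def rgb_to_ycbcr(r, g, b):
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

# A 2x2 block straddling a sharp red/blue edge:
# left column pure red, right column pure blue.
block = [(255, 0, 0), (0, 0, 255),
         (255, 0, 0), (0, 0, 255)]
ycbcr = [rgb_to_ycbcr(*p) for p in block]

# 4:2:0 collapses the block's chroma to one averaged (Cb, Cr) pair.
avg_cb = sum(p[1] for p in ycbcr) / 4
avg_cr = sum(p[2] for p in ycbcr) / 4

# The result matches neither red's chroma nor blue's: the crisp edge
# smears into a single in-between color, and no post-filter can
# recover the per-pixel chroma that was discarded.
print(avg_cb, avg_cr)
```

Luma keeps the edge sharp, which is exactly why you see those multicolored halos hugging otherwise crisp outlines.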
Imagine if, instead, after the RGB-YCbCr stage, codecs simply biased toward compressing each chroma channel at higher ratios (treating it like HD greyscale). Then, whenever something (even just part of the frame) compressed really well but produced horrible chroma artifacts (like a slow pan over thin, sharp, colorful line art), it could often stay in 4:4:4.
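Roughly what I mean, as a hypothetical per-block decision (the function name and threshold are invented for illustration, not from any real codec): subsample chroma where it's flat, keep it at full resolution where it actually varies.

```python
# Hypothetical sketch of "bias toward heavy chroma compression, but
# fall back to full-resolution chroma where detail matters."
# For each 2x2 block: if chroma variance exceeds a threshold, keep
# per-pixel chroma (4:4:4); otherwise store one averaged sample
# (4:2:0). Threshold is an invented example value.

def chroma_mode(cb_samples, cr_samples, threshold=100.0):
    """Pick a chroma resolution for one 2x2 block."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    if variance(cb_samples) > threshold or variance(cr_samples) > threshold:
        return "4:4:4"   # spend bits on per-pixel chroma here
    return "4:2:0"       # flat color: one sample per block is fine

# Sharp red/blue edge: Cb swings wildly, so keep full chroma.
print(chroma_mode([85, 255, 85, 255], [255, 107, 255, 107]))   # 4:4:4
# Near-uniform patch of sky: chroma barely moves, subsample freely.
print(chroma_mode([120, 121, 120, 122], [130, 130, 131, 130]))  # 4:2:0
```

A real encoder would fold this into its rate-distortion decisions rather than use a hard variance cutoff, but the point stands: the sampling format could be an adaptive, per-region choice instead of a fixed global one.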
Even people who dive into the intricate minutiae of codec architectures, like Jason Garrett-Glaser, seem to just take chroma subsampling for granted.
Has anybody in a standards body ever expressed interest in fixing this hoary old hack?