Originally Posted by DrewL
To round up:
- The purpose of those values is to allow for quantization errors.
That's my understanding. The Rec/ITU 601 level space was decided in the very early 80s, and in those days it was designed to allow component digital devices to be used within largely analogue composite environments: composite decoding and A/D conversion on input, D/A conversion and composite encoding on output. The source content could also be pretty artefacty, having come through a largely analogue production chain, so edge overshoots and undershoots were commonplace. If you clip over/undershoots you potentially generate nasty harmonics when you convert back to analogue, which appear as ringing (HF edge artefacts), so it's important to preserve them. Hence the requirement to capture levels below black (for undershoots on low-level transitions) and above peak white (for overshoots on high-level transitions) when your digital kit is an island in an analogue chain - and clipped transitions can also cause problems with digital processing downstream. All very sensible decisions in the early 80s.
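To make that concrete (my own illustration, not from DrewL's post - the function name and the 5% over/undershoot figures are invented for the example), this is roughly how the 8-bit Rec. 601 quantisation behaves: black sits on code 16, peak white on 235, and under/overshoots spill into the spare codes rather than being clipped. Only 0 and 255 are genuinely off-limits, as they're reserved for timing reference (SAV/EAV) codes.

```python
def quantize_luma_8bit(y_analogue):
    """Map a nominal 0.0-1.0 luma value to a Rec. 601 8-bit code.

    Black (0.0) lands on code 16, peak white (1.0) on 235.
    Under/overshoots spill into the footroom (1-15) and headroom
    (236-254) instead of being clipped; only 0 and 255 are avoided
    because they are reserved for timing reference (SAV/EAV) codes.
    """
    code = round(16 + 219 * y_analogue)
    return max(1, min(254, code))

# An analogue edge with a -5% undershoot and a +5% overshoot:
print(quantize_luma_8bit(-0.05))  # 5   - preserved below black
print(quantize_luma_8bit(0.0))    # 16  - black
print(quantize_luma_8bit(1.0))    # 235 - peak white
print(quantize_luma_8bit(1.05))   # 246 - preserved above white
```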
3. Does anyone know why so much headroom was chosen? Does it have to do with corresponding analog IRE values?
I guess the levels were chosen based on the likely levels of over/undershoot that could be expected in a 'production' PAL or NTSC analogue composite signal (I think SECAM production was probably increasingly being done in PAL by this point).
Once 601 kit (D1 VTRs, DVEs, Harrys, Paintboxes, Slidefiles/DLSs etc.) started appearing in the 80s, the 601 standard became dominant - initially with parallel 656 interconnects (on 25-way D-types), then in the early 90s the SDI addition to 656 (not the original 656 serial standard, which wasn't used) became widespread. Even where 656 wasn't used, the 601 level space was.
I guess a level space change could have been made in the switch to HD - but there was a lot of sense in keeping the same levels for broadcast interconnects, where SD and HD signals are often used within the same studio or facility.
4. Broadcasters may choose to fail QC for invalid levels. It seems that there's nothing technically wrong with those levels, they just don't want video that may look crushed due to some oversight, so they err on the side of caution.
AIUI most broadcasters in the UK doing manual QC would fail you if 'real' picture information sat outside the 16-235 range (8-bit), but brief incursions, particularly on analogue-sourced archive material, would be deemed better than clipping (and the ringing it causes) and should pass?
Any bit of broadcast kit should, AIUI, preserve the footroom and headroom codes for this reason - 1-15 and 236-254 for 8-bit luma (chroma nominally tops out at 240, so its headroom runs 241-254).
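Something like this is the kind of check being described - a toy sketch of my own, not any broadcaster's actual QC tool, and the threshold and function name are invented: fail outright on reserved codes, tolerate brief excursions outside 16-235, flag anything sustained.

```python
import numpy as np

def qc_luma_levels(y_plane, max_excursion_fraction=0.001):
    """Toy 8-bit luma level check in the spirit described above.

    y_plane: 2-D numpy array of 8-bit luma codes for one frame.
    Returns (passed, message). Codes outside 16-235 are tolerated
    up to a small fraction of the frame (brief over/undershoots),
    but codes 0 or 255 (reserved for SAV/EAV) always fail.
    """
    if np.any(y_plane == 0) or np.any(y_plane == 255):
        return False, "reserved codes 0/255 present"
    out_of_range = np.count_nonzero((y_plane < 16) | (y_plane > 235))
    fraction = out_of_range / y_plane.size
    if fraction > max_excursion_fraction:
        return False, f"{fraction:.2%} of samples outside 16-235"
    return True, "levels OK"
```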
(There are similar issues in SD with the horizontal differences between analogue and digital standards. In 625/50-land an analogue active line - 4:3 or 16:9 - is 52 µs, which translates to 702 samples at 13.5 MHz. However a digital Rec/ITU 601 line is 720 samples long, so these 720 x 576 digital images are slightly wider than 4:3 or 16:9... Digital sources should really produce a 'wider than 4:3 or 16:9' image of 720x576, but analogue sources will usually arrive somewhere around 702x576, depending on analogue blanking errors. If you then shrink a 720x576 image in a DVE, what do you do about the edges? Do you preserve the 9 samples on the left and right that carry no picture content in some sources - say a 702x576 digitised-analogue source - or do you crop the 9 samples either side?)
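For what it's worth, the 702 figure falls straight out of the timings quoted above - nothing assumed here beyond those numbers:

```python
# 625/50 analogue active line of 52 us sampled at 13.5 MHz:
active_line_us = 52
sampling_mhz = 13.5
print(active_line_us * sampling_mhz)  # 702.0 samples of real picture

# The Rec. 601 digital active line is 720 samples, so a 702-sample
# analogue capture leaves roughly (720 - 702) / 2 = 9 blanked
# samples on each side of the picture.
print((720 - 702) / 2)                # 9.0
```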