It is complicated, so here is a little write-up I made for my own reference after some research.
Dolby Vision HDR RGB Tunneling
The method Dolby Vision HDR (DV) uses to transport its signal over HDMI is referred to as “RGB Tunneling”: the 12-bit ICtCp (also called ITP) color-space DV signal plus its metadata is encapsulated inside an ordinary 8-bit RGB video signal. In other words, DV “tunneling” carries 12-bit YCbCr 4:2:2 data in an RGB 4:4:4 8-bit transport. This is possible because both formats carry 24 bits per pixel, so they have identical data-rate requirements (about 8.9 Gbps at 4K@30).
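To see why the container works out exactly, count the bits: a 4:2:2 pixel pair carries two 12-bit Y samples plus one 12-bit Cb and one 12-bit Cr sample, i.e. 48 bits, and two 8-bit RGB 4:4:4 pixels also carry 48 bits. The sketch below packs one such pair purely to illustrate the bit accounting; Dolby's actual bit layout is not public, so this particular mapping is hypothetical:

```python
# Hypothetical illustration of the bit budget behind DV "tunneling".
# NOT Dolby's actual bit layout -- just a demonstration that a 12-bit
# YCbCr 4:2:2 pixel pair (48 bits) fits exactly into two 8-bit RGB
# pixels (48 bits).

def pack_pixel_pair(y0, y1, cb, cr):
    """Pack four 12-bit samples into six 8-bit 'RGB' channel values."""
    assert all(0 <= s < 4096 for s in (y0, y1, cb, cr)), "samples are 12-bit"
    bits = (y0 << 36) | (cb << 24) | (y1 << 12) | cr   # one 48-bit word
    # Slice the 48-bit word into six bytes: (R0, G0, B0), (R1, G1, B1)
    chans = [(bits >> shift) & 0xFF for shift in (40, 32, 24, 16, 8, 0)]
    return tuple(chans[0:3]), tuple(chans[3:6])

def unpack_pixel_pair(rgb0, rgb1):
    """Inverse: recover the four 12-bit samples from two RGB pixels."""
    bits = 0
    for ch in (*rgb0, *rgb1):
        bits = (bits << 8) | ch
    return ((bits >> 36) & 0xFFF, (bits >> 12) & 0xFFF,   # y0, y1
            (bits >> 24) & 0xFFF, bits & 0xFFF)           # cb, cr

pair = (2048, 2050, 1024, 3072)   # y0, y1, cb, cr
assert unpack_pixel_pair(*pack_pixel_pair(*pair)) == pair
```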
DV requires dynamic (per-scene/per-frame) luminance metadata, which cannot be explicitly carried even in an HDMI 2.0 (18 Gbps max) data stream, so it is designed to be transportable over HDMI 1.4, at least up to 4K@30Hz (the 4K@30 8-bit RGB format needs roughly 8.9 Gbps, within HDMI 1.4's 10.2 Gbps ceiling).
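The ~8.9 Gbps figure is just the TMDS arithmetic for the 4K@30 timing (297 MHz pixel clock, three data channels, 10 bits per channel after 8b/10b coding):

```python
# TMDS link-rate check for the 4K@30 8-bit RGB timing used by DV tunneling.
PIXEL_CLOCK_HZ = 297_000_000  # CEA-861 pixel clock for 3840x2160@30
TMDS_CHANNELS = 3             # R, G, B data channels
BITS_PER_CHARACTER = 10       # 8 payload bits become 10 after 8b/10b coding

link_rate_gbps = PIXEL_CLOCK_HZ * TMDS_CHANNELS * BITS_PER_CHARACTER / 1e9
print(f"{link_rate_gbps:.2f} Gbps")  # -> 8.91 Gbps, under HDMI 1.4's 10.2 Gbps
```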
The DV base content and DV luminance (meta)data are encapsulated in an HDMI 1.4-compatible (except for HDCP 2.2) RGB 4:4:4 8-bit video stream. That's why Dolby claims DV can be sent via HDMI 1.4; in reality, though, HDMI 2.0 hardware is needed because of the HDCP 2.2 encryption.
The DV metadata is encoded into the least significant bits of the chroma channels. During the HDMI EDID exchange (handshake), the sink (AVR, display, or HDMI switch) signals the source, via a Dolby vendor-specific block in its EDID, that it supports Dolby Vision HDR "tunneling". The source then signals the sink that it is transmitting Dolby Vision HDR through a Dolby Vision Vendor-Specific InfoFrame, which triggers the Dolby Vision HDR mode in the sink. The display's DV engine extracts the components and metadata and produces a tone-mapped image.
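As a toy model of the chroma-LSB idea (the real Dolby Vision framing, sync words, and CRCs are defined in Dolby's spec and not reproduced here):

```python
# Toy illustration of carrying metadata bits in chroma LSBs. This only
# shows the steganographic principle, not Dolby's actual framing.

def embed(chroma_samples, metadata_bits):
    """Overwrite the LSB of each 12-bit chroma sample with one metadata bit."""
    return [(s & ~1) | b for s, b in zip(chroma_samples, metadata_bits)]

def extract(chroma_samples):
    """Recover the metadata bitstream from the LSBs."""
    return [s & 1 for s in chroma_samples]

chroma = [1024, 1025, 3000, 2047]
meta = [1, 0, 1, 1]
carrier = embed(chroma, meta)
assert extract(carrier) == meta
# Each chroma value changed by at most 1 LSB out of 4096:
assert all(abs(a - b) <= 1 for a, b in zip(chroma, carrier))
```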
As a result, video pass-through components must be DV-aware so that they do not alter the signal, which is in effect 'hidden' inside the 8-bit RGB 'container'; even a one-bit change destroys the embedded metadata.
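Continuing the toy model above, even a "harmless" adjustment by a non-DV-aware device, e.g. a one-code-value brightness nudge, flips those LSBs and turns the metadata into garbage:

```python
# Why pass-through gear must be bit-exact: any "improvement" applied to
# the tunneled values, here a +1 brightness nudge, flips the chroma LSBs
# and destroys the metadata hidden in them. (Toy example, not DV framing.)
carrier = [1025, 1024, 3001, 2047]              # chroma LSBs carry 1, 0, 1, 1
tampered = [min(s + 1, 4095) for s in carrier]  # non-DV-aware processing
assert [s & 1 for s in carrier] == [1, 0, 1, 1]
assert [s & 1 for s in tampered] != [1, 0, 1, 1]  # metadata is now garbage
```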
AVRs may report DV signals in one of two ways; both are correct:
Resolution: 4K@24Hz -> 4K@24Hz
HDR: Dolby Vision
Color Space: RGB 4:4:4 -> RGB 4:4:4 -OR- YCbCr 4:2:2 -> YCbCr 4:2:2
Color Depth: 8 bits -> 8 bits -OR- 12 bits -> 12 bits