Originally Posted by EC1602
Understood. I'm not hung up on the color depth. I am just trying to find out if DV is 12-bit.
DV enhancement layers come in two types: MEL and FEL. My guess is that most DV Blu-rays are MEL; I ripped/demuxed the .m2ts files of several of them into .hevc streams and compared the file sizes of the base layer and the enhancement layer. Judging by the demuxed file sizes, most enhancement layers look too small to contain any extra color-depth data. (Pure speculation on my part.)
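For example, the comparison can be scripted along these lines (the file names are placeholders for whatever the demuxer writes out):

```python
import os

# Placeholder names for the demuxed streams of one disc;
# substitute whatever your demuxer actually produces.
base_layer = "movie_BL.hevc"
enhancement_layer = "movie_EL.hevc"

bl_size = os.path.getsize(base_layer)
el_size = os.path.getsize(enhancement_layer)

# A tiny EL relative to the BL suggests metadata-only (MEL);
# a large EL suggests a real residual video stream (FEL).
print(f"BL: {bl_size / 1e9:.2f} GB, EL: {el_size / 1e9:.2f} GB, "
      f"EL/BL ratio: {el_size / bl_size:.1%}")
```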
Apple 4K iTunes streaming and Netflix streaming use DV Profile 5, which is single-layer. And Apple uses H.265 HEVC Main 10 YUV 4:2:0 10-bit; this is documented in Apple's developer support PDF.
That being said, DV also does not require dedicated hardware; the dedicated-chipset requirement dates from the early days of DV.
So we can rule out a chipset doing any magic to create 12-bit.
So, 10-bit + 10-bit can equal 12-bit? Even though the source files are graded/encoded at 10-bit due to a codec limitation? If yes, how? How can lost color depth be recovered from a lossy codec? (I say "lost" on the assumption that 12-bit is used in the editing process and transcoded to 10-bit at the encoding stage.)
And iTunes 4K DV and Netflix DV are 10-bit while 4K Blu-ray discs are 12-bit?
Originally Posted by giomania
I think that not all DV is 12-bit, and some services/devices are taking advantage of the looser requirements, just as streaming services cannot provide lossless audio due to bandwidth constraints (they use lossy Dolby Digital Plus).
Most people can’t tell the difference between the “lesser” DV and lossy audio.
But we are not most people, are we?
Originally Posted by EC1602
Well, let's not derail the discussion by bringing audio into it. TrueHD + Atmos, and whether the differences between lossy and lossless audio codecs are audible, is a different subject.
Also, the idea that the improvement of 12-bit color would be visible on currently available 10-bit consumer panels is laughable. But that is not something I care about either.
It seems no one knows for sure whether DV on 4K Blu-ray discs is 12-bit or 10-bit.
It's beyond me how 10-bit data in two different streams of a lossy codec can be combined to construct 12-bit data. Lost data is not retrievable, only interpolable.
Also, 12 bits of data is an awful lot to distribute. The difference in file size (HDR10 vs. DV) is too small to accommodate this.
Originally Posted by steverobertsbbc
According to their 'Dolby Vision for the Home' white paper (https://www.dolby.com/us/en/technolo...hite-paper.pdf):
"Dolby Vision is compatible with a wide range of video codecs. It’s currently qualified with HEVC and AVC decoders. There are multiple ways to encode and decode Dolby Vision signals – depending on the needs of the content creator and on the capabilities of the target display hardware, Dolby Vision signals can be delivered using a single HEVC Main10 stream or as two AVC-8 or HEVC-8 or HEVC-10 streams. The single layer HEVC Main-10 profile of Dolby Vision can be decoded by a standard HEVC decoder, then post-processed using a Dolby Vision module to produce the full range 12 bit Dolby Vision signal.
For dual layer AVC or HEVC Dolby Vision profiles, the source stream is split, and the base and enhancement streams are fed through separate decoders. The Dolby Vision composer is responsible for reassembling the full-range signal from the base layer, the enhancement layer, and the metadata."
Clearly there's either snake-oil or clever jiggery-pokery going on here to enable an expansion from 10 to 12 bits from a single HEVC stream. I have no problem understanding that it should be possible to combine data from two separate HEVC streams to form a 12 bit signal.
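My best guess at the non-snake-oil reading is "reshaping": the metadata can describe a per-scene curve, so the 10-bit codes are spent only on the range the scene actually uses and are mapped back out at 12-bit precision. A toy illustration only, not Dolby's published algorithm:

```python
# Toy illustration of metadata-driven "reshaping", not Dolby's actual
# algorithm: if a scene only occupies part of the full signal range,
# the 1024 available 10-bit codes can be spent on just that part, and
# the per-scene metadata lets the decoder map them back out at 12-bit
# precision.

FULL_RANGE = 1.0                 # normalized full PQ signal range
scene_lo, scene_hi = 0.0, 0.4    # hypothetical range this scene actually uses

step_10bit_full = FULL_RANGE / 1023                  # plain 10-bit step size
step_10bit_reshaped = (scene_hi - scene_lo) / 1023   # 10 bits over scene range
step_12bit_full = FULL_RANGE / 4095                  # 12-bit step, for reference

print(f"plain 10-bit step:    {step_10bit_full:.6f}")
print(f"reshaped 10-bit step: {step_10bit_reshaped:.6f}")
print(f"full 12-bit step:     {step_12bit_full:.6f}")
# The reshaped steps are ~2.5x finer than plain 10-bit, approaching 12-bit
# granularity over the range the scene actually uses.
```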
We're probably going a long way off topic here now.
I performed some additional research because I wanted to understand the different Dolby Vision options and what the OPPO UDP-203 supports. It turns out (assuming I understood this correctly) that Blu-ray players support Dolby Vision HDR Profile 7, which consists of a Base Layer (BL) and an Enhancement Layer (EL) + metadata that are recombined into the 12-bit DV signal by a Dolby Vision module. I adjusted my notes accordingly. I am posting them below, but the links and formatting from my Word document will not work here:
Dolby Vision HDR Overview
References & Key Technologies
Dolby Laboratories' 2015 patent for the Non-Linear VDR Residual Quantizer.
This recombination process is the “secret sauce” in Dolby Vision HDR.
Dolby Laboratories’ Dolby Vision White Paper.
Dolby Laboratories’ ICtCp / ITP Colorspace White Paper.
Dolby Laboratories’ Profiles and Levels White Paper.
The European Telecommunications Standards Institute (ETSI) Compound Content Management Specification.
This standardizes combining the BL with the EL + metadata to recreate an HDR picture.
The Dolby Vision HDR (DV) signal is encoded with the 12-bit Perceptual Quantizer (PQ) SMPTE ST 2084 EOTF in the 12-bit ICtCp / ITP colorspace. This 12-bit video signal must be transported within an existing architecture built from 10-bit High Efficiency Video Coding (HEVC) encoders and decoders. At the encoder stage (post-production), the 12-bit DV signal is split into a Base Layer (BL) and an Enhancement Layer (EL) + metadata. The BL and EL + metadata are encoded by standard 10-bit HEVC encoders. At the decoder stage (source device or display), the BL and EL + metadata are decoded by two standard 10-bit HEVC decoders.
For an HDR10-compatible display, only the BL (HDR10) is decoded and sent as HDR10 video. For a DV-compatible display, the BL and EL + metadata are recombined by a Dolby Vision module and sent as 12-bit DV video. This is a simplified overview and describes the original implementation of DV. As the technology has evolved to reduce the processing requirements, the complexity has increased; keep reading for more details.
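To make the "10-bit + 10-bit = 12-bit" question concrete, here is a minimal sketch of the underlying arithmetic, assuming an idealized lossless split (the real encoder computes the residual against the decoded, lossy base layer and quantizes it non-linearly, per the patent listed above):

```python
import numpy as np

# Idealized arithmetic only: a 12-bit sample can be carried as a 10-bit
# base plus a small residual. This ignores the lossy codec and the
# non-linear residual quantization; it just shows why the split is
# reversible at all.

src12 = np.array([0, 1, 2047, 2048, 4094, 4095])  # 12-bit samples (0..4095)

bl10 = src12 >> 2               # base layer: top 10 bits (0..1023)
residual = src12 - (bl10 << 2)  # enhancement layer: the 2 bits that were dropped

recon12 = (bl10 << 2) + residual
assert np.array_equal(recon12, src12)  # BL + EL reconstructs the 12-bit signal
print(recon12)
```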
The Dolby Vision (DV) signal is transported over HDMI via "RGB tunneling": the DV signal is encapsulated inside a regular 8-bit RGB video signal. The tunnel carries 12-bit YCbCr 4:2:2 data over HDMI in an 8-bit RGB 4:4:4 transport. This is possible because both signal formats have the same 8.9 Gbps data rate requirement. https://www.dolby.com/us/en/technolo...hite-paper.pdf
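The bandwidth arithmetic checks out: both formats average exactly 24 bits per pixel, which is why the data rates match.

```python
# Why 12-bit YCbCr 4:2:2 fits in an 8-bit RGB 4:4:4 container:
# both formats average exactly 24 bits per pixel.

rgb_444_8bit = 3 * 8       # R, G, B at 8 bits each = 24 bpp

# 4:2:2: every pixel has a 12-bit Y sample, plus Cb and Cr shared between
# each pair of pixels (one 12-bit chroma sample per pixel on average).
ycbcr_422_12bit = 12 + 12  # = 24 bpp

assert rgb_444_8bit == ycbcr_422_12bit
print(rgb_444_8bit, "bits per pixel in both formats")
```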
During the HDMI EDID exchange (handshake), the sink (AVR, display, or HDMI switch) advertises to the source that it supports Dolby Vision, and the source flags the DV signal in an infoframe; this allows the DV signal inside the 8-bit RGB container to pass through without alteration. AVRs may report DV signals in one of two ways, but both are correct:
Resolution: 4K:24Hz -> 4K:24Hz
HDR: Dolby Vision
Color Space: RGB 4:4:4 -> RGB 4:4:4 -OR- YCbCr 4:2:2 -> YCbCr 4:2:2
Color Depth: 8 bits -> 8 bits -OR- 12 bits -> 12 bits
Profiles and Levels
Dolby Vision HDR (DV) profiles and levels are designed to facilitate implementation of a DV product, such as an encoder or decoder, based on the requirements of typical multimedia applications. The DV profiles specify the requirements for the Base Layer (BL) and Enhancement Layer (EL) + metadata. A DV level specifies the maximum pixel rate, decoded bitstream video width, and bit rate supported by a product within a given bitstream profile. Together they provide a rich feature set to support various ecosystems, such as over-the-top (OTT) streaming and Blu-ray Discs. DV for UHD Blu-ray uses Profile 7, and DV for streaming video uses either Profile 4 or Profile 5.
Profile 7 is the standard used for DV on UHD Blu-ray players. As noted in this post, at the encoder stage (post-production), the DV signal is split into a Base Layer (BL) and an Enhancement Layer (EL) + metadata. The BL is 10-bit YCbCr 4:2:0 HDR10, and the EL is 1080p (¼ the pixels of the 2160p BL) plus metadata. The BL and EL + metadata are encoded by standard 10-bit HEVC encoders. At the decoder stage (i.e., the UHD Blu-ray player), there are two options (see the sketch after this list):
1) For an HDR10-compatible display, only the BL (HDR10) is decoded and sent as HDR10 video.
2) For a DV-compatible display, the BL and EL + metadata are decoded by two standard 10-bit HEVC decoders, then recombined by a Dolby Vision module and sent as 12-bit DV video.
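A hypothetical sketch of that branch (a toy model reusing the shift-and-add recombination from the earlier sketch; every name here is illustrative, not a real player API):

```python
import numpy as np

def dv_compose(base10, residual):
    """Recombine a 10-bit base + residual into a 12-bit signal (toy model)."""
    return (base10.astype(np.int32) << 2) + residual

def present(base10, residual, display_supports_dv):
    if not display_supports_dv:
        return base10                    # option 1: the BL alone is valid HDR10
    return dv_compose(base10, residual)  # option 2: recombined 12-bit DV

base10 = np.array([0, 512, 1023])        # decoded 10-bit base-layer samples
residual = np.array([0, 3, 1])           # decoded enhancement-layer residuals

print(present(base10, residual, display_supports_dv=False))  # [   0  512 1023]
print(present(base10, residual, display_supports_dv=True))   # [   0 2051 4093]
```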
Profile 4 is the standard used for some video streaming services; it provides a backward-compatible framework for HDR. As noted in this post, the Dolby Vision (DV) video is split into a BL and an EL + metadata. The difference is that the base layer is SDR with an SDR gamma EOTF, and the enhancement layer + metadata includes the instructions to convert the base-layer SDR stream into an HDR (DV) stream. The DV base layer can be decoded by a standard 10-bit HEVC decoder, while a DV module is required to perform the SDR-to-HDR (DV) conversion. This allows streaming companies to store one version of the file for both SDR and HDR streaming.
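Loosely, the conversion works like this toy mapping (the gamma, peak levels, and gain curve are illustrative stand-ins, not Dolby's actual transform):

```python
import numpy as np

# Toy illustration of the Profile 4 idea: the base layer is ordinary SDR,
# and the EL + metadata tell the decoder how to expand it toward HDR.
# All constants here are illustrative.

def sdr_base_to_hdr(sdr_code10, highlight_gain=4.0):
    signal = sdr_code10 / 1023.0
    linear = signal ** 2.4      # undo an SDR-style gamma
    nits = linear * 100.0       # nominal 100-nit SDR reference white
    # Metadata-driven expansion: push the brightest part of the image
    # toward HDR highlight levels while leaving shadows mostly alone.
    return nits * (1.0 + (highlight_gain - 1.0) * signal)

codes = np.array([0, 256, 512, 1023])
print(np.round(sdr_base_to_hdr(codes), 1))  # shadows barely move, peak hits 400 nits
```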
Profile 5 is the newest iteration of the DV technology, and it includes the entire DV signal in the Base Layer (BL); the BL is a 10-bit signal in the ICtCp / ITP colorspace. The ICtCp / ITP colorspace features constant luminance, enabling higher-quality color volume mapping than the legacy non-constant-luminance YCbCr colorspace. This profile was the result of a collaboration between Dolby Laboratories and Sony to bring DV to some 2017 Sony TVs whose System on Chip (SoC) designs could not decode two video streams (the BL and EL). When it was created, Profile 5 was incompatible with many source devices already on the market, and those devices required updates to support it.
Profile 5 was designed so that the source device, rather than the display, performs the DV processing for the lowest latency; this is beneficial for gaming. For this reason, Profile 5 is referred to as Low Latency Dolby Vision (LLDV) by the A/V press. Profile 5 requires that the source device obtain the display's capabilities via the HDMI Extended Display Identification Data (EDID); this information is a critical factor in mapping the DV content to fit within the display's capabilities. Microsoft has updated its Xbox One S and Xbox One X gaming consoles to support DV for streaming apps. There is some confusion about whether this is Profile 5 or a newer profile than the one initially created. There is a discussion and dissection of the issue here.
As noted above, the Dolby Vision HDR (DV) signal is split into a Base Layer (BL) and an Enhancement Layer (EL) + metadata. These layers can be delivered via a single-stream or a dual-stream method. The single-stream method carries the BL in one H.265 / High Efficiency Video Coding (HEVC) Main10 stream. The dual-stream method carries the BL and EL in either two separate HEVC Main10 streams or two separate H.264 / MPEG-4 Advanced Video Coding (AVC-8) streams.
In this linked post, Stacey Spears (sspears on AVS Forum) described Dolby Vision’s various enhancement layer versions.
Minimal Enhancement Layer (MEL)
The MEL contains only the basic Dolby Vision HDR metadata; combined with the base layer, it provides a 10-bit YCbCr 4:2:0 video signal. MEL requires a System on Chip (SoC) capable of decoding only a single video stream (the base layer), so it is compatible with a wider range of SoCs thanks to these minimal processing requirements. MEL debuted first, in the initial SDK.
Full Enhancement Layer (FEL)
FEL contains both the basic Dolby Vision HDR metadata and a second 1080p video stream (¼ the pixels of the 2160p HDR base layer) that, when combined with the base layer, provides a 12-bit YCbCr 4:2:0 video signal. The device then uses a Dolby Vision module to convert this to the HDMI output signal (12-bit YCbCr 4:2:2 data tunneled in an 8-bit RGB 4:4:4 transport). FEL requires a System on Chip (SoC) capable of decoding two video streams (the BL and EL). FEL was not supported in the initial SDK but is supported now.
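A toy contrast of the two, reusing the shift-and-add model from above (real FEL residuals are a coded quarter-resolution stream that is upscaled before composition; that step is skipped here):

```python
import numpy as np

# With MEL there is no residual video, so the recombined signal keeps only
# 10 bits of real precision; with FEL the residual restores the extra two
# bits. The residual values are illustrative.

base10 = np.array([100, 200, 300])         # decoded 10-bit base layer

mel = (base10 << 2)                        # MEL: metadata only, zero residual
fel = (base10 << 2) + np.array([2, 1, 3])  # FEL: residual adds the fine detail

print("MEL:", mel)   # 12-bit container, 10-bit precision
print("FEL:", fel)   # genuine 12-bit values
```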
Most Dolby Vision HDR content today uses MEL rather than FEL, even though the SDK now supports both; perhaps this is a holdover from the initial SDK only supporting MEL. The forthcoming Spears & Munsil UHD HDR Benchmark disc will have a video montage encoded using both MEL and FEL for comparison. Due to the brightness and large color volume of the content in the montage, it will be easy to see the difference between DV, LLDV, HDR10, and HDR10+. Once you see the difference, it will be visible in other sources, such as Netflix.