Deep Color Test Pattern - AVS Forum
post #1 of 35 Old 05-21-2012, 07:20 AM - Thread Starter
Member
 
Plasma54321's Avatar
 
Join Date: Sep 2008
Location: Florida
Posts: 164
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Has anyone created, or does anyone know of, an existing Deep Color test pattern? Something to test whether the Deep Color setting in the player/AVR/display chain is working?

In addition, if you wanted to compare normal color depth to Deep Color to detect any visible differences or improvements, how would you do it?
Plasma54321 is offline  
post #2 of 35 Old 05-22-2012, 12:16 AM
AVS Special Member
 
Doug Blackburn's Avatar
 
Join Date: May 2008
Location: San Francisco - East Bay area
Posts: 3,453
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 20 Post(s)
Liked: 226
Deep Color requires the source to be encoded in Deep Color to have any benefit. There are no consumer sources encoded with Deep Color except perhaps some digital video cameras... and that mode would have to be enabled on the camera and you'd have to write the camera footage to a Blu-ray disc with Deep Color intact. Turning it on with any consumer source (all are 8-bits) just gets you 8-bits of color resolution spread over 10 or 12 bits. You can't really see a difference, nor should you. If you play an 8-bit source with Deep Color enabled and you see a difference, something is broken.

"Movies is magic..." Van Dyke Parks
THX Certified Professional Video Calibration
ISF -- HAA -- www.dBtheatrical.com
Widescreen Review -- Home Theater & Sound
Doug Blackburn is offline  
post #3 of 35 Old 05-22-2012, 08:31 AM - Thread Starter
Member
 
Plasma54321's Avatar
 
Join Date: Sep 2008
Location: Florida
Posts: 164
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Quote:
Originally Posted by Doug Blackburn View Post

Deep Color requires the source to be encoded in Deep Color ...
...some digital video cameras... and that mode would have to be enabled on the camera and you'd have to write the camera footage to a Blu-ray disc with Deep Color intact. Turning it on with any consumer source (all are 8-bits) just gets you 8-bits of color resolution spread over 10 or 12 bits. You can't really see a difference, nor should you. If you play an 8-bit source with Deep Color enabled and you see a difference, something is broken.

What I would be looking for is any effective reduction in posterization, if possible. I was thinking about creating a test pattern from scratch. Should I just create a 10- or 12-bit RGB color PNG drawing, then save it to a file that could be streamed via wired network to a Blu-ray player/AVR/HDTV?

It looks like there are no ready-made test patterns available?
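
If I were to try it myself, a rough sketch of the kind of thing I have in mind (Python with numpy and imageio, assuming the installed PNG plugin actually writes 16-bit data) would be:

Code:
# Rough sketch only: build a horizontal grayscale ramp and save it as an
# 8-bit and a 16-bit PNG. Assumes numpy + imageio are installed and that
# the PNG writer keeps uint16 data as a true 16-bit PNG.
import numpy as np
import imageio

W, H = 1920, 1080
ramp = np.tile(np.linspace(0.0, 1.0, W), (H, 1))   # 0..1 gradient, repeated per row

img8  = np.round(ramp * 255).astype(np.uint8)      # 256 possible levels
img16 = np.round(ramp * 65535).astype(np.uint16)   # 65,536 possible levels

imageio.imwrite("ramp_8bit.png", img8)
imageio.imwrite("ramp_16bit.png", img16)

Whether the player actually preserves the extra bits after decoding is exactly what I don't know.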
Plasma54321 is offline  
post #4 of 35 Old 05-22-2012, 09:24 AM
Advanced Member
 
Smackrabbit's Avatar
 
Join Date: Sep 2001
Location: Portland, OR, USA
Posts: 893
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 30
Quote:
Originally Posted by Plasma54321 View Post

What I would be looking for is any effective reduction in posterization, if possible. I was thinking about creating a test pattern from scratch. Should I just create a 10- or 12-bit RGB color PNG drawing, then save it to a file that could be streamed via wired network to a Blu-ray player/AVR/HDTV?

It looks like there are no ready-made test patterns available?

Using a PNG would unfortunately prove nothing. I tested this in another thread and when I compared color processing results for a PNG to those with Blu-ray encoded content on multiple players (Sony BDP-S590, Oppo 83SE and 93) I got totally different output values from the PNG than from the 4:2:0 content. There's no correlation between how a player handles one or the other, so it doesn't really work as an effective test. For all we know the player would sample the PNG down to 8-bits per pixel before output, but wouldn't downsample the Blu-ray content.

Chris Heinonen
Senior Editor, Secrets of Home Theater and High Fidelity, www.hometheaterhifi.com
Displays Editor, AnandTech.com
Contributor, HDGuru.com and Wirecutter.com
ISF Level II Certified Calibrator, ReferenceHomeTheater.com
Smackrabbit is offline  
post #5 of 35 Old 05-23-2012, 08:38 AM
AVS Special Member
 
Doug Blackburn's Avatar
 
Join Date: May 2008
Location: San Francisco - East Bay area
Posts: 3,453
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 20 Post(s)
Liked: 226
Quote:
Originally Posted by Plasma54321 View Post

What I would be looking for is any effective reduction in posterization, if possible. I was thinking about creating a test pattern from scratch. Should I just create a 10- or 12-bit RGB color PNG drawing, then save it to a file that could be streamed via wired network to a Blu-ray player/AVR/HDTV?

It looks like there are no ready-made test patterns available?

Changing 8-bit video to 10 or 12 or 16 bits will do absolutely ZERO for contouring or posterization. If contouring is NOT encoded in the source and you still see it, there's a defect in the video display. If it is encoded in the source, when you upconvert to 10 bits or more, it will be retained in the image because the upconversion will assume it was INTENTIONAL since that's the way the source was encoded.

The only control I've ever seen that reduces it (though not by 100%) was in Sony's new VPL-VW1000ES 4K projector... it has several settings that FIND contouring in images and soften the edges along contours so they either disappear or become far less obvious. That's video processing doing that... not just upconversion. Upconversion is a "dumb" process... it only moves color from one format to another. To remove contouring from images you have to analyze each frame on the fly and apply fixes ONLY to the contouring and not to any other detail in the image... it's NOT a simple process. And it doesn't even necessarily require more bits. You can simply dither the data along the contour edges to remove the contouring and stay in 8-bit space. But having more bits could be a little more effective and less prone to doing something inappropriate that creates a different artifact.

"Movies is magic..." Van Dyke Parks
THX Certified Professional Video Calibration
ISF -- HAA -- www.dBtheatrical.com
Widescreen Review -- Home Theater & Sound
Doug Blackburn is offline  
post #6 of 35 Old 05-23-2012, 11:15 PM
AVS Special Member
 
Chronoptimist's Avatar
 
Join Date: Sep 2009
Posts: 2,561
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 1 Post(s)
Liked: 203
Quote:
Originally Posted by Doug Blackburn View Post

Changing 8-bit video to 10 or 12 or 16 bits will do absolutely ZERO for contouring or posterization. If contouring is NOT encoded in the source and you still see it, there's a defect in the video display. If it is encoded in the source, when you upconvert to 10 bits or more, it will be retained in the image because the upconversion will assume it was INTENTIONAL since that's the way the source was encoded.

4:2:0 encoded content (all commercial MPEG video) requires far more than 8-bit precision to avoid errors when upsampling to 4:2:2/4:4:4 or converting to RGB (especially RGB) as the chroma resolution is 1/4 that of luma.

As HDMI requires a minimum of 4:2:2 YCC for transmission, you cannot send the original untouched 4:2:0 source directly from the disc to the display, which means that video must be upconverted in the player, and therefore greater than 8-bit internal precision is required to avoid introducing errors into the final image. If you are using greater than 8-bit internal precision, it is far better to retain that precision as far down the video chain as possible, as the display is going to be doing its own processing to the image it receives, which also requires as much precision as possible, for the best possible final image.

If you have the option of using greater than 8-bits of precision at the player and display, you absolutely should. It is fallacious to state otherwise.

8-bit upsampling:
[gradient image]

16-bit upsampling, dithered to 8-bit output:
[gradient image]
Even with an 8-bit output, you can clearly see the benefits of using 16-bit upsampling over 8-bit internal upsampling. Because the final output is 8-bit however, the image must be dithered to avoid introducing banding. This dithering noise will be made more noticeable as the image moves further down the display chain, and the display applies its own image processing on top of what the player has done. (greyscale, gamma, gamut calibration etc.)

If both the player and display supported it however, you could pass the raw 16-bit data to the display without dithering it, avoiding introducing unnecessary noise into the image before the display has performed its own internal processing and then dithers down to the final image. (I don't believe there's a consumer display on the market with more than a 10-bit native panel)
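
For anyone who wants to reproduce the effect numerically, here is a rough numpy sketch of the same idea (not the actual tool I used): upscale a gentle 8-bit gradient once by plain rounding and once by dithering the high-precision result back to 8-bit, then measure how wide the flat steps are.

Code:
import numpy as np

rng = np.random.default_rng(0)

src = np.linspace(100, 140, 64)                  # gentle gradient, 8-bit code values
x = np.linspace(0, src.size - 1, 1920)
hi_res = np.interp(x, np.arange(src.size), src)  # high-precision upscale

rounded  = np.round(hi_res).astype(np.uint8)     # straight 8-bit rounding
dithered = np.round(hi_res + rng.uniform(-0.5, 0.5, hi_res.size)).astype(np.uint8)

def longest_flat_run(a):
    best = run = 1
    for i in range(1, a.size):
        run = run + 1 if a[i] == a[i - 1] else 1
        best = max(best, run)
    return best

print(longest_flat_run(rounded))    # wide flat steps -> visible bands
print(longest_flat_run(dithered))   # steps broken up by the dither noise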
Chronoptimist is offline  
post #7 of 35 Old 05-24-2012, 12:22 AM
Member
 
kyokushinkai's Avatar
 
Join Date: May 2011
Posts: 23
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Quote:
Originally Posted by Chronoptimist View Post

4:2:0 encoded content (all commercial MPEG video) requires far more than 8-bit precision to avoid errors when upsampling to 4:2:2/4:4:4 or converting to RGB (especially RGB) as the chroma resolution is 1/4 that of luma.

As HDMI requires a minimum of 4:2:2 YCC for transmission, you cannot send the original untouched 4:2:0 source directly from the disc to the display, which means that video must be upconverted in the player, and therefore greater than 8-bit internal precision is required to avoid introducing errors into the final image. If you are using greater than 8-bit internal precision, it is far better to retain that precision as far down the video chain as possible, as the display is going to be doing its own processing to the image it receives, which also requires as much precision as possible, for the best possible final image.

If you have the option of using greater than 8-bits of precision at the player and display, you absolutely should. It is fallacious to state otherwise.

8-bit upsampling:
[gradient image]

16-bit upsampling, dithered to 8-bit output:
[gradient image]
Even with an 8-bit output, you can clearly see the benefits of using 16-bit upsampling over 8-bit internal upsampling. Because the final output is 8-bit however, the image must be dithered to avoid introducing banding. This dithering noise will be made more noticeable as the image moves further down the display chain, and the display applies its own image processing on top of what the player has done. (greyscale, gamma, gamut calibration etc.)

If both the player and display supported it however, you could pass the raw 16-bit data to the display without dithering it, avoiding introducing unnecessary noise into the image before the display has performed its own internal processing and then dithers down to the final image. (I don't believe there's a consumer display on the market with more than a 10-bit native panel)

Show me a point in a movie where 10/12/16-bit processing does more than 8 bits and I'll believe it. Not a test pattern, please.
kyokushinkai is offline  
post #8 of 35 Old 05-24-2012, 12:42 AM
AVS Special Member
 
Chronoptimist's Avatar
 
Join Date: Sep 2009
Posts: 2,561
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 1 Post(s)
Liked: 203
Quote:
Originally Posted by kyokushinkai View Post

Show me a point in a movie where 10/12/16-bit processing does more than 8 bits and I'll believe it. Not a test pattern, please.

It's particularly noticeable any time there's a shallow depth of field, the camera goes out of focus, or with fades in/between scenes.

Showing screenshots will not be the best illustration however, as that is only one part of the display chain. The display is going to be doing additional processing on top of it, exaggerating any errors/noise introduced.
Chronoptimist is offline  
post #9 of 35 Old 05-24-2012, 05:34 AM
amt
Senior Member
 
amt's Avatar
 
Join Date: Jul 2004
Posts: 449
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 12
The best way to do this is to find a computer which is truly capable of 10-bit 4:4:4 RGB output. There are very few graphics cards capable of that with a proper driver, but I do believe they are out there.
amt is offline  
post #10 of 35 Old 05-24-2012, 06:54 AM
AVS Special Member
 
cinema mad's Avatar
 
Join Date: Jun 2006
Location: Here Nor There..
Posts: 1,826
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 1 Post(s)
Liked: 31
Even on my 8-bit notebook I can definitely observe the difference between the two,
with the 16-bit upsampling dithered to 8-bit output having smoother gradation, particularly noticeable towards the lower end....
cinema mad is offline  
post #11 of 35 Old 05-24-2012, 08:36 AM
AVS Special Member
 
Chronoptimist's Avatar
 
Join Date: Sep 2009
Posts: 2,561
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 1 Post(s)
Liked: 203
Quote:
Originally Posted by amt View Post

The best way to do this is to find a computer which is truly capable of 10-bit 4:4:4 RGB output. There are very few graphics cards capable of that with a proper driver, but I do believe they are out there.

While more cards are outputting a 10-bit signal, only the AMD/Nvidia pro cards actually support 10-bit data paths, and virtually no video playback software supports a 10-bit output.

While madVR (the best video renderer available) does all internal calculations at 16-bit, it currently only supports 8-bit output.

That said, it still bests any stand-alone player I've seen so far.
Chronoptimist is offline  
post #12 of 35 Old 05-24-2012, 10:07 AM
AVS Special Member
 
PlasmaPZ80U's Avatar
 
Join Date: Feb 2009
Posts: 7,171
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 79 Post(s)
Liked: 197
Will enabling the deep color setting on a BD player or PS3 reduce any banding when watching BD movies?
PlasmaPZ80U is offline  
post #13 of 35 Old 05-24-2012, 10:14 AM
Advanced Member
 
Smackrabbit's Avatar
 
Join Date: Sep 2001
Location: Portland, OR, USA
Posts: 893
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 30
Quote:
Originally Posted by PlasmaPZ80U View Post

Will enabling the deep color setting on a BD player or PS3 reduce any banding when watching BD movies?

Likely not. The example being shown is dealing with larger internal bit-depth calculations and then dithering down to 8-bit output. Since any player could already do this without needing Deep Color support, it's really making no difference in this example. Also, this example is using a grayscale gradient, which is already being passed at full resolution in 8-bits per pixel. If there are large visible bands occurring instead of dithering, as seen in the example, that can be fixed at the mastering stage instead of in the processing stage.

However, nothing in the example shown would require either Deep Color or better processing in the Blu-ray player; it could all be done at the mastering step, which would be a better place for it to occur.

Chris Heinonen
Senior Editor, Secrets of Home Theater and High Fidelity, www.hometheaterhifi.com
Displays Editor, AnandTech.com
Contributor, HDGuru.com and Wirecutter.com
ISF Level II Certified Calibrator, ReferenceHomeTheater.com
Smackrabbit is offline  
post #14 of 35 Old 05-24-2012, 10:18 AM
AVS Special Member
 
PlasmaPZ80U's Avatar
 
Join Date: Feb 2009
Posts: 7,171
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 79 Post(s)
Liked: 197
Quote:
Originally Posted by Smackrabbit View Post

Likely not. The example being shown is dealing with larger internal bit-depth calculations and then dithering down to 8-bit output. Since any player could already do this without needing Deep Color support, it's really making no difference in this example. Also, this example is using a grayscale gradient, which is already being passed at full resolution in 8-bits per pixel. If there are large visible bands occurring instead of dithering, as seen in the example, that can be fixed at the mastering stage instead of in the processing stage.

However, nothing in the example shown would require either Deep Color or better processing in the Blu-ray player; it could all be done at the mastering step, which would be a better place for it to occur.

so there is no setting on a BD player or PS3 that would affect banding for better or worse?
PlasmaPZ80U is offline  
post #15 of 35 Old 05-24-2012, 10:18 AM
AVS Special Member
 
Doug Blackburn's Avatar
 
Join Date: May 2008
Location: San Francisco - East Bay area
Posts: 3,453
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 20 Post(s)
Liked: 226
If it does, the upsampling is broken, period. If video was treated like a static test pattern, potential detail within an image would be eliminated by the upsampling examples posted several posts ago. The upsampling doesn't know if the contouring steps are image detail or encoding errors. If it removes the contouring, it could be removing detail if subtle detail exists. So upsampling alone should NEVER remove contouring if it was present in the source. Only "intelligent" processing of the image to find and identify contouring vs. actual image detail and removing the contouring (as the Sony 4K projector's control does) can remove contouring effectively in moving video images. There are MANY ways to manipulate static images into showing or not showing "steps" in a fade that have nothing to do with how you would treat video.

Upconversion to 4:2:2 from 4:2:0 can certainly be done poorly -- but it is such a "commodity" in Blu-ray players that you'd have to be a dumbass to not do it the right way. Players like Oppo and PS3 and some others are PROVEN to output 4:2:2 with exceptional fidelity to the original 4:2:0 encoded on the disc. If you see contouring while using one of those players, it is encoded on the disc and only appropriate processing (like that in the Sony projector) will remove it without damaging image detail.
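
For anyone unclear on where the intermediate values come from in that step, here is a toy sketch (simple neighbour averaging, not any particular player's filter) of the vertical chroma doubling that 4:2:0 to 4:2:2 involves:

Code:
import numpy as np

cb_420 = np.array([[100, 102],
                   [101, 103]], dtype=np.uint8)   # tiny 4:2:0 chroma plane

cb = cb_420.astype(np.float64)
mid = (cb[:-1, :] + cb[1:, :]) / 2.0              # 100.5, 102.5 ...
cb_422 = np.empty((cb.shape[0] * 2, cb.shape[1]))
cb_422[0::2] = cb                                 # original chroma lines
cb_422[1::2] = np.vstack([mid, cb[-1:]])          # interpolated lines (edge repeated)

print(cb_422)                                # half-steps survive in high precision
print(np.round(cb_422).astype(np.uint8))     # forced back to whole 8-bit codes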

"Movies is magic..." Van Dyke Parks
THX Certified Professional Video Calibration
ISF -- HAA -- www.dBtheatrical.com
Widescreen Review -- Home Theater & Sound
Doug Blackburn is offline  
post #16 of 35 Old 05-24-2012, 10:54 AM
amt
Senior Member
 
amt's Avatar
 
Join Date: Jul 2004
Posts: 449
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 12
IMO, >8 bit is certainly valuable today for color gamut correction, even with just 8 bit source. Some resulting values in the correction would require >8 bits in precision to be accurately displayed, and without that could introduce new banding/contouring. Dithering can improve this, but more bit depth would certainly be better.

Now, the really good news is (www.hdmi.org/download/HDMI_Specification_1.1.pdf, page 68):
Quote:
Because 4:2:2 data only requires two components per pixel clock, more bits are allocated per
component. The available 24 bits are split into 12 bits for the Y component and 12 bits for the C
components.


Which means if you stay in 4:2:2 YCbCr, you get 12 bits, even with HDMI 1.1!

This is why I always set my output on Lumagen Radiance to YCbCr 4:2:2, so it can actually keep >8 bits of precision for the color and send it to the display.

Now, the question I would have is: Do players like the Oppo, when configured to 4:2:2 YCbCr (no deep color), make use of >8 bits when doing the chroma upsampling?
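
To illustrate what that 12-bit 4:2:2 container means (made-up values; the 8-bit codes are MSB-justified, i.e. left-shifted, into the 12-bit field):

Code:
def to_12bit(value_8bit: int) -> int:
    return value_8bit << 4                 # 8-bit 180 -> 12-bit 2880, low bits zero

def uses_extra_precision(value_12bit: int) -> bool:
    return (value_12bit & 0x0F) != 0       # non-zero low bits = real >8-bit data

print(to_12bit(180))                # 2880
print(uses_extra_precision(2880))   # False: just padded 8-bit
print(uses_extra_precision(2885))   # True: the low bits carry information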
amt is offline  
post #17 of 35 Old 05-24-2012, 12:50 PM
AVS Special Member
 
PlasmaPZ80U's Avatar
 
Join Date: Feb 2009
Posts: 7,171
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 79 Post(s)
Liked: 197
Quote:
Originally Posted by amt View Post

IMO, >8 bit is certainly valuable today for color gamut correction, even with just 8 bit source. Some resulting values in the correction would require >8 bits in precision to be accurately displayed, and without that could introduce new banding/contouring. Dithering can improve this, but more bit depth would certainly be better.

Now, the really good news is (www.hdmi.org/download/HDMI_Specification_1.1.pdf, page 68):



Which means if you stay in 4:2:2 YCbCr, you get 12 bits, even with HDMI 1.1!

This is why I always set my output on Lumagen Radiance to YCbCr 4:2:2, so it can actually keep >8 bits of precision for the color and send it to the display.

Now, the question I would have is: Do players like the Oppo, when configured to 4:2:2 YCbCr (no deep color), make use of >8 bits when doing the chroma upsampling?

also, is 4:4:4 YCbCr 12 bits?
PlasmaPZ80U is offline  
post #18 of 35 Old 05-24-2012, 01:38 PM
Advanced Member
 
Smackrabbit's Avatar
 
Join Date: Sep 2001
Location: Portland, OR, USA
Posts: 893
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 2 Post(s)
Liked: 30
Quote:
Originally Posted by amt View Post

IMO, >8 bit is certainly valuable today for color gamut correction, even with just 8 bit source. Some resulting values in the correction would require >8 bits in precision to be accurately displayed, and without that could introduce new banding/contouring. Dithering can improve this, but more bit depth would certainly be better.

Now, the really good news is (www.hdmi.org/download/HDMI_Specification_1.1.pdf, page 68):

Which means if you stay in 4:2:2 YCbCr, you get 12 bits, even with HDMI 1.1!

This is why I always set my output on Lumagen Radiance to YCbCr 4:2:2, so it can actually keep >8 bits of precision for the color and send it to the display.

Now, the question I would have is: Do players like the Oppo, when configured to 4:2:2 YCbCr (no deep color), make use of >8 bits when doing the chroma upsampling?

Just because the Y and CbCr channels can carry 12-bits per pixel doesn't mean that they do. With Y, they're almost certainly just sending the 8-bit signal, zero padded at the front, with those 12-bits of address space. Similarly with the Radiance, if you are taking 8-bit CbCr data and then sending out 4:2:2 as well, there is no additional chroma upsampling to do there, as all the bits are already spoken for. The output format from the Radiance should be the one that is best supported by your display, not 4:2:2 by default because it could theoretically hold more bits.

Also, there is no guarantee that any display would handle more than 8-bits or do anything more with it beyond truncating the additional bits and displaying them. When reading data from the Oppo in 4:2:2 mode, there is no value passed beyond 254, so they are not using the 12-bits of address space. Some devices send 10-bit or 12-bit data, but never in 4:2:2 mode.

Since the Y data on the Blu-ray disc is not subsampled, and is able to be perfectly reproduced at 8-bits per pixel at all supported HDMI specifications, any device sending this as 12-bits per pixel would be fundamentally broken. It would be making the assumption that the data on the disc, while bit-perfect (for the Y channel), is incorrect and it should do its own interpolation on the data to convert it to 12-bits. It's completely pointless. If you have a device that wants to take that 8-bit data, store it in 12-bit or 14-bit values, and then perform calculations on it there, that's fine, but a Blu-ray player shouldn't convert it to that since it is completely pointless to do so and is only introducing error.

Quote:
Originally Posted by PlasmaPZ80U View Post

also, is 4:4:4 YCbCr 12 bits?

Deep-Color can be 10 or 12 bits per pixel, and Deep Color only works in 4:4:4 or RGB, not 4:2:2. All content is 8-bits, however.

Chris Heinonen
Senior Editor, Secrets of Home Theater and High Fidelity, www.hometheaterhifi.com
Displays Editor, AnandTech.com
Contributor, HDGuru.com and Wirecutter.com
ISF Level II Certified Calibrator, ReferenceHomeTheater.com
Smackrabbit is offline  
post #19 of 35 Old 05-24-2012, 02:43 PM
amt
Senior Member
 
amt's Avatar
 
Join Date: Jul 2004
Posts: 449
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 12
Quote:
Originally Posted by Smackrabbit View Post

Just because the Y and CbCr channels can carry 12-bits per pixel doesn't mean that they do. With Y, they're almost certainly just sending the 8-bit signal, zero padded at the front, with those 12-bits of address space.

Actually they always do "carry" the 12 bits, but it's a matter of whether they contain extra data. HDMI spec states that you must left-shift the 8-bit value with zeros (if that's all you use) to have it in 12-bit size. I think we are saying the same thing here, but I want to clarify. The values are always 12 bits, but the lower 4 bits (the least significant) could be all 0's.

Quote:
Originally Posted by Smackrabbit View Post

Similarly with the Radiance, if you are taking 8-bit CbCr data and then sending out 4:2:2 as well, there is no additional chroma upsampling to do there, as all the bits are already spoken for.

I was not stating that the Radiance would do this, but that the source simply has the opportunity to do this. Of course the Radiance would not do this, as it has already been done.

Quote:
Originally Posted by Smackrabbit View Post

The output format from the Radiance should be the one that is best supported by your display, not 4:2:2 by default because it could theoretically hold more bits.

Actually, in this case Radiance does suggest always using 4:2:2 as long as the display supports it.

Quote:
Originally Posted by Smackrabbit View Post

Also, there is no guarantee that any display would handle more than 8-bits or do anything more with it beyond truncating the additional bits and displaying them.

And conversely there's no guarantee that all displays always drop >8 bits as well. If the display is capable of dealing with this and has >8bit panels, it's a win.

Quote:
Originally Posted by Smackrabbit View Post

When reading data from the Oppo in 4:2:2 mode, there is no value passed beyond 254, so they are not using the 12-bits of address space. Some devices send 10-bit or 12-bit data, but never in 4:2:2 mode.

But in this case that is exactly what the Radiance does.

Quote:
Originally Posted by Smackrabbit View Post

Since the Y data on the Blu-ray disc is not subsampled, and is able to be perfectly reproduced at 8-bits per pixel at all supported HDMI specifications, any device sending this as 12-bits per pixel would be fundamentally broken. It would be making the assumption that the data on the disc, while bit-perfect (for the Y channel), is incorrect and it should do its own interpolation on the data to convert it to 12-bits. It's completely pointless. If you have a device that wants to take that 8-bit data, store it in 12-bit or 14-bit values, and then perform calculations on it there, that's fine, but a Blu-ray player shouldn't convert it to that since it is completely pointless to do so and is only introducing error.

I agree it should not, but if you (for some reason) want the Blu-ray player to adjust brightness, contrast, or gamma, it could do this.
amt is offline  
post #20 of 35 Old 05-25-2012, 12:42 AM
AVS Special Member
 
Chronoptimist's Avatar
 
Join Date: Sep 2009
Posts: 2,561
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 1 Post(s)
Liked: 203
Quote:
Originally Posted by PlasmaPZ80U View Post

Will enabling the deep color setting on a BD player or PS3 reduce any banding when watching BD movies?

Potentially, it could reduce banding and/or image noise. Whether or not it results in a visible change depends on the source player and the display. I would expect little to no benefit on a Plasma display for example, as they already use a lot of dither to display images.

Quote:
Originally Posted by Smackrabbit View Post

Likely not. The example being shown is dealing with larger internal bit-depth calculations and then dithering down to 8-bit output. Since any player could already do this without needing Deep Color support, it's really making no difference in this example.

Also, this example is using a grayscale gradient, which is already being passed at full resolution in 8-bits per pixel. If there are large visible bands occurring instead of dithering, as seen in the example, that can be fixed at the mastering stage instead of in the processing stage.

To be clear, the gradient source is not 1080p native, that is an upscaled image. And yes, it's illustrating the benefits of higher internal bit-depth processing.

This shows that even though the source material was 8-bit, 8-bits is clearly not enough precision to do things like image scaling, and all images on Blu-ray require chroma to be upscaled.

So if all Blu-ray (or MPEG video in general) requires that chroma be upscaled to 4:2:2, 4:4:4, or RGB, then there must be more than 8-bits of internal precision for good image quality.

If you are using more than 8-bits of internal precision, it's stupid not to pass that on to the display when you have the option (deep color) especially if your display has a 10-bit native panel as many LCD and SXRD displays do.

Furthermore, while the 16-bit image dithered down to 8-bit looks fine there, this is only half of the image processing chain. As soon as it gets to your display, that image is going to have further image processing applied to it (greyscale, gamma and gamut calibration) which can exaggerate the noise added to the image by the dither process and/or introduce banding. If you were able to pass the undithered 16-bit data to the display, you avoid this.

Quote:
Originally Posted by Smackrabbit View Post

However, nothing in the example shown would require either Deep Color or better processing in the Blu-ray player; it could all be done at the mastering step, which would be a better place for it to occur.

MPEG video is mastered as 4:2:0 data and requires upsampling to be displayed. It applies to all consumer video.

Quote:
Originally Posted by Doug Blackburn View Post

If it does, the upsampling is broken, period. If video was treated like a static test pattern, potential detail within an image would be eliminated by the upsampling examples posted several posts ago. The upsampling doesn't know if the contouring steps are image detail or encoding errors. If it removes the contouring, it could be removing detail if subtle detail exists. So upsampling alone should NEVER remove contouring if it was present in the source. Only "intelligent" processing of the image to find and identify contouring vs. actual image detail and removing the contouring (as the Sony 4K projector's control does) can remove contouring effectively in moving video images. There are MANY ways to manipulate static images into showing or not showing "steps" in a fade that have nothing to do with how you would treat video.

Doug, my example has nothing to do with Sony's super bit-mapping technology. It shows the difference between scaling an image with 8-bits of precision vs 16-bits of precision, even though the source image, and final output are both 8-bit.

It demonstrates without a shadow of a doubt, that more than 8-bits is required to process an 8-bit image, even when your final output is still going to be 8-bit. If it requires more than 8-bits to process, what good reason is there to not pass that on to the display?

While I respect that you are a knowledgeable calibrator and reviewer, it is clear that you are not involved in video mastering or image processing.


Quote:
Originally Posted by amt View Post

Which means if you stay in 4:2:2 YCbCr, you get 12 bits, even with HDMI 1.1!

This is why I always set my output on Lumagen Radiance to YCbCr 4:2:2, so it can actually keep >8 bits of precision for the color and send it to the display.

Now, the question I would have is: Do players like the Oppo, when configured to 4:2:2 YCbCr (no deep color), make use of >8 bits when doing the chroma upsampling?

It is certainly possible to do that, but there are no guarantees and virtually no sources will tell you whether or not they are doing this. Deep color on the other hand does indicate the bit-depth that is being passed down the video chain. I would also recommend that you scale the image directly to 4:4:4/RGB in the player, because any good display will be showing full resolution 4:4:4/RGB images, and you want to avoid two upscaling steps. (4:2:0 to 4:2:2 then 4:2:2 to 4:4:4/RGB)

And for the record, one of the reasons I do not recommend the Radiance, is because it only processes the image in 4:2:2, throwing away resolution.
Chronoptimist is offline  
post #21 of 35 Old 05-25-2012, 09:14 AM
AVS Special Member
 
Doug Blackburn's Avatar
 
Join Date: May 2008
Location: San Francisco - East Bay area
Posts: 3,453
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 20 Post(s)
Liked: 226
Quote:
Originally Posted by Chronoptimist View Post

It's particularly noticeable any time there's a shallow depth of field, the camera goes out of focus, or with fades in/between scenes.

Showing screenshots will not be the best illustration however, as that is only one part of the display chain. The display is going to be doing additional processing on top of it, exaggerating any errors/noise introduced.

8-bits is enough to display images without artifacts (in fades or anywhere else) but only if it is done properly and there's no room for error. Human vision can only distinguish about 200 (maybe 210) shades of gray or color. So even with consumer video using 16-235, for 219 levels, there are enough levels present to make fades/transitions seamless, but ONLY if the encoding is perfect. And the playback chain has to preserve the encoding without fiddling with it in any way that changes the distribution of digital values.

"Movies is magic..." Van Dyke Parks
THX Certified Professional Video Calibration
ISF -- HAA -- www.dBtheatrical.com
Widescreen Review -- Home Theater & Sound
Doug Blackburn is offline  
post #22 of 35 Old 05-25-2012, 09:20 AM
AVS Special Member
 
Doug Blackburn's Avatar
 
Join Date: May 2008
Location: San Francisco - East Bay area
Posts: 3,453
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 20 Post(s)
Liked: 226
Quote:
Originally Posted by cinema mad View Post

Even on my 8-bit notebook I can definitely observe the difference between the two,
with the 16-bit upsampling dithered to 8-bit output having smoother gradation, particularly noticeable towards the lower end....

That's meaningless as you do not know exactly what that process is doing. How does it distinguish between a step in a contour versus a difference between 2 shades of gray in a dark jacket along a fold, for example? Do you want the line along the fold in the jacket to be dithered too? No you don't. You want that detail preserved. How does the dither process know the difference between a contour line and detail in the image unless it is doing VERY sophisticated processing? You just can't apply dither to a video signal and get any sort of reliable image improvement that doesn't remove detail in the image. Doesn't matter how many bits are used, the original encoding may have single-digital-value differences within real detail in the image and you do NOT want dither removing those differences that are CORRECT and meaningful. There is no intelligence in dither if that's all it is. Just "dithering" an image will remove detail, period.

"Movies is magic..." Van Dyke Parks
THX Certified Professional Video Calibration
ISF -- HAA -- www.dBtheatrical.com
Widescreen Review -- Home Theater & Sound
Doug Blackburn is offline  
post #23 of 35 Old 05-26-2012, 12:01 AM
AVS Special Member
 
Chronoptimist's Avatar
 
Join Date: Sep 2009
Posts: 2,561
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 1 Post(s)
Liked: 203
Quote:
Originally Posted by Doug Blackburn View Post

8-bits is enough to display images without artifacts (in fades or anywhere else) but only if it is done properly and there's no room for error. Human vision can only distinguish about 200 (maybe 210) shades of gray or color. So even with consumer video using 16-235, for 219 levels, there are enough levels present to make fades/transitions seamless, but ONLY if the encoding is perfect. And the playback chain has to preserve the encoding without fiddling with it in any way that changes the distribution of digital values.

The DCI specification calls for 12-bits, because people could clearly distinguish stepping/contours at 10-bit in their testing. 8-bits is not nearly enough, even if it's perfect. (which is not the case with today's displays when you feed them an 8-bit input)
Quote:


Results:
  • Many subjects could distinguish 2 counts in 12-bits (=11-bits)
  • Almost all subjects could distinguish 4 counts in 12-bits (=10-bits)
Note that patterns were noise free; a small amount of noise will reduce discrimination.

Furthermore, if you actually want to have 219 discrete levels by the time you get to the end of the color space conversion, image scaling, white balance, gamma and gamut adjustments, you need far more than 8-bits precision in these processing steps.

Quote:
Originally Posted by Doug Blackburn View Post

That's meaningless as you do not know exactly what that process is doing. How does it distinguish between a step in a contour versus a difference between 2 shades of gray in a dark jacket along a fold, for example? Do you want the line along the fold in the jacket to be dithered too? No you don't. You want that detail preserved. How does the dither process know the difference between a contour line and detail in the image unless it is doing VERY sophisticated processing? You just can't apply dither to a video signal and get any sort of reliable image improvement that doesn't remove detail in the image. Doesn't matter how many bits are used, the original encoding may have single-digital-value differences within real detail in the image and you do NOT want dither removing those differences that are CORRECT and meaningful. There is no intelligence in dither if that's all it is. Just "dithering" an image will remove detail, period.

The software I am using does not just dither for the sake of it, it is processing the image in 16-bits (just image scaling in this example, though it can also perform gamma and LUT operations) and then downsamples to 8-bit, dithering where necessary.

As I have previously stated, the higher the output bit-depth is from the player, the less dither is required because discrete levels exist. If you were not using dither in this step, you may as well have processed the image at the output bit-depth (8-bit if you're not using deep color) which will result in clear banding.

Using your example, where the source has details next to each other that are a single value apart, as soon as you upsample chroma from 4:2:0, you are creating an intermediate value between the two. Let's say those values were 64 and 65. The intermediate value would be 64.5, but this cannot be represented in 8-bits and must be rounded to the nearest number. Congratulations, you have now added banding to the image which did not exist in the source!

To properly perform that upsampling operation, you would convert the source to 10-bit (input values ×4) so that the two are now 256 and 260. The intermediate step created when you upsample the image is now 258, avoiding rounding errors. To properly represent this, you should then pass on 10-bit data to the display, but failing that, this image now needs dithered down to 8-bit to preserve the appearance of that intermediate step as much as possible. (but this adds a small amount of noise not present in the source)

While that example shows what happens when two details next to each other are a single value apart, if you are doing a more complex operation such as conversion to RGB, LUT operations etc, it takes far more than expanding to 10-bit to avoid rounding errors.
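
Worked through numerically, that is nothing more than:

Code:
a, b = 64, 65

mid_8bit = round((a + b) / 2)      # 64.5 has no 8-bit code; it rounds away
print(mid_8bit)                    # 64 -> the in-between shade is lost

a10, b10 = a * 4, b * 4            # promote to 10-bit: 256 and 260
mid_10bit = (a10 + b10) // 2       # 258 -> the half-step survives
print(mid_10bit, mid_10bit / 4)    # 258, i.e. 64.5 in 8-bit terms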
Chronoptimist is offline  
post #24 of 35 Old 05-26-2012, 11:20 AM
AVS Special Member
 
Doug Blackburn's Avatar
 
Join Date: May 2008
Location: San Francisco - East Bay area
Posts: 3,453
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 20 Post(s)
Liked: 226
Quote:
Originally Posted by Chronoptimist View Post

The DCI specification calls for 12-bits, because people could clearly distinguish stepping/contours at 10-bit in their testing. 8-bits is not nearly enough, even if it's “perfect”. (which is not the case with today's displays when you feed them an 8-bit input)


Furthermore, if you actually want to have 219 discrete levels by the time you get to the end of the color space conversion, image scaling, white balance, gamma and gamut adjustments, you need far more than 8-bits precision in these processing steps.

The software I am using does not just dither for the sake of it, it is processing the image in 16-bits (just image scaling in this example, though it can also perform gamma and LUT operations) and then downsamples to 8-bit, dithering where necessary.

As I have previously stated, the higher the output bit-depth is from the player, the less dither is required because discrete levels exist. If you were not using dither in this step, you may as well have processed the image at the output bit-depth (8-bit if you're not using deep color) which will result in clear banding.

Using your example, where the source has details next to each other that are a single value apart, as soon as you upsample chroma from 4:2:0, you are creating an intermediate value between the two. Let's say those values were 64 and 65. The intermediate value would be 64.5, but this cannot be represented in 8-bits and must be rounded to the nearest number. Congratulations, you have now added banding to the image which did not exist in the source!

To properly perform that upsampling operation, you would convert the source to 10-bit (input values ×4) so that the two are now 256 and 260. The intermediate step created when you upsample the image is now 258, avoiding rounding errors. To properly represent this, you should then pass on 10-bit data to the display, but failing that, this image now needs dithered down to 8-bit to preserve the appearance of that intermediate step as much as possible. (but this adds a small amount of noise not present in the source)

While that example shows what happens when two details next to each other are a single value apart, if you are doing a more complex operation such as conversion to RGB, LUT operations etc, it takes far more than expanding to 10-bit to avoid rounding errors.

Aside from being a "knowledgeable calibrator", perhaps more importantly, I have 34 years of engineering and technical experience with Eastman Kodak Company, more than 20 years of which was involved with digital imaging systems. I never said you would EVER... as in NEVER... process the 8-bit source in 8-bit space (today)... you made that up somewhere along the line. All I said was, that 8-bits, used properly, will display a fade or other pattern (and real images) without banding or contouring or any other obvious defects if putting the image into 8-bit space is done correctly. Back in the early days we had to be able to process images in 8-bit space without adding artifacts simply because there were no options to use more bits, so it was critical to have compression and "expansion" methods that could be used in 8-bit space without introducing artifacts. While chroma subsampling was used, I don't think (hard to remember for sure) we went as far as 4:2:0 simply because of the processing issues. It was FAR more difficult to do that than to use 10-bit (or more) space but 10-bits simply wasn't an option in the early days of digital imaging systems. We got by, but it wasn't easy... and we always used 0-255 since consumer digital sources didn't exist. Spatial processing was required to insure that however the image was manipulated, it wouldn't introduce a visible artifact... that was a slow thing in those early days, but there was simply no option to use more bits. In fact, in some products, the operator was warned when an adjustment they made would result in a visible artifact in the image. When you were stuck in 8-bit space, you did what you had to do to prevent processing from impacting image quality -- large images on the slow (single) processor speeds of the day (1980s) required very long processing times to avoid introducing artifacts.

But that's not the issue today... the point I was making is that 8-bits, even with the range reduced to 16-235, is enough resolution to create "8-bit" images without banding/contouring. If you then have to do ANYTHING to that image (i.e. processing), you either have to go back to the processing methods used in the 1980s, or you use more bits or you WILL introduce problems. I don't know any Blu-ray player that converts 4:2:0 to 4:2:2 in 8-bit space. And the math of doing the conversion in 10-bit space without introducing errors is well documented... nevertheless... some manufacturers manage to introduce errors in that conversion that don't HAVE to be there if they simply used the right math in 10-bit space. Some products have more than 10 bits to work with, though it may be overkill. Samsung video displays, for example, for the last 4 years or so have had 18-bit internal data paths while some other brands have never gone over 10-bits for internal processing. If the source is 8-bits 4:2:0, 10-bits should be enough workspace to do the job properly, but some products... you have to wonder what they were thinking (or not thinking).

"Movies is magic..." Van Dyke Parks
THX Certified Professional Video Calibration
ISF -- HAA -- www.dBtheatrical.com
Widescreen Review -- Home Theater & Sound
Doug Blackburn is offline  
post #25 of 35 Old 05-29-2012, 03:15 AM
AVS Special Member
 
Chronoptimist's Avatar
 
Join Date: Sep 2009
Posts: 2,561
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 1 Post(s)
Liked: 203
Quote:
Originally Posted by Doug Blackburn View Post

But that's not the issue today... the point I was making is that 8-bits, even with the range reduced to 16-235, is enough resolution to create "8-bit" images without banding/contouring. If you then have to do ANYTHING to that image (i.e. processing), you either have to go back to the processing methods used in the 1980s, or you use more bits or you WILL introduce problems. I don't know any Blu-ray player that converts 4:2:0 to 4:2:2 in 8-bit space. And the math of doing the conversion in 10-bit space without introducing errors is well documented... nevertheless... some manufacturers manage to introduce errors in that conversion that don't HAVE to be there if they simply used the right math in 10-bit space. Some products have more than 10 bits to work with, though it may be overkill. Samsung video displays, for example, for the last 4 years or so have had 18-bit internal data paths while some other brands have never gone over 10-bits for internal processing. If the source is 8-bits 4:2:0, 10-bits should be enough workspace to do the job properly, but some products... you have to wonder what they were thinking (or not thinking).

But on today’s modern displays that are not analogue like CRTs were, it absolutely is not sufficient to feed them an 8-bit signal if you want a banding-free image, even if it’s a “perfect” 8-bit signal.

And it has been demonstrated that even a pristine 8-bit source is not sufficient to produce a banding-free image, as shown in the tests that were performed when deciding on the DCI spec (and others) where even a pristine 10-bit source was not deemed sufficient for the majority of observers.


If you now agree that at least 10-bits is required when processing the image, why do you still insist that deep color is not necessary? The image hasn’t finished being processed when it leaves the player—not by a long shot. If anything, the display is doing more processing to the image than the player will. So why throw away all that data in the middle of processing the image?

I am not saying that by outputting a deep color image you are getting the equivalent of a native 10/12-bit source, but it is necessary if you want the best possible image that you can get from the source.

And yes, if it's handled correctly, you can still get good results with an 8-bit output, but when the option for deep colour output exists on that source, you will always have worse results if you restrict it to an 8-bit output.
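
As a rough numerical sketch of why the display side benefits from more than 8 bits coming in (a made-up gamma tweak, nowhere near a real display's full pipeline): apply the same small calibration adjustment to a ramp delivered at 8-bit and at 10-bit, and count how many distinct 8-bit output codes survive.

Code:
import numpy as np

gamma = 2.4 / 2.2                        # hypothetical calibration adjustment

ramp8  = np.arange(256)  / 255.0         # 8-bit input codes
ramp10 = np.arange(1024) / 1023.0        # 10-bit input codes

out8  = np.round((ramp8  ** gamma) * 255).astype(np.uint8)
out10 = np.round((ramp10 ** gamma) * 255).astype(np.uint8)

print(np.unique(out8).size)    # fewer than 256: some output codes get skipped
print(np.unique(out10).size)   # all 256 output codes are still reachable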
Chronoptimist is offline  
post #26 of 35 Old 03-08-2013, 10:38 AM
Member
 
Ilya Volk's Avatar
 
Join Date: Sep 2012
Location: Belarus
Posts: 196
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 16
Quote:
Originally Posted by Chronoptimist View Post


If you have the option of using greater than 8-bits of precision at the player and display, you absolutely should. It is fallacious to state otherwise.


8-bit upsampling:
[gradient image]

16-bit upsampling, dithered to 8-bit output:
[gradient image]
Even with an 8-bit output, you can clearly see the benefits of using 16-bit upsampling over 8-bit internal upsampling. Because the final output is 8-bit however, the image must be dithered to avoid introducing banding. This dithering noise will be made more noticeable as the image moves further down the display chain, and the display applies its own image processing on top of what the player has done. (greyscale, gamma, gamut calibration etc.)

Chronoptimist, these pictures blew my mind.

Are you saying that the first picture is what we actually see in Blu-ray videos with 8-bit precision encoding/decoding?

So all those super-duper Blu-rays don't even fully benefit from 8-bit display devices??
Ilya Volk is offline  
post #27 of 35 Old 03-08-2013, 10:43 AM
AVS Special Member
 
sotti's Avatar
 
Join Date: Aug 2004
Location: Seattle, WA
Posts: 6,610
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 21 Post(s)
Liked: 167
Quote:
Originally Posted by Ilya Volk View Post

Chronoptimist, these pictures blew my mind.

Are you saying that the first picture is what we actually see in Blu-ray videos with 8-bit precision encoding/decoding?

So all those super-duper Blu-rays don't even fully benefit from 8-bit display devices??

No, he's saying they are both 8-bit data, but the second shows the benefit of having higher bit depth, even if it gets dithered back down.

8-bit is the absolute minimum number of bits required, and if you can use more bits you should.

Joel Barsotti
SpectraCal
CalMAN Lead Developer
sotti is offline  
post #28 of 35 Old 03-08-2013, 10:47 AM
Member
 
Ilya Volk's Avatar
 
Join Date: Sep 2012
Location: Belarus
Posts: 196
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 16
Quote:
Originally Posted by sotti View Post

Quote:
Originally Posted by Ilya Volk View Post

Chronoptimist, these pictures blew my mind.

Are you saying that the first picture is what we actually see in Blu-ray videos with 8-bit precision encoding/decoding?

So all those super-duper Blu-rays don't even fully benefit from 8-bit display devices??

No, he's saying they are both 8-bit data, but the second shows the benefit of having higher bit depth, even if it gets dithered back down.

8-bit is the absolute minimum number of bits required, and if you can use more bits you should.

Don't you think the second picture looks like it has more colors compared to the first?
Ilya Volk is offline  
post #29 of 35 Old 03-08-2013, 10:57 AM
AVS Special Member
 
sotti's Avatar
 
Join Date: Aug 2004
Location: Seattle, WA
Posts: 6,610
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 21 Post(s)
Liked: 167
Quote:
Originally Posted by Ilya Volk View Post

Don't you think the second picture looks like it has more colors compared to the first?

Absolutely it looks smoother, but it's still 8-bit data, it's just dithered.

They add a subtle noise pattern to confuse your eyes/brain into seeing more subtle shades.

When they master Blu-rays, the source content will be high bit depth, and they will likely use dithering as they process the video down to the 8-bit YCC 4:2:0 they put on the disc.
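
A quick numerical sketch of what that dither buys (made-up value, not an actual mastering tool): take a flat patch whose true level sits between two 8-bit codes.

Code:
import numpy as np

rng = np.random.default_rng(1)
master = np.full(100_000, 117.3)     # high bit-depth "master": no 8-bit code for 117.3

plain    = np.round(master)                                        # every pixel -> 117
dithered = np.round(master + rng.uniform(-0.5, 0.5, master.size))  # mix of 117s and 118s

print(plain.mean())      # 117.0  -> the whole patch shifts darker
print(dithered.mean())   # ~117.3 -> the average shade is preserved by the noise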

Joel Barsotti
SpectraCal
CalMAN Lead Developer
sotti is offline  
post #30 of 35 Old 03-09-2013, 12:28 PM
AVS Special Member
 
Doug Blackburn's Avatar
 
Join Date: May 2008
Location: San Francisco - East Bay area
Posts: 3,453
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 20 Post(s)
Liked: 226
And the 2 images prove my point also... they are both 8 bits. If 8 bits was not enough to display a perfectly smooth fade in a static image, the 16-bit image down-converted to 8 bits would still have visible steps in it. It does not... so if you put that "perfect" 8-bit fade through processing that doesn't change the bits, the image coming out the other end will look identical (i.e. perfect).

8-bits is enough resolution to display images without visible contour lines.... anything that happens in the mastering or processing of the image can produce contour lines. And yes, having more than 8 bits in the processing is necessary to stop contour lines from appearing in the end images. BUT having more than 8-bits available does not guarantee there won't be visible contouring in the 8-bit images that result from the higher-bit processing. The Lumagen Radiance processors have 10-bit processing paths and can do all the processing without introducing artifacts that were not present in the original.

Furthermore... 4:2:2 is 12 bits all the time (Jim Peterson of Lumagen has explained this in at least 1 thread here) unless you switch it to 8 bits or 10 bits with a disc player setting. Some products have no more than 10 bits anywhere in their imaging path and they are HORRIBLE at doing anything to the image. Others have up to 18 bits in their processing path (Samsung TVs for example have had 18 bits in portions of their imaging paths for more than 4 years now).

You never know where the contouring is coming from... it could be something as simple as 1 too many clicks on a Red gain control or on a blue cut control or even 1 too many clicks on Contrast or some other control. It could be encoded in the disc. It could be coming from an AVR with an internal video processor that cannot be completely bypassed... you just don't know.

"Movies is magic..." Van Dyke Parks
THX Certified Professional Video Calibration
ISF -- HAA -- www.dBtheatrical.com
Widescreen Review -- Home Theater & Sound
Doug Blackburn is offline  