With 8K TVs, More Pixels Make a Better Picture

The combination of more pixels and advanced upscaling technology on 8K TVs can make any content look its best


The state-of-the-art in consumer TVs for 2019 is defined by two significant trends: 8K and sophisticated AI processing. While at first glance these may appear to be two separate features, one being resolution and the other encompassing a wide range of algorithmic computation, they are closely intertwined. Simply put, if you have more pixels to work with, and better algorithms, you can do really cool things with content, including improving the apparent resolution of upscaled content on 8K TVs.

Advanced noise reduction and resolution upscaling algorithms have been used on still images in Photoshop for years, because there is no penalty for waiting a few seconds for a filter to process a still image. But on a TV it's a whole different ballgame: with 8K, whatever processing you use has to keep up with more than 33 million pixels at (at least) 60 frames per second. So, what's new is the application of computation-intensive image processing techniques to video content playing on consumer TVs.

With multiple algorithms available for noise reduction and upscaling, you get the best results by choosing the one that works best for a given image type. Using digital noise reduction as an example, you would use a different algorithm for 1080p video shot on a consumer camera than for 4K video shot on a pro rig. Furthermore, you'd want to apply stronger noise reduction in shadow areas while keeping it from smearing texture detail. There are algorithms and parameters that are optimal for each situation, and the point of AI processing is to recognize these qualities and pick the right algorithm for the best result.
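To make that idea concrete, here is a minimal Python sketch of content-dependent noise reduction. The "consumer_1080p" label, the filter strengths, and the shadow weighting are all illustrative assumptions standing in for whatever a TV's proprietary processing actually does; a real set would infer the source characteristics on its own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_frame(frame: np.ndarray, source: str) -> np.ndarray:
    """Blend a blurred copy back in, more aggressively for noisy sources and in shadows."""
    # Hypothetical source labels: noisy consumer 1080p tolerates a stronger
    # filter, while clean pro-shot 4K gets a light touch so texture detail survives.
    sigma = 2.0 if source == "consumer_1080p" else 0.5

    # frame is H x W x 3 with values in [0, 1]; weight the denoised copy more
    # heavily in shadows, where noise is most visible.
    luma = frame.mean(axis=-1, keepdims=True)
    weight = np.clip((1.0 - luma) * sigma / 2.0, 0.0, 1.0)

    blurred = gaussian_filter(frame, sigma=(sigma, sigma, 0))  # stand-in for a real NR filter
    return weight * blurred + (1.0 - weight) * frame
```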

For upscaling, you might use one algorithm for a picture of a city skyline that has a lot of sharp horizontal and vertical lines, and a different algorithm for a picture of a human face, which in turn might be different than the algorithm used on a picture of food. Moreover, bandwidth-starved 720p or 1080i content (cable TV) requires distinctly different upscaling than 4K coming off an Ultra HD Blu-ray disc. It's a matter of using the right tool for the job.
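As a toy illustration of "the right tool for the job," the sketch below maps a scene label to a different generic OpenCV interpolation method. The labels, the choice of interpolators, and the 7680×4320 target are assumptions made purely for illustration; actual TVs use proprietary, learned upscaling filters rather than these off-the-shelf ones.

```python
import cv2

# Hypothetical mapping from a detected scene type to a generic interpolator.
SCENE_TO_INTERP = {
    "skyline":   cv2.INTER_NEAREST,    # keep hard horizontal/vertical edges crisp
    "face":      cv2.INTER_CUBIC,      # smooth gradients, avoid ringing on skin
    "food":      cv2.INTER_LANCZOS4,   # favor fine texture detail
    "broadcast": cv2.INTER_LINEAR,     # softer result hides compression artifacts
}

def upscale_to_8k(frame, scene_label):
    interp = SCENE_TO_INTERP.get(scene_label, cv2.INTER_CUBIC)
    return cv2.resize(frame, (7680, 4320), interpolation=interp)
```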

Now, it would be too much for a TV to try to match what's on screen at any given moment with the right algorithm entirely on its own. So the AI element comes from supercomputers churning through massive quantities of constantly updated imagery; that offline work is distilled into tables the TV can use to more easily match what's on screen with the most appropriate way to process it.
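A rough sketch of that division of labor, under the assumption that the heavy analysis happens offline and only a cheap signature-plus-lookup runs on the TV. The table values and thresholds below are invented solely to show the shape of the idea.

```python
import numpy as np

# Offline, heavyweight analysis is distilled into a table; the TV only computes
# a cheap per-frame signature and looks up matching parameters.
FILTER_TABLE = {
    # (edge_density_bucket, noise_bucket): (sharpen_gain, denoise_sigma)
    (0, 0): (0.2, 0.3),
    (0, 1): (0.1, 1.2),
    (1, 0): (0.8, 0.2),
    (1, 1): (0.5, 0.8),
}

def signature(frame: np.ndarray) -> tuple:
    """Cheap per-frame features a TV can afford in real time."""
    gray = frame.mean(axis=-1)
    gy, gx = np.gradient(gray)
    edge_density = np.hypot(gx, gy).mean()
    noise = np.abs(gray - np.median(gray)).std()   # crude noise proxy
    return (int(edge_density > 0.05), int(noise > 0.1))

def pick_parameters(frame: np.ndarray):
    return FILTER_TABLE[signature(frame)]
```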

What’s interesting is that this processing allows for legitimate gains in apparent resolution, especially if the content you’re starting with is already detailed. The reason for this is directly related to the AI algorithms and the imagery databases they examine. Remarkably, more pictures were taken worldwide in 2011 than had been taken in all of prior history. Needless to say, the number of photos shared online has only increased since then. And while I don’t have a statistic for it, something similar has clearly happened with video. Moreover, with video, there have been drastic increases in fidelity over the past few years; a high-end phone now takes surprisingly great 4K video. This matters because it means there is a huge amount of imagery to draw from and feed to the computers that churn out the AI algorithms.

For the viewer, this means that TVs are better than ever at preserving detail when upscaling. In particular, maintaining edge definition can yield visual benefits. If you have a sharp line and you double the resolution of the display while preserving that edge, the line will appear sharper, as long as the quality of that sharp transition is preserved during upscaling. When thinking about the distance at which you can see individual pixels on a screen, it’s instructive to look at a diagonal line that’s almost vertical or almost horizontal. The only way a TV’s pixel grid can deal with such a line is to draw a run of pixels in a straight row, skip over by one pixel, draw another run, skip over again, and so on.
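The little script below simply rasterizes a nearly horizontal line onto a coarse pixel grid, to make those runs and one-pixel skips visible.

```python
import numpy as np

# A nearly horizontal line (slope 1/8) rasterized onto a coarse grid:
# a run of pixels in one row, a one-pixel jump, another run, and so on.
width, slope = 32, 1 / 8
cols = np.arange(width)
rows = np.floor(cols * slope).astype(int)

grid = np.zeros((rows.max() + 1, width), dtype=int)
grid[rows, cols] = 1

for row in grid[::-1]:                  # print so the line rises from left to right
    print("".join("#" if p else "." for p in row))
# Each printed row holds a run of 8 '#' pixels before the line skips up by one.
```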

Now, let’s consider what happens when you upscale from 4K to 8K. You have four times the total number of pixels to work with, and twice the number of rows and columns with which to express those stair-steps in diagonal lines. As an experiment, I put a 4K and an 8K TV (both 82″) side by side in a large room and walked backward until I could no longer see stair-steps in the lines. For the 8K TV, an 82″ Samsung Q900F, the line became smooth at about a 15-foot viewing distance. But the 4K TV (also an 82″ Samsung) required that I step back to somewhere around 40 to 50 feet before the lines became smooth. I was very surprised that I had to roughly triple the distance; I thought it would happen at double the distance. This is one provable, observable reason upscaling to 8K can yield a visible benefit.
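For a back-of-envelope sanity check on those numbers (not a reproduction of the measurement above), the snippet below computes the pixel pitch of an 82″ 16:9 panel and the distance at which a single pixel subtends about one arcminute, the usual 20/20 rule of thumb. Stair-steps tend to remain visible well beyond that distance because the eye is far more sensitive to misaligned edges (vernier acuity) than to isolated pixels, which is consistent with the longer distances observed in the experiment.

```python
import math

def pixel_pitch_mm(diagonal_in: float, horizontal_pixels: int) -> float:
    """Pixel pitch of a 16:9 panel, in millimetres."""
    width_in = diagonal_in * 16 / math.hypot(16, 9)
    return width_in * 25.4 / horizontal_pixels

one_arcmin = math.radians(1 / 60)
for name, cols in [("4K", 3840), ("8K", 7680)]:
    pitch = pixel_pitch_mm(82, cols)
    distance_m = (pitch / 1000) / one_arcmin   # where one pixel spans ~1 arcminute
    print(f"{name}: pitch {pitch:.2f} mm, 1-arcmin distance = {distance_m:.1f} m")
# Roughly 1.6 m for 4K and 0.8 m for 8K; stair-steps on shallow diagonals stay
# visible much farther out than single-pixel acuity alone would suggest.
```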


Do I Really Need More Pixels?

Every time consumer video makes a major leap in resolution, the same discussions occur about whether you’d ever need anything more. The reality is that 8K represents a significant leap forward in picture quality, and that applies to upscaled as well as native 8K content.

To fully grasp why 8K is better than 4K, even for showing upscaled 480, 720, 1080 or UHD (2160) video, it’s necessary to understand how a computer sees that imagery, because these days major gains in picture quality are the result of sophisticated processing and the use of advanced algorithms.

Let’s go back in time for a minute. When 4K TVs first came out, the processors available for picture processing were slow and basic compared with today’s. Consequently, TVs had to use rudimentary upscaling algorithms that did not always do justice to content. But now, processors are much more robust and are able to analyze an image to determine the best processing to apply. The results can even offer an improvement in perceived detail, and we’ll discuss the reasons for that.

We just covered how upscaling can increase perceived detail by discussing the nature of a sharp line or edge. The more pixels you have, the sharper that transition looks. Normally, to avoid jaggies in lower-resolution imagery, “anti-aliasing” is used, a process in which adjacent pixels are blended to “soften” the transitions on the grid. But that process blurs edges, at the further expense of perceived sharpness. This is a situation that’s very familiar to gamers, but it applies to all sorts of imagery.

The point here is, if you have more pixels to work with, and use a good upscaling algorithm, you can preserve those sharp edges as you upscale. In so doing, the perceived sharpness is increased without the introduction of visual artifacts.
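A one-dimensional sketch of that trade-off: plain linear interpolation widens a hard edge when upscaling 2x, while an edge-preserving rule keeps the step abrupt. Nearest-neighbor is used here purely as a stand-in for a real edge-directed upscaler.

```python
import numpy as np

edge = np.array([0, 0, 0, 1, 1, 1], dtype=float)          # a hard edge at low resolution
x_hi = np.linspace(0, len(edge) - 1, 2 * len(edge))        # twice as many sample positions

# Plain linear interpolation spreads the transition across extra pixels...
linear = np.interp(x_hi, np.arange(len(edge)), edge)
# ...while the "edge-preserving" stand-in keeps the step abrupt.
nearest = edge[np.clip(np.round(x_hi).astype(int), 0, len(edge) - 1)]

print("linear :", np.round(linear, 2))
print("nearest:", np.round(nearest, 2))
```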

Another artifact that rears its ugly head when you don’t have enough pixels is directly related to fine detail and sharp lines. Have you ever noticed a pattern that flickers on screen as the camera moves? It could be windows on a building, the texture of a woven fabric, or the pattern of carbon fiber material. It’ll often occur when slowly zooming in and out of a still photo that contains a lot of detail. Whatever the cause, that flickering is the result of not having enough pixels to describe exactly where the sharp transitions of the lines defining that pattern are located. The result is that the line “jumps” from one pixel to the next instead of moving smoothly, which viewers see as flickering.
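Here is a small numerical sketch of that mechanism: a stripe pattern finer than the pixel grid can resolve, drifting a fraction of a pixel per frame. The sampled brightness swings from frame to frame (temporal aliasing), which reads as shimmer or flicker. The stripe period and drift rate are arbitrary illustrative values.

```python
import numpy as np

pixels = np.arange(16)      # one 16-pixel row of the screen
period = 1.8                # stripe period in pixels, finer than the 2-pixel Nyquist limit

for frame in range(4):
    phase = 0.2 * frame     # the pattern drifts 0.2 px per frame (a slow camera move)
    sampled = 0.5 + 0.5 * np.cos(2 * np.pi * (pixels + phase) / period)
    print(f"frame {frame}:", np.round(sampled, 1))
# The printed values change dramatically between frames even though the scene
# barely moved. Halve the pixel pitch (sample the same area with twice as many
# pixels) and the same pattern falls safely below the Nyquist limit, so it is
# reproduced stably as it drifts.
```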

However, if you have more pixels to work with, a good processor can figure out how and where to draw those sharp line/edges without creating visible artifacts.

Of course, while today we are discussing upscaling, in the future we’ll be looking at more and more native 8K content. All of the advantages discussed here pertaining to upscaling apply to native 8K footage as well. It’ll look cleaner, sharper, and all-around more realistic than 4K. Gaming on 8K TVs will be blessedly free of jaggies. Not because you have superhuman vision, but because human vision is complex and so are modern TVs. Having more pixels creates a better canvas upon which a processor can perform its magic and deliver a better picture.
