Why didn’t this happen?

If you’ve ever wondered why we have never seen the definitive jump to HDR displays that leaves SDR behind, read on to find out why.

What is HDR and what makes it different from SDR?

HDR stands for High Dynamic Range. It is based on increasing the number of bits per RGB component from 8 to 10 or even 12 bits, but the point is not to increase chroma, and therefore the number of colors, so much as their luminance, allowing a wider brightness and contrast range for each color and a more faithful representation of them.
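
To make that concrete, here is a minimal sketch in plain C (not tied to any particular display API) that prints how many distinct levels a single component can take at each bit depth:

```c
#include <stdio.h>

/* Number of distinct levels each RGB component can encode at a
 * given bit depth: 2^bits. */
static unsigned long levels(unsigned bits) {
    return 1UL << bits;
}

int main(void) {
    const unsigned depths[] = {8, 10, 12};
    for (int i = 0; i < 3; i++)
        printf("%2u bits per component -> %lu levels\n",
               depths[i], levels(depths[i]));
    /* Prints 256, 1024 and 4096: the extra bits buy finer and wider
     * steps of brightness per color, not new hues. */
    return 0;
}
```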

However, while 24-bit color monitors quickly replaced those with lower color depth, the same has not happened with HDR monitors: a large number of screens on sale still don’t support it, so the transition was never fully completed.

One of the reasons is that HDR is hard to promote on a screen that doesn’t support it: in the age of internet marketing it is difficult to sell something based on visual perception to people who cannot actually see it.

Chrominance and luminance

Each pixel shown on screen is first stored in a frame buffer that holds the values of its three RGB components: red, green and blue. The peculiarity of this way of storing information is that it encodes not only the chrominance but also the luminance, that is, both the color itself and how bright that color is.
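
As a small illustration of how luminance is implicit in the stored RGB values, the sketch below derives it with the Rec. 709 weights (assuming linear RGB values in the 0 to 1 range; the exact weights a given system uses may differ):

```c
#include <stdio.h>

/* Relative luminance derived from linear RGB using Rec. 709
 * weights; the frame buffer only stores R, G and B, so the
 * brightness of a pixel is implied by those three values. */
static double luminance(double r, double g, double b) {
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

int main(void) {
    printf("mid grey (0.5, 0.5, 0.5): Y = %.3f\n", luminance(0.5, 0.5, 0.5));
    printf("pure blue (0.0, 0.0, 1.0): Y = %.3f\n", luminance(0.0, 0.0, 1.0));
    return 0;
}
```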

In older black-and-white televisions, images were represented by luminance values alone, and when color arrived the two signals were kept separate. In computing, however, even though CRT displays were used for almost three decades, at the level of the internal representation in the frame buffer both have always been encoded together.

For a long time the maximum color representation was 24 bits per pixel, which gives 16.7 million different combinations. Screens of this type are called SDR screens, to differentiate them from HDR screens, which support a greater amount of information per pixel: 10 or even 12 bits per component.
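
A quick back-of-the-envelope in C shows where those figures come from and how fast the count grows at HDR bit depths:

```c
#include <stdio.h>

int main(void) {
    const unsigned bits_per_component[] = {8, 10, 12};
    for (int i = 0; i < 3; i++) {
        unsigned b = bits_per_component[i];
        unsigned long long combos = 1ULL << (3 * b);
        printf("%2u bits/component: %u bits/pixel, %llu combinations\n",
               b, 3 * b, combos);
    }
    /* 24 bits -> 16,777,216 (the classic SDR "true color")
     * 30 bits -> ~1.07 billion
     * 36 bits -> ~68.7 billion */
    return 0;
}
```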

HDR and real-time graphics rendering

HDR can be represented in two different ways, both of which are sketched in code after this list:

  • The first is to add more bits per component to all the pixels in the frame buffer.
  • The second is to add a new component.
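
Both approaches correspond to real GPU pixel formats. The sketch below illustrates the bit layouts in plain C: a 10-10-10-2 layout widens each component inside the same 32 bits, while a shared-exponent layout in the style of RGB9E5 adds a small exponent field as the "new component". The packing functions assume the fields have already been quantized; the exact formats and conversions a given graphics API exposes are outside the scope of this illustration.

```c
#include <stdint.h>

/* One way to pack the classic SDR layout: 8 bits per component,
 * 32 bits per pixel. */
static uint32_t pack_rgba8(uint32_t r, uint32_t g, uint32_t b, uint32_t a) {
    return ((a & 0xFFu) << 24) | ((b & 0xFFu) << 16) |
           ((g & 0xFFu) << 8)  |  (r & 0xFFu);
}

/* First approach: widen every component. A 10-10-10-2 layout still
 * fits in 32 bits but leaves only 2 bits for alpha. */
static uint32_t pack_rgb10a2(uint32_t r, uint32_t g, uint32_t b, uint32_t a) {
    return ((a & 0x3u) << 30) | ((b & 0x3FFu) << 20) |
           ((g & 0x3FFu) << 10) | (r & 0x3FFu);
}

/* Second approach: add a component. Three 9-bit mantissas share a
 * 5-bit exponent, in the style of RGB9E5 formats. */
static uint32_t pack_rgb9e5(uint32_t r, uint32_t g, uint32_t b, uint32_t e) {
    return ((e & 0x1Fu) << 27) | ((b & 0x1FFu) << 18) |
           ((g & 0x1FFu) << 9)  | (r & 0x1FFu);
}
```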

Both solutions make sense for displaying still images and video files, but for GPUs that render 3D scenes in real time things get complicated, because both the units that manipulate the pixel data and those that transmit that same data must increase their precision.

This therefore requires GPUs with a higher transistor count, and so either larger chips for the same performance or less powerful ones if the size is kept the same. The solution manufacturers chose? The second, which is why games running in HDR mode perform worse than when rendered in SDR.
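
On the rendering side this typically means drawing into a wider working color target and tone-mapping at the end. The sketch below only shows that format switch; it assumes an already created OpenGL 3+ context with a loader such as glad providing the constants, and is an illustration rather than the method of any specific engine or driver.

```c
#include <stddef.h>
#include <glad/glad.h>  /* assumed loader; any GL 3+ loader works */

/* Allocate an offscreen color target. GL_RGBA8 is the classic
 * 32-bit SDR format; GL_RGBA16F is a common HDR working format
 * and doubles the memory and bandwidth per pixel. */
static GLuint make_color_target(GLsizei w, GLsizei h, int hdr) {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0,
                 hdr ? GL_RGBA16F : GL_RGBA8,
                 w, h, 0, GL_RGBA,
                 hdr ? GL_HALF_FLOAT : GL_UNSIGNED_BYTE,
                 NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```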

GPUs were never designed for HDR but for SDR

If we increase the number of bits of information per pixel, then moving that data throughout the rendering process will require far more bandwidth, which can translate into a drop in performance if the memory that stores the data and the units that process it cannot move it fast enough.
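
A rough, purely illustrative calculation of that extra traffic for a single 4K color target at 60 fps (the resolution and frame rate are assumptions for the example, and depth buffers, overdraw, compression and caches are all ignored):

```c
#include <stdio.h>

int main(void) {
    const double pixels = 3840.0 * 2160.0;   /* one 4K frame */
    const double fps = 60.0;

    const double sdr_bytes = pixels * 4.0;   /* RGBA8:   4 bytes/pixel */
    const double hdr_bytes = pixels * 8.0;   /* RGBA16F: 8 bytes/pixel */

    printf("SDR target: %6.1f MiB/frame, %5.2f GiB/s at %.0f fps\n",
           sdr_bytes / (1 << 20), sdr_bytes * fps / (1 << 30), fps);
    printf("HDR target: %6.1f MiB/frame, %5.2f GiB/s at %.0f fps\n",
           hdr_bytes / (1 << 20), hdr_bytes * fps / (1 << 30), fps);
    return 0;
}
```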

Moving information around is expensive inside a processor, especially in terms of power consumption. This is one of the reasons why none of the GPUs of recent years has been designed to work natively with more than 8 bits per component per pixel: for a long time the overriding premise when designing new processors has been performance per watt.

If GPUs had been designed for HDR from the start, as has happened with other technologies, HDR would by now be completely standardized: all PC hardware and software would be built around it, and everyone would be using it.
