A GPU generates an enormous amount of information per second in the form of the many frames that flash across our retina from the screen. These frames are large, and so is the amount of data that must be stored and processed for them. Image buffers, which have to be stored in VRAM, are one of the reasons why high-bandwidth memory is required.
What is the image buffer?
The frame buffer is the part of VRAM where the information for each pixel of the next frame to appear on screen is stored. This buffer lives in VRAM and is regenerated by the GPU every few milliseconds.
Current GPUs work with at least two image buffers: in the first, the GPU writes the next image to be shown on screen; the second holds the previously generated image, which is being sent to the screen.
The buffer being generated by the GPU is called the back buffer, while the one read by the display controller and sent to the video output is called the front buffer. The front buffer stores the red, green, and blue components of each pixel; the back buffer can store a fourth component, alpha, which holds the transparency or semi-transparency value of a pixel.
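As a rough illustration of how this double buffering works, here is a minimal CPU-side sketch; the buffer size, the `Pixel` layout, and the swap logic are simplifying assumptions, not any particular driver's API.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// RGBA pixel: the back buffer can carry an alpha channel alongside RGB.
struct Pixel { std::uint8_t r, g, b, a; };

constexpr int WIDTH = 640, HEIGHT = 480;

int main() {
    // Two image buffers in "VRAM": one being drawn, one being displayed.
    std::vector<Pixel> bufferA(WIDTH * HEIGHT), bufferB(WIDTH * HEIGHT);
    std::vector<Pixel>* back  = &bufferA;  // GPU writes the next frame here
    std::vector<Pixel>* front = &bufferB;  // display controller reads this one

    for (int frame = 0; frame < 3; ++frame) {
        // 1. Render the next frame into the back buffer.
        for (Pixel& p : *back) p = Pixel{255, 0, 0, 255};

        // 2. Once the frame is finished, swap the roles: the new image
        //    becomes the front buffer and is scanned out to the screen
        //    while the following frame is drawn into the other buffer.
        std::swap(back, front);
    }
}
```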
What is the display controller?
The display controller is a hardware unit inside the GPU. It is responsible for reading the image buffer in VRAM and converting the data into a signal that the video output, whatever its type, can understand.
In systems like NVIDIA G-SYNC, AMD FreeSync / Adaptive Sync, etc., the display controller not only sends the data to the monitor or TV, but also controls when each frame starts and ends.
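Conceptually, the display controller's job can be sketched as a loop that walks the front buffer line by line; `sendToOutput` here is a purely hypothetical stand-in for the actual video encoding.

```cpp
#include <cstdint>
#include <vector>

struct Pixel { std::uint8_t r, g, b, a; };

// Hypothetical stand-in for the physical video link (HDMI, DisplayPort...).
void sendToOutput(const Pixel* scanline, int width) { /* encode and transmit */ }

// Conceptual scanout: the display controller walks the front buffer
// line by line and turns the pixel data into a video signal.
void scanout(const std::vector<Pixel>& front, int width, int height) {
    for (int y = 0; y < height; ++y) {
        sendToOutput(&front[y * width], width);
    }
    // With G-SYNC / FreeSync / Adaptive Sync, the controller would also
    // decide here when the frame starts and ends, instead of following
    // a fixed refresh interval.
}
```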
Image buffers and traditional 3D rendering
Although our screens are 2D, 3D scenes have been rendered in real time for more than twenty years. To do this, the back buffer must be divided into two distinct buffers:
- Color buffer: stores the value of the color and alpha components of each pixel.
- Depth buffer: stores the depth value of each pixel.
When rendering a 3D scene, several pixels can end up with different values in the depth buffer while occupying the same position in the other two dimensions. Those extra pixels have to be removed, since they are not visible to the viewer; the removal is done by comparing each pixel's value in the depth buffer and discarding the ones farthest from the camera.
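A minimal sketch of that depth comparison, assuming a "smaller depth means closer" convention and invented buffer names:

```cpp
#include <cstdint>
#include <limits>
#include <vector>

struct Pixel { std::uint8_t r, g, b, a; };

constexpr int WIDTH = 640, HEIGHT = 480;

// Color buffer: final RGBA value of each pixel on screen.
std::vector<Pixel> colorBuffer(WIDTH * HEIGHT);
// Depth buffer: distance of each pixel from the camera,
// initialized to "infinitely far away".
std::vector<float> depthBuffer(WIDTH * HEIGHT,
                               std::numeric_limits<float>::max());

// Depth test: keep the incoming pixel only if it is closer to the
// camera than the pixel already stored at the same screen position.
void writePixel(int x, int y, Pixel color, float depth) {
    const int i = y * WIDTH + x;
    if (depth < depthBuffer[i]) {   // closer than what is already there?
        depthBuffer[i] = depth;     // remember the new nearest depth
        colorBuffer[i] = color;     // the farther pixel is discarded
    }
}
```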
The depth buffer
The depth buffer, better known as the Z-buffer, stores the distance of each pixel in a 3D scene from the point of view or camera. It can be generated at either of two points in the pipeline:
- After the rasterization stage and before the texturing stage.
- After the texturing stage.
Since the Pixel/Fragment Shader is the most computationally expensive stage, generating the Z-buffer only after the scene has been textured means computing the color value, and thus running the Pixel or Fragment Shader, for hundreds of thousands or even millions of pixels that are never visible.
The downside of doing it before texturing is that, without the color and transparency values of the alpha channel, a pixel sitting behind a transparent one can be removed prematurely, making it wrongly invisible in the final scene.
To avoid this, developers usually render the scene in two passes: the first leaves out the transparent or semi-transparent objects, while the second renders only those objects, as in the sketch below.
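Here is a sketch of that two-pass idea, with a hypothetical `draw` helper standing in for the real pipeline:

```cpp
#include <algorithm>
#include <vector>

struct Object {
    float distanceToCamera;
    bool  transparent;
};

// Hypothetical draw call standing in for the real pipeline.
void draw(const Object& obj, bool writeDepth) { /* rasterize and shade */ }

void renderScene(std::vector<Object> objects) {
    // Pass 1: opaque objects only, with depth writes enabled, so the
    // Z-buffer ends up holding correct occlusion information.
    for (const Object& obj : objects)
        if (!obj.transparent) draw(obj, /*writeDepth=*/true);

    // Pass 2: transparent objects, sorted back to front and drawn
    // without depth writes, so they blend over the opaque geometry
    // behind them instead of erasing it.
    std::sort(objects.begin(), objects.end(),
              [](const Object& a, const Object& b) {
                  return a.distanceToCamera > b.distanceToCamera;
              });
    for (const Object& obj : objects)
        if (obj.transparent) draw(obj, /*writeDepth=*/false);
}
```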
Image buffers and deferred rendering
One of the novelties that appeared in rendering in the late 2000s was deferred rendering, which consists of first rendering the scene into a series of additional buffers, collectively known as the G-Buffer, and then computing the scene's lighting in a later step.
The need for more than one image buffer raises the storage demands on memory, as well as the fill rate, and therefore requires higher bandwidth.
But deferred rendering was devised to correct one of the performance issues of classic rasterization, in which every change in a pixel's value, whether in luminance or chroma, means rewriting it in the image buffer, which translates into an enormous amount of data traffic to VRAM.
With deferred rendering, the cost of lighting stops being tied to the amount of geometry multiplied by the number of lights: each screen pixel is shaded once per light, which greatly reduces the computational cost of rendering scenes with multiple light sources.
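A rough sketch of the lighting pass under those assumptions, with invented structure names; the point is that it loops over screen pixels and lights, not over the scene's geometry:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One G-Buffer texel per screen pixel: the geometry pass stores
// everything the lighting pass will need, instead of a final color.
struct GBufferTexel {
    Vec3  albedo;   // surface color, from texturing
    Vec3  normal;   // surface orientation
    float depth;    // distance from the camera
};

struct Light { Vec3 position, color; };

// Placeholder shading term; a real engine would use the normal and
// depth stored in the G-Buffer to compute the light's contribution.
Vec3 shade(const GBufferTexel& g, const Light& l) {
    return Vec3{g.albedo.x * l.color.x,
                g.albedo.y * l.color.y,
                g.albedo.z * l.color.z};
}

// Lighting pass: iterate over screen pixels, not over scene geometry.
// Each pixel is shaded once per light, independently of how much
// geometry the scene contains.
std::vector<Vec3> lightingPass(const std::vector<GBufferTexel>& gbuffer,
                               const std::vector<Light>& lights) {
    std::vector<Vec3> color(gbuffer.size(), Vec3{0, 0, 0});
    for (std::size_t i = 0; i < gbuffer.size(); ++i)
        for (const Light& light : lights) {
            const Vec3 c = shade(gbuffer[i], light);
            color[i].x += c.x; color[i].y += c.y; color[i].z += c.z;
        }
    return color;
}
```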
Image buffers as shadow maps
Since rasterization is not very efficient at computing indirect lighting, tricks have to be used to generate shadow maps.
The simplest method is to render the scene using the light source as if it were the camera, but instead of rendering a complete image, only the depth buffer is generated, which is later used as the shadow map.
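A simplified sketch of the two steps, with fabricated data and a conventional depth bias; all names here are illustrative:

```cpp
#include <vector>

// Pass 1 (conceptual): render the scene from the light's position and
// keep only the depth buffer. That depth buffer IS the shadow map.
// Here a tiny 2x2 map is fabricated purely for illustration.
std::vector<float> renderShadowMap() {
    return {1.0f, 3.5f,
            2.0f, 9.0f};   // depth of the nearest occluder per texel
}

// Pass 2: while shading the camera view, a surface point is in shadow
// if the shadow map recorded something closer to the light than it.
bool inShadow(const std::vector<float>& shadowMap, int x, int y,
              int mapWidth, float distanceToLight,
              float bias = 0.005f) {  // small bias avoids self-shadowing
    const float occluderDepth = shadowMap[y * mapWidth + x];
    return distanceToLight > occluderDepth + bias;
}
```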
This system is set to be phased out in the coming years with the rise of ray tracing, which makes it possible to create real-time shadows much more precisely, while requiring less computing power and without generating a huge shadow map in VRAM.