Temporal Anti-Aliasing
To understand DLAA, we must first understand temporal anti-aliasing, or TAA, and how it has evolved. How does it work? In a way very similar to interpolating textures, but only along the edges that suffer from aliasing. The algorithm samples the color values of neighboring pixels and builds a smoother transition along the affected edge, which makes the jagged "sawtooth" pattern disappear, at least in appearance.
The problem is that doing this with only the information in the current frame is not very precise, which is why TAA also uses information from the previous frame. A temporal buffer is maintained in which each object on screen is given an identifier, which helps the GPU track the speed and movement of each one. With that motion information, the algorithm can reproject data from previous frames and perform the anti-aliasing process more accurately.
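The temporal accumulation described above can be sketched as follows. This is a minimal illustration, not NVIDIA's or any engine's actual implementation: the function name `taa_resolve`, the array shapes, and the fixed blend weight `alpha` are all assumptions made for the example.

```python
import numpy as np

def taa_resolve(current, history, motion, alpha=0.1):
    """Blend the current frame with the reprojected previous frame.

    current, history: float arrays of shape (H, W, 3), colors in [0, 1].
    motion: int array of shape (H, W, 2) with per-pixel motion vectors
            (how far each pixel moved since the previous frame).
    alpha: weight of the current frame; the rest comes from history.
    """
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: look up where each pixel was in the previous frame.
    py = np.clip(ys - motion[..., 1], 0, h - 1)
    px = np.clip(xs - motion[..., 0], 0, w - 1)
    reprojected = history[py, px]
    # Exponential blend: jagged edges average out over several frames.
    return alpha * current + (1.0 - alpha) * reprojected
```

Real TAA implementations add safeguards this sketch omits, such as clamping the history color to the neighborhood of the current pixel to avoid ghosting on fast-moving objects.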
Temporal anti-aliasing is therefore the most effective way so far to avoid jagged edges, but NVIDIA wanted to give it a twist with DLAA.
What is NVIDIA DLAA?
As the name suggests, DLAA is anti-aliasing through deep learning, and it runs these algorithms on the Tensor Cores of the RTX 2000 and RTX 3000 gaming graphics cards.
The first advantage is the ability to recognize which pixels have changed from frame to frame, so the GPU wastes less time running an algorithm equivalent to TAA. This translates into fewer milliseconds to generate an image of the same quality, and therefore a higher FPS rate in games. It sounds a lot like DLSS, although it has its differences, as we will see later.
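The idea of skipping pixels that have not changed between frames can be illustrated with a simple difference mask. This is a conceptual sketch only; the function name `changed_pixel_mask` and the `threshold` value are assumptions, and a real GPU implementation would work very differently.

```python
import numpy as np

def changed_pixel_mask(prev_frame, curr_frame, threshold=0.02):
    """Mark pixels whose color changed noticeably between two frames.

    prev_frame, curr_frame: float arrays of shape (H, W, 3).
    Returns a boolean (H, W) mask; only True pixels would need
    re-processing, letting the algorithm skip stable regions.
    """
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    # A pixel counts as "changed" if any channel moved past the threshold.
    return diff.max(axis=-1) > threshold
```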
But the biggest advantage of DLAA is that, being a deep learning algorithm, it can be trained to recognize the nuances of images with better anti-aliasing. Where DLSS trains the algorithm with higher-resolution images, DLAA trains the GPU with images processed by high-quality anti-aliasing techniques; the algorithm learns to recognize those results and then reproduce them at a fraction of the computing power otherwise required.
DLAA derives from DLSS, but it’s not the same
The big difference between DLSS and DLAA is that the latter is not designed to generate higher-resolution images; instead, it keeps the resolution of the original sample and focuses on improving image quality. At the moment DLAA has been applied in very few games and is still immature, but not every game needs a resolution boost, and for many users image quality is preferable to resolution.
The question here would be: which do you prefer, more pixels or more "beautiful" pixels? Many games use image post-processing techniques, which consist of taking the final buffer before it is sent to the monitor and applying a series of filters and graphical effects. DLAA can learn from these and apply what it learns to improve the appearance of the final image we see on the monitor.
Today, post-processing effects in games are performed through compute shaders, while deep learning algorithms have long been used in graphic design and video editing programs. Anti-aliasing is a post-processing effect, so it is no surprise that NVIDIA developed this technique.
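To make the idea of a post-processing pass concrete, here is a minimal sketch of one: a 3x3 convolution filter applied to the final frame buffer. The function name, the kernel, and the edge-padding choice are illustrative assumptions; real post-processing runs on the GPU in compute shaders, not in NumPy.

```python
import numpy as np

def post_process(frame, kernel):
    """Apply one post-processing filter (a 3x3 convolution)
    to the final frame buffer before it goes to the display.

    frame: float array of shape (H, W, 3), colors in [0, 1].
    kernel: float array of shape (3, 3).
    """
    h, w, _ = frame.shape
    # Pad edges so border pixels can also be filtered.
    padded = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(frame)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(out, 0.0, 1.0)

# A box-blur kernel softens hard edges: a crude stand-in for the
# smoothing that a real anti-aliasing post-process performs.
BLUR = np.full((3, 3), 1.0 / 9.0)
```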
DLAA requires training
Being a deep learning algorithm, the system has to learn a series of visual models for each game in order to run inference and apply DLAA correctly. Keep in mind that every video game has its own visual style, and applying the same inference model to all games could cause more visual issues than it solves.
However, most games share a number of common visual issues that DLAA could solve by learning how to locate and fix them. In that case, the algorithm would not learn to copy a game's visual style, but to correct the errors inherited from the use of certain graphics techniques, which is one of the advantages of this kind of learning.
The second advantage is the enormous computing power of the Tensor Cores, which is almost an order of magnitude greater than that of the SIMD ALUs, or CUDA cores, so this type of algorithm runs very quickly. And, as we said before, the idea is to get the best image quality and the highest frame rate at the same time.