AMD FSR has shaken up the GPU landscape in recent weeks, not only because it rivals NVIDIA’s DLSS in delivering higher-resolution images using fewer resources, but also because of its open nature, which allows it to be implemented on any GPU regardless of its architecture.
What are Super Resolution algorithms?
Super-resolution algorithms have become very popular lately, especially those that use convolutional neural networks, like NVIDIA’s DLSS. The particularity of AMD’s solution? It is not based on artificial intelligence and therefore does not need to be trained on a set of previous images from the same game, the way a convolutional neural network for computer vision learns a series of common patterns in order to perform a reconstruction.
The advantage AMD FidelityFX Super Resolution has over DLSS when it comes to being implemented in any game is that no AI is used to generate the new frames. To understand why this matters, imagine that we train an AI on a set of images that share a common artifact or image error, then ask the AI to replicate those images or variations of them. Because the AI learned that error during its training phase, it will tend to reproduce it in its output.
This means that an AI-based super-resolution system can draw conclusions that are not exact. This is why NVIDIA DLSS support arrives in games in a controlled trickle, while FidelityFX Super Resolution is an algorithm that can be applied to any game.
FidelityFX Super Resolution is a spatial and not a temporal algorithm
The method AMD chose takes its information from the current image and only the current image, which sets it apart from other resolution-scaling methods such as checkerboard rendering. When we talk about temporality, we mean that part of the information used to generate the higher-resolution version of the current frame comes from the previous frame. FidelityFX Super Resolution lacks this temporality: it takes only the lower-resolution frame the GPU has just generated and uses it to create the higher-resolution version of the image.
But what do we mean by resolution? Simply the number of pixels that make up an image. When we increase an image’s resolution, we increase the number of pixels, which means new pixels are generated to occupy the extra space, but whose color values we do not know. The simplest solution? Interpolation algorithms, which paint the missing pixels with colors halfway between those of their neighbors. The more neighboring pixels you take as a source of information, the more accurate the result will be.
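As a concrete illustration of this idea, here is a minimal bilinear-interpolation sketch in Python using NumPy. This is generic interpolation, not any part of FSR itself: each new pixel is painted with a distance-weighted blend of its four nearest source pixels.

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: int) -> np.ndarray:
    """Upscale a 2D grayscale image by blending the four nearest
    source pixels (a textbook sketch, not FSR's algorithm)."""
    h, w = img.shape
    out_h, out_w = h * scale, w * scale
    out = np.empty((out_h, out_w), dtype=np.float64)
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back into source coordinates.
            sy = y * (h - 1) / (out_h - 1) if out_h > 1 else 0.0
            sx = x * (w - 1) / (out_w - 1) if out_w > 1 else 0.0
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            # Blend the four neighbors, weighted by proximity.
            top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
            bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
            out[y, x] = top * (1 - fy) + bot * fy
    return out
```

Notice how a pixel that lands between a black and a white source pixel gets an intermediate gray value; that averaging is exactly what blurs hard edges.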
The problem is that raw interpolation is not good enough on its own, which is why it is not used: the quality of the resulting images is very low and often departs from reality. Today, most image-editing applications use artificial-intelligence algorithms to generate higher-resolution versions. Focusing exclusively on FidelityFX Super Resolution, its method of obtaining the missing pixel information is not based on direct interpolation, but is more complex.
How AMD FidelityFX Super Resolution increases resolution
We will stick to the official explanation given by AMD, which we will quote below:
FidelityFX Super Resolution consists of two main stages.
The process therefore consists of two consecutive algorithms that run one after the other, where the second takes as input the information generated by the first. Let’s see what each of these steps does.
A scaling pass called EASU (Edge Adaptive Spatial Upsampling), which also performs edge reconstruction. In this step, the input frame is analyzed and the main part of the algorithm detects gradient inversions, essentially examining how the gradients of neighboring pixels differ across a set of input pixels. The intensity of the gradient inversions defines the weights applied to the reconstructed pixels at the display resolution.
To understand the quote, the first thing we need to know is what edge detection in an image refers to. It starts from a black-and-white version of the final image, which is in RGB format: the values of the three channels are added together and divided by 3 to obtain each pixel’s value in a grayscale image. If we then keep only the purely white values, FFFFFF, or purely black ones, 000000, in that grayscale image, we obtain an image that delimits the edges.
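A loose sketch of this idea in Python with NumPy (the function name and threshold are hypothetical, and real edge detectors are more sophisticated): average the RGB channels into grayscale, then mark the pixels where intensity jumps sharply between neighbors, which is one simple way to delimit the edges.

```python
import numpy as np

def edge_map(rgb: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Crude edge sketch: sum the R, G, B channels and divide by 3
    to get grayscale, then flag pixels whose intensity difference
    from a neighbor exceeds a hypothetical `threshold`."""
    gray = rgb.mean(axis=2)  # sum of the three channels divided by 3
    # Absolute intensity change versus the pixel above and to the left.
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1]))
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    return ((gx + gy) > threshold).astype(np.uint8)
```

On an image split into a black half and a white half, only the column of pixels along the boundary is marked, giving the kind of edge-delimiting image described above.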
AMD FidelityFX Super Resolution scales the image generated by edge detection to a resolution much higher than what was originally rendered, namely the output resolution you want to achieve. This is combined with an image buffer that stores the gradient changes of each pixel, that is, the changes in color intensity between pixels. All of this information is then combined with conventional interpolation to obtain the image at a higher resolution.
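To build an intuition for how gradient information can steer interpolation, here is a toy one-dimensional sketch (purely illustrative, not AMD’s actual EASU; the cutoff is a made-up heuristic): samples are linearly interpolated in smooth regions but snapped to the nearest source sample across strong gradients, so edges stay hard instead of being blurred.

```python
import numpy as np

def edge_adaptive_upsample_1d(signal: np.ndarray, scale: int) -> np.ndarray:
    """Toy edge-adaptive upsampling of a 1D signal (illustration only).
    Smooth regions are linearly interpolated; across a strong gradient,
    new samples snap to the nearest source sample to keep the edge hard."""
    n = len(signal)
    grad = np.abs(np.diff(signal))
    cutoff = grad.mean() + 1e-9  # hypothetical edge cutoff
    out = []
    for i in range(n - 1):
        for s in range(scale):
            t = s / scale
            if grad[i] > cutoff:
                # Strong gradient: treat it as an edge, keep it sharp.
                out.append(signal[i] if t < 0.5 else signal[i + 1])
            else:
                # Weak gradient: ordinary linear interpolation.
                out.append((1 - t) * signal[i] + t * signal[i + 1])
    out.append(signal[-1])
    return np.array(out)
```

Upsampling a step signal like `[0, 0, 1, 1]` with this sketch produces no halfway gray value at the step, whereas plain linear interpolation would insert one.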
A sharpening step called RCAS (Robust Contrast Adaptive Sharpening) which extracts details from the pixels in the enhanced image.
The image generated in the first step is enhanced by a modified version of Contrast Adaptive Sharpening; the end result is an image halfway between pure, hard interpolation and what artificial intelligence produces.
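As a rough idea of what contrast-adaptive sharpening does (a simplified sketch, not AMD’s exact RCAS formula; the `strength` parameter is hypothetical): each pixel is sharpened against the average of its four direct neighbors, with the amount scaled down in high-contrast regions so existing edges are not over-sharpened.

```python
import numpy as np

def cas_sharpen(img: np.ndarray, strength: float = 0.2) -> np.ndarray:
    """Loose contrast-adaptive sharpening sketch for a [0, 1] grayscale
    image: sharpen each pixel against its neighbors, attenuating the
    effect where local contrast is already high."""
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            n = np.array([img[y - 1, x], img[y + 1, x],
                          img[y, x - 1], img[y, x + 1]])
            contrast = max(n.max(), img[y, x]) - min(n.min(), img[y, x])
            # Less sharpening where local contrast is already high.
            amount = strength * (1.0 - contrast)
            out[y, x] = np.clip(
                img[y, x] + amount * (img[y, x] - n.mean()), 0.0, 1.0)
    return out
```

A flat image passes through unchanged, while a pixel slightly brighter than its surroundings is pushed further from them, which is the detail-extraction effect the AMD quote describes.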