Last week, we started a survey on the subject of DLSS. You took part actively: 5,834 votes have been cast to date.
The evaluation of the data shows that Nvidia’s AI renderer is very popular in the GameStar community. However, almost one in nine respondents stated that they did not know what deep learning super sampling actually is. We therefore take a closer look at the numbers and explain what lies behind the technology.
This is how you voted
Almost one in three respondents said they use DLSS whenever it is available. One in seven uses the technology when performance is too low, for example because ray tracing is enabled. For one in twenty participants, DLSS is out of the question: they stick with the native resolution despite owning the corresponding hardware.
Above-average number of RTX graphics cards
How many of the respondents own a DLSS-capable graphics card (RTX 2000 or RTX 3000) cannot be determined exactly from the data. What can be seen, however, is that at least half (50.19 percent) and up to almost two thirds (65.28 percent) do, because DLSS is in principle available to them.
Hardware guides on GameStar Plus
Do you already own a graphics card from Nvidia’s Geforce RTX 3000 series and want to know which components you should replace to get the full performance out of your graphics card? Then our upgrade guides on GameStar Plus might be something for you:
For comparison: the Steam Hardware Survey for April 2021 found that only 11.74 percent of the systems surveyed had a DLSS-capable graphics card installed. The AI renderer is therefore particularly widespread among the GameStar readership.
Nevertheless, around twelve percent of the participants in our survey said they did not know what DLSS is. For this reason, we want to quickly explain again what DLSS is and what difference it makes.
What is DLSS?
Video: What is DLSS? – Nvidia’s new anti-aliasing explained in detail (9:05)
DLSS is an AI-based algorithm that can increase performance significantly in some cases. The AI is trained to infer a high-resolution image from a low-resolution one. For example, a game can initially be rendered in Full HD (1,920 x 1,080 pixels) before DLSS intervenes in the rendering process and scales the image up to 4K (3,840 x 2,160 pixels). In this example, the GPU only has to render around 2.1 million pixels itself instead of the roughly 8.3 million pixels of a native 4K frame.
In contrast to conventional scaling methods, which only interpolate from the source resolution, DLSS can add image information that is not contained in it. The reason for this is that DLSS has previously been trained on very high-resolution images of a game (for example in 16K, 15,360 x 8,640 pixels).
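To make the difference a little more tangible, here is a minimal, purely illustrative Python sketch. It is not Nvidia’s implementation, and the super-resolution model mentioned in the final comment is a hypothetical placeholder; the sketch only shows that conventional scaling merely repeats or blends the pixels that were actually rendered, while a trained upscaler can reconstruct detail beyond them.

```python
import numpy as np

# A rendered 1080p frame as an RGB array (zeros stand in for real pixel data).
low_res = np.zeros((1080, 1920, 3), dtype=np.uint8)   # ~2.1 million pixels

# Conventional scaling (here: nearest-neighbour): every rendered pixel is
# simply repeated, so the 4K output contains no information that was not
# already present in the 1080p input.
naive_4k = low_res.repeat(2, axis=0).repeat(2, axis=1)
print(naive_4k.shape)  # (2160, 3840, 3) -> ~8.3 million pixels

# A learned upscaler, by contrast, has been trained on very high-resolution
# reference images and can reconstruct plausible detail that is missing from
# the input. Hypothetical call, not a real DLSS API:
# sharp_4k = trained_super_resolution_model(low_res)
```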
As a result, DLSS can sometimes even deliver better results than native rendering. That is the exception, but in most cases DLSS manages to look at least as good as the native resolution while delivering significantly more frames per second. Our GameStar test also confirms this:
More on the subject
Is Nvidia DLSS 2.0 a game changer?
The algorithm is trained via machine learning in Nvidia’s data centers. The results are then incorporated into the Geforce driver and executed on the Tensor cores of the RTX 2000 and RTX 3000 cards.
Anyone who has not yet managed to get hold of a new graphics card will find an update on the current market situation here, with prices and availability for the RTX 3000 cards as well as the PlayStation 5 and Xbox Series X.