One of the problems with Path Tracing, better known as Monte Carlo Ray Tracing, is the number of samples needed per pixel to get a sharp image with no noise of any kind, which requires computing power out of the reach of personal computers.
In Path Tracing, rays are distributed randomly across each pixel of the scene. When an intersection occurs, an indirect ray (reflection, shadow, etc.) is generated pointing in a random direction, so after a few bounces the ray either leaves the scene or is absorbed by hitting an object with a reflection coefficient of 0 or close to 0. When each ray has finished bouncing around the scene, a sample value is calculated from the information gathered along the ray's path through it.
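To make that loop concrete, here is a minimal, runnable sketch of the per-pixel Monte Carlo loop in Python. The scene is a toy stand-in (a fixed hit probability and a fixed material instead of real geometry), so only the structure of the sampling process matters here:

```python
import random

BACKGROUND = 1.0   # light carried by rays that leave the scene
EMISSION = 0.0     # the toy surface emits no light of its own
REFLECTANCE = 0.5  # fraction of light the toy surface reflects

def scene_intersect():
    """Toy stand-in for a ray/scene intersection test: 70% chance of a hit."""
    return random.random() < 0.7

def trace(depth, max_depth=5):
    """Follow one ray: at each step it either leaves the scene, is absorbed,
    or spawns an indirect bounce (modeled here as a simple recursion)."""
    if depth >= max_depth:
        return 0.0                      # path cut off: treated as absorbed
    if not scene_intersect():
        return BACKGROUND               # the ray left the scene
    if REFLECTANCE <= 0.01:
        return EMISSION                 # absorbed by a near-black surface
    return EMISSION + REFLECTANCE * trace(depth + 1)  # the indirect bounce

def render_pixel(samples=256):
    """Average many random paths: the mean of the samples is the pixel value."""
    return sum(trace(depth=0) for _ in range(samples)) / samples

print(render_pixel(16))    # noisy estimate from few samples
print(render_pixel(4096))  # much smoother estimate from many samples
```

The averaging is where the noise comes from: the error of a Monte Carlo estimate shrinks with the square root of the sample count, so halving the noise costs roughly four times as many samples.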
The fact that the ray distribution is random means a huge number of samples is needed, so enormous computational capacity is required to get a clean image from a Path Traced scene, to the point that CG films using this technique require powerful supercomputers.
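A back-of-the-envelope count makes the scale clear. With illustrative numbers (not figures from any particular film):

```python
# Rays needed for a single 4K frame at film-like quality (illustrative values).
width, height = 3840, 2160
samples_per_pixel = 1024   # production renders often use hundreds to thousands
bounces = 8                # average path length per sample

rays_per_frame = width * height * samples_per_pixel * bounces
print(f"{rays_per_frame:,} rays")  # ~68 billion rays for one frame
```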
The Denoiser Engine
Such is the computational load of this type of rendering that companies like Disney and its subsidiary Pixar decided a few years ago to turn to deep learning networks for image reconstruction, so that the networks imagine/hallucinate the complete frames of their CG films from just a few samples. Thus, with an image that has far fewer samples per pixel, we obtain the same result as an image with many samples per pixel by applying a "Denoising" (noise elimination) algorithm via deep learning, with dedicated processors running that algorithm on the images produced by the GPU.
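The following toy illustrates the idea: render with few samples, then denoise the result. A real pipeline runs a trained neural network at the denoising step; a Gaussian blur is used here purely as a stand-in so the example stays self-contained:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_noisy(height, width, samples_per_pixel, rng):
    """Simulate a Monte Carlo render: the true image plus noise whose size
    shrinks with the square root of the sample count."""
    true_image = np.linspace(0.0, 1.0, width)[None, :].repeat(height, axis=0)
    noise = rng.standard_normal((height, width)) / np.sqrt(samples_per_pixel)
    return true_image + noise, true_image

rng = np.random.default_rng(0)
noisy, reference = render_noisy(256, 256, samples_per_pixel=16, rng=rng)
denoised = gaussian_filter(noisy, sigma=2.0)  # stand-in for the neural network

print("error before:", np.abs(noisy - reference).mean())
print("error after: ", np.abs(denoised - reference).mean())
```

The neural version does far better than a blur because it learns to preserve edges and texture while removing only the Monte Carlo noise, usually helped by auxiliary buffers such as albedo and normals.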
There is a Disney/Pixar patent titled Multi-scale architecture for denoising Monte Carlo renderings using neural networks, which describes exactly this Denoiser Engine concept, and I think it is something that will become standard in the world of 3D graphics, not so much for games as for composing 3D images.
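As a rough sketch of what "multi-scale" means here: denoise the image at several resolutions and recombine them, so that large, low-frequency noise gets handled at the coarse scales. The per-level filter below is a placeholder for a per-scale network, and the uniform averaging is a simplification, not the scheme from the patent; it also assumes power-of-two image dimensions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def multiscale_denoise(image, levels=3):
    # Build a pyramid of progressively downsampled copies of the image.
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(zoom(pyramid[-1], 0.5, order=1))
    # Denoise each level (placeholder filter), upsample back, and average.
    result = np.zeros_like(image)
    for level, img in enumerate(pyramid):
        denoised = gaussian_filter(img, sigma=1.0)    # per-scale "network"
        upsampled = zoom(denoised, 2.0 ** level, order=1)
        result += upsampled[: image.shape[0], : image.shape[1]]
    return result / levels

rng = np.random.default_rng(0)
noisy = np.ones((256, 256)) + rng.standard_normal((256, 256)) * 0.2
print("noise before:", noisy.std(), "after:", multiscale_denoise(noisy).std())
```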
In other words, the idea is to pair the GPU with a coprocessor specialized in executing neural networks and dedicated to the denoising process. In its early versions, applied to contemporary GPUs, this coprocessor could be connected to the GPU through an NVLink interface (NVIDIA) or an xGMI interface (AMD) on the same graphics card, receive the noisy framebuffer via a DMA mechanism that copies it into the coprocessor's own memory, and apply the noise elimination process to the image generated by the GPU.
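A hypothetical sketch of that flow is below. These classes stand in for the GPU, the NVLink/xGMI link and the denoiser coprocessor; none of this is a real driver API, it simply names the steps in order:

```python
import numpy as np

class GPU:
    def render_path_traced(self, rng, shape=(4, 4), spp=8):
        # Stand-in for a low-sample path-traced render: signal plus noise.
        return np.ones(shape) + rng.standard_normal(shape) / np.sqrt(spp)

class DenoiserCoprocessor:
    def dma_copy(self, framebuffer):
        # DMA over NVLink/xGMI: the coprocessor gets its own copy of the frame.
        return framebuffer.copy()

    def run_network(self, frame):
        # Placeholder for the neural-network inference pass.
        return np.full_like(frame, frame.mean())

rng = np.random.default_rng(1)
gpu, copro = GPU(), DenoiserCoprocessor()
noisy_frame = gpu.render_path_traced(rng)                     # 1. render with few samples
clean_frame = copro.run_network(copro.dma_copy(noisy_frame))  # 2. copy, 3. denoise
print(clean_frame)
```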
The Denoiser Engine could mean a return to dual GPUs
Contemporary GPUs, at least those from NVIDIA, have a series of special units to accelerate artificial intelligence algorithms. In the case of AMD, these units are found in the CDNA architecture of its recently launched AMD Instinct MI100.
This could lead to the return of dual-GPU setups, in which one GPU renders the Path Traced scene while the other applies the AI denoising algorithm. The idea is very similar to DLSS, but instead of using the Tensor Cores to generate an image at a higher resolution, they are used to reduce the noise in the scene, making it possible to get a clean image with fewer samples.
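A sketch of how such a pipeline overlaps the two jobs: while "GPU 0" path-traces frame N+1, "GPU 1" denoises frame N. Threads and a queue stand in for the two devices here, and both work functions are placeholders:

```python
import queue
import threading

frames_to_denoise = queue.Queue(maxsize=2)

def render_gpu(num_frames):
    for n in range(num_frames):
        noisy = f"noisy frame {n}"     # placeholder for a path-traced frame
        frames_to_denoise.put(noisy)
    frames_to_denoise.put(None)        # signal the end of the stream

def denoise_gpu():
    while (frame := frames_to_denoise.get()) is not None:
        print(frame, "-> denoised")    # placeholder for the AI denoise pass

t0 = threading.Thread(target=render_gpu, args=(4,))
t1 = threading.Thread(target=denoise_gpu)
t0.start(); t1.start()
t0.join(); t1.join()
```

As with any pipeline, this trades a frame of latency for throughput: each GPU stays busy instead of waiting for the other to finish.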
The usefulness of the Denoiser Engine in games
There is still a long way to go before we see ray tracing used exclusively in games, or, failing that, before more games are based purely on it rather than on a combination of ray tracing and rasterization. But we already have cases like Minecraft RTX, which is rendered using pure ray tracing.
As you can see, the Minecraft RTX rendering process ends up spending far more time removing noise from the scene than rendering it. As more and more games use ray tracing to render the scene, performing the denoising process efficiently will become ever more necessary, hence the need for a Denoiser Engine in the medium and long term.