In the 3D graphics pipeline, the triangles that make up the objects in a scene go through a series of transformations until they become the pixels you see on screen. But what happens when a triangle that should not be in the scene is not removed in time?
Well, the GPU still runs it through the entire graphics pipeline, which in environments where millions of polygons move per frame is a huge waste of resources. That is why in this article we explain how AI can be used not only to increase resolution in games, but also to add detail and, above all, to eliminate everything that is not seen yet is unfortunately still calculated.
Adaptive tessellation
Tessellation consists of subdividing the triangles that make up a model's polygonal mesh, adding vertices without changing its external shape, which gives the different models in the scene a smoother, more polished look.
But tessellation can be counterproductive: if we apply it indiscriminately, the most distant objects end up with superfluous geometry that cannot be seen but is still calculated by the GPU. Hence the need for an algorithm that controls the degree of tessellation according to distance.
Algorithms that control tessellation by distance are called adaptive tessellation algorithms. They are not new and have been in use for a long time, but evaluating them consumes a considerable amount of GPU resources.
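To make the idea concrete, the heart of adaptive tessellation is just a function that maps the distance from the camera to a subdivision factor. The sketch below is our own illustration, not any particular engine's code, and the constants are assumed tuning values:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical tuning constants: full detail up to kNearDist,
// detail fades out linearly until kFarDist.
constexpr float kNearDist  = 5.0f;   // metres
constexpr float kFarDist   = 100.0f; // metres
constexpr float kMaxFactor = 64.0f;  // typical hardware tessellation limit
constexpr float kMinFactor = 1.0f;   // no subdivision at all

// Map the distance from the camera to a patch into a tessellation factor.
// Close patches are subdivided heavily; far patches are left almost intact.
float AdaptiveTessFactor(float distanceToCamera) {
    float t = (distanceToCamera - kNearDist) / (kFarDist - kNearDist);
    t = std::clamp(t, 0.0f, 1.0f);            // 0 = near, 1 = far
    return std::lerp(kMaxFactor, kMinFactor, t);
}
```

In a real pipeline this evaluation lives in the tessellation control stage (the Hull Shader in Direct3D terms) and runs for every patch, every frame, which is exactly the recurring cost the article refers to.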
Superfluous geometry
Over-tessellation is just one source of superfluous geometry; there are many more, which in general fall into three common cases:
- The geometry is too small to survive rasterization and texturing; the user will never see it.
- The geometry is hidden behind another, larger object.
- The geometry is outside the camera's view frustum.
Every triangle in the scene that is not discarded will be rasterized and then textured. A large number of triangles the player never sees are therefore calculated by the GPU, needlessly consuming graphics card resources that could go to other work.
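As an illustration of the three cases, here is a minimal CPU-side sketch of the corresponding tests. The data types and thresholds are our own assumptions; real GPUs implement these checks in fixed-function hardware or compute shaders, and the occlusion test in particular is far more sophisticated than shown here:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Minimal types for the sketch; a real engine has its own math library.
struct Vec2 { float x, y; };
struct Vec4 { float x, y, z, w; };        // clip-space vertex position
using Triangle = std::array<Vec4, 3>;

// Case 1: the triangle is too small to cover any pixel sample.
bool SubPixel(const Triangle& t, float screenW, float screenH) {
    Vec2 p[3];
    for (int i = 0; i < 3; ++i) {          // clip -> NDC -> pixels
        p[i] = { (t[i].x / t[i].w * 0.5f + 0.5f) * screenW,
                 (t[i].y / t[i].w * 0.5f + 0.5f) * screenH };
    }
    float area = 0.5f * std::fabs((p[1].x - p[0].x) * (p[2].y - p[0].y) -
                                  (p[2].x - p[0].x) * (p[1].y - p[0].y));
    return area < 0.5f;                    // under half a pixel of coverage
}

// Case 2: occlusion. Simplified single-sample check against a depth
// buffer from an earlier pass; it can over-cull, and real GPUs instead
// test the triangle's screen bounds against a hierarchical-Z pyramid.
bool Occluded(const Triangle& t, const float* depthBuf, int w, int h) {
    for (const Vec4& v : t) {
        int px = std::clamp(int((v.x / v.w * 0.5f + 0.5f) * w), 0, w - 1);
        int py = std::clamp(int((v.y / v.w * 0.5f + 0.5f) * h), 0, h - 1);
        if (v.z / v.w <= depthBuf[py * w + px]) return false; // vertex visible
    }
    return true;                           // every vertex is behind the buffer
}

// Case 3: frustum culling. The triangle is gone if all three vertices
// fall outside the same clip plane (GL convention: -w <= x,y,z <= w).
bool OutsideFrustum(const Triangle& t) {
    for (int plane = 0; plane < 6; ++plane) {
        int outside = 0;
        for (const Vec4& v : t) {
            float d = 0.0f;
            switch (plane) {
                case 0: d =  v.x + v.w; break;   // left
                case 1: d = -v.x + v.w; break;   // right
                case 2: d =  v.y + v.w; break;   // bottom
                case 3: d = -v.y + v.w; break;   // top
                case 4: d =  v.z + v.w; break;   // near
                case 5: d = -v.z + v.w; break;   // far
            }
            if (d < 0.0f) ++outside;
        }
        if (outside == 3) return true;           // fully behind one plane
    }
    return false;
}
```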
Pre-culling to eliminate superfluous geometry
One way to avoid superfluous geometry is what is called pre-culling, which involves pre-rendering the scene so that geometry contributing nothing to the final image can be identified.
How does it work? The entire scene is rendered, but without executing most of the shaders, with the exception of the tessellation stages, namely the Hull Shader and the Domain Shader. Once this is done, any geometry that does not meet the visibility criteria is removed from the list of vertices to render.
Today, the pre-culling process is assisted by fixed-function units responsible for removing unnecessary geometry from the list of items to render, such as the geometry processor in AMD's RDNA GPUs. That processor does not detect the superfluous geometry itself; that job is done by a shader program working in conjunction with it.
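Conceptually, the pass boils down to compacting the index list so that the rasterizer never sees the culled triangles. Here is a minimal sketch of that step, written CPU-side with a stand-in visibility predicate; on the GPU it would be a compute shader writing to a compacted buffer:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Pre-culling as a filter pass: walk the index buffer triangle by triangle
// and keep only the indices of triangles that survive the visibility tests.
// 'isVisible' stands in for the combined tests (sub-pixel, occlusion,
// frustum) sketched earlier.
std::vector<uint32_t> PreCullIndices(
        const std::vector<uint32_t>& indices,
        const std::function<bool(uint32_t, uint32_t, uint32_t)>& isVisible) {
    std::vector<uint32_t> survivors;
    survivors.reserve(indices.size());
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        if (isVisible(indices[i], indices[i + 1], indices[i + 2])) {
            survivors.push_back(indices[i]);
            survivors.push_back(indices[i + 1]);
            survivors.push_back(indices[i + 2]);
        }
    }
    return survivors;   // the rasterizer only ever sees these triangles
}
```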
Adaptive tessellation and geometry using AI
In the same way that we can train an AI to generate or remove pixels in order to increase or decrease the detail of a 2D image, we can also train it to spot the superfluous geometry in a scene and eliminate it directly, without any pre-rendering pass being needed.
How? By using the same techniques used to train AI for autonomous driving, character recognition and other computer vision tasks, but this time training it to recognize all the superfluous geometry and eliminate it during rendering.
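To be clear, no shipping hardware or API does this today. Purely as an illustration of the idea, the trained model could be as simple as a tiny neural network that scores a geometry cluster from a few features and culls it below a threshold. The features, layer sizes and weights here are entirely hypothetical; the weights would come from offline training against ground-truth visibility computed with the classic pre-culling pass:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Hypothetical per-cluster features an AI culler might look at.
struct ClusterFeatures {
    float screenArea;     // projected bounding-box area, in pixels
    float depth;          // normalized distance to camera, 0..1
    float occluderDepth;  // nearest depth already stored at that spot
};

constexpr int kHidden = 4;   // toy network: 3 inputs -> 4 hidden -> 1 output

// Tiny MLP inference: returns the probability that the cluster is visible.
float VisibilityScore(const ClusterFeatures& f,
                      const std::array<std::array<float, 3>, kHidden>& w1,
                      const std::array<float, kHidden>& b1,
                      const std::array<float, kHidden>& w2,
                      float b2) {
    std::array<float, kHidden> h{};
    for (int i = 0; i < kHidden; ++i) {          // hidden layer, ReLU
        float a = w1[i][0] * f.screenArea +
                  w1[i][1] * f.depth +
                  w1[i][2] * f.occluderDepth + b1[i];
        h[i] = std::max(a, 0.0f);
    }
    float out = b2;                              // output neuron, sigmoid
    for (int i = 0; i < kHidden; ++i) out += w2[i] * h[i];
    return 1.0f / (1.0f + std::exp(-out));
}

// Usage sketch: cull the cluster when VisibilityScore(...) < 0.5f.
```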
Another application is creating models with more or less detail depending on distance. In game development, several versions of the same object are usually authored, one for each distance range; with artificial intelligence, versions with less detail could be generated dynamically from a single, fully detailed model, saving designers time.
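Today that reduction is done offline with classical mesh simplification. The sketch below shows the simplest such technique, vertex clustering, purely as a baseline; the article's point is that a trained model could decide where detail matters instead of reducing uniformly:

```cpp
#include <cmath>
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

struct Vertex { float x, y, z; };

// Classical vertex-clustering simplification: snap vertices to a grid,
// merge the vertices that land in the same cell, and drop triangles that
// collapse. Coarser grids produce coarser LODs.
void SimplifyMesh(std::vector<Vertex>& verts,
                  std::vector<uint32_t>& indices,
                  float cellSize) {
    std::map<std::tuple<int, int, int>, uint32_t> cellToNew;
    std::vector<uint32_t> remap(verts.size());
    std::vector<Vertex> newVerts;

    for (size_t i = 0; i < verts.size(); ++i) {
        auto key = std::make_tuple(int(std::floor(verts[i].x / cellSize)),
                                   int(std::floor(verts[i].y / cellSize)),
                                   int(std::floor(verts[i].z / cellSize)));
        auto [it, inserted] =
            cellToNew.try_emplace(key, uint32_t(newVerts.size()));
        if (inserted) newVerts.push_back(verts[i]); // first vertex wins the cell
        remap[i] = it->second;
    }

    std::vector<uint32_t> newIndices;
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        uint32_t a = remap[indices[i]];
        uint32_t b = remap[indices[i + 1]];
        uint32_t c = remap[indices[i + 2]];
        if (a != b && b != c && a != c)             // drop degenerate triangles
            newIndices.insert(newIndices.end(), {a, b, c});
    }
    verts = std::move(newVerts);
    indices = std::move(newIndices);
}
```

Calling SimplifyMesh with a cellSize proportional to viewing distance yields progressively coarser versions of the same source mesh, which is the pipeline an AI model would be trained to improve on.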
Can scene geometry be controlled via AI on current hardware?
On AMD hardware, where we do not have units like the Tensor Cores, the answer is no. On NVIDIA hardware, on the other hand, it is possible, and it would not surprise us if NVIDIA launched a DLSS-style algorithm in the short or medium term, but aimed at the geometry of the scene.
It must be taken into account that the Tensor Cores are barely used in games. The proof is that if we measure GPU performance with NVIDIA's NSight profiling tool, we can see that the Tensor Cores sit almost idle, used only in the image post-processing phase to increase resolution via DLSS.
The idea is that instead of using a fixed-function unit or shaders for tessellation, the AI units themselves, such as the Tensor Cores, would take over the job: just as they generate and remove pixels, they would do the same with geometry. That would result not only in more detail in games, but also in a higher frame rate.
Animations via AI
A preview of what is coming is something Epic Games showed at GDC 2018: animating geometry via AI, which consists of an artificial intelligence learning a person's movement patterns so that they can be applied to a model.
You have to keep in mind that animations in games, especially facial ones, are among the hardest things for development studios to get right, and with artificial intelligence it will be possible to give them convincing, correct animation.
This is another application where AI takes charge of generating geometry in real time, and there are many more, such as the generation of in-between animations, or models derived from others and generated procedurally.
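For context, classical in-betweening simply blends two keyframe poses analytically, as in the sketch below (our own illustrative code); the AI approach described here would replace this blend with a network trained on captured motion, so that in-between poses follow how a body really moves:

```cpp
#include <cmath>
#include <vector>

// A joint pose stored as a rotation quaternion.
struct Quat { float w, x, y, z; };

// Spherical linear interpolation between two joint rotations.
Quat Slerp(const Quat& a, const Quat& b, float t) {
    float cosTheta = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    Quat c = b;
    if (cosTheta < 0.0f) {                 // take the shorter arc
        cosTheta = -cosTheta;
        c = {-b.w, -b.x, -b.y, -b.z};
    }
    if (cosTheta > 0.9995f) {              // nearly parallel: lerp is fine
        return {a.w + t*(c.w - a.w), a.x + t*(c.x - a.x),
                a.y + t*(c.y - a.y), a.z + t*(c.z - a.z)};
    }
    float theta = std::acos(cosTheta);
    float sa = std::sin((1.0f - t) * theta) / std::sin(theta);
    float sb = std::sin(t * theta) / std::sin(theta);
    return {sa*a.w + sb*c.w, sa*a.x + sb*c.x,
            sa*a.y + sb*c.y, sa*a.z + sb*c.z};
}

// Generate an in-between skeleton pose at blend factor t (0..1).
// An AI in-betweener would output this pose from a trained model instead.
std::vector<Quat> InBetweenPose(const std::vector<Quat>& key0,
                                const std::vector<Quat>& key1, float t) {
    std::vector<Quat> pose(key0.size());
    for (size_t j = 0; j < key0.size(); ++j)
        pose[j] = Slerp(key0[j], key1[j], t);
    return pose;
}
```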