The use of micro-polygons is undoubtedly one of the most important changes we will see in the years to come, but for that we must first overcome the minimum fragment size of 2 × 2 pixels, a limitation that makes common GPUs extremely inefficient when rendering micro-polygons with the conventional 3D pipeline.
What is a fragment? Well, it is what you get when a triangle is rasterized and converted into an array of pixel-sized samples, a process that occurs in every frame processed by the conventional 3D pipeline, which is used by 100% of games. What is the 2 × 2 pixel limitation? Pixel/fragment shaders always work in 2 × 2 pixel blocks, because applying a bilinear filter requires a total of 4 adjacent pixels. Is this really a limit? For years it was not, but it has recently become one, since UE5 uses micro-polygons to build its geometry.
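To make the role of that 2 × 2 block concrete, here is a minimal sketch of bilinear filtering in Python (illustrative code, not taken from any real GPU or driver): the filter always reads the 2 × 2 block of adjacent texels surrounding the sample point, which is the same footprint in which GPUs shade their fragments.

```python
import math

def bilinear_sample(texture, u, v):
    """Sample a 2D texture with bilinear filtering.

    `texture` is a list of rows of grayscale values; (u, v) are
    continuous texel coordinates. The filter always needs the 2x2
    block of adjacent texels around the sample point, which is why
    fragment shaders work on 2x2 pixel quads.
    """
    x0, y0 = int(math.floor(u)), int(math.floor(v))
    fx, fy = u - x0, v - y0
    # The four adjacent texels of the 2x2 footprint.
    t00 = texture[y0][x0]
    t10 = texture[y0][x0 + 1]
    t01 = texture[y0 + 1][x0]
    t11 = texture[y0 + 1][x0 + 1]
    # Interpolate horizontally, then vertically.
    top = t00 * (1 - fx) + t10 * fx
    bottom = t01 * (1 - fx) + t11 * fx
    return top * (1 - fy) + bottom * fy
```

With a micro-polygon covering a single pixel, each 2 × 2 quad ends up shading mostly helper pixels that are thrown away, which is where the inefficiency comes from.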
The advantage of using micro-polygons? They allow models to be created in a simpler and more realistic way than conventional polygons, but their use brings changes to the pipeline, such as the SSD being compulsory in UE5.
Nanite, one of the pillars of UE5
Last year, Epic previewed its Unreal Engine 5, whose two most impressive technologies were Lumen and Nanite. It is the latter, Nanite, that we are going to talk about, because it is going to be crucial for the mass adoption of SSDs in gaming PCs: it requires this type of storage to function efficiently.
Nanite is nothing more than the application of REYES to the conventional graphics pipeline, REYES being the old graphics pipeline that Pixar used for its films before giving way to ray tracing with Cars. The core concept of REYES is the use of micro-polygons, a micro-polygon being a polygon so small that, when rasterized, it covers roughly a single pixel rather than a whole mesh of them.
The drawback is that for ray tracing we must have a map of all the geometry of the scene in the form of a spatial data structure. Storing the position of every micro-polygon would result in a BVH of such a size that there would be no memory left to hold it. However, basic shapes made up of micro-polygons can be created and combined to compose more complex ones, and this is where the mandatory SSD comes into play in games that use UE5.
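A rough back-of-the-envelope calculation shows why a per-micro-polygon BVH is unworkable. The numbers below are illustrative assumptions, not figures from the article: one micro-polygon per pixel of a 4K frame, a binary BVH with about 2N − 1 nodes, and 32 bytes per node.

```python
# Illustrative estimate (assumed numbers): a BVH over every
# micro-polygon visible in a single 4K frame.
pixels_4k = 3840 * 2160          # one micro-polygon per pixel
bvh_nodes = 2 * pixels_4k - 1    # a binary BVH has roughly 2N-1 nodes
bytes_per_node = 32              # bounding box + child indices
bvh_bytes = bvh_nodes * bytes_per_node
bvh_mib = bvh_bytes / 2**20      # roughly 500 MiB for one view
```

And that is only the geometry visible in one frame; a full level, with everything off-screen included, would be far larger, which is why Nanite instances basic shapes instead.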
Why will NVMe SSDs be mandatory with UE 5?
The idea is to create a map of the geometry of the scene, that is to say of the level, using two different indices:
- Each basic shape built from micro-polygons that is used to construct the geometry of the scene is stored in memory and assigned an index.
- Each object on the screen consists of a hierarchical structure in the form of a graph or tree which specifies, on the one hand, the basic shape that constitutes each part of the model and, on the other, which other shapes it connects to in order to build the model.
- The second index does not store the base shapes themselves, but rather references to them, thus reducing the memory footprint of the geometry map.
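The two indices can be sketched as follows. This is a hypothetical toy model, not Nanite's actual data layout: a table of base shapes stored once, and scene nodes that hold only an integer reference to a shape plus links to child nodes.

```python
# First index: one entry per unique cluster of micro-polygons,
# stored once in memory (counts are made-up example values).
BASE_SHAPES = [
    {"name": "cube_cluster", "micropolys": 10_000},
    {"name": "arch_cluster", "micropolys": 25_000},
]

def make_node(shape_index, children=None):
    # Second index: a node stores only a *reference* (the index),
    # so reusing a shape costs one integer, not thousands of polygons.
    return {"shape": shape_index, "children": children or []}

# A wall built from four copies of the same cube cluster.
wall = make_node(0, [make_node(0), make_node(0), make_node(0)])

def count_referenced_polys(node):
    """Micro-polygons the scene graph *refers to* (with repetition)."""
    total = BASE_SHAPES[node["shape"]]["micropolys"]
    return total + sum(count_referenced_polys(c) for c in node["children"])

def count_stored_polys():
    """Micro-polygons actually *stored* in memory (each shape once)."""
    return sum(s["micropolys"] for s in BASE_SHAPES)
```

Here the wall references 40,000 micro-polygons while only 35,000 are ever stored, and the gap grows with every additional reuse of a shape.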
What does this have to do with SSDs? Simple: instead of keeping the entire hierarchical structure of a level in memory, it is stored on the SSD. The method is the same as that of a virtual texturing system, with the difference that in this case we stream the geometry of the scene from the SSD directly to VRAM. Thanks to this, the SSD is not only used to transfer large blocks of textures on the fly; we can also make geometrically complex structures appear in front of the user at high speed and without overloading VRAM.
In the same way that the SSD makes it possible to keep the unused parts of a mega-texture's virtual texture maps off-chip and load only the parts that are visible or close to the game scene, the same is now done with the scene's geometry.
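The streaming scheme described above can be sketched like a page cache. This is a simplified illustration under assumed names (a dict stands in for the SSD, an LRU-style cache for the VRAM budget); real engines stream based on visibility and distance, not just recency.

```python
from collections import OrderedDict

# The whole level's geometry lives on the "SSD" (a dict here);
# only the clusters currently needed are resident in VRAM.
SSD = {i: f"geometry_cluster_{i}" for i in range(1000)}

class VRAMCache:
    def __init__(self, capacity):
        self.capacity = capacity          # VRAM budget, in clusters
        self.resident = OrderedDict()     # cluster id -> geometry data

    def fetch(self, cluster_id):
        if cluster_id in self.resident:
            # Already in VRAM: just mark it as recently used.
            self.resident.move_to_end(cluster_id)
        else:
            # Miss: evict the least recently used cluster if the
            # budget is full, then stream the new one from the SSD.
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)
            self.resident[cluster_id] = SSD[cluster_id]
        return self.resident[cluster_id]

cache = VRAMCache(capacity=4)
for cid in [1, 2, 3, 1, 4, 5]:   # clusters requested as the camera moves
    cache.fetch(cid)
```

Because every cache miss hits storage, the latency of that storage directly bounds how fast new geometry can appear on screen, which is why a fast NVMe SSD, rather than a hard drive, is what makes the scheme viable.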