Ideally, the code that generates the graphics should be optimized so that it keeps every resource busy at all times in the most efficient way possible: without overloading any one of them and without leaving others idle. However, this requires optimizing not just for each architecture, but for every graphics card configuration. Given the number of models on the market, you can see why this is a Sisyphean task for developers.
Well, a few days ago we told you how NVIDIA plans to use AI in its drivers; however, the drivers cannot use any feature that has not first been implemented in hardware. This is where the other part of our story comes in: the use of RISC-V on GPUs.
RISC-V has been used on GPUs for a long time
We have to start from the fact that RISC-V is a completely free ISA. Unlike ARM, this means not only that those who use it pay no license fee, but also that there are no restrictions on how it can be used. It therefore allows special-purpose processors that lack the full instruction set of a conventional processor yet operate in the same way, making them usable in specific scenarios.
In NVIDIA's case, the brand has been shipping RISC-V processors in its graphics cards since the GTX 1000 series with the Pascal architecture, in specific elements such as:
- The GPU command processor itself has been RISC-V-based for at least five years.
- Second-level cache management is handled by one or more such processors.
- Mechanisms for managing power consumption, voltage, and clock speed rely not only on temperature sensors but also on cores of this type.
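To make the last point more concrete, here is a toy sketch of the kind of DVFS (dynamic voltage and frequency scaling) loop such a small management core might run. The clock ladder, thresholds, and function name are all invented for illustration; this is not NVIDIA's actual firmware logic.

```python
# Hypothetical clock ladder and thresholds for a toy DVFS controller.
CLOCK_STEPS_MHZ = [1200, 1500, 1800, 2100]
TEMP_LIMIT_C = 83                  # throttle above this temperature
UTIL_HIGH, UTIL_LOW = 0.90, 0.50   # utilization thresholds

def next_clock(current_mhz: int, temp_c: float, utilization: float) -> int:
    """Pick the next clock step from temperature and GPU utilization."""
    idx = CLOCK_STEPS_MHZ.index(current_mhz)
    if temp_c > TEMP_LIMIT_C:
        # Thermal headroom exhausted: step down regardless of load.
        return CLOCK_STEPS_MHZ[max(idx - 1, 0)]
    if utilization > UTIL_HIGH:
        # GPU is busy and cool enough: step up if possible.
        return CLOCK_STEPS_MHZ[min(idx + 1, len(CLOCK_STEPS_MHZ) - 1)]
    if utilization < UTIL_LOW:
        # Mostly idle: save power by stepping down.
        return CLOCK_STEPS_MHZ[max(idx - 1, 0)]
    return current_mhz  # steady state

# A hot card steps down; a busy, cool card steps up.
print(next_clock(1800, temp_c=90.0, utilization=0.95))  # -> 1500
print(next_clock(1800, temp_c=60.0, utilization=0.95))  # -> 2100
```

The point of putting this on a dedicated core is that the loop runs continuously, close to the sensors, without ever interrupting the CPU.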
These are therefore specific tasks handled by this type of core; the use of RISC-V on GPUs is nothing new.
And how can RISC-V on GPUs solve the driver problem?
Quite simply, by offloading from the CPU the job of managing the different processes that must run in parallel on the GPU. Specifically, it would be an evolution of the current command processor that applies inference through deep learning, one of the disciplines of artificial intelligence, together with evolutionary algorithms, in order to better manage the GPU's resources.
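The offloading idea itself can be sketched as a producer/consumer pair: the CPU only enqueues command buffers and moves on, while a dedicated "command processor" thread (standing in here for the GPU's embedded RISC-V core) parses and dispatches them. The names and the dictionary-based command format are illustrative assumptions, not a real driver interface.

```python
import queue
import threading

# The CPU pushes command buffers here; the embedded core drains them.
command_queue: "queue.Queue" = queue.Queue()
dispatched = []

def command_processor() -> None:
    """Runs on the 'embedded core': drains the queue so the CPU doesn't."""
    while True:
        cmd = command_queue.get()
        if cmd is None:  # sentinel: shut down
            break
        # In real hardware this would schedule shader cores, DMA, etc.
        dispatched.append(f"dispatched {cmd['op']} #{cmd['id']}")

worker = threading.Thread(target=command_processor)
worker.start()

# The CPU's only job per frame: submit work and move on.
for i in range(3):
    command_queue.put({"id": i, "op": "draw"})
command_queue.put(None)
worker.join()
print(dispatched)
```

The CPU's cost per frame shrinks to a few queue pushes, which is exactly the benefit the following list describes.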
In any case, entrusting the drivers' work to a RISC-V processor placed in the GPU's command processor has the following advantages:
- It relieves the CPU of having to handle these events itself, reducing the CPU time spent on each frame and thus increasing the frame rate.
- It takes charge of precompiling shaders, another task offloaded from the central processor.
- It makes adding new features much easier and bug handling more efficient.
- It gives programmers more precise debugging of the programs running on the graphics chip's cores.
- It facilitates communication with the GPU's ancillary blocks without the CPU having to intervene; that is, it makes use of the video codecs, the DMA unit, or the display controller more efficient. In concrete terms, this reduces various latencies.
- It can learn how each game behaves on our PC and manage graphics settings automatically.
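The last bullet can be pictured as a feedback loop: watch recent frame times and nudge a quality setting toward a target frame rate. The quality scale, target, and thresholds below are invented for illustration; a real driver-side agent would use far richer telemetry and a trained model rather than fixed rules.

```python
TARGET_FRAME_MS = 16.7   # ~60 FPS target, chosen for illustration
QUALITY_LEVELS = ["low", "medium", "high", "ultra"]

def adjust_quality(level, recent_frame_ms):
    """Lower quality when frames are slow, raise it when there is headroom."""
    avg = sum(recent_frame_ms) / len(recent_frame_ms)
    idx = QUALITY_LEVELS.index(level)
    if avg > TARGET_FRAME_MS * 1.1:
        # Consistently missing the target: drop one quality step.
        return QUALITY_LEVELS[max(idx - 1, 0)]
    if avg < TARGET_FRAME_MS * 0.8:
        # Plenty of headroom: raise quality one step.
        return QUALITY_LEVELS[min(idx + 1, len(QUALITY_LEVELS) - 1)]
    return level  # close enough to target, leave settings alone

print(adjust_quality("high", [22.0, 24.0, 21.5]))   # slow  -> "medium"
print(adjust_quality("medium", [10.0, 11.2, 9.8]))  # fast  -> "high"
```

Running this per game, per PC, is what "learning how each game works" would amount to in its simplest form.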
In conclusion, it all comes down to using AI through the RISC-V cores on the GPU.
The big common problem of Intel, NVIDIA and AMD
The problem shared by graphics card makers is the large amount of staff and capital devoted just to creating graphics drivers. If the drivers end up not working well, it affects the hardware's performance and with it its price, because you cannot sell a lower-performing system at an equal or higher price. In other words, a bad driver can cost a lot of money.
This type of optimization, however, requires hardware changes and therefore the creation of new chips. Everything indicates that this is the great advantage NVIDIA holds over its biggest rival, and that it will exploit it to the fullest to win in the benchmarks and further consolidate its huge market share. It also shows the consequences for AMD of ignoring the artificial intelligence market and underestimating its application in gaming graphics hardware.
Fortunately, RISC-V is a completely free-to-use ISA and such solutions won’t be made by NVIDIA alone. Additionally, other companies like Imagination Technologies are already implementing them in their mobile GPUs, where the issues are similar to those on PC.