There are several ways to speed up artificial intelligence algorithms: some are more complex than others, some are faster but less versatile, and others are designed for a wide range of uses. The types of processors we are going to present are used every day in different places and for different applications.
Any hardware is good for AI
Before we get into the different types of hardware for AI, we have to keep in mind that, at bottom, we are talking about running programs, so any kind of processing unit can run algorithms dedicated to artificial intelligence. But in the same way that we don't use the CPU to run graphics workloads, we don't use it for artificial intelligence either.
So the claim that all hardware can be used for artificial intelligence should be taken with a grain of salt: obviously we can run the algorithms on any hardware, but the level of efficiency is much lower when we are talking about general-purpose units that are not specialized.
In general, it is the units designed for matrix calculation that have a huge advantage over other types of units when running artificial intelligence algorithms. The reason is that, at the mathematical level, this kind of linear algebra is used constantly and repeatedly, and believe us, CPUs and GPUs are not optimized for this type of calculation.
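To make the point concrete, here is a minimal Python sketch (hypothetical layer sizes, using NumPy) showing that a single dense neural-network layer boils down to one matrix multiplication plus cheap element-wise work; an inference pass simply repeats this pattern layer after layer.

```python
import numpy as np

# Hypothetical sizes: a batch of 32 inputs with 128 features each,
# feeding a dense layer with 64 outputs.
x = np.random.rand(32, 128).astype(np.float32)   # input activations
W = np.random.rand(128, 64).astype(np.float32)   # layer weights
b = np.random.rand(64).astype(np.float32)        # layer bias

# The core of the layer is one matrix multiplication; the bias add and
# activation are inexpensive element-wise operations.
y = np.maximum(x @ W + b, 0.0)    # ReLU(x·W + b)
print(y.shape)                    # (32, 64)
```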
First type of processors for AI: systolic arrays
Systolic arrays are a type of unit that we have already discussed in the article titled "What are AI processors and how do they work?" here on HardZone, so if you want a more detailed look at them, we recommend you read that article, where we explain in more depth how they work.
Systolic arrays are based on the same basic concept as the rest of the processors; in this case it is a grid of ALUs in which each ALU sends its result not to the registers but to the ALU next to it, except at the edges of the grid, which is where the data to be processed enters and the results come out.
This setup offers enormous computing power relative to the area it occupies and the energy it consumes, but its simplicity limits which artificial intelligence algorithms it can run: its capabilities are limited not in power but in the type and complexity of the algorithms it can work with.
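As an illustration of the general idea, not of any specific chip, here is a small Python simulation of an output-stationary systolic array: each processing element multiplies the operand arriving from its left neighbour by the one arriving from above, adds the product to a local accumulator, and forwards both operands onward, so data only enters and leaves at the edges.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-level sketch of an output-stationary systolic array computing A @ B."""
    n, k = A.shape
    _, m = B.shape
    acc = np.zeros((n, m))      # one accumulator per processing element (PE)
    a_reg = np.zeros((n, m))    # operand travelling left-to-right
    b_reg = np.zeros((n, m))    # operand travelling top-to-bottom
    for t in range(k + n + m - 2):              # cycles until the pipeline drains
        # operands move one PE per cycle
        a_next, b_next = np.zeros((n, m)), np.zeros((n, m))
        a_next[:, 1:] = a_reg[:, :-1]
        b_next[1:, :] = b_reg[:-1, :]
        # rows of A and columns of B enter at the edges, skewed one cycle apart
        for i in range(n):
            a_next[i, 0] = A[i, t - i] if 0 <= t - i < k else 0.0
        for j in range(m):
            b_next[0, j] = B[t - j, j] if 0 <= t - j < k else 0.0
        # every PE multiplies its two operands and accumulates locally
        acc += a_next * b_next
        a_reg, b_reg = a_next, b_next
    return acc

A, B = np.random.rand(4, 3), np.random.rand(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

Note how no PE ever writes to a shared register file: results stay put while operands ripple through the grid, which is exactly why the design is so dense and power-efficient, and also why it only fits this kind of regular, repetitive computation.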
Second type of processors for artificial intelligence: ASICs
The second type of specialized AI processor is an evolution of the first: as with systolic arrays, all the units are interconnected in a grid, but with one important difference.
Each element is not an ALU but a complete processor with its own local memory that communicates with the processors adjacent to it. Therefore, this type of unit can execute more complex AI algorithms, which gives greater versatility when programming them with tools like TensorFlow and PyTorch, but because of their greater complexity they do not have the excellent power-per-area and power-consumption ratios of systolic arrays.
Their main advantage? The fact that their units are more complex allows them to execute any type of algorithm, whereas systolic arrays are limited in this regard, since they are designed with area and power consumption in mind. In addition, systolic arrays can be found inside other types of processors, while specialized ASICs are standalone units.
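As a hedged illustration of what "greater versatility when programming" means in practice, here is a minimal PyTorch sketch (arbitrary sizes, nothing vendor-specific): the developer writes ordinary high-level code, and it is the framework's backend that decides whether the matrix work runs on the CPU, a GPU, or a dedicated accelerator of this kind.

```python
import torch

# A tiny two-layer model written against the generic PyTorch API.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

x = torch.randn(32, 128)   # hypothetical batch of 32 inputs
y = model(x)               # the matrix multiplications happen inside Linear
print(y.shape)             # torch.Size([32, 10])

# On hardware with an accelerator backend, the same code is simply moved to
# another device (model.to(device) with the vendor's device string); the
# algorithm itself does not change.
```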
Third type of processors for artificial intelligence: FPGA
The third type of specialized processor for AI is the FPGA, not only as a dedicated chip but also integrated into SoCs in the form of eFPGAs. The reason is that FPGAs, by their nature, allow multiple simultaneous inputs and outputs, as well as interconnection between the different elements that compose them, thanks to their ability to be reconfigured.
The use of FPGAs configured as processors for AI is not a rarity: for example, for AI in its Azure servers, Microsoft does not use systolic arrays or ASICs but FPGAs. Their biggest drawback? Since the cores are configurable, their performance per unit of area and power consumption is worse than that of the other two solutions.
Their biggest advantage? Being configurable, we can make an FPGA behave like an ASIC or like a systolic array: when the programmability of ASICs is needed, the FPGA or set of FPGAs can be configured as such; if, on the contrary, the raw power of a systolic array is needed, the FPGA can be configured as that type of unit.
Fourth type of processors for artificial intelligence: GPUs
Graphics cards can also be used to run AI algorithms, and no, we are not referring to NVIDIA's GPUs and their Tensor Cores: any matrix computation can be vectorized, that is, turned mathematically into a computation on vectors that can be executed on the conventional SIMD units of GPUs. The efficiency is not as high as with the other units and their comparative performance is much lower, but it is superior to that of a CPU.
One of the keys to using graphics cards for AI is support for low-precision data formats, which are typically not used in graphics computing but are common in processors for artificial intelligence. This means that these GPUs support such formats and can work with data at these precisions.
For matrix calculation, GPUs vectorize the matrix, since they were not designed to work with matrices but with vectors. This vectorization process is necessary for the GPU to do the calculations, but it makes them much slower at this task than the other three types of units we discussed above.
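The sketch below (illustrative sizes, NumPy on the CPU standing in for a GPU's SIMD lanes) shows both ideas at once: the matrices are stored in a low-precision format, FP16, and the product is computed row by row as a set of vector operations, which is the kind of decomposition a GPU's SIMD units actually execute.

```python
import numpy as np

# Low-precision storage: FP16 halves memory traffic versus FP32, which is
# one reason AI-oriented GPUs expose these reduced-precision formats.
A = np.random.rand(128, 256).astype(np.float16)
B = np.random.rand(256, 64).astype(np.float16)

# The matrix product decomposed into vector work: each step combines one row
# of A with all of B in a single vectorized operation, the kind of wide SIMD
# computation a GPU runs across many rows in parallel.
C = np.empty((A.shape[0], B.shape[1]), dtype=np.float16)
for i in range(A.shape[0]):
    C[i, :] = A[i, :] @ B

# Sanity check against a full-precision reference (FP16 introduces small
# rounding error, hence the loose tolerance).
ref = A.astype(np.float32) @ B.astype(np.float32)
assert np.allclose(C.astype(np.float32), ref, rtol=1e-2)
```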