Keep in mind that AMD currently offers two very different families of chips. On the one hand, there are Ryzen chips aimed at the consumer sector, covering gaming, office work and workstations; on the other, EPYC chips are aimed at servers, data centers and high-performance computing systems.
Note that AMD has quietly, and de facto, abandoned its Threadripper chips, which were aimed at workstations and small servers, among other uses.
AMD presents and touts the new EPYC
The first thing AMD CEO Lisa Su wanted to point out about the new EPYC was its AI capabilities. These new chips provide excellent performance for complex artificial intelligence tasks.
One of the most interesting data points from the presentation was the comparison between the 128-core EPYC Turing and the 64-core Intel Xeon Platinum 8592+. Specifically, two EPYC Turing processors (256 cores in total) were pitted against two Intel Xeon Platinum 8592+ (128 cores in total).
For the comparison they used the large Llama 2 model, which has more than 7 billion parameters, running summarization, dialogue and translation tasks.
According to AMD, in summarization its chips deliver a 3.9x performance improvement; in dialogue, an improvement of up to 5.4x; and in translation, up to 2.5x.
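To make the workload concrete, here is a minimal sketch of how tokens-per-second could be measured for the three task types mentioned above. The model ID, prompts and generation settings are illustrative assumptions, not the harness either vendor actually used.

```python
# Hypothetical throughput measurement for summarization, dialogue and translation.
# Assumes the Hugging Face transformers library and access to the gated Llama 2 weights.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # assumed model; requires access approval

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.eval()

prompts = {
    "summarization": "Summarize in two sentences: Large language models are ...",
    "dialogue": "User: What are the pros and cons of liquid cooling?\nAssistant:",
    "translation": "Translate to French: The server market is growing quickly.",
}

for task, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt")
    start = time.perf_counter()
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    elapsed = time.perf_counter() - start
    new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
    print(f"{task}: {new_tokens / elapsed:.1f} tokens/s")
```

Numbers from a sketch like this depend heavily on hardware, thread settings and software versions, which is precisely the point of contention between the two companies.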
Unsurprisingly, Intel did not take this well and dedicated a blog post to the subject. The company said the comparison was rigged, alleging that AMD deliberately held back the performance of the Intel processors in the test. This is the first time we have seen something like this; until now, Intel had never publicly rebutted figures presented by its main competitor.
Intel claims that the fifth-generation Xeon Emerald Rapids delivers excellent AI performance thanks to its AMX AI acceleration engine. These chips also offer more cores than the previous generation, more cache and faster memory. Intel goes as far as to assert that they have no rival on the market.
Intel also highlights its commitment to open-source AI software and emphasizes that it is working on extensive optimizations across the entire artificial intelligence ecosystem.
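As an illustration of the kind of CPU-side optimization Intel is referring to, here is a minimal sketch that assumes the Intel Extension for PyTorch (IPEX); Intel's actual benchmark software stack is not detailed in the blog post.

```python
# Sketch of software-side optimization on a Xeon CPU: IPEX rewrites the model
# for the host processor, and bfloat16 lets AMX be used on parts that support it.
# The model and input here are placeholders, not either vendor's test setup.
import torch
import intel_extension_for_pytorch as ipex
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
optimized = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = optimized(x)
print(out.shape)
```

Whether such optimizations are applied (or not) can swing benchmark results considerably, which is the core of Intel's complaint.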
Intel did not just stick to words; it also showed data. It presented the performance of the Intel Xeon Platinum 8592+ against an AMD EPYC, arguing that AMD had relied on unoptimized software and testing methods. In the dialogue test, Intel's results showed a 5.5x performance improvement over what AMD had initially attributed to the Xeon, which would mean that in this test it outperforms the EPYC Turing.
Intel also wanted to emphasize that Xeons are not only strong with large AI models. In various INT8 integer deep-learning inference workloads, fourth- and fifth-generation Xeon chips deliver better performance than the AMD EPYC 9754 in image classification, natural language processing and other similar tasks.
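For context, the following is a generic sketch of the INT8 inference workload class mentioned above (image classification with a pre-quantized model on the CPU); it is not the benchmark configuration used by either vendor.

```python
# Illustrative INT8 image-classification inference with a quantized ResNet-50.
# Uses torchvision's pre-quantized weights; runs on the CPU quantized backend.
import time
import torch
from torchvision.models.quantization import resnet50, ResNet50_QuantizedWeights

weights = ResNet50_QuantizedWeights.DEFAULT
model = resnet50(weights=weights, quantize=True).eval()

batch = torch.randn(32, 3, 224, 224)

with torch.no_grad():
    model(batch)  # warm-up pass
    start = time.perf_counter()
    for _ in range(10):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"throughput: {10 * batch.shape[0] / elapsed:.1f} images/s")
```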
It seems a new war has broken out between Intel and AMD over supremacy in artificial intelligence. Intel's response is unusually harsh, something we have not seen before, and it shows that nobody wants to lose their slice of the AI market pie.