The deal, valued at more than $3 billion, represents a significant victory for Samsung, a company that has long seen itself as second-tier in the development of HBM3 memory, always behind SK Hynix, which also has an exclusive agreement with NVIDIA to supply the memory for its GPUs.
Among the leaked details of the deal, it is mentioned that Samsung will purchase these new GPUs to improve its machine learning models, which are now available both in the new Smart TV range presented a few days ago and in the Galaxy S24 line.
Samsung will supply memory for AMD GPUs
For now, Samsung has not yet started producing this new type of 12-layer memory, but it plans to do so in the coming months, before the end of 2024.
The first model to implement this new type of memory is the Instinct MI350, a GPU with the CDNA 3 architecture that uses TSMC’s 4 nm manufacturing process. CDNA 3 offers more than a 500% increase in artificial intelligence performance compared to CDNA 2.
In addition, this new memory standard, HBM3E, which future AMD GPUs will use, is 50% faster than HBM3 and offers a bandwidth of 10 TB/s per system and 5 TB/s per chip, with a capacity of 141 GB.
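To put those figures in perspective, here is a minimal back-of-envelope sketch (not from the article) estimating how long it takes to stream the full 141 GB once at the quoted per-chip bandwidth, a rough proxy for memory-bound AI inference; the HBM3 baseline below is only an assumption derived from the article's "50% faster" claim.

```python
# Illustrative estimate only: time to read the whole 141 GB once at peak bandwidth.
# Figures come from the article; the HBM3 baseline is an assumed value implied by
# the claim that HBM3E is 50% faster.

CAPACITY_GB = 141              # quoted per-chip capacity
HBM3E_TBPS = 5.0               # quoted per-chip bandwidth
HBM3_TBPS = HBM3E_TBPS / 1.5   # implied HBM3 baseline (assumption)

def full_sweep_ms(capacity_gb: float, bandwidth_tbps: float) -> float:
    """Milliseconds needed to stream the entire memory once at peak bandwidth."""
    return capacity_gb / (bandwidth_tbps * 1000) * 1000

print(f"HBM3E: {full_sweep_ms(CAPACITY_GB, HBM3E_TBPS):.1f} ms per full sweep")   # ~28 ms
print(f"HBM3 (implied): {full_sweep_ms(CAPACITY_GB, HBM3_TBPS):.1f} ms per full sweep")  # ~42 ms
```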
What is clear is that AMD still has a long way to go to become a true alternative in the data center GPU segment and, for now, it remains behind NVIDIA.
This is not the first collaboration between AMD and Samsung. The two have previously worked together on the RDNA-based GPU design in Samsung’s Exynos processors, and even though the first results of that collaboration did not bear the expected fruit, the two companies are continuing the project.
A long way to go
Just yesterday, as we reported a few hours earlier, NVIDIA’s boss, Jensen Huang, symbolically delivered the first DGX H200 GPU to OpenAI’s CEO, Sam Altman, even though it is Microsoft that pays OpenAI’s bills (they might as well have counted on Satya Nadella).
This model has 141 GB of HBM3E memory manufactured by SK Hynix and a bandwidth of up to 4.8 TB/s. AMD’s MI350 has similar specifications. However, even though production of the H200 has already begun, AMD does not plan to start production for another few months.
According to Huang during the presentation of this new GPU model, it offers a performance improvement of more than 60% on GPT-3 and up to 90% on Llama 2. It is striking that the comparison was made with GPT-3 and not with GPT-4, the most recent version, or even with GPT-4.5, which in theory will be the next version of GPT.