The AI boom driven by solutions such as ChatGPT, Stable Diffusion, DALL-E and many others has caused demand for graphics cards to skyrocket, just as mining once did. This time, however, it is not something AMD can benefit from, due to its policy of not wanting to combine graphics and artificial intelligence, and right now the company is paying dearly for being unable to take advantage of the trend. This has led the head of the graphics division at the company run by Lisa Su to come out and clarify the Radeon Technologies Group's future plans.
David Wang confirms AMD’s poor GPU strategy
For some time we have been pointing out that the policy of AMD's graphics division is to oppose NVIDIA on new technologies, or rather to try to undermine them with half-hearted implementations, especially when it comes to Ray Tracing and AI. A strategy that, from our point of view, is wrong, because in the end it damages the image of the brand and hurts the end consumer.
In recent days there have been statements from David Wang, head of the Radeon Technologies Group, that have made us frown: if we are being honest, we would like to be wrong in our assessment of the brand's future, but they only confirm it.
What did AMD's David Wang say?
We believe that what is done with the inference accelerators (Tensor Cores) installed on GPUs should not be limited to image processing, as is the case with NVIDIA's DLSS. Take the example of FidelityFX Super Resolution, which can be done without an inference accelerator and offers performance and quality that rival NVIDIA's DLSS.
The reason NVIDIA is actively trying to use AI technology, even for applications that can be done without it, is that they have installed a large-scale inference accelerator on the GPU, and to make effective use of it, it seems they are looking for applications that require mobilizing it. That is their graphics card strategy, which is great, but I don't think we should follow the same strategy.
We focus on the specs users want and need to enjoy our mainstream GPUs; otherwise, users end up paying for features they will never use. We believe inference accelerators should be used to make games more advanced and fun, with the movement and behavior of enemy characters and NPCs as the most obvious examples. And even if AI is used for image processing, it should be in charge of more advanced processing, more specifically something like the "neural graphics" that are attracting more and more interest in the 3D graphics industry.
Why are such statements problematic?
Specifically, AMD has been developing inference accelerators, similar to NVIDIA's Tensor Cores, for a long time, but by Wang's own decision they were never integrated into any of the RDNA architectures and remained only in CDNA. The strategy is clear: ship gaming graphics cards without the functionality and then justify its absence by claiming it is something that does not benefit users. Please. We have all seen the positive effects of DLSS, and we know AMD could have something similar without a problem, but they simply refuse to concede that NVIDIA is right.
Worst of all is the example of the animation and behavior of NPCs in games. Do you know what that depends on? The central processor. Besides, using inference accelerators for graphics on a graphics card does not defeat their primary purpose. Nor does it matter that something can be achieved by other methods such as FSR; what matters is that the functionality is there to be used. By the average user? No, by those who make games, who are the ones who use these technologies and implement them in their titles.
We do not think any developer would mind having a previously missing resource available for creating new games. AMD, through Wang, takes advantage of the fact that the average user does not see the value of a technology as a means, since what they consume is the end, that is, the result of its use. The good results of DLSS and other technologies should be enough to see the error in these statements.
David Wang is wrong, Radeons need AI
Matrix calculations should be done with a unit specialized for the task, since this drastically reduces the number of clock cycles needed to execute certain instructions, especially for artificial intelligence algorithms, which use them constantly. Yet AMD's approach in the RX 7000 is a worse solution than what they already had in place in CDNA.
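To illustrate why such a unit matters, here is a minimal, purely illustrative sketch using NVIDIA's public CUDA WMMA interface, the API that feeds Tensor Cores. It is not AMD's hardware or code, just an example of the idea: a whole 16x16x16 tile of multiply-accumulates is issued as a single warp-level matrix instruction instead of the roughly 4,096 scalar fused multiply-adds a conventional shader core would have to grind through.

```cuda
// Illustrative sketch only (assumes a Volta-or-newer NVIDIA GPU, sm_70+).
// One warp of 32 threads cooperatively computes C = A x B for a single
// 16x16x16 tile through a dedicated matrix unit.
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void tile_mma(const half *A, const half *B, float *C) {
    // Fragments describing one 16x16x16 tile (row-major A, col-major B).
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);     // start the accumulator at zero
    wmma::load_matrix_sync(a, A, 16);   // load the 16x16 A tile
    wmma::load_matrix_sync(b, B, 16);   // load the 16x16 B tile
    wmma::mma_sync(acc, a, b, acc);     // one matrix instruction: acc += A * B
    wmma::store_matrix_sync(C, acc, 16, wmma::mem_row_major);
}

// Launched as tile_mma<<<1, 32>>>(dA, dB, dC); the same tile done with
// scalar FMAs would need 16 * 16 * 16 = 4096 multiply-add operations.
```

The exact instruction format differs between vendors and architectures, but the principle is the same: without a dedicated matrix unit, those thousands of operations have to be scheduled one by one on the ordinary ALUs, which is precisely the clock-cycle cost the paragraph above refers to.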
Developing a solution that can be used on older graphics cards to scrape together a few extra frames plays well to the gallery, but it is not incompatible with giving next-generation gaming graphics cards a feature that has already proven useful. Everything else is just lip service to justify a flawed policy that can be perceived as delivering an inferior product.
And no, this is not about applauding NVIDIA, but about the fact that we finally have AMD confirming its graphics card strategy, not only with Ray Tracing but also when it comes to AI. It is a huge mistake whose price is paid not only by Radeon but by the entire PC gaming industry, and especially by RTG, which watches its rival continue to dominate with an iron fist.