For some time now, graphics cards have been gaining more and more capabilities, to the point of going beyond rendering realistic graphics on our screens and powering the rise of artificial intelligence. This has enabled Internet services for automatic image generation, such as DALL-E, as well as others like ChatGPT. But what is the hardware cost of running an AI service?
Although we still have a long way to go in the field of artificial intelligence, services have appeared in which, given a series of key words or phrases, the system can generate a story in text or, failing that, an image that may be more or less correct. Despite this technology's failure rate, many people are fascinated by it, yet they are unaware of the level of complexity it requires, which is impossible to match even on the most powerful computer you could assemble with the most expensive components available right now.
How much does the hardware behind ChatGPT or DALL-E cost?
Well, several thousand dollars, even tens of thousands, because the amount of data these services process and the energy required to do it demand configurations of tens or even hundreds of graphics cards. Not only to generate the responses of the inference algorithm, which is what the user interacts with, but also to train the AI, that is, to learn values from data and draw its own conclusions.
The hardware used in many of these cases is the NVIDIA DGX SuperPOD, a server system built by NVIDIA consisting of hundreds of graphics cards, not gaming cards but those used for high-performance computing. Consider that an NVIDIA H100 can cost around 5,000 dollars, with some models reaching five figures; that is far more than the average user will spend on an entire computer, even one with a latest-generation i9 and an RTX 4090.
And mind you, it does not stop there. The volume of data is such that it does not fit on a single graphics card, so several must be used. ChatGPT, for example, requires servers with 8 graphics cards of this type, a cost of at least 40,000 dollars per server. And with DALL-E, which handles images and is more complex, the cost can multiply several times over. So we are still a long way from having something like this at home, and we may well have to wait a whole decade before a home PC offers this kind of capability.
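The server figures above can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below uses the article's own illustrative numbers (5,000 dollars per card, 8 cards per server); they are assumptions for the sake of the arithmetic, not vendor quotes.

```python
# Back-of-the-envelope cost estimate for an AI inference server.
# All figures are illustrative assumptions taken from the article,
# not actual vendor pricing.
GPU_UNIT_PRICE_USD = 5_000  # low-end estimate for a data-center GPU
GPUS_PER_SERVER = 8         # GPUs in one inference server

def server_gpu_cost(gpus: int = GPUS_PER_SERVER,
                    unit_price: int = GPU_UNIT_PRICE_USD) -> int:
    """Cost of the GPUs alone in one server (CPUs, RAM and networking excluded)."""
    return gpus * unit_price

print(server_gpu_cost())  # 40000, the article's $40,000-per-server floor
```

Note that this only counts the accelerators; the chassis, interconnect and power infrastructure push the real price of a full DGX-class system considerably higher.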
Memory is the biggest bottleneck to achieving this
This all comes down to the amount of information the AI algorithm needs to draw its conclusions, much like the human brain drawing conclusions from the information and knowledge it holds. The model must store data gathered from across the Internet as the basis for its work, which is enormous and requires extremely expensive infrastructure.
In addition, these systems are not fully reliable: just look at some of the aberrations ChatGPT produces in response to certain questions, or the nonsensical, nightmare-worthy drawings DALL-E sometimes shows us, with no way of knowing how it reached such a conclusion. Although admittedly some results are curious and worth a look, it will still be many years before the gap between what is asked and what is shown shrinks to an acceptable margin of error.