In a few years, we may forget that Nvidia ever made graphics cards for video games. The company has become one of the most valuable in the technology sector thanks to its chips' ability to perform tasks far beyond gaming; hence its commitment to Artificial Intelligence with the release of Blackwell, and its use of AI to design humanoid robots, is understandable.
Blackwell is Nvidia’s new line of GPUs, even if its CEO, Jensen Huang, prefers to call it a “platform”; after all, this is where the company’s future now rests, and where the biggest technology giants have already found their place.
Blackwell-based GPUs will run the language models and algorithms of companies like Google, Tesla, Microsoft, Meta, Dell, Amazon, OpenAI and many others, which have already confirmed that they will be the first to benefit from them. This means that all the major cloud platforms, including Google Cloud, AWS, Microsoft Azure and Oracle Cloud, will be built on these chips.
The chip that will control everything
The reason all these giants reached the same conclusion in betting on Nvidia is obvious: Blackwell is the most powerful platform on the market. It is Hopper’s direct successor, but for practical purposes it is as if Nvidia had skipped a few generations; in fact, Blackwell packs 208 billion transistors, a spectacular number compared to Hopper’s 80 billion.
To reach that figure, Blackwell is actually made up of two separate GPU dies, each with 104 billion transistors; they are connected by a single 10 TB/s link so that they effectively operate as one GPU. The chips are built on a custom TSMC 4NP manufacturing process, among the most advanced at the 4-nanometer scale, and feature on-board memory with a maximum transfer rate of 16 TB/s.
All this means that the flagship of the range, the GB200, is the most powerful GPU in the world, especially for AI-related tasks. Compared with Hopper, it is up to 30 times faster at AI inference and delivers 20 petaflops of FP4 compute, leaving every other rival on the market behind.
The GB200 pairs two B200 GPUs with, most interestingly, an integrated Grace CPU, which handles the supporting calculations the GPUs need. Nvidia will also offer the B200 and B100 on their own, as standalone GPUs, depending on customer needs.
But perhaps most surprising is that Nvidia achieved all this while reducing power consumption: Blackwell is its most efficient GPU, cutting energy costs by up to 25 times. Training a model with 1.8 trillion parameters would have required 8,000 Hopper-based GPUs drawing 15 MW; the same task can be performed with just 2,000 Blackwell GPUs consuming only 4 MW.
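As a rough sanity check, those figures can be compared directly. The sketch below is just back-of-envelope arithmetic using the numbers quoted in the announcement, not Nvidia’s own methodology (which is where the larger “up to 25 times” claim comes from).

```python
# Back-of-envelope comparison using only the figures quoted above
# (announcement numbers, not independent measurements).

hopper_gpus, hopper_power_mw = 8_000, 15        # Hopper: GPUs and total power for the 1.8T-parameter workload
blackwell_gpus, blackwell_power_mw = 2_000, 4   # Blackwell: same workload

gpu_ratio = hopper_gpus / blackwell_gpus            # 4.0x fewer GPUs
power_ratio = hopper_power_mw / blackwell_power_mw  # 3.75x less total power

# Per-GPU draw stays roughly similar (~1.9 kW vs ~2.0 kW), so the saving
# comes from each Blackwell GPU doing about four times the work.
per_gpu_hopper_kw = hopper_power_mw * 1_000 / hopper_gpus
per_gpu_blackwell_kw = blackwell_power_mw * 1_000 / blackwell_gpus

print(f"{gpu_ratio:.2f}x fewer GPUs, {power_ratio:.2f}x less total power")
print(f"per-GPU draw: {per_gpu_hopper_kw:.2f} kW (Hopper) vs {per_gpu_blackwell_kw:.2f} kW (Blackwell)")
```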
To demonstrate the potential of this technology, Nvidia did something surprising: it created another Earth. The “Earth-2” project is a digital twin of our planet that simulates weather and climate at scales never seen before. This digital Earth is accessible via an Nvidia API, so anyone can create simulations to study the effects of weather events, from tornadoes and megastorms to simple low clouds.
For most users, the effects of this launch will be felt in future releases from companies like Google or Microsoft, with more advanced AI features built on far greater amounts of data.