NVIDIA currently dominates the artificial intelligence segment. The company holds a multi-year lead over its competitors thanks to its long head start in the field, and its solutions for this sector offer unrivaled computing power.
An interesting aspect of the presentation is the name of the company’s new architecture. As with previous generations, NVIDIA draws on the names of scientists who have made great contributions to computing and science.
NVIDIA Introduces Rubin Architecture and Blackwell Ultra GPUs
The first thing that stands out from the presentation is the name of the new GPU architecture: Rubin. It is named after astronomer Vera Rubin, who made important contributions to our understanding of dark matter and was a pioneer in the study of galaxy rotation speeds.
It is important to note that the Blackwell architecture was only recently announced, so the reveal of Rubin comes as a surprise; it seems NVIDIA is accelerating its development cadence. The new architecture would be the company’s answer to an AI segment that demands ever more computing power.
The new Blackwell GPUs (B100/B200) will arrive in data centers later this year. Upgraded versions will move to eight 12-Hi memory stacks, compared with the current eight 8-Hi stacks, and are expected to reach the market in 2025.
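As a rough illustration of what the move from 8-Hi to 12-Hi stacks means for on-package capacity, the sketch below multiplies stack count, stack height, and an assumed per-die density of 24 Gb (3 GB), a figure typical of HBM3E that is not stated in the article.

```python
# Illustrative arithmetic only: the 3 GB-per-die density is an assumption
# (24 Gb DRAM layers, common for HBM3E), not a figure from the report.

def total_hbm_capacity_gb(stack_sites: int, dies_per_stack: int, gb_per_die: float = 3.0) -> float:
    """Total on-package HBM capacity = number of stacks x stack height x per-die capacity."""
    return stack_sites * dies_per_stack * gb_per_die

print(total_hbm_capacity_gb(8, 8))   # eight 8-Hi stacks  -> 192.0 GB
print(total_hbm_capacity_gb(8, 12))  # eight 12-Hi stacks -> 288.0 GB
```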
Shortly after Blackwell, the first solutions based on the Rubin architecture should see the light of day. The first R100 GPUs would hit the market in the fourth quarter of 2025 and would be mounted in the new DGX and HGX systems, which would launch in the first half of 2026. NVIDIA has confirmed that Rubin GPUs will use HBM4 memory.
Rubin R100 GPUs could use a 4x reticle design (compared to Blackwell’s 3.3x design). Rubin GPUs will use TSMC’s CoWoS-L packaging and will be manufactured on the 3nm node. TSMC recently revealed plans for up to a 5.5x reticle by 2026, with a 100 x 100 mm substrate that would allow up to 12 HBM memory sites, compared with 8 today on 80 x 80 mm substrates.
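To put those package figures in perspective, here is a short sketch that compares the substrate areas and reticle multiples mentioned above; the HBM site counts and reticle multiples come from the report, while the area math is simple geometry.

```python
# Rough comparison of the package sizes discussed above.

def area_mm2(width_mm: float, height_mm: float) -> float:
    return width_mm * height_mm

current = area_mm2(80, 80)    # 6,400 mm^2 substrate, up to 8 HBM sites today
future = area_mm2(100, 100)   # 10,000 mm^2 substrate, up to 12 HBM sites by 2026

print(f"Substrate area grows {future / current:.2f}x")                 # ~1.56x
print(f"Reticle budget grows {4.0 / 3.3:.2f}x (Rubin vs Blackwell)")   # ~1.21x
```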
A notable detail is that the R100 GPUs will use the new HBM4 memory. Remember that NVIDIA’s B100 GPUs use HBM3E, and that by the end of 2025 the lineup could be updated to HBM4. Samsung and SK Hynix have revealed plans to develop this next generation by 2025, with stacks of up to 16-Hi.
Additionally, NVIDIA is reportedly set to update the Grace CPU for GR200 Superchip systems, which will pair two R100 GPUs with one Grace CPU manufactured on TSMC’s 3nm process. The current Grace chips are built on TSMC’s 5nm process and offer up to 144 cores in a Grace Superchip configuration. NVIDIA has also confirmed that its next-generation ARM processor will be called Vera.
NVIDIA also emphasized that it wants to improve energy efficiency with the Rubin R100. The company is aware of the need to make its data center chips more efficient. Although it did not provide specific figures, it stressed that AI capabilities would also improve significantly.
The roadmap would be as follows:
Code name | N/A | Rubin (Ultra) | Blackwell (Ultra) | Hopper | Ampere | Volta | Pascal |
---|---|---|---|---|---|---|---|
GPU family | GX200 | GR100 | GB200 | GH200 / GH100 | GA100 | GV100 | GP100 |
GPU | X100 | R100 | B100 / B200 | H100 / H200 | A100 | V100 | P100 |
Memory | HBM4E? | HBM4 | HBM3E | HBM2E / HBM3 / HBM3E | HBM2E | HBM2 | HBM2 |
Launch | 2028? | 2026-2027 | 2024-2025 | 2022-2024 | 2020-2022 | 2018 | 2016 |