When designing a processor, whether it is a CPU or something more specialized like a GPU, one key block is the memory controller, which is designed according to the specifications set by JEDEC. However, those specifications define ranges rather than a single configuration. So, if we are talking about graphics cards, we have to take into account things such as the amount of memory available per chip and the bandwidth, and none of that information is known until the memory vendor works with NVIDIA or AMD during the final assembly of the card.
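As a rough illustration of why those numbers stay open, here is a minimal Python sketch with illustrative figures not tied to any real product: the same memory controller, with the same bus width, ends up with very different capacity and bandwidth depending on which chip density and data rate the vendor actually supplies within the JEDEC-defined range.

```python
# The GPU side fixes the bus width; the memory vendor decides density and speed.
# Same controller, very different cards depending on what actually ships.

BUS_WIDTH_BITS = 256          # fixed at GPU design time (illustrative value)
CHIP_INTERFACE_BITS = 32      # a GDDR6 device exposes a 32-bit interface

chips = BUS_WIDTH_BITS // CHIP_INTERFACE_BITS   # 8 chips on a 256-bit bus

for density_gb, data_rate_gbps in [(1, 14), (1, 16), (2, 16)]:
    capacity_gb = chips * density_gb
    bandwidth_gbs = BUS_WIDTH_BITS / 8 * data_rate_gbps   # GB/s
    print(f"{density_gb} GB chips @ {data_rate_gbps} Gbps -> "
          f"{capacity_gb} GB of VRAM, {bandwidth_gbs:.0f} GB/s")
```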
It is impossible to predict technical specifications years in advance
It is therefore not possible to know quantitatively what the generational leap of a graphics card will be while it is still in the design phase. If someone tells you, for example, that the RTX 5090 is going to have 48 GB of GDDR7, ignore it, because the capacity of each chip has not even been decided yet. The same applies if they tell you it will be 2.6 times more powerful. That is speculation which ignores the fact that memory chips usually ship only a few months before the launch of the graphics card.
1 GB GDDR6 chips should never have existed
And here we come to the most important part of the story, that of the VRAM conspiracy. To understand it, you have to go back to 2013, when SONY decided to launch the PS4 with 8 GB of GDDR5 memory instead of 4 GB, with the problem that no chips larger than 0.5 GB existed and the number of chips is limited by the bus. The best solution? Clamshell mode, in which two memory chips, one on the front and one on the back of the board, share the same interface. Bandwidth is not increased, but the amount of addressable memory is doubled. The problem? The added cost of having to fit twice as many memory chips.
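To make the trade-off concrete, here is a minimal Python sketch of the arithmetic, assuming the commonly cited launch-PS4 figures (a 256-bit bus and 0.5 GB GDDR5 chips at 5.5 Gbps per pin); the exact values are illustrative, but the point is that clamshell doubles the chip count and the capacity while the bus width, and therefore the peak bandwidth, stays the same.

```python
# Clamshell mode: two chips share one memory channel (one on each side of the PCB).
# Capacity scales with the chip count; bandwidth is still capped by the bus width.

BUS_WIDTH_BITS = 256     # assumed PS4-style 256-bit GDDR5 bus
CHANNEL_BITS = 32        # one GDDR5 device normally drives a 32-bit channel
CHIP_DENSITY_GB = 0.5    # 4 Gb chips, the largest available at the time
DATA_RATE_GBPS = 5.5     # effective data rate per pin (illustrative)

def memory_config(clamshell: bool):
    chips_per_channel = 2 if clamshell else 1
    chips = (BUS_WIDTH_BITS // CHANNEL_BITS) * chips_per_channel
    capacity_gb = chips * CHIP_DENSITY_GB
    bandwidth_gbs = BUS_WIDTH_BITS / 8 * DATA_RATE_GBPS   # unchanged by clamshell
    return chips, capacity_gb, bandwidth_gbs

print("normal:   ", memory_config(False))   # (8, 4.0, 176.0)
print("clamshell:", memory_config(True))    # (16, 8.0, 176.0)
```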
In 2016, the Slim versions of the Xbox One and PS4 appear and, with them, the density per chip doubles to 1 GB, so the number of memory chips is halved, clamshell mode is no longer needed, and costs drop. The big problem? It comes in 2018 when, instead of making the leap to 2 GB memory chips, which would have been the natural step, we are still stuck with 1 GB capacities, and the worst comes when Micron, NVIDIA's partner for GDDR6X on the RTX 30 series, unilaterally decides to stay at 1 GB density.
Think about it: four years with the same density per memory chip. That is not normal, and it is here that you might think "well, GDDR6X was exclusive to NVIDIA and, therefore, they may have accepted that capacity", and this is where things get murkier.
What is the VRAM conspiracy?
The console contract is extremely lucrative, and even more so when we are talking about a system whose predecessor sold more than 100 million units and guarantees a constant flow of money. SONY's conditions for the GDDR6 memory of the PS5? 2 GB of capacity per chip, which left the inventory completely empty for NVIDIA. At the same time, one of NVIDIA's strategies over the past few generations has been to secure manufacturing nodes and technologies of its own to guarantee full availability. For example, it dropped TSMC's 7 nm node at the last minute and went to Samsung so it would not have to compete for manufacturing capacity, and it went with GDDR6X for the same reason.
And here we reach an interesting point: this memory was designed to work with the GA102 chip, not, at first, with the GA104 of the RTX 3070 and RTX 3060 Ti. However, NVIDIA knew that faster video memory would give it a boost in benchmarks and, along the way, ended up sacrificing the 16 GB versions as well as the 20 GB version of the RTX 3080. The other side of the story is that its partner decided not to make 2 GB chips of its so-called GDDR6X variant during that entire generation.
The goal? To create planned obsolescence so that users jump to the next generation when the time comes, as indeed has happened. The question here is: if Micron had made 2 GB GDDR6X chips available, would NVIDIA have used them? We think so; it makes no sense for the lower mid-range to have more VRAM than the upper mid-range.
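As a back-of-the-envelope sketch in Python (the bus widths are the public RTX 30 specs; everything else is simple arithmetic under the assumption of a 32-bit interface per chip), the density of the chips available to each card dictates the only VRAM capacities it can have, which is exactly why a 192-bit RTX 3060 with 2 GB GDDR6 chips ends up with more memory than a 256-bit RTX 3070 stuck with 1 GB chips:

```python
# With a 32-bit interface per chip, the bus width fixes the chip count,
# and the chip density then fixes the only possible VRAM capacities.

CHIP_INTERFACE_BITS = 32

cards = {
    # name:              (bus width in bits, chip density in GB)
    "RTX 3080 (GDDR6X)": (320, 1),   # Micron shipped only 1 GB GDDR6X chips
    "RTX 3070 (GDDR6)":  (256, 1),
    "RTX 3060 (GDDR6)":  (192, 2),   # 2 GB GDDR6 chips were available here
}

for name, (bus_bits, density_gb) in cards.items():
    chips = bus_bits // CHIP_INTERFACE_BITS
    normal = chips * density_gb
    doubled = normal * 2   # clamshell, or 2 GB chips where only 1 GB shipped
    print(f"{name}: {chips} chips -> {normal} GB (or {doubled} GB doubled up)")
```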
What were the consequences of these decisions?
For heavy PC gamers: if you have an 8 GB RTX 3070 or, even worse, an RTX 3060 Ti, you are far worse off than owners of the 12 GB RTX 3060 due to the lack of VRAM. The result? Much lower frame rates in certain parts of games, annoying sudden texture pop-in and, in some cases, having to drop even to medium quality settings, completely wasting the capabilities of the GPU, which is a real shame.
This is not the only problem: if we want to use the video encoding capabilities, which have been expanded with the latest driver revisions, we will also be limited by memory. Honestly, it makes sense that NVIDIA would not want to cede part of its market to its main rival over a limitation like video memory. And although planned obsolescence exists in every industry to keep up the pace of consumption, there are decisions so absurd and senseless that they could be considered self-sabotage.
Conclusion: yes, there was a VRAM conspiracy
To put it plainly: NVIDIA has no interest in games running better on the competition's graphics cards than on its own, least of all because of something it cannot control and whose solution would mean higher costs and, therefore, much lower margins. That is why we believe there was a VRAM conspiracy in which the availability of 2 GB GDDR6X chips was basically limited for an entire generation in order to create blatant planned obsolescence.
Which is a shame, because this decision hurts the PC gaming hardware market above all, but they do not care, since they also sell their memory chips for consoles. And we think we can all agree on one thing: 1 GB GDDR6 memory should never have existed in any scenario, not even almost five years ago.