Using two graphics cards in tandem takes us back to the late but legendary 3Dfx, which released the Voodoo 2 and, in later models, evolved the idea into multi-chip designs. The concept was based on Scanline Interleaving, or SLI, where each scanline on the CRT monitor was drawn by one of the two 3D cards. Years later, NVIDIA revived the acronym as Scalable Link Interface, but this time each graphics card rendered a different frame: when one card finished its frame, the next frame was handed to the other. AMD wasn't far behind and created its own version of the concept, which it called Crossfire. Today both are forgotten, and that is no accident.
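The original scanline-interleaving idea can be sketched in a few lines of Python. This is a hypothetical illustration of the work split only (the function name and round-robin model are ours, not 3Dfx's): with two cards, each card rasterizes every other scanline of the frame.

```python
# Hypothetical sketch of Scanline Interleaving: each card draws
# every other scanline of the frame, round-robin by line number.

def assign_scanlines(height: int, num_cards: int = 2) -> dict[int, list[int]]:
    """Map each card index to the list of scanlines it draws."""
    assignment: dict[int, list[int]] = {card: [] for card in range(num_cards)}
    for line in range(height):
        assignment[line % num_cards].append(line)
    return assignment

work = assign_scanlines(height=480)  # a typical Voodoo 2-era resolution
print(work[0][:4])  # card 0 draws the even scanlines: [0, 2, 4, 6]
print(work[1][:4])  # card 1 draws the odd scanlines:  [1, 3, 5, 7]
```

Because adjacent scanlines cost roughly the same to draw, this split balanced the load between the two cards almost perfectly, which is part of why the original SLI scaled so well.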
Having two graphics cards does not deliver more performance in games
To understand the dual graphics card issue, we need to know that the game code must explicitly assign one frame to one graphics card and the next frame to the other. The problem is that not every frame takes the same time to render, so one frame may cost much more than another, which means a dual configuration will not always achieve double the performance. On top of that, SLI and Crossfire required symmetric configurations to work, meaning both cards had to be the same model running at the same speed.
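The reason uneven frame costs cap the speedup below 2x can be shown with a minimal Python sketch. This is a toy model, not real driver code; the function name and the millisecond costs are our own assumptions:

```python
# Toy model of Alternate Frame Rendering (AFR): even-numbered frames
# go to GPU 0, odd-numbered frames to GPU 1. Total wall time is set
# by whichever GPU's queue of frames takes longer.

def afr_total_time(frame_costs: list[float]) -> float:
    """Wall time to render all frames when they alternate between 2 GPUs."""
    gpu_busy = [0.0, 0.0]  # accumulated render time per GPU
    for i, cost in enumerate(frame_costs):
        gpu_busy[i % 2] += cost
    return max(gpu_busy)

costs = [10.0, 10.0, 30.0, 10.0]   # ms per frame; one frame is much heavier
single = sum(costs)                # 60 ms on a single GPU
dual = afr_total_time(costs)       # GPU 0: 10+30 = 40 ms, GPU 1: 20 ms
print(single / dual)               # speedup is 1.5x, not 2x
```

With perfectly even frame costs the speedup would reach 2x; any imbalance leaves one GPU idle while the other works, which is exactly why dual-card setups rarely doubled frame rates in practice.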
In principle, there should be no problem with using dual graphics cards in games; the problem arises because the game code had to explicitly state on which graphics card each frame should run. What happened? Very few people had setups like this, so there was no incentive for studios to adapt their games. If we add to this that SLI and Crossfire worked differently and had to be programmed separately, it becomes very clear why support for 2 graphics cards eventually died out in PC games.
This is why using 2 graphics cards in a computer has become pointless for gaming: there is no market, and therefore no studios that profit from supporting it. However, we must clarify that many systems today do have dual graphics hardware, since many Intel and AMD processors include integrated graphics. But for the reasons we have already mentioned, the two do not work in tandem; when one is activated, the other is completely disconnected, so their power does not add up.
The gaming GPUs of the future will be dual
However, it won’t be long before we see, not dual graphics cards, but graphics cards with two or more interconnected GPUs on the same board. For example, AMD holds a patent for a future design in which two graphics chips, or GPUs, are unified into one, sharing a common memory. The important point of this technology is that AMD has managed to make applications see it as a classic single graphics card, since the first chip automatically controls the timing of the second.
No doubt NVIDIA and Intel are working on similar solutions; however, as far as we know, NVIDIA's RTX 40 will continue to be monolithic and will therefore not consist of separate chips. As these chips become more and more complex and costs grow with chip size, we will see configurations broken down into several different chips. The first to do so will be AMD's RX 7000, though not with a dual-GPU configuration: instead it splits off the memory controller, something AMD has already done in its Ryzen processors.