Thus, developing for Stadia meant building games to a fixed specification, just as on a video game console. New servers would not be there to accept ever more powerful and complex games; instead, they would let more players onto the service. By simply renewing the hardware in the nodes, which would become progressively cheaper for Google, costs for users would not come down but would be hidden behind a large library of games that would initially serve to attract an audience to the platform.
Commitment to Cloud Gaming
Making a video game console goes against the DNA of a service-focused Google, so the approach its ideologues put on the table was this: create a low-priced starter kit consisting of a controller and a decoder to connect to each user’s television. The idea was to sell these two pieces with a healthy profit margin, but above all to charge an entry fee for the hardware to a group of users who, thanks to time-zone differences, would not be using the servers at the same time.
However, running games in Google’s cloud required powerful infrastructure, and AMD saw this as a great business opportunity. The original proposal was to use ARM, but the graphics cards from Lisa Su’s company have an integrated memory controller (IMC) designed to pair with x86. NVIDIA was not chosen because it already had its own service in the form of GeForce Now. The problem? Getting custom hardware ready would take years, at least three of them. For the first deployment, therefore, instead of developing its own hardware, Google ended up opting for parts already available on the market.
The AMD Opportunity
To put things in context, bear in mind that AMD had suffered a huge setback with its RX Vega graphics cards, which had been crushed by NVIDIA’s Pascal architecture, known commercially as the GTX 1000 series, in every respect, but especially on one crucial metric: performance per area. In the world of PC hardware, price is determined by performance, and the margins on the RX Vega were very tight, as it used what we call a 2.5D IC configuration in which the GPU, the HBM2 memory, and the interposer on which they are mounted form a single part.
The solution for clearing the inventory came from two fronts. The first was the massive sell-off to cryptocurrency mining farms. The second was the creation of a new graphics card, the Radeon Pro V340, a dual card incorporating two RX Vega 56 GPUs with their memory, which was touted at the time as the heart of the servers for Google’s Project Stream, the service that would eventually become Stadia.
Stadia’s hardware was a PC
Google never showed the inside of the Stadia servers, just some promotional images with a few specs. However, months before Stadia’s launch, AMD revealed internal images of the servers, showing four V340 cards each; since each card is dual, this equates to a total of 8 GPUs per server. Each GPU had enough power to deliver 4K cloud gaming on its own, but the graphics could also be virtualized so that up to four Stadia virtual machines shared one GPU’s power. Since 4K has four times as many pixels as Full HD, two gaming profiles were created: one in which you paid nothing beyond the game itself, and another, the subscription, which gave you a full graphics chip and access to content at the higher resolution.
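To see why a four-way split of one GPU maps so neatly onto the two profiles, here is a minimal sketch of the pixel arithmetic. The resolutions and the 4-VM split come from the paragraph above; the variable names are purely illustrative.

```python
# Minimal sketch of the pixel arithmetic behind Stadia's two profiles.
# Resolutions and VM count as described in the article; names are ours.

FULL_HD = (1920, 1080)   # free profile: 1080p on a shared GPU
UHD_4K = (3840, 2160)    # subscription profile: 4K on a dedicated GPU

def pixels(resolution):
    """Total pixel count of a (width, height) resolution."""
    width, height = resolution
    return width * height

ratio = pixels(UHD_4K) // pixels(FULL_HD)
print(f"4K has {ratio}x the pixels of Full HD")  # -> 4x

# Splitting one GPU into 4 virtual machines leaves each VM exactly
# the pixel budget of a single Full HD stream.
VMS_PER_GPU = 4
print(f"Per-VM share: 1/{VMS_PER_GPU} of a GPU = one 1080p stream")
```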
The parts of a console are designed to remain in continuous manufacture for five years; on PC, there are components that disappear from the market after three or four years. The problem is not that Google built the Stadia servers out of computer parts, but that some of those parts already had an expiration date. Moreover, shader compilation in games was tied to a GPU architecture that was already close to being discontinued. In any case, when facing developers, Google always dangled a Stadia Gen 2, with console-like hardware in terms of cost and the same performance.
Consider that the PS5 delivers much more power than Stadia at a much lower cost by using fewer and cheaper components, such as GDDR6 memory instead of expensive HBM2.
Stadia’s initial problem was its infrastructure
Given that each user gets access to at least a share of one GPU in a server, we need to know how many servers were deployed in Stadia’s first phase to work out the minimum number of users the service could support. That figure is 60,000 concurrent users at one GPU each, and a maximum of 240,000, since Stadia supports 4 virtual machines per GPU, albeit dividing the available graphics power into quarters in that case. That is a very low number for a service to be efficient and profitable once you start analyzing the figures. The Stadia team didn’t have the time to build out the servers, didn’t have enough capital, and had to fall back on PC parts that would quickly become obsolete.
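As a back-of-the-envelope check, these figures fit together as follows. Only the numbers quoted in this article are used; the implied server count is our own inference, not an official figure from Google.

```python
# Rough capacity check: 8 GPUs per server (four dual V340 cards),
# a minimum of 60,000 concurrent users at one GPU each, and up to
# 4 VMs per GPU. The server count is inferred, not officially stated.

GPUS_PER_SERVER = 8        # four Radeon Pro V340 cards, two GPUs each
VMS_PER_GPU = 4            # shared Full HD profile
MIN_CONCURRENT = 60_000    # one dedicated GPU per user (4K profile)

implied_servers = MIN_CONCURRENT // GPUS_PER_SERVER
max_concurrent = MIN_CONCURRENT * VMS_PER_GPU

print(f"Implied first-phase servers: {implied_servers:,}")   # 7,500
print(f"Maximum concurrent users:    {max_concurrent:,}")    # 240,000
```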
It wasn’t a CPU, RAM, or storage issue but a graphics card issue, because Google had opted for an architecture whose days were numbered and which AMD itself had already decided to retire. This forced them to consider Stadia Gen 2, a project that never saw the light of day due to the low number of users, and which was to be based on a single chip combining the processor and the graphics card in order to reduce costs. However, it never went beyond paper documentation and a promise to developers.
The big publishers were never quite convinced
A service needs games that are not ephemeral; it cannot lean on titles with heavy narrative focus and short running times, because once players finish them they stop paying the monthly fee. Unfortunately, video games aren’t like film and TV, where the time and cost of rolling out a new series is far lower. So Google envisioned creating two games as a service for Stadia, titles that would take years to appear. What to do in the meantime? Count on the big independent publishers, i.e. “The Industry”, to support the project.
And what do companies like Ubisoft, Activision Blizzard, Electronic Arts and the rest do? Converting games already on the market is very lucrative; however, it does not differentiate you from the competition. You need your own Super Mario, a system-selling exclusive, to get people to buy your box, sign up for your service, or buy your product. Unfortunately, Google had none of that, nor the patience, nor the time.
Of them all, the only one to officially come aboard was Ubisoft; the rest, behind the scenes, were rubbing their hands. The reason? Stadia earned plaudits from the industry, but publishers preferred to wait until a bigger infrastructure stood behind it that could support more users. The independent publishers were simply not interested in an infrastructure supporting fewer than a million concurrent players. As we said before, the initial infrastructure was designed for a maximum of 240,000 concurrent users.
The recipe for disaster
Stadia’s big problem was clear from the start: a complete lack of infrastructure on its part, and games that didn’t stand out, offered to an audience that saw no reason to ditch other platforms. As if that weren’t enough, rival subscription models offered flat-rate game pricing, such as Amazon Luna, which cleverly focused on a single market instead of trying to cover everything, ran on hardware that was not about to go extinct, and was a PC in terms of both hardware and software.
Google should never have conceived Stadia as a console in the cloud: every game needed its code ported, a problem that neither GeForce Now nor Luna shares. Cloud versions of consoles, for their part, suffer from limited performance and lower quality, but they have the advantage of an existing game catalog. The case at hand, by contrast, had hardly any catalog and even less to set it apart. In any case, it demonstrates that Stadia was dead from the start: it never managed to attract enough subscribers to justify expanding its infrastructure, or to motivate a second generation. Rest in peace, Stadia; you will be the classic example of how things shouldn’t be done.