There is a myth, ingrained in the minds of many, that the power consumption of a processor is something manufacturers suddenly discover as they move from the design phase to pre-production. The reality is very different: a processor is, after all, nothing more than a very small-scale electrical circuit, and the way current flows through that circuit is a crucial part of the processor's design from the very start.
How is the power consumption of a processor measured?
We cannot know the exact power consumption of a processor in advance, because a series of physical phenomena can alter the result, and these are only known once the design has been manufactured and we move from the conceptual to the real. An approximation is therefore used, which gives engineers a rough idea of what the power consumption will be.
The general formula is as follows:
Power (watts) = number of logic gates × capacitance × clock frequency × voltage squared
But this is a very generic estimate: within the same processor, designers can use different types of logic gates, and even gates of the same type with different consumption levels. Above all, it depends on how the different logic gates that make up the various elements of the processor are connected to one another. This is where the Power Delivery Network, or PDN, comes in. It is part of the design of every processor and refers to how power is distributed among the various logic gates.
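As a minimal sketch of that formula, the snippet below plugs in purely illustrative numbers; the gate count, per-gate capacitance, clock and voltage are assumptions for the example, not figures from any real chip.

# Rough dynamic-power estimate, a minimal sketch of the formula above.
# All numbers below are illustrative assumptions, not real chip data.

def dynamic_power(num_gates: float, capacitance_f: float,
                  frequency_hz: float, voltage_v: float) -> float:
    """Estimate dynamic power in watts: N * C * f * V^2."""
    return num_gates * capacitance_f * frequency_hz * voltage_v ** 2

# Hypothetical example: 100 million gates, 0.1 fF of switched
# capacitance per gate, a 3 GHz clock and a 0.9 V supply.
watts = dynamic_power(
    num_gates=100e6,
    capacitance_f=0.1e-15,   # farads per gate (assumed)
    frequency_hz=3e9,
    voltage_v=0.9,
)
print(f"Estimated dynamic power: {watts:.1f} W")   # ~24 W for these inputs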
What is the power delivery network?
When designing a processor, we reach the point where we have to organize the different blocks that make it up and interconnect them so they can communicate. But each element also requires a flow of electric current to function. Just as the layout of the power grid has to be planned when constructing a building, the same has to be done when designing a processor.
In a CPU, the interconnections are what consume the most: today, internal and external interconnections account for around three quarters of the chip's power consumption, and they are one of the biggest challenges engineers face. This makes it increasingly difficult to create new processors with more and more cores, affecting not only the communication interfaces but also the power supply to the various blocks of the processor.
It doesn’t matter whether we are looking at a 1 W smartphone processor, a 45 W chip in a high-end gaming laptop, or a 200 W server processor: all of them have been designed with a specific Power Delivery Network. This implies that each of the hundreds of millions, or even billions, of transistors must operate at the correct voltage. If, for example, the voltage were too low, the data could be corrupted, and the processor would not only work with incorrect data but could also become unstable.
What are the current challenges when designing a PDN?
Over time, the voltage at which both processors and memories operate has decreased. Initially, complete computers were designed using several interconnected chips based on TTL, transistor-transistor logic, running at 5 V. Today, with 7 nm FinFET transistors, we are in the range of 0.5 V to 1 V. This presents a challenge for system designers.
In a digital processor, the signal is binary, so the voltage switches between two levels, one representing a logic one and the other a logic zero. The two values are separated enough that noise on the highs and lows does not end up corrupting the transmitted signal. However, ever-lower voltages bring a problem: in order to supply the most powerful processors with enough power, we have to increase the amount of current feeding them. Since the consumption of any electronic circuit is proportional to the square of its voltage, most designers have focused on keeping the voltage as low as possible within specifications.
The low-voltage, high-current paradigm is difficult because more wires are needed to carry the larger current required. This makes the power distribution network more complicated than it would otherwise be, not only within the processor itself but also externally, where the arrangement of the VRMs on the motherboard or expansion board becomes an important part of a complex electrical system.
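To see why, a short sketch: for a fixed power budget the current is simply P / V, so halving the voltage doubles the amps the PDN must carry. The 200 W budget and the voltage steps below are assumptions chosen only to illustrate the trend.

# Why lower voltage means higher current: for a fixed power budget,
# I = P / V. Figures are illustrative assumptions, not from a datasheet.

power_w = 200.0  # e.g. a server-class power budget (assumed)

for supply_v in (5.0, 1.8, 1.0, 0.7):
    amps = power_w / supply_v
    print(f"{supply_v:>4.1f} V supply -> {amps:6.1f} A must be delivered")

# The same 200 W needs about 40 A at 5 V but roughly 285 A at 0.7 V,
# which is why the PDN needs many more power and ground connections.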
Power delivery networks today and in the future
In recent years, measures have been taken to save power and increase the efficiency of processors. These include power delivery networks built in a modular fashion, designed so that certain parts of the processor shut off completely when not in use in order to consume less power. Nor should we forget the mechanisms that make it possible to dynamically vary a processor's voltage in order to adjust its clock speed and energy consumption.
A processor running at 1 GHz with a voltage of 1.2 V will deliver the same performance as one running at 0.6 V at the same clock frequency, but it will consume 4 times more power for the same work. This is why many modern processors and GPUs have their power delivery networks designed to reduce the voltage to the minimum necessary when the clock speed is low. It increases the complexity of the processor, since it has to be designed to operate at different voltages.
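That 4x figure follows directly from the squared-voltage term in the formula above; the small sketch below simply verifies it with the same numbers.

# Quick check of the 4x figure in the text: with the same gate count,
# capacitance and clock, dynamic power scales with V^2.

v_high, v_low = 1.2, 0.6     # volts, taken from the example above
ratio = (v_high / v_low) ** 2
print(f"Power at {v_high} V vs {v_low} V at the same clock: {ratio:.0f}x")
# -> 4x, which is why DVFS drops the voltage whenever the clock allows it.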
Today’s processors are made up of billions of transistors, which form hundreds of millions of logic gates and, with them, tens of millions of combinational and sequential systems. The design of the PDN has therefore become extremely complex over time, and if we add what we mentioned a few lines ago, it becomes one of the most important parts of the design of any processor.
Things get complicated with chiplets
The adoption of chiplets means that the Power Delivery Network is not only integrated into each chiplet but also into the interposer that connects them. Considering that the interconnect is already the most power-hungry part of a monolithic processor, and that the wiring of a chiplet-based system is longer because it runs between different chips, the biggest challenge in these configurations turns out to be the distribution of power.
The solution? The use of vertical interconnections, which are much shorter and more numerous. This allows them to operate at a lower clock speed and therefore at a lower voltage, something crucial for moving the huge amount of data that applications such as artificial intelligence or graphics rendering require. At the same time, it poses a series of problems in the design of interposers, which marketing departments do not usually talk about in public but which become a huge headache for engineers.
In any case, with chiplets, despite the fact that they are physically separate chips, the PDN is actually designed as if they were a single processor.