One of the things you learn when studying computer architectures is how processor designs can be tuned to game performance tests and manipulate benchmarks. This is nothing new: since the early days of the PC, manufacturers, be it Intel, AMD or any other brand, have accompanied information about their architectures with figures and graphs. And just as manipulation exists in other fields, such as finance, it also happens, oddly enough, in the world of processors.
Common tricks designed to manipulate benchmark scores
To know whether one processor is better than another, we need a cardinal rating, that is, a single number that measures performance and tells us whether processor A is better than processor B. Reading the specifications is not enough, because that data does not help us compare one CPU with another if they use different architectures. This is where benchmarks come in, and knowing this, processor designers and manufacturers use a number of tricks or traps to manipulate them.
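To make the idea of a single cardinal score more concrete, here is a minimal sketch in C of how suite-style benchmarks typically build one: each sub-test is timed, converted into a ratio against a reference machine, and the ratios are folded into a geometric mean. The sub-test names and reference times below are invented for illustration.

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical sub-test results: seconds on the machine under test
 * and on an arbitrary reference machine (values invented for illustration). */
struct subtest { const char *name; double time_s; double ref_time_s; };

int main(void) {
    struct subtest tests[] = {
        { "compression", 12.4, 20.0 },
        { "raytrace",    33.1, 45.0 },
        { "compile",      8.9, 10.0 },
    };
    size_t n = sizeof tests / sizeof tests[0];

    /* Each ratio says how many times faster than the reference we are;
     * the geometric mean folds them into one cardinal score. */
    double log_sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        double ratio = tests[i].ref_time_s / tests[i].time_s;
        printf("%-12s ratio %.2f\n", tests[i].name, ratio);
        log_sum += log(ratio);
    }
    printf("overall score: %.2f\n", exp(log_sum / (double)n));
    return 0;
}
```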
Cheat instructions
A benchmark is nothing more than a program, that is, a series of ordered instructions executed in sequence. Well, one of the traps that hardware architects have been using for years, even decades, is to optimize the instructions that benchmarks use the most, for example by making them take less time to execute. When deciding which instructions get the most resources in a design, the most frequently used ones are always given priority, and if those sit at the top of the benchmark workloads, the chip scores better in performance tests.
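As a rough illustration, the following sketch is a toy benchmark whose hot loop consists almost entirely of one operation, a floating-point multiply and add. If a designer makes just that operation faster, the whole score improves, which is exactly the incentive described above. The array size and repetition count are arbitrary, and timing uses the POSIX clock_gettime call.

```c
#include <stdio.h>
#include <time.h>

#define N 1000000

static double a[N], b[N];

int main(void) {
    for (int i = 0; i < N; i++) { a[i] = 1.0001; b[i] = 0.9999; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* The hot loop: virtually all time is spent in multiply-add,
     * so making that one operation faster lifts the whole "score". */
    double sum = 0.0;
    for (int rep = 0; rep < 100; rep++)
        for (int i = 0; i < N; i++)
            sum += a[i] * b[i];

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("sum=%f time=%.3f s\n", sum, secs);
    return 0;
}
```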
Clock speed and cache to manipulate benchmarks
Although these are two separate elements, they are closely related when it comes to manipulating a chip's results. Today, the clock speed of a processor, measured in GHz, fluctuates depending on the workload. Power consumption also depends on where the data is located: an access that hits a given level of the cache costs less energy than an access to RAM. A typical program runs out of RAM, but some benchmarks are so small that they fit entirely in cache memory.
So when they run, they never need to go through the memory controller; everything is served directly from the cache. This gives them not only very low latency but also lower power consumption, which helps the processor sustain a higher clock speed for longer. In a typical workload, such a sustained boost would not be achievable, but the performance test does not measure this aspect, only how fast the program runs.
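The effect is easy to reproduce on a POSIX system: the sketch below performs the same number of memory accesses over a buffer that fits in cache and over one that does not, so the only difference is whether the memory controller gets involved. The buffer sizes and access stride are arbitrary choices for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile long g_sink;   /* keeps the compiler from removing the walk */

/* Visit 'total' elements of a buffer of 'len' ints with a large stride,
 * so a small buffer stays cache-resident while a big one keeps missing. */
static long walk(int *buf, size_t len, size_t total) {
    long sum = 0;
    size_t idx = 0;
    for (size_t i = 0; i < total; i++) {
        sum += buf[idx];
        idx = (idx + 4099) % len;   /* stride defeats simple prefetching */
    }
    return sum;
}

static double timed_walk(size_t len, size_t total) {
    int *buf = calloc(len, sizeof *buf);
    if (!buf) return -1.0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    g_sink = walk(buf, len, total);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    size_t total = 100u * 1000 * 1000;           /* same work in both cases */
    printf("cache-sized (256 KiB): %.3f s\n", timed_walk(64 * 1024, total));
    printf("RAM-sized   (512 MiB): %.3f s\n", timed_walk(128 * 1024 * 1024, total));
    return 0;
}
```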
The trick is simply to maintain the Turbo or Boost speed for as long as possible. There are even designs whose specified boost clock is lower than what they actually reach in these tests. As the saying goes, every law has its loophole.
Multi-threaded tests and heterogeneous configurations
Due to poor marketing, the E-Cores of Intel's last two generations are mistaken for cores designed around energy efficiency. In reality, they are optimized for area, that is, die space. What do we mean by this? When a processor is running a program, it sometimes hits stalls that prevent it from continuing, generating periods of inactivity called bubbles. Well, the idea of multithreading is to give the processor mechanisms to switch context and work on other processes during those bubbles.
The problem is that multithreading dramatically increases power consumption, which is deadly for portable devices. The solution? Instead of spending area and transistors on multithreading, simpler CPU cores were added to low-power designs. Intel has apparently adopted this strategy, yet its P-Cores still support multithreading, so that is not what the E-Cores are for. Rather, they exist so that the more powerful cores don't waste time dealing with things unrelated to the main program. So if an app like a mail manager suddenly wakes up, for example, it shouldn't end up running on the cores handling the foreground workload.
In multi-threaded benchmarks, the score is the sum of the performance of all cores, and the E-Cores cannot be switched off for the test. This is one of the pitfalls of comparing homogeneous and heterogeneous processors. Ideally, the results would state how much each core type contributes to the total.
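On Linux, one way to approximate that per-core-type breakdown is to pin the same fixed workload to a single core of each type with sched_setaffinity and compare the timings. The sketch below does exactly that; the CPU IDs it assumes for a P-Core and an E-Core are hypothetical and must be checked against the real topology, for example with lscpu --extended.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>

/* Pin the current process to one logical CPU and time a fixed busy loop.
 * The CPU IDs used in main() are hypothetical: check your machine's
 * topology to find which IDs belong to P-Cores and which to E-Cores. */
static double time_on_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return -1.0;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    volatile double x = 0.0;
    for (long i = 0; i < 400000000L; i++)
        x += 1e-9;                    /* same amount of work on every core */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    printf("CPU 0 (assumed P-Core): %.2f s\n", time_on_cpu(0));
    printf("CPU 8 (assumed E-Core): %.2f s\n", time_on_cpu(8));
    return 0;
}
```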
False results
This is the most shameless case: depending on the brand or model, extra points are awarded. It is possible because every CPU has an instruction that identifies it, so the benchmark program can know which chip it is running on and apply a bonus or a penalty as appropriate.
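On x86, that identifying instruction is CPUID. The sketch below, using the __get_cpuid helper from GCC/Clang's cpuid.h, reads the vendor string and then applies an invented bonus or penalty to a pretend score, which is all a dishonest benchmark would need to do.

```c
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = { 0 };

    /* CPUID leaf 0 returns the 12-byte vendor string in EBX, EDX, ECX. */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID not supported\n");
        return 1;
    }
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    double score = 100.0;             /* pretend this came from a real test */
    if (strcmp(vendor, "GenuineIntel") == 0)
        score *= 1.10;                /* invented bias: +10% for one vendor */
    else if (strcmp(vendor, "AuthenticAMD") == 0)
        score *= 0.95;                /* invented bias: -5% for the other   */

    printf("vendor: %s, \"score\": %.1f\n", vendor, score);
    return 0;
}
```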