Computers are getting faster and applications run more smoothly, yet when it comes to interacting with Windows it feels as if, no matter how powerful our PC's innards are, the system always moves at the same pace. Is this something exclusive to the Redmond OS? Is it poor optimization for the hardware, or is there a technical reason behind it? We explain it to you.
The natural assumption, by simple logic, is that the more computing power a computer has, the smaller the share of its resources that programs consume. However, we often find that this is not the case and that poor optimization outweighs everything else. Is it laziness on the part of programmers, the result of a series of technical decisions, or unavoidable circumstances?
Windows still consumes the same PC resources as ever
We must start from the fact that in current operating systems it is the system itself, not the applications, that is in charge of managing the different processes. It therefore decides not only where they are executed and in which order, but also under what conditions, and this is where we find two different ways of using hardware resources: by the system itself and by the applications.
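To see those decisions from user space, here is a minimal sketch, assuming the third-party psutil package is installed, that asks the operating system what it has already decided for each running process: its priority and the set of logical cores it is allowed to run on. The output format is our own.

```python
import psutil

# Inspect running processes and report how the OS has scheduled them:
# the priority (priority class on Windows, nice value on Linux) and the
# CPU affinity, i.e. the logical cores the process may be placed on.
for proc in psutil.process_iter(["pid", "name"]):
    try:
        name = (proc.info["name"] or "?")[:25]
        priority = proc.nice()          # scheduling priority
        affinity = proc.cpu_affinity()  # list of allowed logical core indices
        print(f"{proc.info['pid']:>6} {name:<25} priority={priority} cores={affinity}")
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        # Some system processes cannot be queried without elevated rights.
        continue
```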
However, one of those approaches is a nightmare in terms of performance and turns certain processes into real vampires of our computer's power, even though they do not actually need much power to run: allocating them a fixed percentage of capacity, whether we own a powerful high-end computer or a modest mini PC. For example, Windows 11 ships with virtualization-based security enabled by default, a feature that offers little benefit to home users but is capable of consuming around 5% of performance, whether you are running an application on a humble Celeron or on a far more powerful chip, and consider the difference in absolute power that the same percentage represents in each case.
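If you want to check what that kind of fixed overhead costs on your own machine, a deliberately crude sketch like the one below can serve as a yardstick: run the same CPU-bound workload with the feature enabled and then disabled and compare the best times. The workload and iteration count are arbitrary choices of ours, and a serious measurement would need far more care.

```python
import time

def fixed_workload(iterations: int = 2_000_000) -> int:
    """A deterministic CPU-bound task used purely as a yardstick."""
    total = 0
    for i in range(iterations):
        total += (i * i) % 97
    return total

# Run the same workload several times and keep the best (least disturbed) time.
# Repeating the measurement with virtualization-based security on and then off
# shows whether the overhead is roughly constant on a given machine.
times = []
for _ in range(5):
    start = time.perf_counter()
    fixed_workload()
    times.append(time.perf_counter() - start)

print(f"best of 5 runs: {min(times):.3f} s")
```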
Why are these measures taken?
In reality, it is impossible to predict the performance that a program will require, since it depends on the latency of each instruction, and that latency cannot be predicted because it is not known in advance where the corresponding data will be found. Will the information already be in the registers or, failing that, in some level of the processor's data cache? The latency of each instruction, measured in clock cycles, varies depending on how long it takes to fetch that data.
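The effect of where the data lives can be glimpsed even from a high-level language. The sketch below touches the same large array twice, once in memory order and once in a shuffled order that defeats the caches. Python's interpreter overhead hides much of the gap, so treat the timings as illustrative only; the sizes and stride are arbitrary assumptions.

```python
import array
import random
import time

N = 20_000_000                      # ~80 MB of 32-bit integers, far larger than any CPU cache
data = array.array("i", range(N))

sequential = list(range(0, N, 16))  # walk through memory in order
shuffled = sequential.copy()
random.shuffle(shuffled)            # same indices, cache-hostile order

def visit(indices):
    total = 0
    for i in indices:
        total += data[i]
    return total

for label, idx in (("sequential", sequential), ("random", shuffled)):
    start = time.perf_counter()
    visit(idx)
    print(f"{label:>10}: {time.perf_counter() - start:.3f} s")
```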
Therefore, under this premise, and given the huge number of hardware configurations that exist, it is perfectly understandable that an operating system, whether it is Windows, Linux or any other, reserves fixed utilization percentages of processor power for its system processes. What's more, the same is done on consoles, where over the years these reserved percentages are trimmed to give games more power. However, we cannot forget that consoles are closed systems, which is a completely different situation from PCs.
Additional cores for the operating system?
This all brings us to one of the novelties implemented in the last two generations of Intel Core processors, although not in all models: the so-called E-Cores, which on paper were designed for background tasks. However, assigning work to them is not automatic. In other words, what the operating system does is look for, say, 2% of spare power on whatever core has it free in order to run a given process.
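One way to force the issue yourself is to restrict a background process to the cores you want it on. The sketch below does this with psutil; the core indices are a purely hypothetical E-Core mapping, since the real layout depends on the specific processor and can be checked in Task Manager or vendor tools.

```python
import psutil

# HYPOTHETICAL mapping: on this imaginary hybrid CPU we assume logical cores
# 16-23 are E-Cores. Replace with the real indices for your processor.
ASSUMED_E_CORES = list(range(16, 24))

def pin_to_e_cores(pid: int) -> None:
    """Restrict a process so the scheduler may only place it on the assumed E-Cores."""
    proc = psutil.Process(pid)
    proc.cpu_affinity(ASSUMED_E_CORES)
    print(f"{proc.name()} (pid {pid}) now limited to cores {proc.cpu_affinity()}")

# Example usage (replace 1234 with the PID of a real background process):
# pin_to_e_cores(1234)
```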
The way the operating system distributes process threads across the CPU is like trying to fit as much as possible into a single box: it will not activate another core while there is still a percentage of capacity left in the ones already in use. The result? Despite the fact that E-Cores are a good idea for this type of task, Windows can ignore them, because in the eyes of the operating system what it has is simply a certain number of execution threads; it does not know how much power each one contributes.
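The box analogy corresponds to a simple first-fit packing strategy, which the toy model below reproduces. It is our own simplification for illustration, not a description of how the Windows scheduler actually works: each core is a box with 100% capacity, and a new core is only "woken up" when no existing one can fit the next thread.

```python
# Toy model of the "box packing" placement described above.
def first_fit(thread_loads, core_capacity=100):
    cores = []                          # each entry = load already placed on that core
    for load in thread_loads:
        for i, used in enumerate(cores):
            if used + load <= core_capacity:
                cores[i] += load        # reuse an existing core with spare capacity
                break
        else:
            cores.append(load)          # no room anywhere: activate a new core
    return cores

# Ten light background threads fit comfortably on a single core,
# so the remaining cores are never brought into play.
print(first_fit([2, 5, 3, 10, 4, 2, 8, 1, 6, 3]))   # -> [44]
```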