Every few years, new generations of processors from Intel and AMD arrive on the market, competing for the highest performance and, with it, the attention of buyers. However, we are not going to talk about roadmaps or codenames lifted from a PowerPoint slide in the middle of a keynote. Instead, we'll review three ideas, a series of common trends, that we will see in the processors of the future.
The world of PC processors is, evolutionarily speaking, quite boring: from time to time new processors appear with a higher number of cores, faster per MHz and with more cache memory. Beyond that, evolution has stayed on the same path, which is no small feat given the titanic effort required to bring these beasts, chips with billions of transistors, to market.
What improvements will we see in the processors of the future?
We're not going to talk about complex computer architecture concepts, but about a series of trends that we can all relate to and that in a few years you'll see appearing on the roadmaps and datasheets of Intel and AMD. Most of these changes aim to improve system performance, but they deviate from the traditional path followed so far. Why? Because they are much more efficient solutions, both in cost and in power consumption.
Even more heterogeneous cores
With Intel's 12th-generation Core we saw the introduction of E-Cores, creating two types of cores in the same processor, both binary compatible and able to run the same programs. The idea of using smaller cores for background tasks came from mobile processors, and it won't be the last idea we see copied from them.
Many processors from Intel and AMD have a Boost mode that allows one of the cores to reach a higher clock speed than the others for certain tasks. Well, what we will see in processors of the future is the implementation of what we might call super cores, with more processing capacity than the normal cores. Let's not forget that programs have parts that run in parallel and parts that run serially, and it is important to speed up both.
The idea is that instead of performing a clock speed boost, which sends power consumption into the stratosphere to push one core to its maximum speed, one of these higher-capacity cores is used to handle those serial stretches of program execution.
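The reasoning above is essentially Amdahl's law: the serial fraction of a program caps the speedup you can get from adding cores, so accelerating the serial part directly pays off. A minimal sketch, with illustrative numbers of our own choosing:

```python
def amdahl_speedup(parallel_fraction, n_cores, serial_core_speedup=1.0):
    """Overall speedup of a program whose parallel fraction runs on
    n_cores, while the serial fraction runs on a core that is
    serial_core_speedup times faster than a normal core."""
    serial_time = (1.0 - parallel_fraction) / serial_core_speedup
    parallel_time = parallel_fraction / n_cores
    return 1.0 / (serial_time + parallel_time)

# A program that is 80% parallel on 8 ordinary cores:
print(amdahl_speedup(0.80, 8))  # ≈ 3.33x

# The same program when a 'super core' runs the serial 20% twice as fast:
print(amdahl_speedup(0.80, 8, serial_core_speedup=2.0))  # ≈ 5.0x
```

Doubling the speed of just the serial part lifts the whole program from roughly 3.3x to 5x, which is why a single larger core can matter more than a higher peak clock.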
Using accelerators for common tasks
An accelerator is an integrated circuit that executes a task, or a series of tasks, in much less time than the system's central processor, while also freeing the CPU from that work. Well, tasks like the following will in the future be handed off to specialized units:
- Compressing or decompressing a .ZIP file or other similar format
- Converting a file from one format to another
- Moving data between memories or within the same memory pool
This is something we already see in server processors, with support built into versions of Windows and Linux for those systems. As with everything, over time these accelerators will find their way into desktop and laptop processors, where programs will take advantage of them.
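The way software typically uses such a unit is to probe for it and fall back to the CPU when it is absent. A minimal sketch of that pattern, where the accelerator API is entirely hypothetical and only the software fallback (Python's standard zlib) is real:

```python
import zlib

def hardware_compressor_available() -> bool:
    # Hypothetical probe; a real driver or library would expose this check.
    return False

def hw_compress(data: bytes) -> bytes:
    # Hypothetical accelerator entry point; no real hardware is driven here.
    raise NotImplementedError

def compress(data: bytes) -> bytes:
    if hardware_compressor_available():
        # Offload path: the CPU is free to do other work while the unit runs.
        return hw_compress(data)
    # Software fallback: the CPU does the compression itself.
    return zlib.compress(data)

payload = b"example data " * 1000
compressed = compress(payload)
assert zlib.decompress(compressed) == payload
```

The program's result is the same either way; only where the work happens, and how long the CPU is tied up, changes.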
Configurable peripheral connections
One of the things we will see is the use of FPGA fabric inside the processor, with the aim of configuring the various peripheral interfaces as desired. For example, instead of a chipset having a fixed number of USB ports of each type, your motherboard manufacturer will be able to configure the processor's integrated chipset according to your system configuration. This way, they won't have to wait for the next generation to integrate certain connection ports, and can add or remove them depending on the target market of the system.
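Conceptually, a board vendor would pick a port mix per market and program the chipset's FPGA fabric to match, subject to how many configurable lanes the fabric offers. A toy sketch of that selection step, where every name and number is illustrative, not taken from any real product:

```python
# Hypothetical port profiles a board vendor might define per market segment.
PORT_PROFILES = {
    "office": {"usb2": 6, "usb3": 2, "usb4": 0},
    "gaming": {"usb2": 2, "usb3": 4, "usb4": 2},
}

FPGA_LANES = 8  # assumed number of configurable interface lanes

def lanes_needed(profile: dict) -> int:
    # For illustration, assume each port consumes one configurable lane.
    return sum(profile.values())

def select_profile(market: str) -> dict:
    profile = PORT_PROFILES[market]
    if lanes_needed(profile) > FPGA_LANES:
        raise ValueError("port mix exceeds available FPGA lanes")
    return profile

print(select_profile("gaming"))
```

The point is that the same silicon serves both profiles; only the configuration loaded into the fabric differs.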