It was April 2008, before the iPhone had arrived in many countries and before the App Store had opened to the public, when Apple announced that it had purchased PA Semi. The company, founded in 2003 (PA stands for Palo Alto), had been working in secret with Apple for years, and, as with many other partners, Apple ended up buying it. From that purchase came the first chip designed by Apple itself: the A4. That chip, introduced on January 27, 2010 as the brain of the iPad, made the iPad Apple's first device with its own chip.
Much has changed since then, but one thing that hasn't is that Apple still leads chip development for mobile devices. Competitors such as Qualcomm, Samsung and Huawei keep closing in, but Apple always pushes ahead to stay out of their reach.
The A13 was a major upgrade over the A12, improving on it across its internal structures.
The latest leap was the A13 Bionic, an impressive piece of engineering that brings together a good number of components on a single chip and points to the future of computing: what is known as heterogeneous computing. This refers, among other things, to a chip's ability to have different cores with different levels of performance and power consumption, something the x86 architecture does not allow because all Intel or AMD cores must be identical.
Heterogeneous computing, which other manufacturers such as Samsung, Qualcomm and Huawei are also exploring, allows for a 6-core CPU: four energy-efficient cores that are less powerful but sufficient for lightweight tasks (browsing the web, checking email, running messaging apps…), and two high-performance cores that consume much more energy but are engaged only when heavyweight tasks demand it, such as video games, artificial intelligence or multimedia editing.
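As a toy illustration of that split, here is a minimal dispatch sketch in Python. The task names, weights and the threshold are invented for the example and have nothing to do with Apple's real scheduler:

```python
# Toy model of heterogeneous (big.LITTLE-style) core dispatch.
# Core counts mirror the A13: 4 efficiency cores, 2 performance cores.
# The "weight" threshold below is an invented illustration.

EFFICIENCY_CORES = 4   # low power, enough for light tasks
PERFORMANCE_CORES = 2  # high power, reserved for heavy tasks

def dispatch(task_name: str, weight: int) -> str:
    """Route light tasks (weight < 5) to an efficiency core,
    heavy tasks to a performance core."""
    if weight < 5:
        return f"{task_name} -> efficiency core"
    return f"{task_name} -> performance core"

tasks = [("check email", 1), ("browse web", 2),
         ("video game", 9), ("4K video edit", 8)]
for name, weight in tasks:
    print(dispatch(name, weight))
```

The point of the design is visible even in the toy: most everyday work lands on the cheap cores, and the expensive ones wake up only when needed.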
But Apple has gone further and, since the A11, has been adding specialized units for specific tasks that offload work from the CPU. This is nothing new: Intel has explored that field for years, with units dedicated, for example, to encoding and decoding video in standard formats such as H.264 or H.265 (used for HD and 4K). Apple, however, is pushing the idea further to make up for some of the raw-power shortfalls of the ARM architecture, in keeping with its philosophy of chips focused on efficiency for battery-powered systems.
The real change in Apple's designs came with the A11 generation; the A12 was another leap because it was the first chip built on a 7nm process, repeated in the A13. The next generation may move down to a 5nm process: the same design, made smaller, runs a little cooler and can gain around 8% more speed simply from that change in process. Could the A14 be the first chip we see on a Mac? Let's clarify a few concepts first, to understand why a move to ARM on a computer is no small matter.
The difference between ARM and x86
Often, when Apple's ARM chips are compared with Intel's, we jump to the conclusion that Apple's processors are, on paper, rivals to Intel's latest chips. But that is only part of the story. A CPU works with a set of instructions, and the more instructions it has available, the sooner it can solve common tasks. Let's use a metaphor to understand this better.
Intel is a CISC architecture, or Complex Instruction Set Computer. ARM is RISC, or Reduced Instruction Set Computer.
Imagine a calculator with four keys: add, subtract, multiply and divide. It lets us perform any basic arithmetic calculation in a single operation. But if we want to calculate 30% of 200, we have to do more than one, because the calculator has no percent button. That forces us to choose: either apply a rule of three with two operations, multiplying 30 by 200 and dividing by 100; or divide 30 by 100 and multiply the result by 200. Either way, we needed two operations with our four buttons to get the result.
With a calculator that has a % button, we press one key and perform a single operation. Both calculators are equally fast and the same size, but the absence of a percent button forces us to double the number of operations. That is ARM versus Intel's x86.
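The metaphor translates directly into code. A minimal sketch that counts operations rather than measuring real instruction sets:

```python
# The calculator metaphor in code: a "CISC" calculator has a dedicated
# percent key; a "RISC" calculator must compose it from the four basics.
# Each function returns (result, number of operations used).

def percent_cisc(p, n):
    # One dedicated "instruction": the % button does everything at once.
    return p * n / 100, 1

def percent_risc(p, n):
    # No % button: compose the result from basic operations.
    step1 = p * n        # first operation: 30 * 200
    step2 = step1 / 100  # second operation: divide by 100
    return step2, 2      # same result, twice the operations

print(percent_cisc(30, 200))  # (60.0, 1)
print(percent_risc(30, 200))  # (60.0, 2)
```

Same answer, same speed per operation; the only difference is how many operations it takes to get there.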
While Intel's architecture has many instructions of different kinds and sizes, with specific instruction sets for certain types of calculation, an ARM chip has far fewer instructions and they all occupy the same space. It needs more operations (and more time) to reach the same result as an Intel chip. So the fact that both run "equally fast" means little: if one needs fewer steps to get the same result, the one taking fewer steps finishes first. It's that simple.
Add to that the size of the instructions. Imagine the CPU as a highway with trucks driving along it. Each truck is an instruction and carries inside it the information needed to perform its operation. Intel's trucks can be large or small: a basic calculation travels in a small truck, so more trucks performing basic operations fit on the same stretch of road. On ARM, all trucks are the same size, whatever they carry. So it's not just that ARM's instructions are simple; it's that the space they occupy on the highway is always the same.
While 10 small trucks can perform 10 operations in 10 meters of road (1 meter per truck), each ARM truck occupies 2.5 meters, so the same stretch holds only four basic-operation trucks. Intel, having more instructions, also has trucks longer than 2.5 meters, but the bigger the truck, the less often its operation is needed: the most repetitive (and most common) tasks travel in small trucks, so more of them fit in the same space.
In short: ARM processes instructions that are all the same length, while Intel processes instructions of different lengths. ARM is designed for lower power consumption, Intel for greater power.
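The truck metaphor can be put into a few lines of Python. The lengths (1 meter and 2.5 meters) come straight from the analogy above, not from real x86 or ARM instruction encodings:

```python
# The highway metaphor: how many instruction "trucks" fit in 10 meters.
# An analogy only, not a model of real instruction encodings.

ROAD = 10.0  # meters of highway (one stretch of the pipeline)

def trucks_that_fit(truck_length: float) -> int:
    """How many trucks of a given length fit on the stretch."""
    return int(ROAD // truck_length)

print(trucks_that_fit(1.0))  # 10 small x86 trucks (simple instructions)
print(trucks_that_fit(2.5))  # 4 fixed-size ARM trucks
```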
Why am I telling you all this? Because this is how Apple found a way to close the gap: by adding specialized chips alongside its CPUs to solve common, concrete tasks. If a chip receives raw video data on one side and returns already-encoded video on the other, it will always be faster than a general-purpose chip that has to run every instruction of the video-encoding process itself.
In essence, they give us a scientific calculator. With only a four-function calculator I can compute logarithms, but it will always be faster to have a dedicated button that takes the input and returns the expected result. Those are the units Apple keeps adding to improve the performance of its chips and make them faster and more efficient. They are no longer just more complex instructions: they are complete processes inside a chip.
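The logarithm example works literally: with only the four basic operations you can build ln(x) step by step, while a "dedicated button" returns it in one call. A sketch, where the particular series and term count are just one possible construction:

```python
import math

def log_four_function(x: float, terms: int = 50) -> float:
    """Compute ln(x) using only +, -, *, / (the four-function calculator),
    via the series ln(x) = 2 * sum t^(2k+1)/(2k+1), with t = (x-1)/(x+1)."""
    t = (x - 1) / (x + 1)
    total, power = 0.0, t
    for k in range(terms):
        total += power / (2 * k + 1)  # one more "basic operation" pass
        power *= t * t
    return 2 * total

# Many basic operations vs. one "dedicated button" (math.log):
print(round(log_four_function(2.0), 6))  # 0.693147
print(round(math.log(2.0), 6))           # 0.693147
```

Both paths give the same answer; the dedicated path simply gets there in one step, which is exactly what a specialized hardware unit does for video or ML workloads.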
Among other things, the A13 has a Neural Engine that runs machine learning models far more efficiently than any CPU, because it is optimized for the kinds of data and operations this branch of artificial intelligence relies on. It has a video encoder, an image signal processor for the photos taken by the camera, a GPU that accelerates graphics operations, a controller that decides which type of CPU core each process should use, another unit to process and enhance audio… that is the "Apple trick".
Since ARM is, by definition, somewhat weaker and will remain less powerful because of its limitations as an architecture, Apple places next to the chip a set of specialized units that handle the ordinary tasks we perform daily in a dedicated, faster way. It is like the T2 chip on Macs, which offloads file encryption or 4K video encoding from the Intel chip.
Future A14
The A13 measures 98.5 square millimeters, 20% more than the A12, after adding new elements and improving others. In total, the A13 holds some 8.5 billion transistors. If we factor in the predictions of experts such as Jason Cross of Macworld, who wrote a very interesting analysis of what to expect from the A14 that you can read here, the new A14 would be larger, exceeding 100 square millimeters, and we are probably talking about up to 15 billion transistors.
Speaking of graphics, the A13 delivered a surprising jump in GPU performance. Testing with 3DMark, the A12 scores slightly above 4,000 points, while the A13 exceeds 6,400: an improvement of more than 50% that we can expect to continue with the A14. Let's not forget that the A12X (the previous generation) in the iPad Pro already has considerable graphics power, above that of the PS4 or Xbox One S (base models). And the new A13 also represents a significant improvement in compute: broadly, the capacity to execute algorithms and other mathematical operations.
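The percentage is easy to verify from the 3DMark scores quoted above:

```python
# GPU improvement computed from the 3DMark scores cited in the text
# (A12 slightly above 4,000 points, A13 above 6,400).

a12_score = 4000
a13_score = 6400

improvement = (a13_score - a12_score) / a12_score * 100
print(f"A13 over A12: +{improvement:.0f}%")  # A13 over A12: +60%
```

A 60% jump, comfortably "more than 50%".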
Looking further ahead, a 5nm process with a bigger transistor budget would let Apple add more Neural Engine cores, for example, or incorporate new graphics units to boost the features we've discussed. Having room for new units matters because Apple's ecosystem today has a notable gap: a hardware AV1 decoder.
Remember how we said some chips solve certain tasks directly, while without them you are forced into multiple operations to reach the same result? Our logarithm problem? That is exactly what happens today with the AV1 video format that platforms like YouTube use for their 4K video.
AV1 is an open, royalty-free codec that YouTube uses for its videos instead of the HEVC favored by Apple, which is proprietary and carries licensing fees. Why can't the Apple TV 4K play YouTube videos in 4K? Because Google would have to ship a software decoder, which would demand too much from the Apple TV's A10X Fusion processor. In fact, Apple belongs to the consortium that created and promotes this standard, as we reported here in its day.
But if Apple created a chip with AV1 support, it would not only gain a more efficient codec whose videos take up far less space than today's HEVC. It could also let its products play YouTube videos natively, without punishing battery life or system resources, and with a very significant reduction in bandwidth usage. In fact, I would bet that the new Apple TV expected this year will ship with this AV1 support.
Memory matters too. The iPhone still uses LPDDR4 memory (and has for several generations, ever since the A9). But Samsung (its memory supplier) has begun producing LPDDR5 memory, which performs better and faster and will start arriving in flagship phones this year. It is natural to assume Apple will fit it in its new devices.
These memories use up to 30% less energy performing the same tasks as before (so we get more battery life from the same battery), and they are also 30% faster.
A new leap
Undoubtedly, all of these small improvements together, in architecture, manufacturing process, transistor count, components, memory, additional cores, support for new video formats… if we add them all up, we can see that the CPU's line of technological development keeps climbing, and the A14 will certainly give us plenty to talk about. Let's not forget that, broadly speaking, the A13 is a revision of the A12, while the A14 will be a new leap that should bring us interesting surprises.
I hope you found this article informative. We'll see what Apple eventually does with its ARM chips and what happens when they arrive (because they will) on the Mac. How will they compensate for the smaller set of general-purpose instructions?