Graphics chips, known today as GPUs, have seen an interesting evolution, from their beginnings in early PCs to the ultra-complex behemoths we have today. In this article we are going to review that evolution.
Graphics cards, or rather the GPUs at their heart, are the most widely used component of a PC today that is not part of the Von Neumann design, and that freedom is what has allowed their dramatic evolution.
What is a GPU?
A graphics card is a key component of a computer: through it the machine communicates with the user via a screen, generating images that reflect the current state of the system at all times. Its two main components are the graphics chip on the one hand and the video memory, or VRAM, on the other.
From the launch of the first graphics cards until today, graphics chips and VRAM have become more complex and powerful, but that basic configuration has remained the same.
First generation of graphics cards: terminals and image buffers
The first generation descended from text terminals, where a television without a radio-frequency receiver was used to display the text being typed. These terminals were called TV Typewriters, or Television Typewriters, because to a person of the time that is exactly what they were.
Their operation was very simple: part of the RAM was used to store the sequence of characters being typed. Each character had its glyph stored as a bitmap in a ROM, so instead of holding the pixels themselves, the RAM simply told the ROM which character should be sent to the video output. A series of binary counters, tracking the horizontal and vertical scan position, drove the whole process.
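To make that mechanism concrete, here is a minimal C sketch of how one pixel would be derived from character RAM, a glyph ROM and the scan counters. The screen dimensions, glyph size and memory layout are illustrative assumptions, not those of any particular terminal:

```c
#include <stdint.h>

#define COLS    40   /* characters per text row (illustrative)      */
#define ROWS    25   /* text rows on screen (illustrative)          */
#define GLYPH_H 8    /* glyph height in scanlines (illustrative)    */

/* Character RAM: one byte per screen cell, holding a character code. */
uint8_t char_ram[COLS * ROWS];
/* Glyph ROM: one byte per glyph scanline, 8 pixels wide. */
uint8_t glyph_rom[256][GLYPH_H];

/* Compute one pixel the way the counters would: the horizontal and
 * vertical counts select the character cell, and their low bits select
 * the row and column inside that character's glyph. */
int pixel_at(int x, int y)
{
    int cell   = (y / GLYPH_H) * COLS + (x / 8); /* which character cell  */
    int row    = y % GLYPH_H;                    /* scanline within glyph */
    int column = x % 8;                          /* bit within that row   */

    uint8_t bits = glyph_rom[char_ram[cell]][row];
    return (bits >> (7 - column)) & 1;           /* 1 = pixel lit         */
}
```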
But in the same way that a character can be selected, a color can be, so some of those primitive graphics chips could also draw images on the screen, limited by the available memory and the color palette. Both evolved over time to allow more colors and more pixels, yet these were still very primitive devices with which little else could be done.
Second evolution of GPUs: sprites
Unfortunately, the second generation never reached the PC, since in the early 1980s IBM had not considered that it would be used to run video games. The concept behind this second generation? Moving objects, or sprites: glyphs or bitmaps that can be placed at any position in the image.
In the early 1980s, most of the 8-bit systems people had in their homes used a sprite system, since most of their software consisted of video games. Sprite hardware keeps an attribute table for each sprite, indicating its priority, whether it is flipped, the color scheme it uses and its position on screen; a sketch of such a table appears below.
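As an illustration, a sprite attribute entry could be modelled roughly like this in C. The field names and widths are generic assumptions, not those of any specific chip, which packed the same information into just a few bytes:

```c
#include <stdint.h>
#include <stdbool.h>

/* One entry of a hypothetical sprite attribute table. */
typedef struct {
    uint16_t x, y;      /* position of the sprite on screen            */
    uint8_t  pattern;   /* which glyph/bitmap to display               */
    uint8_t  palette;   /* which color scheme to apply                 */
    uint8_t  priority;  /* draw order relative to background/sprites   */
    bool     flip_h;    /* horizontally mirrored                       */
    bool     flip_v;    /* vertically mirrored                         */
} SpriteAttr;

/* The video chip walks this table every frame and composites each
 * sprite over the background according to its attributes. */
SpriteAttr sprite_table[8];
```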
This system was used in 8-bit and 16-bit consoles for almost 15 years, until the jump to 32 bits. On the PC, on the other hand, it was completely absent, which shaped the kind of games released on the platform for many years.
Third evolution of GPUs: display lists and the blitter
Display lists first appeared in home hardware with the ANTIC chip of the Atari 800 computer. A display list is nothing more than a series of commands telling the graphics chip how to draw the scene. Until then, the approach had been to copy raw data into video memory and let the hardware interpret it directly.
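To illustrate the idea, here is a minimal sketch of a display-list interpreter in C. The opcodes and their operands are invented for the example and do not match ANTIC's actual instruction set:

```c
#include <stdint.h>
#include <stddef.h>

/* Invented opcodes for the example; real chips defined their own. */
enum { DL_END = 0, DL_BLANK_LINES = 1, DL_TEXT_LINE = 2 };

/* The graphics chip reads the list command by command each frame and
 * decides, line by line, what to fetch and how to display it. */
void run_display_list(const uint8_t *list)
{
    for (size_t i = 0; list[i] != DL_END; ) {
        switch (list[i]) {
        case DL_BLANK_LINES:   /* operand: number of empty scanlines      */
            /* emit_blank_lines(list[i + 1]); */
            i += 2;
            break;
        case DL_TEXT_LINE:     /* operand: which row of character data    */
            /* emit_text_line(list[i + 1]); */
            i += 2;
            break;
        default:
            return;            /* unknown command: stop interpreting */
        }
    }
}
```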
But they reached their peak in the Commodore Amiga, through its Agnus chip, which contained both an evolution of the ANTIC concept called the Copper and a unit called the Blitter, and the latter completely revolutionized how graphics were produced. The function of the Blitter? Copying data from one area of memory to another while being able to modify it on the fly, so that the destination differs from the source, and all of this without occupying the CPU.
Before the Blitter appeared, the CPU could only touch video memory during the time when the screen was not being drawn; with the Blitter, the CPU gained all the time in the world, and the Blitter's ability to manipulate graphics data on the fly produced a leap in quality in home computers.
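As a rough illustration, a blitter-style copy could look like the following C sketch. The single OR operation and the rectangle-based addressing are simplifications; real blitters combined several sources with programmable logic operations:

```c
#include <stdint.h>

/* Simplified blit: copy a width x height rectangle of 8-bit pixels from
 * src to dst, combining each pixel with what is already there (here
 * with OR, as one example of an on-the-fly modification). On real
 * hardware this loop runs in dedicated silicon, leaving the CPU free. */
void blit_or(uint8_t *dst, int dst_pitch,
             const uint8_t *src, int src_pitch,
             int width, int height)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x)
            dst[y * dst_pitch + x] |= src[y * src_pitch + x];
    }
}
```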
Fourth evolution of GPUs: multiport memory
Even with the processor freed up, graphics chips still had the problem of not being able to access video RAM to manipulate data while the screen was being generated. The next development was therefore multiport memory, that is, memory with more than one access channel, which lets the display circuitry and the graphics chip access it at the same time without one having to wait for the other.
With this, the evolution of 2D cards was essentially complete; from then on their progress was not architectural but a matter of specifications, drawing more pixels per screen and with greater color precision.
The emergence of 3D hardware on PCs
The first 3D cards did not include 2D circuitry; instead they brought two completely new hardware elements that did not exist in 2D hardware, which were as follows:
- Texture units: these take a specific portion of an image stored in memory and manipulate it, rotating it, resizing it, and even warping it so it can be mapped onto a 3D surface. Texture units were the last piece to appear in 3D workstations, yet on the PC, facing the adoption of 3D graphics, they were the first hardware to arrive.
- Rasterization units: these are responsible for converting 3D vector data into 2D Cartesian data that can be shown on screen. This process was very heavy for the processors of the time, and moving it into dedicated hardware allowed 3D software to reach the home without the need for a workstation. A sketch of the underlying perspective projection follows this list.
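To give an idea of the arithmetic such a unit takes off the CPU, here is a minimal C sketch of projecting a 3D vertex to 2D screen coordinates; the focal length and screen size are arbitrary example values, and a real rasterizer would also fill the resulting triangles:

```c
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;   /* 3D point in camera space */
typedef struct { float x, y;    } Vec2;   /* 2D point on the screen   */

/* Simple perspective projection: divide by depth and map the result to
 * pixel coordinates centered on the screen. */
Vec2 project(Vec3 v, float focal, float screen_w, float screen_h)
{
    Vec2 p;
    p.x = (v.x * focal) / v.z + screen_w / 2.0f;
    p.y = (v.y * focal) / v.z + screen_h / 2.0f;
    return p;
}

int main(void)
{
    Vec3 vertex = { 1.0f, 0.5f, 4.0f };               /* example vertex */
    Vec2 pixel  = project(vertex, 256.0f, 640.0f, 480.0f);
    printf("screen position: (%.1f, %.1f)\n", pixel.x, pixel.y);
    return 0;
}
```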
The first 3D cards were expensive and, despite their adoption in the PC video game market, did not have the expected success; the reason was that they required a separate 2D card. It did not take long, however, for both pieces of hardware to be integrated into one.
The GPU reaches its final form
The NVIDIA GeForce 256 was the first GPU to implement the full 3D pipeline, meaning the entire 3D scene was computed on the GPU and the CPU only had to build the display list. This was a significant change, and it is the base from which every evolution since then has taken place; in summary, these have been:
- Shader units: processors based on multithreaded execution, able to run programs that at first modified graphics primitives and, over time, increasingly generic data. Their evolution took them from graphics to scientific computing; initially they were separated by the type of primitive they handled, but they were later unified and have remained so ever since. A minimal sketch of this execution model follows the list below.
- Accelerators: video codecs, display controllers and other hardware were integrated into the GPU during this period, ceasing to be separate parts.
- Systolic arrays for executing artificial intelligence algorithms.
- Units to speed up the execution of ray tracing.
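As a rough illustration of the shader execution model, the following C sketch treats a "shader" as a small function applied independently to every element of a buffer; a real GPU runs thousands of such invocations in parallel, while here the model is only emulated with a loop. The function and data layout are purely illustrative:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Pixel;

/* Hypothetical "fragment shader": darken a pixel by 50%. */
static Pixel darken_shader(Pixel in)
{
    Pixel out = { in.r / 2, in.g / 2, in.b / 2 };
    return out;
}

/* The "dispatch": run the shader once per element. On real hardware
 * each invocation would map to a thread in a parallel group. */
void run_shader(Pixel *framebuffer, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        framebuffer[i] = darken_shader(framebuffer[i]);
}
```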
Although over the last twenty years we have seen generation after generation of GPUs, architecturally, if we set aside the addition of new units for AI and ray tracing, what we have really seen is an evolution in which there are more and more units built on a common architecture.