Everything has a reason and a purpose, and many things we take for granted turn out, on reflection, not to make much sense from a purely logical point of view. The graphics card is one of them: it should arguably always have been part of a computer's basic hardware and, therefore, not exist as an add-on at all.
The first graphics card was due to a limitation
At that time there was no frame buffer; that is, the whole frame was not built up in memory and then sent to the screen, because RAM was extremely expensive. Dual-port memories did not exist either, so while the video system was reading memory to draw the image with the electron beam, the CPU could not access that same memory. What was the problem? If we add up all the time spent on video, very little is left for everything else, such as actually running the program.
When Steve Wozniak designed the Apple II, he realized that the video system needed to access RAM at the same rate as the processor itself, so he decided to cheat: he ran the memory at twice the clock speed of the CPU, so that one memory cycle could be given to the CPU and the next to the video system. Thanks to this, in text mode it could draw up to 40 characters per scan line.
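To picture the trick, here is a minimal, purely conceptual sketch in C. It is not real Apple II code and the numbers are assumptions; it only shows the idea: memory cycles alternate between the video circuitry and the CPU, so each side effectively gets its own 1 MHz slice of a 2 MHz memory and neither ever has to wait for the other.

```c
/* Conceptual sketch of interleaved memory access (assumed round numbers). */
#include <stdio.h>

#define MEMORY_CLOCK_HZ 2000000  /* assumed 2 MHz memory cycle rate       */
#define CYCLES_SHOWN    8        /* just a handful of cycles for the demo */

int main(void) {
    unsigned char ram[1024] = {0};   /* toy shared RAM */

    for (unsigned cycle = 0; cycle < CYCLES_SHOWN; cycle++) {
        if (cycle % 2 == 0) {
            /* Even memory cycles: the video side fetches a byte to feed the beam. */
            unsigned char data = ram[cycle % sizeof ram];
            printf("cycle %u: video fetch -> 0x%02X\n", cycle, data);
        } else {
            /* Odd memory cycles: the CPU gets the bus for its own access. */
            ram[cycle % sizeof ram] = (unsigned char)cycle;
            printf("cycle %u: CPU access  -> wrote 0x%02X\n", cycle, (unsigned char)cycle);
        }
    }
    /* Each side sees half of the memory cycles, i.e. the CPU's own clock rate. */
    printf("Effective rate per side: %d Hz\n", MEMORY_CLOCK_HZ / 2);
    return 0;
}
```

The scheme only works as long as the video side needs no more bandwidth than its half of the memory cycles provides, which is exactly where the next problem appears.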
The problem arose when 40 columns turned out to be too few for serious text work: 80 columns required video hardware running at 2 MHz, which completely broke the timing between CPU and RAM. The solution? Use one of the expansion slots to add a graphics card that provided 80 columns of text. And yes, we know this sounds primitive today, but the graphics card's origin was not to run video games, as many believe.
80 columns versus 40 columns
At the time, one way to tell whether a system was aimed at professional use or at playing video games was whether or not it supported an 80-column text mode. Graphics chips that could do both did not yet exist, since that level of integration had not been reached, so the designers of any computer of the era had to decide whether to aim it at the home or the professional market.
And what about the graphics card of the first IBM PC?
We have already seen that the IBM PC was not the first personal computer; it was, however, the machine that popularized the x86 instruction set and registers we still use today. Since its processor was not yet widely used, existing software had to be ported, but IBM chose it because its architecture was superior to the 8080's, and one thing it stood out for was its 20-bit addressing, which allowed up to 1 MB of RAM to be addressed.
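As a quick sanity check of that figure (purely illustrative, nothing IBM-specific): 20 address lines can select 2^20 distinct byte addresses.

```c
/* 20 address bits -> 2^20 addressable bytes = 1 MB. */
#include <stdio.h>

int main(void) {
    unsigned long addresses = 1UL << 20;   /* 2^20 */
    printf("2^20 = %lu bytes = %lu KB = 1 MB\n", addresses, addresses / 1024);
    return 0;
}
```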
Initially, and because of cost, there were no RAM modules as we know them today; system memory was soldered onto the board. The solution? Add expansion slots that allowed it to be expanded. If we add to this that the Intel processor did not have dedicated pins for peripherals, but communicated with them through the system's memory map, we already have everything needed to connect expansion cards. Let's not forget that the IBM 5150 was not intended to be sold for the home, but for the business market.
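To illustrate the idea of talking to a peripheral through the address space, here is a small simulated sketch in C: the 80x25 text screen is just a buffer of character/attribute pairs that the CPU fills with ordinary memory writes, while the card scans it out to the monitor. On a real PC that buffer sat at a fixed physical address (0xB8000 for CGA colour text); the array below is only a stand-in, not real hardware access.

```c
/* Sketch of memory-mapped text output: the screen is just memory the CPU writes to. */
#include <stdio.h>
#include <string.h>

#define COLS 80
#define ROWS 25

/* Each screen cell is a character byte plus an attribute byte. */
struct cell { unsigned char ch, attr; };

static struct cell text_buffer[ROWS][COLS];   /* stand-in for the card's video RAM */

static void put_string(int row, int col, const char *s, unsigned char attr) {
    for (; *s && col < COLS; s++, col++) {
        text_buffer[row][col].ch   = (unsigned char)*s;
        text_buffer[row][col].attr = attr;
    }
}

int main(void) {
    memset(text_buffer, 0, sizeof text_buffer);
    put_string(0, 0, "Hello, 80 columns!", 0x07);   /* light grey on black */

    /* The card would scan this buffer out continuously; here we just dump row 0. */
    for (int c = 0; c < COLS; c++)
        putchar(text_buffer[0][c].ch ? text_buffer[0][c].ch : ' ');
    putchar('\n');
    return 0;
}
```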
The variety of monitors also played a part
IBM could have chosen to put all the extra circuitry on the first PC's motherboard; however, at the time there was no standard for the type of monitor users would have. The solution was to sell the same basic machine with a choice of graphics cards depending on each user's needs.
This decision saved them manufacturing costs and, in return, allowed many PC and PC XT users to later upgrade their graphics card to an EGA. That was the historic moment when today's PC graphics card market was born: the division by market type disappeared, unifying the market and making the PC the Swiss army knife it remains today.