Nothing in the universe is faster than light, which makes it the ideal medium for transmitting data. So why don't we use photons to build processors? Because photonic components are more expensive to manufacture and cannot be miniaturized the way silicon transistors can. However, there is a form of photonics embedded in silicon that unites the two worlds. With what applications?
What is photonics in hardware?
Photonics in hardware is simply the use of photons, the particles that make up light, to transmit information. Within photonics there is silicon photonics, which uses silicon to carry optical signals and therefore allows them to be implemented in integrated circuits.
Its purpose is not to create more powerful processors, but to enable communication between different chips, that is, the external interfaces between processors, memories, and peripherals. The goal is to close the bandwidth gap, the difference in the speed at which data can be moved, between processor and memory.
Keep in mind that the dominant cost when transmitting data is the energy it consumes. The idea behind silicon photonics is precisely to provide an interface that transmits data at a lower energy cost.
Light-based memory interfaces
Over time, new types of memory have been designed to transmit and receive data at a lower power cost. If we look at the data, we see that the most efficient memory types have required new packaging techniques, as is the case with HBM memory.
There is no doubt that bandwidth needs continue to grow, especially in the big data era, where the volume of information in circulation is enormous. This means we need more energy-efficient bandwidth. In the world of post-PC devices, for example, we will soon see HBM-type interfaces; at the other extreme, in the world of supercomputers, silicon photonics is already considered not something of the future, but of the present.
For internal on-chip communication, photonics offers no consumption advantage in data transmission. It is when one communication interface sits far from another that photonics-based interfaces begin to make sense, thanks to their lower energy cost per unit of bandwidth, enabling data transfers on the order of a picojoule per transmitted bit.
In a conventional interface, on the other hand, bandwidth degrades with distance from the processor. This means that memories further down the hierarchy than RAM also benefit from these kinds of interfaces. Imagine, for example, an SSD with read speeds typical of DDR4 RAM.
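To make the orders of magnitude concrete, here is a minimal sketch of the arithmetic behind the energy argument: the power an interface draws is simply its energy per bit times its bit rate. The energy-per-bit and bandwidth figures below are illustrative assumptions, not measurements of any specific product.

```python
# Interface power = energy per bit x bits per second.
# All numeric figures here are assumptions for illustration.

def interface_power_watts(picojoules_per_bit: float, bandwidth_gb_s: float) -> float:
    """Power needed to sustain a bandwidth (in GB/s) at a given pJ/bit cost."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return picojoules_per_bit * 1e-12 * bits_per_second

BANDWIDTH = 256.0  # GB/s, roughly HBM-class (assumed)

electrical = interface_power_watts(10.0, BANDWIDTH)  # ~10 pJ/bit electrical off-chip link (assumed)
photonic   = interface_power_watts(1.0, BANDWIDTH)   # ~1 pJ/bit photonic link (assumed)

print(f"Electrical: {electrical:.1f} W")
print(f"Photonic:   {photonic:.1f} W")
```

With these assumed figures, the same 256 GB/s link costs roughly ten times less power over the photonic interface, which is the whole point of moving it off copper.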
There is no Moore’s Law for I/O pins
We are constantly told that Moore’s Law makes it possible to build smaller chips. That is true, except that the external communication pins do not shrink. In other words, the external interfaces always occupy the same area, which either constrains the size of a chip if you want a given bandwidth, or forces the use of more complex packaging systems that allow a greater number of pins.
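A quick back-of-the-envelope sketch shows why pins do not follow Moore’s Law: the pad pitch is fixed by the packaging technology, so the number of peripheral pads depends on the die perimeter, not on transistor density. The pad pitch and die sizes below are assumed values for illustration only.

```python
# Peripheral pads sit along the die edge at a pitch fixed by packaging,
# so shrinking the transistors (and hence the die) *reduces* the pin budget.

def perimeter_pad_count(die_side_mm: float, pad_pitch_um: float) -> int:
    """Maximum pads that fit along the perimeter of a square die."""
    perimeter_um = 4 * die_side_mm * 1000
    return int(perimeter_um // pad_pitch_um)

PAD_PITCH = 150.0  # micrometers, assumed for illustration

# The same design after successive process shrinks (assumed die sizes):
for side_mm in (20.0, 14.0, 10.0):
    print(f"{side_mm} mm die -> {perimeter_pad_count(side_mm, PAD_PITCH)} pads")
```

Transistor counts can double with each node, but the pad budget only falls as the die shrinks, which is exactly the scaling mismatch the article describes.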
The concept is easy to understand: power consumption grows much faster than linearly with clock speed, because a higher clock speed requires a higher voltage, and dynamic power scales with the square of that voltage. The alternative is to increase the number of pins, but that forces complex constructions, such as 2.5D and 3D ICs, to be mass-produced.
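The clock-speed argument can be sketched with the standard CMOS dynamic-power formula, P = C·V²·f. Under the common simplifying assumption that supply voltage must rise roughly in step with frequency, power grows close to the cube of the frequency. The capacitance and voltage/frequency pairs below are illustrative assumptions.

```python
# Dynamic power of a switching CMOS circuit: P = C * V^2 * f.
# Assumption for illustration: pushing the frequency up also forces
# the supply voltage up, so P grows roughly as f^3.

def dynamic_power(capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    """Dynamic switching power in watts."""
    return capacitance_f * voltage_v ** 2 * freq_hz

C = 1e-9  # effective switched capacitance in farads (assumed)

# Assumed voltage needed at each frequency step:
for f_ghz, v in [(1.0, 0.8), (2.0, 1.0), (4.0, 1.4)]:
    p = dynamic_power(C, v, f_ghz * 1e9)
    print(f"{f_ghz} GHz @ {v} V -> {p:.2f} W")
```

Doubling the frequency twice (1 to 4 GHz) raises power by more than 10x in this sketch, which is why simply clocking an electrical interface faster is not a viable path to more bandwidth.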
This is where photonics comes in: a solution to the problem of memory interfaces and their scaling, delivering higher bandwidth without increasing the average energy cost of data transfers.
Where is photonics used today?
Today, silicon photonics is used in data centers to connect widely separated systems. This is done through optical transceivers in each system, which can both transmit and receive signals. Their function is simple: they convert electrical signals into optical signals, which travel through the fiber-optic cables connecting the cabinets that make up the data center. When a transceiver receives data, it converts it back into an electrical signal that conventional processors and memories can process and store.
Such optical transceivers can transmit and receive large volumes of data. Their main problem? They are expensive to manufacture, and even more so at commercial scale. This is why today we find them in supercomputers and not in home PCs.
Another market where silicon photonics is used is medical imaging for diagnosis. Light already plays a central role in medical diagnostics, especially in microscopes and spectroscopes: with light, cells can be counted and visualized, and a specific DNA sequence can be identified. Photonics embedded in silicon therefore allows the creation of integrated circuits designed for medical diagnostics that can also process this data at high speed.
With photonics built into silicon, an ordinary doctor will be able to study a tissue or blood sample without having to rely on laboratories with expensive equipment, since this technology will allow the creation of smart microscopes in the coming years, with an integrated processor capable of extracting information from images, processing it, and sending it to a PC over a USB interface if necessary.
Will we see it in the PC?
Integrating an optical transceiver to replace the memory interface has advantages in both consumption and bandwidth. The disadvantage? The cost of implementing it in a processor.
Where we will see it is in the hubs responsible for receiving and distributing multiple high-bandwidth signals at once. These hubs sit in the central part of a multi-chip system, where the distance between chips is greatest. With optical interfaces, it is possible to solve both the power consumption of the interfaces and the bandwidth degradation caused by distance.
This is especially important in systems that require many GPUs to communicate at scale, even if for now we will first see the transition to 2.5D IC and 3D IC vertical interfaces as solutions before large-scale photonics arrives.