The gradual transition to NVMe SSDs for storage continues steadily across every area of computing. That includes networked systems, which today interconnect many computers with each other, whether on a local area network or in a data center.
Most storage units are of the DAS, or Direct-Attached Storage, type: only the computer in which the drive is installed can access its contents. In a networked environment, such as a data center or a supercomputer built from tens or hundreds of drives, it is therefore necessary to use protocols that provide access to the entire storage infrastructure.
How does communication work in a data center?
Before discussing how NVMe-oF works and what it consists of, keep in mind that the technologies used in a data center or a local area network to interconnect internal storage are called SANs, short for Storage Area Networks. Three different technologies are used for this today, all based on the veteran SCSI standard:
- Fibre Channel Protocol (FCP): a protocol that carries SCSI commands over a fiber-optic network, although it can also run over copper links. Its speeds range from 1 to 128 Gbit/s.
- iSCSI: combines the TCP/IP Internet protocol stack with SCSI commands, carrying them over ordinary IP networks.
- Serial Attached SCSI (SAS): the most widely used of all, based on SAS cables that allow up to 128 storage drives to be connected through host bus adapters, or HBAs. Its speeds are 3 Gbit/s, 6 Gbit/s, 12 Gbit/s and even 22.5 Gbit/s.
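Note that the figures above are raw line rates in gigabits per second; dividing by 8 gives the theoretical ceiling in gigabytes per second. A minimal sketch of that conversion (real throughput is lower still, because of link encoding and protocol overhead):

```python
# Raw line rates of the SAN technologies above, in Gbit/s.
line_rates_gbit = {
    "Fibre Channel (top speed)": 128,
    "SAS-3": 12,
    "SAS-4": 22.5,
}

def gbit_to_gbyte(gbit: float) -> float:
    """Convert a line rate in Gbit/s to its theoretical maximum in GB/s."""
    return gbit / 8

for name, rate in line_rates_gbit.items():
    print(f"{name}: {rate} Gbit/s -> at most {gbit_to_gbyte(rate):.2f} GB/s")
```

So even top-end 128 Gbit/s Fibre Channel tops out at 16 GB/s of raw bandwidth, before any overhead.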
However, all of these technologies were designed to communicate with conventional hard drives, and accessing a hard drive is very different from accessing flash memory. That makes these protocols a poor fit for NVMe SSDs.
What is NVMe-oF?
NVMe-oF stands for NVMe over Fabrics. The protocol was designed not only to communicate with flash or non-volatile memory drives, but also to interconnect the different elements of a system over interconnect fabrics. By fabric we mean a communication structure between two elements, which can be two processors, a processor and RAM, an accelerator and a ROM, and so on. The topologies used here are the same as in telecommunications, just on a much smaller scale.
Here, however, it is used to reach NVMe SSDs over the network, whether to connect different elements to the CPU within the same PC or, failing that, through a network card; we are therefore talking mainly about large data centers. The advantage of NVMe-oF? Compared with the SATA and SAS protocols used with hard drives, it supports up to 65,535 queues end to end, with up to 65,535 commands per queue.
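To put those numbers in perspective, here is a minimal sketch comparing the command capacity of AHCI, the protocol behind SATA, with NVMe. The 65,535 figures come from the NVMe specification, while AHCI exposes a single queue of 32 commands:

```python
# AHCI (SATA) exposes a single command queue with 32 slots.
ahci_outstanding = 1 * 32

# NVMe allows up to 65,535 I/O queues, each up to 65,535 commands deep,
# and NVMe-oF carries that queueing model end to end across the fabric.
nvme_outstanding = 65_535 * 65_535

print(f"AHCI: {ahci_outstanding} outstanding commands")
print(f"NVMe: {nvme_outstanding:,} outstanding commands")
print(f"Ratio: about {nvme_outstanding // ahci_outstanding:,}x")
```

That enormous gap in parallelism is why carrying SATA- or SAS-era protocols over to flash wastes most of what the drives can do.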
Types of NVMe-oF
There are currently two variants, which are as follows:
- NVMe-oF over Fibre Channel: designed to fit into existing data centers and servers by supporting older protocols such as SCSI. This eases the transition to flash drives in existing infrastructure.
- NVMe over Ethernet: used so that two computers can exchange data through remote direct memory access (RDMA), meaning two machines can exchange the contents of the flash memory in their NVMe SSDs without the CPU of either system intervening in the process. In this case, the communication does not use SCSI packets at all.
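As an illustration of what the Ethernet/RDMA variant looks like in practice, the Linux `nvme-cli` tool can discover and connect to a remote NVMe-oF subsystem. The address, port and NQN below are placeholder example values, and on setups without RDMA hardware the transport would be `tcp` instead:

```shell
# Discover NVMe-oF subsystems exported by a remote target
# (10.0.0.5 and port 4420 are example values, not real endpoints).
nvme discover -t rdma -a 10.0.0.5 -s 4420

# Connect to one of the discovered subsystems by its NQN
# (the NQN below is a placeholder).
nvme connect -t rdma -n nqn.2014-08.org.nvmexpress:example -a 10.0.0.5 -s 4420

# The remote namespace now appears as a local block device
# (e.g. /dev/nvme1n1) alongside any locally installed SSDs.
nvme list
```

Once connected, the remote drive is read and written like a local NVMe SSD, which is exactly the point of the protocol.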
Let’s not forget that NAND flash memories are also called non-volatile RAM, because they are accessed in the same way as RAM, except that they do not lose their contents when they stop receiving power. This makes it possible to take technologies designed to interconnect two separate RAM memories and apply them to interconnecting flash memories instead.
What speeds are we talking about?
Let’s not forget that NVMe SSDs use PCI Express interfaces, so the Fibre Channel version is one of the candidates for connecting the various NVMe SSDs within the infrastructure of a data center or a local network. However, Ethernet will continue to dominate as the standard network protocol for a long time to come, and network interfaces at speeds of 50, 100 and even 200 gigabits per second are already in development and will soon be deployed in data centers.
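To get a rough sense of what those Ethernet speeds mean for storage traffic, here is a small sketch computing how long it would take to move a drive's worth of data over each link. The 2 TB size is an arbitrary example, and protocol overhead is ignored:

```python
# Time to move 2 TB (an arbitrary example size) over the Ethernet
# speeds mentioned above, ignoring protocol overhead.
data_tb = 2
data_gbit = data_tb * 1000 * 8  # terabytes -> gigabits (decimal units)

for link_gbit in (50, 100, 200):
    seconds = data_gbit / link_gbit
    print(f"{link_gbit} Gbit/s link: ~{seconds:.0f} s to move {data_tb} TB")
```

At 200 Gbit/s the network stops being the bottleneck for all but the fastest PCIe SSDs, which is what makes remote flash practical.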
The future of NVMe-oF is also on the PC
The RDMA built into NVMe-oF is not a new technology; it has been used in niche markets for years, because network controllers, or NICs, with RDMA support were very expensive and their deployment and maintenance required highly specialized technicians. However, it will be essential in the future, even on desktop computers. The reason is that the internal infrastructure of processors is evolving toward what we call a NoC, or network on chip, in which each element of the processor has a small integrated network interface and an address with which to communicate with the rest of the elements, through what we could call a network processor integrated into the chip.
It’s no secret to anyone familiar with the matter that, just as network controllers have been integrated into CPUs, the next step is to do the same with the flash controllers found in NVMe SSDs. Additionally, the advantage of implementing NVMe-oF internally is that the CPU no longer has to run a series of processes to move data from one drive to another within a computer.
That is to say, in the future, the same protocols used at the level of data centers and large servers will appear in our PCs, not only to communicate with the NVMe SSDs inside them, but so that each element can communicate without going through the CPU. Suffice it to say that protocols like the ones behind DirectStorage, which allow the GPU to access the SSD without involving the processor, are based on NVMe-oF.