NVMe SSDs are the most advanced type of storage to date in terms of access speed and bandwidth. But what if we told you that SSDs in M.2 modules are approaching the end of the road, and that today's fastest SSDs are going to be replaced? We explain why.
One of the biggest complaints about NVMe SSDs in M.2 form is how little room the module offers for chips, which limits storage capacity. However, that is not the reason this connector may see its end in future generations.
Why will we see the end of M.2 modules?
A common trait of desktop processors is that, in terms of bandwidth and access speed, they take no advantage of more than two DIMMs or SO-DIMMs: their memory controllers are dual-channel, so any additional modules merely extend capacity on the existing channels. This is at least the case with PC processors; workstations and servers have memory controllers with more channels, designed to exploit the larger number of modules on the motherboard.
Thus 90% of PC users have a maximum of two modules, which raises a question: is there a better use for the third and fourth slots? There is. Placing an SSD in them is what will mark the end of M.2. Let’s not forget that these storage units are composed of:
- A flash controller.
- DRAM memory so that the controller can perform its tasks more efficiently, although some DRAM-less units use part of system RAM instead.
- And obviously, the NAND Flash chips themselves, where the data is stored.
Putting an SSD on a DIMM might sound silly, but it makes sense: the drive would be reached over the memory bus, with lower latency than going through PCI Express. In other words, there would be no need for continuous copies between storage and RAM, nor to set aside part of main memory as a cache for the drive.
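The "no copying" idea above resembles how byte-addressable persistent memory is already exposed to software today: a file on a DAX-mounted (direct-access) filesystem can be memory-mapped and modified in place with plain loads and stores, instead of block `read()`/`write()` copies. A minimal sketch, using an ordinary temporary file as a stand-in for the device (the `mmap` API is the same either way):

```python
import mmap
import os
import tempfile

# A small file standing in for a byte-addressable storage device.
# (On a real NV-DIMM setup this would live on a DAX-mounted filesystem.)
path = os.path.join(tempfile.mkdtemp(), "nvdimm.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map the file into the process address space and modify it in place:
# the store below goes straight into the mapping, no buffered copy
# managed by the application.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mem:
        mem[0:5] = b"hello"   # a plain memory store, not a block write

with open(path, "rb") as f:
    print(f.read(5))          # b'hello'
```

On a DAX mount the page cache is bypassed entirely, so the mapped bytes are the storage medium itself; that is the access model the article is describing.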
NV-DIMMs are the future
One of the biggest performance issues today is CPU-to-RAM latency. Solutions such as CXL will allow additional RAM modules to be attached via PCI Express, but having the latency of a PCI Express link in the middle has been shown to degrade performance: running programs from such memory is much slower.
This is the biggest bottleneck of current NVMe SSDs: they can keep increasing their storage capacity and even their bandwidth, but their latency is insurmountable, and it will only grow once very high transfer rates force the use of PAM encoding to keep power consumption from skyrocketing. The consequence of all this? A way must be found to improve performance while maintaining high bandwidth. How? We have already given you the answer: place the SSD in a DIMM module.
A DDR5-4800 memory module has a 64-bit bus and therefore 38.4 GB/s of bandwidth. An M.2 SSD on four PCI Express 5.0 lanes? Around 16 GB/s. Although the numbers will increase with PCIe 6.0, we have already noted that the latency problem is insurmountable, and that is why the end of M.2 modules can already be written on paper.
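Those figures follow directly from the interface widths and transfer rates. A quick back-of-the-envelope check (the PCIe number is a raw payload rate after 128b/130b line encoding, ignoring protocol overhead, so real drives land slightly below it):

```python
# DDR5-4800: 4800 million transfers/s on a 64-bit (8-byte) bus.
ddr5_gbps = 4800e6 * 8 / 1e9                     # -> 38.4 GB/s

# PCIe 5.0: 32 GT/s per lane, 128b/130b encoding; an M.2 SSD gets 4 lanes.
pcie5_lane_gbps = 32e9 * (128 / 130) / 8 / 1e9   # ~3.94 GB/s per lane
pcie5_x4_gbps = 4 * pcie5_lane_gbps              # ~15.75 GB/s

print(round(ddr5_gbps, 1))       # 38.4
print(round(pcie5_x4_gbps, 2))   # 15.75
```

So even before latency enters the picture, a single DDR5 channel offers well over twice the bandwidth of an M.2 PCIe 5.0 x4 link.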
The last point: the processor’s memory controller
The memory controllers built into the CPU, whose job is to access RAM, are not fundamentally different in operation from flash controllers; the latter are simply a more complex version of the former. So, when the time comes, it would not be surprising to see processors capable of directly managing an NV-DIMM module, and on such systems an NVMe SSD would be the clearly worse option.
Since the makers of NVMe SSDs in M.2 modules are largely the same companies that make RAM, this change would not disrupt the industry. It would also fulfill one of system designers’ long-standing wishes: massive storage with the same direct access speed as RAM, without having to perform heavy data-copying exercises.