AI has opened up a new field in computing, a new industry that is hungry for resources and consuming them at incredible speed. Memory bandwidth is the weak link: the different rates at which components can feed data to one another create bottlenecks.
While CPUs have multiplied their cores thanks to MCM packaging, GPUs enjoy HBM2E and double their capacity roughly every two years, and SSDs keep raising their performance, DRAM cannot keep up.
Scaling DRAM may not be enough
The industry's response to such bottlenecks is clear: DRAM needs to shrink its process node to increase density and speed. Although forecasts are notoriously inaccurate for obvious reasons, node scaling has always been a major challenge in the hardware world, and DRAM carries higher latencies than other components along the way.
Its end was predicted at 90 nm, yet it now ships on 10 nm-class nodes; even so, the debate is once again on the table, with opinions of every kind.
For now, DDR5 will bring temporary relief before the next step, which could end DRAM as we know it.
3D stacking has already given us HBM, and with Lakefield Intel introduces a new packaging concept that puts conventional DRAM to the test. So what are DRAM's real options for staying in the market?
Although limited, the market cannot easily replace it
DRAM is always on the ropes, with its manufacturers in the eye of the storm. But the market itself knows that, even though it is the main bottleneck (we leave storage aside for obvious reasons), it offers many benefits that few other technologies can currently match:
- Simple byte-level access.
- Fast access times.
- In-place writing and erasing.
- Indefinite data retention while powered (refresh permitting).
- Practically "unlimited" write endurance.
- A mature technology with low cost thanks to its production volume.
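The first and third points are what set DRAM apart from NAND. A minimal toy model can make the contrast concrete: a byte-addressable memory overwrites a single cell in place, while a NAND-style memory must erase and rewrite a whole block to change one byte. The class names and the 4 KiB block size below are illustrative assumptions, not real device parameters.

```python
class ByteAddressableMemory:
    """DRAM-like: any single byte can be read or overwritten in place."""
    def __init__(self, size):
        self.cells = bytearray(size)

    def write(self, addr, value):
        self.cells[addr] = value          # touches exactly one cell

    def read(self, addr):
        return self.cells[addr]


class BlockMemory:
    """NAND-like: a write requires erasing a whole block first."""
    BLOCK = 4096                          # assumed erase-block size

    def __init__(self, size):
        self.cells = bytearray(b"\xff" * size)   # erased state is all 1s

    def write(self, addr, value):
        start = (addr // self.BLOCK) * self.BLOCK
        block = bytearray(self.cells[start:start + self.BLOCK])
        block[addr - start] = value
        self.cells[start:start + self.BLOCK] = b"\xff" * self.BLOCK  # erase
        self.cells[start:start + self.BLOCK] = block                 # rewrite


dram = ByteAddressableMemory(8192)
dram.write(100, 42)                       # one-byte update, in place
nand = BlockMemory(8192)
nand.write(100, 42)                       # drags a full 4 KiB block with it
```

The write-amplification visible in `BlockMemory.write` is also why NAND endurance is finite, while DRAM's point-update model gives it the "unlimited" endurance listed above.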
From a practical standpoint, the problem facing manufacturers and system designers is understandable, but then what is the solution to avoid its disappearance? Increasing density per unit area. Denser bit-cell capacitors are needed to maintain high speeds while shrinking cell sizes and, ultimately, to raise overall efficiency.
Rambus notes that DRAM speed doubles only every 5 or 6 years, without a doubt an eternity in this industry, and warns that wiring demands rise significantly, forcing designs whose signal integrity can keep pace with the ever-growing amount of data generated.
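Putting the two growth rates quoted so far side by side shows how quickly the gap opens. The sketch below assumes DRAM bandwidth doubling every ~5.5 years (the Rambus figure) against demand doubling every ~2 years (the cadence mentioned above for accelerator capacity); both are rough illustrative inputs, not measured data.

```python
def growth(years, doubling_period):
    """Compound growth factor after `years`, doubling every `doubling_period`."""
    return 2 ** (years / doubling_period)

years = 11                               # roughly two DRAM doublings
dram_gain = growth(years, 5.5)           # DRAM bandwidth: 4x
demand_gain = growth(years, 2)           # demand: ~45x
gap = demand_gain / dram_gain
print(f"After {years} years, demand outgrows DRAM bandwidth by ~{gap:.0f}x")
```

Even with generous assumptions for DRAM, the ratio widens by more than an order of magnitude within a decade, which is the bottleneck the article describes.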
What technology could replace DRAM?: NRAM
Logically, a technology that could give DRAM a new and better life would sit somewhere between DRAM itself and NAND Flash, something like Intel and Micron's 3D XPoint but taken to a whole new level.
The new candidate is based on carbon nanotubes, also called CNTs, and takes NRAM as its name, where the N stands for Nanotube. The technology is already being developed by Fujitsu, although not as NRAM proper but as NVM devices.
The basis of NRAM is the Van der Waals force: the CNTs are deposited as a random mass of carbon tubes on a conductive electrode. Applying a voltage bonds the tubes together, and the bond can later be broken by the thermal vibration induced by a voltage of the opposite polarity.
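The switching mechanism just described can be captured in a toy state model: a set pulse pulls the nanotubes into contact (low resistance, read as "1"), an opposite-polarity pulse breaks the contact (high resistance, read as "0"), and the state persists with no refresh. The thresholds and resistance values below are invented purely for illustration.

```python
class NRAMCell:
    """Toy resistive-state model of a CNT cell; all numbers are assumptions."""
    R_LOW, R_HIGH = 1e3, 1e6       # ohms: bonded vs. unbonded tube mass
    V_SET, V_RESET = 2.0, -2.0     # assumed switching thresholds (volts)

    def __init__(self):
        self.resistance = self.R_HIGH        # starts unbonded, reads as "0"

    def apply(self, volts):
        if volts >= self.V_SET:
            self.resistance = self.R_LOW     # tubes bond (Van der Waals): set
        elif volts <= self.V_RESET:
            self.resistance = self.R_HIGH    # bond broken thermally: reset
        # voltages between thresholds leave the state untouched (non-volatile)

    def read(self):
        return 1 if self.resistance == self.R_LOW else 0


cell = NRAMCell()
cell.apply(2.5)       # set pulse
bit = cell.read()     # reads 1; no refresh needed to hold it
cell.apply(-2.5)      # opposite-polarity pulse resets the cell
```

The key contrast with DRAM is in `apply`: a sub-threshold voltage, or none at all, leaves the state intact, whereas a DRAM capacitor would leak and need periodic refresh.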
This behavior made NRAM inconsistent, and IBM's and Samsung's attempts kept failing; but Fujitsu seems to have hit on the key: adding another CNT layer at the edges, protecting the cell as if it were an iron barrier.
Development after this step focuses on the spacing and pitch of the cells, achieving a 5-fold gain in switching speed without increasing memory size. It is claimed that they will be able to reach 16 times the density of current DRAM at the same speed.
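Taking the claimed figures at face value, the arithmetic is straightforward; the 16 GB module below is a hypothetical example, not an announced product.

```python
SPEED_FACTOR = 5       # claimed switching-speed gain from the tighter cells
DENSITY_FACTOR = 16    # claimed capacity vs. current DRAM at equal speed

dram_dimm_gb = 16                          # hypothetical DRAM module
nram_equivalent_gb = dram_dimm_gb * DENSITY_FACTOR
print(nram_equivalent_gb)                  # 256 GB in the same footprint
```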
Speed will be the second phase of development, but if it were to surpass DDR4 and later DDR5, it would only take two major companies betting on it to break the current DRAM market open. It is true that, as with CMOS, the death of DRAM has been predicted over and over again; perhaps this is the best attempt yet for the industry to move on to a new concept that brings significant improvement in both the short and long term.