As of December 31, 2021, Backblaze had a whopping 2,200 SSDs in service, and after the cautious span of several years of use the company decided it was time to publish the figures we'll look at next. As a curiosity, some of the data is very striking.
Annual failure rate from 2019 to 2021
There is data here that needs explaining, because it can be misinterpreted; we will look at it in detail later. What you have to understand is that, year after year, the annual failure rate (AFR) goes from 0.86% to 1.22%. For now, we have to ignore the outliers of 43.22% and 28.81%.
This is due to a very curious effect that repeats itself in both SSDs and HDDs: the greatest number of failures occurs at the beginning of the product's useful life. That said, the AFR is calculated with the following formula:
AFR = (disk failures / (disk days / 365)) × 100
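The formula is easy to turn into code. Here is a minimal sketch; the 25 failures and the full-year service time are hypothetical inputs for illustration, not Backblaze's actual figures:

```python
def annual_failure_rate(disk_failures: int, disk_days: int) -> float:
    """AFR = (disk failures / (disk days / 365)) * 100."""
    return disk_failures / (disk_days / 365) * 100

# Hypothetical fleet: 2,200 drives running a full year (2,200 * 365 disk days)
# with 25 failures over that period.
afr = annual_failure_rate(25, 2200 * 365)
print(f"{afr:.2f}%")  # 1.14%
```

Note that the denominator counts disk days, not drives, so drives that joined the fleet mid-year contribute proportionally less exposure time.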
Therefore, and knowing this, it will be easier for us to understand the next section.
Annual SSD failure rate in 2021 alone
This table is particularly interesting because we can see the failures for the most recent drives as well as the oldest ones. The Crucial SSD has such a high AFR because only 80 drives are in service, with less than a month of use, and 2 of them failed, hence the inflated value. Something similar happens with the Seagate: there are only 3 units, but after 33 months of use only one has failed.
The important thing here is to look at the reliability values and their confidence intervals: Backblaze considers anything below 2% acceptable, and below 1% would be even better. You also have to take the number of drives into account, because the fewer units in a sample, the wider (and less meaningful) the interval.
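The small-sample distortion described above is easy to reproduce with the AFR formula. The numbers below are hypothetical, chosen to mimic the Crucial case (a few failures in a tiny, young fleet) against a larger fleet with the same failure count:

```python
def afr(failures: int, disk_days: int) -> float:
    """AFR = (failures / (disk days / 365)) * 100."""
    return failures / (disk_days / 365) * 100

# Hypothetical small fleet: 80 drives, ~30 days in service each, 2 failures.
small_fleet = afr(2, 80 * 30)
# Same 2 failures in a hypothetical fleet of 1,000 drives over a full year.
large_fleet = afr(2, 1000 * 365)

print(f"{small_fleet:.2f}% vs {large_fleet:.2f}%")  # 30.42% vs 0.20%
```

The failure count is identical, yet the annualized rate differs by two orders of magnitude, which is why a huge AFR from a handful of month-old drives tells you very little.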
Quarterly vs. Cumulative
Here the data is sliced differently using the same AFR we have seen. Quarterly figures reveal very steep peaks and show when the most units failed at a given point in time; cumulative figures, on the other hand, are more accurate over the long run, as they reflect more lasting and equally interesting trends.
This is used to spot failure spikes at certain times relative to the total service time, where the value always stays below the 2% we mentioned.
How do old SSDs behave?
Here we have some more curious data that mirrors something we commented on before: SSDs typically fail at some point close to their first use and stabilize from there. In other words, those that fail do so early, and those that survive run without major problems for the rest of their useful life.
The interesting thing about this graph is seeing how the AFR fluctuates over time and as units are added to the servers. As you can see, SSDs fail the most within roughly their first month to a year of use and then gradually stabilize. Likewise, the values always tend to sit at or below 1%, so we are really talking about very high reliability in almost all cases.