SSD adoption is quickly expanding across the enterprise sector. According to InformationWeek’s 2014 State of Enterprise Storage Survey, 40% of respondents were using SSDs in disk arrays, up eight points from last year, while 39% deploy SSDs in servers, up 10 points from last year.
But the reasons behind the adoption of flash-based storage technology have been, and will continue to be, changing. Let me walk you through the various drivers for SSD adoption I’ve identified as we continue en route to the flash-transformed data center.
The First Wave – Performance
As the industry first asked ‘why flash?’, the simple and obvious answer was: performance, performance, performance. Faster boot times, DRAM-like responsiveness, scalable databases and denser virtual environments – storage performance is greatly superior with SSDs.
The first wave of data center adoption saw SSDs moving into server enclosures and stand-alone arrays as a catalyst for bringing higher throughput and faster I/O.
Looking back a few years, the cost per GB of SSDs was still too high to use them in all areas of storage, so flash needed to be deployed opportunistically, wherever it made sense from a cost-versus-performance standpoint. But this is changing fast and dramatically as costs approach parity, and the arguments for employing SSDs and flash-based storage are growing far beyond performance alone.
The Second Wave – Power And Cooling Costs
While the first wave was all about enhancing arrays of spinning disks or using SSDs as caching devices, it quickly became obvious that SSDs can do more for data center efficiency than just deliver superior speeds.
SSDs require far less power and produce a fraction of the heat of their spinning counterparts. According to the Storage Networking Industry Association (SNIA), SSDs demand 92% less power, and operate at 38% lower temperatures.
It’s no wonder hyperscale and cloud giants such as Facebook, Amazon, and Microsoft are replacing hard drives with solid-state storage in their data centers. The long-term reduction in cooling and power costs, along with the ability to reduce floor space through consolidation and higher performance, delivers sizable savings.
The storage industry has been accustomed to measuring drive costs by calculating the cost per gigabyte of capacity. But to capture the true cost of a drive, one needs to look at the encompassing costs of the application it serves, using metrics such as cost per transaction or IOPS per watt.
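To make the point concrete, here is a minimal sketch of how those alternative metrics shift the comparison. All of the drive figures below (prices, capacities, IOPS, wattage) are hypothetical placeholders for illustration, not vendor or SNIA data:

```python
# Comparing drives on cost per GB vs. cost per IOPS and IOPS per watt.
# All figures are hypothetical placeholders, not real product specs.

drives = {
    "15K RPM HDD":    {"price_usd": 250, "capacity_gb": 600, "iops": 200,   "watts": 10.0},
    "Enterprise SSD": {"price_usd": 600, "capacity_gb": 800, "iops": 50000, "watts": 6.0},
}

for name, d in drives.items():
    cost_per_gb = d["price_usd"] / d["capacity_gb"]      # the traditional metric
    cost_per_iops = d["price_usd"] / d["iops"]           # cost per unit of performance
    iops_per_watt = d["iops"] / d["watts"]               # performance per unit of power
    print(f"{name}: ${cost_per_gb:.2f}/GB, "
          f"${cost_per_iops:.4f}/IOPS, "
          f"{iops_per_watt:,.0f} IOPS/W")
```

Even with an SSD that costs more per gigabyte, the cost-per-IOPS and IOPS-per-watt columns can swing by orders of magnitude in its favor, which is the article’s point about looking past capacity pricing alone.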
SanDisk® Enterprise SVP and GM John Scaramuzzo recently contributed an article to the Data Center Journal detailing the various aspects one should consider when calculating the true costs of both drive types.
The Third Wave – Reliability
So what’s the third wave of adoption that the industry is undertaking?
The absence of moving parts such as mechanical arms, motors and spinning platters enables SSDs to deliver far greater reliability and a higher Mean Time Between Failure (MTBF). Parts stress analysis indicates an MTBF of almost twice as many hours for enterprise SAS SSDs vs. HDDs.
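A back-of-the-envelope sketch shows what doubling MTBF means at fleet scale. Assuming a constant failure rate, the annualized failure rate is roughly hours-per-year divided by MTBF; the MTBF values below are hypothetical round numbers, not the parts-stress figures cited above:

```python
# Rough annualized failure rate (AFR) from MTBF, assuming a constant
# failure rate, so AFR ≈ hours_per_year / MTBF for large MTBF values.
# The MTBF inputs are hypothetical placeholders for illustration only.

HOURS_PER_YEAR = 8760

def annual_failure_rate(mtbf_hours: float) -> float:
    """Approximate fraction of drives expected to fail in a year."""
    return HOURS_PER_YEAR / mtbf_hours

def expected_failures(fleet_size: int, mtbf_hours: float) -> float:
    """Expected drive failures per year across a fleet."""
    return fleet_size * annual_failure_rate(mtbf_hours)

hdd_mtbf = 1_200_000  # hours (hypothetical)
ssd_mtbf = 2_400_000  # hours (hypothetical, 2x the HDD figure)
fleet = 10_000        # drives

print(f"HDD fleet: ~{expected_failures(fleet, hdd_mtbf):.1f} failures/year")
print(f"SSD fleet: ~{expected_failures(fleet, ssd_mtbf):.1f} failures/year")
```

Halving the failure rate across ten thousand drives cuts dozens of service calls per year, which is exactly the uptime and servicing argument that follows.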
The truth is that failure of spinning drives is simply a fact of life in the data center, and has been one of the great drivers behind infrastructure backup policies and high-availability architectures.
Because SSDs are more reliable than their spinning counterparts, they deliver drastically improved uptime and less costly servicing, a benefit that is proving critical in some industries. For example, I recently spoke with a national ATM vendor. Sure, SSD responsiveness and performance are a great benefit for their hardware, but it is the reliability of SSDs that is driving fast adoption in a sector that deploys tens of thousands of appliances across a wide geographic spread, saving massive time and cost on downtime and repair.
Performance, reliability, Total Cost of Ownership, endurance and fast growing densities are enabling the flash-transformed data center, for more reasons than one.