The Case for Software Defined Storage as the Future of IT Infrastructure

Christian Putz, Director, Emerging Markets Europe, Middle East & Africa (EEMEA), Pure Storage

Software Defined Storage (SDS) has gained strong momentum across the IT market over the past few years. In fact, according to a recent Gartner report, by 2019, 50% of existing storage array products will be available as ‘software only’ versions, up from 15% in 2016. Furthermore, approximately 30% of the global storage array capacity installed in enterprise data centers will be deployed with SDS or hyperconverged integrated system architectures.

SDS offers a simpler approach to traditional data storage because the software that controls the storage-related capabilities is separate from the physical storage hardware. For business users who rely on IT infrastructure, the storage element of “software-defined” enables greater levels of responsiveness and agility. Customers want greater flexibility with their storage, from the physical footprint to simplification of deployment and ongoing management. So, removing the complexity from the hardware means we can also simplify the software.

A good example is the way data is protected on a standard hard drive. With a single physical disk, a mechanical failure of that disk means your data is lost. A Redundant Array of Independent Disks (RAID) protects data by spreading it across several physical disks, along with parity information that is used to recreate the data in the event of a disk failure. Traditional storage solutions use ‘hot spare’ disks that sit idle waiting for a failure to occur, at which point the lost data is rebuilt onto the spare using the parity information.
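The parity mechanism described above can be sketched in a few lines. This is a minimal, illustrative example of single-parity (RAID-5-style) XOR protection, not any vendor's actual implementation: the parity block is the XOR of the data blocks, so any one lost block can be recreated from the survivors.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data "disks" plus one parity block computed across them
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing disk 1: rebuild its contents from the surviving
# data blocks plus the parity block
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Because XOR is its own inverse, the same operation that computes parity also reconstructs a missing block, which is exactly what a rebuild onto a hot spare does at scale.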

Instead of providing availability at the physical disk layer, a more efficient and reliable approach builds on hardware such as Solid State Drives (SSDs). Because SSDs are far less prone to mechanical failure, RAID can be abstracted from the physical disk into segments that are spread across multiple SSDs within the storage array. The key benefit is that when an SSD fails, only the data actually in use on that SSD is rebuilt from parity, in minutes, rather than the whole physical disk, which can take days.
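The difference in rebuild effort can be made concrete with a small sketch. This is a hypothetical illustration (the segment map and sizes are invented, not Pure Storage's implementation): with RAID applied at the segment level, only the segments holding live data need reconstruction, so rebuild work tracks used capacity rather than raw drive size.

```python
SEGMENT_GB = 1  # assumed segment size for illustration

def gb_to_rebuild(in_use_segments, raw_capacity_gb, segment_level):
    """GB that must be reconstructed from parity after a drive failure."""
    if segment_level:
        # Segment-level RAID: rebuild only the live segments
        return len(in_use_segments) * SEGMENT_GB
    # Whole-disk RAID: rebuild the entire physical disk
    return raw_capacity_gb

# Hypothetical map of in-use segments on the failed SSD
failed_ssd_segments = {0, 1, 2, 7, 9}
raw_gb = 1000

print(gb_to_rebuild(failed_ssd_segments, raw_gb, segment_level=True))   # 5
print(gb_to_rebuild(failed_ssd_segments, raw_gb, segment_level=False))  # 1000
```

On a lightly used drive the gap is dramatic, which is why segment-level rebuilds finish in minutes rather than days.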

Gone are the days of sizing a solution for three to five years and building as much scale into the configuration from the beginning, to get the best price from a vendor. This process has traditionally been fraught with uncertainties—is the solution sized correctly for the performance/capacity I need for the lifecycle of these assets? Did the architects take my unknown changing business requirements into consideration when sizing the solution? Will I be required to pay a huge ongoing maintenance fee to keep the solution supported after the warranty expires? What if I want to push the asset beyond its intended lifecycle? What if that lifecycle could be ten years instead of five? Will I be forced to repurchase the solution again and have to repeat the whole process?

Software defined means you are able to change every component in the storage solution non-disruptively, without any impact to the availability or performance of production applications. When new storage technologies are introduced, for example, NVMe—they can easily be integrated into the existing solution.

Not only does SDS prove valuable to your IT team by freeing up time that can be redeployed back into the business; more importantly, it supports the overall growth of your organization. It’s clear that software defined is the future of infrastructure components, and we haven’t even started looking at how this impacts orchestration and automation – that’s the next story!