The Recipe for Software-Defined Storage Success

16 Sep 2016

George Teixeira, CEO and Co-founder of DataCore Software, was one of the original pioneers of storage virtualization and software-defined storage. He shares his thoughts on the current state of the art in the technology and the recipe for maximizing the benefits of software-defined storage.

The formula for software-defined storage success is straightforward: it is not the technology itself but the economic advantages that ultimately drive adoption. Yes, software-defined storage must cover a broad spectrum of storage services and functionality. What really matters, though, is that the investment delivers greater purchasing power and the flexibility to incorporate new investments as they make sense, resulting in significant cost savings. To achieve this, it is critical that software-defined storage works with existing storage investments (different vendor offerings) and makes them better, while allowing new technologies to be readily incorporated.

This flexibility to work with different vendor offerings lets organizations future-proof their infrastructure, minimizing the risk of disruptive changes and providing a growth path over time. It gives users the agility to ride the wave of industry innovations and maximize price-performance without the high cost of constantly having to ‘rip-and-replace’ useful storage investments.

Whereas conventional storage places serious constraints on application performance, software-defined storage enables a dramatic increase in data access speeds. This is because traditional storage relies on specialized controllers and device-specific software to do storage work. Software-defined storage, on the other hand, runs on standard, off-the-shelf Intel-based servers, with software that works across any vendor's hardware, infrastructure-wide, rather than on a single brand or model of storage. By doing so, the technology can ride Moore's Law, which translates into more cost-effective and efficient processing power.
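
To make that contrast concrete, here is a minimal, purely illustrative sketch of the idea: a software layer presents one logical disk over storage devices from different vendors, so the application never codes against a specific brand or model. The `FlashArrayBackend` and `SanBackend` classes are hypothetical stand-ins, not any vendor's actual driver.

```python
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """Hypothetical vendor-specific driver interface (illustrative only)."""

    @abstractmethod
    def read_block(self, lba: int) -> bytes: ...

    @abstractmethod
    def write_block(self, lba: int, data: bytes) -> None: ...


class FlashArrayBackend(StorageBackend):
    """Stand-in for one vendor's all-flash array."""

    def __init__(self):
        self._blocks = {}

    def read_block(self, lba):
        return self._blocks.get(lba, b"\x00" * 512)

    def write_block(self, lba, data):
        self._blocks[lba] = data


class SanBackend(StorageBackend):
    """Stand-in for a traditional SAN array from another vendor."""

    def __init__(self):
        self._blocks = {}

    def read_block(self, lba):
        return self._blocks.get(lba, b"\x00" * 512)

    def write_block(self, lba, data):
        self._blocks[lba] = data


class VirtualDisk:
    """Software-defined layer: one logical disk spread across any mix of backends."""

    def __init__(self, backends):
        self.backends = backends

    def _pick(self, lba):
        # Simple round-robin placement by block address; real products use
        # far more sophisticated tiering, caching and mirroring policies.
        return self.backends[lba % len(self.backends)]

    def read(self, lba):
        return self._pick(lba).read_block(lba)

    def write(self, lba, data):
        self._pick(lba).write_block(lba, data)


# The application sees one disk, regardless of which vendors sit underneath.
disk = VirtualDisk([FlashArrayBackend(), SanBackend()])
disk.write(7, b"hello")
print(disk.read(7))
```

Because the application only talks to the virtual disk, a backend from a new vendor can be added or swapped out without changing application code, which is the investment-protection argument made above.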

Parallel I/O technology, for instance, applies this principle in software-defined storage to unlock the power of multi-core servers, letting multiple CPU cores work on I/O simultaneously to cut response times and raise throughput. The performance of that technology has been demonstrated by world-record results in audited third-party benchmarks, which show that software-defined storage solutions are not only affordable but can outperform multi-million-dollar enterprise storage systems and all-flash arrays.
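
As a rough illustration of the parallel I/O idea (a sketch, not DataCore's actual implementation), the snippet below compares handling simulated I/O requests one at a time against fanning them out across worker threads with Python's standard `concurrent.futures` module. The request count and latency are made-up values; on a multi-core host the parallel path completes in a fraction of the serial wall-clock time because many requests wait on the "device" at once.

```python
import time
from concurrent.futures import ThreadPoolExecutor

NUM_REQUESTS = 32
IO_LATENCY_S = 0.01  # assumed ~10 ms of device/network wait per I/O (illustrative)


def handle_io(request_id: int) -> int:
    """Simulate one storage I/O by sleeping for its latency."""
    time.sleep(IO_LATENCY_S)
    return request_id


def serial_dispatch() -> float:
    """Process requests one after another on a single core."""
    start = time.perf_counter()
    for i in range(NUM_REQUESTS):
        handle_io(i)
    return time.perf_counter() - start


def parallel_dispatch(workers: int = 8) -> float:
    """Spread the same requests across a pool of worker threads."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(handle_io, range(NUM_REQUESTS)))
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"serial:   {serial_dispatch():.3f} s")
    print(f"parallel: {parallel_dispatch():.3f} s")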

With this revolutionary application of parallel computing in storage, users experience tremendous performance gains and cost savings. Market data from TechValidate shows that customers using this type of software-defined storage technology are already seeing dramatic economic and performance improvements (see the findings at: https://www.techvalidate.com/product-research/datacore-sansymphony-v/charts).

The Evolution of Hyper-Convergence and Software-Defined Storage

Hyper-convergence has gained a lot of attention as a growing technology, but it is really a choice about how to deploy software-defined storage. Users should be able to choose among hyper-converged (hypervisors, compute, networking and storage), converged (networking and storage), traditional SAN storage and cloud storage, and a true software-defined storage solution allows all of these to work in harmony. What some in the storage industry call “hyper-converged” is actually too limiting: it does not protect existing investments or properly handle enterprise workloads and their infrastructure (database workloads and Fibre Channel, for instance), and it therefore creates yet another ‘silo’ of specialized equipment from a single vendor instead of giving users choice and purchasing power for the future. This further constrains scalability and performance, especially for enterprise-level workloads.

To avoid this, users should choose a hyper-converged platform that works with existing investments and whose hardware can be purchased separately from different manufacturers, so that users keep the power of choice. In terms of performance, ask what vendors are doing to take advantage of today’s high-performance multi-core processors. Too often the answer is to throw a large number of nodes at the problem, which drives up cost and complexity and defeats the essential point of hyper-convergence: minimizing cost and complexity by doing more with less.
