- Scott D. Lowe, ActualTech Media
Let's pretend we have a time machine and can go back to 1995 to pluck a couple of data center engineers from their natural habitat and bring them forward to 2018. At first, they'll probably feel comfortable seeing the blinking lights, the obvious disks, the networking cables, the servers and the racks that hold it all together. However, once they start peeling back the layers, it'll quickly become clear that what they are seeing today is magic compared to what was available more than 20 years ago.
Although innovation has swept all parts of IT, storage will be the most foreign to our time-traveling engineers. Servers look and act the same, even if they're virtualized, and networks still operate in a similar fashion. But storage has been fundamentally transformed. In many data centers, the disks that our "chroniton" particle-infused engineers are used to are gone. Their replacement devices work more like RAM than storage -- even if the outside shell appears like it did way back when.
Moreover, beyond the devices themselves, the entire storage market has changed. It has shifted from one based on hardware differentiation to one that owes its existence to Intel processors powerful enough to let functions that once demanded custom hardware run as flexible, reprogrammable and easily updatable software. It's clear that we're squarely at the beginning of the software-defined storage age.
Flexible software and the software-defined storage market
As is the unfortunate case with many a promising technology trend, the term software-defined storage (SDS) has been bandied about and used to describe, well, just about every new storage product. Even some of the most storied tier-one storage vendors take immense pride in saying, "We're not a hardware company. We're a software company." The truth is that they're right, at least to a point.
There are only a few truly custom hardware storage platforms left. Of course, there is some hardware differentiation, but not that many vendors are bending their own sheet metal these days. Rather, almost everyone uses commodity hardware at the platform level and differentiates at the software layer.
This change has resulted in storage products that often add new features and performance gains with every new software update. That being said, are all these systems really software-defined?
Storage systems today may outwardly look like those of yore, but it's clear something has changed. If you place three storage systems from three vendors side-by-side and remove the bezels, you'll likely have difficulty telling them apart because the move to software has driven a commodity hardware trend that results in enterprise storage vendors of all stripes using the same hardware.
In addition, they all still provide typical block and file storage protocols that servers and applications consume. For traditional storage services, outward appearances have remained consistent to enable application compatibility. But once you get beyond the presentation layer, the innards look different. So for standard applications, it all looks the same even if the engine has been modified.
Degrees of separation
Why does this matter? Let's talk about the software-defined storage market in general terms. While every major player and most minor players in storage call themselves software-defined, there are degrees of software-defined that are important to understand for a broader discussion. There are several ways to look at this space.
First, there's storage that looks, acts and smells like it always has. Even if it's called software-defined, it's more or less a traditional storage system that requires connection to servers via Fibre Channel or Ethernet and exposes itself to hosts using various standard protocols. You buy these devices from vendors, install them in racks, configure them and then use them. While many of these products, from vendors such as Infinidat, Kaminario, Tegile and Tintri, take a software-first approach, they resemble iterations from before the software-defined era, which is often beneficial as organizations seek to adopt newer technologies in a nondisruptive way.
The second class in the software-defined storage market promises a similar outcome to the first, but is truly sold as software-only. Although you can typically buy appliances via partner channels, these vendors only sell storage software. Companies such as DataCore, ioFabric, Maxta and Nexenta Systems are all commonly associated with this market, but are far from alone (see "Is it hyper-converged or software-defined storage?"). Also consider products such as VMware's vSAN and Microsoft's Storage Spaces in this group, among many others. These vendors, some of which were previously storage consumers rather than suppliers, have become forces to be reckoned with in the storage market.
Is it hyper-converged or software-defined storage?
It's clear that hyper-converged infrastructure has become a key market force and is impacting storage in myriad ways. In classifying emerging data center architecture opportunities, in almost every case, hyper-converged infrastructure falls within the realm of software-defined storage. These products manage the storage layer with some kind of OS-based construct or by running a virtual machine with the underlying storage hardware abstracted and presented to the host via a kernel module or VM. You can consider hyper-converged infrastructure, in general, to be a subset of software-defined storage.
The open source community
In the cases described so far, the software-defined storage products are drop-in replacements for their older hardware-bound cousins. But there's a third approach in the software-defined storage market that's gaining attention: open source storage services. Look no further than tools such as Ceph, OpenStack and ZFS to see that there are, in general, thriving communities around certain storage-centric services. You may find it odd that I include a file system as an interesting tool, but if you look at companies such as Maxta, Nexenta and Tegile, they all use ZFS as part of their storage platforms. Of these, Maxta and Nexenta are the ones that would be considered software-defined in the strictest sense.
Ceph is an open source project that provides a unified storage platform capable of delivering block, file and object storage services. It provides for massive scale and enables support for both traditional and emerging workloads. Like most software-defined storage projects, Ceph deploys to commodity servers and can create a cluster of nodes used to deliver available and scalable storage. Also, like most open source projects, Ceph is available for free download and deployment, but, as the saying goes, it's "free like a free puppy." It requires ongoing care and feeding and a degree of knowledge that might be difficult to procure and maintain.
With that in mind, these projects typically monetize themselves through a service organization that provides the commercial support most enterprises need. Initially, such support was provided by a company called Inktank, which was acquired by Red Hat in 2014. Today, companies such as Fujitsu and SanDisk also support Ceph deployments.
On the infrastructure-as-a-service front, which includes storage, there's OpenStack. OpenStack is a free and open source platform intended to help organizations deploy full private cloud environments. This software-only product installs on commodity servers you deploy in your data center and includes all the components required to create a full cloud-like environment. OpenStack integrates compute, storage, networking, identity, management, database, bare metal, virtual machines and more into a singular environment and allows individual components to be swapped out. For example, if you'd rather back OpenStack's Cinder block storage service with a Ceph cluster instead of the default LVM driver, you can.
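To illustrate how pluggable this is, here's a minimal sketch of what a Cinder configuration pointing at a Ceph RBD backend might look like. The pool name, user name and backend label are illustrative choices, not defaults, and a real deployment would also need Ceph authentication keys distributed to the Cinder nodes:

```
# Sketch of a cinder.conf fragment using the Ceph RBD volume driver.
# Names such as "volumes" and "cinder" are assumptions for this example.
[DEFAULT]
enabled_backends = ceph-rbd

[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-rbd
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
```

The point is that the storage backend is just a driver setting in software; swapping storage engines doesn't require touching the rest of the cloud stack.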
The truth is OpenStack isn't doing well in the private cloud market, although certain OpenStack public cloud deployments continue to grow. For many, OpenStack is too complex. In an era in which dead-simple deployment and management trump all, asking users to deploy something as notoriously complex as OpenStack is a tough sell, even if it's marketed as free. In reality, there are various costs associated with the different OpenStack offerings. And regardless of the price of the software, because of the complexity, the cost of deployment and maintenance is often far too much to deal with.
OpenStack's future, for now, isn't bright, and it doesn't show signs of resurgence.
The state of the SDS market
If you listen to vendors, the entire market is already software-defined. But, as stated previously, that's something of an exaggeration. Hyper-convergence is driving a surge of interest in software-defined storage benefits, and there are pockets of pure SDS in Ceph, DataCore, IBM Spectrum Scale, Nexenta, OpenStack and a few others. However, there's a long way to go before SDS becomes as prevalent as vendors would have you believe it is.
Here's what I think is holding back pure SDS: People still want hardware. Like it or not, companies want full solutions, not pieces they have to put together. IT wants appliances they can slide into a rack and turn on. Many of the pure-play software options on the market haven't enjoyed the same success as some of their appliance-based siblings because of this one key difference.
I believe IT shops like the idea of software-defined storage, though. They just want it in the perceived simplicity of an appliance. To that end, several software-only vendors in the software-defined storage market have established either direct or so-called meet-in-the-channel relationships with hardware suppliers such as Cisco, Dell EMC, Fujitsu, Hewlett Packard Enterprise, Super Micro Computer and others. In those setups, the customer buys pure SDS from the storage software vendor, but has it delivered as a complete, supported appliance from an integrator or someone who can provide full support for both the hardware and software components.
The world is moving quickly toward eminently programmable and malleable software, providing storage that stays current and can easily extend to add new capabilities. Once this transition is complete, those engineers from 1995 will no longer have any relevant skills, and their ability to identify with our 2018 systems will vanish.
If you read this article and thought, "You know, maybe my skills are a bit out of date, and I haven't stayed as current as I should," consider expanding your understanding of software-led approaches. That way you won't turn into one of those accidental time travelers who don't recognize their own storage environment.