Effective storage service management rests on well-allocated features

Jon Toigo explains how proper management of value-added storage features can make a data center more efficient.

Storage efficiency depends on how well you perform in three areas: overseeing storage plumbing and hardware (storage resource management), allocating value-added storage services (storage service management), and managing the data placed in storage in the first place (data management). The truth is that both hardware and software management are required to make the infrastructure return its investment.

Since the advent of distributed computing three decades ago, vendors have been fielding storage arrays that leverage value-added software functions embedded in array controllers to create new products and differentiate them from competitors' offerings. Creating new products typically meant vendors blurred the lines between well-defined storage product tiers.

For example, Tier 1 arrays (fast, low-capacity disk) were merged in some cases with Tier 2 arrays (slower, high-capacity disk) to create array products that featured on-board hierarchical storage management (HSM) that moved older data to more capacious and less expensive Tier 2 disk components. The configuration, plus the value-added data migration software, supported a higher sticker price than traditional storage rigs sans integration and HSM. Similarly, products were introduced that blurred Tier 2 storage with Tier 3 (usually extremely high-capacity but slow-performing media, such as optical disk and tape) by using value-added software technologies such as compression or deduplication with disk. Again, the result was a pricey "new" rig distinguished by its value-added functionality.

According to vendors, all of this so-called innovation was necessary because the componentry of the arrays themselves -- including disk drives, power supplies, trays, enclosures and even RAID controllers -- was quickly understood to be commodity parts common to all products, regardless of the vendor insignia on the box. Only value-added functions provided a way to establish and justify array price tags and profit margins, or to impart technical differentiators to various products from multiple vendors.

This has resulted in isolated islands of storage automation, with each array providing a subset of special value-added services that too often frustrated efforts to manage infrastructure resources in a holistic manner. Value-added software such as thin provisioning and deduplication often distorts the collection of accurate information on available array capacity, for example, and that distortion compounds with each array added to the infrastructure.

Perhaps the easiest way to wrangle storage services (the software components of storage infrastructure) into some manageable whole would be to turn these functions off at the array controller altogether, and instead stand up the software services in a centralized supercontroller -- a storage hypervisor, to borrow terminology from the server world -- from which they can be assigned a workload on a policy-driven basis. CommVault, Tarmin, Sanbolic and a few other vendors offer interesting takes on this strategy.

Most vendors of storage service management software are changing their terminology to refer to their products as software-defined storage, a popular meme at the moment. But consumers need to be circumspect about confusing "marketecture" with architecture. Software-defined storage currently excludes storage virtualization technologies like DataCore Software's SANsymphony-V product and IBM's SAN Volume Controller. These products deliver an effective storage software service supercontroller, aggregate the capacity of all storage and enhance overall infrastructure performance by up to four times the speed of nonvirtualized storage. It is the capacity aggregation capability that excludes storage virtualization products from the software-defined storage party, though no cogent explanation has ever been offered to explain why. Storage virtualization has a longer pedigree than software-defined storage, but the reluctance to include it in software-defined rhetoric appears to have more to do with the proprietary interests of certain storage hardware vendors that resist the idea of capacity aggregation that reduces all storage to commodity JBOD.

The bottom line is that storage efficiency requires value-added storage services to be wrangled into an abstraction layer that can be assigned and applied judiciously to application workload data and the physical or logical containers it is stored in. Instead of isolating thin provisioning functionality to a single stand of disks, it can be applied to all storage.
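To make the idea of such an abstraction layer concrete, here is a minimal sketch in Python of a policy that maps workload classes to value-added services rather than binding each service to a single array. Every name here -- the service labels, the workload classes, the `assign_services` helper -- is hypothetical and illustrative, not any vendor's actual API.

```python
# Hypothetical sketch: a central policy assigns value-added storage
# services to workload classes, rather than leaving each service
# locked to one array controller. All names are illustrative.

# Services the abstraction layer can apply to any managed storage.
SERVICES = {"thin_provisioning", "deduplication", "replication", "hsm"}

# Policy table: workload class -> services granted on a policy basis.
POLICY = {
    "database":   {"thin_provisioning", "replication"},
    "file_share": {"thin_provisioning", "deduplication"},
    "archive":    {"deduplication", "hsm"},
}

def assign_services(workload_class: str) -> set:
    """Return the set of services the policy grants a workload class."""
    services = POLICY.get(workload_class, set())
    unknown = services - SERVICES
    if unknown:
        # Guard against a policy naming a service the layer lacks.
        raise ValueError("policy names unknown services: %s" % unknown)
    return services
```

The point of the table-driven shape is exactly the one in the text: thin provisioning (or any other service) is no longer a property of one stand of disks, but a policy decision applied wherever the workload's data lands.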

The path to storage service management nirvana

There are a couple of different paths to storage service management nirvana. One option is to deploy storage virtualization. Most storage virtualization products will let you keep using RAID on your arrays, and even other value-added functions. However, since those functions will be provided by the service management layer anyway, it makes sense to shut off the on-array value-added software and cancel its licenses to save money.

You will need to shift data temporarily to alternative storage so the capacity it occupies can be virtualized, which takes some time. Once this has been done, create virtual volumes; copy the data back to its pool, volume or other logical construct; and then assign the protection, performance and capacity management services you want. Service management can then be monitored from a common graphical user interface across all virtual storage.
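The sequence just described -- evacuate the array, virtualize the freed capacity, copy the data back, then assign services -- can be sketched as ordered steps. This is a toy model, not a real product API; every function and field name below is a hypothetical placeholder for an operation the administrator would perform.

```python
# Hypothetical sketch of the virtualization sequence described above.
# No function here corresponds to a real product's API; each stands in
# for an administrative step.

def virtualize_array(array: dict, hypervisor: dict) -> None:
    """Evacuate an array, virtualize it, restore data, assign services."""
    # Step 1: shift data temporarily to alternative (staging) storage.
    staged = list(array["data"])
    array["data"].clear()

    # Step 2: place the now-empty capacity under the storage hypervisor
    # and carve a virtual volume from it.
    volume = {"backing": array["name"], "data": [], "services": set()}
    hypervisor["volumes"].append(volume)

    # Step 3: copy the data back into the virtual volume.
    volume["data"].extend(staged)

    # Step 4: assign protection, performance and capacity management
    # services on a policy basis (illustrative choices here).
    volume["services"].update({"thin_provisioning", "replication"})

# Example run with one array under a fresh hypervisor.
hypervisor = {"volumes": []}
array = {"name": "array01", "data": ["lun0", "lun1"]}
virtualize_array(array, hypervisor)
```

After the run, the array's raw capacity backs a virtual volume that carries the data plus its assigned services, which is the state from which a common management interface can monitor everything.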

It is often best to virtualize storage incrementally, creating pools with different speeds, capacities and cost characteristics. That way, you can write policies for managing data at the same time you assign services. Plus, rolling the virtual environment out incrementally will build confidence in the technology among users and application administrators, and probably impress them with obvious speed improvements.

Some folks do not want to virtualize their storage infrastructure, often because they mistrust or misunderstand the difference between a software-based controller and a hardware-based one. Most array controllers on enterprise products today are simply PC motherboards running common Linux or Windows operating systems, so everyone is using a software application as a controller in any case. For those who prefer their storage capacity segregated and only the software services aggregated, use the try-before-you-buy versions of CommVault, Sanbolic, Symantec, Tarmin or other software-defined storage products to see which one works best with your infrastructure.

Keep in mind that storage service management is not the same thing as storage resource management. Both are required if you hope to raise your storage efficiency levels.
