Tip

Three considerations to ensure data storage efficiency

Vendors often claim their products cut out storage inefficiency, but administrators can make a difference as well with some planning.

If you've been hearing a lot of pitches that tell you a new product can improve your data storage efficiency, you're not alone. It's a hot topic these days at trade shows and conferences, and storage pros are hearing a great deal about technologies that can target storage system inefficiencies and make their systems quickly -- and beautifully -- efficient.

A similar silver-bullet pitch appeared to succeed when hypervisor vendors applied it to promote their wares in the first decade of the 21st century. First, they asserted that server resource use was terribly inefficient, which wasn't always the case. Rather than discuss the root causes of that inefficiency, these vendors promoted a silver bullet in the form of server virtualization and multi-tenant workload hosting (running more workloads on a server to use its spare resources and capacity). After much effort to deploy the technology, consumers realized that virtualization hadn't fixed the inefficiency problem. In some cases, it had merely masked the problem from view until the underlying issues reasserted themselves in other, often costlier, ways.

Storage inefficiency stems from the deployment of isolated islands of underused, overprovisioned storage, which in turn drive up costs and degrade performance. To address poor data storage efficiency, administrators need to analyze and resolve its root causes. Here are three key factors to examine when searching for inefficiencies in your storage systems:

1. The Tier 1 factor

Knowing what data you have and how it's hosted is the most important step in the remediation of inefficiency. Too many IT operators host their application data on Tier 1 -- high-performance disk with or without solid-state drives. This is the natural result of their desire to accelerate application performance right out of the gate. As well intentioned as this may be, it tends to assign expensive storage assets to less important data and introduces inefficiency. If application data is already parked on Tier 1, you'll need to implement some method to migrate older, infrequently accessed or updated data off expensive spindles and chips to less-expensive Tier 2 or Tier 3 storage. That frees expensive disk and silicon to host the data that actually needs their performance characteristics.
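As a rough illustration of the migration step, here's a minimal sketch in Python that walks a Tier 1 file tree and relocates files that haven't been accessed within a cutoff period to a cheaper tier. The mount points, the 90-day threshold and the reliance on POSIX access times are all assumptions for the example; block-level arrays and commercial tiering products accomplish this differently.

```python
import shutil
import time
from pathlib import Path

# Hypothetical mount points for this sketch; substitute your own tiers.
TIER1_ROOT = Path("/mnt/tier1/appdata")
TIER2_ROOT = Path("/mnt/tier2/appdata")
AGE_THRESHOLD_DAYS = 90  # assumed cutoff for "infrequently accessed"

def migrate_cold_files(dry_run=True):
    """Move files not accessed within the threshold from Tier 1 to Tier 2."""
    cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400
    for path in TIER1_ROOT.rglob("*"):
        if not path.is_file():
            continue
        if path.stat().st_atime < cutoff:  # requires atime tracking on the mount
            dest = TIER2_ROOT / path.relative_to(TIER1_ROOT)
            print(f"{'Would move' if dry_run else 'Moving'} {path} -> {dest}")
            if not dry_run:
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(path), str(dest))

if __name__ == "__main__":
    migrate_cold_files()  # dry run first; inspect the plan before committing
```

Note that many systems mount file systems with noatime for performance, in which case access times won't reflect reality and you'd need another signal, such as modification time or application-level metadata.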

2. Unified management capability

You'll need to decide if you require a unified management capability as you proceed toward data storage efficiency. If you have only one array from a single vendor, the management software that came with the rig may be just fine for managing configuration and monitoring status over time. However, if your environment has two or more storage arrays -- and especially if these arrays are from different vendors -- you'll require a unified management capability that can integrate configuration details and operational status information into a single-pane-of-glass hardware monitoring and management system. If your vendor doesn't support open management standards such as the Simple Network Management Protocol (SNMP), the Storage Management Initiative Specification (SMI-S) or REST, you'll need to acquire and deploy a proprietary storage resource management (SRM) software suite that works with your current hardware. From that point on, advise your vendors that you won't buy future products that can't be managed by your preferred software.
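To make the single-pane idea concrete, here's a minimal sketch of what polling heterogeneous arrays over REST could look like. The host names, endpoint paths and JSON fields are hypothetical, since every vendor's API differs; treat this as a pattern rather than a working integration.

```python
import requests

# Hypothetical inventory; real arrays expose vendor-specific REST APIs.
ARRAYS = [
    {"name": "array-a", "url": "https://array-a.example.com/api/capacity"},
    {"name": "array-b", "url": "https://array-b.example.com/api/capacity"},
]

def collect_capacity():
    """Pull capacity stats from each array into one consolidated report."""
    report = []
    for array in ARRAYS:
        try:
            resp = requests.get(array["url"], timeout=10)
            resp.raise_for_status()
            data = resp.json()  # assumed fields: total_gb, used_gb
            report.append({
                "name": array["name"],
                "total_gb": data["total_gb"],
                "used_gb": data["used_gb"],
                "pct_used": round(100 * data["used_gb"] / data["total_gb"], 1),
            })
        except requests.RequestException as err:
            report.append({"name": array["name"], "error": str(err)})
    return report

for row in collect_capacity():
    print(row)
```

In practice, an SRM suite performs exactly this kind of normalization at scale, translating each vendor's dialect into a common schema before presenting the single pane of glass.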

3. Centralized services such as software-defined storage or virtualization

You need to work toward scaling storage services beyond the individual array of disk drives. Since the dawn of distributed computing, vendors have leveraged the open systems revolution to proffer proprietary arrays with on-board controllers running proprietary, value-added software. This has not only driven a steep increase in the price of storage at the array level -- even as the cost per gigabyte of the commodity disk drives inside the array keeps falling -- but has also stood in the way of unified infrastructure and data management. Scaling becomes problematic, too: when an array cabinet runs out of trays for disk drives, you usually have to deploy an entirely new and separate array, complete with another copy of the proprietary value-added software and another pile of data to maintain in place.

The good news is that software-based "super controllers" -- storage virtualization engines and software-defined storage suites -- can replace on-board value-added software with a set of centralized services extended across vast numbers of arrays and drives. These range from storage virtualization products -- think DataCore Software, IBM SAN Volume Controller and so on -- to software-defined storage suites such as Tarmin, CommVault and EMC ViPR. The former aggregate capacity and services at a software abstraction layer, serving up virtual volumes with customized value-added services; see the sketch below. The latter fit the narrower definition of software-defined storage, which doesn't include capacity aggregation but does provide a set of services that can be applied to the storage where data is hosted.
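To show what capacity aggregation at an abstraction layer means, here's a deliberately simplified toy model in Python: free space from several back-end arrays is pooled, and a virtual volume is carved from whichever boxes have room. Real products such as SAN Volume Controller do this in controller software with far more machinery; the class and method names here are invented for illustration.

```python
class VirtualizationLayer:
    """Toy model: aggregate back-end array capacity into one virtual pool."""

    def __init__(self):
        self.backends = {}  # array name -> free capacity in GB
        self.volumes = {}   # volume name -> list of (array, GB) extents

    def add_backend(self, name, free_gb):
        self.backends[name] = free_gb

    def create_volume(self, name, size_gb):
        """Carve a virtual volume from whichever arrays have free space."""
        extents, remaining = [], size_gb
        for array in self.backends:
            if remaining <= 0:
                break
            take = min(self.backends[array], remaining)
            if take > 0:
                extents.append((array, take))
                self.backends[array] -= take
                remaining -= take
        if remaining > 0:
            raise RuntimeError(f"Pool exhausted; {remaining} GB unallocated")
        self.volumes[name] = extents
        return extents

pool = VirtualizationLayer()
pool.add_backend("vendor-x-array", 500)
pool.add_backend("vendor-y-array", 300)
print(pool.create_volume("app-vol-01", 600))  # spans both vendors' arrays
```

The point of the exercise: the volume spans arrays from different vendors, something an on-board controller confined to its own cabinet cannot offer.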

Storage infrastructure itself needs to be planned more intelligently, with an application and data focus and a healthy skepticism toward shiny new things. Remember that using only disk, or only disk and solid-state, can make storage infrastructure dramatically more expensive over time, while introducing tape as a Tier 3 can cut storage infrastructure costs by half; a rough illustration follows below. The cloud needs to be evaluated calmly, weighing its potential benefits against its very real risks. Avoid vendor hardware lock-in, keep your negotiating skills honed to get the best prices for what you buy, and don't forget the used hardware option.
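To see why a tape tier moves the needle, consider a back-of-the-envelope blended-cost calculation. The per-gigabyte prices and the data distributions below are illustrative assumptions, not vendor figures; plug in your own numbers.

```python
# Illustrative per-GB costs (assumptions for this sketch, not real quotes).
COST_PER_GB = {"tier1_ssd_disk": 3.00, "tier2_disk": 0.50, "tier3_tape": 0.05}

def blended_cost(distribution):
    """Weighted cost per GB given the fraction of data on each tier."""
    return sum(COST_PER_GB[tier] * frac for tier, frac in distribution.items())

all_disk = blended_cost({"tier1_ssd_disk": 0.40, "tier2_disk": 0.60})
with_tape = blended_cost(
    {"tier1_ssd_disk": 0.10, "tier2_disk": 0.30, "tier3_tape": 0.60}
)
print(f"All-disk blended cost: ${all_disk:.2f}/GB")   # $1.50/GB
print(f"With tape as Tier 3:   ${with_tape:.2f}/GB")  # $0.48/GB
```

Even with generous assumptions for disk pricing, shifting the cold majority of data to tape cuts the blended cost well past the halfway mark in this example.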

Poor data storage efficiency contributes to mounting costs that send the front-office bean counters looking for their cost-cutting tools. Only a business-savvy approach to the root causes of storage inefficiency will yield a workable, sustainable strategy for deriving the best performance and utilization from your storage and for demonstrating a return on investment that meets front-office expectations.

Next Steps

Create a more efficient data center with virtual storage and SRM

How cost, longevity and management affect storage efficiency

Use archiving to make the most of your storage
