Data storage efficiency can affect many aspects of a data center -- everything from performance to energy costs. Meanwhile, storage characteristics such as capacity allocation and utilization, data protection and level of management can affect efficiency. And according to Jon Toigo, managing principal at Toigo Partners International, while there are some effective monitoring tools on the market, it also takes the brainpower of IT pros to keep storage running efficiently. In this TechTalk interview from TechTarget's Storage Decisions conference, Toigo discussed data storage efficiency with executive editor Ellen O'Brien. To hear what he had to say about the challenges IT pros face and the tools that can help them, watch the video or read the transcript below.
I want to get started by talking about the chief criteria for defining storage efficiency.
Jon Toigo: Well, is there an app for that? If I had a nice tablet in front of me, I might have five different criteria. One would be capacity allocation efficiency -- how are we allocating the capacity that we own? Another would be capacity utilization efficiency -- how are we doing in terms of storing the right kind of data on active storage and offloading data that's infrequently accessed to an archive to free up space?
I might also look at it from a power efficiency standpoint. How much energy are we using, with disk drives consuming between 7 and 21 watts each? Some vendors are now providing the fastest storage on the planet using 1,900 disk drives -- at 7 to 21 watts per drive, that's roughly 13 to 40 kilowatts. That's a huge energy sink inside your data center.
Then you would also look at data protection efficiency, which is: What are the techniques I'm using to make a copy of the data so that if the primary storage mechanism goes away, all the data doesn't go away with it?
Finally, management efficiency. How are we doing in terms of spotting and rectifying issues and errors, and cleaning up the infrastructure and just making sure it's all healthy? And allocating, of course. When somebody comes to you and says, "I need another terabyte," how quickly can we do that? How efficiently can we do that?
There's a broad spectrum of meanings to that term, from an operational standpoint to an architectural one. I believe it's a multi-headed hydra of a question.
Let's say we understand what an efficient data storage system should look like. What are the chief challenges in getting there?
I think partly we outsource our thinking process to vendors who want to sell us software tools to do this job. We think there's a magical tool or kit: we can go out and buy a storage array, and the vendor cheerfully remarks that this is the last storage you'll ever need to buy. We're outsourcing our thinking about [achieving efficient storage] to the vendors.
In many cases, that's because shops that have gone to virtual server technology have violated the fundamental rules of storage inside their environment with the new server hypervisors that are out there. They're usually run by server administrators who are trying to manage the storage along with the servers -- and they don't know storage from Shinola.
The bottom line is that they want it to be drool-proof, and if a vendor will come to them, it's music to their ears when the vendor says, "Hey, you don't have to worry about that stuff because we take care of all that under the covers."
I think there's a lot of that going on, and that's creating a huge human challenge. That's your primary challenge. Storage itself suffers from a number of hurdles from a management standpoint. There's no incentive within the storage industry to work and play well together on a common management scheme, because otherwise everybody would realize that you're buying a box of Seagate hard drives. It doesn't matter if it says Hitachi on the outside, or IBM, or E I E I O -- it's all coming from the same three vendors.
So we've established the valuable role of storage managers and their brainpower. Are there any effective monitoring tools for them to use?
Toigo: I think there are. I've seen a lot of development in some of your basic storage resource management [SRM] software products. They don't sell very well, because people don't want to think about the problem of management. Despite what vendors do to sweeten these products and improve their functionality, they are rarely rewarded with lots and lots of sales.
If you really want to chase cost out of storage these days -- and I don't know too many companies that don't -- you definitely want to have a software product that's capable of monitoring your infrastructure so you can see what's going on. I like IBM's Tivoli for that sort of thing. I like SolarWinds, which bought Tek-Tools Storage Profiler -- a good little product. There are a number of good ones out there. I think the smart move for anybody watching this video would be to go out and select the SRM product you want, and then tell your vendors you're not buying their stuff anymore unless it can manage their product.
You'll see how quickly the vendors will come up to the table with open APIs [application programming interfaces] and whatever is required in order to make their product manageable if enough customers demand it.
What I usually hear from the vendors is: Customers don't care. How a piece of hardware will be managed isn't even on the top-ten list of criteria for selecting it. That should be your first criterion, not an afterthought. I'd love to see everybody -- and I'm talking about the vendor community -- follow X-IO's example and adopt REST as a protocol for managing their boxes. I've got a bunch of X-IO stuff in my labs; I've probably got a petabyte. I'm able to allocate that storage out with a smartphone, because it all speaks REST. It's also kind of cool: you set one box next to the other box, and they're using a Facebook-like algorithm that says, "Oh, hi. You're an icebox too. Do you want to be best friends forever? We'll share our storage." And it automatically fits it all into a grid of infrastructure. [It's] very cool.
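Toigo's point -- that a REST-speaking array can be provisioned from anything that can send HTTP, even a smartphone -- can be illustrated with a minimal sketch. Everything here is hypothetical: the endpoint path, the payload fields and the `build_allocation_request` helper are invented for illustration, since each array vendor defines its own REST schema. The shape of the call, though, is what makes such boxes scriptable: a plain HTTP verb plus a small JSON body.

```python
import json
import urllib.request

def build_allocation_request(base_url: str, pool: str, size_gb: int) -> urllib.request.Request:
    """Build (but don't send) a REST call asking a storage array to
    create a new volume. Endpoint and fields are hypothetical -- real
    arrays each publish their own REST API schema."""
    payload = json.dumps({"pool": pool, "size_gb": size_gb}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/volumes",     # hypothetical resource path
        data=payload,                  # JSON body describing the volume
        headers={"Content-Type": "application/json"},
        method="POST",                 # in REST, POST creates a resource
    )

# Example: ask a (fictional) array for a 1 TB volume from pool0.
req = build_allocation_request("http://array.example.local/api", "pool0", 1024)
print(req.method, req.full_url)
```

Because the request is just standard HTTP, the same call works from a laptop, an SRM tool, or a phone app -- which is exactly the manageability argument Toigo is making.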