Data protection and disaster recovery are something of a science. People have been honing their disaster recovery approaches across all kinds of infrastructure for decades. However, with data volumes on the increase, tried-and-true approaches are no longer good enough on their own.
Taking a cue from the layered and prioritized defenses of cybersecurity, a number of experts support a tiered approach to data protection and DR. All data is important, but giving some data priority treatment is useful.
"I do think DR plans can benefit from the tiered approach, and some organizations are taking that step," said Ed Featherston, vice president and principal cloud architect at Cloud Technology Partners. Conceptually, he said, application data in most organizations has already been tiered by definition through recovery point objectives (RPO) and recovery time objectives (RTO). But, frequently, the final DR plan gets set to the least common denominator, namely the toughest RPO and RTO numbers, and becomes an all-or-nothing effort.
However, as the volumes of data storage have grown "at more than exponential speeds, having all data under one tier is becoming more problematic from a time, resource and cost perspective," Featherston said. As a result, some organizations are now taking those RPO and RTO numbers and effectively creating tiers of recovery, similar to their cyberdefense strategy, he said.
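The RPO/RTO-driven tiering Featherston describes can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor's implementation; the tier thresholds and application names are invented for the example.

```python
# Hypothetical sketch: classifying applications into recovery tiers
# based on their RPO/RTO targets. Thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class App:
    name: str
    rto_minutes: int  # maximum tolerable downtime
    rpo_minutes: int  # maximum tolerable window of data loss


def dr_tier(app: App) -> int:
    """Assign a recovery tier; lower number means higher priority."""
    if app.rto_minutes <= 15 or app.rpo_minutes <= 5:
        return 1  # e.g., continuous replication, automatic failover
    if app.rto_minutes <= 240:
        return 2  # e.g., frequent snapshots, warm standby
    return 3      # e.g., nightly backup, restore on demand


apps = [
    App("orders-db", rto_minutes=10, rpo_minutes=1),
    App("reporting", rto_minutes=480, rpo_minutes=1440),
]
for app in apps:
    print(app.name, "-> tier", dr_tier(app))
```

Grouping applications this way lets the most aggressive (and expensive) protection apply only to tier 1, rather than setting the whole DR plan to the toughest numbers.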
Avoiding downtime is high-priority in DR
According to Christophe Bertrand, an analyst at Enterprise Strategy Group (ESG), modeling disaster recovery approaches after cyberdefense is a sound strategy. Not all data is created equal, but even acknowledging that, organizations deciding how to protect each type must still navigate increasingly stringent backup and recovery requirements.
Citing results from a recently conducted ESG study, Bertrand said there is very little tolerance for even modest downtime among today's organizations. ESG asked IT pros about their data recovery priorities and tolerance -- for example, application unavailability or data unavailability for high-priority applications. Some 14% of respondents said they could tolerate no downtime ever for their high-priority applications, and another 36% said they could tolerate it for only up to 15 minutes. Just 21% said they could tolerate 15 to 60 minutes of downtime.
"That is what organizations want, but not necessarily what they achieve," Bertrand said. So, to get closer to zero downtime, you need to adopt technologies, classifications and policies that deliver on your service-level agreements, "because you are dealing with growing volumes of data, and that makes it hard to replicate and store everything," Bertrand said. In other words, inevitably, you must tier data based on how critical it is. You then need to compose DR scenarios and orchestrate recovery in a way that supports that assessment, he said.
In fact, Bertrand noted, progress in methods and capacities has meant that DR has gotten better over the years: more democratic with more data subject to comprehensive backup and at least comparatively rapid recovery. And that's a good place to start. Still, tiering will need to be part of the fine-tuning for the foreseeable future.
Tiering is not a new approach
A discussion of DR tiering will feel like déjà vu for some, said Greg Schulz, analyst at StorageIO. "It will be old for some and new for others, but when you think about it, [business continuity and disaster recovery] and cybersecurity ultimately have similar objectives: It is all part of data protection," Schulz said.
With cybersecurity, multiple tiers and multiple strategies are needed to protect data from different internal and external threats. Data protection is similar: The multiple tiers are about safeguarding your backups and your business continuity and DR capabilities against technology glitches, hardware failures and human actions, whether accidental or intentional.
"The best protection is multiple tiers supplemented by multiple layers and multiple points," Schulz said. Of course, that is often easier said than done, but "if you step back and use some common sense," it is easy to find your way to best practices. For example, he noted, many organizations use the terms grandfather, father, son or "three, two, one" to describe multiple levels of backup on different media and off site or in the cloud.
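The grandfather-father-son rotation Schulz mentions can be illustrated with a small sketch. This is a simplified, hypothetical scheduling rule (many real rotations differ in which days trigger which level); it only shows the idea of monthly, weekly and daily backup tiers.

```python
# Hedged sketch of a grandfather-father-son rotation: label each
# calendar date with the backup level it would receive. The rule
# (1st of month = monthly, Sunday = weekly, else daily) is an
# illustrative assumption, not a standard.
from datetime import date


def gfs_label(d: date) -> str:
    if d.day == 1:        # first of the month -> monthly full backup
        return "grandfather"
    if d.weekday() == 6:  # Sunday -> weekly full backup
        return "father"
    return "son"          # any other day -> daily backup


print(gfs_label(date(2019, 4, 1)))  # grandfather
print(gfs_label(date(2019, 4, 7)))  # father (a Sunday)
print(gfs_label(date(2019, 4, 3)))  # son
```

Paired with the "three, two, one" rule -- three copies, on two media types, one off site or in the cloud -- each label would also determine where the copy is stored and how long it is retained.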
Schulz stressed that "DR doesn't have to be an all-or-nothing approach." Although many organizations take that approach, greater granularity can be the key to efficiency and getting the results you need for your most critical data and applications. "DR is a big umbrella under which we do things like business resilience and automatic failover and continuous backup and restore at different granularity," he said. The question is always, "can we get back to a certain point without having to go for full recovery?"
Schulz said tiered disaster recovery approaches, or those that allow partial backup, can be helpful in situations such as recovering all or part of an email system. There is a place for tiering in this approach, he said. "It is common sense, but common sense is not always common."