Disaster recovery focuses on the protection and restoration of IT system functions and network services, and it dates back to the 1970s. Its roots are in mainframe computing systems, and it extends to hardware and software, as well as data files, databases, utilities and applications.
One common model of recoverability -- the Tiers of Disaster Recovery -- originated in the Share automated remote site recovery task force in the 1990s. Share is an early data processing user group whose members were largely IBM mainframe users.
These tiers have roots in an important concept that was critical in the emergence of modern IT disaster recovery: "reliability, availability and serviceability."
Tiers of disaster recovery
IT organizations must back up data, databases, applications and hardware to one or more secure locations for subsequent access, such as in an emergency with damaged systems or corrupted files. In tiered disaster recovery, each tier represents the recoverability of specific types of data storage resources based on the recovery method and recovery time. As the tier number increases -- from 0 to 7 -- the recovery time decreases. While not always the case, typically the higher the tier, the greater the cost of recovery resources.
From the early days of tape and disk, the backup and recovery process has evolved to intelligent systems that continuously monitor IT operations and respond immediately to a potentially disruptive condition. Cloud-based storage systems, coupled with intelligent backup and recovery platforms, have improved technology disaster recovery.
The standard number of tiers used to be six, but changes in technology have brought that number to seven. Tier 0 is a commonly accepted addition to these lists and represents a "ground floor" for DR, with zero off-site recovery options.
As technology evolves and businesses adapt to modern methods, earlier tiers have become obsolete in most organizations. The commonly accepted tiers of disaster recovery in IT are as follows.
Tier 0: No off-site data
Recovery is only possible with on-site systems. This is how most organizations in the 1960s, 1970s and 1980s handled backup and recovery activities. Tape and disk drives were the storage media, and backup processes were handled by utility programs designed for that purpose.
Tier 1: Physical backup with a cold site
Data, stored on magnetic tape or another medium, is transported by the organization to an off-site facility that has no IT equipment installed. External companies provide secure above- and below-ground storage facilities for backup tapes. Cold site firms offer largely empty buildings configured with electric power, HVAC and basic network services, plus equipment space and office areas. Organizations that declare a disaster must procure replacement equipment themselves, have the equipment -- along with data tapes or disks -- shipped to the cold site, and then have everything installed, configured and tested to provide data recovery.
Tier 2: Physical backup with a hot site
Data is transported to a hot site, an off-site facility that has the hardware to take over from the primary data center. Hot sites were just becoming popular in the 1970s and 1980s and were used primarily by organizations that had at least one data center and a significant investment in mainframe hardware and software. Network speeds at the time were too slow to efficiently transmit backup data from the main site to the hot site, so tapes were the backup vehicle of choice.
Tier 3: Electronic vaulting
Data is electronically transmitted to a hot site. As network bandwidth and overall transmission speeds increased during the 1980s and 1990s, organizations could receive electronic data backups on their on-site storage devices. Storage devices in this tier include hard drives, tape drives and optical storage.
Tier 4: Point-in-time recovery
Point-in-time copies of storage volumes, files or databases are created at a specific moment, such as end-of-day. The copies are stored in an active secondary site, either company-owned or provided by a third party. Copies are stored at both the primary and secondary sites, each site backing up the other. Disk and solid-state storage are used in this tier, as well as cloud storage technology.
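The core idea of a point-in-time copy is that the snapshot is frozen at the moment it is taken, so later writes to the live volume never alter it. The sketch below illustrates this with an in-memory dict standing in for a storage volume; the structure and timestamps are illustrative, not any vendor's snapshot API.

```python
import copy

snapshots = {}  # timestamp -> frozen point-in-time copy

def take_snapshot(volume, ts):
    # Deep-copy so subsequent writes to the live volume
    # cannot change the stored snapshot.
    snapshots[ts] = copy.deepcopy(volume)

# A tiny stand-in for a storage volume.
volume = {"orders.db": ["order-1"]}

take_snapshot(volume, "2024-01-01T23:59")  # end-of-day copy
volume["orders.db"].append("order-2")      # next day's writes

# Recovery restores the data exactly as it was at the chosen moment.
restored = snapshots["2024-01-01T23:59"]
```

Recovering from the snapshot loses everything written after 23:59 -- which is exactly the trade-off that motivates the lower-RPO tiers below.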
Tier 5: Two-site commit/transaction integrity
At this tier, data is continuously transmitted to primary and alternate backup sites. Organizations need substantial network bandwidth, often provided through the internet, to maintain a constant flow of data. Local, as well as remote, storage can be used in this situation. Cloud storage is often a fit at this tier.
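The "two-site commit" in this tier's name means a write is not acknowledged until both the primary and the alternate site have accepted it, preserving transaction integrity across locations. This is a minimal sketch of that pattern; the `Site` class and its `prepare`/`commit` methods are hypothetical stand-ins for real replication endpoints.

```python
class Site:
    """Hypothetical stand-in for one replication endpoint."""

    def __init__(self, name):
        self.name = name
        self.log = []     # committed records
        self.staged = None

    def prepare(self, record):
        # Stage the write; a real site would persist it to a journal here.
        self.staged = record
        return True

    def commit(self):
        self.log.append(self.staged)
        self.staged = None

def two_site_commit(record, primary, alternate):
    """Acknowledge the write only after BOTH sites stage it successfully."""
    if primary.prepare(record) and alternate.prepare(record):
        primary.commit()
        alternate.commit()
        return True
    return False  # neither site commits if either site cannot stage

primary = Site("primary")
alternate = Site("alternate")
acknowledged = two_site_commit({"txn": 1, "amount": 100}, primary, alternate)
```

Because either site's refusal blocks the commit at both, the two logs never diverge, which is why this tier needs the substantial, constant bandwidth described above.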
Tier 6: Minimal to zero data loss
Recovery in this approach is instantaneous, often thanks to disk mirroring or data replication. These technologies back up data files and databases as they are created. Companies with a low recovery point objective (RPO) requirement, such as banks and other financial institutions, aim for zero data loss. RPO is an important DR metric that specifies how much data loss, measured in time, an organization can tolerate before a backup must be taken again. The lower the RPO value, the more frequently the data must be backed up. At this tier, backed-up data is essentially the same age as primary data. Cloud storage is often used in tier 6 setups.
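The RPO relationship can be stated as simple arithmetic: the worst-case data loss equals the time since the last backup, so the backup interval must not exceed the RPO. The sketch below encodes that rule; the figures are illustrative, not drawn from any particular organization.

```python
def worst_case_data_loss(backup_interval_minutes):
    # In the worst case, a disaster strikes just before the next backup,
    # losing everything written since the previous one.
    return backup_interval_minutes

def meets_rpo(backup_interval_minutes, rpo_minutes):
    """The backup interval may not exceed the RPO."""
    return worst_case_data_loss(backup_interval_minutes) <= rpo_minutes

# Nightly tape backups against a 15-minute RPO: up to 24 hours lost.
nightly_ok = meets_rpo(backup_interval_minutes=24 * 60, rpo_minutes=15)   # False

# Continuous mirroring (interval effectively 0) meets even a zero RPO.
mirroring_ok = meets_rpo(backup_interval_minutes=0, rpo_minutes=0)        # True
```

This is why a zero-RPO requirement effectively mandates the mirroring or continuous replication described in this tier rather than scheduled backups.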
Tier 7: Recovery automation
Recovery automation is the newest tier of disaster recovery. In this tier, technology continuously monitors multiple aspects of data operations, looking for any kind of situation that threatens those operations. When the system detects a possible abnormal situation, it immediately analyzes the condition against its database of conditions and recovery rules. It then triggers systems that have critical data and applications already in place for recovery in a minimum amount of time. This capability is typically available with cloud-based storage tools. Tier 7 often uses artificial intelligence.
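The detect-analyze-trigger loop described above can be reduced to a small sketch: observed events are matched against a table of known conditions, and the mapped recovery action is triggered automatically. The rule names and actions below are hypothetical; real tier 7 platforms maintain far richer condition databases and may apply AI-based anomaly detection instead of exact matching.

```python
# Hypothetical condition -> recovery-action rules.
RECOVERY_RULES = {
    "disk_failure": "failover_to_mirror",
    "site_power_loss": "activate_alternate_site",
    "ransomware_detected": "restore_last_clean_snapshot",
}

triggered = []  # stand-in for invoking the recovery systems

def handle_event(event_type):
    """Match a detected condition against the rule table and trigger recovery."""
    action = RECOVERY_RULES.get(event_type)
    if action is not None:
        triggered.append(action)  # in practice: call the recovery platform
    return action

# The monitoring layer feeds detected conditions into the handler.
handle_event("disk_failure")
handle_event("site_power_loss")
```

Unknown events return `None` and trigger nothing, which is where a real system would escalate to human operators or an anomaly-detection model.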