Craft a secure and reliable backup redundancy strategy
Backup and redundancy are not interchangeable. By using both, backup admins can avoid critical data loss and help ensure recovery. However, watch out for some common missteps.
Backups are the last line of defense against catastrophic data loss. All too often, unforeseen problems with backups make data recovery impossible.
A full data recovery depends on reliable backups of critical data. One way to improve backup reliability is to integrate redundancy wherever possible to eliminate any potential single points of failure within a backup infrastructure.
There are numerous ways to implement backup redundancy. However, even a fully redundant backup plan does not guarantee that data restoration is possible. It is key to test backups on a regular basis to ensure they are functioning as intended.
Backup vs. redundancy
Backup and redundancy are sometimes used interchangeably in IT. After all, backups are a redundant copy of an organization's data. Even so, backups and redundancy are two different things.
Backup redundancy typically refers to additional data copies that make backups more reliable. Organizations that have corrupt backups or damaged backup media can still restore data if they use this type of redundancy.
Some organizations use redundancy as a backup alternative. Rather than create a traditional backup, an organization might instead use replication to create multiple instances of its production data. This approach is based on the idea that multiple independent data copies are unlikely to all fail at the same time. For instance, backup admins can use the Windows Distributed File System (DFS) to replicate file data to replica servers. That data will remain intact and accessible even if a file server or its storage array fails.
The problem with using redundancy as a backup substitute is that the redundant data copies cannot collectively protect against all threats to an organization's data. If an organization's data is encrypted by ransomware, then the encrypted files are replicated to the other data copies, overwriting good data with bad data.
It isn't just ransomware that poses a threat to replicated data. If an end user accidentally overwrites a file, then the operation is replicated to the redundant servers. Similarly, if a user accidentally deletes a file, then the file will be removed from the replicas as well.
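The failure mode described above can be sketched in a few lines. This is a hypothetical toy model of replication, not a real DFS API: the point is that a naive replicator faithfully copies a bad write, such as a ransomware-encrypted file, over every good copy.

```python
# Toy model of replication (hypothetical, not a real DFS API): each replica
# is a dict of filename -> contents that mirrors the primary on every cycle.

def replicate(primary: dict, replicas: list) -> None:
    """Propagate the primary's current state to every replica."""
    for replica in replicas:
        replica.clear()
        replica.update(primary)

# A healthy file set, replicated to two servers.
primary = {"report.docx": "quarterly figures"}
replica_a: dict = {}
replica_b: dict = {}
replicate(primary, [replica_a, replica_b])

# Ransomware encrypts the primary copy...
primary["report.docx"] = "<encrypted gibberish>"
# ...and the next replication cycle overwrites every good copy with bad data.
replicate(primary, [replica_a, replica_b])

print(replica_a["report.docx"])  # the replicas now hold the bad data too
```

The same propagation applies to accidental overwrites and deletions: replication preserves availability, not history.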
Some organizations use one or more lagged replicas, which are data copies that receive replication updates much more slowly than the other copies. That way, if a ransomware infection occurs, admins can take a lagged replica offline before it has the chance to become infected. However, there is always the chance that a lagged replica will be damaged before an administrator has the chance to take the replica offline.
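A lagged replica can be modeled as a delay queue: incoming snapshots wait out the lag window before being applied, so an administrator who reacts in time can take the replica offline before bad data lands on it. The class below is an illustrative sketch, not a real replication product.

```python
from collections import deque

# Sketch of a lagged replica (hypothetical model): each received snapshot
# sits in a queue for `lag_cycles` replication cycles before it is applied,
# giving admins a window to take the replica offline after an incident.

class LaggedReplica:
    def __init__(self, lag_cycles: int):
        self.lag = lag_cycles
        self.pending = deque()   # queued snapshots awaiting the lag window
        self.state: dict = {}    # the replica's applied contents
        self.online = True

    def receive(self, snapshot: dict) -> None:
        """Queue a snapshot; apply the one whose lag window has expired."""
        if not self.online:
            return               # offline: no further updates are applied
        self.pending.append(dict(snapshot))
        if len(self.pending) > self.lag:
            self.state = self.pending.popleft()

    def take_offline(self) -> None:
        self.online = False

replica = LaggedReplica(lag_cycles=1)
replica.receive({"report.docx": "v1"})           # queued, not yet applied
replica.receive({"report.docx": "v2"})           # v1's window expires; applied
replica.take_offline()                           # admin reacts to an incident
replica.receive({"report.docx": "<encrypted>"})  # ignored: replica is offline
print(replica.state)                             # still the pre-incident copy
```

If the incident goes unnoticed for longer than the lag window, the bad data is applied anyway, which is exactly the risk the paragraph above describes.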
Redundancy is not a backup substitute. Organizations must also use a traditional backup method that offers point-in-time recovery capabilities.
There are three main areas that backup admins must focus on for backup redundancy: avoiding restorations where possible, implementing redundant backup servers and using redundant backup media.
When possible, admins should avoid restoring data from backup. The restoration process can be disruptive and often results in the loss of any data that has accumulated since the creation of the most recent recovery point.
The best way to avoid data restoration is to use redundant servers and redundant storage on the production network. The point is not to treat these servers as a backup, but to use redundancy to reduce how often a restoration is needed in the first place.
Redundant backup servers
In most cases, the backup server is a critical part of the overall backup infrastructure, so it cannot become a single point of failure.
Organizations can implement redundant backup servers in different ways based on the backup architecture and on the server vendor's recommendations. Do not attempt to implement parallel backup servers that operate independently of one another. Doing so almost always results in backup consistency problems if the two servers are backing up the same data.
If an organization uses disk-based backups, the best approach is often to design a two-step backup process where one backup server protects production servers and the second protects the first. If the primary backup server fails, the secondary backup server can help rebuild the failed server and the data that it has backed up.
Redundant backup media
Another way to protect backups through redundancy is to use redundant backup media. For many years, this meant adhering to the 3-2-1 rule.
The 3-2-1 rule essentially states that, to optimally protect data, organizations must keep three copies of the data: the original and two backup copies. These copies are stored on two different types of media, with one copy located off-site.
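The rule's three conditions are easy to express as a checklist. The sketch below is illustrative, assuming a simple `BackupCopy` record (a hypothetical type, not part of any backup product's API):

```python
from dataclasses import dataclass

# Hypothetical record describing one copy of the data.
@dataclass
class BackupCopy:
    media_type: str   # e.g. "disk", "tape", "cloud"
    offsite: bool

def satisfies_3_2_1(copies: list) -> bool:
    """Three total copies, at least two media types, at least one off-site."""
    return (
        len(copies) >= 3
        and len({c.media_type for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

copies = [
    BackupCopy("disk", offsite=False),  # original production data
    BackupCopy("disk", offsite=False),  # on-site backup
    BackupCopy("tape", offsite=True),   # off-site backup copy
]
print(satisfies_3_2_1(copies))  # True
```

Dropping the off-site tape, or keeping all three copies on disk in one building, fails the check for the reasons the rule was written.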
The 3-2-1 rule was created at a time when tape backups were the norm and does not perfectly align with current technology. Even so, there are numerous ways to use redundant backup media that adhere to the spirit of the 3-2-1 rule, even if they do not follow the rule to the letter.
If the organization uses tape backups, then it can implement redundancy -- and follow the 3-2-1 rule -- by creating two separate copies of each tape. One copy can remain on premises, where it is easily accessible for restoration, while the duplicate tape is stored off-site for safekeeping.
There are a few different ways to achieve media redundancy with disk-based backups. One option is to perform disk-to-disk-to-tape backups, which copy the contents of disk-based backups to a separate storage array and to tape for safekeeping. As an alternative, backup admins can use disk-to-disk-to-cloud, which creates two disk backups and a cloud backup.
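A disk-to-disk-to-cloud chain can be sketched as a set of tiers that each hold an independent copy of the same backup image. The functions and tier names below are hypothetical stand-ins for real storage targets; the useful habit the sketch shows is verifying a checksum at every tier to catch silent corruption.

```python
import hashlib

# Sketch of a disk-to-disk-to-cloud chain (tier names are illustrative).

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def run_backup_chain(data: bytes) -> dict:
    """Copy the backup image to two disk tiers and a cloud tier."""
    return {
        "primary_disk": bytes(data),    # first disk-based backup
        "secondary_disk": bytes(data),  # copy on a separate storage array
        "cloud": bytes(data),           # off-site cloud copy
    }

image = b"nightly backup image"
tiers = run_backup_chain(image)

# Verify every tier matches the source before trusting the backup.
assert all(checksum(copy) == checksum(image) for copy in tiers.values())
print(sorted(tiers))  # ['cloud', 'primary_disk', 'secondary_disk']
```

A disk-to-disk-to-tape chain follows the same shape, with a tape tier in place of the cloud tier.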
Another option is to use mirrored storage for disk-based backups, which enables admins to replicate backups to an identical storage array. However, this approach does not produce a backup on removable media and, on its own, does not protect against a site-wide failure. Organizations that consider this approach should also replicate the backup server's contents to the cloud or to a secondary data center rather than depending exclusively on hardware-level replication within the local data center.