Unless they want to risk data loss, cloud admins need to invest some time in a solid backup plan.
Cloud-based backups, in general, copy data to a secondary location, such as an archival service, and store it there. If the source data is compromised, enterprises can retrieve the backup data and restore systems to a state before the compromising event.
Backup processes and disaster recovery (DR) work hand in hand. A company's DR strategy relies on backups to bring systems back online. Review these common questions to get a cloud backup plan off the ground and ready to work.
1. What are some cloud backup methods to explore?
A key step in any cloud backup strategy is to weigh the available options. The first approach is to back up an application within the same cloud on which it's hosted. While it's easier and cheaper to implement than alternative backup methods, this approach lacks isolation -- particularly if the backups don't occur in a separate cloud region. If there is a provider outage or security breach, the whole system could fail.
The second option is to take a hybrid approach. Back up cloud-hosted applications locally, where IT teams have full control over retention, storage and security. This provides a high degree of isolation from the data source and minimizes the effects of cloud failure. However, this method can introduce latency issues and delay restores because of the physical separation between the cloud and on-premises environments.
Lastly, enterprises can perform backups from one cloud platform to another. For example, if they host data on AWS, they can back it up on Microsoft Azure. This approach provides isolation, like the on-premises backup method, and offers fast recovery times. There are cost implications, however, since providers bill users for consumed network bandwidth. Also, additional infrastructure components -- such as an inter-cloud VPN -- can increase costs.
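To put the cost consideration in concrete terms, the sketch below estimates monthly egress charges for a cloud-to-cloud backup. The per-gigabyte rate is a hypothetical placeholder, not a quote from any provider -- check the source provider's current data-transfer pricing.

```python
def monthly_egress_cost(gb_per_backup: float, backups_per_month: int,
                        rate_per_gb: float) -> float:
    """Estimate monthly network egress charges for cross-cloud backups.

    rate_per_gb is a hypothetical placeholder; real rates vary by
    provider, region and volume tier.
    """
    return gb_per_backup * backups_per_month * rate_per_gb

# Example: 500 GB nightly backups at a hypothetical $0.09/GB egress rate
cost = monthly_egress_cost(500, 30, 0.09)
print(f"Estimated monthly egress: ${cost:,.2f}")  # Estimated monthly egress: $1,350.00
```

Even a modest nightly backup adds up quickly at scale, which is why the bandwidth bill deserves as much scrutiny as the storage bill.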
2. What tools can support a cloud backup strategy?
Public cloud providers, including Google, AWS and Microsoft, offer multiple cloud storage and backup tools. For example, Google Cloud Storage Coldline, Amazon Glacier and Azure Archive Storage all provide a service to store infrequently accessed backup data. Microsoft also offers Azure Backup, an automated cloud backup service that lets admins choose between two storage options: locally redundant, where data copies exist within the same region, and geo-redundant, where data replicates to a secondary region. Amazon provides a similar service in AWS Backup, and Google also offers on-demand and automated backup options.
Additionally, there are third-party vendors, such as Acronis, Druva and Veeam, that offer cloud-based backup and recovery software.
Practice makes perfect
IT teams should regularly test a cloud-based backup and DR process to ensure things go smoothly in the event of an actual disaster. Take a hands-on approach -- rather than paper-based tests -- to check that systems run properly, and perform tests at least once a year. Some experts recommend testing once a quarter because of the frequent changes in cloud environments.
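One way to make a test hands-on is to restore a copy and verify it against the source byte for byte. The sketch below does this with local files and SHA-256 checksums; in a real drill, the restore step would pull from the backup service rather than copy a local file.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source: Path, restored: Path) -> bool:
    """A restore passes the drill only if checksums match exactly."""
    return sha256(source) == sha256(restored)

# Simulated drill: a local copy stands in for the actual restore step
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "data.db"
    src.write_bytes(b"production data")
    restored = Path(tmp) / "restored.db"
    shutil.copy(src, restored)  # stand-in for pulling from the backup service
    print("Restore verified:", verify_restore(src, restored))
```

Automating a check like this makes quarterly testing cheap enough to actually happen.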
3. What are best practices for replication and protection?
Failover and replication are two critical components of any cloud backup strategy that protects workloads. Public cloud providers operate regions worldwide, which IT teams can use as failover and replication targets -- but they need to consider latency before they choose a location. The farther the replication target is from the source data, the more latency the replication process incurs.
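As a back-of-the-envelope check on the distance trade-off, the sketch below computes a lower bound on round-trip time, assuming signal propagation at roughly two-thirds the speed of light in fiber. Real paths add routing and queuing delays on top of this floor.

```python
def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber, ignoring routing
    and queuing delays. Fiber carries signals at roughly 200 km/ms
    (about two-thirds the speed of light in a vacuum)."""
    fiber_speed_km_per_ms = 200.0
    return 2 * distance_km / fiber_speed_km_per_ms

# A replication target ~4,000 km away adds at least ~40 ms per round trip
print(f"{min_round_trip_ms(4000):.0f} ms")  # 40 ms
```

For synchronous replication, every write pays that round trip, so a nearby region is often the better target even if a distant one offers more isolation.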
Cloud-based backup processes -- such as replication and snapshots -- need a suitable amount of bandwidth to maintain proper performance. Since bandwidth costs can add up quickly, enterprises should closely examine the amount of data they move and how fast they need to move it. For more reliability and consistency, consider direct or dedicated connections, such as Azure ExpressRoute and AWS Direct Connect.
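The bandwidth question above reduces to simple arithmetic: how much data must move, and in what window. The sketch below computes the sustained throughput needed, using decimal units (1 GB = 8,000 megabits).

```python
def required_mbps(data_gb: float, window_hours: float) -> float:
    """Sustained throughput, in megabits per second, needed to move
    data_gb within window_hours. Uses decimal units: 1 GB = 8,000 Mb."""
    megabits = data_gb * 8000
    return megabits / (window_hours * 3600)

# Moving 2 TB within an 8-hour backup window needs ~556 Mbps sustained
print(f"{required_mbps(2000, 8):.0f} Mbps")  # 556 Mbps
```

If the result exceeds what a shared internet link can sustain, that is a signal to look at a dedicated connection such as ExpressRoute or Direct Connect.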
When it's time to replicate workloads, define a sequence of events to avoid bandwidth issues. Create a schedule that prioritizes critical workloads to ensure the most important systems power on first in the event of failover.
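The scheduling idea can be sketched as a simple priority ordering. The workload names below are hypothetical examples, not prescriptions; the point is that the failover sequence is data-driven rather than ad hoc.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    priority: int  # lower number = more critical, powers on first

def failover_sequence(workloads: list[Workload]) -> list[str]:
    """Order workloads so the most critical systems come online first."""
    return [w.name for w in sorted(workloads, key=lambda w: w.priority)]

# Hypothetical workload inventory
workloads = [
    Workload("reporting", 3),
    Workload("database", 1),
    Workload("web-frontend", 2),
]
print(failover_sequence(workloads))  # ['database', 'web-frontend', 'reporting']
```

Keeping the priorities in an inventory like this also makes the failover plan reviewable, so the sequence is agreed on before a disaster rather than improvised during one.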