3 ways to approach cloud bursting

With different cloud bursting techniques and tools from Amazon, Zerto, VMware and Oracle, admins can bolster cloud connections and efficiently move data in and out of on-prem facilities.

Cloud bursting takes advantage of the public cloud's vast scale, on-demand availability and pay-as-you-go pricing. It enables organizations to shift workloads to the cloud when they need more capacity and to shut those workloads down when demand subsides. The challenge of cloud bursting is to recognize the appropriate use cases and limitations and to streamline the practice with a suitable level of automation.

The concept of cloud bursting is fairly straightforward, but successful implementation can be problematic. The first potential issue is workload compatibility; the application must work well in the cloud.

The public cloud is not open infrastructure, but rather an array of predefined services and resources; a workload must function within that menu. Workloads designed for the cloud -- such as those that already run on a private cloud within an organization -- have a much better chance of running in a public cloud and successfully shuttling back and forth between cloud and on-premises infrastructure.

Cloud bursting limitations

Even when a workload runs well within a public cloud, cloud bursting poses network and storage performance problems. The main issue is where the data is stored and how it is exchanged across the network; workloads sensitive to network or storage latency can encounter problems with cloud bursting.

Moving data to or from the cloud is time-consuming, and moving cloud data back to the data center can impose additional egress costs.

There are three general approaches to cloud bursting, each relying on a different mix of manual management and software.

1. Distributed load balancing

The first option employs a familiar load-balancing approach in which the workload runs in tandem in the data center and the public cloud. An organization operates the workload within the local data center, simultaneously provisions resources -- such as compute instances, storage and monitoring -- in the cloud and deploys the workload to those resources.

Next, admins implement load monitoring for the local workload. When the load exceeds a predetermined threshold, the identical workload environment in the public cloud is started, and traffic is redirected to the public cloud for service. When the load falls below a predetermined threshold, traffic is redirected back to the local data center, and any resources started in the cloud are stopped.

This technique is an expression of distributed load balancing: the workload is deployed both locally and remotely in the cloud, and the load balancer shares traffic with the cloud deployment as needed.
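The control loop itself can be simple. The following is a minimal sketch of the threshold-based logic described above; the load metric, threshold values and the set_cloud_traffic() helper are illustrative assumptions, since the real hooks depend on the load balancer and cloud tooling an organization actually uses.

import os
import time

BURST_THRESHOLD = 0.80    # shift traffic to the cloud above this load
RECOVER_THRESHOLD = 0.50  # shift traffic back below this load

def local_load():
    # 1-minute load average normalized by CPU count (Unix only)
    return os.getloadavg()[0] / os.cpu_count()

def set_cloud_traffic(enabled):
    # Placeholder: call the load balancer and cloud APIs here to start or
    # stop the cloud copy of the workload and adjust traffic weights
    print("cloud traffic", "on" if enabled else "off")

bursting = False
while True:
    if not bursting and local_load() > BURST_THRESHOLD:
        set_cloud_traffic(True)    # burst: overflow traffic goes to the cloud
        bursting = True
    elif bursting and local_load() < RECOVER_THRESHOLD:
        set_cloud_traffic(False)   # recover: all traffic served locally again
        bursting = False
    time.sleep(60)                 # poll the load metric once a minute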

Doing this typically requires a predeployment in the cloud, which can incur operating costs even when the cloud workload isn't active. The cloud-side deployment is also generally fixed and cannot automatically adjust resources for changing load conditions, which can limit burst capacity.

2. Manual bursting

A second approach is to forgo the predeployment with a manual bursting technique, in which cloud administrators manually provision and deprovision cloud resources and services based on notifications from the load balancer.

This enables the organization to create a cloud deployment large enough to handle the required work and then destroy the deployment later to reduce costs.

A manual approach is undesirable because of human factors, such as the delay in receiving notifications, the probability of errors or oversights in deployment creation and the possibility of costly cloud sprawl if the cloud deployment is not destroyed.

Cloud admins can rely on scripts and other automation tools to speed cloud deployments and enforce policies, but the human element means the manual approach is best suited to testing and proof-of-concept cloud bursting projects.
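As a concrete illustration, the sketch below shows the kind of one-off burst script an admin might run after a load alert, then run again with a teardown flag when demand subsides. It uses the AWS SDK for Python (boto3); the AMI ID, instance type and tag values are placeholders, not recommendations.

import sys
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
BURST_TAG = [{"Key": "purpose", "Value": "cloud-burst"}]

def provision(count=4):
    # Launch extra capacity from a prebuilt image of the workload
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder image ID
        InstanceType="m5.large",
        MinCount=count, MaxCount=count,
        TagSpecifications=[{"ResourceType": "instance", "Tags": BURST_TAG}],
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def teardown():
    # Find and terminate every burst instance to avoid cloud sprawl
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:purpose", "Values": ["cloud-burst"]},
        {"Name": "instance-state-name", "Values": ["pending", "running"]},
    ])
    ids = [i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]]
    if ids:
        ec2.terminate_instances(InstanceIds=ids)
    return ids

if __name__ == "__main__":
    if "--teardown" in sys.argv:
        print("terminated:", teardown())
    else:
        print("launched:", provision())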

3. Automated bursting

The most desirable cloud bursting approach is a fully automatic and dynamic technique in which admins use automation software or third-party services to provision cloud resources on demand, deploy the workload to the cloud and then deprovision the deployment when traffic demand falls.

Such automation tools typically use the cloud provider's APIs, which facilitate dynamic, programmatic interaction with the cloud and its resources. Automation can create, grow, shrink and remove cloud resources as workloads change. This approach eliminates the disadvantages of human interaction and saves money by provisioning in line with real-time demand.
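For example, on AWS the same grow-and-shrink behavior can be driven through the Auto Scaling API. The sketch below assumes an existing Auto Scaling group named burst-group that wraps the cloud copy of the workload; in practice, an automation tool or the provider's own scaling policies would issue these calls rather than an admin.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

def scale_burst_group(desired):
    # Grow or shrink the cloud-side deployment to match current demand
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="burst-group",   # assumed, pre-created group
        DesiredCapacity=desired,
        HonorCooldown=False,
    )

scale_burst_group(8)   # burst: add cloud capacity while local load is high
scale_burst_group(0)   # later: deprovision so idle resources aren't billed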

Making cloud bursting work

Successful cloud bursting relies on adequate networking between the data center and the public cloud; a high-speed internet connection to the cloud provider's closest region should cover most latency needs.

Making the connection through a VPN bolsters security, and using a direct connection -- such as Azure ExpressRoute or AWS Direct Connect -- maximizes available bandwidth and eases congestion. Any tools must integrate with the local data center and the desired cloud provider; for example, a tool intended to automatically provision AWS instances must integrate with any APIs or cloud interfaces admins use. This makes evaluation and testing a critical part of tool adoption.
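Part of that testing can be as simple as confirming that a tool's credentials and network path actually reach the provider's API. A read-only call such as the one below -- again using boto3 against AWS as an assumed target -- is a low-risk way to verify API access before trusting a tool with bursting.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Read-only call: proves credentials, connectivity and API reachability
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print("EC2 API reachable; regions visible:", len(regions))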

Workload migration tools include Oracle Ravello, which can clone entire application stacks to the cloud, and Google Velostrata for enterprise workload migration to Google Cloud. For workload mobility, organizations can use Zerto 7, Pivot3 Acuity and VMware CloudVelox.
