

Optimize load balancing for a hybrid cloud architecture

Load balancing plays a critical role in ensuring application availability and high performance in hybrid cloud. Follow these general rules to get it right.

The goal of a hybrid cloud isn't to offer a choice between two IT environments -- it's to blend them together. Users expect a hybrid cloud architecture to provide a seamless pool of resources, where applications can deploy or redeploy based on availability, load and compliance policies.

To ensure quality of experience (QoE) for a hybrid cloud application, operations teams must support cloud bursting. And, to do that, teams must also provide load balancing across instances, which can be a complicated task in hybrid cloud.

A load balancer is a networking tool designed to distribute work. An end user connects to the port side of the load balancer, while the components of the application being scaled connect to the trunk side. When an end user request arrives, the load balancer directs it to a component based on a fair scheduling algorithm. Public cloud providers, such as AWS, Google and Microsoft Azure, offer load balancing tools on their platforms. But, in a hybrid cloud architecture, you also need to properly configure and manage load balancers in your data center.
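
To make the mechanics concrete, here is a minimal sketch, in Python, of the round-robin scheduling many load balancers apply by default; the back-end addresses are hypothetical placeholders.

```python
from itertools import cycle

# Hypothetical trunk-side component addresses.
BACKENDS = ["10.0.1.10:8080", "10.0.1.11:8080", "172.31.5.20:8080"]

class RoundRobinBalancer:
    """Hand each incoming request to the next back end in turn."""
    def __init__(self, backends):
        self._pool = cycle(backends)

    def route(self, request_id):
        backend = next(self._pool)
        print(f"request {request_id} -> {backend}")
        return backend

balancer = RoundRobinBalancer(BACKENDS)
for i in range(6):
    balancer.route(i)
```

Production load balancers layer health checks and weighting on top of a scheduler like this, but the core distribution logic is the same.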

While this juggling act can be a challenge, there are three general questions that can guide a hybrid cloud load balancing strategy -- and three rules to help you successfully implement one.

Three questions to ask

For a load balancer to work, it must connect to end users and to the scaled application components. To that end, the first question you need to ask is: How will you address the load balancer ports and trunks?

As you add or remove application components, you must also add or remove the corresponding trunk ports on the load balancer. If a component fails and is replaced, you must update the trunk port address of that component. This is a problem for many enterprises, as load balancers are often part of network middleware. In a hybrid cloud architecture, you need to figure out how to make that work, given that the data center and the cloud likely use different software.
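
As a sketch of what that bookkeeping involves, the hypothetical registry below tracks trunk-side addresses as components come and go; the address values are placeholders.

```python
class BackendRegistry:
    """Track trunk-side component addresses as instances come and go."""
    def __init__(self):
        self._backends = set()

    def register(self, addr):
        self._backends.add(addr)

    def deregister(self, addr):
        self._backends.discard(addr)

    def replace(self, failed_addr, new_addr):
        # When a component fails and is redeployed, swap its address so
        # the load balancer stops routing work to the dead instance.
        self.deregister(failed_addr)
        self.register(new_addr)

    def active(self):
        return sorted(self._backends)

registry = BackendRegistry()
registry.register("10.0.1.10:8080")    # data center component
registry.register("172.31.5.7:8080")   # cloud-burst component
registry.replace("10.0.1.10:8080", "10.0.1.12:8080")
print(registry.active())
```

In practice, this role is filled by service discovery or the cloud provider's APIs; the hard part in hybrid cloud is keeping the data center and cloud views of this list in sync.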

The second question is: How will you manage performance and QoE? It's unlikely you will have the same network connections and performance in the public cloud as in your data center -- often, they're not even similar. Many enterprises, particularly those that connect over the public internet, have relatively slow connections to the public cloud. Network latency and performance can vary, which means new component instances will perform differently, depending on whether they're in the cloud or the data center. This variability can be a problem in itself, and it also complicates capacity planning.
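
One common way to absorb that variability is to weight traffic by observed latency. The sketch below assumes two hypothetical back ends with purely illustrative round-trip times, and routes proportionally less work over the slower cloud link.

```python
import random

# Hypothetical measured round-trip latencies in milliseconds.
LATENCY_MS = {
    "dc-backend-1": 2.0,      # component inside the data center
    "cloud-backend-1": 45.0,  # component reached over the cloud link
}

def pick_backend(latencies):
    """Weight selection inversely to latency so slower paths get less work."""
    weights = {b: 1.0 / ms for b, ms in latencies.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for backend, w in weights.items():
        r -= w
        if r <= 0:
            return backend
    return backend  # floating-point edge case: fall back to the last back end

counts = {b: 0 for b in LATENCY_MS}
for _ in range(10_000):
    counts[pick_backend(LATENCY_MS)] += 1
print(counts)  # the data center back end receives most of the requests
```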

The third question to ask is: How will you handle the problem of state control? Most business applications are transactional, which means they involve multiple messages within a given dialog between a user and an app. With load balancing, messages related to a particular transaction could be sent to different components. If those components are stateful, meaning they expect to process transactions as a whole, the result can be a software failure or a corrupted database.
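
A standard defense is session affinity: route every message in a dialog to the same component. A minimal sketch, assuming a session ID is available on each message:

```python
import hashlib

BACKENDS = ["dc-backend-1", "dc-backend-2", "cloud-backend-1"]

def sticky_route(session_id, backends):
    """Hash the session ID so every message in a given dialog
    lands on the same stateful component."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# All messages for session "abc123" go to the same back end.
for _ in range(3):
    print(sticky_route("abc123", BACKENDS))
```

Simple modulo hashing like this reshuffles sessions whenever the back-end pool changes, which is why production load balancers typically use consistent hashing or cookie-based affinity instead.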

Three rules to guide your strategy

The first rule is to implement policy-based scalability in your hybrid cloud architecture. A given component should scale within its native hosting environment whenever possible; scale between the cloud and the data center only in the case of a failure or a lack of resources.
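
Expressed as code, the policy might look like the hypothetical decision function below; the environment names and inputs are illustrative assumptions, not a real orchestration API.

```python
def choose_scale_target(component, native_has_capacity, native_healthy):
    """Prefer scaling in the component's native environment; burst to
    the other environment only on failure or resource exhaustion."""
    if native_healthy and native_has_capacity:
        return component["native_env"]
    # Fall back across the hybrid boundary.
    return "cloud" if component["native_env"] == "data_center" else "data_center"

web_tier = {"name": "web", "native_env": "cloud"}
print(choose_scale_target(web_tier, native_has_capacity=False, native_healthy=True))
# -> "data_center": the native cloud pool is out of capacity, so burst across
```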

Carefully plan the capacity of your connection to public cloud services to handle any workflows that have to cross between the cloud and the data center. Limit the number of cases where the public cloud and the data center back each other up. This will help you assess the connectivity requirements between the two and ensure your data center recovery strategy doesn't simply create a cloud connection problem.
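
A back-of-the-envelope calculation helps here. The numbers below are purely illustrative assumptions, but the arithmetic shows how a cross-boundary request rate translates into link capacity:

```python
# Back-of-the-envelope sizing for the cloud link (illustrative numbers).
burst_requests_per_sec = 500   # assumed peak cross-boundary request rate
avg_payload_kb = 64            # assumed request-plus-response size

required_mbps = burst_requests_per_sec * avg_payload_kb * 8 / 1000
print(f"~{required_mbps:.0f} Mbps needed for cross-boundary traffic")
# -> ~256 Mbps, before protocol overhead and headroom
```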

Try to scale components within confined resource pools, such as a single data center, closely connected data centers or a single cloud provider. This approach will likely improve performance stability and make it easier to update a load balancer with the addresses of new components.

The second rule is to design your applications to do front-end processing in the cloud. Many enterprises already take this approach, but it's not universal. When you perform front-end processing in the cloud, you use the cloud's scalability and load balancing services where they matter most -- the point of end-user connection. This model also lets the front end combine multi-message transactions into a single message, which eliminates the problem of state control.
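
The sketch below illustrates the idea: a hypothetical cloud front end collects the messages of one dialog and forwards them to the back end as a single, self-contained request.

```python
def aggregate_transaction(messages):
    """Collect the messages of one dialog in the cloud front end and
    forward them as a single request, so no back-end component has to
    hold state between load-balanced messages."""
    return {
        "session": messages[0]["session"],
        "steps": [m["payload"] for m in messages],
    }

dialog = [
    {"session": "abc123", "payload": "add-item:42"},
    {"session": "abc123", "payload": "set-shipping:express"},
    {"session": "abc123", "payload": "confirm"},
]
print(aggregate_transaction(dialog))
```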

The third rule is to design your load balancer for accessibility and availability. If you have a cloud front end, the load balancer should be in the cloud. If you load balance data center components, put the load balancer in the data center. This placement also makes it easier to keep the load balancer updated with the addresses of all the components it supports.

Load balancer availability is critical, and often overlooked. Most cloud providers design their load balancers for high availability. If you use your own load balancers, you must provide a way to redirect traffic to a new load balancer instance if the old one fails.
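
If you run your own load balancers, that failover usually starts with a health check. The sketch below, using hypothetical host names, probes a primary load balancer and falls back to a standby:

```python
import socket

def is_alive(host, port, timeout=1.0):
    """Basic TCP health check against a load balancer instance."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical primary and standby load balancer addresses.
PRIMARY = ("lb-primary.example.internal", 443)
STANDBY = ("lb-standby.example.internal", 443)

active = PRIMARY if is_alive(*PRIMARY) else STANDBY
print(f"routing clients to {active[0]}")
```

In practice, the cutover is handled by a virtual IP, a DNS update or a cluster manager rather than application code, but the probe-and-redirect logic is the same.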

Virtualization, in any form, promotes scalability. As container orchestrators, such as Kubernetes, and other hosting tools improve, we can expect to see more load balancing options -- as well as more risk that they won't all work together properly in a hybrid cloud architecture. But, if you apply the general rules above, you should be able to address scaling challenges in the future.
