How can active-active clustering use load balancers?

Active-active clustering and cloud services enable IT workload resilience or redundancy in the enterprise. How does load balancing influence these setups?

Some workload disruptions can devastate a business, so IT organizations usually protect critical workloads via various strategies collectively known as high availability.

One of the most common strategies for high availability is server clustering, in which two or more physical servers process the same IT workload. As far as the clients -- the end users -- are concerned, the cluster behaves as a single server with one IP address.

Although it is possible to configure an active-passive environment where one server -- node -- does the work and others stand by, it is increasingly common to establish active-active clustering where each node shares the workload. An active-active approach offers more computing capacity because multiple servers are available to do the work. It also provides redundancy -- if one server fails, the others continue to operate and service the workload without any disruption to users.
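
To make the distinction concrete, the short Python sketch below contrasts the two approaches. The node names and health flags are hypothetical, not taken from any particular product: an active-passive balancer sends everything to the first healthy node, while an active-active balancer cycles traffic across all healthy nodes.

import itertools

# Hypothetical cluster nodes; names and health flags are illustrative only.
nodes = [
    {"name": "node-a", "healthy": True},
    {"name": "node-b", "healthy": True},
    {"name": "node-c", "healthy": False},  # a failed node is simply skipped
]

def active_passive_target(cluster):
    """Active-passive: all traffic goes to the first healthy node;
    the others stand by until that node fails."""
    for node in cluster:
        if node["healthy"]:
            return node["name"]
    raise RuntimeError("no healthy nodes in cluster")

def active_active_targets(cluster):
    """Active-active: every healthy node takes a share of the traffic,
    here via a simple round-robin cycle."""
    healthy = [n["name"] for n in cluster if n["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy nodes in cluster")
    return itertools.cycle(healthy)

print(active_passive_target(nodes))      # node-a handles everything
rr = active_active_targets(nodes)
print([next(rr) for _ in range(4)])      # node-a, node-b, node-a, node-b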

Server load balancing series

This expert answer is part of a series on server load balancing, including how to get the most out of load balancers and whether to use software- or hardware-based products.

The key to any server cluster is load balancing, which channels traffic to and from the workload and distributes that traffic across the cluster's nodes. In an active-passive configuration, the server load balancer recognizes a failed node and redirects traffic to the next available node. In an active-active configuration, the load balancer spreads the workload's traffic among multiple nodes. Distribution may be equal -- symmetrical -- or uneven -- asymmetrical -- depending on the computing power of each node or how an administrator prefers the active-active cluster to behave. For example, an older server in the cluster may receive a smaller percentage of traffic while a newer server receives more.
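
As an illustration of asymmetrical distribution, this Python sketch assigns hypothetical weights to an older and a newer server and picks a node in proportion to its weight; equal weights would reproduce symmetrical distribution. The names and percentages are assumptions for the example only.

import random

# Hypothetical weights: an older server gets a smaller share of traffic
# than a newer, more powerful one.
weights = {"old-server": 1, "new-server": 3}   # roughly 25% vs. 75% of requests

def pick_node(weighted_nodes):
    """Asymmetrical distribution: choose a node in proportion to its weight.
    Equal weights would give symmetrical distribution."""
    names = list(weighted_nodes)
    return random.choices(names, weights=[weighted_nodes[n] for n in names])[0]

# Rough check of the resulting split over many simulated requests.
counts = {name: 0 for name in weights}
for _ in range(10_000):
    counts[pick_node(weights)] += 1
print(counts)   # roughly 2,500 vs. 7,500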

Some load balancers distribute traffic dynamically in response to server loads, so overworked servers whose response times are deteriorating temporarily receive less traffic, smoothing traffic flow and optimizing overall workload performance. This makes the most of the server cluster's total compute capacity.
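
A minimal sketch of that dynamic behavior, assuming the balancer periodically collects per-node response times (the node names and millisecond figures below are made up), could steer each new request to the node that is currently responding fastest:

# Hypothetical observed response times (ms) reported by a metrics probe.
response_times_ms = {"node-a": 42.0, "node-b": 120.0, "node-c": 65.0}

def pick_fastest(metrics):
    """Dynamic distribution: send the next request to the node that is
    currently responding fastest, so an overworked node temporarily
    receives less traffic."""
    return min(metrics, key=metrics.get)

print(pick_fastest(response_times_ms))   # node-a

# Once node-a becomes busy, updated metrics shift traffic elsewhere.
response_times_ms["node-a"] = 300.0
print(pick_fastest(response_times_ms))   # node-c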

Selecting a load balancer for data center server clusters requires a keen knowledge of traffic requirements and a clear understanding of the necessary feature set. Clusters are not difficult to set up, but optimum load balancing and active-active clustering performance demand testing and IT architectural design expertise.

Load balancers are an integral part of any enterprise application cluster, whether deployed locally or in the cloud. Cloud services simply add another layer for system and software architects to consider in relation to workload availability. Enterprises that rely on the public cloud for enterprise application deployment typically use the cloud provider's load balancing services to distribute application traffic among multiple redundant compute instances. For example, Amazon Web Services' Elastic Load Balancing distributes incoming workload traffic across Amazon EC2 instances for greater application availability. Google Compute Engine also provides load balancing services. IT organizations can implement load balancers in hybrid cloud environments to distribute traffic to applications spread between on-premises and cloud deployments.
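
As one possible sketch of the AWS case, the boto3 snippet below shows roughly how redundant EC2 instances might be registered behind an Application Load Balancer. The instance, subnet and VPC IDs are placeholders, the names are invented, and a real deployment would need valid AWS credentials and resources plus additional settings such as health checks and security groups.

import boto3

# A minimal sketch using boto3's elbv2 client; all IDs below are placeholders.
elbv2 = boto3.client("elbv2")

# Create a target group that the load balancer will route traffic to.
target_group = elbv2.create_target_group(
    Name="web-cluster",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
)
tg_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

# Register the redundant EC2 instances that make up the active-active cluster.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaa111122223333a"}, {"Id": "i-0bbb444455556666b"}],
)

# Create the load balancer and attach a listener that forwards to the group.
lb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)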

