

How gRPC improves microservices load balancing on Kubernetes

Kubernetes networking can be a challenge, but the gRPC protocol can help software pros focus on the application logic rather than worry about how to handle network request calls.

Communication and networking are central to managing a Kubernetes cluster. While service meshes such as Istio and Linkerd manage end-to-end networking, they rely on integrations for specific networking tasks, such as proxying and load balancing. For remote procedure calls, gRPC has emerged as a specialized, lightweight framework.

Why Kubernetes needs gRPC

Created at Google and later adopted by the Cloud Native Computing Foundation, gRPC is a vital part of the Kubernetes toolchain. The gRPC protocol handles communication between a client and a server, and it lets remote instances execute requests as if both were running on the same machine. In this way, gRPC enables communication and request handling for distributed systems. This kind of framework is essential in Kubernetes, where distributed architecture is the norm.

Able to handle workloads at a large scale, gRPC is ideal for load-balancing requests in Kubernetes. Load balancing ensures application reliability by routing requests across all nodes and diverting traffic away from failed nodes. Load balancing is an essential part of managing a Kubernetes cluster, and gRPC takes a modern, distributed approach to load balancing.

How gRPC works

The gRPC protocol is built on the HTTP/2 network protocol, which is much faster than its predecessor, HTTP/1.1. The main reason for the performance improvement is a concept called multiplexing: HTTP/2 enables nodes to make multiple gRPC calls over a single TCP connection. This means faster request processing with fewer resources.


HTTP/2 also brings bidirectional streaming: the client and server can each send a stream of messages to the other asynchronously over the same connection. This makes real-time communication possible, which microservices applications often need.

Rather than a traditional text-based data format, like XML, the gRPC protocol uses protocol buffers, a faster, lightweight binary format. With protocol buffers, the structure of the data is defined in a machine-readable .proto file. Once defined, you can generate a data interface for any supported language. The latest version of the format, proto3, brings an even simpler syntax and support for more languages.
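As a sketch of what such a definition looks like, here is a hypothetical user-lookup contract in proto3; the package, message and field names are assumptions for illustration:

```protobuf
// user.proto -- a proto3 service contract; the schema is defined once
// and language-specific stubs are generated from it.
syntax = "proto3";

package demo;

// Each field gets a unique tag number that identifies it on the wire.
message UserRequest {
  string user_id = 1;
}

message UserReply {
  string name = 1;
  int32 age = 2;
}
```

From a file like this, the protoc compiler generates client and server code for the language of your choice, for example with a Go plugin: `protoc --go_out=. user.proto`.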

The gRPC protocol officially supports more than 10 programming languages and enables you to use a common API across all your services, despite differences in language. You only need to define your .proto file once.

How gRPC handles requests

One of the core tenets of gRPC is that it treats an application as a collection of services, not objects. There are four types of service requests, or remote procedure calls (RPCs), in gRPC: unary, server streaming, client streaming and bidirectional streaming. With a unary RPC, the client sends a single request to the server and waits for a single response. The other three types create a stream over which requests and responses are sent in sequence until all messages have been exchanged.
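The four call types above map directly onto the .proto service syntax, where the `stream` keyword marks the streaming side. This is a sketch with illustrative service and method names, not a definition from any real system:

```protobuf
// Sketch of the four gRPC call types in one proto3 service definition.
service UserService {
  // Unary: one request, one response.
  rpc GetUser (UserRequest) returns (UserReply);

  // Server streaming: one request, a stream of responses.
  rpc ListUsers (UserRequest) returns (stream UserReply);

  // Client streaming: a stream of requests, one response.
  rpc UploadUsers (stream UserRequest) returns (UserReply);

  // Bidirectional streaming: both sides stream independently.
  rpc SyncUsers (stream UserRequest) returns (stream UserReply);
}
```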

Timeouts and deadlines are important when using gRPC, because they keep long-running requests from holding on to resources indefinitely. When requests run too long, they back up, consume memory and can slow the whole system down. For this reason, it's good practice to set timeout and default deadline values on both the client and the server side.

The biggest benefit of gRPC is that it lets you focus on your application logic rather than worry about handling request calls over the network. With gRPC, you can perform actions, such as rolling out service updates, on the fly, which isn't possible in a monolithic application, where an update requires downtime.

As you consider modern approaches to run and manage applications in the cloud, Kubernetes is the preferred way to manage container infrastructure. However, as an open source project still in its early stages, Kubernetes networking remains a challenge. The gRPC protocol enables load balancing and request handling in a way that fits Kubernetes' distributed model. It brings performance speed -- thanks to simultaneous, bidirectional request streams between client and server -- and lets you set strong defaults for timeouts and deadlines. It even enables rolling updates of services. It is supported by other Kubernetes networking tools, like Istio, and is now the standard for Kubernetes load balancing.

