

Container hosting requires in-depth service network plans

Microservices and containers complement each other with scalability and flexibility. But without proper precautions, they can also compound network addressing problems in production.

Container hosting for services and microservices is a smart match, but there's no easy button to make the combination work.

In fact, IT organizations must carefully mesh the natural container hosting and network connectivity model with the intended service or microservice approach of an application, or things are likely to go poorly. This planning must start with an understanding of what that natural model is for containers, what the model is for the app architecture, and then how the two fuse.

Container hosting is normally based on private subnets; component IP addresses are for internal use only. They cannot be published onto, or reached directly from, any public IP address space, even if that space is within the corporate network. Application components hosted in containers can still communicate with one another, but problems arise when container deployments are destroyed or relocated through normal redeployment or load scaling, because their internal addresses change.

Admins expose containerized components for external interactions by mapping their internal private addresses to a public address. But this doesn't eliminate the underlying issue: redeployment and scaling still change the internal address, which breaks the carefully constructed network mapping.
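The mechanics are easy to picture with a toy forwarder. The sketch below, with hypothetical addresses, stands in for the kind of port mapping a container engine performs: a listener on a public port relays traffic to a container's private address, and the hardcoded backend shows exactly what breaks when redeployment moves that address.

```python
# Toy public-to-private port mapping: a proxy listens on a public port
# and forwards each connection to a container's internal address.
# Both addresses are hypothetical placeholders.
import socket
import threading

PUBLIC_BIND = ("0.0.0.0", 8080)        # externally reachable endpoint
INTERNAL_BACKEND = ("10.0.3.17", 80)   # container's private address --
                                       # goes stale if the container is
                                       # redeployed to a new IP

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    backend = socket.create_connection(INTERNAL_BACKEND)
    # Pump traffic in both directions concurrently.
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

def main() -> None:
    with socket.create_server(PUBLIC_BIND) as listener:
        while True:
            conn, _addr = listener.accept()
            handle(conn)

if __name__ == "__main__":
    main()
```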

In some container systems, such as Docker deployments, applications deploy in private subnets, and even components of other containerized applications must address them via exposed public IP addresses. Kubernetes, which Docker adopted alongside its internal orchestrator, swarm mode, in early 2018, deviates from this model with a single private IP address space that uses the RFC 1918 Class A (10.x.x.x) address range. With this approach, only the components of an application that are addressed externally, whether from the corporate virtual private network or the internet at large, must be exposed.
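For illustration, the following snippet uses Python's standard ipaddress module to check whether sample addresses fall inside a 10.0.0.0/8 cluster range of the sort described above; the addresses themselves are arbitrary examples.

```python
# Check membership in an RFC 1918 Class A (10.0.0.0/8) cluster range.
import ipaddress

CLUSTER_RANGE = ipaddress.ip_network("10.0.0.0/8")

for addr in ["10.42.7.3", "192.168.1.20", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "in cluster range:", ip in CLUSTER_RANGE,
          "| private:", ip.is_private)
```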

Addressing services

Many services run behind an API broker, a management system that handles the complexities of validating access rights and is usually the key ingredient in service discovery. Everything that uses the service must be able to address the API manager, and services must be addressable by the API manager.
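A stripped-down sketch makes the broker's two jobs concrete: gatekeep access, then resolve a service name to a backend. The tokens, service names and addresses below are hypothetical, and a real broker would consult live registries rather than static dictionaries.

```python
# Minimal API broker sketch: validate access rights, then discover the
# backend address for the requested service. All values hypothetical.
VALID_TOKENS = {"team-a-token": {"orders", "inventory"}}

SERVICE_REGISTRY = {
    "orders": ("10.0.5.14", 8000),      # private container addresses
    "inventory": ("10.0.5.21", 8000),
}

def broker(token: str, service: str) -> tuple[str, int]:
    """Gatekeep access, then perform service discovery."""
    allowed = VALID_TOKENS.get(token, set())
    if service not in allowed:
        raise PermissionError(f"token not authorized for {service}")
    try:
        return SERVICE_REGISTRY[service]
    except KeyError:
        raise LookupError(f"no backend registered for {service}") from None

print(broker("team-a-token", "orders"))   # ('10.0.5.14', 8000)
```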

Most services and microservices are designed to be resilient and scalable. The mapping of a service API to the component that provides the service must be dynamic to accommodate redeployment and scaling. Containers fit this bill.
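To see why the mapping must be dynamic, consider this minimal registry sketch, with hypothetical service names and addresses: when a replica is rescheduled, its old endpoint must be dropped and the new one registered, or callers will be routed to a dead address.

```python
# Dynamic service-to-endpoint mapping: entries are replaced as
# containers are redeployed, never fixed at deployment time.
registry: dict[str, set[tuple[str, int]]] = {}

def register(service: str, endpoint: tuple[str, int]) -> None:
    registry.setdefault(service, set()).add(endpoint)

def deregister(service: str, endpoint: tuple[str, int]) -> None:
    registry.get(service, set()).discard(endpoint)

# Initial deployment of two replicas.
register("orders", ("10.0.5.14", 8000))
register("orders", ("10.0.5.15", 8000))

# One replica is rescheduled; its private address changes.
deregister("orders", ("10.0.5.14", 8000))
register("orders", ("10.0.6.90", 8000))

print(registry["orders"])  # only currently live endpoints remain
```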

Containers and distributed services each impose their own requirements, often in the same areas, yet are built on different technologies and tools. These differences mean that a deployed service might not be addressable by its target partner services and users; that the connection to a service is either never established or is lost in the containerization process; or that multiple instances of containerized application components won't actually scale up appropriately. Plan ahead to prevent these negative outcomes.

Services, load balancing and networks

The first step is to understand the container orchestration features designed to facilitate service discovery, which fall under the services, load balancing and networking capabilities of Kubernetes. Start with Kubernetes or Docker swarm mode as the base platform for container orchestration.

Docker creates a per-user, private IP-addressed overlay network across all the hosts in a swarm. This network gives every container an IP address, which creates default universal visibility across container deployments. Kubernetes provides a similar service for container hosting. This setup is the basis for service addressing.
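In a Kubernetes-style cluster, that addressing usually surfaces to applications as DNS. The sketch below resolves a hypothetical service name following the usual <service>.<namespace>.svc.cluster.local pattern; it only returns results when run from inside such a cluster.

```python
# Discovery on a flat container network: the orchestrator's DNS maps a
# service name to the IP addresses behind it.
import socket

def discover(service_dns_name: str, port: int) -> list[str]:
    """Return the IP addresses the cluster DNS publishes for a service."""
    infos = socket.getaddrinfo(service_dns_name, port,
                               proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Hypothetical service name; resolvable only inside the cluster.
print(discover("orders.default.svc.cluster.local", 8000))
```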

Kubernetes has always operated on the presumption that all containers are mutually addressable. For service discovery, Kubernetes defines a service abstraction layer that links a service's label selector to both a set of pods and an access policy. An API can link Kubernetes containers to these service abstractions; a more complex network feature handles non-native elements. Kubernetes also integrates with other container services, such as the Istio service mesh, to provide an access-facing elastic load balancer that acts as a container front end for service delivery.
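The label selector mechanics are simple to model. This sketch, using made-up pod data in place of what the Kubernetes API would return, shows how a selector matches every pod whose labels contain all of the selector's key-value pairs.

```python
# A service's label selector matches pods whose labels are a superset
# of the selector. Pod data here is a hypothetical stand-in.
pods = [
    {"name": "orders-7f9c", "labels": {"app": "orders", "tier": "backend"}},
    {"name": "orders-b2d1", "labels": {"app": "orders", "tier": "backend"}},
    {"name": "web-55a8",    "labels": {"app": "web",    "tier": "frontend"}},
]

def select(selector: dict[str, str], pods: list[dict]) -> list[str]:
    """Return names of pods whose labels satisfy the selector."""
    return [p["name"] for p in pods
            if selector.items() <= p["labels"].items()]

print(select({"app": "orders"}, pods))  # ['orders-7f9c', 'orders-b2d1']
```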

In all these approaches, users can rely on a private address space to enable containers to discover application services delivered through other containers. Public address exposure takes place via a proxy or a load balancer for interactions that need to traverse the internet, for example.
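A load balancer's role in that exposure reduces to a few lines: one public endpoint, many private replicas. The round-robin sketch below uses hypothetical replica addresses.

```python
# One public endpoint spreading requests across private replicas
# in round-robin fashion. Addresses are hypothetical.
import itertools

replicas = [("10.0.5.14", 8000), ("10.0.5.15", 8000), ("10.0.6.90", 8000)]
next_backend = itertools.cycle(replicas).__next__

for request_id in range(5):
    host, port = next_backend()
    print(f"request {request_id} -> {host}:{port}")
```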

Third-party tools, such as API-based service brokers and load balancers, and distributed processing elements, like Istio, should be used in conjunction with the Docker or Kubernetes approaches described earlier because they link the services to the orchestration, deployment, redeployment and scaling processes. If an IT organization instead builds its own independent service discovery system, it will likely produce service APIs disconnected from the underlying containers.

It's critical that IT organizations load test container-hosted service strategies. Many teams build an application deployment on container hosting that works for a limited and easily tested set of situations, then send that application into production, where there are no such limits. Service discovery mechanisms and container scaling create independent yet compounding issues. Service discovery plans must extend from application design through deployment testing into production, or they'll eventually fail.
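A starting point for such testing can be as simple as the concurrent probe sketched below, aimed at a hypothetical health endpoint; failures that never appear in serial testing often surface once requests arrive in parallel during scale events.

```python
# Minimal load-test sketch: fire concurrent requests at an exposed
# endpoint and count failures. The URL is a hypothetical placeholder.
import concurrent.futures
import urllib.request

URL = "http://service.example.internal:8080/health"

def probe(_: int) -> bool:
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:          # covers URLError, HTTPError, timeouts
        return False

with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(probe, range(500)))

ok = sum(results)
print(f"{ok}/{len(results)} requests succeeded")
```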
