The pros, cons and challenges of containerized microservices

Containerized microservices are certainly a popular deployment approach, but are they the best? They can be, provided you overcome the unique challenges they present.

You don't necessarily need to use containers to deploy microservices, nor are you obligated to adopt microservices to use containers. Nonetheless, microservices and containers often go hand-in-hand, thanks to their mutually beneficial relationship.

However, fusing microservices and containers comes at a cost. Containers make microservices easier to develop and operate, especially when developers optimize their approach to containerized microservices management. But containers also complicate a microservices-based environment when it comes to things like server provisioning and data storage. For that reason, it's critical to develop a comprehensive strategy for working efficiently with containerized microservices before you build and deploy microservices inside containers.

Pros and cons of containerized microservices

As the name implies, creating containerized microservices means deploying individual application services within an abstracted container unit. Of course, containers aren't the only option: You could also deploy instances of microservices as packages of serverless functions, inside a traditional VM, or even on a single host.

However, for now, containers are the go-to deployment method for microservices. There are many reasons for this, but some of the biggest ones include:

  • Isolation. Containers let developers isolate each microservice inside a software-defined environment. In turn, containers make it easier to set limits on each microservice's resource consumption and prevent potential noisy neighbor issues, as shown in the sketch after this list.
  • Efficiency. Because containers carry lower resource overhead, large collections of microservices tend to run more efficiently in containers than on traditional VMs or directly on a single host.
  • Parity. For the most part, a microservice hosted inside a container will operate in a consistent manner, regardless of the underlying host environment configurations. This reduces the number of variables developers need to contend with during the testing and deployment stages.
  • Scaling. It's possible to scale containerized microservices simply by adding more container instances, mitigating the complexity of scaling microservices in response to dynamic shifts in application demand.
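
To make the isolation point concrete, here's a minimal sketch using the Docker SDK for Python. The image name, container name and limit values are illustrative assumptions, not recommendations from any specific project:

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running).
client = docker.from_env()

# Run a hypothetical microservice image with hard resource limits so a
# misbehaving service can't starve its neighbors on the same host.
container = client.containers.run(
    "example/inventory-service:1.0",   # illustrative image name
    detach=True,
    name="inventory-service",
    mem_limit="256m",                  # cap memory at 256 MiB
    nano_cpus=500_000_000,             # cap CPU at 0.5 cores
    ports={"8080/tcp": 8080},          # map container port 8080 to the host
)

print(container.short_id, container.status)
```

Scaling this service up is then a matter of starting more instances of the same image, typically through an orchestrator rather than by hand.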

Unfortunately, containers also come at a cost. The additional layer of abstraction that containers introduce imposes new management complexity. While these challenges can be addressed with any number of the open source and proprietary container management tools available today, you'll still need to familiarize yourself with those tools and learn how to implement them effectively.

For one, you must ensure that host servers are appropriately provisioned with the necessary container runtimes. In many cases, accomplishing this requires you to implement a container orchestration system like Kubernetes. You'll also need to provision networking and storage for each service, which is much easier to do when deploying directly onto a host than with independent containers.
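
As a quick sanity check before scheduling services onto a host, you can confirm that its container runtime is provisioned and reachable. The sketch below uses the Docker SDK for Python; an orchestrator like Kubernetes automates checks like this across the cluster:

```python
import docker

# Verify that the host's container runtime is up before deploying services.
client = docker.from_env()
client.ping()  # raises an exception if the daemon is unreachable
print("Runtime API version:", client.version()["ApiVersion"])
```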

In most cases, the benefits of containerized microservices far outweigh the challenges. However, if your application is relatively simple and only includes a small number of services, you'll need to ask yourself if the increased complexity of container deployment will bring about an unnecessary management headache.

Five key considerations for container-based microservices

Let's break down some of the critical elements you'll need to manage carefully when adopting containerized microservices. While you've likely encountered these concerns in other application projects, several factors add unique challenges to container management.

Here are five of those top concerns:

  1. Container runtimes. Containerized microservices are easier to manage with a complete set of configuration management tools than with container runtimes deployed on their own. The average container runtime executes containers effectively but does little to help you manage the containers themselves. A variety of runtimes are available from different providers, and most work the same way, provided they conform to the specifications of the Open Container Initiative.
  2. Service orchestration. When working with a sizable number of containers, it's important to use orchestration tools that automate operational tasks, such as distributing containers across a cluster of servers. While Kubernetes is typically the go-to container orchestrator, particularly for those using Docker, there are also container management platforms designed for specialized cases. For example, Amazon Elastic Container Service and Red Hat OpenShift are two proprietary options armed with enterprise-level features like large-scale workflow automation and integrated CI/CD pipelines. A brief Kubernetes Deployment sketch follows this list.
  3. Persistent storage. Because container data typically disappears once the instance shuts down, services that rely on persistent data need an external storage mechanism. Data storage for containers is typically handled by an orchestrator component, so it's important to ensure the external storage component is compatible with the orchestrator in use. Luckily, Kubernetes supports dozens of storage products via the Container Storage Interface; see the PersistentVolumeClaim sketch after this list.
  4. Networking. Although they are deployed independently, the microservices that live in containers still need to communicate with each other. This is often handled with the help of a service mesh, which manages requests between microservices using an abstracted proxy component. If those services interact with external endpoints, you'll also need some type of communication portal that can verify and relay requests from outside components, such as an API gateway. Finally, in high-traffic scenarios, you'll need a load balancer that prevents overloads by distributing requests across multiple container instances; a Service sketch follows this list.
  5. Security. Although microservices often require access to back-end resources, avoid running containers in privileged mode: doing so grants them direct access to the host's root capabilities and could expose the kernel and other sensitive system components. Developers should also set security context definitions and network policies that prevent unauthorized access to containers and underlying systems. Finally, it's important to implement audit tools that verify container configurations meet security requirements, as well as container image scanners that automatically detect potential security exposures. A hardened security context sketch closes out the examples below.
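
To ground the orchestration point, here's a minimal sketch that uses the official Kubernetes Python client to declare a three-replica Deployment. The names, labels and image are illustrative assumptions; raising `replicas` is also how the scaling benefit described earlier plays out in practice:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes access to a cluster).
config.load_kube_config()

# Declare a Deployment that keeps three replicas of a hypothetical
# microservice running; the orchestrator handles placement and restarts.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="inventory-service"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scale out by raising this count
        selector=client.V1LabelSelector(match_labels={"app": "inventory"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inventory"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="inventory",
                        image="example/inventory-service:1.0",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```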
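
For persistent storage, the sketch below requests storage through a PersistentVolumeClaim, which Kubernetes fulfills via a CSI-backed storage class. The claim name, size and storage class name are assumptions; a pod then mounts the claim as a volume rather than writing to its own ephemeral file system:

```python
from kubernetes import client, config

config.load_kube_config()

# Ask the orchestrator for storage instead of relying on the container's
# ephemeral file system; the cluster fulfills the claim via a CSI driver.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="inventory-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard",  # illustrative storage class name
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```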
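
On the networking side, a Kubernetes Service gives the replicas one stable endpoint and spreads incoming requests across them; a service mesh or API gateway would layer on top of this. Again, the names and ports are illustrative:

```python
from kubernetes import client, config

config.load_kube_config()

# Expose the replicas behind a single stable endpoint; a Service of type
# LoadBalancer distributes incoming requests across the matching pods.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="inventory-service"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "inventory"},  # matches the Deployment's pod labels
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```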
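
Finally, for security, a container spec can explicitly drop privileged access and force a non-root user. This is a minimal sketch of a hardened security context that would plug into the Deployment's pod spec above, not a complete policy:

```python
from kubernetes import client

# A container spec that refuses privileged access and forces a non-root
# user, so a compromised service can't reach the host's kernel-level
# capabilities. Values are illustrative defaults.
secure_container = client.V1Container(
    name="inventory",
    image="example/inventory-service:1.0",
    security_context=client.V1SecurityContext(
        privileged=False,                  # never grant host root capabilities
        allow_privilege_escalation=False,
        run_as_non_root=True,
        read_only_root_filesystem=True,
        capabilities=client.V1Capabilities(drop=["ALL"]),
    ),
)
```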

Despite these challenges, containers still offer the easiest way to deploy functional microservices into dynamic production environments, especially in large-scale scenarios. While some aspects of containerized microservices present more management complexity than other methods, such as VM deployment, a wide array of vendor-provided and open source tools can mitigate that complexity. On top of that, most container-centric orchestration tools, storage plugins and security scanners provide a degree of automation that eliminates much of the work required to manage containers at scale.
