Master containerized microservices monitoring
Before IT teams can enjoy the benefits that containers and microservices bring, they must first tackle several monitoring hurdles.
As the use of containers has grown with the popularity of products such as Docker, more enterprises have adopted microservices to maximize the benefits of containerization. However, this not only changes how IT teams construct, provision and manage applications, it also requires a different approach to monitoring and reporting.
Physical applications and, to a greater extent, VMs were relatively insular entities to monitor. Each application was self-contained -- even when it consisted of clients accessing a server-based application that itself depended on a database. Virtualization created more dependencies along the overall application path, such as contention between applications for underlying resources. But an application was still definable, and monitoring efforts could focus on activity across that application.
Microservices are different. Rather than build an application as a single environment, IT architects create sets of loosely coupled functional services that together form a composite app. Microservices are meant to be more efficient than traditional infrastructure: each microservice runs on a shared set of underlying resources -- including OS resources -- and can be reused across multiple composite apps. As such, fewer overall resources are needed to run similar or identical functions, and updating a single microservice improves multiple processes simultaneously.
More efficiency, more to monitor
Because physical and VM environments have a dedicated function within an application, an expected performance baseline can be established rapidly. Any divergence from that baseline is easy to spot and can trigger events. For example, if the dedicated function suddenly requires 20% more network resources or 30% more storage, that indicates a problem, which can be flagged and resolved.
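The threshold logic above can be sketched as a simple check. The metric names and the 20%/30% limits mirror the example in the text, but the function itself is a hypothetical illustration, not the API of any particular monitoring product.

```python
# Sketch of a per-metric baseline divergence check (hypothetical helper,
# not a real monitoring product's API). Thresholds mirror the example:
# flag 20% extra network use or 30% extra storage use.

THRESHOLDS = {"network": 0.20, "storage": 0.30}

def divergences(baseline, current, thresholds=THRESHOLDS):
    """Return the metrics whose current value exceeds the baseline by
    more than the allowed fraction for that metric."""
    flagged = {}
    for metric, limit in thresholds.items():
        base = baseline[metric]
        delta = (current[metric] - base) / base
        if delta > limit:
            flagged[metric] = round(delta, 3)
    return flagged

# Example: network is 25% over baseline (flagged); storage is 10% over (fine).
print(divergences({"network": 100, "storage": 500},
                  {"network": 125, "storage": 550}))
# {'network': 0.25}
```

In a dedicated physical or VM environment, a fixed-threshold check like this is usually enough, because any divergence implies a fault.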
But monitoring containerized microservices requires asking different questions. For example, IT teams must determine the source of any baseline divergence. It could be caused by a change such as a newly provisioned composite app using the microservice, or by one of the multiple composite apps that use the function stressing the microservice because of a cyclical or special event.
Highly distributed applications require a monitoring system that works at a different level of granularity and better understands dependencies -- a system that can find the root cause of any issue and decide how best to deal with it.
Any chosen system must look at each microservice individually and also as part of a web of complex dependencies. The system must recognize when newly provisioned composite apps will use the microservice and realign its baseline to embrace the new use patterns. It must flex resources to meet the needs of the multiple processes calling on each microservice, yet also have the means to throttle any specific microservice, or even stop it entirely.
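One simple way to realign a baseline to embrace new use patterns is an exponentially weighted moving average, sketched below. The class name and the smoothing factor are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of a self-realigning baseline using an exponentially
# weighted moving average (EWMA). The class name and alpha value are
# illustrative assumptions, not part of any specific monitoring tool.

class AdaptiveBaseline:
    def __init__(self, initial, alpha=0.2):
        self.value = float(initial)  # current baseline estimate
        self.alpha = alpha           # weight given to new observations

    def update(self, observation):
        """Fold a new observation into the baseline, so a sustained new
        usage pattern (e.g. a newly provisioned composite app) shifts
        the baseline rather than alarming forever."""
        self.value = self.alpha * observation + (1 - self.alpha) * self.value
        return self.value

baseline = AdaptiveBaseline(100)
for _ in range(30):            # a new composite app doubles the load...
    baseline.update(200)
print(round(baseline.value))   # ...and the baseline converges toward 200
```

The trade-off is responsiveness versus stability: a larger alpha adapts faster to legitimate new usage but is also quicker to absorb a genuine fault into the baseline.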
Don't forget about containers
The system must also understand how containers work. Its tools must maintain patch levels and carry out updates across live containers. If a container fails, the system must detect why and recover from the failure.
Recovery might require a simple container reboot, or it might require something more complex. For instance, it could spin up a new version of the service on a different part of the platform to avoid any underlying failure of the physical resources that caused the initial issue.
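The restart-or-reschedule decision can be sketched as follows. The failure categories, function name and retry limit are hypothetical, chosen only to illustrate the two recovery paths described above.

```python
# Hypothetical sketch of the recovery decision described above: a simple
# in-container fault gets a restart, while a failure of the underlying
# host means the service should be respawned elsewhere on the platform.

def plan_recovery(failure_cause, restarts_on_host):
    """Return a recovery action for a failed container.

    failure_cause    -- 'app_crash' or 'host_fault' (illustrative labels)
    restarts_on_host -- restarts already attempted on the same host
    """
    if failure_cause == "host_fault" or restarts_on_host >= 3:
        # Underlying physical resources are suspect: move the service.
        return "reschedule_on_healthy_host"
    return "restart_in_place"

print(plan_recovery("app_crash", restarts_on_host=0))   # restart_in_place
print(plan_recovery("host_fault", restarts_on_host=0))  # reschedule_on_healthy_host
```

Treating repeated restarts on the same host as evidence of a deeper fault is one common heuristic; orchestration platforms implement far richer versions of this logic.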
Two separate systems cannot comfortably carry out such actions concurrently: there will be areas where each believes the other will resolve the problem, or where both try to fix -- and thus possibly exacerbate -- it. The system that manages the containers must understand how the microservices inside each container operate and what all the dependencies are. This ensures that the necessary dependencies move with containers in any migration, which maintains an accurate and functional operating environment.
This complex web of requirements demands a fully functional and tightly integrated container-microservices orchestration system. Within a DevOps environment, such systems are increasingly available. Important features include:
- rapid discovery of existing physical and virtual environments;
- dependency identification and mapping along a composite application workflow;
- the ability to embrace new microservices, containers and composite apps as they are provisioned;
- a highly functional dashboard with analytics and reporting;
- rapid and effective identification of any issue's root cause;
- the capacity to maintain the state of microservices and containers; and
- full lifecycle management, including patching of underlying resources, containers and microservices.
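Dependency mapping and root cause identification from the list above can be illustrated with a small dependency graph. The service names, edges and health data are invented for the sketch; a real system would discover these automatically.

```python
# Illustrative sketch of dependency-aware root cause analysis: walk a
# composite app's dependency graph and report the most downstream
# unhealthy service. Service names and edges are invented examples.

DEPENDS_ON = {                     # service -> microservices it calls
    "checkout": ["payments", "inventory"],
    "payments": ["auth"],
    "inventory": [],
    "auth": [],
}

def root_causes(service, healthy, graph=DEPENDS_ON):
    """Return the deepest unhealthy dependencies of `service`."""
    causes = set()
    for dep in graph.get(service, []):
        causes |= root_causes(dep, healthy, graph)
    if not causes and not healthy.get(service, True):
        causes.add(service)        # unhealthy with no unhealthy deps: root cause
    return causes

# 'checkout' and 'payments' both raise alarms, but 'auth' is the culprit.
health = {"checkout": False, "payments": False, "auth": False, "inventory": True}
print(root_causes("checkout", health))  # {'auth'}
```

The point of the traversal is alarm suppression: services that are unhealthy only because a dependency is unhealthy are not reported, so operators see one root cause instead of a cascade of symptoms.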
AI enters the ring
AI can help manage a containerized microservices environment. Most mapping and understanding is based on knowledge built up over time, which might be unsuitable for rapidly changing environments. Consequently, IT teams will need intelligent forecasting of the overall effects of any change on the production environment -- something that AI tools should address.
Additionally, tools built up around Docker, such as Swarm and Compose, provide orchestration capabilities. Kubernetes offers a solid set of functions for those with the time to learn how to best use the system. HashiCorp Nomad provides a wide range of capabilities to manage a container-microservices-based environment.
Microsoft Azure, AWS and Google Cloud provide their own systems -- some built on the tools above. Users can instead install their own software, which provides greater fidelity when managing a hybrid cloud environment.