

Implementing microservices architecture best practices

Addressing the performance issues of microservice architectures can be quite challenging. Yet, using the right tools or practices at the right time and place will give you a boost.

Addressing how microservices affect an application's performance can be complex -- if not daunting at first. To make the task more approachable, however, architects can break it down into distinct phases -- monitoring, performance analysis and performance tuning -- and tackle each one individually.

The following tips cover many different parts of an application, along with the most effective tools for managing and designing microservices. The list should help teams with the patterns for monitoring, managing and optimizing microservices performance -- and, in the process, make microservices architecture best practices fairly intuitive.

What are some basics for monitoring microservices?

Trying to enhance microservices' performance without first knowing how to properly track their effectiveness is like adding a full cup of milk to a cake mix before realizing it only called for half.

You will get more out of microservices by deploying monitoring tools like Amazon CloudWatch. Our technology editor, Stephen Bigelow, recommended setting them to track specific component statuses or events. That way, predetermined conditions, or rule sets, can automatically trigger set responses.
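One way to set up that kind of rule-driven monitoring is to define a CloudWatch alarm that fires an automated response when a service metric breaches a threshold. The sketch below builds the parameters for such an alarm; the namespace, metric name and SNS topic ARN are hypothetical placeholders, and in real use you would pass the result to boto3's `put_metric_alarm`.

```python
# Sketch: a CloudWatch alarm definition that triggers a set response
# (here, an SNS notification) when a service's error rate stays high.
# Namespace, metric and ARN values below are illustrative assumptions.
def build_error_rate_alarm(service: str, threshold: float) -> dict:
    """Return keyword arguments for cloudwatch.put_metric_alarm()."""
    return {
        "AlarmName": f"{service}-high-error-rate",
        "Namespace": "MyApp/Microservices",      # hypothetical namespace
        "MetricName": "ErrorRate",
        "Dimensions": [{"Name": "Service", "Value": service}],
        "Statistic": "Average",
        "Period": 60,                            # evaluate every minute
        "EvaluationPeriods": 3,                  # three bad minutes in a row
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-page"],
    }

params = build_error_rate_alarm("checkout", 5.0)
# In real use: boto3.client("cloudwatch").put_metric_alarm(**params)
```

Requiring several consecutive evaluation periods, as above, keeps a single noisy data point from paging anyone.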

How microservices and cloud application performance fit together

Once you're familiar with the tools for monitoring microservices' performance, pay particular attention to the aspects of your application where microservices are prone to becoming impediments. It's also important to consider the common ways in which they affect susceptible processes or components.

Ignoring microservices architecture best practices, for instance, can diminish an application's quality of experience (QoE) and complicate service discovery. Additionally, because componentization is intrinsic to microservices, the approach introduces more delays and network bindings that architects need to compensate for or minimize.

TechTarget contributor Tom Nolle laid out how to avoid these pitfalls in cloud application development. He noted that developers who want to benefit from microservices must architect their applications to address the approach's QoE and network-related shortcomings. To be properly prepared, teams have to address these flaws at every development stage: during an application's design, at its initial deployment and whenever its workflow or structure changes.


How to keep microservices' performance from going south

By now, the performance tradeoffs of using microservices -- and how a team can monitor them -- are hopefully apparent.

Such an endeavor can feel like accounting for and examining an exhaustive number of an application's aspects. But the very nature of microservices increases the number of components, and the combinations in which those modular services interact, so thoroughness is necessary to ensure everything within the complex architecture functions.

This is particularly true of the forms -- and scope -- that microservices performance analysis should take. Developers need to engage in both granular telemetry of individual microservices and applicationwide monitoring. The former predominantly identifies internal problems, while the latter discerns problems from the user's perspective.
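The two levels of visibility can be sketched with a toy rollup: each service records its own request latencies (the granular view), while an aggregate over all services approximates what the application looks like as a whole. The service names and latency figures are invented for illustration; real systems would use a telemetry stack rather than in-memory lists.

```python
from collections import defaultdict

# Sketch: granular per-service telemetry plus an applicationwide rollup.
# Service names and latencies are illustrative, not measurements.
per_service = defaultdict(list)

def record(service: str, latency_ms: float) -> None:
    """Record one request's latency under its owning service."""
    per_service[service].append(latency_ms)

for svc, ms in [("auth", 12), ("auth", 18), ("cart", 95), ("cart", 105)]:
    record(svc, ms)

# Granular view: pinpoints which service is slow internally.
service_avgs = {s: sum(v) / len(v) for s, v in per_service.items()}

# Applicationwide view: the average a user-facing monitor would report.
all_samples = [ms for v in per_service.values() for ms in v]
overall_avg = sum(all_samples) / len(all_samples)
```

Here the rollup alone would mask that `cart` is roughly seven times slower than `auth` -- which is exactly why both views are needed.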

Effective performance management also depends on collecting as much data as possible: The more data, the more complete the picture developers have. With terabytes of incoming information, however, you will need tools like Loggly, Splunk or Sumo Logic to pore over all that raw data and extract insight from it.
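Those tools get far more out of logs that arrive as structured records rather than free text. A minimal sketch, using Python's standard logging module: emit each log line as JSON with a trace ID threaded through every hop of a request, so the aggregator can index fields and correlate events across services. The field names (`service`, `trace_id`) are illustrative conventions, not a standard.

```python
import json
import logging
import sys
import uuid

# Sketch: structured JSON logging so aggregators like Loggly, Splunk
# or the ELK Stack can index fields and correlate events by trace ID.
class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": round(record.created, 3),
            "level": record.levelname,
            "service": getattr(record, "service", "unknown"),
            "trace_id": getattr(record, "trace_id", None),
            "msg": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)

# One ID generated at the edge and passed along every downstream call.
trace = uuid.uuid4().hex
log.info("order received", extra={"service": "orders", "trace_id": trace})
```

Searching the aggregated pool for one `trace_id` then reconstructs a request's full path, even when log files from different services aren't chronologically aligned.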

Troubleshooting microservices' performance problems


Isolating and debugging performance problems is inherently harder in microservice-based applications because of their more complex architecture. Therefore, productively managing microservices' performance calls for having a full-fledged troubleshooting plan.

In this follow-up article, Kurt Marko elaborated on what goes into successful performance analysis. Effective examples of the practice incorporate data on metrics, logs and external events. To get the most out of tools like Loggly, Splunk or Sumo Logic, aggregate all of this information into one unified data pool. You might also consider a tool built on the open source ELK Stack; Elasticsearch can greatly assist troubleshooters in identifying and correlating events, especially when log files don't display the pertinent details chronologically.

The techniques and automation tools used for conventional monolithic applications aren't necessarily well suited to isolating and solving microservices' performance problems. In his follow-up piece, Marko listed several microservices architecture best practices, as well as when and how to use them. These tools and techniques include application performance management, network analysis and management software, synthetic transaction monitoring, and the building of APIs and incident response checklists.
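Synthetic transaction monitoring, for instance, amounts to periodically running a scripted "transaction" against a service and checking its latency and success against a budget. A minimal sketch, where `probe` stands in for a real HTTP call and the 250 ms budget is an illustrative service-level target:

```python
import time

# Sketch of a synthetic transaction check: run a scripted probe and
# compare its latency to a budget. The budget value is illustrative.
def run_synthetic_check(probe, budget_ms: float) -> dict:
    """Execute one probe and report success, latency and budget status."""
    start = time.perf_counter()
    ok = True
    try:
        probe()                 # in real use: an HTTP call exercising the service
    except Exception:
        ok = False              # any exception counts as a failed transaction
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "ok": ok,
        "elapsed_ms": elapsed_ms,
        "within_budget": ok and elapsed_ms <= budget_ms,
    }

# Simulate a 10 ms transaction against a 250 ms budget.
result = run_synthetic_check(lambda: time.sleep(0.01), budget_ms=250)
```

Run on a schedule from several locations, such probes catch user-facing regressions before real users report them.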


Techniques for optimizing microservice performance in the cloud

Many developers bind a unique copy of a microservice -- combined with other logic -- into an application, forming a deployment unit. In cases where a copy of a microservice is instead shared among applications or components, however, performance tuning -- optimization -- needs serious consideration, because that approach's performance is greatly affected by where you host the application components that share the microservice.

This arrangement, known as the combined application topology, is often contingent on how hosting facilities are geographically distributed. As Nolle noted, whether the dispersion is wide or narrow determines how you need to pursue microservice performance optimization. Modeling or testing can help you gauge whether placing a hosting facility somewhere -- say, in a geographically faraway location -- will affect the application's QoE. If either shows that microservice placement has a negative effect, consider enhancing connection performance or narrowing your placement options.
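A back-of-the-envelope model makes the placement tradeoff concrete: if a request crosses a shared microservice several times, each hop pays the network round-trip time, so moving that service to a distant region multiplies the penalty. The hop counts and RTT figures below are illustrative assumptions, not measurements.

```python
# Sketch: a simple latency-budget model for microservice placement.
# Total response time = hops * network round-trip time + service time.
# All figures below are illustrative, not measured values.
def response_time_ms(hops: int, rtt_ms: float, service_ms: float) -> float:
    """Estimate end-to-end latency for one user request."""
    return hops * rtt_ms + service_ms

# Same-region placement: ~2 ms RTT per hop.
same_region = response_time_ms(hops=4, rtt_ms=2, service_ms=80)

# Cross-region placement: ~70 ms RTT per hop for the same workflow.
cross_region = response_time_ms(hops=4, rtt_ms=70, service_ms=80)
```

Even with identical compute time, the cross-region figure is roughly four times worse -- which is the kind of result that argues for either faster connections or narrower placement.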

Explore and consider both options, and pay particular attention to whether either improvement in your case would justify its respective costs. Faster infrastructure or more expensive practices will not always be the way to get the microservice performance boosts you need.
