

Microservices challenges include latency, but it can be beaten

When working with microservices, latency is the cost. Read about the weak points of microservices and the workarounds that address latency and quality-of-experience issues.

The adage "there's no such thing as a free lunch" should ring very true to those who might adopt microservices. This approach to application design aims to enhance reuse of components and promote scalability and availability. Unsurprisingly, more developers have adopted this approach.

But even emerging technologies that seem to come at little or no cost always carry a price. Many prospective adopters of the architectural style are unaware of its potential disadvantages, and they need to find out before committing.

With decentralization comes latency

To get a more comprehensive understanding of microservices challenges, it can be helpful to look at the approach as an extreme form of componentization, something developers have been using for decades. What's distinct is how far some developers take the concept.

An application built on components or microservices is first broken into deployable units, which are then connected over a network. Work passes over network connections rather than through direct program-level calls, as in older-style monolithic applications. While dividing the pieces among servers lets the application do more work in parallel, the fact that components run in several places at once means network delays will affect response time.

Microservices and component reuse also add another dimension to the problem of network latency and quality of experience. Considering that reusable components can be used by several applications at once, all applications need to be able to find them. Rather than use static addresses for components, like you'd probably do for components of a single application, you'll need to use some form of broker or directory to link applications to the microservices they are built to use. This API brokerage process is based on a middleware tool that also introduces delays.
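The brokerage step described above can be pictured as a directory lookup that happens before the real service call. Below is a minimal in-memory sketch, with hypothetical service names and addresses; real deployments would use a dedicated discovery tool, but the extra hop -- and its latency -- is the same.

```python
import random

class ServiceRegistry:
    """Minimal in-memory directory mapping service names to instance
    addresses. A hypothetical sketch of the broker/directory role."""

    def __init__(self):
        self._instances = {}  # service name -> list of addresses

    def register(self, name, address):
        self._instances.setdefault(name, []).append(address)

    def lookup(self, name):
        # Each lookup is an extra network hop before the real call --
        # the source of the brokerage delay discussed above.
        instances = self._instances.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        return random.choice(instances)

registry = ServiceRegistry()
registry.register("pricing", "10.0.0.5:8080")
registry.register("pricing", "10.0.1.9:8080")
print(registry.lookup("pricing"))
```

Because every caller goes through `lookup` first, the registry itself becomes part of the latency budget for each request.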

Controlling this latency or delay for scalability is yet another of microservices' challenges. Scaling introduces new component instances of a microservice, in new locations. It also requires load balancing, which can centralize the process of distributing work among those instances. Yet, having load balancing in one place -- while instances of microservices pop up in random locations -- is a formula for unpredictable latency.
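One way to keep that latency predictable is to make the load balancer latency-aware rather than purely round-robin. The sketch below assumes the balancer keeps a rolling round-trip-time estimate per instance (the addresses and numbers are invented for illustration) and simply routes to the fastest one.

```python
def pick_instance(instances):
    """Choose the instance with the lowest recent latency estimate.
    `instances` maps address -> rolling average round-trip time in ms."""
    return min(instances, key=instances.get)

# Hypothetical latency observations for three scattered instances.
observed = {"us-east:9000": 48.0, "us-west:9000": 12.0, "eu-west:9000": 95.0}
print(pick_instance(observed))  # the closest/fastest instance wins
```

A real balancer would decay or window these estimates so a temporarily slow instance can recover, but the principle -- routing on measured delay, not instance count -- is the same.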


Workarounds to microservices challenges

If latency is a problem, be sure to avoid overcomponentizing. You can start by carefully examining workflows.

Keep an eye out for situations in which two or more components always occur together along a workflow. That indicates the individual parts are not actually independent and would have to scale together to be useful. In these cases, don't separate the services; violating this rule merely to enhance composability isn't a good idea.

While you're reviewing the workflows within your componentization strategy, also think about the role component hosting location plays in latency. An application that uses multiple components will perform better if the components' placement minimizes network delay along the workflow. For instance, you don't want to host a component in New York when most of the time you'll be invoking it from an application running on the West Coast. Again, it's component reuse that can get you into trouble, because the main part of an application that uses a component might run in different locations.

Ensuring shared components are better connected with the other locations in which an application is hosted may remedy latency issues. That can mean a location geographically central to your data center sites or with faster network connections. Another approach may be to run multiple copies of the same component in different locations and assign components based in part on how efficient it will make the workflow of the assigned component. That adds a bit to the complexity of the load balancing, but it could be worth it.
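The multiple-copies approach amounts to choosing, per caller, the copy that minimizes network delay along the workflow. Here's a toy sketch of that assignment, assuming a hypothetical table of measured delays between application sites and component copies:

```python
# Hypothetical per-pair network delays in milliseconds.
DELAY_MS = {
    ("west-app", "west-copy"): 5,
    ("west-app", "east-copy"): 70,
    ("east-app", "west-copy"): 70,
    ("east-app", "east-copy"): 5,
}

def assign_copy(caller, copies):
    """Pick the component copy with the smallest delay from the caller."""
    return min(copies, key=lambda copy: DELAY_MS[(caller, copy)])

print(assign_copy("west-app", ["west-copy", "east-copy"]))  # -> west-copy
```

This is the "bit of extra complexity" in the load balancer: it now needs delay data per caller location, not just a list of healthy instances.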

The next step is to improve the network connections between the places microservices run. Within the data center, moving from traditional switching to a nonblocking fabric model improves what's called horizontal connectivity -- application-to-application or intercomponent traffic flows. The more microservices you use, the more likely it is that this move will improve latency without constraining your scalability goals.

Load balancing and API brokerage are other factors that can control latency to facilitate practical scaling strategies. The key point here is to make sure you have true stateless microservices whenever possible, not just microservices that have back-end state control. With any form of stateful behavior, it's necessary to retrieve state information, which can cause a considerable delay. If you do decide to implement a form of back-end state control for microservices, pay particular attention to where you store the state information. Make sure that you don't have long network paths to state data when scaling by ensuring all instances are local to the state repository, not spread over multiple data centers.
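What "truly stateless" means in practice is that everything a request needs arrives with the request, so any instance can serve it without a round trip to a state store. A minimal sketch, with an invented order format and coupon code:

```python
def apply_discount(order):
    """Stateless handler: all required data arrives in the request itself,
    so any instance can serve it with no state lookup (and no extra hop)."""
    total = sum(item["price"] * item["qty"] for item in order["items"])
    if order.get("coupon") == "SAVE10":
        total *= 0.9  # hypothetical 10% coupon
    return round(total, 2)

request = {"items": [{"price": 20.0, "qty": 2}], "coupon": "SAVE10"}
print(apply_discount(request))  # 36.0
```

Contrast this with a stateful design, where the handler would first have to fetch session or order state over the network before it could compute anything.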

If you perform database operations from microservices, you may not want to scale those services at all. Alternatively, you might need a distributed database model that lets each microservice instance access data from a local resource. Distributed transaction processing can keep parallel databases in sync. Plus, if there are many more inquiries than updates -- which is usually the case -- you can minimize delays by keeping your databases local to the microservices that access them.
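The read-local, write-everywhere pattern described above can be sketched in a few lines. This is a toy stand-in for real distributed transaction processing (the replica class and key names are invented): reads hit the local copy, while writes are applied to every replica before being acknowledged.

```python
class LocalReplica:
    """Hypothetical local database replica; reads never leave the site."""

    def __init__(self):
        self.data = {}

    def read(self, key):
        return self.data.get(key)

replicas = [LocalReplica(), LocalReplica()]  # one per data center

def write_everywhere(key, value):
    # Stand-in for distributed transaction processing: apply the
    # update to all replicas so the parallel copies stay in sync.
    for replica in replicas:
        replica.data[key] = value

write_everywhere("sku-42", {"stock": 7})
print(replicas[0].read("sku-42"))  # each site now reads locally
```

Because inquiries typically far outnumber updates, the cost of the slow, everywhere write is paid rarely, while the cheap local read is paid on every request.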

When innovations like microservices come along, it's very easy to be enticed by all the new processes, designs and features they might enable. Prudent application developers, however, should be mindful that even the most impressive tech can have consequential tradeoffs. That is true of microservices, just as it would be for any other approach.

When implemented heedlessly, microservices can introduce delays so profound that they can threaten the application's overall ability to support your workers. As always, following workflows and tracking accumulated delays must be a part of microservices planning.
