

Factor performance into an application modernization strategy

Dev teams increasingly use cloud, containers and microservices to modernize apps, but these technologies bring little value unless they boost performance, and that boost is far from guaranteed.

Some enterprises think all you need to do to modernize legacy applications is redeploy them to the cloud or move them to a container. But it's rarely that simple. Cloud migrations and other approaches to modernization don't automatically translate into what matters most to the business: better application performance.

The goal of modernization is to move an application from a legacy deployment model to one that is more nimble, but that doesn't guarantee better performance. And, after all, no one is going to choose to do business with you simply because you host your applications in the cloud or use containers. Customers and end users don't care about that -- they care about how well the applications run.

If an application suffers from underlying performance problems, such as memory leaks or network bottlenecks, these problems will persist after it moves to a modern environment.

To be sure, in some cases, modern environments may lead to better performance because modern hosting strategies often remove components like hypervisors, leaving more resources available to applications. However, that's not necessarily the case; sometimes, the additional layers in modern hosting stacks, like orchestrators and service meshes, mean that apps perform worse in modern environments than they would in conventional ones.

To ensure optimal performance, developers need to understand the nuances of four common application modernization strategies: cloud migration, containerization, microservices adoption and automation.

Cloud migration

When you move an application to the cloud, you will likely see an immediate performance boost, as hosting resources are available on demand and virtually without limit. As a result, an application that suffers from a memory leak might run faster in the cloud, where it can keep consuming additional memory. Judged by end users' experience and metrics such as average response time, the app might appear to perform better in the cloud.

But that perception is deceptive. To refer back to the memory leak example, that issue won't go away on its own, and you pay more for the extra memory resources that the app consumes in the cloud. In this case, cloud migration doesn't solve the performance problem; it just slaps an expensive bandage on it and leaves you with ever-increasing technical debt.

If your application modernization strategy centers on the cloud, don't confuse metrics like response time with performance. Pay attention to cloud costs, which largely reflect how efficiently the app uses resources. If you spend more on cloud resources for your app than you would to maintain it on premises, that's a sign the app has underlying performance issues.


Containerization

Containers deploy an app in an isolated, lightweight virtual environment without the overhead of traditional VMs running on hypervisors, which leaves more resources available for app use.

This means a poorly performing app might run a little faster inside a container than inside a VM, as the container host server has more resources it can expend on the app. But, ultimately, you still waste valuable system resources if the app's design is inefficient.

What's more, containers can create new performance challenges. Poorly configured resource limits may deprive containerized apps of the resources they need to run efficiently. Or you may assign a container to a node that lacks sufficient resources to support it. If you don't manage risks like these, you may find that your app performs worse inside a container than it did when running directly on a server.
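As an illustrative sketch, in Kubernetes these limits are set per container in the Pod spec. The names, image and values below are hypothetical, not recommendations; real values should be based on observed usage:

```yaml
# Illustrative Pod spec: requests/limits set too low can throttle or
# OOM-kill the app, so base them on measurement, not guesses.
apiVersion: v1
kind: Pod
metadata:
  name: example-app              # hypothetical name
spec:
  containers:
  - name: app
    image: example.com/app:1.0   # hypothetical image
    resources:
      requests:
        cpu: "250m"              # the scheduler uses this to pick a node
        memory: "256Mi"
      limits:
        cpu: "500m"              # CPU use beyond this is throttled
        memory: "512Mi"          # exceeding this gets the container killed
```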


Microservices adoption

If you refactor a monolithic application to run as microservices in a loosely coupled architecture, the app can, in theory, take advantage of greater and more finely controlled scalability, resilience against security intrusions and more seamless updates.

Yet, when you move a legacy app to a microservices architecture, some thorny performance risks can arise. Unlike monolithic apps, microservices typically rely on the network to communicate with each other via APIs. Network problems or poorly written APIs can quickly degrade performance. Think carefully about where you host each microservice and how you optimize communication between them to improve performance.
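One concrete mitigation is to put a hard timeout on every inter-service call, so a single slow dependency can't stall the whole request path. A minimal sketch, using Python's standard library and a hypothetical internal service URL:

```python
import urllib.error
import urllib.request

def fetch_inventory(item_id: str,
                    base_url: str = "http://inventory.internal",
                    timeout_s: float = 0.5):
    """Call a downstream microservice with a hard timeout.

    The base URL is a hypothetical internal endpoint. Without a
    timeout, one slow dependency blocks every caller upstream of it;
    with one, the caller can fail fast or fall back.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/items/{item_id}",
                                    timeout=timeout_s) as resp:
            return resp.read()
    except (urllib.error.URLError, TimeoutError, OSError):
        return None  # caller decides: cached value, default, or error
```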

Figure: comparison of monolithic and microservices architectures.

Additionally, the complexity of managing microservices-based apps often leads to the deployment of additional tools -- such as service meshes to help manage microservices communications and orchestrators to manage microservices across clusters of servers -- within the hosting stack. Those additional tools consume resources, which may lead to poorer app performance because fewer resources are available to workloads.

In addition, those tools must themselves be managed, which increases the skills and effort required of staff. The added complexity and ownership questions can slow CD processes and delay innovation, which inevitably raises questions about the business value of the modernization strategy.



Automation

Simply deploying applications into modern environments is unlikely to result in major performance gains unless you also automate the management of those applications. Applications typically struggle to consume the optimal level of resources without automated processes in place for scaling resource allocations.

For example, if you deploy containerized apps on a Kubernetes cluster, it's likely that the total resource requirements of your workloads will fluctuate over time. If you rely on manual processes to change the resource allocations for your Pods, you may not be able to update resource assignments quickly enough to ensure that the applications have the resources they need to perform optimally.

If you configure autoscaling for your Pods, applications automatically receive the resources they need to perform at their best. You also avoid wasting money by allocating excess resources to Pods during times of decreased demand.
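As a sketch of what that looks like in Kubernetes, a HorizontalPodAutoscaler can adjust replica counts automatically. The target names and thresholds below are hypothetical:

```yaml
# Illustrative HorizontalPodAutoscaler: scales a Deployment between
# 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app-hpa       # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app         # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add Pods when average CPU exceeds 70%
```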

Key takeaway

Modern technologies can offer great opportunities for improving workload performance, but they're no panacea. When used the wrong way, microservices, containers and other modern solutions may have a negative overall impact on performance. They also tend to increase the complexity of environment management.

These examples hint at why an enterprise might stick with a monolithic app over microservices. If you don't have the resources to manage a microservices architecture, or if the added overhead of modern environments leads to worse overall application performance, there's no shame in sticking with monoliths and on-premises environments. In fact, as of 2023, 80% of organizations planned to repatriate at least some of their cloud workloads, according to IDC -- an indicator of the challenges many businesses face when they move apps to modern environments but don't achieve the performance results they hoped for.
