

5 ways to survive the challenges of monolithic architectures

Those unable to make the jump to microservices still need a way to improve architectural reliability. Here are five ways software teams can improve a monolith's reliability.

The most direct way to improve the reliability of a monolithic, single-tier architecture is to redesign it into an ecosystem of properly segmented and decoupled microservices. Unfortunately, a migration to microservices isn't always feasible, be it due to budget constraints, resource requirements, staffing challenges or any number of other limiting factors.

Fortunately, it's still possible to enhance the reliability of a monolith and the applications within it without a full commitment to microservices. In this article, we'll examine the biggest challenges of monolithic architecture management, as well as five techniques architects and their team members can use to increase the uptime and performance of a monolith without immediately resorting to heavy application rewrites, rebuilds or refactoring.

Monoliths and reliability

Monoliths are certainly capable of hosting many types of sophisticated applications. However, things like heavy coupling between components, intensive update processes and the potential for cascading failures severely limit a monolith's ability to support the intensive management needed to keep up in today's software application markets.

For one, monolithic applications usually run as a single process on a single server, which means a failure at any layer of the hosting stack could cause one or more applications to crash entirely. This is less likely to happen with microservices-based applications, where the failure of one service doesn't typically bring down every other service alongside it.

Compared to microservices, monoliths also struggle to quickly scale up or down in response to shifts in resource demands and requests for data. The scaling process in a monolith might require updates to the architecture's overall code, which can mean significant time spent remodeling tightly coupled services with large numbers of direct dependencies.

Barriers to microservices

Unfortunately, it's not always possible for software teams to remedy the challenges of monolithic architecture management through a transformation to microservices. There are countless barriers that may get in the way, but these are some of the most prominent:

  • Staffing and expertise. A migration to microservices might require substantial amounts of coding or application rebuilds -- not all development teams have the required expertise on staff.
  • Stress on operations teams. Microservices require operations teams to support a new range of application management and orchestration tools, and some might not be able to adapt quickly enough.
  • Impact on usability. Newly refactored applications must be deployed in place of the monolithic applications they're meant to replace, which can disrupt business operations or the user experience.
  • Data management challenges. Microservices often require new data management techniques, like the ability to set up shared storage between microservices, which may come at a cost that outweighs the financial benefits of a migration.

Bolstering a monolith's reliability

In spite of the challenges associated with managing monolithic architectures, it's certainly possible to optimize the reliability of a monolith without a complete overhaul. The way a team manages application workloads, service requests, component changes, versioning and hosting has a particularly strong effect on the stability and performance of monolithic systems.

Below are five straightforward techniques architects and their team members can put into action when total refactoring isn't an option.

Load balancing

Software teams will often run multiple instances of a monolithic application by, for example, hosting the same application on multiple servers. This requires careful attention to load balancing configurations in order to maintain reliability and uptime. By configuring load balancing processes to distribute requests evenly across service instances, both development and operations teams can minimize the risk of letting dormant instances sit unused while neighboring instances are flooded with failure-inducing loads of traffic.

Even better is that load balancing tools often include built-in monitoring capabilities that can detect failed application instances and automatically direct requests to other available instances. SolarWinds, Nginx and Incapsula are just a few examples of providers that tout this capability as one of their platform's primary strengths.
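To make the idea concrete, here is a minimal sketch of a health-check-aware round-robin balancer in Python. The class name, instance addresses and the `mark_down`/`mark_up` hooks are hypothetical illustrations, not part of any of the products mentioned above; a real deployment would use a dedicated load balancer rather than application code like this.

```python
import itertools

class RoundRobinBalancer:
    """Distributes requests evenly, skipping instances marked unhealthy."""

    def __init__(self, instances):
        self.instances = instances           # e.g. ["10.0.0.1", "10.0.0.2"]
        self.healthy = set(instances)        # updated by an external health check
        self._cycle = itertools.cycle(instances)

    def mark_down(self, instance):
        """Called by monitoring when an instance fails its health check."""
        self.healthy.discard(instance)

    def mark_up(self, instance):
        """Called when a failed instance recovers."""
        self.healthy.add(instance)

    def next_instance(self):
        # Advance the rotation until a healthy instance is found.
        for _ in range(len(self.instances)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy instances available")
```

For example, with instances `["a", "b", "c"]` and `"b"` marked down, successive calls to `next_instance()` alternate between `"a"` and `"c"`, so traffic keeps flowing even while one copy of the monolith is offline.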

Traffic filtering

In addition to distributing workloads, it's wise to also make use of traffic filtering capabilities to detect and block application traffic that could trigger a failure. Malicious requests, like connections from a botnet executing a DDoS attack, can cause chaos-inducing disruptions and failures across a monolith. On top of that, legitimate requests that manage to trigger errors will waste hosting resources and undercut overall application performance.

Traffic filtering is often included as part of larger load balancing platforms, but it's also included as part of firewall systems -- depending on how the hosting stack is set up. Either way, blocking problematic traffic will help a team ensure that their monolith makes the best use of available resources and can avoid long periods of downtime.
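As an illustration of the filtering concept, the sketch below implements a simple sliding-window rate limiter in Python that rejects clients sending flood-like request volumes. The class name and thresholds are assumptions for demonstration; production setups would normally rely on a firewall or load balancer's built-in filtering instead.

```python
import time
from collections import defaultdict, deque

class RequestFilter:
    """Blocks clients that exceed a request-rate threshold within a window."""

    def __init__(self, max_requests=100, window_seconds=10):
        self.max_requests = max_requests
        self.window = window_seconds
        self._history = defaultdict(deque)   # client_ip -> recent request times

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        history = self._history[client_ip]
        # Discard timestamps that have aged out of the sliding window.
        while history and now - history[0] > self.window:
            history.popleft()
        if len(history) >= self.max_requests:
            return False                     # reject: looks like a flood
        history.append(now)
        return True
```

A legitimate client stays under the threshold and is never affected, while a single address hammering the monolith gets cut off before its traffic can exhaust hosting resources.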

Feature flags

Another way to reduce the risk of breaking changes is to introduce feature flags that allow developers to reliably turn new functions and other additions on or off during runtime. In this respect, feature flags help teams add new features to monoliths while minimizing the risk that those features will introduce new failures. If a problem does occur because of an unstable feature, the feature can be turned off quickly.
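A minimal in-memory flag store shows the mechanics. The names (`FeatureFlags`, `new_checkout`) and the two checkout functions are hypothetical; real teams typically back flags with a configuration service or a commercial flag platform so they can be flipped without redeploying.

```python
import threading

class FeatureFlags:
    """Thread-safe flag store; unknown flags default to off."""

    def __init__(self, defaults=None):
        self._flags = dict(defaults or {})
        self._lock = threading.Lock()

    def is_enabled(self, name):
        with self._lock:
            return self._flags.get(name, False)

    def set(self, name, enabled):
        with self._lock:
            self._flags[name] = enabled


flags = FeatureFlags({"new_checkout": False})

def legacy_checkout_flow(cart):
    return f"legacy:{len(cart)}"        # proven code path

def new_checkout_flow(cart):
    return f"new:{len(cart)}"           # unstable feature, off by default

def checkout(cart):
    # The flag acts as a runtime kill switch for the new feature.
    if flags.is_enabled("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)
```

If the new checkout flow starts throwing errors in production, `flags.set("new_checkout", False)` reverts every request to the legacy path immediately, with no redeployment of the monolith.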

Canary releases and blue/green deployments

One way to mitigate reliability risks in a monolith is to limit the scope of potential problems. Canary releases, for instance, allow teams to push updates and other application changes to only a certain subset of end users. Blue/green deployment, meanwhile, makes it possible to gradually move isolated groups of users from older to newer versions of an application, provided the team can host multiple application instances. By minimizing the number of users impacted by a potential problem, teams managing a monolith can reduce the chances of experiencing a failure that disrupts availability or performance across the entire application system.
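The user-subset idea behind canary releases can be sketched with deterministic hash-based bucketing, so each user consistently lands on the same version between requests. The function names and the 5% figure are illustrative assumptions, not a prescribed rollout policy.

```python
import hashlib

def in_canary(user_id, percent):
    """Deterministically place a stable subset of users in the canary group."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    bucket = int(digest, 16) % 100      # map each user to a bucket 0-99
    return bucket < percent

def route(user_id, canary_percent=5):
    if in_canary(user_id, canary_percent):
        return "v2"                     # new build, limited audience
    return "v1"                         # stable build for everyone else
```

Because the bucketing is a pure function of the user ID, widening the rollout from 5% to 20% only adds users to the canary group; no one who already saw the new version is bounced back to the old one.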

Using multiple availability zones or regions

If you host your monolith in the cloud, configuring it to operate across multiple availability zones can improve reliability by ensuring that the application remains operational if one or more data centers go down. Combining this strategy with diligent load balancing can significantly improve a cloud infrastructure's reliability, even when working with a monolithic architecture. Unfortunately, the downside is that this might increase cloud hosting costs, depending on how many instances you run.
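The failover logic this strategy relies on can be illustrated with a small zone-aware selection routine. The zone names and instance records below are made-up examples; in practice a cloud provider's load balancer performs this selection automatically.

```python
def pick_instance(instances_by_zone, preferred_order):
    """Return the first healthy instance, trying zones in order of preference."""
    for zone in preferred_order:
        for instance in instances_by_zone.get(zone, []):
            if instance["healthy"]:
                return instance
    raise RuntimeError("no healthy instance in any zone")
```

For example, if every instance in the preferred zone is down, requests fall through to the next zone in the list, keeping the monolith reachable through a data center outage.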
