Modern deployment models put a new spin on componentization
Developers can use containers, serverless and microservices to strengthen app composability, but there are tradeoffs to consider. Discover the brave new world of componentization.
Application componentization is an old concept, but fresh infrastructure trends like containers and serverless computing offer new advantages for breaking software into composable pieces -- if developers understand the implications of these new deployment models.
Combining componentization with cloud computing allows developers to build highly distributable and scalable applications. However, developers must ensure they understand how each deployment model works to evaluate whether it's right for their application.
Containers provide a way to make applications portable and scalable, serverless computing offers a way to deal with events more efficiently, and microservices provide a technique for developers to build composable apps. The three models overlap in some respects and diverge in others, and developers need to understand each one to apply these techniques effectively.
All three platforms start with components, which are pieces of applications developers write and deploy independently, then stitch together with network connections and workflows. The first step for developers working with any new software model is to evaluate the best way to break their applications into components. A lot can go wrong down the road if your evaluation is inaccurate.
Component connectivity issues
The componentization of applications introduces network connections that can affect application performance and reliability. Application architects must weigh these risks against the specific benefits a technology offers, and each of the three technologies that have emerged for componentized applications delivers different benefits.
Containers are by far the easiest of the three for teams to adopt. They're the preferred strategy when applications are built from copies of components rather than from components shared in real time. Containers share a single host operating system but have separate file systems and resource spaces, so developers can treat them almost like independent servers.
Even so, concurrency and connectivity are two issues programmers should be aware of. The concurrency issue revolves around whether a given component can be called from multiple places at once, giving rise to possible collisions between requests. If your applications are single-threaded, meaning they process one item of work at a time, your components won't face concurrent execution. Multithreaded applications, by contrast, require one of two techniques. First, components built to be re-entrant -- threadsafe, in Java terms -- can handle concurrent use. Be careful to use only class libraries that are also threadsafe in these components, and remember that threadsafe components are also stateless: they hold no data internally between inputs, so you can't rely on them to maintain context in multistep transactions. The alternative is to give non-re-entrant components a work queue that stages requests until they can be handled.
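Here's a minimal sketch of the two techniques, using a hypothetical component that does trivial arithmetic (none of these names come from a real library). The stateless handler is safe to call concurrently because it keeps no data between calls; the work queue serializes requests to a component that does hold internal state.

```python
import queue
import threading

def stateless_handler(request: dict) -> int:
    # Re-entrant (threadsafe) component: all working data is local and no
    # state survives between calls, so concurrent calls can't collide --
    # but it also can't remember context across a multistep transaction.
    return request["value"] * 2

class NonReentrantComponent:
    # Non-re-entrant component: it keeps internal state (a running total),
    # so requests are staged on a work queue and handled one at a time.
    def __init__(self):
        self.total = 0  # internal state that makes concurrent calls unsafe
        self._queue: queue.Queue = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, request: dict) -> None:
        self._queue.put(request)  # stage the request rather than run it now

    def wait(self) -> None:
        self._queue.join()  # block until all staged work is processed

    def _worker(self) -> None:
        while True:
            request = self._queue.get()  # one item of work at a time
            self.total += request["value"]
            self._queue.task_done()

component = NonReentrantComponent()
for v in (1, 2, 3):
    component.submit({"value": v})  # safe to call from any thread
component.wait()
print(component.total)  # 6
```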
Connectivity matters for containerized components because containers, by default, aren't addressable from your company VPN or the internet. If you use containers to host components that are accessed from or have access to outside networks, be sure to expose them explicitly. Also, remember that container management and orchestration tools, like Docker and Kubernetes, vary in how they expose container addresses to other containerized apps. You may need to build in security mechanisms to protect application access and data.
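One common stumbling block, shown in the hedged sketch below: a component that listens only on localhost inside its container stays unreachable even after its port is published. The server and port here are hypothetical.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ComponentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"component response\n")

# Bind to 0.0.0.0, not 127.0.0.1: inside a container, a localhost-only
# listener can't be reached through the published port mapping.
HTTPServer(("0.0.0.0", 8080), ComponentHandler).serve_forever()
```

Even with the right bind address, publishing the port to the outside -- for example, with Docker's `docker run -p 8080:8080` -- remains a separate, explicit step.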
Scaling and security challenges
When containers host a service, a component is shared in real time rather than included as a copy, which can create scaling and security challenges. Container orchestrators aren't aware of relationships between applications, so it may not be possible to see that several applications share certain components, which can complicate version control. If you host services in containers, deploy each service independently rather than collectively as part of a single app.
Microservices are the extreme case of services, and they make the issue of scaling under load easiest to see. Microservices don't necessarily have fewer lines of code, but they are typically limited in functionality to facilitate reuse. Because both reuse and componentization are explicit microservice goals, microservices are more likely to be abused through overuse in the name of development efficiency. Ultimately, fewer components translate to better efficiency unless you have a specific benefit to justify the overhead.
Because microservices are shared among applications and are publicly addressable, it's especially important to manage access to the public APIs needed to connect with them. Most microservices users will employ a form of API management or brokerage to authenticate requests to microservices. Token-based authentication is one popular approach.
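As a hedged illustration of token-based authentication -- the shared secret, token format and client names are hypothetical, not drawn from any particular API management product -- a broker signs a token that each microservice can verify before honoring a request:

```python
import hashlib
import hmac

SECRET = b"shared-broker-secret"  # hypothetical; manage per environment in practice

def issue_token(client_id: str) -> str:
    # Broker side: sign the client ID so services can verify it later.
    signature = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return f"{client_id}:{signature}"

def verify_token(token: str) -> bool:
    # Microservice side: recompute the signature and compare in constant time.
    client_id, _, signature = token.partition(":")
    expected = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

token = issue_token("orders-frontend")
assert verify_token(token)            # valid token accepted
assert not verify_token(token + "x")  # tampered token rejected
```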
Handling multiple messages
Microservices are also used to improve scalability and resilience, and this poses specific development constraints. You cannot simply replace or replicate a component if that component holds data internally, which is why microservices are usually designed to be stateless. Statelessness allows free substitution of one copy of a component for another, but it poses a problem for transactions that consist of multiple messages.
When a new copy of a component is used to process work, there's a risk that the context of the new work is lost because the new copy didn't see the previous messages. This can happen either because a component failed and was replaced or because a new instance was spawned. If a transaction consists of several related messages, then the context or state of the transaction has to be provided from the outside and delivered to any copy of the component that's being used. This can be carried out from the front end by having each message tagged with a sequence indicator and any necessary related data. It can also be carried out on the back end from a database shared by all the component copies. Just make sure to stick with one approach for development consistency.
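Here's a minimal sketch of the back-end approach, with a plain dictionary standing in for a shared database reachable by every copy (the store, handler and transaction IDs are hypothetical). Because context lives outside the component, any copy can pick up the transaction mid-stream.

```python
# Stands in for a shared store (e.g., a database) visible to all copies.
shared_state: dict[str, dict] = {}

def handle_message(transaction_id: str, sequence: int, payload: str) -> None:
    # Stateless handler: context is loaded from and saved to the shared
    # store, so any replica can process any message in the transaction.
    context = shared_state.setdefault(transaction_id, {"messages": []})
    context["messages"].append((sequence, payload))
    shared_state[transaction_id] = context  # persist the updated context

# Two different "copies" of the component can handle related messages:
handle_message("txn-42", 1, "reserve item")  # processed by copy A
handle_message("txn-42", 2, "charge card")   # processed by copy B
print(shared_state["txn-42"]["messages"])    # full context survives
```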
Multiple copies of a component aren't useful without some form of load balancing to ensure that work is divided among them. You can use external load balancers or the load-balancing tools built into middleware, including container software. Some users employ a customized DNS server that resolves component names into IP addresses, but this approach creates a problem if components cache those addresses. The best practice is to use an independent load balancer and position it in front of any component that needs to scale.
You can also instantiate two copies of any scalable component, along with its load balancer, during initial deployment. Load balancers can be made context-aware to a degree, so explore combining state control and load balancing for scalable components. Also remember that new copies of a microservice have to be registered with the API broker or manager and with the load balancer, or else other services won't find them.
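The sketch below, with hypothetical class and address names, shows both halves of that job: dividing work round-robin among registered copies, and the registration step a new copy must complete before anything can find it.

```python
import itertools

class RoundRobinBalancer:
    # Minimal round-robin load balancer for copies of one component.
    def __init__(self):
        self._instances: list[str] = []
        self._cycle = None

    def register(self, address: str) -> None:
        # A new copy must register here (and with any API broker)
        # before other services can route work to it.
        self._instances.append(address)
        self._cycle = itertools.cycle(self._instances)

    def next_instance(self) -> str:
        if self._cycle is None:
            raise RuntimeError("no instances registered")
        return next(self._cycle)

balancer = RoundRobinBalancer()
balancer.register("10.0.0.5:8080")  # first copy, deployed at startup
balancer.register("10.0.0.6:8080")  # second copy, per the two-copy practice
for _ in range(4):
    print(balancer.next_instance())  # work alternates between the copies
```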
Functions enter the arena
Serverless is a concept in componentization that abstracts away many infrastructure concerns. It provides a framework that loads and executes software components on demand rather than assigning them to a specific server or infrastructure component. The term is often applied to cloud services, such as AWS Lambda or Google Cloud Functions. Designed for event processing, serverless employs functional or lambda programming concepts.
In the cloud, the service provider runs a function, or lambda, on demand whenever a trigger event is recognized. You're charged for the time it takes to run the function, which means you should limit your function's execution time. You're also charged based on the number of times a function runs, which means that if trigger events are frequent, the serverless model could prove more costly than another hosting model.
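To make that tradeoff concrete, here's a back-of-the-envelope comparison using illustrative prices -- these are assumptions, not any provider's actual rates:

```python
# Hypothetical rates -- check your provider's actual pricing.
PRICE_PER_MILLION_INVOCATIONS = 0.20  # dollars
PRICE_PER_GB_SECOND = 0.0000167       # dollars
VM_MONTHLY_COST = 35.00               # small always-on instance, dollars

invocations_per_month = 50_000_000    # frequent trigger events
gb_seconds = invocations_per_month * 0.25 * 0.128  # 250 ms at 128 MB each

serverless_cost = (invocations_per_month / 1_000_000) * PRICE_PER_MILLION_INVOCATIONS \
                  + gb_seconds * PRICE_PER_GB_SECOND
print(f"serverless: ${serverless_cost:,.2f}/month vs. VM: ${VM_MONTHLY_COST:,.2f}/month")
```

At this hypothetical event rate, the serverless bill already edges past the always-on instance; at a fraction of the volume, serverless would win easily.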
Lambdas are microservices in a sense, so the resilience, scalability and state control issues that affect microservices may apply to lambdas, too. However, event processing isn't the same as transaction processing. Event processing may require correlation between multiple event sources, which means the lambdas are orchestrated based on a state-like structure. It's smart to review how orchestration works before developing lambdas; AWS Step Functions provides explicit resources for this and is worth a look even if you don't plan to host on Amazon's cloud. State control via orchestration is a better approach than back-end state control when you're developing lambdas.
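As a minimal sketch of orchestration-style state control -- the event fields are hypothetical, though `handler(event, context)` is the standard signature for Python lambdas on AWS -- the orchestrator carries transaction state in the event payload itself, so the function never queries a back-end store:

```python
def handler(event, context):
    # Stateless lambda: everything it needs arrives in the event, and the
    # updated state is returned for the orchestrator to carry into the
    # next step (much as a state machine would pass it along).
    state = event.get("transaction_state", {})  # injected by the orchestrator
    state["steps_completed"] = state.get("steps_completed", 0) + 1
    state["last_event"] = event.get("detail")   # record what was processed
    return {"transaction_state": state}         # handed to the next step

# Simulated orchestration of two related events in one transaction:
step1 = handler({"detail": "payment received", "transaction_state": {}}, None)
step2 = handler({"detail": "order shipped", **step1}, None)
print(step2["transaction_state"])
# {'steps_completed': 2, 'last_event': 'order shipped'}
```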
Take it seriously
Remember that where you run something matters. Cloud support for all three application models differs from the support available in your data center, and that's particularly true with serverless and microservices.
Many developers don't give containers serious consideration in building modern applications. Don't make that mistake.