How to choose between serverless and containerized microservices

Explore the trade-offs between serverless microservices and containerized microservices to see which development requirements should drive your decision.

When the microservices revolution began, containers provided the only sure way to run isolated microservices. As a result, Docker became popular with developers, and it is still one of the most widely used microservices development tools. When Amazon released AWS Lambda, some developers switched to a serverless architecture, which led to even more service and development isolation.

As microservices technologies mature, many developers face a serverless vs. containers conundrum, unsure whether their applications are better suited to serverless microservices or to containerized microservices.

Why serverless?

Serverless enables developers to go beyond microservices and add even more service granularity. While a microservice might manage login and authentication as a whole, a single serverless function's purpose may simply be to look up a user by a cookie; other functions might check what permissions a user has or redirect a user to a login screen. Developers call these more granular, lower-level microservices nanoservices.

Unlike microservices, nanoservices only run a single block of code, also known as a function. It could parse a cookie header or determine if a user has permissions for a particular API. And nanoservices are only possible through serverless technologies.
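
To picture how small a nanoservice can be, here is a minimal sketch of one written as an AWS Lambda handler in Python; the event shape assumes an API Gateway proxy integration, and the cookie name is illustrative.

```python
# Hypothetical nanoservice: its only job is to pull a session ID out of a
# Cookie header. The event shape assumes an API Gateway proxy integration.
from http.cookies import SimpleCookie


def lambda_handler(event, context):
    """Return the session ID found in the request's Cookie header, if any."""
    cookie_header = (event.get("headers") or {}).get("Cookie", "")
    cookies = SimpleCookie()
    cookies.load(cookie_header)

    session = cookies.get("session_id")  # illustrative cookie name
    if session is None:
        return {"statusCode": 401, "body": "no session cookie"}
    return {"statusCode": 200, "body": session.value}
```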

However, nanoservices aren't right for every application, largely because they tend to increase API latency. A single event might trigger multiple functions, and each separate function typically adds network latency when it runs, so in some cases it is better to combine that work into a single serverless function.
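
To make the latency trade-off concrete, the hypothetical sketch below shows the same work composed two ways: the first handler pays for a synchronous network hop to a second function on every request, while the second folds that logic into one function. The function names and payloads are made up.

```python
# Hypothetical comparison of two ways to compose the same work.
import json

import boto3

lambda_client = boto3.client("lambda")


def handler_chained(event, context):
    # Every request pays for a second, synchronous network hop to another
    # function (the function name here is illustrative).
    response = lambda_client.invoke(
        FunctionName="lookup-user-by-cookie",
        Payload=json.dumps({"cookie": event.get("cookie")}),
    )
    user = json.loads(response["Payload"].read())
    return {"user": user}


def handler_combined(event, context):
    # The same lookup folded into this function: one invocation, no extra hop.
    user = lookup_user_by_cookie(event.get("cookie"))
    return {"user": user}


def lookup_user_by_cookie(cookie):
    # Placeholder for the in-process lookup logic.
    return {"id": "example", "cookie": cookie}
```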

Nanoservices also help with reusing components across multiple services. For example, one nanoservice could look up a user by an authentication token, caching that lookup across all functions and ensuring that a user isn't logged in from multiple locations at once, which helps prevent account sharing. A second nanoservice could fetch all of the permissions a user has based on their plan, with support for custom overrides. A third could verify whether the user has permission to perform a specific action. Together, these three nanoservices constitute the authentication and authorization microservice.
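
As a rough illustration of how those three nanoservices might divide the work, the sketch below defines one handler per responsibility; the function names, token handling and permission model are assumptions, not a real implementation.

```python
# Hypothetical split of an authentication/authorization microservice into
# three nanoservices, one handler per responsibility.


def lookup_user_by_token(event, context):
    """Resolve an authentication token to a user record (stubbed here)."""
    token = event.get("token")
    # In a real system this would hit a session store or identity provider.
    return {"user_id": "u-123", "token": token}


def get_user_permissions(event, context):
    """Return the permissions granted by the user's plan, plus overrides."""
    plan_permissions = {"free": ["read"], "pro": ["read", "write"]}
    perms = set(plan_permissions.get(event.get("plan", "free"), []))
    perms.update(event.get("overrides", []))  # custom per-user overrides
    return {"permissions": sorted(perms)}


def check_permission(event, context):
    """Decide whether the user may perform a specific action."""
    return {"allowed": event.get("action") in set(event.get("permissions", []))}
```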

The need for serverless microservices

Serverless doesn't necessarily mean you have to use nanoservices. You can develop microservices that run within a serverless architecture or create microservices that call certain serverless functions. But there is a catch with serverless functions: AWS Lambda caps execution time per invocation, currently at 15 minutes, which means long-running services are not a good fit for a serverless architecture.

By design, serverless functions are disposable and do not guarantee any persistent state. Therefore, services like a cache microservice or a database service can't run using serverless.
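
The sketch below illustrates the point with a hypothetical best-effort cache: anything stored at module level survives only while a particular execution environment happens to be reused, so it can speed up warm invocations but can never serve as a system of record.

```python
# Hypothetical demonstration that function-level state is disposable.
# WARM_CACHE survives only while this execution environment is reused;
# a cold start begins with an empty dictionary again.
WARM_CACHE = {}


def lambda_handler(event, context):
    key = event.get("key")
    if key in WARM_CACHE:
        return {"value": WARM_CACHE[key], "source": "warm cache (best effort)"}

    value = expensive_lookup(key)  # placeholder for the real data source
    WARM_CACHE[key] = value
    return {"value": value, "source": "origin"}


def expensive_lookup(key):
    return f"value-for-{key}"
```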

On the other hand, serverless microservices are a great fit for operations that can be repeated without consequence to users, in other words, idempotent operations. For example, a login function that returns a JSON Web Token is a natural candidate for a serverless microservice. If you're working on an application that encodes video, a serverless microservice could accept the encoding request and trigger a transcode job through Amazon Elastic Transcoder.
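
As a rough illustration of the login example, the sketch below returns a signed JSON Web Token; it assumes the PyJWT library, an HMAC secret stored in an environment variable, and a placeholder credential check.

```python
# Hypothetical login nanoservice that returns a signed JSON Web Token.
# Assumes the PyJWT package and an HMAC secret in an environment variable.
import os
import time

import jwt  # PyJWT


def lambda_handler(event, context):
    user_id = authenticate(event.get("username"), event.get("password"))
    if user_id is None:
        return {"statusCode": 401, "body": "invalid credentials"}

    token = jwt.encode(
        {"sub": user_id, "exp": int(time.time()) + 3600},  # 1-hour expiry
        os.environ["JWT_SECRET"],
        algorithm="HS256",
    )
    return {"statusCode": 200, "body": token}


def authenticate(username, password):
    # Placeholder: verify credentials against your user store.
    return "u-123" if username and password else None
```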

Serverless is ideal for any interaction triggered by an event, whether that event is a user request through API Gateway or an AWS-generated event such as a message delivered through SQS, a new or updated record in DynamoDB, or a click tracked through Kinesis Data Firehose.
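
The event payload tells the function what triggered it. The hypothetical handler below shows roughly how one function could distinguish an API Gateway request from SQS or DynamoDB stream records; the field names follow the standard AWS event shapes, but the routing logic is purely illustrative.

```python
# Hypothetical dispatcher showing the event shapes named above.
import json


def lambda_handler(event, context):
    # API Gateway proxy requests carry an HTTP method at the top level.
    if "httpMethod" in event:
        return {"statusCode": 200, "body": json.dumps({"source": "api-gateway"})}

    handled = []
    for record in event.get("Records", []):
        source = record.get("eventSource")
        if source == "aws:sqs":
            handled.append(("sqs", record["body"]))
        elif source == "aws:dynamodb":
            handled.append(("dynamodb", record["dynamodb"].get("Keys")))
    return {"handled": handled}
```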

When to use containers

Container-based microservices are incredibly useful for applications that require long-running services and tasks that do not have to bootstrap code for every request, such as a service that needs to load several thousand records from DynamoDB into local memory to run text analysis. Additionally, container-based microservices are significantly more portable than serverless functions that rely on the cloud provider's configuration.
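
As a rough sketch of that kind of long-running service, the code below pages through a DynamoDB table once at startup and keeps the records in memory for later analysis; the table name and the analysis step are placeholders.

```python
# Hypothetical long-running containerized service: load a table into memory
# once at startup, then reuse it for every analysis request.
import boto3


def load_all_records(table_name):
    """Page through the whole table with Scan and return every item."""
    table = boto3.resource("dynamodb").Table(table_name)
    items, kwargs = [], {}
    while True:
        page = table.scan(**kwargs)
        items.extend(page.get("Items", []))
        if "LastEvaluatedKey" not in page:
            return items
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]


def main():
    records = load_all_records("documents")  # placeholder table name
    # ... run text analysis against the in-memory records, serve requests, etc.
    print(f"loaded {len(records)} records")


if __name__ == "__main__":
    main()
```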

For example, DynamoDB can trigger a Lambda function that synchronizes records into a CloudSearch index anytime a record changes. This synchronization itself is fairly trivial to code, but because it is Lambda, it locks the application into AWS. If you need to support multi-cloud platforms, this solution isn't viable unless you rewrite code specifically for every individual platform.
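
For illustration only, that synchronization function might look something like the sketch below; it assumes a CloudSearch document endpoint supplied through an environment variable, a string key named id and a single indexed field, none of which come from the original example.

```python
# Hypothetical DynamoDB-stream-to-CloudSearch sync. The domain endpoint and
# field mapping are assumptions for illustration.
import json
import os

import boto3

cloudsearch = boto3.client(
    "cloudsearchdomain",
    endpoint_url=os.environ["CLOUDSEARCH_DOC_ENDPOINT"],
)


def lambda_handler(event, context):
    batch = []
    for record in event.get("Records", []):
        doc_id = record["dynamodb"]["Keys"]["id"]["S"]  # assumes string key "id"
        if record["eventName"] == "REMOVE":
            batch.append({"type": "delete", "id": doc_id})
        else:
            image = record["dynamodb"].get("NewImage", {})
            batch.append({
                "type": "add",
                "id": doc_id,
                "fields": {"title": image.get("title", {}).get("S", "")},
            })

    if batch:
        cloudsearch.upload_documents(
            documents=json.dumps(batch).encode("utf-8"),
            contentType="application/json",
        )
    return {"synced": len(batch)}
```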

Using containers to build microservices is like writing an HTML5 application for mobile devices: You can write it once and run it nearly anywhere. However, container management requires more operational effort than serverless does. Containerized microservices run easily on a local machine, which makes debugging simpler, but they also require you to manage the underlying OS and its software upgrades, which means services occasionally need to reboot, unlike serverless functions, which are disposed of after each invocation. For applications that may leak memory or need to cache many different data types for each request, serverless microservices are a better option than containerized microservices.

Concluding the serverless vs. containers debate, choose containers if you're worried about portability, want greater control of your architecture or need long-running services. If you're working with event-driven architectures, need to develop apps quickly or lack an operations team to help with maintenance, go with serverless functions.

But remember: The two technologies are not mutually exclusive. Just because you're using containers doesn't mean you can't also use serverless functions. For instance, a common architecture pattern is to have a serverless function trigger an AWS Fargate task, which, in turn, spawns a container and runs a longer-running process, like a complex report that takes several hours to complete. Make sure to consider adopting serverless and containers together before you go looking for a single winner in the serverless vs. containers debate.
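
To make that hand-off concrete, here is a hedged sketch of a function that calls ECS RunTask to launch a Fargate task; the cluster name, task definition, subnets and container name are placeholders you would replace with your own.

```python
# Hypothetical Lambda handler that hands long-running work to Fargate.
# Cluster, task definition, subnets and container name are placeholders.
import boto3

ecs = boto3.client("ecs")


def lambda_handler(event, context):
    response = ecs.run_task(
        cluster="reports-cluster",
        launchType="FARGATE",
        taskDefinition="complex-report:1",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [
                {
                    "name": "report-worker",
                    "environment": [
                        {"name": "REPORT_ID", "value": event.get("report_id", "")}
                    ],
                }
            ]
        },
    )
    return {"taskArn": response["tasks"][0]["taskArn"]}
```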
