Deploy microservices with serverless functions

Microservices enable IT teams to create flexible, uniform systems -- but what happens when those microservices run as serverless functions? Discover the benefits and drawbacks.

The world is moving away from the era of large, monolithic enterprise apps toward more flexible systems that can react to market changes. This shift to a microservices model has also changed how developers work: Continuous development and delivery via DevOps aligns far more closely with this approach than traditional release cycles do.

Within a microservices model, IT teams create standalone functional services that other services can call on as needed. For example, rather than host multiple instances of a billing system embedded within disparate monolithic applications, an organization can use a single billing service that sets a universal standard. It's much easier to update and improve just one billing service.
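
As a minimal sketch of that idea, the snippet below exposes one shared billing endpoint that any other application in the organization can call over HTTP. The Flask framework, the /charge path and the payload fields are illustrative assumptions, not a prescribed design.

```python
# Minimal sketch of a standalone billing microservice (illustrative only).
# Flask, the /charge route and the payload fields are assumptions for brevity.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/charge", methods=["POST"])
def charge():
    # Every application in the organization calls this one service,
    # so billing rules are defined and updated in a single place.
    payload = request.get_json(force=True)
    amount = payload.get("amount", 0)
    currency = payload.get("currency", "USD")
    # Real billing logic (tax, invoicing, payment gateway) would live here.
    return jsonify({"status": "charged", "amount": amount, "currency": currency})

if __name__ == "__main__":
    app.run(port=8080)
```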

[Figure: Microservices -- uniformity maintains consistency across the organization.]

By aggregating such services to fulfill a specific process, an organization gains agility and the ability to modify a single microservice as required -- or to swap one service out and replace it with another. However, dependence on a single-instance service is a recipe for disaster: The failure of that one instance affects every composite app that depends on it.

Alongside this issue, there are also problems with how a microservices architecture is constructed. With the increasing use of virtualized platforms, such as cloud platforms and services, traditional provisioning approaches -- which depend on the physical attributes of the platform -- are no longer fit for purpose.

Over time, orchestration systems have appeared that enable IT teams to provision microservices without needing to understand those underlying physical attributes. Instead, metadata defines what resources the platform must provide, and the orchestration software matches those requirements against the virtual resources available.
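
As a rough illustration of that metadata-driven approach, the toy sketch below declares what a service needs in abstract terms and lets a simple matcher pick any node with enough virtual capacity. The field names and the placement logic are hypothetical stand-ins for what a real orchestrator does.

```python
# Toy illustration of metadata-driven placement (hypothetical field names).
# The service declares abstract resource needs; the platform finds a match --
# no physical attributes of the underlying hardware appear anywhere.

service_metadata = {
    "name": "billing",
    "cpu_cores": 2,        # requested virtual CPU, not a physical socket
    "memory_mb": 1024,     # requested virtual memory
    "replicas": 3,
}

available_nodes = [
    {"name": "node-a", "free_cpu": 1, "free_memory_mb": 2048},
    {"name": "node-b", "free_cpu": 4, "free_memory_mb": 8192},
]

def place(service, nodes):
    """Return the first node whose free virtual resources cover the request."""
    for node in nodes:
        if (node["free_cpu"] >= service["cpu_cores"]
                and node["free_memory_mb"] >= service["memory_mb"]):
            return node["name"]
    return None

print(place(service_metadata, available_nodes))  # -> "node-b"
```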

This doesn't deal with every problem, however. For example, linking microservices with "fixed" connections can break the composite app's functionality. Any change in the microservices involved, such as a move from one part of the platform to another, produces a broken link, which inevitably leads to complete functional breakdown.

Escaping the fixed-link problem requires total abstraction from physical hardware. Here, IT teams should look to serverless functions.

Usefulness of serverless

Serverless is a concept that encompasses the full automation of services on a platform, alongside highly responsive resource flexing; the entire composite application is based on event-driven interactions.

Major cloud services -- such as AWS Lambda, Microsoft Azure Functions and Google Cloud Functions -- have built serverless functions into their cloud platforms. Serverless functions enable IT teams to run multiple instances of identical functions around the globe in different data centers. Admins can shift workloads in real time to wherever a service is best supported, and introduce new functions without sacrificing current functionality for testing or to address the needs of specific customers. Organizations can also use serverless functions in their own data centers if those environments are architected as highly abstracted virtual platforms.
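
As one hedged example of what such a function looks like in practice, the sketch below follows the standard AWS Lambda handler signature for Python. The event fields and the billing logic are assumptions for illustration only.

```python
# Sketch of a serverless billing function using the standard AWS Lambda
# Python handler signature. The event fields are illustrative assumptions;
# identical copies of this function can run in any region or data center.

def lambda_handler(event, context):
    # The platform invokes the handler in response to an event (HTTP call,
    # queue message, schedule); no server is provisioned or managed directly.
    amount = event.get("amount", 0)
    currency = event.get("currency", "USD")
    # Billing logic would go here; the same code runs wherever the platform
    # decides to place the instance.
    return {"statusCode": 200, "body": f"charged {amount} {currency}"}
```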

Serverless functions become independent of everything around them: The resources -- CPU, storage or network -- are all abstracted. Links between services are maintained via loose coupling, which relies on metadata directories and on development practices that avoid hard-wired references between services.
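
A minimal sketch of that loose coupling, assuming a hypothetical metadata directory: The caller resolves a service name at request time rather than hard-coding an address, so the billing service can move without breaking the link.

```python
# Sketch of loose coupling via a metadata directory (hypothetical registry).
# The caller asks for a service by name at request time; no physical address
# is baked into the code, so the target service can move freely.
import urllib.request

# Stand-in for a real service registry or metadata directory.
SERVICE_DIRECTORY = {
    "billing": "http://billing.internal.example:8080",
}

def resolve(service_name):
    """Look up the current endpoint for a named service."""
    return SERVICE_DIRECTORY[service_name]

def call_billing(payload_bytes):
    # The only coupling is the service name; the endpoint is resolved late.
    endpoint = resolve("billing") + "/charge"
    req = urllib.request.Request(endpoint, data=payload_bytes,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req).read()
```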

For the IT operations team, this makes provisioning and management easier. As long as IT admins feed the chosen serverless orchestration system the right information, it tries to ensure the right outcome is achieved automatically. This is known as an idempotent approach: The system performs a defined series of actions to attain a specified result, and repeating those actions yields the same result -- every time.
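
The following toy sketch shows what that idempotent, declare-the-result style looks like: The function converges the platform to a desired instance count, and running it again changes nothing once the target is reached. The function names and in-memory state are hypothetical.

```python
# Toy sketch of an idempotent operation: declare the desired result and let
# the system converge on it. Running ensure_replicas repeatedly is safe --
# once the target count is reached, further runs change nothing.

running = {"billing": 1}   # pretend current state of the platform

def start_instance(service):
    running[service] = running.get(service, 0) + 1

def stop_instance(service):
    running[service] = max(0, running.get(service, 0) - 1)

def ensure_replicas(service, desired):
    """Converge the running instance count to `desired`, whatever it is now."""
    while running.get(service, 0) < desired:
        start_instance(service)
    while running.get(service, 0) > desired:
        stop_instance(service)

ensure_replicas("billing", 3)
ensure_replicas("billing", 3)   # second run is a no-op: same result every time
print(running)                  # -> {'billing': 3}
```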

Changes to the platform -- whether at the physical level of resource additions or at the virtual level of moving services around -- are all managed automatically. As such, IT operations admins can spin up a new instance of a service, and the platform will recognize it. Admins can also move a service to another part of the platform, or to an entirely new platform, without confusing the system.

Use cases for serverless microservices

Idempotency is, of course, the ideal outcome. But it must be examined in light of all the dependencies involved. For example, if a microservice is poorly coded with physically dependent calls, serverless functions will not help -- in fact, the service will break with the first change.

If a composite application is built from many microservices, the number and frequency of changes can cause performance and management issues. Dependence on single service instances will not work. Serverless management systems must enable high availability via multiple service instances and load balancing, as AWS, Microsoft and Google have implemented within their platforms. IT teams can manage this setup with stateless logic -- requests carry no dependency on a specific instance or physical location.
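
A short sketch of that stateless style, with hypothetical function and store names: The handler keeps nothing in local memory between calls, so any instance behind the load balancer can serve any request.

```python
# Sketch of stateless request handling (hypothetical function and store).
# Nothing is kept in the instance's memory between requests, so the load
# balancer can send each call to any instance, in any location.

def handle_request(request, session_store):
    # All context arrives with the request or is fetched from a shared store;
    # the instance itself holds no state that another instance would miss.
    user_id = request["user_id"]
    cart = session_store.get(user_id, [])
    cart.append(request["item"])
    session_store[user_id] = cart          # state lives outside the instance
    return {"user_id": user_id, "items": len(cart)}

# In production the session store would be an external service (for example,
# a managed cache or database); a dict stands in for it here.
shared_store = {}
print(handle_request({"user_id": "u1", "item": "widget"}, shared_store))
print(handle_request({"user_id": "u1", "item": "gadget"}, shared_store))
```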

Serverless alone might not respond quickly enough to microservices' demands for resource increases, which leads to poor performance. And if the change cycle is too heavy or too rapid, this architecture can undermine corporate audit capabilities, with potential legal ramifications.

In such cases, containerization is a better approach. The available container orchestration systems have had considerable time to mature. Monitoring and reporting that create and manage audit trails are now strengths of containers, and the elastic allocation of resources to containers is a well-understood concept in modern IT. Overall, containers are at a more mature point in their evolution than serverless is at the time of writing.

However, this does not mean serverless, microservices and containerization are mutually exclusive. Combining the best aspects of these approaches -- well-coded microservices wrapped in containers that are easily provisioned, monitored and managed -- leads to a faster, more flexible overall platform. Over time, the maturity levels of both approaches will converge, likely combining into a single overall composite application architecture.
