How to adapt API management for serverless architecture
It's critical to understand what will happen to legacy APIs when you transition to a serverless architecture. Discover how to properly adapt API management for this new paradigm.
If you've already built a stable API infrastructure, you don't want to just throw everything away and start over.
After all, the term legacy is often another way of referring to things that already work for you. But how can you adapt an existing API to a serverless architecture?
Many tools exist to help convert APIs piece by piece, enabling you to monitor and replace individual parts of the APIs as you transition to a serverless architecture.
Knowing where to start
A traditional API system is built in layers. For example, you might have an authentication piece that checks credentials against something like Active Directory (AD), and a query interface that converts user input into SQL. You might also have middleware that, for example, processes raw HTTP headers and converts them into formats specific to your system. You can transition each piece to a serverless architecture, but you'll want to create a set of tests that verify each piece behaves the same after the move.
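One way to build that safety net is with characterization tests: pin down the current behavior of a layer, then require the serverless replacement to match it. The sketch below uses a header-normalizing middleware layer as the example; both function names and the test cases are hypothetical stand-ins, not a prescribed implementation.

```python
# A minimal sketch: characterization tests pin down one layer's behavior
# before migration. Function names and cases here are hypothetical.

def legacy_parse_headers(raw: dict) -> dict:
    """Legacy middleware: normalize raw HTTP headers into an internal format."""
    return {k.lower().replace("-", "_"): v.strip() for k, v in raw.items()}

def serverless_parse_headers(raw: dict) -> dict:
    """Candidate serverless replacement -- must match the legacy output."""
    return {k.lower().replace("-", "_"): v.strip() for k, v in raw.items()}

# Feed both implementations the same inputs and require identical output
# before cutting any traffic over to the new layer.
CASES = [
    {"Content-Type": " application/json "},
    {"X-Request-ID": "abc123", "Accept": "text/xml"},
]

def layer_matches() -> bool:
    return all(legacy_parse_headers(c) == serverless_parse_headers(c) for c in CASES)
```

Run the same comparison for each layer (auth, query, middleware) as you migrate it, so a behavioral difference surfaces as a failing test rather than a production incident.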
API transitions are a perfect fit for test-driven development. To begin, replace the least complicated APIs first. These should be the simplest to test, and they will give you a handle on API migration. It's also important to make sure you can easily roll back a change and revert to the old API version if needed. For example, you don't want to update domain name system (DNS) records that carry a time-to-live (TTL) of one week, because it could then take up to a week for a change -- or a rollback -- to propagate. Instead, lower the DNS TTLs to something like five minutes for the duration of the transition. You could also use an API proxy or load balancer to route traffic as needed.
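On AWS, shortening a TTL ahead of a cutover is an UPSERT against the record in Route 53. The sketch below only builds the change batch that the `change_resource_record_sets` call expects; the zone ID, record name, and IP address are hypothetical placeholders.

```python
# A sketch of lowering a DNS TTL before a cutover. It builds the ChangeBatch
# dict that Route 53's change_resource_record_sets API expects; the record
# name and IP below are made up for illustration.

def ttl_lowering_change(name: str, ip: str, ttl_seconds: int = 300) -> dict:
    """Build an UPSERT that shortens the record's TTL (here, five minutes)."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "TTL": ttl_seconds,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }

batch = ttl_lowering_change("api.mycompanyapi.com.", "203.0.113.10")
# To apply it (requires credentials and a real hosted zone ID):
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z_EXAMPLE", ChangeBatch=batch)
```

Remember to wait out the *old* TTL after lowering it: resolvers that cached the week-long record keep it until that original TTL expires.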
One common migration approach is to replicate traffic to a new API while disabling any side effects -- such as charging a credit card -- so you can test the new API with real user data. Sending identical data to both the old and new APIs lets you verify that their outputs are compatible and identify issues that the switch might cause.
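The shadow-traffic idea can be sketched in a few lines: the old API serves the real response, a copy of the request goes to the new API with a dry-run flag that suppresses side effects, and any mismatch is logged for review. Both handlers and the order shape below are hypothetical.

```python
# A minimal sketch of shadow (replicated) traffic. The legacy handler serves
# the user; the new handler gets a copy with side effects disabled, and any
# output mismatch is recorded. Handler names and fields are hypothetical.

mismatches = []

def legacy_charge(order: dict, dry_run: bool = False) -> dict:
    total = order["qty"] * order["unit_price"]
    if not dry_run:
        pass  # the legacy path would actually charge the card here
    return {"order_id": order["id"], "total": total}

def serverless_charge(order: dict, dry_run: bool = False) -> dict:
    # New implementation under test; dry_run=True suppresses the real charge.
    return {"order_id": order["id"], "total": order["qty"] * order["unit_price"]}

def handle(order: dict) -> dict:
    primary = legacy_charge(order)                   # real response to the user
    shadow = serverless_charge(order, dry_run=True)  # replicated, no side effects
    if shadow != primary:
        mismatches.append({"order": order, "old": primary, "new": shadow})
    return primary

result = handle({"id": "o-1", "qty": 3, "unit_price": 500})
```

The mismatch log is the payoff: after running real traffic through both paths for a while, an empty log is strong evidence the new API is output-compatible.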
The next step is traffic shifting or A/B routing, which can be carried out through an API gateway. Have the gateway send most of the traffic to the old API and a smaller percentage of traffic to the new API. Monitor for unusual activity, and watch for user complaints, which will help verify that the new API works without customer impact.
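At its core, traffic shifting is a weighted coin flip at the gateway. This sketch routes a small, configurable share of requests to the new API; the handlers and the 5% starting weight are illustrative assumptions, and a real gateway (such as one with canary or weighted-routing support) would do this for you.

```python
# A sketch of percentage-based traffic shifting at a gateway/router layer.
# Handler names and the 5% share are hypothetical.

import random

def old_api(req): return {"version": "old", "echo": req}
def new_api(req): return {"version": "new", "echo": req}

def route(req, new_share: float = 0.05, rng=random.random) -> dict:
    """Send roughly new_share of traffic to the new API, the rest to the old."""
    handler = new_api if rng() < new_share else old_api
    return handler(req)

# Deterministic check of each branch by stubbing the random source:
hit_new = route({"path": "/users"}, new_share=0.05, rng=lambda: 0.01)
hit_old = route({"path": "/users"}, new_share=0.05, rng=lambda: 0.99)
```

As error rates and user feedback stay clean, you ratchet `new_share` up until the old API receives no traffic at all.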
If you have public APIs, it's always a good idea to version them. If you work with an old SOAP API but want to offer up an event-based REST API, you can offer both simultaneously for a period of time while users transition to the new API. API versioning typically involves the addition of a version prefix to a URL, such as mycompanyapi.com/v1/users. But developers can also create separate endpoints, such as v1.mycompanyapi.com/users. Different formats of API responses typically use suffixes, such as users.json or users.xml. The most important thing is to communicate the transition to end users and let them test out the new APIs, but still provide access to the old APIs in case something goes wrong.
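A version prefix is ultimately just a routing key. The sketch below splits a `mycompanyapi.com/v1/users`-style path into a version and a resource and dispatches to the matching handler, keeping both versions live during the transition; the handler names and responses are made up for illustration.

```python
# A sketch of version-prefix routing: /v1/... goes to the old handlers,
# /v2/... to the new ones, so both stay available while users transition.
# Handler names and payloads are illustrative.

def users_v1(): return {"format": "soap-era", "users": []}
def users_v2(): return {"format": "rest", "users": []}

ROUTES = {
    ("v1", "users"): users_v1,
    ("v2", "users"): users_v2,
}

def dispatch(path: str):
    """Split a '/v1/users'-style path into (version, resource) and route it."""
    version, _, resource = path.strip("/").partition("/")
    handler = ROUTES.get((version, resource))
    if handler is None:
        return {"error": 404}
    return handler()
```

The subdomain style (`v1.mycompanyapi.com/users`) works the same way, except the version is extracted from the Host header instead of the path.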
It's also important to install monitoring that identifies any users or internal applications that still use the old APIs. It's generally good practice to provide notice at least six months to a year before an old API is terminated, provided there aren't any critical security issues.
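That kind of monitoring can be as simple as counting requests to the deprecated version per caller, so you know exactly who still needs a migration nudge before the shutdown date. The client-identifier header and version labels below are assumptions for the sketch.

```python
# A sketch of deprecation monitoring: count which callers still hit the old
# API version. Assumes each request carries a client identifier (e.g. in an
# API-key or custom header); the IDs below are made up.

from collections import Counter

legacy_callers = Counter()

def record_request(version: str, client_id: str) -> None:
    if version == "v1":  # the old version slated for removal
        legacy_callers[client_id] += 1

record_request("v1", "internal-billing-job")
record_request("v1", "internal-billing-job")
record_request("v2", "mobile-app")
# legacy_callers now names exactly who still depends on the old API.
```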
Exposed APIs don't necessarily need to change, even when the underlying architecture does. You can move an API to a serverless architecture without modifying the interface users see. Even if the API exposes a WebSocket, you can run a separate adapter on an Amazon Elastic Compute Cloud (EC2) instance, or through Amazon Elastic Container Service (ECS) with the Fargate launch type, that continues to expose the WebSocket API but uses the new serverless API under the hood.
Adapters on top of an API provide users with a simple way to access the API in a familiar way. For example, if your API only produces JSON outputs but your users want a SOAP response, you could create a specialized SOAP adapter that converts requests from the SOAP format to the new JSON REST format.
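The shape of such an adapter is: parse the legacy request format, call the new API, and translate back. The sketch below extracts a parameter from a deliberately minimal SOAP-style envelope and calls a stand-in for the new JSON endpoint; real SOAP envelopes carry namespaces and a response would be serialized back to XML, both omitted here.

```python
# A sketch of a SOAP-to-REST adapter: pull the parameters out of a SOAP-style
# envelope and call the JSON-returning API. The envelope shape and the
# handler are simplified assumptions (no namespaces, no XML response).

import xml.etree.ElementTree as ET

def json_get_user(user_id: str) -> dict:
    """Stand-in for the new JSON REST endpoint."""
    return {"id": user_id, "name": "Ada"}

def soap_adapter(envelope: str) -> dict:
    """Extract <userId> from a minimal SOAP body and call the JSON API."""
    root = ET.fromstring(envelope)
    user_id = root.find(".//userId").text
    return json_get_user(user_id)

SOAP_REQUEST = "<Envelope><Body><GetUser><userId>42</userId></GetUser></Body></Envelope>"
response = soap_adapter(SOAP_REQUEST)
```

Because the adapter is a thin translation layer, it can live in its own small service and be retired independently once SOAP clients have moved off.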
What should I scrap?
There will always be certain pieces of an old API architecture that won't properly convert to serverless, such as lock files or local files. Anything that requires sequential ordering, such as an automated ID that increments by one with every new purchase, does not work well with web-scale databases. On the other hand, elements like single sign-on (SSO) are a good fit for serverless.
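The usual replacement for an incrementing ID is a random identifier that requires no global counter, so any number of concurrent writers can generate IDs without coordinating. A brief sketch:

```python
# A sketch of replacing a sequential purchase ID with a random UUID, which
# avoids the single global counter that web-scale databases handle poorly.

import uuid

def new_purchase_id() -> str:
    """Collision-resistant ID that needs no coordination between writers."""
    return str(uuid.uuid4())

a, b = new_purchase_id(), new_purchase_id()
```

The trade-off is that UUIDs carry no ordering; if you need "what came after what," store a timestamp alongside the ID rather than encoding order into the key.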
If your authentication process jams up the works because it only allows six-character passwords, it's time to scrap that. Authentication and authorization are the most important parts of any API, and they can also be the biggest hindrance. Caching can help, but if the actual login verification process takes a few minutes, your API is essentially broken. Consider swapping out an old authentication service like AD for a modern SSO system. Or integrate a social media login that enables customers to easily sign up and log in.
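The caching mentioned above can be sketched as a TTL-bounded memo in front of the slow verification step, so repeat requests with the same token skip the expensive backend. The verifier function, token values, and five-minute TTL are all hypothetical.

```python
# A sketch of caching authentication results so a slow verification backend
# (an aging directory service, say) isn't hit on every request. The verifier
# and TTL are hypothetical; a real system would also bound the cache size.

import time

_cache = {}  # token -> (expiry_time, is_valid)

def slow_verify(token: str) -> bool:
    """Stand-in for an expensive credential check against AD or similar."""
    return token == "good-token"

def cached_verify(token: str, ttl: float = 300.0, now=time.monotonic) -> bool:
    entry = _cache.get(token)
    if entry and entry[0] > now():
        return entry[1]          # served from cache, slow path skipped
    ok = slow_verify(token)
    _cache[token] = (now() + ttl, ok)
    return ok

first = cached_verify("good-token")   # hits the slow verification path
second = cached_verify("good-token")  # answered from the cache
```

Note the design choice of caching failures too, which blunts repeated bad-credential attempts but delays recovery for a user whose access was just restored; pick the TTL per result type accordingly.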
Not everything needs to be migrated to serverless
The most important thing to remember during a serverless migration is that not everything needs to migrate. If 90% of your requests constantly read a list of products, that read path makes sense to move to serverless. But if you only create one new product a month, migrating the write path does not. Instead, keep a back-end SQL database and synchronize its changes to Amazon DynamoDB in a format that the serverless API can read, but not write to.
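The split described above can be sketched end to end: product writes stay in SQL, a sync step copies rows into a key-value read model, and the serverless read path only ever touches that view. Here an in-memory SQLite table stands in for the SQL back end and a plain dict for DynamoDB; table and field names are invented for the example.

```python
# A sketch of the read-model sync: writes stay in SQL, a sync step copies
# changes into a key-value store the serverless read path consumes. A dict
# stands in for DynamoDB; schema and names are made up for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, name TEXT, price INTEGER)")
conn.execute("INSERT INTO products VALUES ('sku-1', 'Widget', 999)")
conn.commit()

read_model = {}  # stand-in for the DynamoDB table the serverless API reads

def sync_products():
    """Copy every product row into the read-only key-value view."""
    for sku, name, price in conn.execute("SELECT sku, name, price FROM products"):
        read_model[sku] = {"name": name, "price": price}

def serverless_get_product(sku: str) -> dict:
    """Serverless read path: consults only the synced view, never the SQL DB."""
    return read_model.get(sku, {})

sync_products()
product = serverless_get_product("sku-1")
```

In production the "sync step" would typically be change-data capture or a scheduled job rather than a full copy, but the one-way data flow is the point: the rare writes never have to scale, and the frequent reads never block on the relational database.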