Alan Yrok


How to address and mitigate serverless security issues

There are two major security implications of serverless cloud infrastructure: secure coding and identity and access management. Uncover best practices to mitigate these risks.

As DevOps and microservices architectures become more prevalent, app dev teams are looking to lighter-weight cloud services to deploy application workloads. One common choice is serverless computing, which offloads the entire workload -- the container and OS instance -- to the provider's backplane, enabling developers to build microservices apps simply by uploading app code to run within the cloud provider's environment.

However, as with any new technology deployment, the increased use of serverless cloud infrastructure exposes new potential risks that security teams will need to evaluate and address. Read on to learn about the top serverless security issues and how to best mitigate them.

Serverless coding risks

Serverless computing is especially subject to coding security risks. For example, malicious code executed through a serverless function can stay resident in memory for extended periods of time, as evidenced in Rich Jones' "Gone in 60 Milliseconds" talk. Inherent weaknesses in serverless environments themselves are also a concern, as they may facilitate remote code execution, as demonstrated in Eric Johnson's 2020 RSA Conference session "Defending Serverless Infrastructure in the Cloud."

With the container host platform largely out of scope, teams must focus on securing input, code and execution. When developing a strategy to mitigate serverless security risks, organizations should first focus on static code review. Some third-party providers integrate with serverless environments, such as AWS Lambda, to scan code.

Many serverless coding risks apply regardless of where and how the code runs. Core issues include the following:

  • Event injection. Injection flaws can still affect serverless code and how it handles input. Improved input validation and predefined database layer logic -- stored procedures, for example -- can help mitigate this issue.
  • Broken authentication. Enforcement of strong authentication and role definitions for users of serverless apps should be emphasized.
  • Insecure app secret storage. API keys, encryption keys and other secrets are often involved in serverless function execution. Security teams should ensure developers are using mature secrets management tools and cloud-specific key stores.
  • Improper exception handling. Developers should ensure errors and exceptions are handled appropriately, preventing stack traces from being exposed or displayed.
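The input validation and database layer guidance above can be sketched in Python. This is a minimal, illustrative example, not production code: the `orders` table and `get_user_orders` function are hypothetical, and SQLite stands in for whatever database a real serverless function would use.

```python
import sqlite3

def get_user_orders(conn, user_id: str):
    """Fetch a user's orders, validating and parameterizing the input.

    The 'orders' table and field names here are illustrative only.
    """
    # Basic input validation: reject anything that isn't a plain numeric ID.
    if not user_id.isdigit():
        raise ValueError("user_id must be numeric")

    # Parameterized query: the driver binds the value, so attacker-supplied
    # input is never interpolated into the SQL string.
    cur = conn.execute(
        "SELECT id, total FROM orders WHERE user_id = ?", (user_id,)
    )
    return cur.fetchall()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 42, 9.99)")

print(get_user_orders(conn, "42"))          # [(1, 9.99)]
try:
    get_user_orders(conn, "42 OR 1=1")      # injection attempt is rejected
except ValueError as e:
    print("rejected:", e)
```

The same pattern applies to any event field a serverless function consumes: validate the shape of the input first, then let the database driver bind the value rather than building query strings by hand.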
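The exception handling point can likewise be sketched as a Lambda-style handler that logs the full traceback internally but returns only a generic message to callers. The `(event, context)` signature follows AWS Lambda's Python convention; the business logic inside is a placeholder.

```python
import json
import logging

logger = logging.getLogger(__name__)

def handler(event, context):
    """Lambda-style handler that never leaks stack traces to the caller."""
    try:
        # Hypothetical business logic: divide two fields from the event.
        result = event["numerator"] / event["denominator"]
        return {"statusCode": 200, "body": json.dumps({"result": result})}
    except Exception:
        # Log the full traceback for operators to investigate...
        logger.exception("unhandled error in handler")
        # ...but return only a generic error to the client, so internal
        # details such as stack traces are never exposed or displayed.
        return {"statusCode": 500, "body": json.dumps({"error": "internal error"})}
```

A malformed event, such as a zero denominator, produces a 500 response whose body contains no traceback or exception class name.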

Identity and access issues in serverless environments

Another major serverless security concern is privilege and permission control, which should be enforced across all serverless applications with strong cloud identity and access management (IAM). Organizations must minimize the permissions that serverless functions run with, as well as the permissions of services accessing the serverless functions.
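As a sketch of what minimized permissions look like in practice, the following is a hypothetical least-privilege AWS IAM policy for a function's execution role: it allows only two read actions against a single, named DynamoDB table (the table name and account ID are placeholders). Anything the function does not need, such as write, delete or wildcard access, is simply absent.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders"
    }
  ]
}
```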

For starters, log every serverless API event -- including access, modification or execution -- within the native cloud environment. This can be achieved with services such as AWS CloudTrail or Google Cloud operations suite, formerly Stackdriver.
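Once events are being logged, they need to be reviewed. As a minimal sketch, the snippet below parses a trimmed-down CloudTrail-style record and extracts who did what to which service; real CloudTrail records carry many more fields, and the sample values here are illustrative.

```python
import json

# A trimmed, illustrative CloudTrail record for a Lambda invocation;
# real records contain many additional fields.
sample_log = json.dumps({
    "Records": [{
        "eventTime": "2020-06-01T12:00:00Z",
        "eventSource": "lambda.amazonaws.com",
        "eventName": "Invoke",
        "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"}
    }]
})

def summarize_events(log_text: str):
    """Return (who, service, action) tuples from a CloudTrail log file."""
    records = json.loads(log_text)["Records"]
    return [
        (r["userIdentity"].get("arn", "unknown"),
         r["eventSource"],
         r["eventName"])
        for r in records
    ]

print(summarize_events(sample_log))
```

In practice, summaries like this would feed alerting or a SIEM rather than be printed, but the core task is the same: tie every access, modification or execution back to an identity.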

More serverless security concerns, service offerings

Coding and IAM aren't the only serverless security issues. Other areas to focus on are related to configuration: Insecure deployment settings should be identified and mitigated by limiting memory usage and possible serverless input vectors. Tuning execution timeouts can help mitigate denial-of-service attacks and financial resource exhaustion.
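These configuration limits are typically set in the deployment template. The fragment below is a hedged sketch using AWS SAM syntax; the function name, handler and runtime are placeholders, while `MemorySize` and `Timeout` are the properties that cap memory usage and execution time.

```yaml
Resources:
  OrdersFunction:                  # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      MemorySize: 128              # cap memory to limit cost and blast radius
      Timeout: 5                   # short timeout curbs DoS and runaway billing
```

Equivalent settings exist in other clouds and frameworks; the principle is to make the limits explicit rather than accept generous defaults.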

There are currently limited cloud-native options for securing serverless. To truly secure serverless, an organization would need fully integrated static code analysis, memory analysis for buffer injection, dynamic scanning of input and behavior, and logging of all activity, among other security controls. Most cloud providers only provide the ability to define operational elements, such as memory size and usage, logging, and permissions on the functions and serverless services themselves.

There are some tools that can be integrated to provide more visibility into serverless execution. These include Check Point Protego and Dashbird. Organizations may also consider tools that add security to the continuous integration/continuous delivery stages and in-cloud execution. Many container security tools, such as Palo Alto Prisma and Aqua, can do this. But security practitioners, be warned: all of these offerings require a significant dedication of resources and likely won't handle all possible use cases.

Next Steps

Create a private endpoint to secure Azure Functions apps
