Serverless computing offers a highly efficient way to deploy and run software on demand. Its rise in popularity can be attributed to its simplicity, lower costs and faster time to market. But like any other technology, it needs proper security.
Serverless frameworks require developers and ops teams to rethink their security approach. Follow these security best practices to limit your vulnerabilities and protect your serverless apps.
Write simple functions to reduce attack vectors
To improve serverless security, write minimalistic functions that call only the resources needed to achieve a given task. Minimalism decreases potential attack vectors and limits the potential ramifications of a vulnerability within one function. The fewer resources a function can access, the less harm attackers can do if they gain control of that function.
Also, write different serverless functions for different tasks, and separate those functions from one another as much as possible. This isolation decreases the likelihood that a vulnerability in one function will affect other functions.
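To make the idea concrete, here is a minimal sketch of two single-purpose, isolated handlers. The handler names, event fields and business logic are hypothetical; the point is that each function validates only its own input and touches no resources the other needs.

```python
import json

# Hypothetical sketch: each handler performs exactly one task and touches
# only the data it needs -- no shared clients, no overlapping permissions.

def resize_request_handler(event, context=None):
    """Validates an image-resize request; it never touches order data."""
    width = int(event["width"])
    height = int(event["height"])
    if width <= 0 or height <= 0:
        return {"statusCode": 400, "body": json.dumps({"error": "invalid size"})}
    return {"statusCode": 200, "body": json.dumps({"width": width, "height": height})}

def order_status_handler(event, context=None):
    """Looks up a single order ID; it has no access to image storage."""
    order_id = event.get("orderId")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing orderId"})}
    return {"statusCode": 200, "body": json.dumps({"orderId": order_id, "status": "pending"})}
```

If either handler is compromised, the attacker gains only the narrow capability that one function carries.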
Limit dependencies and patch the ones you need
It is common to include dependencies from third-party repositories in serverless code. Avoid this unless absolutely necessary, because you have less ability to secure code you don't control. The developers who created it might not follow the same security standards you do, and if problems arise, you will depend on them for a fix -- which may not come as quickly as you need it. When dependencies are required, always pull the latest stable versions.
Keep careful inventory of the dependencies in serverless code, and use vulnerability detection tools to receive notification of any security problems discovered in those dependencies.
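One simple inventory check is to flag any dependency that isn't pinned to an exact version, since unpinned entries make it impossible to know which code actually ships. The sketch below scans a pip-style requirements list for lines lacking an exact `==` pin; the package names are examples.

```python
import re

def find_unpinned(requirements_text):
    """Return dependency lines that are not pinned to an exact version."""
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # An exact pin looks like "package==1.2.3"; anything else is flagged.
        if not re.match(r"^[A-Za-z0-9._-]+==[\w.]+$", line):
            unpinned.append(line)
    return unpinned

reqs = """\
requests==2.31.0
boto3>=1.26
# build tooling
pyyaml
"""
print(find_unpinned(reqs))  # boto3 and pyyaml are not exactly pinned
```

Pinned versions also make it straightforward to cross-reference your inventory against vulnerability databases.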
Regularly analyze and test serverless functions
Another best practice is to analyze serverless functions for potential vulnerabilities in their code. Because teams often develop and deploy functions in a separate pipeline from the rest of an app, it's crucial to include them in routine security tests.
Implement monitoring tools designed for serverless
The importance of monitoring serverless environments may seem obvious. However, since it can be difficult to properly monitor a serverless environment with existing enterprise security tools, this is an important point to emphasize for ops teams.
It's often possible to pull metrics from a serverless environment into a security information and event management (SIEM) system. But most legacy SIEM tools were not designed to detect anomalous behavior within event-driven frameworks. For example, conventional SIEMs might mark a process that runs briefly and then stops as an anomaly because that type of behavior is not typical on conventional infrastructure -- even though it is entirely normal for a serverless function. Customize SIEM policies to help a security analytics system understand serverless, or adopt a detection tool designed specifically for serverless security, such as PureSec or Twistlock.
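The short-lived-process example above can be expressed as a custom rule. This is a hypothetical sketch, not any vendor's rule syntax: a brief run is suppressed when the process matches a known function inventory, and flagged only when it doesn't.

```python
# Hypothetical custom SIEM rule: a short-lived process is only anomalous
# when it is NOT a known serverless function. The inventory is assumed.

KNOWN_FUNCTIONS = {"resize-image", "order-status"}
SHORT_LIVED_SECONDS = 5  # example threshold

def is_anomalous(process_name, runtime_seconds):
    short_lived = runtime_seconds < SHORT_LIVED_SECONDS
    if short_lived and process_name in KNOWN_FUNCTIONS:
        return False   # normal serverless behavior; don't alert
    return short_lived  # short-lived *and* unknown: flag for review
```

The same pattern -- whitelist expected serverless behavior, alert on the remainder -- applies to other legacy detection rules as well.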
Customize access policies based on least privilege
Administrators attach Lambda control policies to specific users, roles or groups. For example, users or services could have full access to a Lambda function and its associated resources, or they could be restricted to read-only access. They could also have specific role-based permissions to invoke functions.
And while these policies are useful starting points for serverless security, don't rely exclusively on vendor-supplied configurations to control your serverless resources. Vendor defaults are general-purpose settings, not tailored to your specific needs. Attackers also know the default configuration, which makes potential attack vectors easier for them to find.
Instead, take the vendor-supplied configuration that provides the least amount of access, then build up from there.
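As a sketch of building up from least privilege, the IAM-style policy document below grants only the ability to invoke one specific function. The account ID, region and function name are placeholders; the `lambda:InvokeFunction` action and policy shape follow AWS's IAM policy format.

```python
import json

# Least-privilege starting point: permit invoking exactly one function,
# nothing else. Broaden only when a concrete need arises -- never "lambda:*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            # Placeholder ARN -- substitute your own account, region and function.
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:resize-image",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Each time a user or service genuinely needs another action, add that single action to the statement rather than reaching for a broader managed policy.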
Auto scale wisely to guard against DDoS attacks and other risks
Enterprises value serverless functions because they can quickly scale. However, if ops teams configure functions to scale rapidly without reasonable limits, attackers -- or just poorly written code -- can trigger a large volume of functions in a short time, which leads to significant costs.
Find a happy medium that lets functions scale as much as they need to for legitimate use, but also prevents costly abuse via autoscaling limits. It takes time to find this middle ground, and ops engineers may need to adjust it manually from time to time.
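One way to find that middle ground is to derive a concurrency cap from expected traffic rather than guessing. The sketch below uses a Little's-law estimate (concurrent executions ≈ arrival rate × average duration) plus headroom; the traffic numbers are examples, not recommendations.

```python
# Sketch: derive a reserved-concurrency cap from expected peak traffic so
# a flood of invocations cannot scale -- and bill -- without bound.

def concurrency_cap(peak_requests_per_second, avg_duration_seconds, headroom=1.5):
    """Little's law estimate: concurrency ~= arrival rate x duration, plus headroom."""
    baseline = peak_requests_per_second * avg_duration_seconds
    return int(baseline * headroom)

cap = concurrency_cap(peak_requests_per_second=100, avg_duration_seconds=0.2)
print(cap)  # 30 concurrent executions
```

On AWS, a cap like this could then be applied as a function's reserved concurrency setting, and revisited as real traffic data accumulates.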