In the past decade, serverless computing has been one of the key innovations cloud-based technologies brought into the software development lifecycle. The ability to develop and launch applications without directly managing underlying compute infrastructure can significantly accelerate the early stages of the software release cycle.
There are many areas related to operational activities where serverless can introduce some degree of complexity, one of which is cost. The first step to overcoming high serverless costs in AWS is understanding serverless pricing. Dig into how serverless costs are calculated, review pricing scenarios and explore common costly culprits, such as overprovisioning, excessive logging and cold starts.
How are serverless costs calculated?
Serverless computing typically results in low or even negligible costs during early development stages. However, there are many cases where this advantage diminishes once an application scales.
Cost is calculated based on GB-seconds: the number of GBs of memory allocated to a function, multiplied by the function's total execution time accumulated over a billing period. That total is determined by how long each execution takes to complete and by the number of triggered executions.
For example, consider a function deployed in the U.S. East (N. Virginia) Region, running on x86 with a 1 GB memory allocation, executed once per second -- approximately 2.6 million times per month -- with an average execution time of 1 second. It would accumulate 2.6 million GB-seconds throughout the month. In AWS, this would result in about $43 a month in duration cost, plus a very low cost related to the number of requests -- $0.20 per million -- for a total of approximately $43.50 per month.
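The arithmetic above can be sketched as a small helper. The per-GB-second and per-request rates below are the published us-east-1 x86 Lambda rates at the time of writing; treat them as assumptions to verify against current pricing, and note that free-tier allowances are ignored here.

```python
# Estimate monthly Lambda cost from the three main pricing factors.
# Rates are assumed us-east-1 x86 values; verify against current pricing.
GB_SECOND_RATE = 0.0000166667    # USD per GB-second (duration)
REQUEST_RATE = 0.20 / 1_000_000  # USD per request

def monthly_lambda_cost(memory_gb, executions_per_month, avg_duration_s):
    """Duration cost (GB-seconds x rate) plus request cost."""
    gb_seconds = memory_gb * executions_per_month * avg_duration_s
    duration_cost = gb_seconds * GB_SECOND_RATE
    request_cost = executions_per_month * REQUEST_RATE
    return duration_cost + request_cost

# 1 GB function, once per second (~2.6 million executions/month), 1 s each:
cost = monthly_lambda_cost(1, 2_600_000, 1)
print(f"${cost:.2f} per month")  # roughly $43 in duration plus ~$0.50 in requests
```

Exact totals vary slightly depending on whether the month is counted as 30 or 31 days, which is why the article rounds to approximately $43.50.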
Keep in mind that there can be additional costs. For example, in the case of Lambda, the processor type also has an impact on cost, given that ARM is approximately 20% cheaper than x86.
Understand pricing scenarios
To avoid unexpected costs, understand pricing scenarios for all serverless components and link them to the usage and performance requirements expected in a live application. Examples include the following:
- Transaction volume. Minimum, average and maximum requests per unit of time.
- Volume patterns. Baseline usage vs. spikes.
- Consumed resources. Required compute capacity per execution, execution duration, storage and external integrations.
The expected response time is a key factor to consider, given that many applications have strict maximum response time requirements. Sometimes these requirements cannot be met with serverless components, or meeting them results in higher compute costs compared to a server-based counterpart. Perform these estimations for each serverless component in a given cloud architecture.
Three main factors affect pricing of serverless functions, such as AWS Lambda and Azure Functions:
- Memory allocation.
- Number of executions.
- Duration of each execution.
Even though a single function with relatively high volume can incur reasonable costs, having many functions or overprovisioning memory can result in high costs. Applications with large request volumes -- over 100 requests per second, for example -- can quickly reach thousands of dollars per month.
For applications with large volumes or a high number of functions, compare cost and performance against an equivalent server-based deployment, such as Amazon EC2. Develop code in a way that can easily migrate from serverless to a server-based deployment.
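A quick way to sanity-check that comparison is to put both monthly costs side by side. The sketch below uses the Lambda duration rate from earlier; the EC2 hourly rate is a placeholder assumption (roughly a small on-demand instance) and should be replaced with current pricing for the instance type under consideration.

```python
# Hedged comparison: Lambda duration cost vs. an always-on EC2 instance.
GB_SECOND_RATE = 0.0000166667  # USD per GB-second, assumed Lambda x86 rate
EC2_HOURLY_RATE = 0.0416       # USD/hour, placeholder small-instance rate
HOURS_PER_MONTH = 730

def lambda_duration_monthly(memory_gb, executions, avg_duration_s):
    return memory_gb * executions * avg_duration_s * GB_SECOND_RATE

def ec2_monthly(instance_count=1):
    return instance_count * EC2_HOURLY_RATE * HOURS_PER_MONTH

# At ~100 requests/second (about 260 million executions/month), even short
# 200 ms executions on a 1 GB function far exceed one small EC2 instance:
high_volume = lambda_duration_monthly(1, 260_000_000, 0.2)
print(f"Lambda: ${high_volume:,.0f}/mo vs. EC2: ${ec2_monthly():.0f}/mo")
```

The crossover point depends heavily on memory allocation, duration and traffic shape, so rerun the numbers with real workload figures rather than relying on these placeholders.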
Overprovisioning compute capacity results in unexpected costs. Some services are particularly prone to overprovisioning, such as AWS Fargate and Amazon Aurora Serverless. In a Linux deployment, each Fargate vCPU costs approximately $29 per month and each GB of memory approximately $3.20 per month. The service allows up to 16 vCPUs and up to 120 GB of memory allocation, which could result in hundreds of dollars of unnecessary cost if these parameters are overprovisioned or containers are left running unnecessarily.
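To see how quickly those two parameters compound, the sketch below uses the approximate monthly rates cited above (~$29 per vCPU, ~$3.20 per GB for Linux/x86); the "right-sized" task shape is an illustrative assumption.

```python
# Fargate overprovisioning sketch, using the approximate monthly rates
# cited in the text (assumed Linux/x86 pricing).
VCPU_MONTHLY = 29.0  # USD per vCPU per month
GB_MONTHLY = 3.20    # USD per GB of memory per month

def fargate_monthly(vcpus, memory_gb):
    return vcpus * VCPU_MONTHLY + memory_gb * GB_MONTHLY

right_sized = fargate_monthly(1, 2)         # an assumed modest task shape
overprovisioned = fargate_monthly(16, 120)  # service maximums
print(f"Right-sized: ${right_sized:.2f}/mo, "
      f"maxed out: ${overprovisioned:.2f}/mo, "
      f"waste: ${overprovisioned - right_sized:.2f}/mo")
```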
When it comes to serverless databases, Aurora Serverless has a parameter known as Aurora Capacity Units (ACUs), which defines the amount of vCPU and memory -- 2 GiB per ACU -- allocated to the serverless database. A minimum ACU value is configured per database. Given that each ACU costs approximately $86 per month, overprovisioning this parameter can result in hundreds or even thousands of dollars of unnecessary costs.
Logging is another area that can result in unexpected costs. For example, Lambda functions automatically write log data to CloudWatch Logs. Cost issues can arise when a particular function logs a large amount of data as part of its code implementation. The function described previously results in 2.6 million executions per month. If it writes 100 KB of log data per execution, that amounts to approximately 259 GB of log data ingested per month. CloudWatch Logs charges $0.50 per GB of data ingested, which results in approximately $130 a month of ingestion cost for this one function.
If several functions are deployed with similar patterns, this can add up to thousands of dollars in unnecessary log ingestion and storage fees. If log retention is set to never expire, storage cost grows by approximately $8 per month as data accumulates; after one year, it reaches approximately $96 per month, per function.
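The ingestion and accumulating-storage figures above can be reproduced with the sketch below. The $0.50/GB ingestion and $0.03/GB-month storage rates are assumed us-east-1 CloudWatch Logs prices; the execution count is rounded to 2.6 million, so totals differ slightly from the article's 259 GB figure.

```python
# CloudWatch Logs cost growth for a chatty function, with assumed
# us-east-1 rates: $0.50/GB ingested, $0.03/GB-month stored.
INGEST_RATE = 0.50   # USD per GB ingested
STORAGE_RATE = 0.03  # USD per GB-month stored

executions_per_month = 2_600_000
kb_per_execution = 100
gb_ingested_per_month = executions_per_month * kb_per_execution / 1_000_000

ingest_cost = gb_ingested_per_month * INGEST_RATE  # ~$130/month
print(f"Ingestion: ${ingest_cost:.2f}/mo")

# With retention set to never expire, stored data (and its cost) accumulates:
for month in (1, 12):
    stored_gb = gb_ingested_per_month * month
    print(f"Month {month}: storage ${stored_gb * STORAGE_RATE:.2f}/mo")
```

The storage line starts small (~$8 in month one) but, unlike ingestion, it compounds every month that retention stays unbounded.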
Cost of cold start
Serverless functions have an important behavior known as cold start, which happens when a function has not been invoked for a period of time. The next invocation incurs longer than usual latency, which can negatively affect the user experience. To mitigate this behavior, AWS offers a feature called provisioned concurrency, which deploys configurable, always-on compute capacity that keeps a function ready to handle transactions without incurring cold start latencies.
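Provisioned concurrency is itself billed per GB-second of capacity kept warm, so it trades cold start latency for a fixed monthly cost. A hedged sketch, assuming the approximate us-east-1 x86 provisioned-concurrency rate at the time of writing:

```python
# Cost of keeping execution environments warm with provisioned concurrency.
# The rate is an assumed us-east-1 x86 value; verify against current pricing.
PC_RATE = 0.0000041667         # USD per GB-second of provisioned concurrency
SECONDS_PER_MONTH = 2_592_000  # 30 days

def provisioned_concurrency_monthly(memory_gb, warm_instances):
    return memory_gb * warm_instances * SECONDS_PER_MONTH * PC_RATE

# Keeping two 1 GB execution environments warm for a full month:
print(f"${provisioned_concurrency_monthly(1, 2):.2f}/mo")
```

This charge applies whether or not the warm capacity is actually invoked, so size the provisioned concurrency setting to the observed baseline traffic rather than the peak.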
Ernesto Marquez is the owner and project director at Concurrency Labs, where he helps startups launch and grow their applications on AWS. He particularly enjoys building serverless architectures, automating everything and helping customers cut their AWS costs.