In 2018, AWS introduced its initial lineup of Graviton processors, and since then, they have continued to evolve and improve. The latest iteration of Graviton processors showcases impressive enhancements, including “25% improved compute performance, up to double the floating-point performance, and up to double the speed for cryptographic workloads when compared to the previous generation…”
Note that they don’t compare against Intel or AMD. That’s because x86 processors usually win in raw, brute-force workloads. But in my experience there are many workloads where the ARM-based processors are comparable, and once you factor in the roughly 20% cost reduction, the argument becomes compelling. Nik Krichko wrote a great blog post benchmarking the three architectures.
Moving on to Lambda, a fundamental building block of serverless architectures. The pay-as-you-use model provides cost savings by eliminating payment for unused resources. However, once functions are deployed and running, there are few options for optimizing costs other than optimizing the code itself. Unfortunately, optimizing code is often impractical or time-consuming, diverting valuable engineering resources.
This is where ARM64 shines, offering a significant 20% cost reduction! Most of the latest runtimes are compatible with ARM, except for go1.x. If you’re currently using a supported runtime, it’s time to make the switch and start enjoying the cost savings. But if you’re using Go, keep reading for more information!
To get Go working we need to use the custom Amazon Linux 2 runtime, provided.al2. The custom runtime looks for a file named bootstrap in your deployment package; if it can’t find one, your function will return an error. You could create a boot script, but I’ve found it easier to just name my Lambda binary bootstrap. You will also need to set GOARCH to arm64.
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -ldflags="-w -s" -o bootstrap
It is also worth noting that you can add -tags lambda.norpc. This build tag strips the RPC support from the aws-lambda-go library (it is only needed by the legacy go1.x runtime), reducing binary size further, though it may not noticeably improve cold starts.
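Lambda expects the deployment package to be a zip archive with the bootstrap binary at its root. A typical packaging step looks like the following (the archive name deployment.zip is arbitrary; use whatever your IaC uploads):

```shell
# Cross-compile for the ARM64 custom runtime, then zip the result.
# The binary must sit at the root of the archive, named bootstrap.
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -tags lambda.norpc -ldflags="-w -s" -o bootstrap
zip deployment.zip bootstrap
```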
Then it’s as simple as updating your Terraform (or other IaC) to reflect the change and enjoying your savings.
resource "aws_lambda_function" "arm_hello_world_lambda" {
  s3_bucket        = aws_s3_bucket.lambda_bucket.id
  s3_key           = aws_s3_object.file_upload.key
  source_code_hash = aws_s3_object.file_upload.source_hash
  role             = aws_iam_role.role.arn
  function_name    = "arm-hello-world-lambda"
  runtime          = "provided.al2"
  handler          = "bootstrap"
  architectures    = ["arm64"]
  timeout          = 2
}