Battle of the Serverless — Part 2: AWS Lambda Cold Start Times

This experiment continues the work from our pretend suite of microservices, exposed via API Gateway to form an API code-named Slipspace at a mock company called STG. In the Halo universe, Slipspace drives are how ships travel so quickly to different sectors of the galaxy through something called Slipstream Space, so it seemed like a fitting name for an API that demands awesome warp speeds.

Part 1 is here: https://medium.com/@shouldroforion/battle-of-the-serverless-part-1-rust-vs-go-vs-kotlin-vs-f-vs-c-32a66613f919

Part 1.5 is here: https://medium.com/@shouldroforion/battle-of-the-serverless-part-1-5-608a73c5f9fa

TL;DR

Rust wins, with Go, TypeScript/Node.js, and Python close behind, all coming in at roughly 1–2 second cold start durations for Lambda functions. Kotlin is decent, and just stay away from C# and F# if cold start times matter for your use case.

What is a cold start?

Serverless is awesome. You don’t have to worry at all about scaling, right? Wrong. Lambda functions run on demand when they are called, and are thrown away when no longer required. This “spin up and destroy” cycle leads to an event called a cold start, one of the key considerations a serverless architecture needs to address.

Stolen from a re:Invent presentation on Lambda execution lifecycles.

A cold start is what happens the first time your function is executed after a period of inactivity (~10 minutes as of October 2019). During a cold start, your function must be downloaded, containerized, and bootstrapped before your code can run. After your code has been bootstrapped, the function is considered “warm” until its container is destroyed.
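The container-reuse behavior behind this lifecycle is easy to observe from inside a handler: module-level code runs once per container, so a module-level flag distinguishes a cold invocation from a warm one. Below is a minimal sketch in Python; the handler name and response fields are my own illustration, not from the benchmark code.

```python
import time

# Module-level code runs once per container (the cold start);
# warm invocations reuse this state.
_container_started_at = time.time()
_is_cold = True

def handler(event, context):
    """Report whether this invocation landed on a fresh (cold) container."""
    global _is_cold
    cold = _is_cold  # True only for the first invocation in this container
    _is_cold = False
    return {
        "cold_start": cold,
        "container_age_s": round(time.time() - _container_started_at, 3),
    }
```

Invoking the same deployed function twice in quick succession would report `cold_start: true` the first time and `false` the second, since Lambda reuses the warm container.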

The results

Testing for this round of cold start times covers Lambda functions written in Rust, Go, Kotlin, F#, C#, Python, and TypeScript/Node.js. In worst-to-best order, here are the average cold start times from Charles Proxy, based on ~200K requests hitting each function every 12 minutes over the period of a week. The sweet spot for triggering a cold start was right around 10 minutes (±1 minute), meaning our function containers tended to stay warm for a maximum of about 10 minutes without requests hitting them.
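The probing setup above can be sketched roughly as a timed GET against each endpoint every 12 minutes. This is a simplified stand-in (the actual measurements came from Charles Proxy, and the endpoint URL below is a placeholder, not a real one from the experiment):

```python
import time
import urllib.request

# Placeholder endpoint; the real experiment hit one URL per language runtime.
ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/dev/hello"

def timed_request(url):
    """Wall-clock latency in seconds for one GET, full handshake included."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def summarize(latencies):
    """Average latency in seconds over a list of samples."""
    return sum(latencies) / len(latencies)

def probe(interval_s=12 * 60, samples=50):
    """Hit the endpoint every `interval_s` seconds and collect latencies."""
    latencies = []
    for _ in range(samples):
        latencies.append(timed_request(ENDPOINT))
        time.sleep(interval_s)
    return summarize(latencies)
```

With a 12-minute interval, every probe lands past the ~10-minute warm window, so each sample should capture a cold start.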

F# was terrible, bombing on the first 15 requests and succeeding on the next 15, with cold starts taking ~15secs to complete.
C# wasn’t too much better than F#. The first 15 requests took ~15secs for cold starts to complete.
Kotlin shaved off ~6–8 seconds for cold start times, but still came in at roughly 6.5secs cold start durations.
From here on out, we’re seeing 2–3secs cold start times. Python is hitting ~2.3secs cold start durations.
TypeScript is still showing very impressive numbers with ~2.1secs cold start durations.
Go is our runner-up, coming in at 2.0secs average cold start duration.
Rust wins and continues to show astounding consistency in performance. It isn’t the quickest at warm function execution, but it is the most consistent in execution times.

For the top 4 performers in cold starts (Rust, Go, TypeScript/Node.js, and Python), here are more details showing where durations tended to cluster. Note that this includes the entire HTTP handshake, including TLS, DNS, etc.

Top 4 contenders were all very close in cold start durations.
Rust durations over 50 cold starts
Go durations over 50 cold starts
TypeScript/Node.js durations over 50 cold starts
Python durations over 50 cold starts

FIN/ACK

This benchmarking experimentation is a blast. I learned a ton about cold starts in the process: how often they occur, when they typically happen, and how to mitigate them. Fairly obviously, the interpreted languages (Python and TypeScript/Node.js) have quick spin-up times. But not so obviously, the compiled languages (Rust and Go) beat them by milliseconds. I love that compiled language runtimes are improving so rapidly, and I LOVE that BYOR runtimes like Rust are competitive.
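On mitigation: one common workaround (not used in this benchmark) is a scheduled “keep-warm” ping so the container rarely idles past the ~10-minute window. With the Serverless Framework that might look like the fragment below; the function name and handler path are hypothetical.

```yaml
functions:
  hello:
    handler: handler.hello  # hypothetical handler path
    events:
      # Keep-warm schedule: invoke every 5 minutes so the container
      # stays inside the ~10-minute warm window.
      - schedule: rate(5 minutes)
```

The trade-off is paying for the extra invocations, and a single scheduled ping only keeps one concurrent container warm.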
