AWS Lambda announced a standardized way to handle non-code configuration that’s consistent across Lambda runs in the form of environment variables. Anyone who’s read about 12 Factor Apps or used Heroku/OpenShift for any length of time will feel right at home. You set the environment variables on your function, and they show up in the process environment at execution time.
In the Serverless Framework
On the ball as usual, the Serverless Framework team has merged a PR adding environment variable support. Here’s what my little test project’s configuration looks like with environment variables set.
service: testenvvars
provider:
  name: aws
  runtime: python2.7
  environment:
    TESTNUM: 27
    TESTSTR: hello
functions:
  hello:
    handler: handler.hello
With those set, I can run serverless deploy normally, and those vars are set for me. To check, I wrote a Python handler for the hello function that will respond with the environment.
import datetime, os

def hello(event, context):
    e = dict(os.environ.items())
    # specifically add the two vars we set
    e['relevant'] = {k: os.environ[k] for k in ('TESTNUM', 'TESTSTR')}
    # and a timestamp
    e['relevant']['timestamp'] = datetime.datetime.utcnow().isoformat()
    return e
The relevant portion of the output is pretty much what you’d expect. You can see the full output in this gist, but here are the important bits.
{
  "timestamp": "2016-11-21T19:26:22.928586",
  "TESTNUM": "27",
  "TESTSTR": "hello"
}
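Note that TESTNUM comes back as the string "27", even though it’s a number in serverless.yml: everything lands in the process environment as a string, so numeric config needs explicit parsing. Here’s a minimal sketch of that (the int_env helper name is mine):

```python
import os

def int_env(name, default=0):
    """Read an integer config value from the (string-only) environment."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return int(raw)

# Simulate what Lambda would set for us:
os.environ['TESTNUM'] = '27'
print(int_env('TESTNUM'))      # parsed as an int
print(int_env('MISSING', 42))  # falls back to the default
```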
Restrictions
There are a couple of restrictions on environment variables:
- Maximum of 4KB total
- Must be simple scalar values (strings, numbers, etc)
The first restriction is due to KMS, which is where variables are stored. With KMS storage comes a certain peace of mind – it’s safe to put credentials, keys, and other secrets in the Lambda environment.
There’s still an argument to be made for manually encrypting/decrypting secrets. The secrets show up in the Lambda GetFunctionConfiguration API call, for example. If there are secrets that some team members (like contractors) shouldn’t be able to see, you’ll still need a different way of limiting access to those secrets.
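One way to do that is the usual envelope pattern: store a KMS-encrypted, base64-encoded ciphertext in the environment variable and decrypt it at runtime, so only roles with kms:Decrypt on the key can recover the plaintext. A sketch of that pattern (the load_secret helper and the DB_PASSWORD variable are made up; in Lambda the decrypt callable would wrap a KMS client call):

```python
import base64
import os

def load_secret(env_name, decrypt):
    """Pull base64 ciphertext from an env var and decrypt it.

    `decrypt` is any callable taking ciphertext bytes and returning
    plaintext -- in Lambda you'd pass a KMS client's decrypt, e.g.:
        kms = boto3.client('kms')
        load_secret('DB_PASSWORD',
                    lambda c: kms.decrypt(CiphertextBlob=c)['Plaintext'])
    """
    ciphertext = base64.b64decode(os.environ[env_name])
    return decrypt(ciphertext)
```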
In my testing, I also found some interesting consistency questions with environment variables. For this example, I used the same handler code from above while deploying new environment values. First, I started looping invocations of the Lambda function with serverless invoke.
while true; do
~/code/serverless/bin/serverless invoke -f hello | jq .relevant
done
Deploying the new values works fine, but the rollout isn’t monotonic: some requests saw the old value after other requests had already seen the new one. Here’s the output:
{
  "TESTNUM": "22",
  "TESTSTR": "hello"
}
{
  "TESTNUM": "22",
  "TESTSTR": "hello"
}
{
  "TESTNUM": "23",
  "TESTSTR": "hello"
}
{
  "TESTNUM": "22",
  "TESTSTR": "hello"
}
{
  "TESTNUM": "23",
  "TESTSTR": "hello"
}
{
  "TESTNUM": "23",
  "TESTSTR": "hello"
}
{
  "TESTNUM": "23",
  "TESTSTR": "hello"
}
Notice the one request where the TESTNUM value shows up as 22 after a request has already seen 23. The Lambda and KMS docs don’t make any promises about the consistency of new values rolling out, but it’s worth noting that even my limited testing uncovered inconsistency. When making config changes like rotating service keys, make sure you have an overlapping window where both keys work, since you can’t rely on config values changing exactly when you want.
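A simple way to survive that overlap window is to keep both keys configured and fall back: try the new key first, and retry with the old one if it’s rejected. A sketch of that idea (the variable names and the call signature here are made up for illustration):

```python
import os

def call_with_key_fallback(call, names=('SERVICE_KEY', 'SERVICE_KEY_OLD')):
    """Try `call(key)` with each configured key until one succeeds."""
    last_error = None
    for name in names:
        key = os.environ.get(name)
        if not key:
            continue  # that variable isn't set in this environment
        try:
            return call(key)
        except Exception as exc:  # real code should catch only auth errors
            last_error = exc
    raise last_error or RuntimeError('no usable key configured')
```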
Wrapping Up
Having env vars in my toolkit is pretty exciting. Being able to set config values without changing code will make my EBS Snapshot series (part 2) and Yesterdaytabase project easier to package and share with you all.
If you like this sort of thing, subscribe to the Serverless Code mailing list. Send questions to ryan@serverlesscode.com or @ryan_sb on Twitter.