Immutable Infrastructure with AWS and Ansible – Part 2 – Workstation

Introduction

In the first part of this series, we set up our workstation so that it could communicate with the Amazon Web Services APIs, and prepared our AWS account so that it was ready to provision EC2 compute infrastructure.  In this part, we'll start building the Ansible playbook that will provision our immutable infrastructure.

Ansible Dynamic Inventory

When working with cloud resources, Ansible can use a dynamic inventory to find and configure all of your instances within AWS, or any other cloud provider.  For this to work properly, we need to set up the EC2 external inventory script in our playbook.

  1. First, create the playbook folder (I named mine ~/immutable) and the inventory folder within it:
mkdir -p ~/immutable/inventory; cd ~/immutable/inventory
  2. Next, download the EC2 external inventory script from Ansible:
    wget https://raw.github.com/ansible/ansible/devel/contrib/inventory/ec2.py
  3. Make the script executable by typing:
    chmod +x ec2.py
  4. Configure the EC2 external inventory script by creating a new file called ec2.ini in the inventory folder alongside the ec2.py script.  If you specify the region you are working with, you will significantly decrease the execution time because the script will not need to scan every EC2 region for instances.  Here is a copy of my ec2.ini file, configured to use the us-east-1 region:
  5. Next, create the default Ansible configuration for this playbook by editing a file in the root of the playbook directory (in our example, ~/immutable), named ansible.cfg. This file should have the following text in it:
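The ec2.ini referenced in step 4 isn't reproduced in this copy of the post, but a minimal configuration restricted to us-east-1 might look like the following sketch (option names come from the sample ec2.ini that ships alongside ec2.py; treat the values as a starting point):

```ini
# ec2.ini - configuration for the ec2.py dynamic inventory script
[ec2]
# Only scan us-east-1; listing a single region dramatically speeds up runs
regions = us-east-1
regions_exclude =

# Address to use for non-VPC and VPC instances, respectively
destination_variable = public_dns_name
vpc_destination_variable = ip_address

# Skip Route53 and RDS lookups we don't need
route53 = False
rds = False

# Cache API results briefly to speed up repeated runs
cache_path = ~/.ansible/tmp
cache_max_age = 300
```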

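For step 5, a minimal ansible.cfg pointing Ansible at the dynamic inventory could look like this sketch (the remote_user and host_key_checking settings are assumptions that suit freshly launched Ubuntu instances):

```ini
# ansible.cfg - playbook-local defaults
[defaults]
# Use the inventory folder containing ec2.py
inventory = ./inventory
# Ubuntu AMIs use the "ubuntu" login account
remote_user = ubuntu
# New instances have unknown host keys; skip the interactive prompt
host_key_checking = False
```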
To test and ensure that this script is working properly (and that your boto credentials are setup properly), execute the following command:
./ec2.py --list
You should see the following output:
[Screenshot: JSON inventory output from ./ec2.py --list]

Ansible Roles

Roles are a way to automatically load certain variables and tasks into an Ansible playbook, and allow you to reuse your tasks in a modular way.  We will heavily (ab)use roles to make the tasks in our playbook reusable, since many of our infrastructure provisioning operations will use the same tasks repeatedly.

Group Variables

The following group variables will apply to every task in our playbook, unless we override them at the task level.  This lets us specify a set of sensible defaults that work for most provisioning use cases, while keeping the flexibility to change them when we need to.  Start by changing into your playbook folder (created earlier; I called mine immutable), then create a group_vars folder underneath it:
cd ~/immutable; mkdir group_vars
Now, edit the file all.yml inside the group_vars folder we just created so that it contains the following text. Please note that indentation is significant in YAML syntax:
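The original all.yml isn't shown in this copy of the post. Based on the variables the rest of this part relies on (region, availability zone, instance type), a sketch might look like the following — the variable names, key pair, and security group values are assumptions you should adapt:

```yaml
---
# group_vars/all.yml - defaults shared by every play in this playbook
ec2_region: us-east-1     # region to provision into
zone: us-east-1a          # comment out to use a random AZ in the region
instance_type: t2.micro   # small, short-lived build instance
key_name: my-keypair      # assumed: the name of your EC2 key pair
security_group: default   # assumed: a group that allows ssh (port 22)
```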

Now that our group_vars are set up, we can move on to creating our first role.

The Launch Role

The launch role performs an important first step: it searches for the latest Ubuntu 14.04 LTS (Long Term Support) AMI (Amazon Machine Image) published by Canonical, the creators of Ubuntu, then launches a new EC2 compute instance in the region and availability zone specified in our group_vars file.  Note that the launch role uses a very small compute instance (t2.micro), because this instance only lives for a short time while it is configured by subsequent tasks, then baked into a golden master AMI snapshot that lives in S3 object storage.

A quick note about Availability Zones: if you comment out the zone variable in the group_vars file, your instances will be launched in a random zone within the specified region.  This can be useful if you want to ensure that an outage in a single AZ doesn't take down every instance in your auto-scaling group.  There is a trade-off, though: data transfer between zones incurs a charge, so if your database, for example, lives in another zone, you'll pay a small network bandwidth fee to access it.

Create a new folder under your playbook directory called roles, and create a launch folder within it, then create a tasks folder under that, then edit a file called main.yml in this tasks folder:
mkdir -p roles/launch/tasks
Now, put the following contents in the main.yml file:
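The original tasks file isn't reproduced in this copy of the post. A sketch of roles/launch/tasks/main.yml, using the Ansible modules available at the time (ec2_ami_find and ec2) and the group_vars names assumed above, might look like this:

```yaml
---
# roles/launch/tasks/main.yml
- name: Find the latest Ubuntu 14.04 LTS AMI published by Canonical
  ec2_ami_find:
    region: "{{ ec2_region }}"
    owner: "099720109477"   # Canonical's AWS account ID
    name: "ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"
    sort: name
    sort_order: descending
    sort_end: 1
  register: ami_find

- name: Launch a new t2.micro instance from that AMI
  ec2:
    region: "{{ ec2_region }}"
    zone: "{{ zone }}"
    image: "{{ ami_find.results[0].ami_id }}"
    instance_type: "{{ instance_type }}"
    key_name: "{{ key_name }}"
    group: "{{ security_group }}"
    instance_tags:
      Name: "{{ group_name }}"
    wait: yes
  register: ec2

- name: Wait for ssh (port 22) to come up on the new instance
  wait_for:
    host: "{{ item.public_dns_name }}"
    port: 22
    delay: 30
    timeout: 320
    state: started
  with_items: "{{ ec2.instances }}"

- name: Add the new instance to an in-memory "launched" host group
  add_host:
    name: "{{ item.public_dns_name }}"
    groups: launched
  with_items: "{{ ec2.instances }}"
```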

You'll notice that this launch role also waits for the instance to boot by waiting for port 22 (ssh) to become available on the host. This is useful because subsequent tasks will configure the system over an ssh connection, so we want to ensure the system has completely booted before we proceed.

The Workstation Role

Now that we have a role that can launch a brand-new t2.micro instance, our next role will configure that instance for use as a workstation.  This workstation configuration is fairly simple, but you can customize it as much as you like later; it mainly illustrates how you would configure the golden image.

We need to create two directories for this role: the tasks directory and the files directory.  The latter holds an init script we will place on the workstation to create a swap file on first boot:
mkdir -p roles/workstation/tasks roles/workstation/files
Next, we’ll create the task:
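The original task file isn't shown in this copy. A minimal roles/workstation/tasks/main.yml, with an assumed package selection you can extend, might look like this:

```yaml
---
# roles/workstation/tasks/main.yml
- name: Update the apt cache and upgrade installed packages
  apt:
    update_cache: yes
    upgrade: dist

- name: Install a few basic workstation packages (assumed selection)
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - git
    - htop
    - tmux

- name: Copy the first-boot swap file init script into place
  copy:
    src: aws-swap-init
    dest: /etc/init.d/aws-swap-init
    owner: root
    group: root
    mode: "0755"

- name: Register the init script to run at boot
  command: update-rc.d aws-swap-init defaults
```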

Initializing a swap file automatically

When you provision the Ubuntu 14.04 LTS instance, it won't have a swap file by default.  This is a bit risky: if you run out of memory, your system could become unstable.  This file should be placed in roles/workstation/files/aws-swap-init, where the task above will copy it to your workstation during the configuration process, so that a swap file is created when the system boots for the first time.
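The script itself isn't reproduced in this copy of the post. A sketch of what an aws-swap-init sysvinit script could look like follows — the 1 GB size is an assumption, and a production version might size the swap file relative to installed memory:

```sh
#!/bin/sh
### BEGIN INIT INFO
# Provides:          aws-swap-init
# Required-Start:    $local_fs
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Create and enable a swap file on first boot
### END INIT INFO

SWAPFILE=/var/swapfile
SIZE_MB=1024   # assumed size; adjust to taste

case "$1" in
  start)
    # Create the swap file only if it doesn't exist yet (first boot)
    if [ ! -f "$SWAPFILE" ]; then
      dd if=/dev/zero of="$SWAPFILE" bs=1M count="$SIZE_MB"
      chmod 600 "$SWAPFILE"
      mkswap "$SWAPFILE"
    fi
    swapon "$SWAPFILE"
    ;;
  stop)
    swapoff "$SWAPFILE"
    ;;
esac
exit 0
```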

The DeployWorkstation Play

Now, we'll create a play that calls these roles in the right order to provision our workstation and configure it.  Create this file in the root of your playbook; I named it deployworkstation.yml.
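The play file isn't included in this copy of the post. Given the roles above, deployworkstation.yml could be sketched as two plays — one that runs the launch role against the local machine, and one that configures the freshly launched host over ssh (the "launched" group name assumes the launch role registered the new instance with add_host):

```yaml
---
# deployworkstation.yml
# Play 1: talk to the EC2 API from the local machine
- hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - launch

# Play 2: configure the instance the launch role just created
- hosts: launched
  remote_user: ubuntu
  become: yes
  roles:
    - workstation
```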

Testing our Playbook

To test our work so far, we simply need to execute it with Ansible and see if we are successful:
ansible-playbook -vv -e group_name=test deployworkstation.yml
You should see some output at the end of the playbook run like this:
[Screenshot: successful ansible-playbook run summary]

Next, connect to your instance with ssh by typing the following command:
ssh ubuntu@ec2-52-90-220-60.compute-1.amazonaws.com
(substitute the public DNS name of your own instance, shown in the playbook output)

You should see something like the following after you connect with ssh:
[Screenshot: Ubuntu welcome banner after logging in over ssh]

That's it!  You've now created a workstation in the Amazon public cloud.  Be sure to terminate the instance you've created so that you don't incur any unexpected fees.  You can do this by navigating to the EC2 dashboard from the AWS console (top left), then selecting any running instances and choosing to terminate them:
[Screenshot: EC2 dashboard with the running instance selected]

After choosing Terminate from the Instance State menu, you'll need to confirm the action:
[Screenshot: Terminate Instances confirmation dialog]

Now that you’ve terminated any running instances, in the next part, we’ll learn how to create snapshots, launch configurations, and auto-scaling groups from our immutable golden master images.
