1. Introduction
Welcome to the Kubernetes Kickstart Bootcamp at DevOpsCon Berlin 2017! In this Bootcamp, you will learn how to containerize workloads, deploy them to Google Container Engine clusters, scale them to handle increased traffic, and continuously deploy your app to provide application updates.
This Bootcamp will cover:
- The technology of containers at a low level and how Docker builds on that to provide a user-friendly interface to containers.
- The basic concepts of Kubernetes and how to run containers in a distributed group of machines.
- Deploying applications to Kubernetes so they are always available, how to test new versions of the application in live environments, and how to update the application without downtime.
- Creating a Continuous Delivery pipeline so that new changes can be picked up, built, and deployed automatically.
2. Module 0: Setup
If you are taking this codelab onsite at an event, you will be provided with temporary Google account credentials which you should use for the duration of this codelab. To isolate use of this temporary account, and avoid mixing it with your normal working environment, it's highly recommended that you use an incognito window for the entirety of this codelab.
Log in to Google Cloud Console
Using an incognito browser window, open https://console.cloud.google.com, and enter the credentials provided by the lab instructor. If prompted, accept the new account terms and conditions. Since this is a temporary account:
- Do not attempt to add recovery options.
- Do not sign up for free trial.
Click on the project name (mid-screen, circled in the screenshot above) to select your temporary work project as the default project to operate on during this lab.
3. Google Cloud Shell
While Google Cloud and Kubernetes can be operated remotely from your laptop, in this lab we will be using Google Cloud Shell, a command line environment running in the Cloud.
This Debian-based virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory, and runs on the Google Cloud, greatly enhancing network performance and authentication. This means that all you will need for this codelab is a browser (yes, it works on a Chromebook).
To activate Google Cloud Shell, from the developer console simply click the button on the top right-hand side (it should only take a few moments to provision and connect to the environment):
Then accept the terms of service and click the "Start Cloud Shell" link:
Once connected to the cloud shell, you should see that you are already authenticated and that the project is already set to your PROJECT_ID:
gcloud auth list
Command output
Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)
gcloud config list project
Command output
[core]
project = <PROJECT_ID>
If for some reason the project is not set, simply issue the following command:
gcloud config set project <PROJECT_ID>
Looking for your PROJECT_ID? Check out what ID you used in the setup steps or look it up in the console dashboard:
IMPORTANT: Finally, set the default compute zone in your gcloud configuration:
gcloud config set compute/zone europe-west1-c
You can choose a variety of different zones. Learn more in the Regions & Zones documentation.
Lastly, select the API Manager -> Library menu to enable the following APIs:
- Compute Engine API
- Container Engine API
- Cloud Storage
4. Module 1: Intro to Docker & Containers
Containers are a way of isolating programs or processes from each other. Their primary aim is to make applications easy to deploy without interfering with, or breaking, other programs on the same machine. It's easy to start using containers without being familiar with the technology that makes them work.
In this lab, you will create a virtual machine and manually run busybox from within a container on this VM.
Note that container runtimes typically use a lot more features than what is illustrated here to achieve maximum isolation between containers.
Create a Virtual Machine
gcloud compute instances create k8s-workshop-module-1-lab \
  --zone europe-west1-c \
  --machine-type n1-standard-1 \
  --subnet default \
  --tags http-server,https-server \
  --image ubuntu-1604-xenial-v20170516 \
  --image-project ubuntu-os-cloud \
  --boot-disk-type pd-standard \
  --metadata startup-script-url=gs://mco-k8s/startup \
  --scopes https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.full_control
This step is provisioning a new virtual machine with all the necessary tools pre-installed and may take up to a few minutes to complete.
Connect to the environment
You can connect to your new VM using the gcloud compute ssh command.
gcloud compute ssh k8s-workshop-module-1-lab --zone europe-west1-c
We will be using a few Linux tools to create containers.
- cgcreate / cgexec - create control groups and run commands inside them
- unshare - create Linux namespaces
- mount - create an overlay filesystem
Set Up a Container Root Filesystem
It will take several minutes for your VM to be fully provisioned. You can tell it's ready when you see an archived root filesystem on your machine at /busybox.tar. Wait until you see that file before proceeding.
You can use this file to create the base filesystem for a container in which we will run busybox. Busybox provides several stripped-down Unix tools in a single executable file.
mkdir ~/busybox-base
tar -xvf /busybox.tar -C ~/busybox-base
Now create a directory that will act as the "writable layer" for our busybox container. Any updates to the filesystem from within the container will be stored within this directory and the base busybox directory will not be affected.
mkdir ~/writeable-layer
Create a working directory for the overlay filesystem. This is required for the internal operation of the overlay filesystem.
mkdir ~/.work
Create a directory that will be the root filesystem for the busybox container.
mkdir ~/rootfs
Create an overlay mount composed of the busybox base image and the writable layer.
sudo mount -t overlay -o lowerdir=$HOME/busybox-base,upperdir=$HOME/writeable-layer,workdir=$HOME/.work overlayfs rootfs
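If you want to see the copy-on-write behavior for yourself, you can optionally create a file through the mounted root filesystem and confirm that it ends up in the writable layer rather than in the base image. This quick check is not part of the original lab steps.
# Optional: files created through the overlay mount land in the writable layer,
# while the base busybox image stays untouched.
sudo touch rootfs/hello-from-overlay
ls ~/writeable-layer
ls ~/busybox-base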
Create a Sandbox using Control Groups
Create control groups for CPU and memory isolation.
sudo cgcreate -a `whoami`:`whoami` -t `whoami`:`whoami` -g cpu,memory:`whoami`
Take a look at your current control groups.
cat /proc/self/cgroup | grep -E "cpu|memory"
Now execute a shell within the newly created control groups.
sudo cgexec -g cpu,memory:`whoami` bash
And now take a look at your new control groups.
cat /proc/self/cgroup | grep -E "cpu|memory"
At this stage, our container can still access files on the VM, and kill processes running on the VM. Let's fix that by adding Linux Namespaces.
First, let's exit the container before we continue:
exit
Extend the Sandboxes to Use Linux Namespaces
Let's use the unshare utility to create new pid, uts, ipc, and mount namespaces and chroot into the busybox root filesystem.
sudo unshare --pid --uts --ipc --mount -f chroot rootfs /bin/sh
Having procfs available inside our container is useful, so let's mount it.
mount -t proc proc /proc
Notice that we can no longer see all the processes since we are in a new pid namespace.
ps aux
Inspect the hostname of our container. It should match that of the host VM.
hostname
Now let's set a new hostname for our busybox container. We can do this because we are running in a separate uts namespace.
hostname my-busybox-container
hostname
This will not alter the hostname of your host VM. Now our container cannot access files on the host VM and cannot affect processes running on the host. To further secure and isolate our container, you could place resource limits on the container's control groups and drop capabilities, but those steps are not covered in this lab.
Now let's exit the container and return to the shell prompt on the Virtual Machine.
exit
5. Running & Distributing Containers with Docker
In this lab, you will learn how to:
- Build a Docker image
- Push a Docker image to Google Container Registry
- Run a Docker container
Docker provides a simple means to package applications and a repeatable execution environment for those applications. Let's explore Docker by creating a simple Docker image that will contain a web server written in Python.
Run the Web Server from Scratch
The source code for this lab is available in the /kickstart folder. Go ahead and list the contents of the directory.
cd /kickstart
ls -lh
You should see a Dockerfile and web-server.py. web-server.py is a simple Python application that runs a web server which responds to HTTP requests on localhost:8888 with the hostname.
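To give you a mental model before you open the file, a minimal Tornado server with that behavior might look roughly like the sketch below. This is a hypothetical reconstruction; the actual web-server.py shipped in /kickstart is authoritative and may differ in detail.
# Hypothetical sketch of web-server.py; the file in /kickstart is authoritative.
import socket
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        # Reply with the hostname so each server instance identifies itself.
        self.write("Hello from %s\n" % socket.gethostname())

if __name__ == "__main__":
    app = tornado.web.Application([(r"/", MainHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()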
Let's run the program manually to begin with. But first, there are a few steps we need to run to install our dependencies.
Install the latest version of Python.
sudo apt-get install -y python3 python3-pip
Install the tornado library that is required by our application.
pip3 install tornado
Run the Python application in the background.
python3 web-server.py &
Ensure the web server is accessible, then terminate it.
curl http://localhost:8888
kill %1
To install and run this application on other machines, you would have to automate these installation steps before running the application. Even if you do automate them, you still depend on the apt and PyPI (Python) package servers at deployment time! The versions of Python and its libraries might also change across installations, and tracking those versions is not trivial either. Imagine packaging a full-blown web server with a lot of web content!
Package using Docker
Now, let's see how Docker can help. Docker images are described via Dockerfiles. Docker allows for stacking of images on top of each other. Our Docker image will be built on top of an existing Docker image library/python which has Python pre-installed.
Take a peek at the Dockerfile.
cat Dockerfile
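If you want an idea of what to expect, a Dockerfile for an app like this typically looks something like the following sketch. It is illustrative only; the actual file in /kickstart is authoritative and its base image and commands may differ.
# Illustrative sketch only; compare with the real Dockerfile in /kickstart.
# Start from a base image that already has Python installed.
FROM library/python:3
# Install the only dependency the web server needs.
RUN pip install tornado
# Copy the application into the image and declare how to run it.
ADD web-server.py /web-server.py
EXPOSE 8888
CMD ["python", "/web-server.py"]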
Build a Docker image with the web server.
sudo docker build -t py-web-server:v1 .
Run the webserver using Docker.
sudo docker run -d -p 8888:8888 --name py-web-server -h my-web-server py-web-server:v1
Try accessing the web server again, then stop the container.
curl http://localhost:8888
sudo docker rm -f py-web-server
The web server and all its dependencies, including the python and tornado libraries, have been packaged into a single Docker image that can now be shared with everyone. The py-web-server:v1 Docker image will function the same way on all Docker-supported OSes (OS X, Windows & Linux).
The Docker image needs to be uploaded to a Docker registry to be available for use on other machines. Let's upload the Docker image to your private image repository in Google Container Registry (gcr.io).
Store your GCP project name in an environment variable.
export GCP_PROJECT=`gcloud config list core/project --format='value(core.project)'`
Rebuild the Docker image with an image name that includes the gcr.io registry and project prefix.
sudo docker build -t "gcr.io/${GCP_PROJECT}/py-web-server:v1" .
Make the Image Publicly Accessible
Google Container Registry stores its images on Google Cloud Storage. Push the image to gcr.io.
sudo gcloud docker -- push gcr.io/${GCP_PROJECT}/py-web-server:v1
By default the image is only available to users with access to your GCP project. Let's update the permissions on Google Cloud Storage to make the image repository publicly accessible.
gsutil defacl ch -u AllUsers:R gs://artifacts.${GCP_PROJECT}.appspot.com
gsutil acl ch -r -u AllUsers:R gs://artifacts.${GCP_PROJECT}.appspot.com
gsutil acl ch -u AllUsers:R gs://artifacts.${GCP_PROJECT}.appspot.com
Run the Web Server from Any Machine
The Docker image can now be run from any machine that has Docker installed by running the following command.
sudo docker run -d -p 8888:8888 -h my-web-server gcr.io/${GCP_PROJECT}/py-web-server:v1
To learn more about Dockerfiles, take a look at this reference guide. Don't forget to exit the lab environment and return to the Cloud Shell.
exit
Finally, delete the instance to clean up the environment.
gcloud compute instances delete k8s-workshop-module-1-lab --zone europe-west1-c
6. Module 2: Introduction to Kubernetes
In this lab, you will learn how to:
- Provision a complete Kubernetes cluster using Google Container Engine
- Deploy and manage Docker containers using kubectl
- Break an application into microservices using Kubernetes' Deployments and Services.
Kubernetes is all about applications and in this lab, you will utilize the Kubernetes API to deploy, manage, and upgrade applications. In this part of the workshop, you will use an example application called "app" to complete the labs.
Kubernetes is an open source project (available on kubernetes.io) which can run on many different environments, from laptops to high-availability multi-node clusters, from public clouds to on-premise deployments, from virtual machines to bare metal.
For the purpose of this lab, using a managed environment such as Google Container Engine (a Google-hosted version of Kubernetes running on Compute Engine) will allow you to focus more on experiencing Kubernetes rather than setting up the underlying infrastructure.
Google Container Engine
In this course, we'll be using a hosted version of Kubernetes, called Google Container Engine or GKE. The Container Engine API should be enabled for your project by default but if that doesn't seem to be the case, follow this link to manually enable the Container Engine API.
After the Container Engine API is enabled, we'll start up a cluster. We pass the scopes argument so that the cluster has access to the project hosting and Google Cloud Storage APIs, which we'll need later.
gcloud container clusters create bootcamp --num-nodes 5 --zone europe-west1-c --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"
After your cluster is created, let's check the version of Kubernetes that's currently installed, using the kubectl version command.
kubectl version
We can also find out more about our cluster by using the kubectl cluster-info command.
kubectl cluster-info
The gcloud container clusters create command automatically authenticated kubectl for us. If you want to authenticate with your cluster on another machine where you have kubectl installed you can run the following command.
gcloud container clusters get-credentials bootcamp --zone europe-west1-c
Bash Completion (Optional)
Kubernetes comes with auto-completion! You can use the kubectl completion command as well as the built-in source command to set this up.
source <(kubectl completion bash)
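That only affects your current shell. If you want completion available in every new shell session, you can also append the same line to your ~/.bashrc:
echo "source <(kubectl completion bash)" >> ~/.bashrc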
After sourcing the completion script, you can press the Tab key to see a list of available commands.
Here are a few examples:
kubectl <TAB>
annotate autoscale config create
...
You can complete a partial command as well.
kubectl co<TAB>
completion config convert cordon
This feature makes kubectl easier to use.
Get the Sample Code
Clone the GitHub repository from the command line:
git clone https://github.com/googlecodelabs/orchestrate-with-kubernetes.git
cd orchestrate-with-kubernetes/kubernetes
The sample has the following layout:
deployments/ /* Deployment manifests */
...
nginx/ /* nginx config files */
...
pods/ /* Pod manifests */
...
services/ /* Services manifests */
...
tls/ /* TLS certificates */
...
cleanup.sh /* Cleanup script */
Now that you have the code, it's time to give Kubernetes a try!
Quick Kubernetes Demo
The easiest way to get started with Kubernetes is to use the kubectl run command.
Let's use the kubectl run command to launch a single instance of the nginx container.
kubectl run nginx --image=nginx:1.10.0
As you can see, Kubernetes has created what is called a Deployment. We'll explain more about Deployments later; for now, all you need to know is that Deployments keep our Pods up and running even when the nodes they run on fail.
In Kubernetes, all containers run in what's called a pod. Use the kubectl get pods command to view the running nginx container.
kubectl get pods
Now that the nginx container is running, we can expose it outside of Kubernetes using the kubectl expose command.
kubectl expose deployment nginx --port 80 --type LoadBalancer
So what just happened? Behind the scenes Kubernetes created a Service and external Load Balancer with a public IP address attached to it (we will cover Services later). Any client who hits that public IP address will be routed to the pods behind the Service. In this case, that would be the nginx pod.
We can see the newly created Service using the kubectl get command.
kubectl get services
We'll see that we have an IP that we can use to hit the nginx container remotely.
Before we interact with our service, let's scale up the number of backend Pods running behind it. This is useful when a web application grows more popular and needs to handle increased traffic. You can do that in one line using the kubectl scale command.
kubectl scale deployment nginx --replicas 3
Get the pods one more time to see that Kubernetes has updated the number of pods.
kubectl get pods
Now, we'll use the kubectl get services command (again) to find the external IP address of our service.
kubectl get services
Once we have an external IP address, we'll use it with the curl command to test our demo application.
curl http://<External IP>:80
And there you go! Kubernetes supports an easy-to-use workflow out of the box using the kubectl run, expose, and scale commands.
Clean Up
You can clean up nginx by running the following commands.
kubectl delete deployment nginx
kubectl delete service nginx
Now that you've seen a quick tour of Kubernetes, it's time to dive into each of the components and abstractions.
7. Pods
At the core of Kubernetes is the Pod.
Pods represent a logical application.
Pods represent and hold a collection of one or more containers. Generally, if you have multiple containers with a hard dependency on each other they would be packaged inside of a single pod.
In our example, you can see that we have a pod that contains the monolith and nginx containers.
Pods also have Volumes. Volumes are data disks that live as long as the Pod lives, and they can be used by the containers in that Pod. This is possible because Pods provide a shared namespace for their contents. This means that the two containers inside of our example Pod can communicate with each other, and they also share the attached Volumes.
Pods also share a network namespace, which means each Pod has a single IP address shared by all of its containers.
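To make this concrete, here is a hypothetical two-container Pod (not part of this lab) that shares an emptyDir volume: the writer container produces files that the web container serves, and either container could reach the other on localhost because they share the Pod's network namespace.
# Hypothetical example only; this Pod is not used in the lab.
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod-example
spec:
  containers:
    - name: web
      image: nginx:1.10.0
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: writer
      image: busybox
      # Writes a timestamp that the web container serves; both see the same volume.
      command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}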
Let's take a deeper dive into pods now.
Creating Pods
Pods can be created using pod configuration files.
Before going any further, explore the built-in Pod documentation using the kubectl explain command.
kubectl explain pods
Now that you know more about Pods, explore the monolith pod's configuration file.
cat pods/monolith.yaml
apiVersion: v1
kind: Pod
metadata:
  name: monolith
  labels:
    app: monolith
spec:
  containers:
    - name: monolith
      image: kelseyhightower/monolith:1.0.0
      args:
        - "-http=0.0.0.0:80"
        - "-health=0.0.0.0:81"
        - "-secret=secret"
      ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
      resources:
        limits:
          cpu: 0.2
          memory: "10Mi"
There are a few things to notice here. You'll see that our Pod is made up of one container (the monolith). You can also see that we're passing a few arguments to our container when it starts up. Lastly, we're opening up port 80 for http traffic.
When exploring the Kubernetes API, it is often useful to use the handy kubectl explain command to find out more. Let's see the documentation for Pod containers.
kubectl explain pods.spec.containers
Feel free to explore the rest of the API at your leisure before moving on.
Once you're ready, create the monolith Pod using kubectl create.
kubectl create -f pods/monolith.yaml
Let's examine our Pods. Use the kubectl get pods command to list all Pods running in the default namespace.
kubectl get pods
Once the Pod is running, use the kubectl describe command to get more information about the monolith Pod.
kubectl describe pods monolith
You'll see a lot of information about the monolith Pod, including the Pod IP address and the event log. This information will come in handy when troubleshooting.
As you can see, Kubernetes makes it easy to create Pods by describing them in configuration files and to view information about them when they are running. At this point, you have the ability to create all the Pods your deployment requires!
Interacting with Pods
Pods are allocated a private IP address by default and cannot be reached outside of the cluster. Use the kubectl port-forward command to map a local port to a port inside the monolith Pod.
Use two terminals: one to run the kubectl port-forward command, and the other to issue curl commands. You can create a new terminal by pressing the "+" button in Cloud Shell.
Run the following command to set up port-forwarding.
kubectl port-forward monolith 10080:80
Now we can start talking to our pod using curl.
curl http://127.0.0.1:10080
Yes! We got a very friendly "hello" back from our container. Now let's see what happens when we hit a secure endpoint.
curl http://127.0.0.1:10080/secure
Uh oh. Let's try logging in to get an auth token back from our monolith. At the login prompt, use the super-secret password "password" to login.
curl -u user http://127.0.0.1:10080/login
Logging in caused a JWT token to be printed out. We'll copy the token and use it to hit our secure endpoint with curl.
TOKEN=$(curl http://127.0.0.1:10080/login -u user|jq -r '.token')
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:10080/secure
At this point, we should get a response back from our application letting us know everything is right in the world again!
Use the kubectl logs command to view the logs for the monolith Pod.
kubectl logs monolith
Let's open another terminal and use the -f flag to get a stream of the logs happening in real-time! Create a third terminal using the same "+" button in Cloud Shell and run the following command.
kubectl logs -f monolith
Now if you use curl to interact with the monolith, you can see the logs updating back in terminal 3.
curl http://127.0.0.1:10080
We can use the kubectl exec command to run an interactive shell inside the monolith Pod. This can come in handy when you want to troubleshoot from within a container.
kubectl exec monolith --stdin --tty -c monolith /bin/sh
For example, once we have a shell into the monolith container we can test external connectivity using the ping command.
ping -c 3 google.com
When you're done with the interactive shell, be sure to logout.
exit
As you can see, interacting with Pods is as easy as using the kubectl command. If you need to hit a container remotely or get a login shell, Kubernetes provides everything you need to get up and going.
When you are finished, be sure to quit kubectl port-forward and kubectl logs in terminals 2 and 3 by hitting Ctrl^C.
Monitoring & Health Checks
Kubernetes supports monitoring applications in the form of readiness and liveness probes. Health checks can be performed on each container in a Pod. Readiness probes indicate when a Pod is "ready" to serve traffic. Liveness probes indicate a container is "alive". If a liveness probe fails multiple times, the container will be restarted. Liveness probes that continue to fail will cause a Pod to enter a crash loop. If a readiness check fails, the container will be marked as not ready and will be removed from any load balancers.
In this lab, you will deploy a new Pod named healthy-monolith, which is largely based on the monolith Pod with the addition of readiness and liveness probes.
In this lab, you will learn how to:
- Create Pods with readiness and liveness probes
- Troubleshoot failing readiness and liveness probes
Creating Pods with Liveness and Readiness Probes
Explore the healthy-monolith Pod configuration file.
cat pods/healthy-monolith.yaml
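The part of that manifest that matters for this section is the probe configuration on the container. It looks roughly like the excerpt below; the ports and timings are taken from the similar manifests later in this workshop, and the values in pods/healthy-monolith.yaml are authoritative.
# Illustrative excerpt; see pods/healthy-monolith.yaml for the real values.
livenessProbe:
  httpGet:
    path: /healthz
    port: 81
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 15
  timeoutSeconds: 5
readinessProbe:
  httpGet:
    path: /readiness
    port: 81
    scheme: HTTP
  initialDelaySeconds: 5
  timeoutSeconds: 1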
Create the healthy-monolith Pod using kubectl.
kubectl create -f pods/healthy-monolith.yaml
Pods will not be marked ready until the readiness probe returns an HTTP 200 response. Use the kubectl describe command to view details for the healthy-monolith Pod.
kubectl describe pod healthy-monolith
Readiness Probes
Now let's look at how Kubernetes responds to failed readiness probes. The monolith container supports the ability to force failures of its readiness and liveness probes. This will enable us to simulate failures for the healthy-monolith Pod.
Use the kubectl port-forward command in terminal 2 to forward a local port to the health port of the healthy-monolith Pod.
kubectl port-forward healthy-monolith 10081:81
Force the monolith container readiness probe to fail. Use the curl command to toggle the readiness probe status. Note that this command does not show any output.
curl http://127.0.0.1:10081/readiness/status
Get the status of the healthy-monolith Pod using the kubectl get pods -w command.
kubectl get pods healthy-monolith -w
Press Ctrl^C once you see that there are 0/1 ready containers. Use the kubectl describe command to get more details about the failing readiness probe.
kubectl describe pods healthy-monolith
Notice that the events for the healthy-monolith Pod report details about the failing readiness probe.
Force the monolith container readiness probe to pass. Use the curl command to toggle the readiness probe status.
curl http://127.0.0.1:10081/readiness/status
Wait about 15 seconds and get the status of the healthy-monolith Pod using the kubectl get pods command.
kubectl get pods healthy-monolith
Hit Ctrl^C in terminal 2 to close the kubectl port-forward command.
Liveness Probes
Building on what you learned in the previous tutorial, use the kubectl port-forward and curl commands to force the monolith container liveness probe to fail. Observe how Kubernetes responds to failing liveness probes.
Use the kubectl port-forward command to forward a local port to the health port of the healthy-monolith Pod in terminal 2.
kubectl port-forward healthy-monolith 10081:81
In another terminal, force the monolith container liveness probe to fail. Use the curl command to toggle the liveness probe status.
curl http://127.0.0.1:10081/healthz/status
Get the status of the healthy-monolith Pod using the kubectl get pods -w command.
kubectl get pods healthy-monolith -w
When a liveness probe fails, the container is restarted. Once restarted, the healthy-monolith Pod should go back into a healthy state. Press Ctrl^C to exit that command when you notice the Pod being restarted. Note the restart count.
Use the kubectl describe command to get more details about the failing liveness probe. You can see the related events for when the liveness probe failed and the pod was restarted.
kubectl describe pods healthy-monolith
When you are finished, hit Ctrl^C in terminal 2 to close the kubectl port-forward command.
8. Services
Pods aren't meant to be persistent. They can be stopped or started for many reasons—like failed liveness or readiness checks—and this leads to a problem.
What happens if we want to communicate with a set of Pods? When they get restarted, they might have a different IP address.
That's where Services come in.
Services provide stable endpoints for Pods.
Services use labels to determine what Pods they will operate on. If Pods have the correct labels, they are automatically picked up and exposed by our services.
The level of access a service provides to a set of Pods depends on the Service's type. Currently, there are three types:
1. ClusterIP (internal) – the default type; the Service is only reachable from within the cluster.
2. NodePort – exposes the Service on a port on every node in the cluster, so it can be reached from outside via any node's IP address.
3. LoadBalancer – adds a load balancer from the cloud provider which forwards traffic from the Service to the nodes behind it.
It's time for you to learn how to:
- Create a service
- Use label selectors to expose a limited set of Pods externally
Creating a Service
Before we can create our services, let's first create a secure Pod that can handle https traffic.
Explore the secure-monolith Pod configuration file.
cat pods/secure-monolith.yaml
Create the secure-monolith Pod and its configuration data. First, create a secret for the TLS certificates for nginx.
kubectl create secret generic tls-certs --from-file tls/
Then create a ConfigMap to hold nginx's configuration files. Secrets and ConfigMaps will be covered in a later section.
kubectl create configmap nginx-proxy-conf --from-file nginx/proxy.conf
kubectl create -f pods/secure-monolith.yaml
Now that we have a secure Pod, it's time to expose the secure-monolith Pod externally and to do that we'll create a Kubernetes service.
Explore the monolith service configuration file.
cat services/monolith.yaml
kind: Service
apiVersion: v1
metadata:
  name: "monolith"
spec:
  selector:
    app: "monolith"
    secure: "enabled"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
      nodePort: 31000
  type: NodePort
Use the kubectl create command to create the monolith service from the monolith service configuration file.
kubectl create -f services/monolith.yaml
The type: NodePort in the Service's yaml file means that it uses a port on each cluster node to expose the service. This means that it's possible to have port collisions if another app tries to bind to port 31000 on one of your servers.
Normally, Kubernetes would handle this port assignment for us. In this lab, we chose one so that it's easier to configure health checks later on.
Use the gcloud compute firewall-rules command to allow traffic to the monolith service on the exposed nodeport.
gcloud compute firewall-rules create allow-monolith-nodeport --allow=tcp:31000
Now that everything is set up, we should be able to hit the secure-monolith service from outside the cluster without using port forwarding. First, let's get an IP address for one of our nodes.
gcloud compute instances list
Then try to open the url in your browser.
https://<EXTERNAL_IP>:31000
Uh oh! That timed out. What's going wrong?
Adding Labels to Pods
Currently the monolith service does not have any endpoints. One way to troubleshoot an issue like this is to use the kubectl get pods command with a label query.
We can see that we have quite a few Pods running with the monolith label.
kubectl get pods -l "app=monolith"
But what about "app=monolith" and "secure=enabled"?
kubectl get pods -l "app=monolith,secure=enabled"
Notice this label query does not print any results.
It seems like we need to add the "secure=enabled" label to them.
We can use the kubectl label command to add the missing secure=enabled label to the secure-monolith Pod. Afterwards, we can check and see that our labels have been updated.
kubectl label pods secure-monolith 'secure=enabled'
kubectl get pods secure-monolith --show-labels
Now that our Pods are correctly labeled, let's view the list of endpoints on the monolith service.
kubectl get endpoints monolith
And we have one!
Let's test this out by hitting one of our nodes again.
gcloud compute instances list | grep gke-
Open the following URL in your browser. You will need to click through the SSL warning because secure-monolith is using a self-signed certificate.
https://<EXTERNAL_IP>:31000
9. Miscellaneous Topics
Over the course of this lab, we glossed over a few important topics. Why? So that you get a Kubernetes cluster up and running as fast as possible.
Now is a good time to go over some of what we skipped earlier. In this section, we'll be covering Secrets, ConfigMaps, and Volumes.
Volumes
Volumes are a way for containers within a Pod to share data and they allow for Pods to be stateful. These are two very important concerns for production applications.
There are many different types of volumes in Kubernetes. Some of the volume types include long-lived persistent volumes, temporary, short-lived emptyDir Volumes, networked nfs volumes, and many more.
In fact, we've secretly used Volumes before when we used secrets to set up the secure-monolith Pod earlier.
Secrets and Configmaps
Secrets in Kubernetes are a way to store sensitive data, such as passwords or keys. They were introduced to keep developers from having to bake sensitive information into their pods and containers. Secrets are mounted into a temporary filesystem so that they're never written into non-volatile storage. ConfigMaps are similar, but subtly different: ConfigMaps are used for non-sensitive string data. Storing configuration, setting command-line variables, and storing environment variables are natural use cases for ConfigMaps.
Currently, both Secrets and ConfigMaps are stored in etcd.
In this lab, we used a secret to store our TLS keys and we used a ConfigMap to store our nginx configuration data. When we created the secure-monolith Pod earlier, you may have noticed the kubectl create secret generic command and the kubectl create configmap commands.
kubectl create secret generic tls-certs --from-file tls/
kubectl create configmap nginx-proxy-conf --from-file nginx/proxy.conf
Earlier, we glossed over these commands but now we'll explain them in a little more depth.
The following images will follow the lifecycle of a secret.
After we have our Secrets or ConfigMaps, we create a Pod that consumes that data. In this example, we're doing that with the kubectl create command.
Once our Pod is created, the Secret is attached to the Pod as a Volume. This Volume is made available to any container in the Pod before the containers are brought online.
Once the Volume is attached, the data in it is mounted into the container's file system. In our examples, we mounted the Secret data to /etc/tls.
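In manifest form, consuming the Secret and ConfigMap as Volumes looks roughly like the excerpt below. The names and paths follow the commands above, but this is only a sketch; the real pods/secure-monolith.yaml in the repository is authoritative.
# Illustrative excerpt; see pods/secure-monolith.yaml for the real manifest.
spec:
  containers:
    - name: nginx
      image: nginx:1.10.0
      volumeMounts:
        - name: tls-certs
          mountPath: /etc/tls
        - name: nginx-proxy-conf
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: tls-certs
      secret:
        secretName: tls-certs
    - name: nginx-proxy-conf
      configMap:
        name: nginx-proxy-conf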
After the data in Volumes is mounted, the containers in the Pod are brought online and the rest of Pod initialization happens as before.
At this point, we have a fully functioning Pod that's ready to do work.
In the next section, we'll clean up our example application.
What's Next?
This concludes this simple getting started lab with Kubernetes.
We've only scratched the surface of this technology and we encourage you to explore further with your own Pods, replication controllers, and services but also to check out liveness probes (health checks) and consider using the Kubernetes API directly.
Here are some follow-up steps :
- Sign up for our free Udacity course, Managing Microservices with Kubernetes, which goes over this plus Secrets and ConfigMaps, Health and Monitoring, Scaling, and Rolling Updates
- Try out other Google Cloud Platform features for yourself. Have a look at our tutorials.
- Remember, Kubernetes is an open source project (http://kubernetes.io/) hosted on GitHub. Your feedback and contributions are always welcome.
- You can follow the Kubernetes news on Twitter and on the community's blog.
10. Module 3: Deploying to Kubernetes
The goal of this section is to get you ready for scaling and managing containers in production.
And that's where Deployments come in. Deployments are a declarative way to ensure that the number of Pods running is equal to the desired number of Pods specified by the user.
Introduction to Deployments
Deployments abstract away the low level details of managing Pods. They provide a single stable name that you can use to update an application. Behind the scenes, Deployments rely on ReplicaSets to manage starting, stopping, scaling, and restarting the Pods if they happen to go down for some reason. If Pods need to be updated or scaled, the Deployment will handle all of the details for you.
Deployments (and ReplicaSets) are powered by control loops. Control loops are a design pattern for distributed software that allows you, the user, to declaratively define your desired state and have the software implement the desired state for you based on the current state. We'll see more about how that works below.
Learn About Deployment Objects
Let's get started with Deployments. First, let's take a look at the Deployment object. The explain command in kubectl can tell us about the Deployment object.
kubectl explain deployment
We can also see all of the fields using the --recursive option.
kubectl explain deployment --recursive
You can use the explain command as you go through the lab to help you understand the structure of a Deployment object and understand what the individual fields do.
kubectl explain deployment.metadata.name
Create a Deployment
Now let's create a simple deployment. Let's examine the deployment configuration file.
cat deployments/auth.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
        - name: auth
          image: "kelseyhightower/auth:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
...
Notice how the Deployment is creating one replica and it's using version 1.0.0 of the auth container.
When you run the kubectl create command to create the auth deployment, it will make one pod that conforms to the data in the Deployment manifest. This means we can scale the number of Pods by changing the number specified in the replicas field.
Go ahead and create our deployment object using kubectl create.
kubectl create -f deployments/auth.yaml
Once you have created the Deployment, you can verify that it was created.
kubectl get deployments
Once the deployment is created, Kubernetes will create a ReplicaSet for the Deployment. We can verify that a ReplicaSet was created for our Deployment. We should see a ReplicaSet with a name like auth-xxxxxxx.
kubectl get replicasets
And finally, we can view the Pods that were created as part of our Deployment. The single Pod is created by Kubernetes when the ReplicaSet is created.
kubectl get pods
It's time to create a service for our auth deployment. You've already seen service manifest files, so we won't go into the details here. Use the kubectl create command to create the auth service.
kubectl create -f services/auth.yaml
Now, let's do the same thing to create and expose the hello Deployment.
kubectl create -f deployments/hello.yaml
kubectl create -f services/hello.yaml
And one more time to create and expose the frontend Deployment.
kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf
kubectl create -f deployments/frontend.yaml
kubectl create -f services/frontend.yaml
Interact with the frontend by grabbing its external IP and then curling it.
kubectl get services frontend
curl -ks https://<EXTERNAL-IP>
And we get our hello response back. You can also use the output templating feature of kubectl to use curl as a one-liner.
curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`
Scale a Deployment
Now that we have a Deployment created, we can scale it. We will do this by updating the spec.replicas field. We can look at an explanation of this field using the kubectl explain command again.
kubectl explain deployment.spec.replicas
You can update the replicas field most easily using the kubectl scale command.
kubectl scale deployment hello --replicas=5
After we update the Deployment, Kubernetes will automatically update the associated ReplicaSet and start new Pods to make the total number of Pods equal 5. Let's verify that there are now 5 Pods running for our hello Deployment.
kubectl get pods | grep hello- | wc -l
Now scale back the application.
kubectl scale deployment hello --replicas=3
Again, verify that you have the correct number of Pods.
kubectl get pods | grep hello- | wc -l
11. Rolling Update
Deployments support updating images to a new version through a rolling update mechanism. When a Deployment is updated with a new version, it creates a new ReplicaSet and slowly increases the number of replicas in the new ReplicaSet as it decreases the replicas in the old ReplicaSet.
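How aggressively that swap happens is controlled by the Deployment's strategy field. The sketch below shows the relevant knobs; the field values here are illustrative and are not taken from this lab's manifests.
# Illustrative rolling-update settings; adjust to taste.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra Pod above the desired count during the update
      maxUnavailable: 0  # never take a Pod out of service before its replacement is ready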
Trigger a Rolling Update
To update your Deployment, run the following command.
kubectl edit deployment hello
Change the image in the containers section of the Deployment to the following, then save and exit.
...
containers:
  - name: hello
    image: kelseyhightower/hello:2.0.0
...
Once you save out of the editor, the updated Deployment will be saved to your cluster and Kubernetes will begin a rolling update. You can see the new ReplicaSet that Kubernetes creates.
kubectl get replicaset
You can also see a new entry in the rollout history.
kubectl rollout history deployment/hello
Pause a Rolling Update
If you detect problems with a running rollout, you can pause it to stop the update. Let's give that a try now.
kubectl rollout pause deployment/hello
You can then verify the current state of the rollout.
kubectl rollout status deployment/hello
You can also verify this on the Pods directly.
kubectl get pods -o jsonpath --template='{range .items[*]}{.metadata.name}{"\t"}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
Resume a Rolling Update
The rollout is paused, which means that some Pods are at the new version and some Pods are at the older version. We can continue the rollout using the resume command.
kubectl rollout resume deployment/hello
When the rollout is complete, you should see the following when running the status command.
kubectl rollout status deployment/hello
deployment "hello" successfully rolled out
Rollback an Update
Now let's assume that a bug was detected in our new version. Since the new version is presumed to have problems, any users connected to the new Pods will experience those issues. You will want to roll back to the previous version so you can investigate and then release a version that is fixed properly.
You can use the rollout command to roll back to the previous version.
kubectl rollout undo deployment/hello
Now that we have rolled back, let's verify that in the history.
kubectl rollout history deployment/hello
Finally, we can verify that all the Pods have rolled back to their previous versions.
kubectl get pods -o jsonpath --template='{range .items[*]}{.metadata.name}{"\t"}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
12. Canary Deployments
When you want to test a new deployment in production with a subset of your users, use a canary deployment. Canary deployments allow you to release a change to a small subset of your users, mitigating the risk associated with new releases.
Create a Canary Deployment
A canary deployment consists of a separate deployment with your new version and a service that targets both your normal, stable deployment as well as your canary deployment.
First, we can create a new canary deployment for our new version. Examine the file.
cat deployments/hello-canary.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-canary
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
        track: canary
        # Use ver 1.0.0 so it matches version on service selector
        version: 1.0.0
    spec:
      containers:
        - name: hello
          image: kelseyhightower/hello:2.0.0
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
...
Now create the canary deployment.
kubectl create -f deployments/hello-canary.yaml
After the canary deployment is created, you should have two deployments, hello and hello-canary. You can verify that with kubectl.
kubectl get deployments
On the hello Service, the selector uses app: hello, which will match Pods in both the production deployment and the canary deployment. However, because the canary deployment has fewer Pods, it will be visible to fewer users.
Verify the Canary Deployment
You can verify the hello version being served by the requests.
curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`/version
Run this several times and you should see that some of the requests are served by hello 1.0.0 and a small subset (1/4 = 25%) are served by 2.0.0.
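If you want to quantify the split rather than eyeball it, you can loop the same request and tally the versions returned. The FRONTEND_IP variable below is just a convenience introduced here; the exact ratio will fluctuate around 25%.
# Send 20 requests and count how many hit each version.
FRONTEND_IP=`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`
for i in $(seq 1 20); do curl -ks https://$FRONTEND_IP/version; echo; done | sort | uniq -c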
[Note] Canary Deployments in Production
For the purposes of this lab, each request sent to the nginx service had a chance to be served by the canary deployment. In some cases, we want the user to "stick" to one or the other. For instance, the UI for an application may have changed and you don't want to confuse the user.
In that case, you can create a service with session affinity. That way the same user will always be served from the same version. In this case, the service is the same as before, but we add a new sessionAffinity field and set it to ClientIP. This way all clients with the same IP address will have their requests sent to the same version of the hello application.
kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  sessionAffinity: ClientIP
  selector:
    app: "hello"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
Because it's difficult to set up an environment to test this, you don't need to do so here, but you may want to use sessionAffinity for canary deployments in production.
Clean Up
Now that you have verified that the canary deployment is working, you can go ahead and delete it.
kubectl delete deployment hello-canary
13. Blue-Green Deployments
Rolling updates are ideal because they allow you to deploy an application slowly with minimal overhead, minimal performance impact, and minimal downtime. However, there are instances where it is beneficial to modify the load balancers to point to the new version only after it has been fully deployed. In this case, so-called blue-green deployments are the way to go.
In Kubernetes, we will achieve this by creating two separate deployments. One for our old "blue" version and one for our new "green" version. We will use our existing hello
Deployment for our "blue" version. Our deployments will be accessed via a Service which will act as our router. Once the new "green" version is up and running, we'll switch over to using that version by updating the Service.
The Service
You will use the existing hello Service, but update it so that it has the selector app: hello, version: 1.0.0. The selector will match the existing "blue" deployment. But it will not match our "green" deployment because it will use a different version.
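For reference, services/hello-blue.yaml presumably looks like the hello Service with that extra version label in its selector, roughly as sketched below; the file in the repository is authoritative.
# Illustrative sketch of services/hello-blue.yaml; the repo's file is authoritative.
kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
    version: 1.0.0
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80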
First update the service:
kubectl apply -f services/hello-blue.yaml
Updating using Blue-Green Deployment
In order to support a blue-green deployment style, we will create a new "green" deployment for our new version. The green deployment simply updates the version label, and the image path.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-green
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
        version: 2.0.0
    spec:
      containers:
        - name: hello
          image: kelseyhightower/hello:2.0.0
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
          resources:
            limits:
              cpu: 0.2
              memory: 10Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1
Create the green deployment.
kubectl create -f deployments/hello-green.yaml
Once we have a green deployment and it has started up properly, you can verify that the current version of 1.0.0 is still being used.
curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`/version
Now, update the service to point to the new version.
kubectl apply -f services/hello-green.yaml
Once the service is updated, the "green" deployment will be used immediately. You can now verify that the new version is always being used.
curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`/version
Blue-Green Rollback
If necessary, you can then roll back to the old version in the same way. While the "blue" Deployment is still running, simply update the service back to the old version.
kubectl apply -f services/hello-blue.yaml
Once you have updated the service, your rollback will have been successful. Again, verify that the right version is now being used.
curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`/version
14. Module 4: Introduction to Jenkins
This tutorial shows you how to set up a continuous delivery pipeline using Jenkins and Google Container Engine as described in the following diagram.
Start by cloning the sample code in your Cloud Shell. The Git repository contains Kubernetes manifests that you'll use to deploy Jenkins. The manifests and their settings are described in Configuring Jenkins for Container Engine.
git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git
cd continuous-deployment-on-kubernetes
15. Provisioning Jenkins
Create the Jenkins home volume
To pre-populate Jenkins with the configurations discussed in Jenkins on Container Engine, you'll need to create the volume from the supplied tarball. Container Engine will mount this volume into your Jenkins pod.
gcloud compute images create jenkins-home-image --source-uri https://storage.googleapis.com/solutions-public-assets/jenkins-cd/jenkins-home-v3.tar.gz
gcloud compute disks create jenkins-home --image jenkins-home-image --zone europe-west1-c
Configuring Jenkins Credentials
In order to enable authentication for the Jenkins UI, first create a random password. Take note of the password for use later in the lab.
export PASSWORD=`openssl rand -base64 15`; echo "Your password is $PASSWORD"; sed -i.bak s#CHANGE_ME#$PASSWORD# jenkins/k8s/options
Next, create a Kubernetes namespace for Jenkins. Namespaces allow you to use the same resource manifests across multiple environments without needing to give resources unique names. We will include this namespace as a parameter to the commands we send to Kubernetes.
kubectl create ns jenkins
Finally, create a Kubernetes secret. Kubernetes uses this object to provide Jenkins with the default username and password when Jenkins boots.
kubectl create secret generic jenkins --from-file=jenkins/k8s/options --namespace=jenkins
Deploy Jenkins
In this section, you'll create a Jenkins deployment and services based on the Kubernetes resources defined in the jenkins/k8s folder of the sample code.
The kubectl apply command creates a Jenkins deployment that contains a container for running Jenkins and a persistent disk that contains the Jenkins home directory. Keeping the home directory on the persistent disk ensures that your critical configuration data is maintained, even if the pod running your Jenkins master goes down.
The kubectl apply command also creates two services that enable your Jenkins master to be accessed by other pods in the cluster:
- A NodePort service on port 8080 that allows pods and external users to access the Jenkins user interface. This type of service can be load balanced by an HTTP Load Balancer.
- A ClusterIP service on port 50000 that the Jenkins executors use to communicate with the Jenkins master from within the cluster.
Create the Jenkins deployment and services.
kubectl apply -f jenkins/k8s/
deployment "jenkins" created
service "jenkins-ui" created
service "jenkins-discovery" created
Confirm that the pod is running. Look for Running in the STATUS column.
kubectl get pods -n jenkins
NAME READY STATUS RESTARTS AGE
jenkins-2477738154-iafn5 1/1 Running 0 1d
Configuring HTTP Load Balancing
Next, you'll create an ingress resource that manages the external load balancing of the Jenkins user interface service. The ingress resource also acts as an SSL terminator to encrypt communication between users and the Jenkins user interface service.
Confirm that the services are set up correctly by listing the services in the jenkins namespace. Confirm that jenkins-discovery and jenkins-ui display. If not, ensure the steps above were all run.
kubectl get svc -n jenkins
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-discovery 10.79.254.142 <none> 50000/TCP 10m
jenkins-ui 10.79.242.143 <nodes> 8080/TCP 10m
Create an SSL certificate and key.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=jenkins/O=jenkins"
Upload the certificate to Kubernetes as a secret.
kubectl create secret generic tls --from-file=/tmp/tls.crt --from-file=/tmp/tls.key -n jenkins
Create the HTTPS load balancer using an ingress.
kubectl apply -f jenkins/k8s/lb/ingress.yaml
Connecting to Jenkins
Check the status of the load balancer's health checks. The backends field displays as UNKNOWN or UNHEALTHY until the checks complete in a healthy state. Repeat this step until you see the backends field display HEALTHY.
kubectl describe ingress jenkins --namespace jenkins
Name: jenkins
Namespace: jenkins
Address: 130.211.14.253
Default backend: jenkins-ui:8080 (10.76.2.3:8080)
TLS:
tls terminates
Rules:
Host Path Backends
---- ---- --------
Annotations:
https-forwarding-rule: k8s-fws-jenkins-jenkins
https-target-proxy: k8s-tps-jenkins-jenkins
static-ip: k8s-fw-jenkins-jenkins
target-proxy: k8s-tp-jenkins-jenkins
url-map: k8s-um-jenkins-jenkins
backends: {"k8s-be-32371":"HEALTHY"}
Once your backends are healthy, you can get the Jenkins URL by running the following command.
echo "Jenkins URL: https://`kubectl get ingress jenkins -n jenkins -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`"; echo "Your username/password: jenkins/$PASSWORD"
Jenkins URL: https://130.211.14.253
Your username/password: jenkins/2nsdhzgqrjue
Visit the URL from the previous command in your browser and login with the credentials displayed.
16. Understanding the Application
You'll deploy the sample application, gceme, in your continuous deployment pipeline. The application is written in the Go language and is located in the repo's sample-app directory. When you run the gceme binary on a Compute Engine instance, the app displays the instance's metadata in an info card.
The application mimics a microservice by supporting two operation modes.
- In backend mode, gceme listens on port 8080 and returns Compute Engine instance metadata in JSON format.
- In frontend mode, gceme queries the backend gceme service and renders the resulting JSON in the user interface.
17. Deploying the Application
You will deploy the application into two different environments:
- Production: The live site that your users access.
- Canary: A smaller-capacity site that receives only a small percentage of your user traffic. Use this environment to validate your software with live traffic before it's released to all of your users.
In Google Cloud Shell, navigate to the sample application directory.
cd sample-app
Create the Kubernetes namespace to logically isolate the deployment.
kubectl create ns production
Create the production and canary deployments and services using the kubectl apply commands.
kubectl apply -f k8s/production -n production
kubectl apply -f k8s/canary -n production
kubectl apply -f k8s/services -n production
Scale up the production environment frontends. By default, only one replica of the frontend is deployed. Use the kubectl scale command to ensure that we have at least 4 replicas running at all times.
kubectl scale deployment gceme-frontend-production -n production --replicas 4
Confirm that you have 5 pods running for the frontend, 4 for production traffic and 1 for canary releases. This means that changes to our canary release will only affect 1 out of 5 (20%) of users. You should also have 2 pods for the backend, 1 for production and 1 for canary.
kubectl get pods -n production -l app=gceme -l role=frontend
kubectl get pods -n production -l app=gceme -l role=backend
Retrieve the external IP for the production services.
kubectl get service gceme-frontend -n production
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gceme-frontend 10.79.241.131 104.196.110.46 80/TCP 5h
Store the frontend service load balancer IP in an environment variable for use later.
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
Confirm that both services are working by opening the frontend external IP address in your browser.
Check the version output of the service by hitting the /version path. It should read 1.0.0.
curl http://$FRONTEND_SERVICE_IP/version
18. Creating the Jenkins Pipeline
Creating a Repository to Host the Sample App Source Code
Create a copy of the gceme sample app and push it to Cloud Source Repositories.
Initialize the sample-app directory as its own Git repository. Replace [PROJECT_ID] with your current project ID in the following command. To find your current project ID, you can run gcloud config list project.
gcloud alpha source repos create default
git init
git config credential.helper gcloud.sh
git remote add origin https://source.developers.google.com/p/[PROJECT_ID]/r/default
Set the username and email address for your Git commits. Replace [EMAIL_ADDRESS] with your Git email address. Replace [USERNAME] with your Git username.
git config --global user.email "[EMAIL_ADDRESS]"
git config --global user.name "[USERNAME]"
Add, commit, and push the files.
git add .
git commit -m "Initial commit"
git push origin master
Adding Your Service Account Credentials
Configure your credentials to allow Jenkins to access the code repository. Jenkins will use your cluster's service account credentials in order to download code from the Cloud Source Repositories.
- In the Jenkins user interface, click Credentials in the left navigation.
- Click Jenkins in the top group.
- Click Global Credentials.
- Click Add Credentials in the left navigation.
- Select Google Service Account from metadata from the Kind drop-down.
- Click OK.
There are now two global credentials. Make a note of the second credential's name for use later on in this tutorial.
Creating the Jenkins Job
Navigate to your Jenkins user interface and follow these steps to configure a Pipeline job.
- Click the Jenkins link in the top left of the interface.
- Click the New Item link in the left navigation.
- Name the project sample-app, then choose the Multibranch Pipeline option and click OK.
- On the next page, click Add Source and select git.
- Paste the HTTPS clone URL of your sample-app repo in Cloud Source Repositories into the Project Repository field. Replace [PROJECT_ID] with your project ID.
https://source.developers.google.com/p/[PROJECT_ID]/r/default
- From the Credentials drop-down, select the name of the credentials you created when adding your service account in the previous steps.
- Under Build Triggers, select the checkbox Build Periodically, and enter five space-separated asterisks (* * * * *) into the Schedule field. This ensures that Jenkins checks your code repository for changes once every minute. This field uses the CRON expression syntax to define the schedule.
- Your job configuration should look like this:
- Click Save.
After you complete these steps, a job named "Branch indexing" runs. This meta-job identifies the branches in your repository and ensures changes haven't occurred in existing branches. If you click sample-app in the top left, you should see the master job.
19. Creating the Development Environment
Development branches are a set of environments your developers use to test their code changes before submitting them for integration into the live site. These environments are scaled-down versions of your application, but need to be deployed using the same mechanisms as the live environment.
Creating a Development Branch
To create a development environment from a feature branch, you can push the branch to the Git server and let Jenkins deploy your environment.
Create a development branch; you will push it to the Git server after making your changes.
git checkout -b new-feature
Modifying the Pipeline Definition
The Jenkinsfile that defines the pipeline is written using the Jenkins Pipeline Groovy syntax. Using a Jenkinsfile allows an entire build pipeline to be expressed in a single file that lives alongside your source code. Pipelines support powerful features like parallelization and requiring manual user approval.
In order for the pipeline to work as expected, you need to modify the Jenkinsfile to set your project ID.
Open the Jenkinsfile in your terminal editor of choice, for example Vi.
vi Jenkinsfile
Replace REPLACE_WITH_YOUR_PROJECT_ID with your project ID. To get your project ID, run gcloud config get-value project.
def project = 'REPLACE_WITH_YOUR_PROJECT_ID'
def appName = 'gceme'
def feSvcName = "${appName}-frontend"
def imageTag = "gcr.io/${project}/${appName}:${env.BRANCH_NAME}.${env.BUILD_NUMBER}"
Save the file and exit the editor. In Vi, enter :wq.
Modify the Site
In order to demonstrate changing the application, we will change the gceme cards from blue to orange (a command-line alternative to these edits is sketched below).
- Open html.go and replace the two instances of blue with orange.
- Open main.go and change the version number from 1.0.0 to 2.0.0. The version is defined in this line:
const version string = "2.0.0"
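If you prefer to make both edits from the command line instead of an editor, here is a quick sketch using sed; it assumes the strings blue and 1.0.0 appear in those files exactly as described above.
# Recolor the info cards and bump the reported version.
sed -i 's/blue/orange/g' html.go
sed -i 's/1\.0\.0/2.0.0/g' main.go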
Kick Off Deployment
Commit and push your changes. This will kick off a build of your development environment.
git add Jenkinsfile html.go main.go
git commit -m "Version 2.0.0"
git push origin new-feature
After the change is pushed to the Git repository, navigate to the Jenkins user interface, where you can see that your build has started for the new-feature branch. It can take up to a minute for the changes to be picked up.
After the build is running, click the down arrow next to the build in the left navigation and select Console Output.
Track the output of the build for a few minutes and watch for the kubectl --namespace=new-feature apply... messages to begin. Your new-feature branch is now being deployed to your cluster.
In a development scenario, you wouldn't use a public-facing load balancer. To help secure your application, you can use kubectl proxy. The proxy authenticates itself with the Kubernetes API and proxies requests from your local machine to the service in the cluster without exposing your service to the Internet.
Start the proxy in the background.
kubectl proxy &
Verify that your application is accessible by sending a request to localhost and letting kubectl proxy forward it to your service. You should see it respond with 2.0.0, which is the version that is now running.
curl http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/version
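When you're finished testing, stop the background proxy. Because it was started with &, standard shell job control works; this assumes the proxy is the only background job in your shell.
kill %1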
20. Deploying a Canary Release
Now that we have verified that our app is running our latest code in the development environment, let's deploy that code to the canary environment.
Create a canary branch and push it to the Git server.
git checkout -b canary
git push origin canary
In Jenkins, you should see the canary pipeline has kicked off. Once complete, you can check the service URL to ensure that some of the traffic is being served by your new version. You should see about 1 in 5 requests returning version 2.0.0.
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done
You can stop this command by pressing Ctrl-C.
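If you'd rather see a tally than watch the stream, you can sample a fixed number of requests and count the versions; with 4 production replicas and 1 canary replica you should see roughly a 4-to-1 split between 1.0.0 and 2.0.0 (the echo guards against responses that lack a trailing newline).
for i in $(seq 1 25); do curl -s http://$FRONTEND_SERVICE_IP/version; echo; done | sort | uniq -c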
21. Deploying to Production
Now that our canary release was successful and we haven't heard any customer complaints, we can deploy to the rest of our production fleet.
Merge the canary branch into master and push it to the Git server.
git checkout master
git merge canary
git push origin master
In Jenkins, you should see the master pipeline has kicked off. Once complete, you can check the service URL to ensure that all of the traffic is being served by your new version, 2.0.0. You can also navigate to the site using your browser to see your orange cards.
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done
You can stop this command by pressing Ctrl-C.
22. Wrapping Up
End your lab
When you have completed your lab, click End. Qwiklabs removes the resources you've used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
Note: The number of stars indicates the following:
- 1 star = Very dissatisfied
- 2 stars = Dissatisfied
- 3 stars = Neutral
- 4 stars = Satisfied
- 5 stars = Very satisfied
You may close the dialog if you don't want to provide feedback.
Additional Resources
- For more information about Google Cloud Training and Certification, see https://cloud.google.com/training/
- For more Google Cloud Platform Self-Paced Labs, see http://www.qwiklabs.com
For feedback, suggestions, or corrections, please use the Support tab.