Deployment using Kubernetes Helm

One of the challenges I face in my development setup is that I want to create and deploy my robotics stack quickly and often. I regularly want to change and redeploy the entire stack from scratch, both to iterate quickly and to keep my costs as low as possible. My Jenkins jobs have helped a great deal here, and automation is definitely key. However, I have recently started experimenting with Kubernetes Helm, a package manager for Kubernetes, which has made this even easier for me.

Kubernetes Helm

Helm is a package manager that allows you to define a package containing all the dependent deployment objects for Kubernetes. With Helm and such a package you can then ask a cluster to install everything in one go instead of issuing individual deployment commands. For me this means that instead of asking Kubernetes to install each of my micro-services separately, I simply ask it to install the entire package/release in one atomic action, which also includes all of the dependent services like the databases and message brokers I use.

Installing Helm

In this blog I want to give a small taste of how nice Helm is. So how do we get started? In order to get started with Helm, first follow the installation instructions on this page: https://github.com/kubernetes/helm/blob/master/docs/install.md

In case you are using OSX (like me) it's relatively simple if you have Homebrew; simply install the formula:

brew install kubernetes-helm

Once the helm client is installed locally, Helm's server-side component also needs to be installed in your cluster. In my case I will be testing against a minikube installation, as described in my previous post: https://renzedevries.wordpress.com/2016/11/07/using-kubernetes-minikube-for-local-test-deployments/

On the command line I have the Kubernetes command line client (kubectl) configured to point towards my minikube cluster. The only thing I have to do to install Helm in the cluster is the following:

helm init

This will install a container named Tiller in the cluster. Tiller understands how to deploy the Helm packages (charts) into the cluster, and it is in essence the server-side endpoint the helm client talks to for package deployments and package changes.
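If all went well, Tiller shows up as a regular pod in the kube-system namespace, so a quick sanity check is (the exact pod name will differ):

kubectl get pods --namespace kube-system

This should list a pod named something like tiller-deploy-... in the Running state.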

Creating the package

Next we need to create something called a Chart, which is the unit of packaging in Helm. For this post I will reduce the set of services I have used in previous posts and only deploy the core services: Cassandra, MQTT and ActiveMQ. The first thing to define is the Chart.yaml, which is the package manifest:

Chart.yaml
The manifest looks pretty simple; the most important field is the version number, the rest is mainly metadata for indexing:

name: robotics
version: 0.1
description: Robotic automation stack
keywords:
- robotics
- application
maintainers:
- name: Renze de Vries
engine: gotpl

The second thing to define is the deployment objects I want to deploy. For this we create a 'charts' subdirectory, which contains these dependent services. In this case I am going to deploy MQTT, ActiveMQ and Cassandra, which are required for my project. Each of these services gets its own Chart.yaml file plus a templates folder containing its Kubernetes deployment descriptor and service descriptor; a minimal sketch of such templates follows below.
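To give an idea of what goes into such a templates folder, here is a minimal sketch of what charts/mqtt/templates could contain. The image name is a placeholder for whatever MQTT broker image you use, and the real files in the repository will differ; the NodePort service type matches the <nodes> external IP shown in the install output later in this post:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mqtt
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mqtt
    spec:
      containers:
      - name: mqtt
        # placeholder, use your own MQTT broker image here
        image: your-mqtt-image:latest
        ports:
        - containerPort: 1883
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt
spec:
  type: NodePort
  ports:
  - port: 1883
  selector:
    app: mqtt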

When you have this all ready, the layout looks as follows.
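Sketched as a directory tree (the exact file names in the repository may differ slightly):

robotics/
├── Chart.yaml
└── charts/
    ├── amq/
    │   ├── Chart.yaml
    │   └── templates/
    │       ├── deployment.yaml
    │       └── service.yaml
    ├── cassandra/
    │   ├── Chart.yaml
    │   └── templates/
    │       ├── deployment.yaml
    │       └── service.yaml
    └── mqtt/
        ├── Chart.yaml
        └── templates/
            ├── deployment.yaml
            └── service.yaml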

I am not going to write out all the files in this blog; if you want the full source, have a look at the GitHub repository that contains the complete Helm chart structure described in this post: https://github.com/renarj/helm-repo

Packaging a release

Now that the Chart source files have been created, the last thing to do is to create the actual package. For this we simply run the following command:

helm package .

This will create a file called robotics-0.1.tgz that we can use to deploy our release. In a future blog post I will talk a bit about Helm repositories and how you can distribute these packages, but for now we keep them on the local file system.
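As a sanity check before packaging, Helm can also validate the chart structure for you with its lint command:

helm lint .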

Installing a release

Once we have created the package, the only thing that remains is to install a release into the cluster. This will install all the services that are packaged in the Chart.

In order to install the package we have created above we just have to run the following command:

helm install robotics-0.1.tgz
NAME: washing-tuatar
LAST DEPLOYED: Sun Nov  6 20:42:30 2016
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
amq             10.0.0.59    <nodes>       61616/TCP           1s
mqtt            10.0.0.131   <nodes>       1883/TCP            1s
cassandra-svc   10.0.0.119   <nodes>       9042/TCP,9160/TCP   1s

==> extensions/Deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mqtt        1         1         1            0           1s
amq         1         1         1            0           1s
cassandra   1         1         1            0           1s
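Helm can reprint this resource overview at any time for a given release:

helm status washing-tuatar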

We can ask Helm which packages are installed in the cluster by simply requesting the list of installed releases as follows:

helm list
NAME          	REVISION	UPDATED                 	STATUS  	CHART       
washing-tuatar	1       	Sun Nov  6 20:42:30 2016	DEPLOYED	robotics-0.1

Please note that the name of the installation is randomly generated; in case you want a well-known name you can specify it yourself using the '--name' switch, as shown below.
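For example, to install the same package under a fixed release name:

helm install --name robotics robotics-0.1.tgz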

In order to delete all the deployed objects I can simply ask Helm to uninstall the release as follows:

helm delete washing-tuatar
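Note that Helm keeps the history of a deleted release around, so the release name stays reserved; if you want it gone completely, Helm also has a purge flag:

helm delete --purge washing-tuatar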

Conclusion

I have found that Helm has big potential: it allows me to very quickly define a full software solution composed of many individual deployments. In a future blog post I will talk a bit more about the templating capabilities of Helm and about packaging and distributing your charts. In the end I hope this blog shows everyone that with Helm you can make all of your Kubernetes work even easier than it already is today 🙂


Building containers with Docker in Docker and Jenkins

Today I wanted to share some of the experiences I have had experimenting with Docker at home. I really love Docker; it makes deploying my software a lot easier. However, as a hobbyist in my home setup I face one challenge: limited funds. I don't have a nice build farm at home, and I do not want to run a cloud build server 24/7.

What I do want is a reliable Continuous Integration environment on hardware that is not always on and might be portable. Based on this I set out to run Jenkins as a Docker container, which is relatively simple: run docker run -d -p 8080:8080 jenkinsci/jenkins and voila, a running Jenkins installation. For most of my needs this is indeed enough; I can create portable projects using Jenkins pipelines where I just need to point Jenkins to my GitHub account.
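One tip: map a volume for the Jenkins home directory as well, otherwise all configuration is lost when the container is removed. For example with a named volume (jenkins_home is just an arbitrary volume name):

docker run -d -p 8080:8080 -v jenkins_home:/var/jenkins_home jenkinsci/jenkins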

Docker in Docker with Jenkins

The main challenge I faced was that I also wanted to build new containers as part of the CI setup. The problem is that building a Docker container requires a running Docker engine on the Jenkins host; in other words, I need access to a Docker engine from inside my already existing Jenkins container so I can build new containers there.

There are actually multiple solutions to the problem:
1. Really run a Docker engine inside the Jenkins Docker container
2. Map the Docker socket from the host so the host's engine is accessible inside the Jenkins container

Option 1 has obvious downsides, and many other blogs recommend against it. So this left me with option 2: using the Docker engine that is already running on the parent host.

Solving the problem

In order to make this work I need to do a few things:

1: Install the Docker client inside the Jenkins container
The first part is actually the hardest: I can make the Docker engine of the host system available, but I still need the client tools installed. The default Jenkins container of course does not contain these tools. As I had to make some more modifications to my Jenkins container anyway, I set out to simply extend the official container.

In order to install the Docker CLI, you can use the below Dockerfile to extend the official Jenkins 2.x container with the latest docker client that is available.

# Extend the official Jenkins 2.x image
FROM jenkinsci/jenkins:latest

# Switch to root so we can install packages
USER root
RUN apt-get update -qq
RUN apt-get install -qqy apt-transport-https ca-certificates
# Add the official Docker apt repository and its signing key
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy docker-engine

# Drop back to the unprivileged jenkins user
USER jenkins

Next, I just need to build this container once on the host system:
docker build -t myjenkins .
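To quickly verify the client actually made it into the image, you can run it once with the entrypoint overridden; this only prints the client version, so it works even without a Docker socket mapped:

docker run --rm --entrypoint docker myjenkins --version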

2: Start the Jenkins container with a mapped Docker socket
In order to make the Docker engine of the host system available, I need to expose its API to the Jenkins container. You can do this by mapping the Docker socket of the parent system into the container. I have created a small docker-compose file where I map both my volumes and the Docker socket as follows:

jenkins:
  container_name: jenkins
  image: myjenkins:latest
  ports:
    - "8080:8080"
  volumes:
    - /Users/devuser/dev/docker/volumes/jenkins:/var/jenkins_home
    - /var/run:/var/run:rw

Please note especially the mapping of '/var/run' with rw privileges; this is needed to make sure the Jenkins container has access to the host system's docker.sock.
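With that in place, bring the container up and do a quick sanity check from inside it. If the socket mapping works, docker ps inside the Jenkins container lists the containers of the host, including the Jenkins container itself (depending on the socket's permissions you may need to add -u root to the exec):

docker-compose up -d
docker exec jenkins docker ps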

3: Building a container
To demonstrate this, what better way than to create a Jenkins pipeline project that builds the Jenkins Docker container itself 🙂

In essence it is quite simple, using Jenkins 2.x you can create a Jenkins Pipeline project and supply the project with this Jenkinsfile in the configuration:

node {
   // Check out the repository containing the Dockerfile,
   // using credentials already stored in Jenkins
   stage 'Checkout'

   git credentialsId: 'aa123c45-6abc-78e9-a1b2-583c1234e123', url: 'git@bitbucket.org:myorg/jenkins-repo.git'

   // Pull the latest base image, then build our extended image
   stage 'Build Jenkins Container'
   sh "docker pull jenkinsci/jenkins:latest"
   sh "docker build -t myjenkins -f ./jenkins/Dockerfile ."
}

The above Jenkinsfile checks out a Git repository where I have put the Dockerfile for the Jenkins container, using the credentials that are already stored inside Jenkins.

Because I always want the latest Jenkins 2.x container, I first pull in the latest version from Docker Hub. The next stage simply runs the Docker build and voila, we have a completely built Jenkins container.

If I now run docker images on the host I see the following:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
myjenkins           latest              f89d0d6ba4a5        24 seconds ago      1.07 GB

Conclusion

A standard home setup where the hardware does not run full-time comes with some limitations. However, with Docker and Jenkins you can still create a very nice and effective Docker CI pipeline.

In the next few blog posts I will detail how to create a fully fledged Kubernetes deployment using Jenkins and Docker.