Deployment using Kubernetes Helm

One of the challenges I face in my development setup is that I want to create and deploy my robotics stack quickly and often. I regularly want to change and redeploy my entire stack from scratch, both to iterate quickly and to keep my costs as low as possible. My Jenkins jobs have helped a great deal here, and automation is definitely key. However, I have recently started experimenting with Kubernetes Helm, a package manager for Kubernetes, which has made this even easier for me.

Kubernetes Helm

Helm is a package manager that allows you to define a package with all its dependent deployment objects for Kubernetes. With Helm and such a package you can ask a cluster to install everything in one go instead of issuing individual deployment commands. For me this means that instead of asking Kubernetes to install each of my micro-services one by one, I simply ask it to install the entire package/release in one atomic action, which also includes all of the dependent services like the databases and message brokers I use.

Installing Helm

In this blog I want to give a small taste of how nice Helm is. So how do we get started? First, follow the installation instructions on this page: https://github.com/kubernetes/helm/blob/master/docs/install.md

In case you are using OSX (like me) it's relatively simple if you use Homebrew; simply run the following cask:

brew cask install helm

Once the Helm client is installed locally, Helm also needs to be installed in your cluster. In my case I will be testing against a minikube installation as described in my previous post: https://renzedevries.wordpress.com/2016/11/07/using-kubernetes-minikube-for-local-test-deployments/

On the command line I have the Kubernetes client (kubectl) configured to point at my minikube cluster. The only thing I have to do to install Helm in my cluster is run the following:

helm init

This will install a component named Tiller in my cluster. Tiller understands how to deploy the Helm packages (charts) into the cluster, and it is in essence the main endpoint the Helm client talks to when installing or changing packages.
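
A quick way to check that both sides are up is to compare the client and server versions and to look for the Tiller pod, which runs in the kube-system namespace:

helm version
kubectl get pods --namespace kube-system

The second command should list a tiller-deploy pod in Running state.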

Creating the package

Next we need to create something called a Chart, which is the unit of packaging in Helm. For this post I will reduce the set of services I have used in previous posts and only deploy the core services Cassandra, MQTT and ActiveMQ. The first thing to define is the Chart.yaml, which is the package manifest:

Chart.yaml
The manifest looks pretty simple. The most important field is the version number; the rest is mainly metadata for indexing:

name: robotics
version: 0.1
description: Robotic automation stack
keywords:
- robotics
- application
maintainers:
- name: Renze de Vries
engine: gotpl

The second thing to define is the deployment objects I want to deploy. For this we create a 'charts' subdirectory which contains these dependent services. In this case I am going to deploy MQTT, ActiveMQ and Cassandra, which are required for my project. Each of these services gets a templates folder, containing the Kubernetes deployment descriptor and service descriptor files, and its own Chart.yaml file as well.

When you have all of this ready, the layout looks as follows:
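
Sketched out, the chart tree looks roughly like this (the exact file names may differ slightly; the full structure is in the repository linked below):

robotics/
├── Chart.yaml
└── charts/
    ├── activemq/
    │   ├── Chart.yaml
    │   └── templates/
    │       ├── deployment.yaml
    │       └── service.yaml
    ├── cassandra/
    │   ├── Chart.yaml
    │   └── templates/
    │       ├── deployment.yaml
    │       └── service.yaml
    └── mqtt/
        ├── Chart.yaml
        └── templates/
            ├── deployment.yaml
            └── service.yaml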

I am not going to write out all the files in this blog; if you want to have a look at the full source, the github repository here contains the full Helm chart structure described in this post: https://github.com/renarj/helm-repo
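
As an illustration, the templates for the MQTT service could look like the minimal sketch below. The container image and labels are assumptions purely for illustration; the real files are in the repository above. The NodePort service type matches how the services are exposed later in this post:

# templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mqtt
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mqtt
    spec:
      containers:
      - name: mqtt
        image: some-mqtt-broker:latest   # placeholder image, see the repository for the real one
        ports:
        - containerPort: 1883

# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mqtt
spec:
  type: NodePort
  selector:
    app: mqtt
  ports:
  - port: 1883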

Packaging a release

Now that the Chart source files have been created, the last thing to do is to create the actual package. For this we have to do nothing more than run the following command from the chart directory:

helm package .

This will create a file called robotics-0.1.tgz that we can use further to deploy our release. In a future blog post I will talk a bit about Helm repositories and how you can distribute these packages, but for now we keep them on the local file system.
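
Before (or after) packaging, it can also be useful to let Helm check the chart for common problems such as missing fields or broken templates:

helm lint .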

Installing a release

Once we have defined the package, the only thing that remains is to install a release into the cluster. This will install all the services that are packaged in the Chart.

In order to install the package we have created above we just have to run the following command:

helm install robotics-0.1.tgz
NAME: washing-tuatar
LAST DEPLOYED: Sun Nov  6 20:42:30 2016
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
amq             10.0.0.59    <nodes>       61616/TCP           1s
mqtt            10.0.0.131   <nodes>       1883/TCP            1s
cassandra-svc   10.0.0.119   <nodes>       9042/TCP,9160/TCP   1s

==> extensions/Deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mqtt        1         1         1            0           1s
amq         1         1         1            0           1s
cassandra   1         1         1            0           1s

We can ask Helm which packages are installed in the cluster by simply listing the installed releases as follows:

helm list
NAME          	REVISION	UPDATED                 	STATUS  	CHART       
washing-tuatar	1       	Sun Nov  6 20:42:30 2016	DEPLOYED	robotics-0.1

Please note that the name for the installation is a randomly generated name; in case you want a well-known name you can install using the '--name' switch and specify the name yourself.
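
For example, to install the chart under a release name of your own choosing (robotics here is just an example):

helm install --name robotics robotics-0.1.tgz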

In order to delete all the deployed objects I can simply ask Helm to uninstall the release as follows:

helm delete washing-tuatar
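
Note that by default Helm keeps the history of a deleted release around so it can be inspected or rolled back. If you want to remove it completely and free up the release name, Helm supports a purge flag:

helm delete --purge washing-tuatar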

Conclusion

I have found that Helm has big potential; it allows me to very quickly define a full software solution composed of many individual deployments. In a future blog post I will talk a bit more about the templating capabilities of Helm and the packaging and distribution of your packages. In the end I hope this blog shows everyone that with Helm you can make all of your Kubernetes work even easier than it already is today 🙂

Using Kubernetes Minikube for Local test deployments

One of the many challenges I face with my Robotics Cloud development is the need to test locally and constantly re-create the stack from scratch. I have a lot of automation to deploy against AWS using Jenkins, as seen in previous posts. However, setting up a local development environment is the thing I do the most, and it costs a lot of time because the tooling has always been painful to use.

Now in the last few months there have been a lot of innovations happening in the Kubernetes field. In particular in this blog post I want to talk about using Minikube.

Minikube

One of the big pains was always setting up a Kubernetes cluster on your local machine. There were some solutions before; the simplest was to use Vagrant or the kube-up script, which would create some VMs in VirtualBox. However, my experience was that they were error prone and did not always complete successfully. For local development setups there is now a new solution called minikube. In essence, using minikube you can create a single-machine Kubernetes test cluster to get up and running quickly.

The simplest way to get started is to install minikube using the latest release instructions. In my case, for OSX on the 0.12.2 release, I install it using this command:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.12.2/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Please visit this page for the latest release of minikube: https://github.com/kubernetes/minikube/releases

In essence the above command downloads the minikube binary and moves it to the local usr bin directory so it's available on the path. After this we can start creating the minikube machine. In my case I will use VirtualBox as the provider, which is automatically detected if it's installed. All I have to do is the following:

minikube start --memory=8192

The above will start a single-node Kubernetes cluster, acting both as master and worker, in VirtualBox with 8GB of memory. It will also ensure my local Kubernetes client (kubectl) configuration points to the cluster master. This will take a few minutes, but after that the cluster should be available, and you can check whether it's ready like this:

kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     1h
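
In case minikube detects the wrong hypervisor on your machine, you can also select the provider explicitly with the --vm-driver flag, for example:

minikube start --vm-driver=virtualbox --memory=8192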

The minikube setup has created a VirtualBox VM that exposes all its services via the VM's IP. The minikube binary provides a shortcut to get that IP using the command below:

minikube ip
192.168.99.100

This ip can be used to directly access all services that are exposed on the kubernetes cluster.
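
For a service exposed via a NodePort, minikube can also construct the full URL for you; the service name (mqtt here) is just an example from the stack above:

minikube service mqtt --url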

Now that the cluster is available you can start deploying to your heart's content, but you might want to use the Kubernetes dashboard, which is handy for a good overview. In order to quickly get to the dashboard you can run this minikube command:

minikube dashboard

If you want to stop the cluster you can simply type the following command:

minikube stop

The next time you start the cluster it will resume the state it was in previously. So all previously running containers will also be started once the cluster comes back up, which is quite handy during development.
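
If you instead want to throw the whole cluster away and start from a completely clean slate, you can delete the VM and create it again:

minikube delete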

Conclusion

I hope this post helps people who are struggling to set up their own Kubernetes cluster and gets them started quickly. I am sure there is a lot more to come from the Kubernetes folks; it's really getting easier and easier 🙂