Having fun with Robots and Model trains

My last few blog posts have all been about heavy Docker and Kubernetes topics. I thought it was time for something lighter, so today I want to blog about my hobby robotics project again. Before I had robots to tinker with, I actually used to play around a lot with automating a model train setup. So recently I had the idea: why not combine the two and let one of the robots have some fun by playing with a model train🙂

In this blog post I will use MQTT and the Robot Cloud SDK I have developed to hook up both our Nao robot and the model train to MQTT.

Needed materials

In order to build an automated train layout I needed a model train setup. I already have an existing H0-scale Roco/Fleischmann model train setup, and all the trains on it are digitised using DCC-based decoders. If you do not know what this means, you can read about digital train systems here: http://www.dccwiki.com/DCC_Tutorial_(Basic_System)

Hooking up the train

The train system I have is controlled using an Ecos controller, which has a well-defined TCP network protocol I can use for controlling it. I have written a small library that hooks the controller up to the IoT/robot cloud I have described in previous blogposts. The commands for moving the train are sent over MQTT and then translated into TCP commands the controller can understand.

I will have an MQTT broker available somewhere in the cloud (AWS/Kubernetes) that my robots can also connect to, so this broker will be the glue connecting the robot and the trains.

I don’t really want to bother people too much with the technical details of the train and the code behind it, but if you are interested in the code I have put it on GitHub: https://github.com/renarj/rc-train

Hooking up the robots

Hooking up the robots is actually quite simple; I have done this before and am using the same setup as before. The details are all available in this blog post: https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/

In this case I will be using our Nao robot and hook it up to the MQTT bridge. The framework I have developed contains a standard message protocol on top of MQTT. This means the messages are always defined the same way, and all parties adhering to the protocol can exchange states and commands with each other. In this case both the train and the robot use the same message protocol via MQTT, which is why we can hook them up.

In order to make this a bit more entertaining I want to run a small scenario:
1. Nao walks a bit towards the train and the controller
2. Sits down and says something
3. Starts the train
4. Reacts when the train is running

I always like to put a bit of code in a post, so this code is used to create this scenario:

    private static void runScenario(Robot robot) {
        //Step1: Walk to the train controller (1.2 meters)
        robot.getMotionEngine().walk(WalkDirection.FORWARD, 1.2f);

        //Step2: Let's say something and sit down
        robot.getCapability(SpeechEngine.class).say("Oh is that a model train, let me sit and play with it", "english");

        //Step3: Let's start the train
        sleepUninterruptibly(1, TimeUnit.SECONDS);
        startTrain(robot.getRemoteDriver(), "1005", "forward");

        //Step4: Nao is having lots of fun
        sleepUninterruptibly(5, TimeUnit.SECONDS);
        robot.getCapability(SpeechEngine.class).say("I am having so much fun playing with the train", "english");
    }

    private static void startTrain(RemoteDriver remoteDriver, String trainId, String direction) {
        //NOTE: the original post only preserved the .property(...) fragments of these
        //calls; the publish call and builder entry points below are reconstructed
        //and may not match the actual SDK API exactly
        remoteDriver.publish(new BasicCommandBuilder("trains", "select")
                .property("trainId", trainId).build());
        remoteDriver.publish(new BasicCommandBuilder("trains", "power")
                .property("trainId", trainId)
                .property("state", "on").build());
        remoteDriver.publish(new BasicCommandBuilder("trains", "direction")
                .property("trainId", trainId)
                .property("direction", direction).build());
        remoteDriver.publish(new BasicCommandBuilder("trains", "speed")
                .property("trainId", trainId)
                .property("speed", "127").build());
    }

What happens here is that the Robot SDK contains a bit of code that translates Java objects into MQTT messages. Those MQTT messages are received from the MQTT bridge by the train controller, which translates them again into TCP messages.
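To make this concrete, here is a rough sketch of what such a train command could look like on the wire. Both the topic layout and the payload fields here are my own illustration, not necessarily the SDK's actual schema:

```shell
# Hypothetical MQTT topic and JSON payload for a train speed command.
# The topic structure and field names are assumptions for illustration only.
TOPIC="commands/train/1005"
PAYLOAD='{"commandType":"speed","properties":{"trainId":"1005","speed":"127"}}'
echo "publish to ${TOPIC}: ${PAYLOAD}"
```

A real client would hand this payload to an MQTT library (for example an MQTT publish on that topic) instead of echoing it.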

For people who are also interested in this piece of code, how I create the scenarios around the Nao robot, it is available on GitHub as well: https://github.com/renarj/robo-pep

End result

So what does the end result look like? Well, videos say more than a thousand words (actually ±750 for this post🙂 )

This is just to show that you can have a bit of fun integrating very different devices. Using protocols like MQTT can really empower robots and other appliances to be tightly integrated very easily. The glue I am adding makes sure there is a standard message format on top of MQTT for the different appliances and hooks them all up to MQTT. Stay tuned for some more posts about my robotics and hobby projects.

Deploying Docker containers to Kubernetes with Jenkins

After all my recent posts about deploying a Kubernetes cluster to AWS, the one step I still wanted to talk about is how you can deploy Docker containers to a Kubernetes cluster with a bit of automation. I will try to explain how you can do this relatively simply using Jenkins pipelines and some Groovy scripting🙂

The prerequisites for this setup are:
* Working Kubernetes cluster (see here: https://renzedevries.wordpress.com/2016/07/18/deploying-kubernetes-to-aws-using-jenkins/)
* Jenkins slave/master setup
* Kubectl tool installed and configured on the Jenkins master/slave and desktop
* Publicly accessible Docker images (AWS ECR for example see: https://renzedevries.wordpress.com/2016/07/20/publishing-a-docker-image-to-amazon-ecr-using-jenkins/)

What are we deploying
In order to deploy containers against Kubernetes there are two things that are needed. First I need to deploy the services that ensure we have ingress traffic via AWS ELBs, and that also give us an internal DNS lookup capability for service-to-service communication. Second I need to deploy the actual containers using Kubernetes Deployments.

In this post I will focus on mainly one service which is called ‘command-service’. If you want to read a bit more about the services that I deploy you can find that here: https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/

Creating the services

The first task is to create the actual Kubernetes service for the command-service. The service descriptors are relatively simple in my case: the command-service needs to be publicly load balanced, so I want Kubernetes to create an AWS ELB for me. I will deploy this service by first checking out my git repository, which contains the service descriptors as Kubernetes yaml files. I will then use a Jenkins pipeline with some Groovy scripting to deploy it.

The service descriptor for the publicly load-balanced command-svc looks like this. The service selects all pods that have the label ‘app’ with value ‘command-svc’ and exposes them via the AWS ELB backing this service.

apiVersion: v1
kind: Service
metadata:
  name: command-svc
  labels:
    app: command-svc
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: command-svc
  selector:
    app: command-svc

In order to actually create this service I use the Jenkins pipeline code below. In this code I use the apply command because the services are not very likely to change, and this way the script works both in clean and in already existing environments. Because I constantly create new environments and sometimes update existing ones, I want all my scripts to be runnable multiple times regardless of the current cluster/deployment state.

import groovy.json.*

node {
    stage 'prepare'

    git credentialsId: 'bb420c66-8efb-43e5-b5f6-583b5448e984', url: 'git@bitbucket.org:oberasoftware/haas-build.git'
    sh "wget http://localhost:8080/job/kube-deploy/lastSuccessfulBuild/artifact/*zip*/archive.zip"
    sh "unzip archive.zip"
    sh "mv archive/* ."

    stage "deploy services"
    sh "kubectl apply -f command-svc.yml --kubeconfig=kubeconfig"
    waitForServices()
}

Waiting for creation
One of the challenges I faced though is that I have a number of containers to deploy that depend on these service definitions. However, it takes a bit of time for these services to be deployed and for the ELBs to be fully created. So I have written a small bit of waiting code in Groovy that checks if the services are up and running. It is called via the ‘waitForServices()’ method in the pipeline; you can see the code for it below:

def waitForServices() {
  sh "kubectl get svc -o json > services.json --kubeconfig=kubeconfig"

  while(!toServiceMap(readFile('services.json')).containsKey('command-svc')) {
        echo "Services are not yet ready, waiting 10 seconds"
        sleep 10
        sh "kubectl get svc -o json > services.json --kubeconfig=kubeconfig"
  }
  echo "Services are ready, continuing"
}

Map toServiceMap(servicesJson) {
  def json = new JsonSlurper().parseText(servicesJson)

  def serviceMap = [:]
  json.items.each { i ->
    def serviceName = i.metadata.name
    def ingress = i.status.loadBalancer.ingress
    if(ingress != null) {
      def serviceUrl = ingress[0].hostname
      serviceMap.put(serviceName, serviceUrl)
    }
  }
  return serviceMap
}

This will not complete until at least all the services are ready for usage, in this case my command-svc with its backing ELB.
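As a side note, the lookup that the Groovy JsonSlurper code performs can also be sketched in plain shell against a saved kubectl dump. The JSON below is canned sample data standing in for real `kubectl get svc -o json` output:

```shell
# Extract the ELB hostname for a service from a saved `kubectl get svc -o json`
# dump (canned sample data; a plain-shell sketch of what the Groovy code does).
cat > services.json <<'EOF'
{"items":[{"metadata":{"name":"command-svc"},"status":{"loadBalancer":{"ingress":[{"hostname":"abc123.eu-west-1.elb.amazonaws.com"}]}}}]}
EOF
HOST=$(grep -o '"hostname":"[^"]*"' services.json | head -n1 | cut -d'"' -f4)
echo "command-svc available at $HOST"
```

Against a live cluster you would of course regenerate services.json in each iteration of the wait loop, exactly as the Groovy version does.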

Creating the containers

The next step is actually the most important: deploying the actual container. In this example I will be using the Deployment objects that have been available since Kubernetes 1.2.x.

Let’s take a look at the command-svc container that I want to deploy. I again use the yaml syntax for describing the deployment object:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: command-svc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: command-svc
    spec:
      containers:
      - name: command-svc
        image: account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:latest
        ports:
        - containerPort: 8080
        env:
        - name: amq_host
          value: amq
        # the name of this second environment variable was lost in the original
        # post; the placeholder below needs to be replaced with the real name
        - name: unknown_var
          value: production

Let’s put all that together for the rest of my deployments. In this case I have one additional container that I deploy, the edge-service. Using Jenkins pipelines this looks relatively simple:

    stage "deploy"
    sh "kubectl apply -f kubernetes/command-deployment.yml --kubeconfig=kubeconfig"

I currently do not have any active health checking at the end of the deployment; I am still planning on adding it. For now I just check that the pods and deployments are properly deployed. You can do this by simply running this command:
kubectl get deployments

This will yield something like below:

NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
command-svc   1         1         1            1           1m

If you check the running pods with kubectl get po, you can see the deployment has scheduled a single pod:

NAME                          READY     STATUS    RESTARTS   AGE
command-svc-533647621-e85yo   1/1       Running   0          2m
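A minimal scripted version of that manual check could look like this, shown here against a canned kubectl output dump rather than a live cluster:

```shell
# Check a canned `kubectl get po` dump for a Running command-svc pod.
# On a real cluster you would pipe `kubectl get po` output in directly.
cat > pods.txt <<'EOF'
NAME                          READY     STATUS    RESTARTS   AGE
command-svc-533647621-e85yo   1/1       Running   0          2m
EOF
if grep -q '^command-svc-.*Running' pods.txt; then
  echo "command-svc pod is running"
else
  echo "command-svc pod is NOT running" >&2
  exit 1
fi
```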


I hope in this article I have taken away a bit of the difficulty of deploying your containers against Kubernetes. It can be done relatively simply; of course it's not production grade, but it shows at a very basic level how, with some basic scripting (Groovy), you can accomplish this task using just Jenkins.

In this particular article I have not zoomed into the act of upgrading a cluster or the containers running on them. I will discuss this in a future blog post where I will zoom in on the particulars of doing rolling-updates on your containers and eventually will address the upgrade of the cluster itself on AWS.

Publishing a Docker image to Amazon ECR using Jenkins

I wanted to do a quick post, because some recent posts have led to questions about how I actually make a Docker container available on AWS. Luckily Amazon has a solution for this, and it's called Amazon ECR (EC2 Container Registry).

How to push a container

Let me share a few quick steps on how you can push your Docker container to the Amazon ECR repository.

Step1: Creating a repository
The first step is to create a repository where your Docker container can be pushed to. A single repository can contain multiple versions of a docker container with a maximum of 2k versions. For different docker containers you would create individual repositories.

In order to create a repository for let’s say our test-svc docker container let’s just run this command using the AWS CLI:

aws ecr create-repository --repository-name test-svc

Please note the returned repositoryUri; we will need it in the next steps.

Step2: Logging in to ECR
In order to be able to push containers via Docker, you need to log in to the AWS ECR repository. You can do this by running this AWS CLI command:

aws ecr get-login

This will give an output something like this:

docker login -u AWS -p password -e none https://aws_account_id.dkr.ecr.eu-west-1.amazonaws.com

You need to take that output and run it in the console to perform the actual login, so that you can push your container.

Step3: Pushing the container
Now that we are authenticated we can start pushing the Docker container. Let's first make sure we tag the container we want to push:

docker tag test-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest

And after this we push the container as follows:

docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest

Note: Please make sure to replace aws_account_id with your actual AWS account id. This repository URL with the account ID is also returned when your repository was created in Step 1.
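To avoid typos in that long repository URL, you can compose it once from the account id and region. The values below are placeholders, and the commands are only printed rather than executed so no Docker daemon is needed:

```shell
# Compose the ECR repository URI (placeholder account id) and print the
# docker tag/push commands; echoing instead of running them keeps this a sketch.
AWS_ACCOUNT_ID="123456789012"
REGION="eu-west-1"
IMAGE="test-svc"
REPO_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${IMAGE}"
echo "docker tag ${IMAGE}:latest ${REPO_URI}:latest"
echo "docker push ${REPO_URI}:latest"
```

Dropping the `echo`s turns this into the real tag-and-push sequence from the steps above.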

Automating in a Jenkins job

For people who have read my other posts: I tend to automate everything via Jenkins, and this includes publishing Docker containers to Amazon ECR. This can be done quite simply by creating a small Jenkins job using the following Jenkinsfile. I ask for input to confirm the publish is needed; after that input the container gets published to AWS ECR:

node {
    stage 'build-test-svc'
    //this triggers the Jenkins job that builds the container
    //build 'test-svc'

    stage 'Publish containers'
    shouldPublish = input message: 'Publish Containers?', parameters: [[$class: 'ChoiceParameterDefinition', choices: 'yes\nno', description: '', name: 'Deploy']]
    if(shouldPublish == "yes") {
     echo "Publishing docker containers"
     sh "\$(aws ecr get-login)"

     sh "docker tag test-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest"
     sh "docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest"
    }
}
Note: there is also a Jenkins plugin that can publish to ECR; however, at the time of writing it does not yet support the eu-west region in AWS ECR and gives a login issue.


Hope the above helps for people that want to publish their Docker containers to AWS ECR🙂 If you have any questions do not hesitate to reach out to me via the different channels.

Deploying Kubernetes to AWS using Jenkins

Based on some of my previous posts I am quite busy creating a complete Continuous Integration pipeline using Docker and Jenkins-pipelines. For those who have read my previous blogposts I am a big fan of Kubernetes for Docker container orchestration and what I ideally want to achieve is have a full CI pipeline where even the kubernetes cluster gets deployed.

In this blog post I will detail how you can set up Jenkins to deploy a Kubernetes cluster. I wanted to use CloudFormation scripts, as that makes the most sense for a structural deployment method. It seems the easiest way to do this is actually by using the kube-aws tool from CoreOS, which can generate a CloudFormation script that we can use for deploying a CoreOS-based Kubernetes cluster on AWS.

Preparing Jenkins

I am still using a Docker container to host my Jenkins installation (https://renzedevries.wordpress.com/2016/06/30/building-containers-with-docker-in-docker-and-jenkins/). If you just want to know how to use the kube-aws tool from Jenkins please skip this section🙂.

In order for me to use the kube-aws tool I need to modify my Jenkins Docker container; I need to do three things for that:

  1. Install the aws command line client
    Installing the aws-cli is actually relatively simple: I just need to make sure Python and python-pip are installed, then I can simply install the awscli Python package. These two packages are available from the apt-get repository, so regardless of whether you are running Jenkins in Docker or on a regular Ubuntu box, you can install the aws-cli as follows:

    RUN apt-get install -qqy python
    RUN apt-get install -qqy python-pip
    RUN pip install awscli

  2. Install the kube-aws tool
    Next we need to install the kube-aws tool from CoreOS. This is a bit more tricky as there is nothing available in the default package repositories, so instead I simply download a specific release from the CoreOS site using wget, unpack it, and move the tool binary to /usr/local/bin:

    RUN wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz
    RUN tar zxvf kube-aws-linux-amd64.tar.gz
    RUN mv linux-amd64/kube-aws /usr/local/bin

3. Provide AWS keys
Because I do not want to prebake the AWS identity keys into the Jenkins image, I will instead inject them as environment variables during Docker container startup. Because I start Jenkins using a docker-compose start sequence, I can simply modify my compose file to inject the AWS identity keys; this looks as follows:

jenkins:
  container_name: jenkins
  image: myjenkins:latest
  ports:
    - "8080:8080"
  volumes:
    - /Users/renarj/dev/docker/volumes/jenkins:/var/jenkins_home
    - /var/run:/var/run:rw
  environment:
    # the actual key values are injected at startup and were omitted from the post
    - AWS_ACCESS_KEY_ID=<your-access-key-id>
    - AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
    - AWS_DEFAULT_REGION=eu-west-1

Putting it all together
The Jenkins Dockerfile used to install all the tooling looks as follows:

from jenkinsci/jenkins:latest

USER root
RUN apt-get update -qq
RUN apt-get install -qqy apt-transport-https ca-certificates
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy docker-engine
RUN usermod -a -G staff jenkins
RUN apt-get install -qqy python
RUN apt-get install -qqy python-pip
RUN pip install awscli
RUN wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz
RUN tar zxvf kube-aws-linux-amd64.tar.gz
RUN mv linux-amd64/kube-aws /usr/local/bin

USER jenkins

Generating the CloudFormation script

The kube-aws tooling has relatively simple input parameters and is quite straightforward: it generates a CloudFormation script you can use to deploy a Kubernetes stack on AWS.

The tool has in essence three phases:
1. Initialise the cluster settings
2. Render the CloudFormation templates
3. Validate and Deploy the cluster

To start we need to initialise the tool; we do this by specifying a number of things: the name of the cluster, the external DNS name (for the kube console, for example), and the name of the key-pair to use for the nodes that will be created.

kube-aws init --cluster-name=robo-cluster \
   --external-dns-name=cluster.mydomain.com \
   --region=eu-west-1 \
   --availability-zone=eu-west-1c \
   --key-name=kubelet-key \
   --kms-key-arn="arn:aws:kms:eu-west-1:xxxxxxxxxxxx:key/xxxxxxxxxxxx"

You will also need a KMS encryption key from AWS (see here how: http://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html); the ARN of that key needs to be specified in the above call to the kube-aws tool.

Rendering CloudFormation templates
The next step is actually very simple: we need to render the CloudFormation templates, which we do by executing the command below:

kube-aws render

This renders a number of files in the local directory; most importantly you get cluster.yaml, which contains all the configuration settings used in the CloudFormation template. The CloudFormation template itself is available under the filename stack-template.json.

Setting nodeType and number of nodes
Before we deploy, we need to adjust some typical settings in the generated cluster.yaml settings file. The main settings I want to change are the AWS instance type and the number of Kubernetes worker nodes. Because I want to automate the deployment, I have chosen to do a simple inline replacement of values using sed.

With the code below I set the workerInstanceType from an environment variable called ‘INSTANCE_TYPE’ and the workerCount property (the number of kube nodes) from an environment variable ‘WORKER_COUNT’.

sed -i '''s/#workerCount: 1/workerCount: '''$WORKER_COUNT'''/''' cluster.yaml
sed -i '''s/#workerInstanceType: m3.medium/workerInstanceType: '''$INSTANCE_TYPE'''/''' cluster.yaml
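You can verify what the substitution does against a minimal stand-in for cluster.yaml; the two commented lines below mimic the defaults kube-aws renders:

```shell
# Create a minimal stand-in for the rendered cluster.yaml and apply the same
# sed substitutions the pipeline uses (GNU sed -i syntax, as on the Jenkins box).
cat > cluster.yaml <<'EOF'
#workerCount: 1
#workerInstanceType: m3.medium
EOF
WORKER_COUNT=4
INSTANCE_TYPE=t2.micro
sed -i "s/#workerCount: 1/workerCount: $WORKER_COUNT/" cluster.yaml
sed -i "s/#workerInstanceType: m3.medium/workerInstanceType: $INSTANCE_TYPE/" cluster.yaml
cat cluster.yaml
```

After running this, the two settings are uncommented and carry the values from the environment variables.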

The reason I use environment variables is that this way I can use Jenkins job input parameters later on to specify them. This way, when deploying a cluster, I can specify the values when triggering the job.

The last step is to validate that the changes we have made are correct; we do this as follows:

kube-aws validate

Deploying the Kubernetes cluster to AWS

The next and final step is to actually run the CloudFormation scripts. The kube-aws tool has a handy command for this that executes the CloudFormation template. Simply run this:

kube-aws up

You are of course free to run the actual template yourself manually via the aws-cli or the web console; I just like this handy shortcut built into the kube-aws tool for automation purposes. All the files for running this manually are available after the render and validate steps described above. The outputs you need are cluster.yaml, stack-template.json, the userdata/* folder and the credentials folder.

After the kube-aws command completes it will have written a valid kubeconfig file to the local working directory. This kubeconfig can be used to access the kubernetes cluster. The cluster will take roughly an additional 5-10 minutes to be available but once it is up you can execute the regular kubernetes commands to validate your cluster is up and running:

kubectl --kubeconfig=kubeconfig get nodes

Creating a Jenkins Job

I want to put all of the above together in a single Jenkins job. For this I have created a Jenkins pipeline file that combines the entire process from the smaller pieces above. On top of this I introduced a multi-stage Jenkins pipeline:
1. Initialise the tool
2. Change the cluster configuration based on input parameters
3. Archive the generated cloudformation scripts
4. Deploy the cluster
5. Destroy the cluster

Input parameters
In the Jenkins pipeline I have defined multiple input parameters. These allow me to customise the worker count and instance type as described before. You can use them in two ways: you can hard-code them in a freestyle job, but if you want to use the new-style build pipelines in Jenkins you can use this:

   WORKER_COUNT = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: '4', description: '', name: 'WORKER_COUNT']]
   INSTANCE_TYPE = input message: 'Instance Type', parameters: [[$class: 'StringParameterDefinition', defaultValue: 't2.micro', description: '', name: 'INSTANCE_TYPE']]

For steps 4 and 5 I use an additional input step in Jenkins to check whether the cluster really needs to be deployed. The additional fifth step also makes it possible to destroy the cluster.

The full pipeline looks like this:

node {
   stage 'Kube-aws init'

   sh "kube-aws init --cluster-name=robo-cluster \
   --external-dns-name=kube.robot.renarj.nl \
   --region=eu-west-1 \
   --availability-zone=eu-west-1c \
   --key-name=kube-key \
   --kms-key-arn=\"arn:aws:kms:eu-west-1:xxxxxxxxxxxx:key/xxxxxxxxxxxx\""

   stage "Kube-aws render"

   WORKER_COUNT = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: '4', description: '', name: 'WORKER_COUNT']]
   INSTANCE_TYPE = input message: 'Instance Type', parameters: [[$class: 'StringParameterDefinition', defaultValue: 't2.micro', description: '', name: 'INSTANCE_TYPE']]

   sh "kube-aws render"
   sh "sed -i '''s/#workerCount: 1/workerCount: '''$WORKER_COUNT'''/''' cluster.yaml"
   sh "sed -i '''s/#workerInstanceType: m3.medium/workerInstanceType: '''$INSTANCE_TYPE'''/''' cluster.yaml"
   sh "kube-aws validate"

   stage "Archive CFN"
   step([$class: 'ArtifactArchiver', artifacts: 'cluster.yaml,stack-template.json,credentials/*,userdata/*', fingerprint: true])

   stage "Deploy Cluster"
   shouldDeploy = input message: 'Deploy Cluster?', parameters: [[$class: 'ChoiceParameterDefinition', choices: 'yes\nno', description: '', name: 'Deploy']]
   if(shouldDeploy == "yes") {
    echo "deploying Kubernetes cluster"
    sh "kube-aws up"
    step([$class: 'ArtifactArchiver', artifacts: 'kubeconfig', fingerprint: true])
   }

   stage "Destroy cluster"
   shouldDestroy = input message: 'Destroy the cluster?', parameters: [[$class: 'BooleanParameterDefinition', defaultValue: false, description: '', name: 'Destroy the cluster']]
   if(shouldDestroy) {
    sh "kube-aws destroy"
   }
}
In Jenkins you can create a new job of type ‘Pipeline’ and simply copy the above Jenkinsfile script into the job. But you can also check this file into source control and let Jenkins scan your repositories for the presence of these files using the Pipeline multibranch capability.

The pipeline stage view will roughly look as follows:
Screen Shot 2016-07-18 at 08.30.57


I hope the above shows people one way of automating the deployment of a Kubernetes cluster. Please bear in mind that this is not ready for a production setup and will require more work; it is still lacking significant steps like rolling upgrades of existing clusters, deploying parallel clusters, cluster expansion, etc.

But for now for my use-case, it helps me to quickly and easily deploy a Kubernetes cluster for my project experiments.

Building containers with Docker in Docker and Jenkins

Today I wanted to share some of the experiences I have when experimenting with Docker at home. I really love Docker; it makes my life of deploying software a lot easier. However, as a hobbyist in my home setup I face one challenge: limited funds. I don't have a nice build farm at home, and I do not want to run a cloud build server 24/7.

However, what I do want is a reliable Continuous Integration environment on hardware that is not always on and might be portable. Based on this I set out to run Jenkins as a Docker container, which is relatively simple. You can simply run docker run -d -p 8080:8080 jenkinsci/jenkins and voilà, a running Jenkins installation. For most of my challenges this indeed seems to be enough: I can create portable projects using Jenkins pipelines where I just need to point Jenkins to my GitHub account.

Docker in Docker with Jenkins

The main challenge I faced was that I also wanted to build new containers as part of the CI setup. The problem is that in order to build a Docker container, there needs to be a running Docker engine available to the Jenkins host. I would like access to a Docker engine inside my already existing Docker container so I can build new containers.

There are actually multiple solutions to the problem:
1. Really run a Docker engine inside the Jenkins Docker container (Docker-in-Docker)
2. Map the Docker engine from the Jenkins Docker host so it is accessible inside the Jenkins container

There is an obvious downside to option 1, and many other blogs recommend against ever using it. So this left me with option 2: using the Docker engine already running in my parent host environment.

Solving the problem

In order to make this work I need to do a few things:

1: Install the Docker client inside the Jenkins container
The first part is actually the hardest: I can make the Docker engine of the host system available, but I still need the client tools installed. The default Jenkins container of course does not contain these tools. As I had to make some more modifications to my Jenkins container anyway, I set out to simply extend the current container.

In order to install the Docker CLI, you can use the below Dockerfile to extend the official Jenkins 2.x container with the latest docker client that is available.

from jenkinsci/jenkins:latest

USER root
RUN apt-get update -qq
RUN apt-get install -qqy apt-transport-https ca-certificates
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy docker-engine

USER jenkins

Next thing I just need to build this container once on the host system:
docker build -t myjenkins .

2: Start the Jenkins container with a mapped Docker socket
In order to make the Docker from the host system available I need to make the API available to the Jenkins docker container. You can do this by mapping the docker socket that is available on the parent system. I have created a small docker-compose file where I map both my volumes and the docker socket as following:

jenkins:
  container_name: jenkins
  image: myjenkins:latest
  ports:
    - "8080:8080"
  volumes:
    - /Users/devuser/dev/docker/volumes/jenkins:/var/jenkins_home
    - /var/run:/var/run:rw

Please note especially the mapping of ‘/var/run’ with rw privileges; this is needed to make sure the Jenkins container has access to the host system's docker.sock.
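For completeness, the compose file above corresponds roughly to the following docker run invocation. It is printed here rather than executed, so it can be inspected first (the paths are the ones from my setup):

```shell
# Equivalent docker run command for the compose configuration (sketch; echoed
# so it can be reviewed before running on a real host with Docker installed).
CMD="docker run -d --name jenkins -p 8080:8080 \
  -v /Users/devuser/dev/docker/volumes/jenkins:/var/jenkins_home \
  -v /var/run:/var/run:rw myjenkins:latest"
echo "$CMD"
```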

3: Building a container
To demonstrate this, what better way than to create a Jenkins pipeline project that builds the Jenkins Docker container itself🙂

In essence it is quite simple, using Jenkins 2.x you can create a Jenkins Pipeline project and supply the project with this Jenkinsfile in the configuration:

node {
   stage 'Checkout'

   git credentialsId: 'aa123c45-6abc-78e9-a1b2-583c1234e123', url: 'git@bitbucket.org:myorg/jenkins-repo.git'

   stage 'Build Jenkins Container'
   sh "docker pull jenkinsci/jenkins:latest"
   sh "docker build -t myjenkins -f ./jenkins/Dockerfile ."
}
The above Jenkinsfile checks out a git repository where I have put the Dockerfile for the Jenkins container. It uses the credentials already saved inside Jenkins for Git access.

Because I always want the latest Jenkins 2.x container, I first pull in the latest version from Docker Hub. The next stage simply runs the Docker build, and voilà, we have a completely built Jenkins container.

If I now run docker images on the host, I see the following:

REPOSITORY                                                         TAG                 IMAGE ID            CREATED             SIZE
myjenkins                                                          latest              f89d0d6ba4a5        24 seconds ago        1.07 GB


When using a standard home setup where the hardware does not run full-time, you have some limitations. However, with Docker and Jenkins you can still create a very nice and effective Docker CI pipeline.

In some of the next few blog posts I will detail more about how to create a fully fledged Kubernetes deployment using Jenkins and Docker.

Robot interaction with a Nao Robot and a Raspberry Pi Robot

In my recent blogposts I have mainly been working on creating the Raspberry Pi based robot. As I mentioned in the first post (https://renzedevries.wordpress.com/2016/02/28/building-a-raspberry-pi-humanoid-robot-in-java-part-1/), the next challenge after getting the Raspberry Pi robot to walk is to have it interact with a Nao robot via an IoT solution (https://renzedevries.wordpress.com/2015/11/26/api-design-the-good-bad-and-the-ugly/).

Robot framework

This has been a lot more challenging than I originally anticipated, mainly because I decided to do this properly and build a framework for it. The approach I have taken is to build an SDK around a standard Java robot model. This capabilities model defines the properties of the robot (speech, movement, sonar, etc.) and generalises them across different robot types. I implement this framework for both the RPI robot and the Nao robot, so that in the end they speak the same language.

The benefits of this framework are great, because the idea is to expose it via the Cloud using MQTT, an IoT pub-sub message broker. All sensor data and commands for the robots are sent via this MQTT broker. This also means I only need to run a small piece of software on each robot that remotely exposes its capabilities, and everything else can talk to those capabilities via the MQTT broker.

I have chosen MQTT because it is a very simple protocol that is already heavily adopted in the IoT industry. On top of that, my current home automation system runs via MQTT, so in the future this offers a very nice integration between multiple robots and the home automation😀

In this post I will describe two scenarios:
1. Having the Nao robot trigger a movement in the Raspberry PI robot
2. Have the Nao robot react to a sensor event on the Raspberry PI robot


In order to do this I have to implement my framework for each of the robots. The robot framework consists of the following high level design:

A capability indicates something the robot can do. Capabilities can vary from basic ones like motion, low-level ones like servo control and sensor drivers, to higher-level ones like speech.

Sensors are the components on the robots that provide feedback based on what happens to the robot.

The robot has a list of capabilities and sensors which form the entire robot entity.
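To make this design concrete, here is a minimal, hypothetical Java sketch of the model (all names are illustrative, not the SDK's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Something the robot can do: motion, servo control, speech, etc.
interface Capability { String getName(); }

// Something the robot can observe: distance, touch, etc.
interface Sensor { String getName(); }

// The robot entity is simply the sum of its capabilities and sensors.
class SimpleRobot {
    private final String controllerId;
    private final List<Capability> capabilities = new ArrayList<>();
    private final List<Sensor> sensors = new ArrayList<>();

    SimpleRobot(String controllerId) { this.controllerId = controllerId; }

    SimpleRobot capability(Capability c) { capabilities.add(c); return this; }
    SimpleRobot sensor(Sensor s) { sensors.add(s); return this; }

    String getControllerId() { return controllerId; }

    // Look up a capability by name, e.g. "speech"
    Optional<Capability> findCapability(String name) {
        return capabilities.stream()
                .filter(c -> c.getName().equals(name))
                .findFirst();
    }
}
```

The key idea is that a caller only depends on the generic capability interface, so the same code can drive a Nao or a Raspberry Pi robot.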

The code

Here is an example of bootstrapping the framework on the Nao robot. It is a spring-boot application that connects to the robot and installs all the capabilities. All the logic for dealing with the robot's capabilities is contained in the specific capability classes. These capabilities are also not aware of the cloud connectivity; that is dealt with by a special driver that listens to all local robot events and forwards them to the MQTT bridge, and listens to all incoming MQTT commands meant for this robot.

Robot robot = new SpringAwareRobotBuilder("peppy", context)
        .sensor(new DistanceSensor("distance", NaoSensorDriver.SONAR_PORT), NaoSensorDriver.class)
        .sensor(new BumperSensor("head", NaoSensorDriver.TOUCH_HEAD), NaoSensorDriver.class)
        .build();

RobotEventHandler eventHandler = new RobotEventHandler(robot);

public static class RobotEventHandler implements EventHandler {
    private Robot robot;

    public RobotEventHandler(Robot robot) {
        this.robot = robot;
    }

    public void receive(DistanceSensorEvent event) {
        LOG.info("Received a distance event: {}", event);

        if(event.getDistance() < 30) {
            LOG.info("Stopping walking motion");
        }
    }

    public void receive(TextualSensorEvent event) {
        LOG.info("Barcode scanned: {}", event.getValue());

        robot.getCapability(NaoSpeechEngine.class).say(event.getValue(), "english");
    }

    public void receive(BumperEvent event) {
        LOG.info("Head was touched on: {}", event.getLabel());
    }
}

The robot has a main identifier called the 'controllerId'; in the above example this is 'peppy'. All sensors also have a name, for example the BumperSensor defined above has the name 'head'. Each component can generate multiple events, which all need a label identifying the source of the event. For example, in case the head gets touched, the label indicates where on the head the robot was touched.

Effectively this means any sensor event sent to the cloud always has three identifiers (controllerId, itemId and label). Commands sent from the MQTT bridge also always have three identifiers, with a slightly different meaning (controllerId, itemId and commandType).
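As a sketch, composing topic strings from these three identifiers could look like this (a hypothetical helper, not necessarily how the SDK builds its topics):

```java
// Builds the MQTT topic strings described above from the three identifiers.
// These helpers are purely illustrative.
final class Topics {
    private Topics() {}

    // e.g. /states/peppy/head/FrontTactilTouched
    static String state(String controllerId, String itemId, String label) {
        return "/states/" + controllerId + "/" + itemId + "/" + label;
    }

    // e.g. /command/peppy/tts/say
    static String command(String controllerId, String itemId, String commandType) {
        return "/command/" + controllerId + "/" + itemId + "/" + commandType;
    }
}
```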

Here is an example of a sensor event coming from a robot, forwarded to the MQTT broker. On MQTT each message is sent to a topic you can subscribe to; events from the robot are sent to a topic like, for the example below, /states/peppy/head/FrontTactilTouched. The message sent to that topic has a body like the following:


For a command sent via the cloud, the message to the MQTT bridge is sent to a topic like this: /command/peppy/tts/say. The message for letting the robot say something looks as follows:

  	"text":"Hello everyone, how are you all doing?",

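As a sketch, assembling such a 'say' command payload before publishing could look like this. The JSON is hand-built purely for illustration; the real code would use a proper JSON library, and an MQTT client (for example Eclipse Paho) to publish the bytes to the topic:

```java
// Assembles the body of a 'say' command as shown above.
// Hand-built JSON for illustration only; use a JSON library in real code.
final class SayCommand {
    private SayCommand() {}

    static String payload(String text) {
        // Minimal escaping of quotes so the hand-built JSON stays valid
        return "{\"text\":\"" + text.replace("\"", "\\\"") + "\"}";
    }
}
```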
If you are curious what it takes to develop your own robot and connect it to the cloud, you can find all the code on Github. The full code for the Nao implementation is here: https://github.com/renarj/robo-pep. I have similar code for the Raspberry Pi robot, which can be found here: https://github.com/renarj/robo-max

The Architecture

Now that the robot has been implemented, it is up to the microservices to coordinate the robots. I have decided to split the architecture in two parts: a public message ring using MQTT, and a secure internal one using ActiveMQ.

At the border of the cloud there is a message processor that picks up messages from MQTT and forwards them to ActiveMQ. This processor checks whether incoming messages are valid and allowed, and if so forwards them to the secure internal ring running on ActiveMQ. This way I can filter on the edge to protect against attacks and authorise the sensor data and commands; it is nothing more than a bit of safety, and some scalability for a potential future.
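To make the edge check concrete, here is a hypothetical sketch of the kind of validation such a processor could perform. The topic layout follows the examples earlier in this post; the whitelist of controllers is an assumption for illustration:

```java
import java.util.Set;

// A sketch of the validation the edge processor could perform before
// forwarding an MQTT message to the internal ActiveMQ ring.
final class EdgeFilter {
    private final Set<String> allowedControllers;

    EdgeFilter(Set<String> allowedControllers) {
        this.allowedControllers = allowedControllers;
    }

    // Accept only well-formed /states/{controllerId}/{itemId}/{label}
    // topics coming from a known robot; everything else is dropped.
    boolean allow(String topic) {
        String[] parts = topic.split("/");
        // A leading '/' yields an empty first element after split
        if (parts.length != 5 || !parts[0].isEmpty() || !"states".equals(parts[1])) {
            return false;
        }
        return allowedControllers.contains(parts[2]);
    }
}
```

A filter like this keeps unknown controllers and malformed topics out of the internal ring without the internal services having to care.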

So the architecture in essence looks as following:

Deploying to the cloud

Based on the above architecture I have a number of services I need to deploy, including their supporting services. I will deploy the entire stack using Docker on a Kubernetes cluster running on Amazon AWS. For more information on how to deploy to Kubernetes, please read my previous blog post, which is linked below.

Deploying the services

Getting back to deploying my services, this is the list of services / containers to deploy:
* MQTT message broker based on Moquette for public Robot messages
* ActiveMQ message broker for internal system messages
* Edge processor that forwards messages from MQTT to ActiveMQ and vice versa
* Command service for sending robot commands to the cloud
* State service for processing robot state (sensor data)
* Dashboard service for interacting with the robot (future)

If you want to know more on how to deploy a Docker container on a Kubernetes cluster and how to create one, please check out my previous blog on that: https://renzedevries.wordpress.com/2016/05/31/deploying-a-docker-container-to-kubernetes-on-amazon-aws/

All of the above services can also be found on my github account: https://github.com/renarj

Putting it all together

The goal was to connect two robots via a brokering architecture and let them interact. It was a lengthy process and not easy, but I finally managed to pull it off. Once the above services were deployed, all I had to do was start the spring-boot application on each robot. To get the interaction going, though, I had to write a third spring-boot application that receives events from both robots and takes action based on those events. The awesome part about the above architecture is that I can now do that very simply, by writing remote capability connectors for the robot framework that speak directly to the MQTT bridge.

You can see this in the below code:

Robot max = new SpringAwareRobotBuilder("max", context)
        .remote(RemoteCloudDriver.class, true)
        .build();

Robot pep = new SpringAwareRobotBuilder("peppy", context)
        .remote(RemoteCloudDriver.class, true)
        .build();

MaxRobotEventHandler maxHandler = new MaxRobotEventHandler(pep);

PepRobotEventHandler pepHandler = new PepRobotEventHandler(max);

public static class MaxRobotEventHandler implements EventHandler {
    private Robot pep;

    private AtomicBoolean guard = new AtomicBoolean(true);

    private MaxRobotEventHandler(Robot pep) {
        this.pep = pep;
    }

    public void receive(ValueEvent valueEvent) {
        LOG.info("Received a distance: {}", valueEvent.getValue().asString());
        if(valueEvent.getControllerId().equals("max") && valueEvent.getLabel().equals("distance")) {
            int distance = valueEvent.getValue().getValue();
            if(distance < 30) {
                if(guard.compareAndSet(true, false)) {
                    LOG.info("Distance is too small: {}", distance);
                    pep.getCapability(SpeechEngine.class).say("Max, are you ok, did you hit something?", "english");

                    Uninterruptibles.sleepUninterruptibly(10, TimeUnit.SECONDS);
                    LOG.info("Allowing further distance events");
                    guard.set(true);
                }
            }
        }
    }
}

public static class PepRobotEventHandler implements EventHandler {
    private Robot max;

    public PepRobotEventHandler(Robot max) {
        this.max = max;
    }

    public void receive(ValueEvent valueEvent) {
        LOG.info("Received an event for pep: {}", valueEvent);
        if(valueEvent.getControllerId().equals("peppy") && valueEvent.getItemId().equals("head")) {
            if(valueEvent.getValue().asString().equals("true")) {
                // Trigger a motion on Max here (the exact call is omitted in the original snippet)
            }
        }
    }
}

What happens in this code: when we receive an event from Pep (the Nao robot) indicating his head was touched, we trigger a motion in the other robot, named Max (the Raspberry Pi robot). The other way around, if we receive a distance event from Max indicating he is about to hit a wall, we execute a remote operation on Pep's speech engine to say something.

So how does that look? Well, see for yourself in this YouTube video:


It has been an incredible challenge to get this interaction working, but I finally did manage it, and I am just at the starting point now. The next step is to work out all the capabilities in the framework, including for example video/vision capabilities in both robots. After that, the next big step is to get both robots to explore the room, try to find each other and then collaborate. More on that in posts to come.

Deploying a Docker container to Kubernetes on Amazon AWS

Based on recent posts I am still very busy with my robots; the goal I am working on now is a multi-robot interaction solution. The approach I am taking is to utilise the Cloud as a means of brokering between the robots. The architecture I have in mind is based on a reactive / event-driven framework that the robots will use for communication.

One of the foundations for this mechanism is to utilise the Cloud and an event-driven architecture. In another post I will give more detail about how the robots actually interact, but in this post I want to describe the specifics of my Cloud architecture and how to deploy such a cluster of microservices using Docker and Kubernetes.

Deploying to the cloud

The robot framework will have a significant number of microservices and supporting services that I need to deploy. With this in mind I have chosen a Docker / microservices based architecture where, in the future, I can easily extend the number of services and simply grow the cluster if needed.

I have quite some experience with Docker by now and have really grown fond of Kubernetes as an orchestrator, so that is my pick, as it is so easy to administer in the Cloud. There are other orchestrators, but the others seem to be earlier in their development, although the AWS ECS service does seem to get close.

Note to Google: I have played with Google Cloud and Kubernetes before and it's actually a perfect fit. It is super easy to deploy containers and run your Kubernetes cluster there. However, Google does not allow individual developers (like me) without their own company a Cloud account in the EU because of VAT rulings. So unfortunately for this exercise I have to default to Amazon AWS. I hope Google reads this so they can fix it quickly!

Creating a Kubernetes cluster on AWS

In this next part I will demonstrate how to deploy the most important service, the MQTT broker, to a Kubernetes cluster.

In order to get started I need to first create an actual Kubernetes cluster on AWS. This is relatively simple, I just followed the steps on this page: http://kubernetes.io/docs/getting-started-guides/aws/

Prerequisite: Amazon AWS CLI is installed and configured http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

I used the following variables to create the cluster, which will be a two-node cluster (plus a master) using m3.medium instances:

export KUBE_AWS_ZONE=eu-west-1c
export NUM_NODES=2
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=robot-kubernetes-artifacts

After that it is simply a case of running this:

wget -q -O - https://get.k8s.io | bash

After running the above command, a directory called ‘kubernetes’ will have been created, and from there you can control the cluster with the scripts shipped in that directory (script names as in the Kubernetes distribution of that time).

For creating the cluster again (only if you already ran the above command once before):

cluster/kube-up.sh

For stopping the cluster:

cluster/kube-down.sh
After the above commands have run, your cluster is created. For Kubernetes you still need one small tool, the Kubernetes command line client called ‘kubectl’. The kubernetes folder that was downloaded ships with the client; you can simply add it to your path as follows:

# OS X
export PATH=/Users/rdevries/kubernetes/platforms/darwin/amd64:$PATH

# Linux
export PATH=$HOME/kubernetes/platforms/linux/amd64:$PATH

You should be able to check your cluster is up and running with the following:

kubectl get nodes
NAME                                         STATUS    AGE
ip-172-20-0-115.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-116.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-117.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-118.eu-west-1.compute.internal   Ready     4m

Using Amazon ECR

In order to deploy the containers on Kubernetes I have to make them available to Kubernetes and Docker. I could use Docker Hub to distribute my containers, and this is actually fine for MQTT, which I have made available on Docker Hub (https://hub.docker.com/r/renarj/mqtt/).

However, some of the future containers are not really meant for public availability, so instead I am uploading the MQTT container to the new Amazon EC2 Container Registry (ECR), also to test out how simple that is.

We can simply use the AWS CLI to create a container repository for MQTT:
aws ecr create-repository --repository-name mqtt

Next we can build the container from this git repo: https://github.com/renarj/moquette-spring-docker

git clone https://github.com/renarj/moquette-spring-docker.git
cd moquette-spring-docker
mvn clean package docker:build

Now we need to upload the container to AWS. These instructions vary slightly per account, because the account id is part of the repository URL. First we tag the container, then log in to AWS ECR, and last we push the container.
1. Tag the container: docker tag mqtt/moquette-spring:latest 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest (replace 1111 with your AWS account id)
2. Log in to AWS with the command returned by: aws ecr get-login --region eu-west-1
3. Push the container: docker push 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest

This should make the container available in the Kubernetes cluster.

Deploying the containers

In order to deploy the containers I have to create a small Yaml file for the MQTT service and the container to be deployed.

The first part is to tell Kubernetes how to deploy the actual container, we can do this with this simple Yaml file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: mqtt
spec:
  replicas: 1
  selector:
    app: mqtt
  template:
    metadata:
      labels:
        app: mqtt
    spec:
      containers:
      - name: mqtt
        image: 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest
        ports:
        - containerPort: 1883
        imagePullPolicy: Always

If you look at the above sample there are two important aspects: the image location, and the imagePullPolicy set to Always so the latest image is pulled on every deploy. The image location is the same as my ECR repository URL; if you do not know the URL you can always execute the AWS CLI command aws ecr describe-repositories, which returns a list of all available repositories and their URLs.

Note: I have noticed that older versions of Kubernetes had some permission errors when using ECR as the image repository. I am unsure exactly why, but later versions of Kubernetes (1.2.4+) seem to work properly.

Now that we have described how the container is deployed, we want to tell Kubernetes how to make it available. I will load balance the MQTT service using an ELB; the nice part about Kubernetes is that it can arrange all of that for me.

In order to tell Kubernetes and as a result AWS to create a loadbalancer we define a Kubernetes service with type ‘LoadBalancer’:

apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: LoadBalancer
  ports:
  - port: 1883
    targetPort: 1883
    protocol: TCP
    name: mqtt
  selector:
    app: mqtt

With this we should have a load balancer available after the service is created. To find out the public address of the service, you can describe the service in Kubernetes and find the endpoint like this:

kubectl describe svc/mqtt
Name:           mqtt
Namespace:      default
Labels:         app=mqtt
Selector:       app=mqtt
Type:           LoadBalancer
LoadBalancer Ingress:   a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com
Port:           mqtt    1883/TCP
NodePort:       mqtt    30531/TCP
Session Affinity:   None
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  2s        2s      1   {service-controller }           Normal      CreatingLoadBalancer    Creating load balancer
  0s        0s      1   {service-controller }           Normal      CreatedLoadBalancer Created load balancer

Please note the LoadBalancer Ingress address; this is the publicly available load balancer for the MQTT service.

Testing MQTT on Kubernetes

Well, now that our cluster is deployed and we have a load balanced MQTT container running, how can we give this a try? For this I used a simple node.js package called mqtt-cli. Simply run npm install mqtt-cli -g to install it.

After this you can send a test message to our load balanced MQTT container using this command, with -w at the end to watch the topic for incoming messages:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "test message" -w

Now let’s try, in a new console window, to send a message again but this time without the ‘-w’ at the end:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "Message from another window"

If everything worked well, the message should now appear in the other window, and voila: we have a working Kubernetes cluster with a load balanced MQTT broker on it🙂

Next steps

I will be writing a next blog post soon on how all of the above has helped me connect my robots to the cloud. I hope this info helps some people and shows that Kubernetes and Docker are really not at all scary to use in practice🙂