Building containers with Docker in Docker and Jenkins

Today I want to share some of the experiences I have had experimenting with Docker at home. I really love Docker: it makes deploying my software a lot easier. However, as a hobbyist in my home setup I face one challenge, and that is limited funds. I don't have a nice build farm at home, and I do not want to run a cloud build server 24/7.

However, what I do want is a reliable Continuous Integration environment on hardware that is not always on and might be portable. Based on this I set out to run Jenkins as a Docker container, which is relatively simple. You can simply run "docker run -d -p 8080:8080 jenkinsci/jenkins" and voila: a running Jenkins installation. For most of my challenges this indeed turned out to be enough; I can create portable projects using Jenkins pipelines where I just need to point Jenkins to my GitHub account.
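If you also want the Jenkins configuration to survive container restarts, you can map a host directory onto /var/jenkins_home; the host path below is just an example (it is the same one I use in the docker-compose file later in this post):

docker run -d -p 8080:8080 -v /Users/devuser/dev/docker/volumes/jenkins:/var/jenkins_home jenkinsci/jenkins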

Docker in Docker with Jenkins

The main challenge I faced was that I also wanted to build new containers as part of the CI setup. The problem is that in order to build a Docker container, there needs to be a Docker engine available to the Jenkins host. In other words, I wanted access to a Docker engine from inside my already existing Jenkins container so I can build new containers.

There are actually multiple solutions to the problem:
1. Really run a Docker engine inside the Jenkins Docker container
2. Map the Docker engine from the Jenkins Docker host to be accessible inside the Jenkins container

There is an obvious downside to option 1: running a full Docker engine inside a container requires a privileged container and brings its own storage and stability headaches, which is why many other blogs recommend never doing this. So this left me with option 2: using the Docker engine already running in my parent host environment.
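Just to illustrate how invasive option 1 is: running a Docker engine inside a container boils down to something like the following, using the official docker image's dind variant, which only works with the --privileged flag:

docker run --privileged -d docker:dind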

Solving the problem

In order to make this work I need to do a few things:

1: Install the Docker client inside the Jenkins container
The first part is actually the hardest: I can make the Docker engine of the host system available, but I still need the client tools installed inside the container. The default Jenkins container does of course not contain these tools. As I had to make some more modifications to my Jenkins container anyway, I set out to simply extend the official container.

In order to install the Docker CLI, you can use the Dockerfile below to extend the official Jenkins 2.x container with the latest Docker client that is available. Note that the docker-engine package also contains the daemon, but we will only use the client binary from it.

FROM jenkinsci/jenkins:latest

USER root
RUN apt-get update -qq
RUN apt-get install -qqy apt-transport-https ca-certificates
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy docker-engine

USER jenkins

Next I just need to build this container once on the host system:
docker build -t myjenkins .

2: Start the Jenkins container with a mapped Docker socket
In order to make the Docker engine from the host system available, I need to make its API available to the Jenkins Docker container. You can do this by mapping the Docker socket that is available on the parent system. I have created a small docker-compose file where I map both my volumes and the Docker socket as follows:

jenkins:
  container_name: jenkins
  image: myjenkins:latest
  ports:
    - "8080:8080"
  volumes:
    - /Users/devuser/dev/docker/volumes/jenkins:/var/jenkins_home
    - /var/run:/var/run:rw

Please note in particular the mapping of '/var/run' with rw privileges; this is needed to make sure the Jenkins container has access to the host system's docker.sock. (Mapping just /var/run/docker.sock works as well.)
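A quick way to verify that the socket mapping works is to run the Docker client inside the container against the host engine (the container name 'jenkins' comes from the compose file above):

docker-compose up -d
docker exec jenkins docker ps

If this prints the host's running containers, including the Jenkins container itself, the setup works. If you get a permission error on the socket instead, the jenkins user inside the container needs access to docker.sock (for example via a matching group id, or by running as root).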

3: Building a container
To demonstrate this, what better way than to create a Jenkins pipeline project that can build the Jenkins Docker container itself 🙂

In essence it is quite simple: using Jenkins 2.x you can create a Jenkins Pipeline project and supply it with this Jenkinsfile in the configuration:

node {
   stage 'Checkout'

   git credentialsId: 'aa123c45-6abc-78e9-a1b2-583c1234e123', url: 'git@bitbucket.org:myorg/jenkins-repo.git'

   stage 'Build Jenkins Container'
   sh "docker pull jenkinsci/jenkins:latest"
   sh "docker build -t myjenkins -f ./jenkins/Dockerfile ."
}

The above Jenkinsfile checks out a Git repository where I have put the Dockerfile for the Jenkins container, using the Git credentials that are already saved inside Jenkins.

Because I always want the latest Jenkins 2.x container, I first pull in the latest version from Docker Hub. The next stage simply runs the Docker build, and voila, we have a completely built Jenkins container.
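If you also want to push the resulting image to a registry, you could extend the pipeline with one more stage; the registry name below is just a placeholder:

stage 'Push Jenkins Container'
sh "docker tag myjenkins registry.example.com/myjenkins:latest"
sh "docker push registry.example.com/myjenkins:latest"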

If I now run docker images on the host I see the following:

REPOSITORY                                                         TAG                 IMAGE ID            CREATED             SIZE
myjenkins                                                          latest              f89d0d6ba4a5        24 seconds ago        1.07 GB

Conclusion

When using a standard home setup where hardware does not run full-time, you have some limitations. However, with Docker and Jenkins you can create a very nice and effective Docker CI pipeline.

In the next few blog posts I will detail how to create a fully fledged Kubernetes deployment using Jenkins and Docker.

Robot interaction with a Nao Robot and a Raspberry Pi Robot

In my recent blog posts I have mainly been working on creating the Raspberry Pi based robot. As I mentioned in the first post (https://renzedevries.wordpress.com/2016/02/28/building-a-raspberry-pi-humanoid-robot-in-java-part-1/), the next challenge after getting the Raspberry Pi robot to walk is to have it interact with a Nao robot via an IoT solution (https://renzedevries.wordpress.com/2015/11/26/api-design-the-good-bad-and-the-ugly/).

Robot framework

This has been a lot more challenging than I originally anticipated, mainly because I decided to do this properly and build a framework for it. The approach I have taken is to build an SDK around a standard Java robot model. This capabilities model defines the properties of the robot (speech, movement, sonar, etc.) and generalises them for different robot types. I implement this framework for both the RPI robot and the Nao robot, so in the end they speak the same language.

The benefits of this framework are great, because the idea is to expose it via the cloud using MQTT, a lightweight publish-subscribe messaging protocol widely used in IoT. All sensor data and commands for the robot are sent via an MQTT broker. This also enables another option: I can run just a small piece of software on the robot that exposes its capabilities remotely, and talk to those capabilities via the MQTT broker.

I have chosen MQTT because it is a very simple protocol that is already heavily adopted in the IoT industry. Next to this, my current home automation system runs via MQTT, so in the future this offers a very nice integration between multiple robots and the home automation 😀
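To give an idea of how simple the MQTT side is, here is a minimal sketch of a cloud-side listener using the Eclipse Paho Java client. The broker URL is a placeholder; the topic layout is explained later in this post:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;

public class RobotStateListener {
    public static void main(String[] args) throws MqttException {
        // Placeholder broker URL; any MQTT broker (e.g. Moquette, Mosquitto) works
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "state-listener");
        client.connect();

        // Subscribe to all robot state events and print them as they arrive
        client.subscribe("/states/#", (topic, message) ->
                System.out.println(topic + ": " + new String(message.getPayload())));
    }
}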

In this post I will describe two scenarios:
1. Having the Nao robot trigger a movement in the Raspberry PI robot
2. Have the Nao robot react to a sensor event on the Raspberry PI robot

Implementation

In order to do this I have to implement my framework for each of the robots. The robot framework consists of the following high level design:
[Diagram: robo-sdk high level design]

Capability
A capability indicates something the robot can do. Capabilities range from basic ones like motion, to low-level ones like servo control and sensor drivers, to higher-level ones like speech.

Sensor
These indicate sensors on the robots that can provide feedback based on actions that happen on the robot.

Robot
The robot has a list of capabilities and sensors which form the entire robot entity.
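As a rough illustration, this design might translate into Java interfaces like the following. This is a simplified sketch with illustrative names, not the actual SDK API; the real code is on GitHub (linked below):

import java.util.List;

// Something the robot can do: motion, servo control, speech, etc.
public interface Capability {
    String getId();
}

// A sensor that emits events: distance readings, bumper touches, etc.
public interface Sensor {
    String getName();
}

// The robot entity is the combination of its capabilities and sensors
public interface Robot {
    String getControllerId();
    List<Capability> getCapabilities();
    List<Sensor> getSensors();
}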

The code

Here is an example of bootstrapping the framework on the Nao robot. This is a Spring Boot application that connects to the robot and installs all the capabilities. All the logic for dealing with the robot's capabilities is contained in the specific capability classes. These capabilities are not aware of the cloud connectivity; that is dealt with by a special driver that listens to all local robot events and forwards them to the MQTT bridge, and listens to all incoming MQTT commands meant for this robot.

Robot robot = new SpringAwareRobotBuilder("peppy", context)
        .motionEngine(NaoMotionEngine.class)
        .servoDriver(NaoServoDriver.class)
        .capability(NaoSpeechEngine.class)
        .capability(NaoQRScanner.class)
        .sensor(new DistanceSensor("distance", NaoSensorDriver.SONAR_PORT), NaoSensorDriver.class)
        .sensor(new BumperSensor("head", NaoSensorDriver.TOUCH_HEAD), NaoSensorDriver.class)
        .remote(RemoteCloudDriver.class)
        .build();

RobotEventHandler eventHandler = new RobotEventHandler(robot);
robot.listen(eventHandler);

public static class RobotEventHandler implements EventHandler {
    private Robot robot;

    public RobotEventHandler(Robot robot) {
        this.robot = robot;
    }

    @EventSubscribe
    public void receive(DistanceSensorEvent event) {
        LOG.info("Received a distance event: {}", event);

        if(event.getDistance() < 30) {
            LOG.info("Stopping walking motion");
            robot.getMotionEngine().stopWalking();
        }
    }

    @EventSubscribe
    public void receive(TextualSensorEvent event) {
        LOG.info("Barcode scanned: {}", event.getValue());

        robot.getCapability(NaoSpeechEngine.class).say(event.getValue(), "english");
    }

    @EventSubscribe
    public void receive(BumperEvent event) {
        LOG.info("Head was touched on: {}", event.getLabel());
    }
}

The robot has a main identifier called the 'controllerId'; in the above example this is 'peppy'. All sensors also have a name, for example the BumperSensor defined above has the name 'head'. Each component can generate multiple events, which all carry a label identifying the source of the event. For example, in case the head gets touched, the label indicates where on the head the robot was touched.

Effectively this means any sensor event sent to the cloud will always have three identifiers (controllerId, itemId and label). Commands sent from the MQTT bridge also always have three identifiers, with slightly different meaning (controllerId, itemId and commandType).

Here is an example of a sensor event coming from a robot, forwarded to the MQTT broker. On MQTT each message is sent to a topic that you can subscribe to; events from the robot are sent to a topic such as /states/peppy/head/FrontTactilTouched in the example below. The message sent to that topic has a body like this:

{
   "value":[
      "com.oberasoftware.home.api.impl.types.ValueImpl",
      {
         "value":false,
         "type":"BOOLEAN"
      }
   ],
   "controllerId":"peppy",
   "channelId":"head",
   "label":"FrontTactilTouched"
}

For a command sent via the cloud, the message to the MQTT bridge would be sent to a topic like this: /command/peppy/tts/say. The message for letting the robot say something would be as follows:

{
  "controllerId":"peppy",
  "itemId":"tts",
  "commandType":"say",
  "properties":{
    "text":"Hello everyone, how are you all doing?",
    "language":"english"
  }
}
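As a sketch of how such a command could be sent from any plain Java client, here is the Paho counterpart of the listener shown earlier; the broker URL is again a placeholder, and the JSON body is the one shown above:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;

public class SayCommandSender {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "command-sender");
        client.connect();

        // Command topic format: /command/{controllerId}/{itemId}/{commandType}
        String json = "{\"controllerId\":\"peppy\",\"itemId\":\"tts\",\"commandType\":\"say\","
                + "\"properties\":{\"text\":\"Hello everyone, how are you all doing?\",\"language\":\"english\"}}";
        client.publish("/command/peppy/tts/say", json.getBytes(), 1, false);

        client.disconnect();
    }
}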

If you are curious and would like to see what it takes to develop your own robot to connect to the cloud, you can find all the code on GitHub. The full code for the Nao implementation can be found here: https://github.com/renarj/robo-pep. I have similar code for the Raspberry Pi robot, which can be found here: https://github.com/renarj/robo-max

The Architecture

Now that the robot side has been implemented, it is up to the microservices to coordinate these robots. I have decided to split the architecture in two parts: a public message ring using MQTT, and a secure internal one using ActiveMQ.

At the border of the cloud there is a message processor that picks up messages from MQTT and forwards them to ActiveMQ. This processor checks that incoming messages are valid and allowed, and if so forwards them to the secure internal ring running on ActiveMQ. This way I have a filter on the edge that protects against attacks and authorizes the sensor data and commands; it is nothing more than a bit of safety and scalability for the future.
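A minimal sketch of such an edge processor could look like the following, assuming the Paho client for the MQTT side and the ActiveMQ JMS client for the internal side; broker URLs, the internal topic name and the validation check are all placeholders:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.eclipse.paho.client.mqttv3.MqttClient;

public class EdgeProcessor {
    public static void main(String[] args) throws Exception {
        // Internal, secured ring: ActiveMQ via JMS (placeholder URL)
        Connection jms = new ActiveMQConnectionFactory("tcp://activemq.internal:61616").createConnection();
        jms.start();
        Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createTopic("robot.states"));

        // Public ring: MQTT (placeholder URL)
        MqttClient mqtt = new MqttClient("tcp://broker.example.com:1883", "edge-processor");
        mqtt.connect();
        mqtt.subscribe("/states/#", (topic, message) -> {
            String payload = new String(message.getPayload());
            // Validate/authorize before anything enters the internal ring
            if (isValid(topic, payload)) {
                producer.send(session.createTextMessage(payload));
            }
        });
    }

    private static boolean isValid(String topic, String payload) {
        return topic.startsWith("/states/"); // placeholder check
    }
}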

So the architecture in essence looks as following:
[Diagram: robo-arch architecture overview]

Deploying to the cloud

Based on the above architecture I have a number of services to deploy, including their supporting services. I will deploy the entire stack using Docker on a Kubernetes cluster running on Amazon AWS. For more information on how to deploy to Kubernetes, please read my previous blog post: https://renzedevries.wordpress.com/2016/05/31/deploying-a-docker-container-to-kubernetes-on-amazon-aws/

Deploying the services

Getting back to deploying my services, this is the list of services / containers to deploy:
* MQTT message broker based on Moquette for public Robot messages
* ActiveMQ message broker for internal system messages
* Edge processor that forwards messages from MQTT to ActiveMQ and the other way around
* Command service for sending robot commands to the cloud
* State service for processing robot state (sensor data)
* Dashboard service for interacting with the robot (future)

If you want to know more on how to deploy a Docker container on a Kubernetes cluster and how to create one, please check out my previous blog on that: https://renzedevries.wordpress.com/2016/05/31/deploying-a-docker-container-to-kubernetes-on-amazon-aws/

All of the above services can also be found on my github account: https://github.com/renarj

Putting it all together

The goal I had was to connect two robots via a brokering architecture and let them interact. It was a lengthy process and it was not easy, but I did finally manage to pull it off. Once the above services were deployed, all I had to do was start the Spring Boot applications on the individual robots. In order to get the interaction going, though, I had to write a third Spring Boot application that receives events from both robots and takes action based on these events. The awesome part about the above architecture is that I can now do that simply by writing remote capability connectors for the robot framework that talk directly to the MQTT bridge.

You can see this in the below code:

Robot max = new SpringAwareRobotBuilder("max", context)
        .motionEngine(RemoteMotionEngine.class)
        .remote(RemoteCloudDriver.class, true)
        .build();

Robot pep = new SpringAwareRobotBuilder("peppy", context)
        .motionEngine(RemoteMotionEngine.class)
        .capability(RemoteSpeechEngine.class)
        .remote(RemoteCloudDriver.class, true)
        .build();

MaxRobotEventHandler maxHandler = new MaxRobotEventHandler(pep);
max.listen(maxHandler);

PepRobotEventHandler pepHandler = new PepRobotEventHandler(max);
pep.listen(pepHandler);

public static class MaxRobotEventHandler implements EventHandler {
    private Robot pep;

    private AtomicBoolean guard = new AtomicBoolean(true);

    private MaxRobotEventHandler(Robot pep) {
        this.pep = pep;
    }

    @EventSubscribe
    public void receive(ValueEvent valueEvent) {
        LOG.info("Received a distance: {}", valueEvent.getValue().asString());
        if(valueEvent.getControllerId().equals("max") && valueEvent.getLabel().equals("distance")) {
            int distance = valueEvent.getValue().getValue();
            if(distance < 30) {
                if(guard.compareAndSet(true, false)) {
                    LOG.info("Distance is too small: {}", distance);
                    pep.getCapability(SpeechEngine.class).say("Max, are you ok, did you hit something?", "english");

                    Uninterruptibles.sleepUninterruptibly(10, TimeUnit.SECONDS);
                    guard.set(true);
                    LOG.info("Allowing further distance events");
                }
            }
        }
    }
}

public static class PepRobotEventHandler implements EventHandler {
    private Robot max;

    public PepRobotEventHandler(Robot max) {
        this.max = max;
    }

    @EventSubscribe
    public void receive(ValueEvent valueEvent) {
        LOG.info("Received an event for pep: {}", valueEvent);
        if(valueEvent.getControllerId().equals("peppy") && valueEvent.getItemId().equals("head")) {
            if(valueEvent.getValue().asString().equals("true")) {

                max.getMotionEngine().runMotion("Bravo");
            }
        }
    }
}

What happens in this code: when we receive an event from Pep (the Nao robot) indicating his head was touched, we trigger a motion in the other robot, Max (the Raspberry Pi robot). The other way around, when we receive a distance event from Max indicating he is about to hit a wall, we execute a remote operation on the speech engine of Pep to say something.

So how does that look? Well, see for yourself in this YouTube video:

Conclusion

It has been an incredible challenge to get this interaction working, but I finally did manage it, and I am just at the starting point now. The next step is to work out all the capabilities in the framework, including for example video/vision capabilities in both robots. After that, the next big step will be to get both robots to explore the room, try to find each other and then collaborate. More on that in future posts.