Building a Raspberry PI Robot Car part 3: Wheel design

It’s been a while since my last post on the design of my robot car. The reason is that I moved to a new house and was also quite stuck on the wheel design for my robot car; it took many iterations to get the design right. This post is therefore all about robot wheels and the design of an awesome drive system for the robot 🙂

Original Design

For the first iteration I started out with a more traditional design using a simple rubber wheel. This wheel has a suspension system with spring-based actuation built on a simple triangular geometry.

This looked as follows:

This design has some obvious challenges, one being that with 4 motors and this much friction you cannot really drive the setup without some form of active steering. My design did not include steering, and that was also not the intention; the wheel setup was only there to validate the suspension design. Although the suspension was weak, it was good enough for such simple wheels, so I kept the basic design for the moment.

Wheel Design

The design goal was to create Mecanum wheels that would allow an omnidirectional drive system. I had looked into buying these wheels online, but seeing prices of $130–150 for a set of 4 wheels I thought: why not print them myself? Initially I looked through existing designs and settled on something I found on a blog / Thingiverse, and modified it to fit my Dynamixel servos.

This resulted in the following wheel setup:

Although it worked more or less, the main challenge was that I never got the rollers to roll freely enough. Together with a lack of friction this made the drive quality quite poor and unreliable, for example when driving in ‘strafing’ mode, as is visible in this movie:

Mecanum Wheel v2
Based on this I decided to take a completely different approach and design the full wheel myself. I created a properly angled roller setup and used bearings for all the rollers. It took me many iterations and improvement cycles to get this right; there was always some challenge in getting a perfectly frictionless setup. It took me as many as 20 prototypes before I settled on a final design.

Next to this, the rollers were printed in a TPU-based filament so they are more rubber-like, as you would have in real Mecanum wheels. This significantly improved the roller quality and resulted in the following single wheel setup:

Improved suspension

After finishing the Mecanum wheels I found out the current suspension was simply too light for the heavier wheels and there was too much flex in the system. Inspired by a Makeblock robot, I started a new design and ended up with the following:

Driving the Rover

The result is quite good: the robot strafes and rotates on its axis perfectly.
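The strafing and on-axis rotation come directly from the standard Mecanum inverse kinematics. As a reference, here is a minimal sketch in Java; the wheel ordering and sign conventions are my assumptions and not taken from the robot's actual code:

```java
public class MecanumKinematics {

    /**
     * Standard Mecanum inverse kinematics: converts a desired body velocity
     * (vx forward, vy left, omega counter-clockwise) into four wheel speeds,
     * ordered front-left, front-right, rear-left, rear-right.
     * k is half the wheelbase plus half the track width (metres).
     */
    public static double[] wheelSpeeds(double vx, double vy, double omega, double k) {
        return new double[] {
                vx - vy - k * omega,  // front-left
                vx + vy + k * omega,  // front-right
                vx + vy - k * omega,  // rear-left
                vx - vy + k * omega   // rear-right
        };
    }

    public static void main(String[] args) {
        // Strafing left: front-left and rear-right run backwards, the other two forwards
        double[] strafe = wheelSpeeds(0, 1, 0, 0.16);
        System.out.printf("strafe left: %.1f %.1f %.1f %.1f%n",
                strafe[0], strafe[1], strafe[2], strafe[3]);
    }
}
```

Driving all four wheels forward at equal speed gives pure forward motion, and opposite speeds per diagonal pair give the strafe, which is exactly what the video shows.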

I hope this shows that it is possible to create a fully open source robot design with Mecanum wheels that works quite well. If there is interest I can upload the 3D files to Github in the coming period.


Building a Raspberry PI Robot Car part 2

In the last post we talked about the electronics; in this post I will talk a bit about the 3D design and printing of the components. I have recently acquired an Ultimaker 3D printer, and after quite some experimenting I am now able to start designing my own components for the robot car. In this blog I will walk through the design of the robot car.

Designing the robot

The robot itself is going to consist of 4 main components:
* Casing containing the main electronics (Raspberry PI, power distribution, etc.)
* Casing containing the LiPo battery that allows easy battery replacement
* Frame that supports both the battery and electronics casing
* Wheel / suspension mechanism to hold the wheels

Note: The printer has a maximum print size of roughly 20x20x20 cm, which is the main reason the casings for the power, the electronics and the frame are separate parts.

The software
For the design I started out with TinkerCad, a free online 3D editor. However, I quickly ran into problems once the dimensions became more complex. I then switched to Autodesk Fusion 360, which is a lot better when it comes to designing technical components; as a hobbyist it is possible to get a free one-year license.

Wheel / Suspension

The suspension is a spring-based design that allows some flex in the wheel mount. The wheel itself attaches to a servo, so I have designed a small bracket suited to my Dynamixel servos.

Next, I have one beam that has the spring attached to it and two static beams that connect to the servo holder. The static beams ensure linear motion of the servo holder, and the spring dampens the motion. This looks as follows:

For the wheel design I will at some point dedicate a special post as they have caused me a lot of headache. For now I will use some standard wheels that fit onto the servo’s, but ultimately these will become mecanum based wheels.

Designing the frame

The beams used for the suspension are actually part of the base frame. There are going to be 4 wheels, meaning 4 beams that are part of the frame. In order to create sufficient surface for the battery and electronics casings I have connected the beams into a longer frame using connecting pieces: I designed end pieces for both ends of the frame and a middle piece to connect the beams together. This looks as follows:

Each of the beams has a length of 12cm, the middle piece is 4cm and the end pieces are 2cm each, giving a total length of 32cm (12 + 12 + 4 + 2 + 2) for the robot car. This is quite long, but with the current suspension design it is needed, as the suspension beams cannot really be shortened. In the future I might shorten the design by redesigning the suspension, but for now it's good enough.

Battery & Electronics case

The main battery and electronics case caused me a lot of problems and took many iterations to get right. Every time you print it, something is not entirely right. The trick has been to measure, measure and measure again every component you want to fit. In the end I drew a sketch on paper roughly showing the placement of the components. Both the battery and electronics case have to fit within a fixed length of 16cm and a width of 10cm to fit the base frame. The electronics case contains special accommodation for the Raspberry PI, the UBEC power converter, two Grove sensors and the Dynamixel power board:

Note: The electronics casing will have a separate lid which will allow closing up the electronics compartment and allow easy access.

The battery case is a lot simpler: we just need something to contain the battery. One challenge, however, is that I do not want a lid here; the battery just needs to be easily replaceable. For this to work there are two covers on either end of the case that hide the wires but are far enough apart to remove the battery. A note here is that I used rounded edges instead of sharp 90-degree angles to allow for better printing without support. The rounded angles print pretty decently on my Ultimaker, which is a lot better than having support material inside the case. The case looks as follows:

Assembling the robot

Here is a series of pictures of the various parts in stages of assembly:


The process of getting to the above design and printed parts has not been easy. Each component went through many, many iterations before getting to the above. Even now I still see areas for improvement, but for now I do think it's close to being a functional robot car, which was the goal. In future posts I will start talking a bit about the software and the drive system with the Mecanum wheels.

For those wanting to have a look at the 3D parts, I have uploaded them to Github. The idea is to provide in the future a proper manual on how to print and assemble the robot, with a bill of materials; for now just have a look:

Here is a last picture to close with, of the first power-up of the robot car:

Building a Raspberry PI Robot Car part1

In recent months I have been very focused on a few topics like humanoid robotics and robot interaction. Recently I had some extra time and decided to take the next step and really design a robot from scratch. For my first from-scratch robot I thought it would be handy to start simple and go for a relatively simple wheeled robot.

I will write a series of blog posts about the robot and how I am taking the next steps to design and hopefully perfect it. In this first post I will discuss the basic concept and show how I am going to power the servos and control unit.

The robot concept

Let’s start off with setting the design goal of the robot:
Design an open source wheel based robot that has a holonomic drive solution capable of detecting obstacles and recognising objects it encounters

Given this goal, let's first start off with some basic requirements for the robot and what it needs to consist of. I will design this robot based on principles I have used in previous robot modifications, which has led me to these requirements:
* It will be based on a Raspberry PI with Wifi
* Entire robot should be powered by a single LiPo battery for simplicity
* Distance based sensors for obstacle detection
* Rotatable vision camera
* Holonomic drive system where I can use four individual wheel servos for multi-vector driving
* Arm / Gripper for interaction


For the servos in this project I will, for the moment, re-use my trusty Dynamixel AX-12A servos, which can be used in continuous rotation mode and therefore act as wheel servos. Given the desire to open source this project and the cost of these servos, they will be replaced in the future; for the first iteration, however, it is best to stick to what I know.

Powering the solution

One of the important principles for this robot design is that it needs to be powered by a single power source. In previous robots I always used the Robotis LiPo battery combined with a separate battery for the Raspberry PI. This has caused issues in multiple projects, like balancing problems or simply having nowhere to put the batteries.

LiPo Battery
In this robot I will use a single LiPo battery; I have picked a Turnigy NanoTech 3S battery with a 2200mAh capacity. This should be plenty to power the Raspberry PI and the servos for an estimated 30-60 minutes, and it is easy enough to increase the capacity in the future.

Power conversion
The Raspberry PI accepts 5 volts as input and needs roughly 1-2 amps of current. In order to use a single LiPo battery I need a power converter, as a 3S LiPo has a nominal output voltage of ~11.1 volts and a maximum of ~12.6 volts when fully charged. For this I will use a simple UBEC (Universal Battery Elimination Circuit) from HobbyKing. This UBEC can convert input voltages ranging from 6 to 23 volts into a stable output of 5.1 volts with a maximum current of 3 amps, which is perfect for the Raspberry PI.
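As a rough sanity check on the runtime estimate above, a back-of-the-envelope power budget can be computed. The average current draw of the PI, the per-servo wattage and the converter efficiency below are assumptions, not measurements:

```java
public class PowerBudget {

    /** Estimated runtime in minutes: capacity divided by the average
     *  current drawn from the pack, derated by converter efficiency. */
    public static double runtimeMinutes(double capacityAh, double loadWatts,
                                        double efficiency, double packVolts) {
        double packAmps = (loadWatts / efficiency) / packVolts;
        return capacityAh / packAmps * 60;
    }

    public static void main(String[] args) {
        double piWatts = 5.1 * 1.5;   // Raspberry PI at ~1.5 A average (assumed)
        double servoWatts = 4 * 3.0;  // four AX-12A wheel servos, ~3 W each on average (assumed)
        double minutes = runtimeMinutes(2.2, piWatts + servoWatts, 0.85, 11.1);
        System.out.printf("Estimated runtime: %.0f minutes%n", minutes);
    }
}
```

With these assumed figures the 2200mAh pack lands at roughly an hour, which is in the same ballpark as the 30-60 minute estimate.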

For the Dynamixel servos I will use the official Dynamixel power converter, an SMPS2Dynamixel. This can take an input voltage of up to 20 volts, so it can be connected directly to the 3S LiPo. All we need is a small 2.1/5.5mm DC power jack; I managed to source one with a screw terminal, but you can find different types.

Power wire harness
In order to connect both the UBEC and the SMPS2Dynamixel to a single battery I had to create a small power harness that splits the power output from the 3S LiPo to both power converters. For this I custom-built a harness using XT60 power plugs and some cables that I soldered together, with a screw cap on the end to protect the wire ends. It is all topped off with some electrical insulation tape and looks as follows:

Combining it all
The next step is connecting all the electronics. In order to control the Dynamixel servos I will use my trusted USB2AX, which allows controlling the servos via the Dynamixel serial protocol. What remains is wiring up the power to a servo and the Raspberry PI. What better way to show this than with a picture:

In order to connect the entire solution I had to hook the UBEC directly onto the 5V/GND header pins of the Raspberry PI. Do this with extreme care: any wrong polarity will instantly blow up your Raspberry PI. Make sure to check the pin layout properly (red = 5V, black = GND) and that the wires go to the correct pin headers on the Raspberry PI.

Look at this slightly more zoomed-in picture for the polarity of the Raspberry PI / UBEC connection, and click on it for full zoom:

Next Steps

In this post I zoomed in on the big project plan and in particular laid out the power setup. In the next post I will start with the 3D design of the robot and show how I use Fusion 360 to create it.

Remote Controlling Nao robot using a Raspberry Pi Robot

Today I want to take some time to write about the next step I am currently taking to have both my self-built Raspberry PI robot and the Nao robot interact with each other in a useful way. You might have already seen some earlier posts, like the one about robot interaction or perhaps the model train one. However, neither of those posts really demonstrated a practical use-case.

Recently I presented on this topic at the Devoxx conference in Antwerp, where I attempted to demonstrate how to control one robot from another using Kubernetes, Helm and Minikube combined with some IoT glue 🙂 The scenario I demonstrated was to turn my Raspberry PI robot into a robotic arm that I then use to remote control a Nao robot.

Robot arm
In order to have some form of remote control I have created a robot arm which I can use as a sort of joystick. I have created it from the same parts as described in an earlier post. The robot arm is controlled via a Raspberry PI that runs a bit of Java software connecting it to MQTT, both to send servo position changes and to receive commands to execute motions on the arm.

The robot looks like this:

Nao Robot
For the Nao robot I have written a custom Java controller that connects to the remote API of the Nao. This controller software does nothing more than allow remote control of the Nao robot by listening to commands coming from MQTT.

Connecting the Robots

Like in previous setups I will be using my custom Robot Cloud deployment for this experiment. I will deploy a number of micro-services to a Kubernetes cluster running on AWS. The most important public service is the MQTT message bus, to which the robots send their status (sensors/servos) and from which they receive commands (animations, walk commands, etc.). For more detail on the actual services and their deployment you can check here

The most important part of bridging the gap between the robots is a specific container that receives updates from the servos on the robot arm. Based on events from those servos (move the joystick forward) I want to trigger the Nao robot to start walking. The full code, with a lot more detail, is available in this git repository:
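The core mapping inside that container can be sketched as below. The class name, thresholds and command strings are illustrative placeholders; the real implementation lives in the repository:

```java
public class ArmToNaoBridge {

    /** Maps a joystick-style servo position (0-1023 on an AX-12A) to a walk
     *  command for the Nao: forward past the upper threshold, backward past
     *  the lower one, and no command inside the neutral dead zone. */
    public static String mapServoPosition(int position) {
        if (position > 612) return "walk:forward";
        if (position < 412) return "walk:backward";
        return null; // dead zone around centre (~512): do nothing
    }
}
```

In the real setup a function like this sits in the MQTT subscription callback for the arm's servo state topic, and the returned command is published to the Nao's command topic.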


It’s quite a complex setup, but the conclusion is that by using my Kubernetes-deployed Robot Cloud stack I can use the robot arm to control the Nao robot. If you want to see a bit more, with a live demo, you can check out my Devoxx presentation here:

One thing I could not demo at Devoxx was the interaction with a real Nao robot; I have made a recording of how that would look and also put it on Youtube here:

Deployment using Kubernetes Helm

One of the challenges I face in my development setup is that I want to quickly and often create and deploy my robotics stack. I often want to change and redeploy my entire stack from scratch, because I want to iterate quickly and also reduce my costs as much as possible. My Jenkins jobs have helped a great deal here, and automation is definitely key. However, I have recently started experimenting with Kubernetes Helm, a package manager for Kubernetes, which has made this even easier for me.

Kubernetes Helm

Helm is a package manager that allows you to define a package with all its dependent deployment objects for Kubernetes. With Helm and such a package you can then ask a cluster to install the entire package in one go instead of issuing individual deployment commands. For me this means that instead of asking Kubernetes to install each of my several micro-services individually, I simply ask it to install the entire package/release in one atomic action, which also includes all of the dependent services like the databases and message brokers I use.

Installing Helm

In this blog I want to give a small taste of how nice Helm is. So how do we get started? Well, in order to get started with Helm you should first follow the installation instructions on this page:

In case you are using OSX (like me) it's relatively simple if you are using Homebrew; simply run the following:

brew cask install helm

Once the Helm client is installed, Helm should also be installed in your cluster. In my case I will be testing against a Minikube installation as described in my previous post:

On the command line I have the Kubernetes command line client (kubectl) with my configuration pointing towards my Minikube cluster. The only thing I have to do to install Helm in my cluster is the following:

helm init

This will install a container named Tiller in the cluster; this component understands how to deploy Helm packages (charts) into the cluster. It is in essence the main endpoint the Helm client uses to interrogate the cluster about package deployments and package changes.

Creating the package

Next we need to create something called a Chart, which is the unit of packaging in Helm. For this post I will reduce the set of services I have used in previous posts and only deploy the core services: Cassandra, MQTT and ActiveMQ. The first thing to define is the Chart.yaml, which is the package manifest:

The manifest looks pretty simple; the most important field is the version number, the rest is mainly metadata for indexing:

name: robotics
version: 0.1
description: Robotic automation stack
keywords:
- robotics
- application
maintainers:
- name: Renze de Vries
engine: gotpl

The second thing to define is the deployment objects I want to deploy. For this we create a 'charts' subdirectory which contains the dependent services. In this case I am going to deploy MQTT, ActiveMQ and Cassandra, which are required for my project. For each of these services I create a templates folder that contains the Kubernetes deployment descriptor and service descriptor files, and each service has its own Chart.yaml file as well.

When you have this all ready it looks as follows:
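In outline, the chart directory ends up looking roughly like this (the exact layout is in the repository):

```
robotics/
├── Chart.yaml
└── charts/
    ├── mqtt/
    │   ├── Chart.yaml
    │   └── templates/
    │       ├── deployment.yaml
    │       └── service.yaml
    ├── activemq/
    │   └── ...
    └── cassandra/
        └── ...
```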

I am not going to write out all the files in this blog; if you want to have a look at the full source, the Github repository here contains the full Helm chart structure described in this post:

Packaging a release

Now that the Chart source files have been created, the last thing to do is create the actual package. For this we have to do nothing more than run the following command:

helm package .

This creates a file called robotics-0.1.tgz that we can use to deploy our release. In a future blog post I will talk a bit about Helm repositories and how you can distribute these packages, but for now we keep them on the local file system.

Installing a release

Once we have created the package, the only thing that remains is to install a release into the cluster. This will install all the services that are packaged in the Chart.

In order to install the package we created above, we just have to run the following command:

helm install robotics-0.1.tgz
NAME: washing-tuatar
LAST DEPLOYED: Sun Nov  6 20:42:30 2016
NAMESPACE: default

==> v1/Service
amq   <nodes>   61616/TCP   1s
mqtt   <nodes>   1883/TCP   1s
cassandra-svc   <nodes>   9042/TCP,9160/TCP   1s

==> extensions/Deployment
mqtt        1         1         1            0           1s
amq         1         1         1            0           1s
cassandra   1         1         1            0           1s

We can ask Helm which packages are installed in the cluster by simply asking for a list of installed packages, as follows:

helm list
NAME          	REVISION	UPDATED                 	STATUS  	CHART       
washing-tuatar	1       	Sun Nov  6 20:42:30 2016	DEPLOYED	robotics-0.1

Please note that the name of the installation is randomly generated; in case you want a well-known name you can install using the '--name' switch and specify the name yourself.

In order to delete all the deployed objects I can simply ask Helm to uninstall the release as follows:

helm delete washing-tuatar


I have found that Helm has big potential: it allows me to very quickly define a full software solution composed of many individual deployments. In a future blog post I will talk a bit more about the templating capabilities of Helm and about packaging and distributing your charts. In the end I hope this blog shows everyone that with Helm you can make all of your Kubernetes work even easier than it already is today 🙂

Having fun with Robots and Model trains

The last few blog posts have all been about heavy Docker and Kubernetes stuff. I thought it was time for something lighter, so today I want to blog about my hobby robotics project again. Before I had robots to tinker with I actually used to play around a lot with trying to automate a model train setup. So recently I had the idea: why can't I combine the two and let one of the robots have some fun by playing with a model train 🙂

In this blog post I will use MQTT and the Robot Cloud SDK I have developed to hook up our Nao robot to MQTT, together with the model train.

Needed materials

In order to build an automated train layout I needed a model train setup; I have an existing H0-scale Roco/Fleischmann model train setup. All the trains on this layout are digitised using decoders based on DCC. If you do not know what this means, you can read about digital train systems here:

Hooking up the train

The train system I have is controlled using an ECoS controller, which has a well-defined TCP network protocol I can use to control it. I have written a small library that hooks the controller up to the IoT/robot cloud I have described in previous blog posts. The commands for moving the train are sent to MQTT and then translated into a TCP command the controller can understand.
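Conceptually that translation step is a small mapping from an MQTT command to a line of text on the controller's TCP socket. The sketch below uses an illustrative 'set(id, key[value])' command shape; the real ECoS wire protocol differs and is handled by the library:

```java
public class TrainCommandTranslator {

    /** Renders a generic train command as a single text line for the
     *  controller's TCP socket. The command shape here is illustrative
     *  only, not the actual ECoS protocol syntax. */
    public static String toControllerCommand(String trainId, String key, String value) {
        return String.format("set(%s, %s[%s])", trainId, key, value);
    }
}
```

The library listens on an MQTT topic, extracts the train id and the property being changed (speed, direction, power), renders a line like this and writes it to the controller socket.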

I will have an MQTT broker available somewhere in the cloud (AWS/Kubernetes) that my robots can also connect to, so this will be the glue connecting the robot and the trains.

I don’t really want to bother people too much with the technical details of the train and the code behind it, but if you are interested, I have put the code on Github:

Hooking up the robots

Hooking up the robots is actually quite simple; I have done this before and am using the same setup as before. The details are all available in this blog post:

In this case I will be using our Nao robot and hooking it up to the MQTT bridge. The framework I have developed contains a standard message protocol on top of MQTT. This means the messages are always defined the same way, and all parties adhering to the protocol can exchange states and commands with each other. In this case both the train and the robot use the same message protocol via MQTT, which is why we can hook them up.

In order to make this a bit more entertaining I want to run a small scenario:
1. Nao walks a bit towards the train and the controller
2. Sits down and says something
3. Starts the train
4. Reacts when the train is running

I always like to put a bit of code in a post, so this is the code used to create this scenario:

    private static void runScenario(Robot robot) {
        //Step1: Walk to the train controller (1.2 meters)
        robot.getMotionEngine().walk(WalkDirection.FORWARD, 1.2f);

        //Step2: Let's say something and sit down
        robot.getCapability(SpeechEngine.class).say("Oh is that a model train, let me sit and play with it", "english");

        //Step3: Let's start the train
        sleepUninterruptibly(1, TimeUnit.SECONDS);
        startTrain(robot.getRemoteDriver(), "1005", "forward");

        //Step4: Nao is having lots of fun
        sleepUninterruptibly(5, TimeUnit.SECONDS);
        robot.getCapability(SpeechEngine.class).say("I am having so much fun playing with the train", "english");
    }

    private static void startTrain(RemoteDriver remoteDriver, String trainId, String direction) {
        //Note: the publish/CommandBuilder calls below approximate the SDK's
        //builder API; four commands are sent: take control of the train,
        //power it on, set the direction and set the speed
        remoteDriver.publish(CommandBuilder.create("control")
                .property("trainId", trainId).build());
        remoteDriver.publish(CommandBuilder.create("power")
                .property("trainId", trainId)
                .property("state", "on").build());
        remoteDriver.publish(CommandBuilder.create("direction")
                .property("trainId", trainId)
                .property("direction", direction).build());
        remoteDriver.publish(CommandBuilder.create("speed")
                .property("trainId", trainId)
                .property("speed", "127").build());
    }

What happens here is that the Robot SDK contains a bit of code that can translate Java objects into MQTT messages. Those MQTT messages are then received by the train controller library via the MQTT bridge, which translates them again into TCP messages.

For people that are interested in the code for how I create the scenarios around the Nao robot, it's also available on Github:

End result

So what does the end result look like? Well, videos say more than a thousand words (actually ±750 for this post 🙂 )

This is just to show that you can have a bit of fun integrating very different devices. Using protocols like MQTT can really empower robots and other appliances to be tightly integrated very easily. The glue that I am adding is a standard message format on top of MQTT for the different appliances, and hooking them all up to MQTT. Stay tuned for more posts about my robotics and hobby projects.

Robot interaction with a Nao Robot and a Raspberry Pi Robot

In my recent blog posts I have mainly been working on creating the Raspberry PI based robot. As I mentioned in the first post, the next challenge after getting the Raspberry PI robot to drive is to have it interact with a Nao robot via an IoT solution:

Robot framework

This has been a lot more challenging than I originally anticipated, mainly because I decided to do this properly and build a framework for it. The approach I have taken is to build an SDK with a standard Java robot model. This capabilities model defines the properties of the robot (speech, movement, sonar, etc.) and generalises them across robot types. I implement this framework for both the RPI robot and the Nao robot so that, in the end, they speak the same language.

The benefits of this framework are great, because the idea is to expose and enable it via the cloud using MQTT, an IoT pub-sub message broker. All sensor data and commands for the robot will be sent via this MQTT broker. This also enables another option: I can run just a small piece of software on the robot that exposes its capabilities remotely, and then talk to those capabilities via the MQTT broker.

I have chosen MQTT because it is a very simple protocol that is already heavily adopted in the IoT industry. Next to this, my current home automation system runs via MQTT, so in the future this offers a very nice integration between multiple robots and the home automation 😀

In this post I will describe two scenarios:
1. Having the Nao robot trigger a movement in the Raspberry PI robot
2. Have the Nao robot react to a sensor event on the Raspberry PI robot


In order to do this I have to implement my framework for each of the robots. The robot framework consists of the following high-level design:

Capabilities indicate something the robot can do; these can vary from basic capabilities like motion, to low-level capabilities like servo control and sensor drivers, to higher-level capabilities like speech.

Sensors provide feedback based on things that happen on or around the robot.

The robot has a list of capabilities and sensors, which together form the entire robot entity.
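Boiled down to code, the model above amounts to something like the following. This is a simplified sketch with assumed names; the full SDK is on Github:

```java
import java.util.List;

// Simplified sketch of the robot model: a robot is a named collection of
// capabilities (things it can do) and sensors (things it can perceive).
interface Capability { }

interface Sensor {
    String getName();   // e.g. "distance" or "head"
}

class Robot {
    private final String controllerId;   // main identifier, e.g. "peppy"
    private final List<Capability> capabilities;
    private final List<Sensor> sensors;

    Robot(String controllerId, List<Capability> capabilities, List<Sensor> sensors) {
        this.controllerId = controllerId;
        this.capabilities = capabilities;
        this.sensors = sensors;
    }

    String getControllerId() { return controllerId; }

    /** Looks up a capability by its concrete type, as the event handlers do. */
    <T extends Capability> T getCapability(Class<T> type) {
        for (Capability c : capabilities) {
            if (type.isInstance(c)) return type.cast(c);
        }
        return null;
    }

    List<Sensor> getSensors() { return sensors; }
}
```

The point of the generic `getCapability(Class)` lookup is that calling code can ask for, say, a speech engine without caring whether it is the Nao or the RPI implementation behind it.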

The code

Here is an example of bootstrapping the framework on the Nao robot. This is a Spring Boot application that connects to the robot and installs all the capabilities. All the logic for dealing with the robot's capabilities is contained in the specific capability classes. These capabilities are not aware of the cloud connectivity; that is dealt with by a special driver that listens to all local robot events and forwards them to the MQTT bridge, and listens to all incoming MQTT commands meant for this robot.

Robot robot = new SpringAwareRobotBuilder("peppy", context)
        .sensor(new DistanceSensor("distance", NaoSensorDriver.SONAR_PORT), NaoSensorDriver.class)
        .sensor(new BumperSensor("head", NaoSensorDriver.TOUCH_HEAD), NaoSensorDriver.class)
        .build();

RobotEventHandler eventHandler = new RobotEventHandler(robot);

public static class RobotEventHandler implements EventHandler {
    private Robot robot;

    public RobotEventHandler(Robot robot) {
        this.robot = robot;
    }

    public void receive(DistanceSensorEvent event) {
        LOG.info("Received a distance event: {}", event);

        if(event.getDistance() < 30) {
            LOG.info("Stopping walking motion");
            //stop the walk here (the actual call is omitted in this snippet)
        }
    }

    public void receive(TextualSensorEvent event) {
        LOG.info("Barcode scanned: {}", event.getValue());

        robot.getCapability(NaoSpeechEngine.class).say(event.getValue(), "english");
    }

    public void receive(BumperEvent event) {
        LOG.info("Head was touched on: {}", event.getLabel());
    }
}
The robot has a main identifier called the 'controllerId'; in the above example this is 'peppy'. All sensors also have a name; for example, the BumperSensor defined above has the name 'head'. Each component can generate multiple events, which all carry a label identifying the source of the event. For example, in case the head gets touched, the label indicates where on the head the robot was touched.

Effectively this means any sensor event sent to the cloud will always have three identifiers (controllerId, itemId and label). Commands sent from the MQTT bridge also always have three identifiers, with slightly different meanings (controllerId, itemId and commandType).
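That addressing scheme is easy to capture in code. A small sketch of how the topics can be assembled (the helper names are mine, not the SDK's):

```java
public class TopicBuilder {

    /** State topic for sensor events: /states/<controllerId>/<itemId>/<label> */
    public static String stateTopic(String controllerId, String itemId, String label) {
        return "/states/" + controllerId + "/" + itemId + "/" + label;
    }

    /** Command topic: /command/<controllerId>/<itemId>/<commandType> */
    public static String commandTopic(String controllerId, String itemId, String commandType) {
        return "/command/" + controllerId + "/" + itemId + "/" + commandType;
    }
}
```

So a touch event on the Nao's head ends up on /states/peppy/head/FrontTactilTouched, and a speech command goes to /command/peppy/tts/say.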

Here is an example of a sensor event coming from a robot, forwarded to the MQTT broker. On MQTT each message is sent to a topic you can subscribe to; in the example below, events from the robot are sent to the topic /states/peppy/head/FrontTactilTouched. The message sent to that topic has a body as follows:


For a command sent via the cloud, the message to the MQTT bridge would go to a topic like /command/peppy/tts/say. The message for letting the robot say something would look as follows:

  	"text":"Hello everyone, how are you all doing?",

If you are curious and would like to see what it takes to develop your own robot and connect it to the cloud, you can find all the code on Github. The full code for the Nao implementation can be found here: and I have similar code for the Raspberry Pi robot here:

The Architecture

Now that the robots have been implemented, it is up to the microservices to coordinate them. I have decided to split the architecture in two parts: a public message ring using MQTT, and an internal, secured one using ActiveMQ.

At the border of the cloud there is a message processor that picks up messages from MQTT and forwards them to ActiveMQ. This processor checks whether incoming messages are valid and allowed, and if so forwards them to the secure internal ring using ActiveMQ. This way I have a filter on the edge that protects against attacks and authorizes the sensor data and commands; it is nothing more than a bit of safety, plus scalability for a potential future.
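The filter step on that edge processor can be sketched as a simple allow-list check; the validation rules shown are placeholders for whatever checks the real service performs:

```java
import java.util.Set;

public class EdgeFilter {

    /** Decides whether a message from the public MQTT ring may be forwarded
     *  to the internal ActiveMQ ring, based on its topic. */
    public static boolean isAllowed(String topic, Set<String> knownControllers) {
        String[] parts = topic.split("/");
        // Expected shape: /states/<controllerId>/<itemId>/<label> or
        // /command/<controllerId>/<itemId>/<commandType>, i.e. 5 segments
        // (the leading '/' yields an empty first segment).
        if (parts.length != 5) return false;
        if (!parts[1].equals("states") && !parts[1].equals("command")) return false;
        return knownControllers.contains(parts[2]); // only known robots pass
    }
}
```

Anything that fails the check is simply dropped at the edge and never reaches the internal ring.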

So the architecture in essence looks like this:

Deploying to the cloud

Based on above architecture I have a number of services I need to deploy including their supporting services. I will deploy the entire stack using Docker on a Kubernetes cluster running on Amazon AWS. For more information on how to deploy to Kubernetes please read my previous blog post: <>

Deploying the services

Getting back to deploying my services, this is the list of services / containers to deploy:
* MQTT message broker based on Moquette for public Robot messages
* ActiveMQ message broker for internal system messages
* Edge processor that forwards messages from MQTT to ActiveMQ and the other way around
* Command service for sending robot commands to the cloud
* State service for processing robot state (sensor data)
* Dashboard service for interacting with the robot (future)

If you want to know more on how to deploy a Docker container on a Kubernetes cluster and how to create one, please check out my previous blog on that:

All of the above services can also be found on my GitHub account:

Putting it all together

The goal I had was to connect two robots via a brokering architecture and let them interact. It was a lengthy process and it was not easy, but I did finally manage to pull it off. Once the above services were deployed, all I had to do was start the implemented Spring Boot application on the individual robots. In order to get the interaction going, though, I had to write a third Spring Boot application that receives events from both robots and takes action based on these events. The awesome part about the above architecture is that I can now do that simply by writing remote capability connectors for the robot framework that speak directly to the MQTT bridge.

You can see this in the code below:

Robot max = new SpringAwareRobotBuilder("max", context)
        .remote(RemoteCloudDriver.class, true)
        .build();

Robot pep = new SpringAwareRobotBuilder("peppy", context)
        .remote(RemoteCloudDriver.class, true)
        .build();

MaxRobotEventHandler maxHandler = new MaxRobotEventHandler(pep);
PepRobotEventHandler pepHandler = new PepRobotEventHandler(max);

public static class MaxRobotEventHandler implements EventHandler {
    private static final Logger LOG = LoggerFactory.getLogger(MaxRobotEventHandler.class);

    private final Robot pep;

    // Guard so we react to only one distance alert at a time
    private final AtomicBoolean guard = new AtomicBoolean(true);

    private MaxRobotEventHandler(Robot pep) {
        this.pep = pep;
    }

    @Override
    public void receive(ValueEvent valueEvent) {
        LOG.info("Received a distance: {}", valueEvent.getValue().asString());
        if(valueEvent.getControllerId().equals("max") && valueEvent.getLabel().equals("distance")) {
            int distance = valueEvent.getValue().getValue();
            if(distance < 30) {
                if(guard.compareAndSet(true, false)) {
                    LOG.info("Distance is too small: {}", distance);
                    pep.getCapability(SpeechEngine.class)
                            .say("Max, are you ok, did you hit something?", "english");

                    // Ignore further distance events for ten seconds
                    Uninterruptibles.sleepUninterruptibly(10, TimeUnit.SECONDS);
                    LOG.info("Allowing further distance events");
                    guard.set(true);
                }
            }
        }
    }
}

public static class PepRobotEventHandler implements EventHandler {
    private static final Logger LOG = LoggerFactory.getLogger(PepRobotEventHandler.class);

    private final Robot max;

    public PepRobotEventHandler(Robot max) {
        this.max = max;
    }

    @Override
    public void receive(ValueEvent valueEvent) {
        LOG.info("Received an event for pep: {}", valueEvent);
        if(valueEvent.getControllerId().equals("peppy") && valueEvent.getItemId().equals("head")) {
            if(valueEvent.getValue().asString().equals("true")) {
                // Head was touched: trigger a motion on Max
                // (the exact call was elided in the original listing)
            }
        }
    }
}

What happens in this code is that when we receive an event from Pep (the Nao robot) indicating his head was touched, we trigger a motion to run in the other robot, named Max (the Raspberry Pi robot). The other way around: if we receive a distance event on Max indicating he is about to hit a wall, we execute a remote operation on Pep's speech engine to make him say something.

So how does that look? Well, see for yourself in this YouTube video:


It has been an incredible challenge to get this interaction working, but I finally did manage it, and I am just at the starting point now. The next step is to work out all the capabilities in the framework, including for example video/vision capabilities in both robots. After that, the next big step will be to get both robots to explore the room, try to find each other and then collaborate. More on that in posts to come.