Building a Raspberry PI Robot Car part 2

In the last post we talked about the electronics; in this post I will talk a bit about the 3D design and printing of the components. I recently acquired an Ultimaker 3D printer, and after quite some experimenting I am now able to design my own components for the robot car. In this blog I will walk through the design of the robot car.

Designing the robot

The robot itself is going to consist of 4 main components:
* Casing containing the main electronics (Raspberry PI, power distribution, etc.)
* Casing containing the LiPo battery that allows easy battery replacement
* Frame that supports both the battery and electronics casing
* Wheel / suspension mechanism to hold the wheels

Note: The printer has a maximum print size of roughly 20x20x20 cm, which is the main reason the power casing, the electronics casing and the frame are separate parts.

The software
For the design software I started out with Tinkercad, which is a free online 3D editor. However, I quickly ran into problems once the dimensions became more complex. I then switched to Autodesk Fusion 360, which is a lot better when it comes to designing technical components; as a hobbyist it is possible to get a free one-year license.

Wheel / Suspension

The suspension is a spring-based design that allows some flex in the wheels. Each wheel attaches to a servo, so I have designed a small bracket suited for my Dynamixel servos.

Next I have one beam that has the spring attached to it and two static beams that connect to the servo holder. The static beams ensure linear motion of the servo holder and the spring provides damping. This looks as follows:

For the wheel design I will at some point dedicate a special post, as the wheels have caused me a lot of headaches. For now I will use some standard wheels that fit onto the servos, but ultimately these will become mecanum wheels.

Designing the frame

The beams used for the suspension are actually part of the base frame. There are going to be 4 wheels, meaning 4 beams that are part of the frame. In order to create sufficient surface for the battery and electronics casing I have connected the beams into a longer frame using connecting pieces. I have designed end pieces for both ends of the frame and a middle piece to connect all the beams together. This looks as follows:

Each of the beams has a length of 12 cm, the middle piece is 4 cm and the end pieces are each 2 cm. This gives a total length of 32 cm for the robot car, which is quite long, but with the current suspension design it is needed as the suspension beams cannot really be shortened. In the future I might shorten the design by redesigning the suspension, but for now it's good enough.

Battery & Electronics case

The main battery and electronics case has caused me a lot of problems and many iterations to get right. Every time you print it, there is something that is not entirely right. The trick has been to measure, measure and measure again all the components you want to fit. In the end I drew a sketch on paper roughly showing the placement of the components. Both the battery and electronics case have to fit within a fixed length of 16 cm and a width of 10 cm to fit the base frame. The electronics case contains special accommodation for the Raspberry PI, the UBEC power converter, two Grove sensors and the Dynamixel power board:

Note: The electronics casing will have a separate lid which will allow closing up the electronics compartment and allow easy access.

The battery case is a lot simpler: we just need something to contain the battery. However, one of the challenges is that I do not want a lid here; the battery just needs to be easily replaceable. For this to work there will be two covers on either end of the case that hide the wires but are far enough apart to remove the battery. A note here is that I used rounded edges instead of sharp 90-degree angles to allow for better printing without support. The rounded angles allow for a pretty decent print on my Ultimaker, and it's a lot better than having support material in the case. The case looks as follows:

Assembling the robot

Here is a series of pictures of the various parts in different stages of assembly.

Conclusion

The process of getting to the above design and printed parts has not been easy. Each component has gone through many, many iterations before getting to the above. Even now I still see areas for improvement, but for now I do think it's close to being a functional robot car, which was the goal. In future posts I will start talking a bit about the software and the drive system with the mecanum wheels.

For those wanting to have a look at the 3D parts, I have uploaded them to GitHub. The idea is to provide a proper manual in the future on how to print and assemble everything, with a bill of materials, but for now just have a look:
https://github.com/renarj/robo-max/tree/master/3d-parts

Here is a last picture to close with of the first powerup of the robot car:

Building a Raspberry PI Robot Car part 1

In the last few months I have been very focused on a few topics like humanoid robotics and robot interaction. Recently I have had some extra time and decided to take the next step and really design a robot from scratch. I thought that for my first from-scratch robot it would be handy to start simple and go for a relatively basic wheel-based robot.

I will write a series of blog posts about the robot and how I am taking the next steps to design and hopefully perfect it. In this first post I will discuss the basic concept and show how I am going to power up the servos and control unit.

The robot concept

Let’s start off with setting the design goal of the robot:
Design an open source wheel based robot that has a holonomic drive solution capable of detecting obstacles and recognising objects it encounters

Given this goal, let’s first start off with some basic requirements for the robot and what it needs to consist of. I will design this robot based on principles I have used in previous robot modifications, which has led me to these requirements:
* It will be based on a Raspberry PI with WiFi
* Entire robot should be powered by a single LiPo battery for simplicity
* Distance based sensors for obstacle detection
* Rotatable vision camera
* Holonomic drive system where I can use four individual wheel servos for multi-vector driving
* Arm / Gripper for interaction

Servos

For the servos in this project I will for the moment re-use my trustworthy Dynamixel AX-12A servos, which can be used in continuous rotation mode and therefore act as wheel servos. Given the desire to open source this project and the cost of these servos, they will be replaced in the future, but for the first iteration it is best to stick to what I know.

Powering the solution

One of the important principles for this robot design is that it needs to be powered by a single power source. In previous robots I always used the combination of the Robotis LiPo battery with a separate battery solution for the Raspberry PI. This has caused issues in multiple projects, like balance problems or simply having nowhere to put the batteries.

LiPo Battery
In this robot I will use a single LiPo battery; I have picked a Turnigy NanoTech 3S battery with a 2200 mAh capacity. This should be plenty to power the Raspberry PI and the servos for an estimated 30-60 minutes, and it is easy enough to increase the capacity in the future.
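As a rough sanity check on that estimate (the current figures below are assumptions rather than measurements):

Raspberry PI:   ~1.2 A at 5 V ≈ 6 W  ->  ~0.6 A drawn from the 11.1 V pack (at ~90% UBEC efficiency)
Wheel servos:   ~1.5 A combined at pack voltage
Total draw:     ~2.1 A
Runtime:        2.2 Ah / 2.1 A ≈ 1 hour, so 30-60 minutes under heavier servo load is plausible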

Power conversion
The Raspberry PI accepts 5 volts as input and needs roughly 1-2 amps of current. In order to use a single LiPo battery I need a power converter, as the 3S LiPo has a nominal output voltage of ~11.1 volts and a maximum of ~12.6 volts when fully charged. For this I will use a simple UBEC (Universal Battery Elimination Circuit) from HobbyKing. This UBEC can convert input voltages ranging from 6 to 23 volts into a stable output voltage of 5.1 volts with a maximum current of 3 amps, which is perfect for the Raspberry PI.

For the Dynamixel servos I will use the official Dynamixel power converter, the SMPS2Dynamixel. This can take an input voltage of up to 20 volts, so it can be connected directly to the 3S LiPo. All we need is a small 2.1/5.5 mm DC power jack; I have managed to source one with a screw terminal, but you can find different types.

Power wire harness
In order to connect both the UBEC and the SMPS2Dynamixel to the single LiPo battery I have to create a small power harness that splits the power output from the 3S LiPo to both power converters. For this I have custom built a harness using XT60 power plugs and some cables I have soldered together, with a screw cap on the end to protect the wire ends. It is all topped off with some electrical insulation tape, and looks as follows:

Combining it all
The next step is connecting all the electronics. In order to control the Dynamixel servos I will use my trusted USB2AX, which allows controlling the servos via the Dynamixel serial protocol. What remains is wiring up the power to a servo and the Raspberry PI. What better way to show this than with a picture:

In order to connect the entire solution I have had to hook the UBEC directly onto the 5V/GND header pins of the Raspberry PI. Do this with extreme care: wrong polarity will instantly destroy your Raspberry PI. Make sure to check the pin layout properly (red = 5V, black = GND) and that the wires go to the correct pins on the Raspberry PI header.
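For reference, on the standard 40-pin Raspberry PI header the 5V and GND pins sit next to each other in one corner of the board (always verify against the official pinout for your board model before powering up):

Pin 2  ->  5V   (UBEC red wire)
Pin 4  ->  5V   (alternative 5V pin)
Pin 6  ->  GND  (UBEC black wire)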

Look at this slightly more zoomed-in picture for the polarity of the Raspberry PI / UBEC connection, and click on it for full zoom:

Next Steps

In this post I zoomed in on the big project plan and in particular laid out the power setup. In the next post I will start with the 3D design of the robot and show how I use Fusion 360 to create the design.

Remote Controlling a Nao Robot using a Raspberry Pi Robot

Today I want to take some time to write about the next step I am currently taking to have my self-built Raspberry PI robot and the Nao robot interact with each other in a useful way. You might have already seen some earlier posts, like https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/ about robot interaction, or perhaps the model train one: https://renzedevries.wordpress.com/2016/09/13/having-fun-with-robots-and-model-trains/. However, neither of these posts really demonstrated a practical use-case.

Recently I presented on this topic at the Devoxx conference in Antwerp, where I attempted to demonstrate how to control one robot from another using Kubernetes, Helm and Minikube combined with some IoT glue 🙂 The scenario I demonstrated was to build a robotic arm from my Raspberry PI robot parts and use it to remote control a Nao robot.

Robot arm
In order to have some form of remote control I have created a robot arm which I can use as a sort of joystick. I have built it from the same parts as described in this post (https://renzedevries.wordpress.com/2016/03/31/build-a-raspberry-pi-robot-part-2/). The robot arm is controlled via a Raspberry PI running a bit of Java software that connects it to MQTT, both to send servo position changes and to receive commands from MQTT to execute motions on the arm.
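The real code is part of the robo-bridge project linked further down; purely as an illustration, a minimal sketch of publishing a servo position change with the Eclipse Paho MQTT client could look like this (the broker address, topic layout and JSON payload are assumptions based on the message format described in my earlier robot-interaction post):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class ServoPositionPublisher {
    private final MqttClient client;

    public ServoPositionPublisher(String brokerUri) throws MqttException {
        // Hypothetical broker address, e.g. "tcp://my-mqtt-broker:1883"
        this.client = new MqttClient(brokerUri, "robot-arm");
        this.client.connect();
    }

    // Publish a servo position change as a state event, following the
    // /states/<controllerId>/<itemId>/<label> topic convention
    public void publishPosition(String servoId, int position) throws MqttException {
        String topic = "/states/robot-arm/" + servoId + "/position";
        String payload = "{\"controllerId\":\"robot-arm\",\"channelId\":\"" + servoId
                + "\",\"label\":\"position\",\"value\":" + position + "}";
        client.publish(topic, new MqttMessage(payload.getBytes()));
    }
}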

The robot looks like this:
67d466ea-019e-4216-a1e0-4d577bf7038e

Nao Robot
For the Nao robot I have written a custom Java controller that connects to the remote API of the Nao. This controller software does nothing else than allow remote control of the Nao robot by listening to commands coming from MQTT.

Connecting the Robots

As in previous setups, I will be using my custom Robot Cloud deployment for this experiment. I will be deploying a number of micro-services to a Kubernetes cluster running on AWS. The most important public service is the MQTT message bus, to which the robots send status (sensors/servos) and from which they receive commands (animations, walk commands, etc.). For more detail on the actual services and their deployment you can check here: https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/

The most important part of bridging the gap between the robots is a specific container that receives updates from the servos on the robot arm. Based on events from those servos (e.g. moving the joystick forward) I want to trigger the Nao robot to start walking. The full code with a lot more detail is available in this git repository: https://github.com/renarj/robo-bridge
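Conceptually the bridge boils down to something like the following sketch, written in the style of the SDK code from my earlier robot-interaction post (the servo name, threshold value and controller ids here are illustrative, not the actual values used):

public class JoystickBridgeHandler implements EventHandler {
    private final Robot nao;

    public JoystickBridgeHandler(Robot nao) {
        this.nao = nao;
    }

    @EventSubscribe
    public void receive(ValueEvent event) {
        // Only react to the arm servo that acts as the joystick's forward/backward axis
        if (event.getControllerId().equals("robot-arm") && event.getItemId().equals("pitch")) {
            int position = event.getValue().getValue();
            if (position > 600) {
                // Joystick pushed forward -> make the Nao robot walk forward
                nao.getMotionEngine().walk(WalkDirection.FORWARD, 0.5f);
            }
        }
    }
}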

Conclusion

It’s quite a complex setup, but the conclusion is that by using my Kubernetes-deployed Robot Cloud stack I can use the robot arm to control the Nao robot. If you want to see a bit more, with a live demo, you can check out my Devoxx presentation here:

One thing I could not demo at Devoxx was the interaction with a real Nao robot, so I have made a recording of how that would look and also put it on YouTube here:

Deployment using Kubernetes Helm

One of the challenges I face in my development setup is that I want to quickly and often create and deploy my robotics stack. I often want to change and redeploy my entire stack from scratch, because I want to iterate quickly and also reduce my costs as much as possible. My Jenkins jobs have helped a great deal here, and automation is definitely key. However, I have recently started experimenting with Kubernetes Helm, a package manager for Kubernetes, which has made this even easier for me.

Kubernetes Helm

Helm is a package manager that allows you to define a package with all its dependent deployment objects for Kubernetes. With Helm and this package you can then ask a cluster to install the entire package in one go instead of issuing individual deployment commands. For me this means that instead of asking Kubernetes to install each of my micro-services individually, I simply ask it to install the entire package/release in one atomic action, which also includes all of the dependent services like the databases and message brokers I use.
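To make that concrete, the difference is roughly the one below: a series of individual kubectl commands versus a single install of the packaged chart (the file names are just an example):

# Without Helm: one command per deployment / service descriptor
kubectl create -f mqtt-deployment.yaml
kubectl create -f mqtt-service.yaml
kubectl create -f activemq-deployment.yaml
# ...and so on for every service

# With Helm: one atomic install of the whole package
helm install robotics-0.1.tgz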

Installing Helm

In this blog I want to give a small taste of how nice Helm is. So how do we get started? In order to get started with Helm you should first follow the installation instructions on this page: https://github.com/kubernetes/helm/blob/master/docs/install.md

In case you are using OSX (like me) it's relatively simple if you are using Homebrew; simply run the following cask command:

brew cask install helm

Once the Helm client is installed, Helm should also be installed into your cluster. In my case I will be testing against a Minikube installation as described in my previous post: https://renzedevries.wordpress.com/2016/11/07/using-kubernetes-minikube-for-local-test-deployments/

On the command line I have the Kubernetes command-line client (kubectl) with my configuration pointing towards my Minikube cluster. The only thing I have to do to install Helm in my cluster is the following:

helm init

This will install a container named Tiller in my cluster. This container understands how to deploy the Helm packages (charts) into my cluster, and it is in essence the main endpoint the Helm client uses to ask the cluster about package deployments and package changes.

Creating the package

Next we need to create something called a Chart, which is the unit of packaging in Helm. For this post I will reduce the set of services I have used in previous posts and only deploy the core services: Cassandra, MQTT and ActiveMQ. The first thing to define is the Chart.yaml, which is the package manifest:

Chart.yaml
The manifest looks pretty simple; the most important part is the version number, the rest is mainly metadata for indexing:

name: robotics
version: 0.1
description: Robotic automation stack
keywords:
- robotics
- application
maintainers:
- name: Renze de Vries
engine: gotpl

The second thing to define is the deployment objects I want to deploy. For this we create a 'charts' subdirectory which contains these dependent services. In this case I am going to deploy MQTT, ActiveMQ and Cassandra, which are required for my project. Each of these services gets a templates folder containing the Kubernetes deployment descriptor and service descriptor file, and has its own Chart.yaml file as well.

When you have this all ready it looks as follows:
screen-shot-2016-11-16-at-20-42-12
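In text form the chart layout is roughly the following (file names are indicative; see the repository below for the exact structure):

robotics/
  Chart.yaml
  charts/
    mqtt/
      Chart.yaml
      templates/
        deployment.yaml
        service.yaml
    activemq/
      Chart.yaml
      templates/
    cassandra/
      Chart.yaml
      templates/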

I am not going to write out all the files in this blog; if you want to have a look at the full source, check out the GitHub repository that contains the full Helm chart structure described in this post: https://github.com/renarj/helm-repo

Packaging a release

Now that the Chart source files have been created, the last thing to do is create the actual package. For this we have to do nothing more than run the following command:

helm package .

This will create a file called robotics-0.1.tgz that we can use to deploy our release. In a future blog post I will talk a bit about Helm repositories and how you can distribute these packages, but for now we keep them on the local file system.

Installing a release

Once we have defined the package, the only thing that's remaining is to install a release into the cluster. This will install all the services that are packaged in the Chart.

In order to install the package we have created above we just have to run the following command:

helm install robotics-0.1.tgz
NAME: washing-tuatar
LAST DEPLOYED: Sun Nov  6 20:42:30 2016
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME          CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
amq       10.0.0.59   <nodes>   61616/TCP   1s
mqtt      10.0.0.131   <nodes>   1883/TCP   1s
cassandra-svc   10.0.0.119   <nodes>   9042/TCP,9160/TCP   1s

==> extensions/Deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mqtt      1         1         1            0           1s
amq       1         1         1         0         1s
cassandra   1         1         1         0         1s

We can ask Helm which packages are installed in the cluster by simply requesting the list of installed packages as follows:

helm list
NAME          	REVISION	UPDATED                 	STATUS  	CHART       
washing-tuatar	1       	Sun Nov  6 20:42:30 2016	DEPLOYED	robotics-0.1

Please note that the name of the installation is a randomly generated name; in case you want a well-known name you can install using the '--name' switch and specify the name yourself.
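For example (the release name here is just an arbitrary pick):

helm install --name robotics robotics-0.1.tgz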

In order to delete all the deployed objects I can simply ask Helm to uninstall the release as follows:

helm delete washing-tuatar

Conclusion

I have found that Helm has big potential; it allows me to very quickly define a full software solution composed of many individual deployments. In a future blog post I will talk a bit more about the templating capabilities of Helm and about packaging and distributing your charts. In the end I hope this blog shows everyone that with Helm you can make all of your Kubernetes work even easier than it already is today 🙂

Having fun with Robots and Model trains

The last few blog posts have all been about heavy Docker and Kubernetes stuff. I thought it was time for something lighter, so today I want to blog about my hobby robotics project again. Before I had robots to tinker with I actually used to play around a lot with trying to automate a model train setup. So recently I had the idea: why can't I combine the two and let one of the robots have some fun playing with a model train 🙂

In this blog post I will use MQTT and the Robot Cloud SDK I have developed to hook up our Nao robot and the model train to MQTT.

Needed materials

In order to build an automated train layout I needed a model train setup; I have an already existing H0 Roco/Fleischmann based model train setup. All the trains on this layout are digitised using decoders based on DCC. If you do not know what this means, you can read about digital train systems here: http://www.dccwiki.com/DCC_Tutorial_(Basic_System)

Hooking up the train

The train system is controlled using an ECoS controller, which has a well-defined TCP network protocol I can use to control it. I have written a small library that hooks the controller up to the IoT/robot cloud I have described in previous blog posts. The commands for moving the train are sent to MQTT and then translated into TCP commands the controller can understand.
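The library linked below contains the real protocol handling; as a simplified illustration, the core of it is not much more than writing a text command to the controller's TCP socket whenever an MQTT command comes in (the exact ECoS command syntax and connection details below are indicative only, see the repository for the real implementation):

import java.io.PrintWriter;
import java.net.Socket;

public class EcosConnection implements AutoCloseable {
    private final Socket socket;
    private final PrintWriter writer;

    public EcosConnection(String host, int port) throws Exception {
        // The ECoS controller listens on a plain TCP port on the local network
        this.socket = new Socket(host, port);
        this.writer = new PrintWriter(socket.getOutputStream(), true);
    }

    // Translate an incoming MQTT speed command into an ECoS-style text command,
    // e.g. setSpeed("1005", 127) -> set(1005, speed[127])
    public void setSpeed(String trainId, int speed) {
        writer.println("set(" + trainId + ", speed[" + speed + "])");
    }

    @Override
    public void close() throws Exception {
        writer.close();
        socket.close();
    }
}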

I will have an MQTT broker available somewhere in the cloud (AWS/Kubernetes) that my robots can also connect to, so this will be the glue connecting the robots and trains.

I don’t really want to bother people too much with the technicalities of the train and the code behind it, but if you are interested, the code is on GitHub: https://github.com/renarj/rc-train

Hooking up the robots

Hooking up the robots is actually quite simple; I have done this before and am using the same setup as before. The details are all available in this blog post: https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/

In this case I will be using our Nao robot and hook it up to the MQTT bridge. The framework I have developed contains a standard message protocol on top of MQTT. This means the messages are always defined the same way, and all parties adhering to the protocol can exchange states and commands with each other. In this case both the train and the robot use the same message protocol via MQTT, which is why we can hook them up.

In order to make this a bit more entertaining I want to run a small scenario:
1. Nao walks a bit towards the train and the controller
2. Sits down and says something
3. Starts the train
4. Reacts when the train is running

I always like to put a bit of code in a post, so this code is used to create this scenario:

    private static void runScenario(Robot robot) {
        //Step1: Walk to the train controller (1.2 meters)
        robot.getMotionEngine().prepareWalk();
        robot.getMotionEngine().walk(WalkDirection.FORWARD, 1.2f);

        //Step2: Let's say something and sit down
        robot.getCapability(SpeechEngine.class).say("Oh is that a model train, let me sit and play with it", "english");
        robot.getMotionEngine().goToPosture("Sit");
        
        //Step3: Let's start the train
        sleepUninterruptibly(1, TimeUnit.SECONDS);
        startTrain(robot.getRemoteDriver(), "1005", "forward");
        
        //Step4: Nao is having lots of fun
        sleepUninterruptibly(5, TimeUnit.SECONDS);
        robot.getCapability(SpeechEngine.class).say("I am having so much fun playing with the train", "english");
    }

    private static void startTrain(RemoteDriver remoteDriver, String trainId, String direction) {
        remoteDriver.publish(BasicCommandBuilder.create("ecos")
                .item("train").label("control")
                .property("trainId", trainId).build());
        remoteDriver.publish(BasicCommandBuilder.create("ecos")
                .item("train").label("light")
                .property("trainId", trainId)
                .property("state", "on").build());
        remoteDriver.publish(BasicCommandBuilder.create("ecos")
                .item("train").label("direction")
                .property("trainId", trainId)
                .property("direction", direction).build());
        remoteDriver.publish(BasicCommandBuilder.create("ecos")
                .item("train").label("speed")
                .property("trainId", trainId)
                .property("speed", "127").build());
    }

What happens here is that the Robot SDK contains a bit of code that can translate Java objects into MQTT messages. Those MQTT messages are then received by the train controller library from the MQTT bridge, which translates them again into TCP messages.

For people interested in the code I use to create the scenarios around the Nao robot, it is also available on GitHub: https://github.com/renarj/robo-pep

End result

So how does this end result look? Well, videos say more than a thousand words (actually ±750 for this post 🙂 )

This is just to show that you can have a bit of fun integrating very different devices. Using protocols like MQTT can really empower robots and other appliances to be tightly integrated very easily. The glue I am adding is a standard message format on top of MQTT for the different appliances and the code to hook them up to MQTT. Stay tuned for more posts about my robotics and hobby projects.

Robot interaction with a Nao Robot and a Raspberry Pi Robot

In my recent blog posts I have mainly been working on creating the Raspberry PI based robot. As I mentioned in the first post (https://renzedevries.wordpress.com/2016/02/28/building-a-raspberry-pi-humanoid-robot-in-java-part-1/), the next challenge after getting the Raspberry PI robot to walk is to have it interact with a Nao robot via an IoT solution (https://renzedevries.wordpress.com/2015/11/26/api-design-the-good-bad-and-the-ugly/).

Robot framework

This has been a lot more challenging than I originally anticipated, mainly because I decided to do this properly and build a framework for it. The approach I have taken is to build an SDK with a standard Java robot model. This capabilities model defines the properties of the robot (speech, movement, sonar, etc.) and generalises them for different robot types. For both the RPI robot and the Nao robot I implement this framework so that in the end they speak the same language.

The benefits of this framework are great, because the idea is to expose and enable it via the cloud using MQTT, an IoT pub-sub messaging protocol. All sensor data and commands for the robot will be sent via an MQTT broker. This also enables another option: I can run just a small piece of software on each robot that remotely exposes its capabilities, and talk to these capabilities remotely via the MQTT broker.

I have chosen MQTT because it is a very simple protocol and is already heavily adopted in the IoT industry. Next to this, my current home automation system runs via MQTT, so this offers a very nice future integration between multiple robots and the home automation 😀

In this post I will describe two scenarios:
1. Having the Nao robot trigger a movement in the Raspberry PI robot
2. Have the Nao robot react to a sensor event on the Raspberry PI robot

Implementation

In order to do this I have to implement my framework for each of the robots. The robot framework consists of the following high level design:
robo-sdk

Capability
A capability indicates something the robot can do. These capabilities vary from basic ones like motion, to low-level capabilities like servo control and sensor drivers, to higher-level capabilities like speech.

Sensor
These represent sensors on the robot that provide feedback based on what happens on and around the robot.

Robot
The robot has a list of capabilities and sensors which form the entire robot entity.
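To make this model a bit more concrete, a stripped-down sketch of these three concepts could look like the interfaces below (the real SDK interfaces have more methods; the names here are simplified):

import java.util.List;

// Something the robot can do: motion, speech, servo control, a sensor driver, ...
interface Capability {
    void activate();
}

// A source of feedback on the robot: distance readings, bumper touches, ...
interface Sensor {
    String getName();
}

// The robot itself: the combination of its capabilities and sensors
interface Robot {
    <T extends Capability> T getCapability(Class<T> capabilityType);
    List<Sensor> getSensors();
    void listen(EventHandler handler);
}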

The code

Here is an example of bootstrapping the framework on the Nao robot. This is a Spring Boot application that connects to the robot and installs all the capabilities. All the logic for dealing with the robot's capabilities is contained in the specific capability classes. These capabilities are not aware of the cloud connectivity; that is dealt with by a special driver that listens to all local robot events and forwards them to the MQTT bridge, and that listens to all incoming MQTT commands meant for this robot.

Robot robot = new SpringAwareRobotBuilder("peppy", context)
        .motionEngine(NaoMotionEngine.class)
        .servoDriver(NaoServoDriver.class)
        .capability(NaoSpeechEngine.class)
        .capability(NaoQRScanner.class)
        .sensor(new DistanceSensor("distance", NaoSensorDriver.SONAR_PORT), NaoSensorDriver.class)
        .sensor(new BumperSensor("head", NaoSensorDriver.TOUCH_HEAD), NaoSensorDriver.class)
        .remote(RemoteCloudDriver.class)
        .build();

RobotEventHandler eventHandler = new RobotEventHandler(robot);
robot.listen(eventHandler);

public static class RobotEventHandler implements EventHandler {
    private Robot robot;

    public RobotEventHandler(Robot robot) {
        this.robot = robot;
    }

    @EventSubscribe
    public void receive(DistanceSensorEvent event) {
        LOG.info("Received a distance event: {}", event);

        if(event.getDistance() < 30) {
            LOG.info("Stopping walking motion");
            robot.getMotionEngine().stopWalking();
        }
    }

    @EventSubscribe
    public void receive(TextualSensorEvent event) {
        LOG.info("Barcode scanned: {}", event.getValue());

        robot.getCapability(NaoSpeechEngine.class).say(event.getValue(), "english");
    }

    @EventSubscribe
    public void receive(BumperEvent event) {
        LOG.info("Head was touched on: {}", event.getLabel());
    }
}

The robot has a main identifier called the 'controllerId'; in the above example this is 'peppy'. All sensors also have a name, for example the BumperSensor defined above has the name 'head'. Each component can generate multiple events, which all need to be labeled to identify the source of the event. For example, in case the head gets touched the label will indicate where on the head the robot was touched.

Effectively this means any sensor event sent to the cloud will always have three identifiers (controllerId, itemId and label). Commands sent from the MQTT bridge will also always have three identifiers, with slightly different meaning (controllerId, itemId and commandType).

Here is an example of a sensor event coming from a robot, forwarded to the MQTT broker. On MQTT each message is sent to a topic you can subscribe to; in the example below the event from the robot is sent to the topic /states/peppy/head/FrontTactilTouched. The message sent to that topic has a body like this:

{
   "value":[
      "com.oberasoftware.home.api.impl.types.ValueImpl",
      {
         "value":false,
         "type":"BOOLEAN"
      }
   ],
   "controllerId":"peppy",
   "channelId":"head",
   "label":"FrontTactilTouched"
}

For a command sent via the cloud, the message to the MQTT bridge would go to a topic like /command/peppy/tts/say. The message for letting the robot say something would look like this:

{
  "controllerId":"peppy",
  "itemId":"tts",
  "commandType":"say",
  "properties":{
  	"text":"Hello everyone, how are you all doing?",
    "language":"english"
  }
}

If you are curious and would like to see what it takes to develop your own robot to connect to the cloud you can find all the code on Github. The full code for the Nao implementation can be found here: https://github.com/renarj/robo-pep. I have similar code for the Raspberry Pi robot which can be found here: https://github.com/renarj/robo-max

The Architecture

Now that the robot framework has been implemented, it is up to the microservices to coordinate these robots. I have decided to split the architecture into two parts: a public message ring using MQTT and a secure internal one using ActiveMQ.

At the border of the cloud there is a message processor that picks up messages from MQTT and forwards them to ActiveMQ. This processor checks whether the incoming messages are valid and allowed, and if so forwards them to the secure internal ring on ActiveMQ. This way I have a filter on the edge to protect against attacks and to authorize the sensor data and commands; it is nothing more than a bit of safety and scalability for a potential future.
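To illustrate the idea of that edge processor, a minimal version could subscribe to the public MQTT topics, apply a validation rule and republish on the internal ActiveMQ broker. The sketch below uses the Eclipse Paho and ActiveMQ client libraries; the topic names, broker addresses and the validation rule are assumptions for illustration, and a real implementation needs proper threading and error handling:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.eclipse.paho.client.mqttv3.MqttClient;

public class EdgeProcessor {
    public static void main(String[] args) throws Exception {
        // Internal secured ring: ActiveMQ via JMS
        Connection amqConnection = new ActiveMQConnectionFactory("tcp://activemq:61616").createConnection();
        amqConnection.start();
        Session session = amqConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createTopic("robot.states"));

        // Public ring: subscribe to all robot state topics on MQTT
        MqttClient mqtt = new MqttClient("tcp://mqtt:1883", "edge-processor");
        mqtt.connect();
        mqtt.subscribe("/states/#", (topic, message) -> {
            // Very naive validation: only forward state messages for known robots
            if (topic.startsWith("/states/peppy/") || topic.startsWith("/states/max/")) {
                TextMessage jmsMessage = session.createTextMessage(new String(message.getPayload()));
                jmsMessage.setStringProperty("mqttTopic", topic);
                producer.send(jmsMessage);
            }
        });
    }
}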

So the architecture in essence looks as follows:
robo-arch

Deploying to the cloud

Based on the above architecture I have a number of services to deploy, including their supporting services. I will deploy the entire stack using Docker on a Kubernetes cluster running on Amazon AWS. For more information on how to deploy to Kubernetes, see my previous blog post linked below.

Deploying the services

Getting back to deploying my services, this is the list of services / containers to deploy:
* MQTT message broker based on Moquette for public Robot messages
* ActiveMQ message broker for internal system messages
* Edge processor that forwards messages from MQTT to ActiveMQ and the other way around
* Command service for sending robot commands to the cloud
* State service for processing robot state (sensor data)
* Dashboard service for interacting with the robot (future)

If you want to know more on how to deploy a Docker container on a Kubernetes cluster and how to create one, please check out my previous blog on that: https://renzedevries.wordpress.com/2016/05/31/deploying-a-docker-container-to-kubernetes-on-amazon-aws/

All of the above services can also be found on my github account: https://github.com/renarj

Putting all together

The goal I had was to connect two robots via a brokering architecture and let them interact. It was a lengthy process and it was not easy, but I did finally manage to pull it off. Once the above services were deployed, all I had to do was start the implemented Spring Boot application on the individual robots. In order to get the interaction going, though, I had to write a third Spring Boot application that receives events from both robots and takes action based on those events. The awesome part about the above architecture is that I can now do that simply by writing remote capability connectors for the robot framework that speak directly to the MQTT bridge.

You can see this in the below code:

Robot max = new SpringAwareRobotBuilder("max", context)
        .motionEngine(RemoteMotionEngine.class)
        .remote(RemoteCloudDriver.class, true)
        .build();

Robot pep = new SpringAwareRobotBuilder("peppy", context)
        .motionEngine(RemoteMotionEngine.class)
        .capability(RemoteSpeechEngine.class)
        .remote(RemoteCloudDriver.class, true)
        .build();

MaxRobotEventHandler maxHandler = new MaxRobotEventHandler(pep);
max.listen(maxHandler);

PepRobotEventHandler pepHandler = new PepRobotEventHandler(max);
pep.listen(pepHandler);

public static class MaxRobotEventHandler implements EventHandler {
    private Robot pep;

    private AtomicBoolean guard = new AtomicBoolean(true);

    private MaxRobotEventHandler(Robot pep) {
        this.pep = pep;
    }

    @EventSubscribe
    public void receive(ValueEvent valueEvent) {
        LOG.info("Received a distance: {}", valueEvent.getValue().asString());
        if(valueEvent.getControllerId().equals("max") && valueEvent.getLabel().equals("distance")) {
            int distance = valueEvent.getValue().getValue();
            if(distance < 30) {
                if(guard.compareAndSet(true, false)) {
                    LOG.info("Distance is too small: {}", distance);
                    pep.getCapability(SpeechEngine.class).say("Max, are you ok, did you hit something?", "english");

                    Uninterruptibles.sleepUninterruptibly(10, TimeUnit.SECONDS);
                    guard.set(true);
                    LOG.info("Allowing further distance events");
                }
            }
        }
    }
}

public static class PepRobotEventHandler implements EventHandler {
    private Robot max;

    public PepRobotEventHandler(Robot max) {
        this.max = max;
    }

    @EventSubscribe
    public void receive(ValueEvent valueEvent) {
        LOG.info("Received an event for pep: {}", valueEvent);
        if(valueEvent.getControllerId().equals("peppy") && valueEvent.getItemId().equals("head")) {
            if(valueEvent.getValue().asString().equals("true")) {

                max.getMotionEngine().runMotion("Bravo");
            }
        }
    }
}

What happens in this code is that when we receive an event from Pep (the Nao robot) indicating his head was touched, we trigger a motion on the other robot, named Max (the Raspberry Pi robot). The other way around, if we receive a distance event on Max indicating he is about to hit a wall, we execute a remote operation on the speech engine of Pep to say something.

So how does that look? Well, see for yourself in this YouTube video:

Conclusion

It has been an incredible challenge to get this interaction working. But I finally did manage it, and I am just at the starting point now. The next step is to work out all the capabilities in the framework, including for example video/vision capabilities in both robots. After that the next big step will be to get both robots to explore the room, try to find each other and then collaborate. More on that in future posts.

Deploying a Docker container to Kubernetes on Amazon AWS

As you can tell from recent posts I am still very busy with my robots; the goal I am working on now is a multiple-robot interaction solution. The approach I am taking is to utilize the cloud as a means of brokering between the robots. The architecture I have in mind is based on a reactive, event-driven framework that the robots will use for communication.

One of the foundations for this mechanism is to utilise the cloud and an event-driven architecture. In another post I will give some more detail about how the robots will actually interact, but in this post I want to give some information about the specifics of my cloud architecture and how to deploy such a cluster of microservices using Docker and Kubernetes.

Deploying to the cloud

The robot framework will have a significant number of microservices and dependent supporting services that I need to deploy. With this in mind I have chosen to go for a Docker / microservices based architecture where, in the future, I can easily extend the number of services to be deployed and simply grow the cluster if needed.

I have quite some experience with Docker at this point and have really grown fond of Kubernetes as an orchestrator, so that will be my pick as it is so easy to administer in the cloud. There are other orchestrators, but they seem to be earlier in their development, although the AWS ECS service does seem to come close.

Note to Google: I have played with Google Cloud and Kubernetes before and it's actually a perfect fit. It is super easy to deploy containers and run your Kubernetes cluster there. However, Google does not allow individual developers (like me) without their own company to have a Cloud account in the EU because of VAT rules. So unfortunately for this exercise I will have to default to Amazon AWS. I hope Google reads this so they can fix it quickly!

Creating a Kubernetes cluster on AWS

In this next part I will demo how to deploy the most important service, the MQTT broker, to a Kubernetes cluster.

In order to get started I first need to create an actual Kubernetes cluster on AWS. This is relatively simple; I just followed the steps on this page: http://kubernetes.io/docs/getting-started-guides/aws/

Prerequisite: Amazon AWS CLI is installed and configured http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

I used the following variables to create the cluster, which uses m3.medium instances:

export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=eu-west-1c
export NUM_NODES=2
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=robot-kubernetes-artifacts
export INSTANCE_PREFIX=robo

After that it is simply a case of running this:

wget -q -O - https://get.k8s.io | bash

Once you have run the above command, a directory called ‘kubernetes’ will have been created, and from there you can control the cluster using these commands.
For creating the cluster (only needed again if you already ran the download command once before):

cluster/kube-up.sh

For stopping the cluster:

cluster/kube-down.sh

After the above commands have been run your cluster should have been created. For Kubernetes you still need one small tool, the command-line client called ‘kubectl’. The kubernetes folder that was downloaded ships with the client; you can simply add it to your path as follows:

# OS X
export PATH=/Users/rdevries/kubernetes/platforms/darwin/amd64:$PATH

# Linux
export PATH=/Users/rdevries/kubernetes/platforms/linux/amd64:$PATH

You should be able to check your cluster is up and running with the following:

kubectl get nodes
NAME                                         STATUS    AGE
ip-172-20-0-115.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-116.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-117.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-118.eu-west-1.compute.internal   Ready     4m

Using Amazon ECR

In order to get the containers to deploy on Kubernetes I have to make them available to Kubernetes and Docker. I could use Docker Hub to distribute my containers, and this is actually fine for MQTT, which I have made available on Docker Hub (https://hub.docker.com/r/renarj/mqtt/).

However, some of the future containers are not really meant for public availability, so instead I am opting to upload the MQTT container to the new Amazon ECR container registry, just to test how simple that is.

We can simply start using the AWS CLI and create a Container repository for MQTT:
aws ecr create-repository --repository-name mqtt

Next we can build the container from this git repo: https://github.com/renarj/moquette-spring-docker

git clone https://github.com/renarj/moquette-spring-docker.git
cd moquette-spring-docker
mvn clean package docker:build

Now we need to upload the container to AWS. These instructions will vary slightly for everyone due to the account id being used. First we need to tag the container, then log in to AWS ECR, and last push the container.
1. Tag container docker tag mqtt/moquette-spring:latest 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest (Replace 1111 with your AWS account id)
2. Log in to AWS with the command returned by aws ecr get-login --region eu-west-1
3. Push container: docker push 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest

This should make the container available in the Kubernetes cluster.

Deploying the containers
In order to deploy the containers I have to create two small YAML files: one for the container to be deployed and one for the MQTT service.

The first part is to tell Kubernetes how to deploy the actual container, we can do this with this simple Yaml file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: mqtt
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mqtt
    spec:
      containers:
      - name: mqtt
        image: 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest
        ports:
        - containerPort: 1883
        imagePullPolicy: Always

If you look at the above sample there are two important aspects, one of which is the image location. This is the same as my ECR repository URL. If you do not know the URL you can always execute the AWS CLI command aws ecr describe-repositories, which will return a list of all available repositories and URLs.

Note: I have noticed that older versions of Kubernetes show some permission errors when using ECR as the image repository. I am unsure exactly why, but later versions of Kubernetes (1.2.4+) seem to work properly.

Now that we have described how the container needs to be deployed, we want to tell Kubernetes how it can be made available. I will load balance the MQTT service using an ELB; the nice part about Kubernetes is that it can arrange all of that for me.

In order to tell Kubernetes and as a result AWS to create a loadbalancer we define a Kubernetes service with type ‘LoadBalancer’:

apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: LoadBalancer
  ports:
  - port: 1883
    targetPort: 1883
    protocol: TCP
    name: mqtt
  selector:
    app: mqtt

With this we should have a load balancer available after the service is created. In order to find out the public address of the service you can describe the service in Kubernetes and find the endpoint like this:

kubectl describe svc/mqtt
Name:           mqtt
Namespace:      default
Labels:         app=mqtt
Selector:       app=mqtt
Type:           LoadBalancer
IP:         10.0.225.68
LoadBalancer Ingress:   a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com
Port:           mqtt    1883/TCP
NodePort:       mqtt    30531/TCP
Endpoints:      10.244.1.5:1883
Session Affinity:   None
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  2s        2s      1   {service-controller }           Normal      CreatingLoadBalancer    Creating load balancer
  0s        0s      1   {service-controller }           Normal      CreatedLoadBalancer Created load balancer

Please note the LoadBalancer Ingress address, this is the publicly available LB for the MQTT service.

Testing MQTT on Kubernetes

Well, now that our cluster is deployed and we have a load-balanced MQTT container running, how can we give it a try? For this I have used a simple Node.js package called mqtt-cli. Simply run npm install mqtt-cli -g to install the package.

After this you can send a test message to our load-balanced MQTT container using the following command, with -w at the end to watch the topic for incoming messages:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "test message" -w

Now let’s try, in a new console window, to send a message again but this time without the ‘-w’ at the end:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "Message from another window"

If everything worked well the message should now show up in the other window, and voilà, we have a working Kubernetes cluster with a load-balanced MQTT broker on it 🙂

Next steps

I will be writing a blog post soon on how all of the above has helped me connect my robots to the cloud. I hope this info helps some people and shows that Kubernetes and Docker are really not at all scary to use in practice 🙂