Create your own Crypto dashboard with Kotlin deployed on Kubernetes & AWS

It has been a long while since I posted, but recently I have been messing around with crypto and decided I needed a way to quickly gain insights into several statistics around cryptocurrencies. Specifically, I wanted to see different pairs (BTC vs EUR and BTC vs ETH) on a single page. Each of these pairs can easily be found on the web, but I could not find a single dashboard combining them.

Hence I was wondering: could I not run a Grafana dashboard and publish some of the output from these exchanges to it? Seems easy enough, so let's get to it, and hopefully you can also get this up and running quickly!


When I do these projects I always love to challenge myself a bit, use some technologies I am not familiar with yet, and learn more about them; hence I picked Kotlin as my main language. I first drafted a high-level concept architecture of what I wanted to achieve:

Rationales for the components:
1. I am going with Kafka because it later allows me to hook in additional consumers with different responsibilities and keep a clean separation of concerns.
2. For this dashboard I am using Kotlin to build the Kraken scraper and the consumer that pushes the data into InfluxDB. As mentioned I love a new challenge; I have read a lot about Kotlin but never used it in a production use-case, so this is as good a moment as ever to try it.
3. For storing the data for the dashboard I decided to use an old favorite of mine, InfluxDB, which as a time-series DB is perfectly suited to display the crypto values over time and lets me run some queries on the data for the dashboard.
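To make the InfluxDB choice concrete: every ticker reading ends up as a time-series point, i.e. a measurement with tags and fields at a timestamp. Here is a minimal Kotlin sketch of InfluxDB's line protocol; the measurement and field names are hypothetical, not necessarily what my code uses:

```kotlin
// Builds an InfluxDB line-protocol string: measurement,tag=... field=... timestamp
fun toLineProtocol(
    measurement: String,
    tags: Map<String, String>,
    fields: Map<String, Double>,
    timestampNanos: Long
): String {
    val tagPart = tags.entries.joinToString("") { ",${it.key}=${it.value}" }
    val fieldPart = fields.entries.joinToString(",") { "${it.key}=${it.value}" }
    return "$measurement$tagPart $fieldPart $timestampNanos"
}

fun main() {
    // One BTC/EUR ticker reading as a point in a hypothetical "ticker" measurement
    println(toLineProtocol("ticker", mapOf("pair" to "XBTEUR"), mapOf("price" to 8000.5), 1514764800000000000))
}
```

Tagging by pair is what makes the Grafana queries easy later: each pair becomes its own series.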

Building the services

All of the services were written in Kotlin; this is actually where I spent most of the time, as this was my first serious attempt at writing something in Kotlin that needs to be more production level and can actually be deployed. I used Maven for building and Spotify's Docker Maven plugin for creating dockerized processes, nothing too fancy.

Framework-wise I used Fuel as the HTTP client to scrape the Kraken API and Clikt for command line parsing, which lets me parameterise all the different configuration settings. This allows me to use Kubernetes to set all the appropriate configuration values and inject them either via startup arguments or environment variables.
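The resolution order Clikt gives me for free can be illustrated with a small stdlib-only sketch (this is not the real Clikt API, just the precedence: a command line flag beats the matching environment variable, which beats the default):

```kotlin
// Resolve a setting: explicit CLI value wins, then the matching env var, then the default.
// e.g. "kafka.host" falls back to the KAFKA_HOST environment variable.
fun resolveSetting(
    name: String,
    cliArgs: Map<String, String>,
    env: Map<String, String>,
    default: String
): String {
    val envVar = name.uppercase().replace('.', '_')
    return cliArgs[name] ?: env[envVar] ?: default
}

fun main() {
    val env = mapOf("KAFKA_HOST" to "kafka-public-svc")
    println(resolveSetting("kafka.host", emptyMap(), env, "localhost"))
    println(resolveSetting("kafka.host", mapOf("kafka.host" to "broker-1"), env, "localhost"))
}
```

In the Kubernetes descriptors below I use the environment-variable route, since that maps naturally onto the `env` section of a container spec.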

The code of this project is freely available here:

Deploying to AWS Kubernetes

I am using a Kubernetes cluster on AWS that is deployed using Kops; you can read about this in my previous blog post.

The reason I am using Kops instead of AWS's managed Kubernetes service is mainly convenience: Kops gets the cluster up and running quickly for me and I can tweak all the settings easily. The AWS offering looks like it needs more tinkering, hence I decided to skip it for now.


The deployment is quite straightforward: I create a number of services for the internal components (InfluxDB, ZooKeeper and Kafka). For Kafka I needed a bit of custom configuration, especially in the listeners area, to get this working. I am not an expert in this domain, and given my setup only uses a single broker with a single partition, I did not find it relevant to spend a lot of time on this, as scaling out is probably not needed for this simple use-case.
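For reference, the kind of listener split I mean looks roughly like this in Kafka broker terms; treat this as a sketch, not my exact config, and the service names (`kafka-service`, `kafka-public-svc`) are taken from the service descriptors in the repository:

```properties
# Internal listener for in-cluster clients, external one exposed via the NodePort service
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:30092
advertised.listeners=INTERNAL://kafka-service:9092,EXTERNAL://kafka-public-svc:30092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

The key point is that the advertised listeners must resolve from wherever the client runs, which is exactly what trips people up with Kafka on Kubernetes.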

In case you want to deploy this yourself, please clone this Github Repo:

For deploying the supporting services (ZooKeeper, Kafka, InfluxDB and Grafana), use the following commands, preferably in this order:

kubectl apply -f k8-deployment/zookeeper-service.yaml
kubectl apply -f k8-deployment/zookeeper-deployment.yaml
kubectl apply -f k8-deployment/kafka-service.yaml 
kubectl apply -f k8-deployment/kafka-public-service.yaml 
kubectl apply -f k8-deployment/kafka-deployment.yaml 
kubectl apply -f k8-deployment/influxdb-config.yaml
kubectl apply -f k8-deployment/influxdb-service.yaml
kubectl apply -f k8-deployment/influxdb-deployment.yaml
kubectl apply -f k8-deployment/grafana-service.yaml
kubectl apply -f k8-deployment/grafana-deployment.yaml

The above should deploy all the supporting services for our crypto dashboard, but now we need to deploy the scraper and the consumer-to-InfluxDB processes. Let me put the descriptors here inline so it is visible what configuration options there are:

The Kraken scraper has this deployment descriptor:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: crypto-producer-kraken
  name: crypto-producer-kraken
spec:
  replicas: 1
  selector:
    matchLabels:
      app: crypto-producer-kraken
  template:
    metadata:
      labels:
        app: crypto-producer-kraken
    spec:
      containers:
        - env:
            - name: KAFKA_PRODUCER_TOPIC
              value: "crypto-topic"
            - name: KAFKA_HOST
              value: "kafka-public-svc"
            - name: KAFKA_PORT
              value: "30092"
          image: renarj/kraken-producer:1.0.004
          imagePullPolicy: IfNotPresent
          name: crypto-producer-kraken

The consumer has the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: crypto-subscriber
  name: crypto-subscriber
spec:
  replicas: 1
  selector:
    matchLabels:
      app: crypto-subscriber
  template:
    metadata:
      labels:
        app: crypto-subscriber
    spec:
      containers:
        - env:
            - name: KAFKA_CONSUMER_TOPIC
              value: "crypto-topic"
            - name: KAFKA_HOST
              value: "kafka-public-svc"
            - name: KAFKA_PORT
              value: "30092"
            - name: KAFKA_GROUP
              value: "crypto-tickers"
            - name: INFLUXDB_HOST
              value: "influxdb-service"
            - name: INFLUXDB_PORT
              value: "8086"
            - name: INFLUXDB_NAME
              value: "cryptoseries"
          image: renarj/crypto-subscriber:1.0.003
          imagePullPolicy: IfNotPresent
          name: crypto-subscriber

The most important configuration is the Kafka connectivity, i.e. which topic to produce to/consume from, and, for the consumer, the location of the InfluxDB service. In this case both connect to the internal public service on port 30092, which is a NodePort service. This only works in this setup because the cluster has a single Kafka broker and no need to scale out.

All the deployment descriptors and the services needed can be found in this repository and have been tested both on a Minikube setup and an AWS K8 Kops based Cluster:

A note of warning: all of the deployment descriptors are available but not hardened in any way; I recommend running these in a shielded Kubernetes cluster.

End result

After the deployment was done I only needed to configure the Grafana dashboard; simply pointing it to the influxdb-service and configuring a few visualisations gave me this result:

I really hope people can use this to fine-tune and run their own crypto dashboard; all the code and deployment descriptors are available:

Dynamixel XL430-W250T Servo on Raspberry PI using Java

As I am currently building multiple robots at once (a hexapod, a rover and some other projects you can read about on my blog) I was running out of servos and wanted to buy a few more. At the end of 2017 Robotis released a new series of servos which I believe is intended to replace the trusted AX/MX series I have been using before.

I thought it was a good idea to acquire a few of the new servos to use in one of my new robot designs. I targeted the XL430-W250T servos, which should be the equivalent of the AX12A servos from the previous generation. I bought six servos in total and tried to connect and control them using the same setup as I had before. In this post I want to detail a few of the challenges I faced in controlling these servos.

Powering up

The first simple challenge was to physically connect the servos. With the AX12 servos I used a USB2AX stick to control the servos via the Dynamixel 1.0 protocol. This worked great on the old servos, and I was hoping everything was backwards compatible, also in terms of powering the servos. It turns out Robotis has made some changes that make it difficult to swap the servos one-for-one if you are upgrading from the AX12.

The main challenge is simply the connector: it is still TTL-based hardware, but the connector has been swapped for a more universal JST plug. This does mean all my existing accessories, like hubs and the power supply, are useless. I am sure that once these servos become more mainstream this will change. I solved this by purchasing converter cables that have the old connectors on one end and the new JST plugs on the other.

After plugging the servos into my old SMPS power board and the hubs, the servos came to life. I used the Windows-based Robotis servo manager on first initialisation to set a unique servo ID. They worked mostly like the old servos, however some parameters have moved and have wider address spaces (4 bytes vs 1 or 2 bytes on the old servos).

Dynamixel 2.0 Protocol

This brings me to the Dynamixel 2.0 protocol. This protocol is a newer iteration, but based on the same principles as the old protocol. You send packets directed either to a specific servo (identified by its ID) or broadcast to all servos. For each packet you send you get a response packet from that one servo, for example when requesting the position, temperature and other properties.
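To give an idea of what such a packet looks like on the wire, here is a sketch in Kotlin of a Protocol 2.0 ping packet: a fixed header (0xFF 0xFF 0xFD 0x00), the servo ID, a 16-bit length, the instruction byte and a CRC-16 over everything before it. This is my reading of the Robotis e-manual, not code from my library:

```kotlin
// CRC-16 used by Dynamixel Protocol 2.0 (polynomial 0x8005, init 0, no reflection)
fun crc16(data: ByteArray): Int {
    var crc = 0
    for (b in data) {
        crc = crc xor ((b.toInt() and 0xFF) shl 8)
        repeat(8) {
            crc = if (crc and 0x8000 != 0) ((crc shl 1) xor 0x8005) and 0xFFFF
                  else (crc shl 1) and 0xFFFF
        }
    }
    return crc
}

// Instruction packet: header, id, length (= instruction + params + 2 CRC bytes), instruction, params, CRC
fun buildPacket(id: Int, instruction: Int, params: ByteArray = byteArrayOf()): ByteArray {
    val length = params.size + 3
    val body = byteArrayOf(
        0xFF.toByte(), 0xFF.toByte(), 0xFD.toByte(), 0x00,
        id.toByte(), (length and 0xFF).toByte(), (length shr 8).toByte(), instruction.toByte()
    ) + params
    val crc = crc16(body)
    return body + byteArrayOf((crc and 0xFF).toByte(), (crc shr 8).toByte())
}

fun main() {
    // Ping (instruction 0x01) the servo with ID 1
    println(buildPacket(1, 0x01).joinToString(" ") { "%02X".format(it.toInt() and 0xFF) })
}
```

Note the little-endian length and CRC fields; the 4-byte register values mentioned above follow the same byte order.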

Java on Raspberry PI

I have written a small library I use to control the servos from my Raspberry PI in Java, which also includes a dashboard for using and controlling the servos. For those interested, it can be found on GitHub here:

If you want to start the library, please make sure you have Maven and Java 8 or higher installed on your Raspberry PI or other device (Mac is also tested and working; I assume Windows works as well).

From the root of the git repository fire up the following command:

mvn -f dynamixel-web/pom.xml spring-boot:run \
  -Ddynamixel.port=<your com port> -Ddynamixel.baudrate=57600 -Dprotocol.v2.enabled=true

For dynamixel.port please enter the com port connecting your USB2AX or other USB controller to your Dynamixels. In case you are using the new Dynamixel X series (or XL320), please set the protocol.v2.enabled startup flag to 'true' and set the appropriate baudrate (the default is 57600; Dynamixels out of the box are set to 1 Mbit, but the serial library I use cannot handle this on a Mac/Raspberry PI).

Once the software is started it should show the following line: Robot construction finished

After starting you can access the servo control dashboard on the following URL:

Note: Please replace localhost with the correct IP if running remotely

The Dashboard should show roughly the following:


I hope the above library will help people who want to use the new Dynamixel X servos from Java. It took a bit of exploring to get it to work, but all in all it is very familiar if you are already using the older Dynamixel servos. As always I love using the Dynamixel servos, and seeing this new series become available with some additional control parameters is really great news. I hope to use these in a lot more upcoming experiments.

Feel free to use / share / fork / copy my library for controlling these servos here:

Building a Hexapod robot part 1: Design & build

It has been a while since I have written; this coincides with me moving to a new house, and my previous project was simply done. I have been looking for a new challenge and have been dabbling and experimenting a bit. I had already been thinking for a while about making a hexapod robot, and finally I pulled the trigger and decided to start building my own Raspberry PI based hexapod design.

Based on past experience I wanted to simplify my design considerably, and drew inspiration from some existing hexapods like the Phoenix hexapod from Lynxmotion.

Let’s start with the basics: what is the bill of materials for the base electronics components?

  • Hobbyking 20A SBEC (for powering the PI)
  • Raspberry PI 3
  • GrovePI+ board for sensor components
  • GrovePI Compass & Gyro
  • Turnigy Nano Lipo 3s 2200mAh
  • 18x Dynamixel AX-12A servos
  • USB2AX usb to serial adapter for Dynamixel TTL servos
  • SMPS2Dynamixel (provides power to servos)
  • 2x Dynamixel AX/MX hubs
  • 100 M2.5 Bolts of length 8mm + 100 M2.5 Nuts
  • 100 M3 Bolts of length 12mm + 100 Locking Nuts M3
  • 2 Spools of Filament (ABS)

Designing the Hexapod

Compared to the rover robot I designed before, I am now trying to keep things as simple as possible, as overcomplicating things always made everything a lot harder. Looking a lot at the Lynxmotion Phoenix hexapod, I based my design on it: a simple base frame that the legs attach to, with legs that have three degrees of freedom.

The frame will be a simple one that allows me to connect all six legs of the hexapod and nothing more. It will consist of 7 pieces, which are the main beams that connect the legs.

In order to give the frame some structure I will use the servos as connecting pieces, so they are in essence part of the structure. For this I designed brackets holding the servos which are also used to give the frame its shape. To finish the frame I designed two bottom plates holding the Raspberry PI and power systems, and on top of that two plates to close the robot up.

This is how the base frame design looks in Fusion 360:


For the legs themselves I decided to stick to roughly the same design as some of the Robotis frame pieces for the AX12 servos, as they worked well in the past. I replicated their design in Fusion 360 and modified it slightly so I can 3D print them and they will have enough strength once printed.

Each leg consists of a servo attached to the frame (coxa), which is connected using two brackets to the middle servo (femur), which in turn connects with two small frame pieces to the servo controlling the foot (tibia). On the foot we have a special angled frame piece with a small nub on the bottom for grip.

This looks as follows in Fusion 360:

This time I also made sure to fully assemble the robot prototype in Fusion 360 before actually printing it; this makes mistakes that cause a reprint less likely, which, given the sheer number of parts, will really save me in the long run. After quite some tweaking the full design looks like this in Fusion 360:
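To make the coxa/femur/tibia chain concrete, the foot position of such a three-degrees-of-freedom leg can be sketched with simple forward kinematics. The segment lengths below are made-up placeholders, not my actual printed dimensions:

```kotlin
import kotlin.math.cos
import kotlin.math.sin

data class Point3(val x: Double, val y: Double, val z: Double)

// Foot position for a coxa/femur/tibia leg; angles in radians, zero = leg stretched out horizontally.
// Segment lengths l1..l3 (cm) are placeholder values.
fun footPosition(
    coxa: Double, femur: Double, tibia: Double,
    l1: Double = 5.0, l2: Double = 7.0, l3: Double = 11.0
): Point3 {
    // Horizontal reach in the plane the coxa rotates, plus the vertical offset
    val reach = l1 + l2 * cos(femur) + l3 * cos(femur + tibia)
    val height = l2 * sin(femur) + l3 * sin(femur + tibia)
    return Point3(reach * cos(coxa), reach * sin(coxa), height)
}

fun main() {
    // Fully stretched leg: the foot sits l1 + l2 + l3 away from the body
    println(footPosition(0.0, 0.0, 0.0))
}
```

Inverting this mapping per leg is essentially what the walking gait in the next post has to do.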

Constructing the robot

My intention is to share the STL files publicly along with some building steps, so hopefully other people can repeat this and build the robot too. So far I have not done that, hence all I can really share are some assembly pictures:

Body under construction:

Legs being assembled:

Robot completed:


For the electronics I am sticking to some of my previous work on powering the Dynamixel servos and the Raspberry PI. Here is a previous blog post on how the electronics are wired together:

What is next?

This was the first part of a series of posts on building simpler robots based on previous learnings. With the base design and build of the hexapod finished, the next step is making it walk; I will write this up as soon as possible and hopefully share the design files as well.

Going from the design to the built version of this robot took me only 3-4 weeks, where previous robots took many months to get operational. I strongly believe keeping things simple has enabled this, and I will try to stick to this principle for future robots as well. I might in the future redo my previous rover robot based on the same simpler frame design, keeping things as simple as possible, which might create more effective robots going forward.

I am planning to share the design of the 3D parts on Github soon, keep your eyes out for a link to the repository in the coming weeks.

Building a Raspberry PI Robot Car part 3: Wheel design

It’s been a while since my last post on the design of my robot car. The reason is that I moved to a new house and also got quite stuck on the wheel design for my robot car; it took many iterations to get the design right. This post is thus all about robot wheels and the design of an awesome drive system for the robot 🙂

Original Design

In the first iteration I started out with a more traditional design using a simple rubber wheel. This wheel has a suspension system with spring-based actuation based on a simple triangular geometry.

This looked as follows:

There are quite some obvious challenges with this design, one being that with 4 motors and normal friction wheels you cannot really drive this setup without some form of active steering. My design did not include steering, and that was also not the intention; the wheel setup was there to validate the suspension design. Although the suspension was weak, it was good enough for such simple wheels, and therefore I kept the basic design for the moment.

Wheel Design

The design goal was to create Mecanum wheels that would allow an omnidirectional drive system. I had looked into buying these wheels online, but seeing prices of 130-150 dollars for a set of 4 wheels, I thought: why not print them myself? Initially I looked through existing designs, settled on something I found on a blog / Thingiverse, and modified it to fit my Dynamixel servos.

This resulted in the following wheel setup:

Although it worked more or less, the main challenge was that I never got the rollers to roll freely enough. Together with a lack of friction, this made the drive quality quite poor and unreliable when driving in a 'strafing' mode, for example, as is visible in this movie:

Mecanum Wheel v2
Based on this I decided to take a fully different approach and design the full wheel myself. I created a properly angled roller setup and used bearings for all the rollers. It took many iterations and improvement cycles to get this right; there was always some challenge in getting a perfectly frictionless setup. It took me as many as 20 prototypes before I settled on a final design.

Next to this, the rollers were printed in a TPU-based filament so they are more rubber-like, as you would have in real mecanum wheels. This significantly improved the roller quality and resulted in the following single wheel setup:

Improved suspension

After finishing the mecanum wheels I found out the current suspension was simply too light for the heavier wheels and there was too much flex in the system. Inspired by a Makeblock robot, I started a new design and ended up with the following:

Driving the Rover

The result is quite good: the robot strafes and rotates on its axis perfectly.
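The strafing and axis rotation come straight from the mecanum wheel kinematics: each wheel's speed is a signed mix of the desired forward, sideways and rotational velocity. A sketch using one common sign convention (the real mapping depends on how the rollers are mounted on each wheel):

```kotlin
// Wheel speeds for a mecanum drive, order: front-left, front-right, rear-left, rear-right.
// vx = forward, vy = strafe right, rot = clockwise rotation; signs depend on roller orientation.
fun mecanumWheelSpeeds(vx: Double, vy: Double, rot: Double): DoubleArray =
    doubleArrayOf(
        vx + vy + rot,  // front-left
        vx - vy - rot,  // front-right
        vx - vy + rot,  // rear-left
        vx + vy - rot   // rear-right
    )

fun main() {
    println(mecanumWheelSpeeds(1.0, 0.0, 0.0).toList())  // straight ahead: all wheels equal
    println(mecanumWheelSpeeds(0.0, 1.0, 0.0).toList())  // pure strafe: diagonal pairs oppose
}
```

This is also why freely rolling rollers matter so much: the sideways component only works if the rollers can slip along their own axis.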

I hope this shows that it is possible to create a fully open source robot design with mecanum wheels that works quite well. If there is interest I can upload the 3D files to GitHub in the coming period.

Autoscaling Containers on Kubernetes on AWS

One of the challenges I faced recently was autoscaling my containers on my Kubernetes cluster. I realised I had not yet written about this concept and thought I would share how it can be done and what the pitfalls were for me.

If you combine this concept with my previous post about autoscaling your kube cluster, you can create a very nicely balanced, scalable deployment at lower cost.

Preparing your cluster

In my case I have used Kops to create my Kubernetes cluster in AWS. However, by default this does not install some of the add-ons we need for autoscaling our workloads, like Heapster.

Heapster monitors and analyses the resource usage in our cluster. The metrics it collects are very important for building scaling rules; they allow us, for example, to scale based on a CPU percentage. Heapster records these metrics and offers an API to Kubernetes so it can act on this data.

In order to deploy heapster I used the following command:

kubectl create -f

Please note that in your own kubernetes setup you might already have heapster or want to run a different version.

Optional dashboard
I also find it handy to run the Kubernetes dashboard, which you can deploy as follows under Kops:

kubectl create -f

Deploying Workload

To get started I will deploy a simple workload, in this case the command service of my robotics framework (see previous posts). This is a simple HTTP REST endpoint that takes in JSON data and passes it along to a message queue.

This is the descriptor of the deployment object for Kubernetes:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: command-svc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: command-svc
    spec:
      containers:
      - name: command-svc
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
        env:
        - name: amq_host
          value: amq
        # (second env var's name missing in the original post)
        - value: production

Readiness and liveness
I have added liveness and readiness probes to the container; this allows Kubernetes to detect when a container is ready and whether it is still alive. This is important in autoscaling, as otherwise pods might get enabled in your load-balanced service before they are actually ready to accept work. By default Kubernetes can only detect that a pod has started, not whether the process in the pod is ready to accept workloads.

These probes test if a certain condition is true, and only then will the pod be added to the load-balanced service. In my case I have a probe that checks if port 8080 of my REST service is available. I am using a simple TCP probe, as the HTTP probe that is also offered gave strange errors, and the TCP probe works just as well for my purpose.

Now we are ready to deploy the workload, which we do as follows:

kubectl create -f command-deployment.yaml

Enabling autoscaling
The next step is to enable autoscaling rules on our workload. As mentioned above, we have deployed Heapster, which monitors resource usage. I have also set resource constraints on the pods to indicate how much CPU they are allowed to consume. The command-svc pod has a limit of 500m, which translates to roughly 0.5 CPU core. A rule to scale at 80% CPU usage is relative to this limit, so it will scale at 80% usage of the 0.5 CPU limit, i.e. 400m.

We can create a rule that says there is always a minimum of 1 pod and a maximum of 3, and we scale up once the CPU usage exceeds 80% of the pod limit:

kubectl autoscale deployment command-svc --cpu-percent=80 --min=1 --max=3

We can ask for information on the autoscaler and monitor the scaling changes with the following command:

kubectl get hpa -w

Creating a load

I have deployed the command-svc pod and want to simulate load using a simple tool. For this I simply resorted to Apache JMeter; it's not a perfect tool, but it works well and, most importantly, it's free. I created a simple thread group with 40 users doing 100k requests against the command-svc from my desktop.

This is the result when monitoring the autoscaler:

NAME          REFERENCE                TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
command-svc   Deployment/command-svc   1% / 80%   1         3         1          4m
command-svc   Deployment/command-svc   39% / 80%   1         3         1         6m
command-svc   Deployment/command-svc   130% / 80%   1         3         1         7m
command-svc   Deployment/command-svc   130% / 80%   1         3         1         7m
command-svc   Deployment/command-svc   130% / 80%   1         3         2         7m
command-svc   Deployment/command-svc   199% / 80%   1         3         2         8m
command-svc   Deployment/command-svc   183% / 80%   1         3         2         9m
command-svc   Deployment/command-svc   153% / 80%   1         3         2         10m
command-svc   Deployment/command-svc   76% / 80%   1         3         2         11m
command-svc   Deployment/command-svc   64% / 80%   1         3         2         12m
command-svc   Deployment/command-svc   67% / 80%   1         3         2         13m
command-svc   Deployment/command-svc   91% / 80%   1         3         2         14m
command-svc   Deployment/command-svc   91% / 80%   1         3         2         14m
command-svc   Deployment/command-svc   91% / 80%   1         3         3         14m
command-svc   Deployment/command-svc   130% / 80%   1         3         3         15m
command-svc   Deployment/command-svc   133% / 80%   1         3         3         16m
command-svc   Deployment/command-svc   130% / 80%   1         3         3         17m
command-svc   Deployment/command-svc   126% / 80%   1         3         3         18m
command-svc   Deployment/command-svc   118% / 80%   1         3         3         19m
command-svc   Deployment/command-svc   137% / 80%   1         3         3         20m
command-svc   Deployment/command-svc   82% / 80%   1         3         3         21m
command-svc   Deployment/command-svc   0% / 80%   1         3         3         22m
command-svc   Deployment/command-svc   0% / 80%   1         3         3         22m
command-svc   Deployment/command-svc   0% / 80%   1         3         1         22m

You can also see that it neatly scales down at the end once the load goes away again.
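The replica counts in that log follow the horizontal pod autoscaler's basic formula: desired = ceil(currentReplicas × currentUtilisation / targetUtilisation), clamped to the min/max bounds. A small sketch:

```kotlin
import kotlin.math.ceil

// Desired replica count per the HPA formula, clamped between min and max.
fun desiredReplicas(current: Int, currentUtilPct: Double, targetUtilPct: Double, min: Int, max: Int): Int =
    ceil(current * currentUtilPct / targetUtilPct).toInt().coerceIn(min, max)

fun main() {
    println(desiredReplicas(1, 130.0, 80.0, 1, 3))  // 130% on 1 pod -> scale to 2
    println(desiredReplicas(3, 0.0, 80.0, 1, 3))    // load gone -> back to the minimum of 1
}
```

This matches the log: at 130% of the 80% target the single pod became two, and once the load dropped to 0% the deployment fell back to one replica.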


I have noticed a few things about the autoscaling that are important to take into account:
1. The CPU percentage is based on the resource limits you define in your pods; if you don't define them, it won't work as expected.
2. Make sure to have readiness and liveness probes on your container, or your pods might get hit with external requests before they are ready.
3. For some reason I could only use TCP-based probes in AWS; I am unsure why, but HTTP probes failed for me with timeout exceptions.


I hope this post helps people get the ultimate autoscaling setup for both your workloads and your cluster. Combined with the cluster autoscaler described in my previous post, this is a very powerful and dynamic setup on AWS:

Building a Raspberry PI Robot Car part 2

In the last post we talked about the electronics; in this post I will talk a bit about the 3D design and printing of the components. I recently acquired an Ultimaker 3D printer, and after quite some experimenting I am able to start designing my own components for the robot car. In this blog I will try to walk through the design of the robot car.

Designing the robot

The robot itself is going to consist of 4 main components:
* Casing containing the main electronics (Raspberry PI, power distribution, etc.)
* Casing containing the LiPo battery that allows easy battery replacement
* Frame that supports both the battery and electronics casing
* Wheel / suspension mechanism to hold the wheels

Note: The printer has a maximum print size of roughly 20x20x20 cm, so this is the main reason that the casing for the power, electronics and frame are separated from each other.

The design software
For the design I started out with TinkerCad, which is a free online 3D editor. However, I quickly ran into problems as the dimensions get complex fast. I then switched to Autodesk Fusion 360, which is a lot better when it comes to designing technical components; as a hobbyist it is possible to get a free one-year license.

Wheel / Suspension

The suspension design is spring-based, allowing some flex in the wheels. The wheel itself attaches to a servo, so I have designed a small bracket suited to my Dynamixel servos to hold it.

Next, I have one beam that has the spring attached to it and two static beams that connect to the servo holder. The static beams ensure linear motion of the servo holder, and the spring dampens the motion. This looks as follows:

For the wheel design I will at some point dedicate a special post, as the wheels have caused me a lot of headaches. For now I will use some standard wheels that fit onto the servos, but ultimately these will become mecanum wheels.

Designing the frame

The beams used for the suspension are actually part of the base frame. There are going to be 4 wheels, meaning 4 beams that are part of the frame. In order to create sufficient surface for the battery and electronics casings, I have connected the beams into a longer frame using connecting pieces: an end piece for each end of the frame and a middle piece to connect the beams together. This looks as follows:

Each beam has a length of 12cm, the middle piece is 4cm and the end pieces are 2cm each. This gives a total length of 32cm for the robot car, which is quite long, but for the current suspension design it is needed, as the suspension beams cannot really be shortened. In the future I might shorten the design by redesigning the suspension; for now it's good enough.

Battery & Electronics case

The main battery and electronics case caused me a lot of problems and many iterations to get right. Every time you print it, there is something that is not entirely right. The trick has been to measure, measure and measure again all the components you want to fit. In the end I drew a sketch on paper roughly showing the placement of the components. Both the battery and electronics cases have to fit a fixed footprint of 16cm by 10cm to fit the base frame. The electronics case contains special accommodation for the Raspberry PI, the UBEC power converter, two Grove sensors and the Dynamixel power board:

Note: The electronics casing will have a separate lid which will allow closing up the electronics compartment and allow easy access.

The battery case is a lot simpler: we just need something to contain the battery. However, one of the challenges is that I do not want a lid here; the battery just needs to be easily replaceable. For this to work there will be two covers on either end of the case that hide the wires but are far enough apart to remove the battery. A note here: I used rounded edges instead of sharp 90-degree angles to allow for better printing without support. The rounded edges allow for a pretty decent print on my Ultimaker, and it's a lot better than having support material in the case. The case looks as follows:

Assembling the robot

Here is a series of pictures of the various parts in stages of assembly:


The process of getting to the above design and printed parts has not been easy. For each component I went through many, many iterations before getting to the above. Even now I still see areas for improvement; however, for now I think it's close to being a functional robot car, which was the goal. In future posts I will start talking a bit about the software and the drive system with the mecanum wheels.

For those wanting to have a look at the 3D parts, I have uploaded them to GitHub. The idea is to provide in the future a proper manual on how to print and assemble the robot with a bill of materials; for now just have a look:

Here is a last picture to close with of the first powerup of the robot car:

Building a Raspberry PI Robot Car part 1

In recent months I have been very focussed on a few topics like humanoid robotics and robot interaction. Recently I had some extra time and decided to take the next step and really design a robot from scratch. For my first from-scratch robot I thought it would be handy to start simple and go for a relatively simple wheel-based robot.

I will write a series of blog posts about the robot and how I am taking the next steps to design and hopefully perfect it. In this first post I will discuss the basic concept and show how I am going to power the servos and the control unit.

The robot concept

Let’s start off by setting the design goal of the robot:
Design an open source wheel-based robot with a holonomic drive solution, capable of detecting obstacles and recognising objects it encounters.

Given this goal let's first start off with some basic requirements for the robot and what it needs to consist of. I will design this robot based on principles I have used in previous robot modifications, which has led me to these requirements:
* It will be based on a Raspberry PI with WiFi
* Entire robot should be powered by a single LiPo battery for simplicity
* Distance based sensors for obstacle detection
* Rotatable vision camera
* Holonomic drive system where I can use four individual wheel servos for multi-vector driving
* Arm / Gripper for interaction


For the servos in this project I will for the moment re-use my trustworthy Dynamixel AX-12A servos, which can be used in continuous rotation mode and therefore act as wheel servos. Given the desire to open source this project and the cost of these servos they will be replaced in the future, but for the first iteration it is best to stick to what I know.
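As background on what controlling these servos means on the wire: the AX-12A speaks the Dynamixel 1.0 serial protocol, where every command is a small framed packet with a checksum. A minimal sketch of building such a packet in Python (register address 30 is Goal Position in the AX-12A control table; the servo ID of 1 is an assumption):

```python
# Build a Dynamixel 1.0 instruction packet for an AX-12A servo.
def dxl_packet(servo_id, instruction, params):
    length = len(params) + 2                  # instruction byte + checksum byte
    body = [servo_id, length, instruction] + params
    checksum = (~sum(body)) & 0xFF            # low byte of the inverted sum
    return bytes([0xFF, 0xFF] + body + [checksum])

WRITE_DATA = 0x03
GOAL_POSITION = 30                            # two-byte register, little endian
position = 512                                # mid position on the 0-1023 scale
packet = dxl_packet(1, WRITE_DATA, [GOAL_POSITION, position & 0xFF, position >> 8])
print(packet.hex())                           # ffff0105031e0002d6
```

In continuous rotation mode the packet layout stays the same; you write to the Moving Speed register instead of Goal Position.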

Powering the solution

One of the important principles for this robot design is that it needs to be powered by a single power source. In previous robots I always used the Robotis LiPo battery combined with a separate battery solution for the Raspberry PI. This has caused issues in multiple projects, like balancing problems or simply nowhere to leave the batteries.

LiPo Battery
In this robot I will use a single LiPo battery. I have picked a Turnigy NanoTech 3S battery with a 2200mAh capacity. This should be plenty to power the Raspberry PI and the servos for an estimated 30-60 minutes, and it is easy enough to increase capacity in the future.
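To sanity-check that 30-60 minute estimate, a rough back-of-the-envelope calculation helps. The current draws below are assumptions for illustration, not measurements:

```python
# Rough runtime estimate for a 3S 2200mAh LiPo (draw figures are assumed).
battery_wh = 2.2 * 11.1        # ~24.4Wh at the 11.1V nominal voltage
pi_w = 5.0 * 1.5               # Raspberry PI: ~1.5A at 5V
servos_w = 4 * 12.0 * 0.3      # four AX-12A wheel servos at an assumed ~0.3A each
usable = 0.8                   # avoid discharging a LiPo below ~20%

runtime_min = battery_wh * usable / (pi_w + servos_w) * 60
print(round(runtime_min))      # ~54 minutes, inside the 30-60 minute window
```

Heavier servo loads (climbing, stalling) would push this toward the lower end of the estimate.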

Power conversion
The Raspberry PI accepts 5 volts as input and needs roughly 1-2 amps of current. In order to use a single LiPo battery I need a power converter, as the 3S LiPo has an output voltage of minimum ~11.1 volts and maximum ~12.6 volts. For this I will use a simple UBEC (Universal Battery Elimination Circuit) from HobbyKing. This UBEC can convert input voltages ranging from 6 to 23 volts into a stable output voltage of 5.1 volts with a maximum current of 3 amps, which is perfect for the Raspberry PI.

For the Dynamixel servos I will use the official Dynamixel power converter, a SMPS2Dynamixel. This can take an input voltage of up to 20 volts, so it can be directly connected to the 3S LiPo. All we need is a small 2.1/5.5mm DC power jack; I have managed to source one with a screw terminal but you can find different types.

Power wire harness
In order to connect both the UBEC and SMPS2Dynamixel to a single battery I have to create a small power harness that splits the power output from the 3S LiPo to both power converters. For this I have custom built a harness using XT60 power plugs and some cables I have soldered together, with a screw-cap on the end to protect the wire ends. It is all topped off with some electrical insulation tape and looks as follows:

Combining it all
Next step is connecting all the electronics. In order to control the Dynamixel servos I will use my trusted USB2AX, which allows controlling the servos via the Dynamixel serial protocol. What remains is wiring up the power with a servo and the Raspberry PI. What better way than to show this with a picture:

In order to connect the entire solution I have had to hook the UBEC directly onto the 5V/GND header connectors of the Raspberry PI. Do this with extreme care: any wrong polarity will directly blow up your Raspberry PI. Make sure to check the pinout properly, red = 5V and black = GND, and they need to go to the respective pin headers on the Raspberry PI.

Look at this slightly more zoomed-in picture for the polarity of the Raspberry PI / UBEC connection:

Next Steps

In this post I zoomed into the big project plan and in particular laid out the power setup. In the next post I will start with the 3D design of the robot and will show how I use Fusion 360 to create the design of the robot.

Deploying a Highly Available Kubernetes cluster to AWS using KOPS

In my previous posts I have talked a lot about deploying a Kubernetes cluster. For the most part I have used kube-aws from CoreOS, which has served me quite well. In the last few months however a lot has happened in the Kubernetes space, and a new tool called Kops, a subproject of the Kubernetes project, has started to become very interesting.

Both the kube-aws tool and the Kops tool have started getting support for HA deployments (a requirement for production workloads) and cluster upgrades. One of the major advantages of the Kops tool is its ability to manage and maintain multiple clusters, because it stores the 'cluster state' in an S3 bucket.

In this post I will do a walkthrough on how to deploy a highly available cluster using Kops. I will base this on the tutorial page here: The main difference is that I will describe an HA deployment instead of a regular deployment.

Installing the toolset

Pretty much all of this is covered in the tutorial here; I am using a Mac and will use 'brew install' to get the needed command line tools installed. You need the AWS command line client (aws-cli), the Kubernetes client (kubectl) and the Kops client installed.

Install the clients:
brew install awscli
brew install kubernetes-cli
brew install kops

Once these tools are installed please make sure to configure aws-cli, you can see here how:

Creating the cluster

First we need to set an environment variable pointing to the S3 bucket where we are going to keep the Kops state of all the deployed clusters. We can do this as follows:

export KOPS_STATE_STORE=s3://my-kops-bucket-that-is-a-secret

This bucket is needed because the Kops tool maintains the cluster state in it. This also means we can get an overview of all deployed clusters using the Kops tool, as it simply queries the file structure of the S3 bucket.

Once the S3 bucket is created and the variable is set, we can go ahead and create the cluster as follows:

kops create cluster --name=<my-cluster-fqdn> --master-zones=eu-west-1a,eu-west-1b,eu-west-1c --zones=eu-west-1a,eu-west-1b,eu-west-1c --node-size=t2.micro --node-count=5

The parameters are relatively self-explanatory. It is however important that the cluster name includes the fully qualified domain name, as Kops will try to register the subdomain in the Route53 hosted zone. The most important parameter that makes the setup HA is specifying multiple availability zones: for each master zone it will deploy one master node, and it will spread the worker nodes across the specified zones as well.

The above command has created a cluster configuration that is now stored in the S3 bucket; however, the actual cluster is not yet launched. You can further edit the cluster configuration as follows:

kops edit cluster

Launching the cluster
After you have finished editing, we can launch the cluster:

kops update cluster --yes

This will take a bit to complete, but after a while you should see roughly the following list of running EC2 instances, where we can see the nodes running in different availability zones:

eu-west-1c	m3.medium	running
eu-west-1b	m3.medium	running
eu-west-1a	m3.medium	running
eu-west-1a	t2.micro	running
eu-west-1b	t2.micro	running
eu-west-1b	t2.micro	running
eu-west-1c	t2.micro	running
eu-west-1c	t2.micro	running

Kops will also have sorted out your kubectl configuration so that we can ask for all available nodes as below:

kubectl get nodes
NAME   STATUS         AGE
...    Ready,master   8m
...    Ready          6m
...    Ready          7m
...    Ready,master   7m
...    Ready          7m
...    Ready          7m
...    Ready,master   8m
...    Ready          7m

Chaos monkey

I have tried actually killing some of the master nodes to see if I could still schedule a workload. The problem I faced was that the cluster could still operate and the containers remained available, but I could not schedule new workloads. This was due to the 'etcd' cluster being deployed as part of the master nodes: suddenly the minimum number of nodes for the etcd cluster was no longer present. Most likely moving etcd out of the master nodes would increase the reliability further.
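The etcd behaviour above follows directly from its quorum rule: writes need a majority of members. A quick illustration of why losing masters blocks new workloads:

```python
# etcd requires a majority (quorum) of members to accept writes.
def quorum(members: int) -> int:
    return members // 2 + 1

masters = 3                        # one master (and etcd member) per AZ
print(quorum(masters))             # 2: two of three members must be up
print(masters - quorum(masters))   # 1: only a single master may fail safely
```

With three masters a single failure is survivable, but losing two stops etcd from accepting writes, which matches the behaviour observed above.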

The good news is that once the master nodes recovered from the unexpected termination the cluster resumed regular operation.


I hope the above shows that it is now relatively easy to set up an HA Kubernetes cluster. In practice it's quite handy to have an HA cluster, and the next step is to move out etcd to make the solution even more resilient.

AutoScaling your Kubernetes cluster on AWS

One of the challenges I have faced in the last few months is the autoscaling of my Kubernetes cluster. This works perfectly on Google Cloud; however, as my cluster is deployed on AWS I had no such fortune. Recently, though, autoscaling support for AWS has been made possible due to this little contribution that was made:

In this post I want to describe how you can autoscale your Kubernetes cluster based on the container workload. This is a very important principle, because normal AWS autoscaling cannot act on the metrics available inside the cluster; it can only act on, for example, memory or CPU utilisation. What we want is that in case a container cannot be scheduled on the cluster because there are not enough resources, the cluster gets scaled out.
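The core idea can be sketched as a tiny decision rule. This is purely illustrative; the real cluster-autoscaler does much more, such as simulating pod placement before scaling:

```python
# Toy version of the scale-out decision: add a node while pods are pending.
def desired_capacity(current_nodes, pending_pods, min_size, max_size):
    if pending_pods > 0 and current_nodes < max_size:
        return current_nodes + 1          # scale out one node at a time
    return max(current_nodes, min_size)   # never drop below the minimum

print(desired_capacity(current_nodes=1, pending_pods=1, min_size=1, max_size=5))  # 2
print(desired_capacity(current_nodes=5, pending_pods=3, min_size=1, max_size=5))  # 5
```

Note how the maximum size caps scale-out even with pods still pending, which is why the bounds on the AWS autoscaling group matter.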

Defining resource usage

One of the very important things to do when you are defining your deployments against Kubernetes is to define the resource usage of your containers. In each deployment object it is therefore imperative that you specify the expected resource allocation. Kubernetes will then use this to place the container on a node that has enough capacity. In this exercise we will deploy two containers that both at their limit require 1.5GB of memory, which looks as follows in a fragment of one of the deployment descriptors:

      - name: amq
        image: rmohr/activemq:latest
        resources:
          limits:
            memory: "1500Mi"
            cpu: "500m"

Setting up scaling

So given this we will start out with a cluster of one node of type m3.medium, which has 3.75GB of memory. We do this on purpose with a limited initial cluster to test out our autoscaling.
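To see why one m3.medium will not hold both workloads, here is a small fit check with assumed numbers; the real scheduler works on the node's allocatable resources, and the system overhead figure below is an assumption:

```python
# Can a set of pods fit on a node, judged by memory limits alone? (illustrative)
NODE_MEMORY_MI = 3840           # m3.medium: 3.75GB
SYSTEM_OVERHEAD_MI = 1000       # assumed: kubelet + system pods

def fits(pod_limits_mi):
    return sum(pod_limits_mi) <= NODE_MEMORY_MI - SYSTEM_OVERHEAD_MI

print(fits([1500]))             # True: one 1.5GB pod schedules fine
print(fits([1500, 1500]))       # False: the second pod stays pending
```

That pending second pod is exactly the signal the autoscaler reacts to later in this post.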

If you execute kubectl get nodes we see the following response:

NAME   STATUS    AGE
...    Ready     10m

In order to apply autoscaling we need to deploy a specific deployment object and container that checks the Kubernetes cluster for unscheduled workloads and, if needed, triggers an AWS autoscaling group. This deployment object looks as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - image:
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --nodes=MIN_SCALE:MAX_SCALE:ASG_NAME
          env:
            - name: AWS_REGION
              value: us-east-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"

Note: the image we are using is a Google-supplied image with the autoscaler script on it; you can check here for the latest version:

In the above deployment object make sure to replace the MIN_SCALE and MAX_SCALE settings for the autoscaling and ensure the right autoscaling group name (ASG_NAME) is set. Please note that the minimum and maximum scaling bounds need to be allowed in the AWS autoscaling group, as the scaling process cannot modify the autoscaling group limits itself.
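As an illustration, with a hypothetical autoscaling group named my-worker-asg that is allowed to run between 1 and 5 instances, the flag would read:

```yaml
- --nodes=1:5:my-worker-asg
```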

AWS Policy
In AWS we need to ensure there is an IAM policy in place that allows querying the autoscaling groups and modifying the desired capacity of the group. I have used the below role definition, which is very wide:

    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": "*"

Make sure to attach this policy to the role that is related to the worker nodes; in my CoreOS kube-aws generated cluster this is something like 'testcluster-IAMRoleController-GELKOS5QWHRU', where testcluster is my cluster name.

Deploying the autoscaler
Now let's deploy the autoscaler just like any other deployment object against the Kubernetes cluster:

kubectl create -f autoscaler.yaml

Let’s check next that the autoscaler is working:

➜  test kubectl get po
NAME                                  READY     STATUS    RESTARTS   AGE
cluster-autoscaler-2111878067-nhr01   1/1       Running   0          2m

We can also check the logs using kubectl logs cluster-autoscaler-2111878067-nhr01

No unschedulable pods
Scale down status: unneededOnly=true lastScaleUpTime=2017-01-06 08:44:01.400735149 +0000 UTC lastScaleDownFailedTrail=2017-01-06 08:44:01.400735354 +0000 UTC schedulablePodsPresent=false
Calculating unneded nodes
Node - utilization 0.780000
Node is not suitable for removal - utilization to big (0.780000)
Node - utilization 0.954000
Node is not suitable for removal - utilization to big (0.954000)

We can see that the autoscaler regularly checks the workload on the nodes, determines whether they can be scaled down, and checks whether additional worker nodes are needed.

Let’s try it out

Now that we have deployed our autoscaling container, let's start to schedule our workload against AWS. In this case we will deploy two objects, ActiveMQ and Cassandra, where both require a 1.5GB memory footprint. The combined deployment plus the system containers will cause the Kubernetes scheduler to determine there is no capacity available, and in this case Cassandra cannot be scheduled, as can be seen in the below snippet from kubectl describe po/cassandra-2599509460-g3jzt:

FailedScheduling	pod (cassandra-2599509460-g3jzt) failed to fit in any node

When we check in the logs of the autoscaler we can see the below:

Estimated 1 nodes needed in testcluster-AutoScaleWorker-19KN6Y4AR18Z
Scale-up: setting group testcluster-AutoScaleWorker-19KN6Y4AR18Z size to 2
Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cassandra-2599509460-mt275", UID:"af53ac7b-d3ec-11e6-bd28-0add02d2d0c1", APIVersion:"v1", ResourceVersion:"2224", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up, group: testcluster-AutoScaleWorker-19KN6Y4AR18Z, sizes (current/new): 1/2

It is scheduling an additional worker by increasing the desired capacity of our auto scaling group in AWS. After a small wait we can see the additional node has been made available:

NAME   STATUS    AGE
...    Ready     27m
...    Ready     11s

And a short while after the node came up we also see that the pod with Cassandra has become active:

NAME                                  READY     STATUS    RESTARTS   AGE
amq-4187240469-hlkhh                  1/1       Running   0          20m
cassandra-2599509460-mt275            1/1       Running   0          8m
cluster-autoscaler-2111878067-nhr01   1/1       Running   0          11m


We have been able to autoscale our AWS-deployed Kubernetes cluster, which is extremely useful. I can use this in production to quickly scale my cluster out and down. But perhaps even more importantly, for my development use case I can run a minimum-size cluster during idle moments and scale back up to full capacity during workloads, saving me quite some money.

Remote Controlling Nao robot using a Raspberry Pi Robot

Today I want to take some time to write about the next step I am currently taking to have both my self-built Raspberry PI robot and the Nao robot interact with each other on a useful basis. You might have already seen some posts before, like the one about robot interaction or perhaps the model train one. However, both these posts did not really demonstrate a practical use-case.

Recently I presented on this topic at the Devoxx conference in Antwerp, where I attempted to demonstrate how to control one robot from another using Kubernetes, Helm and Minikube combined with some IoT glue 🙂 The scenario I demonstrated was to create a robotic arm from my Raspberry PI robot that I then use to remote control a Nao robot.

Robot arm
In order to have some form of remote control I have created a robot arm which I can use as a sort of joystick. I have created the arm from the same parts as described in this post. The robot arm is controlled via a Raspberry PI that runs a bit of Java software to connect it to MQTT, both to send servo position changes and to receive commands from MQTT to execute motions on the robot arm.

The robot looks like this:

Nao Robot
For the Nao robot I have written a custom Java controller that connects to the remote API of Nao. This controller software does nothing else but allow remote control of the Nao robot by listening to commands coming from MQTT.

Connecting the Robots

Like in previous setups I will be using my custom Robot Cloud deployment for this experiment. I will be deploying a number of micro-services to a Kubernetes cluster that is running on AWS. The most important public service is the MQTT message bus, which is where the robots send their status (sensors/servos) and receive commands from (animations, walk commands etc.). For more detail on the actual services and their deployment you can check here

The most important part of bridging the gap between the robots is a specific container that receives updates from the servos on the robot arm. Based on events from those servos (moving the joystick forward) I want to trigger the Nao robot to start walking. The full code with a lot more detail is available in this git repository:
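The heart of that bridging container is just a mapping from servo positions to robot commands. A stripped-down sketch of the idea, where the topic names, servo IDs and thresholds are all hypothetical and the MQTT plumbing is left out:

```python
# Map a joystick-servo position update to a Nao command topic (illustrative).
CENTER, DEADBAND = 512, 40      # AX-12A positions run 0-1023; ~512 is at rest

def map_servo_event(servo_id, position):
    if servo_id != 1:           # assume servo 1 is the forward/back axis
        return None
    if position > CENTER + DEADBAND:
        return "nao/command/walk_forward"
    if position < CENTER - DEADBAND:
        return "nao/command/walk_backward"
    return "nao/command/stop"

print(map_servo_event(1, 700))  # nao/command/walk_forward
print(map_servo_event(1, 512))  # nao/command/stop
```

The deadband around the center keeps servo jitter from constantly toggling the robot between walking and stopping.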


It's quite a complex setup, but the conclusion is that by using my Kubernetes-deployed Robot Cloud stack I can use the robot arm to control the Nao robot. If you want to see a bit more, including a live demo, you can check out my Devoxx presentation here:

One thing I could not demo at Devoxx was the interaction with a real Nao robot; I have made a recording of how that would look and also put it on YouTube here: