Building a Raspberry PI Robot Car part 2

In the last post we talked about the electronics; in this post I will talk a bit about the 3D design and printing of the components. I have recently acquired an Ultimaker 3D printer, and after quite a bit of experimenting I am now able to start designing my own components for the robot car. In this blog I will walk through the design of the robot car.

Designing the robot

The robot itself is going to consist of 4 main components:
* Casing containing the main electronics (Raspberry PI, power distribution, etc.)
* Casing containing the LiPo battery that allows easy battery replacement
* Frame that supports both the battery and electronics casing
* Wheel / suspension mechanism to hold the wheels

Note: The printer has a maximum print size of roughly 20x20x20 cm, which is the main reason that the power casing, the electronics casing and the frame are separate parts.

The software
For the design of the software I started out with TinkerCad, which is a free online 3D editor. However, I quickly ran into problems once the dimensions became more complex. After this I switched to Autodesk Fusion 360, which is a lot better when it comes to designing technical components; as a hobbyist it is possible to get a free one-year license.

Wheel / Suspension

The suspension is a spring-based design that allows some flex in the wheel assembly. The wheel itself attaches to a servo, so I have designed a small bracket suited for my Dynamixel servos to hold that servo.

Next there is one beam that has the spring attached to it and two static beams that connect to the servo holder. The static beams ensure linear motion of the servo holder and the spring dampens the motion. This looks as follows:

I will dedicate a separate post to the wheel design at some point, as the wheels have caused me a lot of headaches. For now I will use some standard wheels that fit onto the servos, but ultimately these will become mecanum wheels.

Designing the frame

The beams used for the suspension are actually part of the base frame. There are going to be 4 wheels, which means 4 beams that are part of the frame. In order to create a sufficient surface for the battery and electronics casing I have connected the beams into a longer frame using connecting pieces. I have designed an end piece for either end of the frame and a middle piece to connect all the beams together. This looks as follows:

Each of the beams has a length of 12cm, the middle piece is 4cm and the end pieces are 2cm each. This gives a total length of 32cm for the robot car. That is quite long, but for the current suspension design it is needed as the suspension beams cannot really be shortened. In the future I might shorten the design by redesigning the suspension, but for now it's good enough.

Battery & Electronics case

The main battery and electronics case has caused me a lot of problems and many iterations to get right. Every time you print it, there is something that is not entirely right. The trick has been to measure, measure and measure again all the components you want to fit. In the end I drew a sketch on paper roughly showing the placement of the components. Both the battery and electronics case have to fit in a fixed length of 16cm and a width of 10cm to fit the base frame. The electronics case contains special accommodation for the Raspberry PI, UBEC power converter, two Grove sensors and the Dynamixel power board:

Note: The electronics casing will have a separate lid which closes up the electronics compartment while still allowing easy access.

The battery case is a lot simpler; we just need something to contain the battery. One of the challenges, however, is that I do not want a lid here: the battery just needs to be easy to replace. For this to work there will be two covers on either end of the case that hide the wires but are far enough apart to remove the battery. A note here is that I used rounded edges instead of sharp 90 degree angles to allow for better printing without support. The rounded angles give a pretty decent print on my Ultimaker, and it's a lot better than having support material in the case. The case looks as follows:

Assembling the robot

Here is a series of pictures of the various parts in different stages of assembly.

Conclusion

The process of getting to the above design and printed parts has not been easy. Each component went through many, many iterations before reaching the result above. Even now I still see areas for improvement, but for now I do think it's close to being a functional robot car, which was the goal. In future posts I will start talking a bit about the software and the drive system with the mecanum wheels.

For those wanting to have a look at the 3D parts, I have uploaded them to GitHub. The idea is to eventually provide a proper manual on how to print and assemble everything, including a bill of materials, but for now just have a look:
https://github.com/renarj/robo-max/tree/master/3d-parts

Here is a last picture to close with of the first powerup of the robot car:

Remote Controlling a Nao Robot using a Raspberry Pi Robot

Today I want to take some time to write about the next step I am currently taking to have my self-built Raspberry PI robot and the Nao robot interact with each other in a useful way. You might have already seen some earlier posts like https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/ about robot interaction, or perhaps the model train one https://renzedevries.wordpress.com/2016/09/13/having-fun-with-robots-and-model-trains/. However, neither of these posts really demonstrated a practical use-case.

Recently I presented on this topic at the Devoxx conference in Antwerp, where I attempted to demonstrate how to control one robot from another using Kubernetes, Helm and Minikube combined with some IoT glue 🙂 The scenario I demonstrated was building a robotic arm from my Raspberry PI robot parts and using it to remote control a Nao robot.

Robot arm
In order to have some form of remote control I have created a robot arm which I can use as a sort of joystick. I built it from the same parts as described in this post (https://renzedevries.wordpress.com/2016/03/31/build-a-raspberry-pi-robot-part-2/). The robot arm is controlled by a Raspberry PI running a bit of Java software that connects it to MQTT, both to send servo position changes and to receive commands to execute motions on the arm.
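
The post does not show the plumbing for that MQTT link, but conceptually the arm side boils down to publishing a servo position whenever the joystick moves. Below is a minimal sketch of that idea; I am assuming the Eclipse Paho client and a hypothetical topic layout, the actual implementation is part of the Java software mentioned above.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class ServoPositionPublisher {
    private final MqttClient client;

    public ServoPositionPublisher(String brokerUrl) throws MqttException {
        // e.g. "tcp://my-mqtt-broker:1883", the broker running in the Robot Cloud
        this.client = new MqttClient(brokerUrl, MqttClient.generateClientId());
        this.client.connect();
    }

    // Publish a servo position update every time the joystick arm moves
    public void publishPosition(int servoId, int position) throws MqttException {
        String payload = Integer.toString(position);
        client.publish("/robot/arm/servo/" + servoId, new MqttMessage(payload.getBytes()));
    }
}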

The robot looks like this:

Nao Robot
For the Nao robot I have written a custom Java controller that connects to the remote API of the Nao. This controller software does nothing other than allow remote control of the Nao robot by listening to commands coming in over MQTT.

Connecting the Robots

Like in previous setups, I will be using my custom Robot Cloud deployment for this experiment. I will be deploying a number of micro-services to a Kubernetes cluster running on AWS. The most important public service is the MQTT message bus, to which the robots send their status (sensor and servo data) and from which they receive commands (animations, walk commands, etc.). For more detail on the actual services and their deployment you can check here: https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/

The most important part of bridging the gap between the robots is a specific container that receives updates from the servos on the robot arm. Based on events from those servos (for example moving the joystick forward) I want to trigger the Nao robot to start walking. The full code, with a lot more detail, is available in this git repository: https://github.com/renarj/robo-bridge
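
To illustrate the bridging idea (this is not the actual robo-bridge code), a stripped-down version of that container could look like the sketch below. It again assumes the Eclipse Paho client, hypothetical topic names and a made-up payload of a plain integer position.

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class ArmToNaoBridge implements MqttCallback {
    private static final int FORWARD_THRESHOLD = 600; // hypothetical raw position meaning "joystick forward"

    private final MqttClient client;

    public ArmToNaoBridge(String brokerUrl) throws Exception {
        client = new MqttClient(brokerUrl, MqttClient.generateClientId());
        client.setCallback(this);
        client.connect();
        client.subscribe("/robot/arm/servo/#"); // servo updates published by the arm
    }

    @Override
    public void messageArrived(String topic, MqttMessage message) throws Exception {
        int position = Integer.parseInt(new String(message.getPayload()).trim());
        if (position > FORWARD_THRESHOLD) {
            // Joystick pushed forward: ask the Nao controller to start walking
            client.publish("/robot/nao/command", new MqttMessage("walk_forward".getBytes()));
        }
    }

    @Override
    public void connectionLost(Throwable cause) {
        // Reconnect logic would go here
    }

    @Override
    public void deliveryComplete(IMqttDeliveryToken token) {
        // Not needed for this sketch
    }
}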

Conclusion

It’s quite a complex setup, but the conclusion is that by using my Kubernetes-deployed Robot Cloud stack I can use the robot arm to control the Nao robot. If you want to see a bit more, including a live demo, you can check out my Devoxx presentation here:

One thing I could not demo at Devoxx was the interaction with a real Nao robot. I have made a recording of how that looks and put it on YouTube here:

Deploying a Docker container to Kubernetes on Amazon AWS

As recent posts show, I am still very busy with my robots; the goal I am working on now is a solution for interaction between multiple robots. The approach I am taking here is to utilize the Cloud as a means of brokering between the robots. The architecture I have in mind will be based on a reactive / event driven framework that the robots will use for communication.

One of the foundations for this mechanism is to utilise the Cloud and an event driven architecture. In another post I will give some more detail about how the robots will actually interact, but in this post I want to give some information about the specifics of my Cloud architecture and how to deploy such a cluster of microservices using Docker and Kubernetes.

Deploying to the cloud

The robot framework will have a significant number of microservices, plus the supporting services they depend on, that I need to deploy. With this in mind I have chosen to go for a Docker / microservices based architecture, where in the future I can easily extend the number of services to be deployed and simply grow the cluster if needed.

I have quite some experience with Docker by now and have really grown fond of Kubernetes as an orchestrator, so that will be my pick, as it is so easy to administer in the Cloud. There are other orchestrators, but they seem to be earlier in their development, although the AWS ECS service does seem to come close.

Note to Google: I have played before with Google Cloud and Kubernetes and it's actually a perfect fit. It is super easy to deploy containers and run your Kubernetes cluster there. However, Google does not allow individual developers (like me) without their own company a Cloud account in the EU because of VAT rules. So unfortunately for this exercise I will have to default to Amazon AWS. I hope Google reads this so they can fix it quickly!

Creating a Kubernetes cluster on AWS

In this next part I will demo how to deploy the most important service, the MQTT broker, to a Kubernetes cluster.

In order to get started I first need to create an actual Kubernetes cluster on AWS. This is relatively simple; I just followed the steps on this page: http://kubernetes.io/docs/getting-started-guides/aws/

Prerequisite: Amazon AWS CLI is installed and configured http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

I used the following variables to create the cluster, which will be a 4 node cluster using M3 Medium instances:

export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=eu-west-1c
export NUM_NODES=4
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=robot-kubernetes-artifacts
export INSTANCE_PREFIX=robo

After that it is simply a case of running this:

wget -q -O - https://get.k8s.io | bash

If you have previously run the above command, a directory called ‘kubernetes’ will have been created, and from there you can control the cluster using the following commands.
For creating the cluster again (only needed if you already ran the above command once before):

cluster/kube-up.sh

For stopping the cluster:

cluster/kube-down.sh

After the above commands have run, your cluster should have been created. For Kubernetes you still need one small tool: the Kubernetes command line client called ‘kubectl’. The kubernetes folder that was downloaded ships with the client; you can simply add it to your path as follows:

# OS X
export PATH=/Users/rdevries/kubernetes/platforms/darwin/amd64:$PATH

# Linux
export PATH=/Users/rdevries/kubernetes/platforms/linux/amd64:$PATH

You should be able to check that your cluster is up and running with the following:

kubectl get nodes
NAME                                         STATUS    AGE
ip-172-20-0-115.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-116.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-117.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-118.eu-west-1.compute.internal   Ready     4m

Using Amazon ECR

In order to get the containers deployed on Kubernetes I have to make them available to Kubernetes and Docker. I could use Docker Hub to distribute my containers, and this is actually fine for MQTT, which I have made available on Docker Hub (https://hub.docker.com/r/renarj/mqtt/).

However, some of the future containers are not really meant for public availability, so instead I am opting to upload the MQTT container to the new Amazon EC2 Container Registry (ECR), also to test how simple this is.

We can simply start using the AWS CLI and create a Container repository for MQTT:
aws ecr create-repository --repository-name mqtt

Next we can build the container from this git repo: https://github.com/renarj/moquette-spring-docker

git clone https://github.com/renarj/moquette-spring-docker.git
cd moquette-spring-docker
mvn clean package docker:build

Now we need to upload the container to AWS. These instructions will vary slightly for everyone, because your own account id is part of the repository URL (replace 1111 with your AWS account id). First we tag the container, then log in to AWS ECR, and finally push the container:
1. Tag the container: docker tag mqtt/moquette-spring:latest 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest
2. Log in to AWS with the command returned by: aws ecr get-login --region eu-west-1
3. Push the container: docker push 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest

This should make the container available in the Kubernetes cluster.

Deploying the containers
In order to deploy the container I have to create two small YAML files: one describing the container to be deployed and one for the MQTT service.

The first part is to tell Kubernetes how to deploy the actual container, which we can do with this simple YAML file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: mqtt
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mqtt
    spec:
      containers:
      - name: mqtt
        image: 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest
        ports:
        - containerPort: 1883
        imagePullPolicy: Always

If you look at the above sample there are two important aspects. The first is the image location, which is the same as my ECR repository URL; if you do not know the URL you can always execute the AWS CLI command aws ecr describe-repositories, which returns a list of all available repositories and their URLs. The second is the imagePullPolicy of Always, which makes Kubernetes pull the latest image on every deployment.

Note: I have noticed that older versions of Kubernetes have some permission errors when using ECR as the image repository. I am unsure exactly why, but later versions of Kubernetes (1.2.4+) seem to work properly.

Now that we have described how the container needs to be deployed, we want to tell Kubernetes how it can be made available. I will load balance the MQTT service using an ELB; the nice part about Kubernetes is that it can arrange all of that for me.

In order to tell Kubernetes, and as a result AWS, to create a load balancer, we define a Kubernetes service with type ‘LoadBalancer’:

apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: LoadBalancer
  ports:
  - port: 1883
    targetPort: 1883
    protocol: TCP
    name: mqtt
  selector:
    app: mqtt

Using this we should have a load balancer available once both definitions have been created with kubectl create. In order to find out the public address of the service you can describe the service in Kubernetes and find the endpoint like this:

kubectl describe svc/mqtt
Name:           mqtt
Namespace:      default
Labels:         app=mqtt
Selector:       app=mqtt
Type:           LoadBalancer
IP:         10.0.225.68
LoadBalancer Ingress:   a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com
Port:           mqtt    1883/TCP
NodePort:       mqtt    30531/TCP
Endpoints:      10.244.1.5:1883
Session Affinity:   None
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  2s        2s      1   {service-controller }           Normal      CreatingLoadBalancer    Creating load balancer
  0s        0s      1   {service-controller }           Normal      CreatedLoadBalancer Created load balancer

Please note the LoadBalancer Ingress address; this is the publicly available load balancer address for the MQTT service.

Testing MQTT on Kubernetes

Now that our cluster is deployed and we have a load balanced MQTT container running, how can we give this a try? For this I have used a simple Node.js package called mqtt-cli. Simply run npm install mqtt-cli -g to install the package.

After this you can send a test message to our load balanced MQTT container using the following command, with a -w at the end to watch the topic for incoming messages:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "test message" -w

Now let’s try, from a new console window, to send a message again but this time without the ‘-w’ at the end:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "Message from another window"

If everything worked well the message should now have appeared in the other window, and voilà, we have a working Kubernetes cluster with a load balanced MQTT broker on it 🙂

Next steps

I will be writing another blog post soon on how all of the above has helped me connect my robots to the cloud. I hope this info helps some people and shows that Kubernetes and Docker are really not at all scary to use in practice 🙂

Building a Raspberry PI Humanoid Robot in Java: part 1

In one of my previous blog posts I dove into the territory of robotics; the first big project was building the robo-spider: https://renzedevries.wordpress.com/2015/10/04/raspberry-pi-robot/

As you might have seen, we also have a Nao robot, but there is something about the magic of creating your own humanoid robot. Although it is hard to come close to what you can do with a Nao, I want to try to master the basics of building my own humanoid robot.

Goals

Perhaps it is good to write down what I have set out to achieve with my Humanoid robot:

  1. Have basic sensors for distance and axis detection
  2. Run wirelessly without requiring wired network, usb or power cables but is still controllable (wifi)
  3. Get the Robot walking and have other working movements
  4. Combine Sensors and movements, walk with obstacle avoidance
  5. Allow interaction and collaboration between the Nao and my Raspberry PI robot
  6. Have dynamic walking capability

In this post I want to address points 1 and 2 by creating a small portable sensor solution on top of a Raspberry PI, powered by a set of LiPo batteries. In a later post I will put this all together and hopefully address point 3 and beyond by having the robot work together with the Nao robot.

Hardware

So where do we get started? I have a Robotis Bioloid kit that comes with some interesting sensors: 3 distance sensors and a gyro sensor. In the previous spider robot I used a Raspberry PI 2 with a USB powerpack. That was a relatively easy solution, as a spider robot provides a stable, flat platform on top to build onto.

Raspberry PI

Getting a proper stable platform is a lot more challenging on a humanoid due to the center of gravity. In the case of a humanoid robot you either put the electronics in its chest or on its back. It is very important to keep the center of gravity in check and not sway the top too far forward or backward, because anything you add has to be compensated for by the servos. The solution I had for the spider robot was simply not suitable anymore for this reason: the battery powering the PI 2 was too heavy and the PI 2 itself was too big.

Bill of materials

So this means I needed something lighter and smaller; luckily my girlfriend managed to get me a Raspberry PI Zero for Christmas :D. This should solve the bulkiness of the PI at least.

For powering the robot I also needed a lighter solution. Here I went for an Adafruit PowerBoost that allows me to run the PI Zero from a single cell LiPo battery of 2500mAh, which is very compact and flat.

Next to this I needed an analog-to-digital converter that allows me to read the sensors that came with the Robotis kit.

So the total list for powering the sensors and the PI becomes:
* Raspberry PI Zero + 16GB micro-SD card
* Micro-USB hub + WiFi dongle
* Adafruit PowerBoost 1000 Basic
* Adafruit 1 cell LiPo 2500mAh
* Adafruit ADS1115 16-bit 4 channel analog-to-digital converter
* Small breadboard for putting it together

With this setup we will run the following sensors from the Robotis kit:
* Sharp distance sensor GP2Y0A21YK (10 – 80 cm)
* X & Y axis gyro sensor

Next to this I also use these servos and the hardware to power them:
* USB2AX Dynamixel USB-serial communication stick
* 18x Dynamixel AX-12A servos from a Robotis Bioloid Premium kit
* Robotis SMPS2Dynamixel to power the servos (allows connecting a 3S LiPo power pack)
* 2S LiPo 1200mAh for powering the servos

Wiring it up

So how does that all look wired together? Well, it is relatively simple. I had to solder the connectors onto the Adafruit components and the GPIO header onto the PI Zero.

After this the solution looks like the picture below, showing just the Raspberry PI Zero and the sensor parts:

And this is how it looks on the back of the robot with the servos and all:

Reading Sensor data

Now that the hardware was sorted, the next challenge was reading out the sensor data. Luckily this is relatively easy thanks to the analog-to-digital converter chip I chose (the ADS1115 from Adafruit). There is an example for the ADS1115 chip based on the PI4J project available here: https://github.com/rlsutton1/piBot/blob/master/piBot/src/main/java/com/pi4j/gpio/extension/adafruit/ADS1115.java
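
Stripped down to the essentials, reading the A0 channel through PI4J could look roughly like the sketch below. The class and constant names are from the PI4J 1.x GPIO extension as far as I recall and should be treated as an assumption; the voltage scaling shown in the output below is omitted here.

import com.pi4j.gpio.extension.ads.ADS1115GpioProvider;
import com.pi4j.gpio.extension.ads.ADS1115Pin;
import com.pi4j.io.gpio.GpioController;
import com.pi4j.io.gpio.GpioFactory;
import com.pi4j.io.gpio.GpioPinAnalogInput;
import com.pi4j.io.i2c.I2CBus;

public class DistanceSensorReader {
    public static void main(String[] args) throws Exception {
        GpioController gpio = GpioFactory.getInstance();

        // The ADS1115 sits on I2C bus 1 at its default address (0x48)
        ADS1115GpioProvider provider =
                new ADS1115GpioProvider(I2CBus.BUS_1, ADS1115GpioProvider.ADS1115_ADDRESS_0x48);

        // Channel A0 is wired to the Sharp distance sensor
        GpioPinAnalogInput distanceInput =
                gpio.provisionAnalogInputPin(provider, ADS1115Pin.INPUT_A0, "MyAnalogInput-A0");

        while (true) {
            double raw = distanceInput.getValue();
            double percent = (raw * 100) / ADS1115GpioProvider.ADS1115_RANGE_MAX_VALUE;
            System.out.printf("(MyAnalogInput-A0) : PERCENT=%.1f%% | RAW=%.1f%n", percent, raw);
            Thread.sleep(250);
        }
    }
}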

So with this piece of code it is simply a matter of running it, and this is what I got on my first run:

(MyAnalogInput-A0) : VOLTS=2.33 | PERCENT=57% | RAW=18677.0
(MyAnalogInput-A0) : VOLTS=1.43 | PERCENT=34.8% | RAW=11413.0
(MyAnalogInput-A0) : VOLTS=1.04 | PERCENT=25.4% | RAW=8321.0
(MyAnalogInput-A0) : VOLTS=0.87 | PERCENT=21.3% | RAW=6981.0
(MyAnalogInput-A0) : VOLTS=0.77 | PERCENT=18.8% | RAW=6171.0
(MyAnalogInput-A0) : VOLTS=0.68 | PERCENT=16.5% | RAW=5413.0
(MyAnalogInput-A0) : VOLTS=0.56 | PERCENT=13.7% | RAW=4498.0
(MyAnalogInput-A0) : VOLTS=0.44 | PERCENT=10.8% | RAW=3546.0
(MyAnalogInput-A0) : VOLTS=0.42 | PERCENT=10.3% | RAW=3391.0
(MyAnalogInput-A0) : VOLTS=0.37 | PERCENT=8.9% | RAW=2930.0
(MyAnalogInput-A0) : VOLTS=0.18 | PERCENT=4.3% | RAW=1419.0
(MyAnalogInput-A0) : VOLTS=0.02 | PERCENT=0.5% | RAW=175.0

This is the distance sensor, where 2.33 Volt corresponds to a distance of roughly 10cm and 0.02 Volt represents a distance of 80cm or further. In the future this should be perfect for object collision detection.
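
For obstacle detection it helps to turn that voltage into an approximate distance. The response of the Sharp sensor is roughly reciprocal, so a simple conversion like the sketch below gets you in the right ballpark; the constant 27.0 is a rough guess on my side, and a real setup should be calibrated against measured distances.

public final class SharpDistanceConverter {
    private SharpDistanceConverter() { }

    // Very rough conversion of the GP2Y0A21YK output voltage to centimeters
    public static double toCentimeters(double volts) {
        if (volts < 0.1) {
            return 80.0; // below ~0.1 Volt the sensor is out of range (>80cm)
        }
        double distance = 27.0 / volts; // reciprocal approximation of the sensor curve
        return Math.min(Math.max(distance, 10.0), 80.0); // the sensor is only valid between 10 and 80 cm
    }
}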

Wrap up

In this first post of the series I managed to create the basics of my mobile robot. In the next post I will work on the walking of the robot, which is quite a challenge as you can imagine. If you want a sneak peek you can already see a YouTube video of it here:

API Design: The Good, Bad and the Ugly

I was doubting whether I should write this post, because good API design is a very controversial topic; what makes a good API is something that can be endlessly debated.

I do however want to talk about it a bit, because I recently got my Nao robot from Aldebaran and, naturally, as a Java/Scala developer I wanted to try out their Java SDK.

When I got the Nao the Java SDK was at version 2.1.2, and I was rather shocked when I saw how I had to use the SDK to control the robot. Although I am not very keen on criticising someone’s hard work, in this case I can’t help but remark on the usability of this API.

So let’s start with a small example: in order to let the robot speak with the 2.1.2 SDK I had to use the following code:

Application app = new Application(args);
Session session = new Session();
com.aldebaran.qimessaging.Object tts = session.service("ALTextToSpeech");
tts.call("say", "I am NAO and will concur the world", "English");

What is wrong?

So what is wrong with the above API design? Well, there are some very obvious issues. The first one is that the API designers have chosen to create a class named ‘Object’, which is incredibly unhandy as everything in Java inherits from the java.lang.Object type; this means you automatically need to fully qualify the API classes everywhere.

One of the most frustrating parts of this API is that it is not strongly typed. Whenever I want to perform any operation on the robot, from speaking to motion, I need to provide the name of the method in the Nao API as a string, as if I am doing reflection, which is very cumbersome.

Nao 2.1.4

When I started writing this article the SDK was still at 2.1.2, but in the meantime, whilst writing and investigating this, the API has been updated and they now provide Java proxy objects which allow easier interaction. The same snippet as above now looks a lot clearer:


Application application = new Application(args, PEP_URL);
Session session = application.session();
ALTextToSpeech tts = new ALTextToSpeech(session);
tts.say("I am NAO and will concur the world");

The majority of my concerns are now addressed, one would say.

Complex Event Callbacks

However, there is still a bit of nitpicking left with the newer, more streamlined API, and that is the event driven, callback based API. If you want to get any events from the robot, like its head being touched, the following code is required:


Application application = new Application(args, PEP_URL);
Session session = application.session();
ALMemory memory = new ALMemory(session);
memory.subscribeToEvent("FrontTactilTouched", new EventCallback<Float>() {
    @Override
    public void onEvent(Float o) throws InterruptedException, CallError {
        LOG.debug("Received head touched: {}", o);
    }
});

So basically nothing too special, but what gets annoying on a robot is that you might want to monitor a huge number of sensors. You very quickly end up with a huge number of anonymous inner classes, which makes the code ugly and makes it hard to build any kind of higher level logic.

The solution?

So again we get into the debate of what makes a good API. In my opinion a good API prevents me from doing extra work; it provides me out of the box with what I need to accomplish my end result. In the case of a robot I expect minimal effort to be needed to, for example, monitor sensor events.

I don’t want to rant without providing a solution to this problem, so how did I solve it myself? In the past I have written a small in-process event bus mechanism that uses reflection and annotations to send events to the right listener methods. In the end I used this, with a small bit of extra code, to make listening to any event a lot easier. This is how listening to a Nao event looks in that case:

@EventSubscribe
@EventSource({"FrontTactilTouched", "MiddleTactilTouched", "RearTactilTouched"})
public void receive(TriggerEvent triggerEvent) {
    if(triggerEvent.isOn()) {
        LOG.info("Head was touched: {}", triggerEvent.getSource());
    }
}

The above code is a simple method in a class, annotated with ‘EventSubscribe’ to tell the local EventBus it is interested in messages. The EventBus determines the type of message the method can receive by checking the type of the first parameter using reflection.

Next to this I introduced an EventSource annotation to indicate which sensors of the robot to listen to. I have written a simple bit of logic that uses reflection to read all methods annotated with EventSource and automatically creates the Nao SDK event callbacks for them; the callbacks then forward the events to the listener via the in-process EventBus.
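
To give an idea of what happens under the hood, here is a heavily simplified sketch of such a reflection based dispatcher. This is not the actual EventBus code; it only shows the core idea of registering methods annotated with EventSubscribe (assuming that annotation has runtime retention) and matching published events against the type of the first parameter.

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class SimpleEventBus {
    private static class Handler {
        final Object listener;
        final Method method;

        Handler(Object listener, Method method) {
            this.listener = listener;
            this.method = method;
        }
    }

    private final List<Handler> handlers = new ArrayList<>();

    // Register all public methods annotated with @EventSubscribe on the listener object
    public void register(Object listener) {
        for (Method method : listener.getClass().getMethods()) {
            if (method.isAnnotationPresent(EventSubscribe.class) && method.getParameterCount() == 1) {
                handlers.add(new Handler(listener, method));
            }
        }
    }

    // Deliver the event to every handler whose first parameter type accepts it
    public void publish(Object event) {
        for (Handler handler : handlers) {
            if (handler.method.getParameterTypes()[0].isAssignableFrom(event.getClass())) {
                try {
                    handler.method.invoke(handler.listener, event);
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException("Could not deliver event", e);
                }
            }
        }
    }
}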

Conclusion

So what is my point really? You don’t have to agree with the API design of my solution, and perhaps you don’t understand exactly how it works, but there is one very important point.

That point is that the API I introduced makes it a lot simpler to listen to any sensor on the Nao robot. I no longer have to bother with wiring up the lower level callback logic, or even understand it; as a developer I can simply implement the logic I want to run on my robot. This is, in the end, my take-away for API development: build the API that allows your users to solve their core problem.

In the case of a robot, the core problem you want to solve is automating sensor reading and movement control, and perhaps even higher level logic like AI and complex behaviours. On that level you really do not want to be bothered by callback based complexity.

I strive to make more abstractions for the NAO Robot, and hopefully open source them at some point. Hopefully the developers at Aldebaran take a peek at this and can use it to improve the Java SDK 🙂

Raspberry PI Hexabot Robot powered by Java

I have been a life-long fan of anything around robotics, but I was never able to get close to my dream of building a robot due to the cost of parts, expensive kits or simply lack of knowledge.

But these days that no longer has to be a blocking issue; there are kits out there that make it possible to build your own robot quite easily. So I set out to create my very own robot based on readily available materials. My main goal was to build something by hand without following a prescribed kit. A few months ago, when I started the project, I set out with a few ultimate criteria for the v1 version:

  1. I can write all the code myself easily in a JVM based language
  2. It is affordable and easily hackable
  3. It can run wirelessly without requiring network, usb or power cables but is still controllable (wifi)
  4. It provides an abstraction layer over complicated robotics principles (motion, sensors, etc.)
  5. Can take different forms (humanoid, spider, buggy, etc.)

Materials

I already had quite some experience with servos and remote controlled helicopters, but this is just not the same. I needed a servo that allowed programmable control without having to hack too much :). These days there are quite a few alternatives for this. In the end I settled on the Robotis Dynamixel AX-12A servo, which has a serial bus that is controllable via a USB-serial protocol. I bought it as part of a Robotis Bioloid Premium kit as that also contains quite some other nice goodies which could be of use in the future. Of course, next to this some other parts are needed, like a small form factor PC; obviously the Raspberry PI was the primary choice there 🙂

I ended up with the following materials list:

  • Raspberry PI 2 with Raspbian and JDK8 installed on a 16GB micro-SD card
  • Netgear USB Wifi-N stick
  • USB2AX Dynamixel usb-serial communication stick
  • 5000mAh powerpack to power the Raspberry (power output 2.1A)
  • 18x Dynamixel AX-12A servos from a Robotis Bioloid premium kit
  • Robotis SMPS2Dynamixel to power the servos
  • Frame from the Robotis Bioloid premium kit
  • 2S LiPo 1200mAh
  • LIPO connectors
  • DC connector to Lipo cable

Java Dynamixel Library

The first step I had to overcome was how to control the servos. The USB2AX stick combined with the Robotis SMPS2Dynamixel would give me both power and control, but there were no working Java libraries for this. Luckily there are quite a few examples of the communication protocol. I ended up using a Java serial driver library to talk via the USB stick to the servos. It took quite some low-level figuring out, where I had to build specific byte packets, and I won't bore you too much with the details, but the result is a working Java library to control the Dynamixel servos. I have made this library available for people that are interested on GitHub: https://github.com/renarj/dynamixel
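
To give a flavour of those byte packets: a Dynamixel protocol 1.0 instruction packet is a two-byte header, the servo id, a length, the instruction with its parameters and a checksum. The sketch below builds a WRITE_DATA packet for the goal position register (address 30) of an AX-12A; the real library adds the serial I/O, status packet parsing and error handling on top of this.

public final class DynamixelPacket {
    private static final byte INSTRUCTION_WRITE_DATA = 0x03;
    private static final byte REGISTER_GOAL_POSITION = 0x1E; // address 30 on the AX-12A

    private DynamixelPacket() { }

    // Builds a protocol 1.0 WRITE_DATA packet that moves a servo to the given position (0-1023)
    public static byte[] moveServo(int servoId, int position) {
        byte posLow = (byte) (position & 0xFF);
        byte posHigh = (byte) ((position >> 8) & 0xFF);
        byte length = 5; // instruction + 3 parameters + checksum

        byte[] packet = new byte[]{
                (byte) 0xFF, (byte) 0xFF,   // header
                (byte) servoId,             // id of the servo on the bus
                length,
                INSTRUCTION_WRITE_DATA,
                REGISTER_GOAL_POSITION,
                posLow, posHigh,
                0                           // checksum placeholder
        };

        // Checksum = ~(id + length + instruction + parameters), truncated to one byte
        int sum = 0;
        for (int i = 2; i < packet.length - 1; i++) {
            sum += packet[i] & 0xFF;
        }
        packet[packet.length - 1] = (byte) (~sum & 0xFF);
        return packet;
    }
}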

The Spider

Now that I could control a servo it was time to create a robot from the servos. The Bioloid Premium kit ultimately comes with a very cool humanoid robot design, but that was way too challenging to control; humanoid bipedal motion is a very tough subject on its own. I first wanted to validate my software concepts before diving into that, so I decided to build a spider robot from the kit which I could teach to walk, stand and sit 🙂

This is the end result of the robot build, strapped with the Raspberry PI, powerpack, LIPO and all wiring done:


Motion

The next challenge to overcome: now that I can control one servo, can I also control all 18 at the same time? And, to make it even more challenging, can I get some useful motion out of it 🙂

The solution lay in the software that came with the Bioloid kit. The software itself was pretty useless, but it did come with some prescribed motion files for the robots. These motion files describe animations across the 18 servos, which I could read and convert into my Java code. So all I had to do was build a converter from these motion files to an in-memory structure that I could replay on the Raspberry.
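
Conceptually the converted motion data is just a list of keyframes, each holding a target position per servo and a step duration, which gets replayed in order. The sketch below shows that idea; the class names and timing handling are simplified and are not the actual converter code.

import java.util.List;
import java.util.Map;

public class MotionPlayer {
    // One step of a motion: a target position per servo id, and how long the step takes
    public static class KeyFrame {
        final Map<Integer, Integer> servoPositions;
        final long durationMs;

        public KeyFrame(Map<Integer, Integer> servoPositions, long durationMs) {
            this.servoPositions = servoPositions;
            this.durationMs = durationMs;
        }
    }

    // Anything that can move a single servo, for example the Dynamixel library mentioned above
    public interface ServoDriver {
        void setPosition(int servoId, int position);
    }

    private final ServoDriver driver;

    public MotionPlayer(ServoDriver driver) {
        this.driver = driver;
    }

    // Replays a converted motion: push every keyframe to all servos and wait for the step time
    public void play(List<KeyFrame> motion, int repeats) throws InterruptedException {
        for (int i = 0; i < repeats; i++) {
            for (KeyFrame frame : motion) {
                frame.servoPositions.forEach(driver::setPosition);
                Thread.sleep(frame.durationMs);
            }
        }
    }
}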

The end result is visible here:

Remote control

The last piece of the puzzle for this post was building a small webservice so I could remotely control the robot. In my job at SDL I have supervised a team in which I architected and built an open source REST OData framework (https://github.com/sdl/odata). This framework allows you to very quickly create a resource driven REST webservice based on the OData specifications (http://www.odata.org/).

In a few hours I managed to create an OData webservice in the same Java project to control the robot. Moving the robot is as simple as calling a URL like the following:

http://robopi:8080/robot.svc/Motions/Oberasoftware.Robot.Execute(motion='Forward%20walk',repeats=10)
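
Any HTTP client can trigger that action; for example, a quick and dirty call from plain Java looks like this (the host name and motion name are of course specific to my setup):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RobotRemote {
    public static void main(String[] args) throws Exception {
        // Triggers the 'Forward walk' motion ten times via the OData action
        URL url = new URL("http://robopi:8080/robot.svc/Motions/"
                + "Oberasoftware.Robot.Execute(motion='Forward%20walk',repeats=10)");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            reader.lines().forEach(System.out::println);
        }
        connection.disconnect();
    }
}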

The code for this WebService is available in the same Github repository as mentioned before: https://github.com/renarj/dynamixel

Conclusion

I have managed to create a robot spider controlled by my Raspberry PI in a relatively short timespan (2 months). There are still tons of things to do, like integrating sensors, a Pi camera and more.

It has been a great fun project, and it has inspired me to take things to the next level with a great follow-up project. I will post a bit more on that in the coming weeks. I hope you get inspired to try this yourself as well; don't hesitate to drop me a line with questions about the robot or the code.