Dynamixel XL430-W250T Servo on Raspberry PI using Java

As I am currently building multiple robots at once (a Hexapod, a Rover and some other projects you can read about on my blog) I was running out of servos and wanted to buy a few more. At the end of 2017 Robotis released a new series of servos which I believe is intended to replace the trusted AX/MX series I have been using so far.

I thought it was a good idea to acquire a few of the new servos to use in one of my new robot designs. I targeted the XL430-W250T servos, which should be the equivalent of the AX-12A servos from the previous generation. I bought six servos in total and tried to connect and control them using the same setup as before. In this post I want to detail a few of the challenges I faced in controlling these servos.

Powering up

The first simple challenge was just connecting the servos. With the AX-12 servos I used a USB2AX stick to control them using the Dynamixel 1.0 protocol. This worked great on the old servos and I was hoping everything was backwards compatible, also in terms of powering the servos. It turns out Robotis has made some changes that make it difficult to do a one-to-one swap if you are upgrading from the AX-12.

The main challenge is simply the connector: the hardware is still TTL based, but the connector has been swapped to a more universal JST plug. This means all my existing accessories such as hubs and power supplies are useless. I am sure this will change once these servos become more mainstream. I solved it by purchasing converter cables that have the old connector on one end and the new JST plug on the other.

After plugging the servos into my old SMPS power board and the hubs, the servos came to life. On first initialisation I used the Robotis servo manager on Windows to set a unique servo ID for each one. It works mostly like the old servos, however some parameters have moved and have wider address spaces (4 bytes versus 1 or 2 bytes on the old servos).

Dynamixel 2.0 Protocol

This brings me to the Dynamixel 2.0 protocol. This protocol is a newer iteration, but it is based on the same principles as the old one. You send packets directed either to a specific servo (identified by its ID) or broadcast to all servos. For each packet you send you get a response packet from that one servo, for example when requesting the position, temperature or other properties.
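
To make this a bit more concrete, here is a minimal sketch (not code from my library) of how a protocol 2.0 instruction packet is put together: a 4 byte header (0xFF 0xFF 0xFD 0x00), the servo ID, a 2 byte little-endian length, the instruction, the parameters and finally a CRC-16 over everything before it. The example builds a read of the 4 byte present position and a write of the 4 byte goal position; the register addresses (132 and 116) are from memory, so verify them against the control table of your servo.

// Sketch of a Dynamixel 2.0 instruction packet builder (hypothetical helper, not from robo-sdk).
public class Protocol2Packet {
    private static final byte INSTR_READ = 0x02;
    private static final byte INSTR_WRITE = 0x03;

    public static byte[] readPresentPosition(int servoId) {
        // address 132 (lo, hi) and read length 4 (lo, hi); verify the address against your control table
        byte[] params = { (byte) 132, 0x00, 0x04, 0x00 };
        return buildPacket(servoId, INSTR_READ, params);
    }

    public static byte[] writeGoalPosition(int servoId, int position) {
        // goal position is a 4 byte little-endian value at address 116 on the X series (verify!)
        byte[] params = { 116, 0x00,
                (byte) (position & 0xFF), (byte) ((position >> 8) & 0xFF),
                (byte) ((position >> 16) & 0xFF), (byte) ((position >> 24) & 0xFF) };
        return buildPacket(servoId, INSTR_WRITE, params);
    }

    private static byte[] buildPacket(int id, byte instruction, byte[] params) {
        int length = params.length + 3;                      // instruction + params + 2 CRC bytes
        byte[] packet = new byte[10 + params.length];
        packet[0] = (byte) 0xFF; packet[1] = (byte) 0xFF;    // header
        packet[2] = (byte) 0xFD; packet[3] = 0x00;
        packet[4] = (byte) id;
        packet[5] = (byte) (length & 0xFF);                  // length, little endian
        packet[6] = (byte) ((length >> 8) & 0xFF);
        packet[7] = instruction;
        System.arraycopy(params, 0, packet, 8, params.length);
        int crc = crc16(packet, packet.length - 2);          // CRC over everything except the CRC itself
        packet[packet.length - 2] = (byte) (crc & 0xFF);
        packet[packet.length - 1] = (byte) ((crc >> 8) & 0xFF);
        return packet;
    }

    // CRC-16 with polynomial 0x8005, as used by the 2.0 protocol
    private static int crc16(byte[] data, int len) {
        int crc = 0;
        for (int i = 0; i < len; i++) {
            crc ^= (data[i] & 0xFF) << 8;
            for (int bit = 0; bit < 8; bit++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x8005) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }
}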

Java on Raspberry PI

I have written a small library I can use to control the servos from my Raspberry PI in Java, which also includes a dashboard for using and controlling the servos. For those that are interested it can be found on GitHub here: https://github.com/renarj/robo-sdk

If you want to run the library, please make sure you have Maven and Java 8 or higher installed on your Raspberry PI or other device (Mac is also tested and working, and I assume Windows works as well).

From the root of the git repository fire up the following command:

mvn -f dynamixel-web/pom.xml spring-boot:run \
  -Ddynamixel.port=/dev/tty.usbmodem1441 \
  -Ddynamixel.baudrate=57600 \
  -Dprotocol.v2.enabled=true

For dynamixel.port please enter the serial port of the USB2AX or other USB controller connected to your Dynamixels. In case you are using the new Dynamixel X series (or the XL320) please set the protocol.v2.enabled startup flag to ‘true’ and set the appropriate baudrate. The default is 57600; Dynamixels out of the box are set to 1 Mbit, but the serial library I use cannot handle that speed on a Mac or Raspberry PI.

Once the software is started it should show the following line: ‘Robot construction finished’

After starting you can access the servo control dashboard on the following URL:
http://localhost:8080

Note: Please replace localhost with the correct IP if running remotely

The Dashboard should show roughly the following:

Conclusion

I hope the above library will help people who want to use the new Dynamixel X servos in Java. It took a bit of exploring to get it to work, but all in all it is very familiar if you are already using the older Dynamixel servos. As always I love using the Dynamixel servos, and seeing this new series become available with some additional control parameters is really great news. I hope to use them in a lot more upcoming experiments.

Feel free to use / share / fork / copy my Library for controlling these servos here: https://github.com/renarj/robo-sdk


Building a Hexapod robot part 1: Design & build

It has been a while since I have written anything, which coincides with me moving to a new house and my previous project simply being done. I have been looking for a new challenge and have been dabbling and experimenting a bit. I had been thinking for a while about making a hexapod robot, and I finally pulled the trigger and decided to start building my own design of a Raspberry PI based hexapod robot.

Based on past experience I wanted to simplify my design considerably, and I drew inspiration from some existing hexapods like the Phoenix hexapod from Lynxmotion.

Let’s start with the basics: the bill of materials for the base electronics components:

  • Hobbyking 20A SBEC (for powering the PI)
  • Raspberry PI 3
  • GrovePI+ board for sensor components
  • GrovePI Compass & Gyro
  • Turnigy Nano Lipo 3s 2200mAh
  • 18x Dynamixel AX-12A servos
  • USB2AX usb to serial adapter for Dynamixel TTL servos
  • SMPS2Dynamixel (provides power to servos)
  • 2* Dynamixel AX/MX Hub
  • 100 M2.5 Bolts of length 8mm + 100 M2.5 Nuts
  • 100 M3 Bolts of length 12mm + 100 Locking Nuts M3
  • 2 Spools of Filament (ABS)

Designing the Hexapod

Compared to the Rover robot I designed before, I am now trying to keep things as simple as possible, as overcomplicating things always made everything a lot harder. I looked a lot at the Lynxmotion Phoenix hexapod, based my design on it, and am going to make a simple base frame that the legs attach to, with each leg having three degrees of freedom.

The frame will be a simple frame that allows me to connect all six legs of the hexapod and nothing more. It will consist of 7 pieces, which are the main beams that connect the legs.

In order to give the frame some structure I will use the servos as connecting pieces, so they are in essence part of the structure. For this I designed brackets that hold the servos and also give the frame its shape. To finish the frame I designed two bottom plates holding the Raspberry PI and power systems, and on top of that two plates to close the robot up.

This is how the design looks in Fusion 360 for the baseframe:

Legs

For the legs themselves I decided to stick to roughly the same design as some of the Robotis frame pieces for the AX-12 servos, as they worked well in the past. I replicated their design in Fusion 360 and modified it slightly so I can 3D print the pieces and they will have enough strength once printed.

Each leg consists of a servo attached to the frame (coxa), which is connected via two brackets to the middle servo (femur), which in turn connects with two small frame pieces to the servo controlling the foot (tibia). The foot itself is a special angled frame piece with a small nub on the bottom for grip.

This looks as following in Fusion 360:

This time I also made sure to fully assemble the robot prototype in Fusion 360 before actually printing it. This makes it less likely to make mistakes that cause a reprint, which, given the sheer number of parts, will really save me time in the long run. After quite some tweaking the full design looks like this in Fusion 360:

Constructing the robot

My intention is to share the STL files publicly together with some building steps, so that hopefully other people can repeat the build of this robot. So far I have not done that, hence all I can really share are some assembly pictures:

Body under construction:

Legs being assembled:

Robot completed:

Electronics

For the electronics I am sticking to my previous approach for powering the Dynamixel servos and the Raspberry PI. Here is a previous blog post on how the electronics are wired together:
https://renzedevries.wordpress.com/2017/03/29/building-a-raspberry-pi-robot-car-part1/

What is next?

This was the first part of a series of posts on building simpler robots based on previous learnings. With the base design and build of the hexapod finished, the next step is making it walk. I will write that up as soon as possible and hopefully share the design files as well.

Going from design to a built version of this robot only took me 3-4 weeks, where previous robots took many months to get operational. I strongly believe keeping things simple has enabled me to do this, and I will try to stick to this principle for future robots as well. I might in the future redo my previous Rover robot based on the same simpler frame design; keeping things as simple as possible might create more effective robots going forward.

I am planning to share the design of the 3D parts on Github soon, keep your eyes out for a link to the repository in the coming weeks.

Building a Raspberry PI Robot Car part 2

In the last post we talked about the electronics; in this post I will talk a bit about the 3D design and printing of the components. I have recently acquired an Ultimaker 3D printer, and after quite some experimenting I am now able to design my own components for the robot car. In this post I will try to walk through the design of the robot car.

Designing the robot

The robot itself is going to consist of 4 main components:
  • Casing containing the main electronics (Raspberry PI, power distribution, etc.)
* Casing containing the LiPo battery that allows easy battery replacement
* Frame that supports both the battery and electronics casing
* Wheel / suspension mechanism to hold the wheels

Note: The printer has a maximum print size of roughly 20x20x20 cm, so this is the main reason that the casing for the power, electronics and frame are separated from each other.

The design software
For the design I started out with TinkerCad, which is a free online 3D editor. However I quickly ran into problems with dimensions, which quickly get complex. After this I switched to Autodesk Fusion 360, which is a lot better when it comes to designing technical components; as a hobbyist it is possible to get a free one-year license.

Wheel / Suspension

The suspension design is spring based, which allows some flex in the wheel assembly. The wheel itself attaches directly to a servo, so I designed a small bracket suited to holding my Dynamixel servos.

Next I have one beam that has the spring attached to it and two static beams that connect to the servo holder. The static beams ensure linear motion of the servo holder and the spring dampens the motion. This looks as follows:

For the wheel design I will at some point dedicate a special post, as the wheels have caused me a lot of headaches. For now I will use some standard wheels that fit onto the servos, but ultimately these will become mecanum wheels.

Designing the frame

The beams used for the suspension are actually part of the base frame. There are going to be 4 wheels, meaning 4 beams that are part of the frame. In order to create sufficient surface for the battery and electronics casings I have connected the beams into a longer frame using connecting pieces: an end piece for each end of the frame and a middle piece to connect the beams together. This looks as follows:

Each of the beams has a length of 12cm, the middle piece is 4cm and the end pieces are 2cm each, which gives a total length of 32cm for the robot car. This is quite long, but with the current suspension design it is needed, as the suspension beams cannot really be shortened. In the future I might shorten the design by redesigning the suspension, however for now it’s good enough.

Battery & Electronics case

The main battery and electronics case has caused me a lot of problems and many iterations to get right. Every time you print it, there is something that is not entirely right. The trick has been to measure, measure and measure again every component you want to fit. In the end I drew a sketch on paper roughly showing the placement of the components. Both the battery and electronics case have to fit in a fixed length of 16cm and a width of 10cm to fit the base frame. The electronics case contains special accommodation for the Raspberry PI, the UBEC power converter, two Grove sensors and the Dynamixel power board:

Note: The electronics casing will have a separate lid which will allow closing up the electronics compartment and allow easy access.

The battery case is a lot simpler; we just need something to contain the battery. However, one of the challenges is that I do not want a lid here, the battery just needs to be easily replaceable. For this to work there will be two covers on either end of the case that hide the wires but are far enough apart to remove the battery. A note here is that I used rounded edges instead of sharp 90 degree angles to allow for better printing without support. The rounded edges give a pretty decent print on my Ultimaker, and it’s a lot better than having support material in the case. The case looks as follows:

Assembling the robot

Here are a series of pictures of the various parts in stages of assembly

Conclusion

The process of getting to the above design and printed parts has not been easy. Each component went through many, many iterations before getting to the above. Even now I still see areas for improvement, however for now I do think it’s close to being a functional robot car, which was the goal. In future posts I will start talking a bit about the software and the drive system with the mecanum wheels.

For those wanting to have a look at the 3D parts, I have uploaded them to GitHub. The idea is to provide a proper manual in the future on how to print and assemble everything, with a bill of materials; for now just have a look:
https://github.com/renarj/robo-max/tree/master/3d-parts

Here is a last picture to close with of the first powerup of the robot car:

Remote Controlling Nao robot using a Raspberry Pi Robot

Today I want to take some time to write about the next step I am currently taking to have both my self-built Raspberry PI robot and the Nao robot interact with each other in a useful way. You might have already seen some posts before, like https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/ about robot interaction, or perhaps the model train one: https://renzedevries.wordpress.com/2016/09/13/having-fun-with-robots-and-model-trains/. However, neither of these posts really demonstrated a practical use-case.

Recently I presented on this topic at the Devoxx conference in Antwerp, where I attempted to demonstrate how to control one robot from another using Kubernetes, Helm and Minikube combined with some IoT glue 🙂 The scenario I demonstrated was using a robotic arm, built from my Raspberry PI robot parts, to remote control a Nao robot.

Robot arm
In order to have some form of remote control I have created a robot arm which I can use as a sort of joystick. I built it from the same parts as described in this post (https://renzedevries.wordpress.com/2016/03/31/build-a-raspberry-pi-robot-part-2/). The robot arm is controlled via a Raspberry PI that runs a bit of Java software connecting it to MQTT, to send servo position changes and to receive commands from MQTT to execute motions on the arm.

The robot looks like this:

Nao Robot
For the Nao robot I have written a custom Java controller that connects to the remote API of the Nao. This controller software does nothing else than allow remote control of the Nao robot by listening to commands coming from MQTT.

Connecting the Robots

Like in previous setups I will be using my custom Robot Cloud deployment for this experiment. I will be deploying a number of micro-services to a Kubernetes cluster that is running on AWS. The most important public service is the MQTT message bus, which is where the robots send their status (sensors/servos) to and receive their commands (animations, walk commands etc.) from. For more detail on the actual services and their deployment you can check here: https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/

The most important part of bridging the gap between the robots is a specific container that receives updates from the servos on the robot arm. Based on events from those servos (moving the joystick forward) I want to trigger the Nao robot to start walking, roughly as sketched below. The full code with a lot more detail is available in this git repository: https://github.com/renarj/robo-bridge
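
Conceptually the bridge does little more than the following sketch: subscribe to the arm’s servo topic and, once the joystick servo passes a threshold, publish a walk command for the Nao controller to pick up. This uses the Eclipse Paho MQTT client; the broker address, topic names, payload format and threshold are all invented for illustration, the real ones live in the robo-bridge repository.

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

// Hypothetical bridge sketch: servo position events in, Nao walk commands out.
public class ArmToNaoBridge {
    public static void main(String[] args) throws Exception {
        MqttClient publisher = new MqttClient("tcp://my-mqtt-broker:1883", "robo-bridge-out");
        publisher.connect();

        MqttClient subscriber = new MqttClient("tcp://my-mqtt-broker:1883", "robo-bridge-in");
        subscriber.setCallback(new MqttCallback() {
            @Override
            public void connectionLost(Throwable cause) {
                // reconnect logic would go here
            }

            @Override
            public void messageArrived(String topic, MqttMessage message) throws Exception {
                // Payload format "servoId:position" is an assumption made for this sketch
                String[] parts = new String(message.getPayload()).split(":");
                int position = Integer.parseInt(parts[1]);
                if (position > 600) { // joystick pushed forward past an arbitrary threshold
                    publisher.publish("/robot/nao/command/walk", new MqttMessage("forward".getBytes()));
                }
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) {
            }
        });
        subscriber.connect();
        subscriber.subscribe("/robot/arm/servo/position");
    }
}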

Conclusion

It’s quite a complex setup, but the conclusion is that by using my Kubernetes deployed Robot Cloud stack I can use the robot arm to control the Nao robot. If you want to see a bit more, with a live demo, you can check out my Devoxx presentation here:

One thing I could not demo at Devoxx was the interaction with a real Nao robot; I have made a recording of how that would look and also put it on YouTube here:

Deploying a Docker container to Kubernetes on Amazon AWS

As recent posts show I am still very busy with my robots; the goal I am working on now is to create a multi-robot interaction solution. The approach I am taking here is to utilize the cloud as a means of brokering between the robots. The architecture I have in mind will be based on a reactive / event driven framework that the robots will use for communication.

One of the foundations for this mechanism is to utilise the cloud and an event driven architecture. In another post I will give more detail about how the robots will actually interact, but in this post I want to give some information about the specifics of my cloud architecture and how to deploy such a cluster of microservices using Docker and Kubernetes.

Deploying to the cloud

The robot framework will have a significant number of microservices and dependent supporting services that I need to deploy. With this in mind I have chosen to go for a Docker / microservices based architecture where in the future I can easily extend the number of services and simply grow the cluster if needed.

I have quite some experience with Docker by now and have really grown fond of Kubernetes as an orchestrator, so that will be my pick as it is so easy to administer in the cloud. There are other orchestrators, but they seem to be earlier in their development, although the AWS ECS service does seem to come close.

Note to Google: I have played with Google Cloud and Kubernetes before and it is actually a perfect fit; it is super easy to deploy containers and run your Kubernetes cluster there. However, Google does not allow individual developers (like me) without their own company a cloud account in the EU because of VAT rules. So unfortunately for this exercise I will have to default to Amazon AWS. I hope Google reads this so they can fix it quickly!

Creating a Kubernetes cluster on AWS

In this next part I will demo how to deploy the most important service, the MQTT broker, to a Kubernetes cluster.

In order to get started I need to first create an actual Kubernetes cluster on AWS. This is relatively simple, I just followed the steps on this page: http://kubernetes.io/docs/getting-started-guides/aws/

Prerequisite: Amazon AWS CLI is installed and configured http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

I used the following variables to create the cluster, which will use m3.medium instances:

export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=eu-west-1c
export NUM_NODES=2
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=robot-kubernetes-artifacts
export INSTANCE_PREFIX=robo

After that it is simply a case of running this:

wget -q -O - https://get.k8s.io | bash

Running the above command creates a directory called ‘kubernetes’, from which you can control the cluster using these commands.
For creating the cluster again (only needed if you already ran the above command once before):

cluster/kube-up.sh

For stopping the cluster:

cluster/kube-down.sh

After the above commands have been run your cluster should have been created. For Kubernetes you still need a small tool, the Kubernetes command line client called ‘kubectl’. The kubernetes folder that was downloaded ships with the client; you can simply add it to your path as follows:

# OS X
export PATH=/Users/rdevries/kubernetes/platforms/darwin/amd64:$PATH

# Linux
export PATH=/Users/rdevries/kubernetes/platforms/linux/amd64:$PATH

You should be able to check your cluster is up and running with the following:

kubectl get nodes
NAME                                         STATUS    AGE
ip-172-20-0-115.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-116.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-117.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-118.eu-west-1.compute.internal   Ready     4m

Using Amazon ECR

In order to get the containers to deploy on Kubernetes I have to make them available to Kubernetes and Docker. I could use Docker Hub to distribute my containers, and this is actually fine for MQTT, which I have made available on Docker Hub (https://hub.docker.com/r/renarj/mqtt/).

However, some of the future containers are not really meant for public availability, so instead I am opting to upload the MQTT container to the new Amazon Container Registry, just to test how simple this is.

We can simply start using the AWS CLI and create a Container repository for MQTT:
aws ecr create-repository --repository-name mqtt

Next we can build the container from this git repo: https://github.com/renarj/moquette-spring-docker

git clone https://github.com/renarj/moquette-spring-docker.git
cd moquette-spring-docker
mvn clean package docker:build

Now we need to upload the container to AWS. These instructions will vary slightly for everyone due to the account id being used. First we need to tag the container, then log in to AWS ECR and finally push the container.
1. Tag the container: docker tag mqtt/moquette-spring:latest 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest (replace 1111 with your AWS account id)
2. Log in to AWS with the command returned by: aws ecr get-login --region eu-west-1
3. Push the container: docker push 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest

This should make the container available in the Kubernetes cluster.

Deploying the containers
In order to deploy the container I have to create a small YAML definition for the MQTT service and for the container to be deployed.

The first part tells Kubernetes how to deploy the actual container, which we can do with this simple YAML file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: mqtt
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mqtt
    spec:
      containers:
      - name: mqtt
        image: 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest
        ports:
        - containerPort: 1883
        imagePullPolicy: Always

If you look at the above sample there are two important aspects; one is the image location, which is the same as my ECR repository URL. If you do not know the URL you can always execute this AWS CLI command: aws ecr describe-repositories, which will return a list of all available repositories and their URLs.

Note: I have noticed that older versions of Kubernetes have some permission errors when using ECR as the image repository. I am unsure exactly why, but later versions of Kubernetes (1.2.4+) seem to work properly.

Now that we have described how the container needs to be deployed, we want to tell Kubernetes how it can be made available. I will load balance the MQTT service using an ELB; the nice part about Kubernetes is that it can arrange all of that for me.

In order to tell Kubernetes, and as a result AWS, to create a load balancer, we define a Kubernetes service with type ‘LoadBalancer’:

apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: LoadBalancer
  ports:
  - port: 1883
    targetPort: 1883
    protocol: TCP
    name: mqtt
  selector:
    app: mqtt

With this in place we should have a load balancer available after the service is created. In order to find out the public address of the service you can describe the service in Kubernetes and find the endpoint like this:

kubectl describe svc/mqtt
Name:           mqtt
Namespace:      default
Labels:         app=mqtt
Selector:       app=mqtt
Type:           LoadBalancer
IP:         10.0.225.68
LoadBalancer Ingress:   a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com
Port:           mqtt    1883/TCP
NodePort:       mqtt    30531/TCP
Endpoints:      10.244.1.5:1883
Session Affinity:   None
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  2s        2s      1   {service-controller }           Normal      CreatingLoadBalancer    Creating load balancer
  0s        0s      1   {service-controller }           Normal      CreatedLoadBalancer Created load balancer

Please note the LoadBalancer Ingress address, this is the publicly available LB for the MQTT service.

Testing MQTT on Kubernetes

Well, now that our cluster is deployed and we have a load balanced MQTT container running, how can we give this a try? For this I have used a simple Node.js package called mqtt-cli. Simply run npm install mqtt-cli -g to install the package.

After this you can send a test message to our load balanced MQTT container using this command, with -w at the end to watch the topic for incoming messages:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "test message" -w

Now let’s try in a new console window to again send a message but now without the ‘-w’ at the end:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "Message from another window"

If everything worked well the message should now have appeared in the other window, and voila, we have a working Kubernetes cluster with a load balanced MQTT broker on it 🙂
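
Since the robots themselves will talk to the broker from Java, the same smoke test can also be done with the Eclipse Paho client. Below is just a minimal sketch using the LoadBalancer Ingress address from above; keep the mqtt-cli window with -w open to see the message arrive.

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

// Minimal sketch: publish one message to the load balanced broker from Java.
public class MqttSmokeTest {
    public static void main(String[] args) throws Exception {
        String broker = "tcp://a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com:1883";
        MqttClient client = new MqttClient(broker, "k8s-smoke-test");
        client.connect();
        client.publish("/test/Hello", new MqttMessage("test message from Java".getBytes()));
        client.disconnect();
    }
}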

Next steps

I will be writing a next blog post soon on how all of the above has helped me connect my robots to the cloud. I hope the above info helps some people and shows that Kubernetes and Docker are really not at all scary to use in practice 🙂

Building a Raspberry PI Humanoid Robot in Java: part 1

In one of my previous blog posts I dove into the territory of robotics; the first big project was building the robo-spider: https://renzedevries.wordpress.com/2015/10/04/raspberry-pi-robot/

As you might have also seen, we have a Nao robot. However, there is something about the magic of creating your own humanoid robot. Although it is hard to come close to what you can do with a Nao, I want to try to master the basics of building my own humanoid robot.

Goals

Perhaps it is good to write down what I have set out to achieve with my Humanoid robot:

  1. Have basic sensors for distance and axis detection
  2. Run wirelessly (over WiFi), without requiring wired network, USB or power cables, while still being controllable
  3. Get the Robot walking and have other working movements
  4. Combine Sensors and movements, walk with obstacle avoidance
  5. Allow interaction and collaboration between the Nao and my Raspberry PI robot
  6. Have dynamic walking capability

In this post I want to address points 1 and 2, by creating a small portable sensor solution on top of a Raspberry PI powered by a set of LiPo batteries. In a later post I will put this all together and hopefully address point 3 and onwards by having the robot work together with the Nao robot.

Hardware

So where do we get started? I have a Robotis Bioloid kit that comes with some interesting sensors: three distance sensors and a gyro sensor. In the previous spider robot I used a Raspberry PI 2 with a USB power pack. That was a relatively easy solution, as a spider robot provides a stable flat platform on top to build onto.

Raspberry PI

Getting a proper stable platform is a lot more challenging on a humanoid due to the center of gravity. In the case of a humanoid robot you either put the electronics in its chest or on its back, but it is very important to keep the center of gravity in check and not sway the top too far forward or backward; anything you add has to be compensated for by the servos. The solution I had for the spider robot was simply not suitable anymore for this reason: the battery powering the PI 2 was too heavy and the PI 2 itself was too big.

Bill of materials

So this means I needed something lighter and smaller, and luckily my girlfriend managed to get me a Raspberry PI Zero for Christmas :D. This should solve the bulkiness of the PI at least.

For powering the robot I also needed a lighter solution. Here I went for an Adafruit PowerBoost that allows me to run the PI Zero from a single cell LiPo battery of 2500mAh, which is very compact and flat.

Next to this I needed an analog-to-digital converter that allows me to read the sensors that came with the Robotis kit.

So the total list for powering the sensors and the PI becomes the following:
* Raspberry PI Zero + 16GB micro-SD card
* Micro-usb hub + Wifi Dongle
* AdaFruit Powerboost 1000 Basic
* AdaFruit 1 Cell LiPo 2500mAh
* AdaFruit ADS1115 16-bit 4-channel analog-to-digital converter
* Small breadboard for putting it together

We will run the following sensors with this setup from the Robotis kit:
* Sharp Distance sensor GP2Y0A21YK (10 – 80 CM)
* X & Y Axis Gyro sensor

Next to this I also use these servos and the hardware to power them:
* USB2AX Dynamixel usb-serial communication stick
* 18x Dynamixel AX-12A servos from a Robotis Bioloid Premium kit
* Robotis SMPS2Dynamixel to power the servos (allows connecting a 3S LiPo power pack)
* 2S LiPo 1200mAh for powering the servos

Wiring it up

So how does it all look wired together? Well, it is relatively simple. I had to solder the connectors onto the AdaFruit components and the GPIO connector onto the PI Zero.

After this the solution looks like the picture below, showing just the Raspberry PI Zero and the sensor parts:

And this is how it looks on the back of the robot with the servos and everything:

Reading Sensor data

Now that the hardware was sorted, the next challenge was reading out the sensor data. Luckily this is relatively easy thanks to the analog-to-digital converter chip I chose (the ADS1115 from AdaFruit). There is an example for the ADS1115 chip available in the PI4J project here: https://github.com/rlsutton1/piBot/blob/master/piBot/src/main/java/com/pi4j/gpio/extension/adafruit/ADS1115.java

So with this piece of code, it is a simple matter of running it, and this is what I got on my first run:

(MyAnalogInput-A0) : VOLTS=2.33 | PERCENT=57% | RAW=18677.0
(MyAnalogInput-A0) : VOLTS=1.43 | PERCENT=34.8% | RAW=11413.0
(MyAnalogInput-A0) : VOLTS=1.04 | PERCENT=25.4% | RAW=8321.0
(MyAnalogInput-A0) : VOLTS=0.87 | PERCENT=21.3% | RAW=6981.0
(MyAnalogInput-A0) : VOLTS=0.77 | PERCENT=18.8% | RAW=6171.0
(MyAnalogInput-A0) : VOLTS=0.68 | PERCENT=16.5% | RAW=5413.0
(MyAnalogInput-A0) : VOLTS=0.56 | PERCENT=13.7% | RAW=4498.0
(MyAnalogInput-A0) : VOLTS=0.44 | PERCENT=10.8% | RAW=3546.0
(MyAnalogInput-A0) : VOLTS=0.42 | PERCENT=10.3% | RAW=3391.0
(MyAnalogInput-A0) : VOLTS=0.37 | PERCENT=8.9% | RAW=2930.0
(MyAnalogInput-A0) : VOLTS=0.18 | PERCENT=4.3% | RAW=1419.0
(MyAnalogInput-A0) : VOLTS=0.02 | PERCENT=0.5% | RAW=175.0

This is the distance sensor, where 2.33 volts is a distance of roughly 10cm and 0.02 volts represents a distance of 80cm or further. In the future this should be perfect for object collision detection, for example with a small conversion helper like the sketch below.
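
Converting that voltage into an approximate distance can be done with a small calibration table and interpolation. The sketch below only uses the two readings mentioned above as calibration points, so treat it as a starting point; the sensor response is non-linear and you would want to add your own measured points in between.

// Rough sketch: convert the ADS1115 voltage reading of the Sharp distance sensor into
// an approximate distance by interpolating between calibrated points.
public class SharpDistanceConverter {

    // Calibration points {volts, centimeters} taken from the two readings mentioned above.
    // The sensor response is non-linear, so add more measured points in between;
    // with only two points this is a very crude approximation.
    private static final double[][] CALIBRATION = {
            {2.33, 10},
            {0.02, 80}
    };

    public static double toCentimeters(double volts) {
        if (volts >= CALIBRATION[0][0]) {
            return CALIBRATION[0][1];              // closer than the sensor can resolve
        }
        for (int i = 1; i < CALIBRATION.length; i++) {
            if (volts >= CALIBRATION[i][0]) {
                double[] near = CALIBRATION[i - 1];
                double[] far = CALIBRATION[i];
                double fraction = (near[0] - volts) / (near[0] - far[0]);
                return near[1] + fraction * (far[1] - near[1]);
            }
        }
        return CALIBRATION[CALIBRATION.length - 1][1];   // at or beyond maximum range
    }

    public static void main(String[] args) {
        System.out.println(toCentimeters(2.33));   // roughly 10 cm
        System.out.println(toCentimeters(0.02));   // roughly 80 cm
    }
}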

Wrap up

In this first post of a series I managed to create the basics of my mobile robot. In the next post I will work on making the robot walk, which is quite a challenge as you can imagine. If you want a sneak peek you can already see a YouTube video of this here:

API Design: The Good, Bad and the Ugly

I was doubting whether I should write this post, because good API design is a very controversial topic. What makes a good API is something that can be endlessly debated.

I do however want to talk about this a bit, because I recently got my Nao robot from Aldebaran, and as a Java/Scala developer I naturally wanted to try out their Java SDK.

When I got the Nao, the Java SDK was at version 2.1.2, and I was rather shocked when I saw how I had to use the SDK to control the robot. Although I am not very keen on criticising someone’s hard work, in this case I can’t help but remark on the usability of this API.

So let’s start with a small example: in order to let the robot speak with the 2.1.2 SDK I had to use the following code:

Application app = new Application(args);
Session session = new Session();
com.aldebaran.qimessaging.Object tts = session.service("ALTextToSpeech");
tts.call("say", "I am NAO and will concur the world", "English");

What is wrong?

So what is wrong with the above API design? Well, there are some very obvious issues. The first is that the API designers have chosen to create a class named ‘Object’, which is incredibly unhandy, as everything in Java inherits from the java.lang.Object type; this means you automatically need to fully qualify the usage of the API classes everywhere.

One of the most frustrating parts of this API is that there is no strongly typed API. Whenever I want to do any operation on the robot, from speaking to motion, I need to provide the name of the method in the Nao API as a string, as if I am doing reflection, which is very cumbersome.

Nao 2.1.4

When I started writing this article the SDK was still at 2.1.2, but in the meantime, whilst writing and investigating this, the API has been updated and they are now providing Java proxy objects which allow easier interaction. The same snippet as above now looks much clearer, as follows:


Application application = new Application(args, PEP_URL);
Session session = application.session();
ALTextToSpeech tts = new ALTextToSpeech(session);
tts.say("I am NAO and will concur the world");

The majority of my concerns are now addressed, one would say.

Complex Event Callbacks

However, there is still a bit of nitpicking to do on the newer, more streamlined API, and that is the event driven, callback based API. If you want to get any events from the robot, like its head being touched, the following code is required:


Application application = new Application(args, PEP_URL);
Session session = application.session();
ALMemory memory = new ALMemory(session);
memory.subscribeToEvent("FrontTactilTouched", new EventCallback<Float>() {
    @Override
    public void onEvent(Float o) throws InterruptedException, CallError {
        LOG.debug("Received head touched: {}", o);
    }
});

So basically nothing too special, but what gets annoying on a robot is that you might need or want to monitor a huge number of sensors. You very quickly end up with a huge number of anonymous inner classes, which makes the code ugly and makes it hard to build any kind of higher level logic.

The solution?

So again we get into the debate of what makes a good API. In my opinion a good API prevents me from doing extra work; it provides me out of the box with what I need to accomplish my end result. In the case of a robot I expect minimal effort to be needed to monitor sensor events, for example.

I don’t want to rant without providing a solution to this problem, so how did I solve it myself? I have written in the past a small in-process event bus mechanism that uses reflection and annotations to send events to the right listener methods. In the end I used this, with a small bit of extra code, to make listening to any event a lot easier. This is how listening to a Nao event looks in that case:

@EventSubscribe
@EventSource({"FrontTactilTouched", "MiddleTactilTouched", "RearTactilTouched"})
public void receive(TriggerEvent triggerEvent) {
    if(triggerEvent.isOn()) {
        LOG.info("Head was touched: {}", triggerEvent.getSource());
    }
}

The above code is a simple method in a class, annotated with ‘EventSubscribe’ to tell the local EventBus it is interested in messages. The EventBus determines the type of message the method can receive by checking the type of its first parameter using reflection.

Next to this I introduced an annotation, EventSource, to indicate which sensors of the robot to listen to. I wrote a simple bit of logic that uses reflection to read all methods annotated with EventSource and automatically creates the Nao SDK event callbacks for them, which then forward the events to the listener via the in-process EventBus.
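
The registration logic itself boils down to the rough sketch below: scan the listener for @EventSource annotations and register one generic Nao callback per named sensor that just forwards to the event bus. The ALMemory, EventCallback and annotation types are the ones shown above; the LocalEventBus and the TriggerEvent constructor are simplified stand-ins for my own classes, so treat the names as approximations.

import java.lang.reflect.Method;

// Rough sketch of the registration step, not the exact code from my library.
public class NaoEventBridge {

    private final ALMemory memory;    // Nao SDK proxy, as used in the snippet above
    private final LocalEventBus bus;  // hypothetical name for my in-process event bus

    public NaoEventBridge(ALMemory memory, LocalEventBus bus) {
        this.memory = memory;
        this.bus = bus;
    }

    // For every method annotated with @EventSource, register one generic callback per
    // named sensor event that simply forwards the raw value onto the event bus.
    public void register(Object listener) throws Exception {
        for (Method method : listener.getClass().getMethods()) {
            EventSource source = method.getAnnotation(EventSource.class);
            if (source == null) {
                continue;
            }
            for (String sensorEvent : source.value()) {
                memory.subscribeToEvent(sensorEvent, new EventCallback<Float>() {
                    @Override
                    public void onEvent(Float value) throws InterruptedException, CallError {
                        // The bus dispatches to the annotated method based on its parameter type
                        bus.publish(new TriggerEvent(sensorEvent, value > 0));
                    }
                });
            }
        }
    }
}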

Conclusion

So what is my point really? You don’t have to agree with the API design in my solution, and perhaps you don’t understand exactly how it works, but there is one very important point.

That point is that the API I introduced makes it a lot simpler to listen to any sensor on the Nao robot. I no longer have to bother wiring up the lower level callback logic, or even understand it; as a developer I can simply implement the logic I wanted to run on my robot. This is in the end my take-away for API development: build the API that allows your users to solve their core problem.

In the case of a robot, the core problem you want to solve is automating sensor reading and movement control, and perhaps even higher level logic like AI and complex behaviours. On that level you really do not want to be bothered by the callback based complexity.

I strive to build more abstractions for the Nao robot, and hopefully open source them at some point. Hopefully the developers at Aldebaran take a peek at this and can use it to improve the Java SDK 🙂