Robot interaction with a Nao Robot and a Raspberry Pi Robot

In my recent blog posts I have mainly been working on creating the Raspberry Pi based robot. As I mentioned in the first post (https://renzedevries.wordpress.com/2016/02/28/building-a-raspberry-pi-humanoid-robot-in-java-part-1/), the next challenge after getting the Raspberry Pi robot to walk is to have it interact with a Nao robot via an IoT solution (https://renzedevries.wordpress.com/2015/11/26/api-design-the-good-bad-and-the-ugly/).

Robot framework

This has been a lot more challenging than I originally anticipated, mainly because I decided to do this properly and build a framework for it. The approach I have taken is to build an SDK around a standard Java robot model. This capabilities model defines the properties of the robot (speech, movement, sonar, etc.) and generalises them across different robot types. I implement this framework for both the RPi robot and the Nao robot, so in the end they can speak the same language.

The benefits of this framework are great: the idea is to expose and enable it via the cloud using MQTT, an IoT pub-sub message broker, meaning all sensor data and commands for the robot are sent via this MQTT broker. This also enables another option: I can run just a small piece of software on the robot that exposes its capabilities remotely, and then talk to those capabilities via the MQTT broker.
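
To make that concrete, here is a minimal sketch of what such a small on-robot agent boils down to, using the Eclipse Paho MQTT client. The broker URL, client id and JSON payload here are illustrative assumptions, not the framework's actual wire format (the real message layout follows later in this post):

import java.nio.charset.StandardCharsets;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class RobotAgentSketch {
    public static void main(String[] args) throws Exception {
        // Connect to the public MQTT broker (URL and client id are assumptions)
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "peppy-agent");
        client.connect();

        // Forward a local sensor reading as a state message to the broker
        String payload = "{\"controllerId\":\"peppy\",\"channelId\":\"distance\",\"label\":\"sonar\",\"value\":42}";
        MqttMessage message = new MqttMessage(payload.getBytes(StandardCharsets.UTF_8));
        message.setQos(1); // at-least-once: sensor events should not be silently dropped
        client.publish("/states/peppy/distance/sonar", message);

        client.disconnect();
    }
}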

I have chosen MQTT because it is a very simple protocol that is already heavily adopted in the IoT industry. On top of that, my current home automation system also runs via MQTT, so in the future this offers a very nice integration between multiple robots and the home automation 😀

In this post I will describe two scenarios:
1. Having the Nao robot trigger a movement in the Raspberry Pi robot
2. Having the Nao robot react to a sensor event on the Raspberry Pi robot

Implementation

In order to do this I have to implement my framework for each of the robots. The robot framework consists of the following high-level design:
[Diagram: robo-sdk, the high-level design of the robot framework]

Capability
A capability indicates something the robot can do. These capabilities vary from basic ones like motion, to low-level ones like servo control and sensor drivers, to higher-level ones like speech.

Sensor
These represent the sensors on the robot, which provide feedback based on events that happen on the robot.

Robot
The robot has a list of capabilities and sensors, which together form the entire robot entity.
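
To illustrate this design, here is a simplified sketch of the three abstractions as Java interfaces. These are hypothetical, trimmed-down versions for illustration only; the actual SDK interfaces are richer:

import java.util.List;
import java.util.function.Consumer;

// Hypothetical, trimmed-down interfaces illustrating the high-level design
interface Capability {
    void activate();   // invoked when the capability is installed on the robot
    void shutdown();
}

interface SensorEvent {
    String getLabel(); // identifies the exact source, e.g. "FrontTactilTouched"
}

interface Sensor {
    String getName();  // e.g. "distance" or "head"
    void listen(Consumer<SensorEvent> listener);
}

interface Robot {
    String getControllerId(); // the main robot identifier, e.g. "peppy"
    <C extends Capability> C getCapability(Class<C> type);
    List<Sensor> getSensors();
}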

The code

Here is an example of bootstrapping the framework on the Nao robot. This is a Spring Boot application that connects to the robot and installs all the capabilities. All the logic for dealing with the robot’s capabilities is contained in the specific capability classes, and these capabilities are not aware of the cloud connectivity. The cloud connectivity is handled by a special driver that listens to all local robot events and forwards them to the MQTT bridge, and that listens for incoming MQTT commands meant for this robot.

Robot robot = new SpringAwareRobotBuilder("peppy", context)
        .motionEngine(NaoMotionEngine.class)
        .servoDriver(NaoServoDriver.class)
        .capability(NaoSpeechEngine.class)
        .capability(NaoQRScanner.class)
        .sensor(new DistanceSensor("distance", NaoSensorDriver.SONAR_PORT), NaoSensorDriver.class)
        .sensor(new BumperSensor("head", NaoSensorDriver.TOUCH_HEAD), NaoSensorDriver.class)
        .remote(RemoteCloudDriver.class)
        .build();

RobotEventHandler eventHandler = new RobotEventHandler(robot);
robot.listen(eventHandler);

public static class RobotEventHandler implements EventHandler {
    private Robot robot;

    public RobotEventHandler(Robot robot) {
        this.robot = robot;
    }

    @EventSubscribe
    public void receive(DistanceSensorEvent event) {
        LOG.info("Received a distance event: {}", event);

        if(event.getDistance() < 30) {
            LOG.info("Stopping walking motion");
            robot.getMotionEngine().stopWalking();
        }
    }

    @EventSubscribe
    public void receive(TextualSensorEvent event) {
        LOG.info("Barcode scanned: {}", event.getValue());

        robot.getCapability(NaoSpeechEngine.class).say(event.getValue(), "english");
    }

    @EventSubscribe
    public void receive(BumperEvent event) {
        LOG.info("Head was touched on: {}", event.getLabel());
    }
}

The robot has a main identifier called the ‘controllerId’; in the above example this is ‘peppy’. All sensors also have a name, for example the BumperSensor defined above has the name ‘head’. Each component can generate multiple events, which all need a label identifying the source of the event. For example, in case the head gets touched, the label indicates where on the head the robot was touched.

Effectively this means any sensor event sent to the cloud always has three identifiers (controllerId, itemId and label). Commands sent from the MQTT bridge also always carry three identifiers, with slightly different meanings (controllerId, itemId and commandType).
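
As an illustration, the mapping from those identifiers onto MQTT topics could be captured in a small helper like this (a hypothetical utility, matching the topic examples shown below):

// Hypothetical helper showing how the identifiers map onto MQTT topics
public final class Topics {

    // e.g. state("peppy", "head", "FrontTactilTouched") -> /states/peppy/head/FrontTactilTouched
    public static String state(String controllerId, String itemId, String label) {
        return String.format("/states/%s/%s/%s", controllerId, itemId, label);
    }

    // e.g. command("peppy", "tts", "say") -> /command/peppy/tts/say
    public static String command(String controllerId, String itemId, String commandType) {
        return String.format("/command/%s/%s/%s", controllerId, itemId, commandType);
    }
}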

Here is an example of a sensor event coming from a robot and forwarded to the MQTT broker. On MQTT each message is sent to a topic that you can subscribe to; events from the robot are sent to a topic of the form /states/{controllerId}/{itemId}/{label}, which for the below example is /states/peppy/head/FrontTactilTouched. The message sent to that topic has the following body:

{
   "value":[
      "com.oberasoftware.home.api.impl.types.ValueImpl",
      {
         "value":false,
         "type":"BOOLEAN"
      }
   ],
   "controllerId":"peppy",
   "channelId":"head",
   "label":"FrontTactilTouched"
}
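
For illustration, this is how a consumer could pick the three identifiers and the value out of that message using Jackson. This is just a sketch; the framework itself deserializes into typed events like the DistanceSensorEvent shown earlier. Note that the value field is serialized as a [type, object] pair, so the actual value sits in the second array element:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StateMessageParser {
    public static void main(String[] args) throws Exception {
        String payload = "{\"value\":[\"com.oberasoftware.home.api.impl.types.ValueImpl\","
                + "{\"value\":false,\"type\":\"BOOLEAN\"}],"
                + "\"controllerId\":\"peppy\",\"channelId\":\"head\",\"label\":\"FrontTactilTouched\"}";

        JsonNode event = new ObjectMapper().readTree(payload);

        String controllerId = event.get("controllerId").asText(); // "peppy"
        String itemId = event.get("channelId").asText();          // "head"
        String label = event.get("label").asText();               // "FrontTactilTouched"

        // The value field is a [type, object] pair, so the actual
        // value sits in the second array element
        boolean touched = event.get("value").get(1).get("value").asBoolean();

        System.out.printf("%s/%s/%s touched=%b%n", controllerId, itemId, label, touched);
    }
}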

For a command sent via the cloud, the message would be sent to the MQTT bridge on a topic like this: /command/peppy/tts/say. The message for letting the robot say something would look as follows:

{
  "controllerId":"peppy",
  "itemId":"tts",
  "commandType":"say",
  "properties":{
  	"text":"Hello everyone, how are you all doing?",
    "language":"english"
  }
}
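
Publishing such a command yourself is straightforward with the Paho client; the sketch below sends the exact message above. The broker URL and client id are assumptions:

import java.nio.charset.StandardCharsets;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SayCommandSender {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "command-sender");
        client.connect();

        String json = "{\"controllerId\":\"peppy\",\"itemId\":\"tts\",\"commandType\":\"say\","
                + "\"properties\":{\"text\":\"Hello everyone, how are you all doing?\",\"language\":\"english\"}}";

        // Topic layout: /command/{controllerId}/{itemId}/{commandType}
        client.publish("/command/peppy/tts/say", new MqttMessage(json.getBytes(StandardCharsets.UTF_8)));
        client.disconnect();
    }
}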

If you are curious and would like to see what it takes to develop your own robot and connect it to the cloud, you can find all the code on GitHub. The full code for the Nao implementation can be found here: https://github.com/renarj/robo-pep. I have similar code for the Raspberry Pi robot, which can be found here: https://github.com/renarj/robo-max

The Architecture

Now that the robots have been implemented, it is up to the microservices to coordinate them. I have decided to split the architecture in two parts: a public message ring using MQTT, and a secure internal one using ActiveMQ.

At the border of the cloud there is a message processor that picks up messages from MQTT and forwards them to ActiveMQ. This processor checks whether incoming messages are valid and allowed, and if so forwards them to the secure internal ring running on ActiveMQ. This way I have a filter on the edge that protects against attacks and authorizes the sensor data and commands; it is nothing more than a bit of safety and scalability for a potential future.
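
A minimal version of that edge processor could look as follows, bridging the Paho MQTT client to ActiveMQ over JMS. The broker URLs, internal topic name and the validation rule are all assumptions; the real processor would also authenticate and authorize messages, and manage JMS threading more carefully:

import java.nio.charset.StandardCharsets;

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.eclipse.paho.client.mqttv3.MqttClient;

public class EdgeProcessor {
    public static void main(String[] args) throws Exception {
        // Internal secure ring: ActiveMQ over JMS
        Connection jms = new ActiveMQConnectionFactory("tcp://activemq:61616").createConnection();
        jms.start();
        Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createTopic("robot.states"));

        // Public ring: subscribe to all robot state topics on MQTT
        MqttClient mqtt = new MqttClient("tcp://mqtt:1883", "edge-processor");
        mqtt.connect();
        mqtt.subscribe("/states/#", (topic, message) -> {
            String payload = new String(message.getPayload(), StandardCharsets.UTF_8);
            // Filter on the edge before anything enters the internal ring;
            // a real implementation would authenticate and authorize here
            if (isValid(topic)) {
                producer.send(session.createTextMessage(payload));
            }
        });
    }

    private static boolean isValid(String topic) {
        // Placeholder check: topic must match /states/{controllerId}/{itemId}/{label}
        return topic.split("/").length == 5;
    }
}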

So the architecture in essence looks as follows:
[Diagram: robo-arch, the overall architecture]

Deploying to the cloud

Based on the above architecture I have a number of services to deploy, including their supporting services. I will deploy the entire stack using Docker on a Kubernetes cluster running on Amazon AWS.

Deploying the services

Getting back to deploying my services, this is the list of services / containers to deploy:
* MQTT message broker based on Moquette for public Robot messages
* ActiveMQ message broker for internal system messages
* Edge processor that forwards messages from MQTT to ActiveMQ and the other way around
* Command service for sending robot commands to the cloud
* State service for processing robot state (sensor data)
* Dashboard service for interacting with the robot (future)

If you want to know more on how to deploy a Docker container on a Kubernetes cluster and how to create one, please check out my previous blog on that: https://renzedevries.wordpress.com/2016/05/31/deploying-a-docker-container-to-kubernetes-on-amazon-aws/

All of the above services can also be found on my GitHub account: https://github.com/renarj

Putting all together

The goal I had was to connect two robots via a brokering architecture and let them interact. It was a lengthy process and it was not easy, but I did finally manage to pull it off. Once the above services were deployed, all I had to do was start the implemented Spring Boot application on the individual robots. In order to get the interaction going though, I had to write a third Spring Boot application that receives events from both robots and takes action based on those events. The awesome part about the above architecture is that I can now do that simply by writing remote capability connectors for the robot framework that speak directly to the MQTT bridge.

You can see this in the below code:

Robot max = new SpringAwareRobotBuilder("max", context)
        .motionEngine(RemoteMotionEngine.class)
        .remote(RemoteCloudDriver.class, true)
        .build();

Robot pep = new SpringAwareRobotBuilder("peppy", context)
        .motionEngine(RemoteMotionEngine.class)
        .capability(RemoteSpeechEngine.class)
        .remote(RemoteCloudDriver.class, true)
        .build();

MaxRobotEventHandler maxHandler = new MaxRobotEventHandler(pep);
max.listen(maxHandler);

PepRobotEventHandler pepHandler = new PepRobotEventHandler(max);
pep.listen(pepHandler);

public static class MaxRobotEventHandler implements EventHandler {
    private Robot pep;

    private AtomicBoolean guard = new AtomicBoolean(true);

    private MaxRobotEventHandler(Robot pep) {
        this.pep = pep;
    }

    @EventSubscribe
    public void receive(ValueEvent valueEvent) {
        LOG.info("Received a distance: {}", valueEvent.getValue().asString());
        if(valueEvent.getControllerId().equals("max") && valueEvent.getLabel().equals("distance")) {
            int distance = valueEvent.getValue().getValue();
            if(distance < 30) {
                if(guard.compareAndSet(true, false)) {
                    LOG.info("Distance is too small: {}", distance);
                    pep.getCapability(SpeechEngine.class).say("Max, are you ok, did you hit something?", "english");

                    Uninterruptibles.sleepUninterruptibly(10, TimeUnit.SECONDS);
                    guard.set(true);
                    LOG.info("Allowing further distance events");
                }
            }
        }
    }
}

public static class PepRobotEventHandler implements EventHandler {
    private Robot max;

    public PepRobotEventHandler(Robot max) {
        this.max = max;
    }

    @EventSubscribe
    public void receive(ValueEvent valueEvent) {
        LOG.info("Received an event for pep: {}", valueEvent);
        if(valueEvent.getControllerId().equals("peppy") && valueEvent.getItemId().equals("head")) {
            if(valueEvent.getValue().asString().equals("true")) {

                max.getMotionEngine().runMotion("Bravo");
            }
        }
    }
}

What happens in this code is that when we receive an event from Pep (the Nao robot) indicating his head was touched, we trigger a motion in the other robot, Max (the Raspberry Pi robot). The other way around, if we receive a distance event from Max indicating he is about to hit a wall, we execute a remote operation on Pep's speech engine to say something.

So how does that look? Well, see for yourself in this YouTube video:

Conclusion

It has been an incredible challenge to get this interaction working, but I finally did manage it, and I am just at the starting point now. The next step is to work out all the capabilities in the framework, including for example video/vision capabilities in both robots. After that, the next big step will be to get both robots to explore the room, try to find each other and then collaborate. More on that in future posts.
