API Design: The Good, the Bad and the Ugly

I was doubting whether I should write this post, because good API design is a controversial topic; what makes a good API is something that can be endlessly debated.

I do however want to talk about it a bit, because I recently got my Nao Robot from Aldebaran, and as a Java/Scala developer I naturally wanted to try out their Java SDK.

When I got the Nao, the Java SDK was at version 2.1.2, and I was rather shocked when I saw how I had to use it to control the robot. Although I am not keen on criticising someone’s hard work, in this case I can’t help but remark on the usability of this API.

So let’s start with a small example: in order to make the robot speak with the 2.1.2 SDK, I had to use the following code:

Application app = new Application(args);
Session session = new Session();
// The session must first be connected to the robot's URL
session.connect("tcp://nao.local:9559").get();
com.aldebaran.qimessaging.Object tts = session.service("ALTextToSpeech");
tts.call("say", "I am NAO and will conquer the world", "English");

What is wrong?

So what is wrong with the above API design? Well, there are some very obvious issues. The first is that the designers chose to create a class named ‘Object’, which is incredibly unhandy because everything in Java inherits from the java.lang.Object type; it means you need to fully qualify the API classes everywhere you use them.

One of the most frustrating parts of this API is that it is not strongly typed. Whenever I want to perform any operation on the robot, from speaking to motion, I need to provide the name of the target method as a string, as if I were doing reflection, which is very cumbersome.
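To illustrate the problem, consider this hypothetical snippet against the 2.1.2 API; the misspelled method name compiles without complaint and only fails at runtime:

com.aldebaran.qimessaging.Object tts = session.service("ALTextToSpeech");
// "sey" instead of "say": the compiler cannot catch this,
// the call only fails at runtime when the robot rejects it
tts.call("sey", "I am NAO and will conquer the world", "English");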

Nao 2.1.4

When I started writing this article the SDK was still at 2.1.2, but in the meantime, while writing and investigating, the API has been updated: there are now Java proxy objects which allow easier interaction. The same snippet as above now looks a lot cleaner:


Application application = new Application(args, PEP_URL);
// start() connects the application to the robot
application.start();
Session session = application.session();
ALTextToSpeech tts = new ALTextToSpeech(session);
tts.say("I am NAO and will conquer the world");

One could say the majority of my concerns are now addressed.

Complex Event Callbacks

However, there is still something to nitpick in the newer, more streamlined API, and that is the event-driven, callback-based part. If you want to receive any event from the robot, for example that his head was touched, the following code is required:


Application application = new Application(args, PEP_URL);
application.start();
Session session = application.session();
ALMemory memory = new ALMemory(session);
memory.subscribeToEvent("FrontTactilTouched", new EventCallback<Float>() {
    @Override
    public void onEvent(Float o) throws InterruptedException, CallError {
        LOG.debug("Received head touched: {}", o);
    }
});

So basically nothing too special, but what gets annoying on a robot is that you may need or want to monitor a large number of sensors. You very quickly end up with a pile of anonymous inner classes, which makes the code ugly and makes it hard to build any kind of higher-level logic, as the sketch below shows.
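To give an impression, here is what monitoring just the three head sensors already looks like, using the same subscribeToEvent API shown above:

ALMemory memory = new ALMemory(session);
// One anonymous inner class per sensor; with dozens of sensors this
// boilerplate quickly drowns out the actual application logic.
memory.subscribeToEvent("FrontTactilTouched", new EventCallback<Float>() {
    @Override
    public void onEvent(Float value) throws InterruptedException, CallError {
        LOG.debug("Front head sensor: {}", value);
    }
});
memory.subscribeToEvent("MiddleTactilTouched", new EventCallback<Float>() {
    @Override
    public void onEvent(Float value) throws InterruptedException, CallError {
        LOG.debug("Middle head sensor: {}", value);
    }
});
memory.subscribeToEvent("RearTactilTouched", new EventCallback<Float>() {
    @Override
    public void onEvent(Float value) throws InterruptedException, CallError {
        LOG.debug("Rear head sensor: {}", value);
    }
});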

The solution?

So again we get into the debate about what makes a good API. Well, in my opinion a good API prevents me from doing extra work: it provides out of the box what I need to accomplish my end result. So in the case of a robot, I expect that minimal effort is needed to, for example, monitor sensor events.

I don’t want to rant without providing a solution to this problem, so how did I solve it myself? In the past I wrote a small in-process Event Bus mechanism that uses reflection and annotations to send events to the right listener methods. In the end I used this, together with a small bit of extra code, to make listening to any event a lot easier. This is how listening to a Nao event looks in that case:

@EventSubscribe
@EventSource({"FrontTactilTouched", "MiddleTactilTouched", "RearTactilTouched"})
public void receive(TriggerEvent triggerEvent) {
    if(triggerEvent.isOn()) {
        LOG.info("Head was touched: {}", triggerEvent.getSource());
    }
}

The above code is a simple method in a class, annotated with ‘EventSubscribe’ to tell the local EventBus it is interested in messages. The EventBus determines the type of message the method can receive by checking the type of its first parameter using reflection.
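To make this concrete, the dispatch logic of such a bus boils down to something like the following. This is a minimal sketch, not my actual implementation; it assumes the EventSubscribe annotation has runtime retention.

import java.lang.reflect.Method;
import java.util.*;

public class EventBus {
    // Maps an event type to the listener methods that can receive it
    private final Map<Class<?>, List<Registration>> listeners = new HashMap<>();

    // Scan a listener object for @EventSubscribe methods and register them
    public void register(Object listener) {
        for (Method method : listener.getClass().getMethods()) {
            if (method.isAnnotationPresent(EventSubscribe.class)
                    && method.getParameterTypes().length == 1) {
                // The single parameter type determines which events this method accepts
                Class<?> eventType = method.getParameterTypes()[0];
                listeners.computeIfAbsent(eventType, k -> new ArrayList<>())
                         .add(new Registration(listener, method));
            }
        }
    }

    // Deliver an event to every registered method whose parameter type matches
    public void publish(Object event) {
        for (Registration reg : listeners.getOrDefault(event.getClass(),
                Collections.<Registration>emptyList())) {
            try {
                reg.method.invoke(reg.target, event);
            } catch (Exception e) {
                // Log and continue so one bad listener cannot break the others
                e.printStackTrace();
            }
        }
    }

    private static final class Registration {
        final Object target;
        final Method method;
        Registration(Object target, Method method) { this.target = target; this.method = method; }
    }
}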

Next to this I introduced an EventSource annotation to indicate which sensors of the robot to listen to. I wrote a simple bit of logic that uses reflection to find all methods annotated with EventSource and automatically creates the Nao SDK event callbacks for them, which then forward the events to the listeners via the in-process EventBus.
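The glue between the SDK and the bus then looks roughly like the following. Again a sketch: the TriggerEvent constructor and the exact shape of the EventSource annotation are simplified stand-ins for my own code.

// Bridge the Nao SDK callbacks to the in-process EventBus
public void bridgeEvents(Object listener, ALMemory memory, final EventBus bus) throws Exception {
    for (Method method : listener.getClass().getMethods()) {
        EventSource source = method.getAnnotation(EventSource.class);
        if (source == null) {
            continue;
        }
        // Create one SDK callback per declared sensor, all funnelling into the bus
        for (final String eventName : source.value()) {
            memory.subscribeToEvent(eventName, new EventCallback<Float>() {
                @Override
                public void onEvent(Float value) throws InterruptedException, CallError {
                    // Wrap the raw sensor value in my own TriggerEvent type;
                    // the bus routes it to listener methods by parameter type
                    bus.publish(new TriggerEvent(eventName, value));
                }
            });
        }
    }
    bus.register(listener);
}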

Conclusion

So what is my point really? You don’t have to agree with the API design in my solution, and perhaps you don’t understand exactly how it works, but there is one very important point.

That point is that the API I introduced makes it a lot simpler to listen to any sensor on the Nao Robot. I no longer have to bother wiring up the lower-level callback logic, and I don’t even need to understand it; as a developer I can simply implement the logic I want to run on my robot. This is, in the end, my take-away for API development: build the API that allows your users to solve their core problem.

In the case of a robot, the core problem you want to solve is automating sensor reading and movement control, and perhaps even higher-level logic like AI and complex behaviours. At that level you really do not want to be bothered by the callback-based complexity.

I will strive to build more abstractions for the NAO Robot, and hopefully open source them at some point. Hopefully the developers at Aldebaran will take a peek at this and can use it to improve the Java SDK 🙂

 

 


Devoxx 2015 Summary :)

I went to the Devoxx 2015 conference in Belgium this year and thought I would give a quick summary with some of my own take-aways, while it is still fresh.

  • Micro-services are hot
    • Lots of presentations based on microservices with Spring Boot
    • Docker, Docker Compose and Swarm are really becoming big; lots of talks on getting started as a developer or in the cloud
    • Slightly related, cluster management is getting quite some tooling (Kubernetes, Swarm, etc.)
  • The next big thing really seems to be Machine Learning and data streams
    • Instead of offline analytics, we analyse streams both online and offline and combine data (Apache Spark Streaming, Akka Streams, Java Streams)
    • A standard has been developed around this, the Reactive Streams initiative
      • Will most likely be adopted as part of JDK 9
  • JDK 9 is all about modularisation
    • No longer will you be required to drag the entire JDK with you, only the parts you depend on
    • Introducing a linker to package a mobile version of your app with minimal JRE dependencies
    • No support for multiple versions -> this needs to be resolved by build tools
    • Oracle expects slow adoption due to breaking changes (internal classes that were being used are removed, etc.)
  • Chaos engineering
  • Fun topics
    • Developing robotics with Aldebaran (yeah, the NAO 😀 which I also have, see my other posts)
    • Inspirational talk about the silver bullet syndrome: https://www.youtube.com/watch?v=3wyd6J3yjcs (hint: there is no silver bullet 🙂)
    • Capturing the air around you for $7.
      • Really loved this session: using a computer and a small radio receiver, the speaker ‘hijacked’ the wireless audio stream from a session next door 🙂

All the sessions were recorded, and from this year they have put them on YouTube:

https://www.youtube.com/channel/UCCBVCTuk6uJrN3iFV_3vurg/videos

Robotics and Home Automation: the next step

In one of my first posts I talked about robotics, in particular a Raspberry Pi project with Java to create a spider robot. That project was great fun, and in the end it gave me a taste for finally taking the next step.

A few years ago I was at the JavaOne conference in San Francisco, where a company called Aldebaran was demoing their little robot. Instantly I knew that robot was so great I wanted to experiment with one. However, it was very hard to get one, and not so affordable.

Although the affordability part has not changed, I am happy to say that I did manage to acquire one, almost 3 years after meeting him for the first time. Allow me to introduce my new buddy, whom we call “Peppy”:

[Photo: Peppy, my NAO robot]

Aldebaran NAO Robot

The robot above is a NAO robot from a company called Aldebaran. He is one of the more advanced humanoid research-platform robots you can get these days at a relatively affordable level. He has 25 degrees of freedom, 2 cameras, 4 microphones and several other sensors all over his body. The robot also has WiFi and is powered by an Intel Atom processor running a Linux OS based on the Gentoo distribution.

What is great about the robot is the SDK that comes with it. Out of the box there is a graphical studio called Choregraphe, which has a block-based editor allowing you to develop behaviours for your robot. This is very easy, and I could already quickly assemble a little dialog (in Dutch).

The robot also comes with a Python and a Java SDK. The cool part about this SDK is that it does not matter whether it runs remotely or on the robot itself. I will write a bit more about the SDK in future articles.

Goal

I have had the robot for a few weeks now, and I often get asked what I am planning to do with it. Well, I want to fuse the worlds of home automation and robotics. Would it not be great if a robot could greet you when you open the door? And if you could tell your robot your preferences, so he can automatically arrange things for you?

Is this any different from a Siri? Perhaps not so much, but the robot can make it more personal and physical, which is something Siri simply cannot do. In the coming months I will explore and research this topic more, and hopefully share a bit more about it. Stay tuned…