Continuous Deployment with Docker Compose and Spring Boot

Recently I have visited quite a few conferences, and two of the hot topics these days are clearly microservices and Docker. The combination of these two seems to be a really good fit.

However, the issue with vanilla Docker is that it still requires orchestration and scheduling. In the last year there has been a real rise in tools around scheduling and composing containers.

In my job at SDL we recently released SDL Web 8 (Tridion), containing a major re-architecture in which we converted part of our product stack (Content Delivery) into microservices using Spring Boot. Although we scoped the internal Docker support out of the release, it was something I could add relatively easily after the fact thanks to the nature of Spring Boot: simply drop the package into a Docker container, build it, and off we go.
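To give an idea of how little is involved: a Docker image for such a service boils down to copying in the Spring Boot fat jar and starting it. Here is a minimal sketch (the jar name and the java:8-jre base image are my assumptions for illustration, not our actual build files):

# Minimal sketch of packaging a Spring Boot fat jar as a Docker image
cat > Dockerfile <<'EOF'
FROM java:8-jre
COPY discovery-service.jar /opt/discovery-service.jar
CMD ["java", "-jar", "/opt/discovery-service.jar"]
EOF
docker build -t discovery-service .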

Is there a problem?

So you may wonder why all the fuss; it works, right? One of the main challenges in our company is deploying an always up-to-date stack to our manual and automated test environments. This is because there are a lot of components in play, and orchestration is hard: it is easy to miss a configuration step.

In fact, the core part of the product requires three microservices (discovery, deployer and content service), and ideally we also want to run some dependent services (MySQL, Logstash, Elasticsearch, Kibana). I am not even talking about load balancers etc. So how do you orchestrate this?

Docker Compose

Well, this is where the fun starts. I decided to start out with the docker-compose functionality that has been a standard part of the Docker Toolbox since the 1.9 release. The main goal is to create a stack that is fully independent, so I will include a MySQL database and a full monitoring stack. I want to deploy the following services:

  • MySQL database
  • SDL Web 8 discovery service
  • SDL Web 8 content service
  • SDL Web 8 deployer service
  • ELK stack (elasticsearch, logstash and kibana)

In a deployment diagram this looks roughly as follows:

[Deployment diagram: deployment-model]

The Docker Compose file:

So let’s script all of this using some Docker containers. We do this by creating a docker-compose.yml file containing the following code.

Note: it is good to keep in mind that I have built my Docker containers locally, so they are not available on Docker Hub. In order to get the same results, you would need to build a Docker container for each microservice referenced below.
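For example, building them could look something like this (the build-context paths are placeholders for wherever your Dockerfiles live):

docker build -t discovery-service ./discovery
docker build -t deployer-service ./deployer
docker build -t content-service ./content
docker build -t cd-logstash ./logstash
docker build -t cd-docker/cd-kibana ./kibana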

mysql:
  image: mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: mysecretpassword
  container_name: "db"
  volumes:
    - ./mysql-data:/docker-entrypoint-initdb.d
discovery:
  image: discovery-service
  ports:
    - "8082:8082"
  container_name: "discovery"
deployer:
  image: deployer-service
  ports:
    - "8084:8084"
  container_name: "deployer"
content:
  image: content-service
  ports:
    - "8081:8081"
  container_name: "content"
elasticsearch:
  image: elasticsearch:latest
  container_name: "elasticsearch"
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
logstash:
  image: cd-logstash
  container_name: "logstash"
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5000:5000"
kibana:
  image: cd-docker/cd-kibana
  container_name: "kibana"
  volumes:
    - ./kibana/config/kibana.yml:/opt/kibana/config/kibana.yml
  ports:
    - "5601:5601"

So you would say: let’s get this show on the road with the above configuration file, right? Well, there are a few things to remark on first:

Database credentials and location:

Docker Compose starts an overlay network; our services, however, use an XML configuration file to locate the DB hostname. Because manipulating this XML file is unhandy, for now things like hostname, username and password are hard-coded in the config file, with the host set to ‘cd-docker.db’, which will always resolve to the Docker container running the DB.

This is fine in a test/QA setup, but in production this would definitely need to be solved differently.
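For illustration, one hypothetical way to solve it would be to let the container’s startup script fill placeholders in the XML config from environment variables passed in via docker-compose. The variable names and file path below are made up, not our actual configuration:

# Hypothetical: substitute placeholders in the XML config with values
# from the environment (DB_HOST defaults to the compose service name)
: "${DB_HOST:=db}"
sed -i "s/@DB_HOST@/${DB_HOST}/g" /opt/service/config/storage.xml
sed -i "s/@DB_USER@/${DB_USER}/g" /opt/service/config/storage.xml
sed -i "s/@DB_PASSWORD@/${DB_PASSWORD}/g" /opt/service/config/storage.xml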

Container startup dependencies

One of the trickiest parts of this Docker setup was the container dependencies. For example, our content-service microservice depends on the service called ‘discovery-service’, but all of these services depend on the fundamental base services MySQL, Logstash and Elasticsearch.

In order to resolve that, I built a bit of health monitoring into the service startup of the Docker containers. Each of the containers uses a small bash script to start the actual program running in the container. I simply wrote an augmentation like the following example:


echo "Waiting for MYSQL to be up and running"
while ! mysqladmin ping -h"db" -p1234 --silent; do
printf '.'
sleep 1
done
echo "MySQL is up and running"

echo "Waiting for LogStash to be up and running"
while true; do
nc -q 1 logstash 5000 2>/dev/null && break
sleep 5
echo "Trying again to see if LogStash is up"
done
echo "LogStash up and running"

These are the dependencies of our discovery-service microservice, which needs the database and Logstash to be up. In turn, Logstash depends on Elasticsearch, so its startup script contains a similar small check:


echo "Waiting for Elasticsearch"
while true; do
nc -q 1 elasticsearch 9200 2>/dev/null && break
done
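Since this wait-and-retry pattern repeats for each dependency, it could be factored into a small helper function. Here is a sketch of what that might look like (this helper is not part of the original scripts, and it only checks plain TCP connectivity, unlike the mysqladmin ping above):

# Sketch: a generic helper for the wait-and-retry pattern above
wait_for() {
    local host=$1 port=$2
    echo "Waiting for ${host}:${port}"
    # -q 1 makes nc quit one second after the connection check
    until nc -q 1 "$host" "$port" </dev/null 2>/dev/null; do
        sleep 2
    done
    echo "${host}:${port} is up and running"
}

wait_for elasticsearch 9200
wait_for logstash 5000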

End result:

So now the last but most important step: starting the entire stack. This can be done with a simple command in the directory containing the docker-compose.yml file:

docker-compose --x-networking up -d

After a few minutes you should see the following output when running docker ps:

CONTAINER ID        IMAGE                                                             COMMAND                  CREATED             STATUS              PORTS                              NAMES
11f5bd3fee2f        cd-docker/discovery-service   "/bin/sh -c 'bash -C "   12 days ago         Up 2 minutes        0.0.0.0:8082->8082/tcp             discovery
4591cce69f02        cd-docker/cd-logstash         "/docker-entrypoint.s"   12 days ago         Up 2 minutes        0.0.0.0:5000->5000/tcp             logstash
c69489d9450c        cd-docker/content-service     "/bin/sh -c 'bash -C "   12 days ago         Up 2 minutes        0.0.0.0:8081->8081/tcp             content
99487697737a        cd-docker/cd-kibana           "/docker-entrypoint.s"   12 days ago         Up 2 minutes        0.0.0.0:5601->5601/tcp             kibana
fa50324ce8c3        cd-docker/deployer-service    "/bin/sh -c 'bash -C "   12 days ago         Up 2 minutes        0.0.0.0:8084->8084/tcp             deployer
a85ca06765c4        mysql                                                             "/entrypoint.sh mysql"   4 weeks ago         Up 2 minutes        0.0.0.0:3306->3306/tcp             db
26970b231a3c        elasticsearch:latest                                              "/docker-entrypoint.s"   4 weeks ago         Up 2 minutes        0.0.0.0:9200->9200/tcp, 9300/tcp   elasticsearch

Now that this is done, we can check whether the microservices are available. Because I am running on a Mac, the actual Docker host is a VirtualBox image. So, in order to locate the actual endpoint, I first run this:

docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL
default   *        virtualbox   Running   tcp://192.168.99.100:2376
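As a side note, docker-machine can also print just the IP of the machine:

docker-machine ip default
192.168.99.100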

So I have a docker-machine running on 192.168.99.100, which means that in theory I should have a working microservice on, for example, 192.168.99.100:8082. Let’s see:

curl -i http://192.168.99.100:8082/
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=0C61030C4471F321D1A27BF50CA35142; Path=/; HttpOnly
Set-Cookie: TAFSessionId=tridion_3dac7590-06f8-4bfd-9ffc-8627b3b29fb0; path=/; HttpOnly
Set-Cookie: TAFTrackingId=tridion_4e0bf1ba-853d-4f0a-86d0-f3565ff56990; Expires=Fri, 01-Jan-2100 00:00:00 GMT; path=/; HttpOnly
X-Application-Context: application:8082
Content-Type: text/plain;charset=ISO-8859-1
Content-Length: 35
Date: Wed, 23 Dec 2015 14:49:47 GMT

CD Service Container up and running

So that is excellent news: the entire stack works, and the other endpoints on ports 8081 and 8084 work fine as well. Another nice bonus is that the Kibana dashboard also works just fine and is reporting on the log output of all our microservices.
[Screenshot: Kibana dashboard showing the log output of the microservices]
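And when you are done, stopping and cleaning up the whole stack is just as simple:

docker-compose stop    # stop all containers in the stack
docker-compose rm -f   # remove the stopped containers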

Conclusion

The conclusion I have drawn from this little experiment is that docker-compose provides a nice and easy way to get a more complicated stack up and running. I do see, however, that for the moment it would be hard to run a perfect customer production setup of our product in this stack, for example due to the hard-coded DB host/credentials. But this is something we can hopefully improve in the future, and with upcoming technologies like Kubernetes and Nomad the next year will become even more exciting.

I do think the ideal users of docker-compose for us (at least for now) are the (automation) testers and developers. No longer do they have to spend a huge amount of time setting up a stack: simply run the docker-compose up command and off we go with the latest release of the product.
