Autoscaling Containers on Kubernetes on AWS

One of the challenges I faced recently was autoscaling the containers on my Kubernetes cluster. I realised I had not yet written about this concept, so I thought I would share how it can be done and what the pitfalls were for me.

If you combine this concept with my previous post about autoscaling your Kubernetes cluster (https://renzedevries.wordpress.com/2017/01/10/autoscaling-your-kubernetes-cluster-on-aws/) you can create a nicely balanced, scalable deployment at lower cost.

Preparing your cluster

In my case I have used kops to create my cluster on AWS. By default, however, this does not install some of the add-ons we need for autoscaling our workloads, such as Heapster.

Heapster monitors and analyses the resource usage in our cluster. The metrics it collects are essential for building scaling rules; they allow us, for example, to scale based on a CPU percentage. Heapster records these metrics and offers an API to Kubernetes so it can act on this data.

In order to deploy heapster I used the following command:

kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.3.0.yaml

Please note that in your own Kubernetes setup you might already have Heapster, or you might want to run a different version.
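To verify that Heapster came up correctly, you can check its pod in the kube-system namespace (a quick sanity check; the exact pod name will differ per deployment):

kubectl --namespace=kube-system get pods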

Optional dashboard
I also find it handy to run the Kubernetes dashboard, which you can deploy as follows under kops:

kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.5.0.yaml

Deploying Workload

In order to get started I will deploy a simple workload; in this case it's the command service for my robotics framework (see previous posts). This is a simple HTTP REST endpoint that takes in JSON data and passes it along to a message queue.

This is the descriptor of the deployment object for Kubernetes:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: command-svc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: command-svc
    spec:
      containers:
      - name: command-svc
        image: ecr.com/command-svc:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
        env:
        - name: amq_host
          value: amq
        - name: SPRING_PROFILES_ACTIVE
          value: production

Readiness and Liveness
I have added a liveness and a readiness probe to the container, which allows Kubernetes to detect when a container is ready and whether it is still alive. This is important for autoscaling, because otherwise pods that are not actually ready to accept work might already be enabled in your load-balanced service. By default Kubernetes can only detect that a pod has started, not whether the process in the pod is ready to accept workloads.

These probes test whether a certain condition is true, and only then is the pod added to the load-balanced service. In my case I have a probe that checks whether port 8080 of my REST service is available. I am using a simple TCP probe because the HTTP probe that is also offered gave me strange errors, and the TCP probe works just as well for my purpose.
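For reference, the HTTP variant of the probe would look roughly like the fragment below; the /health path is a hypothetical endpoint your service would need to expose, and as mentioned this variant did not work reliably for me on AWS:

        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10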

Deploying
Now we are ready to deploy the workload, which we do as follows:

kubectl create -f command-deployment.yaml


Enabling Autoscaling

The next step is to enable autoscaling rules on our workload. As mentioned above, we have deployed Heapster, which monitors resource usage. I have also set resource requests and limits on the pods to indicate how much CPU they are expected and allowed to consume. For the command-svc each pod requests 250m (roughly a quarter of a CPU core) and is limited to 500m. The CPU percentage used in a scaling rule is measured against the requested amount, so a rule that scales at 80% CPU usage means 80% of the 250m request, i.e. about 200m per pod.

We can create a rule that says there is always a minimum of 1 pod and a maximum of 3, and that we scale up once CPU usage exceeds 80% of the requested amount.

kubectl autoscale deployment command-svc --cpu-percent=80 --min=1 --max=3
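The same rule can also be written as a declarative HorizontalPodAutoscaler object, which is handy if you keep all your descriptors in source control. This is a minimal sketch equivalent to the kubectl autoscale command above; save it to a file of your choosing and create it with kubectl create -f just like the deployment:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: command-svc
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: command-svc
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80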

We can ask for information on the autoscaling with the following command and monitor the scaling changes:

kubectl get hpa -w

Creating a load

I have deployed the command-svc pod and want to simulate load using a simple tool. For this I resorted to Apache JMeter; it's not a perfect tool, but it works well and, most importantly, it's free. I created a simple thread group with 40 users doing 100k requests against the command-svc from my desktop.
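If you do not want to set up JMeter, even a crude shell loop can generate enough load to trigger the autoscaler. This is only a rough sketch; the ELB hostname and endpoint path are placeholders for your own service:

while true; do
  curl -s -o /dev/null -X POST -H "Content-Type: application/json" \
    -d '{"example":"payload"}' http://YOUR-ELB-HOSTNAME:8080/your/endpoint
done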

This is the result when monitoring the autoscaler:

command-svc   Deployment/command-svc   1% / 80%   1         3         1          4m
command-svc   Deployment/command-svc   39% / 80%   1         3         1         6m
command-svc   Deployment/command-svc   130% / 80%   1         3         1         7m
command-svc   Deployment/command-svc   130% / 80%   1         3         1         7m
command-svc   Deployment/command-svc   130% / 80%   1         3         2         7m
command-svc   Deployment/command-svc   199% / 80%   1         3         2         8m
command-svc   Deployment/command-svc   183% / 80%   1         3         2         9m
command-svc   Deployment/command-svc   153% / 80%   1         3         2         10m
command-svc   Deployment/command-svc   76% / 80%   1         3         2         11m
command-svc   Deployment/command-svc   64% / 80%   1         3         2         12m
command-svc   Deployment/command-svc   67% / 80%   1         3         2         13m
command-svc   Deployment/command-svc   91% / 80%   1         3         2         14m
command-svc   Deployment/command-svc   91% / 80%   1         3         2         14m
command-svc   Deployment/command-svc   91% / 80%   1         3         3         14m
command-svc   Deployment/command-svc   130% / 80%   1         3         3         15m
command-svc   Deployment/command-svc   133% / 80%   1         3         3         16m
command-svc   Deployment/command-svc   130% / 80%   1         3         3         17m
command-svc   Deployment/command-svc   126% / 80%   1         3         3         18m
command-svc   Deployment/command-svc   118% / 80%   1         3         3         19m
command-svc   Deployment/command-svc   137% / 80%   1         3         3         20m
command-svc   Deployment/command-svc   82% / 80%   1         3         3         21m
command-svc   Deployment/command-svc   0% / 80%   1         3         3         22m
command-svc   Deployment/command-svc   0% / 80%   1         3         3         22m
command-svc   Deployment/command-svc   0% / 80%   1         3         1         22m

You can also see that it neatly scales down at the end once the load goes away again.

Pitfalls

I have noticed a few things about the autoscaling that are important to take into account:
1. The CPU percentage is based on the resource requests you define in your pods; if you don't define them, autoscaling won't work as expected
2. Make sure to have readiness and liveness probes in your containers, otherwise your pods might receive external requests before they are actually ready
3. For some reason I could only get TCP probes to work on AWS; HTTP probes failed for me with timeout exceptions, and I am unsure why

Conclusion

I hope this post helps people get the ultimate autoscaling setup for both your workloads and your cluster. This is a very powerful and dynamic setup on AWS in combination with the cluster autoscaler as described in my previous post: https://renzedevries.wordpress.com/2017/01/10/autoscaling-your-kubernetes-cluster-on-aws/


Deploying a Highly Available Kubernetes cluster to AWS using KOPS

In my previous posts I have talked a lot about deploying a Kubernetes cluster. For the most part I have used kube-aws from CoreOS, which has served me quite well. In the last few months, however, a lot has happened in the Kubernetes space, and a very interesting new tool called kops, a subproject of the Kubernetes project, has started to emerge.

Both the kube-aws tool and kops have started getting support for HA deployments (a requirement for production workloads) and cluster upgrades. One of the major advantages of kops is its ability to manage and maintain multiple clusters, because it stores the ‘cluster state’ in an S3 bucket.

In this post I will do a walkthrough of how to deploy a highly available cluster using kops. I will base this on the tutorial page here: https://github.com/kubernetes/kops/blob/master/docs/aws.md The main difference is that I will describe an HA deployment instead of a regular deployment.

Installing the toolset

Pretty much all of this is covered in the tutorial here (https://github.com/kubernetes/kops/blob/master/docs/aws.md). I am using a Mac and will use ‘brew install’ to get the needed command line tools installed. You need the AWS command line client (aws-cli), the Kubernetes client (kubectl) and the kops client installed.

Install the clients:
brew install awscli
brew install kubernetes-cli
brew install kops

Once these tools are installed please make sure to configure aws-cli, you can see here how: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials

Creating the cluster

First we need to point an environment variable at the S3 bucket where we are going to keep the kops state of all the deployed clusters. We do this as follows:

export KOPS_STATE_STORE=s3://my-kops-bucket-that-is-a-secret

This bucket is needed because the kops tool maintains the cluster state there. This means we can get an overview of all deployed clusters using the kops tool, which queries the file structure in the S3 bucket.
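Once one or more clusters have been created, you can list them directly from this state store:

kops get clusters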

Once the S3 bucket is created and the variable is set, we can go ahead and create the cluster as follows:

kops create cluster --name=dev.robot.mydomain.com --master-zones=eu-west-1a,eu-west-1b,eu-west-1c --zones=eu-west-1a,eu-west-1b,eu-west-1c --node-size=t2.micro --node-count=5

The parameters are relatively self-explanatory; it is however important that the name includes the fully qualified domain name, because kops will try to register the subdomain in the Route53 hosted zone. The most important parameter that makes the setup HA is specifying multiple availability zones: for each master zone it will deploy one master node, and it will spread the worker nodes across the specified zones as well.

The above command has created a cluster configuration that is now stored in the S3 bucket; the actual cluster is not launched yet. You can further edit the cluster configuration as follows:

kops edit cluster dev.robot.mydomain.com

Launching the cluster
After you have finished editing we can launch the cluster:

kops update cluster dev.robot.mydomain.com --yes

This will take a bit to complete, but after a while you should see roughly the following list of running EC2 instances, where we can see the nodes running in different availability zones:

master-eu-west-1c.masters.dev.robot.mydomain.com	eu-west-1c	m3.medium	running
master-eu-west-1b.masters.dev.robot.mydomain.com	eu-west-1b	m3.medium	running
master-eu-west-1a.masters.dev.robot.mydomain.com	eu-west-1a	m3.medium	running
nodes.dev.robot.mydomain.com	eu-west-1a	t2.micro	running
nodes.dev.robot.mydomain.com	eu-west-1b	t2.micro	running
nodes.dev.robot.mydomain.com	eu-west-1b	t2.micro	running
nodes.dev.robot.mydomain.com	eu-west-1c	t2.micro	running
nodes.dev.robot.mydomain.com	eu-west-1c	t2.micro	running
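Depending on your kops version, you can also let kops itself check whether the masters and nodes have come up correctly (a hedged example; older releases may not have this command yet):

kops validate cluster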

Kops will also have sorted out your kubectl configuration so that we can ask for all available nodes as below:

kubectl get nodes
NAME                                           STATUS         AGE
ip-172-20-112-108.eu-west-1.compute.internal   Ready,master   8m
ip-172-20-114-138.eu-west-1.compute.internal   Ready          6m
ip-172-20-126-52.eu-west-1.compute.internal    Ready          7m
ip-172-20-56-106.eu-west-1.compute.internal    Ready,master   7m
ip-172-20-58-2.eu-west-1.compute.internal      Ready          7m
ip-172-20-69-113.eu-west-1.compute.internal    Ready          7m
ip-172-20-75-48.eu-west-1.compute.internal     Ready,master   8m
ip-172-20-86-155.eu-west-1.compute.internal    Ready          7m

Chaos monkey

I have tried actually killing some of the master nodes to see if I could still schedule a load. The problem I faced here was that the cluster could still operate and the containers remained available, but I could not schedule new workloads. This was due to the ‘etcd’ cluster being deployed as part of the master nodes: suddenly the minimum number of nodes for the etcd cluster was no longer present. Most likely moving etcd out of the master nodes would increase the reliability further.

The good news is that once the master nodes recovered from the unexpected termination the cluster resumed regular operation.

Conclusion

I hope the above shows that it is now relatively easy to set up an HA Kubernetes cluster. In practice it's quite handy to have an HA cluster; the next step is to move etcd out to make the solution even more resilient.

AutoScaling your Kubernetes cluster on AWS

One of the challenges I have faced in the last few months is the autoscaling of my Kubernetes cluster. This works perfectly on Google Cloud, however as my cluster is deployed on AWS I have no such fortune. Recently, however, autoscaling support for AWS has become possible thanks to this contribution: https://github.com/kubernetes/contrib/tree/master/cluster-autoscaler/cloudprovider/aws

In this post I want to describe how you can autoscale your Kubernetes cluster based on the container workload. This is a very important principle, because normal AWS autoscaling cannot act on metrics available inside the cluster; it can only act on, for example, memory or CPU utilisation of the instances. What we want is that when a container cannot be scheduled on the cluster because there are not enough resources, the cluster gets scaled out.

Defining resource usage

One of the most important things to do when you define your deployments against Kubernetes is to specify the resource usage of your containers. In each deployment object it is therefore imperative that you specify the expected resource allocation; Kubernetes will then use this to place the pod on a node that has enough capacity. In this exercise we will deploy two containers that each at their limit require 1.5GB of memory, which looks as follows in a fragment of one of the deployment descriptors:

    spec:
      containers:
      - name: amq
        image: rmohr/activemq:latest
        resources:
          limits:
            memory: "1500Mi"
            cpu: "500m"

Setting up scaling

Given this, we will start out with a cluster of one node of type m3.medium, which has 3.75GB of memory. We do this on purpose with a limited initial cluster to test out our autoscaling.

If you execute kubectl get nodes we see the following response:

NAME                                       STATUS    AGE
ip-10-0-0-236.eu-west-1.compute.internal   Ready     10m

In order to apply autoscaling we need to deploy a specific deployment object and container that checks the Kubernetes cluster for unscheduled workloads and, if needed, triggers an AWS auto scaling group. This deployment object looks as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - image: gcr.io/google_containers/cluster-autoscaler:v0.4.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --nodes=MIN_SCALE:MAX_SCALE:ASG_NAME
          env:
            - name: AWS_REGION
              value: us-east-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"

Note: The image we are using is a google supplied image with the autoscaler script on there, you can check here for the latest version: https://console.cloud.google.com/kubernetes/images/tags/cluster-autoscaler?location=GLOBAL&project=google-containers&pli=1

Settings
In the above deployment object make sure to replace the MIN_SCALE and MAX_SCALE settings for the autoscaling and ensure the right auto scaling group name (ASG_NAME) is set. Please note that the minimum and maximum need to be allowed by the AWS auto scaling group itself, as the autoscaler cannot modify the auto scaling group limits.
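As an illustration, with the worker auto scaling group that shows up later in this post and a range of 1 to 4 nodes, the flag would look like this (substitute your own ASG name and limits):

            - --nodes=1:4:testcluster-AutoScaleWorker-19KN6Y4AR18Z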

AWS Policy
In AWS we need to ensure there is an IAM policy in place that allows the nodes to query the auto scaling groups and modify the desired capacity of the group. I have used the policy definition below, which is very broad:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}

Make sure to attach this to the role that is associated with the worker nodes; in my CoreOS kube-aws generated cluster this is something like ‘testcluster-IAMRoleController-GELKOS5QWHRU’, where testcluster is my cluster name.
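You can attach the policy to that role with the AWS CLI. A sketch, assuming the policy JSON above is saved as autoscaler-policy.json (the policy name is just a label you choose):

aws iam put-role-policy \
  --role-name testcluster-IAMRoleController-GELKOS5QWHRU \
  --policy-name cluster-autoscaler \
  --policy-document file://autoscaler-policy.json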

Deploying the autoscaler
Now let's deploy the autoscaler just like any other deployment object against the Kubernetes cluster:

kubectl create -f autoscaler.yaml

Let’s check next that the autoscaler is working:

kubectl get po
NAME                                  READY     STATUS    RESTARTS   AGE
cluster-autoscaler-2111878067-nhr01   1/1       Running   0          2m

We can also check the logs using kubectl logs cluster-autoscaler-2111878067-nhr01

No unschedulable pods
Scale down status: unneededOnly=true lastScaleUpTime=2017-01-06 08:44:01.400735149 +0000 UTC lastScaleDownFailedTrail=2017-01-06 08:44:01.400735354 +0000 UTC schedulablePodsPresent=false
Calculating unneded nodes
Node ip-10-0-0-135.eu-west-1.compute.internal - utilization 0.780000
Node ip-10-0-0-135.eu-west-1.compute.internal is not suitable for removal - utilization to big (0.780000)
Node ip-10-0-0-236.eu-west-1.compute.internal - utilization 0.954000
Node ip-10-0-0-236.eu-west-1.compute.internal is not suitable for removal - utilization to big (0.954000)

We can see that the autoscaler regularly checks the workload on the nodes, determines whether any of them can be scaled down, and checks whether additional worker nodes are needed.

Let’s try it out

Now that we have deployed our autoscaling container, let's start scheduling our workload against AWS. In this case we will deploy two objects, ActiveMQ and Cassandra, which both require a 1.5GB memory footprint. The combined deployment plus the system containers will cause the Kubernetes scheduler to determine there is no capacity available, and in this case Cassandra cannot be scheduled, as can be seen in the below snippet from kubectl describe po/cassandra-2599509460-g3jzt:

FailedScheduling	pod (cassandra-2599509460-g3jzt) failed to fit in any node

When we check in the logs of the autoscaler we can see the below:

Estimated 1 nodes needed in testcluster-AutoScaleWorker-19KN6Y4AR18Z
Scale-up: setting group testcluster-AutoScaleWorker-19KN6Y4AR18Z size to 2
Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"cassandra-2599509460-mt275", UID:"af53ac7b-d3ec-11e6-bd28-0add02d2d0c1", APIVersion:"v1", ResourceVersion:"2224", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up, group: testcluster-AutoScaleWorker-19KN6Y4AR18Z, sizes (current/new): 1/2

It is scheduling an additional worker by increasing the desired capacity of our auto scaling group in AWS. After a small wait we can see the additional node has been made available:

NAME                                       STATUS    AGE
ip-10-0-0-236.eu-west-1.compute.internal   Ready     27m
ip-10-0-0-68.eu-west-1.compute.internal    Ready     11s

And a short while after the node came up we also see that the pod with Cassandra has become active:

NAME                                  READY     STATUS    RESTARTS   AGE
amq-4187240469-hlkhh                  1/1       Running   0          20m
cassandra-2599509460-mt275            1/1       Running   0          8m
cluster-autoscaler-2111878067-nhr01   1/1       Running   0          11m

Conclusion

We have been able to autoscale our AWS-deployed Kubernetes cluster, which is extremely useful. I can use this in production to quickly scale my cluster out and down. But perhaps even more importantly, for development I can use it to run a minimum-size cluster during idle moments and scale back up to full capacity during workloads, saving me quite some money.

Deploying Docker containers to Kubernetes with Jenkins

After all my recent posts about deploying a Kubernetes cluster to AWS, the one step I still wanted to talk about is how you can deploy Docker containers to a Kubernetes cluster with a bit of automation. I will try to explain here how you can do this relatively simply by using Jenkins pipelines and some Groovy scripting 🙂

Pre-requisites
* Working Kubernetes cluster (see here: https://renzedevries.wordpress.com/2016/07/18/deploying-kubernetes-to-aws-using-jenkins/)
* Jenkins slave/master setup
* Kubectl tool installed and configured on the Jenkins master/slave and desktop
* Publicly accessible Docker images (AWS ECR for example see: https://renzedevries.wordpress.com/2016/07/20/publishing-a-docker-image-to-amazon-ecr-using-jenkins/)

What are we deploying
In order to deploy containers against Kubernetes there are two things that are needed. First I need to deploy the services that ensure we have ingress traffic via AWS ELBs; this also gives us an internal DNS lookup capability for service-to-service communication. Second I need to deploy the actual containers using Kubernetes Deployments.

In this post I will focus on mainly one service which is called ‘command-service’. If you want to read a bit more about the services that I deploy you can find that here: https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/

Creating the services

The first task is to create the actual Kubernetes service for the command-service. The service descriptors are relatively simple in my case; the command-service needs to be publicly load balanced, so I want Kubernetes to create an AWS ELB for me. I will deploy this service by first checking out the git repository where I keep the service descriptors as Kubernetes yaml files, and then use a Jenkins pipeline with some Groovy scripting to deploy it.

The service descriptor for the publicly load balanced command-svc looks like this. It defines a load balancer that is backed by all pods that have the label ‘app’ with value ‘command-svc’; Kubernetes attaches those pods to the AWS ELB backing this service.

apiVersion: v1
kind: Service
metadata:
  name: command-svc
  labels:
    app: command-svc
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: command-svc
  selector:
    app: command-svc

In order to actually create this service I use the Jenkins pipeline code below. In this code I use the apply command, because the services are not very likely to change and this way it works both in clean and already existing environments. Because I constantly create new environments and sometimes update existing ones, I want all my scripts to be runnable multiple times regardless of the current cluster/deployment state.

import groovy.json.*

node {
    stage 'prepare'
    deleteDir()

    git credentialsId: 'bb420c66-8efb-43e5-b5f6-583b5448e984', url: 'git@bitbucket.org:oberasoftware/haas-build.git'
    sh "wget http://localhost:8080/job/kube-deploy/lastSuccessfulBuild/artifact/*zip*/archive.zip"
    sh "unzip archive.zip"
    sh "mv archive/* ."

    stage "deploy services"
    sh "kubectl apply -f command-svc.yml --kubeconfig=kubeconfig"
    waitForServices()
}

Waiting for creation
One of the challenges I faced, though, is that a number of the containers I want to deploy depend on these service definitions. However, it takes a bit of time to deploy these services and for the ELBs to be fully created. So I have written a small piece of waiting code in Groovy that checks whether the services are up and running. It is called via the ‘waitForServices()’ method in the pipeline; you can see the code below:

def waitForServices() {
  sh "kubectl get svc -o json > services.json --kubeconfig=kubeconfig"

  while(!toServiceMap(readFile('services.json')).containsKey('command-svc')) {
        sleep(10)
        echo "Services are not yet ready, waiting 10 seconds"
        sh "kubectl get svc -o json > services.json --kubeconfig=kubeconfig"
  }
  echo "Services are ready, continuing"
}

@com.cloudbees.groovy.cps.NonCPS
Map toServiceMap(servicesJson) {
  def json = new JsonSlurper().parseText(servicesJson)

  def serviceMap = [:]
  json.items.each { i ->
    def serviceName = i.metadata.name
    def ingress = i.status.loadBalancer.ingress
    if(ingress != null) {
      def serviceUrl = ingress[0].hostname
      serviceMap.put(serviceName, serviceUrl)
    }
  }

  return serviceMap
}

This should not complete until at least all the services are ready for usage, in this case my command-svc with its ELB backing.

Creating the containers

The next step is actually the most important: deploying the actual container. In this example I will be using the Deployment objects that have been available since Kubernetes 1.2.x.

Let’s take a look again at the command-svc container that I want to deploy. I use again the yaml file syntax for describing the deployment object:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: command-svc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: command-svc
    spec:
      containers:
      - name: command-svc
        image: account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:latest
        ports:
        - containerPort: 8080
        env:
        - name: amq_host
          value: amq
        - name: SPRING_PROFILES_ACTIVE
          value: production

Let's put all of that together for the rest of my deployments. In this case I have one additional container that I deploy, the edge-service. Using Jenkins pipelines this looks relatively simple:

    stage "deploy"
    sh "kubectl apply -f kubernetes/command-deployment.yml --kubeconfig=kubeconfig"

I currently do not have any active health checking at the end of the deployment; I am still planning on adding it. For now I just check that the pods and deployments are properly deployed, which you can also do by simply running this command:
kubectl get deployments

This will yield something like below:

NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
command-svc   1         1         1            1           1m

If you check the running pods with kubectl get po you can see the deployment has scheduled a single pod:

NAME                          READY     STATUS    RESTARTS   AGE
command-svc-533647621-e85yo   1/1       Running   0          2m
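If you do want the pipeline to block until the rollout has actually finished, newer kubectl versions offer a rollout status command that waits for the deployment to complete. A minimal sketch of the extra pipeline step:

    sh "kubectl rollout status deployment/command-svc --kubeconfig=kubeconfig"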

Conclusion

I hope this article has taken away a bit of the difficulty of deploying your containers against Kubernetes. It can be done relatively simply; of course it's not production grade, but it shows on a very basic level how you can accomplish this task with some basic scripting (Groovy) and Jenkins.

Upgrades
In this particular article I have not zoomed in on upgrading a cluster or the containers running on it. I will discuss this in a future blog post, where I will zoom in on the particulars of doing rolling updates of your containers and eventually address the upgrade of the cluster itself on AWS.

Publishing a Docker image to Amazon ECR using Jenkins

I wanted to do a quick post, because some recent posts have led to questions about how I actually make a Docker container available on AWS. Luckily Amazon has a solution for this, called Amazon ECR (EC2 Container Registry).

How to push a container

Let me share a few quick steps on how you can push your Docker container to the Amazon ECR repository.

Step1: Creating a repository
The first step is to create a repository that your Docker container can be pushed to. A single repository can contain multiple versions of a Docker container, with a maximum of 2k versions; for different Docker containers you would create individual repositories.

In order to create a repository for, let's say, our test-svc Docker container, run this command using the AWS CLI:

aws ecr create-repository --repository-name test-svc

Please note the returned repositoryUri; we will need it in the next steps.
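The create-repository call returns a small JSON document that looks roughly like the example below (the exact fields may differ slightly per CLI version, and the account id is a placeholder):

{
    "repository": {
        "registryId": "aws_account_id",
        "repositoryName": "test-svc",
        "repositoryArn": "arn:aws:ecr:eu-west-1:aws_account_id:repository/test-svc",
        "repositoryUri": "aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc"
    }
}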

Step2: Logging in to ECR
In order to be able to push containers via Docker, you need to log in to the AWS ECR repository. You can do this by running this AWS CLI command:

aws ecr get-login

This will give an output something like this:

docker login -u AWS -p password -e none https://aws_account_id.dkr.ecr.eu-west-1.amazonaws.com

You need to take that output and run it in the console to do the actual login so that you can push your container.

Step3: Pushing the container
Now that we are authenticated we can start pushing the Docker container. Let's make sure that we tag the container we want to push first:

docker tag test-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest

And after this we push the container as follows:

docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest

Note: Please make sure to replace aws_account_id with your actual AWS account id. This repository URL with the account ID is also returned when your repository was created in Step 1.

Automating in a Jenkins job

For people that have read my other posts: I tend to automate everything via Jenkins, and that also includes publishing Docker containers to Amazon ECR. This can be done quite simply by creating a small Jenkins job using the Jenkinsfile below. I ask for input to confirm that publishing is wanted; after that input the container gets published to AWS ECR:

node {
    stage 'build-test-svc'
    //this triggers the Jenkins job that builds the container
    //build 'test-svc'

    stage 'Publish containers'
    shouldPublish = input message: 'Publish Containers?', parameters: [[$class: 'ChoiceParameterDefinition', choices: 'yes\nno', description: '', name: 'Deploy']]
    if(shouldPublish == "yes") {
     echo "Publishing docker containers"
     sh "\$(aws ecr get-login)"

     sh "docker tag test-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/state-svc:latest"
     sh "docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest"
    }
}

Note: There is also a Jenkins plugin that can publish to ECR, however at the time of writing it does not support the eu-west region in AWS ECR yet and gives a login issue.

Conclusion

Hope the above helps people that want to publish their Docker containers to AWS ECR 🙂 If you have any questions do not hesitate to reach out to me via the different channels.

Deploying Kubernetes to AWS using Jenkins

Based on some of my previous posts, I am quite busy creating a complete Continuous Integration pipeline using Docker and Jenkins pipelines. For those who have read my previous blog posts: I am a big fan of Kubernetes for Docker container orchestration, and what I ideally want to achieve is a full CI pipeline in which even the Kubernetes cluster itself gets deployed.

In this blog post I will detail how you can set up Jenkins to be able to deploy a Kubernetes cluster. I wanted to use CloudFormation scripts as that makes the most sense for a structured deployment method, and it seems the easiest way to do this is using the kube-aws tool from CoreOS. This tool can generate a CloudFormation script that we can use to deploy a CoreOS-based Kubernetes cluster on AWS.

Preparing Jenkins

I am still using a Docker container to host my Jenkins installation (https://renzedevries.wordpress.com/2016/06/30/building-containers-with-docker-in-docker-and-jenkins/). If you just want to know how to use the kube-aws tool from Jenkins please skip this section :).

In order to use the kube-aws tool I need to modify my Jenkins Docker container; I need to do three things for that:

  1. Install the aws command line client
    Installing the aws-cli is actually relatively simple; I just need to make sure python and python-pip are installed, then I can simply install the awscli python package. These two packages are available from the apt-get repository, so regardless of whether you are running Jenkins in Docker or on a regular Ubuntu box you can install aws-cli as follows:

    RUN apt-get install -qqy python
    RUN apt-get install -qqy python-pip
    RUN pip install awscli
    

  2. Install the kube-aws tool
    Next we need to install the kube-aws tool from CoreOS. This is a bit more tricky as there is nothing available in the default package repositories, so instead I simply download a specific release from the CoreOS site using wget, unpack it, and then move the tool binary to /usr/local/bin:

    RUN wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz
    RUN tar zxvf kube-aws-linux-amd64.tar.gz
    RUN mv linux-amd64/kube-aws /usr/local/bin
    

  3. Provide AWS keys
Because I do not want to prebake the AWS identity keys into the Jenkins image, I will instead inject them as environment variables during Docker container startup. Because I start Jenkins using a docker-compose start sequence, I can simply modify my compose file to inject the AWS identity keys, which looks as follows:

jenkins:
  container_name: jenkins
  image: myjenkins:latest
  ports:
    - "8080:8080"
  volumes:
    - /Users/renarj/dev/docker/volumes/jenkins:/var/jenkins_home
    - /var/run:/var/run:rw
  environment:
    - AWS_ACCESS_KEY_ID=MY_AWS_ACCESS_KEY
    - AWS_SECRET_ACCESS_KEY=MY_AWS_ACCESS_KEY_SECRET
    - AWS_DEFAULT_REGION=eu-west-1

Putting it all together
The Jenkins Dockerfile used to install all the tooling looks as follows:

from jenkinsci/jenkins:latest

USER root
RUN apt-get update -qq
RUN apt-get install -qqy apt-transport-https ca-certificates
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy docker-engine
RUN usermod -a -G staff jenkins
RUN apt-get install -qqy python
RUN apt-get install -qqy python-pip
RUN pip install awscli
RUN wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz
RUN tar zxvf kube-aws-linux-amd64.tar.gz
RUN mv linux-amd64/kube-aws /usr/local/bin

USER jenkins

Generating the CloudFormation script

The kube-aws tool has relatively simple input parameters and is straightforward to use. What it does is generate a CloudFormation script you can use to deploy a Kubernetes stack on AWS.

The tool has in essence three phases
1. Initialise the cluster settings
2. Render the CloudFormation templates
3. Validate and Deploy the cluster

To start we need to initialise the tool; we do this by specifying a number of things. We need to give the name of the cluster, specify what the external DNS name will be (for the kube console for example), and specify the name of the key pair to use for the nodes that will be created.

kube-aws init --cluster-name=robo-cluster \
   --external-dns-name=cluster.mydomain.com \
   --region=eu-west-1 \
   --availability-zone=eu-west-1c \
   --key-name=kubelet-key \
   --kms-key-arn='arn:aws:kms:eu-west-1:11111111:key/dfssdfsdfsfsdfsfsdfdsfsdfsdfs'

You will also need a KMS encryption key from AWS (see here how to create one: http://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html); the ARN of that key needs to be specified in the above call to the kube-aws tool.

Rendering CloudFormation templates
The next step is actually very simple: we need to render the CloudFormation templates, which we do by executing the command below:

kube-aws render

This renders a number of files in the local directory; most importantly you have cluster.yaml, which contains all the configuration settings used in the CloudFormation template. The CloudFormation template itself is available as well under the filename stack-template.json.

Setting nodeType and number of nodes
Before we deploy we need to set some typical settings in the generated cluster.yaml settings file. The main settings I want to change are the AWS instance type and the number of Kubernetes worker nodes. Because I want to automate the deployment, I have chosen to do a simple in-line replacement of values using sed.

With the code below I change the workerInstanceType to an environment variable called ‘INSTANCE_TYPE’ and the workerCount property (the number of kube nodes) to an environment variable ‘WORKER_COUNT’.

sed -i '''s/#workerCount: 1/workerCount: '''$WORKER_COUNT'''/''' cluster.yaml
sed -i '''s/#workerInstanceType: m3.medium/workerInstanceType: '''$INSTANCE_TYPE'''/''' cluster.yaml

The reason I use environment variables is because this way I can use Jenkins job input parameters later on to specify them. This way when deploying a cluster I can specify the values on triggering of the job.

The last step is to validate that the changes we have made are correct, which we do as follows:

kube-aws validate

Deploying the Kubernetes cluster to AWS

The next and final step is to actually run the CloudFormation scripts. The kube-aws tool has a handy command for this; running it will execute the CloudFormation template. Simply run this:

kube-aws up

You are of course free to run the actual template yourself manually via the aws-cli or the web console. I do like this handy shortcut built into the kube-aws tool for automation purposes. All the files for running this manually are available after the render and validate steps described above; the output you need consists of cluster.yaml, stack-template.json, the userdata/ folder and the credentials folder.

Kubeconfig
After the kube-aws command completes it will have written a valid kubeconfig file to the local working directory. This kubeconfig can be used to access the kubernetes cluster. The cluster will take roughly an additional 5-10 minutes to be available but once it is up you can execute the regular kubernetes commands to validate your cluster is up and running:

kubectl --kubeconfig=kubeconfig get nodes

Creating a Jenkins Job

I want to put all of the above together in a single Jenkins job. For this I have created a Jenkins pipeline file that contains the entire process shown in smaller pieces above. On top of this I introduced a multi-stage Jenkins pipeline:
1. Initialise the tool
2. Change the cluster configuration based on input parameters
3. Archive the generated cloudformation scripts
4. Deploy the cluster
5. Destroy the cluster

Input parameters
In the Jenkins pipeline I have defined multiple input parameters. These allow me to customise the worker count and instance type as described before. You can use them in two ways: you can hard-code them in a freestyle job, or if you want to use the new-style build pipelines in Jenkins you can use this:

   WORKER_COUNT = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: '4', description: '', name: 'WORKER_COUNT']]
   INSTANCE_TYPE = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: 't2.micro', description: '', name: 'INSTANCE_TYPE']]

For steps 4 and 5 I use an additional input step in Jenkins to check whether the cluster really needs to be deployed. An additional fifth step is also introduced to optionally allow destroying the cluster.

The full pipeline looks like this:

node {
   stage 'Kube-aws init'
   deleteDir()

   sh "kube-aws init --cluster-name=robo-cluster \
   --external-dns-name=kube.robot.renarj.nl \
   --region=eu-west-1 \
   --availability-zone=eu-west-1c \
   --key-name=kube-key \
   --kms-key-arn='arn:aws:kms:eu-west-1:11111111:key/dfssdfsdfsfsdfsfsdfdsfsdfsdfs'"

   stage "Kube-aws render"

   WORKER_COUNT = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: '4', description: '', name: 'WORKER_COUNT']]
   INSTANCE_TYPE = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: 't2.micro', description: '', name: 'INSTANCE_TYPE']]

   sh "kube-aws render"
   sh "sed -i '''s/#workerCount: 1/workerCount: '''$WORKER_COUNT'''/''' cluster.yaml"
   sh "sed -i '''s/#workerInstanceType: m3.medium/workerInstanceType: '''$INSTANCE_TYPE'''/''' cluster.yaml"
   sh "kube-aws validate"

   stage "Archive CFN"
   step([$class: 'ArtifactArchiver', artifacts: 'cluster.yaml,stack-template.json,credentials/*,userdata/*', fingerprint: true])

   stage "Deploy Cluster"
   shouldDeploy = input message: 'Deploy Cluster?', parameters: [[$class: 'ChoiceParameterDefinition', choices: 'yes\nno', description: '', name: 'Deploy']]
   if(shouldDeploy == "yes") {
    echo "deploying Kubernetes cluster"
    sh "kube-aws up"
    step([$class: 'ArtifactArchiver', artifacts: 'kubeconfig', fingerprint: true])
   }

   stage "Destroy cluster"
   shouldDestroy = input message: 'Destroy the cluster?', parameters: [[$class: 'BooleanParameterDefinition', defaultValue: false, description: '', name: 'Destroy the cluster']]
   if(shouldDestroy) {
    sh "kube-aws destroy"
   }
}

In Jenkins you can create a new job of type ‘Pipeline’ and then simply copy the above Jenkinsfile script into the job. You can also check this file into source control and let Jenkins scan your repositories for the presence of these files using the Pipeline Multibranch capability.

The pipeline stage view will roughly look as follows after the above (screenshot omitted: Jenkins stage view showing the init, render, archive, deploy and destroy stages).

Conclusion

I hope the above shows people one way of automating the deployment of a Kubernetes cluster. Please bear in mind that this is not ready for a production setup and will require more work. I hope it demonstrates the possibilities of deploying and automating a Kubernetes cluster in a simple way; however, it is still lacking significant steps like doing rolling upgrades of existing clusters, deploying parallel clusters, cluster expansion, etc.

But for now for my use-case, it helps me to quickly and easily deploy a Kubernetes cluster for my project experiments.

Deploying a Docker container to Kubernetes on Amazon AWS

Based on recent posts I am still very busy with my robots; the goal I am working on now is to create a multi-robot interaction solution. The approach I am taking here is to utilise the cloud as a means of brokering between the robots. The architecture I have in mind will be based on a reactive / event-driven framework that the robots will use for communication.

One of the foundations for this mechanism is to utilise the cloud and an event-driven architecture. In another post I will give some more detail about how the robots will actually interact, but in this post I wanted to give some information about the specifics of my cloud architecture and how to deploy such a cluster of microservices using Docker and Kubernetes.

Deploying to the cloud

The robot framework will have a significant number of microservices and dependent supporting services that I need to deploy. With this in mind I have chosen to go for a Docker / microservices based architecture, where in the future I can easily extend the number of services to be deployed and simply grow the cluster if needed.

I have quite some experience with Docker by now and have really grown fond of Kubernetes as an orchestrator, so that will be my pick, as it is so easy to administer in the cloud. There are other orchestrators, but they seem to be earlier in their development, although the AWS ECS service does seem to be getting close.

Note to Google: I have played with Kubernetes on Google Cloud before and it's actually a perfect fit; it is super easy to deploy containers and run your Kubernetes cluster there. However, Google does not allow individual developers (like me) without their own company a Cloud account in the EU because of VAT rules. So unfortunately for this exercise I will have to default to Amazon AWS. I hope Google reads this so they can fix it quickly!

Creating a Kubernetes cluster on AWS

In this next part I will demonstrate how to deploy the most important service, the MQTT one, to a Kubernetes cluster.

In order to get started I need to first create an actual Kubernetes cluster on AWS. This is relatively simple, I just followed the steps on this page: http://kubernetes.io/docs/getting-started-guides/aws/

Prerequisite: Amazon AWS CLI is installed and configured http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html

I used the following variables to create the cluster, which will be a 4 node cluster using M3 Medium instances:

export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=eu-west-1c
export NUM_NODES=2
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export AWS_S3_REGION=eu-west-1
export AWS_S3_BUCKET=robot-kubernetes-artifacts
export INSTANCE_PREFIX=robo

After that it is simply a case of running this:

wget -q -O - https://get.k8s.io | bash

If you already previously ran above command, a directory called ‘kubernetes’ will have been created and in there you can control the cluster using these commands:
For creating the cluster (only if you already ran above command once before):

cluster/kube-up.sh

For stopping the cluster:

cluster/kube-down.sh

After the above commands have been run your cluster should have been created. For Kubernetes you still need one small tool, the Kubernetes command line client called ‘kubectl’. The folder that was downloaded with Kubernetes ships with a client; you can simply add it to your path as follows:

# OS X
export PATH=/Users/rdevries/kubernetes/platforms/darwin/amd64:$PATH

# Linux
export PATH=/Users/rdevries/kubernetes/platforms/linux/amd64:$PATH

You should be able to check your cluster is up and running with the following:

kubectl get nodes
NAME                                         STATUS    AGE
ip-172-20-0-115.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-116.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-117.eu-west-1.compute.internal   Ready     4m
ip-172-20-0-118.eu-west-1.compute.internal   Ready     4m

Using Amazon ECR

In order to get the containers to deploy on Kubernetes I have to make my containers available to Kubernetes and Docker. I could use Docker Hub to distribute my containers, and this is actually fine for MQTT, which I have made available on Docker Hub (https://hub.docker.com/r/renarj/mqtt/).

However, some of the future containers are not really meant for public availability, so instead I am opting to upload the MQTT container to the new Amazon EC2 Container Registry, just to test out how simple it is.

We can simply use the AWS CLI and create a container repository for MQTT:
aws ecr create-repository --repository-name mqtt

Next we can build the container from this git repo: https://github.com/renarj/moquette-spring-docker

git clone https://github.com/renarj/moquette-spring-docker.git
cd moquette-spring-docker
mvn clean package docker:build

Now we need to upload the container to AWS; these instructions will vary slightly for everyone because of the account id being used. First we need to tag the container, then log in to AWS ECR, and last push the container.
1. Tag container docker tag mqtt/moquette-spring:latest 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest (Replace 1111 with your AWS account id)
2. Login to AWS with the command returned by aws ecr get-login --region eu-west-1
3. Push container: docker push 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest

This should make the container available in the Kubernetes cluster.
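To double-check that the push actually landed in the registry, you can list the images in the repository with the AWS CLI:

aws ecr list-images --repository-name mqtt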

Deploying the containers
In order to deploy the containers I have to create a small Yaml file for the MQTT service and the Container to be deployed.

The first part is to tell Kubernetes how to deploy the actual container, we can do this with this simple Yaml file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: mqtt
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mqtt
    spec:
      containers:
      - name: mqtt
        image: 1111.dkr.ecr.eu-west-1.amazonaws.com/mqtt:latest
        ports:
        - containerPort: 1883
        imagePullPolicy: Always

If you look at the above sample there are two important aspects; one is the image location, which is the same as my ECR repository URL. If you do not know the URL you can always execute the AWS CLI command aws ecr describe-repositories, which will return a list of all available repositories and their URLs.

Note: I have noticed that in older versions of Kubernetes there are some permission errors when using ECR as the image repository; I am not sure exactly why, but later versions of Kubernetes (1.2.4+) seem to work properly.

Now that we have described how the container needs to be deployed, we want to tell Kubernetes how it can be made available. I will load balance the MQTT service using an ELB; the nice part about Kubernetes is that it can arrange all of that for me.

In order to tell Kubernetes and as a result AWS to create a loadbalancer we define a Kubernetes service with type ‘LoadBalancer’:

apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: LoadBalancer
  ports:
  - port: 1883
    targetPort: 1883
    protocol: TCP
    name: mqtt
  selector:
    app: mqtt

Using this we should have a load balancer available after the service is created. In order to find out the public address of the service you can describe the service in Kubernetes and find the endpoint like this:

kubectl describe svc/mqtt
Name:           mqtt
Namespace:      default
Labels:         app=mqtt
Selector:       app=mqtt
Type:           LoadBalancer
IP:         10.0.225.68
LoadBalancer Ingress:   a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com
Port:           mqtt    1883/TCP
NodePort:       mqtt    30531/TCP
Endpoints:      10.244.1.5:1883
Session Affinity:   None
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  2s        2s      1   {service-controller }           Normal      CreatingLoadBalancer    Creating load balancer
  0s        0s      1   {service-controller }           Normal      CreatedLoadBalancer Created load balancer

Please note the LoadBalancer Ingress address, this is the publicly available LB for the MQTT service.

Testing MQTT on Kubernetes

Now that our cluster is deployed and we have a load balanced MQTT container running, how can we give this a try? For this I have used a simple node.js package called mqtt-cli. Simply run npm install mqtt-cli -g to install the package.

After this you can send a test-message to our load balanced mqtt container using this command with a -w at the end to watch the topic for incoming messages:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "test message" -w

Now let’s try in a new console window to again send a message but now without the ‘-w’ at the end:

mqtt-cli a9d650446271e11e6b3a30afb1473159-632581711.eu-west-1.elb.amazonaws.com /test/Hello "Message from another window"

If everything worked well it should now have sent the message to the other window, and voila, we have a working Kubernetes cluster with a load balanced MQTT broker on it 🙂

Next steps

I will be writing a next blog post soon on how all of the above has helped me connect my robots to the cloud. I hope the above info helps some people and shows that Kubernetes and Docker are really not at all scary to use in practice 🙂