Versioning and deploying Docker containers using Kubernetes and Jenkins

In the last few posts the main comment that keeps coming back is ‘you should not use latest’. I totally agree, and in this blog I will finally do something about it 🙂 I mainly used the setup for development purposes, but in production I need something more reliable, meaning versioned deployments.

To do this there are two parts: first I have to create a build job that can version the containers, and second I need a build job that rolls out the updated container.

Versioning and Releasing

Using the Jenkins pipeline below I have a build job that uses a Jenkins input parameter to define the release version. To use this, create a Jenkins pipeline job with one input parameter called ‘RELEASE_VERSION’. I keep the default value at ‘dev-latest’ for development purposes, but when releasing we obviously need to use a sensible version number.

node {
    // Trigger the job that builds the service and its Docker image
    stage 'build'
    build 'home projects/command-svc/master'

    // Log in to ECR and push a dev-latest tag for the test environment
    stage 'test-deploy'
    sh "\$(aws ecr get-login)"
    sh "docker tag home-core/command-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:dev-latest"
    sh "docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:dev-latest"

    // Run the test suite, which deploys against the test cluster
    stage 'qa'
    build 'home projects/command-svc-tests/master'

    // Tag and push the image with the fixed release version from the job parameter
    stage 'Publish containers'
    sh "docker tag home-core/command-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:${RELEASE_VERSION}"
    sh "docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:${RELEASE_VERSION}"
}
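
The ‘RELEASE_VERSION’ above comes from the parameter defined on the Jenkins job itself. If you prefer declaring the parameter from the pipeline script instead of the job configuration, a minimal sketch could look like this (an assumption on my part, using the same StringParameterDefinition class style that appears later in this series):

// Sketch: declare the RELEASE_VERSION parameter from the pipeline itself
properties([[$class: 'ParametersDefinitionProperty', parameterDefinitions: [[$class: 'StringParameterDefinition', defaultValue: 'dev-latest', description: 'Version tag for the released container', name: 'RELEASE_VERSION']]]])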

Now when I trigger the build job, Jenkins asks me to input the version for releasing the container. For this article let’s use version ‘0.0.1’. When running the build job, Jenkins goes through a few stages:
1. Building the container
2. Pushing a dev-latest version
3. Running tests which deploy a container against the test cluster
4. Releasing the container using a fixed version

In the last stage I release the container using the input parameter specified when triggering the job. The build job does not actually deploy to production; that is, for the time being, still a manual action.

Deploying to Kubernetes

For deploying to production I have done an initial deployment of the service using the deployment descriptor below. I started out by deploying version ‘0.0.1’, which I built above.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: command-svc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: command-svc
    spec:
      containers:
      - name: command-svc
        image: aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:0.0.1
        ports:
        - containerPort: 8080
        env:
        - name: amq_host
          value: amq
        - name: SPRING_PROFILES_ACTIVE
          value: production
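
For the initial rollout I can apply this descriptor with kubectl, the same way the deployment job later in this series does (assuming the file is saved as command-deployment.yml, the name I use there):

kubectl apply -f command-deployment.yml --kubeconfig=kubeconfig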

Doing a rolling update

This works fine for an initial deployment, but if I want to upgrade my production containers I need something more. In production, in essence, I just want to upgrade the container image version. This is a relatively simple operation with Kubernetes. Let’s assume I have released a newer version of the container, version ‘0.0.2’.

In order to update the container in Kubernetes I can simply do a rolling update by changing the image of the Deployment object as follows:

kubectl set image deployment/command-svc command-svc=aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:0.0.2
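
If you want to watch the rollout progress before declaring the upgrade done, kubectl can report on it (a quick check, assuming a kubectl version that supports the rollout subcommand):

kubectl rollout status deployment/command-svc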

I am not yet integrating this into my build pipeline, as I want a production upgrade to remain a conscious decision. But once all the quality gates are in place, there is no reason not to automate this step as well. More on this in future blog posts.


Deploying Docker containers to Kubernetes with Jenkins

After all my recent posts about deploying a Kubernetes cluster to AWS, the one step I still wanted to cover is how you can deploy Docker containers to a Kubernetes cluster with a bit of automation. I will explain how you can do this relatively simply using Jenkins pipelines and some Groovy scripting 🙂

Pre-requisites
* Working Kubernetes cluster (see here: https://renzedevries.wordpress.com/2016/07/18/deploying-kubernetes-to-aws-using-jenkins/)
* Jenkins slave/master setup
* Kubectl tool installed and configured on the Jenkins master/slave and desktop
* Publicly accessible Docker images (AWS ECR for example see: https://renzedevries.wordpress.com/2016/07/20/publishing-a-docker-image-to-amazon-ecr-using-jenkins/)

What are we deploying
In order to deploy containers against Kubernetes, two things are needed. First I need to deploy the services that provide ingress traffic via AWS ELBs and also give us an internal DNS lookup capability for service-to-service communication. Second I need to deploy the actual containers using Kubernetes Deployments.

In this post I will focus on mainly one service which is called ‘command-service’. If you want to read a bit more about the services that I deploy you can find that here: https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/

Creating the services

The first task is to create the actual Kubernetes service for the command-service. The service descriptors are relatively simple in my case; the command-service needs to be publicly load balanced, so I want Kubernetes to create an AWS ELB for me. I deploy this service by first checking out my git repository, which contains the service descriptors as Kubernetes YAML files. I then use a Jenkins pipeline with some Groovy scripting to deploy it.

The service descriptor for the publicly load-balanced command-svc looks like this. The service selects all pods that have the label ‘app’ with value ‘command-svc’ and exposes them through the AWS ELB backing this service.

apiVersion: v1
kind: Service
metadata:
  name: command-svc
  labels:
    app: command-svc
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: command-svc
  selector:
    app: command-svc
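
Once the service exists you can look up the ELB hostname that Kubernetes provisioned, for example with a describe call; the ‘LoadBalancer Ingress’ field shows the AWS ELB address:

kubectl describe svc command-svc --kubeconfig=kubeconfig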

In order to actually create this service I use the Jenkins pipeline code below. I use the apply command because the services are not very likely to change, and this way it works both in clean and already existing environments. Because I constantly create new environments and sometimes update existing ones, I want all my scripts to be runnable multiple times regardless of the current cluster/deployment state.

import groovy.json.*

node {
    stage 'prepare'
    deleteDir()

    // Check out the repository containing the Kubernetes service descriptors
    git credentialsId: 'bb420c66-8efb-43e5-b5f6-583b5448e984', url: 'git@bitbucket.org:oberasoftware/haas-build.git'
    // Fetch the kubeconfig and cluster artifacts from the cluster deployment job
    sh "wget http://localhost:8080/job/kube-deploy/lastSuccessfulBuild/artifact/*zip*/archive.zip"
    sh "unzip archive.zip"
    sh "mv archive/* ."

    stage "deploy services"
    sh "kubectl apply -f command-svc.yml --kubeconfig=kubeconfig"
    waitForServices()
}

Waiting for creation
One of the challenges I faced, though, is that a number of the containers I want to deploy depend on these service definitions. However, it takes a bit of time for these services to deploy and for the ELBs to be fully created. So I wrote a small bit of waiting code in Groovy that checks whether the services are up and running. It is called via the ‘waitForServices()’ method in the pipeline; you can see the code below:

def waitForServices() {
  // Dump the current service definitions to a JSON file
  sh "kubectl get svc -o json > services.json --kubeconfig=kubeconfig"

  // Keep polling until the command-svc service has an ELB hostname assigned
  while(!toServiceMap(readFile('services.json')).containsKey('command-svc')) {
        echo "Services are not yet ready, waiting 10 seconds"
        sleep(10)
        sh "kubectl get svc -o json > services.json --kubeconfig=kubeconfig"
  }
  echo "Services are ready, continuing"
}

// NonCPS: JsonSlurper results are not serializable, so this helper must run outside the CPS transform
@com.cloudbees.groovy.cps.NonCPS
Map toServiceMap(servicesJson) {
  def json = new JsonSlurper().parseText(servicesJson)

  // Build a map of service name -> ELB hostname, only for services with an ingress assigned
  def serviceMap = [:]
  json.items.each { i ->
    def serviceName = i.metadata.name
    def ingress = i.status.loadBalancer.ingress
    if(ingress != null) {
      def serviceUrl = ingress[0].hostname
      serviceMap.put(serviceName, serviceUrl)
    }
  }

  return serviceMap
}

This does not complete until at least all the services are ready for use, in this case my command-svc with its backing ELB.
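
One caveat: the loop above waits indefinitely if the ELB never materialises. A bounded variant is a small change; the sketch below (capping at 30 attempts, a number I picked arbitrarily) fails the build instead of hanging forever:

def waitForServices(int maxAttempts = 30) {
  for(int attempt = 0; attempt < maxAttempts; attempt++) {
    // Refresh the service list and stop waiting once command-svc has its ELB
    sh "kubectl get svc -o json > services.json --kubeconfig=kubeconfig"
    if(toServiceMap(readFile('services.json')).containsKey('command-svc')) {
      echo "Services are ready, continuing"
      return
    }
    echo "Services are not yet ready, waiting 10 seconds"
    sleep(10)
  }
  // Fail the build explicitly rather than blocking the executor forever
  error "Services did not become available in time"
}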

Creating the containers

The next step is the most important: deploying the actual container. In this example I will be using the Deployment objects that have been available since Kubernetes 1.2.x.

Let’s take another look at the command-svc container that I want to deploy. I again use YAML syntax to describe the deployment object:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: command-svc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: command-svc
    spec:
      containers:
      - name: command-svc
        image: account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:latest
        ports:
        - containerPort: 8080
        env:
        - name: amq_host
          value: amq
        - name: SPRING_PROFILES_ACTIVE
          value: production

Let’s put all that together for the rest of my deployments. In this case I have one additional container to deploy, the edge-service. Using Jenkins pipelines this looks relatively simple:

    stage "deploy"
    sh "kubectl apply -f kubernetes/command-deployment.yml --kubeconfig=kubeconfig"

I currently do not have any active health checking at the end of the deployment; I am still planning on it. For now I just check that the pods and deployments are properly deployed. You can also do this yourself by simply running this command:
kubectl get deployments

This will yield something like below:

NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
command-svc   1         1         1            1           1m

If you check the running pods with kubectl get po you can see the deployment has scheduled a single pod:

NAME                          READY     STATUS    RESTARTS   AGE
command-svc-533647621-e85yo   1/1       Running   0          2m

Conclusion

I hope this article has taken away a bit of the difficulty of deploying your containers against Kubernetes. It can be done relatively simply; of course it’s not production grade, but it shows on a very basic level how, with some basic scripting (Groovy), you can accomplish this task using just Jenkins.

Upgrades
In this particular article I have not zoomed in on upgrading a cluster or the containers running on it. I will discuss this in a future blog post, where I will zoom in on the particulars of doing rolling updates of your containers and eventually address upgrading the cluster itself on AWS.

Publishing a Docker image to Amazon ECR using Jenkins

I wanted to do a quick post, because some recent posts have led to questions about how I actually make a Docker container available on AWS. Luckily Amazon has a solution for this, called Amazon ECR (EC2 Container Registry).

How to push a container

Let me share a few quick steps on how you can push your Docker container to the Amazon ECR repository.

Step 1: Creating a repository
The first step is to create a repository your Docker container can be pushed to. A single repository can contain multiple versions of a Docker container, with a maximum of 2k versions. For different Docker containers you would create individual repositories.

In order to create a repository for, let’s say, our test-svc Docker container, run this command using the AWS CLI:

aws ecr create-repository --repository-name test-svc
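
The response looks roughly like this (with aws_account_id standing in for your real account id):

{
    "repository": {
        "registryId": "aws_account_id",
        "repositoryName": "test-svc",
        "repositoryArn": "arn:aws:ecr:eu-west-1:aws_account_id:repository/test-svc",
        "repositoryUri": "aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc"
    }
}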

Please note the returned repositoryUri; we will need it in the next steps.

Step 2: Logging in to ECR
In order to push containers via Docker, you need to log in to the AWS ECR repository. You can do this by running this AWS CLI command:

aws ecr get-login

This will give an output something like this:

docker login -u AWS -p password -e none https://aws_account_id.dkr.ecr.eu-west-1.amazonaws.com

You need to take that output and run it in the console to do the actual login so that you can push your container.
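
Since the output is itself a complete docker login command, you can also execute it directly in one shot, which is exactly what the Jenkins pipelines in this series do:

$(aws ecr get-login)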

Step 3: Pushing the container
Now that we are authenticated, we can start pushing the Docker container. Let’s first make sure we tag the container we want to push:

docker tag test-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest

And after this we push the container as follows:

docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest

Note: please make sure to replace aws_account_id with your actual AWS account id. This repository URL including the account id is also returned when the repository is created in Step 1.

Automating in a Jenkins job

For those who have read my other posts: I tend to automate everything via Jenkins, and this includes publishing Docker containers to Amazon ECR. This can be done quite simply with a small Jenkins job using the Jenkinsfile below. I ask for input to confirm that publishing is needed; after that input, the container gets published to AWS ECR:

node {
    stage 'build-test-svc'
    //this triggers the Jenkins job that builds the container
    //build 'test-svc'

    stage 'Publish containers'
    // Ask for confirmation before pushing to ECR
    shouldPublish = input message: 'Publish Containers?', parameters: [[$class: 'ChoiceParameterDefinition', choices: 'yes\nno', description: '', name: 'Deploy']]
    if(shouldPublish == "yes") {
     echo "Publishing docker containers"
     // Log in to ECR, then tag and push the image
     sh "\$(aws ecr get-login)"

     sh "docker tag test-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest"
     sh "docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/test-svc:latest"
    }
}

Note: there is also a Jenkins plugin that can publish to ECR; however, at the time of writing it does not yet support the eu-west region in AWS ECR and gives a login issue.

Conclusion

I hope the above helps people who want to publish their Docker containers to AWS ECR 🙂 If you have any questions, do not hesitate to reach out to me via the various channels.

Deploying Kubernetes to AWS using Jenkins

Based on some of my previous posts, I am quite busy creating a complete Continuous Integration pipeline using Docker and Jenkins pipelines. As those who have read my previous blog posts know, I am a big fan of Kubernetes for Docker container orchestration, and what I ideally want to achieve is a full CI pipeline where even the Kubernetes cluster gets deployed.

In this blog post I will detail how you can set up Jenkins to deploy a Kubernetes cluster. I wanted to use CloudFormation scripts, as that makes the most sense for a structural deployment method. The easiest way to do this turns out to be the kube-aws tool from CoreOS, which can generate a CloudFormation script for deploying a CoreOS-based Kubernetes cluster on AWS.

Preparing Jenkins

I am still using a Docker container to host my Jenkins installation (https://renzedevries.wordpress.com/2016/06/30/building-containers-with-docker-in-docker-and-jenkins/). If you just want to know how to use the kube-aws tool from Jenkins, feel free to skip this section :).

In order to use the kube-aws tool I need to modify my Jenkins Docker container. I need to do three things for that:

  1. Install the aws command line client
    Installing the aws-cli is actually relatively simple: I just need to make sure Python and pip are installed, then I can simply install the Python package. Both are available from the apt-get repository, so regardless of whether you run Jenkins in Docker or on a regular Ubuntu box, you can install the aws-cli as follows:

    RUN apt-get install -qqy python
    RUN apt-get install -qqy python-pip
    RUN pip install awscli
    

  2. Install the kube-aws tool
    Next we need to install the kube-aws tool from CoreOS. This is a bit more tricky, as there is nothing available in the default package repositories. So instead I simply download a specific release from the CoreOS site using wget, unpack it, and move the tool binary to /usr/local/bin:

    RUN wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz
    RUN tar zxvf kube-aws-linux-amd64.tar.gz
    RUN mv linux-amd64/kube-aws /usr/local/bin
    

3. Provide AWS keys
Because I do not want to prebake the AWS identity keys into the Jenkins image, I instead inject them as environment variables during Docker container startup. Because I start Jenkins using a docker-compose start sequence, I can simply modify my compose file to inject the AWS identity keys, which looks as follows:

jenkins:
  container_name: jenkins
  image: myjenkins:latest
  ports:
    - "8080:8080"
  volumes:
    - /Users/renarj/dev/docker/volumes/jenkins:/var/jenkins_home
    - /var/run:/var/run:rw
  environment:
    - AWS_ACCESS_KEY_ID=MY_AWS_ACCESS_KEY
    - AWS_SECRET_ACCESS_KEY=MY_AWS_ACCESS_KEY_SECRET
    - AWS_DEFAULT_REGION=eu-west-1
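
With the keys in place, starting Jenkins is then a single command (assuming the compose file is saved as docker-compose.yml):

docker-compose up -d jenkins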

Putting it all together
The Jenkins Dockerfile used to install all the tooling looks as follows:

FROM jenkinsci/jenkins:latest

USER root
RUN apt-get update -qq
RUN apt-get install -qqy apt-transport-https ca-certificates
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy docker-engine
RUN usermod -a -G staff jenkins
RUN apt-get install -qqy python
RUN apt-get install -qqy python-pip
RUN pip install awscli
RUN wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz
RUN tar zxvf kube-aws-linux-amd64.tar.gz
RUN mv linux-amd64/kube-aws /usr/local/bin

USER jenkins
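
Build this image once on the host before starting the compose setup (the same docker build command from my earlier Docker-in-Docker post):

docker build -t myjenkins .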

Generating the CloudFormation script

The kube-aws tool has relatively simple input parameters and is straightforward to use. What it does is generate a CloudFormation script you can use to deploy a Kubernetes stack on AWS.

The tool has, in essence, three phases:
1. Initialise the cluster settings
2. Render the CloudFormation templates
3. Validate and Deploy the cluster

To start we need to initialise the tool by specifying a number of things: the name of the cluster, the external DNS name (for the kube console, for example), and the name of the key pair to use for the nodes that will be created.

kube-aws init --cluster-name=robo-cluster \
   --external-dns-name=cluster.mydomain.com \
   --region=eu-west-1 \
   --availability-zone=eu-west-1c \
   --key-name=kubelet-key \
   --kms-key-arn='arn:aws:kms:eu-west-1:11111111:key/dfssdfsdfsfsdfsfsdfdsfsdfsdfs'

You will also need a KMS encryption key from AWS (see here how to create one: http://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html); you need to specify the ARN of that key in the above call to the kube-aws tool.

Rendering CloudFormation templates
The next step is very simple: we render the CloudFormation templates by executing the command below:

kube-aws render

This renders a number of files in the local directory, most importantly cluster.yaml, which contains all the configuration settings used in the CloudFormation template. The CloudFormation template itself is available under the filename stack-template.json.

Setting nodeType and number of nodes
Before we deploy, we need to set some typical settings in the generated cluster.yaml file. The main settings I want to change are the AWS instance type and the number of Kubernetes worker nodes. Because I want to automate the deployment, I have chosen to do a simple in-line replacement of values using sed.

With the code below I change the workerInstanceType to an environment variable called ‘INSTANCE_TYPE’, and the workerCount property (the number of kube worker nodes) to an environment variable ‘WORKER_COUNT’.

sed -i '''s/#workerCount: 1/workerCount: '''$WORKER_COUNT'''/''' cluster.yaml
sed -i '''s/#workerInstanceType: m3.medium/workerInstanceType: '''$INSTANCE_TYPE'''/''' cluster.yaml

The reason I use environment variables is that this way I can use Jenkins job input parameters to specify them later on. This way, when deploying a cluster, I can specify the values when triggering the job.
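
To try the substitution outside Jenkins, you can export the variables manually and inspect the result; a quick local check could look like this:

export WORKER_COUNT=4
export INSTANCE_TYPE=t2.micro
sed -i 's/#workerCount: 1/workerCount: '$WORKER_COUNT'/' cluster.yaml
sed -i 's/#workerInstanceType: m3.medium/workerInstanceType: '$INSTANCE_TYPE'/' cluster.yaml
grep -E 'workerCount|workerInstanceType' cluster.yaml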

The last step is to validate that the changes we made are correct; we do this as follows:

kube-aws validate

Deploying the Kubernetes cluster to AWS

The next and final step is to actually run the CloudFormation scripts. The kube-aws tool has a handy command for this that executes the CloudFormation template. Simply run:

kube-aws up

You are of course free to run the actual template yourself manually via the aws-cli or the web console; I just like this handy shortcut built into the kube-aws tool for automation purposes. All the files for running this manually are available after the render and validate steps described above. The output you need consists of cluster.yaml, stack-template.json, the userdata/ folder and the credentials folder.

Kubeconfig
After the kube-aws command completes, it will have written a valid kubeconfig file to the local working directory. This kubeconfig can be used to access the Kubernetes cluster. The cluster will take roughly an additional 5-10 minutes to become available, but once it is up you can execute the regular kubectl commands to validate that your cluster is up and running:

kubectl --kubeconfig=kubeconfig get nodes
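
The exact node names depend on your VPC addressing, but once the workers have registered the output should look something like this:

NAME                                        STATUS    AGE
ip-10-0-0-50.eu-west-1.compute.internal     Ready     6m
ip-10-0-0-51.eu-west-1.compute.internal     Ready     5m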

Creating a Jenkins Job

I want to put all of the above together in a single Jenkins job. For this I have created a Jenkins pipeline file that combines the entire process shown in pieces above. On top of this I introduced a multi-stage Jenkins pipeline:
1. Initialise the tool
2. Change the cluster configuration based on input parameters
3. Archive the generated cloudformation scripts
4. Deploy the cluster
5. Destroy the cluster

Input parameters
In the Jenkins pipeline I have defined multiple input parameters. These allow me to customise the worker count and instance type as described before. You can use them in two ways: you can hard-code values in a freestyle job, but if you want to use the new-style build pipelines in Jenkins you can use this:

   WORKER_COUNT = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: '4', description: '', name: 'WORKER_COUNT']]
   INSTANCE_TYPE = input message: 'Instance Type', parameters: [[$class: 'StringParameterDefinition', defaultValue: 't2.micro', description: '', name: 'INSTANCE_TYPE']]

For steps 4 and 5 I use an additional input step in Jenkins to check whether the cluster really needs to be deployed. An additional fifth step is also introduced to optionally allow destroying the cluster.

The full pipeline looks like this:

node {
   stage 'Kube-aws init'
   deleteDir()

   sh "kube-aws init --cluster-name=robo-cluster \
   --external-dns-name=kube.robot.renarj.nl \
   --region=eu-west-1 \
   --availability-zone=eu-west-1c \
   --key-name=kube-key \
   --kms-key-arn='arn:aws:kms:eu-west-1:11111111:key/dfssdfsdfsfsdfsfsdfdsfsdfsdfs'"

   stage "Kube-aws render"

   WORKER_COUNT = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: '4', description: '', name: 'WORKER_COUNT']]
   INSTANCE_TYPE = input message: 'Instance Type', parameters: [[$class: 'StringParameterDefinition', defaultValue: 't2.micro', description: '', name: 'INSTANCE_TYPE']]

   sh "kube-aws render"
   sh "sed -i '''s/#workerCount: 1/workerCount: '''$WORKER_COUNT'''/''' cluster.yaml"
   sh "sed -i '''s/#workerInstanceType: m3.medium/workerInstanceType: '''$INSTANCE_TYPE'''/''' cluster.yaml"
   sh "kube-aws validate"

   stage "Archive CFN"
   step([$class: 'ArtifactArchiver', artifacts: 'cluster.yaml,stack-template.json,credentials/*,userdata/*', fingerprint: true])

   stage "Deploy Cluster"
   shouldDeploy = input message: 'Deploy Cluster?', parameters: [[$class: 'ChoiceParameterDefinition', choices: 'yes\nno', description: '', name: 'Deploy']]
   if(shouldDeploy == "yes") {
    echo "deploying Kubernetes cluster"
    sh "kube-aws up"
    step([$class: 'ArtifactArchiver', artifacts: 'kubeconfig', fingerprint: true])
   }

   stage "Destroy cluster"
   shouldDestroy = input message: 'Destroy the cluster?', parameters: [[$class: 'BooleanParameterDefinition', defaultValue: false, description: '', name: 'Destroy the cluster']]
   if(shouldDestroy) {
    sh "kube-aws destroy"
   }
}

In Jenkins you can create a new job of type ‘Pipeline’ and simply copy the above Jenkinsfile script into the job. You can also check this file into source control and let Jenkins scan your repositories for the presence of these files using the Pipeline multibranch capability.

After all of the above, the pipeline stage view in Jenkins will roughly reflect the stages described.

Conclusion

I hope the above shows people one way of automating the deployment of a Kubernetes cluster. Please bear in mind that this is not ready for a production setup and will require more work; it demonstrates the possibilities of deploying and automating a Kubernetes cluster in a simple way. It is still lacking significant steps like rolling upgrades of existing clusters, deploying parallel clusters, cluster expansion, etc.

But for now for my use-case, it helps me to quickly and easily deploy a Kubernetes cluster for my project experiments.

Building containers with Docker in Docker and Jenkins

Today I wanted to share some of the experiences I have when experimenting with Docker at home. I really love Docker; it makes my life of deploying my software a lot easier. However, as a hobbyist in my home setup I face one challenge: limited funds. I don’t have a nice build farm at home, and I do not want to run a cloud build server 24/7.

What I do want is a reliable Continuous Integration environment on hardware that is not always on and might be portable. Based on this I set out to run Jenkins as a Docker container, which is relatively simple: you can simply run docker run -d -p 8080:8080 jenkinsci/jenkins and voila, a running Jenkins installation. For most of my challenges this indeed seems to be enough; I can create portable projects using Jenkins pipelines, where I just need to point Jenkins to my GitHub account.

Docker in Docker with Jenkins

The main challenge I faced was that I also wanted to build new containers as part of the CI setup. The problem is that in order to build a Docker container, there needs to be a Docker engine available to the Jenkins host. I would like to have access to a Docker engine inside my already existing Docker container, so that I can build new containers.

There are actually multiple solutions to the problem:
1. Really run a Docker engine inside the Jenkins Docker container
2. Map the Docker socket from the Jenkins Docker host so that it is accessible inside the Jenkins container

There is an obvious downside to nr 1, and many other blogs recommend never doing it. So this left me with solution nr 2: using the already running Docker engine of the parent host environment.

Solving the problem

In order to make this work I need to do a few things:

1: Install the Docker client inside the Jenkins container
The first part is actually the hardest: I can make the Docker engine of the host system available, but I still need the client tools installed. The default Jenkins container does of course not contain these tools. As I had to make some more modifications to my Jenkins container anyway, I set out to simply extend the current container.

In order to install the Docker CLI, you can use the Dockerfile below to extend the official Jenkins 2.x container with the latest Docker client available.

FROM jenkinsci/jenkins:latest

USER root
RUN apt-get update -qq
RUN apt-get install -qqy apt-transport-https ca-certificates
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy docker-engine

USER jenkins

Next, I just need to build this container once on the host system:
docker build -t myjenkins .

2: Start the Jenkins container with a mapped Docker socket
In order to make the Docker engine from the host system available, I need to make its API available to the Jenkins Docker container. You can do this by mapping the Docker socket that is available on the parent system. I have created a small docker-compose file where I map both my volumes and the Docker socket, as follows:

jenkins:
  container_name: jenkins
  image: myjenkins:latest
  ports:
    - "8080:8080"
  volumes:
    - /Users/devuser/dev/docker/volumes/jenkins:/var/jenkins_home
    - /var/run:/var/run:rw

Please note especially the mapping of ‘/var/run’ with rw privileges; this is needed to make sure the Jenkins container has access to the host system’s docker.sock.
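
A quick way to verify the socket mapping works is to run the Docker client from inside the running Jenkins container; if this lists the host’s containers (including Jenkins itself), the setup is correct:

docker exec jenkins docker ps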

3: Building a container
What better way to demonstrate this than to create a Jenkins pipeline project that can build the Jenkins Docker container itself 🙂

In essence it is quite simple: using Jenkins 2.x you can create a Jenkins Pipeline project and supply the project with this Jenkinsfile in the configuration:

node {
   stage 'Checkout'

   git credentialsId: 'aa123c45-6abc-78e9-a1b2-583c1234e123', url: 'git@bitbucket.org:myorg/jenkins-repo.git'

   stage 'Build Jenkins Container'
   sh "docker pull jenkinsci/jenkins:latest"
   sh "docker build -t myjenkins -f ./jenkins/Dockerfile ."
}

The above Jenkinsfile checks out a git repository where I have put the Dockerfile for the Jenkins container. It uses the credentials that are already saved inside Jenkins for accessing Git.

Because I always want the latest Jenkins 2.x container, I first pull in the latest version from Docker Hub. The next stage simply runs the Docker build, and voila, we have a completely built Jenkins container.

If I now run docker images on the host I see the following:

REPOSITORY                                                         TAG                 IMAGE ID            CREATED             SIZE
myjenkins                                                          latest              f89d0d6ba4a5        24 seconds ago        1.07 GB

Conclusion

When using a standard home setup where hardware does not run full-time, you have some limitations. However, with Docker and Jenkins you can create a very nice and effective Docker CI pipeline.

In some of the next few blog posts I will detail more about how to create a fully fledged Kubernetes deployment using Jenkins and Docker.