JAX DevOps Blog

JAX DevOps, 3-6 April 2017
The Conference for Continuous Delivery, Microservices, Docker & Clouds

10 Feb 2017

[Image: no magic pill that’ll make your build and deployment pipeline sing. Image via Shutterstock]

"ship code at the 'push of a button'"! In preparation for his talk at JAX DevOps, speaker Jussi Nummelin shows us how to keep up with the contemporary software development: move fast, make rapid changes and adapt.

Containers have brought great opportunities to organizations, big and small, enabling them to run their applications and services efficiently. As containers provide the same environment for apps and services in any deployment target, they really sound like some pixie-dust ingredient that makes all your cooking taste wonderful. The truth is that containers do make some things easier and more manageable, but you still have to use them properly. One area where containers can really make a difference is the automated deployment pipeline.

Pipeline?

No, we’re not talking about the highly controversial oil pipelines. 🙂 For us the automated deployment pipeline means the capability to ship code at the “push of a button”. The button might be a real button in some UI, a git push or, heck, I’ve seen demos of teams using real physical buttons in their coffee rooms to trigger deployments. I think Martin Fowler summarises the subject really well in his article Continuous Delivery:

“Continuous Delivery is a software development discipline where you build software in such a way that the software can be released to production at any time.”

Continuous deployment is arguably something more than that: in the continuous deployment model you deploy pretty much every single change to production. Naturally, that means you have to have the automation in place to handle such a rapid deployment cycle.

Why?

Today’s business world is extremely fast-paced with new competing start-ups popping up like mushrooms. To be able to compete in this environment you need to be able to move fast, make rapid changes and adapt. In software development this really means that you need to be able to push the changes to production fast.

On the other hand, when you push smaller changes, the chance of introducing a stop-the-press bug is lower. And when the whole pipeline is developed to the point where the testing process is fully automated, you can be confident that every push meets the required quality criteria.

How can containers help?

Teams have been building delivery pipelines with various technologies since long before the container hype. Now, with containers on the table, we see lots of potential benefits from using them to build the deployment pipeline, especially when they are combined with proper container orchestration tooling.

Firstly, and most importantly, containers provide the same environment for your application no matter what the deployment target environment is. We’ve all heard, and probably said, the infamous words: “works on my machine”. There are always slight differences between the testing environment and the production environment. When the application is packaged and run within a container, the container image is exactly the same at every step of the pipeline.

Secondly, containers provide a “standard” way to deploy and run any application or service of yours. With plain Docker for example, the process is always the same:

docker pull my-app:1.2.3
docker run my-app:1.2.3

Repeat on every machine running the app.

And when you are running a container orchestration tool, such as Kontena, your cluster-wide deployment becomes really simple:

kontena stack upgrade my-app my-app.yml  

Container image as a build artefact

When you use a container image as the build artefact, you can ensure that exactly the same application flows through every step of the pipeline, with zero variance. In practice this means that the container image should be tagged with a build number, git commit hash or other similar identifier that uniquely identifies the version of the application (sources) being built.
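
In a CI job this could be as simple as the following sketch (the my-app image name and the GIT_COMMIT variable are assumptions here; use whatever unique identifier your CI system provides):

# build an image tagged with the exact commit being built, then push it to the registry
docker build -t my-app:$GIT_COMMIT .
docker push my-app:$GIT_COMMIT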

Integration testing

When doing automated integration testing, there’s typically a truckload of dependencies your application needs. Those might be databases or other downstream apps that provide APIs your application uses. How do you get those easily available during testing? Doh, with containers of course.

With the help of containers and tools like docker-compose it is almost trivial to spin up an integration testing environment with all the dependencies running. This also means that you don’t have to host and maintain dedicated integration environments as anyone can spin up the integration environment in seconds. You can even bake custom database images that contain proper data for testing purposes.
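
As a rough sketch, a docker-compose file for such a throwaway integration environment might look something like this (the postgres dependency and the service names are made up for illustration, not taken from any particular project):

version: '2'
services:
  web:
    # the image built earlier in the pipeline
    image: my-app:${GIT_COMMIT}
    depends_on:
      - db
  db:
    # a database image pre-baked with test data could be used here instead
    image: postgres:9.6

Anyone, the CI job included, can then bring the environment up with docker-compose up -d, run the test suite against it and throw it all away with docker-compose down.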

Application deployment

In the last steps of the pipeline, when doing the actual deployment you have two alternatives. One option is to use the same tagged image that was built during the previous steps in the pipeline. This means that you probably have to change the deployment descriptor to refer to this newly built image on-the-fly. With Kontena you’d be using some variable logic in your stack yaml file:

stack: jussi/web  
description: My cool web app  
version: 0.0.1  
variables:  
  git_commit:
    from:
      env: GIT_COMMIT
services:  
  web:
    image: my-app:${git_commit}
    ports:
      - 80:80
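
In the pipeline’s deploy step you would then run the upgrade with that identifier in the environment, something along these lines (a sketch; it assumes the commit hash is exposed as GIT_COMMIT, matching the variable definition above):

# resolve the commit hash and hand it to the stack upgrade via the environment
GIT_COMMIT=$(git rev-parse HEAD) kontena stack upgrade my-app my-app.yml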

The benefit over the second approach, described below, is that it’s immediately clear which version of the app is running in production.

Another option is to use a special “deployable-version” tag for your images. In practice this means that before deploying to e.g. production, you’d tag your built and tested image with a special tag that is referenced in your deployment descriptor. With Kontena, the stack yaml would look something like:

stack: jussi/web  
description: My cool web app  
version: 0.0.1  
services:  
  web:
    image: my-app:production
    ports:
      - 80:80

And in the pipeline before the actual deployment, you’d have something like this executed:

docker tag my-app:$GIT_COMMIT my-app:production  
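
Pushing the retagged image and running the same stack upgrade as before would then complete the deployment (again a sketch, reusing the hypothetical names from above):

docker push my-app:production
kontena stack upgrade my-app my-app.yml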

The benefit of this approach is that your deployment descriptors don’t change during the pipeline, but you lose some visibility into which exact image is actually running in production. It is still identifiable via the image’s SHA checksums, since all the layers are shared, but that requires some detective work.
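
For example, you could compare what is behind the two tags with something like this (a sketch; my-app and the tag names are the hypothetical ones used above):

# list digests for every my-app tag
docker images --digests my-app
# or compare the image IDs behind the two tags directly
docker inspect --format '{{.Id}}' my-app:production my-app:$GIT_COMMIT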

As for the actual application deployment, there’s really no better way than doing it with container orchestration tools. Out of the box, the orchestrator takes care of things like:

  • cluster-wide rolling deployments
  • dynamic load balancer configuration
  • scheduling

Summary

Containers are no magic pill that’ll make your build and deployment pipeline sing, but they can really provide a lot of help when used properly. And especially when the pipeline integrates with container orchestration tools, such as Kontena, you can pretty easily create fully automated cluster-wide deployment pipelines.
