
Vagrant Provisioning with Docker

Provisioning Virtual Systems with Docker

With Vagrant, you have many choices for provisioning your system: your own custom scripts using the shell, a proper change configuration system that moves your systems to a desired state, or skipping runtime configuration altogether and simply running containers using Docker.

Evolution of Change Management

I wanted to give a brief history of change management and how containers fit within the whole ecosystem.

During the Iron Age of computing, when infrastructure was configured on human-managed hardware, systems could be automated by imaging a configured system and then reusing that image, a process called baking. This style of automation was initially popular, but it soon became difficult to maintain, as each image contains all the versions of software baked into a single deployable artifact.

Those maintaining this style of automation found themselves maintaining large image libraries. Any slight variation in a component of the image, like a different version of the Apache HTTPD server, required creating another image. The more variations, the more images: dev-web server, dev-db server, dev-web-db server, prod-web server, prod-db server, and so on. This anti-pattern became known as the golden image problem.

It is no surprise that early system builders like Kickstart or FAI, and later change configuration solutions like CFEngine, Puppet, and Chef, became popular. You could now configure, or fry, your systems with the exact versions and components (resources) required, and you only needed to maintain a single script or set of scripts to do this.

With the arrival of the Cloud Age of computing, where the whole infrastructure can be configured through scripts, change configuration became mainstream. The new paradigm introduced a new set of problems. Unlike the previous age, where systems were mostly stationary, on cloud platforms systems are ephemeral, and the IP address or hostname is not guaranteed. Orchestration tools like MCollective, Ansible, and SaltStack helped with some of these challenges.

There was still one outstanding problem: managing the ever-growing sea of dependencies from fast-moving web platforms and other services. Many services may have conflicting dependencies. Various platforms had ways to segregate dependencies, but it wasn't until the arrival of containers that there was a universal way to segregate resources across all solutions running on Linux.

Containers segregate operating system resources using the Linux kernel technologies of namespaces and cgroups. You get similar benefits to virtualization without the costs of virtualization. There are several container solutions, but none is currently more popular than Docker.

The magic behind containers is the underlying packaging through a layered image approach, where each layer is uniquely hashed. Each layer represents a unique configuration or operation, and if that layer changes, only that layer and the layers that inherit from it are rebuilt, not the whole image. With this layered image approach, you avoid the golden image problems of the past.
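You can see this layer caching in action with a couple of Docker commands (a quick sketch; the image name below is only a placeholder for illustration):

# Build an image; every instruction in the Dockerfile produces a layer.
docker build -t example/layers .

# Inspect the layers and the instruction that created each one.
docker history example/layers

# After changing a file used by a later instruction, a rebuild reuses the
# cached layers before the change and only rebuilds the changed layer and
# the layers that follow it.
docker build -t example/layers .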

In cloud computing, where you may need to scale several services up or down, you can configure systems using disposable containers built from these layered images. This new style of layered baking is becoming extremely popular over frying (configuring) systems with change configuration solutions.

  • Cloud Age vs. Iron Age
  • Baking vs. Frying
  • Golden Image vs. Layered Image

Prerequisites

You need to have Vagrant and VirtualBox installed for this tutorial to work.

You could potentially use a different virtualization solution, such as VMware, Hyper-V, or KVM via libvirt, but you will need to find a Linux Vagrant box that supports the alternative provider. This tutorial will use the box ubuntu/xenial64 made by Ubuntu and published on Vagrant Cloud.

Additionally, these instructions are written to use curl and Bash, so if these are not available, you'll have to find the equivalent commands for your shell.


I wrote some previous guides on how to install Vagrant and VirtualBox on Linux (Fedora), macOS, and Windows that you might find useful:

Using Chocolatey to install the requirements:
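For example, from an elevated PowerShell or command prompt (a sketch; these are the commonly published Chocolatey package names, so check the guide above if they differ on your system):

choco install vagrant virtualbox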

Using Homebrew to install the requirements:
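For example (a sketch assuming the current Homebrew cask syntax; older Homebrew versions use brew cask install instead):

brew install --cask virtualbox vagrant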

Using the native package manager Dandified YUM, or DNF for short, to install the requirements:
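For example, on Fedora (a sketch; Vagrant is in the Fedora repositories, while the VirtualBox package typically comes from the RPM Fusion repository, which needs to be enabled first):

sudo dnf install vagrant VirtualBox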

Part I: Building a Docker Image


In this mini-tutorial, we'll build a Docker image instead of using a fully pre-baked image from Docker Hub. We'll inherit a base image that supports an Ubuntu environment and then launch our image as a container using Docker.

A note on terminology: a Docker image is what we build, and when we launch the image at runtime, it is called a container; more specifically, we run our image as a container.
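You can see the distinction on any Docker host with two commands:

docker images   # lists images that have been built or pulled
docker ps       # lists containers currently running from those images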

This will create our working area and files we’ll use.

WORKAREA=${WORKAREA:-"${HOME}/vagrant-docker"}
mkdir -p ${WORKAREA}/build/public-html
touch ${WORKAREA}/build/{Dockerfile,Vagrantfile}
cat <<-'HTML' > ${WORKAREA}/build/public-html/index.html
<html>
<body>
<h1>Hello World!</h1>
</body>
</html>
HTML
cd ${WORKAREA}/build

The working area will look like this:

.
└── build
    ├── Dockerfile
    ├── Vagrantfile
    └── public-html
        └── index.html

This Vagrantfile configuration uses an Ubuntu 16.04 system, but it could easily be swapped out for another Linux distro like Debian, CentOS, or Fedora. The result will be the same, as the configuration is baked into the image itself at build time, rather than configuring the operating system precisely at runtime to support the service.

The only thing configured on the server is the installation of the Docker engine, so that we can run the container we built. Vagrant handles the installation of Docker.

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "forwarded_port", guest: 80, host: 8081
  ####### Provision #######
  config.vm.provision "docker" do |docker|
    docker.build_image "/vagrant",
      args: "-t example/hello_web"
    docker.run "hello_web",
      image: "example/hello_web:latest",
      args: "-p 80:80"
  end
end

This provisioning section does two things: it builds the Docker image from our Dockerfile, which we tag as example/hello_web, and then runs the image as a container named hello_web.
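Under the hood, this is roughly equivalent to running the following commands inside the guest, shown here only to illustrate what the provisioner is doing for us:

docker build -t example/hello_web /vagrant
docker run -d --name hello_web -p 80:80 example/hello_web:latest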

This small Dockerfile will inherit an Ubuntu 16.04 image as the starting point. We'll use the package management system to install the Apache 2 application. Packages installed in this fashion detect that they are in a privileged mode and behave differently, which removes a lot of complexity.

We copy over the content, which is our HTML page, into Ubuntu's Apache 2 docroot, and then run Apache in the foreground, which is needed to keep the Docker container alive. If that foreground process stops, the container will stop.

FROM ubuntu:16.04
RUN apt-get -qq update && \
    apt-get install -y apache2 && \
    apt-get clean
COPY public-html/index.html /var/www/html/
EXPOSE 80
CMD apachectl -D FOREGROUND
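With the Vagrantfile and Dockerfile in place, bring up the virtual machine from the build directory; this installs Docker, builds the image, and starts the container in one step:

vagrant up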

We can test the results by using the curl command:

curl -i http://127.0.0.1:8081

This results in the following, which is what you'd expect from Apache 2 running in an Ubuntu environment (whether in a container or on a full-blown system):

HTTP/1.1 200 OK
Date: Thu, 09 Aug 2018 13:07:53 GMT
Server: Apache/2.4.18 (Ubuntu)
Last-Modified: Thu, 09 Aug 2018 12:09:13 GMT
ETag: "3c-572ff7ecacc40"
Accept-Ranges: bytes
Content-Length: 60
Content-Type: text/html

<html>
<body>
<h1>Hello World!</h1>
</body>
</html>

Part II: Using an Existing Image


Instead of building our own image, we can use a canned solution found on Docker Hub, or on another Docker registry like Quay, Bintray, GCR, or ECR, or on a private registry that you installed locally.

In the previous scenario, we built our own Docker image and performed build-time configuration. In this scenario, we'll simply run a Docker container and make whatever configuration is needed at deploy time.

These two concepts are an important takeaway, because professionally you'll use build-time configuration for packaging dependencies and the like, and deploy-time configuration for configuring the environment, such as pointing at the database in a staging or production environment.

As with the first part, nothing is being configured on the system itself other than installing the Docker engine, and running containers using the Docker engine.

We’ll create a new area for this tutorial:

WORKAREA=${WORKAREA:-"${HOME}/vagrant-docker"}
mkdir -p ${WORKAREA}/image/public-html
touch ${WORKAREA}/image/Vagrantfile
cat <<-'HTML' > ${WORKAREA}/image/public-html/index.html
<html>
<body>
<h1>Hello World!</h1>
</body>
</html>
HTML
cd ${WORKAREA}/image

This will create the new items highlighted in the tree output below:

.
├── build
│   ├── Dockerfile
│   ├── Vagrantfile
│   └── public-html
│       └── index.html
└── image
    ├── Vagrantfile
    └── public-html
        └── index.html

Instead of building our own custom Apache 2 image, we can download one that is made by the community, and then run it locally.

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "forwarded_port", guest: 80, host: 8081
  ####### Provision #######
  config.vm.provision "docker", images: %w(httpd:2.4) do |docker|
    docker.run "hello_web",
      image: "httpd:2.4",
      args: "-p 80:80 " +
            "-v /vagrant/public-html:/usr/local/apache2/htdocs/"
  end
end

When using the Docker provisioner this way, we have to fetch all the images we may use. To do this, we supply a list of images as the second parameter, which for us is the single image httpd:2.4.

Since we are not building an image, where we would copy index.html into the image, we instead let the container access the content by mounting the directory as Apache's docroot, which is /usr/local/apache2/htdocs/ for this image.
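For reference, the run directive above corresponds roughly to this command inside the guest, with the synced /vagrant folder mounted over the image's docroot:

docker run -d --name hello_web -p 80:80 \
  -v /vagrant/public-html:/usr/local/apache2/htdocs/ \
  httpd:2.4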

After bringing this environment up with vagrant up, we do the same as last time using curl:

curl -i http://127.0.0.1:8081

This results in:

HTTP/1.1 200 OK
Date: Thu, 09 Aug 2018 13:29:22 GMT
Server: Apache/2.4.34 (Unix)
Last-Modified: Thu, 09 Aug 2018 09:59:26 GMT
ETag: "3c-572fdaea69b80"
Accept-Ranges: bytes
Content-Length: 60
Content-Type: text/html

<html>
<body>
<h1>Hello World!</h1>
</body>
</html>

The slight difference here is that with the Docker image httpd:2.4 we see version 2.4.34, whereas the package installed from Ubuntu is 2.4.18.

The advantage of using the image, where Apache is compiled from source, is that we get more up-to-date versions. The Linux distro package systems, at least with Ubuntu and even more so with RHEL, tend to fall behind the current versions.

Final Thoughts

The main takeaway here is that you can provision systems using Docker in addition to scripts or change configuration. This can be a great way to get started with Docker, especially if you are already familiar with Vagrant.

This is not the optimal solution, however, as you have to drive Docker through Vagrant's interface rather than Docker's own. As an alternative to Vagrant, you can use Docker Machine, which lets you run Docker commands on the host that are then executed on the virtual guest system. And if this is not obvious, Linux can natively run Docker, so you really don't need a virtual Linux environment to run Docker at all.
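If you want to try the Docker Machine route, the typical flow looks something like this (the machine name default is arbitrary):

# Create a VirtualBox VM running the Docker engine.
docker-machine create --driver virtualbox default

# Point the local docker client at the engine inside that VM.
eval "$(docker-machine env default)"

# Commands now run locally but execute against the guest's Docker engine.
docker run -d -p 80:80 httpd:2.4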

There are a few niche cases where Vagrant may be more desirable than Docker Machine: when you need to migrate change configuration scripts to a pure-Docker path, when you need a mixture of Docker and change configuration, or when you want to combine this with other automation, such as a build system like Jenkins.

Some Links

Here are some links to resources to learn more about things mentioned in this mini-tutorial (or not mentioned):

This is general information about containers.

These are tools to orchestrate Docker containers or create sandbox environments.

Popular change configuration and orchestration solutions that are still used in conjunction with containers, to deploy containers, or even to help build containers with tools like Packer.

These are tools used to provision systems, and some like Kickstart and Preseed are still used for building systems with tools like Packer or VeeWee.

Here’s a sample of some deployment tools used for deploying web applications before the popularity of containers or new orchestration platforms.

Without containers, these were a few solutions out there for segregating scripting environments and libraries. This is by no means comprehensive, just a sample.
