
VirtualBox and Friends on Fedora 28

VirtualBox, Vagrant, Test Kitchen, Docker Machine, Minikube

Introduction

Whether you are on Linux, Windows, or macOS, you will at some point hear about the celebrated tool Vagrant, along with the free virtualization platform VirtualBox, especially in DevOps-oriented organizations.

With these two tools, you can prototype systems and infrastructure, and rapidly develop configuration scripts using CAPS (Chef, Ansible, Puppet, SaltStack) or Docker.

Beyond these two essential tools are Test Kitchen, a tool for running integration tests on top of Vagrant-managed systems; Docker Machine, a tool for running a segregated Docker environment on a virtual system; and Minikube, a tool for running a Kubernetes cluster on top of a virtual system.

This is a tutorial that demonstrates how to install and use these tools on a Fedora 28 host.

VirtualBox


The central cross-platform tool (Linux, Windows, macOS) that makes all of this magic work is VirtualBox. It can also be the most complicated to set up.

Prerequisite Conditions

First, make sure in your system's BIOS settings that Trusted Computing is disabled and virtualization technology (Intel VT-x or AMD-V) is enabled.

Additionally, no other virtualization solution should be installed, because only one virtual platform can run at a time (or, more specifically, act as the hypervisor).

On Fedora, you can check whether the KVM modules are loaded by running lsmod | grep kvm. If they are, you can remove them, as long as they are not in use: sudo modprobe -r kvm_intel kvm (or kvm_amd kvm on AMD hardware).
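As a sketch of how the module selection works, the snippet below picks out the loaded kvm modules by name. The sample text stands in for real lsmod output, which varies per host, so the actual modprobe call is left commented out:

```shell
# Sample stand-in for real `lsmod` output (host-dependent).
sample='Module                  Size  Used by
kvm_intel             204800  0
kvm                   593920  1 kvm_intel
irqbypass              16384  1 kvm'
# Select only the kvm modules from the first column.
mods=$(echo "$sample" | awk '$1 ~ /^kvm/ { print $1 }')
echo "$mods"
# On a real host you would then unload them (requires root):
# sudo modprobe -r $mods
```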

Installing VirtualBox

Once these prerequisites are met, you can begin the installation of VirtualBox:

# Install Repository Entry
sudo tee /etc/yum.repos.d/virtualbox.repo >/dev/null <<-'VBOXREPOENTRY'
[virtualbox]
name=Fedora $releasever - $basearch - VirtualBox
baseurl=http://download.virtualbox.org/virtualbox/rpm/fedora/$releasever/$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc
VBOXREPOENTRY
# Upgrade Packages
sudo dnf -y update
# Test if a reboot is needed
NEW_VER=$(rpm -qa kernel | sort -V | tail -n 1 | sed 's/^kernel-//')
CUR_VER=$(uname -r)
[[ "${NEW_VER}" != "${CUR_VER}" ]] \
  && echo "Kernel updated from '${CUR_VER}' to '${NEW_VER}', please reboot"

Should this update upgrade the kernel, we'll need to reboot (simply type sudo reboot) to use the new kernel version. Afterward, you can continue the installation:
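Comparing dotted version strings lexicographically is error-prone (for example, "4.17.10" sorts before "4.17.9" as plain text). A small helper, shown here as a hypothetical sketch relying on GNU coreutils' version sort, handles this correctly:

```shell
# Hypothetical helper: true when version $1 is newer than $2.
# Relies on `sort -V` (version sort) from GNU coreutils.
version_gt() {
  [ "$1" != "$2" ] && \
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}
version_gt 4.17.10 4.17.9 && echo "4.17.10 is newer"
```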

# Install kernel development packages
sudo dnf install -y \
binutils \
gcc \
make \
patch \
libgomp \
glibc-headers \
glibc-devel \
kernel-headers \
kernel-devel \
dkms
# Install/Setup VirtualBox 5.2.x
sudo dnf install -y VirtualBox-5.2
sudo /usr/lib/virtualbox/vboxdrv.sh setup
# Test Version
vboxmanage --version
5.2.16r123759
# Enable Current User
sudo usermod -a -G vboxusers ${USER}

The last command allows the current user to create and run virtual machines.
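Note that a new group membership only takes effect on a fresh login session. A quick check, as a sketch (the sample string below stands in for real `id -nG` output):

```shell
# Sample stand-in for: current_groups="$(id -nG)"
current_groups="wheel vboxusers"
case " ${current_groups} " in
  *" vboxusers "*) echo "vboxusers: ok" ;;
  *)               echo "log out and back in for the group change to apply" ;;
esac
```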

Vagrant


Vagrant automates virtual machines and containers. It is essentially an orchestrator: it juggles running different systems and quickly provisions them with your favorite configuration management solution, such as shell scripts, CFEngine, or CAPS (Chef, Ansible, Puppet, SaltStack).

Installing Vagrant

We can install Vagrant easily with sudo dnf install -y vagrant, but it won't be the latest version. To get the latest, we need to fetch it from the Vagrant website.

VER=$(
curl -s https://releases.hashicorp.com/vagrant/ | \
grep -oP '(\d\.){2}\d' | \
head -1
)
PKG="vagrant_${VER}_$(uname -p).rpm"
curl -LO https://releases.hashicorp.com/vagrant/${VER}/${PKG}
sudo rpm -Uvh ${PKG}
vagrant --version
Vagrant 2.1.2

Using Vagrant

With Vagrant, we can quickly download system images and experiment with different operating systems. Here's a quick demonstration running Gentoo Linux and Arch Linux.

In this small demo, we’ll use neofetch to display information about the virtual guest:

cd 
mkdir mygentoo && cd mygentoo
vagrant init generic/gentoo && vagrant up
# install & run neofetch
vagrant ssh --command 'sudo emerge -a app-misc/neofetch'
vagrant ssh --command 'neofetch'

This will display something like the following:

Neofetch running on Gentoo guest system

We can run through this same process for Arch Linux:

cd
mkdir myarch && cd myarch
vagrant init archlinux/archlinux && vagrant up
# install and run neofetch
vagrant ssh --command 'sudo pacman -S neofetch'
vagrant ssh --command 'neofetch'

This gives us something like this:

Neofetch running on Arch Linux guest system

Test Kitchen


Test Kitchen is a test harness; similar to Vagrant, it orchestrates several systems and then facilitates running integration tests on them. Out of the box, Test Kitchen uses Vagrant as the backend driver to create the virtual guest systems.

Test Kitchen can further provision the systems with your automation scripts, and verify the correctness of the systems using a test framework like ServerSpec or InSpec.

This is useful to test out systems before pushing changes to a staging or production environment.

Installing Test Kitchen

If you have a Ruby environment, you can install Test Kitchen with gem install test-kitchen.

For demonstration purposes, and for easy scaffolding, we'll install Test Kitchen using the ChefDK set of tools:

VER=3.2.30
PKG=chefdk-${VER}-1.el7.x86_64.rpm
URL=https://packages.chef.io/files/stable/chefdk/${VER}/el/7/${PKG}
curl -O ${URL}
sudo rpm -Uvh ${PKG}

Using Test Kitchen with ChefDK

For this small demo, we can use Chef to auto-generate a cookbook, which includes a default Test Kitchen configuration:

Test Kitchen configuration

To get started, we generate a cookbook, and bring up the environment.

# Generate example
chef generate cookbook helloworld && cd helloworld
# Create Ubuntu and CentOS systems
kitchen create

This will create Ubuntu and CentOS systems using Vagrant as the backend driver. We can then view the running systems with kitchen list:


Run Something on Test Systems

Let’s download the ScreenFetch shell script, which does something similar to neofetch, and install it into the shared cache directory. We’re using it because the latter requires time-consuming compilation on some distros.

wget https://github.com/KittyKatt/screenFetch/archive/master.zip
unzip master.zip
mv screenFetch-master/ ${HOME}/.kitchen/cache/

The ~/.kitchen/cache directory is shared with all of the Vagrant-managed systems that Test Kitchen manages. We can exchange files between the host and the virtual guest systems through this mechanism.
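As a sketch of this exchange, the host drops a file into the cache and each guest sees it under the /tmp/omnibus/cache mount point used later in this section (the guest-side command is left commented out since it needs running kitchen instances):

```shell
# Host side: drop a file into the shared cache (created if missing).
CACHE="${HOME}/.kitchen/cache"
mkdir -p "${CACHE}"
echo "hello from host" > "${CACHE}/probe.txt"
cat "${CACHE}/probe.txt"
# Guest side (not run here): the same file appears in the shared mount:
# kitchen exec default* --command='cat /tmp/omnibus/cache/probe.txt'
```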

When we run the script on the host, it will look like this:

ScreenFetch running on host system Fedora 28

With the ScreenFetch script available from the shared directory, let’s use it across all systems.

# Install pciutils on CentOS (required by screenfetch)
kitchen exec centos --command='sudo yum -y install pciutils'
# Install a snap on Ubuntu (avoids warnings w/ screenfetch)
kitchen exec ubuntu --command='sudo snap install hello-world'
# Run screenfetch script on all systems
kitchen exec default* \
--command='sudo \
/tmp/omnibus/cache/screenFetch-master/screenfetch-dev'

This will print something pretty for all clients:

ScreenFetch script running on all Test Kitchen clients

Docker Machine


Docker Machine is an automation tool that spins up a virtual system for running Docker. Normally, on Linux, you never need this, as Docker is supported natively. It is useful in niche situations where you may want to experiment with a new version of Docker in a segregated environment.

Installing Docker Machine

We can fetch and install a recent version of Docker Machine from GitHub:

VER=v0.14.0
BASE=https://github.com/docker/machine/releases/download/${VER}
curl -L ${BASE}/docker-machine-$(uname -s)-$(uname -m) \
> /tmp/docker-machine
sudo install /tmp/docker-machine /usr/local/bin/docker-machine

Installing Docker Client

Docker Machine runs the Docker daemon on a separate virtual system, so we need to install the Docker client on the host to be able to connect to that daemon:

REPOURL=https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf config-manager --add-repo ${REPOURL}
sudo dnf install -y docker-ce

Using Docker

With the client and Docker daemon running on a virtual system, let’s run something:

# Create a docker machine environment
docker-machine create --driver virtualbox default
# Tell docker engine to use our machine's docker
eval $(docker-machine env default)
# Run a container from Docker Hub
docker run docker/whalesay cowsay Hello World
 _____________
< Hello World >
 -------------
    \
     \
      \
                    ##        .
              ## ## ##       ==
           ## ## ## ##      ===
       /""""""""""""""""___/ ===
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
       \______ o          __/
        \    \        __/
          \____\______/
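The eval $(docker-machine env default) step works by exporting connection variables into the current shell, which the docker client then honors. A sketch of what that command typically emits, with illustrative values standing in for the real ones:

```shell
# Illustrative stand-in for the output of: docker-machine env default
env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"'
# Exactly what `eval $(docker-machine env default)` does:
eval "$env_output"
# The docker client now targets the VM's daemon.
echo "talking to ${DOCKER_HOST} (${DOCKER_MACHINE_NAME})"
```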

Minikube


Minikube allows you to run a Kubernetes cluster locally. It builds on Docker Machine to create a single-node Kubernetes cluster.

Some of the supported features are: DNS, NodePorts, ConfigMaps and Secrets, a Dashboard, multiple container runtimes (Docker, rkt, CRI-O), CNI (Container Network Interface), and Ingress.

Installing Minikube

You can fetch Minikube from a Google storage bucket and start it up locally.

# Install minikube
BASE=https://storage.googleapis.com/minikube/releases
curl -Lo minikube ${BASE}/v0.28.1/minikube-linux-amd64 && \
chmod +x minikube && \
sudo mv minikube /usr/local/bin/

Installing Kubectl Client

We need to install a Kubernetes client, kubectl (Kube Control):

BASE=https://storage.googleapis.com/kubernetes-release/release
VER=$(curl -s ${BASE}/stable.txt)
curl -Lo kubectl ${BASE}/${VER}/bin/linux/amd64/kubectl && \
chmod +x kubectl && \
sudo mv kubectl /usr/local/bin/

Using Minikube

We can now start up our cluster and deploy some containers on it. When we start Minikube, it will make the necessary client configuration for kubectl.

# Start minikube environment
minikube start --vm-driver=virtualbox
# Deploy Something
kubectl run hello-minikube \
--image=k8s.gcr.io/echoserver:1.4 \
--port=8080
kubectl expose deployment hello-minikube \
--type=NodePort

We can check whether the pod(s) are running:

kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
hello-minikube-6c47c66d8-bx8wf   1/1     Running   0          13m
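Pods take a moment to become ready, so scripting the wait can help. A generic polling helper (hypothetical, not part of kubectl) sketches the idea; the kubectl invocation is left commented out since it needs the live cluster:

```shell
# retry <attempts> <delay-seconds> <command...>: rerun until success.
retry() {
  n=$1; delay=$2; shift 2
  while [ "$n" -gt 0 ]; do
    "$@" && return 0
    n=$((n - 1)); sleep "$delay"
  done
  return 1
}
retry 3 0 true && echo "ready"
# Against the cluster (not run here):
# retry 30 2 sh -c 'kubectl get pod | grep -q Running'
```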

Once the pods are ready, we can connect to the running service’s endpoint:

curl $(minikube service hello-minikube --url)
CLIENT VALUES:
client_address=172.17.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://192.168.99.102:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=192.168.99.102:31059
user-agent=curl/7.59.0
BODY:
-no body in request-

Conclusion

Tools Installed

In total, we have installed 7 tools (1 virtual machine platform, 4 orchestration tools, and 2 client tools):

printf "\nVirtualBox %s\n" $(vboxmanage --version) && \
vagrant --version && \
kitchen --version && \
docker-machine --version && \
docker --version && \
minikube version && \
printf "Kubectl Client: %s\n" \
$(kubectl version | awk -F\" \
'/Client/{ print $6 }')
VirtualBox 5.2.16r123759
Vagrant 2.0.2
Test Kitchen version 1.22.0
docker-machine version 0.14.0, build 89b8332
Docker version 18.06.0-ce, build 0ffa825
minikube version: v0.28.1
Kubectl Client: v1.11.1

Virtual Guest Systems Created

With these 4 orchestration tools, we have created 6 virtual guests on VirtualBox:

$ vboxmanage list runningvms | cut -d'"' -f2
default-ubuntu-1604_default_1532211317228_36884
default-centos-7_default_1532211437914_37521
default
minikube
mygentoo_default_1532220879695_11742
myarch_default_1532221813504_11015

And with the graphical VirtualBox Manager console, we can see the same thing:

VirtualBox Manager v5.2 GUI

Cleaning Up

Now that we are finished, we can stop the running virtual machines hosting our development and test solutions, and optionally remove them completely.

######## vagrant w/ gentoo linux cleanup ########
cd
cd mygentoo
vagrant halt # stop running vm guest
vagrant destroy # delete vm guest entirely
######## vagrant w/ archlinux cleanup ########
cd
cd myarch
vagrant halt # stop running vm guest
vagrant destroy # delete vm guest entirely
######## testkitchen cleanup ########
cd
cd helloworld
kitchen destroy # destroys all test systems
######## minikube cleanup ########
minikube stop   # stop kubernetes cluster
minikube delete # remove vm hosting cluster and kubectl config entries
######## dockermachine cleanup ########
docker-machine stop default # stop vm hosting docker
docker-machine rm default   # remove vm entirely

After stopping the virtual guests, vboxmanage list runningvms will show an empty list. If we remove them as well, then vboxmanage list vms will also show an empty list.

Final Thoughts

My hope is that this drives interest and passion around these tools, fosters some creative ideas, and exposes you to the possibilities. The more we can test solutions upstream, in small sandboxes, the more confidence we gain in our application deployments and infrastructure quality, as it can be painful to replace infrastructure after it is deployed and heavily used.

I would like to write future articles on macOS, other Linux distros, and even Windows, and to delve further into advanced usage of these tools.

Resources

Here are some links for these tools.

VirtualBox

Vagrant

Vagrant supports many backend providers besides VirtualBox and has built-in provisioner support for popular configuration management solutions.

Test Kitchen

Test Kitchen has rich community support for a variety of backend drivers, provisioners, and verifiers.

Docker Machine

Docker Machine has a few options for backend providers outside of VirtualBox.

Minikube

Written by

Linux NinjaPants Automation Engineering Mutant — exploring DevOps, Kubernetes, CNI, IAC
