Getting Started with Kubeadm

Using Kubeadm to install Kubernetes on a local server

Joaquín Menchaca (智裕)
11 min read · Jun 3, 2024


This article is a small tutorial that walks you through setting up a Kubernetes node using the kubeadm utility. It falls within the realm of a “Doing Kubernetes the Hard Way” type of tutorial, meant to help you get familiar with the services required to get Kubernetes to work.

There are, of course, full installers that can install and configure every aspect of Kubernetes for you, but these insulate you from the components installed, and as such are not ideal for learning the underlying components that make Kubernetes function.

Upon completing this tutorial, you’ll have a fully functional Kubernetes node ready for learning purposes. Pairing this with a version control platform like Gitea allows you to delve into Continuous Delivery tools such as Spinnaker, FluxCD, and ArgoCD, enriching your understanding of Kubernetes deployment and management.

Previous Article

In a previous article, I covered how to install the Gitea solution, an essential part of GitOps.

(Optional) Virtual Guest

To successfully complete this tutorial, you’ll need a Debian or Ubuntu based Linux distribution. You’re welcome to use any system available to you. If you prefer to run a virtual guest using a virtualization solution, below are some quick notes to help you get started quickly.

You can use Vagrant to download and run a virtual guest with a single command.

Required Tools for Virtual Guest

For Intel/AMD based systems running Linux, macOS, or Windows, you will need to install the following:

- Vagrant
- VirtualBox

For macOS running on either Apple Silicon (ARM64) or Intel, you will need the following:

- Vagrant
- QEMU
- the vagrant-qemu plugin

Vagrant Configuration

Below is a Vagrantfile configuration that can run with either VirtualBox or QEMU (HVF) as the provider.

# Vagrantfile
Vagrant.configure("2") do |config|
  if RUBY_PLATFORM =~ /^x86_64/
    # use qemu/virtualbox image for x86_64
    config.vm.box = "generic/ubuntu2204"
    if RUBY_PLATFORM =~ /darwin$/
      # configure QEMU/HVF if qemu provider
      config.vm.provider "qemu" do |qe|
        qe.ssh_port = "50025" # change ssh port as needed
        qe.qemu_dir = "/usr/local/share/qemu"
        qe.arch = "x86_64"
        qe.machine = "q35,accel=hvf"
        qe.net_device = "virtio-net-pci"
      end
    end
  elsif RUBY_PLATFORM =~ /^arm64.?-darwin.*$/
    # arm64 image for macOS on Apple Silicon
    config.vm.box = "perk/ubuntu-2204-arm64"
    # configure QEMU/HVF if qemu provider
    config.vm.provider "qemu" do |qe|
      qe.ssh_port = "50026" # change ssh port as needed
    end
  end
end

Save the above configuration as Vagrantfile in your project directory. When ready, you can download the guest box image and start the virtual guest system with a single command.

Launch using VirtualBox (Intel)

On Linux, macOS, or Windows with Vagrant and VirtualBox installed, you can run the following:

vagrant up --provider="virtualbox"

📔 NOTE: This will work on macOS, Windows, or Linux with VirtualBox installed. Neither Hyper-V on Windows nor KVM (or Xen) on Linux can be enabled on the host, as these conflict with VirtualBox.

Launch using QEMU (macOS)

On macOS with Vagrant, QEMU, and the vagrant-qemu plugin installed, you can run the following:

vagrant up --provider="qemu" # 'vagrant-qemu' plugin + 'qemu' required

📔 NOTE: This will work on Macintosh hardware with either Intel or Apple Silicon (ARM64) processors. On Intel processors, both VirtualBox and HVF can be used at the same time.

Installing Kubernetes

The following steps will install the necessary components, containerd, runc, and the CNI plugins, before running kubeadm, which installs the remaining components and configures the cluster.

If you are using a virtual guest with Vagrant, log on to that system:

vagrant ssh

Linux Configuration

Before installing Kubernetes, we need to perform a few configuration steps on the guest to prepare it for Kubernetes.

First, we need to disable swap memory, where the disk is used to back RAM. This is necessary because of “the inherent difficulty in guaranteeing and accounting for pod memory utilization when swap memory is involved” (ref). Swap can be disabled with the following steps.

# See if swap is enabled
swapon --show

# Turn off swap
sudo swapoff -a

# Disable swap completely
sudo sed -i -e '/swap/d' /etc/fstab
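
You can verify that swap is now fully disabled; both commands below should report no active swap:

# No output from swapon means no active swap devices
swapon --show

# The Swap row should show 0B totals
free -h | grep -i swap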

We need to enable the kernel modules overlay and br_netfilter. The overlay module is necessary for running containers with an overlay filesystem, and the br_netfilter module is required when traffic is bridged between two or more network interfaces.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe -a overlay br_netfilter
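
To confirm both modules are loaded, you can check the kernel's module list:

# Both module names should appear at the start of a line
lsmod | grep -E '^(overlay|br_netfilter)'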

For kube-proxy, we need to enable IP forwarding so that traffic can be forwarded to the pods running on Kubernetes.

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
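
You can spot-check that all three parameters took effect; each should report a value of 1:

sysctl net.bridge.bridge-nf-call-iptables \
  net.bridge.bridge-nf-call-ip6tables \
  net.ipv4.ip_forward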

If you have the firewall up, make sure that you open port 6443, which is used by the Kubernetes API server. For example, with Ubuntu, you could run:

sudo ufw allow 6443/tcp
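
For this single-node setup, only the API server port needs to be reachable from outside. If you also firewall node-internal traffic, the upstream kubeadm documentation lists a few more control-plane ports; the sketch below is based on that port list:

# Additional control-plane ports, per the upstream kubeadm port list
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler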

Alternatively, if you don’t want to use the firewall on Ubuntu, you can disable it:

sudo ufw disable

Install Containerd

You can download and install containerd with the following commands:

get_latest_release() {
  curl --silent "https://api.github.com/repos/$1/$2/releases/latest" \
    | grep '"tag_name":' \
    | sed -E 's/.*"([^"]+)".*/\1/'
}

# variables used to compose URLs (avoid vertical scrollbars)
CONTAINERD_VER=$(get_latest_release containerd containerd) # v1.7.17
PKG_ARCH="$(dpkg --print-architecture)"
CONTAINERD_PKG="containerd-${CONTAINERD_VER#v}-linux-$PKG_ARCH.tar.gz"
CONTAINERD_URL_PATH="releases/download/$CONTAINERD_VER/$CONTAINERD_PKG"
CONTAINERD_URL="https://github.com/containerd/containerd/$CONTAINERD_URL_PATH"

# download package
curl -fLo $CONTAINERD_PKG $CONTAINERD_URL

# Extract the binaries
sudo tar Cxzvf /usr/local $CONTAINERD_PKG

Once completed, this will install the following binaries:

/usr/
└── local
    └── bin
        ├── containerd
        ├── containerd-shim
        ├── containerd-shim-runc-v1
        ├── containerd-shim-runc-v2
        ├── containerd-stress
        └── ctr

You can verify the version installed with containerd --version.

You can set up the containerd configuration with the following commands:

sudo mkdir -p /etc/containerd/
sudo sh -c 'cat << EOF > /etc/containerd/config.toml
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
EOF'
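
The SystemdCgroup = true setting matters: kubeadm-provisioned clusters default the kubelet to the systemd cgroup driver, and containerd must agree. Since the containerd binary is already installed, a quick way to confirm the merged configuration picked the setting up:

# Print the final merged config and confirm the systemd cgroup driver is on
sudo containerd config dump | grep -i systemdcgroup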

You can set up the systemd unit that supervises the containerd service with the following:

sudo sh -c 'cat << EOF > /etc/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF'

Now start the service:

sudo systemctl daemon-reload
sudo systemctl enable --now containerd

When completed, you can check on the status of the service with:

sudo systemctl status containerd

This should show the containerd service as active (running).
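
As an optional extra sanity check, the bundled ctr client can talk to the daemon directly over its socket:

# A version reply from both client and server confirms the daemon is reachable
sudo ctr version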

Install Runc

runc is a low-level command-line tool for spawning and running containers according to the OCI specification. You can download and install it with the following commands:

get_latest_release() {
  curl --silent "https://api.github.com/repos/$1/$2/releases/latest" \
    | grep '"tag_name":' \
    | sed -E 's/.*"([^"]+)".*/\1/'
}

RUNC_VER=$(get_latest_release opencontainers runc) # v1.1.12
PKG_ARCH="$(dpkg --print-architecture)"
RUNC_URL_PATH="releases/download/$RUNC_VER/runc.$PKG_ARCH"
RUNC_URL="https://github.com/opencontainers/runc/$RUNC_URL_PATH"

# download
curl -fSLo runc.$PKG_ARCH $RUNC_URL

# install
sudo install -m 755 runc.$PKG_ARCH /usr/local/sbin/runc

This will create the following binary:

/usr/
└── local
    └── sbin
        └── runc

You can check the version with runc --version.

Install CNI Plugins

These are the reference network plugins maintained by the CNI project (containernetworking/plugins). You can download and install them with the following commands:

get_latest_release() {
  curl --silent "https://api.github.com/repos/$1/$2/releases/latest" \
    | grep '"tag_name":' \
    | sed -E 's/.*"([^"]+)".*/\1/'
}

CNI_VERS=$(get_latest_release containernetworking plugins) # v1.5.0
PKG_ARCH="$(dpkg --print-architecture)"
CNI_PKG="cni-plugins-linux-$PKG_ARCH-$CNI_VERS.tgz"
CNI_URL_PATH="releases/download/$CNI_VERS/$CNI_PKG"
CNI_URL="https://github.com/containernetworking/plugins/$CNI_URL_PATH"

# download
curl -fLo $CNI_PKG $CNI_URL

# install
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin $CNI_PKG

This will create the following files:

/opt/
└── cni
    └── bin
        ├── bandwidth
        ├── bridge
        ├── dhcp
        ├── dummy
        ├── firewall
        ├── host-device
        ├── host-local
        ├── ipvlan
        ├── LICENSE
        ├── loopback
        ├── macvlan
        ├── portmap
        ├── ptp
        ├── README.md
        ├── sbr
        ├── static
        ├── tap
        ├── tuning
        ├── vlan
        └── vrf
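
Note that these binaries alone do not give pods a network; a network configuration in /etc/cni/net.d is also required, which the Flannel add-on installed later in this tutorial will write for us. At this point the directory is expectedly empty:

# No network configs exist yet; Flannel will create one later
ls /etc/cni/net.d 2>/dev/null || echo "no CNI network configs yet"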

Install kubeadm

The following will download and install kubeadm, kubectl, and the kubelet, then enable and start the kubelet service. It is important to note the K8S_VERS environment variable; intentionally set this to the desired minor version of Kubernetes.

# Install prerequisite packages
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Determine version of Kubernetes (instructions may vary)
# This is tested with v1.30.
K8S_VERS="v1.30"

# variables to make code readable
K8S_GPG_KEY_PATH="/etc/apt/keyrings/kubernetes-apt-keyring.gpg"
K8S_APT_REPO_URI="https://pkgs.k8s.io/core:/stable:/$K8S_VERS/deb/"

# Download signing key
[[ -d /etc/apt/keyrings ]] || sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/$K8S_VERS/deb/Release.key \
| sudo gpg --dearmor -o $K8S_GPG_KEY_PATH

# Add the appropriate Kubernetes apt repository

echo "deb [signed-by=$K8S_GPG_KEY_PATH] $K8S_APT_REPO_URI /" \
| sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Enable the kubelet service before running kubeadm (optional)
sudo systemctl enable --now kubelet

You can check the status of the kubelet service with:

sudo systemctl status kubelet

Until Kubernetes is initialized, this will show the kubelet failing and restarting, because its configuration does not exist yet; this is expected and will be resolved in the next section.
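
If you are curious why, the kubelet logs show it crash-looping while waiting for a configuration file that kubeadm has not generated yet:

# Expect errors about a missing /var/lib/kubelet/config.yaml at this stage
sudo journalctl -u kubelet --no-pager -n 20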

Install Kubernetes using kubeadm

Now we can install Kubernetes using kubeadm with the following commands:

sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

This will install the control-plane and allow us to run workloads on this system.

We can check the status of the kubelet service, which should now be working:

sudo systemctl status kubelet

This should show the kubelet service as active (running).

Lastly, make sure that port 6443 is open; if you are using a firewall, you should open this port. You can verify the API server is listening with:

nc 127.0.0.1 6443 -v
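
As an alternative check, default kubeadm clusters allow unauthenticated clients to read the /version endpoint, so a quick probe (note the -k flag, since the cluster CA is self-signed) should return a JSON version payload:

curl -sk https://127.0.0.1:6443/version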

Configure Kubernetes Client Access

To configure client access, run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
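
As a quick first test of the new kubeconfig, list the nodes; the single node will appear, though it will report NotReady until a Pod network add-on is installed in a later step:

kubectl get nodes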

After this, list all the resources deployed on Kubernetes:

kubectl get all --all-namespaces

You should see the control-plane components running in the kube-system namespace, with the CoreDNS pods stuck in a Pending state; the next section explains why.

Install a Pod Network Add-on

You may have observed CoreDNS pods stuck in a pending state along with the kubelet service reporting DNS errors. This occurs due to the absence of a CNI-based Pod network add-on that is essential for inter-pod communication. CoreDNS won’t initialize until a network is installed. To resolve this, you can install a network add-on from the options listed in the Installing Addons documentation.

Here’s an example of installing an overlay network called Flannel, whose default network (10.244.0.0/16) matches the --pod-network-cidr we passed to kubeadm init:

get_latest_release() {
  curl --silent "https://api.github.com/repos/$1/$2/releases/latest" \
    | grep '"tag_name":' \
    | sed -E 's/.*"([^"]+)".*/\1/'
}

FLANNEL_VERS=$(get_latest_release flannel-io flannel) # v0.25.3
FLANNEL_URL_PATH="releases/download/$FLANNEL_VERS/kube-flannel.yml"
FLANNEL_URL="https://github.com/flannel-io/flannel/$FLANNEL_URL_PATH"

kubectl apply --filename $FLANNEL_URL

You can monitor the progress of Flannel with:

kubectl get pods --namespace kube-flannel --watch

When it is in a Running state, run:

kubectl get all --all-namespaces

Now when you look at the deployed resources on Kubernetes, you will see all of the pods, including CoreDNS, in a Running state.

Remove Taints (non-production only)

As a security measure, pods are not scheduled on control plane nodes by default. However, for this learning cluster, if you wish to allow pod scheduling on the control plane node, you'll need to remove the taint with the following command:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
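
You can confirm the taint is gone by inspecting the node description, which should now report Taints: <none>:

kubectl describe nodes | grep -i taints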

⚠️ WARNING: While this is convenient for learning clusters, do not do this on production servers. This is not only insecure, it could cause major disruptions for all services running on the cluster.

Testing the Cluster Works

We need to ensure that applications can be successfully deployed to the cluster. A quick way to verify this is by deploying a web server, such as the Apache HTTP Server. Here's how you can do it:

kubectl create namespace "httpd-svc"

kubectl create deployment "httpd" \
--image httpd \
--replicas 3 \
--port 80 \
--namespace "httpd-svc"

kubectl expose deployment httpd \
--port 80 \
--target-port 80 \
--type NodePort \
--namespace "httpd-svc"

You can test that the service is working with the following:

NODE_PORT=$(kubectl get service/httpd \
  --namespace httpd-svc \
  --output jsonpath='{.spec.ports[0].nodePort}'
)

curl --include http://localhost:$NODE_PORT

You should see an HTTP 200 response followed by Apache's default “It works!” page.

Afterward, you can delete these resources by deleting the namespace:

kubectl delete namespace "httpd-svc"


Conclusion

This tutorial provided a comprehensive walkthrough of installing Kubernetes using kubeadm, covering the setup of essential components like containerd, runc, and the CNI plugins, alongside the necessary Linux system configuration. It also introduced a virtualization workflow using Vagrant, offering flexibility through VirtualBox or QEMU for rapid provisioning of guest systems for testing and development purposes.

These tools open up avenues for exploring various complex solutions, from configuration management to containerization and observability. With a Kubernetes cluster coupled with a Git repository management system like Gitea, you can delve into deployment solutions such as Spinnaker, FluxCD, and ArgoCD, empowering experimentation and innovation in your projects.
