Master of Puppets with Vagrant

Streamlining development using Puppet Agent provisioning

Joaquín Menchaca (智裕)
11 min read · Oct 2, 2024


When working with Puppet, a powerful configuration management tool, you have two primary modes of operation: managed and unmanaged. In managed mode, Puppet uses an agent-server model where a centralized Puppet Master coordinates changes across your infrastructure. This ensures that any configuration drift on managed nodes is automatically corrected to align with the desired state defined in your Puppet manifests. In unmanaged mode, using puppet apply, changes are applied locally without the need for a Puppet Master, leaving the process in the hands of the operator.
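To make the distinction concrete, here is roughly what each mode looks like on the command line (using the agent installation path from this tutorial; the manifest path is a placeholder):

# unmanaged mode: apply a manifest locally, no server required
sudo /opt/puppetlabs/bin/puppet apply /path/to/manifests/site.pp

# managed mode: request the catalog from the Puppet Server and apply it
sudo /opt/puppetlabs/bin/puppet agent --test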

In a previous article, I introduced the basics of using puppet apply alongside Vagrant, a popular tool for virtual machine management, to develop and test Puppet manifests locally. Now, we'll step into a more sophisticated setup: testing with a Puppet Master. By simulating a real-world environment with a master-agent architecture, you can prepare for more advanced topics in configuration management and orchestration—giving you the tools to manage complex infrastructures effectively.

Requirements

These are the tools required for this tutorial.

  • Vagrant [vagrant]: virtualization management tool
  • VirtualBox [vboxmanage]: virtualization solution that is used by Vagrant
  • POSIX-compliant shell [bash or zsh]: tutorial uses commands compatible with one of these shells. On Windows, you can run these shells with either MSYS2 or Cygwin environments.
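
You can quickly verify that the tools are installed by checking their versions:

vagrant --version
vboxmanage --version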

Projects Setup

You can create the project directory structure and files with the following command.

PROJ_HOME=~/vagrant-puppetserver

# create directory structure
mkdir -p \
  "$PROJ_HOME"/site/{data,manifests,modules/hello_web/{files,manifests}}

cd "$PROJ_HOME"

# create files
touch \
  Vagrantfile \
  bootstrap.sh \
  site/manifests/site.pp \
  site/modules/hello_web/{manifests/init.pp,files/index.html,metadata.json}

This should create a directory structure that looks like the following:
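
vagrant-puppetserver/
├── Vagrantfile
├── bootstrap.sh
└── site/
    ├── data/
    ├── manifests/
    │   └── site.pp
    └── modules/
        └── hello_web/
            ├── files/
            │   └── index.html
            ├── manifests/
            │   └── init.pp
            └── metadata.json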

Vagrant Setup

To keep things simple, we’ll use VirtualBox as the default provider, which works similarly on macOS (Intel-based), Windows, and Linux.

📓 NOTE: If you have a Mac with Apple Silicon (ARM64), you’ll need to use Vagrant with QEMU instead of VirtualBox. See Vagrant with Macbook Mx (arm64) for more details.

The first step is to create a Vagrant configuration file (Vagrantfile) that will set up three virtual machines: one for the Puppet Server and two for the client nodes.

Update the Vagrantfile with the following:

Vagrant.configure("2") do |config|
  config.vm.provision "bootstrap", before: :all,
    type: "shell", path: "./bootstrap.sh"

  # Puppet Server node
  config.vm.define "puppetserver01" do |puppetserver|
    puppetserver.vm.box = "generic/ubuntu2204"
    puppetserver.vm.hostname = "puppetserver01.local"
    puppetserver.vm.network "private_network", ip: "192.168.50.4"
    puppetserver.vm.synced_folder "site",
      "/etc/puppetlabs/code/environments/production"
  end

  # Node 1
  config.vm.define "node01" do |node01|
    node01.vm.box = "generic/ubuntu2204"
    node01.vm.hostname = "node01.local"
    node01.vm.network "private_network", ip: "192.168.50.5"
    node01.vm.provision "puppet_server" do |puppet|
      puppet.puppet_server = "puppetserver01.local"
      puppet.options = "--verbose --debug"
    end
  end

  # Node 2
  config.vm.define "node02" do |node02|
    node02.vm.box = "generic/ubuntu2204"
    node02.vm.hostname = "node02.local"
    node02.vm.network "private_network", ip: "192.168.50.6"
    node02.vm.provision "puppet_server" do |puppet|
      puppet.puppet_server = "puppetserver01.local"
      puppet.options = "--verbose --debug"
    end
  end
end

About the Configuration: Vagrantfile

In this tutorial, the three virtual machines need to communicate with each other. To achieve this, we will assign each one a private IP address and corresponding hostname. Once configured, the systems will use these hostnames to connect.

All three virtual machines will be set up using a bootstrap.sh script, which installs and configures both the Puppet Server and client agents on each node.

Vagrant will also mount the code from ./site on the host to /etc/puppetlabs/code/environments/production on the Puppet Server. The client agents will then retrieve their configurations from the Puppet Server when they are provisioned.

Launch Virtual Guests

You can launch all three systems with this command:

vagrant up --no-provision

To check the status, run:

vagrant status

With the VirtualBox provider, this should show all three guests running, something like the following:
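
Current machine states:

puppetserver01            running (virtualbox)
node01                    running (virtualbox)
node02                    running (virtualbox)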

Installer Script

The bootstrap.sh installation script needs to be robust, as it configures both the server and the clients, with each client configured to communicate with the server. Since we're operating in a network environment without a proper DNS server, we must also manually add entries to the /etc/hosts file so that all systems can communicate with each other.

Update the bootstrap.sh script with the following:

#!/usr/bin/env bash

# global variables
PUPPET_FQDN="puppetserver01.local"
HOSTNAME_FQDN="$(hostname -f)"
HOSTS_ENTRIES="192.168.50.4 puppetserver01.local puppetserver01
192.168.50.5 node01.local node01
192.168.50.6 node02.local node02"

# main
main() {
  if [[ "$HOSTNAME_FQDN" == "$PUPPET_FQDN" ]]; then
    setup_hosts_file
    if ! systemctl status puppetserver > /dev/null; then
      install_puppet_server
      configure_puppet_server "$PUPPET_FQDN"
      sudo systemctl start puppetserver
      systemctl status puppetserver && sudo systemctl enable puppetserver
    else
      echo "Puppet Server is already installed! skipping"
    fi
  else
    setup_hosts_file
    if ! command -v puppet > /dev/null; then
      install_puppet_agent
      configure_puppet_agent "$PUPPET_FQDN" "$HOSTNAME_FQDN"
    else
      echo "The Puppet Agent is already installed! skipping"
    fi
  fi
}

# setup /etc/hosts file
setup_hosts_file() {
  # on the server, alias 'puppet' to the localhost entry
  if [[ "$HOSTNAME_FQDN" == "$PUPPET_FQDN" ]]; then
    grep -q 'puppet$' /etc/hosts \
      || sudo sed -i '/127\.0\.0\.1 localhost/s/$/ puppet/' /etc/hosts
  fi

  # add an entry for each node, keyed on its short hostname
  while read -r ENTRY; do
    grep -q "${ENTRY##* }" /etc/hosts \
      || sudo sh -c "echo '$ENTRY' >> /etc/hosts"
  done <<< "$HOSTS_ENTRIES"
}

# add remote apt repository for puppet packages
add_puppet_registry() {
  wget "https://apt.puppetlabs.com/puppet8-release-$(lsb_release -cs).deb"
  sudo dpkg -i "puppet8-release-$(lsb_release -cs).deb"
}

# install puppet agent
install_puppet_agent() {
  add_puppet_registry
  sudo apt-get -qq update
  sudo apt-get install -y puppet-agent
}

# install puppet server
install_puppet_server() {
  add_puppet_registry
  sudo apt-get -qq update
  sudo apt-get install -y puppetserver
}

# configure puppet server
configure_puppet_server() {
  # add entries if they do not yet exist
  grep -q "dns_alt_names" /etc/puppetlabs/puppet/puppet.conf \
    || sudo sh -c \
       "echo 'dns_alt_names = $1,${1%%.*},puppet' >> /etc/puppetlabs/puppet/puppet.conf"
  grep -q "certname" /etc/puppetlabs/puppet/puppet.conf \
    || sudo sh -c "echo 'certname = $1' >> /etc/puppetlabs/puppet/puppet.conf"

  # reduce default memory for a small test vm guest
  sudo sed -i \
    's/JAVA_ARGS="-Xms2g -Xmx2g/JAVA_ARGS="-Xms512m -Xmx512m/' \
    /etc/default/puppetserver
}

# configure puppet agent
configure_puppet_agent() {
  sudo bash -c "cat << EOF > /etc/puppetlabs/puppet/puppet.conf
server = $1
certname = $2
EOF"
}

main

About the Script: bootstrap.sh

This script updates the /etc/hosts file on each system so they can communicate. In a production environment, a DNS server (and possibly a DHCP server that updates the DNS server) would typically handle this.

The Puppet Server is configured to run with minimal memory and uses a local Certificate Authority (CA). While this setup works for a minimalist tutorial, it’s not suitable for production. In production, you should use a proper PKI setup with an offline root CA and intermediate issuing CAs.
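
For reference, Puppet Server can import an externally issued CA instead of generating its own. A minimal sketch, assuming placeholder file names for the certificate bundle, CRL chain, and private key:

# import an intermediate CA issued by an offline root (paths are placeholders)
sudo /opt/puppetlabs/bin/puppetserver ca import \
  --cert-bundle ca-bundle.pem \
  --crl-chain crl-chain.pem \
  --private-key ca-key.pem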

The client environment is set up to connect to a single Puppet Server, which is fine for testing, but in production, you should configure at least two Puppet Servers behind a load balancer, using a generic FQDN for redundancy.
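
As an aside, Puppet agents can also fail over between servers on their own. A minimal sketch using the agent's server_list setting, assuming two hypothetical servers puppet1.example.com and puppet2.example.com:

# hypothetical HA agent configuration; the agent tries each server in order
sudo bash -c "cat << EOF > /etc/puppetlabs/puppet/puppet.conf
[agent]
server_list = puppet1.example.com:8140,puppet2.example.com:8140
certname = $(hostname -f)
EOF"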

Install the Puppet Server

Now that we have a bootstrap.sh installer script, we can install the Puppet Server with the following command:

vagrant provision puppetserver01 --provision-with "bootstrap"

This should start and enable the puppetserver service, which will create a default certificate authority. To verify, run the following command:

vagrant ssh puppetserver01 --command \
  "sudo /opt/puppetlabs/bin/puppetserver ca list --all"

You should see the Puppet Server's own certificate listed as signed.

Install the Puppet Agents

You can install Puppet Agent on both of the client nodes with the following command:

for NODE in node0{1..2}; do
  vagrant provision $NODE --provision-with "bootstrap"
done

Authorize the Puppet Agents

In this phase, we need to authorize each of the clients to communicate with the Puppet Server.

Issue Certificate Requests

We can begin the process by testing the connectivity of the Puppet agents to the server, which should fail. During this attempt, each agent submits a certificate signing request, which we can approve later.

Run the following command to issue a test from each client node:

for NODE in node0{1..2}; do
  printf "\n$NODE: Testing connection (expect failure)\n"
  vagrant ssh $NODE \
    --command 'sudo /opt/puppetlabs/bin/puppet agent --test'
done

Each of these sessions should fail, as the clients are not yet authorized to communicate with the server.

Now check to see if the Puppet Server has received any certificate requests:

vagrant ssh puppetserver01 --command \
  "sudo /opt/puppetlabs/bin/puppetserver ca list"

We should see pending certificate requests for node01.local and node02.local.

Sign the Certificates

Now that the CA has received certificate requests, we can authorize the clients to connect by signing their certificates with the following command:

for NODE in node0{1..2}.local; do
  printf "\nSigning $NODE\n"
  vagrant ssh puppetserver01 --command \
    "sudo /opt/puppetlabs/bin/puppetserver ca sign --certname $NODE"
done

When running this, you should see a message that each certificate was successfully signed. You can verify by listing the signed certificates with the following command:

vagrant ssh puppetserver01 --command \
  "sudo /opt/puppetlabs/bin/puppetserver ca list --all"

This should now show the certificates for both nodes, alongside the server's, listed as signed.

Test Connectivity

Now you can run the test again to verify that the client agents can communicate with the Puppet Server:

for NODE in node0{1..2}; do
  printf "\n$NODE: Testing connection (expect success)\n"
  vagrant ssh $NODE \
    --command 'sudo /opt/puppetlabs/bin/puppet agent --test'
done

This time, the runs should succeed, with each agent fetching its catalog from the server and applying it.

Demonstrating Puppet

To demonstrate Puppet’s functionality, we’ll start with a minimalist hello_web module. Setting up the Puppet Server and client agents is complex enough for this article, and in the future, we can explore more advanced examples.

In this tutorial, we’ll use the following components:

  • Manifest: A file containing instructions that define the desired state of system resources, such as files, services, packages, and users, on a server.
  • Module: A package that includes a collection of manifests, files, and templates, which can be reused and applied across different nodes.
  • Site Manifest: A master inventory that controls the configuration of all nodes (systems) managed by Puppet, specifying what each system should do.

The Site Manifest

The entry point in this environment is the site manifest, which lists the nodes to configure and exactly what goes on each of them.

Update the file ./site/manifests/site.pp with the following:

node "node01.local" {
class { 'hello_web': }
}

node "node02.local" {
class { 'hello_web': }
}

The Module

The hello_web module installs the Apache web server and copies a simple HTML page into the document root.

First, let's start with the HTML file. Update ./site/modules/hello_web/files/index.html with the following:

<html>
  <body>
    <h1>Hello World!</h1>
  </body>
</html>

Now we will add the main manifest for our module. Update ./site/modules/hello_web/manifests/init.pp with the following:

class hello_web (
  $package_name = 'apache2',
  $service_name = 'apache2',
  $doc_root     = '/var/www/html'
) {

  package { $package_name:
    ensure => present,
  }

  service { $service_name:
    ensure => running,
    enable => true,
  }

  file { "${doc_root}/index.html":
    source => 'puppet:///modules/hello_web/index.html',
  }
}

We also need to provide metadata that describes our module. Update ./site/modules/hello_web/metadata.json with the following:

{
  "name": "joachim8675309-hello_web",
  "version": "0.1.0",
  "author": "joachim8675309",
  "summary": "Hello World Tutorial",
  "license": "Apache-2.0",
  "source": "https://github.com/darkn3rd/blog_tutorials",
  "dependencies": [],
  "operatingsystem_support": [
    {
      "operatingsystem": "Ubuntu",
      "operatingsystemrelease": ["22.04"]
    }
  ],
  "requirements": [
    {
      "name": "puppet",
      "version_requirement": ">= 7.24 < 9.0.0"
    }
  ]
}

Running the Code

With the Puppet Server running and client agents authorized to communicate with it, and the necessary code in place, we can now apply the configuration by running vagrant provision:

for NODE in node0{1..2}; do vagrant provision $NODE; done

This should show each agent applying the hello_web configuration defined in the site manifest.

After the run, you can test the results with:

for NODE in node0{1..2}; do
  vagrant ssh $NODE --command "curl --include localhost"
done

This should return the Apache response headers followed by our page, something like the following (header details will vary):
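
HTTP/1.1 200 OK
Date: ...
Server: Apache/2.4.52 (Ubuntu)
Content-Type: text/html

<html>
  <body>
    <h1>Hello World!</h1>
  </body>
</html>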

Cleanup

You can remove the virtual guests with this command:

vagrant destroy --force

Addendum: Control Repo

The infrastructure-as-code used in this tutorial is minimal and designed to keep things simple for learning purposes. However, in production environments, you’ll need a more robust setup that can support multiple environments like test, stage, and production.

One effective way to manage this complexity is by using a control repository. A control repository is a centralized code repository that organizes and manages your Puppet code, Hiera data, and environment configurations. By cloning and maintaining a control repository, you can standardize your infrastructure across different environments, streamline updates, and ensure consistency throughout your deployments.
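
A typical control repository looks something like this (a rough sketch; directory names vary by convention):

control-repo/
├── Puppetfile         # module dependencies, deployed with r10k or Code Manager
├── environment.conf
├── data/              # Hiera data
├── manifests/
│   └── site.pp        # the site manifest
└── site-modules/      # roles, profiles, and custom modules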

In a control repository, you’ll typically have Git branches for each environment, such as test, stage, and production. Puppet automatically creates corresponding Puppet environments based on these branches.

In practice, as you test your infrastructure code, you promote it through these environments by merging changes from one branch to the next. For example, once your code passes tests in the test branch, you merge it into the stage branch and test it there. When you're confident that everything is stable, you can merge the stage branch into production to deploy the changes live.
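
In Git terms, this promotion workflow amounts to merges between the environment branches:

# promote tested changes from test to stage
git checkout stage
git merge test
git push origin stage

# once stage is verified, promote to production
git checkout production
git merge stage
git push origin production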

Conclusion

In this tutorial, we covered the key steps to setting up a configuration management system using a Puppet Server. You learned how to authorize Puppet agents on various nodes and set up a code repository to manage configuration across those nodes. While we kept the example minimal, I also introduced the concept of a control repository, which helps manage Puppet code across different environments, like test, stage, and production.

Another goal of this tutorial was to demonstrate how to use Vagrant with the Puppet Agent provisioner for local testing. This setup allows you to test not only Puppet manifests in a managed environment but also the underlying infrastructure, such as the Puppet Server itself. It’s especially useful when testing integrations with external systems, like a Certificate Authority, or running through upgrade scenarios (e.g., upgrading from Puppet Server 5 to 8). Testing locally with Vagrant is faster and more cost-effective compared to testing in full-scale environments.

For scenarios where you’re strictly testing Puppet code — like manifests and modules — and don’t need to simulate the full server infrastructure, you can use the Puppet Apply provisioner with a single virtual machine for a simpler setup.

Resources

Source Code

The source code for this blog can be found in the following repository:
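
https://github.com/darkn3rd/blog_tutorials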

