DevOps Tools: Introducing Helmfile
Automate Helm Charts with Helmfile
In the Kubernetes community, it would be a surprise to find anyone who does not already know about the popular Helm tool for deploying services. Similar to tools like Homebrew for macOS or Chocolatey for Windows, you can install a solution on Kubernetes easily with helm install <package-name>.
Using Helm charts
Helm charts share one thing in common with change configuration tools like Chef, Consul-Template, Ansible, Puppet, or Salt Stack: you can use a template engine like ERB or Jinja to dynamically compose a configuration file. With Helm, the configuration files are Kubernetes manifests, which are dynamically built using the go template engine with the Sprig library and customized with values you pass during deployment.
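For instance, a chart template might reference those values like the following (a simplified sketch; the foobar name, labels, and ports are illustrative, not from an actual chart):
# templates/service.yaml (simplified chart template sketch)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-foobar
spec:
  # .Values.service.type is filled in from values passed at deploy time
  type: {{ .Values.service.type }}
  selector:
    app: foobar
  ports:
    - port: 80
      targetPort: 8080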
As an example, you can enable an endpoint and storage in a typical chart like this:
# add the remote repo
helm repo add foobar https://example.com/charts

# deploy the chart with set values
helm install stage foobar/foobar \
  --set service.type=LoadBalancer \
  --set persistence.enabled=true
The helm tool also supports passing in a configuration file that contains these values, so you can achieve the same results with the following:
# create my-values.yaml
cat <<-EOF > my-values.yaml
service:
  type: LoadBalancer
persistence:
  enabled: true
EOF

# add the remote repo
helm repo add foobar https://example.com/charts

# deploy the chart with external values file
helm install stage foobar/foobar --values my-values.yaml
Introducing Helmfile
A Helm chart package is useful for installing a single service on Kubernetes. Here’s the problem: what happens when you need to install several charts with values that need to be coordinated across all of the charts?
This is where a tool like helmfile is useful. With a single configuration script (helmfile.yaml), you can install multiple charts and supply a set of values for all of them.
Using the example from above, we can compose the same thing using helmfile, with the Helm chart values embedded into helmfile.yaml.
# create helmfile.yaml
cat <<-EOF > helmfile.yaml
repositories:
  - name: foobar
    url: https://example.com/charts

releases:
  - name: stage
    chart: foobar/foobar
    values:
      - service:
          type: LoadBalancer
        persistence:
          enabled: true
EOF

# add the remote repo + deploy the chart with embedded values
helmfile apply
If you want the values in a separate external values file, you can do the same thing with the following:
# create my-values.yaml
cat <<-EOF > my-values.yaml
service:
  type: LoadBalancer
persistence:
  enabled: true
EOF

# create helmfile.yaml
cat <<-EOF > helmfile.yaml
repositories:
  - name: foobar
    url: https://example.com/charts

releases:
  - name: stage
    chart: foobar/foobar
    values:
      - my-values.yaml
EOF

# add the remote repo + deploy the chart with external values file
helmfile apply
A Complete Example
This is a full example using helmfile to integrate two solutions: Dgraph, a highly performant distributed graph database, and MinIO, an object storage (buckets) solution.
The Helmfile Script
Below is an example helmfile.yaml for the MinIO and Dgraph Helm charts. When using this configuration, helmfile will add the Helm chart repositories and then apply these two Helm charts with values files found in the ./values directory.
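A minimal sketch of that helmfile.yaml might look like the following (the repository URLs and chart names reflect the MinIO and Dgraph charts at the time of writing, so verify them against the current chart documentation):
# helmfile.yaml (a minimal sketch)
repositories:
  - name: minio
    url: https://helm.min.io
  - name: dgraph
    url: https://charts.dgraph.io

releases:
  - name: minio
    namespace: {{ env "MINIO_NAMESPACE" | default "minio" }}
    chart: minio/minio
    values:
      - ./values/minio.yaml.gotmpl
  - name: dgraph
    namespace: {{ env "DGRAPH_NAMESPACE" | default "dgraph" }}
    chart: dgraph/dgraph
    values:
      - ./values/dgraph.yaml.gotmpl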
MinIO Chart Values
Below is a MinIO configuration that references a recent image of MinIO and supplies an access key and secret key, as well as a default bucket called dgraph. This dgraph bucket is created using the mc command line tool from a batch job that is run during deployment.
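A sketch of values/minio.yaml.gotmpl might look like this (the key names follow the MinIO community chart at the time of writing and the image tag is illustrative, so check your chart version for the exact schema):
# values/minio.yaml.gotmpl (a minimal sketch)
image:
  # illustrative tag; pin a recent MinIO release for your deployment
  tag: RELEASE.2021-04-22T15-44-28Z

accessKey: {{ requiredEnv "MINIO_ACCESS_KEY" }}
secretKey: {{ requiredEnv "MINIO_SECRET_KEY" }}

# the chart runs a post-install batch job that uses mc to create this bucket
defaultBucket:
  enabled: true
  name: dgraph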
Dgraph Chart Values
Below is a Dgraph configuration that configures backups to be sent to a MinIO server. The Dgraph Helm chart will configure a Kubernetes CronJob that runs a script† to instruct Dgraph to send backups to this remote MinIO server.
† The script that supports backups will issue RESTful or GraphQL requests to Dgraph, with appropriate support for Access JWT tokens (see ACLs) and Mutual TLS (see TLS) if these features are enabled. This script works with Dgraph v21.03.0 and earlier versions, using GraphQL if supported by the Dgraph version.
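A sketch of values/dgraph.yaml.gotmpl might look like the following (the backups keys follow the Dgraph chart at the time of writing, and the MinIO destination assumes the in-cluster DNS name of the MinIO release above; verify both against your chart version):
# values/dgraph.yaml.gotmpl (a minimal sketch)
backups:
  # run a full backup every 15 minutes, the schedule used in this example
  full:
    enabled: true
    schedule: "*/15 * * * *"
  # in-cluster address of the MinIO service and the dgraph bucket
  destination: minio://minio.{{ env "MINIO_NAMESPACE" | default "minio" }}.svc:9000/dgraph
  keys:
    minio:
      access: {{ requiredEnv "MINIO_ACCESS_KEY" }}
      secret: {{ requiredEnv "MINIO_SECRET_KEY" }}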
Applying this Configuration
Before we begin, we need to download the above code snippets and set some environment variables that will be used to inject values into our helmfile configuration.
First, create a directory structure for the downloaded snippets like the following:
.
├── helmfile.yaml
└── values
├── dgraph.yaml.gotmpl
└── minio.yaml.gotmpl
You can set these required and optional environment variables:
# required
export MINIO_ACCESS_KEY=backups
export MINIO_SECRET_KEY=password123

# optional (set values that make sense in your environment)
export MINIO_NAMESPACE=test # default is 'minio'
export DGRAPH_NAMESPACE=test # default is 'dgraph'
Once these are set, you can just run the following:
helmfile apply
After this, if MinIO and Dgraph are installed into the same namespace, you should see something similar to the following resources with kubectl get all.
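For example, assuming both releases were deployed into the same namespace using the optional variables above:
kubectl get all --namespace $DGRAPH_NAMESPACE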
Testing the Results
MinIO has a graphical interface, which you can view using port-forward:
# find the name of the MinIO pod
export MINIO_POD_NAME=$(kubectl get pods \
  --namespace $MINIO_NAMESPACE \
  --selector "release=minio" \
  --output jsonpath="{.items[0].metadata.name}"
)

# forward local port 9000 to the MinIO pod
kubectl port-forward --namespace $MINIO_NAMESPACE $MINIO_POD_NAME 9000:9000
After running this command, you can log in to MinIO at http://localhost:9000.
After you log in, you should see a single bucket named dgraph.
After about 15 minutes, which is the time set for full backups in the configuration (values/dgraph.yaml.gotmpl), you should see the contents created by Dgraph.
Should there be an error with the backups, you can check the status using the following method:
# gather the names of all jobs in the namespace
JOBS=( $(kubectl get jobs \
  --namespace $DGRAPH_NAMESPACE \
  --no-headers \
  --output custom-columns=":metadata.name"
) )

# print the logs of the pod created by each job
for JOB in "${JOBS[@]}"; do
  JOB_POD=$(kubectl get pods \
    --namespace $DGRAPH_NAMESPACE \
    --selector job-name=$JOB \
    --no-headers \
    --output custom-columns=":metadata.name"
  )
  printf "\nLogs for job=$JOB\n------------------------\n"
  kubectl logs --namespace $DGRAPH_NAMESPACE $JOB_POD
done
NOTE (Troubleshooting): Starting in Dgraph v21.03.0, errors will not be reported from a failed backup request. In previous versions of Dgraph, backups were synchronous (blocking), so if an error occurred, such as a bad MinIO address, credentials, or licensing, you would get an error back immediately.
However, in Dgraph v21.03.0, backups are asynchronous (non-blocking, running in the background), so you will have to consult the logs and infer which log entry is connected to the triggered backup request based on the timestamp.
For example:
kubectl logs $DGRAPH_NAMESPACE-dgraph-alpha-0 | \
grep -E 'backup_(ee|processor).go'
TIP (Alerting): Setting up observability to capture backup failures is out of scope for this simple example, but here is how you could go about it.
Prior to Dgraph v21.03.0, you can set up alerts using something like kube-state-metrics and scan for Kubernetes CronJob failures related to the backup cron job labels. As an example using Prometheus Alertmanager, see: dgraph-backup-alert-rules.yaml.
With Dgraph v21.03.0, you will need to configure log heuristics to detect and report errors from backup events.
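As a rough illustration of the pre-v21.03.0 approach, a Prometheus alerting rule against kube-state-metrics might look like this (the job_name selector is an assumption and must match how your backup CronJob names its jobs):
groups:
  - name: dgraph-backup-alerts
    rules:
      - alert: DgraphBackupJobFailed
        # kube_job_status_failed is exported by kube-state-metrics
        expr: kube_job_status_failed{job_name=~"dgraph.*backup.*"} > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Dgraph backup job {{ $labels.job_name }} has failed"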
Resources
Here are some links that I have come across on the topics in this article:
Helm
- What is a Helm chart? https://www.freecodecamp.org/news/what-is-a-helm-chart-tutorial-for-kubernetes-beginners/
- Home page: https://helm.sh/
- Helm chart repository: https://artifacthub.io/
- Create your first Helm chart: https://docs.bitnami.com/tutorials/create-your-first-helm-chart/
- Helm Chart Structure: https://helm.sh/docs/topics/charts/
Helmfile
- What is Helmfile? https://tanzu.vmware.com/developer/guides/kubernetes/helmfile-what-is/
- Source code: https://github.com/roboll/helmfile
- Helmfile examples: https://github.com/cloudposse/helmfiles
Text Editors
Color syntax highlighting of gotemplate yaml files:
- VS Code extension (gotemplate-syntax): https://marketplace.visualstudio.com/items?itemName=casualjim.gotemplate
- vscode-gotemplate source: https://github.com/casualjim/vscode-gotemplate
Helm charts
- MinIO Helm chart: https://artifacthub.io/packages/helm/minio/minio
- Dgraph Helm chart: https://artifacthub.io/packages/helm/dgraph/dgraph
Source code for this tutorial
Conclusion
As you can see, Helm charts combined with helmfile are quite powerful. Using these together, you can deploy an entire organization’s infrastructure on Kubernetes within minutes.
This article would not be complete without mentioning that there are alternatives to Helm and helmfile…
Kustomize as an alternative
Another tool that fills the same role as Helm charts is kustomize, which is built into the Kubernetes client tool (kubectl) and also available externally as the kustomize command. It does not use templates or variables, but rather directories of manifests and kustomization files that are used to patch Kubernetes manifests during deployment.
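For a sense of the workflow, a minimal kustomization might look like the following (the file names are illustrative, and this uses the classic patchesStrategicMerge field; newer kustomize versions favor patches):
# kustomization.yaml (a minimal sketch)
resources:
  - base/deployment.yaml
  - base/service.yaml

# patch files overlay fields (such as replica counts) onto the base manifests
patchesStrategicMerge:
  - patches/replicas.yaml

# deploy with: kubectl apply -k .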
Personally, I find it harder to manage a directory of patch files for complex solutions. In practice, errors are quite common, and the solution becomes both ungainly and unmanageable quite fast. Still, if a Kubernetes operator is used and there is no Helm chart built around the operator, then kustomize may be the only solution.
Pros
- Standardized Tooling: the kustomize tooling is embedded into the kubectl command, so no further tooling is required.
- Helm is Unavailable: some solutions, such as Kubernetes operators, may not have a Helm chart to dynamically create manifests, so kustomize is an available alternative.
Cons
- Complexity: Managing directories of patch files can be difficult to troubleshoot and debug.
- Maintenance: Managing a directory of patches can get ungainly and unmanageable quickly, and with numerous small files, it is error prone.
If you would like to integrate kustomize solutions into helmfile, you’ll be happy to know that there are two methods to support kustomize inside your helmfile scripts:
- Run an external script like helmify-kustomize when deploying a chart, where this script can use kustomize to dynamically compose a Helm chart. (Thanks to Yusuke Kuoka for this solution.)
- Support kustomize syntax directly inside helmfile. This may require a helm plugin, and is currently not documented except in the form of go code. (Thanks to Yusuke Kuoka for making this happen.)
A big shout out to Yusuke Kuoka for the kustomize + helmfile integration and so many wonderful contributions to the helmfile core tool. Forever, thank you.
Terraform as an alternative
Another solution is to use Terraform to manage Kubernetes resources through the Kubernetes provider and the Helm provider, in combination with an embedded template engine and the HCL language.
Using this solution, you could provision cloud resources and then, based on the results of those created resources (such as UIDs), deploy Helm charts using values derived from the cloud resources. For example: create a GKE cluster and a GCS bucket, then deploy charts with values that reference the GCS bucket on the same GKE cluster. The Kubernetes credentials required for access would come from the provisioned GKE cluster.
Pros
- Integration (Standardized Tooling): Manage cloud resources (Azure, GCP, AWS) including the Kubernetes installation itself (AKS, GKE, EKS) and Kubernetes resources with the same tooling.
Cons
- Integration (Segregated Provisioning): Integration between resources managed by different providers is not completely seamless due to how Terraform evaluates provider blocks versus the actual resources (see Provider Configuration). Essentially, the cluster itself should not be created in the same Terraform module where Kubernetes provider resources are also utilized.
- Complexity (Different Tools): Though you may share the same tooling in the Terraform language (HCL) for cloud and Kubernetes resources, Terraform is not as popular as the pure kubectl and helm tools used by most of the community, and Terraform’s template engine used by templatefile is not popular, at least compared to gotemplate.
- Complexity (Source of Truth): The source of truth is not the live Kubernetes resources themselves as reported by the Kubernetes API, but rather an offline state file that needs to be synchronized with the actual online state. In practice this has been more of a hassle for cloud resources than for Kubernetes provider resources, so this issue may be negligible.
- Maintenance (Dependency Hell): There can be dependency nightmares where changes to versions of the terraform command line tool, providers, modules, as well as module dependencies, can cause disruptive changes where infrastructure must be destroyed and recreated.
- Quality or Feature Limitations: Terraform providers that allow you to interact with cloud resources have had, and will have, bugs that do not exist in the Kubernetes API accessed by kubectl or Helm. Some features, such as using kubeconfig files, are not supported (or have had problems) in later versions of the Kubernetes provider. Some functions like jsonencode will convert booleans and ints into strings, creating undesirable JSON types. Many gotchas and pain points like this are automatically closed by HashiCorp’s git bot automation, without anyone from HashiCorp giving the issue proper attention.
The above may make it seem like I do not like Terraform, but honestly, I love both Terraform and helmfile, and think both are great solutions. If you would like to integrate helmfile and Terraform together, there’s a helmfile provider that you can explore.
A big shout out to Yusuke Kuoka for the helmfile provider.
Change Configuration as an alternative
Another solution is to actually use a change configuration platform like Puppet or Chef, as these use a template engine that can dynamically compose Kubernetes manifests, or may have modules or cookbooks that talk directly to the Kubernetes API.
Pros
- Standardized Platform: Use the same tooling for both Kubernetes resources with immutable infrastructure as well as traditional configuration management with mutable infrastructure.
Cons
- Unnecessary (immutable images): Many of the features of a change configuration platform like Puppet or Chef for managing mutable infrastructure, where configuration state is converged to the desired state, will not be used, given that container images are immutable.
- Redundant (state already managed): A change configuration platform like Puppet or Chef is redundant, as the infrastructure components deployed by Kubernetes are managed by an embedded reconciliation loop, similar to the convergence loop used to manage mutable infrastructure.
- Complexity: Change configuration platforms like Puppet or Chef are complex and domain specific (have their own DSLs or interface), so this adds further unnecessary complexity, especially when most of the features around convergence will never be used.
- Expensive: Given the above complexity, there will be added costs in time and resources across lead time, testing, and maintenance. Furthermore, these platforms can have excessive licensing costs depending on usage.
The discussion above is oriented around pull-based agent solutions (Chef, Puppet, CFEngine), so some of it may not be applicable to a push-based agentless tool like Ansible or other remote-execution platforms like Salt Stack.
Ansible is in some ways similar to Terraform in regards to provisioning cloud resources, except that Ansible doesn’t maintain an offline state file; it takes its source of truth from direct interaction with Kubernetes or other cloud web APIs.
Wrapping up
I find helmfile to be the most flexible and efficient tool to manage Helm charts, and the Helm chart itself to be the most effective way to manage dynamic Kubernetes manifests in practice.
There may be some niche situations where you may want to use kustomize or Terraform. As for change configuration tools like Puppet or Chef, I would never recommend them, as they add an incredible level of complexity and expense (lead time, maintenance, and licensing). Just don’t do it.