Building GKE with Terraform

Provision Google Kubernetes Engine with Terraform

Joaquín Menchaca (智裕)
5 min read · Jun 19, 2020


This article shows how to build a Kubernetes cluster on GKE (Google Kubernetes Engine) with the popular Terraform tool.

About Terraform

For those not familiar with Terraform, it is a tool that manages configuration changes for cloud resources. You describe resources in a desired state, such as a GKE cluster on Google Cloud, and Terraform provisions them to match.

Terraform scripts use a declarative, human-readable language to describe the desired infrastructure, and the terraform tool applies the changes expressed in them. This whole process, with the scripts tracked in git or another version control system, is called infrastructure as code.

Previous Article

The previous article demonstrated how to quickly stand up a GKE cluster using the Google Cloud SDK.

Requirements

You will need the following tools setup and configured:

  1. Google Cloud SDK: the tools needed to manage Google Cloud resources.
  2. Terraform: provisions a new cluster using the human-readable language HCL.
  3. Kubectl (pronounced koob-cuttle): the Kubernetes client CLI tool used to interact with your newly created cluster.

Provisioning GKE with Terraform

If you only need the basic default public GKE cluster (with nodes exposed to the internet), provisioning is a breeze: you can get by with the google_container_cluster (ref) and google_container_node_pool (ref) resources.

The complexity shoots up dramatically if you veer away from the defaults, such as using a private cluster. For these scenarios, it is easier to use a module that encapsulates the complexity, such as Gruntwork’s GKE module, Jetstack’s GKE Cluster module, or the HashiCorp-Google Kubernetes Engine module.

This article demonstrates the kubernetes-engine module by HashiCorp and Google.

Create Project File Structure

In bash, you can create a project structure like this:

mkdir $HOME/gke_project
cd $HOME/gke_project
touch main.tf provider.tf

For provider.tf, add the following content:

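The original file is embedded as a gist and not reproduced here. A minimal sketch of a provider.tf, assuming variables named project and region (matching the TF_VAR_ environment variables set later in this article):

```hcl
# provider.tf — configure the Google Cloud provider.
# The project and region values are supplied through variables
# (set later via TF_VAR_project and TF_VAR_region).
provider "google" {
  project = var.project
  region  = var.region
}
```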

For main.tf, add the following content:
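The embedded gist is not reproduced here. Below is a minimal sketch of a main.tf that declares the three variables and calls the kubernetes-engine module; the exact required inputs (especially the network settings) vary by module version, so consult the module’s documentation before using this:

```hcl
# main.tf — variables plus the kubernetes-engine module call.

variable "project" {
  type = string
}

variable "region" {
  type    = string
  default = "us-central1"
}

variable "cluster_name" {
  type    = string
  default = "my-terraform-gke-cluster"
}

module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "~> 9.0" # assumption: pin to whatever version you have tested

  project_id = var.project
  name       = var.cluster_name
  region     = var.region

  # Assumption: use the project's default network for simplicity.
  # A production setup would reference a dedicated VPC, subnet, and
  # secondary IP ranges for pods and services.
  network    = "default"
  subnetwork = "default"
}
```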

This will create a cluster similar to the one we created with the gcloud container clusters create command in the previous article.

Initializing Terraform

We need to download the module and providers, which we can do with the following command:

terraform init

This code uses the Kubernetes Engine module from the Terraform Google Modules collection.

Configuring Variables

For the google provider, we need to specify a project and a region. We also need to provide, at the very least, a name for the cluster.

We can set these using environment variables. Type these in bash (or POSIX compliant shell):

export TF_VAR_project=$(gcloud config get-value project)
export TF_VAR_region="us-central1"
export TF_VAR_cluster_name="my-terraform-gke-cluster"

Provisioning a Cluster

With the project initialized and variables set, we can now provision the cluster (review the plan and answer yes when prompted):

terraform apply

Test the Cluster

Run this command and watch until the NUM_NODES column shows 3 nodes:

watch gcloud container clusters list \
--filter name=my-terraform-gke-cluster

After the three nodes come up, press CTRL-C. Next, add credentials to your KUBECONFIG (~/.kube/config) so that you can interact with the cluster:

gcloud container clusters get-credentials my-terraform-gke-cluster \
--region us-central1

Verify that the new context was added and is the current one:

kubectl config get-contexts

Now list the components on this fresh new GKE cluster:

kubectl get all --all-namespaces

Cleaning Up

When finished, you can tear everything down with:

terraform destroy

Deploying an Application

Now that we have a GKE cluster provisioned, we can deploy the hello-kubernetes web application.

Deploy the Deployment

The first resource we will deploy is a Deployment. This describes a set of three pods that will automatically recover should one of them fail.

Create a file named hello-k8s-deploy.yaml with the following contents:

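The embedded manifest is not reproduced here. A minimal sketch, assuming the commonly used paulbouwer/hello-kubernetes image listening on port 8080 (the same port used in the port-forward step of this article):

```yaml
# hello-k8s-deploy.yaml — a Deployment of three hello-kubernetes pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.8 # assumption: image and tag
          ports:
            - containerPort: 8080
```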

Now deploy this resource with the following:

kubectl apply --filename hello-k8s-deploy.yaml

You can check the status with:

kubectl get deployment

Deploy the Service

For some high availability, we want to be able to talk to any one of the three pods. We can do this with a Service resource, which routes traffic to one of the three pods.

Create a file hello-k8s-svc.yaml with the following contents:
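The embedded manifest is not reproduced here. A minimal sketch of a Service, assuming it selects the pods by the app=hello-kubernetes label and exposes port 8080 (matching the port-forward command used in this article):

```yaml
# hello-k8s-svc.yaml — a Service routing to the hello-kubernetes pods.
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: ClusterIP
  selector:
    app: hello-kubernetes # assumption: matches the Deployment's pod labels
  ports:
    - port: 8080
      targetPort: 8080
```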

Now deploy this resource with the following:

kubectl apply --filename hello-k8s-svc.yaml

You can check the status with:

kubectl get service

Testing the Deployment

You can run this command to access the web application locally:

kubectl port-forward service/hello-kubernetes 8080:8080

After this, hello-kubernetes can be viewed in a web browser at http://localhost:8080.

Resources

Here are some resources that may be useful in exploring GKE:

Blog Source Code

I placed the source code used in this blog here:

Google Terraform Modules

Google and HashiCorp created the Terraform Google Modules collection, including the Kubernetes Engine module.

Gruntwork’s GKE Module

Jetstack’s GKE Module

Learn Terraform: Provision a GKE Cluster

Conclusion

I hope this is useful for those wishing to explore Kubernetes, GKE, or Terraform. It is amazing how far modern infrastructure automation has come: with a few small scripts, we can deploy an entire infrastructure in less than 15 minutes. Previously, this could easily take weeks or months of planning. Now we are only limited by our imaginations.

In follow-up articles, I hope to demonstrate integration with Cloud DNS and using certificates to encrypt web traffic, as well as using ingress and service meshes. If you like this article, have questions, or want to see other topics covered, drop me a note.


Joaquín Menchaca (智裕)

DevOps/SRE/PlatformEng — k8s, o11y, vault, terraform, ansible