Deploy Kubernetes Apps with Terraform

Using Terraform to Manage Kubernetes Resources (EKS Version)

Why use Terraform?

  • Standing up an Amazon EKS cluster with integrations to AWS cloud resources installed into Kubernetes, e.g. external-dns for Route53, tiller for Helm charts, an ingress controller like nginx-ingress or aws-alb-ingress-controller, security with kube2iam, and so forth.
  • Installing an application that is configured to use provisioned resources, like S3 buckets, SNS, SQS, ECR, an IAM user, etc.

Required Tools

  • AWS CLI, needed to interact with AWS cloud resources. A profile with administrative access should be configured.
  • eksctl (pronounced "eks-cuttle"), a tool to easily create an EKS cluster. [Note: this is optional, needed only if you want to use eksctl to quickly create the EKS cluster]
  • kubectl (pronounced "koob-cuttle"), the Kubernetes client tool, to interact with EKS.
  • Helm to install applications on a Kubernetes cluster (Helm 2 explicitly, see below). [Note: this is needed to demonstrate that the Tiller service is working]
  • Terraform CLI to manage Kubernetes and AWS resources, as well as create an EKS cluster.
  • Bash Shell is not strictly required, but the commands in this tutorial were tested with bash. [Note: bash is the default in macOS and popular Linux distros, and available via msys2 on Windows]

Getting Helm 2

# download and unpack the Helm 2 client (macOS binaries shown; adjust for your platform)
URL=https://get.helm.sh/helm-v2.16.1-darwin-amd64.tar.gz
cd ~/Downloads && curl -sO $URL
tar xvzf ${URL##*/}
# install under the name helm2 so it can coexist with a Helm 3 client
sudo cp darwin-amd64/helm /usr/local/bin/helm2
helm2 version | cut -d\" -f2   # should print v2.16.1

Part 0: Setup Project Area

export PROJECT_HOME=$HOME/projects/eks-with-tiller
mkdir -p $PROJECT_HOME && cd $PROJECT_HOME

Part 1: Creating a Kubernetes Cluster

Method 1: Using EKSCtl for EKS

eksctl create cluster \
  --name=wonderful-unicorn \
  --kubeconfig=wonderful-unicorn-kubeconfig.yaml
# point KUBECONFIG to only our cluster
export KUBECONFIG=$PROJECT_HOME/wonderful-unicorn-kubeconfig.yaml
# test kubectl works on new kubernetes cluster
kubectl get all --all-namespaces

Method 2: Using Terraform for EKS

cat <<-CLUSTER_EOF > eks_cluster.tf
variable "region" {}
variable "eks_cluster_name" {}

module "eks-cluster" {
  source           = "github.com/darkn3rd/eks-basic?ref=v0.0.1"
  region           = var.region
  eks_cluster_name = var.eks_cluster_name
}
CLUSTER_EOF

export TF_VAR_eks_cluster_name=wonderful-unicorn
export TF_VAR_region=us-west-2
terraform init
terraform apply
# point KUBECONFIG to only our cluster
export KUBECONFIG=$PROJECT_HOME/kubeconfig_wonderful-unicorn
# test kubectl works on new kubernetes cluster
kubectl get all --all-namespaces

Part 2: Tiller Service Example

Setup

my_modules/
└── tiller/
    ├── crb.tf
    ├── deploy.tf
    ├── provider.tf
    ├── sa.tf
    ├── svc.tf
    └── variables.tf

mkdir -p $PROJECT_HOME/my_modules/tiller/
pushd $PROJECT_HOME/my_modules/tiller/
touch ./{provider.tf,variables.tf,sa.tf,crb.tf,deploy.tf,svc.tf}
popd

Variables

tiller/variables.tf
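
The module takes just the two inputs it is invoked with later in this tutorial. A minimal sketch of tiller/variables.tf along those lines (the descriptions are assumptions):

variable "region" {
  description = "AWS region of the EKS cluster, e.g. us-west-2"
  type        = string
}

variable "eks_cluster_name" {
  description = "Name of the target EKS cluster"
  type        = string
}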

AWS and Kubernetes Providers

tiller/provider.tf
  1. AWS provider: used to fetch the cluster endpoint and credentials through the Amazon EKS data sources.
  2. Kubernetes provider: used to manage the state of resources in Kubernetes.
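
A sketch of tiller/provider.tf under those assumptions, using the aws_eks_cluster and aws_eks_cluster_auth data sources to hand EKS credentials to the Kubernetes provider (Terraform 0.12-era syntax, Kubernetes provider 1.x):

provider "aws" {
  region = var.region
}

# look up connection details for the existing EKS cluster
data "aws_eks_cluster" "cluster" {
  name = var.eks_cluster_name
}

# short-lived bearer token for the cluster's API server
data "aws_eks_cluster_auth" "cluster" {
  name = var.eks_cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}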

Tiller Service Account Manifest

tiller/sa.tf
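
A sketch of tiller/sa.tf, mirroring the service account from the stock Tiller install (the Terraform resource name tiller is an assumption):

# service account that the Tiller pod will run as
resource "kubernetes_service_account" "tiller" {
  metadata {
    name      = "tiller"
    namespace = "kube-system"
  }

  automount_service_account_token = true
}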

Tiller Cluster Role Binding Manifest

tiller/crb.tf
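
A sketch of tiller/crb.tf, binding that service account to the built-in cluster-admin role, as the Helm 2 RBAC guide does:

# give Tiller cluster-wide admin rights (fine for a demo, too broad for production)
resource "kubernetes_cluster_role_binding" "tiller" {
  metadata {
    name = "tiller"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.tiller.metadata[0].name
    namespace = "kube-system"
  }
}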

Tiller Deployment Manifest

tiller/deploy.tf
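
A sketch of tiller/deploy.tf, adapted from the deployment manifest that helm init would generate; the image tag is pinned to match the helm2 client installed earlier:

resource "kubernetes_deployment" "tiller" {
  metadata {
    name      = "tiller-deploy"
    namespace = "kube-system"

    labels = {
      app  = "helm"
      name = "tiller"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app  = "helm"
        name = "tiller"
      }
    }

    template {
      metadata {
        labels = {
          app  = "helm"
          name = "tiller"
        }
      }

      spec {
        service_account_name = kubernetes_service_account.tiller.metadata[0].name
        # the provider defaults this to false, which breaks Tiller's API access
        automount_service_account_token = true

        container {
          name  = "tiller"
          image = "gcr.io/kubernetes-helm/tiller:v2.16.1"

          port {
            name           = "tiller"
            container_port = 44134
          }

          env {
            name  = "TILLER_NAMESPACE"
            value = "kube-system"
          }
        }
      }
    }
  }
}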

Tiller Service Manifest

tiller/svc.tf
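
A sketch of tiller/svc.tf, a ClusterIP service exposing Tiller's gRPC port inside the cluster (the helm client reaches it through a port-forward, so no external exposure is needed):

resource "kubernetes_service" "tiller" {
  metadata {
    name      = "tiller-deploy"
    namespace = "kube-system"

    labels = {
      app  = "helm"
      name = "tiller"
    }
  }

  spec {
    type = "ClusterIP"

    port {
      name        = "tiller"
      port        = 44134
      target_port = "tiller"
    }

    selector = {
      app  = "helm"
      name = "tiller"
    }
  }
}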

Part 3: Deploy the Service with Terraform

Method 1: Directly Use the Module

cd $PROJECT_HOME/my_modules/tiller && terraform init
terraform apply \
  -var eks_cluster_name=wonderful-unicorn \
  -var region=us-west-2
Output of Running Module

Method 2: Call the Module from Another Configuration

cd $PROJECT_HOME
# create k8s_addons.tf
cat <<-K8SADDONS > k8s_addons.tf
module "tiller_install" {
  source           = "./my_modules/tiller"
  region           = "us-west-2"
  eks_cluster_name = "wonderful-unicorn"
}
K8SADDONS
terraform init
terraform apply -target=module.tiller_install

Part 4: Testing the Deployed Service

Testing the Tiller Service

helm2 init --client-only   # first run only: sets up local helm2 config and the stable repo
helm2 install stable/spinnaker
export DECK_POD=$(kubectl get pods \
  --namespace default \
  -l "cluster=spin-deck" \
  -o jsonpath="{.items[0].metadata.name}")
export GATE_POD=$(kubectl get pods \
  --namespace default \
  -l "cluster=spin-gate" \
  -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward --namespace default $GATE_POD 8084 &
kubectl port-forward --namespace default $DECK_POD 9000 &
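
With both port-forwards running in the background, the Deck UI should be reachable locally (assuming the chart's default ports):

# Deck (the UI) listens on port 9000 and talks to Gate (the API) on 8084
open http://localhost:9000   # macOS; use xdg-open on Linux
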
Spinnaker Deck UI application

Part 5: Cleanup Resources

Delete Installed Chart

helm2 delete $(helm2 ls | grep spinnaker | cut -f1)

Remove the Tiller Service

# run from the directory where the tiller module was applied
terraform destroy \
  -var eks_cluster_name=wonderful-unicorn \
  -var region=us-west-2

Destroy the EKS Cluster

# Method 1: cluster created with eksctl
eksctl delete cluster --name wonderful-unicorn

# Method 2: cluster created with Terraform
cd $PROJECT_HOME
terraform destroy

Conclusion

This tutorial walked through using Terraform to manage both AWS resources (an EKS cluster) and Kubernetes resources (the Tiller service), packaging the Kubernetes manifests into a reusable Terraform module that can be applied directly or called from another configuration. The same pattern extends to other add-ons you may want to install into a cluster alongside the cloud resources they depend on.
