Create EKS with an Existing VPC

Provision Amazon EKS cluster with Existing VPC using Eksctl

Now that we have existing VPC infrastructure, we can provision Amazon EKS on top of it. This article covers two approaches: handcrafting an eksctl config file by hand, and generating that config automatically with Terraform.

Previous Article

This code created an EKS-ready VPC: private and public subnets in each availability zone, tagged appropriately for EKS.

Tools

Method 1: The Labor Intensive Way

The eksctl command-line tool can create a cluster either from command-line options or from an eksctl config file that defines the infrastructure. By default, the tool provisions both a new VPC and an EKS cluster that uses it, but this is not as flexible. Instead, you can write a configuration that uses an existing VPC, provided you tell eksctl all the private and public subnets EKS should use.

Part 1: Lovingly Handcraft the Config

Below is an example eksctl config file with fictitious subnet ids. You will make some edits before using this.
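A sketch of what such a config might look like, with made-up subnet ids and example values for the name, region, and node group (all of which you would replace with your own):

```yaml
# cluster_config.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-cluster        # example name, change this
  region: us-west-2       # example region, change this

vpc:
  subnets:
    # map each availability zone to the existing subnet id in that zone
    private:
      us-west-2a: { id: subnet-0a1b2c3d4e5f60001 }
      us-west-2b: { id: subnet-0a1b2c3d4e5f60002 }
      us-west-2c: { id: subnet-0a1b2c3d4e5f60003 }
    public:
      us-west-2a: { id: subnet-0a1b2c3d4e5f60004 }
      us-west-2b: { id: subnet-0a1b2c3d4e5f60005 }
      us-west-2c: { id: subnet-0a1b2c3d4e5f60006 }

nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    privateNetworking: true
```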

Some things you'll want to change in the file (cluster_config.yaml) include the cluster name, the region, and of course the subnet ids.

Provision the Cluster

Once finished, assuming you used the same name, you can provision EKS using this command:

eksctl create cluster --config-file ./cluster_config.yaml

Method 2: Terraform Creates the Config

Wouldn't it be cool if we could have a tool that can fetch the subnets created earlier, match them to the corresponding availability zone, and then use these to automatically create a configuration file we can use?!?

Well, we can: the tool is called Terraform.

Part 1.0: Create The Project Structure

In the previous article, we built a vpc module and code that uses the module. You can reuse those files or copy them to create a structure like the following in your project area:

.
├── main.tf
├── provider.tf
├── terraform.tfvars
└── vpc
    ├── locals.tf
    ├── main.tf
    ├── variables.tf
    └── versions.tf

Part 2.0: Create Config Module

Now we’ll create the eksctl_config module. In bash, we can create the module with the following:

mkdir eksctl_config
touch eksctl_config/{data,locals,main,variables}.tf
touch eksctl_config/cluster_config.yaml.tmpl

The final structure will now look like this with the new files emboldened:

.
├── eksctl_config
│   ├── cluster_config.yaml.tmpl
│   ├── data.tf
│   ├── locals.tf
│   ├── main.tf
│   └── variables.tf
├── main.tf
├── provider.tf
└── vpc
    ├── locals.tf
    ├── main.tf
    ├── variables.tf
    └── versions.tf

Part 2.1: Variables

Let’s populate the variables we’ll use:
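A sketch of eksctl_config/variables.tf; the exact names (cluster_name, region, module_path, and the two subnet lists) are my assumptions for wiring the pieces together:

```hcl
# eksctl_config/variables.tf

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
}

variable "region" {
  description = "AWS region where the cluster will live"
  type        = string
}

variable "private_subnets" {
  description = "List of private subnet ids"
  type        = list(string)
}

variable "public_subnets" {
  description = "List of public subnet ids"
  type        = list(string)
}

variable "module_path" {
  description = "Directory where cluster_config.yaml will be written"
  type        = string
}
```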

Part 2.2: Main

The main will have a single resource, local_file, to create the file we want.

This resource creates a file whose content we'll build in a module local variable, cluster_config_values. The filename will be the full path to where we want to save this file.
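A minimal sketch of eksctl_config/main.tf, assuming a module_path variable holds the destination directory:

```hcl
# eksctl_config/main.tf

# write the rendered config; the content is built in locals (Part 2.4)
resource "local_file" "cluster_config" {
  content  = local.cluster_config_values
  filename = "${var.module_path}/cluster_config.yaml"
}
```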

Part 2.3: Data Sources

Given the lists of private and public subnet ids from variables, we need to find the availability zone where each of these lives. We can do that with a data source.

This will allow us to build a map that we can iterate through to build the configuration.
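One way to sketch eksctl_config/data.tf, using the aws_subnet data source to look up each subnet id from the input lists:

```hcl
# eksctl_config/data.tf

# look up each private subnet by id to learn its availability zone
data "aws_subnet" "private" {
  count = length(var.private_subnets)
  id    = var.private_subnets[count.index]
}

# same lookup for the public subnets
data "aws_subnet" "public" {
  count = length(var.public_subnets)
  id    = var.public_subnets[count.index]
}
```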

Part 2.4: Local Variables

We want to create a map of variables cluster_config_vars that we can pass to templatefile(), which will use this along with our template file to render a final result as the string cluster_config_values, the content of the file we’ll create.

We build two maps subnet_private and subnet_public, where the keys are the availability zones we fetched using the data sources specified earlier with values corresponding to the subnet ids.
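A sketch of eksctl_config/locals.tf putting this together (the variable and template names match my earlier assumptions):

```hcl
# eksctl_config/locals.tf

locals {
  # map availability zone => subnet id, built from the data sources
  subnet_private = {
    for subnet in data.aws_subnet.private :
    subnet.availability_zone => subnet.id
  }

  subnet_public = {
    for subnet in data.aws_subnet.public :
    subnet.availability_zone => subnet.id
  }

  # values passed to templatefile()
  cluster_config_vars = {
    cluster_name   = var.cluster_name
    region         = var.region
    subnet_private = local.subnet_private
    subnet_public  = local.subnet_public
  }

  # render the template into the final YAML content
  cluster_config_values = templatefile(
    "${path.module}/cluster_config.yaml.tmpl",
    local.cluster_config_vars
  )
}
```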

Part 2.5: The Template File

The final part of this puzzle is the actual template file that templatefile() will ingest.
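A sketch of what cluster_config.yaml.tmpl might look like. Whitespace handling around template directives is finicky, so the exact indentation may need tweaking for your Terraform version; the single-space indent on public: compensates for the directive behavior described in the note below:

```yaml
# eksctl_config/cluster_config.yaml.tmpl
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: ${cluster_name}
  region: ${region}

vpc:
  subnets:
    private:
%{ for az, subnet in subnet_private ~}
      ${az}: { id: ${subnet} }
%{ endfor ~}
 public:
%{ for az, subnet in subnet_public ~}
      ${az}: { id: ${subnet} }
%{ endfor ~}
```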

Most of the template should be straightforward: each ${variable} is a key from the map passed to templatefile().

To build out the lists of subnets, we use a collection for loop to walk through the key-value pairs representing the availability zones and subnet ids.

NOTE: One thing you might have noticed is that public: is outdented. This is needed because one unfortunate side effect of template for loops is that they alter content outside of their construct. So we have to do this until that bug is fixed, if it ever is.

Part 3: Add Output Variables from VPC Module

In the previous article, we created the vpc module that stands up the VPC configuration. You'll want to reuse or copy that code and add a new file called vpc/output.tf with the following content:
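A sketch of vpc/output.tf, assuming the previous article's module names its subnet resources aws_subnet.private and aws_subnet.public:

```hcl
# vpc/output.tf

output "private_subnets" {
  description = "List of private subnet ids"
  value       = aws_subnet.private[*].id
}

output "public_subnets" {
  description = "List of public subnet ids"
  value       = aws_subnet.public[*].id
}
```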

These outputs are the two lists that we'll pass to the eksctl_config module. The final updated structure will now look like the following with the new file emboldened:

.
├── eksctl_config
│   ├── cluster_config.yaml.tmpl
│   ├── data.tf
│   ├── locals.tf
│   ├── main.tf
│   └── variables.tf
├── main.tf
├── provider.tf
├── terraform.tfvars
└── vpc
    ├── locals.tf
    ├── main.tf
    ├── output.tf
    ├── variables.tf
    └── versions.tf

Part 4: Update Main Config

In the previous article, we had a main.tf that used the vpc module. We'll update it to also use the module we just created.

We pass the outputs from the vpc module to the eksctl_config module. We also pass in the current directory, so this module knows where to write the file.
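A sketch of the updated main.tf; the cluster name and region values are placeholders:

```hcl
# main.tf

module "vpc" {
  source = "./vpc"
  # ... existing vpc module arguments from the previous article ...
}

module "eksctl_config" {
  source       = "./eksctl_config"
  cluster_name = "my-cluster"   # example name
  region       = "us-west-2"    # example region

  # wire the vpc module outputs into the config module
  private_subnets = module.vpc.private_subnets
  public_subnets  = module.vpc.public_subnets

  # write cluster_config.yaml into the current directory
  module_path = path.root
}
```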

Part 5: Create the EKS Cluster

The final result of all of this is a cluster_config.yaml file, created after running terraform apply. From there we can create the EKS cluster with the following command:

eksctl create cluster --config-file ./cluster_config.yaml

Method 2.1: But Wait There’s More

Some might have spotted immediately that the eksctl_config module requires passing in two lists (private and public subnets). Manually specifying these would be cumbersome if the VPC was not created with the same code.

As an alternative, to make this module fully independent, it would be easier to pass in a single value, the vpc_id. The eksctl_config module will then need to discover the subnets and dynamically build the maps.

As long as the subnets are appropriately tagged and you unleash the full power of the Terraform 0.12+ language… [insert maniacal laugh], um, yeah, this is possible.

The main script can be updated to pass the vpc_id instead of the subnets:
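A sketch of the change, assuming the vpc module exports a vpc_id output:

```hcl
# main.tf

module "eksctl_config" {
  source       = "./eksctl_config"
  cluster_name = "my-cluster"   # example name
  region       = "us-west-2"    # example region

  # a single input replaces the two subnet lists
  vpc_id = module.vpc.vpc_id

  module_path = path.root
}
```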

Modify the following files in the eksctl_config module with these updates…

Variables

We remove the variables for the two subnet lists and replace them with a single vpc_id.
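The replacement variable could look like this in eksctl_config/variables.tf:

```hcl
# eksctl_config/variables.tf (replaces private_subnets and public_subnets)

variable "vpc_id" {
  description = "Id of the VPC whose subnets the cluster will use"
  type        = string
}
```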

Data

We’ll have four data sources now:

eksctl_config/data.tf

Two data sources will return the private subnet ids and public subnet ids respectively. With these, we can have two more data sources that act like maps indexed by subnet id.
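A sketch of the four data sources, assuming the subnets carry the standard EKS role tags (kubernetes.io/role/elb for public, kubernetes.io/role/internal-elb for private) set up in the previous article:

```hcl
# eksctl_config/data.tf

# discover private subnet ids in the VPC by their EKS role tag
data "aws_subnet_ids" "private" {
  vpc_id = var.vpc_id

  tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }
}

# discover public subnet ids the same way
data "aws_subnet_ids" "public" {
  vpc_id = var.vpc_id

  tags = {
    "kubernetes.io/role/elb" = "1"
  }
}

# fetch the full subnet objects, indexed by subnet id
data "aws_subnet" "private" {
  for_each = data.aws_subnet_ids.private.ids
  id       = each.value
}

data "aws_subnet" "public" {
  for_each = data.aws_subnet_ids.public.ids
  id       = each.value
}
```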

Locals

We can use the for expression feature of the Terraform 0.12 language to dynamically build a map from the data sources.
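With the for_each-based data sources above (a map of subnet objects indexed by subnet id), the locals can be sketched as:

```hcl
# eksctl_config/locals.tf (updated)

locals {
  # invert the data source map: availability zone => subnet id
  subnet_private = {
    for id, subnet in data.aws_subnet.private :
    subnet.availability_zone => id
  }

  subnet_public = {
    for id, subnet in data.aws_subnet.public :
    subnet.availability_zone => id
  }
}
```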

And voilà, we get the same results with only a vpc_id, which makes it easier to use any tool to build the underlying infrastructure.

As before, you can build and deploy this with:

# create cluster_config.yaml
terraform apply

# provision using cluster_config.yaml
eksctl create cluster --config-file ./cluster_config.yaml

Clean Up

Before deleting a cluster, you should remove any persistent storage by deleting pvc resources, or the underlying volumes will become orphaned and continue to incur costs. It is also good to delete any ingresses or services that use an ELB.

Delete the Cluster

When ready, you can remove EKS with this command:

eksctl delete cluster --config-file ./cluster_config.yaml

Resources

These are some links I used as reference material to create this blog:

Blog Source Code

Terraform

Terraform Bugs

Eksctl

Conclusion

There you have it: whether you use the handcrafted static version or generate the config dynamically with Terraform, you can now quickly provision and de-provision EKS clusters while reusing existing VPC infrastructure.

There were two major take-aways from this: you can hand an existing VPC to eksctl through its config file, and you can have Terraform generate that config for you.

I added a section to showcase some more advanced language features in Terraform 0.12+, where you can build a map with a for expression, similar to dict comprehensions in Python, but more intuitive.

I hope this helps in your Terraform and Kubernetes adventures. Drop me a note if you liked this as well as any suggestions or requests.
