Creating AWS Elastic Container Service for Kubernetes (EKS) the Right Way

IntelliJ IDEA with Terraform Plugin.

Introduction

I already did a small exercise on how to create a managed Kubernetes service infra on the Azure side using Terraform (see my previous blog post “Creating Azure Kubernetes Service (AKS) the Right Way”). Now I wanted to see how the same kind of managed Kubernetes service can be created using AWS Elastic Container Service for Kubernetes (EKS). In this blog post I describe my experiences doing that.

Use an Example

The best way to set up a slightly more complex cloud infra is to use an example from a reliable source, so that you can trust the example as a working reference while you mold it to your own purposes. I used the document “Terraform AWS EKS Introduction” and the related git repo “eks-getting-started” as an example and made some modifications, such as modularizing the solution and using my own Terraform conventions. Having a working example is pretty important if you need to debug your own solution. One good cloud infra best practice is to deploy the working example and your own non-working solution into the same cloud account (in different VPCs etc.) and then compare the resources side by side.

The Solution

The solution is pretty much the same as on the Azure side: create an AWS S3 bucket to store the Terraform state, create the AWS configuration in a modular manner using Terraform, and deploy the infra to AWS incrementally as you write new resource configurations.

1. Create AWS S3 Bucket and DynamoDB Table for Terraform Backend

You don’t need this step for small PoCs, but I like to do things following best practices. The best practice is to store the Terraform state in a place where many developers can access it (i.e. in the cloud, naturally). On the AWS side the natural place for the Terraform backend is S3. You should also create a DynamoDB table as a locking mechanism, so that Terraform can lock the state and prevent several developers from running cloud infra updates concurrently and breaking the coherence of the infra.
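Once the S3 bucket and DynamoDB table exist, the backend is wired into the Terraform configuration. Below is a minimal sketch of such a backend block; the bucket, key, region and table names are just placeholders, not the ones used in my repo.

terraform {
  backend "s3" {
    # The S3 bucket and the DynamoDB table must be created beforehand.
    bucket         = "your-terraform-state-bucket"
    key            = "eks-demo/terraform.tfstate"
    region         = "eu-west-1"
    # The DynamoDB table needs a "LockID" (string) hash key; Terraform uses it for state locking.
    dynamodb_table = "your-terraform-lock-table"
    encrypt        = true
  }
}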

2. Create AWS Infra Code Using Terraform

The next step is to create the AWS cloud infra code using Terraform (see the terraform directory). I usually create a “main” configuration file that defines which modules comprise the environment: file env-def.tf. If you look at that file you can see that it creates the DynamoDB tables for the application, a VPC (probably not necessary, but I followed the original example here), ECR repositories for storing the Docker images, the main EKS infra with all the resources EKS needs (security group, security group rules, IAM role, some policies, and finally the actual EKS cluster itself), and the EKS worker nodes with their related infra (security group, IAM role, policies, a launch configuration for the EC2 instances, and an auto scaling group).
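As an illustration, an env-def.tf style composition could look roughly like the sketch below. The module names, paths and variables are my own assumptions for this example, not the exact ones in the repo.

# Illustrative sketch of a "main" composition that wires the modules together.
module "vpc" {
  source = "../../modules/vpc"
  prefix = var.prefix
  env    = var.env
}

module "ecr" {
  source = "../../modules/ecr"
  prefix = var.prefix
  env    = var.env
}

module "eks" {
  source     = "../../modules/eks"
  prefix     = var.prefix
  env        = var.env
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.subnet_ids
}

module "eks-worker-nodes" {
  source           = "../../modules/eks-worker-nodes"
  prefix           = var.prefix
  env              = var.env
  vpc_id           = module.vpc.vpc_id
  subnet_ids       = module.vpc.subnet_ids
  eks_cluster_name = module.eks.cluster_name
}

The idea of a main composition like this is that each environment can reuse the same module wiring and only the variable values differ.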

3. Deploy Cloud Infra!

As I already explained in the Azure blog post, you don’t have to create the whole cloud infra in one shot before applying it to the cloud provider. I build the cloud infra incrementally, resource by resource: I run “terraform plan”, and if everything looks good I apply the new resources with “terraform apply”. Growing the infra in small increments like this makes it much easier to spot where something went wrong.

4. Smoke Test with EKS

This time I did try to deploy the application (the single-node version, anyway) to the EKS cluster before writing this blog post (see the experiences described in the related Azure AKS blog post). To try the EKS cluster you need the Kubernetes context. If you haven’t installed aws-cli and aws-iam-authenticator, install them now. Then you can query the Kubernetes context and use it with the kubectl tool:

# First use terraform to print the cluster name: 
AWS_PROFILE=YOUR-AWS-PROFILE terraform output -module=env-def.eks
# Then get the Kubernetes cluster context:
AWS_PROFILE=YOUR-AWS-PROFILE aws eks update-kubeconfig --name YOUR-EKS-CLUSTER-NAME
# Check the contexts:
AWS_PROFILE=YOUR-AWS-PROFILE kubectl config get-contexts # => You should find your new EKS context there.
# Check initial setup of the cluster:
AWS_PROFILE=YOUR-AWS-PROFILE kubectl get all --all-namespaces # => prints the system pods...
# Store the config map that is needed in the next command.
# NOTE: Store file outside your git repository.
AWS_PROFILE=YOUR-AWS-PROFILE terraform output -module=env-def.eks-worker-nodes > ../../../tmp/config_map_aws_auth.yml
emacs ../../../tmp/config_map_aws_auth.yml # => Delete the first rows until "apiVersion: v1"
AWS_PROFILE=YOUR-AWS-PROFILE kubectl apply -f ../../../tmp/config_map_aws_auth.yml
# In terminal 2:
while true; do echo "*****************" ; AWS_PROFILE=YOUR-AWS-PROFILE kubectl get all --all-namespaces ; sleep 10; done

Comparing Azure AKS and AWS EKS

Let’s make a short comparison between Azure AKS and AWS EKS based on my experiences, starting with the number of cloud infra configuration files and lines of code:

| Cloud        | Files | LoC |
|--------------|-------|-----|
| Azure        | 24    | 418 |
| AWS          | 23    | 674 |
| AWS (no VPC) | 20    | 589 |

Some Observations Regarding EKS Infra Development

There were a couple of observations that I’d like to emphasize here. The first one is that the AWS EKS infra code is more explicit and transparent, but also more complex, than the equivalent Azure AKS infra code. Especially the worker node configuration is a lot more complex on the AWS side (and almost non-existent on the Azure side). Since the solution is more complex there is also more room for errors (as I noticed: you have to be careful with the tags, the security group references between the EKS control plane and the worker nodes, etc.).
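As an example of the tagging pitfall: EKS and Kubernetes use the kubernetes.io/cluster/&lt;cluster-name&gt; tag to associate resources such as worker node security groups and subnets with a specific cluster, and if a tag is missing or wrong, things tend to fail in ways that are hard to debug. A minimal sketch follows; the resource and variable names here are illustrative assumptions, not copied from the repo.

resource "aws_security_group" "eks_worker_node_sg" {
  name   = "${var.prefix}-${var.env}-eks-worker-node-sg"
  vpc_id = var.vpc_id

  tags = {
    Name = "${var.prefix}-${var.env}-eks-worker-node-sg"
    # The cluster tag tells EKS/Kubernetes which cluster owns this security group.
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "owned"
  }
}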

Conclusions

Creating a Kubernetes cluster as a managed service is pretty straightforward using Terraform, both on the Azure and the AWS side. The AKS and EKS solutions provide somewhat different abstractions of the Kubernetes cluster, but both seem to be solid environments for running your Kubernetes applications.

I’m a software architect and developer, currently implementing systems on AWS / GCP / Azure / Docker / Kubernetes using Java, Python, Go and Clojure.