[Image: The workloads in the GKE cluster.]

Introduction

My earlier AWS EKS and Azure AKS blog posts are here:

This GCP GKE Kubernetes exercise can be found in the simpleserver-kube directory of my gcp repo. I might later continue the exercise by creating a Helm chart for the Clojure simple server and deploying it to this GKE cluster.

Initialization Scripts

In the init directory you can find a few scripts that I used to automate the admin project infrastructure. I gathered all the relevant information into the env-vars-template.sh file — copy this template, e.g. to ~/.gcp/my-kube-env-vars.sh, and provide values for the environment variables. Not all of the environment variables are strictly needed: some exist for administrative purposes such as labeling resources (optional), while others are required by GCP (billing information) or for creating the resources (the admin/infra project ids, etc.).
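As a rough illustration, such a template might look like the sketch below. All variable names here are hypothetical; check env-vars-template.sh in the repo for the real ones.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an env-vars template; variable names are assumptions,
# not the actual names used in the repo.
export GCP_PROJECT_PREFIX="projx"               # prefix used in resource names
export GCP_ADMIN_PROJECT_ID="projx-admin"       # admin project id
export GCP_INFRA_PROJECT_ID="projx-infra"       # infra project id
export GCP_BILLING_ACCOUNT_ID="000000-000000-000000"  # required by GCP billing
export GCP_REGION="europe-north1"               # region for the resources
export GCP_OWNER_LABEL="kari"                   # optional, for labeling resources
```

You would then `source ~/.gcp/my-kube-env-vars.sh` before running the init scripts.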

Then you are ready to create the admin project and related resources: create-admin-project.sh. This script uses the environment variables to create the admin project, then sets certain configuration values and creates a gcloud configuration for the admin project. Finally, it links the billing account to the admin project and enables container.googleapis.com so that the Service Account that belongs to this admin project and is used by Terraform can later create the GKE cluster.
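The core of such a script could look roughly like this — a sketch only, assuming the variable names from the env-vars template; the actual script in the repo is the authoritative version:

```shell
# Hypothetical sketch of create-admin-project.sh.
gcloud projects create "$GCP_ADMIN_PROJECT_ID"
gcloud config configurations create admin-config
gcloud config set project "$GCP_ADMIN_PROJECT_ID"
# Link billing and enable the GKE API for the admin project.
gcloud beta billing projects link "$GCP_ADMIN_PROJECT_ID" \
  --billing-account "$GCP_BILLING_ACCOUNT_ID"
gcloud services enable container.googleapis.com
```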

We are going to store the Terraform state in a GCP Cloud Storage bucket, so we need to create one: create-admin-bucket.sh.
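A minimal sketch of what the bucket creation might look like (the bucket name is an assumption; versioning is a good idea for a state bucket so you can recover earlier state versions):

```shell
# Hypothetical sketch of create-admin-bucket.sh.
gsutil mb -p "$GCP_ADMIN_PROJECT_ID" -l "$GCP_REGION" \
  "gs://${GCP_ADMIN_PROJECT_ID}-tf-state"
# Enable object versioning so old Terraform state versions are recoverable.
gsutil versioning set on "gs://${GCP_ADMIN_PROJECT_ID}-tf-state"
```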

Finally, we are ready to create the last admin project resource: the Service Account that Terraform uses to create the resources on the infra project side: create-service-account.sh. The script also binds certain roles to that Service Account, e.g. the role needed to create the infra project.
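Sketched out, the Service Account setup might look like this. Note that the project-creation role must be granted at the organization (or folder) level; the organization id variable and the key file path below are assumptions for illustration:

```shell
# Hypothetical sketch of create-service-account.sh.
gcloud iam service-accounts create terraform \
  --project "$GCP_ADMIN_PROJECT_ID" --display-name "Terraform"
# The project-creation role is granted at the organization level.
gcloud organizations add-iam-policy-binding "$GCP_ORGANIZATION_ID" \
  --member "serviceAccount:terraform@${GCP_ADMIN_PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/resourcemanager.projectCreator
# Export a key file that Terraform can authenticate with.
gcloud iam service-accounts keys create ~/.gcp/terraform-sa-key.json \
  --iam-account "terraform@${GCP_ADMIN_PROJECT_ID}.iam.gserviceaccount.com"
```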

The last thing to create is the gcloud configuration for the infra project. Since we already know the infra project id we are going to use, let's create the configuration now: create-infra-configuration.sh. That way it is ready when we start creating the infra resources, and we can use this gcloud configuration to examine the resources with the gcloud CLI.
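Creating and switching between gcloud configurations is straightforward; a sketch with assumed configuration names:

```shell
# Hypothetical sketch of create-infra-configuration.sh.
gcloud config configurations create infra-config
gcloud config set project "$GCP_INFRA_PROJECT_ID"
gcloud config set compute/region "$GCP_REGION"
# Later, switch between the admin and infra configurations as needed:
gcloud config configurations activate admin-config
```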

Terraform Solution

All modules contain a setup.tf file that includes the Terraform google provider and the state configuration. All modules also contain main.tf, variables.tf, and outputs.tf files, holding the main resource configurations, the variables used, and the outputs, respectively.
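A minimal setup.tf along these lines might look as follows — the bucket name and variable names are placeholders, not the ones used in the repo:

```hcl
# Sketch of a setup.tf: GCS backend for state plus the google provider.
terraform {
  backend "gcs" {
    bucket = "YOUR-ADMIN-PROJECT-tf-state"  # the state bucket created earlier
    prefix = "infra"
  }
}

provider "google" {
  region = var.REGION
}
```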

The project and vpc modules just create the infra project and the VPC used in this project, and are more or less trivial. Let's spend some time with the kube module instead.

The GKE Terraform configuration is ridiculously small, just 60 lines. First, we create the cluster itself and then the node pool used by the cluster. The simplicity and ease of the GKE Terraform solution was a pleasant surprise. And there were more surprises ahead: it took only some 60 seconds for Terraform to create the GKE cluster. I remember that in our previous project creating AWS EKS using Pulumi took quite a while.
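The two resources in question could be sketched roughly as below; the attribute values (machine type, node count, etc.) are assumptions for illustration, not the repo's actual configuration:

```hcl
# Sketch of the kube module: cluster first, then a separate node pool.
resource "google_container_cluster" "kube-cluster" {
  name                     = "${local.res_prefix}-${local.module_name}-cluster"
  location                 = var.REGION
  # Drop the default pool; nodes are managed by the dedicated pool below.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "kube-nodes" {
  name       = "${local.res_prefix}-${local.module_name}-node-pool"
  location   = var.REGION
  cluster    = google_container_cluster.kube-cluster.name
  node_count = 1

  node_config {
    machine_type = "n1-standard-1"
  }
}
```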

Connecting and Testing the GKE Cluster

First, fetch the cluster credentials into your kubeconfig:

gcloud container clusters get-credentials YOUR-CLUSTER-NAME --region YOUR-REGION

Then you can use the kubectl CLI to examine the GKE cluster and its resources. Another nice tool is Lens.

Just to make sure the cluster works properly, I deployed a dummy service to it:

kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl get pods --all-namespaces
kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
kubectl get service hello-server

You’ll get the external IP for the service and you can curl it:

λ> curl http://EXTERNAL-IP
Hello, world!
Version: 1.0.0

Resource Naming

locals {
  workspace_name = terraform.workspace
  module_name    = "kube"
  res_prefix     = "${var.PREFIX}-${local.workspace_name}"
  ...
}

resource "google_container_cluster" "kube-cluster" {
  name = "${local.res_prefix}-${local.module_name}-cluster"
  ...
}
So, if the prefix is the project name (e.g. projx) and the Terraform workspace is e.g. dev, all resources get the resource prefix projx-dev; the GKE cluster name, for example, will be projx-dev-kube-cluster. Using this pattern you can have many environments in the same GCP account, e.g. dev, qa, and test, and each environment has a dedicated Terraform state. Just to make it explicit: you should always keep your production environment in a dedicated production account.
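In practice the environments map to Terraform workspaces; a sketch of the workflow, with the workspace names as examples:

```shell
# Each environment is a Terraform workspace with its own state.
terraform workspace new dev       # create the dev environment's workspace
terraform workspace select dev    # switch to it
terraform plan                    # resources will be prefixed projx-dev-...
```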

Patterns for Creating Environments

Conclusions

The writer works at Metosin, using Clojure in cloud projects. If you are interested in starting a Clojure project in Finland or in getting Clojure training in Finland, you can contact me by sending an email to my Metosin email address or via LinkedIn.

Written by Kari Marttila

I'm a software architect and developer, currently implementing systems on AWS / GCP / Azure / Docker / Kubernetes using Java, Python, Go, and Clojure.
