Automated Deployments with Google Kubernetes Engine (GKE) and IaC with Terraform
Google Kubernetes Engine (GKE) is a powerful cluster manager and orchestration system for running Docker containers, while Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. In this guide we will see how to create a Kubernetes Cluster on Google Cloud via Terraform (Infrastructure as Code — IaC) and configure automated deployments (Continuous Integration/Continuous Delivery — CI/CD) through the GCP Console, connecting a GitHub repository.
Automated deployment for GKE is powered by Cloud Build, an industry-leading cloud-native CI/CD platform that allows pipelines to scale up and down without having to pre-provision servers or pay in advance for additional capacity. Cloud Build also provides pipelines with baked-in security and compliance enhancements to meet specific workflow and policy needs. The pipelines run automatically whenever changes are made to the source code, allowing you to deploy new features and fixes quickly and reliably.
Architectural Diagram
Google Cloud Free Tier
For this demo, I'll use the Google Cloud Free Tier. You can register for the Google Cloud Platform Free Tier and get $300 of free credit.
GitHub Repository
We will set up the GitHub repository with two folders:
- terraform: this folder contains the Terraform file (.tf) that will describe the infrastructure (the GKE cluster).
- gke-deployment: this folder contains a hello-world Node.js application and the Dockerfile with the instructions on how to build the Docker image.
Example Repository: kubernetes-terraform
You can fork the repository if you want to replicate all the steps.
Step 1: Create a GKE Cluster via Terraform
Let’s first have a look at the Terraform file. It simply creates a Kubernetes cluster in the region us-central1 with 3 nodes (1 for each zone of the region). This can be highly customized based on business needs; refer to the Terraform documentation for more details.
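A minimal sketch of what such a Terraform file can look like is shown below. The resource name, project ID, and machine type here are illustrative assumptions, not the repository's exact values:

```hcl
# Hypothetical sketch of terraform/main.tf -- see the repository
# for the actual file.
provider "google" {
  project = "my-gcp-project" # replace with your project ID
  region  = "us-central1"
}

resource "google_container_cluster" "gke_cluster" {
  name     = "my-gke-cluster"
  location = "us-central1" # regional cluster: nodes spread across zones

  # initial_node_count is per zone, so 1 here yields 3 nodes
  # across the region's three zones
  initial_node_count = 1

  node_config {
    machine_type = "e2-medium"
  }
}
```

Setting `location` to a region (rather than a single zone) is what makes the cluster regional, with one node per zone.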
- First, go to the Google Cloud Platform Console and open the Cloud Shell (a GCP-provisioned VM with all the tools we need).
- Clone the repository and cd to the terraform directory.
git clone https://github.com/cassanellicarlo/kubernetes-terraform.git
cd kubernetes-terraform/terraform
- Initialize the Terraform workspace. This will download and initialize the Google provider plugin.
terraform init
- Provision the GKE cluster. Terraform will show the planned changes and ask you to confirm before performing these actions.
terraform apply
Wait a few minutes…
Congratulations! The GKE Cluster is created. You can see it under Kubernetes Engine > Clusters.
Step 2: Configure automated deployments
Let’s first have a look at our application. It is a simple Node.js Express application that listens on port 8080 and responds with “Hello World”.
And the Dockerfile, which contains all the instructions to assemble the Docker image.
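A typical Dockerfile for a Node.js app of this kind looks roughly like the following sketch. The base image and the entry-point file name (index.js) are assumptions; check the repository's Dockerfile for the actual values:

```dockerfile
# Hypothetical sketch of gke-deployment/Dockerfile
FROM node:lts-alpine
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install

# Copy the application source
COPY . .

# The server listens on 8080
EXPOSE 8080
CMD ["node", "index.js"]
```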
- In the GCP Console, go to Kubernetes Engine > Workloads. Click Deploy.
- Select “New container image”. As Repository Provider, select GitHub. It will ask you to authenticate and enable the APIs.
- Select the repository.
- As the Dockerfile path, enter gke-deployment/, the folder that contains the dockerized application.
- Click “Done”.
- In the second step, “Configuration”, you can write the application name, such as “node-hello-world”.
- Click Deploy.
- Wait for it to complete. It will open the Deployment Details.
- Click “Set up an automated pipeline for this workload”.
- As before, select the Repository Provider and the Repository.
- Under “Build Configurations”, enter gke-deployment/ as the Dockerfile directory.
- Under “Automated deployment configuration”, select gke-deployment/ as YAML location.
- Click “View Google Recommended YAML”. Copy the code and create or update the app.yaml file in the gke-deployment directory. If you already have your own YAML file, compare it with the recommended one and update yours accordingly.
Be sure to substitute the image field with the URL of your Docker image. If you copy the Google-recommended YAML, everything is already set up correctly. Then commit and push the changes to your repository.
Let’s have a look at the app.yaml file.
This configures the deployment “node-hello-world” with 3 replicas, with the container in each pod based on the Docker image stored in Google Container Registry. It also sets up a Horizontal Pod Autoscaler that scales the pods up to a maximum of 5, based on a target average CPU utilization.
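Such an app.yaml corresponds roughly to the sketch below. The project ID in the image URL and the 80% CPU target are illustrative assumptions; use the Google-recommended YAML as your source of truth:

```yaml
# Hypothetical sketch of gke-deployment/app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: node-hello-world
  template:
    metadata:
      labels:
        app: node-hello-world
    spec:
      containers:
        - name: node-hello-world
          image: gcr.io/PROJECT_ID/node-hello-world:latest  # substitute your image URL
          ports:
            - containerPort: 8080
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: node-hello-world-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: node-hello-world
  minReplicas: 3
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80  # illustrative target
```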
- Click “Set Up”.
This will create a Cloud Build Trigger. Whenever you push changes to the repository, Cloud Build performs the following steps:
- Step 0: Build the Docker image.
- Step 1: Push the Docker image to Google Container Registry.
- Step 2: Prepare the deploy with the configuration files.
- Step 3: Save configs to Cloud Storage.
- Step 4: Apply the deployment. This actually runs kubectl apply.
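Conceptually, the steps above correspond to a cloudbuild.yaml along these lines. This is a hand-written sketch, not the pipeline Cloud Build actually generates; the image name, cluster name, and location are assumptions:

```yaml
# Hypothetical sketch of the generated Cloud Build pipeline
steps:
  # Step 0: build the Docker image from the app folder
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/node-hello-world:$SHORT_SHA', 'gke-deployment']
  # Step 1: push the image to Google Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/node-hello-world:$SHORT_SHA']
  # Steps 2-4: prepare the manifests with the new image tag
  # and apply them to the GKE cluster
  - name: 'gcr.io/cloud-builders/gke-deploy'
    args:
      - 'run'
      - '--filename=gke-deployment/app.yaml'
      - '--image=gcr.io/$PROJECT_ID/node-hello-world:$SHORT_SHA'
      - '--cluster=my-gke-cluster'
      - '--location=us-central1'
```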
You can see all the Builds History in Cloud Build > History.
Step 3: Expose the deployment
To let others access your deployment, expose it by creating a Service.
- In the “Deployment Details”, click “Expose”.
- As the target port, enter “8080”, because the Node.js server listens on that port. Users will reach the Load Balancer on port 80. Leave TCP as the protocol.
- Click “Expose”.
This will create an external network Load Balancer. Copy its IP address and verify that everything is working.
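Under the hood, the Expose flow creates a Service of type LoadBalancer roughly equivalent to this sketch (the Service name and label selector are illustrative):

```yaml
# Hypothetical sketch of the Service the Expose flow creates
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-service
spec:
  type: LoadBalancer    # provisions an external network load balancer
  selector:
    app: node-hello-world
  ports:
    - protocol: TCP
      port: 80          # port users reach on the load balancer
      targetPort: 8080  # port the Node.js container listens on
```

Once the external IP is assigned, opening http://EXTERNAL_IP/ in a browser (or with curl) should return “Hello World”.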
Now you can update the application code, push the changes to the GitHub repo, and watch the automated deployment in action again!
Benefits of using Automated Deployments
(From Google Cloud Blog)
- Recommended Kubernetes configuration: Automated deployment suggests the Kubernetes YAML to be used to deploy your application. You no longer have to fine-tune the configuration by hand.
- Hassle-free continuous delivery setup: Configure all the steps required for an automated deployment pipeline — a connection to your source code repository, the conditions under which to trigger the pipeline, and the steps to build and deploy your containerized application — with a couple of clicks in a single flow.
- Reduced CI/CD maintenance: Because continuous delivery pipelines run in Cloud Build, you don’t have to spend time installing and maintaining your own CI/CD system.
- End-to-end traceability: Workloads deployed using automated deployment can be linked to the pipeline and source code commit that created them. Using Binary Authorization, you can create secure software supply-chain policies that only allow workloads deployed using continuous delivery pipelines.
- “Shift left” with preview deployments: Quickly test whether your application is working as intended before merging code changes, to ensure issues are identified as early as possible in the development process.
Thanks!
I hope you found this topic interesting, and if you have any questions, suggestions, or feedback, please let me know!