Terraform + Helm Charts = Joy

March 22, 2023  •  6 min read
DEPRECATED UPDATE: This post was the primer for the way I currently build. Originally, while testing everything out, I put it all in one repo. After using this in production for a couple of years, I now have a pattern that keeps everything in check at scale. Refer to this post when you want to see where the current stack came from.

devops.miami/my-stack/

This is a write-up to help folks get familiar with some new tools for easy deployment of services to Kubernetes clusters.

I have an account on Google's Cloud Platform which runs a Kubernetes cluster serving this Ghost blog service created via a Helm Chart. The infrastructure is maintained with Terraform which uses modules to provision components across different IaaS providers like AWS, GCE, and Azure.

Just a side note, the colors in the GCE console images are inverted because I enjoy dark themes. #darkreader

Why this Method?

Things to consider.

  • IaaS agnostic - Being able to tell your provider you're taking the million dollar op to their competitor, and that all it takes is flipping a switch, is really powerful.
  • Platform agnostic - Works the same across all platforms. The team doesn't fall apart at the seams because one member uses a Mac and another Linux.
  • Infra as code - Keeping track of things in version control makes life a bit easier. Infrastructure as code is the new norm, enabled by software like Terraform and Ansible stored in git repos.
  • Versioning - Infra as code gives rise to versioning which makes things a little more manageable over time.
  • Stateful - The tools selected are aware of the state of the infrastructure.
  • Automation - I build CI/CD pipelines and being able to trigger automation from git repos is a great way to keep things moving in a consistent fashion. Now apply that same logic to Terraform and suddenly updating your files and committing them to master can literally restructure the whole environment.

Prereqs - Setup Tools and Access

We are going to need a couple of tools and a Google Cloud account. Create the account here and follow the links below to install the required packages.  

Installation varies on the OS you are using; follow the instructions unique to your OS.

Auth Token for GCE

After installing the CLI tool you must create a token for Terraform to authenticate. This is beyond the scope of this tutorial; reference this page to create the access token required.

export GOOGLE_APPLICATION_CREDENTIALS=./token.json
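With that variable exported, the Google provider picks up the credentials on its own, so nothing secret has to live in the Terraform files. As a rough sketch (illustrative only; the demo repo's actual provider block and variable wiring may differ):

provider "google" {
  # Credentials come from GOOGLE_APPLICATION_CREDENTIALS, so no key material
  # is hard-coded here.
  project = "devops-miami-demo"   # your GCP project ID
  region  = "us-east1"
}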

Infra - Terraform

With two files we can provision all the infrastructure we need to serve up the blog. Terraform can be configured many different ways but this is a basic setup for demo only; HTTPS and scaling will be covered in other posts.

This setup has two files: one defines the infrastructure, a Kubernetes cluster with two nodes in different availability zones; the other holds the variables, which are kept in a separate file to make it easy to store them securely with something like Vault or Bitwarden.

Phase 1

The first phase is a base Ghost service served over port 80. The deployment will have two pods running on a two-node cluster: one pod serves the Ghost blog, linked to a load balancer with an external IP address, and the other runs the MariaDB database. This verifies you can talk to GCP and that the tools (helm, terraform, gcloud, kubectl) are installed correctly, and it is the quickest route to a result.

Phase 1 - unencrypted traffic over port 80

Grab the Code

Clone the demo repo and change directory into its root.

git clone https://github.com/iamleet/terraformHelmDemo && cd terraformHelmDemo

Terraform Files

  • kubeCluster.tf - Used by Terraform to provision a Kubernetes cluster on GCE.
  • example-secrets.tf - Secrets used by Terraform during kubeCluster.tf provisioning.
  • terraform.tfstate - Keeps the state of the infrastructure after provisioning assets.
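For orientation, here is a stripped-down sketch of what a cluster definition like kubeCluster.tf can look like. It is illustrative only; the repo's actual file (two nodes across zones, basic auth from the secrets file) is the source of truth, and argument names vary between provider versions.

# Illustrative sketch, not the repo's exact contents.
resource "google_container_cluster" "blog" {
  name               = var.cluster_name
  location           = var.zone
  initial_node_count = var.node_count

  node_config {
    machine_type = "n1-standard-1"
    tags         = ["blog", "demo", "helm", "terraform"]
  }
}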

Setup Terraform Secrets

Rename example-secrets.tf to secrets.tf (it will be ignored by version control), then modify it to your needs. Below is a chart of the fields that require modification before you move forward.

option              default                       requires update
Cluster Name        devopscluster
Node count          1
Project name        devops-miami-demo             true
Cluster username    root
Cluster password    REPLACEME                     true
Availability zone   us-east1-c
Domain label        devopsmiami
Cluster Tags        blog, demo, helm, terraform
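The file itself is just a handful of Terraform variable blocks. A rough sketch using the defaults from the chart above (the variable names here are illustrative and may not match the repo exactly):

variable "cluster_name"     { default = "devopscluster" }
variable "node_count"       { default = 1 }
variable "project_name"     { default = "devops-miami-demo" }   # set to your own project
variable "cluster_username" { default = "root" }
variable "cluster_password" { default = "REPLACEME" }           # change this
variable "zone"             { default = "us-east1-c" }
variable "domain_label"     { default = "devopsmiami" }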

Fresh Projects

New projects which have never used the Kubernetes Engine before need to visit the console page to initialize the service for the first time.

Launch the Cluster

Ready to launch!
It is kind of mind-warping how little is required to get a cluster going. The majority of the work up to this point has been installing the tools and setting up access to Google.

Initialize Terraform; this will load the required module(s) and provider plugins for executing your setup.
terraform init

Terraform apply will display the potential changes, ask you for confirmation, and then provision the changes.
terraform apply

Terraform creates a terraform.tfstate file on first run which keeps track of all the assets provisioned. Each time you run apply with modifications it will figure out what needs to be done to match the update based on the current state.

Running changes generates a state lock so no other user or process can run changes at the same time. This is a great setup if you are using Jenkins and VCS hooks for automation.

Log into Kube

Log into the cluster you just configured by visiting the console page on GCP in the Kubernetes section, clicking Connect on the newly provisioned cluster, then copying and pasting the provided command into your terminal.

Try running the command below; it will display cluster information if you are correctly connected.
kubectl cluster-info

kubectl cluster-info results

Helm

Now that we have a working cluster we can communicate with, we can move on to adding a service. I like Helm better than something like Ansible because Tiller actively manages the releases it deploys.

  • example-values.yaml - this file contains the configurable values and secrets for the Helm Chart.
  • create-helm-service-account.yaml - this file creates a service account for Helm on the Kubernetes cluster.

Helm Setup

In order to get Helm and Tiller running correctly we need to configure the Kubernetes cluster with the correct namespace, roles, and permissions.

Create service account for Helm

kubectl apply -f create-helm-service-account.yaml
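For reference, this kind of manifest usually pairs a ServiceAccount with a cluster-admin binding for Tiller. A hedged sketch of what create-helm-service-account.yaml can contain (the repo's actual manifest may differ):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: helm                  # matches --service-account helm below
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin         # broad on purpose; acceptable for a throwaway demo cluster
subjects:
  - kind: ServiceAccount
    name: helm
    namespace: kube-system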

Initialize Helm

helm init --service-account helm --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'

Tiller takes a few moments to provision; you can check the status by running the following:

helm list

Create an External IP

We need a public-facing IP; we can reserve one and set it as ghostLoadBalancerIP in the following section. Run the following command, then navigate in the console to VPC network -> External IP addresses.

gcloud compute addresses create ghost-public-ip

External IP address - GCE

On this page you should see an entry labeled ghost-public-ip; copy the external address for the next step.
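If you would rather stay in the terminal, gcloud can print the same information:

gcloud compute addresses list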

Update the Secrets

Rename example-values.yaml to secrets.yaml, then change all the values labeled required in the chart below. Seriously, it doesn't work if you don't set all of the values correctly.

option                      default            requires update     notes
ghostLoadBalancerIP         no default         required            must match lb external IP
ghostUsername               user@example.com
ghostPassword               no default         password required
ghostEmail                  user@example.com                       your login
ghostDatabasePassword       no default         password required   must match mariadb.user.password & mariadb.rootUser.password
mariadb.user.password       no default set     required            must match ghostDatabasePassword
mariadb.rootUser.password   no default set     required            must match ghostDatabasePassword
resources.cpu               300m               required            set to 200m for clusters under 2 nodes
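Put together, a hedged sketch of what secrets.yaml ends up looking like. The exact key layout comes from the stable/ghost chart of that era, so treat these names as approximate and defer to example-values.yaml in the repo:

ghostLoadBalancerIP: 203.0.113.10      # the ghost-public-ip address you reserved
ghostUsername: user@example.com
ghostEmail: user@example.com           # your login
ghostPassword: CHANGEME
ghostDatabasePassword: CHANGEME-DB     # must match both mariadb passwords below
mariadb:
  user:
    password: CHANGEME-DB
  rootUser:
    password: CHANGEME-DB
resources:
  cpu: 300m                            # set to 200m for clusters under 2 nodes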

Install Ghost via Helm Chart

Now, time to run the final step before seeing an actual blog pop up.

helm install --name demo-blog -f secrets.yaml stable/ghost

helm install results

You can click the Blog URL link in the output and see what you created in a few simple steps.

If all went well you should be greeted with the default Ghost landing page.

Clean Up

Once you are done exploring you can destroy everything with two commands.

Destroy Helm deployment.
helm del --purge demo-blog

Destroy Terraform assets.
terraform destroy

Final Thoughts

While this is far from production ready, it is a great start and test bed for deployment methods.

Terraform makes it easy to provision the infrastructure while Helm manages and deploys your services.
