In my previous post I wrote about deploying VMware Harbor with Terraform, Ansible and GitLab CI. This time I thought I'd publish a writeup about building a Kubernetes cluster on AWS using Terraform, Ansible and GitLab CI. The cluster consists of a Master and three Worker nodes running on the AWS cloud platform. I used an AWS S3 backend with a DynamoDB table to store and maintain the Terraform state. Since I didn't use dedicated runners for the GitLab automation, the cluster is deployed to a public subnet. For a production deployment with dedicated GitLab runners, this can be changed so that the cluster is deployed to a private subnet.
Here is my GitLab repository for this project. I used a couple of Ansible Playbooks to install the dependencies, configure the Kube Master and Worker nodes, and connect the deployed Workers to the Master. All the playbooks are stored in the repository itself.
Read More: How To Configure Terraform AWS Backend With S3 And DynamoDB Table
The GitLab pipeline uses a couple of automated and manual stages. The automated stages build the cluster, install the dependencies and configure Kubernetes. I also added a manual job to destroy the cluster whenever needed, which was quite useful for my testing, and another job that destroys the entire cluster if the pipeline fails, so I can start again from the beginning.

Here is my “.gitlab-ci.yml” file with that configuration.
image:
  name: arunalakmal/tc-terraform-ansible-aws:latest
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'

before_script:
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo $IRONMANSSH | base64 -d > ~/.ssh/ironman
  - chmod 700 ~/.ssh/ironman
  - echo $IRONMANPUBSSH | base64 -d > ~/.ssh/ironman.pub
  - chmod 700 ~/.ssh/ironman.pub
  - eval $(ssh-agent -s)
  - ssh-add ~/.ssh/ironman
  - rm -rf .terraform
  - terraform --version

stages:
  - cluster build
  - kubedeploy
  - destroy

cluster build:
  stage: cluster build
  script:
    - terraform init --backend-config="access_key=$AWS_ACCESS_KEY_ID" --backend-config="secret_key=$AWS_SECRET_ACCESS_KEY" --backend-config="dynamodb_table=$DYNAMODB_TABLE" --backend-config="bucket=$BUCKET"
    - terraform validate
    - terraform apply -auto-approve
    - mv kube_hosts kube_hosts_data
  artifacts:
    paths:
      - kube_hosts_data

kubedeploy:
  stage: kubedeploy
  only:
    - master
  script:
    - export ANSIBLE_HOST_KEY_CHECKING=False
    - ansible-playbook -i kube_hosts_data ./kube_playbooks/kube_dependencies.yml
    - ansible-playbook -i kube_hosts_data ./kube_playbooks/kube_master.yml
    - ansible-playbook -i kube_hosts_data ./kube_playbooks/kube_workers_connect.yml
  dependencies:
    - cluster build

terraform-destroy_on_failure:
  stage: destroy
  script:
    - terraform init --backend-config="access_key=$AWS_ACCESS_KEY_ID" --backend-config="secret_key=$AWS_SECRET_ACCESS_KEY" --backend-config="dynamodb_table=$DYNAMODB_TABLE" --backend-config="bucket=$BUCKET"
    - terraform destroy -auto-approve
  dependencies: []
  when: on_failure

terraform-destroy:
  stage: destroy
  script:
    - terraform init --backend-config="access_key=$AWS_ACCESS_KEY_ID" --backend-config="secret_key=$AWS_SECRET_ACCESS_KEY" --backend-config="dynamodb_table=$DYNAMODB_TABLE" --backend-config="bucket=$BUCKET"
    - terraform destroy -auto-approve
  dependencies: []
  when: manual
A few variables were stored as GitLab CI/CD variables to be used as environment variables when the jobs run: the AWS account access keys, the key pair for the AWS resources, and the S3 bucket and DynamoDB table names for the backend configuration.

How To Trigger A GitLab Pipeline
A commit to master or any other branch will run the pipeline, but in my pipeline the Kubernetes deployment only applies to the master branch; commits to other branches only deploy the cluster infrastructure. That was quite useful for me while working on the cluster deployment.

Otherwise, the pipeline can be run manually. To do that, navigate to CI/CD -> Pipelines.

The previous run history will be displayed along with your triggers, and the pipeline can be triggered from here.

In the next step, the branch can be specified and additional environment variables can be passed to the pipeline. Run the pipeline to start the deployment.

If everything goes well, all the jobs will succeed.

Terraform Init With AWS Backend
One thing I’d like to highlight in this post: I used an AWS S3 bucket and a DynamoDB table as the Terraform backend to maintain the state of the environment. I initialized the Terraform configuration in my pipeline as shown below.
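The init command itself is the one you can see in the cluster build and destroy jobs above. On the Terraform side this relies on a partial backend configuration, roughly like the sketch below; the key and region values here are placeholders rather than the exact values from my repository.

# Partial S3 backend configuration (sketch). The bucket, DynamoDB table and
# credentials are supplied at init time via --backend-config in the pipeline;
# the key and region below are placeholder values.
terraform {
  backend "s3" {
    key     = "kubernetes-cluster/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}

With a partial configuration like this, terraform init fills in the bucket, DynamoDB table and access keys from the --backend-config flags at run time, which is exactly what the cluster build and both destroy jobs do.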

Maintaining the remote state is what made it possible to run the destroy jobs on failure or through on-demand manual triggers.
Kubernetes Deployment Ansible Playbooks
As I said, I used a couple of Ansible Playbooks to perform the Kubernetes node configuration.
To install the dependencies on the nodes, I created this Ansible Playbook.
- hosts: all
  become: yes
  tasks:
    - name: install Docker
      yum:
        name: docker
        state: present
        update_cache: true
    - name: start Docker
      service:
        name: docker
        state: started
    # - name: disable SELinux
    #   command: setenforce 0
    # - name: disable SELinux on reboot
    #   selinux:
    #     state: disabled
    - name: ensure net.bridge.bridge-nf-call-ip6tables is set to 1
      sysctl:
        name: net.bridge.bridge-nf-call-ip6tables
        value: 1
        state: present
    - name: ensure net.bridge.bridge-nf-call-iptables is set to 1
      sysctl:
        name: net.bridge.bridge-nf-call-iptables
        value: 1
        state: present
    - name: add Kubernetes' YUM repository
      yum_repository:
        name: Kubernetes
        description: Kubernetes YUM repository
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
        gpgcheck: yes
    - name: install kubelet
      yum:
        name: kubelet-1.14.0
        state: present
        update_cache: true
    - name: install kubeadm
      yum:
        name: kubeadm-1.14.0
        state: present
    - name: start kubelet
      service:
        name: kubelet
        enabled: yes
        state: started

- hosts: master
  become: yes
  tasks:
    - name: install kubectl
      yum:
        name: kubectl-1.14.0
        state: present
        allow_downgrade: yes
To configure the Kubernetes Master, I created this Ansible Playbook.
- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --ignore-preflight-errors=all --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt
    - name: create .kube directory
      become: yes
      become_user: ec2-user
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755
    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ec2-user/.kube/config
        remote_src: yes
        owner: ec2-user
    - name: install Pod network
      become: yes
      become_user: ec2-user
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt
To configure the Kubernetes Worker nodes, the same Ansible Playbook generates the cluster join command on the Kubernetes Master, saves it to a local file, copies it to the /tmp folder of the worker nodes and executes it there.
- hosts: master
  become: yes
  gather_facts: false
  tasks:
    - name: Generate join command
      command: kubeadm token create --print-join-command
      register: join_command
    - name: Copy join command to local file
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"

- hosts: kubeworkers
  become: yes
  handlers:
    - name: docker status
      service: name=docker state=started
  tasks:
    - name: Copy the join command to server location
      copy: src=join-command dest=/tmp/join-command.sh mode=0777
    - name: Join the node to cluster
      command: sh /tmp/join-command.sh
Since these nodes are deployed by Terraform at the “cluster build” stage, the Ansible inventory is generated dynamically with their public IP addresses. Again, this public subnet configuration can be changed if you are using dedicated runners inside the VPC.
This dynamic Ansible inventory is created by a Terraform null resource, as shown below.
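I won't reproduce the exact resource here, but a minimal sketch of the idea looks like this. The resource names aws_instance.kube_master and aws_instance.kube_worker are placeholders for illustration; the real definitions live in the repository. It writes a kube_hosts file with the master and kubeworkers groups that the playbooks above expect.

# Sketch only - aws_instance.kube_master and aws_instance.kube_worker are
# placeholder resource names, not necessarily the ones used in my repository.
resource "null_resource" "kube_inventory" {
  # Re-create the inventory whenever the instance IPs change
  triggers = {
    master_ip  = aws_instance.kube_master.public_ip
    worker_ips = join(",", aws_instance.kube_worker.*.public_ip)
  }

  provisioner "local-exec" {
    command = <<EOT
echo "[master]" > kube_hosts
echo "${aws_instance.kube_master.public_ip} ansible_user=ec2-user" >> kube_hosts
echo "[kubeworkers]" >> kube_hosts
echo "${join("\n", formatlist("%s ansible_user=ec2-user", aws_instance.kube_worker.*.public_ip))}" >> kube_hosts
EOT
  }
}

Because the result is a plain INI-style inventory of public IP addresses, the cluster build job can simply rename it to kube_hosts_data and hand it over as an artifact for the kubedeploy stage to pass to ansible-playbook with -i kube_hosts_data.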

This is just the method I followed, and there are a million ways of achieving a similar setup. I personally use this pipeline to provision my Kubernetes clusters on AWS.