Automate K8S setup using Ansible over EC2

Yukta chakravarty
4 min read · May 15, 2021

Ansible Role to Configure K8S Multi Node Cluster over AWS Cloud.
🔅 Create an Ansible role to launch 3 AWS EC2 instances
🔅 Create a role to configure the K8S master and worker nodes on those instances using kubeadm.

Step 1: Set up dynamic inventory for EC2

  1. Download the files for the EC2 dynamic inventory: ec2.py and ec2.ini
  2. Export your AWS credentials in the terminal:

export AWS_ACCESS_KEY_ID='YOUR_AWS_API_KEY'

export AWS_SECRET_ACCESS_KEY='YOUR_AWS_API_SECRET_KEY'

You can also put these commands in ~/.bashrc so that you don’t have to export them every time
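Once the credentials are exported, a quick way to confirm the dynamic inventory works is a ping play. This is a minimal sketch: the tag_ClusType_k8s group name follows ec2.py’s tag-based group naming and assumes the instances carry the ClusType: k8s tag used later in this article.

```yaml
# check_inventory.yml — verify that ec2.py resolves our tagged hosts
# Run with: ansible-playbook -i ec2.py check_inventory.yml
- hosts: tag_ClusType_k8s   # group generated by ec2.py from the ClusType=k8s tag
  gather_facts: no
  tasks:
    - name: Check SSH connectivity to every cluster node
      ping:
```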

Step 2: Launch EC2 instances

  1. Create a role for launching the instances using ansible-galaxy init roleName
  2. Create a VPC and a subnet in AWS, create an Internet Gateway and attach it to the VPC, then add a route to the VPC’s route table
Route Table of VPC
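The networking pieces above can also be created from Ansible instead of the console, using the ec2_vpc_* modules. This is only a sketch: the names, region and CIDR blocks here are illustrative assumptions, not values from the article.

```yaml
# Sketch: VPC, subnet, Internet Gateway and route, via Ansible's ec2_vpc_* modules.
- ec2_vpc_net:
    name: k8s_vpc               # assumed name
    cidr_block: 10.0.0.0/16     # assumed CIDR
    region: ap-south-1          # assumed region
  register: vpc

- ec2_vpc_subnet:
    vpc_id: "{{ vpc.vpc.id }}"
    cidr: 10.0.1.0/24
    region: ap-south-1
  register: subnet

- ec2_vpc_igw:
    vpc_id: "{{ vpc.vpc.id }}"
    region: ap-south-1
  register: igw

# Route all outbound traffic through the Internet Gateway
- ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc.id }}"
    region: ap-south-1
    subnets:
      - "{{ subnet.subnet.id }}"
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw.gateway_id }}"
```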

3. Create a security group, allowing traffic from the Ansible host’s IP and from the security group itself

Security group named k8s_sg Inbound rules

The second entry in the image is the same security group, k8s_sg; it allows only the instances in that VPC to communicate with each other.
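A security group like the one in the image can be expressed with the ec2_group module. A sketch, assuming the VPC ID is available in a variable; the Ansible host IP is a placeholder:

```yaml
# Sketch of the k8s_sg security group: allow the Ansible host's IP
# and all traffic between members of the group itself.
- ec2_group:
    name: k8s_sg
    description: Security group for the k8s cluster
    vpc_id: "{{ vpc_id }}"        # assumed variable holding the VPC id
    region: ap-south-1            # assumed region
    rules:
      - proto: all
        cidr_ip: 1.2.3.4/32       # placeholder: public IP of the Ansible host
      - proto: all
        group_name: k8s_sg        # second entry: the group itself
```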

Pass the subnet ID and security group name when using the role

tasks/main.yml of ec2_launch role

This code will create the master node with the tags “ClusType:k8s” and “Node:Master”, and the slave nodes with the tags “ClusType:k8s” and “Node:Slave”

We have to provide the region, key name, instance type, image ID (an Amazon Linux AMI ID), subnet ID, security group name and exact_count

exact_count launches nodes only if the number of instances with the desired tags has not yet been reached. For example, if we want only 1 master node and we already have 1 master node with the required tags, the role will not launch another.
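Since tasks/main.yml is shown only as an image, here is a sketch of what such launch tasks look like with the ec2 module. The tags follow the article; the variable names, instance type and slave count are assumptions:

```yaml
# Sketch of tasks/main.yml of the ec2_launch role.
- name: Launch the master node
  ec2:
    region: "{{ region }}"
    key_name: "{{ key_name }}"
    instance_type: t2.micro            # assumed instance type
    image: "{{ image_id }}"            # Amazon Linux AMI ID
    vpc_subnet_id: "{{ subnet_id }}"
    group: "{{ sg_name }}"
    assign_public_ip: yes
    instance_tags:
      ClusType: k8s
      Node: Master
    count_tag:                         # tags exact_count matches against
      ClusType: k8s
      Node: Master
    exact_count: 1                     # launch only if no tagged master exists

- name: Launch the slave nodes
  ec2:
    region: "{{ region }}"
    key_name: "{{ key_name }}"
    instance_type: t2.micro
    image: "{{ image_id }}"
    vpc_subnet_id: "{{ subnet_id }}"
    group: "{{ sg_name }}"
    assign_public_ip: yes
    instance_tags:
      ClusType: k8s
      Node: Slave
    count_tag:
      ClusType: k8s
      Node: Slave
    exact_count: 2                     # two slaves => three instances total
```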

Step 3: Configure k8s on the launched instances

vars/main.yml file

Tasks on the EC2 instances, like installing packages, need sudo power, so we set ansible_become: true here
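The vars file itself is only visible as an image; it likely looks something like this sketch (the user name is an assumption based on Amazon Linux defaults):

```yaml
# Sketch of vars/main.yml — values are assumptions, not the article's exact file.
ansible_user: ec2-user     # default login user on Amazon Linux
ansible_become: true       # sudo for package installs and service management
```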

daemon.json file
  1. To configure k8s we first need a container engine; here we use Docker, so the first task installs it. Please make sure you gave an Amazon Linux AMI ID while launching the instances, otherwise these tasks will not work as-is. If you want to use another Linux AMI, configure the Docker repository for it first
  2. For k8s we need to change the cgroup driver to systemd, so we add that setting to daemon.json
  3. Start and enable the Docker service
  4. Configure the yum repository for k8s
  5. Install the kubelet, kubeadm and kubectl packages
  6. Enable kubelet
  7. Install iproute-tc
  8. Add the following to /etc/sysctl.d/k8s.conf:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

handlers/main.yml
k8s.conf file

This configuration is to be done on all nodes
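The steps above can be sketched as Ansible tasks like the following. The task list and handlers mirror steps 1–8; the repository URL and handler names are assumptions based on the standard CentOS/Amazon Linux Kubernetes setup:

```yaml
# Sketch of the common tasks (tasks/main.yml), following steps 1-8 above.
- name: Install Docker
  package:
    name: docker
    state: present

- name: Set the cgroup driver to systemd in daemon.json
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }
  notify: restart docker

- name: Start and enable the Docker service
  service:
    name: docker
    state: started
    enabled: yes

- name: Configure the yum repository for k8s
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: Install kubelet, kubeadm and kubectl
  package:
    name: [kubelet, kubeadm, kubectl]
    state: present

- name: Enable kubelet
  service:
    name: kubelet
    enabled: yes

- name: Install iproute-tc
  package:
    name: iproute-tc
    state: present

- name: Set bridge netfilter sysctls
  copy:
    dest: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
  notify: apply sysctl

# handlers/main.yml (sketch)
- name: restart docker
  service:
    name: docker
    state: restarted

- name: apply sysctl
  command: sysctl --system
```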

Step 4: Configuration to be done on master only

  1. Pull images
  2. Run kubeadm init
  3. Creating directory for kube
  4. Flannel setup
  5. Generating token for slave to join
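The five master-only steps above can be sketched like this. The pod-network CIDR, flannel manifest URL and fact name are assumptions based on the standard kubeadm + flannel workflow:

```yaml
# Sketch of the master-only tasks (steps 1-5 above).
- name: Pull the control-plane images
  command: kubeadm config images pull

- name: Initialise the cluster
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU,Mem

- name: Create the kube config directory
  file:
    path: /root/.kube
    state: directory

- name: Copy the admin config into it
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /root/.kube/config
    remote_src: yes

- name: Set up the flannel network add-on
  command: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command for the slaves
  command: kubeadm token create --print-join-command
  register: join_cmd

- name: Store the join command as a fact for the slaves to read
  set_fact:
    join_command: "{{ join_cmd.stdout }}"
```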

Step 5: Configurations only for slave nodes

  1. In the first task we retrieve the IP of the master node
  2. Then we retrieve the join command from the fact that we set on the master
  3. Finally we run that command to join the master
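The three slave-only steps can be sketched as follows. The tag_Node_Master group name comes from ec2.py’s tag-based naming, and the join_command fact name is an assumption matching the master-side setup:

```yaml
# Sketch of the slave-only tasks (steps 1-3 above).
- name: Find the master host from the dynamic inventory
  set_fact:
    master_host: "{{ groups['tag_Node_Master'][0] }}"

- name: Retrieve the join command from the fact set on the master
  set_fact:
    join_command: "{{ hostvars[master_host]['join_command'] }}"

- name: Join the cluster
  command: "{{ join_command }}"
```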

Step 6: Create setup files that use the roles

k8s_setup.yml

Run this file first to launch ec2 instances
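Since k8s_setup.yml is shown as an image, a sketch of what it likely contains — the launch role runs locally on the controller (the role name follows the caption earlier in the article):

```yaml
# Sketch of k8s_setup.yml: launch the EC2 instances from the controller.
- hosts: localhost
  roles:
    - ec2_launch
```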

k8s_setup1.yml

Run this file after the instances have been launched; here the IPs are retrieved dynamically based on tags
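A sketch of k8s_setup1.yml, targeting the tag groups that ec2.py generates. The role names here are assumptions; substitute whatever you named the roles for Steps 3–5:

```yaml
# Sketch of k8s_setup1.yml: hosts are resolved from ec2.py's tag groups.
- hosts: tag_ClusType_k8s
  roles:
    - k8s_common        # configuration for all nodes (Step 3)

- hosts: tag_Node_Master
  roles:
    - k8s_master        # master-only tasks (Step 4)

- hosts: tag_Node_Slave
  roles:
    - k8s_slave         # slave-only tasks (Step 5)
```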

Hence, the K8s multi-node cluster is set up

You can find the entire code here
