Automate K8S setup using Ansible over EC2
Ansible Role to Configure K8S Multi Node Cluster over AWS Cloud.
🔅 Create an Ansible role to launch 3 AWS EC2 instances
🔅 Create a role to configure the K8S master and K8S worker nodes on those EC2 instances using kubeadm.
Step 1: Set up a dynamic inventory for EC2
export AWS_ACCESS_KEY_ID='YOUR_AWS_API_KEY'
export AWS_SECRET_ACCESS_KEY='YOUR_AWS_API_SECRET_KEY'
You can also put these commands in ~/.bashrc so that you don't have to export them every time
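With the credentials exported, the inventory itself can be described in a small plugin file. This is a minimal sketch using the `amazon.aws.aws_ec2` inventory plugin (an assumption; the classic `ec2.py` script works as well), with an example region and the tag names used later in this article:

```yaml
# aws_ec2.yml -- dynamic-inventory sketch. Credentials come from the
# exported AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables.
plugin: amazon.aws.aws_ec2
regions:
  - ap-south-1              # example region; use your own
filters:
  tag:ClusType: k8s         # pick up only this cluster's instances
keyed_groups:
  - key: tags.Node          # builds groups from the Node tag,
    prefix: tag_Node        # e.g. tag_Node_Master, tag_Node_Slave
```

Any inventory file ending in `aws_ec2.yml` is recognised by the plugin when it is enabled in `ansible.cfg`.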
Step 2: Launch EC2 instances
- Create a role for launching instances using ansible-galaxy init roleName
- Create a VPC and a subnet in AWS, create an Internet gateway and attach it to the VPC, then add a route in the VPC's route table
- Create a security group, and allow the IP of the Ansible host as well as traffic from the same security group
The second rule references the same security group, k8s_sg; it allows the instances in that security group to communicate with each other.
Pass the subnet ID and the security group name when using the role.
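The security-group rules described above can be sketched as a single task. Module name is from the `amazon.aws` collection; the variable names and the controller CIDR are illustrative assumptions:

```yaml
# Security-group sketch: one rule for the Ansible controller,
# one self-referencing rule for node-to-node traffic.
- name: Create security group for the cluster
  amazon.aws.ec2_group:
    name: k8s_sg
    description: Allow Ansible host and intra-cluster traffic
    vpc_id: "{{ vpc_id }}"               # assumed variable
    rules:
      - proto: all
        cidr_ip: "{{ ansible_host_ip }}/32"   # Ansible controller's IP (assumed variable)
      - proto: all
        group_name: k8s_sg                    # same SG: instances talk to each other
```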
This code creates the master node with the tags "ClusType: k8s" and "Node: Master", and the slave nodes with the tags "ClusType: k8s" and "Node: Slave"
We have to provide the region, key name, instance type, image ID (use an Amazon Linux AMI ID), subnet ID, security group name, and exact_count
exact_count launches nodes only if the count of instances with the desired tags is not already satisfied. For example, if we want exactly 1 master node and 1 master node with the required tags already exists, it will not launch another node
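The launch task with its tags and exact_count behaviour can be sketched as follows, using the classic `ec2` module's `count_tag`/`exact_count` parameters (the variable names and instance type are assumptions):

```yaml
# Launch sketch: exact_count makes the task idempotent -- it only
# launches an instance if none with the count_tag tags exists.
- name: Launch the master node
  ec2:
    region: "{{ region }}"
    key_name: "{{ key_name }}"
    instance_type: t2.micro            # illustrative type
    image: "{{ image_id }}"            # Amazon Linux AMI ID
    vpc_subnet_id: "{{ subnet_id }}"
    group: "{{ sg_name }}"             # k8s_sg from the previous step
    assign_public_ip: yes
    instance_tags:
      ClusType: k8s
      Node: Master
    count_tag:                         # tags used to count existing instances
      ClusType: k8s
      Node: Master
    exact_count: 1                     # ensure exactly one master
```

The slave task is identical except for `Node: Slave` and `exact_count: 2`.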
Step 3: Configure k8s on the launched instances
Tasks on the EC2 instances, such as installing packages, need sudo privileges, so we use ansible_become: true here
- To configure k8s we first need a container engine; here we use Docker, so the first task installs it. Make sure you provided an Amazon Linux AMI ID while launching the instances, otherwise these tasks will not work as written. If you want to use another Linux AMI, configure the Docker repository for it first
- k8s expects the systemd cgroup driver, so we add that setting to the file daemon.json
- Start and enable the Docker service
- Configure the yum repository for k8s
- Install the kubelet, kubeadm, and kubectl packages
- Enable the kubelet service
- Install iproute-tc
- Add the following to /etc/sysctl.d/k8s.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
This configuration is to be done on all nodes
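The common-node tasks above can be sketched roughly as below. The daemon.json and sysctl contents come from the text; the yum repo URL is the legacy upstream Kubernetes repository, an assumption about what the role uses:

```yaml
# Common-node sketch: cgroup driver, k8s repo, bridged-traffic sysctl.
- name: Set Docker cgroup driver to systemd
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }

- name: Configure yum repository for Kubernetes
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

- name: Enable bridged traffic through iptables
  copy:
    dest: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1

- name: Apply the sysctl settings
  command: sysctl --system
```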
Step 4: Configuration to be done on master only
- Pull the images
- Run kubeadm init
- Create the kube config directory
- Set up Flannel
- Generate a token for the slaves to join
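A rough sketch of the master-only tasks follows. The pod CIDR, the Flannel manifest URL, and the fact name `join_command` are assumptions, not necessarily what the role uses:

```yaml
# Master-only sketch: init the control plane, deploy Flannel,
# and save the worker join command as a fact for the slave play.
- name: Initialize the control plane
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

- name: Create the kube config directory for the user
  shell: |
    mkdir -p $HOME/.kube
    cp -f /etc/kubernetes/admin.conf $HOME/.kube/config

- name: Deploy Flannel
  command: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command for the slaves
  command: kubeadm token create --print-join-command
  register: join_cmd

- name: Save the join command as a fact
  set_fact:
    join_command: "{{ join_cmd.stdout }}"
```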
Step 5: Configurations only for slave nodes
- The first task retrieves the IP of the master node
- Then we retrieve the join command from the fact that we set on the master
- Finally, run that command to join the master
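These slave tasks can be sketched as below; the inventory group name `tag_Node_Master` and the fact name `join_command` must match whatever the master play registered (both are assumptions here):

```yaml
# Slave sketch: read the join command saved on the master via
# hostvars, then run it to join the cluster.
- name: Read the join command from the master's facts
  set_fact:
    join_command: "{{ hostvars[groups['tag_Node_Master'][0]]['join_command'] }}"

- name: Join the cluster
  command: "{{ join_command }} --ignore-preflight-errors=all"
```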
Step 6: Create setup files that use the roles
Run this file first to launch the EC2 instances
Run this file after the instances have been launched; here the IPs are retrieved dynamically based on tags
Hence, the K8s multi-node cluster is set up
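The two setup files can be sketched roughly like this; the role names and the tag-based group names are assumptions that must match the earlier steps:

```yaml
# launch.yml -- run first, on localhost, to create the instances
- hosts: localhost
  roles:
    - launch_ec2            # assumed role name from Step 2

# cluster.yml -- run second, against the dynamic-inventory groups
- hosts: tag_Node_Master
  become: true
  roles:
    - k8s_master            # assumed role name from Step 4
- hosts: tag_Node_Slave
  become: true
  roles:
    - k8s_slave             # assumed role name from Step 5
```

Because the second file targets tag-derived groups, it picks up the freshly launched instances' IPs without any static inventory edits.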