AUTOMATE KUBERNETES CLUSTER OVER AWS USING ANSIBLE
🤔 KUBERNETES CLUSTER!! What is it!?
Kubernetes is an orchestration tool that manages service containers through a unit called a Pod. The main idea is simple: in a production environment our service must reach clients with no downtime, and load balancing is an integral part of Kubernetes. Pods run on either a single-node or a multi-node Kubernetes cluster. In a cluster, the nodes run components and add-ons such as kubelet, etcd, the scheduler, kubeadm, and Flannel, which together run the cluster and register the worker (slave) nodes with the master node.
Before jumping into the cluster setup, let's note the building blocks this blog relies on:
👉🏻 The Orchestration Tool — Kubernetes :
Kubernetes is an orchestration tool that monitors and manages containers. It was designed by Google and open-sourced in 2014. Kubernetes handles deployment, maintenance, and scaling of containers, keeping them in a desired state and providing continuous support through features such as Pods, Labels, Selectors, Controllers, Replication Controllers, Deployment Controllers, ReplicaSets, and Services. Kubernetes takes a declarative approach to orchestration and hence uses a declarative language, i.e. YAML (or JSON). (Refer to my previous blog on Kubernetes.)
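To make the declarative idea concrete, here is a minimal, illustrative Pod manifest (the names web and nginx are just examples): you describe the desired state, and Kubernetes keeps reconciling the cluster toward it.
# pod.yml - a minimal Pod running a single nginx container
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
Apply it with kubectl apply -f pod.yml; wrap it in a Deployment and Kubernetes will even recreate it if it dies.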
👉🏻 The Configuration Tool — Ansible :
Automation has been around in IT for decades, but Ansible was designed mainly for "Configuration Management". The tool came to market with high scalability and the ability to kick-start a business: instead of paying people to configure each node behind the load balancers by hand, you write playbooks that keep doing it for you. Now the question arises: what exactly does Ansible automate, and can we rely on it for every step of configuration management? (Refer to my previous blog on Ansible.)
👉🏻 The Cloud Service Provider — AWS :
AWS (Amazon Web Services) provides a great breadth of services, which is why it is the most widely adopted public cloud in the world. According to a recent Gartner report, AWS ranks first in providing resources and services with the greatest availability and security. AWS also offers a set of fully managed services for building and running serverless applications, which need no provisioning, maintenance, or administration of servers for backend components such as compute, databases, storage, stream processing, and message queueing. (Refer to my blog on AWS.)
Let's outline our plan before proceeding further:
As the title says, the task of setting up the cluster is going to be automated using Ansible roles. The walkthrough is split into the following parts:
1 ▪ What exactly is an Ansible role?
2 ▪ How is the Kubernetes master role written?
3 ▪ How is the Kubernetes slave/worker role written?
4 ▪ How is the AWS EC2 instance creation role written?
5 ▪ Finally, we configure the whole Kubernetes multi-node cluster over AWS by running a setup.yml file with Ansible from our local system.
Note: This task was done jointly with Raktim Midya, so for any queries you can reach either of us.
My Local Machine Configurations:
> RHEL 8 VM on top of VirtualBox with 2 CPUs, 4 GB RAM.
> Ansible version 2.10.4 installed.
> Proper Network connectivity using Bridge Adapter.
Step 1: Create Ansible Configuration file:
Ansible, being an agentless automation tool, needs a configuration file on the controller node, which here is our local system. The configuration can live globally at /etc/ansible/ansible.cfg, or locally in the workspace where we run our playbooks/roles (a local ansible.cfg overrides the global one).
Create Workspace for this Project:
# mkdir kube_ansible
# cd kube_ansible
# mkdir roles
Configuration file:
# vim ansible.cfg
[defaults]
host_key_checking=False
command_warnings=False
deprecation_warnings=False
ask_pass=False
roles_path= ./roles
force_valid_group_names = ignore
private_key_file= ./key.pem
remote_user=ec2-user
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
In short: host_key_checking and ask_pass are disabled so Ansible can SSH into freshly launched instances non-interactively, private_key_file and remote_user tell it to log in as ec2-user with key.pem, roles_path points to our local roles directory, and the [privilege_escalation] section makes every task run as root through sudo.
Step 2: Next we create three roles: the Kubernetes master role, the Kubernetes slave/worker role, and the AWS EC2 instance creation role.
# cd roles
# ansible-galaxy init aws-ec2
# ansible-galaxy init kube_master
# ansible-galaxy init kube_slave
# cd ..
1 ▪ What exactly is an Ansible role?
An Ansible role has a defined directory structure with a set of standard directories (tasks, handlers, defaults, vars, files, templates, meta, tests). Roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content in roles, you can easily reuse them and share them with other users.
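Running ansible-galaxy init lays this structure out for us; for example, the kube_master role created in Step 2 looks like this (output illustrative):
# tree roles/kube_master
roles/kube_master
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── README.md
├── tasks
│   └── main.yml
├── templates
├── tests
│   ├── inventory
│   └── test.yml
└── vars
    └── main.yml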
The ansible-galaxy command creates this skeleton automatically; each directory gets a placeholder main.yml, and in this project we only fill in the tasks and vars folders. (See the Ansible documentation for further details.)
2 ▪ How is the Kubernetes master role written?
The role is a sequence of tasks that download, install, and start the required services, as written below:
Step 3: Describe how to configure the Kubernetes master node in the tasks folder of the kube_master role; the vars folder contains the values of the variables it uses:
# cd roles/kube_master/tasks
# vim main.yml
---
# tasks file for kube_master
- name: Add kubeadm repositories on Master Node
  yum_repository:
    name: kube
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled: 1
    gpgcheck: 1
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: Installing Docker & kubeadm on Master Node
  package:
    name:
      - docker
      - kubeadm
      - iproute-tc
    state: present

- name: Starting & enabling Docker & kubelet on Master Node
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: "{{ service_names }}"

- name: Pulling the images of k8s master
  command: kubeadm config images pull

- name: Updating Docker cgroup on Master Node
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }

- name: Restart Docker on Master Node
  service:
    name: docker
    state: restarted

- name: Initializing k8s cluster
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

- name: Setting up kubectl on Master Node
  shell:
    cmd: |
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

- name: Deploying Flannel on Master Node
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Creating token for Slave
  command: kubeadm token create --print-join-command
  register: token

- name: Cleaning Caches on RAM
  shell: echo 3 > /proc/sys/vm/drop_caches
# cd roles/kube_master/vars
# vim main.yml
---
# vars file for kube_master
service_names:
  - "docker"
  - "kubelet"
As the code above shows, we first install Docker, kubeadm, and iproute-tc, the prerequisite software on the master node. We then switch Docker's cgroup driver to systemd, initialize the Kubernetes master with kubeadm init, and deploy Flannel, which provides the overlay network (tunnel) connecting slave and master. Finally we create a join token for the slaves and register it for later use.
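Once the role has run, a quick manual sanity check (not part of the automation; <master-public-ip> is a placeholder) is to SSH into the master and confirm that the control-plane and Flannel pods are up:
# ssh -i key.pem ec2-user@<master-public-ip>
$ sudo kubectl get pods -n kube-system --kubeconfig /etc/kubernetes/admin.conf
The --kubeconfig flag is needed here because the role ran with privilege escalation, so the admin config was copied into root's home rather than ec2-user's.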
3 ▪ How is the Kubernetes slave/worker role written?
As with the master, the role is a sequence of tasks that install and start the required services, as written below:
Step 4: Now let's write the tasks for the kube_slave role inside its tasks folder and then set the variable values in the vars folder:
# cd roles/kube_slave/tasks
# vim main.yml
---
# tasks file for kube_slave
- name: Add kubeadm repositories on Slave Node
  yum_repository:
    name: kube
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled: 1
    gpgcheck: 1
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: Installing Docker & kubeadm on Slave Node
  package:
    name:
      - docker
      - kubeadm
      - iproute-tc
    state: present

- name: Starting & enabling Docker & kubelet on Slave Node
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: "{{ service_names }}"

- name: Updating Docker cgroup on Slave Node
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }

- name: Restart Docker on Slave Node
  service:
    name: docker
    state: restarted

- name: Updating IP tables on Slave Node
  copy:
    dest: /etc/sysctl.d/k8s.conf
    content: |
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1

- name: Reloading sysctl on Slave Node
  command: sysctl --system

- name: Joining the master node
  command: "{{ hostvars[groups['ec2_master'][0]]['token']['stdout'] }}"

- name: Cleaning Caches on RAM
  shell: echo 3 > /proc/sys/vm/drop_caches
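The kube_slave vars file mirrors the master's, since the role loops over the same service_names (shown here for completeness):
# cd roles/kube_slave/vars
# vim main.yml
---
# vars file for kube_slave
service_names:
  - "docker"
  - "kubelet"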
As with the Kubernetes master, the slave's prerequisites are Docker, kubeadm, and iproute-tc. We also have to update the IP tables settings on the slave, which is what the /etc/sysctl.d/k8s.conf file is for.
A slave node can only register with (join) the master using the join command that the master prints after its setup and initialization. We need to run that command on the slave nodes, and that is why we registered it in the token variable on the master.
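The mechanism behind this is Ansible's register + hostvars: a value captured on one host can be read from any other host later in the same run. A minimal sketch of the pattern, using our ec2_master and ec2_slave groups:
- hosts: ec2_master
  tasks:
    - command: kubeadm token create --print-join-command
      register: token               # saved in the master's hostvars

- hosts: ec2_slave
  tasks:
    # look up the master's registered variable and execute the printed
    # "kubeadm join <ip>:6443 --token ..." line on every slave
    - command: "{{ hostvars[groups['ec2_master'][0]]['token']['stdout'] }}"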
4 ▪ How is the AWS EC2 instance creation role written?
AWS is a public cloud provider with three modes of interaction: CLI, WebUI, and API. Ansible connects to AWS through the API, for which we need credentials and some API client packages, as described below:
Step 5: For launching the EC2 instances we have the aws-ec2 role. First we need credentials to log in to the AWS account; then we will see how the role is written:
Create the credential.yml vault file:
# ansible-vault create credential.yml
access_key: HGEWIFEWFEN
secret_key: AHJB3982649^$^*JIKJB@JKHIU
Your IAM account access key and secret key are required here, so create an IAM user and put its keys in the vault file. The point of using a vault file instead of a plain text file is the security of our credentials: Ansible Vault encrypts the content with the AES256 encryption algorithm.
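If you later need to inspect or change the credentials, ansible-vault provides matching subcommands; each one prompts for the vault password:
# ansible-vault view credential.yml
# ansible-vault edit credential.yml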
Another thing we need is an EC2 key pair, which lets us log in to and work on the launched instances in our AWS account. For this, go to EC2
-> Key Pairs -> Create key pair -> give the key a name (key) -> choose the .PEM format -> Create -> download the key pair.
You need to change the key to read-only mode; for that, use the following command:
# chmod 400 key.pem
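Alternatively, if you already have the AWS CLI configured, the same key pair can be created from the terminal (the key name "key" is just an example):
# aws ec2 create-key-pair --key-name key --query "KeyMaterial" --output text > key.pem
# chmod 400 key.pem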
In the aws-ec2 role, the tasks are written in the tasks folder's main.yml file:
# cd roles/aws-ec2/tasks
# vim main.yml
---
# tasks file for aws-ec2
- name: Installing boto & boto3 on local system
  pip:
    name: "{{ item }}"
    state: present
  loop: "{{ python_pkgs }}"

- name: Creating Security Group for K8s Cluster
  ec2_group:
    name: "{{ sg_name }}"
    description: Security Group allowing all ports
    region: "{{ region_name }}"
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    rules:
      - proto: all
        cidr_ip: 0.0.0.0/0
    rules_egress:
      - proto: all
        cidr_ip: 0.0.0.0/0

- name: Launching three EC2 instances on AWS
  ec2:
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_flavour }}"
    image: "{{ ami_id }}"
    wait: true
    group: "{{ sg_name }}"
    count: 1
    vpc_subnet_id: "{{ subnet_name }}"
    assign_public_ip: yes
    region: "{{ region_name }}"
    state: present
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      Name: "{{ item }}"
  register: ec2
  loop: "{{ instance_tag }}"

- name: Add 1st instance to host group ec2_master
  add_host:
    hostname: "{{ ec2.results[0].instances[0].public_ip }}"
    groupname: ec2_master

- name: Add 2nd instance to host group ec2_slave
  add_host:
    hostname: "{{ ec2.results[1].instances[0].public_ip }}"
    groupname: ec2_slave

- name: Add 3rd instance to host group ec2_slave
  add_host:
    hostname: "{{ ec2.results[2].instances[0].public_ip }}"
    groupname: ec2_slave

- name: Wait for SSH to come up
  wait_for:
    host: "{{ ec2.results[2].instances[0].public_dns_name }}"
    port: 22
    state: started
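Because the ec2 module runs inside a loop, the registered ec2 variable is a list under .results, one entry per instance tag; that is why the add_host tasks index ec2.results[0..2]. An illustrative debug task (not part of the role) to inspect what came back:
- name: Show the public IP of every launched instance
  debug:
    msg: "{{ item.item }} -> {{ item.instances[0].public_ip }}"
  loop: "{{ ec2.results }}"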
Two Python libraries, boto and boto3, are required to work with the AWS API. We launch three instances (nodes): one for the Kubernetes master and two for the slaves, so we list the three instance tags in a loop variable in the vars folder's main.yml file:
# cd roles/aws-ec2/vars
# vim main.yml
---
# vars file for aws-ec2
instance_tag:
  - master
  - slave1
  - slave2
python_pkgs:
  - boto
  - boto3
sg_name: Allow_All_SG
region_name: ap-south-1
subnet_name: subnet-6dfdc705    # use a subnet ID from your own VPC
ami_id: ami-0bcf5425cdc1d8a85   # pick an AMI ID valid in your region
keypair: key                    # must match the EC2 key pair saved as key.pem
instance_flavour: t2.micro
Done with the roles; now let's finally write our setup.yml file, which automates all the tasks cumulatively just by calling those roles. Create the file inside our main workspace, kube_ansible:
# cd kube_ansible/
# vim setup.yml
- hosts: localhost
  gather_facts: no
  vars_files:
    - credential.yml
  tasks:
    - name: Running EC2 Role
      include_role:
        name: aws-ec2

- hosts: ec2_master
  gather_facts: no
  tasks:
    - name: Running K8s Master Role
      include_role:
        name: kube_master

- hosts: ec2_slave
  gather_facts: no
  tasks:
    - name: Running K8s Slave Role
      include_role:
        name: kube_slave
We have given "localhost" as the host of the first play because the EC2 role runs from our local system and launches everything dynamically over the AWS cloud; the master and slave roles then run, step by step, on the host groups that the EC2 role just created with add_host.
Step 6: Run the playbook with the ansible-playbook command, giving the vault password so Ansible can decrypt the AWS credentials:
# ansible-playbook setup.yml --ask-vault-pass
Hence we have achieved our target of "Automating Kubernetes Cluster Over AWS Using Ansible".
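As a final check, SSH into the master (public IP placeholder below) and confirm that all three nodes have joined and that workloads get scheduled:
# ssh -i key.pem ec2-user@<master-public-ip>
$ sudo kubectl get nodes --kubeconfig /etc/kubernetes/admin.conf
$ sudo kubectl create deployment web --image=nginx --kubeconfig /etc/kubernetes/admin.conf
$ sudo kubectl get pods -o wide --kubeconfig /etc/kubernetes/admin.conf
The master and both slaves should reach the Ready state once Flannel is up, and the nginx pod should land on one of the slave nodes.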
Following is the GitHub repository for your reference; just pull it and check how this automation is done :)
I want to thank Vimal Sir and Raktim for their guidance and full support in this task. Also, if you want to connect with me on LinkedIn 👇🏻