Integrating technologies …
Integrating technologies not only gives you a better insight into them but also helps you understand their real-life use cases. The interesting task here is to create Ansible roles that launch a Kubernetes cluster, i.e. a Kubernetes master and a Kubernetes worker node, on the AWS Cloud.
The first step is to have Ansible on your base system. Install it using pip3 install ansible. Before creating a playbook, we need to set up an inventory, and the best practice here is to use a dynamic inventory.
Setup Dynamic Inventory
To set up the dynamic inventory follow the given points:
- Create a directory, let us say /mydb, using mkdir /mydb
- Download the ec2.py and ec2.ini files (found at https://github.com/ansible/ansible/tree/stable-2.9/contrib/inventory) into this directory using wget <URL>
- Install boto library : pip3 install boto
- Change the interpreter written in the first line of the ec2.py file so that it points to your Python installation.
- Now make the files executable using: chmod +x filename
- ec2.py will therefore act as the dynamic inventory for you.
Set the configuration file
The next step is the Ansible configuration file (/etc/ansible/ansible.cfg).
Along with adding the inventory path, we add a remote user, since we want to launch the cluster remotely on AWS, and we provide the private key for it. We connect remotely using SSH, so add the connection type as well. We also enable privilege escalation so that the remote system can perform the tasks assigned to it without fail.
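Putting the pieces together, the configuration file could look roughly like this. This is a minimal sketch: the inventory path, user name, and key file name are assumptions (here a default Amazon Linux user and a key stored in /mydb), so substitute your own values.

```ini
[defaults]
# path to the dynamic inventory script downloaded earlier (assumed location)
inventory = /mydb/ec2.py
remote_user = ec2-user
private_key_file = /mydb/mykey.pem
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root
```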
Next, we create three roles: to launch instances in the cloud, to configure the master, and to configure the worker node or slave.
How to create roles?
Creating roles is an easy task. First, create a separate directory for the roles, let us say K8s_Cluster_Roles. Use the following command to create a role inside this directory.
ansible-galaxy init <name_of_role>
Role 1: Launches ec2-instances
Once the role has been created, you can see the following directory structure inside it. Each directory contains a main.yml file where the playbook has to be written. We write the tasks to be done in the tasks/main.yml file.
The playbook for this role is written with the help of the ec2 module's documentation. The important thing to remember here is the tag name allotted to the instances: because of the dynamic inventory, we will use these tags to run the other two roles. The rest of the values are flexible according to the user's needs.
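As a reference, the launch role's tasks/main.yml could look roughly like the sketch below. The AMI ID, subnet, region, key pair, and tag values are placeholders and assumptions, not the article's exact values; adjust them to your account.

```yaml
# K8s_Cluster_Roles/<launch_role>/tasks/main.yml — a minimal sketch
- name: launch the Kubernetes master instance
  ec2:
    key_name: "mykey"                # assumed key pair name
    instance_type: t2.micro
    image: "ami-xxxxxxxx"            # your AMI ID
    region: ap-south-1               # assumed region
    vpc_subnet_id: "subnet-xxxx"
    assign_public_ip: yes
    wait: yes
    count: 1
    instance_tags:
      Name: k8s_master               # this tag drives the dynamic inventory
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"

- name: launch the Kubernetes worker instance
  ec2:
    key_name: "mykey"
    instance_type: t2.micro
    image: "ami-xxxxxxxx"
    region: ap-south-1
    vpc_subnet_id: "subnet-xxxx"
    assign_public_ip: yes
    wait: yes
    count: 1
    instance_tags:
      Name: k8s_worker
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
```

The credentials are read from variables (e.g. defined in vars/main.yml or passed via ansible-vault) rather than hard-coded.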
Role 2: Configure the instance as the K8s master
Similarly, write the playbook for the master inside tasks and the variables inside the vars directory. Since this playbook is long, I have included three screenshots.
The long playbook configures the instance as the Kube master. At the end, use the blockinfile module to store the join command in the slave's variable file. This lets the slave connect to the master automatically when the playbook is run. I haven't added variables for this role; you can if you want to.
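In outline, the master role's tasks could look like the condensed sketch below. This assumes a typical kubeadm setup on a yum-based distribution; the repository URL, package names, CIDR, role paths, and file locations are assumptions, not the article's exact playbook.

```yaml
# K8s_Cluster_Roles/<master_role>/tasks/main.yml — a condensed sketch
- name: configure the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: no

- name: install docker and kubeadm
  package:
    name: [docker, kubeadm, iproute-tc]
    state: present

- name: start and enable the services
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: pull the control-plane images
  command: kubeadm config images pull

- name: initialize the control plane
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU,Mem

- name: set up kubectl for the admin user
  shell: |
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

- name: install the Flannel network add-on
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: generate the join command
  command: kubeadm token create --print-join-command
  register: join_cmd

- name: store the join command in the worker role's variable file
  blockinfile:
    path: /mydb/K8s_Cluster_Roles/<worker_role>/vars/main.yml
    block: 'join_command: "{{ join_cmd.stdout }}"'
  delegate_to: localhost   # write the file on the controller, not the master
```

Note the delegate_to: localhost on the last task: blockinfile normally runs on the managed node, but the worker's vars file lives on the controller.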
Role 3: Configure the instance as the worker node
The following playbook configures the worker node. It is similar to the master playbook, except that certain tasks are not needed here. Also, the join command will automatically have been written to vars/main.yml.
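The worker role can therefore be sketched as below, under the same assumptions as the master sketch (yum-based distribution, illustrative package names). The join_command variable is the one the master role wrote into this role's vars/main.yml.

```yaml
# K8s_Cluster_Roles/<worker_role>/tasks/main.yml — a sketch
- name: configure the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: no

- name: install docker and kubeadm
  package:
    name: [docker, kubeadm, iproute-tc]
    state: present

- name: start and enable the services
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: join the cluster using the command written by the master role
  command: "{{ join_command }}"
```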
Create the main setup file:
After you are done with the roles, the next step is to execute them. Add the roles path to the Ansible configuration file, then create the main file that includes the three roles. The host for the first role is localhost; this calls the AWS cloud API behind the scenes and launches the instances. The host for the master role is the tag name given to the instance while launching; the dynamic inventory automatically detects its IP. The same goes for the slave.
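The main file described above could look like this sketch. The role names and tags are the illustrative ones assumed earlier; note that ec2.py exposes tagged instances under groups of the form tag_Name_<value>.

```yaml
# setup.yml — the main file tying the three roles together
- hosts: localhost
  roles:
    - launch                     # calls the AWS API and launches the instances

- hosts: tag_Name_k8s_master     # group created by ec2.py from the instance tag
  roles:
    - master

- hosts: tag_Name_k8s_worker
  roles:
    - worker
```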
Run the main playbook:
Run the main playbook using: ansible-playbook <name_of_playbook>
Thank you for reading!