
Configure Apache Web Server, HAPROXY and further Auto-Register Backend Server IPs to HAPROXY using Ansible

Using roles to control web server versions, and solving the challenge of dynamically adding the host IP of each managed node to the haproxy.cfg file, which increases the scalability and high availability of our site…

Akanksha Singh
Published in Geek Culture · 10 min read · May 18, 2021

In this agile world, our websites have to be deployed to production environments. But whenever our web server fleet scales, each new node has to be registered with the load balancer, which in turn distributes traffic and manages the request-response cycle from the client side.

The scenario should be clear now: what are we going to discuss in this blog? Yes, it's the auto-registration of web server IPs with the load balancer. We might also need to rebuild the whole environment if everything collapses in our data center, and Ansible can recreate the entire setup within minutes from its declarative code.

🤔 What is Ansible?

Automation has been part of the IT world for decades, but Ansible is designed primarily for "Configuration Management". The tool came to market with high scalability and the ability to kick-start a business: instead of paying high wages for manual work, you write scripts that keep configuring each node, starting with the load balancers. Go through my previous blog on Ansible, where I discussed a real industry use-case of Ansible. Click Here!!

🤔 Why are we using HAProxy?

HAProxy, which stands for High Availability Proxy, is a popular open source software TCP/HTTP Load Balancer and proxying solution which can be run on Linux, Solaris, and FreeBSD. Its most common use is to improve the performance and reliability of a server environment by distributing the workload across multiple servers (e.g. web, application, database).

HAProxy uses health checks to determine if a backend server is available to process requests. This avoids having to manually remove a server from the backend if it becomes unavailable. The default health check is to try to establish a TCP connection to the server i.e. it checks if the backend server is listening on the configured IP address and port.

If a server fails a health check, and therefore is unable to serve requests, it is automatically disabled in the backend i.e. traffic will not be forwarded to it until it becomes healthy again. If all servers in a backend fail, the service will become unavailable until at least one of those backend servers becomes healthy again.
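To make the health-check behaviour above concrete, here is a minimal sketch of a backend section in haproxy.cfg. The IPs, server names and the fall/rise thresholds are illustrative assumptions, not values from this project:

```
backend web_servers
    balance roundrobin
    # 'check' enables the default TCP health check on each server;
    # 'fall 3' marks a server down after 3 failed checks,
    # 'rise 2' marks it up again after 2 successful ones
    server web1 192.168.1.10:80 check fall 3 rise 2
    server web2 192.168.1.11:80 check fall 3 rise 2
```

With this in place, a server that stops accepting TCP connections is taken out of rotation automatically and re-added once it passes checks again.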

🤔 What does the Apache HTTP Server do?

The basic job of all web servers is to accept requests from clients (e.g. a visitor’s web browser) and then send the response to that request (e.g. the components of the page that a visitor wants to see). Apache is the most popular web server.

Apache functions as a way to communicate over networks from client to server using the TCP/IP protocol. Apache can be used for a wide variety of protocols, but the most common is HTTP/S. HTTP/S or Hyper Text Transfer Protocol (S stands for Secure) is one of the main protocols on the web, and the one protocol Apache is most known for.

Apache Web Application Architecture:

Apache is just one component that is needed in a web application stack to deliver web content. One of the most common web application stacks involves LAMP, or Linux, Apache, MySQL, and PHP.

Linux is the operating system that handles the operations of the application. Apache is the web server that processes requests and serves web assets and content via HTTP. MySQL is the database that stores all your information in an easily queried format. PHP is the programming language that works with Apache to help create dynamic web content.

Let’s Brief our Plan before Proceeding further:

  1. First, create the ansible.cfg file, in which we mention the inventory path and other defaults for the project.
  2. Create a static inventory holding the metadata (IP, credentials, connection type) for each managed node, which Ansible uses for the SSH connections it makes from the controller node when deploying playbooks.
  3. Create a role for the Apache web server configuration.
  4. Create a role for the HAProxy configuration and auto-registration of the backend web servers.
  5. Create the setup.yml Ansible playbook that deploys those roles to achieve our requirements.

Pre-Requisite:

✔ A controller node: a RHEL 8 Linux machine with Python and Ansible 2.10.8 installed.

✔ Some managed nodes for the web server configuration, and one for the HAProxy configuration.

We create roles with ansible-galaxy to keep our declarative code well organised, proceeding step by step towards completing this use-case so that in the end a single command, running the setup.yml playbook with ansible-playbook, finishes the whole setup in one click.

Step 1: Create Ansible Configuration file:

Ansible, being an agentless automation tool, needs a configuration file and an inventory on the controller node, which I mentioned is our local system. The configuration file can either be the global one (/etc/ansible/ansible.cfg) or a local ansible.cfg created in the workspace where we run our playbooks/roles; a local file takes precedence.

Create Workspace for this Project:

# mkdir haproxy-ansible
# cd haproxy-ansible
# mkdir roles
# vim ansible.cfg

Here we create a local Ansible configuration file that goes as follows. For this you just need a workspace where you keep all your roles, inventories and templates for this controller-node playbook project.

The file has three sections. Under [defaults] we set roles_path (defining the location of our roles), host_key_checking (disabling host key checking during SSH remote login to the managed nodes) and, most importantly, the inventory file path where the addresses of all the managed hosts are listed.

Inside the [inventory] section we enable plugins that extend what we can do with our inventory file: host_list (parse comma-separated host lists), script (use executable inventory scripts), auto (pick the right plugin automatically), yaml (parse YAML-structured inventories) and ini (parse ini-style inventory listings). Many more plugins can be added for better troubleshooting and to make configuration management simpler.

The [privilege_escalation] block grants "become" power for the sudo user, i.e. the privileges the managed node needs to run programs and configure the setup with root authority, which mostly means starting/stopping services and editing critical files. We just list our requirements there: become, become_method, become_user and become_ask_pass.
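Putting the three sections together, a minimal ansible.cfg for this workspace might look like the sketch below. The exact paths and values are assumptions for illustration, not copied from the project:

```ini
[defaults]
# where Ansible looks for roles and the inventory in this workspace
roles_path        = ./roles
inventory         = ./inventory
# skip the interactive SSH host-key prompt on first connection
host_key_checking = False

[inventory]
# inventory parsing plugins discussed above
enable_plugins = host_list, script, auto, yaml, ini

[privilege_escalation]
# run tasks as root via sudo on the managed nodes
become          = True
become_method   = sudo
become_user     = root
become_ask_pass = False
```

Because this file sits in the project directory, it overrides the global /etc/ansible/ansible.cfg whenever playbooks are run from here.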

For more details about the Configuration File Setup Visit Here!!

Step 2 : Creating Static Inventory file:

Ansible works against multiple managed nodes or "hosts" in your infrastructure at the same time, using a list or group of lists known as an inventory. An inventory can be written in INI or YAML format, or pulled dynamically. We mainly have two kinds of inventory:

  1. Static Inventory: here you list the IPs or DNS names of the managed nodes, with passwords or key-based authentication details used to connect to the remote instances and manage their configuration. In this file you can define a default group, host groups or ranges of hosts to work on. Static inventories are convenient for managing small infrastructures.
  2. Dynamic Inventory: executable inventory scripts run on the local controller machine, collect information about your remote systems from an external source, and output the inventory in JSON; the addresses or DNS names they return are loaded in real time and the roles/playbooks then run against those hosts. The script is passed to Ansible through ansible.cfg or the -i option, must be executable, and suits environments where machines are created and destroyed frequently and need to be configured as they appear.

To know more about inventory visit!!

Concluding for our use-case: we have only a few systems that need configuration, they are local machines, and we are not performing any scale-up or scale-down operations. So a static inventory is all we need, as follows:

Here we have two host groups, [web] for the web servers and [lb] for the load balancer. Each record (line) in the file holds one managed node's information: IP address, username, password and connection method (SSH for Linux, WinRM for Windows).
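Since the inventory screenshot is not reproduced here, a sketch of what such a file could look like follows. The IP addresses, user and password are placeholder assumptions, not the project's real values:

```ini
[web]
192.168.1.10 ansible_user=root ansible_ssh_pass=redhat ansible_connection=ssh
192.168.1.11 ansible_user=root ansible_ssh_pass=redhat ansible_connection=ssh

[lb]
192.168.1.20 ansible_user=root ansible_ssh_pass=redhat ansible_connection=ssh
```

Each line is one managed node: the IP, followed by the connection variables Ansible uses for SSH login.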

Step 3: Creating Ansible Roles :

In total we have two roles, "web" and "lbrole", for configuring the web servers and the HAProxy load balancer respectively. As their names suggest, their tasks are:

Web Role will do following things:
◾ Install Apache Web Server (httpd)
◾ Copy the web pages
◾ Start the httpd service

lbrole will do the following things:
◾ Install Haproxy Load Balancer
◾ Setup haproxy.cfg file
◾ Start firewalld service and expose Proxy Port
◾ Lastly Start HAPROXY service

It is always good practice to keep a separate workspace for our roles so that the file hierarchy stays easy to remember. For that we have the "roles" workspace:

# mkdir roles
# cd roles
# ansible-galaxy init web
# ansible-galaxy init lbrole

Now we have the tasks, templates, vars and other Ansible artifacts laid out in the well-known role file structure that ansible-galaxy pre-creates; we just need to write the declarations (in YAML) of everything we need, using modules and Jinja2 attribute annotations.

To know more about roles visit!

Step 4 : Creating Web Server Ansible Role:

In this role we have three things to do: first, install the httpd software; second, send (copy) a file to a particular destination on the managed node; and lastly, start the httpd Apache web server service. It uses the tasks, vars and files features of our role.

# cd roles/web/tasks
# vim main.yml
tasks.yml
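The tasks screenshot does not survive in this text, so here is a minimal sketch of what the web role's tasks file could contain. The variable name web_soft is my assumption; the original role may name things differently:

```yaml
# roles/web/tasks/main.yml (sketch)
- name: Install the Apache web server
  package:
    name: "{{ web_soft }}"        # e.g. httpd, taken from vars/main.yml
    state: present

- name: Copy the web page to the document root
  copy:
    src: index.html               # served from roles/web/files/
    dest: /var/www/html/index.html

- name: Start and enable the httpd service
  service:
    name: httpd
    state: started
    enabled: yes
```

The three tasks map one-to-one onto the three steps listed above: install, copy, start.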

In the task file above we have used the vars and files features of our web role. Following are the vars and files (index.html) respectively:

# cd roles/web/vars
# vim main.yml
var.yml
# cd roles/web/files
index.html
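As with the tasks file, the vars screenshot is missing here, so this is only a sketch under the same assumed variable name; the index.html can be any simple page you want served:

```yaml
# roles/web/vars/main.yml (sketch; the variable name is an assumption)
web_soft: httpd     # package name installed by the web role's package task
```

Keeping the package name in vars/main.yml is what lets the role control web server versions without editing the task file itself.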

Step 5: Creating Load Balancer Ansible Role:

The load balancer role "lbrole" creates a load balancer using the HAProxy software. With it we manage the traffic load from the client side, proxying requests from the frontend to the backend and returning the responses to the clients. The backend servers are registered with our load balancer automatically, and we also gain the ability to scale up and scale down as load increases on a seasonal basis.

# cd roles/lbrole/tasks
# vim main.yml

The tasks included in the HAProxy load balancer role are as follows:

tasks.yml
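The screenshot of this tasks file is also missing, so here is a sketch of the four steps the role performs, using the soft and port_no variables this article defines in the role's vars file. The command-module step mentioned below is not reproduced here, and the task names are my own wording:

```yaml
# roles/lbrole/tasks/main.yml (sketch)
- name: Install the HAProxy load balancer
  package:
    name: "{{ soft }}"            # e.g. haproxy, from vars/main.yml
    state: present

- name: Deploy the templated haproxy.cfg
  template:
    src: haproxy.cfg.j2           # from roles/lbrole/templates/
    dest: /etc/haproxy/haproxy.cfg

- name: Start and enable the firewalld service
  service:
    name: firewalld
    state: started
    enabled: yes

- name: Expose the proxy port through the firewall
  firewalld:
    port: "{{ port_no }}/tcp"
    state: enabled
    permanent: yes
    immediate: yes

- name: Start the HAProxy service
  service:
    name: haproxy
    state: restarted              # restart picks up a re-rendered config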

Various Ansible modules are used here:

package module: installs, upgrades and removes packages using the operating system's native package manager (yum/dnf on RHEL).

template module: deploys Jinja2 template files to the managed node. The value of the src key specifies the source Jinja2 template, and the value of the dest key specifies the file to be created on the destination hosts.

service module: starts, stops and restarts system and program services.

command module: runs shell-style commands verbatim on the managed node; unlike most modules it is not idempotent.

firewalld module: This module allows for addition or deletion of services and ports (either TCP or UDP) in either running or permanent firewalld rules.

Also note that we supply the file as a Jinja2 template (haproxy.cfg.j2) so it can be processed and, using a 'for loop', every web server we have is registered with our load balancer node. We import the IP addresses from the host groups defined in our inventory.

Templating a file is a powerful way to manage a configuration file that is automatically customised for the managed hosts, using variables and facts, when the file is deployed. Ansible uses the Jinja2 templating system for template files, which internally uses delimiters. We deploy the file to the managed nodes with the template module, which transfers it from src (the source, i.e. the Ansible controller node) to dest (the destination). We use a for statement over the variable containing the list of hosts in the [web] group from the inventory, so each host gets listed in the file:

{% for web in groups['web'] %}
{{ web }}
{% endfor %}

# cd roles/lbrole/templates
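In context, that loop sits inside the backend section of the template. The fragment below is a sketch of how haproxy.cfg.j2 might use it; the frontend/backend names and the :80 backend port are assumptions, while port_no and groups['web'] come from this project's vars and inventory:

```
# roles/lbrole/templates/haproxy.cfg.j2 (relevant fragment; a sketch)
frontend main
    bind *:{{ port_no }}
    default_backend app

backend app
    balance roundrobin
{% for web in groups['web'] %}
    server app{{ loop.index }} {{ web }}:80 check
{% endfor %}
```

Every host in the [web] inventory group gets its own server line, so adding a node to the inventory and re-running the playbook auto-registers it with HAProxy.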

The variables file is as follows; it is read by Ansible when the tasks execute. The /lbrole/vars folder contains the variable file named main.yml, with two variables: soft and port_no.

# cd roles/lbrole/vars
# vim main.yml
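The screenshot of this vars file is not reproduced here either; a sketch with plausible placeholder values (the actual values in the project may differ) would be:

```yaml
# roles/lbrole/vars/main.yml (sketch; values are assumptions)
soft: haproxy        # package the role installs
port_no: 8080        # frontend port HAProxy binds to, also opened in firewalld
```

Both variables are referenced by the tasks file and the Jinja2 template, so changing the listening port is a one-line edit here.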

Step 6: Create Setup Playbook to run all the roles:

Finally, write our setup.yml file, which automates all the tasks cumulatively just by calling those roles. We need to create the file inside our main workspace, i.e. haproxy-ansible:

# cd haproxy-ansible/
# vim setup.yml
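A minimal sketch of such a playbook, mapping each role onto its inventory host group (the exact contents of the project's setup.yml may differ), would be:

```yaml
# setup.yml (sketch)
- hosts: web          # configure Apache on every web server node
  roles:
    - web

- hosts: lb           # then configure HAProxy on the load balancer node
  roles:
    - lbrole
```

Running the web play first means that by the time the lbrole template is rendered, the backend servers are already serving content.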

Therefore it would finally run on two host groups i.e. web and lb.

Step 7: Running the Playbook Setup.yml:

Now just run ansible-playbook for the final execution of all the roles from one playbook.

# ansible-playbook setup.yml 

Hence we have achieved our target of “Configure Apache Web Server, HAPROXY and further Auto-Register Backend Server IPs to HAPROXY”. Below is the demo video while running the Ansible Playbook:

You can find this project on my GitHub; just fork it and let's make the project more system-independent and reliable for customers.

To contribute to the project, or for further queries or opinions, you can connect with me on LinkedIn:

Thanks for reading. Hope this blog has given you some valuable inputs!!

(: — )
