Wednesday 13 April 2016

How to Configure High Availability (Heartbeat Cluster) on Red Hat 5.x/6.x with the Apache Web Server

Requirements :-

  • 2 Linux nodes, RHEL 5.11
  • Node-1: 192.168.0.1
  • Node-2: 192.168.0.2
  • EPEL repository enabled on both nodes
  • LAN & Internet connection
  • A yum server setup
  • Virtual IP address (VIP): 192.168.0.200


Step 1. Set the fully qualified hostname on each node and add the corresponding entries to the files below:
/etc/hosts and
/etc/sysconfig/network

node-1 :- 192.168.0.1 - node1.ha.com
node-2 :- 192.168.0.2 - node2.ha.com

[root@node1 ~]# hostname
node1.ha.com
[root@node1 ~]# uname -n
node1.ha.com

[root@node2 ~]# hostname
node2.ha.com
[root@node2 ~]# uname -n
node2.ha.com

Edit /etc/hosts file on node1 as below

192.168.0.1 node1.ha.com  node1
192.168.0.2 node2.ha.com  node2


Edit /etc/hosts file on node2 as below

192.168.0.1 node1.ha.com node1
192.168.0.2 node2.ha.com node2
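
The persistent hostname on RHEL 5/6 lives in /etc/sysconfig/network. A minimal sketch for node1 (node2 is analogous, with node2.ha.com):

[root@node1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node1.ha.com

[root@node1 ~]# hostname node1.ha.com    # apply it to the running system without a reboot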


Configuration:

Step 2. Verify that the following packages are installed; if they are not yet, install them on both nodes:

# yum install glibc* gcc* lib* flex* net-snmp* OpenIPMI* python-devel perl* openhpi*

Step 3. Save the repo file for the online repository on both nodes. It is available at http://download.fedoraproject.org/pub/epel/5/i386/repoview/epel-release.html

Step 4. Install the EPEL release package on both nodes. The rpm command fetches and installs it directly from the URL, so no separate download is needed:
 [root@node1 ~]# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm

 [root@node2 ~]# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm

# cd /etc/yum.repos.d/
(Note: the EPEL repo file lands here; use the latest EPEL release package available.)
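
To confirm the new repository is active before installing anything from it, you can list the enabled repos (the exact output depends on your mirrors):

[root@node1 ~]# yum repolist enabled | grep -i epel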

Step 5. Then install heartbeat packages on both nodes:
[root@node1 ~]# yum install heartbeat*
[root@node1 ~]# rpm -qa |grep heartbeat
heartbeat-2.1.4-11.el5
heartbeat-pils-2.1.4-11.el5
heartbeat-pils-2.1.4-11.el5
heartbeat-stonith-2.1.4-11.el5
heartbeat-2.1.4-11.el5
heartbeat-stonith-2.1.4-11.el5

[root@node2 ~]# yum install heartbeat*
[root@node2 ~]# rpm -qa |grep heartbeat
heartbeat-2.1.4-11.el5
heartbeat-pils-2.1.4-11.el5
heartbeat-pils-2.1.4-11.el5
heartbeat-stonith-2.1.4-11.el5
heartbeat-2.1.4-11.el5
heartbeat-stonith-2.1.4-11.el5

Step 6. Setting up the configuration files:
        We can do all the configuration on one node and then copy /etc/ha.d to the other node (node2).

[root@node1 ~]#cd /etc/ha.d

[root@node1 ha.d]# ll
-rwxr-xr-x 1 root root   745 Mar 20  2010 harc
drwxr-xr-x 2 root root  4096 Apr 12 07:15 rc.d
-rw-r--r-- 1 root root   692 Mar 20  2010 README.config
drwxr-xr-x 2 root root  4096 Apr 12 07:15 resource.d
-rw-r--r-- 1 root root  7862 Mar 20  2010 shellfuncs

[root@node1 ha.d]# cat README.config

Step 7. There is one more thing to do: copy the following three files into the /etc/ha.d directory.

authkeys : Contains the information Heartbeat uses to authenticate cluster members. It must not be readable or writable by anyone other than root.
ha.cf : The main Heartbeat configuration file.
haresources : Defines the resources the cluster should keep highly available.


Step 8. The details of these configuration files are explained in README.config. Sample copies ship with the package documentation; copy all three into /etc/ha.d/ (note the path matches the installed version, heartbeat-2.1.4):

[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/authkeys /etc/ha.d/
[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/ha.cf /etc/ha.d/
[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/haresources /etc/ha.d/


Step 9. Now let's start configuring Heartbeat. First we deal with the authkeys file, using authentication method 2 (sha1).
        Make the following changes in the authkeys file:

[root@node1 ~]#vi /etc/ha.d/authkeys

    # Then add the following lines:

    auth 2              # sign heartbeats with key number 2
    2 sha1 ha-testing   # key 2: sha1 method, shared secret "ha-testing"

:wq!

[root@node1 ~]#

Change the permission of the authkeys file:

[root@node1 ~]#chmod 600 /etc/ha.d/authkeys
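
The string after sha1 ("ha-testing" above) is a shared secret, and both nodes must carry an identical authkeys file. The placeholder is fine for a lab; for anything real you might generate a random secret instead, for example:

[root@node1 ~]# dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | cut -d' ' -f1
# paste the resulting hash in place of "ha-testing" on the key line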


Step 10. Moving on to our second file, ha.cf, which is the most important one.

Add the following lines to the ha.cf file:

logfile /var/log/ha-log     # where Heartbeat writes its log
logfacility local0          # syslog facility to log under
keepalive 2                 # seconds between heartbeats
deadtime 30                 # seconds of silence before a node is declared dead
initdead 120                # deadtime used at first start-up, allowing for boot delays
bcast eth0                  # send heartbeats as broadcasts on eth0
udpport 694                 # UDP port for heartbeat traffic
auto_failback on            # move resources back to the primary when it returns
node node1.ha.com           # node names must match `uname -n` exactly
node node2.ha.com
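
bcast sends heartbeats as UDP broadcasts on eth0. If broadcasts are filtered on your network, ha.cf also supports unicast; as a sketch, node1 would point at node2's address and vice versa:

ucast eth0 192.168.0.2    # on node1 (on node2, use 192.168.0.1)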

Step 11. haresources :- This file lists the resources we want to make highly available.

[root@node1 ~]# vi /etc/ha.d/haresources

node1.ha.com  192.168.0.200  httpd
:wq!
[root@node1 ~]#
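
Each haresources line reads: preferred node, then the resources to run there. A bare IP address is shorthand for the IPaddr resource script, and httpd is looked up in /etc/ha.d/resource.d/ and then /etc/init.d/. The line above could therefore also be written explicitly (netmask and interface are optional):

node1.ha.com IPaddr::192.168.0.200/24/eth0 httpd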

Step 12. Copy the /etc/ha.d/ directory from node1 to node2:

[root@node1 ~]# scp -r /etc/ha.d/ root@node2.ha.com:/etc/


Step 13. Configure Apache on both nodes (node1 and node2):

[root@node1 ~]#yum install httpd mod_ssl

On Node1:
[root@node1 ~]#vim /var/www/html/index.html
This is test page of node1.ha.com of Heartbeat HA cluster
:wq!
[root@node1 ~]#

On Node2:
[root@node2 ~]# vim /var/www/html/index.html
This is test page of node2.ha.com of Heartbeat HA cluster
:wq!
[root@node2 ~]#

On both nodes (NODE1 & NODE2):
[root@node1 ~]#vim /etc/httpd/conf/httpd.conf
Listen 192.168.0.200:80

[root@node2 ~]# vim /etc/httpd/conf/httpd.conf
Listen 192.168.0.200:80

Note:- You don't have to create an interface for this IP or add an IP alias in network-scripts; Heartbeat takes care of it automatically.

Now start the service on both nodes.
[root@node1 ~]# /etc/init.d/httpd restart    # this will fail with an error for now

Note:- It won't work until Heartbeat is started, so don't worry.
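
Once Heartbeat is running (Step 15) and holding the VIP, you can check the cluster from any machine on the LAN, assuming curl is available:

$ curl http://192.168.0.200/
This is test page of node1.ha.com of Heartbeat HA cluster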


Step 14. Now exchange SSH keys between node1 and node2 so each node can reach the other without a password:
[root@node1 ~]#ssh-keygen -t rsa
[root@node1 ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub node2.ha.com

[root@node2 ~]#ssh-keygen -t rsa
[root@node2 ~]#ssh-copy-id -i ~/.ssh/id_rsa.pub node1.ha.com
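
A quick check that the key exchange worked: each node should reach the other without a password prompt.

[root@node1 ~]# ssh node2.ha.com uname -n
node2.ha.com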

Step 15. Start the Heartbeat service on both nodes and enable it at boot (shown for node1; repeat on node2):
[root@node1 ~]#/etc/init.d/heartbeat start
[root@node1 ~]#chkconfig heartbeat on


[root@node1 ~]#/etc/init.d/heartbeat start
Starting High-Availability services:
2016/04/12_08:07:51 INFO:  Resource is stopped
[  OK  ]
[root@node1 ha.d]# service heartbeat status
heartbeat OK [pid 25356 et al] is running on node1.ha.com [node1.ha.com]...

[root@node1 ha.d]# netstat -antlpu|grep heartbeat
udp        0      0 0.0.0.0:694                 0.0.0.0:*                               25362/heartbeat: wr
udp        0      0 0.0.0.0:34021               0.0.0.0:*                               25362/heartbeat: wr
[root@node1 ha.d]#



[root@node2 ~]#/etc/init.d/heartbeat start
Starting High-Availability services:
2016/04/12_08:07:51 INFO:  Resource is stopped
[  OK  ]

[root@node2 ~]#/etc/init.d/heartbeat status
heartbeat OK [pid 12065 et al] is running on node2.ha.com [node2.ha.com]...
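
If anything looks wrong on either node, the log file configured in ha.cf is the first place to look:

[root@node1 ~]# tail -f /var/log/ha-log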


Step 16. Open a web browser and go to the URL below; you should see node1's test page:

http://192.168.0.200

This is test page of node1.ha.com (NODE 1) of Heartbeat HA cluster


Step 17. Now stop the heartbeat daemon on node1:

[root@node1 ~]#/etc/init.d/heartbeat stop

In your browser, load http://192.168.0.200 again. The VIP and httpd fail over, and you should now see node2's test page:

This is test page of node2.ha.com (NODE 2) of Heartbeat HA cluster

#######
FYI,

Note:- You don't have to create an interface for the VIP or add an IP alias in network-scripts; Heartbeat brings it up automatically as an alias (eth0:0 here), as the ifconfig output below shows.

[root@node1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr E1:C5:6D:62:4A:84
          inet addr:192.168.0.1  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::e0c5:6dff:fe6d:4a86/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
   

eth0:0    Link encap:Ethernet  HWaddr E1:C5:6D:62:4A:84 
          inet addr:192.168.0.200  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:185 Base address:0xc000 

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3362894 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3362894 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0

[root@node1 ~]# /usr/share/heartbeat/hb_takeover    # use this to manually take over the resources on a live cluster
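
Heartbeat also ships the counterpart script, hb_standby, which asks the local node to give up its resources so the peer takes over; handy for testing failover without stopping the daemon:

[root@node1 ~]# /usr/share/heartbeat/hb_standby    # release resources to the other node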