High-Availability Clustering Explained: Multi-Node Clusters with Heartbeat v1
Heartbeat v1's built-in resource manager:
haresources
Heartbeat v2's built-in resource managers:
haresources
crm
Heartbeat v3: the crm resource manager was split out into an independent project, Pacemaker
Prerequisites:
1) This setup uses two test nodes, node1.heyuxuan.com and node2.heyuxuan.com, with IP addresses 192.168.1.10 and 192.168.1.12 respectively;
2) the cluster service is Apache's httpd;
3) the address serving the web service is 192.168.1.100;
4) the OS is RHEL 5.8, with the installation image mounted and Yum configured;
1. Preparation
To configure a Linux host as an HA node, the following preparation is usually required:
1) Hostname resolution must work for all nodes, and each node's hostname must match the output of "uname -n". Therefore /etc/hosts on both nodes must contain the following:
192.168.1.10 node1.heyuxuan.com node1
192.168.1.12 node2.heyuxuan.com node2
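The two entries above can be sanity-checked without relying on DNS. A minimal sketch (the HOSTS variable and the lookup helper are illustrative only, not part of heartbeat):

```shell
# Hypothetical check: resolve each node name against the /etc/hosts-style
# entries above, without touching the real resolver.
HOSTS="192.168.1.10 node1.heyuxuan.com node1
192.168.1.12 node2.heyuxuan.com node2"

lookup() {
    # print the IP whose FQDN or short name matches $1
    echo "$HOSTS" | awk -v n="$1" '$2 == n || $3 == n { print $1 }'
}

lookup node1.heyuxuan.com   # expected: 192.168.1.10
lookup node2                # expected: 192.168.1.12
```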
To keep these hostnames across reboots, also run commands like the following on each node:
【Node1】:
# sed -i 's@\(HOSTNAME=\).*@\1node1.heyuxuan.com@g' /etc/sysconfig/network
# hostname node1.heyuxuan.com
【Node2】:
# sed -i 's@\(HOSTNAME=\).*@\1node2.heyuxuan.com@g' /etc/sysconfig/network
# hostname node2.heyuxuan.com
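The sed substitution can be rehearsed on a scratch file first to see exactly what it does (a sketch; the scratch file stands in for /etc/sysconfig/network):

```shell
# Demonstrate the substitution on a throwaway copy rather than the real file.
TMP=$(mktemp)
echo "HOSTNAME=localhost.localdomain" > "$TMP"

# \(HOSTNAME=\) captures the key; \1 re-emits it in the replacement,
# so only the value after '=' is rewritten. '@' is just an alternative
# delimiter for the s/// command.
sed -i 's@\(HOSTNAME=\).*@\1node1.heyuxuan.com@g' "$TMP"

RESULT=$(cat "$TMP")
echo "$RESULT"    # HOSTNAME=node1.heyuxuan.com
rm -f "$TMP"
```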
2) Set up key-based SSH communication between the two nodes, which can be done with commands like the following:
【Run on Node1】:
# ssh-keygen -t rsa
Or run the following command:
[root@node1 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
cd:70:b7:64:2e:b7:11:e0:71:98:13:17:c6:57:81:7a root@node1
Copy the key to node 2:
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
root@node2's password:
Now try logging into the machine, with "ssh 'root@node2'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
【Run on Node2】:
# ssh-keygen -t rsa
Or run the following command:
[root@node2 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
b3:72:eb:d0:8b:32:a7:e2:e3:b4:07:25:b5:31:22:cf root@node2
Copy the key to node 1:
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
Warning: Permanently added 'node1,192.168.1.10' (RSA) to the list of known hosts.
root@node1's password:
Now try logging into the machine, with "ssh 'root@node1'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
3) Check the state of the iptables firewall; for this lab environment, stop the iptables service and disable its start-on-boot entry
-----Note: never disable the firewall's start-on-boot entry on production servers
【node1】
[root@node1 ~]# service iptables status
Firewall is stopped.
[root@node1 ~]# chkconfig iptables off
[root@node1 ~]# chkconfig --list iptables
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
【node2】
[root@node2 ~]# service iptables status
Firewall is stopped.
[root@node2 ~]# chkconfig iptables off
[root@node2 ~]# chkconfig --list iptables
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
4) Make sure SELinux is disabled
【node1】
[root@node1 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
【node2】
[root@node2 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
5) Download the packages
The EPEL (FedoraProject) repository provides the Heartbeat packages for download
【Packages to download】
【heartbeat】 - Heartbeat subsystem for High-Availability Linux
【heartbeat-devel】 - Heartbeat development package
【heartbeat-gui】 - Provides a gui interface to manage heartbeat clusters -- the graphical management component
【heartbeat-ldirectord】 - Monitor daemon for maintaining high availability resources -- generates ipvs rules and performs back-end real-server health checks for LVS high availability
【heartbeat-pils】 - Provides a general plugin and interface loading library
【heartbeat-stonith】 - Provides an interface to Shoot The Other Node In The Head
6) Install the dependency packages
Install the following dependencies on 【node1】 and 【node2】:
【perl-MailTools-1.77】
[root@node1 home]# rpm -ivh perl-MailTools-1.77-1.el5.noarch.rpm
warning: perl-MailTools-1.77-1.el5.noarch.rpm: Header V3 DSA signature: NOKEY, key ID 217521f6
Preparing... ########################################### [100%]
1:perl-MailTools ########################################### [100%]
【libnet-1.1.6-7】
[root@node1 home]# yum --nogpgcheck localinstall libnet-1.1.6-7.el5.x86_64.rpm
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Setting up Local Package Process
Examining libnet-1.1.6-7.el5.x86_64.rpm: libnet-1.1.6-7.el5.x86_64
Marking libnet-1.1.6-7.el5.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package libnet.x86_64 0:1.1.6-7.el5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================================================================
Installing:
libnet x86_64 1.1.6-7.el5 /libnet-1.1.6-7.el5.x86_64 138 k
Transaction Summary
=================================================================================================================================================================================
Install 1 Package(s)
Upgrade 0 Package(s)
Installing via Yum is recommended throughout: it resolves dependencies automatically and avoids unnecessary installation overhead;
7) Install the Heartbeat packages
Install on both 【node1】 and 【node2】
[root@node1 heartbeat]# ls
heartbeat-2.1.4-11.el5.x86_64.rpm heartbeat-gui-2.1.4-11.el5.x86_64.rpm heartbeat-stonith-2.1.4-11.el5.x86_64.rpm
heartbeat-devel-2.1.4-11.el5.x86_64.rpm heartbeat-pils-2.1.4-11.el5.x86_64.rpm
[root@node1 heartbeat]# yum --nogpgcheck localinstall *.rpm
Total 22 MB/s | 205 kB 00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : libtool-ltdl 1/7
Installing : heartbeat-pils 2/7
Installing : openhpi-libs 3/7
Installing : heartbeat-stonith 4/7
Installing : heartbeat 5/7
Installing : heartbeat-gui 6/7
Installing : heartbeat-devel 7/7
Installed products updated.
Installed:
heartbeat.x86_64 0:2.1.4-11.el5 heartbeat-devel.x86_64 0:2.1.4-11.el5 heartbeat-gui.x86_64 0:2.1.4-11.el5 heartbeat-pils.x86_64 0:2.1.4-11.el5
heartbeat-stonith.x86_64 0:2.1.4-11.el5
Dependency Installed:
libtool-ltdl.x86_64 0:1.5.22-7.el5_4 openhpi-libs.x86_64 0:2.14.0-5.el5
Complete!
8) Install the httpd service
Installation on 【node1】:
[root@node1 ~]# yum install httpd
[root@node1 ~]# echo "node1.heyuxuan.com" > /var/www/html/index.html
//Configure an index.html on each node so you can test whether web access fails over between nodes correctly; in a normal deployment the web content on the nodes should be identical;
[root@node1 ~]# service httpd start
Starting httpd: [ OK ]
//After configuration, start httpd once by hand to verify the page is reachable; once the cluster is configured, httpd must never be set to start at boot;
[root@node1 ~]# curl http://localhost
node1.heyuxuan.com
[root@node1 ~]#
//Verify the configured page is reachable; after confirming it works, disable httpd's start-on-boot entry;
[root@node1 ~]# service httpd stop
Stopping httpd: [ OK ]
[root@node1 ~]# chkconfig httpd off
//Stop the httpd service and disable its start-on-boot entry
【node2】: start and test httpd the same way as on 【node1】;
At this point, preparation on both Heartbeat cluster nodes is complete.
2. Configuring the Heartbeat Cluster
【2.1】To start a heartbeat cluster successfully, the following three configuration files must exist:
1. authkeys - the authentication key file; permissions must be 600
2. ha.cf - the heartbeat service's main configuration file
3. haresources - the resource management configuration file
These files are not present in /etc/ha.d initially; sample copies of ha.cf, haresources and authkeys can be found under /usr/share/doc/heartbeat-2.1.4 and simply copied into /etc/ha.d
Copy the three samples into place for editing:
[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/ha.cf /etc/ha.d/
[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/haresources /etc/ha.d/
[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/authkeys /etc/ha.d/
【2.2】Configuring the key file
1. Restrict the authkeys file's permissions to 600
[root@node1 ha.d]# chmod 600 authkeys
[root@node1 ha.d]# ll authkeys
-rw------- 1 root root 645 Mar 27 23:11 authkeys
[root@node1 ha.d]#
2. Edit the authkeys file;
[root@node1 ha.d]# vim authkeys
#
# Authentication file. Must be mode 600
#
#
# Must have exactly one auth directive at the front.
# auth send authentication using this method-id
#
# Then, list the method and key that go with that method-id
#
# Available methods: crc sha1, md5. Crc doesn't need/want a key.
#
# You normally only have one authentication method-id listed in this file
#
# Put more than one to make a smooth transition when changing auth
# methods and/or keys.
#
#
# sha1 is believed to be the "best", md5 next best.
#
# crc adds no security, except from packet corruption.
# Use only on physically secure networks.
#
#auth 1
#1 crc //data between the primary and standby nodes is verified with the CRC algorithm
#2 sha1 HI!
#3 md5 Hello!
The authkeys file sets heartbeat's authentication method. Three methods are available: crc, md5 and sha1, in increasing order of security but also of system-resource cost.
If the heartbeat cluster runs on a physically secure network, the crc method can be used;
if the HA nodes have ample hardware, sha1 is recommended as the most secure option;
md5 is a compromise between network security and system resources.
This setup uses the MD5 algorithm; obtain a random MD5 value for the key as follows:
[root@node1 ~]# dd if=/dev/random count=1 bs=512 | md5sum
0+1 records in
0+1 records out
128 bytes (128 B) copied, 0.000191 seconds, 670 kB/s
31be51c63e144792110658e7a7650f75 -
Append the generated key to the end of the authkeys file:
auth 1
1 md5 31be51c63e144792110658e7a7650f75
//Save and exit; the authkeys key file is now configured!
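The whole key-generation step can also be scripted in one go. A hedged sketch (it writes to a scratch path, /tmp/authkeys.demo, rather than /etc/ha.d/authkeys, and reads /dev/urandom so it never blocks waiting for entropy):

```shell
# Generate a random 32-hex-digit key and write a complete authkeys file.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')

cat > /tmp/authkeys.demo <<EOF
auth 1
1 md5 $KEY
EOF
chmod 600 /tmp/authkeys.demo    # heartbeat refuses to start unless the mode is 600

cat /tmp/authkeys.demo
```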
【2.3】The main configuration file ha.cf
Only two settings are strictly required in ha.cf to run a heartbeat cluster:
1. the heartbeat medium (multicast or broadcast)
2. the node definitions
For the remaining directives, refer to the separate annotated ha.cf configuration reference
【2.4】The haresources configuration file
The haresources file specifies the primary node, the cluster IP, netmask and broadcast address, and the services to start, i.e. the cluster resources. Each line may name one or more resource scripts, with resources separated by spaces and parameters separated by double colons; the file must be completely identical on both HA nodes. Its general format is:
node1(node-name) 192.168.1.100(network) httpd zabbix_server(resource-group)
node-name is the hostname of the primary node and must match the node name given in ha.cf;
network sets the cluster IP address, netmask, network interface and so on; note that this IP address is the address the cluster serves to the outside;
resource-group lists the services heartbeat manages, i.e. the services heartbeat starts and stops. Each managed service must be a script that can be started and stopped via start/stop arguments, placed under /etc/init.d/ or /etc/ha.d/resource.d/; heartbeat looks the script up by name in those directories to perform start and stop operations.
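Anything listed in resource-group therefore just needs to honor start/stop (and ideally status). A minimal, hypothetical skeleton of such a script (myservice is a made-up name; written as a function here so it can be exercised inline, whereas a real one would be a file in /etc/init.d/ or /etc/ha.d/resource.d/ taking the action as its first argument):

```shell
# Skeleton of an LSB-style resource script. A real script would start and
# stop an actual daemon instead of setting a flag variable.
myservice() {
    case "$1" in
        start)  echo "starting myservice"; MYSERVICE_RUNNING=1 ;;
        stop)   echo "stopping myservice"; MYSERVICE_RUNNING=0 ;;
        status) [ "${MYSERVICE_RUNNING:-0}" -eq 1 ] && echo "running" || echo "stopped" ;;
        *)      echo "Usage: myservice {start|stop|status}"; return 1 ;;
    esac
}

myservice start    # heartbeat invokes 'start' on the node taking over the resource
myservice status
```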
Append the following at the end of the haresources file:
# Regarding the node-names in this file:
#
# They must match the names of the nodes listed in ha.cf, which in turn
# must match the `uname -n` of some node in the cluster. So they aren't
# virtual in any sense of the word.
#
node1 IPaddr::192.168.1.100/24/eth0 httpd
//Save and exit; the haresources file is now configured!
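The double-colon convention in the line above can be illustrated by splitting the IPaddr spec with plain shell parameter expansion (a sketch of how the fields decompose, not heartbeat's actual parser):

```shell
SPEC="IPaddr::192.168.1.100/24/eth0"

AGENT=${SPEC%%::*}    # text before the first '::' -> resource script name
PARAMS=${SPEC#*::}    # everything after it        -> argument for the script

# IPaddr's argument itself packs ip/prefix/interface with '/'
IP=${PARAMS%%/*}
echo "agent=$AGENT params=$PARAMS ip=$IP"
```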
【2.5】Copy the three configuration files authkeys, haresources and ha.cf to the remote node2:
[root@node1 ha.d]# scp -p authkeys haresources ha.cf node2:/etc/ha.d/
authkeys 100% 691 0.7KB/s 00:00
haresources 100% 5947 5.8KB/s 00:00
ha.cf 100% 10KB 10.4KB/s 00:00
//Start heartbeat on node 1:
[root@node1 ha.d]# service heartbeat start
Starting High-Availability services:
2016/03/28_00:42:00 INFO: Resource is stopped
[ OK ]
//Start heartbeat on node 2:
[root@node1 ha.d]# ssh node2 'service heartbeat start'
Starting High-Availability services:
2016/03/28_00:43:55 INFO: Resource is stopped
[ OK ]
At this point, the two-node Heartbeat cluster is up.
3. Verifying the Setup and Failover Between Nodes
【3.1】Verify the cluster is serving
[root@node1 ~]# curl 0
node1.heyuxuan.com
//This output shows the httpd service is active on node1;
//Check the netstat output to confirm port 80 is listening on node1:
[root@node1 ha.d]# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2208 0.0.0.0:* LISTEN 3171/./hpiod
tcp 0 0 0.0.0.0:994 0.0.0.0:* LISTEN 2932/rpc.statd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 2896/portmap
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3189/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 3198/cupsd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 3229/sendmail
tcp 0 0 127.0.0.1:2207 0.0.0.0:* LISTEN 3176/python
tcp 0 0 :::5989 :::* LISTEN 3364/cimservermain
tcp 0 0 :::80 :::* LISTEN 5558/httpd
tcp 0 0 :::22 :::* LISTEN 3189/sshd
udp 0 0 0.0.0.0:37922 0.0.0.0:* 3350/avahi-daemon
udp 0 0 0.0.0.0:694 0.0.0.0:* 5160/heartbeat: wri
udp 0 0 0.0.0.0:988 0.0.0.0:* 2932/rpc.statd
udp 0 0 0.0.0.0:991 0.0.0.0:* 2932/rpc.statd
udp 0 0 0.0.0.0:5353 0.0.0.0:* 3350/avahi-daemon
udp 0 0 0.0.0.0:111 0.0.0.0:* 2896/portmap
udp 0 0 0.0.0.0:631 0.0.0.0:* 3198/cupsd
udp 0 0 0.0.0.0:51960 0.0.0.0:* 5160/heartbeat: wri
udp 0 0 :::53950 :::* 3350/avahi-daemon
udp 0 0 :::5353 :::* 3350/avahi-daemon
Both httpd's port 80 and heartbeat's port 694 are listening normally;
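A check like this can be scripted rather than eyeballed. A hedged sketch (check_port is a made-up helper; it is fed a canned netstat line here so the example is self-contained):

```shell
# Succeed if any LISTEN socket in the input has a local address ending
# in :<port> (column 4 of `netstat -tnlp` output).
check_port() {
    awk -v p=":$1" '$4 ~ p"$" && $6 == "LISTEN" { found = 1 } END { exit !found }'
}

SAMPLE="tcp 0 0 :::80 :::* LISTEN 5558/httpd"

# On a live node this would be: netstat -tnlp | check_port 80
if echo "$SAMPLE" | check_port 80; then HTTPD_UP=yes; else HTTPD_UP=no; fi
echo "$HTTPD_UP"
```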
【3.2】Failover between primary and standby
1. Heartbeat provides an hb_standby script for switching between nodes manually;
[root@node1 heartbeat]# pwd
/usr/lib64/heartbeat
[root@node1 heartbeat]# ll hb_standby
lrwxrwxrwx 1 root root 31 Mar 27 22:51 hb_standby -> /usr/share/heartbeat/hb_standby
//Manually switch this node to standby
[root@node1 heartbeat]# ./hb_standby
2016/03/28_00:55:54 Going standby [all].
[root@node1 heartbeat]#
2. Verify the switchover succeeded:
[root@node2 heartbeat]# curl 0
node2.heyuxuan.com
//This output shows the httpd service is now active on node2;
//Check the netstat output to confirm port 80 is listening on node2:
[root@node2 heartbeat]# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:2208 0.0.0.0:* LISTEN 3168/./hpiod
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 2893/portmap
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3186/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 3195/cupsd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 3226/sendmail
tcp 0 0 127.0.0.1:2207 0.0.0.0:* LISTEN 3173/python
tcp 0 0 0.0.0.0:991 0.0.0.0:* LISTEN 2929/rpc.statd
tcp 0 0 :::5989 :::* LISTEN 3361/cimservermain
tcp 0 0 :::80 :::* LISTEN 5444/httpd
tcp 0 0 :::22 :::* LISTEN 3186/sshd
udp 0 0 0.0.0.0:38279 0.0.0.0:* 5131/heartbeat: wri
udp 0 0 0.0.0.0:54320 0.0.0.0:* 3347/avahi-daemon
udp 0 0 0.0.0.0:694 0.0.0.0:* 5131/heartbeat: wri
udp 0 0 0.0.0.0:985 0.0.0.0:* 2929/rpc.statd
udp 0 0 0.0.0.0:988 0.0.0.0:* 2929/rpc.statd
udp 0 0 0.0.0.0:5353 0.0.0.0:* 3347/avahi-daemon
udp 0 0 0.0.0.0:111 0.0.0.0:* 2893/portmap
udp 0 0 0.0.0.0:631 0.0.0.0:* 3195/cupsd
udp 0 0 :::44603 :::* 3347/avahi-daemon
udp 0 0 :::5353 :::* 3347/avahi-daemon
//Switchover complete: node2 is now the primary and node1 the standby; heartbeat cluster node failover works.
4. Adding Shared Storage
Prerequisites:
1) This setup adds one test node with IP address 192.168.1.21 (hostname node4);
2) it provides the NFS shared-storage service for the cluster;
3) the OS is RHEL 5.8;
【4.1】Configure the shared-storage service
1. Configure the NFS service on node4
[root@node4 ~]# mkdir -pv /web/htdocs
mkdir: created directory `/web'
mkdir: created directory `/web/htdocs'
2. Edit /etc/exports to define the shared path and allowed network
[root@node4 ~]# cat /etc/exports
/web/htdocs 192.168.1.0/24(ro)
##Restart the NFS and portmap services
3. Confirm that both 【node1】 and 【node2】 can reach the simulated NFS shared storage
[root@node1 heartbeat]# showmount -e 192.168.1.21
Export list for 192.168.1.21:
/web/htdocs 192.168.1.0/24
##The same operation on node2
##For easy testing, create an index.html under /web/htdocs stating that this is the NFS server
[root@node4 htdocs]# cat index.html
This is NFS Server!
4. Stop the heartbeat service on 【node1】 and 【node2】
###From node1, first stop node2's heartbeat service remotely over SSH, then stop node1's own heartbeat service
[root@node1 heartbeat]# ssh node2 'service heartbeat stop'
Stopping High-Availability services:
[ OK ]
//Now stop node1's own heartbeat service:
[root@node1 heartbeat]# service heartbeat stop
Stopping High-Availability services:
[ OK ]
###Both nodes' heartbeat services are now stopped;
###Before editing haresources, mount /web/htdocs by hand once to confirm that node1 and node2 can both mount the NFS export;
[root@node1 heartbeat]# mount 192.168.1.21:/web/htdocs /mnt
[root@node1 heartbeat]# ls /mnt
index.html
###Then unmount the NFS share; be sure not to leave it mounted by hand;
[root@node1 heartbeat]# umount /mnt
【4.2】Update the haresources configuration
Edit the file so that the resource line at the end reads as follows:
node1 IPaddr::192.168.1.100/24/eth0 Filesystem::192.168.1.21:/web/htdocs::/var/www/html::nfs httpd
###Save and exit;
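The Filesystem spec packs three arguments (device, mount point, filesystem type) with the same '::' separator; note the single ':' inside the NFS device stays intact. How the fields decompose, sketched with shell parameter expansion (illustrative only, not heartbeat's parser):

```shell
SPEC="Filesystem::192.168.1.21:/web/htdocs::/var/www/html::nfs"

rest=$SPEC
AGENT=${rest%%::*};  rest=${rest#*::}   # resource script name
DEVICE=${rest%%::*}; rest=${rest#*::}   # NFS server:/export (single ':' preserved)
MOUNTPOINT=${rest%%::*}
FSTYPE=${rest#*::}

# In effect the resource script performs roughly:
#   mount -t $FSTYPE $DEVICE $MOUNTPOINT
echo "$AGENT: mount -t $FSTYPE $DEVICE $MOUNTPOINT"
```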
【4.3】Copy the haresources file to node 2 so both configurations stay identical
[root@node1 ha.d]# scp haresources node2:/etc/ha.d/
haresources 100% 6004 5.9KB/s 00:00
##Start heartbeat on both nodes
[root@node1 ha.d]# service heartbeat start
Starting High-Availability services:
2016/03/29_21:51:53 INFO: Resource is stopped
[ OK ]
[root@node1 ha.d]# ssh node2 'service heartbeat start'
Starting High-Availability services:
2016/03/29_21:52:03 INFO: Resource is stopped
[ OK ]
[root@node1 ha.d]#
【4.4】Watch the heartbeat log 【/var/log/heartbeat.log】
###For full log details, see the separate heartbeat shared-storage node log
###Check that the httpd service is up and the NFS export is mounted, using the same verification methods as before!
[root@node4 ~]# curl 0
This is NFS Server!
//Failover testing with the hb_standby script succeeded!