Category: LINUX
2017-04-19 21:41:40
High-Availability Clusters Explained: Multi-Node Clusters with Heartbeat v1
Heartbeat v1 ships with its own resource manager, haresources.
Heartbeat v2 ships with two resource managers: haresources and crm.
Heartbeat v3: the crm resource manager was split out into an independent project, Pacemaker.
Prerequisites:
1) This setup uses two test nodes, node1.heyuxuan.com and node2.heyuxuan.com, with IP addresses 192.168.1.10 and 192.168.1.12 respectively;
2) the cluster service is Apache's httpd service;
3) the address providing the web service is 192.168.1.100;
4) the operating system is RHEL 5.8, with the installation image mounted and Yum configured.
1. Preparation
To configure a Linux host as an HA node, the following preparation is usually required:
1) Hostname resolution must work for all nodes, and each node's hostname must match the output of "uname -n". Therefore, the /etc/hosts file on both nodes must contain the following entries:
192.168.1.10 node1.heyuxuan.com node1
192.168.1.12 node2.heyuxuan.com node2
To keep these hostnames after a reboot, also run commands like the following on each node:
[Node1]:
# sed -i 's@\(HOSTNAME=\).*@\1node1.heyuxuan.com@g' /etc/sysconfig/network
# hostname node1.heyuxuan.com
[Node2]:
# sed -i 's@\(HOSTNAME=\).*@\1node2.heyuxuan.com@g' /etc/sysconfig/network
# hostname node2.heyuxuan.com
2) Set up key-based SSH communication between the two nodes, which can be done with commands like the following:
[Run on Node1]:
# ssh-keygen -t rsa
Or run the following command:
[root@node1 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
cd:70:b7:64:2e:b7:11:e0:71:98:13:17:c6:57:81:7a root@node1
Copy the key to node 2:
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
root@node2's password:
Now try logging into the machine, with "ssh 'root@node2'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[Run on Node2]:
# ssh-keygen -t rsa
Or run the following command:
[root@node2 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
b3:72:eb:d0:8b:32:a7:e2:e3:b4:07:25:b5:31:22:cf root@node2
Copy the key to node 1:
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
Warning: Permanently added 'node1,192.168.1.10' (RSA) to the list of known hosts.
root@node1's password:
Now try logging into the machine, with "ssh 'root@node1'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
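As a quick sanity check (not part of the original steps), key-based login can be verified in both directions; date is just an arbitrary remote command here:
// both commands should print the remote node's time without asking for a password
[root@node1 ~]# ssh node2 'date'
[root@node2 ~]# ssh node1 'date'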
3) Check the iptables firewall state. For this lab environment, stop the iptables service and disable it at boot ----- Note: on production servers, never disable the firewall's automatic startup.
[node1]
[root@node1 ~]# service iptables status
Firewall is stopped.
[root@node1 ~]# chkconfig iptables off
[root@node1 ~]# chkconfig --list iptables
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[node2]
[root@node2 ~]# service iptables status
Firewall is stopped.
[root@node2 ~]# chkconfig iptables off
[root@node2 ~]# chkconfig --list iptables
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
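For environments where the firewall must stay enabled, a sketch of the alternative (not part of the original lab steps) is to open only the ports the cluster needs: 694/udp is heartbeat's default communication port and 80/tcp is httpd:
// allow heartbeat and web traffic, then persist the rules
# iptables -A INPUT -p udp --dport 694 -j ACCEPT
# iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# service iptables save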
4) Make sure SELinux is disabled
[node1]
[root@node1 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
[node2]
[root@node2 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
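If SELinux is currently enforcing, a change in /etc/selinux/config only takes effect after a reboot; as a small sketch (not from the original text), it can also be switched off for the current session and the config file edited in one go:
// switch to permissive mode now and disable SELinux permanently in the config file
# setenforce 0
# getenforce
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config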
5) Download the installation packages
The EPEL/zh-cn - FedoraProject site provides the Heartbeat packages for download.
Packages to download:
[heartbeat] - Heartbeat subsystem for High-Availability Linux
[heartbeat-devel] - Heartbeat development package
[heartbeat-gui] - Provides a gui interface to manage heartbeat clusters -- the graphical management component
[heartbeat-ldirectord] - Monitor daemon for maintaining high availability resources -- generates ipvs rules automatically and performs health checks on back-end real servers for ipvs high availability
[heartbeat-pils] - Provides a general plugin and interface loading library
[heartbeat-stonith] - Provides an interface to Shoot The Other Node In The Head
6) Download and install the dependency packages
Install the following dependency packages on both [node1] and [node2]:
[perl-MailTools-1.77]
[root@node1 home]# rpm -ivh perl-MailTools-1.77-1.el5.noarch.rpm
warning: perl-MailTools-1.77-1.el5.noarch.rpm: Header V3 DSA signature: NOKEY, key ID 217521f6
Preparing...                ########################################### [100%]
   1:perl-MailTools         ########################################### [100%]
[libnet-1.1.6-7]
[root@node1 home]# yum --nogpgcheck localinstall libnet-1.1.6-7.el5.x86_64.rpm
Loaded plugins: katello, product-id, security, subscription-manager
Updating certificate-based repositories.
Unable to read consumer identity
Setting up Local Package Process
Examining libnet-1.1.6-7.el5.x86_64.rpm: libnet-1.1.6-7.el5.x86_64
Marking libnet-1.1.6-7.el5.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package libnet.x86_64 0:1.1.6-7.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
 Package    Arch      Version        Repository                          Size
================================================================================
Installing:
 libnet     x86_64    1.1.6-7.el5    /libnet-1.1.6-7.el5.x86_64          138 k

Transaction Summary
================================================================================
Install       1 Package(s)
Upgrade       0 Package(s)
Wherever possible, install via Yum: it resolves dependencies automatically and avoids unnecessary installation overhead.
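Prerequisite 4) assumes the installation image is mounted and Yum is configured; a minimal sketch of such a local repository definition is shown below (the repo id and the mount point /media/cdrom are assumptions, adjust to your environment):
[root@node1 ~]# cat /etc/yum.repos.d/local.repo
[local-media]
name=RHEL 5.8 installation media
# assumed mount point of the installation ISO
baseurl=file:///media/cdrom/Server
enabled=1
gpgcheck=0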
7) Install the Heartbeat packages
Install on both [node1] and [node2]:
[root@node1 heartbeat]# ls
heartbeat-2.1.4-11.el5.x86_64.rpm        heartbeat-gui-2.1.4-11.el5.x86_64.rpm    heartbeat-stonith-2.1.4-11.el5.x86_64.rpm
heartbeat-devel-2.1.4-11.el5.x86_64.rpm  heartbeat-pils-2.1.4-11.el5.x86_64.rpm
[root@node1 heartbeat]# yum --nogpgcheck localinstall *.rpm
Total                                                   22 MB/s | 205 kB  00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : libtool-ltdl                                             1/7
  Installing     : heartbeat-pils                                           2/7
  Installing     : openhpi-libs                                             3/7
  Installing     : heartbeat-stonith                                        4/7
  Installing     : heartbeat                                                5/7
  Installing     : heartbeat-gui                                            6/7
  Installing     : heartbeat-devel                                          7/7
Installed products updated.
Installed:
  heartbeat.x86_64 0:2.1.4-11.el5            heartbeat-devel.x86_64 0:2.1.4-11.el5
  heartbeat-gui.x86_64 0:2.1.4-11.el5        heartbeat-pils.x86_64 0:2.1.4-11.el5
  heartbeat-stonith.x86_64 0:2.1.4-11.el5

Dependency Installed:
  libtool-ltdl.x86_64 0:1.5.22-7.el5_4       openhpi-libs.x86_64 0:2.14.0-5.el5

Complete!
8) Install the httpd service
Installation on [node1]:
1. [root@node1 ~]# yum install httpd
2. [root@node1 ~]# echo "
// Configure an index.html page on this host, used later to test whether web access switches between nodes correctly. Normally the web content configured on the nodes should be identical (one possible way to create such a test page is sketched after this step list).
3. [root@node1 ~]# service httpd start
Starting httpd:                                            [  OK  ]
// After configuring the page, start httpd manually once to verify that it can be accessed. Once the cluster configuration is finished, httpd must NOT be set to start automatically at boot.
4. [root@node1 ~]# curl
// Verify that the configured page can be accessed; once confirmed, stop httpd and disable its automatic startup.
5. [root@node1 ~]# service httpd stop
Stopping httpd:                                            [  OK  ]
[root@node1 ~]# chkconfig httpd off
// Stop the httpd service and disable its automatic startup at boot.
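A minimal sketch of the test page created in step 2 above; the original command is truncated, so the page content here is only an example, and /var/www/html is assumed as the default Apache DocumentRoot on RHEL 5:
[root@node1 ~]# echo "heartbeat cluster test page" > /var/www/html/index.html
[root@node1 ~]# curl http://localhost
heartbeat cluster test page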
Starting and testing the httpd service on [node2] is the same as on [node1].
At this point, the preparation and installation on both Heartbeat cluster nodes is complete!
2. Configuring the heartbeat cluster
[2.1] To start a heartbeat cluster successfully, the following three configuration files are required:
1. authkeys - the authentication key file, which must have mode 600
2. ha.cf - the core heartbeat configuration file
3. haresources - the resource management configuration file
These three files do not exist under /etc/ha.d yet; sample copies of ha.cf, haresources and authkeys can be found under /usr/share/doc/heartbeat-2.1.4, and copying them into /etc/ha.d is all that is needed.
Copy the three sample files into place:
[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/ha.cf /etc/ha.d/
[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/haresources /etc/ha.d/
[root@node1 ~]# cp /usr/share/doc/heartbeat-2.1.4/authkeys /etc/ha.d/
[2.2] Configuring the authentication key file
1. Restrict the permissions of the authkeys file to mode 600:
[root@node1 ha.d]# chmod 600 authkeys
[root@node1 ha.d]# ll authkeys
-rw------- 1 root root 645 Mar 27 23:11 authkeys
[root@node1 ha.d]#
2. Edit the authkeys file:
[root@node1 ha.d]# vim authkeys
#
#       Authentication file.  Must be mode 600
#
#       Must have exactly one auth directive at the front.
#       auth    send authentication using this method-id
#
#       Then, list the method and key that go with that method-id
#
#       Available methods: crc sha1, md5.  Crc doesn't need/want a key.
#
#       You normally only have one authentication method-id listed in this file
#
#       Put more than one to make a smooth transition when changing auth
#       methods and/or keys.
#
#       sha1 is believed to be the "best", md5 next best.
#       crc adds no security, except from packet corruption.
#               Use only on physically secure networks.
#
#auth 1
#1 crc
#2 sha1 HI!
#3 md5 Hello!
// The commented examples above show the three methods; the crc entry, for instance, would use the crc algorithm for data verification between the primary and standby nodes.
The authkeys file defines heartbeat's authentication method. Three methods are available: crc, md5 and sha1. Their security increases in that order, and so does the amount of system resources they consume.
If the heartbeat cluster runs on a secure network, crc can be used.
If the nodes have ample hardware resources, sha1 is recommended as the most secure option; md5 is a reasonable compromise between network security and resource usage.
This setup uses the MD5 algorithm. A random MD5 value for the key can be generated as follows:
[root@node1 ~]# dd if=/dev/random count=1 bs=512 | md5sum
0+1 records in
0+1 records out
128 bytes (128 B) copied, 0.000191 seconds, 670 kB/s
31be51c63e144792110658e7a7650f75  -
Append the resulting MD5 value to the end of the authkeys file:
auth 1
1 md5 31be51c63e144792110658e7a7650f75
// Save and exit; the authkeys key file is now configured.
[2.3] Configuring the core ha.cf file
Only a few settings in the core configuration file need to be changed to get the heartbeat cluster running:
1. the multicast or broadcast communication method
2. the node information
For detailed configuration information, refer to the ha.cf configuration file and its comments; a minimal sketch of the relevant directives follows.
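The sketch below shows one possible minimal ha.cf, assuming broadcast heartbeats over eth0; the timer values are illustrative defaults and are not taken from the original article:
# a minimal /etc/ha.d/ha.cf sketch (timer values are illustrative)
logfacility local0
# heartbeat interval and failure-detection timers, in seconds
keepalive 2
deadtime 30
warntime 10
initdead 60
# communication: broadcast on eth0 over the default UDP port 694
udpport 694
bcast eth0
# move resources back to the primary node once it recovers
auto_failback on
# node names must match the output of `uname -n`
node node1.heyuxuan.com
node node2.heyuxuan.com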
[2.4] The haresources configuration file
The haresources file defines the cluster's primary node, cluster IP, netmask, broadcast address and the services to start, i.e. the cluster resources. Each line may contain one or more resource script names; resources are separated by spaces and a resource's parameters are separated by double colons. The file must be identical on both HA nodes. Its general format is:
node1(node-name) 192.168.1.100(network) httpd zabbix_server(resource-group)
node-name is the hostname of the primary node and must match the node name given in ha.cf;
network defines the cluster IP address, netmask, network interface and so on; note that this IP address is the address the cluster uses to serve clients;
resource-group lists the services managed by heartbeat, i.e. the services heartbeat starts and stops. Each such service must be a script that accepts start/stop arguments and must live in /etc/init.d/ or /etc/ha.d/resource.d/; heartbeat looks the script up by name in those two directories when starting or stopping it. A short illustration of the parameter syntax follows.
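As an illustration of the double-colon parameter syntax (this line is hypothetical and not part of this setup; IPaddr and Filesystem are stock scripts shipped in /etc/ha.d/resource.d/, and the device and mount point are assumptions):
node1.heyuxuan.com IPaddr::192.168.1.100/24/eth0 Filesystem::/dev/sdb1::/var/www/html::ext3 httpd
// IPaddr takes address/prefix/interface; Filesystem takes device::mountpoint::fstype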
Append the following at the end of the haresources configuration file:
# Regarding the node-names in this file: #
# They must match the names of the nodes listed in ha.cf, which in turn
# must match the `uname -n` of some node in the cluster. So they aren't
# virtual in any sense of the word.
#
node1 IPaddr::192.168.1.100/24/eth0 httpd
// Save and exit; the haresources file is now configured.
[2.5] Copy the three configuration files authkeys, haresources and ha.cf to the remote node2:
[root@node1 ha.d]# scp -p authkeys haresources ha.cf node2:/etc/ha.d/
authkeys                               100%  691     0.7KB/s   00:00
haresources                            100% 5947     5.8KB/s   00:00
ha.cf                                  100%   10KB  10.4KB/s   00:00
// Start heartbeat on node 1:
[root@node1 ha.d]# service heartbeat start
Starting High-Availability services:
2016/03/28_00:42:00 INFO:  Resource is stopped              [  OK  ]
// Start heartbeat on node 2:
[root@node1 ha.d]# ssh node2 'service heartbeat start'
Starting High-Availability services:
2016/03/28_00:43:55 INFO:  Resource is stopped              [  OK  ]
At this point, the two-node Heartbeat cluster is up and running!
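For a quick health check, heartbeat also ships the cl_status utility (optional, not used in the original steps):
// is the local heartbeat daemon running?
[root@node1 ~]# cl_status hbstatus
// list the nodes known to the cluster and query the status of a given node
[root@node1 ~]# cl_status listnodes
[root@node1 ~]# cl_status nodestatus node2.heyuxuan.com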
3. Verifying the setup and testing node failover
[3.1] Verify that the cluster is active
[root@node1 ~]# curl
// The page returned above shows that the httpd service is active on node1.
// Check the netstat output to confirm that port 80 is listening on node1:
[root@node1 ha.d]# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 :::80                   :::*                    LISTEN      5558/httpd
tcp        0      0 :::22                   :::*                    LISTEN      3189/sshd
udp        0      0 0.0.0.0:694             0.0.0.0:*                           5160/heartbeat: wri
... (remaining listeners omitted)
Both httpd on port 80 and heartbeat on UDP port 694 are listening as expected on node1.
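Optionally (not shown in the original output), it can also be confirmed that the cluster IP is bound on the active node; with the v1-style IPaddr resource it is normally added as an interface alias:
[root@node1 ~]# ifconfig | grep 192.168.1.100
// the virtual IP 192.168.1.100 should appear on node1 while it holds the resources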
[3.2] Failover between the primary and standby nodes
1. The Heartbeat cluster provides an hb_standby script for manually switching between nodes:
[root@node1 heartbeat]# pwd
/usr/lib64/heartbeat
[root@node1 heartbeat]# ll hb_standby
lrwxrwxrwx 1 root root 31 Mar 27 22:51 hb_standby -> /usr/share/heartbeat/hb_standby
// Manually switch to the standby node:
[root@node1 heartbeat]# ./hb_standby
2016/03/28_00:55:54 Going standby [all].
[root@node1 heartbeat]#
2. Verify that the switchover succeeded:
[root@node2 heartbeat]# curl
// The page returned above shows that the httpd service is now active on node2.
// Check the netstat output to confirm that port 80 is now listening on node2:
[root@node2 heartbeat]# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 :::80                   :::*                    LISTEN      5444/httpd
tcp        0      0 :::22                   :::*                    LISTEN      3186/sshd
udp        0      0 0.0.0.0:694             0.0.0.0:*                           5131/heartbeat: wri
... (remaining listeners omitted)
// The switchover is complete: node2 is now the primary node and node1 is the standby. Node failover in the heartbeat cluster works as expected.
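As an optional final step (not covered in the original text), resources can be pulled back to node1 with the companion hb_takeover script, or a real outage can be simulated by stopping heartbeat on the currently active node:
// run on node1 to take the resources back
[root@node1 heartbeat]# /usr/share/heartbeat/hb_takeover
// or simulate a node failure on the active node and watch the resources move
[root@node2 ~]# service heartbeat stop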