Category: LINUX
2014-07-11 03:26:42
Installing Corosync and Pacemaker
Corosync can be used to provide high availability. For example, two nodes can be set up in active-passive mode: when one node fails, the other automatically takes over its resources, so the service stays available.
Install from the yum repository published by ClusterLabs, deploying on both nodes:
[root@node2 ~]# yum install pacemaker corosync
After installation completes, edit the configuration file:
[root@node2 corosync]# cp corosync.conf.example corosync.conf
[root@node2 corosync]# cat corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
version: 2
secauth: off --whether message authentication is enabled
threads: 0 --number of threads used for authentication
interface {
ringnumber: 0
bindnetaddr: 172.28.10.0 --network address of the interface used for cluster communication
mcastaddr: 226.94.1.1
mcastport: 5405
}
}
logging {
fileline: off
to_stderr: no
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log --log file path
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
service {
# Load the Pacemaker Cluster Resource Manager --have corosync start pacemaker at startup
name: pacemaker
ver: 0
}
Copy this file to node1 and node2, and create the log directory manually:
[root@node2 corosync]# mkdir -p /var/log/cluster
If the directory is missing, corosync fails to start and the log shows errors like these:
Jul 10 18:08:46 node2 corosync[30029]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Jul 10 18:08:46 node2 corosync[30029]: [MAIN ] parse error in config: parse error in config: .
Jul 10 18:08:46 node2 corosync[30029]: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1397.
Start the service on both nodes:
[root@node1 corosync]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
[root@localhost corosync]#
[root@node2 corosync]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
[root@node2 corosync]# crm_mon
============
Last updated: Thu Jul 10 19:14:41 2014
Stack: openais
Current DC: node2 - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ node2 node1 ]
Both nodes are now shown as online.
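crm_mon with no arguments keeps refreshing the screen; `crm_mon -1` prints the status once and exits, which is handy in scripts. A minimal sketch of checking the Online line non-interactively (the `status` variable below holds a captured copy of the output above; on a live node you would use `status=$(crm_mon -1)` instead):

```shell
# Sketch: verify both nodes appear in the "Online:" line.
# "status" is a captured sample; on a real cluster use: status=$(crm_mon -1)
status='Online: [ node2 node1 ]'
for n in node1 node2; do
  if echo "$status" | grep -q "Online: \[.*${n}.*\]"; then
    echo "${n} is online"
  else
    echo "${n} is OFFLINE"
  fi
done
```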
Now let's test by adding a VIP. Under normal operation node1 serves the VIP; if node1 fails, node2 takes over.
[root@node2 corosync]# crm configure property no-quorum-policy="ignore" --with only two nodes, quorum cannot be maintained after a failure, so ignore quorum loss
[root@node2 corosync]# crm configure property stonith-enabled="false" --no STONITH device is available
Define a VIP resource:
[root@node2 ~]# crm
crm(live)# configure
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 \
> params ip="172.28.10.179"
crm(live)configure# commit --commit the change
crm(live)configure# show --display the resulting configuration
node node2
node node1 \
attributes standby="off"
primitive vip ocf:heartbeat:IPaddr2 \
params ip="172.28.10.179"
property $id="cib-bootstrap-options" \
dc-version="1.0.12-unknown" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore" \
last-lrm-refresh="1405019894"
We have now defined a VIP resource with address 172.28.10.179. This IP must be in the same subnet as the physical IPs of the nodes providing the service.
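The plan above says node1 should normally hold the VIP, but nothing in the configuration expresses that preference yet (and indeed the status below shows the resource started on node2). A location constraint can encode it; this is a sketch in crm shell syntax, where the constraint id vip-prefers-node1 and the score 100 are arbitrary choices of mine:

```
crm(live)configure# location vip-prefers-node1 vip 100: node1
crm(live)configure# commit
```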
============
Last updated: Thu Jul 10 19:21:01 2014
Stack: openais
Current DC: node2 - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ node2 node1 ]
vip (ocf::heartbeat:IPaddr2): Started node2
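To confirm on the node itself, note that IPaddr2 adds the VIP as a secondary address, so it shows up in `ip addr` output rather than as a separate interface alias in `ifconfig`. A sketch of checking for it (the `addrs` variable below holds a sample line, and the interface name eth0 is an assumption; on a live node you would use `addrs=$(ip -4 addr show)`):

```shell
# Sketch: check that the VIP is plumbed on this node.
# "addrs" is sample output; on a real node use: addrs=$(ip -4 addr show)
# The interface name eth0 is an assumption.
addrs='inet 172.28.10.179/24 scope global secondary eth0'
if echo "$addrs" | grep -qw '172.28.10.179'; then
  echo "vip 172.28.10.179 is present"
fi
```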
Checking the status, the VIP is now up. Reboot node2 to simulate a failure:
============
Last updated: Thu Jul 10 19:22:19 2014
Stack: openais
Current DC: node1 - partition WITHOUT quorum
Version: 1.0.12-unknown
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ node1 ]
OFFLINE: [ node2 ]
vip (ocf::heartbeat:IPaddr2): Started node1
The VIP is now running on the other node.
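One thing to watch for: when node2 comes back, pacemaker may move the VIP again, causing a second brief interruption. Setting a default resource stickiness makes resources tend to stay where they are after a failover; a sketch in crm shell syntax (the value 100 is an arbitrary positive score):

```
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# commit
```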