Category: Systems Administration

2013-02-01 08:44:53

To be honest, I had always avoided Zenoss high availability. My reasoning was that in a distributed deployment, as long as the master node's ZODB and MySQL data are backed up properly, Zenoss can be restored quickly after a system failure; for an experienced engineer like me, the system can confidently be brought back within 30 minutes.
It was only when a friend raised the requirement that I took a fresh look at the Zenoss HA configuration, working through data synchronization, service continuity, and whether the distributed architecture stays solid during an HA failover.
After a few days of testing and research, I finally put together a demo environment. Here is the whole process, and the thinking behind the configuration, shared with everyone.

1. Architecture Overview
The figure shows a master and a slave Zenoss server. Heartbeat controls the VIP interface, the MySQL and Zenoss services, and the mounting of the DRBD devices, while DRBD keeps the relevant Zenoss data (MySQL, the Zenoss home directory, and the Zenoss performance directory) synchronized between the two servers. Note that a DRBD device is only a virtual block device, so each server should set aside dedicated partitions to hold the Zenoss data. In the demo environment I added a new disk in VMware and partitioned it as follows:

/dev/sdb1    /var/lib/mysql      MySQL database
/dev/sdb2    /opt/zenoss/        Zenoss home directory
/dev/sdb3    /opt/zenoss/perf    Zenoss RRD database

Note that the sdbX partitions do not need to be formatted at this stage, so create them with fdisk after the OS has been installed. The details are well covered online, so I will not repeat them here.
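For completeness, a minimal fdisk session for this step might look like the sketch below; the exact partition sizes depend on your environment and are not taken from the original post:

# fdisk /dev/sdb
Command (m for help): n    <- create a new primary partition; repeat for sdb1, sdb2, sdb3
Command (m for help): w    <- write the partition table and exit
# partprobe /dev/sdb       <- re-read the partition table without a reboot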

2. Preparation
Using VMware, I created two identically specified CentOS systems (CentOS 5.6, 32-bit). Note: do not install MySQL during the CentOS installation, because we want the MySQL data directory to live on DRBD and be usable from both nodes later. The network configuration is as follows:
zenoss master
hostname: zenossha1
eth0 ip: 119.10.119.5
eth1 ip: 192.168.100.5
zenoss slave
hostname: zenossha2
eth0 ip: 119.10.119.6
eth1 ip: 192.168.100.6

The VIP is planned as 119.10.119.8.

Once the systems are installed, first set up the hosts entries on both servers for the internal communication that Heartbeat and DRBD will rely on later.

# vi /etc/hosts

192.168.100.5 zenossha1
192.168.100.6 zenossha2
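As a quick sanity check (my own addition, not an original step), confirm that each node can reach the other by name over the replication link:

# ping -c 2 zenossha2    # run on zenossha1; ping zenossha1 from zenossha2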
3. Installing and Configuring DRBD
With the preparation complete, the next step is to install DRBD on both servers:

# yum -y install drbd82 kmod-drbd82
After installation, create the DRBD configuration file on both servers:

# vi /etc/drbd.conf

global {
    usage-count no;
}
common {
    protocol C;
    disk {
        on-io-error detach; no-disk-flushes; no-md-flushes;
    }
    net {
        max-buffers 2048; unplug-watermark 2048;
    }
    syncer {
        rate 700000K; al-extents 1801;
    }
}
resource mysql {
    device /dev/drbd1; disk /dev/sdb1; meta-disk internal;
    on zenossha1 {
        address 192.168.100.5:7789;
    }
    on zenossha2 {
        address 192.168.100.6:7789;
    }
}
resource zenhome {
    device /dev/drbd2; disk /dev/sdb2; meta-disk internal;
    on zenossha1 {
        address 192.168.100.5:7790;
    }
    on zenossha2 {
        address 192.168.100.6:7790;
    }
}
resource zenperf {
    device /dev/drbd3; disk /dev/sdb3; meta-disk internal;
    on zenossha1 {
        address 192.168.100.5:7791;
    }
    on zenossha2 {
        address 192.168.100.6:7791;
    }
}
In this configuration file we define three DRBD resources: mysql, zenhome, and zenperf. Each resource associates a DRBD device (/dev/drbdX) with one of the dedicated partitions created earlier, and each uses its own port for replication traffic.
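Before moving on, you can optionally let drbdadm parse the file to catch syntax mistakes early; drbdadm dump simply re-prints the parsed configuration and complains loudly if the file is malformed:

# drbdadm dump all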

Next, we create the resource metadata. Before creating it, test a write to the disk:

# dd if=/dev/zero of=/dev/sdb1 bs=1M count=128
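The post only shows the write test for /dev/sdb1; presumably the same should be done for the other two partitions, since zeroing the first megabytes also wipes any stale filesystem signature that could make drbdadm create-md refuse to proceed:

# dd if=/dev/zero of=/dev/sdb2 bs=1M count=128
# dd if=/dev/zero of=/dev/sdb3 bs=1M count=128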
Create the resource metadata:

# drbdadm create-md mysql
# drbdadm create-md zenhome
# drbdadm create-md zenperf
Once that is done, start the DRBD service:

# service drbd start
Note that the first time DRBD starts on both nodes, a full synchronization of the devices is required. The sync speed depends on your network environment.

# more /proc/drbd
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by buildsvn@c5-x8664-build, 2008-10-03 11:30:17
 1: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:35982212 nr:0 dw:476244 dr:35982432 al:209 bm:2167 lo:1 pe:50 ua:14883 ap:43 oos:12687508
    [=============>......] sync'ed: 73.7% (12390/47063)M
    finish: 1:14:11 speed: 2,832 (3,780) K/sec
 2: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:35724156 nr:0 dw:892220 dr:34843136 al:414 bm:2155 lo:1 pe:8 ua:348 ap:0 oos:12674912
    [=============>......] sync'ed: 73.8% (12377/47063)M
    finish: 0:54:52 speed: 3,848 (3,784) K/sec
 3: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:35911460 nr:0 dw:435600 dr:35961280 al:189 bm:2165 lo:1 pe:2055 ua:14912 ap:2049 oos:10854136
    [==============>.....] sync'ed: 76.6% (10599/45251)M
    finish: 0:44:07 speed: 4,072 (3,780) K/sec


Wait until the DRBD synchronization has completed before performing the steps below.
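A convenient way to follow the progress (my own habit, not from the original) is to re-read /proc/drbd periodically until every resource reports ds:UpToDate/UpToDate:

# watch -n 5 cat /proc/drbd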

Next, we need to format the DRBD devices and establish the primary/secondary relationship between the two servers, so the following steps have to be performed separately on each node.
On the master, first check the DRBD status:

# cat /proc/drbd
version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by buildsvn@c5-i386-build, 2008-10-03 11:42:32
 1: cs:WFConnection st:Primary/Unknown ds:UpToDate/DUnknown C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 oos:2570252
 2: cs:WFConnection st:Secondary/Unknown ds:Inconsistent/DUnknown C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 oos:2650604
 3: cs:WFConnection st:Secondary/Unknown ds:Inconsistent/DUnknown C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 oos:2650604
Force each resource on the master to primary:

# drbdadm -- -o primary mysql
# drbdadm -- -o primary zenhome
# drbdadm -- -o primary zenperf
Then format the DRBD devices on the master:

# mkfs.ext3 /dev/drbd1
# mkfs.ext3 /dev/drbd2
# mkfs.ext3 /dev/drbd3
Stop the DRBD service on the master:

# service drbd stop

Repeat the master's steps on the slave:

# drbdadm -- -o primary mysql
# drbdadm -- -o primary zenhome
# drbdadm -- -o primary zenperf
# mkfs.ext3 /dev/drbd1
# mkfs.ext3 /dev/drbd2
# mkfs.ext3 /dev/drbd3
After both servers have been formatted, demote the slave's DRBD resources back to secondary:

# drbdadm secondary mysql
# drbdadm secondary zenhome
# drbdadm secondary zenperf
Next, restart DRBD on the master and set its DRBD resources back to primary:

# service drbd start
# drbdadm -- -o primary mysql
# drbdadm -- -o primary zenhome
# drbdadm -- -o primary zenperf
With the master/slave DRBD relationship in place, we can now mount the DRBD devices onto the actual Zenoss data directories on the master:

# mkdir /var/lib/mysql -p
# mount /dev/drbd1 /var/lib/mysql
# mkdir /opt/zenoss/ -p
# mount /dev/drbd2 /opt/zenoss
# mkdir /opt/zenoss/perf -p
# mount /dev/drbd3 /opt/zenoss/perf

4. Installing MySQL and Zenoss
With the mounts in place, the Zenoss-related directories on the master can be used directly. The slave cannot use the DRBD resources at the same time, but we can still install MySQL and Zenoss on both master and slave. (In fact, on the slave all I want is for the MySQL and Zenoss services to be installed; the actual data will ultimately come from DRBD.)
Install MySQL and Zenoss on both servers:

# yum -y install mysql mysql-server
# service mysqld start
# rpm -ivh zenoss-3.2.1.el5.i386.rpm
Because the mounted directories on the master are owned by root, change the ownership of the Zenoss performance directory on the master to the zenoss user:

# chown zenoss:zenoss -R /opt/zenoss/perf
Initialize Zenoss on both master and slave:

# service zenoss start
After initialization, disable autostart of the zenoss service on both master and slave:

# service zenoss stop
# service mysqld stop
# chkconfig zenoss off
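The post only disables zenoss here; since Heartbeat will also be managing MySQL, it presumably makes sense to disable mysqld autostart on both nodes as well, so that a rebooted standby does not start a stray MySQL instance against an unmounted data directory:

# chkconfig mysqld off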
5. Installing and Configuring Heartbeat
Next we install Heartbeat. Note that because of a bug in the yum packaging of Heartbeat, the install command has to be run twice:

# yum -y install heartbeat
# yum -y install heartbeat
Verify that Heartbeat is installed. If the following packages are present, the installation is complete:

# rpm -qa | grep heartbeat
heartbeat-stonith-2.1.3-3.el5.centos
heartbeat-2.1.3-3.el5.centos
heartbeat-pils-2.1.3-3.el5.centos
Enable the heartbeat service at boot:

# chkconfig --add heartbeat
# chkconfig heartbeat on

Heartbeat requires three configuration files: the authentication keys (/etc/ha.d/authkeys), the main configuration file (/etc/ha.d/ha.cf), and the resource file (/etc/ha.d/haresources). Configure these three files on both master and slave as follows.

# vi /etc/ha.d/authkeys
auth 3
#1 crc
#2 sha1 HI!
3 md5 zenosshaTestforMurA!
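One step the post leaves implicit: Heartbeat refuses to start if authkeys is readable by anyone other than root, so tighten its permissions on both nodes:

# chmod 600 /etc/ha.d/authkeys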

# vi /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
keepalive 1
deadtime 20
warntime 5
initdead 40
udpport 694
ucast eth1 192.168.100.6    # the peer's IP: 192.168.100.6 on the master, 192.168.100.5 on the slave
auto_failback on
node zenossha1
node zenossha2
ping 192.168.100.1
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail uid=hacluster
use_logd yes
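Per the comment on the ucast line, the slave's ha.cf is identical except for that one line, which on zenossha2 points back at the master:

ucast eth1 192.168.100.5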

# vi /etc/ha.d/haresources
zenossha1 IPaddr::119.10.119.8/28/eth0 drbddisk::mysql Filesystem::/dev/drbd1::/var/lib/mysql::ext3 drbddisk::zenhome Filesystem::/dev/drbd2::/opt/zenoss drbddisk::zenperf Filesystem::/dev/drbd3::/opt/zenoss/perf::ext3::noatime,data=writeback mysqld zenoss
In the resource file, master and slave both carry the same line: it names the master's hostname, specifies the VIP, declares the DRBD resources and mounts them, and finally starts MySQL and Zenoss.
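Reading the resource line from left to right (Heartbeat starts a group's resources left to right on takeover and stops them in reverse order on release):

zenossha1                                      preferred owner of the resource group
IPaddr::119.10.119.8/28/eth0                   bring up the VIP 119.10.119.8/28 on eth0
drbddisk::mysql                                promote the "mysql" DRBD resource to primary
Filesystem::/dev/drbd1::/var/lib/mysql::ext3   mount it on /var/lib/mysql
drbddisk::zenhome                              promote "zenhome"
Filesystem::/dev/drbd2::/opt/zenoss            mount it on /opt/zenoss
drbddisk::zenperf                              promote "zenperf"
Filesystem::/dev/drbd3::/opt/zenoss/perf::ext3::noatime,data=writeback
                                               mount it with noatime,data=writeback
mysqld zenoss                                  finally start MySQL, then Zenoss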

6. Testing Zenoss HA
The configuration work is now complete. Start heartbeat on both master and slave and test the HA setup:

# service heartbeat start
On the master, check that the VIP has been assigned:

# ip a
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:f7:36:75 brd ff:ff:ff:ff:ff:ff
    inet 119.10.119.5/28 brd 119.10.119.15 scope global eth0
    inet 119.10.119.8/28 brd 119.10.119.15 scope global secondary eth0:0
3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:f7:36:7f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.5/24 brd 192.168.100.255 scope global eth1
Check the DRBD mounts:

# df -l
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       6030784   2748296   2971192  49% /
/dev/sda1               101086     12353     83514  13% /boot
tmpfs                  1037484         0   1037484   0% /dev/shm
/dev/drbd1             3162284     99996   2901648   4% /var/lib/mysql
/dev/drbd2             3162316    479952   2521724  16% /opt/zenoss
/dev/drbd3             3992452     77052   3712588   3% /opt/zenoss/perf
Check the MySQL and Zenoss service status:

# service mysqld status
mysqld (pid 12631) is running...
# service zenoss status
Daemon: zeoctl program running; pid=12938
Daemon: zopectl program running; pid=12943
Daemon: zenhub program running; pid=12978
Daemon: zenjobs program running; pid=13007
Daemon: zenping program running; pid=13073
Daemon: zensyslog program running; pid=13111
Daemon: zenstatus program running; pid=13113
Daemon: zenactions program running; pid=13140
Daemon: zentrap program running; pid=13240
Daemon: zenmodeler program running; pid=13245
Daemon: zenperfsnmp program running; pid=13279
Daemon: zencommand program running; pid=13313
Daemon: zenprocess program running; pid=13339
Daemon: zenwin program running; pid=13377
Daemon: zeneventlog program running; pid=13415
Now stop the heartbeat service on the master to verify the HA failover:

# service heartbeat stop
Stopping High-Availability services:
                                                           [  OK  ]
Verification on the master:

# ip a
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:f7:36:75 brd ff:ff:ff:ff:ff:ff
    inet 119.10.119.5/28 brd 119.10.119.15 scope global eth0
3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:0c:29:f7:36:7f brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.5/24 brd 192.168.100.255 scope global eth1
# df -l
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       6030784   2746420   2973068  49% /
/dev/sda1               101086     12353     83514  13% /boot
tmpfs                  1037484         0   1037484   0% /dev/shm
# service mysqld status
mysqld is stopped
# service zenoss status
Startup script not found at /opt/zenoss/bin/zenoss.
The HA debug log on the master:

# cat /var/log/ha-debug
Daemon: zeneventlog stopping...
Daemon: zenwin stopping...
Daemon: zenprocess stopping...
Daemon: zencommand stopping...
Daemon: zenperfsnmp stopping...
Daemon: zenmodeler stopping...
Daemon: zentrap stopping...
Daemon: zenactions stopping...
Daemon: zenstatus stopping...
Daemon: zensyslog stopping...
Daemon: zenping stopping...
Daemon: zenjobs stopping...
Daemon: zenhub stopping...
Daemon: zopectl .
daemon process stopped
Daemon: zeoctl .
daemon process stopped
Stopping MySQL: [  OK  ]
INFO: Success
INFO: Success
INFO: Success
In IP Stop
SIOCDELRT: No such process
INFO: Success
Now switch over to the slave and check the state of the services:

# ip a
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:50:56:2a:c0:d8 brd ff:ff:ff:ff:ff:ff
    inet 119.10.119.6/28 brd 119.10.119.15 scope global eth0
    inet 119.10.119.8/28 brd 119.10.119.15 scope global secondary eth0:0
3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:50:56:3c:96:bc brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.6/24 brd 192.168.100.255 scope global eth1

# df -l
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       6030784   3226032   2493456  57% /
/dev/sda1               101086     12353     83514  13% /boot
tmpfs                  1037484         0   1037484   0% /dev/shm
/dev/drbd1             3162284     99996   2901648   4% /var/lib/mysql
/dev/drbd2             3162316    479676   2522000  16% /opt/zenoss
/dev/drbd3             3992452     77052   3712588   3% /opt/zenoss/perf

# service mysqld status
mysqld (pid 9919) is running...

# service zenoss status
Daemon: zeoctl program running; pid=10232
Daemon: zopectl program running; pid=10237
Daemon: zenhub program running; pid=10272
Daemon: zenjobs program running; pid=10301
Daemon: zenping program running; pid=10363
Daemon: zensyslog program running; pid=10402
Daemon: zenstatus program running; pid=10408
Daemon: zenactions program running; pid=10434
Daemon: zentrap program running; pid=10539
Daemon: zenmodeler program running; pid=10544
Daemon: zenperfsnmp program running; pid=10578
Daemon: zencommand program running; pid=10613
Daemon: zenprocess program running; pid=10638
Daemon: zenwin program running; pid=10675
Daemon: zeneventlog program running; pid=10713

# cat /var/log/ha-debug
ipfail[23546]: 2012/04/11_12:35:54 debug: Other side is unstable.
heartbeat[23536]: 2012/04/11_12:36:08 info: Received shutdown notice from 'zenossha1'.
heartbeat[23536]: 2012/04/11_12:36:08 info: Resources being acquired from zenossha1.
heartbeat[23536]: 2012/04/11_12:36:08 debug: StartNextRemoteRscReq(): child count 1
heartbeat[8929]: 2012/04/11_12:36:08 info: acquire local HA resources (standby).
heartbeat[8929]: 2012/04/11_12:36:08 info: local HA resource acquisition completed (standby).
heartbeat[8930]: 2012/04/11_12:36:08 info: No local resources [/usr/share/heartbeat/ResourceManager listkeys zenossha2] to acquire.
heartbeat[23536]: 2012/04/11_12:36:08 info: Standby resource acquisition done [foreign].
heartbeat[23536]: 2012/04/11_12:36:08 debug: StartNextRemoteRscReq(): child count 1
heartbeat[8955]: 2012/04/11_12:36:08 debug: notify_world: setting SIGCHLD Handler to SIG_DFL
logd is not runningharc[8955]: 2012/04/11_12:36:08 info: Running /etc/ha.d/rc.d/status status
logd is not runningmach_down[8971]: 2012/04/11_12:36:08 info: Taking over resource group IPaddr::119.10.119.8/28/eth0
logd is not runningResourceManager[8997]: 2012/04/11_12:36:08 info: Acquiring resource group: zenossha1 IPaddr::119.10.119.8/28/eth0 drbddisk::mysql Filesystem::/dev/drbd1::/var/lib/mysql::ext3 drbddisk::zenhome Filesystem::/dev/drbd2::/opt/zenoss drbddisk::zenperf Filesystem::/dev/drbd3::/opt/zenoss/perf::ext3::noatime,data=writeback mysqld zenoss
logd is not runningIPaddr[9024]: 2012/04/11_12:36:09 INFO: Resource is stopped
logd is not runningResourceManager[8997]: 2012/04/11_12:36:09 info: Running /etc/ha.d/resource.d/IPaddr 119.10.119.8/28/eth0 start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:09 debug: Starting /etc/ha.d/resource.d/IPaddr 119.10.119.8/28/eth0 start
logd is not runningIPaddr[9122]: 2012/04/11_12:36:09 INFO: Using calculated netmask for 119.10.119.8: 255.255.255.240
logd is not runningIPaddr[9122]: 2012/04/11_12:36:09 DEBUG: Using calculated broadcast for 119.10.119.8: 119.10.119.15
logd is not runningIPaddr[9122]: 2012/04/11_12:36:09 INFO: eval ifconfig eth0:0 119.10.119.8 netmask 255.255.255.240 broadcast 119.10.119.15
logd is not runningIPaddr[9122]: 2012/04/11_12:36:09 DEBUG: Sending Gratuitous Arp for 119.10.119.8 on eth0:0 [eth0]
logd is not runningIPaddr[9093]: 2012/04/11_12:36:10 INFO: Success
INFO: Success
logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 debug: /etc/ha.d/resource.d/IPaddr 119.10.119.8/28/eth0 start done. RC=0
logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 info: Running /etc/ha.d/resource.d/drbddisk mysql start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 debug: Starting /etc/ha.d/resource.d/drbddisk mysql start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 debug: /etc/ha.d/resource.d/drbddisk mysql start done. RC=0
logd is not runningFilesystem[9269]: 2012/04/11_12:36:10 INFO: Resource is stopped
logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /var/lib/mysql ext3 start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:10 debug: Starting /etc/ha.d/resource.d/Filesystem /dev/drbd1 /var/lib/mysql ext3 start
logd is not runningFilesystem[9350]: 2012/04/11_12:36:11 INFO: Running start for /dev/drbd1 on /var/lib/mysql
logd is not runningFilesystem[9339]: 2012/04/11_12:36:11 INFO: Success
INFO: Success
logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 debug: /etc/ha.d/resource.d/Filesystem /dev/drbd1 /var/lib/mysql ext3 start done. RC=0
logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 info: Running /etc/ha.d/resource.d/drbddisk zenhome start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 debug: Starting /etc/ha.d/resource.d/drbddisk zenhome start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 debug: /etc/ha.d/resource.d/drbddisk zenhome start done. RC=0
logd is not runningFilesystem[9459]: 2012/04/11_12:36:11 INFO: Resource is stopped
logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd2 /opt/zenoss start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:11 debug: Starting /etc/ha.d/resource.d/Filesystem /dev/drbd2 /opt/zenoss start
logd is not runningFilesystem[9540]: 2012/04/11_12:36:12 INFO: Running start for /dev/drbd2 on /opt/zenoss
logd is not runningFilesystem[9540]: 2012/04/11_12:36:12 INFO: Starting filesystem check on /dev/drbd2
fsck 1.39 (29-May-2006)
/dev/drbd2: clean, 29826/402400 files, 132525/803216 blocks
logd is not runningFilesystem[9529]: 2012/04/11_12:36:12 INFO: Success
INFO: Success
logd is not runningResourceManager[8997]: 2012/04/11_12:36:12 debug: /etc/ha.d/resource.d/Filesystem /dev/drbd2 /opt/zenoss start done. RC=0
logd is not runningResourceManager[8997]: 2012/04/11_12:36:12 info: Running /etc/ha.d/resource.d/drbddisk zenperf start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:12 debug: Starting /etc/ha.d/resource.d/drbddisk zenperf start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:12 debug: /etc/ha.d/resource.d/drbddisk zenperf start done. RC=0
logd is not runningFilesystem[9654]: 2012/04/11_12:36:12 INFO: Resource is stopped
logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd3 /opt/zenoss/perf ext3 noatime,data=writeback start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 debug: Starting /etc/ha.d/resource.d/Filesystem /dev/drbd3 /opt/zenoss/perf ext3 noatime,data=writeback start
logd is not runningFilesystem[9735]: 2012/04/11_12:36:13 INFO: Running start for /dev/drbd3 on /opt/zenoss/perf
logd is not runningFilesystem[9724]: 2012/04/11_12:36:13 INFO: Success
INFO: Success
logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 debug: /etc/ha.d/resource.d/Filesystem /dev/drbd3 /opt/zenoss/perf ext3 noatime,data=writeback start done. RC=0
logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 info: Running /etc/init.d/mysqld start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:13 debug: Starting /etc/init.d/mysqld start
Starting MySQL: [  OK  ]
logd is not runningResourceManager[8997]: 2012/04/11_12:36:15 debug: /etc/init.d/mysqld start done. RC=0
logd is not runningResourceManager[8997]: 2012/04/11_12:36:24 info: Running /etc/init.d/zenoss start
logd is not runningResourceManager[8997]: 2012/04/11_12:36:24 debug: Starting /etc/init.d/zenoss start
Daemon: zeoctl .
daemon process started, pid=10232
Daemon: zopectl heartbeat[23536]: 2012/04/11_12:36:30 WARN: node zenossha1: is dead
heartbeat[23536]: 2012/04/11_12:36:30 info: Dead node zenossha1 gave up resources.
heartbeat[23536]: 2012/04/11_12:36:30 info: Link zenossha1:eth1 dead.
ipfail[23546]: 2012/04/11_12:36:30 info: Status update: Node zenossha1 now has status dead
ipfail[23546]: 2012/04/11_12:36:30 debug: Found ping node 192.168.100.1!
ipfail[23546]: 2012/04/11_12:36:31 info: NS: We are still alive!
ipfail[23546]: 2012/04/11_12:36:31 info: Link Status update: Link zenossha1/eth1 now has status dead
ipfail[23546]: 2012/04/11_12:36:31 debug: Found ping node 192.168.100.1!
ipfail[23546]: 2012/04/11_12:36:32 info: Asking other side for ping node count.
ipfail[23546]: 2012/04/11_12:36:32 debug: Message [num_ping] sent.
ipfail[23546]: 2012/04/11_12:36:32 info: Checking remote count of ping nodes.
.
daemon process started, pid=10237
Daemon: zenhub starting...
Daemon: zenjobs starting...
Daemon: zenping starting...
Daemon: zensyslog starting...
Daemon: zenstatus starting...
Daemon: zenactions starting...
Daemon: zentrap starting...
Daemon: zenmodeler starting...
Daemon: zenperfsnmp starting...
Daemon: zencommand starting...
Daemon: zenprocess starting...
Daemon: zenwin starting...
Daemon: zeneventlog starting...
logd is not runningResourceManager[8997]: 2012/04/11_12:37:08 debug: /etc/init.d/zenoss start done. RC=0
logd is not runningmach_down[8971]: 2012/04/11_12:37:08 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
heartbeat[23536]: 2012/04/11_12:37:08 info: mach_down takeover complete.
logd is not runningmach_down[8971]: 2012/04/11_12:37:08 info: mach_down takeover complete for node zenossha1.
All services on the slave are running normally, and judging from the log timestamps, the failover took 74 seconds in total.

Finally, start the heartbeat service on the master again to fail the services back to it. I leave that verification to the reader, but a minimal sketch of the checks follows.
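A minimal failback check, mirroring the tests above (with auto_failback on, as configured earlier, the resources should migrate back on their own):

zenossha1# service heartbeat start
zenossha1# ip a                      # the VIP 119.10.119.8 should reappear on eth0:0
zenossha1# df -l                     # the three DRBD mounts should be back
zenossha1# service zenoss status     # Zenoss daemons running again
zenossha2# service mysqld status     # mysqld stopped on the slave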





