1. Environment. I set this up in VMware with two RHEL5 installs, each with the HTTP service installed by default. Each system is configured with two NICs, both of type NAT.
Name                ETH0              ETH1       Virtual IP
node1.hrwang.com    192.168.146.131   10.0.0.1   192.168.146.110
cluster.hrwang.com  192.168.146.132   10.0.0.2   192.168.146.110
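Heartbeat refers to the nodes by these names, so it is worth making sure both hostnames resolve on both machines. A minimal /etc/hosts sketch for this setup (assuming you are not relying on DNS; the names and addresses are the ones from the table above):

192.168.146.131 node1.hrwang.com node1
192.168.146.132 cluster.hrwang.com cluster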
To change the IP address of the eth1 card, edit /etc/sysconfig/network-scripts/ifcfg-eth1. For example, to set node1's eth1 address:
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
HWADDR=00:0c:29:8e:1d:eb
IPADDR=10.0.0.1
NETMASK=255.0.0.0
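After editing the file, apply the change with the standard RHEL5 network scripts:

# ifdown eth1 && ifup eth1

(or restart networking entirely with service network restart).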
2. Installation. The ipvsadm package is on the installation disc; just install it directly.
Installing heartbeat requires its dependencies first: libnet, net-snmp, net-snmp-libs, perl-Compress-Zlib, perl-HTML-Parser, perl-HTML-Tagset, perl-libwww-perl, perl-MailTools, perl-TimeDate, and perl-URI. Of these, only perl-MailTools is not on the installation disc; find it online, or install it with yum as described at http://blog.chinaunix.net/u/22677/showart_1133957.html.
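As a sketch, assuming the RHEL5 DVD is mounted at /media/cdrom (your mount point and package versions will differ), the on-disc dependencies can be installed in one go:

# cd /media/cdrom/Server
# rpm -ivh net-snmp-*.rpm net-snmp-libs-*.rpm perl-Compress-Zlib-*.rpm \
    perl-HTML-Parser-*.rpm perl-HTML-Tagset-*.rpm perl-libwww-perl-*.rpm \
    perl-TimeDate-*.rpm perl-URI-*.rpm

(libnet is not on the disc either; it is in the download list below.)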
Next, download the following packages:
libnet-1.1.2.1-2.1.i386.rpm
heartbeat-2.1.4-2.1.i386.rpm
heartbeat-devel-2.1.4-2.1.i386.rpm
heartbeat-ldirectord-2.1.4-2.1.i386.rpm
heartbeat-pils-2.1.4-2.1.i386.rpm
heartbeat-stonith-2.1.4-2.1.i386.rpm
Install them all with rpm -ivh *.rpm.
Note: all of the installation steps above must be performed on both machines.
3. Configuration. Now do the configuration on node1. After installation, /etc/ha.d/ contains no ldirectord.cf or other configuration files by default, so first copy them from the documentation directories:
[root@node1 ha.d]# cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d/
[root@node1 ha.d]# cp /usr/share/doc/packages/heartbeat/ha.cf /etc/ha.d/
[root@node1 ha.d]# cp /usr/share/doc/packages/heartbeat/authkeys /etc/ha.d/
[root@node1 ha.d]# cp /usr/share/doc/packages/heartbeat/haresources /etc/ha.d/
Next, edit the four files ldirectord.cf, authkeys, ha.cf, and haresources.
My configuration is as follows:
ldirectord.cf:
checktimeout=3                      # health-check timeout: 3s
checkinterval=1                     # check interval: 1s
autoreload=yes                      # reload automatically when this file changes
logfile="/var/log/ldirectord.log"   # log file
quiescent=yes
virtual=192.168.146.110             # virtual IP
        real=192.168.146.131:80 gate   # realserver node1.hrwang.com
        real=192.168.146.132:80 gate   # realserver cluster.hrwang.com
        fallback=127.0.0.1:80          # used when all realservers are down
        service=http                   # service name
        scheduler=rr                   # round-robin scheduling; "gate" above selects direct routing
        protocol=fwm
        checktype=negotiate
Note: with protocol=tcp I always got the error "Error [5413] reading file /etc/ha.d/ldirectord.cf at line 32: protocol must be fwm if the virtual service is a fwmark (a number)"; changing it to fwm made the problem go away. The likely reason is that virtual= above is a bare IP with no :port, so ldirectord parses it as a firewall mark rather than an ip:port service (the ipvsadm output later in this article shows the service as FWM 192 for the same reason).
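If you would rather have a plain TCP virtual service, the usual fix (a sketch; I did not re-test this variant) is to give virtual= an explicit port, after which protocol=tcp is accepted:

virtual=192.168.146.110:80
        real=192.168.146.131:80 gate
        real=192.168.146.132:80 gate
        fallback=127.0.0.1:80
        service=http
        scheduler=rr
        protocol=tcp
        checktype=negotiate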
The ha.cf file:
debugfile /var/log/ha-debug   # debug log; its content is actually the same as the /var/log/ldirectord.log defined in ldirectord.cf
logfile /var/log/ha-log       # same as above
logfacility local0            # default
keepalive 2                   # interval between heartbeats: 2 seconds
deadtime 30                   # declare a node dead after 30 seconds
warntime 10                   # seconds to wait before logging a "late heartbeat" warning
initdead 120                  # on some setups the network needs a while after boot before it works; this separate "deadtime" covers that case, and should be at least twice the normal deadtime
udpport 694                   # port for bcast/ucast communication; this is the default, officially registered with IANA
bcast eth1                    # broadcast heartbeat on eth1 (replace with eth0, eth2, or whichever interface you use)
mcast eth0 225.0.0.1 694 1 0  # the default is fine
ucast eth0 192.168.146.131    # change to your own eth0 address
auto_failback on              # this option must be configured; the value is on or off
node node1.hrwang.com         # load-balancer node name; must match the output of uname -n
node cluster.hrwang.com       # same as above
ping 10.0.0.254
respawn hacluster /usr/lib/heartbeat/ipfail
apiauth ipfail gid=haclient uid=hacluster
Notes:
auto_failback on        # whether to fail back automatically once the primary node recovers
watchdog /dev/watchdog  # the watchdog reboots the machine one minute after a failure, which helps a server whose heartbeat has truly stopped come back up. To use this feature, load the "softdog" kernel module to back the device file: run insmod softdog to load the module, check grep misc /proc/devices (should show 10) and cat /proc/misc | grep watchdog (should show 130), then create the device file with mknod /dev/watchdog c 10 130.
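The watchdog setup from that note, as a command sequence (the expected numbers are the ones quoted above):

# insmod softdog                   # load the softdog module
# grep misc /proc/devices          # should print: 10 misc
# cat /proc/misc | grep watchdog   # should print: 130 watchdog
# mknod /dev/watchdog c 10 130     # create the watchdog device file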
The haresources file:
node1.hrwang.com ldirectord::ldirectord.cf LVSSyncDaemonSwap::master IPaddr::192.168.146.110/24/eth0/192.168.1.255
(Strictly speaking, the broadcast address for a /24 on 192.168.146.0 should be 192.168.146.255; the 192.168.1.255 here is a slip, and it is why the eth0:0 output below shows Bcast:192.168.1.255.)
The authkeys file; I use md5 here. Note that this file's permissions must be 600:
auth 3
3 md5 test
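Set the permissions right away, since heartbeat refuses to start if authkeys is readable by anyone but root:

# chmod 600 /etc/ha.d/authkeys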
Then edit /etc/sysctl.conf and add the following lines:
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
Then run sysctl -p to make them take effect immediately!
That completes node1's configuration. Remember: the four files authkeys, haresources, ha.cf, and ldirectord.cf under /etc/ha.d/ on cluster must be identical to those on node1.
Copy the files from node1 over to cluster:
#scp /etc/ha.d/* root@cluster.hrwang.com:/etc/ha.d/
And don't forget to add the same lines to /etc/sysctl.conf on cluster as on node1.
4. Verifying Heartbeat
Next, start the httpd service on both node1 and cluster, and give each its own new home page:
[root@node1 ha.d]# echo "This is node1.hrwang.com" >/var/www/html/index.html
[root@cluster rc.d]# echo "This is cluster.hrwang.com" > /var/www/html/index.html
Make sure both pages load correctly via http://192.168.146.131 and http://192.168.146.132.
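A quick command-line check (curl ships with RHEL5; the URLs are the two realservers' eth0 addresses):

# curl http://192.168.146.131/
This is node1.hrwang.com
# curl http://192.168.146.132/
This is cluster.hrwang.com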
Next, start the heartbeat service on node1 with the following command:
#service heartbeat start
Check the log for errors, then start the heartbeat service on cluster the same way.
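To watch heartbeat come up, tail the log defined in ha.cf:

# tail -f /var/log/ha-log

On the active node you should see it acquire the resource group from haresources (ldirectord, LVSSyncDaemonSwap, and the IPaddr for 192.168.146.110).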
On node1:
[root@node1 ha.d]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:8E:1D:E1
inet addr:192.168.146.131 Bcast:192.168.146.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe8e:1de1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:241300 errors:0 dropped:0 overruns:0 frame:0
TX packets:149963 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:34216543 (32.6 MiB) TX bytes:20399011 (19.4 MiB)
Interrupt:17 Base address:0x1400
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:8E:1D:E1
inet addr:192.168.146.110 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:17 Base address:0x1400
eth1 Link encap:Ethernet HWaddr 00:0C:29:8E:1D:EB
inet addr:10.0.0.1 Bcast:10.255.255.255 Mask:255.0.0.0
inet6 addr: fe80::20c:29ff:fe8e:1deb/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:247197 errors:0 dropped:0 overruns:0 frame:0
TX packets:86908 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:43822366 (41.7 MiB) TX bytes:11597767 (11.0 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:268812 errors:0 dropped:0 overruns:0 frame:0
TX packets:268812 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:39481736 (37.6 MiB) TX bytes:39481736 (37.6 MiB)
peth1 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF
inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
UP BROADCAST RUNNING NOARP MTU:1500 Metric:1
RX packets:486089 errors:0 dropped:0 overruns:0 frame:0
TX packets:86979 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:71149416 (67.8 MiB) TX bytes:11614415 (11.0 MiB)
Interrupt:18 Base address:0x1480
vif0.1 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF
inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
UP BROADCAST RUNNING NOARP MTU:1500 Metric:1
RX packets:86938 errors:0 dropped:0 overruns:0 frame:0
TX packets:252019 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11606107 (11.0 MiB) TX bytes:44591984 (42.5 MiB)
virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:40 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:7821 (7.6 KiB)
xenbr1 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING NOARP MTU:1500 Metric:1
RX packets:338923 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:51449469 (49.0 MiB) TX bytes:0 (0.0 b)
On cluster:
[root@cluster rc.d]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:D1:84:DF
inet addr:192.168.146.132 Bcast:192.168.146.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fed1:84df/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:263380 errors:0 dropped:0 overruns:0 frame:0
TX packets:326204 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:44860098 (42.7 MiB) TX bytes:49844040 (47.5 MiB)
Interrupt:17 Base address:0x1400
eth1 Link encap:Ethernet HWaddr 00:0C:29:D1:84:E9
inet addr:10.0.0.2 Bcast:10.255.255.255 Mask:255.0.0.0
inet6 addr: fe80::20c:29ff:fed1:84e9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:323611 errors:0 dropped:0 overruns:0 frame:0
TX packets:25334 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:54931453 (52.3 MiB) TX bytes:3488436 (3.3 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:226146 errors:0 dropped:0 overruns:0 frame:0
TX packets:226146 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:29884572 (28.5 MiB) TX bytes:29884572 (28.5 MiB)
peth1 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF
inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
UP BROADCAST RUNNING NOARP MTU:1500 Metric:1
RX packets:611386 errors:0 dropped:0 overruns:0 frame:0
TX packets:25655 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:104759301 (99.9 MiB) TX bytes:3529384 (3.3 MiB)
Interrupt:18 Base address:0x1480
vif0.1 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF
inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
UP BROADCAST RUNNING NOARP MTU:1500 Metric:1
RX packets:25649 errors:0 dropped:0 overruns:0 frame:0
TX packets:337345 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3528412 (3.3 MiB) TX bytes:57006075 (54.3 MiB)
virbr0 Link encap:Ethernet HWaddr 00:00:00:00:00:00
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:7520 (7.3 KiB)
xenbr1 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF
inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
UP BROADCAST RUNNING NOARP MTU:1500 Metric:1
RX packets:361509 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:55344155 (52.7 MiB) TX bytes:246 (246.0 b)
node1 has gained an extra eth0:0 interface. If yours hasn't appeared, wait a moment and run ifconfig again; if it still doesn't show up, check that your configuration files are correct!
[root@node1 resource.d]# ./LVSSyncDaemonSwap master status
master running
(ipvs_syncmaster pid: 11854)
[root@node1 resource.d]# ./LVSSyncDaemonSwap backup status
backup stopped
(ipvs_syncmaster pid: 11854)
[root@cluster resource.d]# ./LVSSyncDaemonSwap master status
master stopped
[root@cluster resource.d]# ./LVSSyncDaemonSwap backup status
backup stopped
Now, from a Windows machine, open IE and visit http://192.168.146.110. What do you see? "This is node1.hrwang.com", right? :)
If you stop the heartbeat service on node1, cluster takes over and brings up an eth0:0 interface of its own. Visit http://192.168.146.110 in IE again; what do you see now? "This is cluster.hrwang.com", of course!
Note: at the risk of repeating myself, if you restart heartbeat on node1, node1 will take over again.
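The same test as commands (run them on whichever node currently holds eth0:0):

[root@node1 ~]# service heartbeat stop    # cluster should take over the VIP within deadtime (30s here)
[root@node1 ~]# service heartbeat start   # node1 reclaims the VIP, since auto_failback is on

Between the two commands, ifconfig on cluster should show the eth0:0 alias there.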
5. Verifying LVS. After ipvsadm is installed, no /etc/sysconfig/ipvsadm file is generated by default, and starting the ipvsadm service without that file fails with "Applying IPVS configuration: /etc/init.d/ipvsadm: line 62: /etc/sysconfig/ipvsadm: No such file or directory". What to do? This:
[root@node1 ha.d]# service ipvsadm save
Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]
[root@node1 ha.d]# service ipvsadm start
Clearing the current IPVS table: [ OK ]
Applying IPVS configuration: [ OK ]
See, very simple. Next, run ipvsadm -Ln on the node that is currently active and look at the output (the service appears as FWM 192 because of the firewall-mark parsing noted earlier; "Local" marks the realserver that is this director itself, "Route" the other one):
[root@node1 init.d]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
FWM 192 rr
-> 192.168.146.132:80 Route 1 0 0
-> 192.168.146.131:80 Local 1 0 0
6. Postscript. Here load balancing plus high availability and the HTTP service are all on the same two machines. If instead the load-balancing/HA cluster is one pair of machines and the HTTP service runs on another pair, remember to add the following to /etc/sysctl.conf on the realservers:
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
Then run sysctl -p to make it take effect immediately!
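In that split setup, the realservers must also answer for the VIP themselves, since direct routing ("gate") delivers packets to them still addressed to 192.168.146.110. The standard LVS-DR practice (a sketch; the combined setup in this article did not need it, because heartbeat already puts the VIP on the director) is to bind the VIP to a loopback alias on each realserver:

# ifconfig lo:0 192.168.146.110 netmask 255.255.255.255 broadcast 192.168.146.110 up

The /32 netmask confines the alias to the VIP itself, and the arp_ignore/arp_announce settings above keep the realservers from answering ARP for it, so only the director attracts the traffic.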