Category: System Operations
2006-01-19 12:21:04
1. Architecture Overview:
Three-tier architecture:
Tier 1: cluster directors (two machines)
Tier 2: web servers (three machines)
Tier 3: sino-trade.com database (one DELL server)
2. Software Packages:
Kernel and patches:
linux-2.4.18.tar.gz
hidden-2.4.5-1.diff
linux-2.4.18-ipvs-1.0.2.patch.gz
aa.config (my kernel configuration file)
Cluster software:
ipvsadm:
ipvs-
mon:
Mon-0.11.tar.gz
mon-0.99.1.tar.gz
Time-HiRes-01.20.tar.gz
Period-1.20.tar.gz
Convert-BER-1.31.tar.gz
HeartBeat:
heartbeat-
NFS:
Included with the Red Hat system (7.2)
3. Installation:
Step 1: Configure the kernel:
Configuring the kernel is a relatively simple step. It requires the following packages:
linux-2.4.18.tar.gz
linux-2.4.18-ipvs-1.0.2.patch.gz
hidden-2.4.5-1.diff (~julian/)
For the LVS directors:
1/cp linux-2.4.18.tar.gz /usr/src
2/cd /usr/src
3/tar xvzf linux-2.4.18.tar.gz
4/cd linux
5/cp /home/sun/linux-2.4.18-ipvs-1.0.2.patch.gz ./
6/cp /home/sun/hidden-2.4.5-1.diff ./
7/gunzip linux-2.4.18-ipvs-1.0.2.patch.gz
8/patch -p1 < linux-2.4.18-ipvs-1.0.2.patch
9/patch -p1 < hidden-2.4.5-1.diff
10/make menuconfig (load aa.config)
Then compile the kernel and boot into the new kernel.
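As a rough sketch of that compile-and-install step (assuming an x86 box, the standard 2.4 build targets, and the LILO boot loader; the image name vmlinuz-2.4.18-ipvs is only an example, so adjust paths and boot config to your own system):
make dep
make bzImage
make modules
make modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.18-ipvs
cp System.map /boot/System.map-2.4.18-ipvs
# add an image entry for /boot/vmlinuz-2.4.18-ipvs to /etc/lilo.conf, then:
lilo
reboot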
For the real servers:
1/cp /home/sun/linux-2.4.18.tar.gz /usr/src
2/cd /usr/src
3/tar xvzf linux-2.4.18.tar.gz
4/cd linux
5/cp /home/sun/hidden-2.4.5-1.diff ./
6/patch -p1 < hidden-2.4.5-1.diff (since these are the real servers, the IPVS patch is not applied)
7/make menuconfig
Then compile the kernel (same procedure as the sketch above) and boot into the new kernel.
Edit /etc/rc.d/rc.local and add:
/bin/echo 1 >/proc/sys/net/ipv4/ip_forward
/sbin/modprobe ipip
/sbin/ifconfig tunl0 up
/bin/echo 1 > /proc/sys/net/ipv4/conf/all/hidden
/bin/echo 1 > /proc/sys/net/ipv4/conf/tunl0/hidden
/sbin/ifconfig tunl0 218.106.209.45 netmask 255.255.255.255 broadcast 218.106.209.255 up
(This completes the configuration of the www servers; just install Apache on top of it, detailed steps omitted.)
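A quick sanity check on a real server after it reboots (these simply re-read the values set in rc.local above):
/sbin/ifconfig tunl0                        # should show the VIP 218.106.209.45
cat /proc/sys/net/ipv4/ip_forward           # should print 1
cat /proc/sys/net/ipv4/conf/all/hidden      # should print 1
cat /proc/sys/net/ipv4/conf/tunl0/hidden    # should print 1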
Step 2: Install mon:
mon is installed on the LVS cluster directors, lvs and lvs2:
Mon-0.11.tar.gz (ftp://ftp.kernel.org/pub/software/admin/mon/)
mon-0.99.1.tar.gz (same as above)
Time-HiRes-01.20.tar.gz (a Perl module; search for it yourself)
Period-1.20.tar.gz (same, a Perl module)
Convert-BER-1.31.tar.gz (same, a Perl module)
Steps:
1. Install the Perl modules, i.e. for each module:
A> cd <module>
B> perl Makefile.PL
C> make
D> make install
If that succeeds, move on to installing mon itself.
2. Simply tar xvzf mon-0.99.1.tar.gz, then cp the result to /usr/local/mon -- no particular reason, it just makes later administration easier (see the command sketch below).
That completes the mon installation.
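As a concrete command sketch of the two steps above (it assumes each tarball unpacks into a directory of the same name and that everything was downloaded to /home/sun, as in the kernel step):
cd /home/sun
for m in Time-HiRes-01.20 Period-1.20 Convert-BER-1.31; do
    tar xvzf $m.tar.gz
    cd $m
    perl Makefile.PL && make && make install
    cd ..
done
tar xvzf mon-0.99.1.tar.gz
cp -r mon-0.99.1 /usr/local/mon
mkdir -p /usr/local/mon/etc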
Next comes the slightly more involved part, configuring mon. Two files are needed here:
1. lvs.alert (in /usr/local/mon/alert.d)
#!/usr/bin/perl
#
# lvs.alert - Linux Virtual Server alert for mon
#
# It can be activated by mon to remove a real server when the
# service is down, or add the server when the service is up.
#
#
use Getopt::Std;
getopts ("s:g:h:t:l:P:V:R:W:F:u");
$ipvsadm = "/sbin/ipvsadm";
$protocol = $opt_P;
$virtual_service = $opt_V;
$remote = $opt_R;

if ($opt_u) {
    $weight = $opt_W;
    if ($opt_F eq "nat") {
        $forwarding = "-m";
    } elsif ($opt_F eq "tun") {
        $forwarding = "-i";
    } else {
        $forwarding = "-g";
    }
    if ($protocol eq "tcp") {
        system("$ipvsadm -a -t $virtual_service -r $remote -w $weight $forwarding");
    } else {
        system("$ipvsadm -a -u $virtual_service -r $remote -w $weight $forwarding");
    }
} else {
    if ($protocol eq "tcp") {
        system("$ipvsadm -d -t $virtual_service -r $remote");
    } else {
        system("$ipvsadm -d -u $virtual_service -r $remote");
    }
}
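You can exercise lvs.alert by hand before wiring it into mon, using the same flags the script parses and the addresses from this setup:
chmod 755 /usr/local/mon/alert.d/lvs.alert
# simulate "service down": remove real server www1 from the virtual service
/usr/local/mon/alert.d/lvs.alert -P tcp -V 218.106.209.45:80 -R 218.106.209.42
# simulate "service up": add it back with weight 1 and tunnel forwarding
/usr/local/mon/alert.d/lvs.alert -P tcp -V 218.106.209.45:80 -R 218.106.209.42 -W 1 -F tun -u
# inspect the result
/sbin/ipvsadm -L -n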
2. mon.cf (mon's startup configuration file, placed in /usr/local/mon/etc):
#
# The mon.cf file
#
#
# global options
#
cfbasedir = /usr/local/mon/etc
alertdir = /usr/local/mon/alert.d
mondir = /usr/local/mon/mon.d
maxprocs = 20
histlength = 100
randstart = 3s
#
# group definitions (hostnames or IP addresses)
#
hostgroup www1 218.106.209.42
hostgroup www2 218.106.209.43
hostgroup www3 218.106.209.44
#
# Web server 1
#
watch www1
    service http
        interval 10s
        monitor http.monitor
        period wd {Sun-Sat}
            alert mail.alert sun
            upalert mail.alert sun
            alert lvs.alert -P tcp -V 218.106.209.45:80 -R 218.106.209.42 -W 1 -F tun
            upalert lvs.alert -P tcp -V 218.106.209.45:80 -R 218.106.209.42 -W 1 -F tun -u
#
# Web server 2
#
watch www2
    service http
        interval 10s
        monitor http.monitor
        period wd {Sun-Sat}
            alert mail.alert sun
            upalert mail.alert sun
            alert lvs.alert -P tcp -V 218.106.209.45:80 -R 218.106.209.43 -W 1 -F tun
            upalert lvs.alert -P tcp -V 218.106.209.45:80 -R 218.106.209.43 -W 1 -F tun -u
#
# Web server 3
#
watch www3
    service http
        interval 10s
        monitor http.monitor
        period wd {Sun-Sat}
            alert mail.alert sun
            upalert mail.alert sun
            alert lvs.alert -P tcp -V 218.106.209.45:80 -R 218.106.209.44 -W 1 -F tun
            upalert lvs.alert -P tcp -V 218.106.209.45:80 -R 218.106.209.44 -W 1 -F tun -u
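A simple end-to-end test of the monitoring (assuming Apache came from the Red Hat rpm so an httpd init script exists; if you built Apache from source, use apachectl stop/start instead): stop the web server on one real server and watch the director's IPVS table -- mon should drop that real server within about one 10s interval and add it back once httpd returns.
# on www1 (218.106.209.42):
/etc/rc.d/init.d/httpd stop
# on the active director, the entry for 218.106.209.42 should disappear:
/sbin/ipvsadm -L -n
# on www1 again:
/etc/rc.d/init.d/httpd start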
Step 3: Install HeartBeat (heartbeat requires a full install of Red Hat 7.2):
HeartBeat must be installed on the two directors, lvs and lvs2:
heartbeat-
Steps:
tar xvzf heartbeat-
cd heartbeat-
make
make install
(If there are no errors, heartbeat is installed.)
Test the serial link (assuming it is attached to com1):
On one machine run:
cat < /dev/ttyS0
On the other machine run:
echo hello > /dev/ttyS0
If the word hello shows up on the machine running cat, swap the two commands between the machines and test again. If both directions work, the com1 serial link is fine.
heartbeat configuration files (to be done on both machines):
Configure /etc/hosts (adjust to your own situation):
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 lvs localhost.localdomain localhost
218.106.209.42 web1
218.106.209.43 web2
218.106.209.44 web3
218.106.209.41 lvs
218.106.209.46 lvs2
Next comes ha.cf (/etc/ha.d/ha.cf):
#
# There are lots of options in this file. All you have to have is a set
# of nodes listed {"node ...}
# and one of {serial, udp, or mcast}
#
#
# Note on logging:
# If any of debugfile, logfile and logfacility are defined then they
# will be used. If debugfile and/or logfile are not defined and
# logfacility is defined then the respective logging and debug
# messages will be logged to syslog. If logfacility is not defined
# then debugfile and logfile will be used to log messages. If
# logfacility is not defined and debugfile and/or logfile are not
# defined then defaults will be used for debugfile and logfile as
# required and messages will be sent there.
#
# File to write debug messages to
debugfile /var/log/ha-debug
#
#
# File to write other messages to
#
logfile /var/log/ha-log
#
#
# Facility to use for syslog()/logger
#
logfacility local0
#
#
# keepalive: how many seconds between heartbeats
#
keepalive 2
#
# deadtime: seconds-to-declare-host-dead
#
deadtime 10
#
#
# Very first dead time (initdead)
#
# On some machines/OSes, etc. the network takes a while to come up
# and start working right after you've been rebooted. As a result
# we have a separate dead time for when things first come up.
# It should be at least twice the normal dead time.
#
#initdead 120
#
# hopfudge maximum hop count minus number of nodes in config
#hopfudge 1
#
# serial serialportname ...
serial /dev/ttyS0
#
#
# Baud rate for serial ports...
#
baud 19200
#
# What UDP port to use for communication?
#
udpport 694
#
# What interfaces to heartbeat over?
#
udp eth0
#
# Set up a multicast heartbeat medium
# mcast [dev] [mcast group] [port] [ttl] [loop]
#
# [dev] device to send/rcv heartbeats on
# [mcast group] multicast group to join (class D multicast address
# 224.0.0.0 - 239.255.255.255)
# [port] udp port to sendto/rcvfrom (no real reason to differ
# from the port used for broadcast heartbeats)
# [ttl] the ttl value for outbound heartbeats. this effects
# how far the multicast packet will propagate. (0-255)
# [loop] toggles loopback for outbound multicast heartbeats.
# if enabled, an outbound packet will be looped back and
# received by the interface it was sent on. (0 or 1)
#
#
mcast eth0 225.0.0.1 694 1 1
#
# Watchdog is the watchdog timer. If our own heart doesn't beat for
# a minute, then our machine will reboot.
#
watchdog /dev/watchdog
#
# "Legacy" STONITH support
# Using this directive assumes that there is one stonith
# device in the cluster. Parameters to this device are
# read from a configuration file. The format of this line is:
#
# stonith <stonith_type> <configfile>
#
# NOTE: it is up to you to maintain this file on each node in the
# cluster!
#
#stonith baytech /etc/ha.d/conf/stonith.baytech
#
# STONITH support
# You can configure multiple stonith devices using this directive.
# The format of the line is:
# stonith_host <hostfrom> <stonith_type> <params...>
#
# <hostfrom> is the machine the stonith device is attached
# to or * to mean it is accessible from any host.
#
# <stonith_type> is the type of stonith device (a list of
# supported drives is in /usr/lib/stonith.)
#
# <params...> are driver specific parameters. To see the
# format for a particular device, run:
# stonith -l -t <stonith_type>
#
#
# Note that if you put your stonith device access information in
# here, and you make this file publically readable, you're asking
# for a denial of service attack ;-)
#
#
#stonith_host * baytech
#stonith_host ken3 rps10 /dev/ttyS1 kathy 0
#stonith_host kathy rps10 /dev/ttyS1 ken3 0
#
# Tell what machines are in the cluster
# node nodename ... -- must match uname -n
node lvs
node lvs2
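Stripped of the comments, the directives actually in effect in this ha.cf boil down to:
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 10
serial /dev/ttyS0
baud 19200
udpport 694
udp eth0
mcast eth0 225.0.0.1 694 1 1
watchdog /dev/watchdog
node lvs
node lvs2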
Next is haresources (/etc/ha.d/haresources). The single line below means that node lvs is the preferred owner of the virtual IP 218.106.209.45 and of the lvs resource script (the /etc/rc.d/init.d/lvs script defined below):
lvs 218.106.209.45 lvs
Then authkeys (/etc/ha.d/). Note: authkeys must have mode 600 (set it with chmod 600 authkeys):
#auth 1
#1 crc
#2 sha1 HI!
#3 md5 Hello!
auth 1
1 crc
Next, the lvs resource script (in /etc/rc.d/init.d/):
#!/bin/sh
#
# You probably want to set the path to include
# nothing but local filesystems.
#
PATH=/bin:/usr/bin:/sbin:/usr/sbin
export PATH
IPVSADM=/sbin/ipvsadm
case "$1" in
start)
    if [ -x $IPVSADM ]
    then
        echo 1 > /proc/sys/net/ipv4/ip_forward
        $IPVSADM -A -t 218.106.209.45:80
        $IPVSADM -a -t 218.106.209.45:80 -r 218.106.209.42 -i
        $IPVSADM -a -t 218.106.209.45:80 -r 218.106.209.43 -i
        $IPVSADM -a -t 218.106.209.45:80 -r 218.106.209.44 -i
    fi
    ;;
stop)
    if [ -x $IPVSADM ]
    then
        $IPVSADM -C
    fi
    ;;
*)
    echo "Usage: lvs {start|stop}"
    exit 1
esac
exit 0
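Heartbeat starts and stops this script through the haresources entry above, so it just needs to be executable. A quick manual check on the active director:
chmod 755 /etc/rc.d/init.d/lvs
/etc/rc.d/init.d/lvs start
/sbin/ipvsadm -L -n        # should list 218.106.209.45:80 with the three real servers
/etc/rc.d/init.d/lvs stop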
Finally, the mon script (in /etc/rc.d/init.d):
#!/bin/sh
#
# the mon start/stop shell code
#
#
case "$1" in
start)
    /usr/local/mon/mon -B /usr/local/mon/etc &
    ;;
stop)
    killall mon
    ;;
*)
    echo "Usage: mon {start|stop}"
    exit 1
esac
exit 0
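With both directors configured, a basic failover test looks like this (a sketch only -- it assumes heartbeat's make install dropped an init script at /etc/rc.d/init.d/heartbeat and that heartbeat brings the VIP up as an ordinary interface alias):
# start heartbeat on lvs, then on lvs2
/etc/rc.d/init.d/heartbeat start
# on lvs (the primary) the VIP and the IPVS table should appear:
/sbin/ifconfig | grep 218.106.209.45
/sbin/ipvsadm -L -n
# stop heartbeat on lvs (or pull its power); within roughly deadtime (10s)
# lvs2 should take over the VIP and run the lvs resource script:
/etc/rc.d/init.d/heartbeat stop
# then run the same two checks on lvs2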
Step 4: Install NFS
This is relatively simple; for security, configure it as follows:
NFS server configuration:
The NFS server is installed on lvs2.
Enable the portmap and nfs services (in setup).
Edit /etc/hosts.deny:
portmap:ALL
Edit /etc/hosts.allow:
portmap:218.106.209.42
portmap:218.106.209.43
portmap:218.106.209.44
Edit /etc/hosts as described above.
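One thing still needed on the server is the export itself; assuming the shared directory is /session (the path the clients mount below), a minimal /etc/exports might look like this, reloaded afterwards with exportfs -a:
/session 218.106.209.42(rw) 218.106.209.43(rw) 218.106.209.44(rw)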
NFS client configuration:
The NFS clients are installed on the www servers:
Enable the portmap service (in setup).
Edit /etc/hosts.deny:
portmap:ALL
Edit /etc/hosts.allow:
portmap:218.106.209.42
portmap:218.106.209.43
portmap:218.106.209.44
Edit /etc/rc.d/rc.local and add:
/bin/mount -t nfs 218.106.209.42:/session /tmp
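To check the export and the mount from a client (a quick sanity check against the server address used in the mount line above):
showmount -e 218.106.209.42    # should list /session
mount | grep nfs               # should show /session mounted on /tmp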
Summary:
heartbeat, mon, and the lvs script must be installed on both lvs and lvs2.