1) Virtual Server via Network Address Translation (VS/NAT) (see Figure 1)
2) Virtual Server via IP Tunneling (VS/TUN) (see Figure 2)
3) Virtual Server via Direct Routing (VS/DR) (see Figure 3)
To suit different network service requirements and server configurations, the IPVS scheduler implements the following ten load-balancing algorithms:
Round Robin
Weighted Round Robin
Least Connections
Weighted Least Connections
Locality-Based Least Connections
Locality-Based Least Connections with Replication
Destination Hashing
Source Hashing
Shortest Expected Delay (newly added)
Never Queue (newly added)
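As an illustration only (this is not IPVS source code), the first two algorithms can be sketched in shell. The server addresses match the real servers used later in this article; the weights are made-up examples, and real IPVS uses a more refined gcd-based weighted rotation than the simple expanded list shown here.

```shell
#!/bin/bash
# Plain round robin: cycle through the real servers in order.
SERVERS=(192.168.7.11 192.168.7.12)
RR_INDEX=0
rr_pick() {
    local server=${SERVERS[$RR_INDEX]}
    RR_INDEX=$(( (RR_INDEX + 1) % ${#SERVERS[@]} ))
    echo "$server"
}

# Weighted round robin (simplified): repeat each server in the
# rotation proportionally to its weight, then cycle through that list.
WEIGHTS=(2 1)   # example: .11 gets twice the traffic of .12
WRR_LIST=()
for i in "${!SERVERS[@]}"; do
    for ((j = 0; j < WEIGHTS[i]; j++)); do
        WRR_LIST+=("${SERVERS[$i]}")
    done
done
WRR_INDEX=0
wrr_pick() {
    local server=${WRR_LIST[$WRR_INDEX]}
    WRR_INDEX=$(( (WRR_INDEX + 1) % ${#WRR_LIST[@]} ))
    echo "$server"
}

# Four requests under each policy:
for n in 1 2 3 4; do rr_pick; done    # .11 .12 .11 .12
for n in 1 2 3 4; do wrr_pick; done   # .11 .11 .12 .11
```

With `ipvsadm`, the same choice is made with `-s rr` or `-s wrr` plus a `-w` weight per real server.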
Install ldirectord and heartbeat on both the primary and the backup load balancer, and run heartbeat on both so that each monitors the other's health. As soon as the backup detects that the primary has failed, it runs a shell script that starts the necessary services on the backup node, takes over the virtual IP, and starts LVS and ldirectord via ipvsadm, completing the takeover of the load balancer role. Once the primary recovers, the backup runs a script that stops LVS and ldirectord on the backup node and hands the virtual IP back, and the primary resumes control of LVS and the ldirectord monitoring service. This gives the load balancer a standard takeover-and-recovery cycle.
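The takeover and release steps described above might be sketched as follows. This is an illustrative dry run, not the actual heartbeat resource script: the real commands (ifconfig for the VIP, ipvsadm, ldirectord) are only echoed so the control flow is visible, and the VIP and interface alias are the ones used elsewhere in this article.

```shell
#!/bin/bash
# Dry-run sketch of director failover; commands are echoed, not executed.
VIP=192.168.7.110
IFACE=eth0:0

take_over() {
    # Backup detected primary failure: claim the VIP, start LVS + ldirectord.
    echo "ifconfig $IFACE $VIP netmask 255.255.255.255 up"
    echo "ipvsadm -A -t $VIP:telnet -s rr"
    echo "ldirectord start"
}

release() {
    # Primary is back: stop the services and give the VIP up again.
    echo "ldirectord stop"
    echo "ipvsadm -C"
    echo "ifconfig $IFACE down"
}

take_over
release
```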
Red Hat 9.0, full installation: Red Hat 9.0 (kernel-2.4.20-8), gcc-3.2.2-5
If you are running Red Hat 8, you are in luck. Red Hat 8 (kernel-2.4.18-14) already ships with the ipvs patch applied and ipvs compiled as modules in the stock kernel; you can verify this by checking whether /lib/modules/2.4.18-14/kernel/net/ipv4/ipvs exists, and can then install ipvsadm directly. Starting with Red Hat 9 (kernel-2.4.20-8), Red Hat dropped the ipvs patch and the precompiled ipvs modules. Worse, the kernel-2.4.20-8 kernel shipped with Red Hat 9 fails to compile with the ipvs patch linux-2.4.20-ipvs-1.0.9.patch.gz applied, at least on both of my P4 machines, while the vanilla kernel linux-2.4.20.tar.gz compiles cleanly. We will therefore build our cluster system on the vanilla kernel-2.4.20.tar.gz.
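The Red Hat 8 check mentioned above can be scripted. A small sketch (the function name is my own; the directory argument defaults to the running kernel's module tree):

```shell
#!/bin/bash
# Report whether a kernel module tree already contains the ipvs modules,
# as the stock Red Hat 8 kernel (kernel-2.4.18-14) does.
has_ipvs() {
    local moddir=${1:-/lib/modules/$(uname -r)}
    if [ -d "$moddir/kernel/net/ipv4/ipvs" ]; then
        echo "ipvs modules present: install ipvsadm directly"
    else
        echo "no ipvs modules: patch and rebuild the kernel first"
    fi
}

has_ipvs /lib/modules/2.4.18-14
```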
2.4. Packages to download:
# Linux kernel http:///pub/linux/kernel/v2.4/linux-2.4.20.tar.gz
# ipvs patch
# ipvs tar ball
# hidden patch (already included in ipvs-1.0.9.tar.gz; no separate download needed) ~ja/hidden-2.4.20pre10-1.diff (fixes the ARP problem for LVS-DR/LVS-Tun)
# ipvsadm (already included in the ipvs-1.0.9.tar.gz package; no separate download needed)
# Ldirectord
# heartbeat
2.5. Installation requirements:
2.5.1 Director: Patch the kernel with the non-ARP patch hidden-2.4.20pre10-1.diff (fixes the ARP problem for LVS-DR/LVS-Tun). The director itself does not need it at runtime, but if the hardware matches the real servers, a kernel built with it can be reused on the real servers without a separate rebuild; you may also skip this patch. Patch the kernel with the ipvs patch linux-2.4.20-ipvs-1.0.9.patch.gz and rebuild. After booting the new kernel, compile and install ipvsadm. (The forwarding method for each real server/service, whether LVS-NAT (network address translation), LVS-DR (direct routing), or LVS-Tun (tunneling), is configured with ipvsadm.)
2.5.2 RealServers: (in LVS-NAT mode, no patches are needed)
Patch the kernel with the ARP patch hidden-2.4.20pre10-1.diff (the ARP problem for LVS-DR/LVS-Tun) and build the new kernel to solve the non-ARP problem. Set the default gateway (gw): for LVS-NAT, the director (DIP); for LVS-DR and LVS-Tun, the outside router (not the director's IP).
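On an LVS-DR real server running the hidden-patched kernel, the usual setup is to bring the VIP up on a hidden loopback alias, so the real server accepts traffic for the VIP without answering ARP for it. A sketch, with the VIP used elsewhere in this article; the hidden sysctls only exist on a kernel built with hidden-2.4.20pre10-1.diff, and the PROC variable is my own addition so the script can be dry-run against a scratch directory:

```shell
#!/bin/bash
# LVS-DR real server setup sketch: VIP on lo:0 plus the "hidden" sysctls
# added by hidden-2.4.20pre10-1.diff. Point PROC at a scratch directory
# to dry-run this without root or a patched kernel.
VIP=192.168.7.110
PROC=${PROC:-/proc}

configure_realserver() {
    # Accept packets addressed to the VIP locally (echoed: needs root to run).
    echo "ifconfig lo:0 $VIP netmask 255.255.255.255 up"
    # ...but hide lo from ARP, so only the director answers for the VIP.
    echo 1 > "$PROC/sys/net/ipv4/conf/all/hidden"
    echo 1 > "$PROC/sys/net/ipv4/conf/lo/hidden"
}

# configure_realserver   # run as root on each real server
```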
2.6.1 Installing the kernel on the Director or a RealServer:

export D=/tmp/download
mkdir $D
cd $D
wget http:///pub/linux/kernel/v2.4/linux-2.4.20.tar.gz
wget .patch.gz
wget
tar zxvf linux-2.4.20.tar.gz
tar zxvf ipvs-1.0.9.tar.gz
gunzip linux-2.4.20-ipvs-1.0.9.patch.gz
mv linux-2.4.20 /usr/src/linux-2.4.20
cd /usr/src
rm -f linux-2.4
ln -s linux-2.4.20 linux-2.4
cd linux-2.4
patch -p1 < $D/ipvs-1.0.9/contrib/patches/hidden-2.4.20pre10-1.diff (ARP patch, for LVS-DR/LVS-Tun)
patch -p1 < $D/linux-2.4.20-ipvs-1.0.9.patch (apply only when building the Director kernel)
make mrproper
cp /boot/config-2.4.20-8 .config (use the kernel config file shipped with Red Hat 9, or one from /usr/src/linux-2.4.7-10/configs)
make menuconfig (see the ipvs and kernel configuration notes), or under X: make xconfig
Relevant networking kernel options:
Relevant LVS kernel options:
make dep
make clean
make bzImage
make modules
make modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.20-lvs (rs)
cp System.map /boot/System.map.2.4.20-lvs (rs)
cp vmlinux /boot/vmlinux-2.4.20-lvs (rs)
cd /boot
rm -f System.map
ln -s System.map.2.4.20-lvs (rs) System.map
vi /boot/grub/grub.conf:
title 2.4.20-lvs
root (hd0,0)
kernel /boot/vmlinuz-2.4.20-lvs (rs) ro root=/dev/xxx

(The "(rs)" marks the name to use for the RealServer variant of the kernel.)
To install this kernel on other machines:
tar czf linux-2.4.20-dir.tgz /usr/src/linux-2.4.20/
On the other machine, unpack it into /usr/src:
tar zxvf linux-2.4.20-dir.tgz
rm -f linux-2.4
ln -s linux-2.4.20 linux-2.4
cd linux-2.4
make modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.20-lvs (rs)
cp System.map /boot/System.map.2.4.20-lvs (rs)
cd /boot
rm -f System.map
ln -s System.map.2.4.20-lvs (rs) System.map
vi /boot/grub/grub.conf:
title 2.4.20-lvs
root (hd0,0)
kernel /boot/vmlinuz-2.4.20-lvs (rs) ro root=/dev/xxx
2.6.2 Installing ipvsadm on the Director:
Boot Linux with the new kernel (patched with ipvs, and with hidden for LVS-DR/LVS-Tun), then:
cd /tmp/download/ipvs-1.0.9/ipvs/ipvsadm
make install
To check that ipvsadm detects the ipvs patch in the kernel, run ipvsadm.
If it succeeds you will see output similar to the following, which indicates the installation worked:
director:/usr/src# ipvsadm
IP Virtual Server version 0.2.7 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
Running lsmod | grep ip_vs shows that the ip_vs module has been loaded into the running kernel.
2.6.3. Run Scripts
The script settings below use LVS-DR, the most commonly used forwarding mode.
(Using the Telnet service with the round-robin (rr) scheduler as the example.)
2.6.3.1 On the Director:

#!/bin/bash
#---------------mini-rc.lvs_dr-director------------------------
#set ip_forward OFF for vs-dr director (1 on, 0 off)
cat /proc/sys/net/ipv4/ip_forward
echo 0 >/proc/sys/net/ipv4/ip_forward
#director is not gw for realservers: leave icmp redirects on
echo 'setting icmp redirects (1 on, 0 off)'
echo 1 >/proc/sys/net/ipv4/conf/all/send_redirects
cat /proc/sys/net/ipv4/conf/all/send_redirects
echo 1 >/proc/sys/net/ipv4/conf/default/send_redirects
cat /proc/sys/net/ipv4/conf/default/send_redirects
echo 1 >/proc/sys/net/ipv4/conf/eth0/send_redirects
cat /proc/sys/net/ipv4/conf/eth0/send_redirects
#add ethernet device and routing for VIP 192.168.7.110
#if using a backup director, pay close attention to the lines below
/sbin/ifconfig eth0:0 192.168.7.110 broadcast 192.168.7.110 netmask 255.255.255.255 up
/sbin/route add -host 192.168.7.110 dev eth0:0
#listing ifconfig info for VIP 192.168.7.110
/sbin/ifconfig eth0:0
#check VIP 192.168.7.110 is reachable from self (director)
/bin/ping -c 1 192.168.7.110
#listing routing info for VIP 192.168.7.110
/bin/netstat -rn
#setup_ipvsadm_table
#clear ipvsadm table
/sbin/ipvsadm -C
#installing LVS services with ipvsadm
#add telnet to VIP with round robin scheduling
/sbin/ipvsadm -A -t 192.168.7.110:telnet -s rr
#forward telnet to realserver using direct routing with weight 1
/sbin/ipvsadm -a -t 192.168.7.110:telnet -r 192.168.7.11 -g -w 1
#check realserver reachable from director
ping -c 1 192.168.7.11
#forward telnet to realserver using direct routing with weight 1
/sbin/ipvsadm -a -t 192.168.7.110:telnet -r 192.168.7.12 -g -w 1
#check realserver reachable from director
ping -c 1 192.168.7.12
#displaying ipvsadm settings
/sbin/ipvsadm
#not installing a default gw for LVS_TYPE vs-dr
#---------------mini-rc.lvs_dr-director------------------------
Example (HTTP service):

#
# Sample ldirectord configuration file to configure various virtual services.
#
# Ldirectord will connect to each real server once per second and request
# /index.html. If the data returned by the server does not contain the
# string "Test Message" then the test fails and the real server will be
# taken out of the available pool. The real server will be added back into
# the pool once the test succeeds. If all real servers are removed from the
# pool then localhost:80 is added to the pool as a fallback measure.

# Global Directives
checktimeout=3
checkinterval=1
fallback=127.0.0.1:80
autoreload=yes
#logfile="/var/log/ldirectord.log"
#logfile="local0"
quiescent=yes

# A sample virtual with a fallback that will override the global setting
virtual=192.168.7.110:80
        real=192.168.7.11:80 gate
        real=192.168.7.12:80 gate
        real=192.168.7.13:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        request="/.testpage"
        receive="Test Page"
        scheduler=rr
        #persistent=600
        #netmask=255.255.255.255
        protocol=tcp
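The check that ldirectord performs with the request/receive pair above amounts to fetching the test page and looking for the expected string. A rough shell equivalent (the function name is my own, and the fetch command is passed in as a parameter so the comparison logic can be shown without a live server):

```shell
#!/bin/bash
# Rough equivalent of ldirectord's request/receive health check:
# fetch the test page and grep for the expected string. $1 is the
# fetch command (e.g. "curl -s" against a real server, or "cat" on a
# local file for illustration), $2 the target, $3 the expected string.
check_page() {
    local fetch=$1 target=$2 expect=$3
    if $fetch "$target" 2>/dev/null | grep -q "$expect"; then
        echo "up"      # keep the real server in the pool
    else
        echo "down"    # remove it (weight set to 0 when quiescent=yes)
    fi
}
```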
Installing from source:
cd heartbeat-1.0.3
./ConfigureMe configure
make
make install
Installing from rpm:
rpm -ivh --nodeps heartbeat-1.0.3.i386.rpm
To match Red Hat 9, I installed the heartbeat rpm built for Red Hat 9, heartbeat-1.0.3-1.rh.9.1.i386.rpm. RedHat and Debian versions of the rpm can be downloaded from:
You may hit dependency errors during installation; install with the --nodeps option, and in any case install all of the rpm packages it reports as required (found in the dependancies directory). The other rpm packages in the dependancies directory can be skipped if you do not need the corresponding features.
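For the director pair described earlier, a minimal heartbeat configuration might look like the sketch below. The node names, interface, and timing values are illustrative assumptions, not values from this article; only the VIP matches the one used in the director script. The node names must match `uname -n` on each machine, and haresources must be identical on both nodes.

```
# /etc/ha.d/ha.cf  (illustrative values; adjust nodes and interface)
logfile /var/log/ha-log
keepalive 2          # heartbeat interval, in seconds
deadtime 10          # declare the peer dead after 10s of silence
udpport 694
bcast eth0           # send heartbeats over eth0 broadcast
nice_failback off    # primary retakes resources when it recovers
node director1
node director2

# /etc/ha.d/haresources  (identical on both nodes)
director1 192.168.7.110 ldirectord
```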