Category: LINUX
2006-07-24 14:42:51
Setting up LVS on Gentoo is pretty simple. My setup is described below. I use LVS-NAT instead of LVS-DR (direct routing). LVS-DR is faster than LVS-NAT, but I'm doing it this way for a few reasons.
Note: This document does not describe the setup of the mail nodes. That will be left for another document.
Note: I run a load balanced cluster for email. You should be able to adapt these docs for many other services.
Note: I'm going to assume you have a working installation of Gentoo on a box with two NICs. You might be able to do this on one NIC, but that is left as an exercise for the reader.
And here we go .... :-)
The first thing we're going to do is upgrade to the 2.6.x kernel. You don't have to do this, I suppose, but I did.
emerge -k gentoo-dev-sources
Now add IPVS to the kernel.
cd /usr/src/linux
make menuconfig
Here are the options that I set specific to LVS. Feel free to adjust as needed.
Code maturity level options --->
    [*] Prompt for development and/or incomplete code/drivers
Device Drivers --->
    Networking support --->
        Networking options --->
            [*] Network packet filtering (replaces ipchains) --->
            [ ]   Network packet filtering debugging
            IP: Virtual Server Configuration --->
                <M> IP virtual server support (EXPERIMENTAL)
                [ ]   IP virtual server debugging
                (12)  IPVS connection table size (the Nth power of 2)
                ---   IPVS transport protocol load balancing support
                [*]   TCP load balancing support
                [*]   UDP load balancing support
                [ ]   ESP load balancing support
                [ ]   AH load balancing support
                ---   IPVS scheduler
                <M>   round-robin scheduling
                <M>   weighted round-robin scheduling
                <M>   least-connection scheduling
                <M>   weighted least-connection scheduling
                <M>   locality-based least-connection scheduling
                <M>   locality-based least-connection with replication scheduling
                <M>   destination hashing scheduling
                <M>   source hashing scheduling
                <M>   shortest expected delay scheduling
                <M>   never queue scheduling
                ---   IPVS application helper
                <M>   FTP protocol helper
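Before building, you can sanity-check that the IPVS options actually made it into your config. From /usr/src/linux, this should print the IP_VS symbols you just enabled:
grep '^CONFIG_IP_VS' .config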
Now build and install your new kernel.
make && make modules_install
mount /boot
cp arch/i386/boot/bzImage /boot/kernel-version
cp System.map /boot/System.map-version
cp .config /boot/config-version
umount /boot
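Before you reboot, make sure your bootloader knows about the new kernel. As a sketch only (the partition, root device, and kernel file name here are assumptions; adjust them to your system), a GRUB entry in /boot/grub/grub.conf would look something like:
# (hd0,0) is assumed to be your /boot partition, /dev/hda3 your root fs
title Gentoo 2.6 (LVS)
root (hd0,0)
kernel /kernel-version root=/dev/hda3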
Boot to your new kernel and make sure everything works.
Because we're using the latest kernel we need to be sure that we have the latest version of ipvsadm.
emerge -k ">=sys-cluster/ipvsadm-1.24"
or
cd /usr/portage/sys-cluster/ipvsadm
emerge -k ipvsadm-1.24.ebuild
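Once you're running the new kernel and ipvsadm is installed, make sure the two agree. Listing the (still empty) virtual server table should load ip_vs and print a header rather than an error:
ipvsadm -L -n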
You may be able to skip this step. I set up NAT through iptables before I set up LVS-NAT. It's not a bad idea to have iptables on the gateway/firewall, though.
emerge -k iptables
heartbeat is what provides the high-availability features. It lets me set up a redundant LB that takes over from the main one if it goes down, and the ldirectord it pulls in monitors the real servers, automatically removing a node from the rotation when that node goes out of service for whatever reason.
USE='ldirectord' emerge -k heartbeat
Note: You can add ldirectord to your USE line in /etc/make.conf.
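For example, where "..." stands for whatever flags you already have:
USE="... ldirectord"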
Now the real fun begins. All of the configs are in /etc/ha.d so we'll start there.
We'll first copy the example configs to /etc/ha.d.
cd /usr/share/doc/heartbeat-version
cp ha.cf haresources authkeys /etc/ha.d
Here is my ha.cf with the comments removed.
logfacility local0
bcast eth1
node hydra cerberus
hydra is my primary LB and cerberus is the secondary. The names need to match the output of uname -n.
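The ha.cf above leans on heartbeat's default timings. If you want control over how fast the standby declares the primary dead, these are the relevant ha.cf directives; the values below are only a sketch, not what I run:
keepalive 2
deadtime 10
warntime 5
initdead 30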
Here is my haresources, again, with the comments removed. wan_ip is the WAN address of the cluster and lan_ip is the LAN address. The nodes will use lan_ip as their gateway address.
hydra wan_ip/24/eth0 lan_ip/24/eth1 ldirectord
Note: This file should be the same on all LVS servers in this group.
authkeys controls access to the LVS group. My LBs are talking to each other on a private network, so security isn't as big an issue here. If you are trying to set up LVS-DR, you will want to use something a bit more secure than this.
auth 1
1 crc
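heartbeat refuses to start if authkeys is readable by anyone but root, so lock it down:
chmod 600 /etc/ha.d/authkeys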
ldirectord.cf controls the load balancer itself. This is where you set which nodes handle which services and the weights for the nodes.
logfile="local0"

virtual = wan_ip:25
        real = node1:25 masq 1000
        real = node2:25 masq 1000
        real = node3:25 masq 1000
        real = node4:25 masq 667
        checktimeout = 10
        scheduler = sed
        protocol = tcp

virtual = wan_ip:110
        real = node1:110 masq 1000
        real = node2:110 masq 1000
        real = node3:110 masq 1000
        real = node4:110 masq 667
        scheduler = sed
        protocol = tcp

virtual = wan_ip:143
        real = node1:143 masq 1000
        real = node2:143 masq 1000
        real = node3:143 masq 1000
        real = node4:143 masq 1000
        scheduler = sed
        protocol = tcp

virtual = wan_ip:80
        real = node1:80 masq 10
        real = node2:80 masq 10
        real = node3:80 masq 10
        real = node4:80 masq 10
        real = node5:80 masq 1000
        scheduler = sed
        persistent = 300
        protocol = tcp
        request = "/testpage.html"
        receive = "This server seems to be up."
See ipvsadm(8) and ldirectord(8) for details on what these options mean.
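The mail virtuals above get ldirectord's plain connect check. ldirectord can also speak some protocols itself; as a sketch (not part of my running config), the port 25 block could do a real SMTP greeting check using the checktype and service options:
virtual = wan_ip:25
        real = node1:25 masq 1000
        checktype = negotiate
        service = smtp
        scheduler = sed
        protocol = tcp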
I want to bring the last block to your attention. That is the setup for WebMail. The persistent option is required to keep users going to the same web server when they log in, which is needed to preserve session information.
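If you're curious what ldirectord does with that block under the hood, the rough ipvsadm equivalent is below: -p 300 is the persistence timeout, -m means masquerading (NAT), and -w sets the weight (only node5 and node1 shown):
ipvsadm -A -t wan_ip:80 -s sed -p 300
ipvsadm -a -t wan_ip:80 -r node5:80 -m -w 1000
ipvsadm -a -t wan_ip:80 -r node1:80 -m -w 10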
We need to turn on IP forwarding in /etc/conf.d/iptables or the NAT won't work.
ENABLE_FORWARDING_IPv4="yes"
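You can confirm forwarding is actually on after the init script runs; this should print 1:
cat /proc/sys/net/ipv4/ip_forward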
Speaking of NAT, I added a NAT rule to iptables. I'm still not 100% sure that it was needed, but it doesn't seem to hurt.
iptables -t nat -A POSTROUTING -s lan_net/255.255.255.0 -j MASQUERADE
/etc/init.d/iptables save
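To confirm the rule is in place (and later watch its packet counters climb):
iptables -t nat -L POSTROUTING -n -v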
Everything is configured, so let's turn things on and test.
/etc/init.d/iptables start
/etc/init.d/heartbeat start
/etc/init.d/ldirectord start
Use ipvsadm to view the status of the load balancer.
# ipvsadm
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP wan_ip:smtp sed
-> mail4:smtp Masq 667 22 99
-> mail3:smtp Masq 1000 34 130
-> mail2:smtp Masq 1000 28 193
-> mail1:smtp Masq 1000 28 104
TCP wan_ip:www sed persistent 300
-> mail5:www Masq 1000 6 18
-> mail4:www Masq 10 0 0
-> mail3:www Masq 10 0 0
-> mail2:www Masq 10 0 0
-> mail1:www Masq 10 0 0
TCP wan_ip:pop3 sed
-> mail4:pop3 Masq 667 2 46
-> mail3:pop3 Masq 1000 3 54
-> mail2:pop3 Masq 1000 3 21
-> mail1:pop3 Masq 1000 2 43
TCP wan_ip:imap2 sed
-> mail4:imap2 Masq 1000 2 0
-> mail3:imap2 Masq 1000 1 1
-> mail2:imap2 Masq 1000 1 0
-> mail1:imap2 Masq 1000 0 3
Now that everything is up, it's time to make sure it works. Use a client or telnet to the wan_ip from outside the cluster. You should be able to see your connection in the server logs.
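For example, to poke SMTP by hand from outside:
telnet wan_ip 25
You should get a 220 greeting from one of the nodes, and the connection should show up in ipvsadm's counters.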
At this point you should have a working system and it's now time to make sure all the services we need will be started when the machine reboots.
rc-update add iptables default
rc-update add heartbeat default
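A quick way to confirm both made it into the default runlevel:
rc-update show default
ldirectord doesn't need its own entry here; heartbeat starts it through haresources.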
And that, as they say, is that.