On a Linux system, the bonding driver can bind two network cards together behind a single external IP address, for high availability or load balancing. Below is a brief walkthrough of how to use it.
1. Install two network cards
Configure both cards under Linux so that each one works on its own. You may run into driver problems; you will have to sort those out yourself, either by enabling the right option in the kernel configuration or by finding a third-party driver.
The two cards do not need to be the same model. I used one discrete card and one onboard card, and it worked fine.
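A quick way to confirm that both cards are detected and work independently (the interface names eth0 and eth1 are assumptions matching the example later in this article):
# Both Ethernet controllers should show up on the PCI bus
lspci | grep -i ethernet
# Each interface should come up and report link on its own
ifconfig eth0
ifconfig eth1
ethtool eth0 | grep "Link detected"
ethtool eth1 | grep "Link detected"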
2. Compile the kernel
Add bonding support to the kernel.
In the kernel source directory, run make menuconfig
and enable Device Drivers -> Network device support -> Bonding driver support.
In 2.4 and later kernels this is generally enabled by default; I used 2.6.22.
The rest of the kernel build process is not covered here.
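To double-check that bonding support actually made it into your build (a quick sanity check, run from the kernel source tree and on the running system respectively):
# In the kernel source tree: bonding should be built in (y) or as a module (m)
grep CONFIG_BONDING .config
# On the running system: the bonding module should be available to load
modinfo bonding | head -n 3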
3. Bonding configuration
The following is an excerpt from Documentation/networking/bonding.txt in the kernel source tree:
3.2 Configuration with Initscripts Support
------------------------------------------
This section applies to distros using a version of initscripts with bonding support, for example, Red Hat Linux 9 or Red Hat Enterprise Linux version 3 or 4. On these systems, the network
initialization scripts have some knowledge of bonding, and can be configured to control bonding devices.
These distros will not automatically load the network adapter driver unless the ethX device is configured with an IP address. Because of this constraint, users must manually configure a
network-script file for all physical adapters that will be members of a bondX link. Network script files are located in the directory:
/etc/sysconfig/network-scripts
The file name must be prefixed with "ifcfg-eth" and suffixed with the adapter's physical adapter number. For example, the script for eth0 would be named /etc/sysconfig/network-scripts/ifcfg-eth0. Place the following text in the file:
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
The DEVICE= line will be different for every ethX device and must correspond with the name of the file, i.e., ifcfg-eth1 must have a device line of DEVICE=eth1. The setting of the MASTER= line will also depend on the final bonding interface name chosen for your bond. As with other network devices, these typically start at 0, and go up one for each device, i.e., the first bonding instance is bond0, the second is bond1, and so on.
Next, create a bond network script. The file name for this script will be /etc/sysconfig/network-scripts/ifcfg-bondX where X is the number of the bond. For bond0 the file is named "ifcfg-bond0",
for bond1 it is named "ifcfg-bond1", and so on. Within that file, place the following text:
DEVICE=bond0
IPADDR=192.168.1.1
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
Be sure to change the networking specific lines (IPADDR, NETMASK, NETWORK and BROADCAST) to match your network configuration.
Finally, it is necessary to edit /etc/modules.conf (or /etc/modprobe.conf, depending upon your distro) to load the bonding module with your desired options when the bond0 interface is brought
up. The following lines in /etc/modules.conf (or modprobe.conf) will load the bonding module, and select its options:
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
Replace the sample parameters with the appropriate set of
options for your configuration.
Finally run "/etc/rc.d/init.d/network restart" as root. This will restart the networking subsystem and your bond link should be now up and running.
If the documentation above makes sense to you, just follow it; if not, following my example below works too.
vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
NETWORK=192.168.0.0
IPADDR=192.168.0.111
NETMASK=255.255.255.0
BROADCAST=192.168.0.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
vi /etc/modprobe.conf
Add the following two lines:
alias bond0 bonding
options bond0 miimon=100 mode=1
The miimon parameter enables MII link monitoring. miimon=100 means the driver checks the link state every 100 ms; if one link goes down, traffic switches over to the other.
The mode parameter selects the bonding policy.
mode=0 (balance-rr) uses both cards at the same time, in round-robin fashion, and provides load balancing.
mode=1 (active-backup) is a fault-tolerant, active/standby setup: only one card carries traffic at a time, providing high availability.
Once the configuration is done, reboot the machine; if you show the boot details during startup, you can see bonding being brought up successfully.
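A reboot is not strictly necessary. As the documentation excerpt above notes, restarting the network service is usually enough on a Red Hat-style initscripts system (a sketch under that assumption):
# The initscripts load the bonding module via the alias in modprobe.conf
# and bring up bond0 together with its slaves
/etc/rc.d/init.d/network restart
# Verify that the bond came up
ifconfig bond0
cat /proc/net/bonding/bond0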
4. Check the bonding status
[root@localhost /]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:1D:60:99:8C:EB
          inet addr:192.168.0.111  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:2805 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2006 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:281846 (275.2 KiB)  TX bytes:481206 (469.9 KiB)

eth0      Link encap:Ethernet  HWaddr 00:1D:60:99:8C:EB
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:2302 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2014 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:234137 (228.6 KiB)  TX bytes:482422 (471.1 KiB)
          Interrupt:17 Base address:0xd400

eth1      Link encap:Ethernet  HWaddr 00:1D:60:99:8C:EB
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:506 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:47889 (46.7 KiB)  TX bytes:0 (0.0 b)
          Interrupt:21 Base address:0x4c00

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2640 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2640 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4668416 (4.4 MiB)  TX bytes:4668416 (4.4 MiB)
[root@localhost /]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.1.3 (June 13, 2007)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1d:60:99:8c:eb

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:05:5d:74:2a:81
Now try unplugging either network cable. Does the network keep working? What changes do you see under /proc? That's it, you're done!
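As a hypothetical illustration of what to expect (assuming the cable of the currently active slave eth0 is pulled), the relevant lines of /proc/net/bonding/bond0 should change roughly like this, while bond0 keeps its IP and traffic fails over to eth1:
Currently Active Slave: eth1
...
Slave Interface: eth0
MII Status: down
Link Failure Count: 1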