
Simply put, several NICs are bound together into one virtual NIC, usually named bond0, bond1, bond2, and so on. The technology behind this is called bonding. The posts below sum it up well, so I am reposting them here.

===================================
Dual-NIC bonding on Linux for load balancing and failover

     Keeping servers highly available is a key requirement in enterprise IT environments, and one of the most important pieces of that is high availability of the server's network connection. NIC bonding helps deliver that availability and brings additional benefits that improve network performance.

      The dual-NIC bonding described here virtualizes two physical NICs into one. The aggregated device looks like a single Ethernet interface; put simply, the two NICs share the same IP address and work in parallel as one aggregated logical link. The technique has long existed on Sun and Cisco equipment, where it is called Trunking and EtherChannel; the Linux 2.4.x kernel adopted the same idea under the name bonding. Bonding was originally developed for Beowulf clusters to speed up data transfer between cluster nodes. To understand how it works, start from the NIC's promiscuous (promisc) mode. Normally a NIC only accepts Ethernet frames whose destination hardware address (MAC address) matches its own and filters out everything else, to keep the driver's workload down. A NIC can also run in promiscuous mode, in which it accepts every frame on the wire; tools such as tcpdump run in this mode. Bonding also runs in this mode, and in addition it rewrites the MAC address in the drivers so that both NICs carry the same MAC and accept frames addressed to that shared MAC; the frames are then handed to the bond driver for processing.
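Once the bond described in the steps below is up, a quick way to see this MAC sharing in practice is to compare the hardware address reported for the bond and for each slave (the names bond0, eth0 and eth1 match the example that follows):

ifconfig bond0 | grep HWaddr
ifconfig eth0  | grep HWaddr
ifconfig eth1  | grep HWaddr
# the factory ("permanent") MAC of each slave is still recorded here:
grep -A 3 "Slave Interface" /proc/net/bonding/bond0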
    That is the theory; the configuration itself is simple and takes only four steps.
The operating system used in this experiment is Red Hat Enterprise Linux 3.0.
Prerequisites for bonding: the NICs should use the same chipset model, and each NIC should have its own independent BIOS chip.

1. Edit the virtual network interface configuration file and assign the IP address
  1. [root@rhas-13 root]# cp /etc/sysconfig/network-scripts/ifcfg-eth0 ifcfg-bond0
  2. vi /etc/sysconfig/network-scripts/ifcfg-bond0
2. # vi ifcfg-bond0
  1. Change the first line to DEVICE=bond0
  2. # cat ifcfg-bond0
  3. DEVICE=bond0
  4. BOOTPROTO=static
  5. IPADDR=172.31.0.13
  6. NETMASK=255.255.252.0
  7. BROADCAST=172.31.3.254
  8. ONBOOT=yes
  9. TYPE=Ethernet
Note: do not assign an IP address, netmask, or NIC ID to the individual physical NICs; put all of that information into the virtual (bonding) adapter instead.
[root@rhas-13 network-scripts]# cat ifcfg-eth0 
  1. DEVICE=eth0
  2. ONBOOT=yes
  3. BOOTPROTO=dhcp
  4. [root@rhas-13 network-scripts]# cat ifcfg-eth1
  5. DEVICE=eth1
  6. ONBOOT=yes
  7. BOOTPROTO=dhcp

3. # vi /etc/modules.conf
Edit /etc/modules.conf and add the lines below so that the system loads the bonding module at boot and presents the virtual network interface to the outside world as bond0.
 
Add the following two lines:
  1. alias bond0 bonding
  2. options bond0 miimon=100 mode=1
Explanation: miimon enables link monitoring. With miimon=100, the system checks the link state every 100 ms and, if one path goes down, switches traffic to the other path. mode selects the bonding mode; the driver supports several modes, of which 0 and 1 are the ones most commonly used.
  1.    mode=0 is load balancing (round-robin); both NICs carry traffic.
  2.    mode=1 is fault-tolerance (active-backup); it provides redundancy in an active/standby arrangement, i.e. by default only one NIC carries traffic while the other stands by as a backup.
With miimon, bonding can only monitor the link from the host to the switch. If the switch's uplink goes down while the switch itself is still healthy, bonding considers the link fine and keeps using it.
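One way around this limitation, supported by the bonding driver itself, is ARP link monitoring: instead of (not in addition to) miimon, the driver periodically sends ARP requests to a target IP address and declares the path dead if they go unanswered. A minimal sketch of the /etc/modules.conf lines, where 172.31.0.1 is only a placeholder for a reachable gateway on your network:

alias bond0 bonding
# probe the gateway every 1000 ms instead of only checking local carrier
options bond0 mode=1 arp_interval=1000 arp_ip_target=172.31.0.1

Because the probes have to cross the switch to reach the target, this catches failures beyond the first switch that pure MII monitoring would miss.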
4. # vi /etc/rc.d/rc.local
Add these two lines:
  1. ifenslave bond0 eth0 eth1
  2. route add -net 172.31.3.254 netmask 255.255.255.0 bond0

That completes the configuration; reboot the machine.
If you see the following messages during boot, the setup succeeded:
................ 
Bringing up interface bond0 OK 
Bringing up interface eth0 OK 
Bringing up interface eth1 OK 
................


Now let's look at the behavior when mode is 1 and when it is 0.

mode=1
In active-backup mode, eth1 is the backup NIC and is marked NOARP:
    1. [root@rhas-13 network-scripts]# ifconfig          (verify the NIC configuration)
    2. bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
    3.           inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
    4.           UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
    5.           RX packets:18495 errors:0 dropped:0 overruns:0 frame:0
    6.           TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
    7.           collisions:0 txqueuelen:0
    8.           RX bytes:1587253 (1.5 Mb) TX bytes:89642 (87.5 Kb)
    9.   
    10. eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
    11.           inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
    12.           UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
    13.           RX packets:9572 errors:0 dropped:0 overruns:0 frame:0
    14.           TX packets:480 errors:0 dropped:0 overruns:0 carrier:0
    15.           collisions:0 txqueuelen:1000
    16.           RX bytes:833514 (813.9 Kb) TX bytes:89642 (87.5 Kb)
    17.           Interrupt:11
    18.   
    19. eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
    20.           inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
    21.           UP BROADCAST RUNNING NOARP SLAVE MULTICAST MTU:1500 Metric:1
    22.           RX packets:8923 errors:0 dropped:0 overruns:0 frame:0
    23.           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
    24.           collisions:0 txqueuelen:1000
    25.           RX bytes:753739 (736.0 Kb) TX bytes:0 (0.0 b)
    26.           Interrupt:15
    In other words, in active-backup mode, when one network path fails (for example the primary switch loses power), there is no network outage: the system keeps working through the NICs in the order specified in /etc/rc.d/rc.local, the machine keeps serving clients, and you get failover protection.
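A simple way to exercise this failover by hand is to down the active slave and watch the other one take over (a sketch, assuming the bond is running in mode=1 as configured above):

cat /proc/net/bonding/bond0     # note which slave is currently active
ifconfig eth0 down              # simulate a failure of the active slave
cat /proc/net/bonding/bond0     # the backup slave should have taken over
ifconfig eth0 up                # restore the interface when done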

mode=0    
Load-balancing mode, which can provide roughly twice the bandwidth. Let's look at the NIC configuration:
  1. [root@rhas-13 root]# ifconfig
  2. bond0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
  3. inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
  4. UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
  5. RX packets:2817 errors:0 dropped:0 overruns:0 frame:0
  6. TX packets:95 errors:0 dropped:0 overruns:0 carrier:0
  7. collisions:0 txqueuelen:0
  8. RX bytes:226957 (221.6 Kb) TX bytes:15266 (14.9 Kb)
  9. eth0 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
  10. inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
  11. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
  12. RX packets:1406 errors:0 dropped:0 overruns:0 frame:0
  13. TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
  14. collisions:0 txqueuelen:1000
  15. RX bytes:113967 (111.2 Kb) TX bytes:7268 (7.0 Kb)
  16. Interrupt:11
  17. eth1 Link encap:Ethernet HWaddr 00:0E:7F:25:D9:8B
  18. inet addr:172.31.0.13 Bcast:172.31.3.255 Mask:255.255.252.0
  19. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
  20. RX packets:1411 errors:0 dropped:0 overruns:0 frame:0
  21. TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
  22. collisions:0 txqueuelen:1000
  23. RX bytes:112990 (110.3 Kb) TX bytes:7998 (7.8 Kb)
  24. Interrupt:15

      In this mode, if one NIC fails, the only effect is that the server's outbound bandwidth drops; network connectivity is not affected.
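To confirm that round-robin really spreads traffic across both slaves, compare the per-interface counters while some traffic flows. A rough sketch; the ping target 172.31.0.1 is only a placeholder for any reachable host:

grep -E "eth0|eth1" /proc/net/dev     # snapshot the RX/TX counters
ping -c 100 172.31.0.1 > /dev/null    # generate some traffic
grep -E "eth0|eth1" /proc/net/dev     # both TX counters should have grown

Keep in mind that round-robin only balances transmitted traffic; how received traffic is spread depends on the switch.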




You can get a detailed picture of how bonding is behaving by querying bond0's working status:
  1. [root@rhas-13 bonding]# cat /proc/net/bonding/bond0
  2. bonding.c:v2.4.1 (September 15, 2003)
  3. Bonding Mode: load balancing (round-robin)
  4. MII Status: up
  5. MII Polling Interval (ms): 0
  6. Up Delay (ms): 0
  7. Down Delay (ms): 0
  8. Multicast Mode: all slaves
  9. Slave Interface: eth1
  10. MII Status: up
  11. Link Failure Count: 0
  12. Permanent HW addr: 00:0e:7f:25:d9:8a
  13. Slave Interface: eth0
  14. MII Status: up
  15. Link Failure Count: 0
  16. Permanent HW addr: 00:0e:7f:25:d9:8b
     NIC bonding on Linux improves both server reliability and available network bandwidth, providing users with uninterrupted critical services. The method above has been tested successfully on several Red Hat releases and works well. Don't just think about it, give it a try!

Reference:
/usr/share/doc/kernel-doc-2.4.21/networking/bonding.txt


-----------------------------

Finally, today I implemented NIC bonding (binding both NICs so that they work as a single device). Bonding is a Linux kernel feature that allows you to aggregate multiple network interfaces (such as eth0 and eth1) into a single virtual link such as bond0. The idea is pretty simple: get higher data rates as well as link failover. The following instructions were tested on:

  1. RHEL v4 / 5 / 6 amd64
  2. CentOS v5 / 6 amd64
  3. Fedora Linux 13 amd64 and up.
  4. 2 x PCI-e Gigabit Ethernet NICs with Jumbo Frames (MTU 9000)
  5. Hardware RAID-10 w/ SAS 15k enterprise grade hard disks.
  6. Gigabit switch with Jumbo Frame


Say Hello to the Bonding Driver

This server acts as a heavy-duty FTP and NFS file server. Each night a Perl script transfers lots of data from this box to a backup server, so the network is set up with dual network cards on a switch. I am using Red Hat Enterprise Linux version 4.0, but the instructions should work on RHEL 5 and 6 too.

Linux allows binding of multiple network interfaces into a single channel/NIC using a special kernel module called bonding. According to the official bonding documentation:

The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
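Before touching any configuration files, it can be worth confirming that the driver is available and seeing which module parameters your kernel's bonding version supports; this quick check is not part of the original walkthrough:

# list the bonding driver version and every supported option
# (mode, miimon, primary, arp_interval, and so on)
modinfo bonding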

Step #1: Create a Bond0 Configuration File

Red Hat Enterprise Linux (and its clones such as CentOS) stores network configuration in the /etc/sysconfig/network-scripts/ directory. First, you need to create a bond0 config file as follows:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Append the following lines:

  1. DEVICE=bond0
  2. IPADDR=192.168.1.20
  3. NETWORK=192.168.1.0
  4. NETMASK=255.255.255.0
  5. USERCTL=no
  6. BOOTPROTO=none
  7. ONBOOT=yes

Replace the IP address with values that match your actual setup. Save and close the file.

Step #2: Modify eth0 and eth1 config files

Open both configuration files using a text editor such as vi/vim, and make sure the eth0 file reads as follows:

  1. # vi /etc/sysconfig/network-scripts/ifcfg-eth0
Modify/append the directives as follows:

  1. DEVICE=eth0
  2. USERCTL=no
  3. ONBOOT=yes
  4. MASTER=bond0
  5. SLAVE=yes
  6. BOOTPROTO=none

Open the eth1 configuration file using the vi text editor, enter:

  1. # vi /etc/sysconfig/network-scripts/ifcfg-eth1
Make sure the file reads as follows for the eth1 interface:

  1. DEVICE=eth1
  2. USERCTL=no
  3. ONBOOT=yes
  4. MASTER=bond0
  5. SLAVE=yes
  6. BOOTPROTO=none

Save and close the file.

Step #3: Load the bonding driver/module

Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel module configuration file:

  1. # vi /etc/modprobe.conf
Append the following two lines:

  1. alias bond0 bonding
  2. options bond0 mode=balance-alb miimon=100
Save the file and exit to the shell prompt. You can learn more about all the bonding options in the kernel's bonding documentation (the bonding.txt file referenced earlier).
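Note that /etc/modprobe.conf only exists on older releases. On RHEL/CentOS 5.3+ and 6 the usual pattern is to declare just the alias in a file under /etc/modprobe.d/ and move the driver options into a BONDING_OPTS line in the ifcfg-bond0 file; a hedged equivalent of the settings above (the file name bonding.conf is an arbitrary choice):

# /etc/modprobe.d/bonding.conf
alias bond0 bonding

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- add this line
BONDING_OPTS="mode=balance-alb miimon=100"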

Step #4: Test the configuration

First, load the bonding module, enter:

  1. # modprobe bonding
Restart the networking service in order to bring up the bond0 interface, enter:
  1. # service network restart
Make sure everything is working. Type the following to query the current status of the Linux kernel bonding driver:
  1. # cat /proc/net/bonding/bond0
Sample outputs:

  1. Bonding Mode: load balancing (round-robin)
  2. MII Status: up
  3. MII Polling Interval (ms): 100
  4. Up Delay (ms): 200
  5. Down Delay (ms): 200
  6. Slave Interface: eth0
  7. MII Status: up
  8. Link Failure Count: 0
  9. Permanent HW addr: 00:0c:29:c6:be:59
  10. Slave Interface: eth1
  11. MII Status: up
  12. Link Failure Count: 0
  13. Permanent HW addr: 00:0c:29:c6:be:63

To list all network interfaces, enter:

  1. # ifconfig
Sample outputs:

  1. bond0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
  2. inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
  3. inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
  4. UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
  5. RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
  6. TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
  7. collisions:0 txqueuelen:0
  8. RX bytes:250825 (244.9 KiB) TX bytes:244683 (238.9 KiB)
  9. eth0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
  10. inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
  11. inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
  12. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
  13. RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
  14. TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
  15. collisions:0 txqueuelen:1000
  16. RX bytes:251161 (245.2 KiB) TX bytes:180289 (176.0 KiB)
  17. Interrupt:11 Base address:0x1400
  18. eth1 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
  19. inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0
  20. inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
  21. UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
  22. RX packets:4 errors:0 dropped:0 overruns:0 frame:0
  23. TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
  24. collisions:0 txqueuelen:1000
  25. RX bytes:258 (258.0 b) TX bytes:66516 (64.9 KiB)
  26. Interrupt:10 Base address:0x1480

Read the full bonding documentation, which covers the following additional topics:

  • VLAN Configuration
  • Cisco switch related configuration
  • Advanced routing and troubleshooting

--------------------------------------------------


While learning SUSE you run into all kinds of situations. I have looked into a number of SUSE issues, and today's topic is how to bond two NICs on SUSE. I hope this article helps you learn and remember the SUSE dual-NIC bonding procedure.

1. The simple method
---------------------------------------------------------- 

Bind the two fabric NICs into bond1:
# vi /etc/sysconfig/network/ifcfg-bond1
-------------------- 
BOOTPROTO='static'
IPADDR='10.69.16.102'
NETMASK='255.255.255.0'
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=1 miimon=200'
BONDING_SLAVE0='eth1'
BONDING_SLAVE1='eth2'
-------------------- 

Delete the original NIC configuration files and restart the network service:
cd /etc/sysconfig/network/
rm ifcfg-eth1
rm ifcfg-eth2
rcnetwork restart
Use the ifconfig command to check whether the bond came up successfully. If bond1 now holds the IP address, the two original NICs no longer carry an IP, and all of the MAC addresses are identical, the bond is working.
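Concretely, the check might look like this (a sketch; the bond and slave names follow the example above):

ifconfig bond1                  # should carry the IP address 10.69.16.102
ifconfig eth1                   # no IP address, same MAC as bond1
ifconfig eth2                   # no IP address, same MAC as bond1
cat /proc/net/bonding/bond1     # both slaves listed with MII Status: up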

2. The more formal method
---------------------------------------------------------- 

Step 1: Change into the network configuration directory:
# cd /etc/sysconfig/network

Step 2: Create the ifcfg-bond0 configuration file.
# vi ifcfg-bond0

Add the following content to ifcfg-bond0:
#suse 9 kernel 2.6 ifcfg-bond0
BOOTPROTO='static'
DEVICE='bond0'
IPADDR='10.71.122.13'
NETMASK='255.255.255.0'
NETWORK='10.71.122.0'
BROADCAST='10.71.122.255'
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=1 miimon=200'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'

Step 3: When the configuration is complete, save the file and exit.

Step 4: Create the ifcfg-eth0 configuration file.
(After installing SUSE 9, /etc/sysconfig/network already contains two files named after the NICs' MAC addresses; simply overwrite those two files with the ifcfg-eth0 content below instead of creating new ifcfg-eth0 and ifcfg-eth1 files. On SUSE 10, follow the steps below as written.)
# vi ifcfg-eth0

Add the following content to ifcfg-eth0:
DEVICE='eth0'
BOOTPROTO='static'
STARTMODE='onboot'

Step 5: Save the file and exit.

Step 6: Create the ifcfg-eth1 configuration file.
# vi ifcfg-eth1

Add the following content to ifcfg-eth1:
DEVICE='eth1'
BOOTPROTO='static'
STARTMODE='onboot'

Step 7: Save the file and exit.

Step 8: Restart the network service so that the configuration takes effect.
# rcnetwork restart

3. The method SUSE itself mainly recommends, and the one I personally prefer
----------------------------------------------------------

I. Configure loading of the NIC drivers

Add the NIC drivers to the MODULES_LOADED_ON_BOOT parameter in /etc/sysconfig/kernel, for example:
MODULES_LOADED_ON_BOOT="tg3 e1000"

Note: in most cases this step is unnecessary. It is only needed when a NIC driver initializes so slowly during boot that the card is not recognized in time and the bond fails to form correctly, i.e. some slave devices do not join the bond.

II. Create a configuration file for each NIC to be bonded:
/etc/sysconfig/network/ifcfg-eth*, where * is a number, e.g. ifcfg-eth0, ifcfg-eth1, and so on.

Each file contains the following:
BOOTPROTO='none'
STARTMODE='off'

III. Create the bond0 configuration file
/etc/sysconfig/network/ifcfg-bond0

with the following content:
-------------------- 
BOOTPROTO='static'
BROADCAST='192.168.1.255'
IPADDR='192.168.1.1'
NETMASK='255.255.255.0'
NETWORK='192.168.1.0'
STARTMODE='onboot'
BONDING_MASTER='yes'
# mode=1 selects active-backup; mode=0 selects balance-rr
BONDING_MODULE_OPTS='mode=1 miimon=100 use_carrier=1'
BONDING_SLAVE0='eth0'
BONDING_SLAVE1='eth1'
-------------------- 

IV. For active-backup mode, add a parameter to BONDING_MODULE_OPTS designating the primary device, for example:

BONDING_MODULE_OPTS='mode=1 miimon=100 use_carrier=1 primary=eth0'

V. Restart the network service

rcnetwork restart

VI. Notes

(1) In some cases a NIC driver takes a relatively long time to initialize, which can cause the bonding setup to fail. If that happens, edit the WAIT_FOR_INTERFACES parameter in the /etc/sysconfig/network/config configuration file and change its value to 30 (see the example after these notes).

(2) After configuring bonding, you can verify that it is working by pinging the server from a client while unplugging and re-plugging network cables on the server.

(3) cat /proc/net/bonding/bond0 shows the current bonding status. With that, the SUSE dual-NIC bonding setup is complete.
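For reference, the WAIT_FOR_INTERFACES change mentioned in note (1) is a single line in /etc/sysconfig/network/config; the value is the number of seconds the network scripts wait for interfaces to appear:

# /etc/sysconfig/network/config
WAIT_FOR_INTERFACES="30"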



