
[Study Notes] Linux bond configuration: active/backup NIC bonding and failover testing

Date: 2016-10-17 15:35 | Source: Oracle Research Center | Author: Network

天萃荷净 shares a case study on the active/backup mode, the most commonly used of the seven Linux bonding modes, configuring multiple bond devices on RHEL 5.
Linux bonding supports seven modes; mode 1 (active-backup) is the one used most often. It needs no configuration on the switch, and when the active NIC fails the bond switches to the backup automatically. The procedure below has only been verified on RHEL 5: RHEL 4 definitely does not support it, and RHEL 6 has not been tried here. Configuring NIC bonding on RHEL is noticeably less convenient than on SUSE or AIX.

1. Check the NICs and cable link status

[root@localhost ~]# mii-tool
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok
eth2: negotiated 100baseTx-FD, link ok
eth3: negotiated 100baseTx-FD, link ok
eth4: negotiated 100baseTx-FD, link ok
[root@localhost ~]# ifconfig -a|grep addr
eth0      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:8D 
          inet addr:192.168.111.6  Bcast:192.168.111.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:97 
eth2      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:A1 
eth3      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:AB 
eth4      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:B5 
          inet addr:127.0.0.1  Mask:255.0.0.0
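
The link states reported by mii-tool can also be checked in a script. A minimal sketch, run here against a pasted sample of the output above (on a real host you would pipe the output of `mii-tool` itself, which needs root):

```shell
# Flag any interface whose link is down; print a confirmation when none are.
# The here-doc stands in for live `mii-tool` output.
grep 'no link' <<'EOF' || echo "all links ok"
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok
eth2: negotiated 100baseTx-FD, link ok
EOF
```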

2. Edit /etc/modprobe.conf

[root@localhost log]# tail -2 /etc/modprobe.conf
alias bond0 bonding
alias bond1 bonding
[root@localhost ~]# set -o vi
[root@localhost ~]# modprobe bonding
[root@localhost ~]# modprobe bond1
[root@localhost ~]# modprobe bond0
modprobe loads a module by hand. A few other module commands are useful here as well: lsmod lists the modules already loaded, modinfo shows a module's details, and rmmod removes a loaded module.
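
As a sketch of how these fit together (lsmod is itself a formatted view of /proc/modules; modprobe and rmmod need root):

```shell
# Check whether the bonding driver is currently loaded by reading
# /proc/modules, the same source that lsmod formats.
if grep -q '^bonding' /proc/modules 2>/dev/null; then
    echo "bonding: loaded"
else
    echo "bonding: not loaded (root can load it with: modprobe bonding)"
fi
```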

3. Configure the IP addresses

Here eth1 and eth2 are bonded into bond0, and eth3 and eth4 into bond1; bond0 and bond1 are given the IPs shown in the configuration below.

[root@localhost network-scripts]# cat ifcfg-bond0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
MASTER=yes
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=192.168.112.6
GATEWAY=192.168.111.1
BONDING_OPTS="mode=1 miimon=100"
[root@localhost network-scripts]# cat ifcfg-bond1
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
MASTER=yes
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=192.168.113.6
GATEWAY=192.168.111.1
BONDING_OPTS="mode=1 miimon=100"
[root@localhost network-scripts]# cat ifcfg-eth1
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
[root@localhost network-scripts]# cat ifcfg-eth2
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
[root@localhost network-scripts]# cat ifcfg-eth3
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth3
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
[root@localhost network-scripts]# cat ifcfg-eth4
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth4
BOOTPROTO=none
ONBOOT=yes
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
The meaning of the values in BONDING_OPTS can be seen in the output of modinfo bonding below; the ones used most often are mode, miimon and primary.

filename:       /lib/modules/2.6.9-89.0.0.0.1.ELsmp/kernel/drivers/net/bonding/bonding.ko
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v2.6.3-rh
version:        2.6.3-rh 620E0651334290527B73D08
license:        GPL
parm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC.  0 for off (default), 1 for on.
parm:           arp_validate:validate src/dst of ARP probes: none (default), active, backup or all
parm:           arp_ip_target:arp targets in n.n.n.n form
parm:           arp_interval:arp interval in milliseconds
parm:           xmit_hash_policy:XOR hashing method : 0 for layer 2 (default), 1 for layer 3+4
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner (slow/fast)
parm:           primary:Primary network device to use
parm:           mode:Mode of operation : 0 for round robin, 1 for active-backup, 2 for xor
parm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default)
parm:           downdelay:Delay before considering link down, in milliseconds
parm:           updelay:Delay before considering link up, in milliseconds
parm:           miimon:Link check interval in milliseconds
parm:           max_bonds:Max number of bonded devices
depends:       
vermagic:       2.6.9-89.0.0.0.1.ELsmp SMP gcc-3.4
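
Of these, primary is worth noting: without it the bond simply stays on whichever slave is currently active after a failover (hence "Primary Slave: None" in the status output later). A hypothetical fragment, assuming eth1 should always be preferred whenever its link is up:

```
# Hypothetical variant of the ifcfg-bond0 line above: with primary=eth1,
# the bond fails back to eth1 automatically once its link returns.
BONDING_OPTS="mode=1 miimon=100 primary=eth1"
```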

4. Bring up the interfaces

[root@localhost network-scripts]# ifup eth1
[root@localhost network-scripts]# ifup eth2
[root@localhost network-scripts]# ifup eth3
grep: /sys/class/net/bond1/bonding/slaves: No such file or directory
/etc/sysconfig/network-scripts/ifup-eth: line 99: /sys/class/net/bond1/bonding/slaves: No such file or directory
[root@localhost network-scripts]# ifup eth4
grep: /sys/class/net/bond1/bonding/slaves: No such file or directory
/etc/sysconfig/network-scripts/ifup-eth: line 99: /sys/class/net/bond1/bonding/slaves: No such file or directory
[root@localhost network-scripts]# ifup bond0
[root@localhost network-scripts]# ifup bond1
[root@localhost network-scripts]# ifup eth3
[root@localhost network-scripts]# ifup eth4
The errors above occurred because eth3 and eth4 were brought up before their master device bond1 existed; once bond0 and bond1 are up, bringing the slaves up again succeeds. Check the IP addresses:
[root@localhost network-scripts]# ifconfig -a|grep addr
bond0     Link encap:Ethernet  HWaddr 00:0C:29:87:3B:97 
          inet addr:192.168.112.6  Bcast:192.168.112.255  Mask:255.255.255.0
bond1     Link encap:Ethernet  HWaddr 00:0C:29:87:3B:AB 
          inet addr:192.168.113.6  Bcast:192.168.113.255  Mask:255.255.255.0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:8D 
          inet addr:192.168.111.6  Bcast:192.168.111.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:97 
eth2      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:97 
eth3      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:AB 
eth4      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:AB
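
Note that each slave now reports its bond's MAC address (eth1/eth2 show bond0's 00:0C:29:87:3B:97, eth3/eth4 show bond1's 00:0C:29:87:3B:AB); this is the default active-backup behavior unless fail_over_mac is set. A small sketch that makes the grouping explicit, run against a pasted sample of the output above:

```shell
# Count interfaces per MAC from ifconfig-style output; bonded slaves
# share their bond device's MAC. The here-doc stands in for `ifconfig -a`.
awk '/HWaddr/ {print $NF}' <<'EOF' | sort | uniq -c
bond0     Link encap:Ethernet  HWaddr 00:0C:29:87:3B:97
eth1      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:97
eth2      Link encap:Ethernet  HWaddr 00:0C:29:87:3B:97
EOF
```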

5. Check the bond status

[root@localhost network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:3b:97
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:3b:a1
Slave queue ID: 0
[root@localhost network-scripts]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth3
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:3b:ab
Slave queue ID: 0

Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:3b:b5
Slave queue ID: 0

6. Test that failover works

Run the test by unplugging the network cable, not with ifdown: ifdown removes the NIC's information from the bond, so even after a successful failover the network stays down. Also have another host run a continuous ping against the test IP; the ping output is omitted here.
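
During the test it helps to watch the active slave continuously, e.g. with `watch -n1 cat /proc/net/bonding/bond0`. The field can also be extracted in a script; a minimal sketch, using a here-doc sample so it is self-contained (point it at /proc/net/bonding/bond0 on the real host):

```shell
# Print the currently active slave from bonding status text.
active_slave() { awk -F': ' '/Currently Active Slave/ {print $2}' "$1"; }

cat > /tmp/bond0.sample <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth1
MII Status: up
EOF
active_slave /tmp/bond0.sample   # prints: eth1
```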

All cables are seen as connected:
[root@localhost network-scripts]# mii-tool
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok
eth2: negotiated 100baseTx-FD, link ok
eth3: negotiated 100baseTx-FD, link ok
eth4: negotiated 100baseTx-FD, link ok
Unplug eth1's cable:
[root@localhost network-scripts]# mii-tool
eth0: negotiated 100baseTx-FD, link ok
eth1: no link
eth2: negotiated 100baseTx-FD, link ok
eth3: negotiated 100baseTx-FD, link ok
eth4: negotiated 100baseTx-FD, link ok
The bond failed over correctly; confirm on the other host that the ping is still running:
[root@localhost network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: down
Link Failure Count: 1
Permanent HW addr: 00:0c:29:87:3b:97
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:3b:a1
Slave queue ID: 0
Restore eth1, then unplug eth2's cable:
[root@localhost network-scripts]# mii-tool
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok
eth2: no link
eth3: negotiated 100baseTx-FD, link ok
eth4: negotiated 100baseTx-FD, link ok
Check that the bond is back on eth1:
[root@localhost network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth1
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:0c:29:87:3b:97
Slave queue ID: 0

Slave Interface: eth2
MII Status: down
Link Failure Count: 1
Permanent HW addr: 00:0c:29:87:3b:a1
Slave queue ID: 0
Below is the same test for bond1, done the same way as for bond0:
[root@localhost network-scripts]# mii-tool
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok
eth2: negotiated 100baseTx-FD, link ok
eth3: negotiated 100baseTx-FD, link ok
eth4: negotiated 100baseTx-FD, link ok
[root@localhost network-scripts]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth3
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:3b:ab
Slave queue ID: 0

Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:3b:b5
Slave queue ID: 0
[root@localhost network-scripts]# mii-tool
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok
eth2: negotiated 100baseTx-FD, link ok
eth3: no link
eth4: negotiated 100baseTx-FD, link ok
[root@localhost network-scripts]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth4
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth3
MII Status: down
Link Failure Count: 1
Permanent HW addr: 00:0c:29:87:3b:ab
Slave queue ID: 0

Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:87:3b:b5
Slave queue ID: 0
[root@localhost network-scripts]# mii-tool
eth0: negotiated 100baseTx-FD, link ok
eth1: negotiated 100baseTx-FD, link ok
eth2: negotiated 100baseTx-FD, link ok
eth3: negotiated 100baseTx-FD, link ok
eth4: no link
[root@localhost network-scripts]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth3
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:0c:29:87:3b:ab
Slave queue ID: 0

Slave Interface: eth4
MII Status: down
Link Failure Count: 1
Permanent HW addr: 00:0c:29:87:3b:b5
Slave queue ID: 0

7. The corresponding entries in /var/log/messages

Apr 26 21:28:00 localhost kernel: bonding: bond0: enslaving eth2 as a backup interface with an up link.
Apr 26 21:28:04 localhost avahi-daemon[3509]: New relevant interface bond0.IPv4 for mDNS.
Apr 26 21:28:04 localhost avahi-daemon[3509]: Joining mDNS multicast group on interface bond0.IPv4 with address 192.168.112.6.
Apr 26 21:28:04 localhost avahi-daemon[3509]: Registering new address record for 192.168.112.6 on bond0.
Apr 26 21:29:14 localhost kernel: e1000: eth1 NIC Link is Down
Apr 26 21:29:14 localhost kernel: bonding: bond0: link status definitely down for interface eth1, disabling it
Apr 26 21:29:14 localhost kernel: bonding: bond0: making interface eth2 the new active one.
Apr 26 21:29:41 localhost kernel: e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Apr 26 21:29:41 localhost kernel: bonding: bond0: link status definitely up for interface eth1.
Apr 26 21:29:44 localhost kernel: e1000: eth2 NIC Link is Down
Apr 26 21:29:45 localhost kernel: bonding: bond0: link status definitely down for interface eth2, disabling it
Apr 26 21:29:45 localhost kernel: bonding: bond0: making interface eth1 the new active one.
Apr 26 21:30:19 localhost kernel: e1000: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Apr 26 21:30:19 localhost kernel: bonding: bond0: link status definitely up for interface eth2.
Apr 26 21:30:40 localhost kernel: e1000: eth3 NIC Link is Down
Apr 26 21:30:40 localhost kernel: bonding: bond1: link status definitely down for interface eth3, disabling it
Apr 26 21:30:40 localhost kernel: bonding: bond1: making interface eth4 the new active one.
Apr 26 21:31:06 localhost kernel: e1000: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Apr 26 21:31:06 localhost kernel: bonding: bond1: link status definitely up for interface eth3.
Apr 26 21:31:07 localhost kernel: e1000: eth4 NIC Link is Down
Apr 26 21:31:07 localhost kernel: bonding: bond1: link status definitely down for interface eth4, disabling it
Apr 26 21:31:07 localhost kernel: bonding: bond1: making interface eth3 the new active one.
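
The failover history can be pulled out of the log by filtering on the driver's "the new active one" lines; a sketch against a pasted sample (on the host itself: grep 'the new active one' /var/log/messages):

```shell
# Extract failover events from messages-style log text.
grep 'new active one' <<'EOF'
Apr 26 21:29:14 localhost kernel: bonding: bond0: link status definitely down for interface eth1, disabling it
Apr 26 21:29:14 localhost kernel: bonding: bond0: making interface eth2 the new active one.
EOF
```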
Permalink: http://www.htz.pw/2013/04/26/rhel-5%e9%85%8d%e7%bd%ae%e7%bd%91%e5%8d%a1%e7%bb%91%e5%ae%9a%e4%b8%8e%e6%b5%8b%e8%af%95.html

Originally shared by 惜分飞; source: http://www.oracleplus.net/arch/984.html
