Tracking down a problem in the TCP stack means hunting all over for what each counter actually means in the particular implementation version you are running. It is time-consuming, not always useful, and a real headache.
I have long envied how conveniently my colleagues debug the block layer and file systems with blktrace; unfortunately the network subsystem has no killer trace tool like it. Then again, the nice thing about open source is that you can DIY. So I started a small project, skbtrace, to bring blktrace-like functionality to the network subsystem. A prototype is already working; let's start with some of its output:
- 11 1333399184.790000414 cpu=0 action=tcp_conn [SYN_RECV] sock=0xffff88003aca5980 local=192.168.241.128:12865 peer=192.168.241.1:43545
- 12 1333399184.790000414 cpu=0 action=tcp_conn [ESTABLISHED] sock=0xffff88003d3b2900 local=192.168.241.128:12865 peer=192.168.241.1:43545
- 13 1333399184.790000299 cpu=1 action=tcp_conn [LISTEN] sock=0xffff88003a8c9480 local=0.0.0.0:34258
- 14 1333399184.790000299 cpu=0 action=tcp_conn [SYN_RECV] sock=0xffff88003aca5580 local=192.168.241.128:34258 peer=192.168.241.1:54216
- 15 1333399184.790000299 cpu=0 action=tcp_conn [ESTABLISHED] sock=0xffff88003d3b1ec0 local=192.168.241.128:34258 peer=192.168.241.1:54216
- 16 1333399185.790000585 cpu=0 action=tcp_conn [CLOSE_WAIT] sock=0xffff88003d3b1ec0 local=192.168.241.128:34258 peer=192.168.241.1:54216
- 17 1333399185.790000585 cpu=0 action=tcp_conn [LAST_ACK] sock=0xffff88003d3b1ec0 local=192.168.241.128:34258 peer=192.168.241.1:54216
- 18 1333399185.790000585 cpu=0 action=tcp_conn [CLOSE] sock=0xffff88003d3b1ec0 local=192.168.241.128:34258 peer=192.168.241.1:54216
- 19 1333399185.790000585 cpu=1 action=tcp_conn [CLOSE_WAIT] sock=0xffff88003d3b2900 local=192.168.241.128:12865 peer=192.168.241.1:43545
- 20 1333399185.790000585 cpu=0 action=tcp_conn [LAST_ACK] sock=0xffff88003d3b2900 local=192.168.241.128:12865 peer=192.168.241.1:43545
- 21 1333399185.790000585 cpu=0 action=tcp_conn [CLOSE] sock=0xffff88003d3b2900 local=192.168.241.128:12865 peer=192.168.241.1:43545
A quick note on the format of the data that skbparse produces:
- sequence number (64-bit)  timestamp  CPU  action name  flags description  event attribute description
Mapping this onto the first line of the output above:
Sequence number: 11. It exists so that the exact order of events generated on different CPUs can be recovered.
Timestamp: 1333399184.790000414 (taken with current_kernel_time()).
CPU: cpu=0, the number of the CPU on which the event occurred.
Action name: action=tcp_conn; tcp_conn means this event describes a basic TCP state transition.
Flags description: [SYN_RECV], i.e. this TCP connection entered the SYN_RECV state. What appears here depends on the event type.
Event attribute description: sock=0xffff88003aca5980 local=192.168.241.128:12865 peer=192.168.241.1:43545; this also depends on the event type. Here sock=X is the kernel address of the connection's socket structure, and local=X / peer=Y should be self-explanatory :)
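To make the format concrete, here is a minimal parsing sketch in Python. It assumes only the line layout just described; the parse_line() helper and its field names (seq, ts, cpu, action, flags, attrs) are my own labels for illustration, not identifiers used by skbtrace or skbparse themselves.

```python
import re

# Minimal sketch of a parser for the skbparse line layout described above.
# The regex and field names are mine, not part of skbtrace/skbparse.
LINE_RE = re.compile(
    r"(?P<seq>\d+)\s+(?P<ts>\d+\.\d+)\s+cpu=(?P<cpu>\d+)\s+"
    r"action=(?P<action>\S+)\s+\[(?P<flags>[^\]]*)\]\s*(?P<rest>.*)"
)

def parse_line(line):
    """Split one skbparse line into a dict, or return None if it does not match."""
    m = LINE_RE.search(line)
    if not m:
        return None
    return {
        "seq": int(m.group("seq")),
        "ts": float(m.group("ts")),
        "cpu": int(m.group("cpu")),
        "action": m.group("action"),
        "flags": m.group("flags"),
        # The trailing key=value pairs (sock=, local=, peer=, cnt=, ...) depend
        # on the event type, so keep them as a generic dict.
        "attrs": dict(kv.split("=", 1) for kv in m.group("rest").split() if "=" in kv),
    }

print(parse_line("11 1333399184.790000414 cpu=0 action=tcp_conn [SYN_RECV] "
                 "sock=0xffff88003aca5980 local=192.168.241.128:12865 "
                 "peer=192.168.241.1:43545"))
```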
Now for what the output above means. It is what skbtrace produced while running
netperf -H 192.168.241.1 -t TCP_MAERTS -l 1
with the trace taken on the netserver side. The state transitions of netserver's two TCP connections are clearly visible: each test run involves two connections. One is on port 12865, which netserver listens on from startup; this is the control connection. Once the control connection is established, the data connection local=192.168.241.128:34258 peer=192.168.241.1:54216 is set up. One second later, the data connection and then the control connection are closed in turn.
Same test as above, but with a much longer duration (-l 200000), this time exercising another trace event called tcp_sendlimit. Here is the output a few seconds after the test started:
- 1722499 1333503217.160003511 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 sentpkts=1
- 1722500 1333503217.160003511 cpu=1 action=tcp_sendlim [nagle] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 mtuprobe=0
- 1722501 1333503217.160003511 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 sentpkts=11
- 1722502 1333503217.160003511 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 sentpkts=1
- 1722503 1333503217.160003511 cpu=1 action=tcp_sendlim [nagle] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 mtuprobe=0
- 1722504 1333503217.160003511 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 sentpkts=11
- 1722505 1333503217.160003511 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 sentpkts=1
- 1722506 1333503217.160003511 cpu=1 action=tcp_sendlim [nagle] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 mtuprobe=0
- 1722507 1333503217.160003511 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 sentpkts=11
- 1722508 1333503217.160003511 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 sentpkts=1
- 1722509 1333503217.160003511 cpu=1 action=tcp_sendlim [nagle] sock=0xffff88003af8b340 begin=1333503217.160003511 cnt=1 mtuprobe=0
At this point netperf reports the following rates:
- Interim result: 1142.41 10^6bits/s over 1.03 seconds
- Interim result: 1140.39 10^6bits/s over 1.00 seconds
- Interim result: 1167.64 10^6bits/s over 1.00 seconds
- Interim result: 1185.44 10^6bits/s over 1.00 seconds
- Interim result: 1146.37 10^6bits/s over 1.03 seconds
- Interim result: 1148.12 10^6bits/s over 1.00 seconds
- Interim result: 1172.02 10^6bits/s over 1.00 seconds
- Interim result: 1187.22 10^6bits/s over 1.00 seconds
- Interim result: 1167.58 10^6bits/s over 1.02 seconds
- Interim result: 1143.35 10^6bits/s over 1.02 seconds
- Interim result: 1170.49 10^6bits/s over 1.00 seconds
- Interim result: 1158.36 10^6bits/s over 1.01 seconds
- Interim result: 1152.60 10^6bits/s over 1.01 seconds
The trace above suggests that one possible limiting factor in this test is the Nagle algorithm, which can be turned off with netperf's -D option. Let's look at the results with Nagle disabled (a small TCP_NODELAY sketch follows the numbers):
- Interim result: 1187.84 10^6bits/s over 1.03 seconds
- Interim result: 1240.50 10^6bits/s over 1.00 seconds
- Interim result: 1256.66 10^6bits/s over 1.00 seconds
- Interim result: 1176.48 10^6bits/s over 1.07 seconds
- Interim result: 1193.74 10^6bits/s over 1.00 seconds
- Interim result: 1242.75 10^6bits/s over 1.00 seconds
- Interim result: 1254.95 10^6bits/s over 1.00 seconds
- Interim result: 1205.35 10^6bits/s over 1.04 seconds
- Interim result: 1187.79 10^6bits/s over 1.01 seconds
- Interim result: 1198.16 10^6bits/s over 1.00 seconds
- Interim result: 1227.07 10^6bits/s over 1.00 seconds
- Interim result: 1227.19 10^6bits/s over 1.00 seconds
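For reference, netperf's -D option requests TCP_NODELAY on the data socket; an application can do the same thing itself. A minimal sketch using Python's standard socket module:

```python
import socket

# Minimal sketch: disable Nagle on a socket, which is the effect of netperf's
# -D option on its data connection (the TCP_NODELAY socket option).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print("TCP_NODELAY =", s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.close()
```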
Throughput is a bit higher, but still hard to be satisfied with: this is traffic inside a virtual network on a single physical machine, and it should be far faster than this. Let's look at the skbparse output again at this point:
- 1746236 1333503357.3914 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503357.3914 cnt=9243 sentpkts=12
- 1746239 1333503360.3920 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503360.3920 cnt=9481 sentpkts=12
- 1746238 1333503359.3915 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503359.3915 cnt=9652 sentpkts=12
- 1746240 1333503361.3924 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503361.3924 cnt=9568 sentpkts=12
- 1746241 1333503362.3927 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503362.3927 cnt=9598 sentpkts=12
- 1746242 1333503363.3928 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503363.3928 cnt=9671 sentpkts=12
- 1746243 1333503364.3931 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503364.3931 cnt=9494 sentpkts=12
- 1746244 1333503365.3935 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503365.3935 cnt=9366 sentpkts=12
- 1746245 1333503366.3938 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503366.3938 cnt=9394 sentpkts=12
- 1746246 1333503367.3939 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503367.3939 cnt=9480 sentpkts=12
- 1746247 1333503368.3945 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503368.3945 cnt=9490 sentpkts=12
- 1746249 1333503370.3947 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503370.3947 cnt=9245 sentpkts=12
- 1746250 1333503371.3953 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503371.3953 cnt=9622 sentpkts=12
- 1746251 1333503372.3955 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503372.3955 cnt=9397 sentpkts=12
- 1746248 1333503369.3948 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503369.3948 cnt=9302 sentpkts=12
- 1746253 1333503374.3962 cpu=1 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503374.3962 cnt=9386 sentpkts=12
- 1746252 1333503373.3958 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003cea0a40 begin=1333503373.3958 cnt=9280 sentpkts=12
Nice and regular. Here is what it means:
sentpkts=12: TCP sent 12 sk_buffs in one go. In particular, since TSO is not a factor here, this can only be 12 MTU-sized segments.
cnt=9280: an event with identical parameters occurred 9280 times in a row.
begin=X: the point in time at which the first of those cnt events occurred.
For tcp_sendlimit, skbtrace does not report every occurrence of an identical event; it reports it only once per second. Also, note that the trace above shows both CPUs busy sending data, and this concurrency leaves the trace output somewhat out of order. The sequence number in the first column, however, lets us reconstruct the real order of events.
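A sketch of that reordering step: simply sort the lines on the leading sequence number. The ordered() helper below is mine, just to illustrate the idea; the sample lines are abbreviated copies of two out-of-order lines from the trace above.

```python
# Sketch: recover the true cross-CPU ordering by sorting trace lines on the
# leading 64-bit sequence number (the first field of each skbparse line).
def ordered(lines):
    return sorted(lines, key=lambda line: int(line.split()[0]))

# Two lines from the trace above that appear out of order because they were
# produced on different CPUs (trailing fields abbreviated):
sample = [
    "1746239 1333503360.3920 cpu=0 action=tcp_sendlim [ok] ...",
    "1746238 1333503359.3915 cpu=1 action=tcp_sendlim [ok] ...",
]
for line in ordered(sample):
    print(line)
```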
Taking the last line of the output above as an example, we can estimate the send rate:
Goodput within one second: 9280 * 12 * MTU = 9280 * 12 * 1500 = 167040000 bytes
Converted to a rate (Mbps): 167040000 * 8 / 1024 / 1024 = 1274.41
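The same arithmetic in code, using the numbers from that last line (cnt=9280, sentpkts=12) and assuming a 1500-byte MTU:

```python
# Reproduce the back-of-the-envelope goodput estimate above: cnt identical
# events in roughly one second, each sending sentpkts MTU-sized segments.
cnt, sentpkts, mtu = 9280, 12, 1500
goodput_bytes = cnt * sentpkts * mtu          # 167040000 bytes in ~1 s
rate_mbps = goodput_bytes * 8 / 1024 / 1024   # ~1274.41 (2^20-based Mbps)
print(goodput_bytes, round(rate_mbps, 2))
```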
That is roughly consistent with the rate netperf reports. So it looks like the limiting factor now is that each send pushes too little data. What could cause that? The most obvious suspect is a send buffer that is too small, so let's enlarge it and see:
The original setting:
net.ipv4.tcp_wmem = 4096 16384 1031668
Bump the default (middle) value by 10x:
net.ipv4.tcp_wmem = 4096 163840 1031668
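One way to apply this is a sketch like the one below (requires root; sysctl -w net.ipv4.tcp_wmem="4096 163840 1031668" does the same thing):

```python
# Sketch: write the enlarged tcp_wmem triple directly to procfs (requires root).
with open("/proc/sys/net/ipv4/tcp_wmem", "w") as f:
    f.write("4096 163840 1031668\n")
```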
Now look at netperf's rates again:
- Interim result: 2294.31 10^6bits/s over 1.00 seconds
- Interim result: 2012.05 10^6bits/s over 1.14 seconds
- Interim result: 2770.41 10^6bits/s over 1.06 seconds
- Interim result: 2097.98 10^6bits/s over 1.32 seconds
- Interim result: 2228.85 10^6bits/s over 1.00 seconds
- Interim result: 2754.29 10^6bits/s over 1.04 seconds
- Interim result: 2071.38 10^6bits/s over 1.33 seconds
- Interim result: 1963.17 10^6bits/s over 1.06 seconds
- Interim result: 2609.76 10^6bits/s over 1.00 seconds
- Interim result: 2749.54 10^6bits/s over 1.02 seconds
- Interim result: 2608.99 10^6bits/s over 1.05 seconds
- Interim result: 2398.40 10^6bits/s over 1.09 seconds
- Interim result: 2206.32 10^6bits/s over 1.09 seconds
- Interim result: 1989.19 10^6bits/s over 1.11 seconds
- Interim result: 2178.68 10^6bits/s over 1.01 seconds
- Interim result: 2381.69 10^6bits/s over 1.04 seconds
Almost twice as fast. Let's verify with skbtrace once more:
- 1906250 1333503904.100005479 cpu=0 action=tcp_sendlim [pushone] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 mtuprobe=0
- 1906251 1333503904.100005479 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 sentpkts=45
- 1906252 1333503904.100005479 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 sentpkts=24
- 1906253 1333503904.100005479 cpu=0 action=tcp_sendlim [pushone] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 mtuprobe=0
- 1906254 1333503904.100005479 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 sentpkts=45
- 1906255 1333503904.100005479 cpu=0 action=tcp_sendlim [pushone] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 mtuprobe=0
- 1906256 1333503904.100005479 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 sentpkts=45
- 1906257 1333503904.100005479 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 sentpkts=24
- 1906258 1333503904.100005479 cpu=0 action=tcp_sendlim [pushone] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 mtuprobe=0
- 1906259 1333503904.100005479 cpu=0 action=tcp_sendlim [ok] sock=0xffff88003af8dc40 begin=1333503904.100005479 cnt=1 sentpkts=45
- 1906260 1333503904.110005480 cpu=0 action=tcp_sendlim [pushone] sock=0xffff88003af8dc40 begin=1333503904.110005480 cnt=1 mtuprobe=0
Sure enough, each send now pushes more data! Another potential optimization: for bulk-transfer connections, pushone could probably be smarter. But that would require changing the TCP code itself, which is beyond the scope of this post.
Finally, this thing is open source; the current patches are based on the v3.3 kernel: