Category: LINUX

2012-05-02 00:08:54

gso

GSO ("Generic Segmentation Offload") is a performance optimization
which generalises the concept of TSO (TCP Segmentation Offload).

It was added in Linux 2.6.18.

The following is taken from Herbert Xu's
posting on linux-netdev:

Many people have observed that a lot of the savings in TSO come from
traversing the networking stack once rather than many times for each
super-packet. These savings can be obtained without hardware support.
In fact, the concept can be applied to other protocols such as TCPv6,
UDP, or even DCCP.

The key to minimising the cost in implementing this is to postpone the
segmentation as late as possible. In the ideal world, the segmentation
would occur inside each NIC driver where they would rip the super-packet
apart and either produce SG lists which are directly fed to the hardware,
or linearise each segment into pre-allocated memory to be fed to the NIC.
This would eliminate segmented skb's altogether.

Unfortunately this requires modifying each and every NIC driver so it
would take quite some time. A much easier solution is to perform the
segmentation just before the entry into the driver's xmit routine. This
concept is called GSO: Generic Segmentation Offload.
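To make the idea concrete, here is a minimal user-space sketch (not kernel code; the function name and sizes are illustrative only) of the core operation: a large super-packet traverses the stack once, and is split into MSS-sized segments only at the last moment, just before each piece would be handed to the driver.

```python
# Hypothetical illustration of the GSO concept: defer segmentation of a
# "super-packet" payload until just before "transmission", so the stack
# is traversed once for the whole payload instead of once per segment.

def gso_segment(payload: bytes, mss: int) -> list[bytes]:
    """Split a super-packet payload into segments of at most mss bytes."""
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

# One 4000-byte super-packet, segmented for an MSS of 1448 bytes
# (a common value for a 1500-byte MTU after TCP/IP headers).
super_packet = b"x" * 4000
segments = gso_segment(super_packet, mss=1448)

print(len(segments))               # 3
print([len(s) for s in segments])  # [1448, 1448, 1104]
```

In the kernel the real work is of course done on skb's rather than byte strings, but the structure is the same: one pass through the stack, segmentation at the driver boundary.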

Herbert Xu has also posted some numbers on the performance gains by doing
this:

The test was performed through the loopback device, which is a fairly good
approximation of an SG-capable NIC.
GSO, like TSO, is only effective if the MTU is significantly less than the
maximum value of 64K. So only the case where the MTU was set to 1500 is
of interest. There we can see that the throughput improved by 17.5%
(3061.05Mb/s => 3598.17Mb/s). The actual saving in transmission cost is
in fact a lot more than that as the majority of the time here is spent on
the RX side which still has to deal with 1500-byte packets.
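The quoted improvement can be checked directly from the two throughput figures:

```python
# Verify the ~17.5% throughput improvement from the quoted figures.
gso_off = 3061.05  # Mb/s with GSO disabled
gso_on = 3598.17   # Mb/s with GSO enabled

improvement = (gso_on / gso_off - 1) * 100
print(f"{improvement:.1f}%")  # 17.5%
```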

The worst-case scenario is where the NIC does not support SG and the user
uses write(2) which means that we have to copy the data twice. The files
gso-off/gso-on provide data for this case (the test was carried out on
e100). As you can see, the cost of the extra copy is mostly offset by the
reduction in the cost of going through the networking stack.

For now GSO is off by default but can be enabled through ethtool. It is
conceivable that with enough optimisation GSO could be a win in most cases
and we could enable it by default.
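As a sketch of how that looks in practice (the device name eth0 is a placeholder; changing offload settings requires root, and output depends on the driver):

```shell
# Show current offload settings, including generic-segmentation-offload
ethtool -k eth0 | grep generic-segmentation-offload

# Enable GSO on the device
ethtool -K eth0 gso on

# Disable it again
ethtool -K eth0 gso off
```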

However, even without enabling GSO explicitly it can still function on
bridged and forwarded packets. As it is, passing TSO packets through a
bridge only works if all constituents support TSO. GSO provides
a fallback so that we may enable TSO for a bridge even if some of its
constituents do not support TSO.

This provides massive savings for Xen as it uses a bridge-based architecture
and TSO/GSO produces a much larger effective MTU for internal traffic between
domains.
