TCP uses a mechanism called the "congestion window" to determine how many packets can be in flight at one time: the larger the congestion window, the higher the throughput. TCP uses the "slow start" and "congestion avoidance" algorithms to determine the size of the congestion window. The maximum congestion window size is bounded by the amount of buffer space the kernel allocates for each socket. Every socket has a default buffer size, which the application can change with a system library call before opening the socket; the kernel also enforces a maximum buffer size. The buffer size can be adjusted for both the send and receive ends of the socket.

The key to maximum throughput is to optimize the TCP send and receive socket buffer sizes for the kind of link you are using. If the buffers are too small, the TCP congestion window will never fully open. If the receive buffer is too large, TCP flow control breaks down: the sender can overrun the receiver, causing the TCP window to shut down. This typically happens when the sending host is faster than the receiving host. An oversized window on the sending side is not a big problem as long as you have enough memory.

The optimal buffer size is twice the bandwidth*delay product of the link:

   buffer size = 2 * bandwidth * delay

You can get the delay with ping, and the end-to-end capacity (the bandwidth of the slowest hop in the path) with a tool such as pathrate. Since ping gives the round trip time (RTT), the following formula is usually used in place of the one above:

   buffer size = bandwidth * RTT

For example, if the ping time is 50 ms and the end-to-end path consists entirely of 100 Mbit/s Ethernet, the TCP buffer size should be:

   0.05 sec * (100 Mbits/sec / 8 bits/byte) = 625 KBytes

There are two TCP settings you need to know about: the default TCP send and receive buffer size, and the maximum TCP send and receive buffer size. On many current Unix-like systems the maximum TCP buffer size is only 256 KB! See the per-OS instructions below for how to raise the maximum. Note that you should not set the default buffer size to more than 128 KB, as that can reduce LAN performance; instead, use the Unix setsockopt call on both the sender and the receiver to set the buffer size for that connection.
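
As a minimal sketch of that per-socket approach (the 625 KB figure is just the bandwidth*RTT value from the example above, not a universal constant), a C program would set both buffers before connecting, so the window scale option can be negotiated for the larger window:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        int bufsize = 625 * 1024;  /* bandwidth * RTT from the example above */

        /* Set both buffers BEFORE connect()/listen(), so TCP can
           negotiate window scaling for the larger window. */
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0)
            perror("setsockopt SO_SNDBUF");
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
            perror("setsockopt SO_RCVBUF");

        /* ... connect() and transfer data as usual ... */
        close(sock);
        return 0;
    }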

====================================================================
Tuning a Linux system

The 2.4 and 2.6 kernels differ considerably, so we start with the settings they have in common:
add the following to /etc/sysctl.conf, then run "sysctl -p"

# increase TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

Note: leave tcp_mem alone; the default values work well.

Another parameter that can increase TCP throughput is the network interface's transmit queue length, e.g.:

    ifconfig eth0 txqueuelen 1000

I once made this change on a long, fast path and throughput increased by nearly a factor of 8! It is only a good idea for hosts connected via Gigabit Ethernet, though, and it may have side effects such as uneven sharing between multiple streams.

Linux 2.4

Starting with Linux 2.4, Linux has implemented a sender-side autotuning mechanism, so setting the optimal buffer size on the sender is not needed. This assumes you have set large buffers on the receive side, as the send buffer will not grow beyond the size of the receive buffer.
However, Linux 2.4 has some other strange behavior that one needs to be aware of. For example, the value of ssthresh for a given path is cached in the routing table. This means that if a connection has a retransmission and reduces its window, then all connections to that host for the next 10 minutes will use a reduced window size and will not even try to increase the window. The only way to disable this behavior is to do the following before all new connections (you must be root):

      sysctl -w net.ipv4.route.flush=1

More information on the various tuning parameters for Linux 2.4 is available in the Ipsysctl tutorial.


Linux 2.6

Starting in Linux 2.6.7 (and back-ported to 2.4.27), BIC TCP is part of the kernel, and enabled by default. BIC TCP helps
recover quickly from packet loss on high-speed WANs, and appears to work quite well. A BIC implementation bug was discovered, but this was fixed in Linux 2.6.11, so you should upgrade to this version or higher.

Linux 2.6 also includes both send- and receive-side automatic buffer tuning (up to the maximum sizes specified above).

There is also a setting to fix the ssthresh caching weirdness described above.

There are a couple additional sysctl settings for 2.6:

  # don't cache ssthresh from previous connection
  net.ipv4.tcp_no_metrics_save = 1
  # recommended to increase this for 1000 BT or higher
  net.core.netdev_max_backlog = 2500
  # for 10 GigE, use this
  # net.core.netdev_max_backlog = 30000   

Starting with version 2.6.13, Linux supports pluggable congestion control algorithms. The congestion control algorithm is selected with the sysctl variable net.ipv4.tcp_congestion_control, which is set to reno by default. (Apparently they decided that BIC was not quite ready for prime time.) The current set of congestion control options is:

   * reno: Traditional TCP used by almost all other OSes. (default)
   * bic: BIC-TCP
   * highspeed: HighSpeed TCP: Sally Floyd's suggested algorithm
   * htcp: Hamilton TCP
   * hybla: For satellite links
   * scalable: Scalable TCP
   * vegas: TCP Vegas
   * westwood: optimized for lossy networks

For very long fast paths, I suggest trying HTCP or BIC-TCP if Reno is not performing as desired. To set this, do the following:

   sysctl -w net.ipv4.tcp_congestion_control=htcp
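
To see which algorithms your running kernel actually offers, and to make the choice persist across reboots, something like the following should work (a sketch; on many distributions the extra algorithms are loadable kernel modules):

   # list the algorithms currently available to the kernel
   sysctl net.ipv4.tcp_available_congestion_control
   # persist the choice by adding it to /etc/sysctl.conf
   echo "net.ipv4.tcp_congestion_control = htcp" >> /etc/sysctl.conf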

More information on each of these algorithms and some results can be found here.

Note: Linux 2.6.11 and earlier have a serious problem with certain Gigabit and 10 Gigabit Ethernet drivers and NICs that support "tcp segmentation offload", such as the Intel e1000 and ixgb drivers, the Broadcom tg3, and the s2io 10 GigE drivers. This problem was fixed in version 2.6.12. A workaround is to use ethtool to disable segmentation offload:
    ethtool -K eth0 tso off
This will reduce your overall performance, but will make TCP over LFNs far more stable.
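
To check whether TSO is currently enabled before (or after) changing it, ethtool's lowercase -k option prints the offload settings; eth0 here is just a placeholder for your interface name:

    ethtool -k eth0 | grep segmentation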

More information on tuning parameters and defaults for Linux 2.6 are available in the file ip-sysctl.txt, which is part of
the 2.6 source distribution.

And finally a warning for both 2.4 and 2.6: for very large BDP paths where the TCP window is > 20 MB, you are likely to hit the Linux SACK implementation problem. If Linux has too many packets in flight when it gets a SACK event, it takes too long to locate the SACKed packet, and you get a TCP timeout and CWND goes back to 1 packet. Restricting the TCP buffer size to about 12 MB seems to avoid this problem, but it clearly limits your total throughput. Another solution is to disable SACK.
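
If you opt for the latter workaround, SACK can be turned off with a sysctl; note that this sketch is a global setting and affects every connection, not just the long-path ones:

   # disable selective acknowledgements entirely -- use with care
   sysctl -w net.ipv4.tcp_sack=0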


Linux 2.2

If you are still running Linux 2.2, upgrade! If this is not possible, add the following to /etc/rc.d/rc.local

  echo 8388608 > /proc/sys/net/core/wmem_max  
  echo 8388608 > /proc/sys/net/core/rmem_max
  echo 65536 > /proc/sys/net/core/rmem_default
  echo 65536 > /proc/sys/net/core/wmem_default

====================================================================
Tuning a Windows system

Use the registry editor to modify the following values:

# turn on window scale and timestamp option
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Tcp1323Opts=3
# set default TCP window size (default = 16KB)
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpWindowSize=131400
# and maybe set this too: (default = not set )
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\GlobalMaxTcpWindowSize=16777216

For further information, see:
;en-us;314053
 ... depovg/tcpip2k.mspx

====================================================================
Tuning a FreeBSD system

Add the following to /etc/sysctl.conf and reboot:

   kern.ipc.maxsockbuf=16777216
   net.inet.tcp.rfc1323=1
   net.inet.tcp.sendspace=1048576
   net.inet.tcp.recvspace=1048576
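
After rebooting, you can confirm that the values took effect by querying them back (a quick check, assuming the stock FreeBSD sysctl utility):

   sysctl kern.ipc.maxsockbuf net.inet.tcp.sendspace net.inet.tcp.recvspace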