From Wikipedia, the free encyclopedia

TCP offload engine (TOE) is a technology used in network interface cards (NICs) to offload processing of the entire TCP/IP stack to the network controller. It is primarily used with high-speed network interfaces, such as gigabit Ethernet and 10 Gigabit Ethernet, where the processing overhead of the network stack becomes significant.

The term TOE is often used to refer to the NIC itself, although circuit board engineers may use it to refer only to the integrated circuit included on the card which processes the TCP headers. TOEs are often suggested as a way to reduce the overhead associated with IP storage protocols such as iSCSI and NFS.

Purpose

Originally TCP was designed for unreliable, low-speed networks (such as early dial-up modems), but with the growth of the Internet in terms of backbone transmission speeds (Optical Carrier, gigabit Ethernet and 10 Gigabit Ethernet links) and faster, more reliable access mechanisms (such as DSL and cable modems), it is frequently used in data center and desktop environments at speeds over 1 gigabit per second. TCP software implementations on host systems require extensive computing power. Full-duplex gigabit TCP communication using software processing alone is enough to consume more than 80% of a 2.4 GHz Pentium 4 processor (see the freed-up CPU cycles section below), leaving little or no processing resources for the applications running on the system.

As TCP is a connection-oriented protocol, this adds to the complexity and processing overhead of the protocol. These aspects include:

  • Connection establishment using the "3-way handshake" (SYNchronize; SYNchronize-ACKnowledge; ACKnowledge).
  • Acknowledgment of packets as they are received by the far end, adding to the message flow between the endpoints and thus the protocol load.
  • Checksum and sequence number calculations - again a burden on a general-purpose CPU to perform (a software checksum sketch follows this list).
  • Sliding window calculations for packet acknowledgement and congestion control.
  • Connection termination.
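
As a rough illustration of the checksum work referred to above, the following C sketch computes the standard Internet one's-complement checksum in software. It is a minimal, self-contained routine rather than code from any particular stack; this is the kind of per-byte processing that a checksum-offload NIC or a TOE performs in hardware instead.

    #include <stddef.h>
    #include <stdint.h>

    /* RFC 1071 Internet checksum: 16-bit one's-complement sum of the data.
     * TCP applies it over the pseudo-header, TCP header and payload. */
    static uint16_t inet_checksum(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t sum = 0;

        while (len > 1) {                 /* sum 16-bit words */
            sum += (uint32_t)p[0] << 8 | p[1];
            p += 2;
            len -= 2;
        }
        if (len == 1)                     /* pad a trailing odd byte */
            sum += (uint32_t)p[0] << 8;

        while (sum >> 16)                 /* fold carries back into 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;            /* one's complement of the sum */
    }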

Moving some or all of these functions to dedicated hardware, a TCP offload engine, frees the system's main CPU for other tasks. As of 2012, very few consumer network interface cards support TOE.

Instead of replacing the TCP stack with a TOE entirely, there are alternative techniques to offload some operations in co-operation with the operating system's TCP stack. TCP checksum offload and large segment offload are supported by the majority of today's Ethernet NICs. Newer techniques like large receive offload and TCP acknowledgment offload are already implemented in some high-end Ethernet hardware, but are effective even when implemented purely in software.
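
On Linux, whether a NIC performs these partial offloads can be queried (and toggled) through the SIOCETHTOOL ioctl that the ethtool utility uses. The sketch below reads two offload flags with the legacy ETHTOOL_GTXCSUM and ETHTOOL_GTSO commands; the interface name "eth0" is an assumption, and newer kernels expose the same information through a more general feature-flag interface.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    /* Read one legacy ethtool offload flag for the given interface. */
    static int get_offload(int fd, const char *ifname, __u32 cmd, __u32 *enabled)
    {
        struct ethtool_value ev = { .cmd = cmd };
        struct ifreq ifr;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ev;

        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            return -1;
        *enabled = ev.data;
        return 0;
    }

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket works for ethtool ioctls */
        __u32 on;

        if (fd < 0)
            return 1;
        if (get_offload(fd, "eth0", ETHTOOL_GTXCSUM, &on) == 0)
            printf("tx-checksumming: %s\n", on ? "on" : "off");
        if (get_offload(fd, "eth0", ETHTOOL_GTSO, &on) == 0)
            printf("tcp-segmentation-offload: %s\n", on ? "on" : "off");
        close(fd);
        return 0;
    }

From the command line, "ethtool -k eth0" reports the same offload settings.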

Freed-up CPU cycles

A generally accepted rule of thumb is that 1 hertz of CPU processing is required to send or receive 1 bit/s of TCP/IP. For example, 5 Gbit/s (625 MB/s) of network traffic requires 5 GHz of CPU processing. This implies that 2 entire cores of a 2.5 GHz multi-core processor will be required to handle the TCP/IP processing associated with 5 Gbit/s of TCP/IP traffic. Since Ethernet (10GbE in this example) is bidirectional, it is possible to send and receive 10 Gbit/s (for an aggregate throughput of 20 Gbit/s). Using the 1 Hz/(bit/s) rule this equates to eight 2.5 GHz cores (a small worked example follows). Few if any current-day servers need to move 10 Gbit/s in both directions, but not so long ago 1 Gbit/s full duplex was thought to be more than enough bandwidth.
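
The arithmetic behind the eight-core figure is simple enough to write down. The short C program below merely restates the 1 Hz-per-bit/s rule of thumb with the example numbers from the text; it is an illustration, not a measurement of any real system.

    #include <stdio.h>

    int main(void)
    {
        double link_gbps  = 10.0;   /* 10GbE, one direction            */
        double directions = 2.0;    /* full duplex: send plus receive  */
        double core_ghz   = 2.5;    /* clock of one assumed server core */

        double needed_ghz = link_gbps * directions;   /* 1 Hz per bit/s */
        double cores      = needed_ghz / core_ghz;

        printf("%.0f Gbit/s full duplex -> ~%.0f GHz -> ~%.0f cores at %.1f GHz\n",
               link_gbps, needed_ghz, cores, core_ghz);
        return 0;
    }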

Many of the CPU cycles used for TCP/IP processing are freed up by TCP/IP offload and may be used by the CPU (usually a server CPU) to perform other tasks such as file system processing (in a file server) or indexing (in a backup media server). In other words, a server with TCP/IP offload NICs can do more server work than a server without them.

Reduction of PCI traffic

In addition to the protocol overhead that TOE can address, it can also address some architectural issues that affect a large percentage of host-based (server and PC) endpoints. Currently most endpoint hosts are PCI bus based; PCI provides a standard interface for adding certain peripherals, such as network interfaces, to servers and PCs. PCI is inefficient for transferring small bursts of data from memory, across the PCI bus, to the network interface ICs, but its efficiency improves as the data burst size increases. Within the TCP protocol, a large number of small packets are created (e.g. acknowledgements), and as these are typically generated on the host CPU and transmitted across the PCI bus and out the network physical interface, this impacts the host computer's I/O throughput.

A TOE solution, located on the network interface, sits on the other side of the PCI bus from the host CPU, so it can address this I/O efficiency issue: the data to be sent across the TCP connection can be passed to the TOE from the CPU across the PCI bus in large bursts, with none of the smaller TCP packets having to traverse the PCI bus.

History

One of the first patents in this technology, for UDP offload, was issued to Auspex Systems in early 1990. Auspex founder Larry Boucher and a number of Auspex engineers went on to found Alacritech in 1997, with the idea of extending the concept of network stack offload to TCP and implementing it in custom silicon. They introduced the first parallel-stack full offload network card in early 1999; the company's SLIC (Session Layer Interface Card) was the predecessor to its current TOE offerings. Alacritech holds a number of patents in the area of TCP/IP offload.

By 2002, as the emergence of TCP-based storage such as iSCSI spurred interest, it was said that "At least a dozen newcomers, most founded toward the end of the dot-com bubble, are chasing the opportunity for merchant semiconductor accelerators for storage protocols and applications, vying with half a dozen entrenched vendors and in-house ASIC designs."

In 2005 Microsoft licensed Alacritech's patent base and along with Alacritech created the partial TCP offload architecture that has become known as TCP chimney offload. TCP chimney offload centers on the Alacritech "Communication Block Passing Patent". At the same time, Broadcom also obtained a license to build TCP chimney offload chips.

Types of TCP/IP offload

Parallel-stack full offload

Parallel-stack full offload gets its name from the concept of two parallel TCP/IP stacks. The first is the main host stack, which is included with the host OS. The second, or "parallel stack", is connected between the application layer and the transport layer using a "vampire tap". The vampire tap intercepts TCP connection requests by applications and is responsible for TCP connection management as well as TCP data transfer. Many of the criticisms in the following section relate to this type of TCP offload.

HBA full offload

HBA full offload is found in iSCSI host bus adapters, which present themselves as disk controllers to the host system while connecting (via TCP/IP) to an iSCSI storage device. This type of TCP offload not only offloads TCP/IP processing but also offloads the iSCSI initiator function. Because the HBA appears to the host as a disk controller, it can only be used with iSCSI devices and is not appropriate for general TCP/IP offload.

TCP chimney partial offload

TCP chimney offload addresses the major security criticism of parallel-stack full offload. In partial offload, the main system stack controls all connections to the host. After a connection has been established between the local host (usually a server) and a foreign host (usually a client) the connection and its state are passed to the TCP offload engine. The heavy lifting of data transmit and receive is handled by the offload device. Almost all TCP offload engines use some type of TCP/IP hardware implementation to perform the data transfer without host CPU intervention. When the connection is closed, the connection state is returned from the offload engine to the main system stack. Maintaining control of TCP connections allows the main system stack to implement and control connection security.
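
The handoff just described can be pictured with a small simulation. Everything in the sketch below (the toe_conn_state structure and the toe_offload/toe_transfer/toe_upload stubs) is hypothetical and purely illustrative of the flow; it is not the actual chimney interface or any vendor's API.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical snapshot of connection state that a host stack might
     * hand to the offload engine after connection setup (illustrative). */
    struct toe_conn_state {
        uint32_t snd_nxt, rcv_nxt;   /* next send/receive sequence numbers */
        uint32_t snd_wnd, rcv_wnd;   /* advertised window sizes            */
        uint16_t mss;                /* negotiated maximum segment size    */
    };

    /* Stubs standing in for the offload hardware. */
    static void toe_offload(const struct toe_conn_state *st)
    {
        printf("host -> TOE: offload connection (snd_nxt=%u, mss=%u)\n",
               st->snd_nxt, st->mss);
    }

    static void toe_transfer(uint32_t bytes)
    {
        printf("TOE: moving %u bytes without host CPU involvement\n", bytes);
    }

    static void toe_upload(struct toe_conn_state *st)
    {
        st->snd_nxt += 1;            /* pretend some data was sent */
        printf("TOE -> host: return connection state for teardown\n");
    }

    int main(void)
    {
        /* 1. Host stack completes the 3-way handshake, then hands off. */
        struct toe_conn_state st = { .snd_nxt = 1001, .rcv_nxt = 501,
                                     .snd_wnd = 65535, .rcv_wnd = 65535,
                                     .mss = 1460 };
        toe_offload(&st);

        /* 2. Bulk data transfer runs on the offload engine. */
        toe_transfer(1 << 20);

        /* 3. On close, state comes back so the host stack tears it down. */
        toe_upload(&st);
        return 0;
    }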

Support in Linux

Unlike some other kernels, the mainline Linux kernel does not include support for TOE hardware, although out-of-tree patches from hardware manufacturers such as Chelsio have added TOE support to their network drivers since around 2002. The Linux kernel developers are opposed to this technology for several reasons, including:

  • Security – because TOE is implemented in hardware, patches must be applied to the TOE firmware, instead of just software, to address any security vulnerabilities found in a particular TOE implementation. This is further compounded by the newness and vendor-specificity of this hardware, as compared to a well-tested TCP/IP stack found in an operating system that does not use TOE.
  • Limitations of hardware – because connections are buffered and processed on the TOE chip, resource starvation can more easily occur as compared to the generous CPU and memory available to the operating system.
  • Complexity – TOE breaks the assumption that kernels make about having access to all resources at all times – details such as memory used by open connections are not available with TOE. TOE also requires very large changes to a networking stack in order to be supported properly, and even when that is done, features like quality of service and packet filtering typically do not work.
  • Proprietary – TOE is implemented differently by each hardware vendor. This means more code must be rewritten to deal with the various TOE implementations, at a cost of the aforementioned complexity and, possibly, security. Furthermore, TOE firmware cannot be easily modified since it is closed-source.
  • Obsolescence – Each TOE NIC has a limited lifetime of usefulness, because system hardware rapidly catches up to, and eventually exceeds, TOE performance levels.

Other

Despite these concerns, measurable performance improvements have been observed in other open-source operating systems, such as FreeBSD. There have been few, if any, reported security holes, and most academic research supports the use of TOE.

Suppliers

Much of the current work on TOE technology is by manufacturers of 10 Gigabit Ethernet interface cards, such as Broadcom, Chelsio Communications, and QLogic.

See also

  • Large send offload (LSO)
  • Large receive offload (LRO)

References

  1. Jonathan Corbet (2007-08-01). "Large receive offload". LWN.net. Retrieved 2007-08-22.
  2. Aravind Menon, Willy Zwaenepoel (2008-04-28). "Optimizing TCP Receive Performance". USENIX Annual Technical Conference.
  3. "TCP performance re-visited". 2003-04-02.
  4. Rick Merritt (2002-10-21). EE Times.
  5. "Linux and TCP offload engines" (2005-08-22). LWN.net.
  6. ethtool 1.7 man page.
  7. Linux Foundation.

External links

  • Article: "TCP Offload to the Rescue" by Andy Currid, ACM Queue.
  • Mogul, Jeffrey C. (2003). "TCP offload is a dumb idea whose time has come". Proceedings of HotOS IX: The 9th Workshop on Hot Topics in Operating Systems. USENIX Association. Retrieved 23 July 2006.