Category: Networking & Security
2013-08-29 23:05:27
Original article: "TCP close and the SIGPIPE signal", by mfc42d
People keep reporting that the message-sending client dies on its own, so it was worth spending some time to look into it.
The server-side logs show that this usually happens when some of the client's data is incomplete and the server closes the connection. A packet capture reveals that when the connection is torn down the client sees ECONNRESET rather than a normal FIN: during the close() system call the server sends out an RST segment instead of a FIN.
If the server's receive buffer still holds unread data when close() is called, the server's TCP stack sends an RST instead of a FIN.
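This is easy to reproduce with a few lines of socket code. The sketch below is mine, not from the original capture: the port number is arbitrary and error handling is omitted. The server accepts a connection, deliberately never reads the client's data, and calls close(); tcpdump on the loopback interface then shows an RST from the server rather than a FIN.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(12345);              /* arbitrary test port */
    bind(ls, (struct sockaddr *) &addr, sizeof(addr));
    listen(ls, 1);

    if (fork() == 0) {                         /* child plays the client */
        int cs = socket(AF_INET, SOCK_STREAM, 0);
        connect(cs, (struct sockaddr *) &addr, sizeof(addr));
        write(cs, "hello", 5);                 /* data the server never reads */
        sleep(2);                              /* give the server time to close() first */
        close(cs);                             /* by now the RST has already arrived */
        _exit(0);
    }

    int conn = accept(ls, NULL, NULL);
    sleep(1);                                  /* let the client's data land in the receive buffer */
    close(conn);                               /* unread data -> RST, see tcp.c below */
    close(ls);
    return 0;
}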
Here is the relevant part of net/ipv4/tcp.c in the Linux 2.6.27 kernel:
/* As outlined in RFC 2525, section 2.17, we send a RST here because
 * data was lost. To witness the awful effects of the old behavior of
 * always doing a FIN, run an older 2.1.x kernel or 2.0.x, start a bulk
 * GET in an FTP client, suspend the process, wait for the client to
 * advertise a zero window, then kill -9 the FTP client, wheee...
 * Note: timeout is always zero in such a case.
 */
if (data_was_unread) {
    /* Unread data was tossed, zap the connection. */
    NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONCLOSE);
    tcp_set_state(sk, TCP_CLOSE);
    tcp_send_active_reset(sk, GFP_KERNEL);
} else if (sock_flag(sk, SOCK_LINGER) && !sk->sk_lingertime) {
    /* Check zero linger _after_ checking for unread data. */
    sk->sk_prot->disconnect(sk, 0);
    NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
} else if (tcp_close_state(sk)) {
    /* We FIN if the application ate all the data before
     * zapping the connection.
     */

    /* RED-PEN. Formally speaking, we have broken TCP state
     * machine. State transitions:
     *
     *   TCP_ESTABLISHED -> TCP_FIN_WAIT1
     *   TCP_SYN_RECV    -> TCP_FIN_WAIT1 (forget it, it's impossible)
     *   TCP_CLOSE_WAIT  -> TCP_LAST_ACK
     *
     * are legal only when FIN has been sent (i.e. in window),
     * rather than queued out of window. Purists blame.
     *
     * F.e. "RFC state" is ESTABLISHED,
     * if Linux state is FIN-WAIT-1, but FIN is still not sent.
     *
     * The visible declinations are that sometimes
     * we enter time-wait state, when it is not required really
     * (harmless), do not send active resets, when they are
     * required by specs (TCP_ESTABLISHED, TCP_CLOSE_WAIT, when
     * they look as CLOSING or LAST_ACK for Linux)
     * Probably, I missed some more holelets.
     *                                            --ANK
     */
    tcp_send_fin(sk);
}
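As a hedged aside, the second branch above (SOCK_LINGER with a zero linger time) is the path taken when user code enables SO_LINGER with a zero timeout before close(); that also produces an abortive close with an RST even when all data has been read, for example:

#include <sys/socket.h>
#include <unistd.h>

static void abortive_close(int fd)
{
    struct linger lg = { .l_onoff = 1, .l_linger = 0 };

    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    close(fd);    /* connection is reset instead of going through FIN/TIME_WAIT */
}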
Of course, the real reason the client process dies is not in this code. When the server closes a connection and the client then continues to send data, the client receives an RST as the TCP protocol requires. When the process subsequently writes to a socket that has received an RST, the kernel delivers a SIGPIPE signal to it. The default action for SIGPIPE is to terminate the process, which is why the client exits. If the client should not exit, set the disposition of SIGPIPE to SIG_IGN,
e.g.: signal(SIGPIPE, SIG_IGN);
The signal is then simply discarded, and the failing call reports the error through its return value instead.
Normally, if the connection breaks while send is waiting for the protocol layer to transmit data, the process calling send receives a SIGPIPE signal, and the default handling of that signal terminates the process.
On Unix systems, a recv that is waiting for data on a reset connection does not raise SIGPIPE; it simply fails with an error (typically ECONNRESET), since the signal is only generated for writes.
How to handle it:
Call signal(SIGPIPE, SIG_IGN) during initialization to ignore the signal.
send will then return -1 with errno set to EPIPE (recv reports ECONNRESET), at which point the socket can be closed.
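Putting those pieces together, here is a minimal hedged sketch of that handling. send_all() is an illustrative helper name of mine, not from the original post, and the MSG_NOSIGNAL flag is a Linux-specific per-call alternative to the global SIG_IGN:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static void net_init(void)
{
    signal(SIGPIPE, SIG_IGN);      /* otherwise a write to a reset socket kills the process */
}

/* Send len bytes; on a broken connection report it and close the socket. */
static int send_all(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
        if (n < 0) {
            if (errno == EINTR)
                continue;
            if (errno == EPIPE || errno == ECONNRESET) {
                fprintf(stderr, "peer closed the connection\n");
                close(fd);
            }
            return -1;
        }
        buf += n;
        len -= (size_t) n;
    }
    return 0;
}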
Looking at the JDK implementation (openjdk-7-fcs-src-b147-27_jun_2011\openjdk\jdk\src\solaris\native\java\net\SocketInputStream.c), the Java_java_net_SocketInputStream_socketRead0 function does:
if (nread < 0) {
    switch (errno) {
        case ECONNRESET:
        case EPIPE:
            JNU_ThrowByName(env, "sun/net/ConnectionResetException",
                            "Connection reset");
            break;
        case EBADF:
            JNU_ThrowByName(env, JNU_JAVANETPKG "SocketException",
                            "Socket closed");
            break;
        case EINTR:
            JNU_ThrowByName(env, JNU_JAVAIOPKG "InterruptedIOException",
                            "Operation interrupted");
            break;
        default:
            NET_ThrowByNameWithLastError(env,
                JNU_JAVANETPKG "SocketException", "Read failed");
    }
}
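So on the Java side the RST described above surfaces as a "Connection reset" exception rather than a killed process: the native read maps ECONNRESET and EPIPE from errno to an exception instead of letting a signal propagate.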
The corresponding code in Squid's client_side.c:
/* This is a handler normally called by comm_close() */
static void
connStateFree(int fd, void *data)
{
    ConnStateData *connState = data;
    dlink_node *n;
    clientHttpRequest *http;

    debug(33, 3) ("connStateFree: FD %d\n", fd);
    assert(connState != NULL);
    clientdbEstablished(connState->peer.sin_addr, -1);    /* decrement */
    n = connState->reqs.head;
    while (n != NULL) {
        http = n->data;
        n = n->next;
        assert(http->conn == connState);
        httpRequestFree(http);
    }
    if (connState->auth_user_request)
        authenticateAuthUserRequestUnlock(connState->auth_user_request);
    connState->auth_user_request = NULL;
    authenticateOnCloseConnection(connState);
    memFreeBuf(connState->in.size, connState->in.buf);
    pconnHistCount(0, connState->nrequests);
    if (connState->pinning.fd >= 0)
        comm_close(connState->pinning.fd);
    cbdataFree(connState);
#ifdef _SQUID_LINUX_
    /* prevent those nasty RST packets */
    {
        char buf[SQUID_TCP_SO_RCVBUF];
        while (FD_READ_METHOD(fd, buf, SQUID_TCP_SO_RCVBUF) > 0);
    }
#endif
}
The point is the last few lines: if the data remaining in the read buffer is not drained before close(), the close will send an RST.
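The same idea as a standalone hedged sketch of mine (the 64 KiB buffer size is arbitrary, and the socket is assumed to be non-blocking as it is in Squid, so the loop stops as soon as the buffer is empty):

#include <unistd.h>

static void drain_and_close(int fd)
{
    char buf[65536];

    /* Empty the receive buffer so the kernel's data_was_unread branch
     * is not taken and a normal FIN is sent. */
    while (read(fd, buf, sizeof(buf)) > 0)
        ;
    close(fd);
}

nginx does the same thing in its lingering-close handler: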
static void
ngx_http_lingering_close_handler(ngx_event_t *rev)
{
    ssize_t                    n;
    ngx_msec_t                 timer;
    ngx_connection_t          *c;
    ngx_http_request_t        *r;
    ngx_http_core_loc_conf_t  *clcf;
    u_char                     buffer[NGX_HTTP_LINGERING_BUFFER_SIZE];

    c = rev->data;
    r = c->data;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
                   "http lingering close handler");

    if (rev->timedout) {
        ngx_http_close_request(r, 0);
        return;
    }

    timer = (ngx_msec_t) (r->lingering_time - ngx_time());
    if (timer <= 0) {
        ngx_http_close_request(r, 0);
        return;
    }

    do {
        n = c->recv(c, buffer, NGX_HTTP_LINGERING_BUFFER_SIZE);

        ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0, "lingering read: %d", n);

        if (n == NGX_ERROR || n == 0) {
            ngx_http_close_request(r, 0);
            return;
        }

    } while (rev->ready);

    if (ngx_handle_read_event(rev, 0) != NGX_OK) {
        ngx_http_close_request(r, 0);
        return;
    }

    clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);

    timer *= 1000;

    if (timer > clcf->lingering_timeout) {
        timer = clcf->lingering_timeout;
    }

    ngx_add_timer(rev, timer);
}
nginx likewise reads off all remaining data when closing the connection.
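A simplified blocking-I/O version of this lingering-close pattern might look like the sketch below; the shutdown(SHUT_WR) call and the 5-second SO_RCVTIMEO cap are my additions for illustration, not taken from the nginx source:

#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

static void lingering_close(int fd)
{
    char buf[4096];
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };   /* arbitrary upper bound */

    shutdown(fd, SHUT_WR);                               /* we are done sending */
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    /* Keep reading until the peer finishes (read() returns 0), an error
     * occurs, or the receive timeout above fires. */
    while (read(fd, buf, sizeof(buf)) > 0)
        ;
    close(fd);
}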