
Category: C/C++

2015-12-04 11:04:55


Is it safe to share the same epoll fd (not socket fd) among several threads?

Yes, it is safe: the epoll(7) interface is thread-safe. But you should be careful when doing so; at the very least you should use EPOLLET (edge-triggered mode, as opposed to the default level-triggered mode) to avoid spurious wake-ups in the other threads. In level-triggered mode, every thread blocked in epoll_wait(2) is woken up when a new event becomes available, yet only one of them will actually handle it, so the rest wake up for nothing.
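
For reference, here is a minimal sketch of that registration, assuming an existing epoll instance epfd and a socket fd (the helper name is made up for illustration):

#include <stdio.h>
#include <sys/epoll.h>

/* Register fd edge-triggered: a readiness change wakes only one epoll_wait(2)
 * caller. Dropping EPOLLET gives the default level-triggered behaviour, where
 * every blocked thread may be woken while the fd remains readable. */
static int add_edge_triggered(int epfd, int fd)
{
    struct epoll_event ev;
    ev.events = EPOLLIN | EPOLLET;
    ev.data.fd = fd;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
        perror("epoll_ctl(2) EPOLL_CTL_ADD");
        return -1;
    }
    return 0;
}

(EPOLLONESHOT is another option: it guarantees that only one thread sees a given fd's events, at the cost of having to re-arm the fd with EPOLL_CTL_MOD after each wake-up.)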

If a shared epoll fd is used, does each thread have to pass its own events array to epoll_wait(2), or can they share one?

Yes, each thread needs its own events array, or else you will have race conditions and nasty things can happen. For example, one thread might still be iterating through the events returned by epoll_wait(2) and processing requests when another thread calls epoll_wait(2) with the same array, and the events get overwritten while the first thread is still reading them. Not good! You absolutely need a separate array for each thread.

Assuming you do have a separate array for each thread, either possibility, waiting on the same epoll fd or having a separate epoll fd for each thread, will work equally well, but note that the semantics are different. With a globally shared epoll fd, every thread waits for a request from any client, because all clients are added to the same epoll fd. With a separate epoll fd per thread, each thread is essentially responsible for a subset of clients (those clients that were accepted by that thread).

This may be irrelevant for your system, or it may make a huge difference. For example, a thread may be unfortunate enough to get a group of power users making heavy and frequent requests, leaving that thread overworked while other threads with less aggressive clients sit almost idle. Wouldn't that be unfair? On the other hand, maybe you would like only some threads to deal with a specific class of users, and in that case it may make sense to have a different epoll fd on each thread. As usual, you need to consider both possibilities, evaluate trade-offs, think about your specific problem, and make a decision.
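
For contrast with the shared-fd example below, here is a minimal sketch of the per-thread variant; the names (NWORKERS, worker_epfd, assign_client) are made up for illustration, and the accepting thread is assumed to be the only one touching next_worker:

#include <sys/epoll.h>

#define NWORKERS 4

static int worker_epfd[NWORKERS];  /* one epoll instance per worker, created at startup */
static unsigned next_worker;       /* round-robin cursor, used by the accepting thread only */

static int assign_client(int clientfd)
{
    struct epoll_event ev;
    ev.events = EPOLLIN | EPOLLET;
    ev.data.fd = clientfd;

    /* The client sticks to this worker's epoll fd for its whole lifetime. */
    int epfd = worker_epfd[next_worker++ % NWORKERS];
    return epoll_ctl(epfd, EPOLL_CTL_ADD, clientfd, &ev);
}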

Below is an example using a globally shared epoll fd. I originally didn't plan to do all of this, but one thing led to another, and, well, it was fun and I think it may help you get started. It's an echo server that listens on port 1111 (SERVERPORT in the code) and has a pool of 20 threads using epoll to concurrently accept new clients and serve requests.



// gcc server.c -o server -lpthread
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <pthread.h>
#include <assert.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/epoll.h>

#define SERVERPORT 1111
#define SERVERBACKLOG 10
#define THREADSNO 20
#define EVENTS_BUFF_SZ 256

static int serversock;
static int epoll_fd;
static pthread_t threads[THREADSNO];

/* Accept a pending connection and register it on the shared epoll fd. */
int accept_new_client(void)
{
    int clientsock;
    struct sockaddr_in addr;
    socklen_t addrlen = sizeof(addr);
    if ((clientsock = accept(serversock, (struct sockaddr *)&addr, &addrlen)) < 0) {
        return -1;
    }

    char ip_buff[INET_ADDRSTRLEN + 1];
    if (inet_ntop(AF_INET, &addr.sin_addr, ip_buff, sizeof(ip_buff)) == NULL) {
        close(clientsock);
        return -1;
    }

    printf("*** [%p] Client connected from %s:%d\n", (void *)pthread_self(), ip_buff, ntohs(addr.sin_port));

    struct epoll_event epevent;
    epevent.events = EPOLLIN | EPOLLET;
    epevent.data.fd = clientsock;

    if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, clientsock, &epevent) < 0) {
        perror("epoll_ctl(2) failed attempting to add new client");
        close(clientsock);
        return -1;
    }

    return 0;
}

/* Read one request from a client and echo it back. */
int handle_request(int clientfd)
{
    char readbuff[512];
    struct sockaddr_in addr;
    socklen_t addrlen = sizeof(addr);
    ssize_t n;

    if ((n = recv(clientfd, readbuff, sizeof(readbuff) - 1, 0)) < 0) {
        return -1;
    }

    if (n == 0) {
        /* Peer closed the connection. Closing the fd also removes it from
         * the epoll interest list, so no EPOLL_CTL_DEL is needed. */
        close(clientfd);
        return 0;
    }

    readbuff[n] = '\0';

    if (getpeername(clientfd, (struct sockaddr *)&addr, &addrlen) < 0) {
        return -1;
    }

    char ip_buff[INET_ADDRSTRLEN + 1];
    if (inet_ntop(AF_INET, &addr.sin_addr, ip_buff, sizeof(ip_buff)) == NULL) {
        return -1;
    }

    printf("*** [%p] [%s:%d] -> server: %s", (void *)pthread_self(), ip_buff, ntohs(addr.sin_port), readbuff);

    ssize_t sent;
    if ((sent = send(clientfd, readbuff, n, 0)) < 0) {
        return -1;
    }

    /* send(2) may write fewer than n bytes; truncate the log accordingly. */
    readbuff[sent] = '\0';

    printf("*** [%p] server -> [%s:%d]: %s", (void *)pthread_self(), ip_buff, ntohs(addr.sin_port), readbuff);

    return 0;
}

/* Worker loop: every thread blocks on the same epoll fd, but each one
 * passes its own events array to epoll_wait(2). */
void *worker_thr(void *args)
{
    (void)args;

    struct epoll_event *events = malloc(sizeof(*events) * EVENTS_BUFF_SZ);
    if (events == NULL) {
        perror("malloc(3) failed when attempting to allocate events buffer");
        pthread_exit(NULL);
    }

    int events_cnt;
    while ((events_cnt = epoll_wait(epoll_fd, events, EVENTS_BUFF_SZ, -1)) > 0) {
        int i;
        for (i = 0; i < events_cnt; i++) {
            /* EPOLLERR / EPOLLHUP may also be reported; a production server
             * should handle them instead of asserting. */
            assert(events[i].events & EPOLLIN);

            if (events[i].data.fd == serversock) {
                if (accept_new_client() == -1) {
                    fprintf(stderr, "Error accepting new client: %s\n", strerror(errno));
                }
            } else {
                if (handle_request(events[i].data.fd) == -1) {
                    fprintf(stderr, "Error handling request: %s\n", strerror(errno));
                }
            }
        }
    }

    if (events_cnt == 0) {
        fprintf(stderr, "epoll_wait(2) returned 0, but no timeout was specified...?\n");
    } else {
        perror("epoll_wait(2) error");
    }

    free(events);

    return NULL;
}

int main(void)
{
    if ((serversock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP)) < 0) {
        perror("socket(2) failed");
        exit(EXIT_FAILURE);
    }

    struct sockaddr_in serveraddr;
    memset(&serveraddr, 0, sizeof(serveraddr));  /* don't bind with garbage in the padding */
    serveraddr.sin_family = AF_INET;
    serveraddr.sin_port = htons(SERVERPORT);
    serveraddr.sin_addr.s_addr = htonl(INADDR_ANY);

    if (bind(serversock, (const struct sockaddr *)&serveraddr, sizeof(serveraddr)) < 0) {
        perror("bind(2) failed");
        exit(EXIT_FAILURE);
    }

    if (listen(serversock, SERVERBACKLOG) < 0) {
        perror("listen(2) failed");
        exit(EXIT_FAILURE);
    }

    if ((epoll_fd = epoll_create(1)) < 0) {
        perror("epoll_create(2) failed");
        exit(EXIT_FAILURE);
    }

    struct epoll_event epevent;
    epevent.events = EPOLLIN | EPOLLET;
    epevent.data.fd = serversock;

    if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, serversock, &epevent) < 0) {
        perror("epoll_ctl(2) failed on main server socket");
        exit(EXIT_FAILURE);
    }

    int i, err;
    for (i = 0; i < THREADSNO; i++) {
        /* pthread_create(3) returns an error number rather than setting errno. */
        if ((err = pthread_create(&threads[i], NULL, worker_thr, NULL)) != 0) {
            fprintf(stderr, "pthread_create(3) failed: %s\n", strerror(err));
            exit(EXIT_FAILURE);
        }
    }

    /* The main thread also contributes as a worker thread. */
    worker_thr(NULL);

    return 0;
}
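
To try it out, compile with gcc server.c -o server -lpthread, run it, and connect one or more clients with something like nc localhost 1111: each line you type should be echoed back, and the server's log shows which thread handled each request.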


A few notes:

  • main() should return int, not void (as you show in your example)
  • Always deal with error return codes. It is very common to ignore them and when things break it's hard to know what happened.
  • The code assumes that no request is larger than 511 bytes (the buffer size in handle_request()). If a request is larger, some data may be left in the socket for a very long time, because epoll_wait(2) will not report the fd again until a new event occurs on it (we're using EPOLLET). In the worst case, the client never sends any new data and waits forever for a reply. The usual fix is to make the client sockets non-blocking and read until recv(2) fails with EAGAIN; see the sketch after this list.
  • The code that prints the thread identifier for each request assumes that pthread_t can be cast to void *. That happens to work with glibc on Linux, where pthread_t is an unsigned long, but pthread_t is an opaque type and may be something else entirely (even a struct) on other platforms, so this is not portable. That is probably not much of a problem here, since epoll is Linux-specific and the code is not portable anyway.
  • It assumes that no further request from the same client arrives while a thread is still serving an earlier one. If a new request arrives in the meantime and another thread starts serving it, we have a race condition and the client will not necessarily receive the echo messages in the order they were sent (however, each reply goes out in a single send(2) call, so while the replies may be out of order, they are unlikely to intersperse).
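
For completeness, here is a minimal sketch of the drain-until-EAGAIN pattern mentioned in the notes above; it is not wired into the example, and it assumes the client socket was made non-blocking (e.g. with fcntl(fd, F_SETFL, O_NONBLOCK)):

#include <errno.h>
#include <unistd.h>

/* Read until the socket is drained, echoing each chunk back; required with
 * EPOLLET, which reports an fd again only when new data arrives. */
static int drain_and_echo(int fd)
{
    char buf[512];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) {
            if (write(fd, buf, (size_t)n) < 0)  /* echo this chunk back */
                return -1;
        } else if (n == 0) {
            close(fd);      /* peer closed the connection */
            return 0;
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            return 1;       /* drained; safe to wait for the next edge */
        } else if (errno != EINTR) {
            return -1;      /* real error */
        }
    }
}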

