Category: LINUX

2009-12-27 22:00:12



General Tuning Tips

The essentials of tuning Apache for optimum performance are discussed in depth elsewhere. Some of the advice given here is definitely Apache-specific, but parts of it (or at least the principles behind them) are more generally applicable.

Some simple rules follow:

    * Only enable those modules you absolutely require. Each additional module adds processing overhead, even if you are not making active use of it on your site.

    * Where possible, use static content instead of dynamic content. If you are generating weather reports that are updated once an hour, then it is better to write a program that generates a static file once an hour than to have users run a CGI to generate the report on the fly (a sketch of this approach follows this list).

    * When choosing an API for your dynamic applications, choose the fastest and most appropriate one available. CGI may be easy to program for, but it forks a process for each request, which is usually an expensive and unnecessary procedure. FastCGI is a better alternative, as is Apache's mod_perl; both provide persistence and can increase performance significantly.

    * If server performance is a critical issue for you (or you just want the most bang for your buck), then consider a high-performance web server instead of Apache. Frankly, Apache's process model is its weak point: it uses an individual process for each connection, which leads to heavy context switching when there are a large number of simultaneous connections. Single-process-per-CPU servers such as Zeus and Boa are significantly faster; Zeus has held the SPECweb96 and SPECweb99 records for the majority of the last five years. Zeus is commercial, but may be worth the money if you want to eke the last drop of performance out of your server.
          o Zeus:
          o Boa:
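
As a sketch of the static-content approach described above, the script below rebuilds a report once an hour from cron rather than running a CGI per request. The script path, output file, and the generate-weather-report command are placeholders, not part of any real setup:

      #!/bin/sh
      # build-report.sh - regenerate the weather report as a static page.
      # Intended to be run hourly from cron, e.g.:
      #   0 * * * * /usr/local/bin/build-report.sh
      set -e

      OUT=/var/www/html/weather.html        # file the web server serves statically
      TMP=`mktemp /tmp/weather.XXXXXX`

      # generate-weather-report stands in for whatever program the CGI used to run
      generate-weather-report > "$TMP"

      # move into place atomically so clients never see a half-written page
      chmod 644 "$TMP"
      mv "$TMP" "$OUT"

The web server then serves weather.html as ordinary static content, with no per-request process creation.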

Tuning file descriptor limits on Linux

Linux limits the number of file descriptors that any one process may open; the default limit is 1024 per process. This limit can prevent optimum performance of both benchmarking clients (such as httperf and ApacheBench) and of the web servers themselves (Apache is not affected, since it uses a process per connection, but single-process web servers such as Zeus use a file descriptor per connection and so can easily fall foul of the default limit).

The open file limit is one of the limits that can be tuned with the ulimit command. The command ulimit -aS displays the current (soft) limits, and ulimit -aH displays the hard limits, above which the soft limits cannot be raised (increasing the hard limits themselves requires the steps described below).

The following is an example of the output of ulimit -aH. You can see that the current shell (and its children) is restricted to 1024 open file descriptors.

core file size (blocks)     unlimited
data seg size (kbytes)      unlimited
file size (blocks)          unlimited
max locked memory (kbytes)  unlimited
max memory size (kbytes)    unlimited
open files                  1024
pipe size (512 bytes)       8
stack size (kbytes)         unlimited
cpu time (seconds)          unlimited
max user processes          4094
virtual memory (kbytes)     unlimited
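
If you only care about the open-file limits, ulimit also accepts -n combined with -S or -H; a quick check (the values reported will depend on your system, 1024 being typical for an untuned one) looks like this:

      ulimit -Sn    # soft (current) limit on open file descriptors
      ulimit -Hn    # hard limit, the ceiling the soft limit can be raised to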

Increasing the file descriptor limit

The file descriptor limit can be increased using the following procedure:

   1. Edit /etc/security/limits.conf and add the lines:

      *       soft    nofile  1024
      *       hard    nofile  65535

   2. Edit /etc/pam.d/login, adding the line:

      session required /lib/security/pam_limits.so

   3. The system file descriptor limit is set in /proc/sys/fs/file-max. The following command will increase the limit to 65535 (see the note after this procedure on making the change persist across reboots):

      echo 65535 > /proc/sys/fs/file-max

   4. You should then be able to raise the file descriptor limit for your shell (and the processes it starts) using:

      ulimit -n 65535

      This raises the soft limit up to the hard limit specified in /etc/security/limits.conf; requesting more than the hard limit (including ulimit -n unlimited) will be refused for non-root users.
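
As mentioned in step 3, the echo into /proc/sys/fs/file-max only lasts until the next reboot. On systems that read /etc/sysctl.conf at boot, one way to make the setting persistent (a sketch, run as root) is:

      # record the setting so it is reapplied at boot
      echo "fs.file-max = 65535" >> /etc/sysctl.conf

      # apply the settings in /etc/sysctl.conf immediately
      sysctl -p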

Note that you may need to log out and back in again before the changes take effect.
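
After logging back in, the following quick check (the expected values assume the limits.conf entries and file-max setting above) confirms that the new limits are in effect:

      ulimit -Hn                   # should now report 65535 (hard limit from limits.conf)
      cat /proc/sys/fs/file-max    # should report 65535 (system-wide limit from step 3)
      ulimit -n 65535              # raise the soft limit for this shell
      ulimit -Sn                   # should now report 65535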

The essentials of tuning Apache for optimum performance are discussed in depth at
Some of the advice given here is definitely Apache specific, but parts (or at least the principles behind them) are more generally applicable.

Some simple rules follow:

    * Only enable those modules you absolutely require. Each additional module implies additional processing overhead, even though you may not be making active use of the module on your site.

    * Where possible, use static content instead of dynamic content. If you are generating weather reports that are updated once an hour, then it is better to write a program that generates a static file once an hour than to have users run a CGI to generate the report on the fly.

    * When choosing an API for your dynamic applications, choose the fastest and most appropriate one available. CGI may be easy to program for, but it forks a process for each request - usually an expensive and unnecessary procedure. FastCGI is a better alternative, as is Apache's mod_perl- both provide persistency and can increase performance significantly.

    * If your server performance is a critical issue for you (or you just want to get the maximum number of bangs for your bucks), then choose a high performance web server instead of Apache. Apache's process model sucks, frankly; it uses an individual process for each connection, leading to massive context switching when you have a large number of simultaneous connections. Single process per CPU servers such as Zeus and Boa are significantly faster; Zeus has held the SPECweb96 and 99 records for the majority of the last five years. Unfortunately, however, it is commercial, but is worth the money if you want to eke the last drop of performance out of your server.
          o Zeus:
          o Boa:

Tuning file descriptor limits on Linux

Linux limits the number of file descriptors that any one process may open; the default limits are 1024 per process. These limits can prevent optimum performance of both benchmarking clients (such as httperf and apachebench) and of the web servers themselves (Apache is not affected, since it uses a process per connection, but single process web servers such as Zeus use a file descriptor per connection, and so can easily fall foul of the default limit).

The open file limit is one of the limits that can be tuned with the ulimit command. The command ulimit -aS displays the current limit, and ulimit -aH displays the hard limit (above which the limit cannot be increased without tuning kernel parameters in /proc).

The following is an example of the output of ulimit -aH. You can see that the current shell (and its children) is restricted to 1024 open file descriptors.

core file size (blocks)     unlimited
data seg size (kbytes)      unlimited
file size (blocks)          unlimited
max locked memory (kbytes)  unlimited
max memory size (kbytes)    unlimited
open files                  1024
pipe size (512 bytes)       8
stack size (kbytes)         unlimited
cpu time (seconds)          unlimited
max user processes          4094
virtual memory (kbytes)     unlimited

Increasing the file descriptor limit

The file descriptor limit can be increased using the following procedure:

   1. Edit /etc/security/limits.conf and add the lines:

      *       soft    nofile  1024
      *       hard    nofile  65535

   2. Edit /etc/pam.d/login, adding the line:

      session required /lib/security/pam_limits.so

   3. The system file descriptor limit is set in /proc/sys/fs/file-max. The following command will increase the limit to 65535:

      echo 65535 > /proc/sys/fs/file-max

   4. You should then be able to increase the file descriptor limits using:

      ulimit -n unlimited

      The above command will set the limits to the hard limit specified in /etc/security/limits.conf.

Note that you may need to log out and back in again before the changes take effect. 