Category: MySQL/PostgreSQL

2011-09-03 08:28:53

Check shared_buffers
   
     Trigger: 
shared_buffers < (OS == Windows ? 64MB : MIN(0.20 * (system_memory - 256MB), 6GB)) or shared_buffers > (OS == Windows ? 512MB : MAX(0.35 * system_memory, 8GB))

     Recommended Value: 
system_memory < 1GB ? MAX((system_memory - 256MB) / (OS == Windows ? 6 : 3), 64MB) : (OS == Windows ? MAX(system_memory / 8, 256MB) : MIN(system_memory / 4, 8GB))

     Description: 
The configuration variable shared_buffers controls the amount of memory reserved by PostgreSQL for its internal buffer cache.  Setting this value too low may result in "thrashing" the buffer cache, resulting in excessive disk activity and degraded performance.  However, setting it too high may also cause performance problems.  PostgreSQL relies on operating system caching to a significant degree, and setting this value too high may result in excessive "double buffering" that can degrade performance.  It also increases the internal costs of managing the buffer pool.  On UNIX-like systems, a good starting value is approximately 25% of system memory, but not more than 8GB.  On Windows systems, values between 64MB and 512MB typically perform best.  The optimal value is workload-dependent, so it may be worthwhile to try several different values and benchmark your system to determine which one delivers the best performance. Note: PostgreSQL will fail to start if the necessary amount of shared memory cannot be allocated.  This is usually due to an operating system limitation which can be raised by changing a system configuration setting, often called shmall.  See the documentation for more details.  You must set this limit to a value somewhat higher than the amount of memory required for shared_buffers, because PostgreSQL's shared memory allocation also includes amounts required for other purposes.

     Current Values: 

      Setting            Value
      shared_buffers     "32 MB"
      total_ram_memory   "3884 MB"
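
For the Windows host shown above (3884 MB of RAM), the recommended formula works out to MAX(3884MB / 8, 256MB) ≈ 486MB, near the top of the 64MB-512MB range that typically performs best on Windows. A minimal sketch of the change (the exact value is illustrative; shared_buffers only takes effect after a server restart):

      # postgresql.conf -- edit, then restart the server
      shared_buffers = 486MB          # up from the 32MB shown above

      -- confirm from psql after the restart:
      SHOW shared_buffers;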

Check work_mem

     Trigger: 
given spare_mem = system_memory - (OS == Windows ? 256MB : MAX(0.25 * system_memory, 8GB)) then work_mem < MAX(1MB, spare_mem / 512) or work_mem > (spare_mem / 128)

     Recommended Value: 
given spare_mem as defined above, then MAX(1MB, spare_mem / 256)

     Description: 
The configuration variable work_mem controls the amount of memory PostgreSQL will use for each individual hash or sort operation.  When a sort would use more than this amount of memory, the planner will arrange to perform an external sort using disk files.  While this algorithm is memory efficient, it is much slower than an in-memory quick sort.  Similarly, when a hash join would use more than this amount of memory, the planner will arrange to perform it in multiple batches, which saves memory but is likewise much slower.  In either case, the planner may instead choose some other plan that does not require the sort or hash operation, but this too is often less efficient.  Therefore, for good performance, it is important to set this parameter high enough to allow the planner to choose good plans.  However, each concurrently executing query can potentially involve several sorts or hashes, and the number of queries on the system can vary greatly.  Therefore, a value for this setting that works well when the system is lightly loaded may result in swapping when the system becomes more heavily loaded.  Swapping has very negative effects on database performance and should be avoided, so it is usually wise to set this value somewhat conservatively. Note: work_mem can be adjusted for particular databases, users, or user-and-database combinations by using the commands ALTER ROLE and ALTER DATABASE.  It can also be changed for a single session using the SET command.  This can be helpful when particular queries can be shown to run much faster with a value of work_mem that is too high to be applied to the system as a whole.

     Current Values: 
     
      Setting            Value
      work_mem           "1.000 MB"
      total_ram_memory   "3884 MB"
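
Plugging in the values above: spare_mem = 3884MB - 256MB = 3628MB on this Windows host, so the recommended setting is MAX(1MB, 3628MB / 256) ≈ 14MB. The overrides mentioned in the description look like this (the role and database names are hypothetical):

      -- session-level, for a single expensive query:
      SET work_mem = '64MB';

      -- persistent overrides:
      ALTER ROLE reporting_user SET work_mem = '32MB';
      ALTER DATABASE salesdb SET work_mem = '16MB';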

Check maintenance_work_mem
 
     Trigger: 
given spare_mem = system_memory - (OS == Windows ? 256MB : MAX(0.25 * system_memory, 8GB)) then maintenance_work_mem < MAX(16MB, spare_mem / 32) or maintenance_work_mem > MIN(spare_mem / 8, 256MB)

     Recommended Value: 
given spare_mem as defined on the previous line, then MIN(spare_mem / 16, 256MB)

     Description: 
The configuration variable maintenance_work_mem controls the amount of memory PostgreSQL will use for maintenance operations such as CREATE INDEX and VACUUM.  Increasing this setting from the default of 16MB to 256MB can make these operations run much faster.  Higher settings typically do not produce a significant further improvement.  On PostgreSQL 8.3 and higher, multiple autovacuum processes may be running at one time (up to autovacuum_max_workers, which defaults to 3), and each such process will use the amount of dedicated memory dictated by this parameter.  This should be kept in mind when setting this parameter, especially on systems with relatively modest amounts of physical memory, so as to avoid swapping.  Swapping has very negative effects on database performance and should be avoided.  If the value recommended above is less than 256MB, it is chosen with this consideration in mind.  However, the optimal value is workload-dependent, so it may be worthwhile to experiment with higher or lower settings.

     Current Values: 
 
      Setting                Value
      maintenance_work_mem   "16 MB"
      total_ram_memory       "3884 MB"
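
For this host, spare_mem = 3628MB, so MIN(3628MB / 16, 256MB) ≈ 227MB. A sketch of two ways to apply it (values illustrative; the index statement is a hypothetical example):

      # postgresql.conf -- a reload is sufficient
      maintenance_work_mem = 227MB    # up from the default 16MB

      -- or raise it for one session before a large index build:
      SET maintenance_work_mem = '256MB';
      CREATE INDEX idx_orders_date ON orders (order_date);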

Check wal_sync_method

     Trigger: 
OS == Windows and wal_sync_method not in (fsync, fsync_writethrough) or OS == MacOS X and wal_sync_method != fsync_writethrough

     Recommended Value: 
On Windows, consider configuring wal_sync_method = fsync or wal_sync_method = fsync_writethrough. On Mac OS X, consider configuring wal_sync_method = fsync_writethrough.

     Description: 
In order to guarantee reliable crash recovery, PostgreSQL must ensure that the operating system flushes the write-ahead log to disk when asked to do so.  On Windows, this can be achieved by setting wal_sync_method to fsync or fsync_writethrough, or by disabling the disk cache on the drive where the write-ahead log is written.  (It is safe to leave the disk cache enabled if a battery-backed disk cache is in use.)

     Current Values: 
      Setting            Value
      wal_sync_method    open_datasync
      Operating System   Windows

Check wal_buffers

     Trigger: 
wal_buffers < 1MB or wal_buffers > 16MB

     Recommended Value: 
Consider adjusting  wal_buffers.

     Description: 
Increasing the configuration parameter wal_buffers from the default value of 64kB to 1MB or more can reduce the number of times the database must flush the write-ahead log, leading to improved performance under some workloads.  There is no benefit to setting this parameter to a value greater than the size of a WAL segment (16MB).

     Current Values: 
      Setting       Value
      wal_buffers   "0.063 MB"
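
Given the 64kB (0.063 MB) value above, any setting in the recommended 1MB-16MB band qualifies; the value below is illustrative, and wal_buffers requires a server restart:

      # postgresql.conf -- edit, then restart the server
      wal_buffers = 1MB               # no benefit above the 16MB WAL segment size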

Check checkpoint_segments

     Trigger: 
checkpoint_segments < 10 or checkpoint_segments > 300

     Recommended Value: 
Consider adjusting checkpoint_segments.

     Description: 
In order to ensure reliable and efficient crash recovery, PostgreSQL periodically writes all dirty buffers to disk.  This process is called a checkpoint.  Checkpoints occur when (1) the number of write-ahead log segments written since the last checkpoint exceeds checkpoint_segments, (2) the amount of time since the last checkpoint exceeds checkpoint_timeout, (3) the SQL command CHECKPOINT is issued, or (4) the system completes either shutdown or crash recovery.  Increasing the value of checkpoint_segments will reduce the frequency of checkpoints and will therefore improve performance, especially during bulk loading.  The main downside of increasing checkpoint_segments is that, in the event of a crash, recovery will require a longer period of time to return the database to a consistent state.  In addition, increasing checkpoint_segments will increase disk space consumption during periods of heavy system activity.  However, because the theoretical limit on the amount of additional disk space that will be consumed for this reason is less than 32MB per additional checkpoint segment, this is often a small price to pay for improved performance. Values between 30 and 100 are often suitable for modern systems.  However, on smaller systems, a value as low as 10 may be appropriate, and on larger systems, a value as high as 300 may be useful.  Values outside this range are generally not worthwhile.

     Current Values: 
      Setting       Value
      checkpoint_segments       3
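
At the current value of 3, a busy system will checkpoint very frequently. A mid-range value from the description, with the worst-case disk arithmetic it implies (less than 32MB per additional segment):

      # postgresql.conf -- takes effect on reload
      checkpoint_segments = 30        # 27 extra segments -> at most ~27 * 32MB = 864MB more WAL disk use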

Check checkpoint_completion_target

     Trigger: 
checkpoint_completion_target != 0.9

     Recommended Value: 
Consider adjusting checkpoint_completion_target.

     Description: 
In order to ensure reliable and efficient crash recovery, PostgreSQL periodically writes all dirty buffers to disk.  This process is called a checkpoint.  Beginning in PostgreSQL 8.3, checkpoints take place over an extended period of time in order to avoid swamping the I/O system.  checkpoint_completion_target controls the rate at which the checkpoint is performed, as a function of the time remaining before the next checkpoint is due to start.  A value of 0 indicates that the checkpoint should be performed as quickly as possible, whereas a value of 1 indicates that the checkpoint should complete just as the next checkpoint is scheduled to start.  It is usually beneficial to spread the checkpoint out as much as possible; however, if checkpoint_completion_target is set to a value greater than 0.9, unexpected delays near the end of the checkpoint process can cause the checkpoint to fail to complete before the next one needs to start.  Because of this, the recommended setting is 0.9.

     Current Values: 
      Setting       Value
      checkpoint_completion_target      0.5
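
The change itself is a single line (takes effect on a configuration reload):

      # postgresql.conf
      checkpoint_completion_target = 0.9   # up from 0.5; spreads checkpoint I/O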

Check effective_cache_size
 
     Trigger: 
effective_cache_size < 0.5 * system_memory or effective_cache_size > MAX(0.9 * system_memory, system_memory - 1GB)

     Recommended Value: 
0.75 * system_memory

     Description: 
When estimating the cost of a nested loop with an inner index-scan, PostgreSQL uses this parameter to estimate the chances that rows from the inner relation which are fetched multiple times will still be in cache when the second fetch occurs.  Changing this parameter does not allocate any memory, but an excessively small value may discourage the planner from using indexes which would in fact speed up the query.  The recommended value is 75% of system memory.

     Current Values:      
      Setting                Value
      effective_cache_size   "128 MB"
      total_ram_memory       "3884 MB"
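
Applying the 75% rule to the 3884MB shown above gives 0.75 * 3884MB ≈ 2913MB, far above the current 128MB. Since this parameter only informs the planner and allocates no memory, it is a low-risk change (a reload is sufficient):

      # postgresql.conf
      effective_cache_size = 2913MB   # ~75% of the 3884MB of RAM shown above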

Check reducing random_page_cost
   
     Trigger: 
random_page_cost > 2 * seq_page_cost

     Recommended Value: 
Consider reducing random_page_cost to no more than twice seq_page_cost.

     Description: 
seq_page_cost and random_page_cost are parameters used by the query planner to determine the optimal plan for each query.  seq_page_cost represents the cost of a sequential page read, while random_page_cost represents the cost of a random page read.  random_page_cost should always be greater than or equal to seq_page_cost, but it is rarely beneficial to set random_page_cost to a value more than twice seq_page_cost.  However, the correct values for these variables are workload-dependent.  If the database's working set is much larger than physical memory and the blocks needed to execute a query will rarely be in cache, setting random_page_cost to a value greater than twice seq_page_cost may maximize performance.

     Current Values: 
      Setting       Value
      seq_page_cost      1.000
      random_page_cost   4.000
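
Given seq_page_cost = 1.000 above, the rule caps random_page_cost at 2.0. A sketch (the value is illustrative; it can be tested per-session before being made permanent):

      -- try it in one session first:
      SET random_page_cost = 2.0;

      # postgresql.conf -- persistent setting, takes effect on reload
      random_page_cost = 2.0          # no more than 2 * seq_page_cost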


Check for missing primary keys
 
     Trigger: 
table with no defined primary key

     Recommended Value: 
Ensure tables have a Primary Key

     Description: 
Primary Keys are used to define the set of columns that make up the unique key to each row in the table. Whilst they are similar to unique indexes, Primary Keys cannot contain NULL values, and can therefore always identify a single row. Tools such as Postgres Enterprise Manager, as well as other software such as ORMs, automatically detect Primary Keys on tables and use their definitions to identify individual rows.
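
One way to list tables that lack a primary key is a catalog query along these lines (a sketch; it looks for ordinary tables with no 'p' entry in pg_constraint):

      SELECT n.nspname AS schema_name, c.relname AS table_name
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
      WHERE c.relkind = 'r'
        AND n.nspname NOT IN ('pg_catalog', 'information_schema')
        AND NOT EXISTS (SELECT 1 FROM pg_constraint p
                        WHERE p.conrelid = c.oid AND p.contype = 'p');

      -- adding one afterwards (table and column names are hypothetical):
      ALTER TABLE orders ADD PRIMARY KEY (order_id);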

Check data and transaction log on same drive
 
     Trigger: 
data directory and transaction log directory share a device

     Recommended Value: 
Avoid using the same storage device for the data directory and transaction logs

     Description: 
The database server must first write any changes to the transaction log, a sequential record of all changes to be made to the database files, and only then to the database files themselves. On busy servers, significant performance gains may be seen by separating the data directory and transaction log directory onto different physical storage devices.

     Current Values: 
      Setting       Value
      data_dir       "C:/Program Files/PostgreSQL/9.0/data"
      xlog_dir       "C:/Program Files/PostgreSQL/9.0/data/pg_xlog"
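
On Windows, the usual approach is to stop the server, move pg_xlog to a second physical drive, and leave a directory junction in its place. A sketch, assuming a hypothetical E: drive (run from an elevated command prompt with the service stopped):

      move "C:\Program Files\PostgreSQL\9.0\data\pg_xlog" "E:\pg_xlog"
      mklink /J "C:\Program Files\PostgreSQL\9.0\data\pg_xlog" "E:\pg_xlog"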