
By , January 17, 2011 2:41 AM 
I maintain a number of projects whose purpose in life is to make testing portions of PostgreSQL easier.  All of them got a decent upgrade over the last week.

stream-scaling tests how memory speed increases on servers as more cores are brought into play.  It's fascinating data, enough of it there to start seeing some real trends.  It now works correctly on systems with large amounts of CPU cache because they have many cores.  Before, it was possible for it to be so aggressive about sizing the test set to avoid cache impact that it needed more memory than could be allocated with the current design of the stream code.  That's been scaled back.  If you have a 48-core server or larger, I could use some more testing of this new code to see whether the new way I handle this makes sense.
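To make the sizing issue concrete, here is a minimal sketch (mine, not the actual stream-scaling code) of the logic described above: derive a target test size from the combined CPU cache, then scale it back so it never exceeds memory that can realistically be allocated.  The 4x multiplier and the half-of-free-memory cap are illustrative assumptions.

#!/usr/bin/env python
# Minimal sketch of the sizing logic described above -- not the actual
# stream-scaling code.  Pick a STREAM test size well past the combined
# CPU cache, then scale it back so it stays within memory that can
# actually be allocated.

def total_cache_kb():
    # Sum the "cache size" lines across all logical CPUs.  This
    # overcounts shared caches, which errs on the safe (larger) side.
    total = 0
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("cache size"):
                total += int(line.split(":")[1].split()[0])
    return total

def free_memory_kb():
    # MemFree from /proc/meminfo, in kB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemFree"):
                return int(line.split()[1])
    return 0

target_kb = 4 * total_cache_kb()      # big enough to defeat the cache
limit_kb = free_memory_kb() // 2      # but leave room to allocate it
array_kb = min(target_kb, limit_kb)
print("STREAM test size: %d kB" % array_kb)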

peg is a script I wrote to make it easier to build PostgreSQL from source, typically for developer work or for temporarily trying a newer version on a production system.  It was very easy to get confused when switching between projects and their associated git branches before; the documentation in this area is much improved.
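For anyone who has not done this by hand, the bare steps peg wraps look roughly like the following sketch.  This is not peg itself; the paths and branch name are placeholders, and a per-branch install prefix is what keeps builds of different branches from colliding.

#!/usr/bin/env python
# Sketch of the bare steps peg automates: check out a PostgreSQL git
# branch, build it, and install it under a branch-specific prefix so
# builds of different branches can coexist.  Paths are placeholders.
import os
import subprocess

SRC = os.path.expanduser("~/pgwork/src/postgresql")   # placeholder
BRANCH = "master"                                     # placeholder
PREFIX = os.path.expanduser("~/pgwork/inst/" + BRANCH)

def run(args):
    print(" ".join(args))
    subprocess.check_call(args, cwd=SRC)

run(["git", "checkout", BRANCH])
run(["./configure", "--prefix=" + PREFIX, "--enable-debug"])
run(["make"])
run(["make", "install"])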

pgbench-tools is my performance testing workhorse, allowing you to queue up days worth of benchmark runs and then have enough analysis tools available to make sense of them.  The program now tracks the recently introduced pg_stat_bgwriter.buffers_backend_fsync counter if you have a version that supports it (currently only a recent source build--which brings us back to why peg is useful).  You can also tell it to run each test for a fixed amount of time, which makes testing at wildly varying client/size values far easier.
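A rough sketch of what that tracking amounts to: snapshot the counter, run pgbench for a fixed time with -T, and report the delta.  The connection string, client count, and run length here are illustrative assumptions, and the column only exists on servers new enough to have it.

#!/usr/bin/env python
# Sketch: sample pg_stat_bgwriter.buffers_backend_fsync around a
# fixed-length pgbench run.  Connection and test parameters are
# placeholders, not pgbench-tools' actual configuration.
import subprocess
import psycopg2

def backend_fsyncs(conn):
    cur = conn.cursor()
    cur.execute("SELECT buffers_backend_fsync FROM pg_stat_bgwriter")
    return cur.fetchone()[0]

conn = psycopg2.connect("dbname=pgbench")
conn.autocommit = True    # stats are per-transaction snapshots, so take
                          # each sample in its own transaction

before = backend_fsyncs(conn)

# A fixed run time (-T) rather than a fixed transaction count keeps runs
# comparable across very different scale/client combinations.
subprocess.check_call(["pgbench", "-c", "32", "-j", "4", "-T", "600",
                       "pgbench"])

after = backend_fsyncs(conn)
print("buffers_backend_fsync during run: %d" % (after - before))
conn.close()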

As far as what you can do with pgbench-tools...as of today I am now publishing the performance testing I am doing on PostgreSQL 9.1, on the most powerful server I have unlimited use of:  8 cores, 16GB RAM, a 3-disk RAID-0 database volume, a 1-disk WAL volume, and an Areca battery-backed cache.  You can see the results.  Runs are organized into test sets, each of which represents some sort of change to the configuration.  For example, set #1 in this data is only running SELECT, set #2 is running TPC-B-like but with 8GB of RAM and earlier code, while the hot stuff is set #3, running TPC-B with 16GB of RAM and code that tracks buffers_backend_fsync.

There are several patches in the PostgreSQL 9.1 queue related to performance in the areas highlighted by these results--that Linux can have extremely high worst-case latency on write-heavy database loads.  A good average-ish example:  scale of 1000, 32 clients, 365 TPS.  But the worst-case latency is 43 seconds, and you can see the dead spots in the TPS graph.  That's just terrible, and there are a few concepts floating around for how to fix it.
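For reference, here is a small sketch of how one might pull worst-case latency and dead spots out of a pgbench per-transaction log (written with -l).  The log file name is a placeholder, and the assumed field layout is the documented one from this era of pgbench: client_id, transaction number, latency in microseconds, file number, completion epoch, and microseconds.

#!/usr/bin/env python
# Sketch: find worst-case latency and dead spots in a pgbench -l log.
# Assumed line format: client_id xact_no latency_us file_no epoch usec.
from collections import defaultdict

worst_us = 0
per_second = defaultdict(int)    # completed transactions per second

with open("pgbench_log.12345") as f:    # placeholder file name
    for line in f:
        fields = line.split()
        latency_us = int(fields[2])
        epoch = int(fields[4])
        worst_us = max(worst_us, latency_us)
        per_second[epoch] += 1

print("worst-case latency: %.1f seconds" % (worst_us / 1e6))

# Seconds where no transaction completed at all are the dead spots that
# show up as flat gaps in the TPS graph.
start, end = min(per_second), max(per_second)
dead = [s for s in range(start, end + 1) if per_second[s] == 0]
print("dead seconds: %d of %d" % (len(dead), end - start + 1))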

If anyone reading this has a powerful server available for a few weeks to run tests like this on, I'd be happy to help you replicate this environment and see what kind of results you see.  The only magic I've got is some practice at how to set the scaling and client loads so you don't lose a lot of time to unproductive tests.  The rest of my process is all free and documented.