Category: LINUX

2009-08-07 10:33:55

Original article:

Because Bonnie has some well-known limitations, such as its lack of support for files larger than 2 GB,
Russell developed a new set of code that supports files larger than 2 GB, among other improvements.
After obtaining permission, Russell named his software bonnie++, released it online, and it began to gain popularity.

The current version has been updated to 1.03a, which you can download from the following address:

You can also click here to download it. This version must be compiled; if you don't have a build environment, you can click here to download the binary I compiled for the Sun Solaris environment (tested on Solaris 8).

Russell's personal homepage is:

The main differences between Bonnie++ and bonnie are:

Here is a brief walkthrough of compiling and using Bonnie++:

1. Compiling

You must compile the source downloaded above before you can use it. If you don't have a build environment, you can click here to download the binary I compiled for the Sun Solaris environment (tested on Solaris 8).

Of course, you will need make, gcc, and the other necessary build tools installed. During the build, if you run into the following error, it is probably because your environment variables are not set correctly:

$ ./configure
grep: illegal option -- q
Usage: grep -hblcnsviw pattern file . . .
grep: illegal option -- q
Usage: grep -hblcnsviw pattern file . . .
checking for g++... g++
checking for C++ compiler default output... a.out
checking whether the C++ compiler works... configure: error: cannot run C++ compiled programs.
If you meant to cross compile, use `--host'.
See `config.log' for more details.
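
Incidentally, the repeated "grep: illegal option -- q" lines are a separate and harmless complaint: the default /usr/bin/grep on Solaris 8 does not understand -q, and configure carries on regardless. If you want to silence them, one workaround (my own suggestion, not something the original run used) is to put the XPG4 tools in front of the PATH before configuring:

# PATH=/usr/xpg4/bin:$PATH; export PATH
# ./configure

The real failure above is the "cannot run C++ compiled programs" message, which is what the environment variable below fixes.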

After setting the environment variable, rerun the build; it generally succeeds.

 

# export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib
# ./configure
grep: illegal option -- q
Usage: grep -hblcnsviw pattern file . . .
grep: illegal option -- q
Usage: grep -hblcnsviw pattern file . . .
checking for g++... g++
checking for C++ compiler default output... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking how to run the C++ preprocessor... g++ -E
checking for a BSD-compatible install... /usr/bin/install -c
checking for an ANSI C-conforming const... yes
checking for egrep... egrep
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... no
checking for unistd.h... yes
checking for size_t... yes
checking vector.h usability... yes
checking vector.h presence... yes
checking for vector.h... yes
checking vector usability... yes
checking vector presence... yes
checking for vector... yes
checking algorithm usability... yes
checking algorithm presence... yes
checking for algorithm... yes
checking algo.h usability... yes
checking algo.h presence... yes
checking for algo.h... yes
checking algo usability... no
checking algo presence... no
checking for algo... no
configure: creating ./config.status
config.status: creating Makefile
config.status: creating bonnie.h
config.status: creating port.h
config.status: creating bonnie++.spec
config.status: creating bon_csv2html
config.status: creating bon_csv2txt
config.status: creating sun/pkginfo
config.status: creating conf.h
config.status: conf.h is unchanged

 

Once the build finishes, the bonnie++ binary is generated and ready for testing.
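
For reference, the whole build boils down to the usual configure-and-make steps. This is just a recap of what was done above plus the make step; there is also an install target in the bonnie++ sources if you prefer a system-wide install, but it is not needed for ad-hoc testing:

# export LD_LIBRARY_PATH=/usr/lib:/usr/local/lib
# ./configure
# make
# ls -l bonnie++

make leaves the bonnie++ binary in the source tree, alongside the CSV helpers mentioned later.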

2. Some test results

a. Large-file read/write test on the T3

-m     name of the machine - for display purposes only.
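
The other switches used in the runs below are worth noting too; this is my summary of the bonnie++ usage text, so double-check it against ./bonnie++ -h on your build:

-d     directory in which the test files are created (e.g. /data1 below).
-u     user to run the benchmark as (required when you start it as root).
-s     size of the test file in MB; bonnie++ itself warns it should be roughly double the machine's RAM.

A typical invocation therefore looks like the one in the next listing:

# ./bonnie++ -d /data1 -u root -s 4096 -m billing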

# ./bonnie++ -d /data1 -u root -s 4096 -m billing
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
billing 4G 9915 87 30319 56 11685 38 9999 99 47326 66 177.6 3
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 639 19 +++++ +++ 1258 22 679 16 +++++ +++ 1197 27
billing,4G,9915,87,30319,56,11685,38,9999,99,47326,66,177.6,3,16,639,19,+++++,+++,1258,22,679,16,+++++,+++,1197,27

b. EMC CLARiiON CX500 results with write cache disabled

These are the results after I disabled the write cache:

Test on a 4-disk RAID 1+0:

# ./bonnie++ -d /eygle -u root -s 4096 -m jump
Using uid:0, gid:1.
File size should be double RAM for good results, RAM is 4096M.
# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
jump 8G 12647 36 13414 8 7952 13 33636 97 146503 71 465.7 5
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 86 1 +++++ +++ 161 1 81 1 +++++ +++ 163 1
jump,8G,12647,36,13414,8,7952,13,33636,97,146503,71,465.7,5,16,86,1,+++++,+++,161,1,81,1,+++++,+++,163,1

4-disk RAID 5, throughput after disabling the write cache:

# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
jump 8G 10956 30 10771 6 3388 5 34169 98 158861 75 431.1 5
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 81 1 +++++ +++ 160 1 82 1 +++++ +++ 109 1
jump,8G,10956,30,10771,6,3388,5,34169,98,158861,75,431.1,5,16,81,1,+++++,+++,160,1,82,1,+++++,+++,109,1

Comparing the two results (in K/sec):

 
          Per-char write   Block write   Per-char read   Block read
RAID 10           12,647        13,414          33,636      146,503
RAID 5            10,956        10,771          34,169      158,861
Diff               1,691         2,643            -533      -12,358

We can see that for direct (uncached) I/O, writes on RAID 10 are slightly faster than on RAID 5, while reads on RAID 5 are slightly faster than on RAID 10; this matches the usual expectation.

It is worth mentioning here that we usually recommend placing redo log files on RAID 10 disks because of their write advantage.

c. EMC CLARiiON CX500 results with 1 GB write cache enabled

These are the results for a 4-disk RAID 10:

  # ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
jump 8G 31447 90 73130 50 29123 50 33607 97 144470 71 493.5 6
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 786 13 +++++ +++ 1534 14 781 12 +++++ +++ 1527 15
jump,8G,31447,90,73130,50,29123,50,33607,97,144470,71,493.5,6,16,786,13,+++++,+++,1534,14,781,12,+++++,+++,1527,15

These are the results for a 4-disk RAID 5:

# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
jump 8G 34620 98 103440 65 35756 61 33900 97 160964 76 495.4 6
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 788 12 +++++ +++ 1503 14 783 11 +++++ +++ 1520 15
jump,8G,34620,98,103440,65,35756,61,33900,97,160964,76,495.4,6,16,788,12,+++++,+++,1503,14,783,11,+++++,+++,1520,15

Let's compare these results again (in K/sec):

 
          Per-char write   Block write   Per-char read   Block read
RAID 10           31,447        73,130          33,607      144,470
RAID 5            34,620       103,440          33,900      160,964
Diff              -3,173       -30,310            -293      -16,494

With a large write cache enabled, RAID 5 outperforms RAID 10 across the board.

 

3. A comparison between the T3 and the EMC

Both configured as a 4-disk RAID 5:

 
          Per-char write   Block write   Per-char read   Block read
T3                 9,915        30,319           9,999       47,326
EMC               34,620       103,440          33,900      160,964
Diff             -24,705       -73,120         -23,901     -113,638
EMC/T3              3.49          3.41            3.39         3.40

4. Testing an ATA RAID 10

Results with the 1 GB cache enabled:

# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
jump 8G 34299 97 84734 54 30960 53 33918 97 155474 74 575.0 7
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 786 12 +++++ +++ 1525 13 783 11 +++++ +++ 1508 15
jump,8G,34299,97,84734,54,30960,53,33918,97,155474,74,575.0,7,16,786,12,+++++,+++,1525,13,783,11,+++++,+++,1508,15

Results with the cache disabled:

# ./bonnie++ -d /eygle -u root -s 8192 -m jump
Using uid:0, gid:1.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
jump 8G 15587 44 16282 10 6433 10 33863 97 153921 73 538.8 6
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 99 1 +++++ +++ 161 1 91 1 +++++ +++ 180 1
jump,8G,15587,44,16282,10,6433,10,33863,97,153921,73,538.8,6,16,99,1,+++++,+++,161,1,91,1,+++++,+++,180,1

 

Walking through the output in order: from Writing with putc() through Delete files in random order..., bonnie++ performs 12 tests, and these 12 tests correspond one-to-one to the 12 results. The 12 results are in turn grouped into five categories: Sequential Output (write tests), Sequential Input (read tests), Random Seeks (seek tests), Sequential Create (sequential file create/read/delete tests), and Random Create (random file create/read/delete tests).

The test steps and the result columns therefore correspond in this order:

Writing with putc() -> Sequential Output: Per Chr

Writing intelligently -> Sequential Output: Block

Rewriting -> Sequential Output: Rewrite

Reading with getc() -> Sequential Input: Per Chr

Reading intelligently -> Sequential Input: Block

start 'em -> Random Seeks

Create files in sequential order -> Sequential Create: Create

Stat files in sequential order -> Sequential Create: Read

Delete files in sequential order -> Sequential Create: Delete

Create files in random order -> Random Create: Create

Stat files in random order -> Random Create: Read

Delete files in random order -> Random Create: Delete

Each result consists of two values: one is the number of kilobytes (or, for the file tests, files) processed per second, and the other is %CP, the average CPU utilization while that test was running.
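
As a concrete reading, here is the results row from the T3 run above (the billing line) annotated against those categories; every number is copied from that output:

billing 4G 9915 87 30319 56 11685 38 9999 99 47326 66 177.6 3

Sequential Output, Per Chr : 9915 K/sec at 87% CPU   (Writing with putc())
Sequential Output, Block   : 30319 K/sec at 56% CPU  (Writing intelligently)
Sequential Output, Rewrite : 11685 K/sec at 38% CPU  (Rewriting)
Sequential Input, Per Chr  : 9999 K/sec at 99% CPU   (Reading with getc())
Sequential Input, Block    : 47326 K/sec at 66% CPU  (Reading intelligently)
Random Seeks               : 177.6 seeks/sec at 3% CPU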

When evaluating the output, our view is that at the same CPU utilization, a higher transfer rate means greater throughput from the storage device, and therefore better performance.
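
Finally, the comma-separated line printed at the very end of each run is meant for post-processing: the bon_csv2txt and bon_csv2html helpers that configure generated (see the config.status output above) read that CSV from standard input and print a plain-text or HTML table. Something along these lines should work (treat it as a sketch; results.csv is simply a file into which you have saved those CSV lines):

# echo "billing,4G,9915,87,30319,56,11685,38,9999,99,47326,66,177.6,3,16,639,19,+++++,+++,1258,22,679,16,+++++,+++,1197,27" | ./bon_csv2txt
# ./bon_csv2html < results.csv > results.html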
