Category: LINUX

2010-09-08 16:46:42

Basic tests:
dd

The idea is simply to time how long it takes to read or write a given number of blocks of a given size; dd reports the resulting speed itself.
The basic tests are as follows.
Disk read speed:
sync; time dd if=[mountpoint]/testfile of=/dev/null bs=4096k count=2000
Amount of data read: 4096k × 2000
Disk write speed:
sync; time dd if=/dev/zero of=[mountpoint]/testfile bs=4096k count=2000
Amount of data written: 4096k × 2000
Replace [mountpoint] with your actual mount point; run the write test first so the test file exists for the read test.
Both tests above move 2000 blocks of 4 MB each; vary bs to compare performance at different block sizes.
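A small script sketch of the same idea, sweeping a few block sizes. The path /mnt/test is only a placeholder for the filesystem under test, and the oflag=direct and drop_caches steps are assumptions that hold for GNU dd on Linux (run as root):

TESTFILE=/mnt/test/ddtest.bin   # placeholder test file on the filesystem under test

for BS in 64k 1M 4M; do
    # Write test: bypass the page cache so the figure reflects the disk.
    sync
    echo "write, bs=$BS"
    dd if=/dev/zero of="$TESTFILE" bs="$BS" count=2000 oflag=direct 2>&1 | tail -n 1

    # Read test: drop cached pages first, otherwise the file just written
    # would be read back from memory instead of from the disk.
    sync; echo 3 > /proc/sys/vm/drop_caches
    echo "read, bs=$BS"
    dd if="$TESTFILE" of=/dev/null bs="$BS" count=2000 2>&1 | tail -n 1
done
rm -f "$TESTFILE"

If direct I/O is not used, scale count so the total data size comfortably exceeds RAM; otherwise the numbers mostly measure the page cache.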
 

hdparm (hard disk parameters)

Purpose: display and set hard-disk parameters.

Syntax: hdparm [-CfghiIqtTvyYZ] [-a <sectors>] [-A <0|1>] [-c] [-d <0|1>] [-k <0|1>] [-K <0|1>] [-m <sectors>] [-n <0|1>] [-p] [-P <sectors>] [-r <0|1>] [-S <timeout>] [-u <0|1>] [-W <0|1>] [-X <transfer mode>] [device]

Additional notes: hdparm can detect, display, and set the parameters of IDE or SCSI hard disks.

Options:
-a <sectors>   Set the filesystem read-ahead, i.e. how many sectors are prefetched when reading files; without an argument, display the current setting.
-A <0|1>   Enable or disable the drive's read-lookahead feature.
-c   Set IDE 32-bit I/O mode.
-C   Check the power-management mode of the IDE drive.
-d <0|1>   Enable or disable DMA for the drive.
-f   Flush the in-memory buffer cache for the device and clear it.
-g   Display the drive's geometry (cylinders, heads, sectors).
-h   Display help.
-i   Display the identification information the drive reported at boot time.
-I   Read the identification information directly from the drive.
-k <0|1>   Keep the settings made with -dmu over a drive reset.
-K <0|1>   Keep the settings made with -APSWXZ over a drive reset.
-m <sectors>   Set the sector count for multiple-sector (block-mode) access.
-n <0|1>   Ignore errors that occur while writing to the disk.
-p   Set the drive's PIO mode.
-P <sectors>   Set the sector count for the drive's internal prefetch.
-q   Suppress screen output while applying the following options.
-r <0|1>   Set the read-only flag (read/write mode) for the device.
-S <timeout>   Set the idle time before the drive enters power-saving mode.
-t   Measure the drive's read performance (buffered device reads).
-T   Measure the read performance of the buffer cache (cached reads).
-u <0|1>   Allow other interrupts to be serviced while the disk is being accessed.
-v   Display the drive's current settings.
-W <0|1>   Enable or disable the drive's write caching.
-X <transfer mode>   Set the drive's transfer mode.
-y   Put the IDE drive into power-saving (standby) mode.
-Y   Put the IDE drive to sleep.
-Z   Disable the automatic power-saving feature of certain Seagate drives.
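For quick read-throughput numbers, the usual invocation combines -t and -T; /dev/sda below is only a placeholder for the disk being tested, and the commands need root privileges:

# Flush buffers so the timed device reads are not served from memory.
sync
# -T times reads from the Linux buffer cache (an upper bound set by RAM speed);
# -t times buffered sequential reads from the device itself.
hdparm -tT /dev/sda
# Show the drive's identification data and its current settings.
hdparm -i /dev/sda
hdparm -v /dev/sda

Repeating the -tT run two or three times and averaging gives more stable figures.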

Other tools:

Using Bonnie:

Tool name: Bonnie

Description: tests hard-disk I/O speed.

Usage:

Step 1: download

# wget

Step 2: build

# tar zxf bonnie.tar.gz

# make

Step 3: run

# ./Bonnie -d /usr -s 750 -m mystery

Here is an example of some typical Bonnie output:

              -------Sequential Output-------- --Sequential Input--- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine   MB  K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
mystery  750    534 65.7  1236 22.5   419 17.5   564 74.3  1534 32.8  35.0  8.3

 

Option summary:

-d     directory in which to write the scratch file
-s     size of the data to write, in MB
-m     machine name (label used in the report)
-html  produce the report in HTML
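For example, a run against a filesystem mounted at /mnt/test (a placeholder path) with a 1 GB scratch file and the report saved as HTML might look like this:

# ./Bonnie -d /mnt/test -s 1024 -m mystery -html > bonnie-report.html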

What the output means:

Sequential Output: data written sequentially to disk
Sequential Input: data read sequentially from disk
Char: written/read one character at a time
Block: written/read in blocks
Rewrite: each block read, modified, and rewritten in sequence
Seeks: random seeks through the file

534: writing 750 MB to disk character by character averaged 534 KB/s (about 0.5 MB/s), and this consumed 65.7% of one CPU's time.
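As a quick sanity check on that figure (plain shell arithmetic):

# 750 MB at 534 KB/s: 750 * 1024 KB / 534 KB/s ≈ 1438 seconds, i.e. roughly
# 24 minutes of wall-clock time for the per-character write phase alone.
echo $((750 * 1024 / 534))    # prints 1438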

 

Detailed version:

Getting Bonnie

First, you will have to fetch a copy of Bonnie from its home page and follow the pointers from there. There are other places around the Internet from which you can get Bonnie, but this is the only site where it is maintained.

Bonnie is incredibly easy to build from source, but I make some binary versions available as well for people who don't have C compilers or want to save some time.

If you have to get the source code (which is available in shar and tar.gzip formats), after you've unpacked it, just try typing

make

There is an excellent chance that this will just work. If it doesn't, read the file named "Instructions".

The Bonnie Command Line

The Bonnie command line, given here in Unix style, is:

 

 Bonnie [-d scratch-dir] [-s size-in-Mb] [-m machine-label] [-html]

All the things enclosed in square brackets may be left out. The meaning of the things on this line is:

Bonnie

The name of the program. You might want to give it a different name. If you are sitting in the same directory as the program, you might have to use something like ./Bonnie.

-d scratch-dir

The name of a directory; Bonnie will write and read scratch files in this directory. Bonnie does not do any special interpretation of the directory name; it simply uses it as given. Suppose you used -d /raid1/TestDisk; Bonnie would write, then read back, a file whose name was something like /raid1/TestDisk/Bonnie.34251.
If you do not use this option, Bonnie will write to and read from a file in the current directory, using a name something like ./Bonnie.35152.
Bonnie does clean up by removing the file after using it; however, if Bonnie dies in the middle of a run, it is quite likely that a (potentially very large) file will be left behind.

-s size-in-Mb

The number of megabytes to test with. If you do not use this, Bonnie will test with a 100Mb file. In this discussion, Megabyte means 1048576 bytes. If you have a computer that does not allow 64-bit files, the maximum value you can use is 2047.
It is important to use a file size that is several times the size of the available memory (RAM); otherwise, the operating system will cache large parts of the file, and Bonnie will end up doing very little I/O. At least four times the size of the available memory is desirable (see the sizing sketch after this option list).

-m machine-label

This is the label that Bonnie will use to label its report. If you do not use this, Bonnie will use no label at all.

-html

If you use this, Bonnie will generate a report in HTML form, as opposed to plain text. This is not all that useful unless you are prepared to write a table header.
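Putting the options together, here is a small wrapper sketch that applies the sizing rule above (at least four times physical RAM, capped at 2047 MB only where 64-bit files are unavailable); the scratch directory is an assumed placeholder, not anything Bonnie requires:

SCRATCH=/mnt/test                 # placeholder: filesystem under test
RAM_MB=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
SIZE=$((RAM_MB * 4))              # at least 4x physical RAM
if [ "$SIZE" -gt 2047 ]; then
    SIZE=2047                     # only needed if the OS lacks 64-bit files
fi
./Bonnie -d "$SCRATCH" -s "$SIZE" -m "$(hostname)"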

Bonnie Results

Before explaining each of the numbers, it should be noted that the columns below labeled %CPU may be misleading on multiprocessor systems. This percentage is computed by taking the total CPU time reported for the operation and dividing it by the total elapsed time. On a multi-CPU system, it is very likely that application code and filesystem code will be executing on different CPUs. On the final test (random seeks), the parent process creates four child processes to perform the seeks in parallel; if there are multiple CPUs, it is nearly certain that all will be involved. Thus, these numbers should be taken as a general indicator of the efficiency of the operation relative to the speed of a single CPU. Taken literally, this could make a machine with 10 50-horsepower CPUs appear less efficient than one with one 100-horsepower CPU.
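The ratio Bonnie reports can be reproduced for any command with GNU time, which makes the definition concrete (this assumes /usr/bin/time is the GNU implementation):

# %CPU = (user CPU time + system CPU time) / elapsed wall-clock time.
# GNU time prints exactly that ratio via the %P format specifier.
/usr/bin/time -f "elapsed=%es user=%Us sys=%Ss cpu=%P" \
    dd if=/dev/zero of=/tmp/cpu-test bs=1M count=256
rm -f /tmp/cpu-test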

 # ./Bonnie -d /usr -s 750 -m mystery

Here is an example of some typical Bonnie output:

              -------Sequential Output-------- --Sequential Input--- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine   MB  K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
mystery  750    534 65.7  1236 22.5   419 17.5   564 74.3  1534 32.8  35.0  8.3


Reading the bottom line across, left to right:

mystery

This test was run with the option
-m mystery
"mystery" is the label for the test.

750

This test was run with the option
-s 750
Bonnie used a 750-Megabyte file to do the testing. For the numbers to be valid, the computer had better not have had more than about 200M of memory.

534

When writing the file by doing 750 million putc() macro invocations, Bonnie recorded an output rate of 534 K per second.

65.7

When writing the file by doing 750 million putc() macro invocations, the operating system reported that this work consumed 65.7% of one CPU's time. This is not very good; it suggests either a slow CPU or an inefficient implementation of the stdio interface.

1236

When writing the 750-Mb file using efficient block writes, Bonnie recorded an output rate of 1,236 K per second.

22.5

When writing the 750-Mb file using efficient block writes, the operating system reported that this work consumed 22.5% of one CPU's time.

419

While running through the 750-Mb file just created, changing each block, and rewriting it, Bonnie recorded an ability to cover 419 K per second.

17.5

While running through the 750-Mb file just created, changing each block, and rewriting it, the operating system reported that this work consumed 17.5% of one CPU's time.

564

While reading the file using 750 million getc() macro invocations, Bonnie recorded an input rate of 564 K per second.

74.3

While reading the file using 750 million getc() macro invocations, the operating system reported that this work consumed 74.3% of one CPU's time. This is amazingly high.

1534

While reading the file using efficient block reads, Bonnie reported an input rate of 1,534 K per second.

32.8

While reading the file using efficient block reads, the operating system reported that this work consumed 32.8% of one CPU's time.

35.0

Bonnie created 4 child processes and had them execute 4000 seeks to random locations in the file. On 10% of these seeks, they changed the block that they had read and rewrote it. The effective seek rate was 35.0 seeks per second.

8.3

During the seeking process, the operating system reported that this work consumed 8.3% of one CPU's time.
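A quick cross-check of the wall-clock time implied by that rate:

# 4000 random seeks at 35.0 seeks/second: about 4000 / 35 ≈ 114 seconds
# elapsed for the whole seek phase.
echo $((4000 / 35))    # prints 114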

 

 
