Category: LINUX
2010-09-08 16:46:42
Basic tests:
dd
dd simply times how long it takes to read or write a given number of blocks of a given size, and it reports the transfer rate itself.
The basic tests are as follows.

Disk read speed:
# sync; time dd if=[mountpoint] of=/dev/null bs=4096k count=2000
Amount of data tested: 4096k x 2000

Disk write speed:
# sync; time dd if=/dev/zero of=[mountpoint] bs=4096k count=2000
Amount of data tested: 4096k x 2000

Replace [mountpoint] with a test file on the filesystem you actually want to measure (dd cannot read from or write to a directory itself). Both tests above transfer 2000 blocks of 4 MB each; by changing bs you can compare performance at different block sizes.

hdparm (hard disk parameters)
Description: display and set hard-disk parameters.
Syntax: hdparm [-CfghiIqtTvyYZ][-a <readahead sectors>][-A <0 or 1>][-c ][-d <0 or 1>][-k <0 or 1>][-K <0 or 1>][-m <sector count>][-n <0 or 1>][-p
Additional notes: hdparm can detect, display and set the parameters of IDE or SCSI hard disks.
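One caveat about the dd commands above: unless the data is forced out of the page cache, the write test largely measures memory rather than the disk. The sketch below shows one way around that, plus hdparm's built-in read benchmark. The test-file path and the device name /dev/sda are placeholders you must adapt, and the conv=fdatasync / iflag=direct flags assume GNU dd on a filesystem and kernel that support direct I/O.

# write test that flushes data to disk before the timing ends (GNU dd)
# sync; time dd if=/dev/zero of=[mountpoint]/ddtest bs=4096k count=2000 conv=fdatasync

# read the same file back while bypassing the page cache
# sync; time dd if=[mountpoint]/ddtest of=/dev/null bs=4096k count=2000 iflag=direct

# hdparm built-in benchmark: -T times cached reads, -t times buffered device reads
# hdparm -Tt /dev/sda

Running -T and -t together is useful because it separates the speed of the buffer cache from the speed of the underlying device.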
Bonnie usage:

Tool name: Bonnie
Description: tests hard-disk I/O speed.
How to use it:

Step 1: download
# wget
Step 2: build
# tar zxf bonnie.tar.gz
# make
Step 3: run
# ./Bonnie -d /usr -s 750 -m mystery

Here is an example of some typical Bonnie output:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
mystery   750   534 65.7  1236 22.5   419 17.5   564 74.3  1534 32.8  35.0  8.3
Command parameters:
-d    : directory to test in
-s    : amount of data to write, in MB
-m    : machine name (label used in the report)
-html : produce the report in HTML

Reading the output:
Sequential Output : data written sequentially to disk
Sequential Input  : data read sequentially from disk
Per Char : written/read one character at a time
Block    : written/read in blocks
Rewrite  : data modified sequentially (read, change, write back)
Seeks    : random seeks through the file

534 : writing 750 MB to disk character by character averaged 534 KB/s (about 0.5 MB/s), and completing this write consumed 65.7% of one CPU (the 65.7 in the %CPU column).
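A practical note that is not in the original write-up: if the -s size fits comfortably in RAM, much of the run is served from the page cache and the throughput figures flatter the disk; a common rule of thumb is to make the test file at least twice the size of physical memory. Assuming a machine with 256 MB of RAM, and assuming the report is written to standard output, a run might look like this (directory, size and label are illustrative only):

# test file sized at roughly twice an assumed 256 MB of RAM
# ./Bonnie -d /usr -s 512 -m mystery

# same run, asking for an HTML fragment and saving it for later formatting
# ./Bonnie -d /usr -s 512 -m mystery -html > bonnie-mystery.html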
Detailed version:

Getting Bonnie

First, you will have to fetch a copy of Bonnie from the author's site and follow the pointers from there. There are other places around the Internet from which you can get Bonnie, but this is the recommended source. Bonnie is incredibly easy to build from source, but I make some binary versions available as well for people who don't have C compilers or want to save some time. If you have to build from the source code, just run:

# make

There is an excellent chance that this will just work. If it doesn't, read the file named "Instructions".

The Bonnie Command Line

The Bonnie command line, given here in Unix style, is:
Bonnie [-d scratch-dir] [-s size-in-Mb] [-m machine-label] [-html]

All the things enclosed in square brackets may be left out. The meaning of the things on this line is:

Bonnie
  The name of the program. You might want to give it a different name. If you are sitting in the same directory as the program, you might have to use something like ./Bonnie.
-d scratch-dir
  The name of a directory; Bonnie will write and read scratch files in this directory. Bonnie does not do any special interpretation of the directory name; it simply uses it as given. Suppose you used -d /raid1/TestDisk; Bonnie would write, then read back, a file whose name was something like /raid1/TestDisk/Bonnie.34251.
-s size-in-Mb
  The number of megabytes to test with. If you do not use this, Bonnie will test with a 100 Mb file. In this discussion, Megabyte means 1048576 bytes. If you have a computer that does not allow 64-bit files, the maximum value you can use is 2047.
-m machine-label
  This is the label that Bonnie will use to label its report. If you do not use this, Bonnie will use no label at all.
-html
  If you use this, Bonnie will generate a report in HTML form, as opposed to plain text. This is not all that useful unless you are prepared to write a table header.

Bonnie Results

Before explaining each of the numbers, it should be noted that the columns below labeled %CPU may be misleading on multiprocessor systems. This percentage is computed by taking the total CPU time reported for the operation and dividing it by the total elapsed time. On a multi-CPU system, it is very likely that application code and system code run on different CPUs at the same time, so the reported percentage can be misleading.

# ./Bonnie -d /usr -s 750 -m mystery

Here is an example of some typical Bonnie output:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
mystery   750   534 65.7  1236 22.5   419 17.5   564 74.3  1534 32.8  35.0  8.3

Reading the bottom line across, left to right:

mystery
  This test was run with the option -m mystery.
750
  This test was run with the option -s 750.
534
  When writing the file by doing 750 million putc() macro invocations, Bonnie recorded an output rate of 534 K per second.
65.7
  When writing the file by doing 750 million putc() macro invocations, the operating system reported that this work consumed 65.7% of one CPU.
1236
  When writing the 750-Mb file using efficient block writes, Bonnie recorded an output rate of 1,236 K per second.
22.5
  When writing the 750-Mb file using efficient block writes, the operating system reported that this work consumed 22.5% of one CPU.
419
  While running through the 750-Mb file it had just created, changing each block and rewriting it, Bonnie recorded an ability to cover 419 K per second.
17.5
  While running through the 750-Mb file it had just created, changing each block and rewriting it, the operating system reported that this work consumed 17.5% of one CPU.
564
  While reading the file using 750 million getc() macro invocations, Bonnie recorded an input rate of 564 K per second.
74.3
  While reading the file using 750 million getc() macro invocations, the operating system reported that this work consumed 74.3% of one CPU.
1534
  While reading the file using efficient block reads, Bonnie reported an input rate of 1,534 K per second.
32.8
  While reading the file using efficient block reads, the operating system reported that this work consumed 32.8% of one CPU.
35.0
  Bonnie created 4 child processes and had them execute 4000 seeks to random locations in the file. On 10% of these seeks, they changed the block that they had read and re-wrote it. The effective seek rate was 35.0 seeks per second.
8.3
  During the seeking process, the operating system reported that this work consumed 8.3% of one CPU.
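To put the seek figure in perspective (a small worked example added here, not part of the original report), the seek rate can be converted into an average time per random access:

# 35.0 seeks per second from the sample run above -> average time per random access in ms
# echo "scale=2; 1000 / 35.0" | bc      prints 28.57, i.e. roughly 28.6 ms per seek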