2006-05-22 14:26:26
This chapter covers performance tools that help you gauge disk I/O subsystem usage. These tools can show which disks or partitions are being used, how much I/O each disk is processing, and how long I/O requests issued to these disks are waiting to be processed.
After reading this chapter, you should be able to
Determine the total amount and type (read/write) of disk I/O on a system (vmstat).
Determine which devices are servicing most of the disk I/O (vmstat, iostat, sar).
Determine how effectively a particular disk is fielding I/O requests (iostat).
Determine which processes are using a given set of files (lsof).
Introduction to Disk I/O
Before diving into performance tools, it is necessary to understand how the Linux disk I/O system is structured. Most modern Linux systems have one or more disk drives. If they are IDE drives, they are usually named hda, hdb, hdc, and so on; SCSI drives are usually named sda, sdb, sdc, and so on. A disk is typically split into multiple partitions, where the name of a partition's device is created by appending the partition number to the base device name. For example, the second partition on the first IDE hard drive in the system is usually labeled /dev/hda2. Each individual partition usually contains either a file system or swap space. The file systems are mounted into the Linux root file system, as specified in /etc/fstab. These mounted file systems contain the files that applications read from and write to.
When an application does a read or write, the Linux kernel may have a copy of the file stored into its cache or buffers and returns the requested information without ever accessing the disk. If the Linux kernel does not have a copy of the data stored in memory, however, it adds a request to the disk's I/O queue. If the Linux kernel notices that multiple requests are asking for contiguous locations on the disk, it merges them into a single big request. This merging increases overall disk performance by eliminating the seek time for the second request. When the request has been placed in the disk queue, if the disk is not currently busy, it starts to service the I/O request. If the disk is busy, the request waits in the queue until the drive is available, and then it is serviced.
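The merging step described above can be illustrated with a toy model (this is an illustration only, not kernel code): sort pending requests by sector and coalesce any whose ranges are contiguous, so the disk performs one large transfer instead of several small ones, eliminating the intermediate seeks.

```python
# Toy model of I/O request merging (illustration only, not kernel code).
# A request is (start_sector, length). Requests touching contiguous
# sector ranges are coalesced into one larger request.

def merge_requests(requests):
    """Coalesce requests whose sector ranges are contiguous."""
    merged = []
    for start, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == start:
            merged[-1][1] += length          # extend the previous request
        else:
            merged.append([start, length])   # start a new request
    return [tuple(r) for r in merged]

# Three pending requests; the first two touch adjacent sectors.
queue = [(100, 8), (108, 8), (500, 16)]
print(merge_requests(queue))   # [(100, 16), (500, 16)]
```

The two requests at sectors 100 and 108 become a single 16-sector request, so the disk seeks once instead of twice.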
Disk I/O Performance Tools
This section examines the various disk I/O performance tools that enable you to investigate how a given application is using the disk I/O subsystem, including how heavily each disk is being used, how well the kernel's disk cache is working, and which files a particular application has "open."
As you saw in an earlier chapter, vmstat is a great tool for an overall view of how the system is performing. In addition to CPU and memory statistics, vmstat can provide a system-wide view of I/O performance.
While using vmstat to retrieve disk I/O statistics from the system, you must invoke it as follows:
vmstat [-D] [-d] [-p partition] [interval [count]]
The following table describes the command-line parameters that influence the disk I/O statistics that vmstat displays.
| Option | Explanation |
| --- | --- |
| -D | This displays Linux I/O subsystem total statistics. This option can give you a good idea of how your I/O subsystem is being used, but it won't give statistics on individual disks. The statistics given are the totals since system boot, rather than just those that occurred between this sample and the previous sample. |
| -d | This option displays individual disk statistics at a rate of one sample per interval. The statistics are the totals since system boot, rather than just those that occurred between this sample and the previous sample. |
| -p partition | This displays performance statistics about the given partition at a rate of one sample per interval. The statistics are the totals since system boot, rather than just those that occurred between this sample and the previous sample. |
| interval | The length of time between samples. |
| count | The total number of samples to take. |
If you run vmstat without any parameters other than [interval] and [count], it shows you the default output. This output contains three columns relevant to disk I/O performance: bo, bi, and wa. These statistics are described in the following table.
| Statistic | Explanation |
| --- | --- |
| bo | The number of blocks written to disk in the previous interval. (In vmstat, the block size for a disk is typically 1,024 bytes.) |
| bi | The number of blocks read from disk in the previous interval. (In vmstat, the block size for a disk is typically 1,024 bytes.) |
| wa | The percentage of CPU time spent waiting for I/O to complete. |
When run with the -D option, vmstat provides statistical information about the system's disk I/O subsystem as a whole. These statistics are described in the following table. (More detail about them is available in the Linux kernel source package, under Documentation/iostats.txt.)
| Statistic | Explanation |
| --- | --- |
| disks | The total number of disks in the system. |
| partitions | The total number of partitions in the system. |
| total reads | The total number of reads that have been requested. |
| merged reads | The total number of times that different reads to adjacent locations on the disk were merged to improve performance. |
| read sectors | The total number of sectors read from disk. (A sector is usually 512 bytes.) |
| milli reading | The amount of time (in ms) spent reading from the disk. |
| writes | The total number of writes that have been requested. |
| merged writes | The total number of times that different writes to adjacent locations on the disk were merged to improve performance. |
| written sectors | The total number of sectors written to disk. (A sector is usually 512 bytes.) |
| milli writing | The amount of time (in ms) spent writing to the disk. |
| inprogress IO | The total number of I/O operations currently in progress. Note that there is a bug in recent versions (v3.2) of vmstat in which this is incorrectly divided by 1,000, which almost always yields a 0. |
| milli spent IO | The number of milliseconds spent waiting for I/O to complete. Note that there is a bug in recent versions (v3.2) of vmstat in which this is reported in seconds rather than milliseconds. |
The -d option of vmstat displays I/O statistics for each individual disk. These statistics are similar to those of the -D option and are described in the following table.
| Statistic | Explanation |
| --- | --- |
| reads: total | The total number of reads that have been requested. |
| reads: merged | The total number of times that different reads to adjacent locations on the disk were merged to improve performance. |
| reads: sectors | The total number of sectors read from disk. |
| reads: ms | The amount of time (in ms) spent reading from the disk. |
| writes: total | The total number of writes that have been requested for this disk. |
| writes: merged | The total number of times that different writes to adjacent locations on the disk were merged to improve performance. |
| writes: sectors | The total number of sectors written to disk. (A sector is usually 512 bytes.) |
| writes: ms | The amount of time (in ms) spent writing to the disk. |
| IO: cur | The total number of I/O operations currently in progress. Note that there is a bug in recent versions of vmstat in which this is incorrectly divided by 1,000, which almost always yields a 0. |
| IO: s | The number of seconds spent waiting for I/O to complete. |
Finally, when asked to provide partition-specific statistics, vmstat displays those listed in the following table.
| Statistic | Explanation |
| --- | --- |
| reads | The total number of reads that have been issued to this partition. |
| read sectors | The total number of sectors read from this partition. |
| writes | The total number of writes that resulted in I/O for this partition. |
| requested writes | The total number of writes that have been requested for this partition. |
The default vmstat output provides a coarse but serviceable indication of system disk I/O. Its options enable you to reveal more detail about which device is responsible for the I/O. The primary advantage of vmstat over other I/O tools is that it is present on almost every Linux distribution.
The number of I/O statistics that vmstat can present to the Linux user has been growing with recent releases of vmstat. The examples shown in this section rely on vmstat version 3.2.0 or greater. In addition, the extended disk statistics provided by vmstat are only available on Linux systems with a kernel version greater than 2.5.70.
In the first example, shown in the following listing, we invoke vmstat to take three samples at an interval of one second. vmstat outputs the familiar system-wide performance overview.
```
[ezolt@wintermute procps-3.2.0]$ ./vmstat 1 3
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  1      0 197020  81804  29920    0    0   236    25 1017    67  1  1 93  4
 1  1      0 172252 106252  29952    0    0 24448     0 1200   395  1 36  0 63
 0  0      0 231068  50004  27924    0    0 19712    80 1179   345  1 34 15 49
```
The output shows that during one of the samples, the system read 24,448 disk blocks. As mentioned previously, the block size for a disk is 1,024 bytes, so this means that the system was reading data at about 23MB per second. We can also see that during this sample, the CPU spent a significant portion of its time waiting for I/O to complete: it waits on I/O 63 percent of the time during the sample in which the disk was reading at ~23MB per second, and 49 percent during the next sample, in which the disk was reading at ~19MB per second.
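The arithmetic behind those figures is a straightforward unit conversion from vmstat's 1,024-byte blocks; a quick sketch:

```python
# Convert vmstat's bi/bo columns (1,024-byte blocks per interval) to MB/s.
BLOCK_SIZE = 1024  # bytes; vmstat's disk block size

def blocks_to_mb_per_sec(blocks, interval_sec=1):
    """MB/s given a block count observed over interval_sec seconds."""
    return blocks * BLOCK_SIZE / (1024 * 1024) / interval_sec

print(blocks_to_mb_per_sec(24448))  # 23.875 -> roughly 23MB/s
print(blocks_to_mb_per_sec(19712))  # 19.25  -> roughly 19MB/s
```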
Next, in the following listing, we ask vmstat for information about the I/O subsystem's performance since system boot.
```
[ezolt@wintermute procps-3.2.0]$ ./vmstat -D
            3 disks
            5 partitions
        53256 total reads
       641233 merged reads
      4787741 read sectors
       343552 milli reading
        14479 writes
        17556 merged writes
       257208 written sectors
      7237771 milli writing
            0 inprogress IO
          342 milli spent IO
```
Here, vmstat provides I/O statistic totals for all the disk devices in the system. As mentioned previously, when reading and writing to a disk, the Linux kernel tries to merge requests for contiguous regions of the disk for a performance increase; vmstat reports these events as merged reads and merged writes. In this example, a large number of the reads issued to the system were merged before being issued to the device: although there were ~640,000 merged reads, only ~53,000 read commands were actually issued to the drives. The output also tells us that a total of 4,787,741 sectors have been read from disk, and that 343,552ms (or about 344 seconds) have been spent reading since boot. The same statistics are available for writes. This is a good overall view of the I/O subsystem's performance.
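A couple of derived figures, computed from the totals in the listing above, make the point concrete:

```python
# Derived figures from the vmstat -D totals above.
total_reads   = 53256     # read commands actually issued to the drives
merged_reads  = 641233    # read requests coalesced before being issued
milli_reading = 343552    # total ms spent reading since boot

# Requests originally queued = issued reads + reads absorbed by merging.
requests_before_merging = total_reads + merged_reads
# Average time per read command actually sent to the drives.
avg_ms_per_issued_read = milli_reading / total_reads

print(requests_before_merging)           # 694489 read requests originally queued
print(round(avg_ms_per_issued_read, 1))  # 6.5 ms per issued read on average
```

Merging thus reduced roughly 694,000 queued read requests to about 53,000 actual disk commands.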
Although the previous example displayed I/O statistics for the entire system, the following example shows the statistics broken down for each individual disk.
```
[ezolt@wintermute procps-3.2.0]$ ./vmstat -d 1 3
disk- ----------reads------------ -----------writes----------- ------IO------
       total merged sectors     ms total merged sectors      ms   cur      s
fd0        0      0       0      0     0      0       0       0     0      0
hde    17099 163180  671517 125006  8279   9925  146304 2831237     0    125
hda        0      0       0      0     0      0       0       0     0      0
fd0        0      0       0      0     0      0       0       0     0      0
hde    17288 169008  719645 125918  8279   9925  146304 2831237     0    126
hda        0      0       0      0     0      0       0       0     0      0
fd0        0      0       0      0     0      0       0       0     0      0
hde    17288 169008  719645 125918  8290   9934  146464 2831245     0    126
hda        0      0       0      0     0      0       0       0     0      0
```
The following listing shows that 60 (19,059 – 18,999) reads and 94 (24,795 – 24,701) writes were issued to partition hde3 between the first and second samples. This view can prove particularly useful if you are trying to determine which partition of a disk is seeing the most usage.
```
[ezolt@wintermute procps-3.2.0]$ ./vmstat -p hde3 1 3
hde3    reads   read sectors   writes   requested writes
        18999         191986    24701             197608
        19059         192466    24795             198360
        19161         193282    24795             198360
```
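Because these columns are running totals, per-interval activity is simply the difference between successive samples. A small sketch, using the cumulative read and write counts from the listing above:

```python
# vmstat -p reports cumulative totals since boot, so per-interval
# activity is the difference between successive samples.
reads_total  = [18999, 19059, 19161]   # cumulative reads, per the listing
writes_total = [24701, 24795, 24795]   # cumulative writes, per the listing

def deltas(samples):
    """Per-interval counts from a series of cumulative totals."""
    return [b - a for a, b in zip(samples, samples[1:])]

print(deltas(reads_total))   # [60, 102]  reads in each interval
print(deltas(writes_total))  # [94, 0]    writes in each interval
```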
Although vmstat provides statistics about individual disks/partitions, it only provides totals rather than the rate of change during the sample. This can make it difficult to eyeball which device's statistics have changed significantly from sample to sample.
iostat is like vmstat, but it is a tool dedicated to the display of the disk I/O subsystem statistics. iostat provides a per-device and per-partition breakdown of how many blocks are written to and from a particular disk. (Blocks in iostat are usually sized at 512 bytes.) In addition, iostat can provide extensive information about how a disk is being utilized, as well as how long Linux spends waiting to submit requests to the disk.
iostat is invoked using the following command line:
iostat [-d] [-k] [-x] [device] [interval [count]]
Much like vmstat, iostat can display performance statistics at regular intervals. Different options modify the statistics that iostat displays; they are described in the following table.
| Option | Explanation |
| --- | --- |
| -d | This displays only information about disk I/O rather than the default display, which includes information about CPU usage as well. |
| -k | This shows statistics in kilobytes rather than blocks. |
| -x | This shows extended-performance I/O statistics. |
| device | If a device is specified, iostat shows only information about that device. |
| interval | The length of time between samples. |
| count | The total number of samples to take. |
The default output of iostat displays the performance statistics described in the following table.
| Statistic | Explanation |
| --- | --- |
| tps | Transfers per second. This is the number of reads and writes to the drive/partition per second. |
| Blk_read/s | The rate of disk blocks read per second. |
| Blk_wrtn/s | The rate of disk blocks written per second. |
| Blk_read | The total number of blocks read during the interval. |
| Blk_wrtn | The total number of blocks written during the interval. |
When you invoke iostat with the -x parameter, it displays extended statistics about the disk I/O subsystem. These extended statistics are described in the following table.
| Statistic | Explanation |
| --- | --- |
| rrqm/s | The number of reads merged before they were issued to the disk. |
| wrqm/s | The number of writes merged before they were issued to the disk. |
| r/s | The number of reads issued to the disk per second. |
| w/s | The number of writes issued to the disk per second. |
| rsec/s | Disk sectors read per second. |
| wsec/s | Disk sectors written per second. |
| rkB/s | Kilobytes read from disk per second. |
| wkB/s | Kilobytes written to disk per second. |
| avgrq-sz | The average size (in sectors) of disk requests. |
| avgqu-sz | The average size of the disk request queue. |
| await | The average time (in ms) for a request to be completely serviced. This average includes the time that the request was waiting in the disk's queue plus the amount of time it was serviced by the disk. |
| svctm | The average service time (in ms) for requests submitted to the disk. This indicates how long on average the disk took to complete a request. Unlike await, it does not include the amount of time spent waiting in the queue. |
iostat is a helpful utility, providing the most complete view of disk I/O performance statistics of any that I have found so far. Although vmstat is present everywhere and provides some basic statistics, iostat is far more complete. If it is available and installed on your system, iostat should be the first tool to turn to when a system has a disk I/O performance problem.
The following listing shows an example iostat run while a disk benchmark writes a test file to the file system on the /dev/hda2 partition. The first sample iostat displays is the average since system boot time. The second sample (and any that follow) covers the preceding 1-second interval.
```
[ezolt@localhost sysstat-5.0.2]$ ./iostat -d 1 2
Linux 2.4.22-1.2188.nptl (localhost.localdomain)     05/01/2004

Device:    tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
hda       7.18       121.12       343.87    1344206    3816510
hda1      0.00         0.03         0.00        316         46
hda2      7.09       119.75       337.59    1329018    3746776
hda3      0.09         1.33         6.28      14776      69688
hdb       0.00         0.00         0.00         16          0

Device:    tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
hda     105.05         5.78     12372.56         16      34272
hda1      0.00         0.00         0.00          0          0
hda2    100.36         5.78     11792.06         16      32664
hda3      4.69         0.00       580.51          0       1608
hdb       0.00         0.00         0.00          0          0
```
One interesting note in the preceding example is that /dev/hda3 had a small amount of activity. In the system being tested, /dev/hda3 is a swap partition. Any activity recorded from this partition is caused by the kernel swapping memory to disk. In this way, iostat provides an indirect method to determine how much disk I/O in the system is the result of swapping.
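With iostat's 512-byte blocks, the Blk_wrtn/s column converts to familiar units the same way as vmstat's; for hda2's 11,792.06 blocks per second in the second sample:

```python
# Convert iostat's Blk_wrtn/s (512-byte blocks) to MB/s.
IOSTAT_BLOCK = 512  # bytes per block in iostat's default output

def blk_to_mb_per_sec(blocks_per_sec):
    return blocks_per_sec * IOSTAT_BLOCK / (1024 * 1024)

print(round(blk_to_mb_per_sec(11792.06), 2))  # 5.76 -> ~5.8MB/s written to hda2
```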
The following listing shows the extended output of iostat.
```
[ezolt@localhost sysstat-5.0.2]$ ./iostat -x -dk 1 5 /dev/hda2
Linux 2.4.22-1.2188.nptl (localhost.localdomain)     05/01/2004

Device: rrqm/s  wrqm/s   r/s    w/s  rsec/s    wsec/s  rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
hda2     11.22   44.40  3.15   4.20  115.00    388.97  57.50  194.49    68.52     1.75 237.17  11.47   8.43

Device: rrqm/s  wrqm/s   r/s    w/s  rsec/s    wsec/s  rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
hda2      0.00 1548.00  0.00 100.00    0.00  13240.00   0.00 6620.00   132.40    55.13 538.60  10.00 100.00

Device: rrqm/s  wrqm/s   r/s    w/s  rsec/s    wsec/s  rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
hda2      0.00 1365.00  0.00 131.00    0.00  11672.00   0.00 5836.00    89.10    53.86 422.44   7.63 100.00

Device: rrqm/s  wrqm/s   r/s    w/s  rsec/s    wsec/s  rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
hda2      0.00 1483.00  0.00  84.00    0.00  12688.00   0.00 6344.00    151.0    39.69 399.52  11.90 100.00

Device: rrqm/s  wrqm/s   r/s    w/s  rsec/s    wsec/s  rkB/s   wkB/s avgrq-sz avgqu-sz  await  svctm  %util
hda2      0.00 2067.00  0.00 123.00    0.00  17664.00   0.00 8832.00   143.61    58.59 508.54   8.13 100.00
```
In this output, you can see that the average request queue (avgqu-sz) is quite high (~40 to 58 requests) and, as a result, the average time a request must wait (await, ~399.52ms to 538.60ms) is much greater than the average time the disk takes to service it (svctm, 7.63ms to 11.90ms). These long wait times, together with a utilization of 100 percent, show that the disk is completely saturated.
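These columns are approximately related to one another: %util is roughly the request rate times svctm (the fraction of each second the disk is busy), and avgqu-sz roughly equals the request rate times await (Little's law). Checking the second sample above:

```python
# Cross-check the extended iostat columns against each other using
# the second sample above (w/s=100, await=538.60ms, svctm=10.00ms).
iops   = 100.0    # r/s + w/s: requests issued per second
await_ = 538.60   # ms a request spends queued plus being serviced
svctm  = 10.00    # ms the disk itself spends per request

# Fraction of each second the disk is busy, as a percentage.
util_pct = iops * svctm / 1000 * 100
# Little's law: items in the system = arrival rate * time in the system.
queue_sz = iops * await_ / 1000

print(util_pct)            # 100.0 -> the disk is saturated
print(round(queue_sz, 2))  # 53.86 -> close to the reported avgqu-sz of 55.13
```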
The extended iostat output provides so many statistics that it only fits on a single line in a very wide terminal. However, this information is nearly all that you need to identify a particular disk as a bottleneck.
As discussed in an earlier chapter, sar can collect the performance statistics of many different areas of the Linux system. In addition to CPU and memory statistics, it can collect information about the disk I/O subsystem.
When using sar to monitor disk I/O statistics, you can invoke it with the following command line:
sar -d [ interval [ count ] ]
Typically, sar displays information about CPU usage; to display disk usage statistics instead, you must use the -d option. sar can display disk I/O statistics only with a kernel version higher than 2.5.70. The statistics it displays are described in the following table.
| Statistic | Explanation |
| --- | --- |
| tps | Transfers per second. This is the number of reads and writes to the drive/partition per second. |
| rd_sec/s | Number of disk sectors read per second. |
| wr_sec/s | Number of disk sectors written per second. |
The number of sectors is taken directly from the kernel, and although it is possible for it to vary, the size is usually 512 bytes.
In the following example, sar is used to collect information about the I/O of the devices on the system. sar lists the devices by their major and minor numbers rather than by name.
```
[ezolt@wintermute sysstat-5.0.2]$ sar -d 1 3
Linux 2.6.5 (wintermute.phil.org)    05/02/04

16:38:28          DEV       tps  rd_sec/s  wr_sec/s
16:38:29       dev2-0      0.00      0.00      0.00
16:38:29      dev33-0    115.15    808.08   2787.88
16:38:29     dev33-64      0.00      0.00      0.00
16:38:29       dev3-0      0.00      0.00      0.00

16:38:29          DEV       tps  rd_sec/s  wr_sec/s
16:38:30       dev2-0      0.00      0.00      0.00
16:38:30      dev33-0    237.00   1792.00      8.00
16:38:30     dev33-64      0.00      0.00      0.00
16:38:30       dev3-0      0.00      0.00      0.00

16:38:30          DEV       tps  rd_sec/s  wr_sec/s
16:38:31       dev2-0      0.00      0.00      0.00
16:38:31      dev33-0    201.00   1608.00      0.00
16:38:31     dev33-64      0.00      0.00      0.00
16:38:31       dev3-0      0.00      0.00      0.00

Average:          DEV       tps  rd_sec/s  wr_sec/s
Average:       dev2-0      0.00      0.00      0.00
Average:      dev33-0    184.62   1404.68    925.75
Average:     dev33-64      0.00      0.00      0.00
Average:       dev3-0      0.00      0.00      0.00
```
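With the usual 512-byte sectors, sar's sector rates convert to byte rates directly; dev33-0's rd_sec/s of 1,792 in the second sample, for example:

```python
# Convert sar's rd_sec/s (512-byte sectors) to KB/s.
SECTOR = 512  # bytes; taken from the kernel, usually 512

def sec_to_kb_per_sec(sectors_per_sec):
    return sectors_per_sec * SECTOR / 1024

print(sec_to_kb_per_sec(1792.00))  # 896.0 -> 896KB/s read from dev33-0
```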
sar has a limited number of disk I/O statistics when compared to iostat. However, the capability of sar to simultaneously record many different types of statistics may make up for these shortcomings.
lsof provides a way to determine which processes have a particular file open. In addition to tracking down the user of a single file, lsof can display the processes using the files in a particular directory. It can also recursively search through an entire directory tree and list the processes using files in that directory tree. lsof can prove helpful when narrowing down which applications are generating I/O.
You can invoke lsof with the following command line to investigate which files processes have open:
lsof [-r delay] [+D directory] [+d directory] [file]
Typically, lsof displays which processes are using a given file. However, the +d and +D options make it possible for lsof to display this information for more than one file. The following table describes the command-line options of lsof that prove helpful when tracking down an I/O performance problem.
| Option | Explanation |
| --- | --- |
| -r delay | This causes lsof to output statistics every delay seconds. |
| +D directory | This causes lsof to recursively search all the files in the given directory and report on which processes are using them. |
| +d directory | This causes lsof to report on which processes are using the files in the given directory. |
lsof displays the statistics described in the following table when showing which processes are using the specified files.
| Statistic | Explanation |
| --- | --- |
| COMMAND | The name of the command that has the file open. |
| PID | The PID of the command that has the file open. |
| USER | The user who has the file open. |
| FD | The file descriptor of the file, or txt for an executable, mem for a memory-mapped file. |
| TYPE | The type of file (REG for a regular file). |
| DEVICE | The major and minor numbers of the device on which the file resides. |
| SIZE | The size of the file. |
| NODE | The inode number of the file. |
Although lsof does not show the amount and type of file access that a particular process is doing, it at least displays which processes are using a particular file.
The following listing shows lsof being run on the /usr/bin directory with a 2-second refresh interval. This run shows which processes are accessing all of the files in /usr/bin.
```
[ezolt@localhost manuscript]$ /usr/sbin/lsof -r 2 +D /usr/bin/
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
gnome-ses 2162 ezolt txt REG 3,2 113800 597490 /usr/bin/gnome-session
ssh-agent 2175 ezolt txt REG 3,2 61372 596783 /usr/bin/ssh-agent
gnome-key 2182 ezolt txt REG 3,2 77664 602727 /usr/bin/gnome-keyring-daemon
metacity 2186 ezolt txt REG 3,2 486520 597321 /usr/bin/metacity
gnome-pan 2272 ezolt txt REG 3,2 503100 602174 /usr/bin/gnome-panel
nautilus 2280 ezolt txt REG 3,2 677812 598239 /usr/bin/nautilus
magicdev 2287 ezolt txt REG 3,2 27008 598375 /usr/bin/magicdev
eggcups 2292 ezolt txt REG 3,2 32108 599596 /usr/bin/eggcups
pam-panel 2305 ezolt txt REG 3,2 45672 600140 /usr/bin/pam-panel-icon
gnome-ter 3807 ezolt txt REG 3,2 289116 596834 /usr/bin/gnome-terminal
less 6452 ezolt txt REG 3,2 104604 596239 /usr/bin/less
=======
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
gnome-ses 2162 ezolt txt REG 3,2 113800 597490 /usr/bin/gnome-session
ssh-agent 2175 ezolt txt REG 3,2 61372 596783 /usr/bin/ssh-agent
gnome-key 2182 ezolt txt REG 3,2 77664 602727 /usr/bin/gnome-keyring-daemon
metacity 2186 ezolt txt REG 3,2 486520 597321 /usr/bin/metacity
gnome-pan 2272 ezolt txt REG 3,2 503100 602174 /usr/bin/gnome-panel
nautilus 2280 ezolt txt REG 3,2 677812 598239 /usr/bin/nautilus
magicdev 2287 ezolt txt REG 3,2 27008 598375 /usr/bin/magicdev
eggcups 2292 ezolt txt REG 3,2 32108 599596 /usr/bin/eggcups
pam-panel 2305 ezolt txt REG 3,2 45672 600140 /usr/bin/pam-panel-icon
gnome-ter 3807 ezolt txt REG 3,2 289116 596834 /usr/bin/gnome-terminal
less 6452 ezolt txt REG 3,2 104604 596239 /usr/bin/less
```
In particular, we can see that process 3807 is using the file /usr/bin/gnome-terminal. This file is an executable, as indicated by the txt in the FD column, and the name of the command that is using it is gnome-terminal. This makes sense; the process that is running gnome-terminal must therefore have the executable open. One interesting thing to note is that this file is on the device 3,2, which corresponds to /dev/hda2. (You can figure out the device number for all the system devices by executing ls -la /dev and looking at the output field that normally displays size.) Knowing on which device a file is located can help if you know that a particular device is the source of an I/O bottleneck. lsof provides the unique ability to trace an open file descriptor back to individual processes; although it does not show which processes are using a significant amount of I/O, it does provide a starting point.
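You can recover the same major,minor pair programmatically rather than reading ls -la /dev; a small sketch using Python's standard os module:

```python
# Map a file to the (major, minor) numbers of the device it lives on,
# the same pair lsof prints in its DEVICE column.
import os

def file_device(path):
    st = os.stat(path)
    return os.major(st.st_dev), os.minor(st.st_dev)

major, minor = file_device("/")
print(f"/ is on device {major},{minor}")
```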
All the disk I/O tools on Linux provide information about the utilization of a particular disk or partition. Unfortunately, after you determine that a particular disk is a bottleneck, there are no tools that enable you to figure out which process is causing all the I/O traffic.
Usually a system administrator has a good idea about what application uses the disk, but not always. Many times, for example, I have been using my Linux system when the disks started grinding for apparently no reason. I can usually run top and look for a process that might be causing the problem. By eliminating processes that I believe are not doing I/O, I can usually find the culprit. However, this requires knowledge of what the various applications are supposed to do. It is also error prone, because the guess about which processes are not causing the problem might be wrong. In addition, for a system with many users or many running applications, it is not always practical or easy to determine which application might be causing the problem. Other UNIXes support the inblk and oublk parameters to ps, which show you the amount of disk I/O issued on behalf of a particular process. Currently, the Linux kernel does not track the I/O of a process, so the ps tool has no way to gather this information.
You can use lsof to determine which processes are accessing files on a particular partition. After you list all PIDs accessing the files, you can then attach to each of the PIDs with strace and figure out which one is doing a significant amount of I/O. Although this method works, it is really a Band-Aid solution, because the number of processes accessing a partition could be large and it is time-consuming to attach and analyze the system calls of each process. This may also miss short-lived processes, and may unacceptably slow down processes when they are being traced.
This is an area where the Linux kernel could be improved. The ability to quickly track which processes are generating I/O would allow for much quicker diagnosis of I/O performance-related problems.