The /proc/net/rpc/nfsd file is populated by the kernel code in linux/fs/nfsd/stats.c and linux/net/sunrpc/stats.c:
cat /proc/net/rpc/nfsd
rc 14 277482605 1519481075
fh 5208 0 0 0 0
io 3197575487 677537481
th 128 1196 48 11 677 54 66 31 288 27 26 118
ra 256 1089064507 0 0 0 0 0 0 0 0 0 1410044
net 471073134 0 471089819 60766
rpc 1796938178 0 0 0 0
proc2 18 0 152 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
proc3 22 1 219996357 556065 24059461 43520164 606473 150260668 28056897 545340 198181 3708 0 412742 104649 177365 10 4940 255357 152 3
0 1809695
proc4 2 0 0
proc4ops 40 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
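Before going field by field, the whole file can be parsed generically: each line starts with a label followed by unsigned counters. A minimal Python sketch (note that the long proc3 line is a single line in the real file, even if it wraps in the output above):

```python
def parse_nfsd_stats(text):
    """Split /proc/net/rpc/nfsd output into {label: [counter, ...]}."""
    stats = {}
    for line in text.splitlines():
        fields = line.split()
        if fields:
            stats[fields[0]] = [int(f) for f in fields[1:]]
    return stats

# Sample lines taken from the dump above
sample = """rc 14 277482605 1519481075
io 3197575487 677537481
th 128 1196 48 11 677 54 66 31 288 27 26 118"""
stats = parse_nfsd_stats(sample)
```

On a live system the text would come from open("/proc/net/rpc/nfsd").read() instead of the embedded sample.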
Line-by-line explanation
rc 14 277482605 1519481075
rc: statistics for the reply cache [3]
- hits: the client is retransmitting (a bad thing! 0 hits is good) [1]
- misses: an operation that requires caching
- nocache: an operation that does not require caching
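Given those three counters, a quick health check is the fraction of requests that were retransmissions (hits); a sketch using the rc values from the sample above:

```python
hits, misses, nocache = 14, 277482605, 1519481075  # rc line from the sample
total = hits + misses + nocache
retransmit_ratio = hits / total
# Near zero is healthy; a growing ratio means clients are retransmitting.
```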
fh 5208 0 0 0 0
fh (filehandle): [1]
- stale: *supposedly* file handle errors (such as when you resize the
underlying filesystem)
- total-lookups, anonlookups, dir-not-in-cache, nodir-not-in-cache: these do
not appear to change (I have only ever seen them as zeros), so I suppose
they are unused.
io 3197575487 677537481
io (input/output):
- bytes-read: bytes read directly from disk
- bytes-written: bytes written to disk
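Since these are cumulative byte counters, a throughput figure needs two samples taken some interval apart; a sketch in which the first reading is the io line above and the second is hypothetical:

```python
def io_rates(prev, curr, interval_s):
    """Bytes/sec for (bytes-read, bytes-written) between two 'io' samples."""
    return tuple((c - p) / interval_s for p, c in zip(prev, curr))

# first sample from the io line above; second reading is made up for illustration
read_bps, write_bps = io_rates((3197575487, 677537481),
                               (3197680000, 677600000), 10.0)
```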
th 128 1196 48 11 677 54 66 31 288 27 26 118
th (threads): ...
- threads: number of nfsd threads (128 here)
- fullcnt: number of times that all of the threads were busy at once
(1196 here)
- the remaining ten numbers are a histogram (in seconds) of thread
utilisation: bucket N covers N*10% to (N+1)*10% of the threads being busy,
so the 677 in the 20%-30% bucket means there were 677 seconds during which
between 20% and 30% of the threads were in use.
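Counting the fields supports this layout: the line carries 12 numbers, which matches threads + fullcnt + ten histogram buckets (so the 1196 would be fullcnt, not a bucket). A sketch that extracts them and computes how much of the recorded time the server spent heavily loaded:

```python
th = "th 128 1196 48 11 677 54 66 31 288 27 26 118".split()
threads, fullcnt = int(th[1]), int(th[2])
busy_seconds = [int(v) for v in th[3:]]  # ten 10%-wide utilisation buckets
# fraction of the recorded busy time spent with >= 80% of threads occupied
high_load = sum(busy_seconds[8:]) / sum(busy_seconds)
```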
ra 256 1089064507 0 0 0 0 0 0 0 0 0 1410044
ra (read-ahead): ...
- cache-size: always twice the number of threads
- 10%, 20% ... 100%: how deep in the cache it found what it was looking
for. I *suppose* this means how far the cached block is from the block
that was first requested.
- not-found: not found in the read-ahead cache
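The same kind of ratio works here: everything found at any depth versus the not-found counter. A sketch using the ra line above:

```python
ra = "ra 256 1089064507 0 0 0 0 0 0 0 0 0 1410044".split()
cache_size = int(ra[1])
found_at_depth = [int(v) for v in ra[2:12]]  # ten depth buckets
not_found = int(ra[12])
hit_ratio = sum(found_at_depth) / (sum(found_at_depth) + not_found)
```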
net 1797032444 15813112 1781194739 52706
net:
- netcnt: counts every packet (or TCP read) received
- netudpcnt: counts every UDP packet received
- nettcpcnt: counts every time data is received from a TCP connection
- nettcpconn: counts every TCP connection accepted
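For instance, the share of traffic arriving over TCP can be derived directly from these counters; a sketch using the net values from this section:

```python
# net line from this section: netcnt, netudpcnt, nettcpcnt, nettcpconn
netcnt, netudpcnt, nettcpcnt, nettcpconn = 1797032444, 15813112, 1781194739, 52706
tcp_share = nettcpcnt / netcnt  # fraction of receives that came over TCP
```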
rpc 1796938178 0 0 0 0
rpc:
- rpccnt: counts all RPC operations
- rpcbadfmt: incremented when, while processing an RPC, one of the
following errors is encountered: err_bad_dir, err_bad_rpc, err_bad_prog,
err_bad_vers, err_bad_proc, err_bad
- rpcbadauth: bad authentications. It is not incremented when you try to
mount from a machine that is not in your exports file.
- rpcbadclnt: unused
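The line carries five numbers but only four are described above; judging from the kernel's stats output, the second number appears to be the total of the bad-request counters (an assumption here, labeled as such below). A sketch that unpacks the line and computes an error ratio:

```python
rpc = "rpc 1796938178 0 0 0 0".split()
# badtotal is assumed to be the sum of the three bad counters that follow it
rpccnt, badtotal, badfmt, badauth, badclnt = (int(v) for v in rpc[1:])
bad_ratio = badtotal / rpccnt if rpccnt else 0.0
```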
proc3 22 1 219996357 556065 24059461 43520164 606473 150260668 28056897 545340 198181 3708 0 412742 104649 177365 10 4940 255357 152 3
0 1809695
These should be the NFSv3 statistics; by comparing them with the output of nfsstat we should be able to figure out what the numbers stand for:
Server nfs v3:
null getattr setattr lookup access readlink
1 0% 219996357 46% 556065 0% 24059461 5% 43520164 9% 606473 0%
read write create mkdir symlink mknod
150260668 31% 28056899 5% 545340 0% 198181 0% 3708 0% 0 0%
remove rmdir rename link readdir readdirplus
412742 0% 104649 0% 177365 0% 10 0% 4940 0% 255357 0%
fsstat fsinfo pathconf commit
152 0% 3 0% 0 0% 1809696 0%
It looks like fields 9 and 10 are the read and write counts, but I still
cannot tell how this NFS read count differs from the other read-related
counters above.
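The per-procedure layout can be checked programmatically: after the label, the first number is the procedure count (22 for NFSv3), followed by one counter per procedure in the protocol's order. A sketch, assuming the standard NFSv3 procedure ordering from RFC 1813 (which matches the nfsstat output above):

```python
# NFSv3 procedure order per RFC 1813 (assumption: the kernel emits counters
# in this order, which the nfsstat comparison above supports)
NFS3_PROCS = ["null", "getattr", "setattr", "lookup", "access", "readlink",
              "read", "write", "create", "mkdir", "symlink", "mknod",
              "remove", "rmdir", "rename", "link", "readdir", "readdirplus",
              "fsstat", "fsinfo", "pathconf", "commit"]

line = ("proc3 22 1 219996357 556065 24059461 43520164 606473 150260668 "
        "28056897 545340 198181 3708 0 412742 104649 177365 10 4940 255357 "
        "152 3 0 1809695")
fields = line.split()
nprocs = int(fields[1])  # 22 procedures in NFSv3
counts = dict(zip(NFS3_PROCS, (int(v) for v in fields[2:2 + nprocs])))
```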
References:
[1]
[2] http://blog.peacon.co.uk/wiki/Monitoring_NFS_Performance
[3] ftp://82.96.64.7/pub/linux/kernel/people/marcelo/linux-2.4/fs/nfsd/stats.c