Category: Servers & Storage

2011-09-13 11:29:31

Description

You have a high performance NFS environment, and you suspect that one of your UNIX hosts is spamming the filer with bad requests. There are a few simple things you can do to isolate and research NFS performance against the filer.

You will need console/telnet access to the filer, and about five minutes of time.


The command you will be using is "nfsstat", with various options.


You will also need to turn on the following option to enable per-host statistics. Note that if you have a high-performance NFS environment, you will want to turn this option back off after testing, as it creates a small amount of performance overhead:


> options nfs.per_client_stats.enable on
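
When your research is complete, turn the option back off:

> options nfs.per_client_stats.enable off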

Procedure

First, zero out the NFS stats on the filer, resetting them for your polling period. When you believe the performance-impacting UNIX server is acting up, start the research process by zeroing the stats with the following command:

> nfsstat -z


Wait about five minutes, enough time for the filer to generate some good data for research. When you are ready to look at the stats collected, run:


> nfsstat


From there, look at the percentage of NFSv2, NFSv3, etc. traffic coming into your filer during that five-minute window. You want to see a decent proportion of reads and writes. If getattr or access requests make up too much of the traffic (50% or more of all traffic for either), that can indicate a host that is trawling the filesystem (indexing, scanning, etc.) instead of doing any actually useful reads or writes.
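
As a rough worked example (the numbers here are hypothetical): if nfsstat reports 1,000,000 total NFSv3 operations, of which 620,000 are getattr calls, then getattr accounts for 62% of all traffic, well above the 50% warning level, and reads and writes are being crowded out.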


It will also show you the transfer byte size distribution for NFS requests to the filer. If the sizes are all over the place, with many small (16 KB and below) transfers, many large (64 KB+) transfers, and no clear trend towards one request size, you may want to stabilize the transfers by setting the read and write size mount options on your UNIX hosts to a consistent 32 KB.
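
For example, on many UNIX hosts the NFS transfer sizes are controlled by the rsize and wsize mount options. A sketch with a hypothetical filer name and mount point (exact option syntax varies by UNIX flavor, so check your mount_nfs man page):

# mount -o vers=3,rsize=32768,wsize=32768 filer01:/vol/vol1 /mnt/vol1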


Next, we need to find the culprits behind the bad or unoptimized NFS requests:


> nfsstat -l


This will show you the clients that have connected since the statistics were zeroed, along with the number of NFS operations each host performed against the filer. This gives a good indication of the highest-impact NFS hosts. If you only have a few NFS hosts, researching the top three may reveal the culprit. If you have many NFS hosts connecting to the filer, you will probably need to look at a larger number of hosts to find it.
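
The exact layout of the "nfsstat -l" output varies by Data ONTAP release; as an illustrative sketch only (hypothetical addresses and counts, not verbatim filer output), expect per-client lines along these lines:

10.10.1.21  NFS ops = 412345
10.10.1.34  NFS ops =  98230
10.10.1.56  NFS ops =   1204

Here 10.10.1.21 would be the first host to investigate.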


> nfsstat -h (host IP address, listed from "nfsstat -l" output)


Note: To run the "nfsstat -h" portion in Data ONTAP 7.3.1.1, the command is:

> vfiler run <vfiler_name> nfsstat -h

For the root vfiler, the vfiler name is vfiler0. This changed in Data ONTAP 7.3.1.1: in Data ONTAP 7.3 the syntax still appears to be the same as above, but 7.3.1.1 requires the vfiler form.
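
For example, against the root vfiler with a hypothetical client address taken from the "nfsstat -l" output:

> vfiler run vfiler0 nfsstat -h 10.10.1.21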


This will again break down the type and number of NFS requests, but for that one host only. You will be able to see whether it matches up with the statistics gathered from all incoming NFS requests. If the host looks like the culprit, generating getattr, access, or other requests at a rate of 50% or higher at the expense of reads and writes, work with your UNIX vendor or support team to analyze the data and determine which processes on the host are running against the filer and causing so many unoptimized requests. Standard UNIX commands on the host, such as "lsof /vol/mountpoint_on_filer" run against the filer mountpoint, will usually reveal the processes running against that mountpoint at the time, which may assist in your continuing troubleshooting.
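
For instance, if the filer volume from the earlier mount example is mounted at /mnt/vol1 on the host (a hypothetical path), you would run:

# lsof /mnt/vol1

This lists the processes holding files open under that mount point, giving you a starting list of suspects.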
