Accidentally Went Abroad
Category: Servers and Storage
2008-04-18 11:17:33
I recently needed to set up NFS between some Red Hat Linux systems. I had two different types of NFS connections to set up: a permanent read-only (RO) directory for copying software, and some automatically mounted home directories. Although you can use the redhat-config-nfs GUI tool, it is good to know what is going on under the hood. These are my notes on setting up NFS in both scenarios.
The notes were based on a stock Red Hat 8 NFS server, and a stock Red Hat 9 client. The NFS server is named "im" and has IP address 192.168.1.2 with a /24 netmask. The NFS client is named "tp1" and has IP address 192.168.1.191 with a /24 netmask. In this example, I will force the use of NFSv3.
This is an example of a directory that you want available throughout your LAN without letting anyone write to it. Add the export to /etc/exports on the server:
# Van's NFS export file
/Data/Photos 192.168.1.0/24(ro,all_squash,anonuid=65534,anongid=65534)
(There should already be a user called "nfsnobody" with UID/GID=65534)
Next, secure the portmapper with TCP wrappers. Add this line to /etc/hosts.deny:
portmap: ALL
Then add this line to /etc/hosts.allow:
portmap: 192.168.1.0/255.255.255.0
This prevents hosts from other networks from connecting to the portmapper.
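If you want to sanity-check the wrapper rules, run rpcinfo against the server from a host on the allowed network and, if you have one handy, from a host outside 192.168.1.0/24 (no such host appears in this example; any machine on another subnet will do):
# rpcinfo -p 192.168.1.2
From an allowed 192.168.1.x host this lists the registered RPC services; from anywhere else the request should be refused or time out.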
# /etc/init.d/portmap start
Starting portmapper: [ OK ]
# /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
# /etc/init.d/nfslock start
Starting NFS statd: [ OK ]
# exportfs -rv
exportfs: No 'sync' or 'async' option specified for export "192.168.1.0/24:/Data/Photos".
Assuming default behaviour ('sync').
NOTE: this default has changed from previous versions
exporting 192.168.1.0/24:/Data/Photos
# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 762 rquotad
100011 2 udp 762 rquotad
100011 1 tcp 765 rquotad
100011 2 tcp 765 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100021 1 udp 32781 nlockmgr
100021 3 udp 32781 nlockmgr
100021 4 udp 32781 nlockmgr
100005 1 udp 32782 mountd
100005 1 tcp 33396 mountd
100005 2 udp 32782 mountd
100005 2 tcp 33396 mountd
100005 3 udp 32782 mountd
100005 3 tcp 33396 mountd
100024 1 udp 32783 status
100024 1 tcp 33397 status
# showmount -e
Export list for im.vanemery.com:
/Data/Photos 192.168.1.0/24
# exportfs
/Data/Photos 192.168.1.0/24
The netstat -tuap command should also give you a good idea of what TCP and UDP ports are listening now.
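For example, to narrow the output down to the listeners we care about, something like this works (the grep pattern is just a suggestion):
# netstat -tuap | grep -E 'portmap|rpc|nfs'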
To make sure these services start automatically at boot, reset them and then enable them for runlevels 3, 4, and 5:
# chkconfig portmap off
# chkconfig nfs off
# chkconfig nfslock off
# chkconfig --level 345 portmap on
# chkconfig --level 345 nfs on
# chkconfig --level 345 nfslock on
Also, make sure that a Linux firewall such as iptables is not blocking the connections.
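For example, you could list the current rules and, if needed, insert an ACCEPT rule for the LAN ahead of any DROP or REJECT rules. This is only a rough sketch; because mountd, statd, and lockd get dynamic ports from the portmapper, a blanket allow for the LAN is simpler than opening individual ports:
# iptables -L -n
# iptables -I INPUT -s 192.168.1.0/24 -j ACCEPT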
Prerequisites: The client must be running the portmapper service and the rpc.statd service. If you need file locking, you must also be running the NFS lock daemon. You do not need to be running rquotad, nfsd, or mountd. By running /etc/init.d/portmap start and /etc/init.d/nfslock start, I had everything ready for the NFS mount. After mounting the NFS partition, running rpcinfo -p on the client showed me that the "status" RPC service had been started automatically; this shows up as "rpc.statd" in the ps listings. You will probably want to secure the portmapper and other RPC services with /etc/hosts.allow and /etc/hosts.deny, just like you did on the server.
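On tp1, that boils down to something like this (mirroring the chkconfig steps from the server so the services also come up at boot):
[tp1]# /etc/init.d/portmap start
[tp1]# /etc/init.d/nfslock start
[tp1]# chkconfig --level 345 portmap on
[tp1]# chkconfig --level 345 nfslock on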
[tp1]# ping im
[tp1]# showmount -e im
Export list for im:
/Data/Photos 192.168.1.0/24
[tp1]# rpcinfo -p im
[tp1]# tracepath im/2049
1?: [LOCALHOST] pmtu 1500
1: im (192.168.1.2) 0.371ms reached
Resume: pmtu 1500 hops 1 back 1
Based on the output of these commands, you should be able to see if the client will be able to make an NFS connection to the server or not.
[tp1]# mkdir /mnt/Photos
[tp1]# mount -t nfs -o hard,intr,ro,rsize=2048,wsize=2048,nfsvers=3 im:/Data/Photos /mnt/Photos
[tp1]# mount
im:/Data/Photos on /mnt/Photos type nfs (ro,hard,intr,rsize=2048,wsize=2048,addr=192.168.1.2)
Test from root and non-root accounts on tp1 to see if directory and file read operations work. Commands like df, du, ls, cd, and cp should work just fine.
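For example (the photo file name is just a placeholder for whatever is actually in the export):
[tp1]$ df -h /mnt/Photos
[tp1]$ ls -l /mnt/Photos
[tp1]$ cp /mnt/Photos/some-photo.jpg /tmp/
[tp1]$ touch /mnt/Photos/test.txt
The cp should succeed, while the touch should fail with a "Read-only file system" error, since the mount and the export are both read-only.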
# showmount -a
All mount points on im.vanemery.com:
tp1:/Data/Photos
# nfsstat
# nfsstat -o net
Warning: /proc/net/rpc/nfs: No such file or directory
Server packet stats:
packets udp tcp tcpconn
353441 353441 0 0
Client packet stats:
packets udp tcp tcpconn
0 0 0 0
# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:E0:29:42:F8:C2
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:391457 errors:0 dropped:0 overruns:0 frame:37
TX packets:698615 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:100
RX bytes:89280722 (85.1 Mb) TX bytes:846404790 (807.1 Mb)
Interrupt:10 Base address:0xb400
# netstat -s
# cat /proc/net/snmp
[tp1]# umount /mnt/Photos
If you want the NFS directory mounted automatically at boot, add a line like this to /etc/fstab on the client:
im:/Data/Photos /mnt/Photos nfs ro,hard,intr,rsize=2048,wsize=2048,nfsvers=3,bg 0 0
You can test the new entry with the mount -av command without rebooting the client.
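For example:
[tp1]# mount -av
[tp1]# mount | grep Photos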
I selected the rsize=2048 and wsize=2048 options for this NFS mount based on some testing I did: I copied 90 MB worth of files over the network to the client with different rsize settings. Of the values I tried, 2048 was the second fastest, behind 8192. However, 8192 produced UDP datagrams fragmented into 6 IP packets, 5 of which were maximum size (1514 bytes) on the wire. I don't want that many max-sized packets or fragments on my LAN. 2048 gave reasonable performance and produced only 2 IP fragments per UDP datagram. This leaves more room on the LAN for other traffic, especially if you are using a repeater (hub) instead of a switch.
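The testing itself was nothing fancy. A rough way to reproduce it is to remount with a different rsize/wsize and time a large copy while watching the wire with tcpdump or ethereal (the directory name below is hypothetical):
[tp1]# umount /mnt/Photos
[tp1]# mount -t nfs -o hard,intr,ro,rsize=8192,wsize=8192,nfsvers=3 im:/Data/Photos /mnt/Photos
[tp1]# time cp -r /mnt/Photos/testset /tmp/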
What if you would like users on a remote host to be able to mount their home directories from a central server? With conventional NFS mounts, only root can mount and unmount NFS directories. With the automount utility (a.k.a. autofs), NFS directories can be mounted and unmounted automatically as needed by regular users.
This section will show you how to set up the automounter on an NFS client so that machine tp1's users can log in to home directories that live on a central NFS server. This has several advantages: the users' files are stored in one place, and the server can enforce per-user disk quotas (covered later in this article).
In this scenario, host "im" is the NFS server and host "tp1" is the client. User "gishj" will have his home directory located on the NFS server. On the server, the partition /dev/hdb2 is mounted at /ahome as ext3, with user quota support enabled.
Note: NFS assumes that users have the same UID and GID on the client machine as they do on the server machine. The UIDs, GIDs, and usernames can be synchronized via several mechanisms, which are outside the scope of this mini-HOWTO; possibilities include NIS, LDAP, or simply keeping the /etc/passwd entries consistent by hand, which is what this example does by passing an explicit UID to useradd.
[root@im /]# mkdir /ahome
[root@im /]# useradd -d /ahome/gishj -u 600 gishj
Note that the files from /etc/skel were copied into the new home directory by the useradd utility; this lets you create a standard user environment for every user on the server.
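For example, you should see the usual Red Hat dot files (typically .bash_profile, .bashrc, and .bash_logout) in the new home directory:
[root@im /]# ls -la /ahome/gishj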
Add this line to /etc/exports on the server:
/ahome 192.168.1.0/24(rw,sync,root_squash)
Activate the NFS export and verify it with these commands:
# exportfs -rv
exporting 192.168.1.0/24:/ahome
# showmount -e
Export list for im.vanemery.com:
/ahome 192.168.1.0/24
# exportfs -v
/ahome 192.168.1.0/24(rw,wdelay,root_squash)
[tp1]# showmount -e im
Export list for im:
/ahome 192.168.1.0/24
On the client, add this line to /etc/auto.master:
/autohome /etc/auto.autohome --timeout=120
Create a new file called /etc/auto.autohome and add these lines:
# This is for mounting user homes over NFS
# Format = key [-mount-options-separated-by-comma] location
* -fstype=nfs,rw,hard,intr,rsize=2048,wsize=2048,nosuid,nfsvers=3 im:/ahome/&
The wildcards "*" and "&" allow usernames to be inserted as NFS paths. "rw" allows read and write operations, and "nosuid" is a security option. If you want to read more about the allowable wildcards in the autofs maps, try man 5 autofs.
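In other words, when the key "gishj" is looked up under /autohome, the map line above makes the automounter perform the equivalent of:
mount -t nfs -o rw,hard,intr,rsize=2048,wsize=2048,nosuid,nfsvers=3 im:/ahome/gishj /autohome/gishj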
[tp1]# mkdir /autohome
[tp1]# /etc/init.d/autofs start
Starting automount: [ OK ]
[tp1]# chkconfig autofs off
[tp1]# chkconfig --level 345 autofs on
[root@tp1 /]# useradd -M -d /autohome/gishj -u 600 gishj
[root@tp1 /]# passwd gishj
Changing password for user gishj.
New password: ********
Retype new password: ******
passwd: all authentication tokens updated successfully.
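As noted earlier, the UID and GID must match on both machines. A quick way to confirm this is to compare the output of id on the server and the client:
[root@im /]# id gishj
[root@tp1 /]# id gishj
The uid should be 600 on both; the gid will also match as long as the groups were created the same way on each host.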
When user gishj logs in on tp1, the automounter mounts his home directory from the server. Running the mount command on tp1 (as root) then shows both the autofs mount and the NFS mount:
im:/ahome/gishj on /autohome/gishj type nfs (rw,nosuid,hard,intr,rsize=2048,wsize=2048,nfsvers=3,addr=192.168.1.2)
automount(pid1510) on /autohome type autofs (rw,fd=5,pgrp=1510,minproto=2,maxproto=3)
After Joe Gish logs out, the automounter will unmount the NFS directory two minutes later (the --timeout=120 setting). When you use the mount command on tp1 again (as root), here is what you will see:
automount(pid1510) on /autohome type autofs (rw,fd=5,pgrp=1510,minproto=2,maxproto=3)
The automount maps can be distributed via NIS, NIS+, LDAP, or other means. Back on the NFS server, you can watch the automount operation with watch showmount -a. Note that users on the NFS clients do not need to be able to log in to the NFS server: if you never run passwd for a user on the server, that user cannot log in there, but can still use the home directory over the network.
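For example:
[root@im /]# watch -n 2 showmount -a
This refreshes the list every 2 seconds as the client mounts and unmounts the home directory.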
Limiting the disk space for each user on the NFS server with quotas:
If you are using ext2/ext3 or ReiserFS for your automount partition on the server, then you can set up quotas for each user. This limits how much disk space each user can consume. This may also be possible in some kernels with JFS and XFS, but I have not looked into it. When quotas are enabled, a user on the NFS client can still view his or her quota by typing the quota command. I tested the quota function by logging in as one of the users and copying lots of files to my home directory. As expected, when I exceeded my quota, further copy operations failed with an error. Removing some files fixed the problem, and I could write to the NFS directory again.
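The quota setup itself is not shown above, so here is a rough sketch of the usual ext3 procedure on the server; the exact steps may vary with your distribution, and the limits are up to you. First add the usrquota option to the /ahome entry in /etc/fstab, for example:
/dev/hdb2  /ahome  ext3  defaults,usrquota  1 2
Then remount, build the quota files, set the limits, and turn quotas on:
[root@im /]# mount -o remount /ahome
[root@im /]# quotacheck -cum /ahome
[root@im /]# edquota -u gishj
[root@im /]# quotaon /ahome
[root@im /]# repquota /ahome
edquota opens an editor where you set the soft and hard block limits for the user; repquota summarizes current usage and limits.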
This mini-HOWTO has focused on NFSv3 for Linux. The NFSv2 and NFSv3 implementations for Linux are fairly mature now, with the exception of TCP support (client and server) for NFSv3, which will be incorporated in future kernels. NFSv4 is being actively developed for Solaris and Linux, and it will become an Internet standard for file sharing over a network. It has some key improvements over NFSv3: locking and mounting are built into the core protocol rather than handled by separate sideband protocols, strong security (RPCSEC_GSS/Kerberos) is part of the specification, and all traffic runs over a single well-known port, which makes it much friendlier to firewalls and WAN links.
NFSv4 will be part of the standard 2.6 Linux kernels.