   By Van Emery


Introduction

Interested in taking NFSv4, the IETF's newest network file system, for a spin? Fedora Core 2 (FC-2) is a good vehicle for testing it out. The kernel (2.6.5) includes basic NFSv4 support. The nfs-utils package contains many NFS-related scripts, programs, and libraries.

What's the big deal with NFSv4? How is it an improvement over NFSv3 or CIFS? Here is a short list of some of NFSv4's features and benefits:

  • Works well through firewalls and NAT devices
  • Lock and mount protocols are integrated into the NFS protocol
  • Stateful operations (handles client or server crashes pretty well)
  • Strong security is built-in: uses RPCSEC_GSS (based on GSS-API)
  • Makes extensive use of client-side caching
  • Supports replication and migration
  • Vendor-independent, platform-independent, protocol-independent IETF standard
  • Will support Unix-like clients as well as Windows clients
  • Supports ACLs
  • Handles Unicode (UTF-8) filenames for internationalization
  • Good performance on the Internet, even on high-latency, low-bandwidth links

NFSv4 supports several security flavors, including AUTH_SYS and RPCSEC_GSS. AUTH_SYS (also known as AUTH_UNIX) represents the traditional low-security model found in NFSv2/v3. AUTH_SYS provides UNIX-style credentials by using UIDs and GIDs to identify the sender and recipient of RPC messages. AUTH_SYS security is very easy to circumvent. The new security flavor, RPCSEC_GSS, introduces secure authentication, integrity, and encryption. It is based on GSS-API. The three required security triples when using RPCSEC_GSS are:

  • Kerberos 5
  • LIPKEY (based on SPKM-3)
  • SPKM-3

Kerberos 5 is appropriate for enterprise/LAN use, while LIPKEY is appropriate for Internet use.

In this tutorial, we will go through setting up some basic NFSv4 scenarios with the AUTH_SYS security flavor. Future HOWTOs will cover using SPKM-3, LIPKEY, and Kerberos 5. This may happen with Fedora Core 3, since an unmodified FC-2 system does not contain a complete, working RPCSEC_GSS (Kerberos 5 + LIPKEY + SPKM-3) implementation.

The Center for Information Technology Integration (CITI) at the University of Michigan is creating the Free/Open Source Software (FOSS) reference implementation of NFSv4 for use with GNU/Linux, FreeBSD, and OpenDarwin. If you want to follow along with the FOSS implementation of NFSv4, you will need to track the CITI NFSv4 project and use their patches with newer kernels.

Assumptions

  • You are using Fedora Core 2 on both client and server, with the original 2.6.5-1.358 kernel
  • Your TCP/IP settings, hostname, and DNS records are set up properly
  • Your /etc/resolv.conf config file is set up properly
  • You are using the nfs-utils-1.0.6-22 RPM package or newer. This can be obtained via up2date or yum.

Here are the IP addresses and hostnames used in the examples:

  • Server:   hostname = "fc2", IP = 192.168.1.2/24
  • Client:   hostname = "nfsc", IP = 192.168.1.212/24

Config Files and Start/Stop Scripts

There are many config files and scripts that make up a working NFSv4 client/server system. Here is a list of the most important ones:

  • /etc/fstab - used on the NFS client
  • /etc/exports - used on the NFS server
  • /etc/auto.master - used on the NFS client
  • /etc/sysconfig/nfs - used on the NFS server
  • /etc/idmapd.conf - used on the NFS client and server
  • /etc/gssapi_mech.conf - used on the NFS client and server
  • /etc/init.d/portmap - used on the client and server
  • /etc/init.d/nfs - required on the server
  • /etc/init.d/rpcidmapd - required on both client and server
  • /etc/init.d/rpcsvcgssd - required on the server when RPCSEC_GSS is used
  • /etc/init.d/rpcgssd - required on the client when RPCSEC_GSS is used



Preliminary Setup

The following steps are required on the client and server before the scenarios can be pursued. Some of the steps are not strictly necessary, since this HOWTO does not deal with RPCSEC_GSS, but they are included as a prelude to future HOWTOs.

NFSv4 Server Prep:

Step 1 - Add entries to /etc/hosts for clarity and simplicity. Make sure the config file is set up properly:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.1.2 fc2 fc2.vanemery.com
192.168.1.212 nfsc

Step 2 - Disable iptables (or other packet filter/firewall), or allow TCP 2049 from the networks where NFSv4 clients will be located. When NFSv4 is using AUTH_SYS, TCP 2049 is the only port that is required to be open on the server. In my case, I just added a rule to the default FC-2 firewall. This can be done with the GUI tool system-config-securitylevel or the non-GUI tool system-config-securitylevel-tui. Just add an entry under "Other ports" that looks like this:

   2049:tcp

This will add a line to /etc/sysconfig/iptables and automatically update your running iptables configuration. Here is the file after I made the change:

# Firewall configuration written by system-config-securitylevel
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -p 50 -j ACCEPT
-A RH-Firewall-1-INPUT -p 51 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

Note:    In order to set additional network/address restrictions, you will need to set up a custom firewall.
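
For example, a hand-written rule like the following (a sketch only; it assumes the RH-Firewall-1-INPUT chain shown above and that your NFSv4 clients live on 192.168.1.0/24) would restrict NFSv4 access to that one subnet. You would maintain it by hand in /etc/sysconfig/iptables in place of the generic 2049 entry, since system-config-securitylevel cannot generate per-network rules:

-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT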

Step 3 - Configure TCP Wrappers to protect the Portmapper

The following configuration in /etc/hosts.allow will protect your portmapper from malicious connections. You will also need to adjust it if the same machine must support NFSv3 alongside NFSv4. In this example, we will only allow access from the loopback address. All other access is denied.

#
# hosts.allow This file describes the names of the hosts which are
# allowed to use the local INET services, as decided
# by the '/usr/sbin/tcpd' server.
#

portmap : 127. : ALLOW
portmap : ALL : DENY

Note:   This will work fine for NFSv4. However, if you are running an NFSv3 server or a NIS server, you will need to have an entry like this:

portmap : 127. 192.168.1. : ALLOW
portmap : ALL : DENY

This will allow connections from networks or addresses that you trust (192.168.1.0/24 in this example).

Step 4 - Configure /etc/sysconfig/nfs

This file needs to exist in order to control NFS server behavior when invoked from the startup script. Create the /etc/sysconfig/nfs file using the following config:

# This entry should be "yes" if you are using RPCSEC_GSS_KRB5 (auth=krb5,krb5i, or krb5p)
SECURE_NFS="no"
# This entry sets the number of NFS server processes. 8 is the default
RPCNFSDCOUNT=8

Step 5 - Make sure /etc/gssapi_mech.conf exists. It should be installed by default.

[root@fc2 etc]# ls -l /etc/gssapi_mech.conf
-rw-r--r-- 1 root root 804 May 19 04:43 /etc/gssapi_mech.conf

The file should contain the following config:

# GSSAPI Mechanism Definitions
#
# This configuration file determines which GSS-API mechanisms
# the gssd code should use
#
# NOTE:
# The initialization function "mechglue_internal_krb5_init"
# is used for the MIT krb5 gssapi mechanism. This special
# function name indicates that an internal function should
# be used to determine the entry points for the MIT gssapi
# mechanism functions.
#
# library initialization function
# ================================ ==========================
# The MIT K5 gssapi library, use special function for initialization.
/usr/lib/libgssapi_krb5.so mechglue_internal_krb5_init
#
# The SPKM3 gssapi library function. Use the function spkm3_gss_initialize.
# /usr/local/gss_mechs/spkm/spkm3/libgssapi_spkm3.so spkm3_gss_initialize

Step 6 - Configure /etc/idmapd.conf

The id mapper daemon is required on both client and server. It maps NFSv4 username@domain user strings back and forth into numeric UIDs and GIDs when necessary. The client and server must have matching domains in this configuration file:

[General]

Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = vanemery.com

[Mapping]

Nobody-User = nfsnobody
Nobody-Group = nfsnobody

"nfsnobody" is a predefined user and group entry that is included in a default FC-2 install. The UID and GID for "nfsnobody" is 65534.

Step 7 - Create Test Export Directories

In order to test NFSv4, you will need to create some test directories to export. In my case, I set up a test directory in the /home partition called /home/NFS4 to test read-only and read-write exports. I also set up the /ahome directory in a spare partition for testing NFSv4 auto-mounted home directories. I configured ACL support for both partitions. With FC-2, ACL support works "out of the box" with EXT3 and XFS filesystems. /home and /ahome are EXT3 filesystems.

Here are my target partitions:

/dev/hda5 on /home type ext3
/dev/hda10 on /ahome type ext3

Now, let's make our test directory:

[root@fc2 root]# mkdir -m 1777 /home/NFS4
[root@fc2 root]# ls -ld /home/NFS4
drwxrwxrwt 2 root root 4096 Jun 9 10:30 /home/NFS4

Check the permissions on /ahome:

[root@fc2 root]# ls -ld /ahome
drwxr-xr-x 6 root root 4096 Jun 4 15:26 /ahome

Open /etc/fstab with an editor. Modify the entries to include the "acl" and "rw" options, as shown below:

LABEL=/home     /home      ext3    rw,acl    1 2
LABEL=/ahome    /ahome     ext3    rw,acl    1 2

Make sure the filesystems are not in use, then remount them (you may have to drop to single-user mode to do this):

[root@fc2 root]# umount -v /home
/dev/hda5 umounted
[root@fc2 root]# mount -v /home
/dev/hda5 on /home type ext3 (rw,acl)

[root@fc2 root]# umount -v /ahome
/dev/hda10 umounted
[root@fc2 root]# mount -v /ahome
/dev/hda10 on /ahome type ext3 (rw,acl)

Another way to mount the partitions with the new options is the "remount" option. It will enable you to add ACL support without dropping to single-user mode. Here is how you would use that for the /home partition:

[root@fc2 root]# mount -v -o remount /home
/dev/hda5 on /home type ext3 (rw,acl)
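
Since the partitions are now mounted with ACL support, you can give it a quick sanity check with setfacl and getfacl. This is only an illustrative test (UID 600 is one of the test users we add in Step 9, but setfacl accepts a numeric UID even before the account exists); remove the test ACL afterwards with setfacl -b:

[root@fc2 root]# setfacl -m u:600:rwx /home/NFS4
[root@fc2 root]# getfacl --omit-header /home/NFS4
[root@fc2 root]# setfacl -b /home/NFS4

The getfacl output should now include a user:600:rwx entry and a mask entry in addition to the normal owner/group/other lines.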

While we are at it, let's put a few text files in /home/NFS4. This will give us some files to look at when we try out a read-only NFS mount.

[root@fc2 root]# cp /usr/share/doc/nfs-utils-1.0.6/* /home/NFS4

Step 8 - Configure start scripts for automatic startup and shutdown

You will want to make sure that all of the NFS-related services start and stop automatically. We can use the chkconfig utility for this:

[root@fc2 /]# chkconfig --level 0123456 portmap off
[root@fc2 /]# chkconfig --level 345 portmap on
[root@fc2 /]# chkconfig --level 0123456 rpcidmapd off
[root@fc2 /]# chkconfig --level 345 rpcidmapd on
[root@fc2 /]# chkconfig --level 0123456 nfslock off
[root@fc2 /]# chkconfig --level 345 nfslock on
[root@fc2 /]# chkconfig --level 0123456 nfs off
[root@fc2 /]# chkconfig --level 345 nfs on

[root@fc2 /]# chkconfig --level 0123456 rpcgssd off
[root@fc2 /]# chkconfig --level 0123456 rpcsvcgssd off

Now, we need to make sure all the right daemons are restarted or stopped, as appropriate:

[root@fc2 /]# /etc/init.d/rpcgssd stop
[root@fc2 /]# /etc/init.d/rpcsvcgssd stop
[root@fc2 /]# /etc/init.d/portmap restart
Stopping portmapper: [ OK ]
Starting portmapper: [ OK ]
[root@fc2 /]# /etc/init.d/rpcidmapd restart
Shutting down NFS4 idmapd: [FAILED]
Starting NFS4 idmapd: [ OK ]
[root@fc2 /]# /etc/init.d/nfslock restart
Stopping NFS statd: [FAILED]
Starting NFS statd: [ OK ]
[root@fc2 /]# /etc/init.d/nfs restart
Shutting down NFS mountd: [FAILED]
Shutting down NFS daemon: [FAILED]
Shutting down NFS quotas: [FAILED]
Shutting down NFS services: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]

Note:   The rpc*gssd daemons are required for the RPCSEC_GSS security flavor. They are not required in this HOWTO, as we are only using the AUTH_SYS security flavor.

The following two commands tell you what NFS-related daemons are running, and what UDP and TCP ports they are listening on:

[root@fc2 root]# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  32768  status
    100024    1   tcp  32768  status
    391002    2   tcp  32770  sgi_fam
    100011    1   udp    648  rquotad
    100011    2   udp    648  rquotad
    100011    1   tcp    651  rquotad
    100011    2   tcp    651  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100021    1   udp  32861  nlockmgr
    100021    3   udp  32861  nlockmgr
    100021    4   udp  32861  nlockmgr
    100021    1   tcp  32987  nlockmgr
    100021    3   tcp  32987  nlockmgr
    100021    4   tcp  32987  nlockmgr
    100005    1   udp    664  mountd
    100005    1   tcp    667  mountd
    100005    2   udp    664  mountd
    100005    2   tcp    667  mountd
    100005    3   udp    664  mountd
    100005    3   tcp    667  mountd

[root@fc2 root]# netstat -tupa
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address    Foreign Address   State     PID/Program name
tcp        0      0 *:897            *:*               LISTEN    5806/rpc.mountd
tcp        0      0 *:nfs            *:*               LISTEN    -
tcp        0      0 *:32941          *:*               LISTEN    5747/rpc.statd
tcp        0      0 *:32942          *:*               LISTEN    -
tcp        0      0 *:sunrpc         *:*               LISTEN    5668/portmap
tcp        0      0 *:881            *:*               LISTEN    5789/rpc.rquotad
tcp        0      0 *:ssh            *:*               LISTEN    1301/sshd
udp        0      0 *:nfs            *:*                         -
udp        0      0 *:32775          *:*                         5747/rpc.statd
udp        0      0 *:32776          *:*                         -
udp        0      0 *:835            *:*                         5747/rpc.statd
udp        0      0 *:878            *:*                         5789/rpc.rquotad
udp        0      0 *:sunrpc         *:*                         5668/portmap
udp        0      0 *:894            *:*                         5806/rpc.mountd

Step 9 - UID/GID/user name/group name synchronization

For our test users, we will need identical usernames, UIDs, groupnames, and GIDs on both client and server. If you are already using NIS or LDAP, then you can use users and groups defined there for your testing. If your NFSv4 test machines (client and server) are simply using /etc files, then you should add three test users to the server. When you configure the client, you will add the same users with identical numeric UID/GID values.

[root@fc2 root]# useradd -d /ahome/jgish -u 600 jgish
[root@fc2 root]# useradd -d /ahome/wtdoor -u 601 wtdoor
[root@fc2 root]# useradd -d /ahome/btpunch -u 602 btpunch
[root@fc2 root]# id jgish
uid=600(jgish) gid=600(jgish) groups=600(jgish)
[root@fc2 root]# id wtdoor
uid=601(wtdoor) gid=601(wtdoor) groups=601(wtdoor)
[root@fc2 root]# id btpunch
uid=602(btpunch) gid=602(btpunch) groups=602(btpunch)



NFSv4 Client Prep:

Now that you have set up the NFS server, it is time to take care of all the NFSv4 prerequisites on the client.

Step 1 - Add entries to /etc/hosts for clarity and simplicity. Make sure the config file is set up properly:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.1.212 nfsc.vanemery.com nfsc
192.168.1.2 fc2

Step 2 - Firewall/packet filter setup

On the client, FC-2's default iptables configuration will work without modification. Therefore, your packet filter/firewall may be on or off. For best security, I would recommend that you leave it on. It can be enabled or disabled with the system-config-securitylevel or system-config-securitylevel-tui tools.

Note:    In future NFSv4 client implementations, there may be a requirement for a port to be opened in the client's firewall to support delegation/callbacks. More info is in the Notes section at the end of this document.

Step 3 - Configure TCP Wrappers to protect the Portmapper

The following configuration in /etc/hosts.allow will protect your portmapper from malicious connections. In this example, we only allow access from the loopback network. All other access is denied.

#
# hosts.allow This file describes the names of the hosts which are
# allowed to use the local INET services, as decided
# by the '/usr/sbin/tcpd' server.
#

portmap : 127. : ALLOW
portmap : ALL : DENY

Step 4 - Make sure /etc/gssapi_mech.conf exists. It should be installed by default.

[root@fc2 etc]# ls -l /etc/gssapi_mech.conf
-rw-r--r-- 1 root root 804 May 19 04:43 /etc/gssapi_mech.conf

The default configuration (as shown in the server section) should be fine.

Step 5 - Configure /etc/idmapd.conf

The client configuration should match the server configuration:

[General]

Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = vanemery.com

[Mapping]

Nobody-User = nfsnobody
Nobody-Group = nfsnobody

Step 6 - Create NFSv4 Mount Points

We will need to create mount points for our NFS mounts. We will be using the following mount points for this HOWTO:

  • /mnt/NFS4 - this will be for RO and RW mounts of the server's /home/NFS4 directory.
  • /ahome - this will be for user home automounts
[root@nfsc root]# mkdir -m 755 /mnt/NFS4
[root@nfsc root]# ls -ld /mnt/NFS4
drwxr-xr-x 2 root root 4096 Jun 9 15:27 /mnt/NFS4
[root@nfsc root]# mkdir -m 755 /ahome
[root@nfsc root]# ls -ld /ahome
drwxr-xr-x 2 root root 4096 Jun 9 15:28 /ahome

Step 7 - Configure start scripts for automatic startup and shutdown

You will want to make sure that all of the NFS-related services start and stop automatically. We can use the chkconfig utility for this:

[root@nfsc /]# chkconfig --level 0123456 portmap off
[root@nfsc /]# chkconfig --level 345 portmap on
[root@nfsc /]# chkconfig --level 0123456 rpcidmapd off
[root@nfsc /]# chkconfig --level 345 rpcidmapd on
[root@nfsc /]# chkconfig --level 0123456 nfslock off
[root@nfsc /]# chkconfig --level 0123456 nfs off
[root@nfsc /]# chkconfig --level 0123456 rpcgssd off
[root@nfsc /]# chkconfig --level 0123456 rpcsvcgssd off

Now, we need to make sure all the right daemons are restarted or stopped, as appropriate:

[root@nfsc /]# /etc/init.d/nfslock stop
Stopping NFS statd: [FAILED]
[root@nfsc /]# /etc/init.d/nfs stop
Shutting down NFS mountd: [FAILED]
Shutting down NFS daemon: [FAILED]
Shutting down NFS quotas: [FAILED]
Shutting down NFS services: [ OK ]
[root@nfsc /]# /etc/init.d/rpcgssd stop
Shutting down NFS4 gssd: [FAILED]
[root@nfsc /]# /etc/init.d/rpcsvcgssd stop
Shutting down NFS4 svcgssd: [FAILED]
[root@nfsc /]# /etc/init.d/portmap restart
Stopping portmapper: [FAILED]
Starting portmapper: [ OK ]
[root@nfsc /]# /etc/init.d/rpcidmapd restart
Shutting down NFS4 idmapd: [ OK ]
Starting NFS4 idmapd: [ OK ]

The following two commands tell you what NFS-related daemons are running, and what UDP and TCP ports they are listening on:

[root@nfsc root]# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    391002    2   tcp  32768  sgi_fam

[root@nfsc root]# netstat -tunap
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:111      0.0.0.0:*          LISTEN   7233/portmap
tcp        0      0 0.0.0.0:22       0.0.0.0:*          LISTEN   2035/sshd
udp        0      0 0.0.0.0:111      0.0.0.0:*                   7233/portmap

Step 8 - UID/GID/user name/group name synchronization

For our test users, we will need identical usernames, UIDs, groupnames, and GIDs on both client and server. If you are already using NIS or LDAP, then you can use users and groups defined there for your testing. If your NFSv4 test machines (client and server) are simply using /etc files, then you should add three test users to the client:

[root@nfsc root]# useradd -u 600 jgish
[root@nfsc root]# passwd jgish
Changing password for user jgish.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

[root@nfsc root]# useradd -u 601 wtdoor
[root@nfsc root]# passwd wtdoor
Changing password for user wtdoor.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

[root@nfsc root]# useradd -u 602 btpunch
[root@nfsc root]# passwd btpunch
Changing password for user btpunch.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

[root@nfsc root]# id jgish
uid=600(jgish) gid=600(jgish) groups=600(jgish)
[root@nfsc root]# id wtdoor
uid=601(wtdoor) gid=601(wtdoor) groups=601(wtdoor)
[root@nfsc root]# id btpunch
uid=602(btpunch) gid=602(btpunch) groups=602(btpunch)



Read-Only NFSv4 Mount

The first type of NFSv4 mount we are going to look at is the Read-Only (RO) mount. This would be useful for distributing public files, network Linux installs, etc.

On the server, we will need to modify /etc/exports, then export the filesystem. Here is what your /etc/exports file should look like:

### /etc/exports - a list of directories for NFS to export ###
## Read-only export to the 192.168.1.0/24 network ##
/home/NFS4 192.168.1.0/24(ro,fsid=0,insecure,no_subtree_check,sync,anonuid=65534,anongid=65534)

The "fsid=0" option tells the server to use the .

Now, let's tell the NFS daemon about the changes and then check the resulting export:

[root@fc2 root]# exportfs -rv
exporting 192.168.1.0/24:/home/NFS4

[root@fc2 root]# exportfs -v
/home/NFS4 192.168.1.0/24(ro,wdelay,insecure,root_squash,no_subtree_check,fsid=0,anonuid=65534,anongid=65534)

[root@fc2 root]# showmount -e
Export list for fc2.vanemery.com:
/home/NFS4 192.168.1.0/24

On the client, we will now mount the NFSv4 filesystem. First, we will mount it manually, then we will put it into the fstab and mount it via directory name.

[root@nfsc root]# mount -t nfs4 -o ro,intr fc2:/ /mnt/NFS4

[root@nfsc root]# mount | grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
fc2:/ on /mnt/NFS4 type nfs4 (ro,intr,addr=192.168.1.2)

Now, as any user on the client, you should be able to view and copy files from /mnt/NFS4. If you try to write a file to the directory, then you will get an error:

[jgish@nfsc jgish]$ cd /mnt/NFS4
[jgish@nfsc NFS4]$ ls
ChangeLog nfs.html node13.html node18.html node22.html node27.html node6.html THANKS
index.html nfs.ps node14.html node19.html node23.html node2.html node7.html TODO
INSTALL node10.html node15.html node1.html node24.html node3.html node8.html
KNOWNBUGS node11.html node16.html node20.html node25.html node4.html node9.html
NEW node12.html node17.html node21.html node26.html node5.html README

[jgish@nfsc NFS4]$ touch SomeFile.txt
touch: cannot touch `SomeFile.txt': Read-only file system

Now, let's make sure that nobody is using or accessing /mnt/NFS4 and then unmount:

[root@nfsc root]# umount /mnt/NFS4

Let's add the following entry to the /etc/fstab file:

fc2:/   /mnt/NFS4   nfs4  ro,hard,intr,proto=tcp,port=2049,noauto   0 0

Now mount the NFSv4 filesystem using the info in /etc/fstab:

[root@nfsc root]# mount -v /mnt/NFS4
fc2:/ on /mnt/NFS4 type nfs4 (ro,hard,intr,proto=tcp,port=2049,addr=192.168.1.2)

On the client, you can see that there is a persistent TCP connection on port 2049. You can also use the df and du commands to see how much space is on the remote filesystem:

[root@nfsc NFS4]# netstat -tn
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 192.168.1.212:800       192.168.1.2:2049        ESTABLISHED

[root@nfsc NFS4]# df -h /mnt/NFS4
Filesystem            Size  Used Avail Use% Mounted on
fc2:/                 9.7G  594M  8.6G   7% /mnt/NFS4
[root@nfsc NFS4]# du -h /mnt/NFS4
424K    /mnt/NFS4

Let's go ahead and unmount the NFSv4 filesystem, because we need to make some changes for the Read/Write exercise that comes next. Just use this command to unmount, after making sure nobody is using /mnt/NFS4:

[root@nfsc /]# umount -v /mnt/NFS4
fc2:/ umounted



Read/Write NFSv4 Mount

We will now look at the most common type of NFS mount, the Read/Write (RW) mount. Since we created the /home/NFS4 directory with the sticky bit set, any remote user will be able to read and write to the directory, but only the owner of the files will be able to modify or delete them. If the "root" user on the NFSv4 client writes a file to the directory, the ownership will be changed to "nfsnobody". Root squashing is on by default, which means that the "root" user on remote NFS clients does not have root privileges on the server.

On the server, we will need to modify /etc/exports, then re-export the filesystem. Here is what your /etc/exports file should look like:

### /etc/exports - a list of directories for NFS to export ###
## read/write export to the 192.168.1.0/24 network ##
/home/NFS4 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,sync,anonuid=65534,anongid=65534)

Now re-export it:

[root@fc2 nfs]# exportfs -rv
exporting 192.168.1.0/24:/home/NFS4

[root@fc2 nfs]# exportfs -v
/home/NFS4 192.168.1.0/24(rw,wdelay,insecure,root_squash,no_subtree_check,fsid=0,anonuid=65534,anongid=65534)

[root@fc2 nfs]# showmount -e
Export list for fc2.vanemery.com:
/home/NFS4 192.168.1.0/24

On the client, we need to modify /etc/fstab and re-mount the NFSv4 filesystem. Open up the fstab file with an editor and change it to:

fc2:/     /mnt/NFS4    nfs4    rw,hard,intr,proto=tcp,port=2049,noauto  0 0

Now re-mount the filesystem:

[root@nfsc /]# mount -v /mnt/NFS4
fc2:/ on /mnt/NFS4 type nfs4 (rw,hard,intr,proto=tcp,port=2049,addr=192.168.1.2)

Now your test users should be able to read, write, modify, and delete files and directories in the /mnt/NFS4 directory. Any files the "root" user writes to the NFS filesystem will have their ownership changed to "nfsnobody". For all practical purposes, the NFSv4 filesystem looks like a local filesystem. All standard commands will work just as expected.

Don't forget to do some testing on the client as one of your test users: "jgish", "wtdoor", or "btpunch". They should be able to read and write to the NFS filesystem. When they create files, their ownership will not be squashed to "nfsnobody".
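
A quick way to see root squashing in action on the client (the file names are just examples, and your timestamps will differ): write one file as "root" and another as a test user, then compare the ownership:

[root@nfsc NFS4]# touch root-file.txt
[root@nfsc NFS4]# ls -l root-file.txt
-rw-r--r--    1 nfsnobody nfsnobody        0 Jun 11 10:15 root-file.txt

[jgish@nfsc NFS4]$ touch jgish-file.txt
[jgish@nfsc NFS4]$ ls -l jgish-file.txt
-rw-rw-r--    1 jgish     jgish            0 Jun 11 10:16 jgish-file.txt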

To unmount the filesystem, make sure it is not being accessed by the client, then use this command:

[root@nfsc root]# umount -v /mnt/NFS4
fc2:/ umounted

You can also remove the NFS entry from /etc/fstab if you will not be using the remote filesystem any more. Another useful mount option that you may have noticed in the fstab file is the "noauto" option. With it, the client machine will not attempt the NFS mount during system startup; the mount command must be given explicitly first.

Monitoring the Server

If you want to monitor the NFSv4 server, there are several commands that might be useful. First of all, the nfsstat command can be used in conjunction with the watch command for a real-time view of NFSv4 activity:

[root@fc2 root]# watch nfsstat -r -o net

Every 2s: nfsstat -r -o net                              Fri Jun 11 15:37:01 2004

Server packet stats:
packets    udp        tcp        tcpconn
162367     0          162335     67

Server rpc stats:
calls      badcalls   badauth    badclnt    xdrcall
162357     0          0          0          0

The netstat command can also be put to good use, as each NFSv4 client opens one TCP connection to port 2049:

[root@fc2 root]# netstat -tn
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 192.168.1.2:2049        192.168.1.212:800       ESTABLISHED



The NFSv4 Pseudofilesystem

One big change in NFSv4 is that the server presents a single seamless view of all the exported filesystems to a client. You no longer have to explicitly mount different exported filesystems on the NFS server. The server presents a single namespace/file hierarchy. This is called the NFSv4 pseudofilesystem.

According to the CITI developers, the current Linux NFSv4 code implements the pseudofilesystem as a single real filesystem, identified at export with the "fsid=0" option.

What are the practical implications of this pseudofilesystem to the system administrator? Let's use an example to explore the concept:

Scenario

We already have an NFSv4 read/write export for the /home/NFS4 directory. Let's glue two discontiguous portions of the server's filesystem into /home/NFS4, then export all three directory hierarchies as a single, seamless filesystem. I will add the following directory trees:

  • /XFS (an XFS filesystem on /dev/hda9)  ====>>  /home/NFS4/XFS
  • /NFSTEST (belongs to the "/" filesystem on /dev/hda2)  ====>>  /home/NFS4/NFSTEST

We will need to use the "--bind" option for the mount command in order to make this happen. Here are step-by-step instructions for the example scenario:

Step 1 - on the client, make sure that the NFSv4 filesystem has been unmounted.

Step 2 - on the server, stop the nfsd service:

[root@fc2 /]# /etc/init.d/nfs stop
Shutting down NFS mountd: [ OK ]
Shutting down NFS daemon: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]

Step 3 - on the server, create directories and/or filesystems:

I already had a spare partition (/dev/hda9) to devote to the /XFS directory tree. If you want to make a new partition, format it, and mount it, then this is the time to do it. The other directory tree, /NFSTEST, did not exist, so I simply created it on the existing root "/" filesystem with the appropriate permissions.

[root@fc2 /]# chmod 1777 /XFS
[root@fc2 /]# ls -ld /XFS
drwxrwxrwt+ 3 root root 53248 Jun 16 13:33 /XFS

[root@fc2 /]# mkdir -m 1777 /NFSTEST
[root@fc2 /]# ls -ld /NFSTEST
drwxrwxrwt 2 root root 4096 Jun 16 15:16 /NFSTEST

Now, let's copy a few files into these new directory trees so that we will have something to look at later:

[root@fc2 /]# cp /bin/* /XFS
[root@fc2 /]# cp -r /boot/* /NFSTEST

Step 4 - on the server, make mountpoints under /home/NFS4 and then mount the two new directory trees using the "--bind" option:

[root@fc2 /]# mkdir /home/NFS4/XFS
[root@fc2 /]# mkdir /home/NFS4/NFSTEST

[root@fc2 /]# mount --bind /XFS /home/NFS4/XFS
[root@fc2 /]# mount --bind /NFSTEST /home/NFS4/NFSTEST

Now, look at the new mounts:

[root@fc2 /]# mount | grep NFS4
/XFS on /home/NFS4/XFS type none (rw,bind)
/NFSTEST on /home/NFS4/NFSTEST type none (rw,bind)

Step 5 - Configure /etc/exports

On the server, you will reconfigure the /etc/exports config file:

### /etc/exports - a list of directories for NFS to export ###
## read/write export to the 192.168.1.0/24 network ##

/home/NFS4 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,sync,anonuid=65534,anongid=65534)
/home/NFS4/XFS 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,sync,anonuid=65534,anongid=65534)
/home/NFS4/NFSTEST 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,sync,anonuid=65534,anongid=65534)

Step 6 - Start the NFS server and check your exports

[root@fc2 /]# /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]

[root@fc2 NFS4]# exportfs -v
/home/NFS4/NFSTEST 192.168.1.0/24(rw,wdelay,nohide,insecure,root_squash,no_subtree_check,anonuid=65534,anongid=65534)
/home/NFS4/XFS 192.168.1.0/24(rw,wdelay,nohide,insecure,root_squash,no_subtree_check,anonuid=65534,anongid=65534)
/home/NFS4 192.168.1.0/24(rw,wdelay,insecure,root_squash,no_subtree_check,fsid=0,anonuid=65534,anongid=65534)

Step 7 - On the client, mount the NFSv4 filesystem

With NFSv2/v3, you would need to use mount statements that included the entire path of the export. Since NFSv4 presents a "single seamless view" of the server's exported filesystems, you only need to mount fc2:/ and you will be able to access the entire NFSv4 pseudofilesystem. In fact, if you try to mount fc2:/home/NFS4, you will get an error.
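
For example, trying to mount the full server-side path should fail with something along these lines (the exact error text depends on your kernel and mount versions):

[root@nfsc root]# mount -t nfs4 fc2:/home/NFS4 /mnt/NFS4
mount: fc2:/home/NFS4 failed, reason given by server: No such file or directory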

Your /etc/fstab entry on the client should still look like this:

fc2:/   /mnt/NFS4  nfs4  rw,hard,intr,proto=tcp,port=2049,noauto  0 0

Go ahead and mount the NFSv4 filesystem:

[root@nfsc root]# mount -v /mnt/NFS4
fc2:/ on /mnt/NFS4 type nfs4 (rw,hard,intr,proto=tcp,port=2049,addr=192.168.1.2)

Step 8 - Take a look around; make sure you can see the new directory trees:

[root@nfsc root]# cd /mnt/NFS4
[root@nfsc NFS4]# ls
big.up blkid.tab blkid.tab.old NFSTEST test.big VANOS XFS
[root@nfsc NFS4]# cd NFSTEST
[root@nfsc NFSTEST]# ls
config-2.6.5-1.358 initrd-2.6.5-1.358.img System.map-2.6.5-1.358 vmlinuz-2.6.6-1.427
config-2.6.6-1.427 initrd-2.6.6-1.427.img System.map-2.6.6-1.427
grub lost+found vmlinuz-2.6.5-1.358
[root@nfsc NFSTEST]# cd ../XFS
[root@nfsc XFS]# ls z*
zcat zdiff zeisstopnm zfgrep zgrep zip zipgrep zipnote zless znew
zcmp zegrep zenity zforce zic2xpm zipcloak zipinfo zipsplit zmore zsoelim

Step 9 - On the client, unmount the NFSv4 filesystem

When you are done experimenting, don't forget to exit from the /mnt/NFS4 directory and unmount the NFSv4 filesystem:

[root@nfsc root]# umount -v /mnt/NFS4
fc2:/ umounted

Notes on the NFSv4 Pseudofilesystem

  1. While you cannot mount the NFSv4 exports with this:

    • fc2:/home/NFS4

    you can still mount directories that are underneath the NFSv4 server's exported root. For example, these mount expressions are O.K.:

    • fc2:/XFS
    • fc2:/NFSTEST
    • fc2:/VANOS

    The leading absolute path from the server must be left out.

  3. The "--bind" mounts will disappear after a reboot. If you need them to automatically mount at boot time, then you should place the mount commands in the /etc/rc.local config file.




Automounting Home Directories with NFSv4

Mounting NFSv4 filesystems does not have to be a manual, root-only procedure. We can use the automounter to automatically mount filesystems when necessary, then automatically unmount them when they are no longer needed. This is especially useful on workstations where the home directories are located on a central server. Why would you want to put home directories on a central server? Here are a few reasons:

  • User mobility - no matter what workstation the user logs into, he can access his home directory
  • Data integrity - the central server can use good-quality SCSI disks and RAID arrays
  • Backups - centralized backups are easier to manage (and more likely to be done) than local backups
  • More storage - the central server may be exporting a very large disk array
  • Cost - since clients do not need large, expensive disks or backup devices, they can be simple and cheap

Server Setup for Autohomes:

There is not much to do on the server, other than dedicating a disk, partition, or volume to the "autohome" filesystem. For production use, you will probably want to enable user quotas so that nobody hogs all of the disk space. If a user exceeds his quota, he will not be able to write to disk and will receive an error. However, in the spirit of keeping things simple, we will not use quotas in this example.
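
If you do decide to turn on quotas later, a rough sketch of the usual EXT3 recipe looks like this (not part of this HOWTO, and the limits you pick are up to you): add the "usrquota" option to the /ahome line in /etc/fstab, remount, then initialize and enable the quota files:

[root@fc2 root]# mount -v -o remount /ahome
[root@fc2 root]# quotacheck -cum /ahome
[root@fc2 root]# edquota jgish
[root@fc2 root]# quotaon /ahome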

We will be exporting the /ahome EXT3 filesystem with the "rw" and "acl" options:

[root@fc2 root]# mount | grep /ahome
/dev/hda10 on /ahome type ext3 (rw,acl)

Now, edit /etc/exports so that this is the only entry:

/ahome    192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,sync,anonuid=65534,anongid=65534)

Restart the NFS daemon and verify the new export:

[root@fc2 root]# /etc/init.d/nfs restart
Shutting down NFS mountd: [ OK ]
Shutting down NFS daemon: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]

[root@fc2 root]# exportfs -v
/ahome 192.168.1.0/24(rw,wdelay,insecure,root_squash,no_subtree_check,fsid=0,anonuid=65534,anongid=65534)

Client Setup for Autohomes:

Now, we need to move to the client and setup the automounter. The automounter is configured with the /etc/auto.master file and the associated /etc/auto.mountpoint files. The automounter is started and stopped with the /etc/init.d/autofs init script. We want the users to be dropped into their automounted home directories (autohomes) when they login.

First, make sure that no NFS filesystems are mounted, and make sure to remove any NFSv4 entries from your /etc/fstab file. This way, you will not have any conflicts when you try out the automounter.

Next, let's edit the /etc/auto.master config file:

#
# $Id: auto.master,v 1.3 2003/09/29 08:22:35 raven Exp $
#
# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5).
#/misc /etc/auto.misc --timeout=60
#/misc /etc/auto.misc
#/net /etc/auto.net

/ahome /etc/auto.ahome --timeout=90

Now, we will create the /etc/auto.ahome config file. The "*" key matches whatever directory name is requested under /ahome, and the "&" substitutes that same name into the server path, so a lookup of /ahome/jgish triggers a mount of fc2:/jgish:

# This is for mounting user homes over NFSv4

* -fstype=nfs4,rw,proto=tcp,port=2049 fc2:/&

Now, create the mountpoint:

[root@nfsc /]# mkdir -v /ahome
mkdir: created directory `/ahome'

What about the users' home directory config in /etc/passwd? If you are using LDAP or NIS, all the entries are synchronized across the network. However, we built our test users on the client machine with the standard /home/username home directories. We need to go ahead and change that to /ahome/username:

[root@nfsc /]# usermod -d /ahome/jgish jgish
[root@nfsc /]# usermod -d /ahome/btpunch btpunch
[root@nfsc /]# usermod -d /ahome/wtdoor wtdoor

Now, let's configure the automounter to start automatically after system boot, and then start it:

[root@nfsc /]# chkconfig autofs off
[root@nfsc /]# chkconfig --level 345 autofs on

[root@nfsc /]# /etc/init.d/autofs start
Starting automount: [ OK ]

[root@nfsc /]# mount | grep /ahome
automount(pid5091) on /ahome type autofs (rw,fd=4,pgrp=5091,minproto=2,maxproto=4)

As you can see, /ahome is now mounted as type "autofs".

Now, let's login as user "jgish" and see what happens:

[root@fc2 root]# ssh jgish@nfsc
jgish@nfsc's password:
[jgish@nfsc jgish]$ pwd
/ahome/jgish

[jgish@nfsc jgish]$ mount | grep ahome
automount(pid5091) on /ahome type autofs (rw,fd=4,pgrp=5091,minproto=2,maxproto=4)
fc2:/jgish on /ahome/jgish type nfs4 (rw,proto=tcp,port=2049,addr=192.168.1.2)

[jgish@nfsc jgish]$ df -h | grep ahome
fc2:/jgish 1.4G 33M 1.3G 3% /ahome/jgish

User "jgish" can now go about his business. When he logs out, his NFSv4 mount entry will disappear after 90 seconds. In fact, the TCP connection between the client and the server will also terminate. All of the user's files are stored on the server. If "jgish" logs in on a different machine, he can still access his files transparently.

I also did some quota enforcement testing on the autohomes. The user on the NFSv4 client was not able to exceed his quota.

Note:   the automounted directory will not unmount if any files are still open. For example, in Fedora Core 2 the "bonobo-activation-server" may keep running after you have logged out of a GUI desktop session. However, this is a distribution/desktop bug and not a problem with the automounter or the NFSv4 protocol. You can still shut down the client machine without any problems.

Disable the Automounter

To stop using the automounter, stop the service with the /etc/init.d/autofs stop command, then comment out your entries in the /etc/auto.master config file. You may also want to modify which runlevels it starts and stops on with the chkconfig command.
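
For example, on the client the sequence would look roughly like this (output may vary):

[root@nfsc /]# /etc/init.d/autofs stop
Stopping automount: [ OK ]
[root@nfsc /]# chkconfig autofs off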




Firewall/NAT Traversal of NFSv4 Traffic

One of the design goals for NFSv4 is that it will work well on the Internet. Part of this is the ability to traverse firewalls, packet filters, and NAT devices. The best way to do this is to use a single TCP connection, with no "auxiliary" protocols, reverse connections, or embedded IP addresses. NFSv4 does indeed fulfill this goal. It uses a single TCP connection with a well-defined destination TCP port. It traverses firewalls and NAT devices with ease.

I was able to successfully mount and use an NFSv4 filesystem over the Internet. Between the client and server there were two packet filters, a firewall, and a NAT firewall. The total bandwidth available was 128kbps down/768kbps up.

The portmapper (SUNRPC) was not exposed, nor were any of the other traditional NFS-related RPC protocols. A packet analyzer (Ethereal) showed that only the NFSv4 protocol over TCP 2049 was used. This is definitely a step in the right direction!
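
If you want to verify this yourself without a full packet analyzer, a simple tcpdump filter on the client should show essentially nothing but the single TCP 2049 conversation while you exercise the mount (eth0 is just an example interface name, and port 22 is excluded so your own SSH session does not clutter the output):

[root@nfsc root]# tcpdump -n -i eth0 host 192.168.1.2 and not tcp port 22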

Note:    In future NFSv4 client implementations, there may be a requirement for a port to be opened in the client's firewall to support delegation/callbacks. More info is in the Notes section at the end of this document.




Running NFSv4 Through an SSH Tunnel

NFSv4's ability to traverse firewalls and NAT devices impressed me. If it could do that, then it should be able to traverse an SSH tunnel as well. We could use the Swiss Army Knife of host-to-host networking (SSH) to encrypt the whole NFSv4 session. Would it work?

It actually worked without a hitch.

The basic setup is similar to the Read/Write NFSv4 Mount example above, except the IP addresses are changed to 127.0.0.1 (the loopback address). Here are the configs and commands that were used on both the server and the client:

NFSv4 over SSH, Server Config

/etc/exports:

/home/NFS4      127.0.0.1(rw,fsid=0,insecure,no_subtree_check,sync,anonuid=65534,anongid=65534)

Now, re-export:

[root@fc2 root]# exportfs -rv
exporting localhost.localdomain:/home/NFS4

[root@fc2 root]# exportfs -v
/home/NFS4 localhost.localdomain(rw,wdelay,insecure,root_squash,no_subtree_check,fsid=0,anonuid=65534,anongid=65534)

NFSv4 over SSH, Client Config

/etc/fstab:

127.0.0.1:/     /mnt/NFS4       nfs4    rw,hard,intr,proto=tcp,port=8888,noauto  0 0

Now, we need to set up an SSH session with port forwarding. We will use AES encryption. Instead of using TCP 2049 for the local port, we will use TCP 8888 just to show that NFSv4 clients and SSH tunnels do not care which ports they use. Open up an SSH session from the NFS client to the NFS server:

[root@nfsc /]# ssh -2 -x -c aes128-cbc -L 8888:127.0.0.1:2049 fc2
root@fc2's password:

[root@fc2 root]#

Note:   the SSH session can be established as a regular user; you do not need to be "root".
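
If you would rather not keep an interactive shell open just to hold the tunnel, ssh can background the port forward with the -f and -N options (same tunnel as above, no remote command is run):

[root@nfsc /]# ssh -2 -x -f -N -c aes128-cbc -L 8888:127.0.0.1:2049 fc2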

Back on the NFS client host (nfsc/192.168.1.212), open another terminal session as "root". You will then mount the filesystem:

[root@nfsc root]# mount -v /mnt/NFS4
127.0.0.1:/ on /mnt/NFS4 type nfs4 (rw,hard,intr,proto=tcp,port=8888,addr=127.0.0.1)

The client can now use the NFSv4 filesystem as if it were local, but it is actually an encrypted, remote filesystem. Pretty neat, huh?

In order to unmount it, first use the umount -v /mnt/NFS4 command, then exit from the SSH session.




RFC 3530 Comments on Using AUTH_SYS

Since RPCSEC_GSS is mandatory for all RFC 3530-compliant NFSv4 implementations, what kind of support does NFSv4 have for the traditional AUTH_SYS security flavor? Here is what RFC 3530 has to say on the matter (emphasis added):

Traditional RPC implementations have included AUTH_NONE, AUTH_SYS, AUTH_DH, and AUTH_KRB4 as security flavors. With [RFC2203] an additional security flavor of RPCSEC_GSS has been introduced which uses the functionality of GSS-API [RFC2743]. This allows for the use of various security mechanisms by the RPC layer without the additional implementation overhead of adding RPC security flavors. For NFS version 4, the RPCSEC_GSS security flavor MUST be used to enable the mandatory security mechanism. Other flavors, such as, AUTH_NONE, AUTH_SYS, and AUTH_DH MAY be implemented as well.

NFS has historically used a model where, from an authentication perspective, the client was the entire machine, or at least the source IP address of the machine. The NFS server relied on the NFS client to make the proper authentication of the end-user. The NFS server in turn shared its files only to specific clients, as identified by the client's source IP address. Given this model, the AUTH_SYS RPC security flavor simply identified the end-user using the client to the NFS server. When processing NFS responses, the client ensured that the responses came from the same IP address and port number that the request was sent to. While such a model is easy to implement and simple to deploy and use, it is certainly not a safe model. Thus, NFSv4 mandates that implementations support a security model that uses end to end authentication, where an end-user on a client mutually authenticates (via cryptographic schemes that do not expose passwords or keys in the clear on the network) to a principal on an NFS server. Consideration should also be given to the integrity and privacy of NFS requests and responses. The issues of end to end mutual authentication, integrity, and privacy are discussed as part of the section on "RPC and Security Flavor".

Note that while NFSv4 mandates an end to end mutual authentication model, the "classic" model of machine authentication via IP address checking and AUTH_SYS identification can still be supported with the caveat that the AUTH_SYS flavor is neither MANDATORY nor RECOMMENDED by this specification, and so interoperability via AUTH_SYS is not assured.

This being said, AUTH_SYS interoperability between different NFSv4 implementations is tested at the various interop events.




Notes

  1. There are many options to the exportfs and mount commands that might be useful with NFSv4. For example, you can tune the "rsize" and "wsize" options for better throughput (see the fstab sketch after this list).
  2. FC-2's nfsd can simultaneously support NFSv3 and NFSv4 mounts. This seems to work fine.
  3. NFSv3 over TCP seems to work fine...it is the default.
  4. NFSv4 should be able to run over IPv6
  5. I was able to test Unicode (UTF-8) traditional Chinese characters in filenames. Worked fine.
  6. Most of my testing was done with iptables enabled on both client and server. Only TCP 2049 was opened on the server for NFS.
  7. You may notice some interesting output from the mount command on the server. The entries for the "rpc_pipefs" and "nfsd" are normal...do not worry!
    [root@fc2 /]# mount | grep nfs
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
    nfsd on /proc/fs/nfsd type nfsd (rw)
  8. The showmount -a command appears to be somewhat broken on the server. It persists in showing old clients without ever removing them from the list.
  9. There are several tab files related to NFS in the /var/lib/nfs directory.
  10. Several vendors already have commercial products which utilize NFSv4
  11. I was able to successfully test the NFSv4 client on FC-2 with the original 2.6.5-1.358 kernel as well as the newer 2.6.6-1.427 kernel. However, I had problems on the NFSv4 server using the 2.6.6-1.427 kernel. The NFS client was given root:bin as the owner of every file and directory in the NFSv4 mount, regardless of what it was on the server. Regular users could write files to the mount, but could not delete them. Very strange...
  12. In the current NFSv4 specification, performance is enhanced by allowing the server to delegate OPEN, CLOSE, and file locking operations to the client. Then, if the delegation needs to be revoked, the server may issue a client callback via RPC. This means that if you want a client to be able to use delegation, then the client's portmapper and callback port must be available over the network. Since this may not work in many firewall/NAT scenarios, delegation and callback are not required for NFSv4 operation. The server has to test the client's ability to handle callbacks before performing a delegation. If it cannot connect to the callback port, then it will not delegate. Work is currently being done on the client to make sure that the callback mechanism can be "pegged" to a particular port. There is also a v4/Sessions proposal that will be looked at by the IETF NFSv4 working group. This proposal would allow callback traffic to share the existing operations channel, thus negating the need for a callback port and portmapper access. If this proposal is accepted, there will only be one port to worry about:  2049. Also, FC-2 does not support callbacks at this time, anyway.
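
As mentioned in note 1, read and write transfer sizes are tuned on the client side. A sketch of an /etc/fstab entry with explicit 32 KB rsize/wsize values (the numbers are illustrative only; benchmark to find what suits your network):

fc2:/   /mnt/NFS4   nfs4   rw,hard,intr,proto=tcp,port=2049,rsize=32768,wsize=32768,noauto   0 0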



Conclusion

Hopefully, you were able to "get your feet wet" in the interesting world of NFSv4. In the future, NFSv4 will be an important, cross-platform, vendor-independent, secure, network file system. It will help make network and Internet disk storage ubiquitous and OS-neutral.

I look forward to testing (and writing about) NFSv4 with the RPCSEC_GSS security flavor. It will be very cool, indeed!




Additional Resources

Man Pages:

  • nfs
  • nfsd
  • mount
  • umount
  • exports
  • exportfs
  • showmount
  • nfsstat
  • fstab
  • autofs
  • automount

Documents:

  • RFC 3530 - IETF NFSv4 Specification
  • RFC 2203 - RPCSEC_GSS
  • RFC 1964 - Kerberos 5 GSS-API Mechanism
  • RFC 2847 - LIPKEY GSS-API Mechanism
  • RFC 2025 - SPKM GSS-API Mechanism
  • Document from Network Appliance (PDF format)



Last updated: 2004-06-29

