Category: LINUX
2012-10-13 18:08:03
Recently the latest version of Scientific Linux 6 was released. Scientific Linux is a distribution which uses Red Hat Enterprise Linux as its upstream and aims to be compatible with binaries compiled for Red Hat Enterprise Linux. I am really impressed with the quality of this distro and the timeliness with which updates and security fixes are distributed. Thanks to all the developers and testers on the Scientific Linux team! Now let’s move on to configuring an NFS server on RHEL/Scientific Linux.
In my environment I will be using VMware ESXi 4.1 and Ubuntu 10.10 as NFS clients. ESXi 4.1 supports at most NFS v3, so that version will need to remain enabled. Fortunately, the NFS server on RHEL/Scientific Linux appears to support both NFS v3 and v4 out of the box. Ubuntu 10.10 will use the NFSv4 protocol by default.
First, make a directory to hold the NFS export and assign permissions. Open up write permissions on this directory if you’d like anyone to be able to write to it, but be careful: there are security implications, since anyone who mounts the share will be able to write to it:
# mkdir /nfs
# chmod a+w /nfs
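If you do open the directory to everyone, one common middle ground (borrowed from how /tmp is handled, and just a suggestion rather than part of the original setup) is to add the sticky bit so users can only delete their own files:
# chmod 1777 /nfs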
Now we need to install the NFS server packages. We will include a package named “rpcbind”, which is a renamed reimplementation of the “portmap” service. Note that “rpcbind” may not need to be running if you are going to use NFSv4 only, but it is a dependency of the “nfs-utils” package.
# yum -y install nfs-utils rpcbind
Now set the required services to start at boot; “rpcbind” and “nfslock” should be on by default anyhow:
# chkconfig nfs on
# chkconfig rpcbind on
# chkconfig nfslock on
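As a quick sanity check, you can list the runlevel settings to confirm the services are configured to start:
# chkconfig --list | grep -E 'nfs|rpcbind'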
Configure Iptables Firewall for NFS
Rather than disabling the firewall, it is a good idea to configure NFS to work with iptables. For NFSv3 we need to pin several daemons related to rpcbind/portmap to statically assigned ports. We will then open these ports in the INPUT chain for inbound traffic. Fortunately, NFSv4 is greatly simplified: in a basic configuration, TCP 2049 should be the only inbound port required.
First edit the “/etc/sysconfig/nfs” file and uncomment these directives. You can customize the ports if you wish, but I will stick with the defaults:
# vi /etc/sysconfig/nfs
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020
We now need to modify the iptables firewall configuration to allow access to the NFS ports. I will use the “iptables” command and insert the appropriate rules:
# iptables -I INPUT -p tcp -m multiport --dports 111,662,875,892,2049,32803 -j ACCEPT
# iptables -I INPUT -p udp -m multiport --dports 111,662,875,892,2049,32769 -j ACCEPT
Now save the iptables configuration to the config file so it will apply when the system is restarted:
# service iptables save
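You can confirm the rules are in place, and in the order you expect, with:
# iptables -nL INPUT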
Now we need to edit “/etc/exports” and add the path to publish over NFS. In this example I will make the NFS export available to clients on the 192.168.10.0 subnet. I will also allow read/write access, specify synchronous writes, and allow root access. Asynchronous writes are supposed to be safe in NFSv3 and would allow higher performance if you desire. The root access (no_root_squash) is potentially a security risk, but as far as I know it is necessary with VMware ESXi.
# vi /etc/exports
/nfs 192.168.10.0/255.255.255.0(rw,sync,no_root_squash)
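For contrast, and purely as a hypothetical illustration of the option syntax (the 192.168.20.0 subnet is made up), a read-only export that maps root to an unprivileged user would look like:
/nfs 192.168.20.0/255.255.255.0(ro,sync,root_squash)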
Configure SELinux for NFS Export
Rather than disabling SELinux, it is a good idea to configure it to allow remote clients to access files exported via NFS. This is fairly simple and involves setting an SELinux boolean with the “setsebool” utility. In this example we’ll use the read/write boolean, but there is also “nfs_export_all_ro” to allow read-only NFS exports and “use_nfs_home_dirs” to allow home directories to be exported.
# setsebool -P nfs_export_all_rw 1
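To verify the boolean took effect (the -P flag above makes it persistent across reboots), you can list the NFS-related booleans:
# getsebool -a | grep nfs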
Now we will start the NFS services:
# service rpcbind start
# service nfs start
# service nfslock start
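With the services running, you can confirm that the RPC daemons registered on the static ports we assigned earlier:
# rpcinfo -p localhost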
If at any point you add or remove directory exports in the “/etc/exports” file, run “exportfs -r” to resynchronize the export table:
# exportfs -r
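To review the current export table along with the effective options for each entry:
# exportfs -v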
Implement TCP Wrappers for Greater Security
TCP Wrappers give us finer-grained control over which hosts may access the listening daemons on the NFS server than iptables alone. Keep in mind that TCP Wrappers checks “hosts.allow” first, then “hosts.deny”, and the first match determines access. If there is no match in either file, access is permitted.
Append a rule with a subnet or domain name appropriate for your environment to restrict access. Domain names are written with a leading period, such as “.mydomain.com” without the quotation marks. The subnet can also be specified as “192.168.10.” if you prefer that to including the netmask.
# vi /etc/hosts.allow
mountd: 192.168.10.0/255.255.255.0
Append these directives to the “hosts.deny” file to deny access from all other domains or networks:
# vi /etc/hosts.deny
portmap: ALL
lockd: ALL
mountd: ALL
rquotad: ALL
statd: ALL
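Before relying on these rules, you can sanity-check them with the “tcpdmatch” utility (shipped with the tcp_wrappers package), which predicts how a request would be handled; the client address 192.168.10.50 here is just a hypothetical example:
# tcpdmatch mountd 192.168.10.50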
And that should just about do it. No restarts should be necessary to apply the TCP Wrappers configuration. I was able to connect with both my Ubuntu NFSv4 and VMware ESXi NFSv3 clients without issues. If you’d like to check activity and see the different NFS versions in use, simply type:
# nfsstat
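For reference, a minimal sketch of how an Ubuntu 10.10 client might mount the export over NFSv4, assuming the server answers at 192.168.10.1 (a placeholder address; substitute your own server and mount point):
# mkdir -p /mnt/nfs
# mount -t nfs4 192.168.10.1:/nfs /mnt/nfs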
Good luck with your new NFS server!