Red Hat Network File System (NFS) Guide

Chapter 9. Network File System (NFS)

9.1. How NFS Works

9.1.1. Required Services

9.2. pNFS
9.3. NFS Client Configuration

9.3.1. Mounting NFS File Systems using /etc/fstab

9.4. autofs

9.4.1. Improvements in autofs Version 5 over Version 4
9.4.2. autofs Configuration
9.4.3. Overriding or Augmenting Site Configuration Files
9.4.4. Using LDAP to Store Automounter Maps

9.5. Common NFS Mount Options
9.6. Starting and Stopping NFS
9.7. NFS Server Configuration

9.7.1. The /etc/exports Configuration File
9.7.2. The exportfs Command
9.7.3. Running NFS Behind a Firewall
9.7.4. Hostname Formats
9.7.5. NFS over RDMA

9.8. Securing NFS

9.8.1. NFS Security with AUTH_SYS and export controls
9.8.2. NFS security with AUTH_GSS
9.8.3. File Permissions

9.9. NFS and rpcbind

9.9.1. Troubleshooting NFS and rpcbind

9.10. References

A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network.

This chapter focuses on fundamental NFS concepts and supplemental information.

⁠9.1. How NFS Works

Currently, there are three versions of NFS. NFS version 2 (NFSv2) is older and widely supported. NFS version 3 (NFSv3) supports safe asynchronous writes and is more robust at error handling than NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access more than 2 GB of file data.

NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires an  rpcbind service, supports ACLs, and utilizes stateful operations. Red Hat Enterprise Linux 6 supports NFSv2, NFSv3, and NFSv4 clients. When mounting a file system via NFS, Red Hat Enterprise Linux uses NFSv4 by default, if the server supports it.

All versions of NFS can use  Transmission Control Protocol ( TCP) running over an IP network, with NFSv4 requiring it. NFSv2 and NFSv3 can use the  User Datagram Protocol (UDP) running over an IP network to provide a stateless network connection between the client and server.

When using NFSv2 or NFSv3 with UDP, the stateless UDP connection (under normal conditions) has less protocol overhead than TCP. This can translate into better performance on very clean, non-congested networks. However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to saturate the network with requests for the server. In addition, when a frame is lost with UDP, the entire RPC request must be retransmitted; with TCP, only the lost frame needs to be resent. For these reasons, TCP is the preferred protocol when connecting to an NFS server.

The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with the rpcbind [3], lockd, and rpc.statd daemons. The rpc.mountd daemon is required on the NFS server to set up the exports.

Note

TCP is the default transport protocol for NFS version 2 and 3 under Red Hat Enterprise Linux. UDP can be used for compatibility purposes as needed, but is not recommended for wide usage. NFSv4 requires TCP.

All the RPC/NFS daemons have a -p command line option that can set the port, making firewall configuration easier.

After TCP wrappers grant access to the client, the NFS server refers to the  /etc/exports configuration file to determine whether the client is allowed to access any exported file systems. Once verified, all file and directory operations are available to the user.

Important

In order for NFS to work with a default installation of Red Hat Enterprise Linux with a firewall enabled, configure IPTables with the default TCP port 2049. Without proper IPTables configuration, NFS will not function properly.

The NFS initialization script and  rpc.nfsd process now allow binding to any specified port during system start up. However, this can be error-prone if the port is unavailable, or if it conflicts with another daemon.

⁠9.1.1. Required Services

Red Hat Enterprise Linux uses a combination of kernel-level support and daemon processes to provide NFS file sharing. All NFS versions rely on  Remote Procedure Calls ( RPC) between clients and servers. RPC services under Red Hat Enterprise Linux 6 are controlled by the  rpcbind service. To share or mount NFS file systems, the following services work together depending on which version of NFS is implemented:

Note

The portmap service was used to map RPC program numbers to IP address port number combinations in earlier versions of Red Hat Enterprise Linux. This service has been replaced by rpcbind in Red Hat Enterprise Linux 6 to enable IPv6 support.

nfs

service nfs start starts the NFS server and the appropriate RPC processes to service requests for shared NFS file systems.

nfslock

service nfslock start activates a mandatory service that starts the appropriate RPC processes allowing NFS clients to lock files on the server.

rpcbind

rpcbind accepts port reservations from local RPC services. These ports are then made available (or advertised) so the corresponding remote RPC services can access them.  rpcbind responds to requests for RPC services and sets up connections to the requested RPC service. This is not used with NFSv4.

rpc.nfsd

rpc.nfsd allows explicit NFS versions and protocols the server advertises to be defined. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the  nfs service.

Note

As of Red Hat Enterprise Linux 6.3, only the NFSv4 server uses rpc.idmapd. The NFSv4 client uses the keyring-based idmapper nfsidmap. nfsidmap is a stand-alone program that is called by the kernel on demand to perform ID mapping; it is not a daemon. Only if there is a problem with nfsidmap does the client fall back to using rpc.idmapd. More information regarding nfsidmap can be found on the nfsidmap man page.

The following RPC processes facilitate NFS services:

rpc.mountd

This process is used by an NFS server to process  MOUNT requests from NFSv2 and NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. If the mount request is allowed, the rpc.mountd server replies with a  Success status and provides the  File-Handle for this NFS share back to the NFS client.

lockd

lockd is a kernel thread which runs on both clients and servers. It implements the Network Lock Manager (NLM) protocol, which allows NFSv2 and NFSv3 clients to lock files on the server. It is started automatically whenever the NFS server is run and whenever an NFS file system is mounted.

rpc.statd

This process implements the  Network Status Monitor (NSM) RPC protocol, which notifies NFS clients when an NFS server is restarted without being gracefully brought down.  rpc.statd is started automatically by the  nfslock service, and does not require user configuration. This is not used with NFSv4.

rpc.rquotad

This process provides user quota information for remote users. rpc.rquotad is started automatically by the nfs service and does not require user configuration.

rpc.idmapd

rpc.idmapd provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with NFSv4, the /etc/idmapd.conf file must be configured. At a minimum, the "Domain" parameter should be specified, which defines the NFSv4 mapping domain. If the NFSv4 mapping domain is the same as the DNS domain name, this parameter can be skipped. The client and server must agree on the NFSv4 mapping domain for ID mapping to function properly. Refer to the knowledge base article https://access.redhat.com/site/solutions/130783 when using a local domain name.
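For example, a minimal /etc/idmapd.conf sketch might look like the following (example.com is a placeholder for your site's NFSv4 mapping domain):

[General]
Domain = example.com

[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody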

⁠9.2. pNFS

Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements to performance. That is, when a server implements pNFS as well, a client is able to access data through multiple servers concurrently. It supports three storage protocols or layouts: files, objects, and blocks.

To enable this functionality, use one of the following mount options on mounts from a pNFS-enabled server:

-o minorversion=1

or

-o v4.1

After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically loaded on the first mount. Use the following command to verify the module was loaded:

$ lsmod | grep nfs_layout_nfsv41_files

Another way to verify a successful NFSv4.1 mount is with the  mount command. The mount entry in the output should contain  minorversion=1.
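For example, assuming a pNFS-capable server named pnfs-server exporting /export (both hypothetical names), the mount and the two verification steps might look like:

# mount -t nfs -o v4.1 pnfs-server:/export /mnt
# lsmod | grep nfs_layout_nfsv41_files
# mount | grep /mnt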

Important

The protocol allows for three possible pNFS layout types: files, objects, and blocks. However, the Red Hat Enterprise Linux 6.4 client only supports the files layout type, so it uses pNFS only when the server also supports the files layout type.

For more information on pNFS, refer to:  http://www.pnfs.com.

⁠9.3. NFS Client Configuration

The  mount command mounts NFS shares on the client side. Its format is as follows:

# mount -t nfs -o options server:/remote/export /local/directory

This command uses the following variables:

options

A comma-delimited list of mount options; refer to  Section 9.5, “Common NFS Mount Options” for details on valid NFS mount options.

server

The hostname, IP address, or fully qualified domain name of the server exporting the file system you wish to mount

/remote/export

The file system or directory being exported from the  server, that is, the directory you wish to mount

/local/directory

The client location where  /remote/export is mounted

The NFS protocol version used in Red Hat Enterprise Linux 6 is identified by the  mount options  nfsvers or  vers. By default,  mount will use NFSv4 with  mount -t nfs. If the server does not support NFSv4, the client will automatically step down to a version supported by the server. If the  nfsvers/ vers option is used to pass a particular version not supported by the server, the mount will fail. The file system type nfs4 is also available for legacy reasons; this is equivalent to running  mount -t nfs -o nfsvers=4 host:/remote/export /local/directory.
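For example, to force an NFSv3 mount from a hypothetical server named server1:

# mount -t nfs -o nfsvers=3 server1:/export /mnt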

Refer to  man mount for more details.

If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Red Hat Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the /etc/fstab file and the autofs service. Refer to Section 9.3.1, “Mounting NFS File Systems using /etc/fstab” and Section 9.4, “autofs” for more information.

⁠9.3.1. Mounting NFS File Systems using /etc/fstab

An alternate way to mount an NFS share from another machine is to add a line to the  /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the  /etc/fstab file.

Example 9.1. Syntax example

The general syntax for the line in  /etc/fstab is as follows:

server:/usr/local/pub /pub nfs defaults 0 0

The mount point  /pub must exist on the client machine before this command can be executed. After adding this line to  /etc/fstab on the client system, use the command  mount /pub, and the mount point  /pub is mounted from the server.

The  /etc/fstab file is referenced by the  netfs service at boot time, so lines referencing NFS shares have the same effect as manually typing the  mount command during the boot process.

A valid  /etc/fstab entry to mount an NFS export should contain the following information:

server:/remote/export /local/directory nfs options 0 0

The variables  server,  /remote/export,  /local/directory, and  options are the same ones used when manually mounting an NFS share. Refer to  Section 9.3, “NFS Client Configuration” for a definition of each variable.
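For instance, a sketch of an entry using a hypothetical server and a few of the options described in Section 9.5, “Common NFS Mount Options”:

server1:/export/data /mnt/data nfs rw,hard,nfsvers=4 0 0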

Note

The mount point  /local/directory must exist on the client before  /etc/fstab is read. Otherwise, the mount will fail.

For more information about  /etc/fstab, refer to  man fstab.

⁠9.4. autofs

One drawback to using  /etc/fstab is that, regardless of how infrequently a user accesses the NFS mounted file system, the system must dedicate resources to keep the mounted file system in place. This is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at one time, overall system performance can be affected. An alternative to  /etc/fstab is to use the kernel-based  automount utility. An automounter consists of two components:

  • a kernel module that implements a file system, and

  • a user-space daemon that performs all of the other functions.

The  automount utility can mount and unmount NFS file systems automatically (on-demand mounting), therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS, CIFS, and local file systems.

Important

The nfs-utils package is now a part of both the 'NFS file server' and the 'Network File System Client' groups. As such, it is no longer installed by default with the Base group. Ensure that nfs-utils is installed on the system first before attempting to automount an NFS share.

autofs is also part of the 'Network File System Client' group.
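For example, both packages can be installed with yum before configuring any automounts (assuming the appropriate repositories are available):

# yum install nfs-utils autofs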

autofs uses  /etc/auto.master (master map) as its default primary configuration file. This can be changed to use another supported network source and name using the  autofs configuration (in  /etc/sysconfig/autofs) in conjunction with the Name Service Switch (NSS) mechanism. An instance of the  autofs version 4 daemon was run for each mount point configured in the master map and so it could be run manually from the command line for any given mount point. This is not possible with  autofs version 5, because it uses a single daemon to manage all configured mount points; as such, all automounts must be configured in the master map. This is in line with the usual requirements of other industry standard automounters. Mount point, hostname, exported directory, and options can all be specified in a set of files (or other supported network sources) rather than configuring them manually for each host.

⁠9.4.1. Improvements in autofs Version 5 over Version 4

autofs version 5 features the following enhancements over version 4:

Direct map support

Direct maps in  autofs provide a mechanism to automatically mount file systems at arbitrary points in the file system hierarchy. A direct map is denoted by a mount point of  /- in the master map. Entries in a direct map contain an absolute path name as a key (instead of the relative path names used in indirect maps).
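For example, a sketch of a direct map (the file name, server, and paths are hypothetical). The master map refers to the direct map through the /- mount point:

/- /etc/auto.direct

The direct map /etc/auto.direct then uses absolute paths as keys:

/nfs/apps server1:/export/apps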

Lazy mount and unmount support

Multi-mount map entries describe a hierarchy of mount points under a single key. A good example of this is the -hosts map, commonly used for automounting all exports from a host under /net/host as a multi-mount map entry. When using the -hosts map, an ls of /net/host will mount autofs trigger mounts for each export from host; these trigger mounts are then mounted and expired as they are accessed. This can greatly reduce the number of active mounts needed when accessing a server with a large number of exports.
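For example, the -hosts map is typically enabled with a single line in the master map; listing /net/server1 (a hypothetical host) then creates trigger mounts for each of that host's exports on demand:

/net -hosts

# ls /net/server1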

Enhanced LDAP support

The  autofs configuration file ( /etc/sysconfig/autofs) provides a mechanism to specify the  autofs schema that a site implements, thus precluding the need to determine this via trial and error in the application itself. In addition, authenticated binds to the LDAP server are now supported, using most mechanisms supported by the common LDAP server implementations. A new configuration file has been added for this support:  /etc/autofs_ldap_auth.conf. The default configuration file is self-documenting, and uses an XML format.

Proper use of the Name Service Switch (nsswitch) configuration.

The Name Service Switch configuration file exists to provide a means of determining from where specific configuration data comes. The reason for this configuration is to allow administrators the flexibility of using the back-end database of choice, while maintaining a uniform software interface to access the data. While the version 4 automounter is becoming increasingly better at handling the NSS configuration, it is still not complete. Autofs version 5, on the other hand, is a complete implementation.

Refer to man nsswitch.conf for more information on the supported syntax of this file. Not all NSS databases are valid map sources and the parser will reject ones that are invalid. Valid sources are files, yp, nis, nisplus, ldap, and hesiod.

Multiple master map entries per autofs mount point

One thing that is frequently used but not yet mentioned is the handling of multiple master map entries for the direct mount point  /-. The map keys for each entry are merged and behave as one map.

Example 9.2. Multiple master map entries per autofs mount point

An example is seen in the connectathon test maps for the direct mounts below:

/- /tmp/auto_dcthon
/- /tmp/auto_test3_direct
/- /tmp/auto_test4_direct

 

⁠9.4.2. autofs Configuration

The primary configuration file for the automounter is /etc/auto.master, also referred to as the master map, which may be changed as described in Section 9.4.1, “Improvements in autofs Version 5 over Version 4”. The master map lists autofs-controlled mount points on the system, and their corresponding configuration files or network sources known as automount maps. The format of the master map is as follows:

mount-point map-name options

The variables used in this format are:

mount-point

The  autofs mount point,  /home, for example.

map-name

The name of a map source which contains a list of mount points, and the file system location from which those mount points should be mounted. The syntax for a map entry is described below.

options

If supplied, these will apply to all entries in the given map provided they don't themselves have options specified. This behavior is different from  autofs version 4 where options were cumulative. This has been changed to implement mixed environment compatibility.

Example 9.3. /etc/auto.master file

The following is a sample line from the /etc/auto.master file (displayed with cat /etc/auto.master):

/home /etc/auto.misc

The general format of maps is similar to the master map; however, the options appear between the mount point and the location instead of at the end of the entry as in the master map:

mount-point [options] location

The variables used in this format are:

mount-point

This refers to the  autofs mount point. This can be a single directory name for an indirect mount or the full path of the mount point for direct mounts. Each direct and indirect map entry key ( mount-point above) may be followed by a space separated list of offset directories (sub directory names each beginning with a "/") making them what is known as a multi-mount entry.

options

Whenever supplied, these are the mount options for the map entries that do not specify their own options.

location

This refers to the file system location such as a local file system path (preceded with the Sun map format escape character ":" for map names beginning with "/"), an NFS file system or other valid file system location.

The following is a sample of contents from a map file (for example,  /etc/auto.misc):

payroll -fstype=nfs personnel:/exports/payroll
sales -fstype=ext3 :/dev/hda4

The first column in a map file indicates the autofs mount point (sales and payroll in this example). The second column indicates the options for the autofs mount, while the third column indicates the source of the mount (the payroll export from the server personnel, and a local ext3 device for sales). Following the above configuration, the autofs mount points will be /home/payroll and /home/sales. The -fstype= option is often omitted and is generally not needed for correct operation.

The automounter will create the directories if they do not exist. If the directories exist before the automounter was started, the automounter will not remove them when it exits. You can start or restart the automount daemon by issuing either of the following two commands:

  • service autofs start (if the automount daemon has stopped)

  • service autofs restart

Using the above configuration, if a process requires access to an  autofs unmounted directory such as  /home/payroll/2006/July.sxc, the automount daemon automatically mounts the directory. If a timeout is specified, the directory will automatically be unmounted if the directory is not accessed for the timeout period.
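For example, an expiry timeout can be set per mount point in the master map (a sketch; 60 seconds is an arbitrary value):

/home /etc/auto.misc --timeout=60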

You can view the status of the automount daemon by issuing the following command:

# service autofs status

⁠9.4.3. Overriding or Augmenting Site Configuration Files

It can be useful to override site defaults for a specific mount point on a client system. For example, consider the following conditions:

  • Automounter maps are stored in NIS and the  /etc/nsswitch.conf file has the following directive:

    automount: files nis

  • The auto.master file contains the following:

    +auto.master

  • The NIS  auto.master map file contains the following:

    /home auto.home

  • The NIS  auto.home map contains the following:

    beth fileserver.example.com:/export/home/beth
    joe fileserver.example.com:/export/home/joe
    * fileserver.example.com:/export/home/&

  • The file map  /etc/auto.home does not exist.

Given these conditions, let's assume that the client system needs to override the NIS map auto.home and mount home directories from a different server. In this case, the client will need to use the following /etc/auto.master map:

/home /etc/auto.home
+auto.master

The  /etc/auto.home map contains the entry:

* labserver.example.com:/export/home/&

Because the automounter only processes the first occurrence of a mount point,  /home will contain the contents of  /etc/auto.home instead of the NIS  auto.home map.

Alternatively, to augment the site-wide  auto.home map with just a few entries, create an  /etc/auto.home file map, and in it put the new entries. At the end, include the NIS  auto.home map. Then the  /etc/auto.home file map will look similar to:

mydir someserver:/export/mydir
+auto.home

Given the NIS  auto.home map listed above,  ls /home would now output:

beth joe mydir

This last example works as expected because  autofs does not include the contents of a file map of the same name as the one it is reading. As such,  autofs moves on to the next map source in the  nsswitch configuration.

⁠9.4.4. Using LDAP to Store Automounter Maps

LDAP client libraries must be installed on all systems configured to retrieve automounter maps from LDAP. On Red Hat Enterprise Linux, the  openldap package should be installed automatically as a dependency of the  automounter. To configure LDAP access, modify  /etc/openldap/ldap.conf. Ensure that BASE, URI, and schema are set appropriately for your site.

The most recently established schema for storing automount maps in LDAP is described by  rfc2307bis. To use this schema it is necessary to set it in the  autofs configuration  /etc/autofs.conf by removing the comment characters from the schema definition.

Example 9.4. Setting autofs configuration

map_object_class = automountMap
entry_object_class = automount
map_attribute = automountMapName
entry_attribute = automountKey
value_attribute = automountInformation

Note

As of Red Hat Enterprise Linux 6.6, LDAP autofs is set in the /etc/autofs.conf file instead of the /etc/sysconfig/autofs file as was the case in previous releases.

Ensure that these are the only schema entries not commented in the configuration. The  automountKey replaces the  cn attribute in the  rfc2307bis schema. An  LDIF of a sample configuration is described below:

Example 9.5. LDIF configuration

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.master))
# requesting: ALL
#

# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# /home, auto.master, example.com
dn: automountKey=/home,automountMapName=auto.master,dc=example,dc=com
objectClass: automount
cn: /home
automountKey: /home
automountInformation: auto.home

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.home))
# requesting: ALL
#

# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# foo, auto.home, example.com
dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: foo
automountInformation: filer.example.com:/export/foo

# /, auto.home, example.com
dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: /
automountInformation: filer.example.com:/export/&

⁠9.5. Common NFS Mount Options

Beyond mounting a file system with NFS on a remote host, it is also possible to specify other options at mount time to make the mounted share easier to use. These options can be used with manual  mount commands,  /etc/fstab settings, and  autofs.

The following are options commonly used for NFS mounts:

intr

Allows NFS requests to be interrupted if the server goes down or cannot be reached.

lookupcache=mode

Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all, none, or pos/positive.

nfsvers=version

Specifies which version of the NFS protocol to use, where  version is 2, 3, or 4. This is useful for hosts that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by the kernel and  mount command.

The option  vers is identical to  nfsvers, and is included in this release for compatibility reasons.

noacl

Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat Enterprise Linux, Red Hat Linux, or Solaris, since the most recent ACL technology is not compatible with older systems.

nolock

Disables file locking. This setting is occasionally required when connecting to older NFS servers.

noexec

Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system containing incompatible binaries.

nosuid

Disables  set-user-identifier or  set-group-identifier bits. This prevents remote users from gaining higher privileges by running a  setuid program.

port=num

Specifies the numeric value of the NFS server port. If num is 0 (the default), then mount queries the remote host's rpcbind service for the port number to use. If the remote host's NFS daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is used instead.

rsize=num and wsize=num

These settings speed up NFS communication for reads ( rsize) and writes ( wsize) by setting a larger data block size ( num, in bytes), to be transferred at one time. Be careful when changing these values; some older Linux kernels and network cards do not work well with larger block sizes.
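For example, a mount using 32 KB read and write sizes from a hypothetical server (illustrative values; the client and server may negotiate them down):

# mount -t nfs -o rsize=32768,wsize=32768 server1:/export /mnt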

Note

If an rsize value is not specified, or if the specified value is larger than the maximum that either client or server can support, then the client and server negotiate the largest rsize value they can both support.

sec=mode

Specifies the type of security to utilize when authenticating an NFS connection. Its default setting is  sec=sys, which uses local UNIX UIDs and GIDs by using  AUTH_SYS to authenticate NFS operations.

sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.

sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering.

sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead.
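For example, assuming Kerberos is already configured as described in Section 9.8.2, “NFS security with AUTH_GSS”, a Kerberos-authenticated mount from a hypothetical server might look like:

# mount -t nfs -o sec=krb5 server1.example.com:/export /mnt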

tcp

Instructs the NFS mount to use the TCP protocol.

udp

Instructs the NFS mount to use the UDP protocol.

For a complete list of options and more detailed information on each one, refer to  man mount and  man nfs.

⁠9.6. Starting and Stopping NFS

To run an NFS server, the  rpcbind [3] service must be running. To verify that  rpcbind is active, use the following command:

# service rpcbind status

If the  rpcbind service is running, then the  nfs service can be started. To start an NFS server, use the following command:

# service nfs start

nfslock must also be started for both the NFS client and server to function properly. To start NFS locking, use the following command:

# service nfslock start

If NFS is set to start at boot, ensure that nfslock also starts by running chkconfig --list nfslock. If nfslock is not set to on, you will need to run service nfslock start manually each time the computer starts. To set nfslock to start automatically on boot, use chkconfig nfslock on.

nfslock is only needed for NFSv2 and NFSv3.

To stop the server, use:

# service nfs stop

The  restart option is a shorthand way of stopping and then starting NFS. This is the most efficient way to make configuration changes take effect after editing the configuration file for NFS. To restart the server type:

# service nfs restart

The  condrestart ( conditional restart) option only starts  nfs if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. To conditionally restart the server type:

# service nfs condrestart

To reload the NFS server configuration file without restarting the service type:

# service nfs reload

⁠9.7. NFS Server Configuration

There are two ways to configure an NFS server:

  • Manually editing the NFS configuration file, that is,  /etc/exports, and

  • through the command line, that is, by using the command  exportfs

⁠9.7.1. The /etc/exports Configuration File

The  /etc/exports file controls which file systems are exported to remote hosts and specifies options. It follows the following syntax rules:

  • Blank lines are ignored.

  • To add a comment, start a line with the hash mark ( #).

  • You can wrap long lines with a backslash ( \).

  • Each exported file system should be on its own individual line.

  • Any lists of authorized hosts placed after an exported file system must be separated by space characters.

  • Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis.

Each entry for an exported file system has the following structure:

export host(options)

The aforementioned structure uses the following variables:

export

The directory being exported

host

The host or network to which the export is being shared

options

The options to be used for  host

It is possible to specify multiple hosts, along with specific options for each host. To do so, list them on the same line as a space-delimited list, with each hostname followed by its respective options (in parentheses), as in:

export host1(options1) host2(options2) host3(options3)

For information on different methods for specifying hostnames, refer to  Section 9.7.4, “Hostname Formats”.

In its simplest form, the  /etc/exports file only specifies the exported directory and the hosts permitted to access it, as in the following example:

Example 9.6. The /etc/exports file

/exported/directory bob.example.com

Here,  bob.example.com can mount  /exported/directory/ from the NFS server. Because no options are specified in this example, NFS will use  default settings.

The default settings are:

ro

The exported file system is read-only. Remote hosts cannot change the data shared on the file system. To allow hosts to make changes to the file system (that is, read/write), specify the  rw option.

sync

The NFS server will not reply to requests before changes made by previous requests are written to disk. To enable asynchronous writes instead, specify the option  async.

wdelay

The NFS server will delay writing to the disk if it suspects another write request is imminent. This can improve performance as it reduces the number of times the disk must be accessed by separate write commands, thereby reducing write overhead. To disable this, specify the no_wdelay option; no_wdelay is only available if the default sync option is also specified.

root_squash

This prevents root users connected  remotely (as opposed to locally) from having root privileges; instead, the NFS server will assign them the user ID  nfsnobody. This effectively "squashes" the power of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote server. To disable root squashing, specify  no_root_squash.

To squash every remote user (including root), use  all_squash. To specify the user and group IDs that the NFS server should assign to remote users from a particular host, use the  anonuid and  anongid options, respectively, as in:

export host(anonuid=uid,anongid=gid)

Here,  uid and  gid are user ID number and group ID number, respectively. The  anonuid and  anongid options allow you to create a special user and group account for remote NFS users to share.
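For example, a sketch of an export that squashes all remote users from a hypothetical client to a local account with UID and GID 5000:

/exported/directory bob.example.com(all_squash,anonuid=5000,anongid=5000)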

By default,  access control lists ( ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the  no_acl option when exporting the file system.

Each default for every exported file system must be explicitly overridden. For example, if the rw option is not specified, then the exported file system is shared as read-only. The following is a sample line from /etc/exports which overrides two default options:

/another/exported/directory 192.168.0.3(rw,async)

In this example  192.168.0.3 can mount  /another/exported/directory/ read/write and all writes to disk are asynchronous. For more information on exporting options, refer to  man exportfs.

Other options are available where no default value is specified. These include the ability to disable sub-tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early NFS client implementations). Refer to  man exports for details on these less-used options.

Important

The format of the  /etc/exports file is very precise, particularly in regards to use of the space character. Remember to always separate exported file systems from hosts and hosts from one another with a space character. However, there should be no other space characters in the file except on comment lines.

For example, the following two lines do not mean the same thing:

/home bob.example.com(rw)
/home bob.example.com (rw)

The first line allows only users from  bob.example.com read/write access to the  /home directory. The second line allows users from  bob.example.com to mount the directory as read-only (the default), while the rest of the world can mount it read/write.

⁠9.7.2. The exportfs Command

Every file system being exported to remote users with NFS, as well as the access level for those file systems, are listed in the  /etc/exports file. When the  nfs service starts, the  /usr/sbin/exportfs command launches and reads this file, passes control to  rpc.mountd (if NFSv2 or NFSv3) for the actual mounting process, then to  rpc.nfsd where the file systems are then available to remote users.

When issued manually, the  /usr/sbin/exportfs command allows the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the  /usr/sbin/exportfs command writes the exported file systems to  /var/lib/nfs/etab. Since  rpc.mountd refers to the  etab file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately.

The following is a list of commonly-used options available for  /usr/sbin/exportfs:

-r

Causes all directories listed in /etc/exports to be exported by constructing a new export list in /var/lib/nfs/etab. This option effectively refreshes the export list with any changes made to /etc/exports.

-a

Causes all directories to be exported or unexported, depending on what other options are passed to  /usr/sbin/exportfs. If no other options are specified,  /usr/sbin/exportfs exports all file systems specified in  /etc/exports.

-o file-systems

Specifies directories to be exported that are not listed in  /etc/exports. Replace  file-systems with additional file systems to be exported. These file systems must be formatted in the same way they are specified in  /etc/exports. This option is often used to test an exported file system before adding it permanently to the list of file systems to be exported. Refer to  Section 9.7.1, “The /etc/exports Configuration File” for more information on  /etc/exports syntax.

-i

Ignores  /etc/exports; only options given from the command line are used to define exported file systems.

-u

Unexports all shared directories. The command  /usr/sbin/exportfs -ua suspends NFS file sharing while keeping all NFS daemons up. To re-enable NFS sharing, use  exportfs -r.

-v

Verbose operation, where the file systems being exported or unexported are displayed in greater detail when the  exportfs command is executed.

If no options are passed to the  exportfs command, it displays a list of currently exported file systems. For more information about the  exportfs command, refer to  man exportfs.
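For example, the following sketch (the client and directory names are hypothetical) re-exports everything listed in /etc/exports verbosely and then temporarily exports one additional directory not listed in the file:

# exportfs -rv
# exportfs -o rw,sync bob.example.com:/srv/scratch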

⁠9.7.2.1. Using exportfs with NFSv4

In Red Hat Enterprise Linux 6, no extra steps are required to configure NFSv4 exports as any filesystems mentioned are automatically available to NFSv2, NFSv3, and NFSv4 clients using the same path. This was not the case in previous versions.

To prevent clients from using NFSv4, turn it off by setting RPCNFSDARGS="-N 4" in /etc/sysconfig/nfs.

⁠9.7.3. Running NFS Behind a Firewall

NFS requires  rpcbind, which dynamically assigns ports for RPC services and can cause problems for configuring firewall rules. To allow clients to access NFS shares behind a firewall, edit the  /etc/sysconfig/nfs configuration file to control which ports the required RPC services run on.

The /etc/sysconfig/nfs file may not exist by default on all systems. If it does not exist, create it and add the following variables, replacing port with an unused port number (alternatively, if the file exists, un-comment and change the default entries as required):

MOUNTD_PORT=port

Controls which TCP and UDP port  mountd ( rpc.mountd) uses.

STATD_PORT=port

Controls which TCP and UDP port status ( rpc.statd) uses.

LOCKD_TCPPORT=port

Controls which TCP port  nlockmgr ( lockd) uses.

LOCKD_UDPPORT=port

Controls which UDP port  nlockmgr ( lockd) uses.

If NFS fails to start, check  /var/log/messages. Normally, NFS will fail to start if you specify a port number that is already in use. After editing  /etc/sysconfig/nfs, restart the NFS service using  service nfs restart. Run the  rpcinfo -p command to confirm the changes.
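For example, a sketch of /etc/sysconfig/nfs with fixed ports (the port numbers are illustrative; choose ports that are unused at your site):

MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769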

To configure a firewall to allow NFS, perform the following steps:

Procedure 9.1. Configure a firewall to allow NFS

  1. Allow TCP and UDP port 2049 for NFS.

  2. Allow TCP and UDP port 111 ( rpcbind/ sunrpc).

  3. Allow the TCP and UDP port specified with  MOUNTD_PORT="port"

  4. Allow the TCP and UDP port specified with  STATD_PORT="port"

  5. Allow the TCP port specified with  LOCKD_TCPPORT="port"

  6. Allow the UDP port specified with  LOCKD_UDPPORT="port"
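A minimal iptables sketch for the steps above, assuming the illustrative ports shown earlier for /etc/sysconfig/nfs and a default INPUT chain:

# iptables -A INPUT -p tcp -m multiport --dports 111,2049,892,662,32803 -j ACCEPT
# iptables -A INPUT -p udp -m multiport --dports 111,2049,892,662,32769 -j ACCEPT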

Note

To allow NFSv4.0 callbacks to pass through firewalls set  /proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that port on the client.
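For example, a sketch that pins the NFSv4.0 callback port on the client to an arbitrary value (the port number is illustrative; the server must then be allowed to connect to that port through the client's firewall):

# echo 32764 > /proc/sys/fs/nfs/nfs_callback_tcpport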

This process is not needed for NFSv4.1 or higher, and the other ports for mountd, statd, and lockd are not required in a pure NFSv4 environment.

⁠9.7.3.1. Discovering NFS exports

There are two ways to discover which file systems an NFS server exports.

First, on any server that supports NFSv2 or NFSv3, use the  showmount command:

$ showmount -e myserver
Export list for myserver
/exports/foo
/exports/bar

Second, on any server that supports NFSv4, mount  / and look around.

# mount myserver:/ /mnt/
# cd /mnt/
exports
# ls exports
foo
bar

On servers that support both NFSv4 and either NFSv2 or NFSv3, both methods will work and give the same results.

Note

On NFS servers older than Red Hat Enterprise Linux 6, depending on how they are configured, it is possible for file systems to be exported to NFSv4 clients at different paths. Because these servers do not enable NFSv4 by default, this should not normally be a problem.

⁠9.7.4. Hostname Formats

The host(s) can be in the following forms:

Single machine

A fully-qualified domain name (that can be resolved by the server), hostname (that can be resolved by the server), or an IP address.

Series of machines specified with wildcards

Use the  * or  ? character to specify a string match. Wildcards are not to be used with IP addresses; however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully qualified domain names, dots ( .) are not included in the wildcard. For example,  *.example.com includes  one.example.com but does not  include one.two.example.com.

IP networks

Use  a.b.c.d/z, where  a.b.c.d is the network and  z is the number of bits in the netmask (for example 192.168.0.0/24). Another acceptable format is  a.b.c.d/netmask, where  a.b.c.d is the network and  netmask is the netmask (for example, 192.168.100.8/255.255.255.0).

Netgroups

Use the format @group-name, where group-name is the NIS netgroup name.

⁠9.7.5. NFS over RDMA

To enable the RDMA transport in the Linux kernel NFS server, use the following procedure:

Procedure 9.2. Enable RDMA from server

  1. Ensure the RDMA rpm is installed and the RDMA service is enabled with the following command:

    # yum install rdma; chkconfig --level 2345 rdma on
  2. Ensure the package that provides the nfs-rdma service is installed and the service is enabled with the following command:

    # yum install rdma; chkconfig --level 345 nfs-rdma on
  3. Ensure that the RDMA port is set to the preferred port (default for Red Hat Enterprise Linux 6 is 2050). To do so, edit the  /etc/rdma/rdma.conf file to set NFSoRDMA_LOAD=yes and NFSoRDMA_PORT to the desired port.

  4. Set up the exported filesystem as normal for NFS mounts.

On the client side, use the following procedure:

Procedure 9.3. Enable RDMA from client

  1. Ensure the RDMA rpm is installed and the RDMA service is enabled with the following command:

    # yum install rdma; chkconfig --level 2345 rdma on
  2. Mount the NFS exported partition using the RDMA option on the mount call. The port option can optionally be added to the call.

    # mount -t nfs -o rdma,port=port_number

⁠9.8. Securing NFS

NFS is well-suited for sharing entire file systems with a large number of known hosts in a transparent manner. However, with ease-of-use comes a variety of potential security problems. Consider the following sections when exporting NFS file systems on a server or mounting them on a client. Doing so minimizes NFS security risks and better protects data on the server.

⁠9.8.1. NFS Security with AUTH_SYS and export controls

Traditionally, NFS has given two options in order to control access to exported files.

First, the server restricts which hosts are allowed to mount which filesystems either by IP address or by host name.

Second, the server enforces file system permissions for users on NFS clients in the same way it does for local users. Traditionally it does this using AUTH_SYS (also called AUTH_UNIX), which relies on the client to state the UID and GIDs of the user. Be aware that this means a malicious or misconfigured client can easily get this wrong and allow a user access to files that it should not.

To limit the potential risks, administrators often allow read-only access or squash user permissions to a common user and group ID. Unfortunately, these solutions prevent the NFS share from being used in the way it was originally intended.

Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS file system, the system associated with a particular hostname or fully qualified domain name can be pointed to an unauthorized machine. At this point, the unauthorized machine  is the system permitted to mount the NFS share, since no username or password information is exchanged to provide additional security for the NFS mount.

Wildcards should be used sparingly when exporting directories through NFS, as it is possible for the scope of the wildcard to encompass more systems than intended.

It is also possible to restrict access to the rpcbind [3] service with TCP wrappers. Creating rules with iptables can also limit access to ports used by rpcbind, rpc.mountd, and rpc.nfsd.

For more information on securing NFS and  rpcbind, refer to  man iptables.

⁠9.8.2. NFS security with AUTH_GSS

The release of NFSv4 brought a revolution to NFS security by mandating the implementation of RPCSEC_GSS and the Kerberos version 5 GSS-API mechanism. However, RPCSEC_GSS and the Kerberos mechanism are also available for all versions of NFS.

With the RPCSEC_GSS Kerberos mechanism, the server no longer depends on the client to correctly represent which user is accessing the file, as is the case with AUTH_SYS. Instead, it uses cryptography to authenticate users to the server, preventing a malicious client from impersonating a user without having that user's Kerberos credentials.

Note

It is assumed that a Kerberos ticket-granting server (KDC) is installed and configured correctly, prior to configuring an NFSv4 server. Kerberos is a network authentication system which allows clients and servers to authenticate to each other through use of symmetric encryption and a trusted third party, the KDC. For more information on Kerberos see Red Hat's  Identity Management Guide.

To set up RPCSEC_GSS, use the following procedure:

Procedure 9.4. Set up RPCSEC_GSS

  1. Create  nfs/client.mydomain@MYREALM and  nfs/server.mydomain@MYREALM principals.

  2. Add the corresponding keys to keytabs for the client and server.

  3. On the server side, add  sec=krb5,krb5i,krb5p to the export. To continue allowing AUTH_SYS, add  sec=sys,krb5,krb5i,krb5p instead.

  4. On the client side, add  sec=krb5 (or  sec=krb5i, or  sec=krb5p depending on the set up) to the mount options.
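For example, steps 1 and 2 might be performed on the server with kadmin (a sketch; the principal names and realm follow the placeholders above):

kadmin: addprinc -randkey nfs/server.mydomain@MYREALM
kadmin: ktadd -k /etc/krb5.keytab nfs/server.mydomain@MYREALM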

For more information, such as the difference between krb5, krb5i, and krb5p, refer to the exports and nfs man pages or to Section 9.5, “Common NFS Mount Options”.

For more information on the  RPCSEC_GSS framework, including how  rpc.svcgssd and  rpc.gssd inter-operate, refer to http://www.citi.umich.edu/projects/nfsv4/gssd/.

⁠9.8.2.1. NFS security with NFSv4

NFSv4 includes ACL support based on the Microsoft Windows NT model, not the POSIX model, because of the former's features and wide deployment.

Another important security feature of NFSv4 is the removal of the use of the  MOUNT protocol for mounting file systems. This protocol presented possible security holes because of the way that it processed file handles.

⁠9.8.3. File Permissions

Once the NFS file system is mounted read/write by a remote host, the only protection each shared file has is its permissions. If two users that share the same user ID value mount the same NFS file system, they can modify each other's files. Additionally, anyone logged in as root on the client system can use the su - command to access any files via the NFS share.

By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. Red Hat recommends that this feature is kept enabled.

By default, NFS uses  root squashing when exporting a file system. This sets the user ID of anyone accessing the NFS share as the root user on their local machine to  nobody. Root squashing is controlled by the default option  root_squash; for more information about this option, refer to  Section 9.7.1, “The /etc/exports Configuration File”. If possible, never disable root squashing.

When exporting an NFS share as read-only, consider using the  all_squash option. This option makes every user accessing the exported file system take the user ID of the  nfsnobody user.

⁠9.9. NFS and rpcbind

Note

The following section only applies to NFSv2 or NFSv3 implementations that require the  rpcbind service for backward compatibility.

The  rpcbind [3] utility maps RPC services to the ports on which they listen. RPC processes notify  rpcbind when they start, registering the ports they are listening on and the RPC program numbers they expect to serve. The client system then contacts  rpcbind on the server with a particular RPC program number. The  rpcbind service redirects the client to the proper port number so it can communicate with the requested service.

Because RPC-based services rely on rpcbind to make all connections with incoming client requests, rpcbind must be available before any of these services start.

The  rpcbind service uses TCP wrappers for access control, and access control rules for  rpcbind affect  all RPC-based services. Alternatively, it is possible to specify access control rules for each of the NFS RPC daemons. The  man pages for  rpc.mountd and  rpc.statd contain information regarding the precise syntax for these rules.

⁠9.9.1. Troubleshooting NFS and rpcbind

Because rpcbind [3] provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using rpcbind when troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP).

To make sure the proper NFS RPC-based services are enabled for  rpcbind, issue the following command:

# rpcinfo -p

Example 9.7. rpcinfo -p command output

The following is sample output from this command:

program vers proto port service
100021 1 udp 32774 nlockmgr
100021 3 udp 32774 nlockmgr
100021 4 udp 32774 nlockmgr
100021 1 tcp 34437 nlockmgr
100021 3 tcp 34437 nlockmgr
100021 4 tcp 34437 nlockmgr
100011 1 udp 819 rquotad
100011 2 udp 819 rquotad
100011 1 tcp 822 rquotad
100011 2 tcp 822 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100005 1 udp 836 mountd
100005 1 tcp 839 mountd
100005 2 udp 836 mountd
100005 2 tcp 839 mountd
100005 3 udp 836 mountd
100005 3 tcp 839 mountd

If one of the NFS services does not start up correctly,  rpcbind will be unable to map RPC requests from clients for that service to the correct port. In many cases, if NFS is not present in  rpcinfo output, restarting NFS causes the service to correctly register with  rpcbind and begin working.

For more information and a list of options on  rpcinfo, refer to its  man page.

⁠9.10. References

Administering an NFS server can be a challenge. Many options, including quite a few not mentioned in this chapter, are available for exporting or mounting NFS shares. Consult the following sources for more information.

⁠Installed Documentation

  • man mount — Contains a comprehensive look at mount options for both NFS server and client configurations.

  • man fstab — Gives details for the format of the  /etc/fstab file used to mount file systems at boot-time.

  • man nfs — Provides details on NFS-specific file system export and mount options.

  • man exports — Shows common options used in the  /etc/exports file when exporting NFS file systems.

  • man 8 nfsidmap — Explains the nfsidmap command and lists common options.

⁠Useful Websites

⁠Related Books

  • Managing NFS and NIS by Hal Stern, Mike Eisler, and Ricardo Labiaga; O'Reilly & Associates — Makes an excellent reference guide for the many different NFS export and mount options available as of 2001.

  • NFS Illustrated by Brent Callaghan; Addison-Wesley Publishing Company — Provides comparisons of NFS to other network file systems and shows, in detail, how NFS communication occurs.


[3] The  rpcbind service replaces  portmap, which was used in previous versions of Red Hat Enterprise Linux to map RPC program numbers to IP address port number combinations. For more information, refer to  Section 9.1.1, “Required Services”.