Chapter 20

NFS - Network File System
 

 
 
In this chapter:
 
 
* Distinguishing the user-space daemon from the kernel-space NFS daemon
* Starting the NFS service
* Uncovering ways to export portions of the directory tree to selected machines
* Exploring methods for mapping user IDs between client and server
* Monitoring the NFS service
 
 
 
The Network File System (NFS) is used to distribute filesystems over a network. The server exports the filesystem and the client imports it. There are now two ways to run an NFS server. The traditional method is to run the user-space NFS daemon. The newer option is the kernel-based daemon (kernel-nfsd), which was introduced with kernel version 2.2. SuSE supports both methods. The user-space daemon has built a reputation for reliability, but has limitations in terms of speed. The kernel-based NFS daemon is faster, but not as well tested as the older one. This chapter will show you how to set up the NFS service with both daemons.
 
20.1 The user-space daemon
 

NFS is an RPC (remote procedure call) based service. There are a number of daemons involved in handling this service. The most important are:
 

 
* portmap
Portmap is a server that converts RPC program numbers into DARPA protocol port numbers. It must be running in order to make RPC calls.
* mountd
The mountd program is the NFS mount daemon. When it receives a MOUNT request from an NFS client, it checks the request against the list of exported file systems in /etc/exports. If the client is permitted to mount the file system, mountd creates a file handle for the requested directory and adds an entry to /etc/rmtab. When it receives an UMOUNT request, it removes the client's entry from rmtab.
NOTE Note that a client may still be able to use the file handle after the UMOUNT request (for instance, if the client mounts the same remote file system on two different mount points). The file handle also remains in effect if a client reboots without notifying mountd; in that case, a stale entry remains in rmtab.
 
* nfsd
The nfsd program is an NFS service daemon that handles client filesystem requests. In Linux, nfsd operates as a normal user-level process. The server also differs from other NFS server implementations in that it mounts an entire file hierarchy, not limited by the boundaries of physical filesystems. The implementation allows clients read-only or read-write access to the file hierarchy of the server machine.
 
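A quick way to verify that these daemons have registered is rpcinfo, which queries the portmapper. Here is a sketch of typical output on a running NFS server (ports and version numbers will vary with your installation):

> rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp    635  mountd
    100003    2   udp   2049  nfs
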
 
In order to use NFS, you should be certain that all of the daemons listed above are running. There are several variables in /etc/rc.config which control these daemons:
 
 
* START_PORTMAP
The portmap daemon will start only if this variable is set to yes.
NOTE This is needed if the NFS server is started or if NIS is used. If NFS_SERVER is set to yes, the setting of this variable is ignored and portmap is started in any case.
 
* NFS_SERVER
Determines whether the NFS server should be started on this host. (Set to yes or no.)
* NFS_SERVER_UGID
Determines whether the daemon that translates user and group IDs between client and server should be started (yes) or not (no). This daemon is useful if the client machines have the same user and group names as the server, but different numeric IDs. You should avoid this kind of setup, but in a legacy environment there may be no other way to synchronize user IDs on all machines.
CAUTION Keep in mind that this method is very insecure. By running ugidd, anyone can query the client host for a list of valid user names. You can protect yourself by restricting access to ugidd to valid hosts only. This is done by entering the list of valid hosts into the /etc/hosts.allow or /etc/hosts.deny file; the service name is ugidd. A description of these files is given in chapter 26.
 
There is one case where you usually do want a user ID of the client machine mapped to another ID: the root account. In most cases you don't want the client's root user to have superuser permissions on the filesystems exported by the server. This is done more elegantly with the root_squash option of the NFS service. We'll see how this works shortly.
The daemon will only be started if NFS_SERVER is yes.
* REEXPORT_NFS
You may want to re-export imported filesystems. This means you mount some filesystems from a remote host, and then act as a server for other hosts by exporting local and remotely mounted filesystems. You should avoid this structure, but it has its uses. For example, if you have a machine acting as a gateway and the hosts in one network can't reach the hosts in the other network directly (e.g., a firewall prevents RPC calls through the gateway), re-exporting can be a way to distribute filesystems across these networks. Setting this variable to yes enables re-exporting of NFS-mounted file systems.
* START_PCNFSD
If set to yes, the pcnfsd daemon will be started. Pcnfsd is an RPC server that supports ONC clients on PC (DOS, OS/2, Macintosh and other) systems. This daemon supports not only filesystem exports, but also remote printing and other services. Consult pcnfsd(8) for more information.
TIP In general, SAMBA (see chapter 19) is a much better option for connecting non-Unix systems to your server. PCNFS has its roots in old DOS implementations of NFS and is only found on really old installations.
 
* START_BWNFSD
This daemon is related to pcnfsd; if you need pcnfsd, you probably need this service as well.
* PCNFSD_LPSPOOL
Pcnfsd and bwnfsd need a spool directory for print services. Set this variable to the directory's location and make sure that lpd can access it.
 
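Putting these variables together, here is a minimal sketch of the relevant part of /etc/rc.config for a plain NFS server without PC/NFS or ID mapping (the values shown are illustrative for a simple setup, not SuSE's defaults):

# /etc/rc.config - excerpt for a plain NFS server
START_PORTMAP="yes"    # ignored when NFS_SERVER is yes; portmap starts anyway
NFS_SERVER="yes"       # start mountd and nfsd
NFS_SERVER_UGID="no"   # no dynamic uid/gid mapping via ugidd
REEXPORT_NFS="no"      # do not re-export NFS-mounted filesystems
START_PCNFSD="no"      # no PC/NFS clients
START_BWNFSD="no"      # only needed together with pcnfsd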
 
The rc-scripts starting these services (and reading the variables listed) are:
 
* /sbin/init.d/rpc
This script takes care of the RPC services. It controls the portmap daemon and, if configured, starts rpc.ugidd to translate user and group IDs.
* /sbin/init.d/nfsserver
Controls mountd and nfsd, and starts the port mapper if it's not already running.
* /sbin/init.d/pcnfsd
This script controls pcnfsd and related services.
 
 
20.2 The kernel NFS daemon - new and improved
 

Kernel version 2.2 introduced a new kernel-based version of the NFS server. The advantage of handling the NFS service in kernel space rather than in user space is a noticeable increase in performance. However, the new NFS version is not yet as stable as the original user-space version. You may gain performance yet sacrifice reliability. Carefully consider your options before using the new daemon on a production system.
 
SuSE makes it remarkably easy to select which version you want to run. Everything has been prepared for the new daemon. If you set the variable USE_KERNEL_NFSD in /etc/rc.config to yes (the default is still no), the new daemon will be used. Be sure that your kernel was compiled with kernel nfsd support, otherwise it won't work. You can elect to have the kernel nfsd support compiled as a module; this way you can easily switch between the two options - user space or kernel space nfsd. A second variable in /etc/rc.config, USE_KERNEL_NFSD_NUMBER, determines how many instances of the daemon should be spawned. Regarding performance, the theory is: the more, the better. SuSE's default setting is to start four instances, which seems adequate.
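
A minimal sketch of the two settings (using the quoting style of /etc/rc.config):

# /etc/rc.config - switch to the kernel-based NFS daemon
USE_KERNEL_NFSD="yes"         # requires knfsd support in the kernel or as a module
USE_KERNEL_NFSD_NUMBER="4"    # number of daemon instances to spawn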
 
Everything else works just as it does in the user space situation. The migration to the new version should be transparent, except for adjusting the two settings in /etc/rc.config.
 

20.3 Controlling the NFS filesystems
 

At this point we've collected quite a bit of information about starting the NFS daemon and its related services. The only thing missing is how to control which file systems are exported to which hosts, and what permissions apply to the exported file systems. This information is held in one file: /etc/exports. The file serves as the access control list for file systems which may be exported to NFS clients. This is the most important configuration file of this service, and the one you will edit the most. Once the service is running - usually right after the installation - this is the place to make changes. Each line of /etc/exports contains a mount point (which will be referenced by the client) and a list of machine or netgroup names allowed to mount the file system beginning at that point. An optional parenthesized list of mount parameters may follow each machine name. Blank lines are ignored, and a hash sign (#) introduces a comment to the end of the line. Entries may be continued across newlines using a backslash.
 
Before we dig deeper into the possible options, let's have a look at a simple example of the /etc/exports file. Later we will see a more complex example that covers nearly all the options available.
 

 
# This file contains a list of all directories exported to  
# other machines. It is used by rpc.nfsd and rpc.mountd.  

 
/home     *.simpsons.net(rw,root_squash)
/usr      *.simpsons.net(ro)
/opt      *.simpsons.net(ro)
/cdrom    *.simpsons.net(ro,no_root_squash)
/ZIP      *.simpsons.net(rw,no_root_squash)
 
 
All entries are exported to hosts in the simpsons.net domain. The /home tree is exported with read and write access (rw), but the root user's privileges are squashed (root_squash). Read-only (ro) access is specified for /usr, /opt and /cdrom, but root keeps its permissions (no_root_squash) on the /cdrom directory. The same goes for /ZIP, but there the clients also get write access. In most cases your exports file will be as simple as this one, but you can get more out of NFS if you want to.
 
The man page exports(5) lists all the possible options that the Linux NFS implementation can handle.
 
First, there are several ways to specify who may access the service:
 
* single hosts
This is the most common format. You may specify a host either by an abbreviated name recognized by the domain-name resolver (DNS, bind), the fully qualified domain name, or an IP address.
* netgroups
NIS netgroups may be given as @group. Only the host part of all netgroup members is extracted and added to the access list. Empty host parts or those containing a single dash (-) are ignored.
* wildcards
Machine names may contain the wildcard characters * and ?. This can be used to make the exports file more compact. In the previous example, *.simpsons.net matches all hosts in the domain simpsons.net. However, these wildcard characters do not match the dots in a domain name, so the above pattern does not include hosts such as homer.powerplant.simpsons.net.
* IP networks
You can also export directories to all hosts on an IP (sub-) network simultaneously. This is done by specifying an IP address and netmask pair as address/netmask (for example, 192.168.0.0/255.255.255.0).
* =public
This is a special hostname that identifies the given directory name as the public root directory. When using this convention, =public must be the only entry on this line, and must have no export options associated with it. Note that this does not actually export the named directory; you still have to set the exports options in a separate entry. (see the section on WebNFS in nfsd(8) for a discussion of WebNFS and the public root handle).
 
 
The next list shows the general export options mountd and nfsd understand:
 
 
* secure
This option requires that requests originate on a port lower than 1024. This option is on by default. To turn it off, specify insecure.
* ro
Only allows read-only requests on this NFS volume. The default is to allow write requests as well, which can also be made explicit by using the rw option.
* noaccess
This makes everything below the directory inaccessible for the named client. This is useful when you want to export a directory hierarchy to a client, but want to exclude certain subdirectories. The client's view of a directory flagged with noaccess is very limited; it is allowed to read its attributes and to look up '.' and '..'. These are also the only entries returned by a readdir.
* link_relative
Converts absolute symbolic links (where the link contents start with a slash) into relative links by appending the necessary number of ../'s to get from the directory containing the link to the root on the server. This has subtle and perhaps questionable semantics when the file hierarchy is not mounted at its root.
* link_absolute
Leaves all symbolic links undisturbed. This is the default operation.
 
 
A very important issue is access control. Nfsd bases access control for files on the server machine on the user and group ID provided in each NFS RPC request. The normal behavior a user would expect is that she can access her files on the server just as she would on a normal file system. This requires that the same user and group IDs are used on the client and the server machine. This is not always true, nor is it always desirable.
 
Very often, it is not desirable that the root user on a client machine is also treated as root when accessing files on the NFS server. To implement this, user ID 0 (root) is normally mapped to a different ID: the so-called anonymous or nobody user ID. This mode of operation (called root squashing) is the default, and can be turned off with no_root_squash.
 
By default, nfsd tries to obtain the anonymous user and group ID by looking up user nobody in the password file at startup time. On SuSE installations the ID (both user and group) is 65534 (or -2). These values can also be overridden by the anonuid and anongid options.
 
In addition, nfsd lets you specify arbitrary user and group IDs that should be mapped to user nobody as well. Finally, you can map all user requests to the anonymous user ID by specifying the all_squash option.
 
For the benefit of installations where uids differ between different machines, nfsd provides several mechanisms to dynamically map server uids to client uids and vice versa:
 
 
* static mapping files
* NIS-based mapping
* ugidd-based mapping
 
 
Ugidd-based mapping is enabled with the map_daemon option, and uses the UGID RPC protocol. For this to work, you have to run the ugidd(8) mapping daemon on the client host. It is the least secure of the three methods -- by running ugidd, anyone can query the client host for a list of valid user names.
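
If you must run ugidd anyway, restrict access to it with the TCP wrapper files mentioned in the caution above. A minimal sketch, assuming the NFS server allowed to query is bart.simpsons.net (a hypothetical hostname):

# /etc/hosts.allow on the client running ugidd:
# answer UGID queries only from the NFS server
ugidd: bart.simpsons.net

# /etc/hosts.deny - refuse UGID queries from everyone else:
ugidd: ALL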
 
Static mapping is enabled by using the map_static option, which takes a file name as an argument that describes the mapping (see below for the syntax of the map file). NIS-based mapping queries the client's NIS server to obtain a mapping from user and group names on the server host to user and group names on the client. Here's the complete list of mapping options:
 
 
* root_squash
Map requests from user/group ID 0 to the anonymous user/group ID.
NOTE Note that this does not apply to any other user IDs that might be equally sensitive.
 
* no_root_squash
Turns off root squashing. This option is mainly useful for diskless clients, where root access is needed to (at minimum) access the root filesystem and write log files.
* squash_uids and squash_gids
This option specifies a list of user and group IDs that should be subject to anonymous mapping. A valid list of IDs looks like this:
 
squash_uids=0-15,20,25-50
 
* all_squash
Maps all user and group IDs to the anonymous user. Useful for NFS-exported public FTP directories, news spool directories, etc. The opposite option is no_all_squash, which is the default setting.
* map_daemon
This option turns on dynamic user/group ID mapping. Each user ID in an NFS request will be translated to the equivalent server user ID, and each user ID in an NFS reply will be mapped the other way around. This option requires that rpc.ugidd(8) runs on the client host. The default setting is map_identity, which leaves all user IDs untouched. The normal squash options apply regardless of whether dynamic mapping is requested or not.
* map_static
This option enables static mapping. It specifies the name of the file that describes the user/group ID mapping, e.g.:
 
map_static=/etc/nfs/foobar.map  
  The file's format looks like this:
 
# Mapping for client homer:  
# remote local  
uid     0-99      -    # squash these  
uid  100-500   1000    # map 100-500 to 1000-1400
gid     0-49      -    # squash these  
gid   50-100    700    # map 50-100 to 700-750  
 
* map_nis
This option enables NIS-based uid/gid mapping. For instance, when the server encounters the user ID 123, it will obtain the login name associated with it, and contact the NFS client's NIS server to obtain the user ID the client associates with that name.
In order to do this, the NFS server must know the client's NIS domain. This is specified as an argument to the map_nis option, e.g.
 
map_nis=foo.com  
Note that it may not be sufficient to simply specify the NIS domain here; you may have to take additional action before nfsd is actually able to contact the server: you need to configure the NFS server as an NIS client. For details, see Chapter 15.
* anonuid and anongid
The sole function of these options is to set the user and group ID of the anonymous account. This option is primarily useful for PC/NFS clients, where you might want all requests to appear to be from one user. As an example, consider the export entry for /home/charlie in the example below, which maps all requests to user ID 150 (which is supposedly that of user charlie).
 
 
Here is a more complex example of an NFS configuration:
 
 
# more complex /etc/exports file  
/               homer(rw) rusty(rw,no_root_squash)  
/projects       proj*.simpsons.net(rw)  
/usr            *.simpsons.net(ro) @trusted(rw)  
/home/charlie   pc001(rw,all_squash,anonuid=150,anongid=100)  
/pub            (ro,insecure,all_squash)  
/pub/private    (noaccess)  
 
 
The first line exports the entire filesystem to the machines homer and rusty. In addition to write access, root squashing is turned off for host rusty. The second and third entries show examples of wildcard hostnames and netgroups (the entry @trusted). The fourth line shows the entry for the PC/NFS client discussed above. Line 5 exports the public FTP directory to every host in the world, executing all requests under the nobody account. The insecure option in this entry also allows clients whose NFS implementation doesn't use a reserved port. The last line denies all NFS clients access to the private directory.
NOTE Don't forget to restart the NFS server every time you change the /etc/exports file. The daemons read this file only at startup or when they receive the HUP signal. The easiest way to reread the configuration is to call
 
 
# /sbin/init.d/nfsserver restart  
 
 
after every change you make. Since NFS is a stateless protocol, you don't need to make the clients unmount any filesystems. They won't be able to access anything on the server while the service is down, but as soon as it's up again, they can resume where they left off.
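
Alternatively, you can make the running daemons reread /etc/exports by sending them the HUP signal directly. A sketch, assuming the user-space daemons rpc.nfsd and rpc.mountd are running:

# killall -HUP rpc.nfsd rpc.mountd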
 
 
20.4 Monitoring NFS
 

Before we finish our discussion of NFS, there's a nice tool worth mentioning: showmount. With showmount, you can gather information about your own server, such as which client is using which mount point. You can also query other servers about what they are willing to export.
 
Call showmount -a to see who uses your server and which directory is mounted by each client:
 

 
> showmount -a  
All mount points on Bart:  
Homer.simpsons.com:/ZIP  
Lisa.simpsons.com:/opt/applix  
Lisa.simpsons.com:/usr  
 
 
For the inverse information - which directories are mounted via NFS on the local machine - you can use the mount command (this time called on host homer):
 
 
> mount -t nfs  
Bart:/ZIP on /ZIP type nfs (rw,noexec,nosuid,nodev,addr=192.168.0.66)  
 
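For completeness, here is how a client creates such a mount by hand, along with a matching /etc/fstab entry for mounting at boot time (a sketch; the mount point /ZIP must exist on the client):

# mount -t nfs -o rw,noexec,nosuid,nodev Bart:/ZIP /ZIP

# /etc/fstab entry for the same mount:
Bart:/ZIP   /ZIP   nfs   rw,noexec,nosuid,nodev   0 0
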
 
To see which directories are exported by a host, you can query it with the option -e of showmount. The publicly accessible host linux01.gwdg.de exports the SuSE distribution for NFS installation over the Internet:
 
 
> showmount -e linux01.gwdg.de  
Export list for linux01.gwdg.de:  
/suse/suse_update <anon clnt>  
/suse/ftp.suse.de <anon clnt>  
/suse/ftp.suse.com <anon clnt>  
/suse/ftp.dosemu.org <anon clnt>  
/suse/6.0/evaluation <anon clnt>  
/suse/6.0/iso <anon clnt>  
/suse/6.0/i386.de <anon clnt>  
/suse/5.3/i386.de <anon clnt>  
/suse *.suse.de  
/ZIPx <anon clnt>  
/ZIP <anon clnt>  
/CD6 <anon clnt>  
/CD5 <anon clnt>  
/CD2 <anon clnt>  
/CD1 <anon clnt>  
 
 
TIP If you have problems with NFS, refer to the file /var/log/messages. It reports successful NFS mounts as well as failed attempts.
 
 
 
Summary:
The Network File System (NFS) is used to share file systems over the network. The NFS server exports file systems to NFS clients. SuSE Linux offers two different NFS server implementations: the traditional user-space NFS daemon and the new kernel-based NFS daemon. In terms of functionality, both offer the same options. The kernel-based daemon promises higher performance, whereas the original daemon has a history of reliability which the kernel-based daemon still has to prove.
 
Certain restrictions can be put on the exported file systems. Only specified hosts may mount them, and they may have restricted permissions on the mounted file systems.
 
Permissions are based on hosts and user IDs accessing the files. There are different methods to map user IDs between the server and the client machines to work around different IDs for the same users on different machines.
 
Tools are available to monitor the service and to gain information about the NFS setup of local and remote machines.
 