Chapter 31
Diskless Clients

In this chapter:

- Benefiting from diskless clients
- Setting up an environment for diskless clients
- Prepping the server to work with diskless clients
- Creating a boot floppy for clients
- Customizing the server setup for diskless clients
- Sharing resources between multiple clients

Diskless clients are machines that don't require any local hard disk
space to operate. They boot over the network and run entirely on
network-based filesystems. There are some good reasons to make
diskless clients your main client configuration:

Ease of administration
If you have a large network with many clients, some of them configured
as dual-boot machines, it can be difficult to change their setup
remotely. Even if they are reachable via the network and you can log
into them when they run Linux, you may need to make revisions while
they are running Windows or some other operating system. Using
diskless clients means that all configuration files live on one
machine. The configuration of every client is accessible no matter
which OS the client is running, or whether it is running at all.

Resource Usage
With diskless clients you only have to keep a small amount of data on
the server for each client. Nearly all binaries and applications are
stored only once, so no more hard disk space is wasted on the client
machines. By keeping all the data on a central server, a considerable
amount of disk space can be saved. In multi-boot environments this
makes even more sense: there's no need to repartition the client
machine. A boot floppy is all you need to set the client up as a
Linux workstation. This method also eliminates the need for
high-performance client machines. The setup presented here will enable
you to run the clients as diskless X terminals (see chapter 33), which
means only the display runs at the client itself; all applications run
on the server machine. Even with modern hardware it's much cheaper to
have one high-end server and lots of low-end clients than midrange
systems throughout your network. And as the demand for faster hardware
increases, it's usually enough to upgrade the server machine: all
clients profit from one new machine that consolidates resources,
without purchasing new machines for everyone.

Standardized setup
When you centralize administration, it's much easier to update all
client machines with software revisions and keep user setups
synchronized. In addition, the graphical user interface (GUI) setup
can be standardized much more easily. Many administrators report that
giving users who ordinarily use a small number of applications too
many choices leads to an increase in service requests. A client-server
setup can reduce the number of support requests, and of course it
requires much less effort than setting up local installations for each
client.

Unfortunately, you have to pay for these advantages. There is the
one-time cost of the overhead involved in getting the system
running. This is not as big an issue as it sounds at first; as we go
along we'll see that there is only a little "magic" involved in
setting up the boot server and creating the boot floppies for the
clients. Another trade-off is the loss of performance when doing
everything over a network, but with modern hardware and a clever
network layout, this is not as bad as it sounds.

The major drawback of the central administration concept is that we
create a single point of failure: if the server goes down, all client
machines are affected. To get around this, you want the most reliable
hardware you can afford. This means using RAID arrays for disk space,
installing redundant power supplies, and so on. You may even think of
mirroring the entire machine and having a second machine ready to take
over the service in the event your primary machine breaks down.

31.1 A guide for diskless clients

Getting diskless clients to work requires building an environment for
them to boot from and importing the necessary file systems:

Setting up a server with the filesystems clients will use
This means we will have one (dedicated) machine where we install the
SuSE distribution in an encapsulated environment to be used by the
client machines. We will see how we can share major parts of the
installation between all connected clients.

Exporting the filesystems to the clients
When the software, which basically comes down to a second installation
of SuSE Linux, is installed, we have to export it so that the client
machines can import the filesystems they need. We will use NFS to do
this.

Setting up a DHCP server to manage the IP addresses
DHCP (as explained in chapter 21) gives us the option to dynamically
manage the clients as they are booted. This fits perfectly into the
concept of having a single point of administration.

Creating a boot floppy for the client machines
Even if no disk space is required at run time, the client machines
still need something to boot from. Creating a floppy disk for this
purpose is the easiest approach. If you actually have a hard disk in
the client machine, you can install LILO and the boot image on this
disk. The most efficient way of getting the client machine booted is
to use a boot PROM which loads the Linux kernel via TFTP and boots the
machine using this image. But as boot PROMs for Linux are not widely
available, we'll concentrate on the floppy disk method here.

Setting up additional client machines
After the first client is up and running, we'll use this setup as a
template for additional clients. We will see how little we need to
copy and how much we can share between the client machines.

31.2 Setting up the server machine

There are several aspects of the server that require attention. The
first step is to install the software, the second is to set up DHCP to
assign IP addresses to the client machines, and the third is to
configure the NFS server so that the clients can import their file
systems.

31.2.1 Installing the client's distribution

One obvious question is why we have to install the distribution a
second time for the client machines. Can't we just export the server's
filesystems to the clients and only keep copies of the variable parts
for each client? The answer is actually 'yes'. But...
Keeping the clients and the server separate from each other has a
couple of advantages. First of all, you may want to have different
versions of software on the clients than on the server. Servers tend
to run for a long time with only a few updates, which are mostly
security related. Clients, on the other hand, often require that you
have the latest version of the software available. So you may pay for
saving 2 gigabytes of cheap disk space with lots of trouble in keeping
your server installation in a sane state. Where you can have a very
slim installation for the server, with only the few tools you need to
run it, you may need a full-blown installation for your client
machines. Whenever possible, you should keep the server uncluttered
and loaded only with those applications that are necessary, but keep
your clients as up-to-date as possible.
Another reason is that you might want to delegate client
administration to someone other than the person who is responsible for
the server machines. Developers often have their own ideas of what a
system should look like and prefer to have recent versions of as many
tools as possible. These tools may be fine for client machines, but on
a server you want to be assured that you have the most reliable
software available, not the latest and least tested. This is why we
will have separate incarnations of SuSE Linux on the server machine in
this installation example.
OK, but how do we install a second copy of the distribution on a
machine? Even if the SuSE distribution isn't really designed to
support diskless clients, YaST has a useful feature which helps us
install the packages we need. When you start YaST, select the
Adjustments of installation menu. The last option in this
menu is Installation to a directory. Select this
option and you get a form where you can enter the directory which will
be used as the root directory for the installation.
This directory will be the root directory of the client machine. You
can put the client installation wherever you have disk space. In
this example we will use /Clients/192.168.0.129 as the root
directory for the first client. You may already see the purpose behind
this numbering scheme: /Clients will contain all client root
directories, and the actual client directories are named after the IP
address the client will get. This scheme provides a clear naming
pattern for any number of client machines. You could also use the
clients' names as directory names, which would work the same way.

One note at this point: we'll call the first client the
master client, and additional clients will be referred to as
regular clients. This distinction is made because the first client
has some special functions within the setup. We'll see later how the
master client is used as a template for additional client
installations.

Once you set the installation directory, YaST behaves as if it is
running on the system installed in this directory. Since there is
nothing yet, it acts as if this is a brand new installation, which it
actually is. So the next step is to choose the source medium. Once you
do this, go into the Choose/Install packages submenu,
select a configuration, and start the installation. YaST now installs
the selected packages in the selected directory tree.
After the packages are copied to disk, continue as usual. It doesn't
matter which kernel image you install here: this kernel will never be
used, as we will boot a customized kernel from a floppy disk.

31.2.2 Setting up DHCP

To assign an IP address to the client machine, we need to set up
DHCP. In chapter 21 we covered DHCP and the general
structure of the DHCP configuration file /etc/dhcpd.conf.
Because the client uses the BOOTP protocol, we can't use DHCP's
dynamic IP feature to reserve a pool of addresses for the clients. We
have to create a separate entry for each client machine, mapping its
MAC (Media Access Control) address to an IP address. The MAC address
is a unique number assigned to the network card by the card's
manufacturer; no two cards have the same number. This address is often
used for autoconfiguration purposes, because it offers a unique
identification of the networked machine.
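
Since dhcpd.conf expects the colon-separated hexadecimal notation, it
can help to normalize whatever form is printed on the card. This small
helper is purely illustrative and not part of the chapter's setup:

```python
import re

def normalize_mac(raw):
    """Normalize a MAC address written as '0080C8E83304',
    '00-80-c8-e8-33-04', '00:80:C8:E8:33:04', etc. to the
    colon-separated lowercase form used in dhcpd.conf."""
    digits = re.sub(r"[^0-9a-fA-F]", "", raw)
    if len(digits) != 12:
        raise ValueError("not a six-byte MAC address: %r" % raw)
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2)).lower()

print(normalize_mac("00-80-C8-E8-33-04"))  # prints 00:80:c8:e8:33:04
```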
The second setting that is passed to the client is the directory which
will be mounted from the server as the client's root file system. The
configuration shown below illustrates a working setup for one client:

#
# dhcpd.conf
#
# Configuration for diskless clients
#
# DHCP needs at least one subnet definition
subnet 192.168.0.0 netmask 255.255.255.0 {
    # For each diskless client we need to assign the IP address
    # and the root filesystem
    host DiskLess1 {
        option root-path "/Clients/192.168.0.129";
        hardware ethernet 00:80:c8:e8:33:04;
        fixed-address 192.168.0.129;
    }
}

You see the host section for the client. The option root-path tells
the client to mount the directory /Clients/192.168.0.129 as its root
file system. The client's MAC address is given with the hardware
ethernet statement. How do you determine the MAC address? There are
different ways to get this information. Often the MAC address is
printed on the network adapter; look for a label with a six-byte
hexadecimal number like the one shown in the example above. If you
don't find anything, the easiest way to get the address may be to boot
the machine with the boot floppy provided by SuSE. Load the module for
the network adapter and look at the output shown by linuxrc after the
module has been loaded; if the module loaded successfully, you should
see the MAC address in there. The last statement in the host section
is the IP address assigned to the client.
Create a configuration like the one shown above for your
environment. You will probably have to change the IP addresses,
definitely the MAC address, and maybe the client's root
directory. Then restart the DHCP server, and you are all set.

31.2.3 Export the Client File Systems

In the last section we assigned a directory on the server as the root
directory for the client. During the boot process, the client will
mount this directory via NFS. Therefore, we have to export this
directory to the client machine. In chapter 20 we learned about the
/etc/exports file and how it is used to export directories over the
network. This is exactly what we need here. All we have to do is
create an entry in this file allowing the client to mount the assigned
directory:

#
# /etc/exports
#
# root directory for diskless client
/Clients/192.168.0.129    192.168.0.129(rw,no_root_squash)

Note the option no_root_squash given in the permissions list. The
client needs root access to this file system. Root squashing (refer to
chapter 20) is the default for NFS exports, so it is essential that
you turn it off in the exports file.

You also may want to export the /home directory to the client. This
depends on your local network setup. Usually it makes sense to have a
central home directory for all your client machines.

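If you do export /home as well, the entry follows the same pattern; a
sketch, assuming the example addresses used in this chapter, to be
placed on the machine that holds the home directories. Root squashing
can stay enabled here, since only ordinary users access their home
directories:

```
#
# /etc/exports (additional entry)
#
# central home directories for the client
/home    192.168.0.129(rw)
```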
31.3 Creating the Boot Floppy for Clients

The most important part of the boot floppy is the kernel. We have to
create a custom kernel for the clients which will enable the root file
system to be mounted over NFS. What differentiates this kernel from
'regular' kernels like the ones provided by SuSE is that it has to be
able to get its network configuration and the location of its root
file system at boot time from the DHCP server. Another special setting
allows the kernel to mount the root file system over NFS.
31.3.1 Configuring the Client Kernel

To create the customized kernel, start by configuring the kernel as
usual by running make menuconfig as root in
the directory /usr/src/linux. Continue as if you were
creating a regular kernel for the client machine.
Be sure to compile all hardware support you need for this machine into
the kernel rather than creating modules for certain hardware
components. Be especially careful with the components you need for
booting, which have to be part of the kernel image. Normally it's good
policy to have the network card driver as a module, but in this case
it's mandatory that this driver be part of the kernel image.

We need network access to boot the system. Once the root file system
is mounted, the kernel can load modules to access more hardware
components. But in order to mount the root file system, we need to
access the network. Remember, we are mounting the root file system via
NFS; this won't work without the right driver for the network card.
Once you have checked all drivers for your hardware, go to the menu
Networking options and set the option IP: kernel level
autoconfiguration to Y. This enables automatic IP address and routing
table configuration during kernel boot, based either on information
supplied on the kernel command line or by the BOOTP or RARP
protocols. It means the kernel will send a broadcast message on the
network once the ethernet card is initialized to get the information
needed to set up the network. The DHCP server will answer this
broadcast message and assign an IP address to the client machine. The
server also tells the client the location from which to mount the root
file system.
The second part of the customized configuration involves allowing the
kernel to mount the root file system via NFS. This is done in the
submenu Network File Systems of the Filesystems menu by setting the
option Root file system on NFS to Y. Needless to say, you must enable
the NFS file system as well.
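In the resulting /usr/src/linux/.config these menu choices show up as
configuration symbols along these lines (the exact symbol set varies
between kernel versions, so treat this as a sketch):

```
# Networking options: IP: kernel level autoconfiguration
CONFIG_IP_PNP=y
# Filesystems / Network File Systems: NFS support and root over NFS
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y
```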
Now compile the kernel as usual. While the kernel is compiling, we can
go on to the next step and initialize the boot floppy.
31.3.2 Initialize the Boot Floppy

Besides the kernel, we need a file system, a LILO configuration, and
the boot loader on the floppy disk. To create the file system, insert
an empty floppy disk in the floppy drive and run mke2fs to create the
file system on device /dev/fd0:

# mke2fs -m 0 /dev/fd0
mke2fs 1.14, 9-Jan-1999 for EXT2 FS 0.5b, 95/08/09
Linux ext2 filesystem format
Filesystem label=
360 inodes, 1440 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Block size=1024 (log=0)
Fragment size=1024 (log=0)
1 block group
8192 blocks per group, 8192 fragments per group
360 inodes per group
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

You see I used the parameter -m 0 here, which tells mke2fs not to
reserve any disk space for special use by the root user. As this is
just a boot floppy, there is no need to reserve anything here. Now
mount the floppy so that we can start to copy the needed files to the
disk:

# mount /dev/fd0 /floppy

The first file we need is the boot loader. We talked about lilo in
chapter 5 and saw that the file /boot/boot.b contains the boot
loader. We'll copy this file to the floppy disk:

# cp /boot/boot.b /floppy

31.3.3 Configuring LILO as the Boot Loader

To run lilo we also need a LILO configuration file. The configuration
file for the boot floppy could live anywhere in the file system, but
to keep things together, so that we can refer to them later and make
changes to the configuration, we'll put it on the floppy disk too. Now
consider what we want the configuration to look like: we need to tell
lilo to put its bootstrap code on the floppy disk and to use the
kernel on the floppy to boot the system. Furthermore, the kernel has
to be told to mount the root file system via NFS. The following
example shows a lilo.conf that fills these needs:

#
# LILO Configuration for Diskless Clients
#
boot = /dev/fd0
install = /floppy/boot.b
map = /floppy/map
read-only
prompt
timeout = 0
vga = normal
image = /floppy/vmlinuz
root = /dev/nfs
label = Linux

You see the boot device is set to /dev/fd0, the floppy drive. The boot
loader and the map file reside on the floppy disk. LILO will create
the map file when we call it, so we don't need to copy this file. The
kernel image is /floppy/vmlinuz, and the root file system is set to
/dev/nfs, which is a special node in the /dev tree for just this
purpose: telling the kernel that the root file system has to be
mounted over NFS. The other options used in this example are pretty
standard.

Refer to chapter 5 for more information on these settings.

In most instances, you can use the configuration shown. Use the file
/floppy/lilo.conf to hold the lilo configuration.
Now we are two steps away from having a working boot floppy: the
kernel image is missing, and we have to run lilo to get a working boot
sector on our floppy disk.
When the kernel is finished compiling, the kernel image should be in
the directory /usr/src/linux/arch/i386/boot. Depending on the size of
the image, it's either the file zImage or bzImage. Let's say we have a
large kernel because all drivers had to be compiled into it, so it's
named bzImage. We copy this file to the floppy disk and call it
vmlinuz:

# cp /usr/src/linux/arch/i386/boot/bzImage /floppy/vmlinuz

The final step is to call lilo with the newly created configuration
file:

# lilo -C /floppy/lilo.conf
Added Linux *

Now unmount the floppy disk and label it as Diskless Client Boot
Floppy.

31.4 Customizing the Setup for Diskless Operation

Before we can use the installation on the server to boot our client
machine, we have to edit some files and perform some simple
customizations. YaST doesn't support diskless clients; we tricked it
into installing the distribution to a directory tree other than /, but
this is all it can do for us in this case. To get the client working
we have to make some edits in the configuration and slightly revise
some of the rc-scripts to make them work properly in a diskless
environment.

31.4.1 The Client Boot Process

The first step is to edit the fstab file of the client. This file
specifies which partitions are mounted at which mount points. As there
is no disk in the client, no physical partitions are mounted anywhere;
the root file system is mounted from the server machine using NFS.
This means the fstab should look like this:

192.168.0.200:/Clients/192.168.0.129  /         nfs     defaults     0 0
192.168.0.1:/home                     /home     nfs     defaults     0 0
/dev/fd0                              /floppy   auto    noauto,user  0 0
none                                  /proc     proc    defaults     0 0
none                                  /dev/pts  devpts  defaults     0 0

Please note that the address of the server is given in decimal dot
notation, not as a name. In this example the server for the diskless
machines has the address 192.168.0.200 and the home
directories are imported from 192.168.0.1. The first time
the client boots, it will not have a name server connection and the
bootstrap will fail if you enter the server's name here.
|

You may also want to mount the global home directory right from the
beginning, as shown in the example.
The next step is to create the file fastboot in the client's root
directory; this is /Clients/192.168.0.129/fastboot if you used the
same path for the client as I have in my examples. If this file
exists, the boot scripts will not try to check filesystems, a useless
task since the client has no local filesystems. To make this setting
permanent, we also have to edit the boot script /sbin/init.d/boot on
the client machine. In the SuSE setup, fastboot is deleted during the
boot process, which is not what we want to happen. The easiest
approach is to edit /sbin/init.d/boot and remove it from the list of
files deleted during the boot procedure. If you open the file, look
for the section where the file system check is performed:

[...]
#
# do fsck and start sulogin, if it fails.
#
FSCK_RETURN=0
if test ! -f /fastboot -a -z "$fastboot" ; then
    # on an umsdos root fs this mount will fail, so direct error messages
    # to /dev/null.
    # this seems to be ugly, but should not really be a problem.
    mount -n -o remount,ro / 2> /dev/null
    if test $? = 0; then
        ECHO_RETURN=$rc_done_up
        echo "Checking file systems..."
        fsck -A -a
        # A return code of 1 indicates that file system errors
        # were corrected, but that the boot may proceed.
        # A return code of 2 or larger indicates failure.
        FSCK_RETURN=$?
        if test $FSCK_RETURN -gt 1; then
            echo -e "$rc_failed_up"
            if test -x /sbin/init.d/kbd ; then
                /sbin/init.d/kbd start
            fi
            echo
            echo "fsck failed. Please repair manually and reboot. The root"
            echo "file system is currently mounted read-only. To remount it"
            echo "read-write do:"
            echo
            echo "    bash# mount -n -o remount,rw /"
            echo
            echo "Attention: Only CONTROL-D will reboot the system in this"
            echo "maintenance mode. shutdown or reboot will not work."
            echo
            PS1="(repair filesystem) # "
            export PS1
            # XXX /sbin/sulogin /dev/console
            /sbin/sulogin /dev/tty1
            # if the user has mounted something rw, this should be unmounted
            echo "Unmounting file systems (ignore error messages)"
            umount -avn
            # on umsdos fs this would lead to an error message. so direct
            # errors to /dev/null
            mount -no remount,ro / 2> /dev/null
            sync
            reboot -f
        fi
        echo -e "$ECHO_RETURN"
        sync
        mount -n -o remount,rw /
    else
        mounts=/etc/mtab
        test -r /proc/mounts && mounts=/proc/mounts
        while read des fs type rest; do
            case "$fs" in
                /) break ;;
                *) ;;
            esac
        done < $mounts
        if test "$type" != "umsdos" ; then
            echo
            echo "*** ERROR! Cannot fsck because root is not read-only!"
            echo
        fi
    fi
else
    echo "File systems are NOT being checked."
    mount -n -o remount,rw /
fi
#
# clean up
#
rm -f /etc/mtab* /etc/nologin /nologin /fastboot
[...]

You see the check will not be performed if /fastboot is present. You
also see that the file is removed in the cleanup section after the
root file system has been remounted with read-write
permission. Remove /fastboot from the list of files to be deleted:

#
# clean up
#
rm -f /etc/mtab* /etc/nologin /nologin

This is all we have to do to ensure a clean boot of the client. But
what about shutting down the client? If you watched the SuSE shutdown
procedure carefully, you may have noticed that the network services
are turned off early in the shutdown process. This is fine if the
machine has local file systems to work with once the network
connection is gone. With diskless clients, this would be fatal and
result in a series of error messages and an unclean shutdown.

31.4.2 The Client Shuts Down

To get a clean shutdown of the diskless client, we have to be certain
that the network services, especially the NFS service, stay up until
the very end. To ensure that the network connection stays alive, we
need to disable the shutdown of:

- the network interface,
- the routing,
- the RPC daemons, and
- the NFS-mounted filesystems.

Of course the last one is the most important, but to keep the NFS
mounts up, we need the network interface, proper routing, and RPC
services. How do we ensure that the listed services don't get turned
off?
In chapter 7 we learned about the structure of the rc-scripts SuSE
uses to control system services during booting and system shutdown. We
saw that a service is halted when the appropriate script receives the
command line parameter stop. This strongly implies that we can change
the behavior of the shutdown process by modifying the stop case in the
rc-scripts. The scripts responsible for the services listed above are
/sbin/init.d/network, /sbin/init.d/route, /sbin/init.d/rpc, and
/sbin/init.d/nfs. What we will do is remove everything that is
normally done in the stop case and replace it with a simple echo
statement as a reminder that these services have to stay up. It's
pretty much the same for all these scripts, so it should be enough to
demonstrate it with one sample script; go ahead and do the same for
all four scripts listed above.
As an example we take the network script /sbin/init.d/network. Load
the script into your editor and go to the stop case. You will find the
following code:

[...]
stop)
    if test -x /usr/bin/wall ; then
        echo "Network is going down now!" | /usr/bin/wall
        sleep 1
    fi
    for I in $NETCONFIG; do
        eval NETDEV=\$NETDEV$I
        case $NETDEV in
        ippp*|ppp*|isdn*) ;;
        *:*)
            #
            # *:* is probably an alias and will fail to ifconfig down
            # so omit error message
            #
            ifconfig $NETDEV down 2> /dev/null
            ;;
        *)
            echo -n "Shutting down network device $NETDEV"
            ifconfig $NETDEV down || return=$rc_failed
            echo -e "$return"
            ;;
        esac
    done
    ;;
[...]

The script sends a warning to all users, then disables the network
interfaces. Doing this on the diskless client would disconnect it from
its file systems immediately, which we don't want. Replace this code
with:

[...]
stop)
    echo -n "The network interfaces can't be shut down for this client"
    echo -e "$return"
    ;;
[...]

Now the script doesn't do anything in the stop case. The other scripts
have to be modified in exactly the same way: replace all code you find
between stop) and the double semicolon (;;) with a short message that
this service's shutdown has been disabled. After you've modified the
scripts, your diskless network client is ready for its initial boot.

31.5 Booting the Client

If you've carefully followed all the instructions on setting up the
server and made the suggested modifications to the client's scripts as
described, you should be able to boot your client machine now. Use the
boot floppy you created to boot the client. In the boot messages you
should see the BOOTP requests sent by the client and the IP address
the client received from the server. The client then mounts its root
filesystem and goes on as if it were a regular file system on a local
disk. The output should look like this:

[...]
eth0: VIA VT3043 Rhine at 0xfc80, 00:80:c8:e8:33:04, IRQ 11.
eth0: MII PHY found at address 8, status 0x782d advertising 05e1 Link 0000.
Sending BOOTP requests.... OK
IP-Config: Got BOOTP answer from 192.168.0.200, my address is 192.168.0.129
IP-Config: Guessing netmask 255.255.255.0
Looking up port of RPC 100003/2 on 192.168.0.200
Looking up port of RPC 100005/1 on 192.168.0.200
VFS: Mounted root (NFS filesystem) read-only.
[...]

At this point, two problems may arise: the client may have trouble
getting its IP address, or it may be unable to mount the root
filesystem. If you set everything up as we've discussed, this
shouldn't happen. But if you run into trouble, here are some hints on
how to trace the problem.
If the client doesn't get an IP address, the problem could be on the
client side if the kernel configuration is wrong. In this case you
won't see the line Sending BOOTP requests.... in the kernel
messages. If you don't see this message, you will have to compile a
new kernel and be certain that you set the option IP: kernel level
autoconfiguration to Y in the Networking options menu. Make sure you
selected the correct driver for your network card; if no card is
detected by the kernel, it obviously can't use it to send the BOOTP
request. If the client sends the request but doesn't get any answer,
your DHCP server setup is not correct. Consult the file
/var/log/messages on the server and see if there are messages from the
DHCP server. The server logs every request in this file. If everything
is OK, you'll find something like this in the log:

Jul  6 05:07:09 Maggie dhcpd: BOOTREQUEST from 00:80:c8:e8:33:04 via eth0
Jul  6 05:07:09 Maggie dhcpd: BOOTREPLY for 192.168.0.129 to MrBurns
(00:80:c8:e8:33:04) via eth0

If the message you see is different, check your DHCP configuration and
make sure you restarted the DHCP server after you made changes to its
configuration file. The DHCP server configuration is also the place to
look for problems when the client doesn't get the correct path for its
root filesystem.
If this path is correct but the server refuses to export it to the
client (again, check /var/log/messages), the NFS server configuration
in /etc/exports may be faulty. Double-check everything and try again
until the client boots.
So much for potential problems; it's unlikely that you will run into
them. The machine should boot just fine and, after the first reboot,
present itself as if it were a newly installed SuSE Linux
machine. This implies that YaST will start and ask you the usual
questions, just as in a regular installation. At this point, there is
no real difference between a diskless workstation and a regular
workstation with its Linux system on a local hard drive. All the
particular issues have been taken care of, and you can treat the new
member of your network like any other SuSE Linux machine in your
setup.

31.6 Extending the Server to Multiple Clients

Up to this point, we've only talked in terms of a single client. Of
course, this was just the beginning. It's a waste of time to repeat
the whole process for each client; remember, the motivation was to
ease and reduce the number of administrative tasks. So what's a clever
way to get more clients up and running? Do you remember table 1-1 in
chapter 1, where we classified file systems as shareable
vs. unshareable and static vs. variable? Now we're at the point where
we can profit from this classification. Only one copy of the shareable
data is needed for all clients; only the unshareable data has to be
present in a separate copy for each client. Creating a new client
setup means copying the unshareable parts of the filesystem on the NFS
server and making these copies available to the new client
machine. This, plus a new entry in the DHCP configuration for the new
client, is all that is needed for each new machine you want to
integrate into your network. We'll go step by step through a practical
example which points out the tasks required to set up the new
client. We assume that the hardware of the new machine is similar to
our first client, so we don't have to create a new boot floppy. If
this is not the case for you, refer to the section above to create a
floppy for the client; the procedure is exactly the same as for the
initial client.
|
31.6.1 | Creating a Subdirectory for the Client |
|
Keeping the naming scheme and assuming that the second client has the
IP address 192.168.0.130, we create the directory
/Clients/192.168.0.130 on the server machine. This
becomes the root filesystem of the second client:
|
| # mkdir /Clients/192.168.0.130
|
|
Now we copy the parts of the first client that are unshareable into
this directory:
|
| # cd /Clients/192.168.0.130
# cp -av ../192.168.0.129/etc .
# cp -av ../192.168.0.129/root .
# cp -av ../192.168.0.129/var .
|
|
Next, we need mountpoints to mount the shareable data:
|
| # cd /Clients/192.168.0.130
# mkdir home mnt proc usr floppy opt tmp
|
|
And now for the tricky part. If you read carefully, you may have
noticed that the directories /sbin, /bin,
/lib, and /dev are missing. These directories
contain shareable data, but we need them at boot time and can't mount
them via NFS, because they have to be part of the root filesystem. The
easy solution would be to copy them just like /etc,
/root, and /var, but this would be an
unnecessary waste of disk space. The better approach is to create the
directories and hard-link all files within them. This way they appear
in each client's root filesystem, but take up disk space on the server
only once.
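The space saving comes from the semantics of hard links: all names refer to the same inode, so the data blocks exist only once no matter how many client trees link to them. A small self-contained demonstration in a temporary directory (not the client trees themselves; `stat -c` is GNU coreutils syntax):

```shell
#!/bin/sh
# Demonstrate that hard links share one inode, so the data exists once.
set -e
top=`mktemp -d`
mkdir "$top/master" "$top/clone"
# create a 100 KB file in the "master" tree
dd if=/dev/zero of="$top/master/big" bs=1024 count=100 2>/dev/null
# "copy" it into the "clone" tree as a hard link -- no extra data blocks
ln "$top/master/big" "$top/clone/big"
# both names point to the same inode ...
[ "`stat -c %i $top/master/big`" = "`stat -c %i $top/clone/big`" ] \
    && echo "same inode"
# ... and the link count of the file is now 2
echo "link count: `stat -c %h $top/master/big`"
rm -r "$top"
```

Deleting one of the names does not remove the data; the blocks are freed only when the last link is gone, which is exactly the behavior we want for shared client trees.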
The find utility can aid us in creating the directories and
linking the files:
|
| # cd /Clients/192.168.0.129
# find sbin -type d -exec mkdir -p ../192.168.0.130/{} \;
# find bin -type d -exec mkdir -p ../192.168.0.130/{} \;
# find lib -type d -exec mkdir -p ../192.168.0.130/{} \;
# find dev -type d -exec mkdir -p ../192.168.0.130/{} \;
|
|
This creates the directory tree for the new client, using the master
as the source for the layout. To link all the files, we call find
as shown below:
|
| # cd /Clients/192.168.0.129
# find sbin ! -type d -exec bash -c 'f={} ; ln $f ../192.168.0.130/`dirname $f`' \;
# find bin ! -type d -exec bash -c 'f={} ; ln $f ../192.168.0.130/`dirname $f`' \;
# find lib ! -type d -exec bash -c 'f={} ; ln $f ../192.168.0.130/`dirname $f`' \;
# find dev ! -type d -exec bash -c 'f={} ; ln $f ../192.168.0.130/`dirname $f`' \;
|
|
When executing these commands you will get two error messages:
|
| ln: sbin/init.d/init.d: hard link not allowed for directory
|
|
and
|
| ln: dev/fd: hard link not allowed for directory
|
|
These two files are soft links to directories, which can't be
hard-linked. You have to change to the new client's directory and
create the links manually:
|
| # cd /Clients/192.168.0.130/dev
# ln -s /proc/self/fd fd
# cd /Clients/192.168.0.130/sbin/init.d
# ln -s . init.d
|
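The reason for these errors is easy to demonstrate: the kernel refuses hard links whose target resolves to a directory, so any symlink pointing at a directory must be recreated as a symlink rather than hard-linked. A quick illustration in a scratch directory:

```shell
#!/bin/sh
# Hard-linking a directory is refused by ln (and by the kernel).
d=`mktemp -d`
mkdir "$d/real"
if ln "$d/real" "$d/hard" 2>/dev/null ; then
    echo "hard link created"   # does not happen on Linux
else
    echo "hard link refused"
fi
rm -r "$d"
```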
|
That's it; that's all we need for the second client. Additional
clients are created in exactly the same way. With the setup shown,
every additional client needs only about 20 megabytes of disk space on
the server machine.
Obviously it's a good idea to write a simple shell script with all the
commands needed to create a new client. This way you just have to
execute the script and the new client's directory will be generated
automatically:
|
| #!/bin/sh
#
# Create new diskless client
#
clients=/Clients
master=192.168.0.129
new=$1
# Create the new root directory
mkdir $clients/$new
# Copy the non-sharable parts
cd $clients/$new
for dir in etc root var ; do
cp -av ../$master/$dir .
done
# Create mount points and empty directories
mkdir home mnt proc usr floppy opt tmp
# Create directory tree for data needed on root file system
# and hard-link files
cd $clients/$master
for dir in sbin bin lib dev ; do
echo "Processing $dir ..."
find $dir -type d -exec mkdir -p ../$new/{} \;
find $dir ! -type d -exec bash -c 'f={} ; ln $f ../'$new'/`dirname $f`; echo $f' \;
done
# softlink directories
cd $clients/$new/dev
ln -s /proc/self/fd fd
cd $clients/$new/sbin/init.d
ln -s . init.d
|
|
This script performs the tasks we did manually. It is meant
only as an example. You may want to add some error handling and a nicer
interface before you use it in a real-world setting. But it
demonstrates that once a master client is set up, adding more clients
is a very easy task.
|
31.6.2 | Customizing the Setup |
|
All subsequent clients have to mount the shareable data from the
first client's subdirectory. This means we need a different
/etc/fstab file on all other clients. We need to mount
the directories usr, opt, and
home from the server machine. For /usr and
/opt we can use the versions we already have on the
server for our master client. We could do the same with
/home, but let's say we already have a directory tree for
the home directories on a different server that is shared by diskless
and regular clients. In this scenario, we need a
/etc/fstab file on the client machine like the one shown
here:
|
| #
# Client File System Table
#
# remote file systems
192.168.0.200:/Clients/192.168.0.130 / nfs defaults 0 0
192.168.0.200:/Clients/192.168.0.129/opt /opt nfs defaults 0 0
192.168.0.200:/Clients/192.168.0.129/usr /usr nfs defaults 0 0
192.168.0.1:/home /home nfs defaults 0 0
# local goodies
/dev/fd0 /floppy auto noauto,user 0 0
none /proc proc defaults 0 0
none /dev/pts devpts defaults 0 0
|
|
Of course, we have to export these directories on the server
machines. The /etc/exports on the server for the diskless
clients has to be edited to allow the clients to mount their file
systems accordingly. As with the DHCP configuration, remember to make
the NFS server re-read its configuration after editing this file:
|
| # Master client
/Clients/192.168.0.129 192.168.0.129(rw,no_root_squash)
# shared file systems
/Clients/192.168.0.129/opt 192.168.0.0/255.255.255.0(ro)
/Clients/192.168.0.129/usr 192.168.0.0/255.255.255.0(ro)
# Clients
/Clients/192.168.0.130 192.168.0.130(rw,no_root_squash)
/Clients/192.168.0.131 192.168.0.131(rw,no_root_squash)
|
|
The setup shown reflects the needs of incorporating three client
machines into the network. The client with the address
192.168.0.129 acts as master client and the clients with
the addresses 192.168.0.130 and
192.168.0.131 are additional clients sharing filesystems
with the master.
We're nearly done. The only task left is to edit the client's
/etc/rc.config file. Since we copied it from the master
client, it reflects that machine's setup. Depending on the hardware
differences, you may have to change the mouse, modem, and other
hardware-related parameters here.
|
|
In any case you will have to modify the network
parameters. Enter the IP address of the new client in the
network-related sections and change the host name
accordingly. Everything else can be changed after the client boots for
the first time using YaST on the client machine.
|
|
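The address change itself can be scripted. A hedged sketch: rewrite every occurrence of the master's IP address in the copied configuration file (the paths follow this chapter's layout; the exact variable names inside rc.config depend on your SuSE version, so review the result before booting the client):

```shell
#!/bin/sh
# Sketch: replace the master's IP address with the new client's
# throughout the copied rc.config.
master_ip=192.168.0.129
new_ip=192.168.0.130
conf=/Clients/$new_ip/etc/rc.config
# Note: the dots in the address act as regex wildcards here,
# which is harmless for this pattern.
sed "s/$master_ip/$new_ip/g" "$conf" > "$conf.new" && mv "$conf.new" "$conf"
```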
The last step on the server is to add the client's IP and MAC
addresses to your DHCP configuration. Once this is done, the new
client can be booted.
Once a master client exists, setting up subsequent clients is a pretty
easy task.
|
|
Summary: |
|
In this chapter we learned how to use the concept behind diskless
clients to ease and centralize workstation administration.
We set up a server to export file systems to diskless machines and
created a floppy disk to boot a PC without a local hard disk. We extended
the setup to more than one client without having redundant data on the
server machines. Besides a small amount of unsharable data, the
clients can work on a single copy of the SuSE distribution
hosted on the server machine.
|
|