- Newsgroups: comp.sys.next.sysadmin
- Path: sparky!uunet!charon.amdahl.com!pacbell.com!ames!sun-barr!cs.utexas.edu!zaphod.mps.ohio-state.edu!menudo.uh.edu!usenet
- From: sears@tree.egr.uh.edu (Paul S. Sears)
- Subject: Re: Making a Client's File System Look Like its Server's
- Message-ID: <1992Nov11.212526.9289@menudo.uh.edu>
- Sender: usenet@menudo.uh.edu (USENET News System)
- Nntp-Posting-Host: thanatos.egr.uh.edu
- Reply-To: sears@tree.egr.uh.edu
- Organization: University of Houston
- References: <1992Nov11.195233.20996@leland.Stanford.EDU>
- Date: Wed, 11 Nov 1992 21:25:26 GMT
- Lines: 214
-
- In article <1992Nov11.195233.20996@leland.Stanford.EDU>
- gcolello@biosphere.Stanford.EDU (Greg Colello) writes:
- =>Paul:
- =>
- =>Thanks for the very detailed response to my detailed request. Below I
- =>answer your question about how we make the mail clients "FROM:" field have
- =>the server's address. I also have lots of questions in response to your
- =>answers:
- =>
- =>Greg
- =>
-
- Sure, my pleasure. And thanks for the tip on sendmail...
-
- =>---------------------------------------------
- =>In article <1992Nov11.152052.8171@menudo.uh.edu> sears@tree.egr.uh.edu
- =>(Paul S. Sears) writes:
- =>
- =>> =>8. Make the server the mail server.
- =>>
- =>> Good. export /usr/spool/mail. Make sure sticky bit is set. Chmod 1777
- =>> /usr/spool/mail.
- =>
- =>But you're exporting / as you explain below. I thought you couldn't export
- =>a child path if its parent was already exported (more below). What's this
- =>about setting the sticky bit?
- =>
-
- Ah, the rub.... You are indeed correct, in one sense. I think I need to make
- a distinction between exporting and mounting. We export our / partition (we
- have two on one disk, / and /clients) as rw, but everyone mounts it as ro.
- All the clients mount the various directories in various ways. Our entry for
- the mail directory looks like this (imagine you were doing this by hand
- instead of in Netinfo...):
-
- #mount mailhost:/usr/spool/mail /usr/spool/mail nfs rw,bg,intr,noquota 0 0
-
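- For completeness, on our server the export side boils down to the partitions
- themselves; /usr/spool/mail does not need its own export entry, because a
- client can mount any directory under an exported filesystem. Written out by
- hand it would look roughly like the lines below (I'm doing the exports syntax
- from memory, and "client1" and "client2" are just placeholders, so check
- exports(5) before trusting me):
-
- #/         -access=client1:client2
- #/clients  -access=client1:client2
-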
- Our other single mountpoints (remember I suggested that you have as few
- mountpoints as you can get away with and just link everything to that
- mountpoint?) look like this:
-
- #tree:/ /NeXTMount nfs ro,bg,intr,hard,nosuid,noquota,mnttimeo=5 0 0
-
- (note the HARD mount. Soft mounts are not reliable...)
-
- Then on all our clients, links were made like so:
-
- /LocalApps -> /NeXTMount/LocalApps/@
- /LocalDeveloper -> /NeXTMount/LocalDeveloper/@
- /LocalLibrary -> /NeXTMount/LocalLibrary/@
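-
- In case it is not obvious, those links were just made by hand, as root, on
- each client (after moving aside anything already sitting in the local
- /LocalApps and friends, otherwise ln will happily drop the link _inside_ the
- existing directory):
-
- ln -s /NeXTMount/LocalApps /LocalApps
- ln -s /NeXTMount/LocalDeveloper /LocalDeveloper
- ln -s /NeXTMount/LocalLibrary /LocalLibrary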
-
- Everything goes through a single mount point. We have reduced our total
- number of mount points to the following:
- /usr/spool/mail (server 1) via hard mount
- /Users (server 1) via /Net
- /Users (server 2) via /Net
- /Users2 (server 2) via /Net
- /NeXTMount (server 1) via hard mount
- /NeXTMount2 (server 2) via hard mount
- /usr/spool/NeXTFaxes (server 2) via hard mount
- /CD-Rom (client) via /Net
- /Misc (client2) via /Net
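-
- If it helps to see those all in one place: only the hard mounts show up in a
- hand-written fstab, since the /Net ones never appear there (automounter
- handles them on the fly). Roughly like this, where "tree2" stands in for our
- second server's name and the options on the last two lines are just mirrored
- from the first two:
-
- #mailhost:/usr/spool/mail /usr/spool/mail nfs rw,bg,intr,noquota 0 0
- #tree:/ /NeXTMount nfs ro,bg,intr,hard,nosuid,noquota,mnttimeo=5 0 0
- #tree2:/ /NeXTMount2 nfs ro,bg,intr,hard,nosuid,noquota,mnttimeo=5 0 0
- #tree2:/usr/spool/NeXTFaxes /usr/spool/NeXTFaxes nfs rw,bg,intr,hard,noquota 0 0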
-
- =>---------------------------------------------
- =>> =>My first question is how many of the step 5 directories need Read/Write
- =>> =>access? For example I know that /usr/spool/mail-->/private/spool/mail
-
- =>>
- =>
- =>Doesn't /usr/spool/mail need to be writable? I think so for users who
- =>prefer to use Unix command line mail (we have some).
- =>
- =>Whoa. Partition? Who said anything about a partition? We have no
- =>partitions on our disks. Is that how you are able to export the root as
- =>read only and yet also export children like "/usr/spool/mail" and "/Users"
- =>as read/write?
- =>
- =>BTW the 3.0 installer apparently doesn't give you the ability to create
- =>partitions when you use its "complete install" mode.
- =>
-
- Er, I have a bad habit of referring to everything as "partitions". OK... so
- you don't have multiple partitions... but you still have one partition :-)
-
- Yes, /usr/spool/mail needs to be writable. See above, where I try to clarify
- the difference between exporting and mounting...
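-
- On the sticky bit question: mode 1777 makes the spool directory writable by
- everyone, so mail delivery and command-line mail on any client can create
- mailbox and lock files in it, while the sticky bit keeps users from deleting
- or renaming each other's files. On the mail server:
-
- chmod 1777 /usr/spool/mail
- ls -ld /usr/spool/mail
-
- and the ls should show drwxrwxrwt.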
-
- =>> be _hard_ mounts. Having things in /Net via automounter proved to be way
- =>> too unreliable when the server gets bogged down with NFS requests. Also,
- =>> in your
- =>
- =>What is it with this /Net thing anyway? I want to understand what Next
- =>intended before I assume it's not for me. I've been studying the way it
- =>works and it is slightly confusing. I can only surmise that it was
- =>intended for read only mounts by the following line in /etc/mtab for the
- =>client stoma (placed there as a consequence of Netinfo setup on the server
- =>biosphere):
- =>
- =>stoma:(autonfsmount[87]) "/Net" nfs ro,intr,port=686 0 0
- =>
- =>Now, when the exported path "biosphere:/Users" is mounted on stoma, it is
- =>done as follows (also from /etc/mtab on the client):
- =>
- =>biosphere:/Users "/private/Net/biosphere/Users" nfs rw,bg,intr,noquota,net 0 0
- =>
- =>The server's name is slipped into the path. Netinfo set that up. As part
- =>of the mount a soft link /Net/biosphere to /private/Net/biosphere is
- =>created. What causes this to be done? Autonfsmount? Also how does
- =>/etc/mtab get updated with mounts that I can't find in /etc/fstab? How are
- =>these /Net mountings (such as /Users) being directed?
- =>
- =>Now while /Net is read only, which is also obvious from its permissions
- =>dr-xr-xr-x, I assume that read/write directories (like /Users) can still
- =>be mounted to it and accessed as read/write by making a soft link /Users
- =>to /Net/biosphere/Users for example? I get this idea from observing how
- =>/usr/spool/mail appears to give universal access to the restricted path
- =>/private/spool/mail.
- =>
-
- /Net is what automounter uses to provide dynamic mounts. These are soft
- mounts that can come and go as the exporting host comes and goes. /Net is the
- primary place you would mount the /Users partitions/directories, as it is a
- place that _looks_ the same no matter what client you are on. For example, if
- a user's home resides on a server, it would be /Net/server/Users/user's_home
- on all the clients. If you had other users on other servers, they would be
- /Net/serverxxx/Users/userhome.
-
- It basically provides a consistent place to mount things. However, you do not
- have much control over things that are mounted via automounter (i.e., you
- can't unmount them manually) because automounter handles those details. If a
- server crashes, the directories in /Net will disappear, but they will
- re-appear as soon as the server comes back up (a benefit of using /Net).
- However, I only recommend using /Net for the /Users home directories. Static,
- hard mounts are much more reliable for everything else.
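-
- The practical upshot is that you point each user's home directory at the /Net
- path in their Netinfo/passwd record and it resolves identically on every
- client. Purely as an illustration (every field here except the home directory
- is made up), a passwd-style entry would be:
-
- jdoe:*:501:20:Jane Doe:/Net/tree/Users/jdoe:/bin/csh
-
- The same record then works unchanged no matter which client the user sits
- down at.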
-
- =>---------------------------------------------
- =>> /etc/rc, up the # of nfsd processes from 6 to at least 8.
- =>>
- =>
- =>Why is this unreliable? It isn't clear to me that /Users is
- =>automounted. Just /Net. Why this bump up in processes?
- =>
-
- /Users is exported from your home directory server and the clients mount
- /Users via /Net, so it looks like /Net/server/Users/xxxx on all the clients
- (it is a consistent location; the home directory does not change regardless
- of client).
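-
- As for the nfsd bump itself: that is done on the machine doing the serving,
- and the point is just that each nfsd process services one NFS request at a
- time, so more of them lets the server keep up with more clients hammering it
- at once. The line lives in the server's /etc/rc; I'm quoting from memory so
- the exact wording may differ, but it is basically
-
- nfsd 6 &
-
- and you change the 6 to an 8 or more, then reboot (or kill the running nfsds
- and start the new count by hand).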
-
- =>---------------------------------------------
- =>> And to address a particular misconception:
- =>>
- =>[deleted stuff]...
- =>
- =>> for the wrong reasons. NetBoot clients are a _severe_ drain on a server's
- =>> resources. Try to avoid NetBoot clients if possible. We have 11 (cubes
- =>> with 40M accelerator drives) out of 50 clients that are net boot. They
- =>> are mostly serviced by our secondary server which has mucho process
- =>> cycles to burn, but even that server has its performance degraded when
- =>> the netboots are in heavy use...
- =>>
- =>
- =>I don't get it. Why should Netbooting (with local swap disks) be any more
- =>of a strain on the server than having clients with their own OS and
- =>exporting root to them? Are you saying that serving the OS kernel is a
- =>bigger task than serving root? I would think that virtual memory would
- =>solve that kernel serving problem if the local swap disks were actually
- =>working.
- =>
-
- Er, well, think about it. First, a NetBoot client has _all_ of its
- directories on the server (like /private/etc...), which means a lot of NFS
- calls to do anything basic (even stuff in /bin, /usr/bin). On a machine with
- even a minimal local disk (105M, with 73M used for NS), /bin, /private/etc,
- /usr/bin, and so on are local. Much faster, and no need to bog the network
- down with NFS traffic.
-
- Secondly, if set up incorrectly, netboot clients will swap to the _server_.
- That means they are swapping across the network... with limited bandwidth
- already, that is a seriously bad thing!
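-
- A quick way to see where a netboot client is really swapping: run df on the
- swapfile, which on a stock setup lives at /private/vm/swapfile if I remember
- right, e.g.
-
- df /private/vm/swapfile
-
- If that reports an nfs-mounted filesystem instead of the local disk, the
- client is paging across the network and you want to fix that first.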
-
- =>-----------------------------------------------------------------
- =>BTW The new NFS Manager app seems to work great. It took me a while to
- =>figure out what order it wanted me to do things in, but once I did it
- =>seemed to greatly simplify the task. I think it may have a great future.
- =>
-
- I need to warn you about a slight (nothing serious, but it can be
- frustrating) bug in NFSManager.app. There is a little check box under the
- export filesystem panel that has to do with treating all unknown users as
- uid xxx. Do not uncheck this box. If you do, all mount requests to your
- server will be denied...
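-
- If you ever do get bitten by it, a quick sanity check from a client is
- showmount, which asks the server what it thinks it is exporting:
-
- /usr/etc/showmount -e yourserver
-
- I believe showmount lives in /usr/etc on the NeXT; if the export list looks
- right but mounts are still refused, suspect that unknown-users checkbox.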
-
- =>-----------------------------------------------------------------
- =>Greg Colello
- =>Carnegie Institution, Department of Plant Biology
- =>Stanford University
- =>gcolello@biosphere.stanford.edu
-
- --
- Paul S. Sears * sears@uh.edu (NeXT Mail OK)
- The University of Houston * suggestions@tree.egr.uh.edu (NeXT
- Engineering Computing Center * comments, complaints, questions)
- NeXT System Administration * DoD#1967 '83 NightHawk 650SC
- >>> SSI Diving Certification #755020059 <<<
- "Programming is like sex: One mistake and you support it a lifetime."
-