Xref: sparky comp.unix.large:380 mi.misc:744
Path: sparky!uunet!know!hri.com!noc.near.net!news.Brown.EDU!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!zaphod.mps.ohio-state.edu!caen!destroyer!cs.ubc.ca!newsserver.sfu.ca!sfu.ca!vanepp
From: vanepp@fraser.sfu.ca (Peter Van Epp)
Newsgroups: comp.unix.large,mi.misc
Subject: Re: LISA VI paper availible via anon ftp
Message-ID: <vanepp.721671320@sfu.ca>
Date: 13 Nov 92 16:15:20 GMT
References: <1dmrsdINNcs4@nigel.msen.com> <scs.721401144@hela.iti.org> <vanepp.721411740@sfu.ca> <5c4efaf4.1bc5b@pisa.citi.umich.edu> <vanepp.721543207@sfu.ca> <5c53cdbd.1bc5b@pisa.citi.umich.edu>
Sender: news@sfu.ca
Organization: Simon Fraser University, Burnaby, B.C., Canada
Lines: 74

rees@pisa.citi.umich.edu (Jim Rees) writes:

>In article <vanepp.721543207@sfu.ca>, vanepp@fraser.sfu.ca (Peter Van Epp) writes:

> If I had thought of it, I knew that :-), I was expecting that you might be
> big enough to run out of 65000+ uids.

>It's worse than that. There are still some Unix systems out there that
>store the uid in a signed short, which means you've only got 32,000 of them
>available. That's roughly the size of our user community, and I think we
>have about 28,000 assigned now, even without IFS extensively deployed. That
>doesn't leave much breathing room.

I know; we have some uids above 32k, and we see some interesting problems
at times. Several of our part time operators couldn't run sudo on some
machines until we diddled sudo to use an unsigned int (which I'm surprised
worked, since the system still thinks it's a signed int!); maybe we were
just lucky :-)

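To make the failure mode concrete, here is a minimal C sketch (purely
illustrative, not the actual change we made to sudo; it assumes a
two's-complement machine with 16-bit shorts) of why a uid above 32767 goes
wrong once it has passed through a signed short, and why forcing it back to
unsigned sorts the comparison out:

    /*
     * A uid above 32767 turns negative once it has been through a signed
     * 16-bit type, so a naive comparison against the numeric value from
     * /etc/passwd fails until the uid is forced back to unsigned.
     */
    #include <stdio.h>

    int main(void)
    {
        short          stored_uid = (short) 40000;  /* uid kept in a signed short    */
        long           passwd_uid = 40000;          /* value parsed from /etc/passwd */
        unsigned short fixed_uid  = (unsigned short) stored_uid;

        /* Sign extension makes the signed copy compare as -25536, not 40000. */
        printf("signed   compare: %s (stored = %d)\n",
               (stored_uid == passwd_uid) ? "match" : "MISMATCH", stored_uid);

        /* Treating the same bits as unsigned restores the intended value. */
        printf("unsigned compare: %s (stored = %u)\n",
               (fixed_uid == passwd_uid) ? "match" : "MISMATCH",
               (unsigned int) fixed_uid);

        return 0;
    }
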
> We are small enough that we could make do with
> a single NFS fileserver to give us central file services...

>No way will that work with 30,000 users. Client caching is absolutely
>essential, and it's also nice to have some sane kind of sharing semantics
>(unlike what NFS gives you).

While I completely agree that some form of AFS-like system is the final
answer, at the time we did this (and even now, for that matter) most of the
people here are not Unix experts (including me!). We had only one person
who was an experienced Unix sysadmin; all the rest of us are MTS retreads
(not a bad point to be starting out from, I will admit :-) ). At that time
(and probably even now) Transarc didn't support the Silicon Graphics
machines, and we already had some.

Since we run an Auspex file server with 6 (of the possible 8) Ethernet
ports, and all the NFS service is in the machine room on secure Ethernets, we
in fact manage to serve the home directories of some 11,000 active users (of
a total of > 20,000) from that single NFS server. We selected NFS because all
the machines we had support it (and at that point we didn't know of all the
security problems present in both NFS and Unix!). This entire phase of the
conversion was designed with the thought that it was a 2-year solution
(whether that ends up being true remains to be seen :-) ), and hopefully when
(and if!) the time comes to move on, AFS, DFS, or IFS will be a more
mainstream solution, or, equally possibly, we will have enough confidence in
our Unix expertise to be able to do the work required to install it
ourselves.

The security problems we are seeing, and the lack of security in NFS
(at least NFS that will work with all vendors), have caused us to restrict
access to the file server to our own machines in our machine room, where we
had hoped to be able to provide it to the desktop.
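In practice that restriction is just an export access list; something along
these lines (the hostnames are made up, and the exact option syntax varies
from vendor to vendor) is all it takes:

    # /etc/exports on the file server: home directory filesystems are
    # exported only to the named machine-room hosts, never to the campus
    # backbone.
    /export/home1  -access=compute1:compute2:adminhost
    /export/home2  -access=compute1:compute2:adminhost,root=adminhost
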
We are currently looking at the Novell Netware product on a Sun to
see if we can use it to export NFS-mounted home directories to Macs and PCs
in a semi-secure manner (i.e. the NFS side will be kept on a secure Ethernet
on the Sun, not exposing the NFS mount point to the backbone Ethernet).

In general, all parts of this conversion were probably over-specified
and, in most cases, selected with the capability to increase capacity by just
adding money if we found that we had underestimated the load. Both the Auspex
and the SGI machines are upgradeable to more capacity: the Auspex by adding
more disks (which we have done) and more Ethernets (which we haven't so far),
and the SGI machines by just adding more CPU boards and rebooting the
machine. I will note that after I commented in the selection meeting that the
single-CPU model was probably all right, since after all we could just give
SGI more money and buy another couple of CPUs when it fell over dead (about
15 minutes after we turned it on), the bosses bought the 2-CPU model right
off... We in fact haven't had to upgrade yet (although with 20/20 hindsight I
expect we might have had to with the single-CPU model). Should we have to, we
can plug in another 2-CPU board and reboot, and away we will go, no fuss, no
muss; we have done it before on our previous SGI systems.

I expect that this could have been done more cheaply, but no matter
what happened, we would (and still could) find uses for all the machines
that we have if we had indeed made a major underestimate of our
requirements.

Peter Van Epp / Operations and Technical Support
Simon Fraser University, Burnaby, B.C. Canada