- Path: sparky!uunet!newsstand.cit.cornell.edu!news.graphics.cornell.edu!boa.graphics.cornell.edu!hurf
- From: hurf@boa.graphics.cornell.edu (Hurf Sheldon)
- Newsgroups: comp.sys.hp
- Subject: Re: clusters
- Date: 18 Dec 1992 17:32:50 GMT
- Organization: Cornell University Program of Computer Graphics
- Lines: 53
- Distribution: world
- Message-ID: <1gt202INN1od@loon.graphics.cornell.edu>
- References: <BzFsCM.H5u@bunyip.cc.uq.oz.au> <42692@sdcc12.ucsd.edu>
- NNTP-Posting-Host: boa.graphics.cornell.edu
- Keywords: cluster
-
- In article <BzFsCM.H5u@bunyip.cc.uq.oz.au> werner@dirac.physics.uq.oz.au writes:
- >We are investigating buying several HP7?? snakes and have heard about
- >something called a cluster environment for HP workstations.
- >Has anybody used this system, what is the implementation (i.e. RPC calls?),
- >what are the capabilities, what are the management problems, what are the
- >limitations, do all the machines have to be identical?
-
- The HP cluster environment is very nice with few drawbacks. We
- have a number of 700 systems running under a cluster, each with
- local swap. This works exceedingly well. There are few files that
- are unique to the local systems. All mounts at the server are
- also mounts at the clients, including NFS and CD-ROMs. (You can
- only mount CD-ROMs at the server - a very minor pain.)
- Some real plusses:
- Network File I/O - Within the cluster you can see full ethernet
- bandwidth, vs. NFS at something less than 20% effective (rough
- numbers are sketched just after this list). I don't know how they
- do it, but it works very well.
- Adding new systems - Run sam to add a client, enter the
- hardware address from the shipping label and the network id,
- plug in the network cable and the power cord, and turn it on.
- That's it.
- Updating & patching - Run update on the server - that's it.
- Client hangs (we do a lot of development - it happens) - cycle
- power and it's up as soon as the memory check is done (a la PCs).
- Mail and lpadmin are handled only on the server - essentially
- you only need to administer one machine for n < 30 systems.
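-
- (To put rough numbers on the file I/O point above - this is only
- back-of-the-envelope arithmetic, and it assumes plain 10 Mbit/s
- ethernet:
-
-     #!/bin/sh
-     # Rough throughput comparison, assuming 10 Mbit/s ethernet.
-     raw=`expr 10000000 / 8`     # ~1,250,000 bytes/s theoretical
-     nfs=`expr $raw / 5`         # ~20% effective => ~250,000 bytes/s
-     echo "in-cluster (full ethernet): $raw bytes/s"
-     echo "plain nfs (~20% effective): $nfs bytes/s"
-
- i.e. very roughly 1.25 MB/s vs 250 KB/s.)
-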
- Some gotchas:
- Don't plan on network swap with very many systems (>3).
- (You can switch to/from net swap by editing one file on the server,
- so if you need to run any client diskless it is easily done.)
- If the server dies, everybody dies; the clients come back up
- so much faster than the server that they hang, so you have to
- cycle power on all the clients (sub-gotcha - this is so rare you
- can't remember where all the clients are... really).
- The ethernet is much more sensitive to trouble - the old "I've
- got 15 seconds to reconnect" won't work; it's more like 2 or 3
- seconds tops before the client gets abandoned and the server
- fills the console with error messages (a sub-plus is that you can
- wiggle a bad connector and see an instant error message).
- There can be some subtle misunderstandings because of the
- context-dependent file structure, but they get pretty obvious
- after you've seen one or two.
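-
- (To make that last gotcha concrete: if I remember the HP-UX docs
- right, a context dependent file is really a hidden directory with
- one element per context, and you can peek inside it by tacking a
- '+' onto the name. Treat the path below as an illustration from
- memory, not gospel:
-
-     #!/bin/sh
-     # Illustration only: /etc/rc as a CDF is just an example here,
-     # and the '+' trick is from memory of the HP-UX cluster docs.
-     ls -l /etc/rc      # the element for this machine's context
-     ls -l /etc/rc+     # the hidden directory, one element per context
-
- so the "same" file can legitimately look different from client to
- client.)
-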
- All in all I recommend the cluster environment to any HP site
- with more than one system of the same type. I think you would need
- more than one of each type to see any benefit from a 'mixed'
- cluster, at least from a reduced-overhead point of view.
-
-
- --
- Hurf Sheldon Network: hurf@graphics.cornell.edu
- Program of Computer Graphics Phone: 607 255 6713
-
- 580 Eng. Theory Center, Cornell University, Ithaca, N.Y. 14853
-
-