- Newsgroups: comp.sys.hp
- Path: sparky!uunet!usc!sdd.hp.com!apollo.hp.com!netnews
- From: mcsavaney_d@apollo.hp.com (David McSavaney)
- Subject: Re: SW for snake farms
- Sender: usenet@apollo.hp.com (Usenet News)
- Message-ID: <C19L81.E6J@apollo.hp.com>
- Date: Fri, 22 Jan 1993 16:45:37 GMT
- References: <1993Jan22.105439.22750@email.tuwien.ac.at>
- Nntp-Posting-Host: macsav.ch.apollo.hp.com
- Organization: Hewlett-Packard Corporation, Chelmsford, MA
- Keywords: cluster of snakes, load-balancing
- Lines: 252
-
- In article <1993Jan22.105439.22750@email.tuwien.ac.at> strnadl@tph01.tuwien.ac.at writes:
- >Snake-farm: a more or less loosely coupled bunch of snakes where one
- > can share resources (e.g. balance the load over more
- > machines,...)
- > [much stuff deleted]
- >--
- >Christoph F. Strnadl | The term 'politically correct', co-
- >Institute for Theoretical Physics | opted by the white power elite as a
- >TU Vienna, Austria | tool for attacking multiculturalism,
- >email: strnadl@tph01.tuwien.ac.at | is no longer politically correct.
- >
-
- I've included a long piece of background information from July 92 about the
- Snakes Farm installed at CERN, though I am not sure if the contacts are
- still valid. You'll be pleased to know that further development
- of the Snakes Farm concept at CERN with Convex resulted in the
- announcement of the Cluster Computing program in October 92. Your local
- HP or Convex office will have more details.
-
- Software available for Cluster Computing includes (a small PVM sketch
- follows the list):
-     Linda from SCA
-     NQS from Sterling
-     Express from Parasoft
-     ISIS from ISIS
-     Load Balancer from Freedman Sharp
-     PVM/HeNCE from Oak Ridge NL (Public Domain)
-     Task Broker from HP
-     ConvexNQS, ConvexPVM and ConvexMLIB from Convex
-     Many networked sys admin tools from HP
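- 
- (The PVM sketch promised above: this is not part of the CERN material
- further down, just an illustration of the usual master/worker pattern
- these packages support.  The worker executable name "csf_worker", the
- task count and the message tags are invented for the example.)
- 
-     #include <stdio.h>
-     #include "pvm3.h"
- 
-     #define NTASKS 4
- 
-     int main(void)
-     {
-         int tids[NTASKS], results[NTASKS], i, n;
- 
-         pvm_mytid();                      /* enroll this process in PVM */
- 
-         /* spawn workers on any free hosts in the virtual machine */
-         n = pvm_spawn("csf_worker", (char **)0, PvmTaskDefault, "",
-                       NTASKS, tids);
-         if (n < NTASKS)
-             fprintf(stderr, "only spawned %d of %d workers\n", n, NTASKS);
- 
-         /* hand each worker its task number (message tag 1) */
-         for (i = 0; i < n; i++) {
-             pvm_initsend(PvmDataDefault);
-             pvm_pkint(&i, 1, 1);
-             pvm_send(tids[i], 1);
-         }
- 
-         /* collect one integer result per worker (any sender, tag 2) */
-         for (i = 0; i < n; i++) {
-             pvm_recv(-1, 2);
-             pvm_upkint(&results[i], 1, 1);
-         }
- 
-         pvm_exit();                       /* leave PVM cleanly */
-         return 0;
-     }
- 
- Compile against libpvm3 (something like "cc master.c -lpvm3") and run
- with the pvmd daemon started on the participating hosts.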
-
- Best wishes
- Dave McSavaney, Product Manager,
- MP Workstations, Advanced Systems Division,
- Hewlett-Packard Co.
-
- Still not an official statement of Hewlett-Packard Company
- -----------------------------------------------------------
-
- > HP 'SNAKES FARM' AT CERN
- >
- >
- >
- > About CERN and the HP 'Snakes Farm'
- > -----------------------------------
- > Located near Geneva, Switzerland, CERN (European Laboratory
- > for Particle Physics) is one of the world's largest
- > scientific laboratories. Funded mainly by 17 Member States,
- > its annual budget for 1992 was 945.47 million Swiss francs.
- > CERN has over 3000 employees and provides support for over
- > 6,000 visiting scientists per year.
- >
- > HP's 'Snakes Farm' consists of up to 50 HP 9000/700 servers
- > linked together and accessed via an HP 9000/720 workstation
- > centralizing job requests. The 'Snakes Farm' can be
- > configured to provide from 57 to 2,850 MIPS using 9000/720s.
- >
- > In December 1991, CERN installed 16 HP 9000/720 systems. The
- > system cluster is linked together and accessed via an HP
- > 9000/720 workstation that centralizes job requests.
- >
- >
- > What CERN Does
- > --------------
- > CERN's business is particle physics; studying the
- > innermost constituents of matter to find out how the Universe
- > was created and how it works.
- >
- > It is thought that the Universe was created when a huge
- > quantity of energy was suddenly transformed into billions of
- > particles which, after many changes and transformations,
- > finally stabilized and created the Universe as we know it.
- >
- > At CERN, this process can be artificially induced on a very
- > minute scale so that the process can be studied. To do this,
- > physicists inject particles with a large amount of energy by
- > accelerating them to almost the speed of light. When two
- > particles collide, energy is freed and transforms into
- > matter. Using detectors and computers, physicists observe
- > what these new particles do.
- >
- >
- > Scientific Equipment at CERN
- > ----------------------------
- > CERN has several types of particle accelerators and
- > colliders. Much of the research today is performed on the
- > Large Electron-Positron collider, or LEP. LEP is built in a 27-
- > kilometer underground ring and is currently the world's
- > largest scientific machine. There are four large experiments
- > being conducted on LEP, which provide research material for
- > over 2,000 physicists worldwide.
- >
- > LEP works as follows: Electrons and positrons (the antimatter-
- > counterparts of electrons) are whirled around in the ring and
- > smashed together. Each collision produces a spray of up to
- > several hundred fragments (secondary particles). Four
- > detectors spaced round the ring intercept the emerging
- > fragments of matter. Sophisticated electronics equipment
- > within the detectors records and monitors the mass of incoming
- > data. Each detector has its own data acquisition system,
- > processing raw information before passing it on to the
- > experiment's main computing system for further processing and
- > recording onto cassettes.
- >
- > A new machine, the Large Hadron Collider (LHC) is planned for
- > 1998. LHC will bring protons into head-on collisions at
- > higher energies than ever achieved before. It will reveal
- > behavior of the fundamental particles of matter that has
- > never been studied before.
- >
- >
- > CERN's Computing Requirements
- > -----------------------------
- > CERN's data storage requirements are immense. The information
- > from a single collision could fill a telephone directory. All
- > of CERN's experimental data is stored on 60,000 cassettes,
- > representing over 50 Terabytes (1 Terabyte = 1,000
- > Gigabytes). The large size of the data sets presents
- > challenges in networking, data handling and data storage.
- >
- > As for processing power, a typical job is so large that it
- > runs for one CPU day. Each of the four current experiments on
- > LEP requires about 1000 MIPS for offline data processing and
- > event simulation. (Offline processing is performed some hours
- > or days after an experiment has been completed, typically
- > using data that has been recorded on cassettes.)
- >
- > The CERN computing center also has a fast-growing requirement
- > for CPU time for event simulation for LHC and LEP
- > experiments. For example, a large simulation effort is
- > underway by accelerator physicists to design the magnets and
- > beam transport specifications of the LHC.
- >
- > CERN needs experience with very large computing facilities in
- > preparation for LHC. LHC experiments will require between 100
- > and 1000 times the computer resources of the current
- > generation of LEP experiments. An investigation is being
- > undertaken to determine how to build and manage a system that
- > can meet future requirements.
- >
- >
- > Starting with the 'Snakes Farm' at CERN
- > ---------------------------------------
- > In 1991, computer specialists at CERN decided to install
- > a 'Snakes Farm.' This decision was based mainly on two
- > factors: price/performance and scalability.
- >
- > Computing services based on the 'Snakes Farm' were introduced
- > in December 1991. The service is called the Central
- > Simulation Facility (CSF) and is based on a cluster of 16 HP
- > 9000/720 workstations and servers connected to the CERN disk
- > and tape service developed for a centrally operated RISC*
- > environment (CORE). CSF is a joint CERN/HP project.
- >
- > The goal of CSF is to provide physicists an integrated system
- > capable of handling the full range of batch work typical of a
- > large High Energy Physics lab. For 1993, CSF plans to provide
- > an aggregate system throughput of 500 CERN Units*
- > (approximately 1 GFlop).
- >
- >
- > CSF Hardware Installation
- > -------------------------
- > At CSF, the 16 HP 9000/720 systems are rack mounted and
- > linked together. One of these 16 centralizes job requests and
- > manages the job queue.
- >
- > The 'Farm' is connected to an Ethernet segment. In this same
- > segment there are three Apollo DN 10000 workstations. The HP
- > 9000/720 servers work together with and augment the
- > capability of these workstations.
- >
- > Also on the same Ethernet segment are two servers that act as
- > tape-staging devices for jobs running anywhere in CERN's
- > centrally operated RISC environment. Over 50 tape drives are
- > operated round-the-clock by dedicated tape robots and a
- > tape-handling team. CERN uses an IBM 3090/600J mainframe
- > computer to manage the data on these cassettes and the tape
- > robots. (The IBM mainframe was originally purchased as a
- > batch machine, but today it is used primarily for tape
- > handling and interactive online tasks such as electronic mail
- > and file editing.)
- >
- > The Ethernet segment accesses the sitewide FDDI backbone.
- > From the FDDI backbone, the cluster accesses the rest of the
- > site's Ethernet LAN through an FDDI-to-Ethernet DEC 600
- > bridge.
- >
- > Each HP 9000/720 system is configured with a 400 Mbyte local disk,
- > 16 Mbytes of RAM, and HP-UX.
- > Most of the HP servers in the 'Farm' run 'diskless,' that is,
- > they are booted over the LAN from an HP-UX cluster server.
- > This frees up over 80 Mbytes per disk while reducing the
- > administrative load of maintaining separate file systems.
- >
- > CSF Batch Environment, File System and Usage
- > --------------------------------------------
- > Users submit jobs to the HP 9000/720 workstation via NQS
- > (Network Queueing System). NQS manages the job queue and
- > distributes jobs evenly to the destination servers.
- >
- > At the start of a job, the executable program and any data
- > card files the job might need are copied to a temporary
- > workspace on the local disk. Job output is also written to
- > the local disk.
- >
- > When the job finishes, the results are recorded onto a
- > cartridge or sent over NFS (Network File System) to the user.
- > NQS then cleans up the temporary work area, and the system is
- > ready to accept the next task.
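- 
- [Aside from me, not from the CERN write-up: the stage-in / run /
- stage-out cycle just described is easy to picture as a small per-job
- wrapper.  The C sketch below is purely illustrative; the paths, file
- names and shell commands are invented and are not CERN's actual NQS
- setup.]
- 
-     #include <stdlib.h>
- 
-     /* Hypothetical per-job wrapper: stage in, run, stage out, clean up. */
-     int main(void)
-     {
-         /* 1. copy the executable and its data card file to local scratch */
-         system("mkdir -p /scratch/job42 && "
-                "cp /nfs/user/analysis /nfs/user/cards.dat /scratch/job42");
- 
-         /* 2. run the job; output stays on the local disk while it runs */
-         system("cd /scratch/job42 && "
-                "./analysis < cards.dat > results.out 2> job.log");
- 
-         /* 3. return the results over NFS (or hand them to the tape-staging
-               servers for recording onto cartridge) */
-         system("cp /scratch/job42/results.out /scratch/job42/job.log /nfs/user/");
- 
-         /* 4. clean up the temporary work area, as NQS does after a job */
-         system("rm -rf /scratch/job42");
-         return 0;
-     }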
- >
- > People are currently in various stages of porting their
- > large programs to the HP-UX Fortran compiler. Existing code
- > running on the HP-UX platform does not need to be recompiled
- > for the 'Farm'.
- >
- > Conclusions and Benchmark Results
- > ---------------------------------
- > Today, the 'Farm' handles a significant amount of the scalar
- > computation for simulation, analysis, data processing, and
- > visualization at CERN. The 'Farm' has doubled the CPU
- > capacity of CERN's central mainframes.
- >
- > CERN benchmark testing on the HP 9000/720 resulted in the
- > following:
- >
- > - One HP 9000/720: 10.5 CERN Units (together the 16 9000/720s
- > provide 168 CERN Units)
- >
- >
- > - IBM 3090/600J: 9.5 CERN Units per CPU (for a total of almost
- > 60 CERN Units)
- >
- > - CRAY Y-MP (6NS): 11 CERN Units
- >
- > In addition, CERN's benchmark tests showed that the
- > processing power of the 'Farm' was comparable to that of an
- > IBM 9000/900.
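- 
- [Another aside: these figures line up with the earlier "500 CERN Units
- (approximately 1 GFlop)" target, which implies very roughly 2 MFLOPS
- per CERN Unit.  The little check below only re-derives numbers already
- quoted above, counting the 3090/600J as a six-processor machine.]
- 
-     #include <stdio.h>
- 
-     int main(void)
-     {
-         double cu_720 = 10.5;                /* CERN Units per HP 9000/720 (quoted)   */
-         double cu_farm = 16 * cu_720;        /* 16-node farm -> 168 CERN Units        */
-         double mfl_per_cu = 1000.0 / 500.0;  /* "500 Units ~ 1 GFlop" -> ~2 MFLOPS/CU */
- 
-         printf("farm capacity   : %.0f CERN Units\n", cu_farm);
-         printf("one HP 9000/720 : ~%.0f MFLOPS by that yardstick\n",
-                cu_720 * mfl_per_cu);
-         printf("IBM 3090/600J   : ~%.0f CERN Units (6 x 9.5)\n", 6 * 9.5);
-         return 0;
-     }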
- >
- > For more information
- > --------------------
- > Anyone interested in visiting the 'Snakes Farm' at CERN
- > should contact Les Robertson in Geneva at (41) 22-767-4916. More
- > detailed information about the HP 'Snakes Farm' can be
- > obtained from Michel Benard (HP) at (41) 22-780-8165 (Switzerland).
-
- ****Not sure if these are still valid****
- >
- > A CERN Unit is a combination of floating-point, integer, and
- > double-precision tests
- >
- > .......................................................................
-
-
-