.\" install.me, 1989-10-18
.nr si 3n
.he 'CONDOR INSTALLATION GUIDE''%'
.fo 'Version 4.0.0'''
.+c
.(l C
.sz 14
CONDOR INSTALLATION GUIDE
.)l
.sp .5i
.sh 1 GENERAL
.pp
This document explains how to create and install
.i condor
from the source.
To do this, you will need to be able to become the super user
on all of the machines where you want condor to run.
We also assume that you are familiar with local procedures for
creating unix\**
.(f
\**UNIX is a trademark of AT&T.
.)f
accounts, and allocating user and group id numbers.
In addition, if you wish to use NFS\**
.(f
\**NFS is a trademark of Sun Microsystems.
.)f
to share some of the
.i condor
executables and libraries between machines, you will
need to be familiar with exporting and importing
NFS file systems.
.sh 1 "OPERATIONS IN A HETEROGENEOUS ENVIRONMENT"
.pp
Most local area networks contain a variety of machine and
operating system types.
To maximize the sharing of resources,
.i condor
attempts to provide some interoperability between the various
machine architectures and operating systems.
This does add some complication to the
.i condor
code, and will require some consideration by the person creating
and installing the
.i condor
executables.
.pp
.i Condor
is designed so that all of the various machines may reside in a single
.q resource
pool.
All of the
.i condor
daemons communicate through XDR\**
.(f
\**XDR is a trademark of Sun Microsystems.
.)f
library routines, and are thus compatible even between
machines which use different byte ordering.
Users of a machine of any particular architecture and operating system
type will be able to submit jobs to be run on other architecture and
operating system combinations.
(Of course those jobs will need to be compiled and linked for the
appropriate target architecture/operating system combination.)
.(b
.TS
box;
c s s
l l l.
Supported Architecture and Operating System Combinations
Architecture	Operating System	Description
VAX	ULTRIX	VAX running Ultrix
VAX	BSD43	VAX running 4.3BSD
MC68020	SUNOS32	Sun 3 running SunOS 3.2
SPARC	SUNOS32	Sun 4 running SunOS 3.2
SPARC	SUNOS40	Sun 4 running SunOS 4.0
I386	DYNIX	Sequent Symmetry running Dynix\**
MIPS	ULTRIX	DECstation 3100 running Ultrix
.TE
.)b
.(f
\**Symmetry and Dynix are trademarks of Sequent Computer Systems.
.)f
.sh 1 "INSTALLATION OVERVIEW"
.pp
If you use NFS, you can extract the
.i condor
source on a single machine, and then mount object and executable
file trees on machines of the various types for which you wish to
compile
.i condor .
Alternatively, you can distribute the source tree to the appropriate
machines using
.q rcp
or
.q rdist .
.pp
Once you have compiled executables suitable for the machines you wish
to include in your
.i condor
pool, you will need to make those executables available on the
member machines.
There are a number of possible ways to do this.
When you create the executables, they will be placed in a
directory called <releasedir> (details on how to set this pathname
are given later).
To include the machine where you did the compilation in the pool,
you might set <releasedir> to be a well-known directory, or create
a symbolic link to it from a well-known directory.
For other machines in the pool, you might choose to use the compilation
machine as a fileserver; in this case you can use NFS to mount <releasedir>
in a well-known place.
If the machines to be in the
.i condor
pool already have some of their executables remotely mounted
from fileservers, you might want to distribute the executables
from <releasedir> to those fileservers using
.q rcp
or
.q rdist .
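.pp
If you distribute with
.q rdist ,
a Distfile for pushing the contents of <releasedir> to such fileservers
might look something like this
(the hostnames and destination path are placeholders for your site,
not values taken from the distribution):
.(l
HOSTS = ( fileserver1 fileserver2 )
FILES = ( <releasedir> )
${FILES} -> ${HOSTS}
	install /usr/uw/condor ;
.)l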
.pp
Each machine which is a member of the
.i condor
pool will need to have a
.q condor
account.
If you use NFS, it will be necessary for all of the
.q condor
accounts to share common user and group id's.
.i Condor
must have its own group id.
Group id's such as
.q daemon
or
.q bin
are not suitable for use by
.i condor .
Each
.q condor
account will need a home directory containing 3 subdirectories,
.q log ,
.q spool ,
and
.q execute .
A script is provided which will create these directories with
the proper ownership and mode.
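In outline, the script does something like the following
(the mode shown here is an illustration; rely on the distributed
script for the actual values):
.(q
cd ~condor
.br
mkdir log spool execute
.br
chown condor log spool execute
.br
chgrp condor log spool execute
.br
chmod 0755 log spool execute
.)q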
If you choose to have these directories remotely mounted, be sure each
.i condor
machine has its own private version of these directories.
Each
.q condor
account will have two files in the home directory called
.q condor_config
and
.q condor_config.local .
.q Condor_config
may be shared via NFS, but
.q condor_config.local
should be private.
.i Condor
users will need to access the
.i condor
.q bin
and
.q lib
directories.
These may also be shared with remotely mounted file systems.
Finally, each member machine will need to start the
.i condor
daemons at boot time.
This will be done by placing an entry in
.q /etc/rc
or
.q /etc/rc.local
which starts the
.q condor_master .
The master will determine which daemons should be run on its machine,
and will monitor their health, restarting them and mailing system
personnel about problems as necessary.
.pp
In addition to the member machines, you will need to designate one
machine to act as the
.q "central manager" .
The central manager will run two extra daemons which communicate with
and coordinate the daemons on all member machines.
These daemons will also be started and monitored by the master.
.sh 1 "CREATING AND DISTRIBUTING EXECUTABLES"
.np
Create a user account for
.q condor ,
and set up a condor group as well.
Change directory to the condor home directory and run as
.q condor ,
e.g.
.q "su condor" .
.np
Extract the
.q CONDOR
directory from the distribution file, e.g.
.q "uncompress Condor.tar.Z" ,
then
.q "tar xf Condor.tar" .
.np
If you wish to make executables for an architecture/operating system
other than the machine where you have extracted the tape,
you will need to either copy the files to the
.q compilation
machine, or preferably remotely mount them via NFS.
In any case, all the condor files should have owner
.q condor ,
and group
.q condor .
You should always be running as
.q condor
when you make condor executables.
.np
.q Cd
to the
.q CONDOR
directory, and edit
.q GENERIC/condor_config .
Set
.q CONDOR_HOST
to the hostname where the
central manager daemons will run.
We recommend setting this up as a network alias,
so you can easily change which machine
runs the central manager daemons later.
Set
.q CONDOR_ADMIN
to the electronic mailing address of the person responsible for
maintenance of
.i condor
at your site.
We recommend setting this to a mailing list, again so you
can easily change who the actual recipient is at a later date.
Set
.q RELEASEDIR
to the pathname where you want condor users
to access the executables and libraries.
This should probably not be the condor source directory.
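For example, assuming the
.q "name = value"
syntax of the template file, the three settings might end up
looking like this (the values shown are placeholders, not defaults):
.(q
CONDOR_HOST = condor.cs.example.edu
.br
CONDOR_ADMIN = condor-admin@cs.example.edu
.br
RELEASEDIR = /usr/uw/condor
.)q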
.np
Create an object tree for the specific architecture/operating system type
for which you are creating executables.
Do this by running
.q "NewArch <ARCH> <OPSYS>"
in the
.q CONDOR
directory.
The
.q ARCH
and
.q OPSYS
arguments must be chosen from the above table.
This will create a directory called
.q "<ARCH>_<OPSYS>"
containing subdirectories for all of the
.i condor
object files.
.np
.q Cd
to the
.q "<ARCH>_<OPSYS>"
directory and compile all of the
.i condor
programs by typing
.q "make opt=release" .
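For example, building for a Sun 4 running SunOS 4.0, steps 5 and 6
amount to something like:
.(q
cd ~condor/CONDOR
.br
NewArch SPARC SUNOS40
.br
cd SPARC_SUNOS40
.br
make opt=release
.)q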
.np
If you want to make the executables accessible via a symbolic link,
or distribute them to a fileserver, you can edit the template
.q INSTALL
script to do this for your situation.
If you will be using the compilation machine as a fileserver for
.i condor
executables, you should grant the necessary permissions in
.q /etc/exports .
.np
You will also need to install the
.i condor
man pages.
These will be found in
.q CONDOR/doc/man/{manl,catl} .
The exact commands will vary somewhat depending on the situation at your site.
If you mount your man pages from a shared fileserver, the commands may look something
like this:
.(q
rcp manl/* <fileserver>:/usr/man/manl
.br
rcp catl/* <fileserver>:/usr/man/catl
.)q
.np
To make and install executables for other architecture/operating system
types, go back to step 3.
.sh 1 "ADDING MEMBER MACHINES TO THE CONDOR POOL"
.lp
Complete the following steps on each machine you want to add to the
condor resource pool.
Add the machine which will act as the
.q "central manager"
first.
N.B.: all of the steps in setting up a member of the condor pool
require you to operate as the super user.
.np
Create an account for
.q condor
on the member machine.
Be sure to use the same user and group id's on all member machines.
.np
If you are planning to access the condor executables on this machine
via a remotely mounted file system, make sure that file system is
currently mounted,
and that there is an appropriate entry in
.q /etc/fstab
so that it will get mounted whenever the machine is booted.
.(b L
For example, on a Sun 3 running SunOS 3.2 the fstab entry might look something
like:
.(q
<fileserver>:<some path>/MC68020_SUNOS32/release <RELEASEDIR> nfs ro 0 0
.)q
.)b
.np
Run the script
.q Condor.init .
This will link
.q condor_config
to a site-specific version of that file,
and create the
.q log ,
.q spool ,
and
.q execute
directories with correct ownership and permissions.
.np
Run the script
.q Condor.on .
This will create and edit
.q condor_config.local
setting
.q START_DAEMONS
to
.q True
so that the condor daemons are able to run,
then it will actually start them.
.np
At this point the member machine should be fully operational.
On all machines you should find the
.q condor_master ,
.q condor_startd ,
and
.q condor_schedd
running.
Machines which run the X window system
should also be running the
.q condor_kbdd .
Additionally the
.q "central manager"
machine should be running the
.q condor_collector
and
.q condor_negotiator .
You can check that the proper daemons are running with
.(q
ps -ax | egrep condor
.)q
You should also run
.q condor_status
to see that the new machine shows up in the resource pool.
If you wish to run some trivial jobs to check operation of
all the
.i condor
software, example user programs and a
.q "job description"
file have been compiled and are provided in the
.q <ARCH>_<OPSYS>/client
directory.
.np
Add lines to
.q /etc/rc
or
.q /etc/rc.local
which will start
.q condor_master
at boot time.
.(b L
The entry will look something like this:
.(l
if [ -f /usr/uw/condor/bin/condor_master ]; then
        /usr/uw/condor/bin/condor_master; echo -n ' condor' >/dev/console
fi
.)l
.)b
Note: do not attempt to run this command now, since
condor is already running.
.sh 1 EXAMPLE
.pp
Figure 1 presents a simplified version of how
.i condor
is made and installed at the University of Wisconsin.
We keep all of the
.i condor
source, objects, and the master copies of the executables on
.q shorty
which is a
Sequent Symmetry machine running Dynix.
Users on all of our machines access the
.i condor
executable and library directories as
.q /usr/uw/condor/{bin,lib} .
In some cases this is accomplished by a symbolic link to the
<releasedir>, and in other cases it is done by a remote file
system mount either directly to shorty, or to a copy on a
separate fileserver.
.pp
The <releasedir> for the I386_DYNIX version of the executables and
libraries is
.q ~condor/CONDOR/I386_DYNIX/release
on shorty.
Shorty's users access those executables and libraries via
a symbolic link as
.q /usr/uw/condor/{bin,lib} .
.pp
We use
.q Bugs
which is a VAX running 4.3BSD
to create the
.i condor
software for vaxen running 4.3 by remotely mounting
.q ~condor/CONDOR
from shorty.
From within the
.q CONDOR
directory on bugs, we ran
.q "NewArch VAX BSD43" ,
creating the
.q VAX_BSD43
directory tree.
The <releasedir> for the VAX_BSD43 version of the executables and
libraries is
.q ~condor/CONDOR/VAX_BSD43/release .
We distribute that to a fileserver for use by all of the 4.3 vaxen
in the department, including bugs.
Users on these machines access the
.i condor
software as
.q /usr/uw/condor/{bin,lib}
just like on shorty, but this time it is through a remote file system.
.(b
.GS C
file example.grn
3 8
4 12
height 3.0
.GE
.)b
.sh 1 "Copyright Information"
.lp
Copyright 1986, 1987, 1988, 1989 University of Wisconsin
.lp
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of the University of
Wisconsin not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission. The University of Wisconsin makes no representations about
the suitability of this software for any purpose. It is provided "as
is" without express or implied warranty.
.lp
THE UNIVERSITY OF WISCONSIN DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS. IN NO EVENT SHALL THE UNIVERSITY OF WISCONSIN BE LIABLE FOR
ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
.lp
.ta 10n
Authors: Allan Bricker and Michael J. Litzkow,
.br
University of Wisconsin, Computer Sciences Dept.