Newsgroups: comp.lang.functional
Path: sparky!uunet!cs.utexas.edu!torn!news.ccs.queensu.ca!qucis.queensu.ca!skill
From: skill@qucis.queensu.ca (David Skillicorn)
Subject: Bird-Meertens Formalism and Parallelism
Message-ID: <1992Jul21.172747.7359@qucis.queensu.ca>
Organization: Computing & Information Science, Queen's University
Date: Tue, 21 Jul 1992 17:27:47 GMT
Lines: 64

Getting parallelism out of a functional language isn't nearly as easy
as it appears. As far as I can tell there are really only two routes:
using strictness or using speculation. Both have problems.

Strictness depends on having lots of strict primitives or a compiler
that can detect strictness well. Proponents of higher-order functional
programming seem to have gone for compiler-detected strictness.
The Bird-Meertens formalism goes for a fixed set of second-order
functions chosen to express massive strictness. Both choices are
sensible, and arguments about the costs and benefits of higher-orderness
have been going on for at least 15 years without any real resolution.

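As a rough illustration of why strictness matters here (in Haskell, using
GHC's Control.Parallel combinators; the function name is mine, just for
the example), strictness information is what tells us that evaluating an
argument in parallel is not wasted work:

    import Control.Parallel (par, pseq)

    -- Illustrative sketch only. If f is known to be strict in its first
    -- argument, x will certainly be demanded, so sparking it for parallel
    -- evaluation wastes no work. Without that strictness information the
    -- spark is pure speculation and may have to be discarded.
    strictApply2 :: (a -> b -> c) -> a -> b -> c
    strictApply2 f x y = x `par` (y `pseq` f x y)
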
Speculation probably deserves more attention than it has received,
but there are obvious problems with catching up with and killing off
rogue speculation.

Earlier messages have explained the construction of categorical
data types and some of the good things that go along with it. I'll
fill in some stuff to do with parallelism that hasn't already been
mentioned.

The CDT construction produces for each new type a generalized map
operation and a generalized reduction operation. These have the
obvious meaning on lists or bags. They do get more subtle when the
data type being considered is more complex (such as trees, see
Jeremy Gibbons' thesis from Oxford, or arrays, see Banger's paper in
the proceedings of the ATABLE meeting last month). However, the
important thing is that these operations are both inherently parallel.
A homomorphism theorem for each constructed type assures us that
any homomorphism can be expressed as the composition of a generalized
map and a generalized reduction; so parallelism comes easily.

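For lists this factorization looks roughly as follows (Haskell-like
notation; the names hom and sumSquares are mine, just for illustration):

    -- A list homomorphism h is determined by its action on the constructors:
    --   h []         = e
    --   h [a]        = f a
    --   h (xs ++ ys) = h xs `op` h ys   -- op associative, with unit e
    --
    -- The homomorphism theorem says any such h factors as a reduction
    -- after a map, and both parts are inherently parallel:
    hom :: (b -> b -> b) -> b -> (a -> b) -> [a] -> b
    hom op e f = foldr op e . map f

    -- Example: summing the squares of a list.
    sumSquares :: [Int] -> Int
    sumSquares = hom (+) 0 (^ 2)

The map part applies f to every element independently, and the reduction
part can combine the results in a balanced, logarithmic-depth tree because
op is associative.
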
More importantly, homomorphisms reflect the locality behaviour of
the constructors of a type. A homomorphism replaces each constructor
by a new operation. So if the type's constructors join objects at a
single point when building new objects, then evaluating a homomorphism
only requires computation, at that same point, between the results of
the homomorphism applied to the subobjects.

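Continuing the illustrative Haskell sketch, here is what that looks like
for a hypothetical binary-tree type (again, the names are mine):

    -- Node joins two subtrees at a single point.
    data Tree a = Leaf a | Node (Tree a) (Tree a)

    -- A tree homomorphism replaces Leaf by f and Node by g.
    treeHom :: (a -> b) -> (b -> b -> b) -> Tree a -> b
    treeHom f g (Leaf a)   = f a
    treeHom f g (Node l r) = treeHom f g l `g` treeHom f g r

    -- Example: the depth of a tree.
    depth :: Tree a -> Int
    depth = treeHom (const 1) (\l r -> 1 + max l r)

The two recursive calls are independent, and the only combining step is g,
applied exactly where Node joined the two subobjects, so that is the only
place where communication is needed.
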
This all means that we can look at the constructors and deduce the
communication pattern that will be used to evaluate any homomorphism.
If the interconnection topology of our target includes this communication
pattern as a (sub)topology, then homomorphisms will use only local
communication and we can treat the target as a free-communication
machine. This dramatically simplifies working out the time complexity of
computations. It also means that computations in the BMF style can
be relatively architecture-independent, since they will work the same
on any architecture containing an appropriate subtopology.

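For example, with the illustrative treeHom above, the two subcomputations
at each Node can run at the same time, so (assuming unit-cost operations
and a target whose topology contains the tree pattern) the parallel time
is just the depth of the argument:

    -- A rough cost model for treeHom under those assumptions: the time
    -- at a Node is one combining step plus the slower of the two
    -- independent subcomputations.
    parSteps :: Tree a -> Int
    parSteps (Leaf _)   = 1
    parSteps (Node l r) = 1 + max (parSteps l) (parSteps r)

On a balanced tree with n leaves this is O(log n) combining steps, every
one of them using only local communication.
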
Of course, all the other good features of the Bird-Meertens approach
(transformation by equations that come for free, guarantees of
completeness, abstraction from the details of decomposition
and communication) apply in a parallel setting as well. Arguably
BMF is a great model for parallel computation because it has
just about the right level of abstraction.

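As one concrete instance of the equations that come for free, every list
homomorphism satisfies promotion laws like the following (written here as
Haskell Boolean properties; the names are mine):

    -- Map promotion: mapping over a concatenation equals concatenating
    -- the mapped pieces, which justifies splitting data across processors.
    mapPromotion :: Eq b => (a -> b) -> [[a]] -> Bool
    mapPromotion f xss = map f (concat xss) == concat (map (map f) xss)

    -- Reduce promotion: reducing a concatenation equals reducing the
    -- partial reductions (op associative with unit e), the law behind a
    -- "local reduce, then global reduce" parallel implementation.
    reducePromotion :: Eq a => (a -> a -> a) -> a -> [[a]] -> Bool
    reducePromotion op e xss =
      foldr op e (concat xss) == foldr op e (map (foldr op e) xss)
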
You can get more details on all of this by ftping papers from
ftp.qucis.queensu.ca in directory pub/skill. There's also an
extensive BibTeX bibliography on parallel programming models there.

-david skillicorn
skill@qucis.queensu.ca