- Newsgroups: comp.parallel
- Path: sparky!uunet!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!gatech!hubcap!fpst
- From: cap@ifi.unizh.ch (Clemens Cap)
- Subject: Re: Parallel Programming Platform
- Message-ID: <1992Sep14.135936.7893@ifi.unizh.ch>
- Sender: fpst@hubcap.clemson.edu (Steve Stevenson)
- Organization: University of Zurich, Department of Computer Science
- References: <1992Sep9.124621.6184@ifi.unizh.ch> <1992Sep11.121758.16815@hubcap.clemson.edu>
- Date: Mon, 14 Sep 92 13:59:36 GMT
- Approved: parallel@hubcap.clemson.edu
- Lines: 101
-
- In article <1992Sep11.121758.16815@hubcap.clemson.edu> kaminsky-david@CS.YALE.EDU (David Kaminsky) writes:
- >I'd just like to note (as is noted in Cap's paper) that the
- >performance data for "Linda" given in the <Parform> paper are actually
- >data for POSYBL (a public domain version of Linda).
- >
- >As a result, these times almost certainly do not reflect the
- >performance achievable with optimized implementations of Linda like those
- >developed by Yale and SCA.
-
- Thank you very much for your remarks. We will try to improve this. In fact
- we are very interested in comparisons of message passing paradigms with
- shared memory and tuple space concepts. Up to now our experiments indicate
- advantages for message passing.
-
- One of the problems associated with commercial Linda, and indeed the reason
- for the development of POSYBL, is the cost. Since POSYBL was
- the only version of LINDA available to us in time and within
- the limits of our funding, our measurements were made with POSYBL.
-
- We are presently trying to remedy this problem and are looking for ways
- to obtain better versions of LINDA. (Who can help out?) In the meantime
- we would be glad for any opportunity to have our codes run on better
- LINDA systems than ours.
-
- > Also in the paper:
- >
- >"The bottleneck of LINDA in a distributed environment
- >is the concept of tuple space, especially the necessary
- >scanning operations to find tuples of certain format".
- >
- > Much of the Linda tuple matching is done at compile
- >time (see Nick Carriero's Thesis, Yale University). "Scanning"
- >is not necessary. In addition, modern network Linda systems
- >distribute tuple space eliminating the bottleneck.
- >
-
- We do realize the efforts made in compile time analysis of tuple
- matching. However, we feel that this option can only be exploited
- in some special applications, or in code specifically prepared for this
- analysis by the programmer. Publications on LINDA usually present such
- examples. If the number of machines participating in the computation is
- known only at runtime, and if a suitable parallelization technique is used,
- many matchings can only be made at runtime.
-
- Distributing tuple space may of course help to eliminate the bottleneck
- to some extent. On the other hand, we then need mechanisms for searching
- those distributed tuple spaces and for keeping them consistent. This produces
- further network traffic and slows down the computation again.
- This becomes a problem especially when dealing with 25 or more
- processors. We have not found network-LINDA measurements for such numbers of
- processors in the literature and, as outlined above, have not been able to
- make such studies ourselves. However, the Parform has been evaluated on
- systems of up to 40 processors and more. Special techniques of the Parform
- were developed to ensure that in such systems Ethernet congestion still
- remains acceptable. We do not feel that there is bandwidth left on the net
- for protocols distributing tuple space.
-
- In parallel computing in distributed workstation environments the main
- bottleneck is communication bandwidth. Therefore, systems which communicate
- only those data items which must be communicated should be
- superior to systems which carry additional administrative overhead.
- Of course, many organizational aspects only become clear at runtime and
- cannot be dealt with by compile time analyses (such as compile time
- tuple matching in LINDA). In tuple space systems this may produce
- considerable overhead due to tuple space operations. Message passing
- systems have their problems too, but not this one, since
- runtime organizational information can be structured much more efficiently
- than in tuple space systems.
-
- It is the main goal of the Parform to reduce this communication
- overhead and to obtain maximal speedup. The fact that our speedup curve
- almost perfectly matches the speedup curve of a transputer multiprocessor
- system shows that the Parform indeed gets the performance of a
- tightly coupled multiprocessor out of a distributed workstation
- environment. Even in our present prototype version there are still a
- number of things we could optimize.
-
- We will soon make measurements to further improve the load balancing
- mechanisms of the Parform, and we will study scalability up to 100
- workstations. We hope to have better versions of LINDA available by then
- for a better comparison. Until then we share the difficulties and problems
- of the academic users and the creators of POSYBL in obtaining commercial
- versions of LINDA.
-
- Clemens.
-
- --
- * Dr. Clemens H. CAP cap@ifi.unizh.ch (email)
- * Ass. Professor for Formal Methods in CS +(1) 257-4326 (office)
- * Dept. of Computer Science +(1) 322 02 19 (home)
- * University of Zurich +(1) 363 00 35 (fax)
- * Winterthurerstr. 190 CH-8057 Zurich, Switzerland
- * Motto: "Please do not read the last line of this signature".
-