Xref: sparky comp.parallel:2509 comp.lang.misc:3593
Newsgroups: comp.parallel,comp.lang.misc
Path: sparky!uunet!destroyer!gatech!hubcap!fpst
From: halstead@crl.dec.com (Bert Halstead)
Subject: Supercomputing '92 workshop: Data-Parallel Languages
Message-ID: <1992Nov11.195037.22368@crl.dec.com>
Keywords: data-parallel, programming languages
Sender: news@crl.dec.com (USENET News System)
Organization: DEC Cambridge Research Lab
Date: Wed, 11 Nov 1992 19:50:37 GMT
Approved: parallel@hubcap.clemson.edu
Lines: 123


Supercomputing '92 workshop: Data-Parallel Languages

If you are attending Supercomputing '92 (in Minneapolis, Nov. 16-20),
we invite you to attend the workshop on data-parallel languages that
will be held from 1:30-5:00 pm on Wednesday, Nov. 18.

The workshop will feature five talks by researchers active in the
field of data-parallel languages, followed by a panel discussion
with these speakers as panelists. The speakers will be

    Maya Gokhale (Supercomputing Research Center)
    Phil Hatcher (University of New Hampshire and Digital
        Equipment Corporation)
    Bob Morgan (Digital Equipment Corporation)
    Anthony Reeves (Cornell)
    Hans Zima (University of Vienna)

Each presentation will consist of a 20-minute talk followed by a
10-minute question/discussion period.


Workshop Objectives

The data-parallel programming model has received widespread attention
as a solution to the problem of programming massively parallel
machines. In this model, in contrast to function-level parallelism,
all the parallel processes associated with a job execute the same
program on different instances of data. The underlying implementation
may require a single thread of control, or may permit multiple
``program counters,'' allowing the processes to take different paths
through the program. Platforms for data-parallel programming include
SIMD machines as well as massively parallel MIMD machines.
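
For concreteness, here is a minimal sketch of the data-parallel style
in plain C (the PE count, the array names, and the simulation of PEs
by an outer loop are inventions of this illustration, not the syntax
of any particular language): every processing element runs the same
program over its own slice of the data.

    #include <stdio.h>

    #define NPES 4    /* number of (virtual) processing elements -- illustrative */
    #define N    16   /* global problem size -- illustrative */

    /* The "program" that every PE executes: y[i] = a*x[i] + y[i] on its
       own slice.  On a real machine each PE would run this concurrently;
       here the PEs are simulated by the loop in main(). */
    static void pe_program(int pe, double a, double *x, double *y)
    {
        int chunk = N / NPES;             /* assume NPES divides N evenly */
        int lo = pe * chunk, hi = lo + chunk;
        for (int i = lo; i < hi; i++)     /* same code, different data */
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0; }
        for (int pe = 0; pe < NPES; pe++) /* stand-in for parallel execution */
            pe_program(pe, 2.0, x, y);
        printf("y[%d] = %g\n", N - 1, y[N - 1]);  /* 2*15 + 1 = 31 */
        return 0;
    }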

This workshop will bring together researchers and practitioners
interested in language design and efficient compilation of
data-parallel languages. There are many dimensions to the problem of
providing an efficient, expressive data-parallel programming
environment portable across a range of platforms.

One important issue is the ``world view'' supported by a data-parallel
language, variously labeled global/local or macroscopic/microscopic.
In the global (macroscopic) view, the programmer thinks in terms of
data arrays being distributed across the processor array. The
programmer may specify how the data arrays are to be spread across the
processor array. Operations on the data arrays are performed in
aggregate terms. Access to non-local data may be transparent to
the programmer, or alternatively, pre-defined functions that effect
aggregate data motion may be used. CM Fortran, Fortran D, Vienna
Fortran, Kali, Crystal, and the Cornell Paragon language are examples
of data-parallel languages supporting a global view.
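
As a small sketch of the global view, again in C (the cshift helper
below is an invention of this example, mimicking the kind of
pre-defined aggregate data-motion function mentioned above; in a real
global-view language it would be a built-in, and any interprocessor
data motion it implies would be handled by the compiler):

    #include <stdio.h>

    #define N 8   /* global array length -- illustrative */

    /* Aggregate circular shift: dst[i] = src[(i + s) mod N].  In a
       global-view language this is a single pre-defined operation; any
       interprocessor data motion it implies is hidden from the user. */
    static void cshift(double *dst, const double *src, int s)
    {
        for (int i = 0; i < N; i++)
            dst[i] = src[(i + s + N) % N];
    }

    int main(void)
    {
        double a[N], west[N], east[N], b[N];
        for (int i = 0; i < N; i++) a[i] = i;

        /* Global view: the whole-array expression
           b = (cshift(a,-1) + cshift(a,+1)) / 2, written in terms of
           entire (conceptually distributed) arrays at once. */
        cshift(west, a, -1);
        cshift(east, a, +1);
        for (int i = 0; i < N; i++)
            b[i] = 0.5 * (west[i] + east[i]);

        printf("b[0] = %g\n", b[0]);  /* 0.5 * (a[N-1] + a[1]) = 4 */
        return 0;
    }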

In the local (microscopic) view, the programmer thinks in terms of one
or more arrays of (virtual) processing elements (PEs), each
performing a computation in lock step with all the other PEs. There
is a syntactic distinction between access to locations in local memory
and access to locations in the memory of another processor. There is
a notion of a processor ID (in some cases more than one ID) that can
be used as data in the computation as well as for inter-PE
communication and host-PE communication. The data-parallel C
extensions, such as C*, MPL, MultiC, and DBC, tend to support the
local view.
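
A toy rendering of the local view, simulated within a single C process
(the pe struct, NPES, and the neighbor-fetch idiom are inventions of
this sketch, standing in for the syntax a local-view language
provides):

    #include <stdio.h>

    #define NPES 4   /* size of the (virtual) PE array -- illustrative */

    struct pe {
        int id;         /* processor ID, usable as ordinary data */
        double local;   /* a location in this PE's local memory */
    };

    int main(void)
    {
        struct pe p[NPES];
        double fetched[NPES];

        /* Each PE initializes its local memory using its own ID as data. */
        for (int i = 0; i < NPES; i++) { p[i].id = i; p[i].local = 10.0 * i; }

        /* One lock-step step: every PE reads its east neighbor's memory.
           Note the distinction the text describes:
             p[i].local      -- access to local memory
             p[east].local   -- access to another PE's memory */
        for (int i = 0; i < NPES; i++) {
            int east = (p[i].id + 1) % NPES;  /* ID used to address a neighbor */
            fetched[i] = p[east].local;       /* "remote" access */
        }
        for (int i = 0; i < NPES; i++)
            p[i].local += fetched[i];

        printf("PE 0 now holds %g\n", p[0].local);  /* 0 + 10 = 10 */
        return 0;
    }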

In terms of implementation, the distribution of shared data to
processors is critical to efficient execution of a data-parallel
program. Current research efforts attempt to infer efficient mappings
of data to processors and the resulting interprocessor communication.
However, most production compilers allow (or require) the user to
specify the mapping and re-mapping of data.
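
To make the mapping question concrete, the two classic distributions
can be written as small C functions (block_owner and cyclic_owner are
names invented for this sketch; compilers implement the same arithmetic
internally or expose it through user distribution directives):

    #include <stdio.h>

    /* Which processor owns global index i, given n elements on p
       processors?  These are the standard block and cyclic mappings. */

    static int block_owner(int i, int n, int p)
    {
        int chunk = (n + p - 1) / p;   /* ceiling(n/p) elements per processor */
        return i / chunk;
    }

    static int cyclic_owner(int i, int n, int p)
    {
        (void)n;                       /* the cyclic mapping ignores n */
        return i % p;
    }

    int main(void)
    {
        int n = 10, p = 4;
        printf("index  block  cyclic\n");
        for (int i = 0; i < n; i++)
            printf("%5d  %5d  %6d\n", i, block_owner(i, n, p),
                   cyclic_owner(i, n, p));
        return 0;
    }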

Another issue is the automatic management of synchronization. The
semantics of data-parallel languages specify expression-level
synchronization. This can be implemented efficiently on SIMD
platforms, but is too slow if mapped directly to MIMD processor
arrays. On the latter machines, only the synchronization necessary to
maintain correct semantics, that is, the synchronization associated
with interprocessor communication, appears in the generated program.
Each PE performs its computation independently except when
communication of data values is required. All processors participate
in the communication step, and then resume independent operation.
To reduce the number of synchronization steps, the compiler may apply
transformations to move communication out of loops and to collect
messages to the same destination into blocks for more efficient
interprocessor communication.
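
The payoff of that transformation can be seen in a tiny simulated
example (the send_msg call below is a hypothetical stand-in that
merely counts messages; the point is only the difference in message
counts between the naive version and the hoisted, blocked version):

    #include <stdio.h>

    #define N 1000   /* values a PE needs from one neighbor -- illustrative */

    static long messages = 0;

    /* Hypothetical communication call: one invocation is one message
       (and one synchronization), however many values it carries. */
    static void send_msg(int dest, int nvalues)
    {
        (void)dest; (void)nvalues;
        messages++;
    }

    int main(void)
    {
        /* Direct mapping of the data-parallel semantics: communicate
           inside the loop, one value per iteration. */
        messages = 0;
        for (int i = 0; i < N; i++)
            send_msg(1, 1);
        printf("per-iteration communication: %ld messages\n", messages);

        /* After the compiler moves communication out of the loop and
           blocks messages to the same destination: one message total. */
        messages = 0;
        send_msg(1, N);
        printf("hoisted, blocked communication: %ld message(s)\n", messages);
        return 0;
    }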

This workshop complements the High Performance Fortran workshop
scheduled for Tuesday afternoon, Nov. 17. While the latter
workshop focuses specifically on disseminating information and
collecting feedback about design decisions being made for one
particular data-parallel language (High Performance Fortran), our
workshop will explore the landscape of data-parallel languages more
broadly, including (1) ideas not yet ``mainstream'' enough to be
considered for High Performance Fortran, (2) ideas from the C world as
well as the Fortran world, and (3) implementation as well as
language-design issues.

Topics of interest include

* Existing data-parallel languages and language extensions.

* The global/local view dichotomy: which is ``better''? Can they
  coexist?

* Standardization: is the field ready for standardization? Which
  language features (if any) have a high probability of being
  standardized?

* Portability within a machine class (SIMD or MIMD) as well as
  between classes (can the same program run efficiently on a MasPar
  and on an Intel Delta?).

* Mapping of data to processors (automatic, user-directed).

* Efficient compilation for SIMD and MIMD targets.

* Language/compiler support for irregular computations.


-Maya Gokhale
 Bert Halstead

 workshop organizers
