- Xref: sparky comp.parallel:2048 comp.arch:9220 comp.lang.fortran:3424
- Newsgroups: comp.parallel,comp.arch,comp.lang.fortran
- Path: sparky!uunet!gatech!hubcap!fpst
- From: tomf@cs.man.ac.uk (Tom Franklin)
- Subject: call for participation: Virtual Shared Memory Symposium
- Message-ID: <1992Sep4.121521.18429@hubcap.clemson.edu>
- Sender: news@cs.man.ac.uk
- Organization: Clemson University
- Date: 4 Sep 92 10:28:40 GMT
- Approved: parallel@hubcap.clemson.edu
- Lines: 667
-
-
-
- Last call for participation in
-
- Virtual Shared Memory Symposium
- ===============================
-
- to be held at
- The Centre for Novel Computing
- The University of Manchester
- England
-
- on
- 17th and 18th September
-
- In Conjunction with
- SERC - NACC
- and
- BCS - PPSG
-
-
- Contents:
- Symposium timetable
- biographies of speakers and abstracts of talks
- applications details
-
-
- Virtual Shared Memory Symposium
-
- 17th and 18th September
-
- Programme
-
- Thursday 17 September
-
-
-
- 9:00 - 10:00 Registration
-
- 10:00 - 10:15 Welcome by John Gurd, CNC
-
- 10:15 - 11:05 Nic Holt, ICL
- Virtual Shared Memory in Commercial Applications
-
- 11:05 - 11:30 Tea and coffee
-
- 11:30 - 12:30 Vadim Abrossimov, Chorus Systemes
- Distributed Virtual Memory in Chorus
-
- 12:30 - 13:30 Lunch
-
- 13:30 - 14:30 Steve Frank, KSR
- Memory System Architecture and Programming
- Environment of the KSR1
-
- 14:30 - 15:15 Peter Bird, ACRI
- Proactive Systems
-
- 15:15 - 16:00 Tea and coffee
-
- 16:00 - 17:00 William Jalby, University of Rennes
- Replacement Policies For Hierarchical Memory Systems
-
- 17:00 - 18:00 Chris Wadsworth
- VSM: The Good, The Bad and The Unknown
-
-
-
- Friday 18 September
-
- 9:30 - 10:20 Clemens-August Thole, GMD
- High Performance Fortran and its Relevance for VSM
- Architectures
-
- 10:20 - 11:10 Mike Delves, University of Liverpool
- Development of an HPF-Conformant Parallel Fortran90
- Compiler
-
- 11:10 - 11:40 Tea and coffee
-
- 11:40 - 12:30 Sven Hammarling, NAG
- The development of a Numerical Software Library for
- Parallel Machines
-
- 12:30 - 13:30 Lunch
-
- 13:30 - 14:20 Iain Duff, Rutherford Appleton Laboratory
- The Solution of Sparse Systems on Parallel Computers
-
- 14:20 - 15:10 Harry Wijshoff, University of Leiden
- Implementation issues of Sparse Computations
-
- 15:10 - 15:50 Tea and coffee
-
- 15:50 - 16:40 David Culler, University of California at Berkeley
- Active Messages: a Fast, Universal Communication
- Mechanism
-
- 16:40 - 17:00 Closing Address - John Gurd, CNC
-
-
-
- Virtual Shared Memory Symposium
- Speakers and Abstracts:
-
-
- Professor Nic Holt
- ==================
-
- Virtual Shared Memory in Commercial Applications
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- Nic Holt is a System Designer at ICL and was responsible for the primitive
- architecture of the ICL Series 39 which features processing nodes
- interconnected by Optical Fibre, providing Virtual Shared Memory for
- commercial applications.
-
- Abstract
-
- The ICL Series 39 system, originally designed in 1982, has an architecture
- in which multiple processors each with closely coupled memory, known as
- "processing nodes", are interconnected by an optical fibre network. Series
- 39 supports Virtual Shared Memory by a technique known as Replicated
- Storage: pages of virtual memory may be replicated across multiple nodes.
- Hardware mechanisms in each node generate protocol on the network to
- propagate memory updates and provide a basic synchronisation mechanism
- between the nodes. The system architecture and a number of existing
- commercial applications will be described in detail. The VSM facilities of
- the EDS machine, a distributed memory parallel processor, will also be
- mentioned. The characteristics of various shared memory schemes and their
- use by applications will be discussed.
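The Replicated Storage scheme described above can be caricatured in a few lines: a page has copies on several nodes, and every write is propagated to all of them so the nodes keep a coherent view. This is an invented toy sketch assuming an update-based protocol, as the abstract suggests; the class and method names are not ICL's.

```python
class ReplicatedPage:
    """Toy model of one virtual page replicated across several nodes."""

    def __init__(self):
        self.copies = {}                     # node id -> local copy (dict)

    def attach(self, node):
        # A node acquiring the page receives the current contents.
        source = next(iter(self.copies.values()), {})
        self.copies[node] = dict(source)

    def write(self, node, offset, value):
        # A write by one node is propagated to every replica -- the role
        # played in hardware by protocol on the inter-node network.
        assert node in self.copies
        for copy in self.copies.values():
            copy[offset] = value

    def read(self, node, offset):
        return self.copies[node].get(offset)

page = ReplicatedPage()
page.attach("node_a")
page.attach("node_b")
page.write("node_a", 0x10, 42)
print(page.read("node_b", 0x10))   # 42 -- both replicas see the update
```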
-
-
-
- Vadim Abrossimov
- ================
-
- Distributed Virtual Shared Memory in Chorus
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- Vadim Abrossimov is one of the key architects of the Chorus micro-kernel.
- He concentrated on the design and implementation of the CHORUS distributed
- Virtual Memory Management.
-
- He joined Chorus Systemes at its creation in 1986 after spending two years
- at INRIA working on object oriented systems.
-
- Abstract
-
- Many high-end open systems under development today are characterized by the
- innovative use of multiple processors in distributed memory configurations
- ("multicomputers"). These parallel processors provide wider I/O throughput
- for mainframe-power UNIX systems, redundancy for reliable OLTP systems and
- enormous computation power for massively parallel supercomputer systems.
-
- To master these more complicated environments and get these more
- complicated machines to market faster, system builders need an operating
- system development environment that is as powerful as the development
- environment provided to applications builders by UNIX.
-
- Modern operating systems have to satisfy certain design objectives to meet
- these requirements and support the inherent distributed environment of
- high-end systems. And they must do this while providing complete BCS/ABI
- compatibility with UNIX and Open Systems standards.
-
- One approach to modern operating system design is to build the distributed
- operating system as a set of independent system servers using the
- primitive, generic services of a micro-kernel. The micro-kernel provides a
- virtual machine for processor use, memory allocation and communication
- between operating system components.
-
- To provide scalability and portability, a modern operating system should
- offer complete support for portability not only over a range of processors,
- but also over a range of hardware system-level architectures. It should
- also offer transparent re-usability of system components, modularity,
- scalability of hardware configurations and system services, and structured
- integration of device drivers and specific hardware features. To enable
- system services, a modern operating system should provide transparent
- distribution of services, fault tolerance, security, performance
- flexibility and full compatibility with standard operating system
- interfaces.
-
- The presentation outlines how a modern micro-kernel based operating system
- architecture such as CHORUS can meet these needs, in particular by looking
- at how it provides virtually shared memory over distributed configurations.
-
-
-
- Steve Frank
- ===========
-
- Memory System Architecture and Programming Environment of the KSR1
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- Mr. Frank is a co-founder of Kendall Square Research and made a major
- contribution to the architecture, design partitioning, and technology
- selection of the KSR1. He is presently involved in the definition of future
- products. Prior to joining KSR, he contributed to the architecture and
- implementation of three multiprocessors: Encore's MultiMax, the Synapse N+1
- and a multiprocessor for an experimental PBX at Rolm. Mr. Frank earned his
- B.S. and M.S. degrees in Electrical Engineering from the Massachusetts
- Institute of Technology.
-
- Abstract
-
- The KSR1 is the first highly parallel computer system that unites the power
- of parallel processing with a conventional shared memory software
- development environment to satisfy the production requirements of both
- commercial and technical applications. Shared memory is achieved through a
- technique called ALLCACHE memory. This mechanism, implemented in hardware,
- builds the abstraction of shared memory on a set of distributed memory
- units which are managed as coherent caches. The underlying hardware also
- includes a full 64 bit superscalar processor.
-
- The talk will start by discussing related research which led to the
- development of the KSR1. The KSR1 programming environment and ALLCACHE
- memory architecture will be presented as a context to describe key
- architecture and implementation issues.
-
-
-
- Peter Bird
- ==========
-
- Proactive Systems
- ~~~~~~~~~~~~~~~~~
-
- Biography
-
- Peter Bird received his PhD in Computer Science from the University of
- Michigan. He studied retargetable, pattern directed code generators which
- optimised pipeline scheduling.
-
- He designed and developed compilers for a parallel pipelined machine, for a
- data-flow specification language for ODEs used in real-time applications.
- Currently he is computer system architect for ACRI in Lyon, France, where a
- multi-nodal, high performance system is being developed.
-
- Abstract
-
- Not yet available.
-
-
- William Jalby
- =============
-
- Replacement Policies For Hierarchical Memory Systems
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- William Jalby began his career as a researcher at INRIA, then spent 18
- months at the University of Illinois (Center for Supercomputing Research and
- Development) and in 1988 was appointed as a professor of Computer Science
- at the University of Rennes. His research interests mainly concern memory
- organization (architecture, software and performance evaluation). He is the
- vice chairman of the ESPRIT BRA APPARC project.
-
- Abstract
-
- One of the key components in determining the performance of a hierarchical
- memory system is the strategy used for replacing pages (blocks). In the
- general case (i.e. without any specific knowledge on the memory reference
- pattern), simple heuristics like LRU (Least Recently Used) exhibit
- relatively good behaviour. On the other hand, if the entire memory pattern
- is known a priori, an optimal replacement strategy can be used (Belady's
- MIN algorithm). In this talk, the old problem of replacement strategies is
- revisited but focusing on regular loop structures such as those arising in
- many numerical codes. In such cases, memory reference patterns can be
- determined at compile time and can be used to derive efficient replacement
- policies. After noting that LRU can exhibit pathological behaviour for such
- loop structures, we analyse in detail the behaviour of Belady's algorithm.
- In particular, for some simple cases, its (optimal) hit ratio is computed
- as well as its impact on array management. Finally, we describe some simple
- heuristics that achieve hit ratios close to optimal.
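The pathology mentioned above is easy to reproduce. The following toy simulation (not the speaker's own analysis; the trace and sizes are invented for illustration) compares LRU with Belady's MIN on a cyclic reference pattern that just overflows the cache:

```python
# Compare LRU with Belady's MIN (offline optimal) replacement on a
# cyclic reference pattern, the classic case where LRU is pathological.

def lru_hits(trace, capacity):
    cache = []                      # front = least recently used
    hits = 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.remove(page)
        elif len(cache) >= capacity:
            cache.pop(0)            # evict least recently used
        cache.append(page)
    return hits

def min_hits(trace, capacity):
    cache = set()
    hits = 0
    for i, page in enumerate(trace):
        if page in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            # Evict the page whose next use is farthest in the future.
            def next_use(p):
                try:
                    return trace.index(p, i + 1)
                except ValueError:
                    return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return hits

# Cyclic sweep over 5 pages with room for only 4: LRU evicts exactly
# the page that is needed next, so it misses on every reference.
trace = [0, 1, 2, 3, 4] * 20
print(lru_hits(trace, 4))   # 0 hits
print(min_hits(trace, 4))   # far closer to optimal
```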
-
-
-
- Dr. C P Wadsworth
- =================
-
- Virtual Shared Memory: The Good, The Bad, and The Unknown
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- Dr. Chris Wadsworth is leader of the Parallel Processing Group in the
- Informatics Department at RAL, with projects in the systems aspects and
- techniques of parallel programming and the porting of applications
- software. The Group also takes a leading role in projects for Oxford
- Parallel, a joint centre with Oxford University under the DTI/SERC Parallel
- Applications Programme. His present interests focus on the exploitation of
- parallelism, the requirements for portable parallel software, and high
- level performance models for parallel machines.
-
- Abstract
-
- An overview of the advantages, disadvantages, and challenges of virtual
- shared memory computing will be presented. While the main benefits are
- readily appreciated, it is argued that particular challenges remain in the
- evolution of higher-level programming concepts. The role of sharing -- when
- to share, and how to do so safely -- in parallel program designs will be a
- particular topic for discussion.
-
-
-
- Clemens-August Thole
- ====================
-
- High Performance Fortran and its relevance for VSM architectures
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- Clemens-August Thole has worked in the field of programming models and
- applications for distributed memory architectures since 1984. He was
- project manager of the Esprit GENESIS project, which aimed to provide a
- programming environment for parallel architectures. He is currently
- employed by the Gesellschaft fuer Mathematik und Datenverarbeitung (GMD),
- St. Augustin, Germany, and is a member of the core group of the High
- Performance Fortran Forum and chairman of the related European working group.
-
- Abstract
-
- A virtual shared memory programming model for a parallel architecture makes
- porting programs simpler. The limited speed of interconnection networks
- and the clustering of the address space into pages and subpages require
- detailed consideration of the mapping of the data structures onto the
- address space, the tiling of loops and the distribution of threads onto the
- processors.
-
- High Performance Fortran (HPF), as currently being defined by the HPF
- Forum allows the application programmer to specify the mapping of data
- objects and statements to processors by compiler directives.
-
- The presentation gives an introduction to HPF and outlines the possibility
- for exploitation of this information by compilers of virtual shared memory
- architectures.
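As a tiny illustration of the kind of information such a directive conveys, a block distribution maps contiguous chunks of an array onto processors. The formula and names below are this note's own toy sketch, not the HPF Forum's definition:

```python
# Toy sketch of the mapping an HPF "DISTRIBUTE (BLOCK)" style directive
# describes: which processor owns which elements of an array.

def block_owner(i, n, p):
    """Owner of element i of an n-element array over p processors."""
    block = -(-n // p)          # ceiling division: elements per processor
    return i // block

n, p = 10, 4                    # 10 elements over 4 processors
print([block_owner(i, n, p) for i in range(n)])
# -> [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
```

A compiler for a VSM machine could use exactly this kind of mapping to decide which node's pages an array section should be placed on.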
-
-
-
- Mike Delves
- ===========
-
- Development of an HPF-Conformant Parallel Fortran90 Compiler
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- Professor Delves has held the Chair of Computational Mathematics at the
- University of Liverpool since 1969. He is Director of the Centre for
- Mathematical Software Research, and the Transputer Support Centre, both
- self-supporting Research Centres within the University specialising in
- scientific/engineering parallel computing. His interests in computational
- mathematics include Parallel Algorithms, Integral and Partial Differential
- Equations and the Seising of High Level Scientific Languages.
-
- He is a member of ADA Europe and the Ada Numerics Task Force; and a founder
- member of the Esprit SIG on Parallel Languages for Scientific Computing,
- which currently provides a European forum for interacting with the US HPF
- initiative. He has published over 170 papers, is author of two books on
- numerical algorithms, and editor of four others.
-
- Abstract
-
- The High Performance Fortran (HPF) proposals provide a shared-memory style
- of programming in Fortran90, with the ability for the user to supply
- sufficient information to help compilers; the language plus directives
- supports data
- parallel SIMD programs. With NA Software and with input from Inmos, we are
- developing a compiler for an extended Fortran90 language; the code
- includes:
- o Full Fortran90;
- o HPF Directives;
- o MIMD syntax extensions.
-
- The work is taking place within the Esprit Supernode II and PPPE projects;
- target dates for the first compiler release are April 1993 for parallel
- Fortran90; end 1993 for HPF support. This paper gives details of the design
- of the language and compiler, and a summary of progress with its
- development.
-
-
-
- Mr. Sven Hammarling
- ===================
-
- Development of Portable Numerical Software for Parallel Machines
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- Sven Hammarling is currently the Manager of the Numerical Libraries
- Division at the Numerical Algorithms Group in Oxford. He is one of the
- authors of the Level 2 and Level 3 Basic Linear Algebra Subprograms (BLAS),
- is involved in the LAPACK project, which has been developing a linear
- algebra package for high-performance computers, and is Workpackage Manager
- for the Libraries Workpackage on the ESPRIT II project, Supernode II.
-
- Abstract
-
- NAG has always aimed to make their library available on any type of
- computer for which there is reasonable demand, which in practice
- means any computer in widespread use for general purpose scientific
- computing. The NAG Fortran Library is currently available on more than
- fifty different machine ranges, and on something like a hundred different
- compiler versions. Thus portability of the library has always been a prime
- consideration. The advent of vector and parallel computers has required us
- to pay much more careful attention to the performance of the library, and
- the challenge has been to try to satisfy the sometimes conflicting
- requirements of performance and portability.
-
- We shall discuss how we have approached the development of library software
- for modern high-performance machines, concentrating in particular on our
- involvement in the LAPACK project which has been developing a linear
- algebra package.
-
- The features of modern machines that have to be utilized to achieve
- efficiency include vector registers or pipelines, multiple processors and a
- hierarchy of memory. To retain portability in LAPACK, efficiency is
- achieved principally through the use of the Basic Linear Algebra
- Subprograms (BLAS), the matrix-vector operations of the Level 2 BLAS for
- single processor, non-hierarchical memory vector machines and the
- matrix-matrix operations of the Level 3 BLAS otherwise. In the case of the
- Level 2 BLAS this has meant restructuring the algorithms to clearly expose
- the matrix-vector nature of the operations, and in the case of the Level 3
- BLAS has necessitated the design of block algorithms to yield matrix-matrix
- operations.
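The block-algorithm idea can be sketched in a few lines: organise a matrix product as a loop over small sub-matrix products, so that each block, once held in fast memory, is reused many times. This is a pure-Python toy under that assumption, not LAPACK code; all names are invented:

```python
# Blocked matrix multiply: the inner kernel works on b x b sub-matrices,
# the shape of work a Level 3 BLAS call (GEMM) would perform.

def blocked_matmul(A, B, n, b):
    """Compute C = A*B for n x n matrices (lists of lists), block size b."""
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, b):
        for j0 in range(0, n, b):
            for k0 in range(0, n, b):
                # Inner kernel: multiply-accumulate on one block triple.
                for i in range(i0, min(i0 + b, n)):
                    for k in range(k0, min(k0 + b, n)):
                        a_ik = A[i][k]
                        for j in range(j0, min(j0 + b, n)):
                            C[i][j] += a_ik * B[k][j]
    return C

n = 6
A = [[float(i + j) for j in range(n)] for i in range(n)]
B = [[float(i * j % 5) for j in range(n)] for i in range(n)]
# Any block size gives the same result; b = n degenerates to the
# unblocked (Level 2 style) sweep over the whole matrix.
assert blocked_matmul(A, B, n, 2) == blocked_matmul(A, B, n, n)
```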
-
- We shall also consider the impact of Fortran 90, which has recently become
- an ISO standard, on library development, particularly the use of the array
- features for expressing parallelism.
-
-
-
- Iain Duff
- =========
-
- The Solution of Sparse Systems on Parallel Computers
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- Iain S Duff is Group Leader of Numerical Analysis in the Central Computing
- Department at the Rutherford Appleton Laboratory. He is also Project Leader
- of the Parallel Algorithms Group at CERFACS in Toulouse and is a visiting
- Professor of Mathematics at the University of Strathclyde. Duff obtained a BSc
- from the University of Glasgow where he held an IBM Scholarship. He was a
- Carnegie Fellow at Oxford, completed his diploma in Advanced Mathematics in
- 1970, and received his D Phil on "Analysis of Sparse Systems" from the
- University of Oxford in 1972. He was then a Harkness Fellow, visiting Stony
- Brook and Stanford and thereafter spent two years as a lecturer in Computer
- Science at the University of Newcastle before joining the Numerical
- Analysis Group at Harwell in 1975. He moved to his present position in June
- 1990.
-
- He has had several extended visits to Argonne National Laboratory, the
- Australian National University, the University of Colorado at Boulder,
- Stanford University, and the University of Umea.
-
- His principal research interests lie in sparse matrices and vector and
- parallel computing. He has also been involved for many years in the
- development and support of mathematical software, particularly through the
- Harwell Subroutine Library. He has published over 100 papers, has
- co-authored two books, and has been editor or joint-editor of several
- conference proceedings. He is editor of the IMA Journal of Numerical
- Analysis and associate editor of several other journals in numerical
- analysis and advanced scientific computing. He is a fellow of the Institute
- of Mathematics and its Applications, and a member of SIAM, SMAI and SBMAC.
-
- Abstract
-
- The multifrontal technique solves systems of sparse linear equations using
- Gaussian elimination and exploits parallelism both through sparsity via an
- elimination tree (a computational graph) and through use of Level 3 Basic
- Linear Algebra Subprograms.
-
- We briefly describe multifrontal methods and illustrate the benefits of
- parallelism from these two sources by examining the performance of
- multifrontal codes on a range of shared memory architectures. More recently
- we have examined the performance of our codes on the BBN TC 2000, a virtual
- shared memory machine. We show that, although a fairly straightforward
- adaptation of the shared memory code will, as expected, run on the TC 2000,
- design changes which recognize data locality yield significantly improved
- performance. We feel that the issues we raise are important for any virtual
- shared memory environment and that the architecture must still be
- understood if high performance is to be obtained.
-
- Finally we indicate one way in which our techniques can be extended to
- distributed memory architectures or networks of workstations.
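The tree parallelism the abstract mentions can be pictured simply: in a multifrontal factorization, a node of the elimination tree can be processed as soon as all its children are done, so disjoint subtrees proceed concurrently. The small tree below is invented for illustration:

```python
# Group elimination-tree nodes into "waves" that could run concurrently.
# children[node] lists the elimination-tree children of each node.
children = {5: [3, 4], 3: [0, 1], 4: [2], 0: [], 1: [], 2: []}

def schedule_levels(children):
    """Leaves first, root last; nodes in one wave are independent."""
    depth = {}
    def level(node):
        if node not in depth:
            kids = children[node]
            depth[node] = 0 if not kids else 1 + max(level(c) for c in kids)
        return depth[node]
    for node in children:
        level(node)
    waves = {}
    for node, d in depth.items():
        waves.setdefault(d, []).append(node)
    return [sorted(waves[d]) for d in sorted(waves)]

print(schedule_levels(children))   # [[0, 1, 2], [3, 4], [5]]
```

In a real code the Level 3 BLAS supplies the second source of parallelism, inside the dense frontal matrix at each tree node.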
-
-
-
- H.A.G. Wijshoff
- ===============
-
- Implementation Issues of Sparse Computations
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- Professor Harry A.G. Wijshoff has been a full professor in
- computer systems and software at the University of Leiden since July 1,
- 1992. Previously he worked at the University of Illinois, RIACS, NASA Ames,
- and Utrecht University. At the University of Leiden he is the leader of a
- group of 12 scientists in the area of high performance computing and parallel
- processing. He is the coordinator of a recently awarded Esprit III BRA
- project on Performance critical applications of parallel architectures
- (APPARC). His current research interests cover parallel architectures,
- sparse matrix computations, programming environments and performance
- evaluation.
-
- Abstract
-
- In this talk the intrinsic complexity of sparse computations will be
- addressed, together with the consequences of providing architectural support
- for these computations. Sparse computations will be a major challenge for
- shared virtual memory implementations, as they do not lend themselves easily
- to the exploitation of data locality. Ways of overcoming this bottleneck will be
- discussed.
-
-
-
- David E. Culler
- ===============
-
- Active Messages: a fast, universal communication mechanism
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
- Biography
-
- David Culler is an Assistant Professor in the Computer Science Division of
- the University of California at Berkeley and a Presidential Faculty Fellow.
- His research interests include computer architecture, resource management,
- and the implementation of a wide range of parallel programming models,
- including dataflow, functional programming, hardware description languages,
- and explicit distributed memory.
-
- Abstract
-
- Multiprocessor architectures are traditionally divided into three
- categories based on their intended programming model: shared memory,
- message passing, and "novel". The last is a euphemism for dataflow,
- reduction, and the like. However, careful study reveals that the
- implementations of these apparently diverse architectures are surprisingly
- similar. Active Messages is a simple communication primitive that captures
- this common element. The implementation of Active Messages on current
- message passing machines (CM-5 and nCUBE/2) is an order of magnitude faster
- than the send/receive model for which the machines were designed. The
- universality of the mechanism has been demonstrated by realizations of
- shared memory, message passing, and dataflow models on the same machine. In
- addition, a variety of hybrid models have been developed, including a
- split-phase global memory extension to C, called Split-C. The goal of this
- work is to define better architectural primitives for communication, rather
- than to build new abstractions on top of existing inefficient primitives.
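The mechanism is simple to caricature: each message names a handler that runs immediately on arrival, integrating the data into the ongoing computation rather than being buffered for a later receive. The node structure and handler names below are invented for this sketch; they are not the CM-5 or nCUBE/2 implementation:

```python
# Toy sketch of the Active Messages idea: the message head names the
# handler, and the receiver dispatches to it straight away.

class Node:
    def __init__(self, name):
        self.name = name
        self.memory = {}

    # Handlers: small, non-blocking functions run on message arrival.
    def put(self, addr, value):          # remote write, e.g. a VSM update
        self.memory[addr] = value

    def accumulate(self, addr, value):   # remote add, a split-phase op
        self.memory[addr] = self.memory.get(addr, 0) + value

def send_active_message(dest, handler_name, *args):
    # On real hardware the handler address travels in the message and is
    # invoked straight out of the network interface; here we dispatch by
    # name on the destination node.
    getattr(dest, handler_name)(*args)

node = Node("n1")
send_active_message(node, "put", "x", 10)
send_active_message(node, "accumulate", "x", 5)
print(node.memory["x"])   # 15
```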
-
-
-
- Who Should Attend
- =================
-
- The symposium is aimed at all people working in the area of parallel
- computing. It will provide a detailed introduction to Virtual Shared Memory
- and current research issues.
-
- It will be of particular interest to developers of applications, whether
- numeric, symbolic or database applications, who need the power of parallel
- computing, but have been put off in the past by the difficulties of
- parallel computing.
-
- The symposium will also be of interest to systems implementors and
- architects working on parallel systems.
-
-
- Venue
- =====
-
- The Symposium will be held at the Department of Computer Science, Computer
- Building, University of Manchester. The department has access and
- facilities for disabled visitors.
-
-
- Catering and Accommodation
- ==========================
-
- Every effort will be made to cater for special dietary requirements if
- details are provided with the completed application forms.
-
- Accommodation is provided in Hulme Hall, one of the University's halls of
- residence, about a 15-minute walk or a short bus ride from the department.
-
- Accommodation can only be provided if the form is returned by 9 September.
-
- _________________________________________________________________________
-
- Application Form
- To: Ursula Hayes
- Department of Computer Science
- University of Manchester
- Manchester
- M13 9PL
- England
-
- Telephone: +44 (61) 275 6172
- Fax: +44 (61) 275 6236
- email vsm@cs.man.ac.uk
-
- Title _________ Forename _______________________________
-
- Surname _____________________________________________________
-
- Address _____________________________________________________
-
- _____________________________________________________
-
- _____________________________________________________
-
- _____________________________________________________
-
- _____________________________________________________
-
- Postcode _____________________________________________________
-
- Telephone _____________________________________________________
-
- Fax _____________________________________________________
-
- email _____________________________________________________
-
- The fee covers attendance at the Symposium, the proceedings,
- lunches and coffee.
-
- Fee: Full 200.00
- BCS PPSG 180.00
- Academic 100.00
-
- Nights in Hulme Hall @ 20.00:
- Wednesday 16th __
- Thursday 17th  __
- Friday 18th    __
- Enclosed fee ____________
-
- Dietary Requirements: _________________________________________
-
- Please make Cheques payable to "The University of Manchester"
-
- _________________________________________________________________________
-
-
-
- --
- Tom.
-
- Tom Franklin
- Centre for Novel Computing Phone +44 61 275 6134
- Department of Computer Science Fax +44 61 275 6204
- University of Manchester
- Manchester email tomf@cs.man.ac.uk
- M13 9PL
-
- =================================================================
-
-