- Newsgroups: comp.parallel
- Path: sparky!uunet!usc!howland.reston.ans.net!paladin.american.edu!gatech!hubcap!fpst
- From: kitchel@manta.dpsi.com (Sidney W. Kitchel)
- Subject: Re: definition of parallel efficiency
- Message-ID: <kitchel.726766135@manta>
- Sender: news@usenet.ucs.indiana.edu (USENET News System)
- Nntp-Posting-Host: manta.dpsi.com
- Organization: Data Parallel Systems, Inc
- References: <1993Jan11.142638.7789@hubcap.clemson.edu>
- Date: Mon, 11 Jan 1993 15:28:55 GMT
- Approved: parallel@hubcap.clemson.edu
- Lines: 34
-
- Stephen Vavasis <vavasis@cs.cornell.EDU> writes:
-
-
- >I have seen two definitions of "parallel efficiency" in the literature,
- >and I am wondering whether one definition or the other has prevailed.
- >They are:
-
- >(1) par eff = (seq. time of the par. algorithm) / (p * parallel time)
-
- >(2) par eff = (seq. time of the best possible sequential algorithm
- > for the problem) / (p * parallel time)
-
- >If you had a parallel cyclic reduction algorithm, would you rate it
- >against a sequential cyclic reduction algorithm or against tridiagonal
- >Cholesky factorization?
-
- >How about when the underlying problem is more complicated, for example
- >solving a boundary value problem? In this case it's not clear how to
- >identify the best possible sequential algorithm.
-
- >I would like to get this definition straight because I am teaching a
- >class on parallelism.
-
- Since Frontiers '92, it also depends on whether you did the
- programming or God did.
- --Sid
-
-
- --
- Sidney W. Kitchel kitchel@cs.indiana.edu, kitchel@dpsi.com
- Data Parallel Systems, Inc. ============|| DPSI ||===============
- 4617 E. Morningside Drive (812) 334-8100
- Bloomington, Indiana, 47408 USA FAX: (812) 334-8121