Newsgroups: comp.lang.fortran
Path: sparky!uunet!zaphod.mps.ohio-state.edu!darwin.sura.net!news.udel.edu!perelandra.cms.udel.edu!mccalpin
From: mccalpin@perelandra.cms.udel.edu (John D. McCalpin)
Subject: Re: inverse matrix
Message-ID: <C0p5JD.M0L@news.udel.edu>
Sender: usenet@news.udel.edu
Nntp-Posting-Host: perelandra.cms.udel.edu
Organization: College of Marine Studies, U. Del.
References: <C0I49C.Jrr@athena.cs.uga.edu> <93008.125409HDK@psuvm.psu.edu> <1993Jan8.201645.14915@news.eng.convex.com>
Date: Mon, 11 Jan 1993 15:54:48 GMT
Lines: 25

In article <1993Jan8.201645.14915@news.eng.convex.com> dodson@convex.COM (Dave Dodson) writes:
>I'd like to point out that it is almost never required or desirable to
>compute the inverse of a matrix. Almost without exception, you can do
>any computation in which you would use the inverse in a better way that
>does not use the inverse. By 'better' I mean faster, uses less memory,
>more accurate, etc.

The direct use of the inverse matrix is generally the fastest way to
solve a dense system of equations with multiple, consecutive right-hand
sides (as in a time-dependent fluid dynamics problem).

On the Cray Y series, for example, multiplying by the inverse matrix is
always (?) faster than performing the forward-backward substitution
step for each new RHS. This is true even though both operations have
the same computational complexity in terms of operation counts (about
2N^2 per right-hand side). The difference is that matrix-vector
multiplication is a significantly simpler algorithm, with less
overhead, than forward-backward substitution.

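The two approaches can be sketched as follows. This is a minimal
correctness demonstration in Python with NumPy/SciPy (rather than
Fortran, purely for illustration); the test matrix and sizes are
invented, and the relative speed of the two per-RHS steps is the
poster's claim, not something this sketch measures.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 200
# Hypothetical well-conditioned dense test matrix.
A = rng.standard_normal((n, n)) + n * np.eye(n)

# One-time O(N^3) setup cost for each approach.
lu, piv = lu_factor(A)    # LU factorization of A
Ainv = np.linalg.inv(A)   # explicit inverse of A

# Per-RHS work: both are about 2N^2 flops, but the matrix-vector
# product is a simpler kernel than the triangular solves.
for _ in range(5):
    b = rng.standard_normal(n)
    x_subst = lu_solve((lu, piv), b)  # forward-backward substitution
    x_inv = Ainv @ b                  # multiply by the inverse
    assert np.allclose(x_subst, x_inv)
```

For a well-conditioned system like this one, the two answers agree to
rounding error; the accuracy caveats in Dodson's post apply mainly to
ill-conditioned matrices.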
I suspect that the same sorts of speedups would be observed on Dodson's
Convex machines....
--
John D. McCalpin                    mccalpin@perelandra.cms.udel.edu
Assistant Professor                 mccalpin@brahms.udel.edu
College of Marine Studies, U. Del.  John.McCalpin@mvs.udel.edu