Path: sparky!uunet!mcsun!sun4nl!cwi.nl!dik
From: dik@cwi.nl (Dik T. Winter)
Newsgroups: comp.programming
Subject: Re: floating point routines with double precision
Message-ID: <6778@charon.cwi.nl>
Date: 25 Jul 92 01:55:32 GMT
References: <54944@mentor.cc.purdue.edu> <6760@charon.cwi.nl> <55036@mentor.cc.purdue.edu>
Sender: news@cwi.nl
Organization: CWI, Amsterdam
Lines: 59

In article <55036@mentor.cc.purdue.edu> hrubin@pop.stat.purdue.edu (Herman Rubin) writes:
> In article <6760@charon.cwi.nl> dik@cwi.nl (Dik T. Winter) writes:
> > He may have done a poor job of course (as some code sequences for the
> > 205 show, but that one had an extremely bad Fortran compiler; past
> > tense because the one I used was decommissioned more than a year ago).
>
> The 205 is much easier for the purpose of increasing precision because
> the floating-point arithmetic is not forced to be normalized.
True to a point. First: single precision is inexact. I.e., if you have two
machine numbers a and b and an operation o such that 'a o b' is itself a
machine number, the 205 does not always deliver that machine number. The
reason is that when the machine delivers a normalized number, the result is
first truncated and only normalized afterwards; shades of the 6600.
And why is it that, when a is a double precision number, '(a * 2.0) * 0.5 = a'
is not guaranteed even if there is no intermediate overflow?
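For contrast, a modern aside: on an IEEE 754 machine that identity does hold
whenever a * 2.0 does not overflow, because multiplying by a power of two
changes only the exponent. A minimal C sketch checking this; the sample
values are arbitrary:

  #include <assert.h>
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      /* A few sample doubles, including a subnormal. */
      double samples[] = { 1.0, 0.1, -3.5e300, 4.9e-324 };
      for (int i = 0; i < 4; i++) {
          double a = samples[i];
          assert(!isinf(a * 2.0));          /* no intermediate overflow */
          printf("%.17g -> %s\n", a,
                 (a * 2.0) * 0.5 == a ? "recovered exactly" : "NOT exact");
      }
      return 0;
  }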
The forcing or not forcing of normalization has nothing to do with the ease
of extending precision. The basic requirement is that intermediate operations
give an exact result whenever that result is representable. Of course, the
unnormalized arithmetic of the 205 gives exact results, but that means you
can go to higher precision only through those unnormalized operations. If
the normalized operations were exact, you could go through them as well.
> The Crays
> are much worse here, even though they are not forced normalized for
> multiplication, because only the most significant part of the
> product can be obtained.
True, but forced normalization has nothing to do with it. Consider the
following routine in some pseudo language (Algol-60):
"procedure" addexact(a, b, c, cc); "value" a, b;
"real" a, b, c, cc;
"begin"
    c := a + b;
    "comment" cc is the rounding error of the addition, so that
              a + b = c - cc exactly;
    cc := "if" abs(a) >= abs(b) "then" c - a - b "else" c - b - a
"end" addexact;
Under some conditions this gives you the exact sum of a and b in the pair
(c, cc): c is the rounded sum and cc the rounding error, so a + b = c - cc.
This can again be used to build double precision addition and subtraction.
IEEE and the Cray fall under the conditions; the 205 does not.
(BTW, the example comes from an article by T. J. Dekker in Numerische
Mathematik, 1971 or thereabouts, where he shows Algol-60 routines that do
double precision operations.)
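On a machine with IEEE 754 round-to-nearest arithmetic the same routine can
be rendered in C as follows (this rendering is mine, not Dekker's own code):

  #include <math.h>

  /* Fast two-sum: c = fl(a + b) and cc = the rounding error of that
   * addition, so that a + b == c - cc exactly.  Requires that c - a
   * be exact when |a| >= |b| (true under round-to-nearest) and that
   * there is no intermediate overflow. */
  void addexact(double a, double b, double *c, double *cc)
  {
      *c = a + b;
      if (fabs(a) >= fabs(b))
          *cc = (*c - a) - b;   /* c - a is exact when |a| >= |b| */
      else
          *cc = (*c - b) - a;
  }

Chaining this over the high and low halves of two such pairs is what gives
the double precision addition and subtraction mentioned above.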
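The Cray product problem can be attacked with the same idea: the same paper
of Dekker's shows how to recover the low half of a product in software when
the hardware returns only a rounded high half. A sketch of his splitting
trick, again assuming IEEE doubles with a 53-bit significand (the function
names are mine); on a machine whose multiply is not correctly rounded, such
as the Cray, it needs extra care:

  /* Veltkamp splitting: a == hi + lo, with each half short enough
   * that products of halves are exact in a 53-bit double. */
  static void split(double a, double *hi, double *lo)
  {
      double t = 134217729.0 * a;   /* 2^27 + 1 */
      *hi = t - (t - a);
      *lo = a - *hi;
  }

  /* Dekker's exact product: p = fl(x * y) and e = the rounding error,
   * so that x * y == p + e exactly (barring overflow/underflow). */
  void mulexact(double x, double y, double *p, double *e)
  {
      double xh, xl, yh, yl;
      *p = x * y;
      split(x, &xh, &xl);
      split(y, &yh, &yl);
      *e = ((xh * yh - *p) + xh * yl + xl * yh) + xl * yl;
  }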
>
> If we want everything to be in the compiler, the compiler will be very
> large indeed. One of the points I wish to stress is that, no matter
> how much is put in the compiler, much more, even from the standpoint
> of practicality, will be omitted. And as knowledge progresses, the
> variety of things to consider increases.
>
> With the present languages, it is already necessary to consider
> alternate codings to achieve the same purpose for such simple
> things as adding two vectors.
But if I understand you right, you want to make the languages larger, which
in turn makes the compilers larger, the very thing you object to in the
paragraph above. Which is it: larger languages with larger compilers, or
smaller languages with smaller compilers?
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland
home: bovenover 215, 1025 jn amsterdam, nederland
dik@cwi.nl