- Newsgroups: comp.std.c
- Path: sparky!uunet!europa.asd.contel.com!gatech!asuvax!ncar!steve
- From: steve@unidata.ucar.edu (Steve Emmerson)
- Subject: understanding FLT_DIG
- Message-ID: <steve.726275211@unidata.ucar.edu>
- Sender: news@ncar.ucar.edu (USENET Maintenance)
- Organization: University Corporation for Atmospheric Research (UCAR)
- Distribution: na
- Date: Tue, 5 Jan 1993 23:06:51 GMT
- Lines: 19
-
- Hi,
-
- I'm having trouble understanding FLT_DIG, which is defined in <float.h>.
- According to section 2.2.4.2.2 of the ANSI standard, FLT_DIG is given by
- the following on base-2 machines:
-
- FLT_DIG = (int)((p-1)*log10(2))
-
- where `p' is the precision (the number of base-2 digits in the
- significand).
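-
- For example, here is a quick check of the formula against what my
- implementation actually puts in <float.h> (just a sketch, assuming a
- base-2 float format):
-
-     #include <stdio.h>
-     #include <math.h>
-     #include <float.h>
-
-     int main(void)
-     {
-         /* p = number of base-2 digits in the float significand */
-         int p = FLT_MANT_DIG;
-
-         /* the section 2.2.4.2.2 formula for a base-2 machine */
-         int dig = (int)((p - 1) * log10(2.0));
-
-         printf("p = %d, formula gives %d, FLT_DIG is %d\n",
-                p, dig, FLT_DIG);
-         return 0;
-     }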
-
- On a hypothetical base-2 machine with 4 bits of precision, the above yields
- 0 for FLT_DIG. Yet, those 4 bits can represent the integral values 1
- through 9 exactly. Thus, shouldn't FLT_DIG be 1?
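-
- The same arithmetic for the hypothetical 4-bit machine (again just a
- sketch of the computation):
-
-     #include <stdio.h>
-     #include <math.h>
-
-     int main(void)
-     {
-         int p = 4;  /* hypothetical 4-bit significand */
-
-         /* (4-1)*log10(2) = 3 * 0.30103... = 0.903..., truncated to 0 */
-         printf("%d\n", (int)((p - 1) * log10(2.0)));
-         return 0;
-     }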
-
- Eagerly awaiting enlightenment.
- --
-
- Steve Emmerson steve@unidata.ucar.edu ...!ncar!unidata!steve
-