- Path: sparky!uunet!olivea!decwrl!sdd.hp.com!cs.utexas.edu!hellgate.utah.edu!lanl!cochiti.lanl.gov!jlg
- From: jlg@cochiti.lanl.gov (Jim Giles)
- Newsgroups: comp.lang.fortran
- Subject: Re: Scientists as Programmers (was Re: Smal
- Message-ID: <1992Aug31.164354.3518@newshost.lanl.gov>
- Date: 31 Aug 92 16:43:54 GMT
- References: <BttB9z.IAy@mentor.cc.purdue.edu> <1992Aug30.232409.15262@nrao.edu> <1992Aug31.163405.2169@newshost.lanl.gov>
- Organization: Los Alamos National Laboratory
- Lines: 72
-
- In article <1992Aug30.232409.15262@nrao.edu>, cflatter@nrao.edu (Chris Flatters) writes:
- |> [...]
- |> Subroutine calls are not that expensive. Here are some examples.
- |>
- |> function call+return double precision fp operation
- |> (average time in us) (average time in us)
- |>
- |> SPARCstation IPX 0.115 0.146
- |> 25 MHz 386SX/387SX 1.75 3.37
- |>
- |> (IPX timing from Sun C++ 2.1 using -O4; 386SX/387SX timing taken from
- |> GNU C 1.39 with -O under 386BSD 0.1). Unless the work carried out in
- |> a subroutine is trivial the overhead of a function call can be discounted.
-
- Very good: you've fallen into the trap of counting the call and return
- *instructions* as the only cost of the call.  The *real* cost is in
- register scheduling (including canonical interface protocols) around
- the call, as well as a break in the optimizer's basic blocks (many `live'
- values must be assumed `killed' by the call).  Herman Rubin was right
- to begin with: procedure calls are, and will remain for a long time,
- among the most expensive of operations.  Calls would be expensive even if
- the specific branch instructions used to implement them, and to return
- from them, were *free*.
-
- |> [...]
- |> It is rarely necessary to violate type constraints in a strongly typed language.
- |> When it is necessary it is possible to localize the code that does this. Most
- |> typesafe languages provide mechanisms to avoid the constraints of the type
- |> system in these rare cases (eg. the WORD data type in Modula 2).
-
- The definition of the term `strongly typed' is that the types of all
- expressions are known to the compiler at compile time (it would be
- preferable if that were called static typing, but that term is now used
- to mean that the data is statically *allocated* - oh well).  It is,
- however, *often* useful to be able to violate type constraints (if you
- know what you're doing - it's always a machine/system-dependent thing
- to do).
-
- No one is recommending (and Herman didn't) that strict typing be abandoned.
- It is possible to have type coercion and *still* be able to statically
- determine type. For example, the following declaration might be introduced
- (in no particular language):
-
- float :: x(500)
- type float_internal is
- bit.1 :: sign
- bit.8 :: exponent
- bit.23 :: significand
- end type float_internal
- map x as float_internal ! overlay the structure of floats on the array
- ...
- x.sign(1) = x.sign(1)+1 ! change the sign of x(1)
- x.exponent(5) = x.exponent(5)+1 ! multiply x(5) by 2
-
- This is much better than some anonymous `word' type.  Notice that
- everything used above is strongly typed.  The example relies on the
- notion of `bit.n' as an n-bit unsigned integer, and on the notion that
- a structure or record of such things is *packed* and remains in
- the order declared.  The only machine-dependent thing is the map
- itself: otherwise the struct, or its components, behave exactly
- as any other objects of their respective declared types.
-
- |> [...]
- |> Restrictions on the introduction of new operators arise from practical
- |> considerations. The introduction of a new infix operator changes the
- |> syntax of the language significantly.
-
- Well, it depends on how your syntax is specified and processed. Haskell
- allows just about anything to be used as an operator and yet never has
- its syntax changed by user definitions.
-
- --
- J. Giles
-