From compilers Thu Apr 1 07:00:12 EST 1993
Xref: iecc comp.compilers:4459 news.answers:6791
Newsgroups: comp.compilers,news.answers
Path: iecc!compilers-sender
From: compilers-request@iecc.cambridge.ma.us (John R. Levine)
Subject: comp.compilers monthly message and Frequently Asked Questions
Message-ID: <monthly-Apr-93@comp.compilers>
Followup-To: poster
Keywords: administrivia
Sender: compilers-sender@iecc.cambridge.ma.us
Supersedes: <monthly-Mar-93@comp.compilers>
Organization: Compilers Central
Date: Thu, 1 Apr 1993 12:00:05 GMT
Approved: compilers@iecc.cambridge.ma.us
Expires: Sat, 1 May 1993 23:59:00 GMT
Archive-name: compilers-faq
This is the comp.compilers monthly message, last edited April 1993.
NOTE: At the end of this message are some answers to frequently asked
questions. Please read them before you post.
-- What is comp.compilers?
It is a moderated usenet news group addressing the topics of compilers in
particular and programming language design and implementation in general.
It started in 1986 as a moderated mailing list, but interest quickly grew to
the point where it was promoted to a news group. Recent topics have
included optimization techniques, language design issues, announcements of
new compiler tools, and book reviews.
Messages come from a wide variety of people ranging from undergraduate
students to well-known experts in industry and academia. Authors live all
over the world -- there are regular messages from the U.S., Canada, Europe,
Australia, and Japan, with occasional ones from as far away as Malaysia. I
have no idea how large the readership is, since the anarchic nature of
usenet makes it impossible to tell who reads it, but I believe that the total
is in the tens of thousands.
Unless there is specific language to the contrary, each message represents
only the personal opinion of its author. I claim no compilation copyright on
comp.compilers. As far as I am concerned, anyone can reproduce any message
for any purpose. Individual authors may retain rights to their messages,
although I will not knowingly post anything that does not permit unlimited
distribution in any form. If you find comp.compilers useful in writing a
book, producing a product, etc., I would appreciate an acknowledgement of
usenet and comp.compilers.
-- How do I receive it?
The easiest way is to read comp.compilers on a system that gets usenet news.
If you don't have access to usenet news, it's also available by E-mail
through a LISTSERV forwarder at the American University. To subscribe,
send e-mail to listserv@american.edu with one line in the mail
message (not in the subject!). That line should read:
SUBSCRIBE COMPIL-L full_name
for example:
SUBSCRIBE COMPIL-L Ima Hacker
To get off the list the subscriber should send e-mail to the same address
with the message: SIGNOFF COMPIL-L
If you have problems getting on or off the list, please contact me. In
particular, if you want to use an address other than your own personal mail
address, you have to ask me to set it up. If I receive bounce messages for
an address on the mailing list for two days in a row, I delete it. If this
happens to you and your address subsequently becomes reachable again, you
can resubscribe.
-- How do I submit a message?
Mail it to compilers@iecc.cambridge.ma.us, also known as compilers@iecc.uucp
or iecc!compilers. I review messages nearly every day, usually including
weekends, and most messages are posted to the net within a day after I
receive them. Occasionally when I go on vacation there may be up to a
week's delay, though I try to send out a message when that will happen.
Most net news systems will automatically turn posted messages into mail to
compilers, but some, particularly systems running notes, don't do that
correctly. As a result, I sometimes receive hundreds of copies of a
message, all mangled slightly differently. Please mail your contributions
unless you're sure your posting software works correctly.
When you send a message to compilers, I understand that to mean that you
want me to post it to usenet, which means it will be sent to tens of
thousands of potential readers at thousands of computers all around the
world. It may also appear in a printed comp.compilers annual and other
books, in the ACM SIGPLAN Notices, in on-line and off-line archives,
CD-ROMs, and anywhere else that some reader decides to use it.
If you don't want me to post something, send it instead to
compilers-request. (See below.)
-- What happens to submitted messages?
Barring mail problems, they arrive in a special mailbox here at iecc. I
then edit the headers, trim down quoted text, fix typos and grammatical
errors, remove cute signatures, and then post them to usenet. If I think a
message needs more editing than that, I return it to the author for
rewriting. The main reasons I return a message are that it appears more
appropriate for another group, the message is too garbled to fix, it
contains too much quoted material relative to the amount of new material, or
I don't understand it. I also usually return messages that directly attack
individuals, since the net has plenty of other places for ad-hominem battles.
Another possibility is that a message doesn't have a valid return e-mail
address. If your mail system insists on putting a bogus address in the From:
line, be sure that you put a usable address in your signature.
If a message asks a simple question I sometimes answer it myself rather than
posting it. When two or three messages arrive with the same answer to a
question, I usually post only one of them, with a comment crediting the
others.
If you send in a message and don't either see it posted or receive something
back in a few days, it probably got lost in the mail and you should contact
me, preferably via a different mail route. I post or respond to all
messages except for ones that appear to have been sent by mistake, e.g. no
contents, or contents consisting only of another quoted message. Sometimes
when I'm feeling exasperated I disregard messages that re-ask one of the
frequently asked questions that are answered below.
One of the most time-consuming jobs in moderating the group is trimming down
the quotes in followup articles. In most cases, you can expect readers to
have seen the previous article, so only a few lines of quoted text should be
needed to remind the reader of the context.
I have installed a simple-minded quote filter that mechanically returns to
the sender any message that contains more quoted than unquoted lines. Please
edit your quotes before you send in a response, to avoid having the filter
bounce your message. Since the quote filter is pretty dumb, I do look at
bounced messages myself. If the filter bounces a message of yours by mistake,
don't panic -- it'll get posted anyway.
``Help wanted'' and ``Position Available'' messages are collected each week
and posted in a digest every Sunday.
-- How do I respond to the author of a message?
I try to be sure that every message contains valid From: and Reply-To:
headers. The automatic "reply" commands in most news readers let you send
mail to the author. If you're replying to a message in a digest, be sure
to respond to the author of the particular message, not to the pseudo-author
of the digest.
Some obsolete news readers attempt to reply using the Path: header, but for
technical reasons the Path: header in a moderated message cannot point to
the actual author. In fact, the Path: header in a compilers message is
deliberately a bad mail address, so if you have such a news reader you'll
have to edit the addresses in responses yourself and, I hope, encourage your
system manager to update your news and mail software.
Sometimes mail to an author bounces, either because a gateway isn't
working or because the return address is unregistered or otherwise bad.
Please don't ask me to forward it, since my machine is no better connected
than anyone else's. (It's not on the Internet and only talks uucp.) If
you send me a message obviously intended for the author of an item, I will
discard it on the theory that if it wasn't important enough for you to
send it to the right place, it isn't important enough for me, either.
-- How do I contact the moderator?
Send me mail at compilers-request@iecc.cambridge.ma.us. If for some
reason your system chokes on that address (it shouldn't, it's registered)
mail to Levine-John@yale.edu or johnl@spdcc.com will get to me. I treat
messages to compilers-request as private messages to me unless they state
that they are for publication.
-- Are back issues available?
I have complete archives going back to the original mailing list in 1986.
The archives now fill about 6 megabytes, and are growing at over 200K per
month. I update the archives at the end of each month. People with ftp
access can get them from primost.cs.wisc.edu (128.105.2.115), where James
Larus has kindly provided space. The archives contain a compressed Unix
mailbox format file for each month, with names like 91-08.Z. The file
INDEX.Z lists all of the subject lines for every message in the archives,
and in most cases is the first file you should retrieve.
The archives are available via modem from Channel One, an excellent local
BBS. You have to register, but no payment is needed to download the
archives which are in Area 6. (If you call more than once or twice, it
would be nice to sign up for at least the $25 trial membership.) The 2400
BPS telephone number is +1 617 354 8873, and the Telebit number is +1 617
354 0470. There is a ZIP format archive per month with names like
comp9108.zip, with the most recent archive also containing the index.
There is now a mail server at compilers-server@iecc.cambridge.ma.us that can
mail you indexes, messages, and the files mentioned below. Send it a
message containing "help" to get started.
I have also published a printed edition of the 1990 messages grouped
by thread and topic, and with some indexes, and expect to publish a
1992 and maybe 1991 edition. Send me mail for further details, or
see the message about the book which should immediately follow this
one.
-- Some Frequently Asked Questions:
NOTE: Many issues are discussed occasionally on comp.compilers, but not
frequently enough to make the FAQ sheet. If you have a question but the
answer isn't in the FAQ, you may well be able to get good background by
reading the appropriate articles in the archive. If you can FTP, please
at least get the index and look through it.
The various files that I mention below that I have are in the compilers
archive at primost.cs.wisc.edu, and are also available from the mail
server mentioned above. If you can FTP them, please do so rather than
using the mail server, since the mail bandwidth is quite small.
* Where can I get a C or C++ grammar in yacc?
Jim Roskind's well-known C++ grammar is in the archive, as is a C grammar
written by Jeff Lee. Dave Jones posted a parser as message 91-09-030.
Another C grammar was posted to comp.sources.misc in June 1990, v13 i52,
archive name ansi-c_su. GCC and G++ are based on yacc grammars, see
below.
* Where can I get the Gnu C compiler?
GCC is a high-quality free C and C++ compiler. (Free is not the same as
public domain, see the GCC distribution for details.) It is available in
source form from prep.ai.mit.edu. You need an existing C compiler and
libraries to bootstrap it.
A version for 386 MS-DOS by DJ Delorie <dj@ctron.com> is available by FTP
from barnacle.erc.clarkson.edu or wowbagger.pc-labor.uni-bremen.de and by
mail from archive-server@sun.soe.clarkson.edu in the archive msdos/djgpp.
See messages 91-09-054 and 91-09-066.
* Are there other free C compilers?
The lcc compiler, written by people at Princeton and Bell Labs, is
available via FTP from princeton.edu. It is supposed to generate code as
good as GCC while being considerably faster and smaller. It comes with a
demonstration VAX code generator and documentation on the code generation
interfaces. Production code generators for the VAX, MIPS, and Motorola
68020 are available for research use to universities willing to execute a
license agreement; the FTP package elaborates. Lcc uses a hard-coded C
parser because it's faster than yacc.
* Where can I get a Fortran grammar in yacc or a Fortran compiler?
I have a small subset parser in the archive mentioned above. The F2C
Fortran to C translator is a respectable Fortran system (so long as
you have a C compiler to compile its output and its libraries) and
contains a full F77 parser and is available in source form via FTP
from research.att.com and by mail from netlib@research.att.com.
* Where can I get Modula-2, Pascal or Ada grammars in yacc?
I have one each of those, too, in the archive mentioned above, though I
haven't tried to use any of them.
* Where can I get a Cobol grammar in yacc?
Nowhere for free, as far as I can tell. This question is asked every few
months and there has never, ever, been any positive response. Perhaps some
of the interested people could get together and write one. The commercial
PCYACC from Abraxas (see below) comes with a bunch of sample grammars
including one for Cobol-85.
* Where can I get a Basic grammar in yacc?
Take a look at ftp.uu.net:comp.sources.unix/volume2/basic which contains
a Basic interpreter with yacc parser.
* Are there free versions of yacc and lex ?
Vern Paxson's flex is a superior reimplementation of lex. It is available
from the same places as Gnu sources. Berkeley Yacc is a quite compatible
PD version of yacc by Bob Corbett, available as ~ftp/pub/byacc.tar.Z on
okeeffe.berkeley.edu. Gnu Bison is derived from an earlier version of
Corbett's work and is also fairly compatible with yacc.
* Are there versions of yacc and lex for MS-DOS?
There are several of them. Commercial versions are MKS lex&yacc from MKS
in Waterloo Ont., +1 519 884 2251 or inquiry@mks.com, and PCYACC from
Abraxas Software in Portland OR, +1 503 244 5253. Both include both yacc
and lex along with a lot of sample code.
The standard flex source compiles under the usual DOS compilers, although
you may want to make some of the buffers smaller. A DOS version of Bison
is on wuarchive.wustl.edu [128.252.135.4] and other servers under
/mirrors/msdos/txtutl/bison111.zip. See message 92-07-012 for more info.
* What other compilers and tools are freely available?
There is a three-part FAQ posting in comp.compilers and other groups
listing compiler tools freely available in source form, maintained by
David Muir Sharnoff <muir@cogsci.berkeley.edu>. It is posted
monthly, right after this message. If it's not on your system, you
can FTP it from pit-manager.mit.edu in the directory
/pub/usenet/news.answers/free-compilers, or via mail by sending a
message to mail-server@pit-manager.mit.edu with the command "send
usenet/news.answers/free-compilers/*" in the text.
* How can I get started with yacc and lex and compiler writing in general?
By reading any of the many books on the topic. Here are a few of them.
Also see message 93-01-155 which reviews many compiler textbooks.
Aho, Sethi, and Ullman, "Compilers: Principles, Techniques, and Tools,"
Addison Wesley, 1986, ISBN 0-201-10088-6, the "dragon book". Describes
clearly and completely lexing and parsing techniques including the ones in
yacc and lex. The authors work or have worked at Bell Labs with Steve
Johnson and Mike Lesk, the authors of Yacc and Lex.
Allen Holub, "Compiler Design in C," Prentice-Hall, 1990, ISBN
0-13-155045-4. A large book containing the complete source code to a
reimplementation of yacc and lex and a C compiler. Quite well written,
too, though it has a lot of errors. The fourth printing is supposed to
correct most of them.
John R. Levine, Tony Mason, and Doug Brown, ``Lex & Yacc,'' 2nd Edition,
O'Reilly and Associates, 1992, ISBN 1-56592-000-7, $29.95. A concise
introduction with completely worked out examples and an extensive
reference section. The new edition is completely revised from the earlier
1990 edition.
Donnelly and Stallman, "The Bison Manual," part of the on-line distribution
of the FSF's Bison, a reimplementation of yacc. As with everything else from
the FSF, full source code is included.
Axel T. Schreiner and H. George Friedman, Jr., "Introduction to Compiler
Construction with UNIX," Prentice-Hall, 1985. Oriented to tutorial work.
Good for beginners. Develops a small subset-of-C compiler through the book.
(Recommended by Eric Hughes <hughes@ocf.Berkeley.EDU>.) Richard Hash
<rgh@shell.com> comments that the book has many typographical errors, and
readers should be suspicious of the examples until they actually try them.
Richard Y. Kim <richard@ear.mit.edu> reports that sources are available for
FTP as a.cs.uiuc.edu:pub/friedman/tar.
Bennett, J.P. "Introduction to Compiling Techniques - A First Course Using
Ansi C, Lex and Yacc," McGraw Hill Book Co, 1990, ISBN 0-07-707215-4.
It's intended for a first course in modern compiler techniques, is very
clearly written, and has a full chapter on YACC. I found it to be a good
introductory text before getting into the 'Dragon book'. (Recommended by
John Merlin <J.H.Merlin@ecs.southampton.ac.uk>.) Source code is available
at ftp.bath.ac.uk.
Charles N. Fischer & Richard J. LeBlanc, "Crafting A Compiler", Benjamin
Cummings Publishing, Menlo Park, CA, 1988, ISBN 0-8053-3201-4. There's
also a revised version as of 1990 or 1991 titled "Crafting A Compiler in
C", with all examples in C (the original used ADA/CS). Erich Nahum
<nahum@cs.umass.edu> writes: A key compiler reference. We used the
original to great effect in Eliot Moss' graduate compiler construction
class here at UMass. My feeling is that Fischer & LeBlanc is a good
tutorial, and one should use Aho, Sethi, & Ullman as a reference.
Des Watson, "High-Level Languages and Their Compilers," International
Computer Science Series, Addison-Wesley Publishing Company, Wokingham
England, 1989. Adrian Howard <adrianh@cogs.sussex.ac.uk> writes: This is
the kindest, most readable introduction to compilers at the graduate level
I have ever read - an excellent example of what textbooks should all be
like.
W.M. Waite and G. Goos, "Compiler Construction," Springer-Verlag, New
York, 1984. Dick Grune <dick@cs.vu.nl> writes: A theoretical approach to
compiler construction. Refreshing in that it gives a completely new view
of many subjects. Heavy reading, high information density.
J.P. Tremblay and P.G. Sorenson, "The Theory and Practice of Compiler
Writing," McGraw-Hill, 1985. Dick Grune <dick@cs.vu.nl> writes: Extensive
and detailed. Heavy reading. To be consulted when other sources fail.
James E. Hendrix, "The Small-C Compiler", 2nd ed., M&T Books, ISBN
0-934375-88-7 <Book Alone>, 1-55851-007-9 <MS-DOS Disk>,
0-934375-97-6 <Book AND Disk>.
William Jhun <ec_ind03@oswego.edu> writes: It explains the C language
thoroughly and explains every single aspect of the compiler. The book
compares source code to p-code to assembly. It goes over a nice set of
optimization routines, explains the parser, the back end, and even
includes source code, which the compiler on the disk can actually compile
itself. It's an extremely interesting book, check it out.
If anyone sends in others, I'll be happy to add them to the list.
* Where can I FTP the sources to the programs in Holub's "Compiler
Design in C" ?
You can't. See page xvi of the Preface for ordering information.
Regards,
John Levine, comp.compilers moderator
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 1 07:00:19 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: compilers-request@iecc.cambridge.ma.us (John R. Levine)
Subject: Comp.compilers 1990 Annual
Message-ID: <book-Apr-93@comp.compilers>
Keywords: administrivia
Sender: compilers-sender@iecc.cambridge.ma.us
Supersedes: <book-Mar-93@comp.compilers>
Organization: Compilers Central
Date: Thu, 1 Apr 1993 12:00:15 GMT
Approved: compilers@iecc.cambridge.ma.us
Expires: Sat, 1 May 1993 23:59:00 GMT
NOTE: This is a monthly repost of the message about the printed version of
the 1990 comp.compilers message.
Comp.compilers 1990 Annual
Edited by John R. Levine
The Comp.compilers 1990 Annual is a printed edition of the messages posted
to the usenet newsgroup comp.compilers in 1990. Usenet is an informal
distributed electronic bulletin board system connecting thousands of
computers around the world. Most of the systems attached to it run some
version of Unix, though other systems ranging from MS-DOS personal
computers to Cray mainframes now participate. Comp.compilers is a
moderated usenet news group addressing the topics of compilers in
particular and programming language design and implementation in general.
It started in 1986 as a moderated mailing list, but interest quickly grew
to the point where it was promoted to a news group. Recent topics have
included optimization techniques, language design issues, announcements of
new compiler tools, and book reviews.
Messages come from a wide variety of people ranging from undergraduate
students to well-known experts in industry and academia. Authors live all
over the world - there are regular messages from the U.S., Canada, Europe,
Australia, and Japan, with occasional ones from as far away as Malaysia.
The anarchic nature of usenet makes it impossible to tell how large the
readership is, but the total is probably in the tens of thousands.
The book's contents include 807 of the year's total 914 messages, leaving
out only administrative messages and unanswered questions. The messages
themselves are unedited except for removing boilerplate header and trailer
lines.
The book is 604 pages, each 11 x 8.5 inches with two pages of text side by
side in reasonably legible 8 point type. Messages are grouped by topic
(C, optimization, book reviews, etc.) and within each topic messages in a
thread are grouped together. There is a permuted subject index, a keyword
index, and an author index. The book is GBC bound, a plastic spiral
binding that lies flat.
Pricing
The price is $40 per book, plus $2 sales tax for copies delivered in
Massachusetts, plus appropriate postage and packaging per copy:
Pick up in Cambridge  free | Foreign surface   $10
U.S. surface          $3   | Foreign airmail:
U.S. priority         $5   |   Americas        $12
U.S. Federal Express  $20  |   Europe          $20
Canada                $7   |   All other       $30
No further discounts apply on single copies of the book, as this price is
pre-discounted from the list price of $50. Quantity discounts and
shipping charges for large or unusual orders can be negotiated.
How to order
All orders must be prepaid. Send a check payable to I.E.C.C. along with
the delivery address. We cannot take purchase orders, credit cards, or
COD orders. Foreign orders must be prepaid in U.S. dollars, preferably by
a check on a U.S. bank, but anything our bank can handle is acceptable.
(They charge $20 extra for foreign checks and $5 for incoming bank wires.)
The mailing address is:
I.E.C.C.
P.O. Box 349
Cambridge MA 02238-0349
The book is also available at list price from several bookstores.
Being real bookstores, they take credit cards and the like.
Computer Literacy Bookshop, 2590 N 1st St, San Jose CA 95131
+1 408 435 1118, orders@clbooks.com
Quantum Books, 4 Cambridge Center, Cambridge MA 02141
+1 617 494 5042, quanbook@world.std.com
The book has ISBN 0-944954-02-2, and is under the imprint of Center Book
Publishers, Inc.
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 1 07:00:28 EST 1993
Xref: iecc comp.compilers:4461 comp.lang.misc:9656 comp.sources.d:4721 comp.archives.admin:881 news.answers:6792
Newsgroups: comp.compilers,comp.lang.misc,comp.sources.d,comp.archives.admin,news.answers
Path: iecc!compilers-sender
From: David Muir Sharnoff <muir@idiom.berkeley.ca.us>
Subject: Catalog of compilers, interpreters, and other language tools [p1of3]
Message-ID: <free1-Apr-93@comp.compilers>
Followup-To: comp.archives.admin
Summary: monthly posting of free language tools that include source code
Keywords: tools, FTP, administrivia
Sender: compilers-sender@iecc.cambridge.ma.us
Supersedes: <free1-Mar-93@comp.compilers>
Reply-To: muir@idiom.berkeley.ca.us
Organization: University of California, Berkeley
Date: Thu, 1 Apr 1993 12:00:23 GMT
Approved: compilers@iecc.cambridge.ma.us
Expires: Sat, 1 May 1993 23:59:00 GMT
Archive-name: free-compilers/part1
Last-modified: 1993/03/24
Version: 3.2
Catalog of Free Compilers and Interpreters.
This document attempts to catalog freely available compilers,
interpreters, libraries, and language tools. This is still a draft
document: it has errors, it is not complete, and I might re-organize
it. It is my intention that it be aimed at developers rather than
researchers. I am much more interested in production quality systems.
There is some overlap of coverage between this document and other
lists and catalogs. See the references section for a list...
To be on this list a package must be free and have source code
included. If there are any packages on this list that do not have
source code included or are not free, then I would appreciate it if it
is brought to my attention so that I may correct the error.
There are many fields filled in with a question mark (?). If you have
information which would allow me to remove question marks, please
send it to me. The only field which I do not expect to be completely
obvious is the "parts" field because I wish to distinguish between
compilers, translators, and interpreters. To qualify as a compiler
as I'm using the term, it must compile to a machine-readable format
no higher-level than assembly. Why? Just because. If you've
got a better idea, send it in.
This document may be ftp'ed from idiom.berkeley.ca.us. Be nice to my
SLIP link.
If you would be interested in purchasing a CD-ROM with a complete
set of the source code listed here, let me know. If enough people
are interested, I might cut a disc. Bear in mind that you can get
most, if not all, of this from Prime Time Freeware's disc set.
Or would you be more interested in sparc binaries? $250 interested?
Or would you like to either take over maintenance of this document
or pay me to keep doing it? (hint: maintaining this is taking too
much of my time)
David Muir Sharnoff <muir@idiom.berkeley.ca.us>, 1993/02/13
------------------------- selected major changes ------------------------------
Selected changes section
language                package
--------                -------
new listings:
Ada                     Adamakegen
Assembler (various)     GNU assembler (GAS)
Assembler (multpl 8bit) ?
Assembler (DSP56k)      ? [5]
BNF (yacc), Ada         aflex-ayacc [1]
C                       GNU superoptimizer
C                       MasPar mpl & ampl
C                       dsp56k-gcc [5]
C                       dsp56165-gcc [5]
C (Duel)                DUEL (a <practical> C debugging language)
C, C++, Objective-C     GNU CC [so, I'm a bit slow --muir]
C, C++                  Xcoral
C++                     aard ?
CASE - DSP              Ptolemy [5]
CLU                     SUN CLU
E                       Amiga E
E (a persistent C++)    GNU E
elisp                   GNU Emacs
es (a functional shell) es
math manipulation       FUDGIT
math - unix bc          GNU BC
math - symbolic calcltr GNU Calc
Modula-2                PRAM emulator and parallel modula-2 compiler
Modula-2, Modula-3      M2toM3
NewsClip                NewsClip
Pascal                  a frontend
Pascal                  tptc
Pascal                  ptc
Prolog                  BinProlog [2]
Prolog                  2? (@aisun1.ai.uga.edu) [2]
Prolog                  ? (@aisun1.ai.uga.edu) [2]
Prolog                  Open Prolog [2]
Prolog                  UPMAIL Tricia Prolog [2]
Prolog                  Modular SB-Prolog [2]
Prolog                  ? BABYLON [3]
Prolog                  XWIP (X Window Interface for Prolog)
Prolog                  PI
Scheme                  PCS/Geneva [4]
Simula                  Cim
Sisal                   Optimizing Sisal Compiler ?
Standard ML             The ML Kit
Tcl                     Tcl/Tk Contrib Archive
troff                   groff [care to make a TeX entry? --muir]
VHDL                    ALLIANCE
new versions:
C, C++, ...             gdb 4.8
C, nroff                c2man 1.10
Common Lisp             CLISP 1993/03/10
BNF (yacc)              byacc 1.9
Gopher                  Gopher 2.28a
lisp                    RefLisp 2.67
Logo                    Berkeley Logo 2.9 alpha
Logo                    MswLogo 3.2
Milarepa                Milarepa Prototype 1.0
Modula-3                SRC Modula-3 2.11
perl, yacc              perl-byacc 1.8.2
Sather                  Sather 0.2i
Scheme                  scm 4b4
Scheme                  Scheme->C 15mar93
sed                     GNU sed 1.11
SGML                    sgmls 1.1
Tcl                     Tcl 6.6, Tk 3.1
edits:
Extended BNF, Modula-2  GMD Compiler Toolbox (aka Cocktail)
rc (plan 9 shell)       rc
J                       J
Scheme                  PC-Scheme [4]
deleted (no source):
Modula-2                fst
Modula-2, Pascal        metro
Oberon                  Oberon from ETH Zurich
[1] New info from the Language List by Bill Kinnersley
[2] New info from comp.lang.prolog FAQ by Jamie Andrews
[3] New info from the Lisp FAQ by Mark Kantrowitz
[4] New info from the Scheme FAQ by Mark Kantrowitz
[5] New info from the comp.dsp FAQ by Phil Lapsley
-------------------------------------------------------------------------------
------------------------------- tools -----------------------------------------
-------------------------------------------------------------------------------
language: ABC
package: ABC
version: 1.04.01
parts: ?
author: Leo Geurts, Lambert Meertens,
Steven Pemberton <Steven.Pemberton@cwi.nl>
how to get: ftp programming/languages/abc/* from mcsun.eu.net or ftp.eu.net
description: ABC is an imperative language embedded in its own
environment. It is interactive, structured,
high-level, very easy to learn, and easy to use.
It is suitable for general everyday programming,
such as you would use BASIC, Pascal, or AWK for.
It is not a systems-programming language. It is an
excellent teaching language, and because it is
interactive, excellent for prototyping. ABC programs
are typically very compact, around a quarter to a
fifth the size of the equivalent Pascal or C program.
However, this is not at the cost of readability,
on the contrary in fact.
references: "The ABC Programmer's Handbook" by Leo Geurts,
Lambert Meertens and Steven Pemberton, published by
Prentice-Hall (ISBN 0-13-000027-2)
"An Alternative Simple Language and Environment for PCs"
by Steven Pemberton, IEEE Software, Vol. 4, No. 1,
January 1987, pp. 56-64.
ports: unix, MSDOS, atari, mac
contact: abc@cwi.nl
updated: 1991/05/02
language: ABCL/1 (An object-Based Concurrent Language)
package: ABCL/1
version: ?
parts: ?
author: Akinori Yonezawa, ABCL Group now at Department of Information
Science, the University of Tokyo
how to get: ftp pub/abcl1/* from camille.is.s.u-tokyo.ac.jp
description: Asynchronous message passing to objects.
references: "ABCL: An Object-Oriented Concurrent System", Edited by
Akinori Yonezawa, The MIT Press, 1990, (ISBN 0-262-24029-7)
restriction: no commercial use, must return license agreement
requires: Common Lisp
contact: abcl@is.s.u-tokyo.ac.jp
updated: 1990/05/23
language: ABCL ???
package: ABCL/R2
version: ?
author: masuhara@is.s.u-tokyo.ac.jp, matsu@is.s.u-tokyo.ac.jp,
takuo@is.s.u-tokyo.ac.jp, yonezawa@is.s.u-tokyo.ac.jp
how to get: ftp pub/abclr2/* from camille.is.s.u-tokyo.ac.jp
description: ABCL/R2 is an object-oriented concurrent reflective language
based on Hybrid Group Architecture. As a reflective language,
an ABCL/R2 program can dynamically control its own behavior,
such as scheduling policy, from within the user program. As an
object-oriented concurrent language, this system has almost all
functions of ABCL/1.
requires: Common Lisp
updated: 1993/01/28
language: Ada
package: Ada/Ed
version: 1.11.0a+
parts: translator(?), interpreter, ?
author: ?
how to get: ftp pub/Ada/Ada-Ed from cnam.cnam.fr
description: Ada/Ed is a translator-interpreter for Ada. It is
intended as a teaching tool, and does not have the
capacity, performance, or robustness of commercial
Ada compilers. Ada/Ed was developed at New York
University, as part of a long-range project in
language definition and software prototyping.
conformance: last validated with version 1.7 of the ACVC tests.
being an interpreter, it does not implement most
representation clauses, and thus does not support systems
programming close to the machine level.
contact: ? Michael Feldman <mfeldman@cs.washington.edu> ?
updated: 1992/05/08
language: Ada
package: Ada grammar
version: ?
parts: scanner(lex), parser(yacc)
how to get: ftp from primost.cs.wisc.edu or mail to
compilers-server@iecc.cambridge.ma.us
contact: masticol@dumas.rutgers.edu
updated: 1991/10/12
language: Ada
package: Compiler for Toy/Ada in SML/NJ
version: ?
parts: translator(?)
author: Amit Bhatiani <bhatiaa@polly.cs.rose-hulman.edu>
how to get: ftp pub/compiler*.tar.Z from master.cs.rose-hulman.edu
conformance: subset
updated: 1992/04/08
language: Ada
package: NASA PrettyPrinter
version: ?
parts: Ada LR parser, ?
how to get: ftp from Ada Software Repository on wsmr-simtel20.army.mil
description: pretty-printing program that contains an Ada parser
requires: Ada
info-source: Michael Feldman <mfeldman@seas.gwu.edu> in comp.compilers
[he also has a yacc grammar for ada]
updated: 1991/02/01
language: Ada
package: yacc grammar for Ada
version: ?
parts: parser(yacc)
author: Herman Fischer
how to get: ftp PD2:<ADA.EXTERNAL-TOOLS>GRAM2.SRC
from wsmr-simtel20.army.mil
contact: ?
updated: 1991/02/01
language: Ada
package: Paradise
version: 2.0
parts: library
how to get: ftp pub/Ada/Paradise from cnam.cnam.fr
author: ?
description: Paradise is a subsystem (a set of packages) developed
        to implement inter-process, inter-task, and
        inter-machine communication for Ada programs in
        the Unix world. This subsystem gives the user full
        access to files, pipes, sockets (both Unix and
        Internet), and pseudo-devices.
ports: Sun, DEC, Sony MIPS, Verdix compiler, DEC compiler,
        Alsys/Systeam compiler
contact: paradise-info@cnam.cnam.fr
updated: 1992/09/30
language: Ada
package: Adamakegen
version: 2.6.3
parts: makefile generator
author: Owen O'Malley <omalley@porte-de-st-ouen.ics.uci.edu>
how to get: ftp ftp/pub/arcadia/adamakegen* from spare.ics.uci.edu
description: A program that generates makefiles for Ada programs
requires: Icon
ports: Verdix, SunAda
updated: 1993/03/02
language: ADL (Adventure Definition Language)
package: ADL
parts: interpreter
author: Ross Cunniff <cunniff@fc.hp.com>, Tim Brengle
how to get: comp.sources.games archive volume 2
description: An adventure language, semi-object-oriented with LISP-like
syntax. A superset of DDL.
updated: ?
language: Algol, Foogol
package: foogol
version: ?
parts: compiler
author: ?
how to get: comp.sources.unix archive volume 8
conformance: subset of Algol
description: ?
ports: VAX
updated: ?
language: ALLOY
package: ALLOY
version: 2.0?
parts: interpreter, documentation, examples
author: Thanasis Mitsolides <mitsolid@cs.nyu.edu>
how to get: ftp pub/local/alloy/* from cs.nyu.edu
description: ALLOY is a higher-level parallel programming language
        appropriate for programming massively parallel computing
        systems. It is based on a combination of ideas from
        functional, object-oriented, and logic programming languages.
        The result is a language that can directly support
        functional, object-oriented, and logic programming styles
        in a unified and controlled framework. Evaluation modes
        support serial or parallel execution, eager or lazy
        evaluation, non-determinism or multiple solutions, etc.
        ALLOY is simple, as it only requires 29 primitives in all
        (half of which are for object-oriented programming support).
ports: sparc, ?
updated: 1991/06/11
language: APL
package: I-APL
how to get: ftp languages/apl/* from watserv1.waterloo.edu
updated: 1992/07/06
language: APL
package: APLWEB
version: ?
parts: translator(web->apl), translator(web->TeX)
author: Dr. Christoph von Basum <CvB@erasmus.hrz.uni-bielefeld.de>
how to get: ftp languages/apl/aplweb/* from watserv1.uwaterloo.ca
updated: 1992/12/07
language: Assembler (various)
package: GNU assembler (GAS)
version: 2.0
parts: assembler, documentation
how to get: ftp gas-2.0.tar.z from a GNU archive site
description: Many CPU types are now handled, and COFF and IEEE-695 formats
are supported as well as standard a.out.
ports: Sun-3, Sun-4, i386/{386BSD, BSD/386, Linux, PS/2-AIX},
VAX/{Ultrix,BSD,VMS}
bugs: bug-gnu-utils@prep.ai.mit.edu
updated: 1993/03/09
language: Assembler (8051)
package: CAS: The Free Full-Featured 8051 Assembler
version: 1
parts: assembler
author: Mark Hopkins <markh@csd4.csd.uwm.edu>
how to get: ftp /pub/8051/assem from csd4.csd.uwm.edu
description: An experimental public-domain one-pass assembler for the 8051
        with C-like syntax. Related software is contained in /pub/8051,
        including arbitrary-precision math and multitasking routines.
ports: MSDOS, Ultrix, Sun (contact author)
requires: ANSI-C compiler
updated: 1992/08/13
language: Assembler (mc6809)
package: usim
version: 0.11
parts: simulator, documentation
author: Ray P. Bellis <rpb@psy.ox.ac.uk>
how to get: ftp /pub/mc6809/usim-* from ftp.cns.ox.ac.uk
description: an mc6809 simulator
updated: 1993/02/14
language: Assembler (DSP56000)
package: ?
version: 1.1
parts: assembler
author: Quinn Jensen <jensenq@qcj.icon.com>
how to get: alt.sources archive or ftp ? from wuarchive.wustl.edu
description: ?
updated: ?
language: Assembler (6502, Z80, 8085, 68xx)
package: ?
version: ?
author: msmakela@cc.helsinki.fi and Alan R. Baldwin
how to get: ftp ? from ccosun.caltech.edu
description: I have enhanced a set of 68xx, Z80, and 8085 cross-assemblers
        to support the 6502. These assemblers run on MS-DOS computers or
        on any system that supports standard Kernighan & Ritchie C, for
        example, the Amiga, the Atari ST, and any "big" machine.
updated: 1993/03/10
language: ? attribute grammar ?
package: Alpha
version: pre-release
parts: semantic-analysis generator?, documentation(german)
author: Andreas Koschinsky <koschins@cs.tu-berlin.de>
how to get: from author
description: I have written a compiler generator. The generator is called
        Alpha and uses attribute grammars as its specification calculus.
        Alpha is the result of a thesis at Technische Universitaet
        Berlin. I am looking for someone who would like to test and use
        Alpha. Alpha generates compilers from a compiler
        specification. This specification describes a compiler in
        the terminology of attribute grammars. The parser and scanner
        are generated by means of Bison and Flex. Alpha generates an
        ASE evaluator (Jazayeri and Walter). The documentation is in
        German, since it is a thesis at a German university.
updated: 1993/02/16
language: awk (new)
package: mawk
version: 1.1.3
how to get: ftp public/mawk* from oxy.edu
parts: interpreter
author: Mike Brennan <brennan@bcsaic.boeing.com>
conformance: superset
+ RS can be a regular expression
features: + faster than most new awks
ports: sun3,sun4:sunos4.0.3 vax:bsd4.3,ultrix4.1 stardent3000:sysVR3
decstation:ultrix4.1 msdos:turboC++
contact: Mike Brennan <brennan@bcsaic.boeing.com>
status: actively developed
updated: 1993/03/14
language: awk (new)
package: GNU awk (gawk)
version: 2.14
parts: interpreter, documentation
author: David Trueman <david@cs.dal.ca> and
Arnold Robbins <arnold@cc.gatech.edu>
how to get: ftp gawk-2.14.tar.Z from a GNU archive site
conformance: superset
ports: unix, msdos:msc5.1
status: actively developed
updated: 1992/11/18
language: BASIC
package: bwBASIC (Bywater BASIC interpreter)
version: 1.10
parts: interpreter, shell, ?
author: Ted A. Campbell <tcamp@acpub.duke.edu>
how to get: ftp pub/bywater/* from duke.cs.duke.edu
description: ?
conformance: large superset of ANSI Standard for Minimal BASIC (X3.60-1978)
requires: ANSI C
ports: DOS, Unix
updated: 1992/11/05
language: BASIC
package: ? basic ?
version: ?
parts: parser(yacc), interpreter
author: ?
how to get: comp.sources.unix archives volume 2
updated: ?
language: BASIC
package: ? bournebasic ?
version: ?
parts: interpreter
author: ?
how to get: comp.sources.misc archives volume 1
description: ?
updated: ?
language: BASIC
package: ? basic ?
version: ?
parts: interpreter
author: ?
how to get: ftp ? from wsmr-simtel20.army.mil
description: ?
contact: ?
updated: ?
language: BASIC
package: ubasic
version: 8
parts: ?
author: Yuji Kida
how to get: ? ask archie ?
references: reviewed in Notices of the A.M.S #36 (May/June 1989),
and "A math-oriented high-precision BASIC", #38 (3/91)
contact: ?
updated: 1992/07/06
language: BCPL
package: ?
version: ?
author: ?
how to get: ftp systems/amiga/programming/languages/BCPL/BCPL4Amiga.lzh
from wuarchive.wustl.edu.
description: The original INTCODE interpreter for BCPL.
ports: Amiga, UNIX, MSDOS
contact: ?
updated: ?
language: BCPL
package: ?
version: ?
how to get: ftp [.languages]bcpl.tar_z from ftp.syd.dit.csiro.au
description: A BCPL* (Basic Combined Programming Language) compiler
bootstrap kit with an INTCODE interpreter in C.
contact: Ken Yap <ken@syd.dit.CSIRO.AU>
updated: ?
language: BNF (Extended)
package: TXL: Tree Transformation Language
version: 6.0
parts: translator generator
author: Jim Cordy <cordy@qucis.queensu.ca>
how to get: ftp txl/00README for instructions from qusuna.qucis.queensu.ca
description: + TXL is a generalized source-to-source translation
system suitable for rapidly prototyping computer
languages and language processors of any kind. It has
been used to prototype several new programming
languages as well as specification languages, command
languages, and more traditional program transformation
tasks such as constant folding, type inference, source
optimization and reverse engineering. TXL takes
as input an arbitrary context-free grammar in extended
BNF-like notation, and a set of show-by-example
transformation rules to be applied to inputs parsed
using the grammar.
updated: 1992/02/23
language: BNF (Extended)
package: Gray
version: 3
parts: parser generator(Forth)
author: Martin Anton Ertl <anton@mips.complang.tuwien.ac.at>
how to get: author; version 2 is on various ftp sites
description: Gray is a parser generator written in Forth. It takes
grammars in an extended BNF and produces executable Forth
code for recursive descent parsers. There is no special
support for error handling.
requires: Forth
ports: TILE Release 2 by Mikael Patel
updated: 1992/05/22
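The recursive-descent technique that generators like Gray automate can be
sketched by hand: each EBNF rule becomes one procedure, repetition {...}
becomes a loop, and alternation becomes a conditional. The Python below is
purely illustrative (Gray itself emits Forth, and this toy expression
grammar is invented for the example):

```python
# Illustrative sketch of what an EBNF -> recursive-descent translation
# produces; the grammar is a toy invented for this example:
#   expr   = term { ('+' | '-') term }
#   term   = factor { ('*' | '/') factor }
#   factor = number | '(' expr ')'
import re

def tokenize(s):
    return re.findall(r'\d+|[()+\-*/]', s)

class Parser:
    def __init__(self, tokens):
        self.toks, self.pos = tokens, 0

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected is not None and tok != expected):
            raise SyntaxError('expected %r, got %r' % (expected, tok))
        self.pos += 1
        return tok

    def expr(self):
        # EBNF repetition { ... } becomes a while loop
        value = self.term()
        while self.peek() in ('+', '-'):
            op, rhs = self.eat(), None
            rhs = self.term()
            value = value + rhs if op == '+' else value - rhs
        return value

    def term(self):
        value = self.factor()
        while self.peek() in ('*', '/'):
            op = self.eat()
            rhs = self.factor()
            value = value * rhs if op == '*' else value // rhs
        return value

    def factor(self):
        # EBNF alternation becomes an if/else
        if self.peek() == '(':
            self.eat('(')
            value = self.expr()
            self.eat(')')
            return value
        return int(self.eat())

def evaluate(source):
    return Parser(tokenize(source)).expr()
```

This is exactly the shape of parser that a tool like Gray produces
mechanically from the grammar, one procedure per nonterminal.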
language: BNF ??
package: ZUSE
version: ?
parts: parser generator(?)
author: Arthur Pyster
how to get: ? Univ Calif at Santa Barbara ?
description: LL(1) parser generator
requires: Pascal
updated: 1986/09/23
language: BNF ??
package: FMQ
version: ?
parts: parser generator with error-corrector generator
author: Jon Mauney
how to get: ftp from csczar.ncsu.edu
status: ?
contact: ?
updated: 1990/03/31
language: BNF ??
package: ATS (Attribute Translation System)
version: ?
author: ? University of Saskatchewan ?
how to get: ?
description: generates table-driven LL(1) parsers with full insert-only
        error recovery. It also provides full left-attribute semantic
        handling, which is a dream compared to using YACC's parser
        actions.
contact: ?
info-source: Irving Reid <irving@bli.com> in comp.compilers
status: ?
updated: 1988/11/29
language: BNF (Extended)
package: PCCTS (Purdue Compiler-Construction Tool Set)
version: 1.06
parts: scanner generator, parser generator (LL(k)), documentation,
tutorial
author: Terence J. Parr <parrt@ecn.purdue.edu>, Will E. Cohen
<cohenw@ecn.purdue.edu>, Henry G. Dietz <hankd@ecn.purdue.edu>
how to get: ftp pub/pccts/1.06 from marvin.ecn.purdue.edu
uk: ftp /comput*/progra*/langu*/tools/pccts/* from src.doc.ic.ac.uk
description: PCCTS is similar to a highly integrated version of YACC
and LEX; where ANTLR (ANother Tool for Language
Recognition) corresponds to YACC and DLG (DFA-based
Lexical analyzer Generator) functions like LEX.
However, PCCTS has many additional features which make
it easier to use for a wide range of translation
problems. PCCTS grammars contain specifications for
lexical and syntactic analysis, semantic predicates,
intermediate-form construction and error reporting.
Rules may employ Extended BNF (EBNF) grammar constructs
and may define parameters, return values and local
variables. Languages described in PCCTS are recognized
        via LL(k) parsers constructed in pure, human-readable
        C code. PCCTS parsers may be compiled with C++.
ports: UNIX, DOS, OS/2
portability: very high
contact: Terence J. Parr <parrt@ecn.purdue.edu>
updated: 1992/12/14
language: Coco (BNF variant) ?
package: Cocol ?
version: 2 ?
parts: parser generator(LL(1))
description: ?
status: ?
contact: Pat Terry?
updated: ?
language: BNF ??
package: LLGen
version: ?
parts: parser generator
author: ? Fischer and LeBlanc ?
how to get: ? ftp from csczar.ncsu.edu ?
description: LL(1) parser generator
conformance: subset of FMQ
reference: "Crafting A Compiler", by Fischer and LeBlanc
status: ?
contact: ?
updated: 1990/03/31
language: BNF (Extended), BNF (yacc), Modula-2
package: GMD Toolbox for Compiler Construction (aka Cocktail)
version: 9209
parts: parser generator (LALR -> C, Modula-2),
        parser generator (LL(1) -> C, Modula-2),
        scanner generator (-> C, Modula-2),
        translator (Extended BNF -> BNF),
        translator (BNF (yacc) -> Extended BNF),
        translator (Modula-2 -> C),
        abstract syntax tree generator, attribute-evaluator generator,
        documentation, tests, examples
how to get: ftp pub/cocktail/dos from ftp.karlsruhe.gmd.de
OS/2: ftp.eb.ele.tue.nl/pub/src/cocktail/dos-os2.zoo
description: A huge set of compiler building tools.
requires: (ms-dos only) DJ Delorie's DOS extender (go32)
(OS/2 only) emx programming environment for OS/2
ports: msdos, unix, os/2
contact: Josef Grosch <grosch@karlsruhe.gmd.de>
OS/2: Willem Jan Withagen <wjw@eb.ele.tue.nl>
discussion: subscribe to Cocktail using listserv@eb.ele.tue.nl
updated: 1992/10/01
language: BNF ????
package: T-gen
version: 2.1
parts: parser generator, documentation, ?
author: Justin Graver <graver@comm.mot.com>
how to get: ftp pub/st80_r41/T-gen2.1/* from st.cs.uiuc.edu
description: T-gen is a general-purpose object-oriented tool for the
automatic generation of string-to-object translators.
It is written in Smalltalk and lives in the Smalltalk
programming environment. T-gen supports the generation
of both top-down (LL) and bottom-up (LR) parsers, which
will automatically generate derivation trees, abstract
syntax trees, or arbitrary Smalltalk objects. The simple
specification syntax and graphical user interface are
intended to enhance the learning, comprehension, and
usefulness of T-gen.
ports: ParcPlace Objectworks/Smalltalk 4.0 & 4.1
requires: Smalltalk-80
updated: 1992/10/18
language: BNF
package: Eli Compiler Construction System
version: 3.4.2
parts: ?????, translator(WEB->BNF?)
how to get: ftp pub/cs/distribs/eli/* from ftp.cs.colorado.edu
ports: Sun-3/SunOS4.1 Sun-4/SunOS4.1.2 RS/6000/AIX3 Mips/Ultrix4.2
HP9000/300/HP-UX8.00 HP9000/700/HP-UX8.07
description: Eli integrates off-the-shelf tools and libraries with
specialized language processors to generate complete compilers
quickly and reliably. It simplifies the development of new
special-purpose languages, implementation of existing languages
on new hardware and extension of the constructs and features of
existing languages.
discussion: <eli-request@cs.colorado.edu>
contact: <compiler@cs.colorado.edu>, <compiler@uni-paderborn.de>
updated: 1993/02/11
language: Milarepa
package: Milarepa Perl/BNF Parser
version: Prototype 1.0
parts: parser-generator, examples, tutorial
author: Jeffrey Kegler <jeffrey@netcom.com>
description: Milarepa takes a source grammar in the Milarepa language (a
straightforward mix of BNF and Perl) and generates a Perl file,
which, when enclosed in a simple wrapper, parses some third
language described by the source grammar.
This is intended to be a real hacker's parser. It is not
restricted to LR(k), and the parse logic follows directly from
the BNF. It handles ambiguous grammars, ambiguous tokens
(tokens which were not positively identified by the lexer) and
allows the programmer to change the start symbol. The grammar
may not be left recursive. The input must be divided into
sentences of a finite maximum length. There is no fixed
distinction between terminals and non-terminals, that is, a
symbol can both match the input AND be on the left hand side of
        a production. Multiple Milarepa grammars are allowed in a single
        Perl program.
        It is only a prototype, primarily due to poor speed. This is
        intended to be remedied after Perl 5.0 is out.
requires: perl
updated: 1993/03/17
language: BNF (yacc)
package: NewYacc
version: 1.0
parts: parser generator, documentation
how to get: ftp src/newyacc.1.0.*.Z from flubber.cs.umd.edu
author: Jack Callahan <callahan@mimsy.cs.umd.edu>
description: [someone want to fill it in? --muir]
reference: see Dec 89 CACM for a brief overview of NewYacc.
updated: 1992/02/10
language: BNF (yacc)
package: bison
version: 1.18
parts: parser generator, documentation
author: Robert Corbett ?
how to get: ftp bison-*.tar.Z from a GNU archive site
bugs: bug-gnu-utils@prep.ai.mit.edu
ports: unix, atari, ?
restriction: !! will apply the GNU General Public License to *your* code !!
updated: 1992/01/28
language: BNF (yacc)
package: ? jaccl ?
version: ?
parts: parser generator
author: Dave Jones <djones@megatest.uucp>
description: an LR(1) parser generator
how to get: ?
updated: 1989/09/08
language: BNF (yacc)
package: byacc (Berkeley Yacc)
version: 1.9
parts: parser generator
author: Robert Corbett <Robert.Corbett@eng.sun.com>
how to get: To be determined. Probably ftp from a Berkeley system.
description: ?
history: Used to be called Zoo, and before that, Zeus
updated: 1993/02/22
language: BNF (yacc)
package: aflex-ayacc
version: 1.2a
parts: parser generator (Ada), scanner generator (Ada)
author: IRUS (Irvine Research Unit in Software)
how to get: ftp pub/irus/aflex-ayacc_1.2a.tar.Z from liege.ics.uci.edu
description: Lex and Yacc equivalents that produce Ada output
announcements: irus-software-request@ics.uci.edu
contact: irus-software-request@ics.uci.edu
updated: 1993/01/06
language: BURS ?
package: Iburg
version: ?
parts: parser generator?
author: Christopher W. Fraser <cwf@research.att.com>, David R. Hanson
<drh@princeton.edu>, Todd A. Proebsting <todd@cs.arizona.edu>
how to get: ftp pub/iburg.tar.Z from ftp.cs.princeton.edu
description: Iburg is a program that generates a fast tree parser. It is
compatible with Burg. Both programs accept a cost-augmented
tree grammar and emit a C program that discovers an optimal
parse of trees in the language described by the grammar. They
have been used to construct fast optimal instruction selectors
for use in code generation. Burg uses BURS; Iburg's matchers
do dynamic programming at compile time.
updated: 1993/02/10
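The compile-time dynamic programming mentioned above can be illustrated
with a much-simplified sketch (the rules, names, and costs below are
invented for illustration and are not Iburg's input syntax): each node of
the expression tree is labeled bottom-up with the cheapest rule that can
derive it.

```python
# Much-simplified sketch of cost-based tree labeling as done by
# Burg/Iburg-style instruction selectors.  The "rules" are hypothetical:
#   reg: CONST            "load-imm"  cost 1
#   reg: ADD(reg, reg)    "add"       cost 1
#   reg: ADD(reg, CONST)  "add-imm"   cost 1   (folds the constant)

class Node:
    def __init__(self, op, kids=()):
        self.op, self.kids = op, kids
        self.cost = None    # min cost of computing this node into a register
        self.rule = None    # rule achieving that cost

def label(n):
    """Bottom-up dynamic programming: return min cost of evaluating n."""
    if n.cost is not None:
        return n.cost
    candidates = []
    if n.op == 'CONST':
        candidates.append(('load-imm', 1))
    elif n.op == 'ADD':
        left, right = n.kids
        candidates.append(('add', 1 + label(left) + label(right)))
        if right.op == 'CONST':
            # cheaper rule covering the ADD and the CONST together
            candidates.append(('add-imm', 1 + label(left)))
    n.rule, n.cost = min(candidates, key=lambda c: c[1])
    return n.cost
```

For ADD(ADD(CONST, CONST), CONST) this picks 'add-imm' at both ADD nodes,
for a total cost of 3 rather than the naive 5.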
language: C, C++, Objective-C, RTL
package: GNU CC (gcc)
version: 2.3.3
parts: compiler, runtime, libraries, examples, documentation
author: Richard Stallman <rms@gnu.ai.mit.edu> and others
how to get: ftp gcc-2.3.3.tar.Z from a GNU archive site
description: A very high quality, very portable compiler for C, C++,
Objective-C. The compiler is designed to support multiple
front-ends and multiple back-ends by translating first
into RTL (Register Transfer Language) and from there into
assembly for the target architecture. Front ends for
Ada, Pascal, and Fortran are all under development.
conformance: C: superset of K&R C and ANSI C.
C++: not exactly cfront 3.0? [could someone tell me which
version of cfront it is equivalent to, if any? --muir]
Objective-C: ?
portability: very high in theory, somewhat annoying in practice
ports: 3b1, a29k, aix385, alpha, altos3068, amix, arm, convex,
crds, elxsi, fx2800, fx80, genix, hp320,
        i386-{dos,isc,sco,sysv.3,sysv.4,mach,bsd,linux}, iris,
i860, i960, irix4, m68k, m88ksvsv.3, mips-news,
mot3300, next, ns32k, nws3250-v.4, hp-pa, pc532,
plexus, pyramid, romp, rs6000, sparc-sunos,
sparc-solaris2, sparc-sysv.4, spur, sun386, tahoe, tow,
umpis, vax-vms, vax-bsd, we32k
status: actively developed
restriction: Copyleft
bugs: gnu.gcc.bug
discussion: gnu.gcc.help
announcements: gnu.gcc.announce
updated: 1992/12/26
language: C
package: GNU superoptimizer
version: 2.2
author: Torbjorn Granlund <tege@gnu.ai.mit.edu> with Tom Wood
parts: exhaustive instruction sequence optimizer
how to get: ftp superopt-2.2.tar.Z from a GNU archive site
description: GSO is a function sequence generator that uses an exhaustive
generate-and-test approach to find the shortest instruction
sequence for a given function. You have to tell the
superoptimizer which function and which CPU you want to get
code for.
This is useful for compiler writers.
restriction: Copyleft
ports: Alpha, Sparc, i386, 88k, RS/6000, 68k, 29k, Pyramid(SP,AP,XP)
bugs: Torbjorn Granlund <tege@gnu.ai.mit.edu>
updated: 1993/02/16
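The exhaustive generate-and-test idea is easy to sketch in miniature
(Python, with an invented three-operation instruction set; a real
superoptimizer enumerates actual machine instructions and verifies
surviving candidates much more carefully than this probe-based test does):

```python
# Miniature generate-and-test search in the spirit of the superoptimizer:
# enumerate straight-line op sequences, shortest first, and accept the
# first one matching the goal function on every probe input.
# The three "instructions" below are invented for the example.
import itertools

OPS = {
    'neg':   lambda x: -x,
    'not':   lambda x: ~x,          # bitwise complement: ~x == -x - 1
    'sar31': lambda x: x >> 31,     # arithmetic shift right by 31
}

def run(seq, x):
    for op in seq:
        x = OPS[op](x)
    return x

def search(goal, probes, max_len=3):
    for length in range(1, max_len + 1):
        for seq in itertools.product(OPS, repeat=length):
            if all(run(seq, x) == goal(x) for x in probes):
                return list(seq)
    return None
```

For instance, asking for the sign-mask function (-1 for negative inputs,
0 otherwise) discovers the classic branch-free sequence ['sar31'].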
language: C
package: xdbx
version: 2.1
parts: X11 front end for dbx
how to get: retrieve xdbx from comp.sources.x volumes 11, 12, 13, 14, & 16
contact: Po Cheung <cheung@sw.mcc.com>
updated: 1992/02/22
language: C
package: ups
version: 2.1
parts: interpreter, symbolic debugger, tests, documentation
how to get: ? ftp contrib/ups*.tar.Z from export.lcs.mit.edu ?
unofficial: unofficial enhancements by Rod Armstrong <rod@sj.ate.slb.com>,
available by ftp misc/unix/ups/contrib/rob from sj.ate.slb.com
author: Mark Russell <mtr@ukc.ac.uk>
description: Ups is a source-level C debugger that runs under X11 or
        SunView. Ups includes a C interpreter which allows you to add
        fragments of code simply by editing them into the source window.
ports: Sun, Decstation, VAX(ultrix), HLH Clipper
discussion: ups-users-request@ukc.ac.uk
bugs: Mark Russell <mtr@ukc.ac.uk>
updated: 1991/05/20
language: C (ANSI)
package: lcc
version: 1.8
parts: compiler, test suite, documentation
author: Dave Hanson <drh@cs.princeton.edu>
how to get: ftp pub/lcc/lccfe-*.tar.Z from princeton.edu
description: + hand coded C parser (faster than yacc)
+ retargetable
+ code "as good as GCC"
ports: vax (mips, sparc, 68k backends are commercial)
status: small-scale production use with commercial backends; the
        commercial backends are cheap (free?) to universities.
discussion: lcc-requests@princeton.edu
updated: 1992/02/20
language: C
package: GCT
version: 1.4
parts: test-coverage-preprocessor
author: Brian Marick <marick@cs.uiuc.edu>
how to get: ftp pub/testing/gct.file/ftp.* from cs.uiuc.edu
description: GCT is a test-coverage tool based on GNU C. Coverage tools
        measure how thoroughly a test suite exercises a program.
restriction: CopyLeft
discussion: Gct-Request@cs.uiuc.edu
support: commercial support available from author, (217) 351-7228
ports: sun3, sun4, rs/6000, 68k, 88k, hp-pa, ibm 3090,
ultrix, convex, sco
updated: 1993/02/12
language: C
package: MasPar mpl, ampl
version: 3.1
parts: compiler
how to get: ftp pub/mpl-* from maspar.maspar.com
description: mpl and ampl, the intrinsic parallel languages for MasPar's
        machines, are C-based (ampl is actually a gcc port these days).
        You can get the source from maspar.com.
contact: ?
updated: ?
language: C
package: dsp56k-gcc
version: ?
parts: compiler
how to get: ftp pub/ham/dsp/dsp56k-tools/dsp56k-gcc.tar.Z from nic.funet.fi
au: ftp pub/micros/56k/g56k.tar.Z from evans.ee.adfa.oz.au
description: A port of gcc 1.37.1 to the Motorola DSP56000 done by
Motorola
contact: ?
updated: ?
language: C
package: dsp56165-gcc
version: ?
parts: compiler
author: Andrew Sterian <asterian@eecs.umich.edu>
how to get: ftp usenet/alt.sources/? from wuarchive.wustl.edu
description: A port of gcc 1.40 to the Motorola DSP56156 and DSP56000.
updated: ?
language: C
package: Harvest C
version: 2.1
how to get: ftp mac/development/languages/harves* from archive.umich.edu
description: ?
ports: Macintosh
contact: Eric W. Sink
updated: 1992/05/26
language: C, C++
package: Xcoral
version: 1.72
parts: editor
how to get: ftp ? from ftp.inria.fr
description: Xcoral is a multi-window, mouse-based text editor for the X
        Window System, with a built-in browser to navigate through C
        functions and C++ class hierarchies. Xcoral provides
        variable-width fonts, menus, scrollbars, buttons, search,
        regions, kill-buffers, and a 3D look. Commands are accessible
        from menus or standard key bindings. Xcoral is a direct Xlib
        client and runs on color/monochrome X displays.
contact: ?
updated: 1993/03/14
language: C++
package: aard ???
version: ?
parts: memory use tracer
how to get: ftp pub/aard.tar.Z from wilma.cs.brown.edu
description: We have a prototype implementation of a tool to do memory
checking. It works by keeping track of the typestate of each
byte of memory in the heap and the stack. The typestate can be
one of Undefined, Uninitialized, Free or Set. The program can
detect invalid transitions (i.e. attempting to set or use
undefined or free storage or attempting to access uninitialized
storage). In addition, the program keeps track of heap
management through malloc and free and at the end of the run
will report all memory blocks that were not freed and that are
not accessible (i.e. memory leaks).
        The tool works using a spliced-in shared library.
contact: Steve Reiss <spr@cs.brown.edu>
requires: Sparc, C++ 3.0.1, SunOS 4.X
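The typestate bookkeeping described above can be sketched as follows (a
hypothetical Python simplification that tracks one state per heap block
rather than per byte, and models only malloc/free plus reads and writes):

```python
# Hypothetical, much-simplified sketch of typestate-based memory checking:
# each heap block is Uninitialized after malloc, Set after a write, and
# Free after free.  Invalid transitions are recorded as errors, and any
# block never freed is reported as a leak.

class HeapChecker:
    def __init__(self):
        self.state = {}     # block id -> 'uninitialized' | 'set' | 'free'
        self.errors = []

    def malloc(self, block):
        self.state[block] = 'uninitialized'

    def write(self, block):
        if self.state.get(block) in (None, 'free'):
            self.errors.append('write to undefined/free storage: %r' % block)
        else:
            self.state[block] = 'set'

    def read(self, block):
        if self.state.get(block) != 'set':
            self.errors.append('read of invalid storage: %r' % block)

    def free(self, block):
        if self.state.get(block) in (None, 'free'):
            self.errors.append('bad free: %r' % block)
        else:
            self.state[block] = 'free'

    def leaks(self):
        return [b for b, s in self.state.items() if s != 'free']
```

Reading an uninitialized block, double-freeing, and forgetting to free all
show up in the error and leak reports, just as in the per-byte version.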
language: C++
package: ET++
version: 3.0-alpha
parts: class libraries, documentation
how to get: ftp C++/ET++/* from iamsun.unibe.ch
contact: Erich Gamma <gamma@ifi.unizh.ch>
updated: 1992/10/26
language: C++
package: C++ grammar
how to get: comp.sources.misc volume 25
description: [is this a copy of the Roskind grammar or something else?
        --muir]
parts: parser(yacc)
updated: 1991/10/23
language: C++
package: COOL
version: ?
parts: libraries, tests, documentation
how to get: ftp ? from cs.utexas.edu
description: A C++ class library developed at Texas Instruments. Cool
        contains a set of containers like Vectors, List, Hash_Table,
etc. It uses a shallow hierarchy with no common base
        class. The functionality is close to Common Lisp data
structures (like libg++). The template syntax is very close
to Cfront3.x and g++2.x. Can build shared libraries on Suns.
ports: ?
contact: Van-Duc Nguyen <nguyen@crd.ge.com>
updated: 1992/08/05
language: C++, Extended C++
package: EC++
version: ?
parts: translator(C++), documentation
author: Glauco Masotti <masotti@lipari.usc.edu>
how to get: ? ftp languages/c++/EC++.tar.Z from ftp.uu.net ?
description: EC++ is a preprocessor that translates Extended C++
into C++. The extensions include:
+ preconditions, postconditions, and class invariants
+ parameterized classes
+ exception handling
+ garbage collection
status: ?
updated: 1989/10/10
language: C++
package: LEDA
version: 3.0
parts: libraries
how to get: ftp pub/LEDA/* from ftp.cs.uni-sb.de
description: A library of efficient data types and algorithms.
New with 3.0: both template and non-template versions.
contact: Stefan N"aher <stefan@mpi-sb.mpg.de>
updated: 1992/11/30
language: E (a persistent C++ variant)
package: GNU E
version: 2.3.3
parts: compiler
how to get: ftp exodus/E/gnu_E* from ftp.cs.wisc.edu
description: GNU E is a persistent, object oriented programming language
developed as part of the Exodus project. GNU E extends C++
with the notion of persistent data, program level data objects
that can be transparently used across multiple executions of a
program, or multiple programs, without explicit input and
output operations.
GNU E's form of persistence is based on extensions to the C++
type system to distinguish potentially persistent data objects
from objects that are always memory resident. An object is
made persistent either by its declaration (via a new
"persistent" storage class qualifier) or by its method of
allocation (via persistent dynamic allocation using a special
overloading of the new operator). The underlying object
storage system is the Exodus storage manager, which provides
concurrency control and recovery in addition to storage for
persistent data.
restriction: Copyleft; not all runtime sources are available (yet)
requires: release 2.1.1 of the Exodus storage manager
contact: exodus@cs.wisc.edu
updated: 1993/01/20
language: C (ANSI)
package: ? 1984 ANSI C to K&R C preprocessor ?
version: ?
parts: translator(K&R C)
author: ?
how to get: from comp.sources.unix archive volume 1
status: ?
updated: ?
language: C (ANSI)
package: unproto ?
version: ? 4 ? 1.6 ?
parts: translator(K&R C)
author: Wietse Venema <wietse@wzv.win.tue.nl>
how to get: ftp pub/unix/unproto4.shar.Z from ftp.win.tue.nl
contact: ?
updated: ?
language: C (ANSI)
package: cproto
version: ?
parts: translator(K&R C)
author: Chin Huang <chin.huang@canrem.com>
how to get: from comp.sources.misc archive volume 29
description: cproto generates function prototypes from function definitions.
It can also translate function definition heads between K&R
style and ANSI C style.
ports: UNIX, MS-DOS
updated: 1992/07/18
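What "translating function definition heads" means can be seen from a toy
converter (a hypothetical Python sketch handling only the simplest cases;
cproto itself parses real C and copes with far more):

```python
# Toy K&R -> ANSI conversion of a function definition head, e.g.
#   'int f(a, b) int a; char *b;'  ->  'int f(int a, char *b)'
# Hypothetical sketch only: real C declarators need a real parser.
import re

def kr_to_ansi(head):
    m = re.match(r'\s*(.+?)\s*\(\s*([\w\s,]*)\)\s*(.*)$', head, re.S)
    ret_and_name, params, decls = m.groups()
    names = [p.strip() for p in params.split(',') if p.strip()]
    typed = {}
    for decl in decls.split(';'):
        decl = decl.strip()
        if decl:
            # the last identifier in a simple declarator is the name
            typed[re.findall(r'\w+', decl)[-1]] = decl
    # parameters left undeclared default to int, as in K&R C
    args = ', '.join(typed.get(n, 'int ' + n) for n in names)
    return '%s(%s)' % (ret_and_name, args)
```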
language: C (ANSI)
package: cextract
version: 1.7
parts: translator(K&R C), header file generator
how to get: ftp from any comp.sources.reviewed archive
author: Adam Bryant <adb@cs.bu.edu>
description: A C prototype extractor, ideal for generating
        header files for large multi-file C programs; it
        provides an automated way of generating the
        prototypes for all of the functions in such a program.
        It can also serve as a rudimentary documentation
        extractor, generating a sorted list of all functions
        and their locations.
ports: Unix, VMS
updated: 1992/11/03
language: C, ANSI C, C++
package: ? The Roskind grammars ?
version: ? 2.0 ?
parts: parser(yacc), documenation
author: Jim Roskind <jar@hq.ileaf.com>
how to get: ftp pub/*grammar* from ics.uci.edu
description: The C grammar is CLEAN: it does not use %prec or %assoc,
        and has only one shift-reduce conflict. The C++ grammar has
        a few conflicts.
status: ?
updated: 1989/12/26
language: C, C++
package: xxgdb
version: 1.06
parts: X11 front end for gdb
how to get: retrieve xxgdb from comp.sources.x volumes 11, 12, 13, 14, & 16
contact: Pierre Willard <pierre@la.tce.com>
updated: 1992/02/22
language: C, C++
package: gdb
version: 4.8
parts: symbolic debugger, documentation
how to get: ftp gdb-*.tar.[zZ] from a GNU archive site
author: many, but most recently Stu Grossman <grossman@cygnus.com>
and John Gilmore <gnu@cygnus.com> of Cygnus Support
ports: most unix variants, vms, vxworks, amiga, msdos
bugs: <bug-gdb@prep.ai.mit.edu>
restriction: CopyLeft
updated: 1993/02/19
language: Duel (a <practical> C debugging language)
package: DUEL
version: 1.10
parts: front end
author: Michael Golan <mg@cs.princeton.edu>
how to get: ftp duel/* from ftp.cs.princeton.edu
description: DUEL is a front end to gdb. It implements a language
        designed for debugging C programs. It mainly features
        efficient ways to select and display data items.
requires: gdb
status: author is pushing the system hard.
updated: 1993/03/15
language: C, C++, Objective C
package: emx programming environment for OS/2
version: 0.8f
parts: gcc, g++, gdb, libg++, .obj linkage, DLL, headers
how to get: ftp pub/os2/2.0/programming/emx-0.8f from ftp-os2.nmsu.edu
europe: ftp soft/os2/emx-0.8f from rusmv1.rus.uni-stuttgart.de
author: Eberhard Mattes <mattes@azu.informatik.uni-stuttgart.de>
discussion: subscribe to emxlist using listserv@ludd.luth.se
updated: 1992/09/21
language: C
package: PART's C Pthreads
version: ?
parts: library
author: PART (POSIX / Ada-Runtime Project)
how to get: ftp pub/PART/pthreads* from ftp.cs.fsu.edu
description: As part of the PART project we have been designing and
implementing a library package of preemptive threads which is
compliant with POSIX 1003.4a Draft 6. A description of the
interface for our Pthreads library is now available on ftp. Our
implementation is limited to the Sun SPARC architecture and
SunOS 4.1.x. We do not make any use of Sun's light-weight
processes to achieve better performance (with one I/O-related
exception).
restriction: GNU Library General Public License
discussion: send "Subject: subscribe-pthreads" to mueller@uzu.cs.fsu.edu
contact: pthreads-bugs@ada.cs.fsu.edu
updated: 1993/03/05
language: C, nroff
package: c2man
version: 1.10
parts: documentation generator (C -> nroff -man)
how to get: alt.sources archive
author: Graham Stoney <greyham@research.canon.oz.au>
description: c2man is a program for generating Unix-style manual pages in
        nroff -man format directly from ordinary comments embedded
        in C source code.
updated: 1992/11/20
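The comment-extraction idea can be sketched like this (hypothetical
Python; the real c2man parses declarations properly and emits nroff -man
markup, which this toy omits):

```python
# Toy sketch of pairing each function definition with the /* ... */
# comment immediately preceding it -- the raw material a documentation
# generator like c2man formats into manual pages.
import re

DOC = re.compile(
    r'/\*\s*(.*?)\s*\*/\s*'                  # the preceding comment
    r'[\w \t*]+?(\w+)\s*\([^)]*\)\s*\{',     # the function definition head
    re.S)

def extract_docs(source):
    """Return a list of (function name, comment text) pairs."""
    return [(m.group(2), m.group(1)) for m in DOC.finditer(source)]
```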
language: Small-C
package: smallc
version: ?
parts: compiler
author: ?
how to get: ?, comp.sources.unix volume 5
description: Small-C is a subset of the C programming language for which a
number of public-domain compilers have been written. The
original compiler was written by Ron Cain and appeared in the
May 1980 issue of Dr. Dobb's Journal. More recently, James
E. Hendrix has improved and extended the original Small-C
compiler and published "The Small-C Handbook", ISBN
0-8359-7012-4 (1984). Both compilers produce 8080 assembly
language, which is the most popular implementation of Small-C
to date. My 6502 Small-C compiler for the BBC Micro is based
on "RatC", a version of the original Ron Cain compiler
described by R. E. Berry and B. A. Meekings in "A Book on C", ISBN
0-333-36821-5 (1984). The 6502 compiler is written in Small-C
and was bootstrapped using Zorland C on an Amstrad PC1512 under
MSDOS 3.2, then transferred onto a BBC Micro using Kermit. The
compiler can be used to cross-compile 6502 code from an MSDOS
host, or as a 'resident' Small-C compiler on a BBC Micro.
conformance: subset of C
ports: 68k, 6809, VAX, 8080, BBC Micro, Z80
updated: 1989/01/05
language: C-Refine, C++-Refine, *-Refine
package: crefine
version: 3.0
parts: pre-processor, documentation
how to get: acquire from any comp.sources.reviewed archive
author: Lutz Prechelt <prechelt@ira.uka.de>
description: C-Refine is a preprocessor for C and languages that
vaguely resemble C's syntax. It allows symbolic naming
of code fragments so as to redistribute complexity and
provide running commentary.
portability: high
ports: unix, msdos, atari, amiga.
updated: 1992/07/16
language: CAML (Categorical Abstract Machine Language)
package: CAML
version: 3.1
parts: ?
author: ?
description: CAML is a language belonging to the ML family, including:
+ lexical binding discipline
+ static type inference
+ user-defined (sum and product) types
+ possibly lazy data structures
+ possibly mutable data structures
+ interface with the Yacc parser generator
+ pretty-printing tools
+ and a complete library.
how to get: ? ftp lang/caml from nuri.inria.fr ?
status: ?
discussion: ?
ports: Sun-3 Sun-4 Sony-68k Sony-R3000 Decstation Mac-A/UX Apollo
portability: ?
bugs: weis@margaux.inria.fr or caml@margaux.inria.fr
updated: ?
language: Caml Light
package: Caml Light
version: 0.4
how to get: ftp lang/caml-light/* from nuri.inria.fr
author: Xavier Leroy <xleroy@margaux.inria.fr>
parts: bytecode compiler, runtime, scanner generator, parser generator
ports: most unix, Macintosh, Amiga, MSDOS
conformance: subset of CAML
features: very small
performance: five to ten times slower than SML-NJ
portability: very high
contact: Xavier Leroy <xleroy@margaux.inria.fr>
updated: 1991/10/05
language: Candle, IDL (Interface Description Language)
package: Scorpion System
version: 5.0
author: University of Arizona
parts: software development environment for developing
software development environments, documentation
how to get: ftp scorpion/* from cs.arizona.edu
description: 20 tools that can be used to construct specialized
programming environments
history: The Scorpion Project was started by Prof. Richard
Snodgrass as an outgrowth of the SoftLab Project (which
produced the IDL Toolkit) that he started when he was at the
University of North Carolina. The Scorpion Project is
directed by him at the University of Arizona and by Karen
Shannon at the University of North Carolina at Chapel Hill.
reference: "The Interface Description Language: Definition and Use,"
by Richard Snodgrass, Computer Science Press, 1989,
ISBN 0-7167-8198-0
ports: Sun-3, Sun-4, Vax, Decstation, NeXT, Sequent, HP9000
discussion: info-scorpion-request@cs.arizona.edu
contact: scorpion-project@cs.arizona.edu
updated: 1991/04/10
language: CASE-DSP (Computer Aided Software Eng. for Digital Signal Proc)
package: Ptolemy
version: ?
parts: graphical algorithm layout, code generator, simulator
how to get: ftp pub/? from ptolemy.berkeley.edu
description: Ptolemy provides a highly flexible foundation for the
specification, simulation, and rapid prototyping of systems.
It is an object oriented framework within which diverse models
of computation can co-exist and interact. For example, using
Ptolemy a data-flow system can be easily connected to a
hardware simulator which in turn may be connected to a
discrete-event system, etc. Because of this, Ptolemy can be
used to model entire systems.
In addition, Ptolemy now has code generation capabilities.
From a flow graph description, Ptolemy can generate both C code
and DSP assembly code for rapid prototyping. Note that code
generation is not yet complete, and is included in the current
release for demonstration purposes only.
ports: Sun-4, MIPS/Ultrix; DSP56001, DSP96002, and SPROC.
contact: ptolemy@ohm.berkeley.edu
updated: ?
language: Common Lisp
package: CMU Common Lisp
version: 16f
parts: incremental compiler, profiler, runtime, documentation,
editor, debugger
how to get: ftp /afs/cs.cmu.edu/project/clisp/release/16f-source.tar.Z
from ftp.cs.cmu.edu. Precompiled versions also available
description: includes *macs-like editor (hemlock), pcl, and clx.
conformance: mostly X3J13 compatible.
ports: Sparc/Mach Sparc/SunOS Mips/Mach IBMRT/Mach
contact: slisp@cs.cmu.edu
updated: 1992/12/17
language: Common Lisp
package: PCL (Portable Common Loops)
version: 8/28/92 PCL
parts: library
author: ? Richard Harris <rharris@ptolemy2.rdrc.rpi.edu> ?
how to get: ftp pcl/* from parcftp.xerox.com
description: A portable CLOS implementation. CLOS is the object oriented
programming standard for Common Lisp. Based on Symbolics
FLAVORS and Xerox LOOPS, among others. Loops stands for
Lisp Object Oriented Programming System.
status: ?
ports: Lucid CL 4.0.1, CMUCL 16e, ?
updated: 1992/09/02
language: Common Lisp
package: WCL
version: 2.14
parts: ?, shared library runtime, source debugger
author: Wade Hennessey <wade@leland.Stanford.EDU>
how to get: ftp pub/wcl/* from sunrise.stanford.edu
description: A Common Lisp implementation as a shared library. WCL
is not a 100% complete Common Lisp, but it does have
the full development environment including dynamic file
loading and debugging. A modified version of GDB provides
mixed-language debugging. A paper describing WCL was
published in the proceedings of the 1992 Lisp and Functional
Programming Conference.
requires: GNU C 2.1 (not 2.2.2)
ports: Sparc/SunOS
contact: <wcl@sunrise.stanford.edu>
discussion: <wcl-request@sunrise.stanford.edu>
updated: 1992/10/28
language: Common Lisp
package: KCL (Kyoto Common Lisp)
parts: translator(C), interpreter
how to get: ? ftp pub/kcl*.tar.Z from rascal.ics.utexas.edu ?
author: T. Yuasa <yuasa@tutics.tut.ac.jp>, M. Hagiya
<hagiya@is.s.u-tokyo.ac.jp>
description: KCL, Kyoto Common Lisp, is an implementation of Lisp.
It is written in the language C to run under Un*x-like
operating systems. KCL is very C-oriented; for example,
the compilation of Lisp functions in KCL involves a
subsidiary C compilation.
conformance: conforms to the book ``Common Lisp: The Language,''
G. Steele, et al., Digital Press, 1984.
restriction: must sign license agreement
discussion: kcl-request@cli.com
bugs: kcl@cli.com
updated: 1987/06
language: Common Lisp
package: AKCL (Austin Kyoto Common Lisp)
version: 1-615
parts: improvements
author: Bill Schelter <wfs@cli.com>
how to get: ftp pub/akcl-*.tar.Z from rascal.ics.utexas.edu
author: Bill Schelter <wfs@rascal.ics.utexas.edu>
description: AKCL is a collection of ports, bug fixes, and
performance improvements to KCL.
ports: Decstation3100, HP9000/300, i386/sysV, IBM-PS2/aix, IBM-RT/aix
SGI Sun-3/Sunos[34].* Sun-4 Sequent-Symmetry IBM370/aix,
VAX/bsd VAX/ultrix NeXT
updated: 1992/04/29
language: Common Lisp
package: CLX
version: 5.01
parts: library
how to get: ftp contrib/CLX.R5.01.tar.Z from export.lcs.mit.edu
description: Common Lisp binding for X
contact: ?
ports: ?, CMU Common Lisp
bugs: bug-clx@expo.lcs.mit.edu
updated: 1992/08/26
language: Common Lisp
package: CLISP
version: ?
parts: bytecode compiler, translator(->C), runtime, library, editor
author: Bruno Haible <haible@ma2s2.mathematik.uni-karlsruhe.de>,
Michael Stoll <michael@rhein.iam.uni-bonn.de>
how to get: ftp pub/lisp/clisp from ma2s2.mathematik.uni-karlsruhe.de
description: CLISP is a Common Lisp (CLtL1) implementation by Bruno Haible
of Karlsruhe University and Michael Stoll of Munich University,
both in Germany. It needs only 1.5 MB of RAM. German and
English versions are available, French coming soon. Packages
running in CLISP include PCL and, on Unix machines, CLX.
conformance: CLISP is mostly CLtL1 compliant. It implements 99% of the
standard.
ports: Atari, Amiga, MS-DOS, OS/2, Linux, Sun4, Sun386i, HP9000/800
and others
discussion: send "subscribe clisp-list" to
listserv@ma2s2.mathematik.uni-karlsruhe.de
restriction: GNU General Public License
updated: 1993/03/10
language: Common Lisp
package: Cartier's Contribs
version: 1.2
parts: libraries, documentation
author: Guillaume Cartier <cartier@math.uqam.ca>
how to get: ftp pub/mcl2/contrib/Cartiers* from cambridge.apple.com
description: libraries for MCL
requires: Macintosh Common Lisp
updated: 1992/11/30
language: Common Lisp
package: QT-OBJECTS
version: ?
author: Michael Travers <mt@media.mit.edu> and others
parts: library
description: interface between MCL and QuickTime
requires: Macintosh Common Lisp
updated: 1992/12/20
language: Common Lisp
package: Memoization ?
version: ?
parts: library
how to get: ftp pub/Memoization from archive.cs.umbc.edu
author: Marty Hall <hall@aplcenmp.apl.jhu.edu>
description: Automatic memoization is a technique by which an existing
function can be transformed into one that "remembers"
previous arguments and their associated results
updated: 1992/11/30
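The idea described above is easy to see in a few lines. A minimal sketch of automatic memoization in Python; illustrative only, this is not the Common Lisp package listed here:

```python
# Sketch of automatic memoization: transform an existing function into one
# that "remembers" previous arguments and their associated results.
def memoize(fn):
    cache = {}
    def wrapper(*args):
        if args not in cache:       # only compute unseen argument tuples
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # fast: each fib(k) is computed once, then cached
```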
language: Common Lisp
package: GINA (Generic Interactive Application)
version: 2.2
parts: language binding, class library, interface builder
how to get: ftp /gmd/gina from ftp.gmd.de
usa: ftp contrib/? from export.lcs.mit.edu
description: GINA is an application framework based on Common Lisp and
OSF/Motif to simplify the construction of graphical
interactive applications. It consists of:
+ CLM, a language binding for OSF/Motif in Common Lisp.
+ the GINA application framework, a class library in CLOS
+ the GINA interface builder, an interactive tool implemented
with GINA to design Motif windows.
requires: OSF/Motif 1.1 or better. Common Lisp with CLX, CLOS, PCL and
processes.
ports: Franz Allegro, Lucid, CMU CL and Symbolics Genera
discussion: gina-users-request@gmdzi.gmd.de
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 1 07:00:38 EST 1993
Xref: iecc comp.compilers:4462 comp.lang.misc:9657 comp.sources.d:4722 comp.archives.admin:882 news.answers:6793
Newsgroups: comp.compilers,comp.lang.misc,comp.sources.d,comp.archives.admin,news.answers
Path: iecc!compilers-sender
From: David Muir Sharnoff <muir@idiom.berkeley.ca.us>
Subject: Catalog of compilers, interpreters, and other language tools [p2of3]
Message-ID: <free2-Apr-93@comp.compilers>
Followup-To: comp.archives.admin
Summary: monthly posting of free language tools that include source code
Keywords: tools, FTP, administrivia
Sender: compilers-sender@iecc.cambridge.ma.us
Supersedes: <free2-Mar-93@comp.compilers>
Reply-To: muir@idiom.berkeley.ca.us
Organization: University of California, Berkeley
References: <free1-Apr-93@comp.compilers>
Date: Thu, 1 Apr 1993 12:00:32 GMT
Approved: compilers@iecc.cambridge.ma.us
Expires: Sat, 1 May 1993 23:59:00 GMT
Archive-name: free-compilers/part2
Last-modified: 1993/03/24
Version: 3.2
language: Concurrent Clean
package: The Concurrent Clean System
version: 0.8.1
parts: development environment, documentation, compiler(byte-code),
compiler(native), interpreter(byte-code), examples
how to get: ftp pub/Clean/* from ftp.cs.kun.nl
author: Research Institute for Declarative Systems,
University of Nijmegen
description: The Concurrent Clean system is a programming
environment for the functional language Concurrent
Clean, developed at the University of Nijmegen, The
Netherlands. The system is one of the fastest
implementations of functional languages available at
the moment. Its I/O libraries make it possible to do
modern, yet purely functional I/O (including windows,
menus, dialogs etc.) in Concurrent Clean. With the
Concurrent Clean system it is possible to develop
real-life applications in a purely functional
language.
* lazy and purely functional
* strongly typed - based on Milner/Mycroft scheme
* module structure
* modern I/O
* programmer-influenced evaluation order by annotations
contact: clean@cs.kun.nl
ports: Sun-3, Sun-4, Macintosh
updated: 1992/11/07
language: Dylan
package: Thomas
version: ? first public release ?
parts: translator(Scheme)
how to get: ftp pub/DEC/Thomas from gatekeeper.pa.dec.com
author: Matt Birkholz <Birkholz@crl.dec.com>, Jim Miller
<JMiller@crl.dec.com>, Ron Weiss <RWeiss@crl.dec.com>
description: Thomas, a compiler written at Digital Equipment
Corporation's Cambridge Research Laboratory, compiles
a language compatible with the language described
in the book "Dylan(TM) an object-oriented dynamic
language" by Apple Computer Eastern Research and
Technology, April 1992. It does not perform well.
Thomas is NOT Dylan(TM).
ports: MIT's CScheme, DEC's Scheme->C, Marc Feeley's Gambit, Mac, PC,
Vax, MIPS, Alpha, 680x0
requires: Scheme
updated: 1992/09/11
language: E
package: Amiga E
version: 2.1b
parts: compiler, assembler, linker, utilities
author: Wouter van Oortmerssen <Wouter@mars.let.uva.nl>
how to get: ftp amiga/dev/lang/AmigaE21b.lha from amiga.physik.unizh.ch
description: An Amiga specific E compiler. E is a powerful and flexible
procedural programming language and Amiga E a very fast
compiler for it, with features such as compilation speed of
20000 lines/minute on a 7 MHz Amiga, inline assembler and
linker integrated into compiler, large set of integrated
functions, module concept with 2.04 includes as modules,
flexible type-system, quoted expressions, immediate and typed
lists, low level polymorphism, exception handling and much,
much more. Written in Assembly and E.
discussion: comp.sys.amiga.programmer (sometimes)
ports: Amiga
portability: not portable at all
status: actively developed
updated: 1993/03/01
language: EDIF (Electronic Design Interchange Format)
package: Berkeley EDIF200
version: 7.6
parts: translator-building toolkit
author: Wendell C. Baker and Prof A. Richard Newton of the Electronics
Research Laboratory, Department of Electrical Engineering and
Computer Sciences at the University of California, Berkeley, CA
how to get: ftp pub/edif from ic.berkeley.edu
description: ?
ports: ?
restriction: no-profit w/o permission
updated: 1990/07
language: EDIF v 2 0 101
package: University of Manchester EDIF v 2 0 101 Syntax Checker
how to get: ftp pub/edif from edif.cs.man.ac.uk
description: Parser/Syntax checker for EDIF v 2 0 101 written in ANSI-C
language: Eiffel
package: ?
version: ?
parts: source checker
author: Olaf Langmack <langmack@inf.fu-berlin.de> and Burghardt Groeber
how to get: ftp pub/heron/ep.tar.Z from ftp.fu-berlin.de
description: A compiler front-end for Eiffel-3 is available. It has been
generated automatically with the Karlsruhe toolbox for
compiler construction according to the most recent public
language definition. The parser derives an easy-to-use
abstract syntax tree, supports elementary error recovery
and provides a precise source code indication of errors. It
performs a strict syntax check and analyses 4000 lines of
source code per second on a Sun-SPARC workstation.
updated: 1992/12/14
language: EuLisp
package: Feel (Free and Eventually Eulisp)
version: 0.75
parts: interpreter, documentation
how to get: ftp pub/eulisp from ftp.bath.ac.uk
author: Pete Broadbery <pab@maths.bath.ac.uk>
description: + integrated object system
+ a module system
+ parallelism
+ interfaces to PVM library, tcp/ip sockets, futures,
Linda, and CSP.
ports: most unix
portability: high, but can use shared memory and threads if available
updated: 1992/09/14
language: FMPL of Accardi
package: FMPL interpreter
version: 1
parts: interpreter, documentation
author: Jon Blow <blojo@xcf.berkeley.edu>
how to get: ftp src/local/fmpl/* from xcf.berkeley.edu
description: FMPL is an experimental prototype-based object-oriented
programming language developed at the Experimental Computing
Facility of the University of California, Berkeley.
+ lambda-calculus based constructs.
+ event-driven (mainly I/O events)
updated: 1992/06/02
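Prototype-based object systems like FMPL's dispense with classes: new objects are made by cloning existing ones, and lookups that miss are delegated to the prototype. A rough Python sketch of the idea; the names are invented for illustration and this is not FMPL syntax:

```python
# Sketch of prototype-based objects: no classes, just objects that clone a
# prototype and delegate missing slot lookups to it.
class Proto:
    def __init__(self, parent=None, **slots):
        self.parent = parent
        self.slots = dict(slots)
    def clone(self, **slots):
        # a new object whose unset slots fall through to this one
        return Proto(parent=self, **slots)
    def get(self, name):
        if name in self.slots:
            return self.slots[name]
        if self.parent is not None:
            return self.parent.get(name)   # delegate up the prototype chain
        raise AttributeError(name)

point = Proto(x=0, y=0)
p = point.clone(x=3)            # overrides x, inherits y from the prototype
print(p.get("x"), p.get("y"))
```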
language: FORTH
package: TILE Forth
version: 2.1
parts: interpreter
author: Mikael Patel <mip@sectra.se>
how to get: ftp tile-forth-2.1.tar.Z from a GNU archive site
description: Forth interpreter in C; many Forth libraries
conformance: Forth83
restriction: shareware/GPL
ports: unix
updated: 1991/11/13
language: FORTH
package: cforth
version: ?
parts: interpreter
author: ?
how to get: comp.sources.unix archive volume 1
description: ?
updated: ?
language: FORTH
package: F68K
version: ?
how to get: ftp atari/Languages/f68k.* from archive.umich.edu
description: a portable Forth system for Motorola 68k computers
ports: Atari ST/TT, Amiga, Sinclair QL and OS9
portability: very high for 68000 based systems
contact: Joerg Plewe <joerg.plewe@mpi-dortmund.mpg.de>
updated: 1992/12/14
language: Forth, Yerk
package: Yerk
version: 3.62
parts: ?
how to get: ftp pub/Yerk/? from oddjob.uchicago.edu
description: Yerk is an object oriented language based on a
Forth Kernel with some major modifications. It
was originally known as Neon, developed and sold
as a product by Kriya Systems from 1985 to 1989.
Several of us at The University of Chicago have
maintained Yerk since its demise as a product.
Because of the possible trademark conflict that
Kriya mentions, we picked the name Yerk, which is
at least not an acronym for anything, but rather
stands for Yerkes Observatory, part of the Department
of Astronomy and Astrophysics at U of C.
author: ?
updated: ?
language: Forth?
package: Mops
version: 2.3
parts: ?
how to get: ftp pub/Yerk/? from oddjob.uchicago.edu
description: ???
updated: 1993/03/22
language: Fortran
package: f2c
version: ?
parts: translator(C)
author: ?
how to get: ftp f2c/? from netlib@research.att.com
bugs: dmg@research.att.com
updated: ? 1991/02/16 ?
language: Fortran
package: Floppy
version: ?
parts: ?
how to get: ffccc in comp.sources.misc archive volume 12
description: ?
contact: ?
updated: 1992/08/04
language: Fortran
package: Flow
version: ?
parts: ?
how to get: comp.sources.misc archive volume 31
author: Julian James Bunn <julian@vxcrna.cern.ch>
description: The Flow program is a companion to Floppy; it allows the user
to produce various reports on the structure of Fortran
77 code, such as flow diagrams and common block tables.
requires: Floppy
ports: VMS, Unix, CMS
language: Fortran
package: Adaptor (Automatic DAta Parallelism TranslatOR)
version: ?
parts: translator(Fortran), documentation
how to get: ftp gmd/adaptor/* from ftp.gmd.de
description: Adaptor is a tool that transforms data parallel
programs written in Fortran with array extensions,
parallel loops, and layout directives to parallel
programs with explicit message passing.
ADAPTOR is not a compiler but a source to source
transformation that generates Fortran 77 host and
node programs with message passing. The new
generated source codes have to be compiled by the
compiler of the parallel machine.
ports: Alliant FX/2800, iPSC/860, Net of Sun-4 or RS/6000
Workstations (based on PVM), Parsytec GCel, Meiko Concerto
contact: Thomas Brandes <brandes@gmdzi.gmd.de>
updated: 1992/10/17
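The transformation described above can be pictured in miniature. A conceptual Python sketch, not ADAPTOR's actual output, of turning one global array statement into per-node loops over locally owned data plus gathered partial results:

```python
# Conceptual sketch of a data-parallel to message-passing transformation:
# a single global array statement becomes node programs that each work on
# their own block, plus a host that gathers partial results. Real targets
# would use explicit send/receive; a list of "messages" stands in here.

def data_parallel_sum(a):
    # the source program: one global array statement (sum of squares)
    return sum(x * x for x in a)

def node_program(local_block):
    # generated node code: operate only on the locally owned elements
    return sum(x * x for x in local_block)

def host_program(a, nodes=4):
    # generated host code: distribute elements round-robin, gather results
    blocks = [a[i::nodes] for i in range(nodes)]
    messages = [node_program(b) for b in blocks]   # stands in for send/recv
    return sum(messages)

a = list(range(100))
print(host_program(a) == data_parallel_sum(a))   # same answer either way
```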
language: Fortran, C
package: cfortran.h
version: 2.6
parts: macros, documentation, examples
author: Burkhard Burow
how to get: ftp cfortran/* from zebra.desy.de
description: cfortran.h is an easy-to-use powerful bridge between
C and FORTRAN. It provides a completely transparent, machine
independent interface between C and FORTRAN routines and
global data.
cfortran.h provides macros which allow the C preprocessor to
translate a simple description of a C (Fortran) routine or
global data into a Fortran (C) interface.
references: reviewed in RS/Magazine November 1992 and
a user's experiences with cfortran.h are to be described
in the 1/93 issue of Computers in Physics.
portability: high
ports: VAX VMS or Ultrix, DECstation, Silicon Graphics, IBM RS/6000,
Sun, CRAY, Apollo, HP9000, LynxOS, f2c, NAG f90.
contact: burow@vxdesy.cern.ch
updated: 1992/04/12
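The core problem cfortran.h solves, describing a foreign routine's argument and return types once so that calls cross the language boundary safely, has a loose modern analogue in Python's ctypes. This is an analogy to illustrate the idea, not cfortran.h itself, and it calls the C library rather than Fortran:

```python
# Analogy only: cfortran.h uses macros to turn a one-line description of a
# Fortran (or C) routine into a native-looking interface. ctypes does the
# same "describe the types, then call natively" job against the C library.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.strlen.argtypes = [ctypes.c_char_p]   # the interface description
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))   # called as if it were a native function
```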
language: Fortran
package: fsplit
version: ?
parts: ?
how to get: ?
description: a tool to split up monolithic fortran programs
updated: ?
language: Fortran
package: ?
version: ?
author: Steve Mccrea <mccrea@gdwest.gd.com>
description: a tool to split up monolithic fortran programs
requires: new awk
updated: ?
language: FP
package: ? funcproglang ?
version: ?
parts: translator(C)
author: ?
how to get: comp.sources.unix archive volume 13
description: ? Backus Functional Programming ?
updated: ?
language: Garnet ??
package: Garnet
version: 2.1 alpha
how to get: ftp /usr/garnet/? from a.gp.cs.cmu.edu
description: ?
contact: ?
updated: ?
language: Garnet
package: Multi-Garnet
version: 2.1
how to get: ftp /usr/garnet/alpha/src/contrib/multi-garnet
from a.gp.cs.cmu.edu
author: Michael Sannella <sannella@cs.washington.edu>
description: better constraint system for Garnet ??
updated: 1992/09/21
language: Gofer (Haskell derivative)
package: Gofer
version: 2.28a
parts: interpreter, translator(->C), documentation, examples
author: Mark Jones <jones-mark@cs.yale.edu>
how to get: ftp pub/haskell/gofer from nebula.cs.yale.edu
uk: pub/Packages/Gofer from ftp.comlab.ox.ac.uk
description: Gofer is based quite closely on the Haskell programming
language, version 1.2. It supports lazy evaluation, higher
order functions, pattern matching, polymorphism, overloading
etc and runs on a wide range of machines.
conformance: Gofer does not implement all of Haskell, although it is
very close.
status: maintained but not developed (for a while anyway)
ports: many, including Sun, PC, Mac, Atari, Amiga
updated: 1993/03/09
language: Haskell
package: Chalmers Haskell (aka Haskell B.)
version: ?
parts: ?
how to get: ftp pub/haskell/chalmers/hbc from animal.cs.chalmers.se
requires: LML
contact: ?
updated: 1992/07/06
language: Haskell
package: The Glasgow Haskell Compiler (GHC)
version: 0.10
parts: translator(C), tests, profiler
how to get: ftp pub/haskell/glasgow/* from nebula.cs.yale.edu
uk: ftp pub/haskell/glasgow/* from ftp.dcs.glasgow.ac.uk
se: ftp pub/haskell/glasgow/* from animal.cs.chalmers.se
description: + almost all of Haskell is implemented
+ An extensible I/O system is provided, based on a "monad"
+ significant language extensions are implemented: Fully
fledged unboxed data types, Ability to write arbitrary in-line
C-language code, Incrementally-updatable arrays, Mutable
reference types.
+ generational garbage collector
+ Good error messages
+ programs compiled with GHC "usually" beat
Chalmers-HBC-compiled ones.
+ compiler is written in a modular and well-documented way.
+ Highly configurable runtime system.
- No interactive system.
- Compiler is greedy on resources.
requires: GNU C 2.1+, perl, Chalmers HBC 0.998.x (source build only)
conformance: Almost all of Haskell is implemented.
ports: Sun4
portability: should be high
bugs: <glasgow-haskell-bugs@dcs.glasgow.ac.uk>
contact: <glasgow-haskell-request@dcs.glasgow.ac.uk>
updated: 1992/12/14
language: Hermes
package: IBM Watson prototype Hermes system
version: 0.8alpha patchlevel 01
parts: bytecode compiler, bytecode translator(C), runtime
author: Andy Lowry <lowry@watson.ibm.com>
how to get: ftp pub/hermes/README from software.watson.ibm.com
description: Hermes is a very-high-level integrated language and
system for implementation of large systems and
distributed applications, as well as for
general-purpose programming. It is an imperative,
strongly typed, process-oriented language. Hermes
hides distribution and heterogeneity from the
programmer. The programmer sees a single abstract
machine containing processes that communicate using
calls or sends. The compiler, not the programmer,
deals with the complexity of data structure layout,
local and remote communication, and interaction with
the operating system. As a result, Hermes programs are
portable and easy to write. Because the programming
paradigm is simple and high level, there are many
opportunities for optimization which are not present in
languages which give the programmer more direct control
over the machine.
reference: Strom, Bacon, Goldberg, Lowry, Yellin, Yemini. Hermes: A
Language for Distributed Computing. Prentice-Hall, Englewood
Cliffs, NJ. 1991. ISBN: 0-13-389537-8.
ports: RS6000 Sun-4 NeXT IBM-RT/bsd4.3 (Sun-3 and Convex soon)
discussion: comp.lang.hermes
updated: 1992/03/22
language: Hope
package: ?
parts: ?
how to get: ftp ? from brolga.cc.uq.oz.au
author: ?
description: Functional language with polymorphic types and lazy lists.
First language to use call-by-pattern.
ports: Unix, Mac, PC
updated: 1992/11/27
language: ici
package: ici
parts: interpreter, documentation, examples
author: Tim Long
how to get: ftp pub/ici.cpio.Z from extro.ucc.su.oz.au
description: ICI has dynamic arrays, structures and typing with the flow
control constructs, operators and syntax of C. There are
standard functions to provide the sort of support provided
by the standard I/O and the C libraries, as well as
additional types and functions to support common needs such as
simple data bases and character based screen handling.
ports: Sun4, 80x86 Xenix, NextStep, MSDOS
features: + direct access to many system calls
+ structures, safe pointers, floating point
+ simple, non-indexed built in database
+ terminal-based windowing library
contact: ?
portability: high
status: actively developed.
updated: 1992/11/10
language: Icon
package: icon
version: 8.7 (8.5, 8.0 depending on platform)
parts: interpreter, compiler (some platforms), library (v8.8)
author: Ralph Griswold <ralph@CS.ARIZONA.EDU>
how to get: ftp icon/* from cs.arizona.edu
description: Icon is a high-level, general purpose programming language that
contains many features for processing nonnumeric data,
particularly for textual material consisting of strings of
characters.
- no packages, one name-space
- no exceptions
+ object oriented features
+ records, sets, lists, strings, tables
+ unlimited line length
- unix interface is primitive
+ co-expressions
references: "The Icon Programming Language", Ralph E. Griswold and
Madge T. Griswold, Prentice Hall, second edition, 1990.
"The Implementation of the Icon Programming Language",
Ralph E. Griswold and Madge T. Griswold, Princeton
University Press 1986
ports: Amiga, Atari, CMS, Macintosh, Macintosh/MPW, MSDOS, MVS, OS/2,
Unix (most variants), VMS, Acorn
discussion: comp.lang.icon
contact: icon-project@cs.arizona.edu
updated: 1992/08/21
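Icon's signature features for string processing are generators and goal-directed evaluation: an expression can suspend, produce a result, and be resumed for more results until the surrounding goal succeeds. Python's generator functions give a rough feel for this; the sketch below is illustrative and is not Icon syntax:

```python
# Sketch of Icon-style generators: an expression yields a sequence of
# results, and the surrounding "goal" keeps resuming it until one fits.
def find(sub, s):
    # generate every position of sub in s, like Icon's find()
    i = s.find(sub)
    while i != -1:
        yield i
        i = s.find(sub, i + 1)

# goal-directed use: the first occurrence past position 3, if any
pos = next((i for i in find("or", "horror story") if i > 3), None)
print(pos)
```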
language: IDL (Project DOE's Interface Definition Language)
package: SunSoft OMG IDL CFE
version: 1.0
parts: compiler front end, documentation
author: SunSoft Inc.
how to get: ftp pub/OMG_IDL_CFE_1.0 from omg.org
description: OMG's (Object Management Group) CORBA 1.1 (Common
Object Request Broker Architecture) specification
provides the standard interface definition between
OMG-compliant objects. IDL (Interface Definition
Language) is the base mechanism for object
interaction. The SunSoft OMG IDL CFE (Compiler Front
End) provides a complete framework for building CORBA
1.1-compliant preprocessors for OMG IDL. To use
SunSoft OMG IDL CFE, you must write a back-end; full
instructions are included. No problem. A complete
compiler of IDL would translate IDL into client side
and server side routines for remote communication in
the same manner as the current Sun RPCL compiler. The
additional degree of freedom that the IDL compiler
front end provides is that it allows integration of new
back ends which can translate IDL to various
programming languages. Locally at Sun we are working on
a back end that will produce C and C++, and we know of
companies (members of OMG) that are interested in other
target languages such as Pascal or Lisp.
contact: idl-cfe@sun.com
updated: 1992/10/23
language: IFP (Illinois Functional Programming)
package: ifp
version: 0.5
parts: interpreter
author: Arch D. Robison <robison@shell.com>
how to get: comp.sources.unix archive volume 10
description: A variant of Backus' "Functional Programming" language
with a syntax reminiscent of Modula-2. The interpreter
is written in portable C.
references: [1] Arch D. Robison, "Illinois Functional Programming: A
Tutorial," BYTE, (February 1987), pp. 115--125.
[2] Arch D. Robison, "The Illinois Functional
Programming Interpreter," Proceedings of 1987 SIGPLAN
Conference on Interpreters and Interpretive Techniques,
(June 1987), pp. 64-73
ports: UNIX, MS-DOS, CTSS (Cray)
updated: ?
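Backus' FP, which IFP implements with a Modula-2-like syntax, builds programs by combining functions with "combining forms" rather than by naming arguments. A hedged Python sketch of three classic forms (composition, apply-to-all, insert); the spellings are stand-ins, not IFP's notation:

```python
# Stand-ins for three FP combining forms: programs are built by combining
# functions, never by mentioning their data explicitly.
from functools import reduce

def compose(f, g):          # FP's composition, f . g
    return lambda x: f(g(x))

def apply_to_all(f):        # FP's "alpha f" (map)
    return lambda xs: [f(x) for x in xs]

def insert(f):              # FP's "/f" (fold)
    return lambda xs: reduce(f, xs)

# inner product, function-level: (insert +) . (apply_to_all *) . transpose
inner = compose(insert(lambda a, b: a + b),
                compose(apply_to_all(lambda p: p[0] * p[1]),
                        lambda pair: list(zip(*pair))))
print(inner(([1, 2, 3], [4, 5, 6])))   # 1*4 + 2*5 + 3*6
```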
language: INTERCAL
package: ?
version: ?
how to get: archie?
description: ?
contact: ?
updated: ?
language: J
package: J-mode
what: add-on to J
parts: emacs macros
how to get: ftp pub/j/gmacs/j-interaction-mode.el from think.com
updated: 1991/03/04
language: J
package: J from ISI
version: 6
parts: interpreter, tutorial
author: Kenneth E. Iverson and Roger Hui <hui@yrloc.ipsa.reuter.com>
how to get: ftp languages/apl/j/* from watserv1.waterloo.edu
description: J was designed and developed by Ken Iverson and Roger Hui. It
is similar to the language APL, departing from APL in using
the ASCII alphabet exclusively, but employing a spelling
scheme that retains the advantages of the special alphabet
required by APL. It has added features and control structures
that extend its power beyond standard APL. Although it can be
used as a conventional procedural programming language, it can
also be used as a pure functional programming language.
ports: Dec, NeXT, SGI, Sun-3, Sun-4, VAX, RS/6000, MIPS, Mac, Acorn
IBM-PC, Atari, 3b1, Amiga
updated: 1992/10/31
language: Janus
package: qdjanus
version: 1.3
parts: translator(prolog)
author: Saumya Debray <debray@cs.arizona.edu>
how to get: ftp janus/qdjanus/* from cs.arizona.edu
conformance: mostly compliant with "Programming in Janus" by
Saraswat, Kahn, and Levy.
description: qdjanus is a Janus-to-Prolog compiler meant to be used
with SICStus Prolog
updated: 1992/05/18
language: Janus
package: jc
version: 1.50 alpha
parts: translator(C)
author: David Gudeman <gudeman@cs.arizona.edu>
how to get: ftp janus/jc/* from cs.arizona.edu
description: jc is a Janus-to-C compiler (considerably faster than qdjanus).
jc is a _sequential_ implementation of a _concurrent_ language.
status: jc is an experimental system, undergoing rapid development.
It is in alpha release currently.
bugs: jc-bugs@cs.arizona.edu
discussion: janusinterest-request@parc.xerox.com
ports: sun-4, sun-3, Sequent Symmetry
updated: 1992/06/09
language: Kevo
package: kevo
version: 0.9b2
parts: ?, demo programs, user's guide, papers
author: Antero Taivalsaari <antero@csr.uvic.ca>
how to get: ftp /ursa/kevo/* from ursamajor.uvic.ca
description: Experimental prototype-based object-oriented system.
Although the Kevo system has been built to experiment
with ideas which are somewhat irrelevant from the
viewpoint of Forth, the system does bear some
resemblance to Forth; in particular, the system
executes indirect threaded code, and a great deal
of the primitives are similar to those of Forth.
ports: Macintosh ('020 or better)
contact: kevo-interest@ursamajor.uvic.ca
updated: 1992/09/21
language: PCN
package: PCN
version: 2.0
parts: compiler?, runtime, linker, libraries, tools, debugger,
profiler, tracer
author: Ian Foster <foster@mcs.anl.gov>, Steve Tuecke
<tuecke@mcs.anl.gov>, and others
how to get: ftp pub/pcn/pcn_v2.0.tar.Z from info.mcs.anl.gov
description: PCN is a parallel programming system designed to improve
the productivity of scientists and engineers using parallel
computers. It provides a simple language for specifying
concurrent algorithms, interfaces to Fortran and C, a
portable toolkit that allows applications to be developed
on a workstation or small parallel computer and run
unchanged on supercomputers, and integrated debugging and
performance analysis tools. PCN was developed at Argonne
National Laboratory and the California Institute of
Technology. It has been used to develop a wide variety of
applications, in areas such as climate modeling, fluid
dynamics, computational biology, chemistry, and circuit
simulation.
ports: (workstation nets): Sun4, NeXT, RS/6000, SGI
(multicomputers): iPSC/860, Touchstone DELTA
(shared memory multiprocessors): Symmetry/Dynix
contact: <pcn@mcs.anl.gov>
updated: 1993/02/12
language: RLaB language (math manipulation - MATLAB-like)
package: RLaB
version: 0.50 - first public release, still alpha
parts: interpreter, libraries, documentation
author: Ian Searle <ians@eskimo.com>
how to get: ftp pub/alpha/RLaB from evans.ee.adfa.oz.au
description: RLaB is a "MATLAB-like" matrix-oriented programming
language/toolbox. RLaB focuses on creating a good experimental
environment (or laboratory) in which to do matrix math.
Currently RLaB has numeric scalars and matrices (real and
complex), as well as string scalars and matrices. RLaB also contains
a list variable type, which is a heterogeneous associative
array.
restriction: GNU General Public License
requires: GNUPLOT, lib[IF]77.a (from f2c)
ports: many unix, OS/2, Amiga
bugs: Ian Searle <ians@eskimo.com>
updated: 1993/02/16
language: FUDGIT language (math manipulation)
package: FUDGIT
version: 2.27
parts: interpreter
author: Thomas Koenig <ig25@rz.uni-karlsruhe.de> ??
how to get: ftp /pub/linux/sources/usr.bin/fudgit-* from tsx-11.mit.edu ??
description: FUDGIT is a double-precision multi-purpose fitting program. It
can manipulate complete columns of numbers in the form of
vector arithmetic. FUDGIT is also an expression language
interpreter understanding most of C grammar except pointers.
Moreover, FUDGIT is a front end for any plotting program
supporting commands from stdin. It is a nice mathematical
complement to GNUPLOT, for example.
requires: GNUPLOT
ports: AIX, HPUX, Linux, IRIX, NeXT, SunOS, Ultrix
updated: 1993/02/22
language: Unix BC (arbitrary-precision arithmetic language)
package: GNU BC
version: 1.02
parts: interpreter?
author: ?
how to get: ftp bc-1.02.tar.Z from a GNU archive site
description: Bc is an arbitrary precision numeric processing language. Its
syntax is similar to C but differs in many substantial areas.
This version was written to be a POSIX compliant bc processor
with several extensions to the draft standard. This version
does not use the historical method of having bc be a compiler
for the dc calculator. This version has a single executable
that both compiles the language and runs the resulting "byte
code". The "byte code" is NOT the dc language.
bugs: ?
updated: ?
language: Calc? (symbolic math calculator)
package: Calc
version: 2.02
parts: interpreter, emacs mode
author: ?
how to get: ftp calc-2.02.tar.z from a GNU archive site
description: Calc is an extensible, advanced desk calculator and
mathematical tool written in Emacs Lisp that runs as part of
GNU Emacs. It is accompanied by the "Calc Manual", which
serves as both a tutorial and a reference. If you wish, you
can use Calc as only a simple four-function calculator, but it
also provides additional features including choice of algebraic
or RPN (stack-based) entry, logarithms, trigonometric and
financial functions, arbitrary precision, complex numbers,
vectors, matrices, dates, times, infinities, sets, algebraic
simplification, differentiation, and integration.
bugs: ?
updated: ?
language: lex
package: flex
version: 2.3.7
parts: scanner generator
how to get: ftp flex-2.3.7.tar.Z from a GNU archive site or ftp.ee.lbl.gov
author: Vern Paxson <vern@ee.lbl.gov>
updated: 1992/10/20
language: LIFE (Logic, Inheritance, Functions, and Equations)
package: Wild_LIFE
version: first-release
parts: interpreter, manual, tests, libraries, examples
author: Paradise Project, DEC Paris Research Laboratory.
how to get: ftp pub/plan/Life.tar.Z from gatekeeper.dec.com.
description: LIFE is an experimental programming language with a
powerful facility for structured type inheritance. It
reconciles styles from functional programming, logic
programming, and object-oriented programming. LIFE
implements a constraint logic programming language with
equality (unification) and entailment (matching)
constraints over order-sorted feature terms. The
Wild_LIFE interpreter has a comfortable user interface
with incremental query extension ability. It contains
an extensive set of built-in operations as well as an X
Windows interface.
conformance: semantic superset of LOGIN and LeFun. Syntax is similar
to prolog.
discussion: life-request@prl.dec.com
bugs: life-bugs@prl.dec.com
contact: Peter Van Roy <vanroy@prl.dec.com>
ports: MIPS-Ultrix
portability: good in theory
updated: 1992/12/14
language: lisp
package: RefLisp
version: 2.67
parts: interpreter, documentation, examples, profiler
author: Bill Birch <bbirch@hemel.bull.co.uk>
how to get: ftp implementations/reflisp/* from the directory
/afs/cs.cmu.edu/user/mkant/Public/Lisp on ftp.cs.cmu.edu
description: The interpreter is a shallow-binding (i.e., everything has
dynamic scope), reference counting design making it suitable
for experimenting with real-time and graphic user interface
programming. Common Lisp compatibility macros are provided, and
most of the examples in "Lisp" by Winston & Horn have been run
on RefLisp. RefLisp makes no distinction between symbol-values
and function-values, so a symbol can be either but not both.
There are Lisp modules for lexical scope and for running
indefinite extent Scheme programs.
status: "Last Update for a While," author is emigrating to Australia
ports: MSDOS (CGA/EGA/VGA), Unix (AIX)
updated: 1993/02/09
language: lisp
package: xlisp
version: 2.1
parts: interpreter
author: David Michael Betz <dbetz@apple.com>
how to get: ftp pub/xlisp* from wasp.eng.ufl.edu
usmail: contact Tom Almy <toma@sail.labs.tek.com>
windows: ftp util/wxlslib.zip from ftp.cica.indiana.edu
version2.0: ftp pub/xlisp/* from cs.orst.edu
description: XLISP is an experimental programming language
combining some of the features of Common Lisp with an
object-oriented extension capability. It was
implemented to allow experimentation with
object-oriented programming on small computers.
conformance: subset of Common Lisp with additions of Class and Object
portability: very high: just needs a C compiler
ports: unix, amiga, atari, mac, MSDOS
restriction: ? no commercial use ?
updated: 1992/05/26 (unix), 1987/12/16 (other platforms)
language: lisp
package: "LISP, Objects, and Symbolic Programming"
version: ?
parts: book with compiler included
author: Robert R. Kessler and Amy R. Petajan
publisher: Scott, Foresman and Company, Glenview, IL
how to get: bookstore...
updated: 1988
language: lisp
package: franz lisp
version: ?
how to get: [does anyone know where you get franz lisp??? --muir]
author: ?
discussion: franz-friends-request@berkeley.edu
updated: ?
language: lisp (WOOL - Window Object Oriented Language)
package: GWM (Generic Window Manager)
version: ?
parts: interpreter, examples
author: ?
how to get: ftp contrib/gwm/* from export.lcs.mit.edu
france: ftp pub/gwm/* from avahi.inria.fr
description: Gwm is an extensible window manager for X11. It is
based on a WOOL kernel, an interpreted dialect of lisp
with specific window management primitives.
discussion: gwm-talk@???
contact: ?
updated: ?
language: lisp (elisp - Emacs Lisp)
package: GNU Emacs
version: 18.59
parts: editor, interpreter, documentation
author: Richard Stallman <rms@gnu.ai.mit.edu> and others
description: An editor that is almost an operating system. Quite
programmable. [someone want to say something better? --muir]
discussion: alt.religion.emacs, gnu.emacs.sources
announcements: gnu.emacs.announce
bugs: gnu.emacs.bug
help: gnu.emacs.help
ports: Unix, VMS, ?
updated: ?
language: Logo
package: logo
version: 4
parts: interpreter
author: ?
how to get: comp.sources.unix archive volume 10
description: ?
updated: ?
language: Logo
package: Berkeley Logo
version: 2.9 - alpha
parts: interpreter
author: Brian Harvey <bh@anarres.CS.Berkeley.EDU>
how to get: ftp pub/*logo* from anarres.cs.berkeley.edu
description: + Logo programs are compatible among Unix, PC, and Mac.
+ "richer" than MswLogo
- pretty slow.
- doesn't do anything fancy about graphics. (One turtle.)
ports: unix, pc, mac
updated: 1993/03/01
language: Logo
package: MswLogo
version: 3.2
parts: interpreter
author: George Mills <mills@athena.lkg.dec.com>
how to get: ftp pd1:<msdos.log>/MSW*.ZIP from OAK.Oakland.Edu
description: A windows front-end for Berkeley Logo
status: actively developed
bugs: George Mills <mills@athena.lkg.dec.com>
ports: MS Windows 3.x
updated: 1992/10/17
language: Lolli (logic programming)
package: Lolli
parts: ?
how to get: ftp pub/Lolli/Lolli-07.tar.Z from ftp.cis.upenn.edu
author: ? Josh Hodas <hodas@saul.cis.upenn.edu> ?
description: Lolli is an interpreter for logic programming based
on linear logic principles.
Lolli can be viewed as a refinement of the
Hereditary Harrop formulas of Lambda-Prolog. All the
operators (though not the higher order unification) of
Lambda-Prolog are supported, but with the addition of
linear variations. Thus a Lolli program distinguishes
between clauses which can be used as many, or as few,
times as desired, and those that must be used exactly
once.
requires: ML
updated: 1992/11/08
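The linear/unrestricted distinction drawn above can be illustrated with a tiny propositional prover. This is only a sketch of the idea in Python (invented names, no unification, nothing like full Lolli): linear facts must be used exactly once, unrestricted facts as often as desired.

```python
# Propositional sketch of linear vs. unrestricted clauses (not Lolli
# code; all names invented).  Linear facts are a consumable resource:
# a successful proof must use each of them exactly once.

def prove(goals, linear, unrestricted, rules):
    """goals: atoms to prove; linear: use-exactly-once facts (list);
    unrestricted: reusable facts (set); rules: (head, body-atoms) pairs."""
    if not goals:
        return not linear              # success only if all resources spent
    g, rest = goals[0], goals[1:]
    if g in unrestricted and prove(rest, linear, unrestricted, rules):
        return True
    if g in linear:                    # consume one copy of a linear fact
        remaining = list(linear)
        remaining.remove(g)
        if prove(rest, remaining, unrestricted, rules):
            return True
    for head, body in rules:           # backward chaining through rules
        if head == g and prove(list(body) + rest, linear, unrestricted, rules):
            return True
    return False

RULES = [("candy", ("coin",))]
print(prove(["candy"], ["coin"], set(), RULES))           # True
print(prove(["candy", "candy"], ["coin"], set(), RULES))  # False: one coin
```

With "coin" as a linear fact, one candy is provable and two are not; had "coin" been unrestricted, both queries would succeed.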
language: LOOPN
package: LOOPN
version: ?
parts: compiler?, simulator
how to get: ftp departments/computer_sci*/loopn.tar.Z from ftp.utas.edu.au
description: A compiler, simulator, and associated source control for an
object-oriented petri net language called LOOPN. In LOOPN,
a petri net is an extension
of coloured timed petri nets. The extension means firstly that
token types are classes. In other words, they consist of both
data fields and functions, they can be declared by inheriting
from other token types, and they can be used polymorphically.
The object-oriented extensions also mean that module or subnet
types are classes. LOOPN has been developed over a period of
about 5 years at the University of Tasmania, where it has been
used in teaching computer simulation and the modelling of
network protocols. A petri net is a directed, bipartite graph;
nodes are either places (represented by circles) or transitions
(represented by rectangles). A net is marked by placing tokens
on places. When all the places pointing to a transition (the
input places) have a token, the net may be fired by removing a
token from each input place and adding a token to each place
pointed to by the transition (the output places). Petri nets
are used to model concurrent systems, particularly in the
network protocol area.
contact: Charles Lakos <charles@probitas.cs.utas.edu.au>
updated: 1992/12/20
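The firing rule described above is easy to state in code. A minimal Python sketch of a plain place/transition net follows (LOOPN's class-typed tokens, timing, and subnets are omitted; names are invented):

```python
# Minimal place/transition net illustrating the firing rule described
# above: a transition is enabled when every input place holds a token;
# firing removes a token from each input place and adds one to each
# output place.  Plain tokens only -- not LOOPN code.
from collections import Counter

def enabled(marking, inputs):
    # every input place must hold enough tokens
    return all(marking[p] >= n for p, n in inputs.items())

def fire(marking, inputs, outputs):
    # consume input tokens, produce output tokens
    assert enabled(marking, inputs)
    m = Counter(marking)
    m.subtract(inputs)
    m.update(outputs)
    return m

# one transition moving a token from "ready" to "done"
m0 = Counter({"ready": 1})
m1 = fire(m0, {"ready": 1}, {"done": 1})
print(m1["ready"], m1["done"])     # 0 1
```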
language: MeldC (MELD, C)
package: MeldC
version: 2.0
parts: microkernel, compiler, debugger, manual, examples
author: MELD Project, Programming Systems Laboratory at
Columbia University
how to get: obtain license from <MeldC@cs.columbia.edu>
restriction: must sign license, cannot use for commercial purposes
description: MELDC is a reflective object-oriented coordination
programming language: a C-based, concurrent,
object-oriented language built on a reflective
architecture. The core of the architecture is
a micro-kernel (the MELDC kernel), which encapsulates
a minimum set of entities that cannot be modeled as
objects. All components outside of the
kernel are implemented as objects in MELDC itself
and are modularized in the MELDC libraries. MELDC is
reflective in three dimensions: structural,
computational and architectural. The structural
reflection indicates that classes and meta-classes are
objects, which are written in MELDC. The
computational reflection means that object behaviors
can be computed and extended at runtime. The
architectural reflection indicates that new
features/properties (e.g., persistency and
remoteness) can be constructed in MELDC.
ports: Sun4/SunOS4.1 Mips/Ultrix4.2
contact: <MeldC@cs.columbia.edu>
updated: 1992/12/15
language: ML
package: LML
version: ?
parts: compiler(?), interactive environment
how to get: ftp ? from animal.cs.chalmers.se
description: a lazy, purely functional variant of ML.
ports: ?
contact: ?
updated: 1992/07/06
language: m4
package: GNU m4
version: 1.0
parts: interpreter, ?
how to get: ftp m4-1.0.tar.Z from a GNU archive site
author: ?
description: A macro preprocessor language, somewhat flexible.
conformance: ?
ports: ?
updated: 1991/10/25
language: Modula-2, Pascal
package: m2
version: ? 7/2/92 ?
parts: ? compiler ?
history: The compiler was designed and built by Michael L.
Powell, and originally released in 1984. Joel
McCormack sped the compiler up, fixed lots of bugs, and
swiped/wrote a User's Manual. Len Lattanzi ported the
compiler to the MIPS.
description: A modula-2 compiler for VAX and MIPS. A Pascal
compiler for VAX is also included. The Pascal compiler
accepts a language that is almost identical to Berkeley
Pascal.
conformance: extensions:
+ foreign function and data interface
+ dynamic array variables
+ subarray parameters
+ multi-dimensional open array parameters
+ inline procedures
+ longfloat type
+ type-checked interface to C library I/O routines
how to get: ftp pub/DEC/Modula-2/m2.tar.Z from gatekeeper.dec.com
restriction: must pass changes back to Digital
ports: vax (ultrix, bsd), mips (ultrix)
contact: modula-2@decwrl.pa.dec.com
updated: 1992/07/06
language: Modula-2
package: mtc
parts: translator(C)
how to get: ftp soft/unixtools/compilerbau/mtc.tar.Z
from rusmv1.rus.uni-stuttgart.de
author: ?
description: ?
ports: ?
updated: 1991/10/25
language: Modula-2, Modula-3
package: M2toM3 ?
version: ?
parts: translator(Modula-2 -> Modula-3), ?
author: ?
how to get: ftp pub/DEC/Modula-3/contrib/M2toM3 from gatekeeper.dec.com
description: ?
requires: ?
updated: ?
language: Modula-2
package: PRAM emulator and parallel modula-2 compiler ??
version: ?
parts: compiler, emulator
how to get: ftp pub/pram/* from cs.joensuu.fi
description: A software emulator for parallel random access machine (PRAM)
and a parallel modula-2 compiler for the emulator. A PRAM
consists of P processors, an unbounded shared memory, and a
common clock. Each processor is a random access machine (RAM)
consisting of R registers, a program counter, and a read-only
signature register. Each RAM has an identical program, but the
RAMs can branch to different parts of the program. The RAMs
execute the program synchronously one instruction in one clock
cycle.
The pm2 programming language is a Modula-2/Pascal mixture with
extensions for parallel execution on a PRAM. Parallelism is
expressed by a pardo-loop structure. Additional features include
private/shared variables, two synchronization strategies, load
balancing, and parallel dynamic memory allocation.
contact: Simo Juvaste <sjuva@cs.joensuu.fi>
updated: 1993/02/17
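The PRAM model described above (identical programs, shared memory, one instruction per common clock cycle) can be mimicked in a few lines. This is an illustrative Python sketch, not pm2 or the emulator itself; the program and names are invented:

```python
# Sketch of synchronous PRAM execution: P RAMs run the same program in
# lockstep over a shared memory, each holding registers, a program
# counter, and a read-only signature register (its id).  Illustrative
# only; the real emulator and pm2 are unrelated to this code.

P = 4
shared = [0] * P                 # shared memory (unbounded in the model)

rams = [{"pc": 0, "reg": [0, 0], "sig": i} for i in range(P)]

def step(ram):
    # the common two-instruction program: compute sig^2, store it
    if ram["pc"] == 0:
        ram["reg"][0] = ram["sig"] * ram["sig"]
    elif ram["pc"] == 1:
        shared[ram["sig"]] = ram["reg"][0]
    ram["pc"] += 1               # RAMs may branch independently via pc

for clock in range(2):           # one instruction per clock cycle, all RAMs
    for ram in rams:
        step(ram)

print(shared)                    # [0, 1, 4, 9]
```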
language: Modula-3
package: SRC Modula-3
version: 2.11
parts: translator(C), runtime, library, documentation
how to get: ftp pub/DEC/Modula-3/m3-*.tar.Z from gatekeeper.dec.com
description: The goal of Modula-3 is to be as simple and safe as it
can be while meeting the needs of modern systems
programmers. Instead of exploring new features, we
studied the features of the Modula family of languages
that have proven themselves in practice and tried to
simplify them into a harmonious language. We found
that most of the successful features were aimed at one
of two main goals: greater robustness, and a simpler,
more systematic type system. Modula-3 retains one of
Modula-2's most successful features, the provision for
explicit interfaces between modules. It adds objects
and classes, exception handling, garbage collection,
lightweight processes (or threads), and the isolation
of unsafe features.
conformance: implements the language defined in SPwM3.
ports: i386/AIX 68020/DomainOS Acorn/RISCiX MIPS/Ultrix 68020/HP-UX
RS6000/AIX IBMRT/4.3 68000/NextStep i860/SVR4 SPARC/SunOS
68020/SunOS sun386/SunOS Multimax/4.3 VAX/Ultrix
contact: Bill Kalsow <kalsow@src.dec.com>
discussion: comp.lang.modula3
updated: 1992/02/09
language: Modula-3
package: m3pc
parts: ?
author: ?
how to get: ftp pub/DEC/Modula-3/contrib/m3pc* from gatekeeper.dec.com
description: an implementation of Modula-3 for PCs.
[Is this SRC Modula-3 ported? --muir]
updated: ?
language: Motorola DSP56001 assembly
package: a56
version: 1.1
parts: assembler
author: Quinn C. Jensen <jensenq@qcj.icon.com>
how to get: alt.sources archive
updated: 1992/08/10
language: natural languages
package: proof
parts: parser, documentation
author: Craig R. Latta <latta@xcf.Berkeley.EDU>
how to get: ftp src/local/proof/* from scam.berkeley.edu
description: a left-associative natural language grammar scanner
bugs: proof@xcf.berkeley.edu
discussion: proof-request@xcf.berkeley.edu ("Subject: add me")
ports: Decstation3100 Sun-4
updated: 1991/09/23
language: NewsClip ?
package: NewsClip
version: 1.01
parts: translator(NewsClip->C), examples, documentation
author: Looking Glass Software Limited but distributed by
ClariNet Communications Corp.
description: NewsClip is a very high level language designed for
writing netnews filters. It translates into C.
It includes support for various newsreaders.
restriction: Cannot sell the output of the filters. Donation is hinted at.
status: supported for ClariNet customers only
contact: newsclip@clarinet.com
updated: 1992/10/25
language: Oaklisp
package: oaklisp
version: 1.2
parts: interface, bytecode compiler, runtime system, documentation
author: Barak Pearlmutter, Kevin Lang
how to get: ftp /afs/cs.cmu.edu/user/bap/oak/ftpable/* from f.gp.cs.cmu.edu
description: Oaklisp is a Scheme where everything is an object. It
provides multiple inheritance, a strong error system,
setters and locators for operations, and a facility for
dynamic binding.
status: actively developed?
contact: Pearlmutter-Barak@CS.Yale.Edu ?
updated: 1992/05 ?
language: Oberon
package: Oberon from ETH Zurich
version: 2.2 (msdos: 1.0)
parts: compiler, programming environment, libraries, documentation
how to get: ftp Oberon/* from neptune.inf.ethz.ch
MSDOS: ftp Oberon/DOS386/* from neptune.inf.ethz.ch
macintosh: ??? same package or different ??? ftp
/mac/development/languages/macoberon2.40.sit.hqx
from archive.umich.edu
author: Josef Templ <templ@inf.ethz.ch>
conformance: superset (except Mac)
ports: DECstation/MIPS/Ultrix/X11 Macintosh/68020/MacOS/QuickDraw
IBM/RS6000/AIX/X11 Sun-4/SunOS4/X11 Sun-4/SunOS4/pixrect
MSDOS
contact: Leuthold@inf.ethz.ch
updated: 1992/07/20
language: Oberon2
package: Oberon-2 LEX/YACC definition
version: 1.4
parts: parser(yacc), scanner(lex)
how to get: mail bevan@cs.man.ac.uk with Subject "b-server-request" and
body "send oberon/oberon_2_p_v1.4.shar"
author: Stephen J Bevan <bevan@cs.man.ac.uk>
status: un-officially supported
updated: 1992/07/06
language: OPS5
package: PD OPS5
version: ?
parts: interpreter
how to get: ftp /afs/cs.cmu.edu/user/mkant/Public/Lisp/ops5* from
ftp.cs.cmu.edu
author: Written by Charles L. Forgy and ported to Common Lisp by
George Wood and Jim Kowalski.
description: Public domain implementation of an OPS5 interpreter. OPS5 is
a programming language for production systems. ??????
contact: ? Mark Kantrowitz <mkant+@cs.cmu.edu> ?
requires: Common Lisp
updated: 1992/10/17
language: Parallaxis
package: parallaxis
version: 2.0
parts: ?, simulator, x-based profiler
author: ?
how to get: ftp pub/parallaxis from ftp.informatik.uni-stuttgart.de
description: Parallaxis is a procedural programming language based
on Modula-2, but extended for data parallel (SIMD) programming.
The main approach for machine independent parallel programming
is to include a description of the virtual parallel machine
with each parallel algorithm.
ports: MP-1, CM-2, Sun-3, Sun-4, DECstation, HP 700, RS/6000
contact: ? Thomas Braunl <braunl@informatik.uni-stuttgart.de> ?
updated: 1992/10/23
language: Parlog
package: SPM System (Sequential Parlog Machine)
version: ?
parts: ?, documentation
author: ?
how to get: ? ftp lang/Parlog.tar.Z from nuri.inria.fr
description: a logic programming language ?
references: Steve Gregory, "Parallel Logic Programming in PARLOG",
Addison-Wesley, UK, 1987
ports: Sun-3 ?
restriction: ? no source code ?
updated: ??
language: Pascal
package: p2c
version: 1.20
parts: translator(Pascal->C)
author: Dave Gillespie <daveg@synaptics.com>
how to get: ftp ? from csvax.cs.caltech.edu
conformance: supports ANSI/ISO standard Pascal as well as substantial
subsets of HP, Turbo, VAX, and many other Pascal dialects.
ports: ?
updated: 1990/04/13
language: Pascal
package: ? iso_pascal ?
version: ?
parts: scanner(lex), parser(yacc)
author: ?
how to get: comp.sources.unix archive volume 13
description: ?
updated: ?
language: Pascal, Lisp, APL, Scheme, SASL, CLU, Smalltalk, Prolog
package: Tim Budd's C++ implementation of Kamin's interpreters
version: ?
parts: interpreters, documentation
author: Tim Budd <budd@cs.orst.edu>
how to get: ? ftp pub/budd/kamin/*.shar from cs.orst.edu ?
description: a set of interpreters written as subclasses based on
"Programming Languages, An Interpreter-Based Approach",
by Samuel Kamin.
requires: C++
status: ?
contact: Tim Budd <budd@fog.cs.orst.edu>
updated: 1991/09/12
language: Pascal
package: ? frontend ?
version: Alpha
parts: frontend (lexer, parser, semantic analysis)
author: Willem Jan Withagen <wjw@eb.ele.tue.nl>
how to get: ftp pub/src/pascal/front* from ftp.eb.ele.tue.nl
description: a new version of the PASCAL frontend using the Cocktail
compiler tools.
updated: 1993/02/24
language: Pascal
package: ptc
version: ?
parts: translator(Pascal->C)
how to get: ftp languages/ptc from uxc.sco.uiuc.edu ? (use archie?)
description: ?
contact: ?
updated: ?
language: Turbo Pascal, Turbo C
package: tptc
version: ?
parts: translator(Turbo Pascal->Turbo C)
how to get: ftp mirrors/msdos/turbopas/tptc17*.zip from wuarchive.wustl.edu
description: It comes with full source; a student recently used it as
a start for a language that included stacks and queues as
built-in data types.
contact: ?
updated: ?
language: Perl (Practical Extraction and Report Language)
package: perl
version: 4.0 patchlevel 36
parts: interpreter, debugger, libraries, tests, documentation
how to get: ftp pub/perl.4.0/* from jpl-devvax.jpl.nasa.gov
OS/2 port: ftp pub/os2/all/unix/prog*/perl4019.zip from hobbes.nmsu.edu
Mac port: ftp software/mac/src/mpw_c/Mac_Perl_405_* from nic.switch.ch
Amiga port: ftp perl4.035.V010.* from wuarchive.wustl.edu
VMS port: ftp software/vms/perl/* from ftp.pitt.edu
Atari port: ftp amiga/Languages/perl* from atari.archive.umich.edu
DOS port: ftp pub/msdos/perl/* from ftp.ee.umanitoba.ca
author: Larry Wall <lwall@netlabs.com>
description: perl is an interpreted language optimized for scanning
arbitrary text files, extracting information from those text
files, and printing reports based on that information. It's
also a good language for many system management tasks.
features: + very high semantic density because of powerful operators
like regular expression substitution
+ exceptions, provide/require
+ associative array can be bound to dbm files
+ no arbitrary limits
+ direct access to almost all system calls
+ can access binary data
+ many powerful common-task idioms
- three variable types: scalar, array, and hash table
- unappealing syntax
references: "Programming Perl" by Larry Wall and Randal L. Schwartz,
O'Reilly & Associates, Inc. Sebastopol, CA.
ISBN 0-937175-64-1
discussion: comp.lang.perl
bugs: comp.lang.perl; Larry Wall <lwall@netlabs.com>
ports: almost all unix, MSDOS, Mac, Amiga, Atari, OS/2, VMS
portability: very high for unix, not so high for others
updated: 1993/02/07
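As an aside on the feature list above, the "associative array can be bound to dbm files" idea (perl's dbmopen) has a close analogue in Python's standard dbm module, sketched here for comparison. The file name is invented for the example:

```python
# Sketch of a persistent associative array, analogous to perl's
# dbmopen-bound hashes, using Python's standard dbm module.  The
# file name is invented for the example.
import dbm
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "counts")

with dbm.open(path, "c") as db:     # "c": create the database if absent
    db["apple"] = "3"               # assignments go to the dbm file
    db["pear"] = "5"

with dbm.open(path, "r") as db:     # reopen read-only: data persisted
    print(db["apple"].decode())     # 3
```

In both languages the point is the same: ordinary key/value assignment on the bound array transparently reads and writes an on-disk database.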
language: perl, awk, sed, find
package: a2p, s2p, find2perl
parts: translators(perl)
author: Larry Wall
how to get: comes with perl
description: translators to turn awk, sed, and find into perl.
language: perl, yacc
package: perl-byacc
version: 1.8.2
parts: parser-generator(perl)
how to get: ftp local/perl-byacc.tar.Z from ftp.sterling.com
author: Rick Ohnemus <rick@IMD.Sterling.COM>
description: A modified version of byacc that generates perl code. Has '-p'
switch so multiple parsers can be used in one program (C or
perl).
portability: Should work on most (?) UNIX systems. Also works with
SAS/C 6.x on AMIGAs.
updated: 1993/01/24
language: Postscript
package: Ghostscript
version: 2.5.2
parts: interpreter, ?
author: L. Peter Deutsch <ghost%ka.cs.wisc.edu@cs.wisc.edu>
how to get: ftp pub/GNU/ghostscript* from a GNU archive site
description: ?
updated: 1992/10/07
language: Postscript, Common Lisp
package: PLisp
version: ?
parts: translator(Postscript), programming environment(Postscript)
description: ?
author: John Peterson <peterson-john@cs.yale.edu>
updated: ?
language: Prolog
package: SB-Prolog
version: 3.1 ?
parts: interpreter
how to get: ftp pub/sbprolog from sbcs.sunysb.edu
description: ?
contact: ? warren@sbcs.sunysb.edu ?
restriction: GNU General Public License
updated: ?
language: Prolog
package: Modular SB-Prolog
version: ?
parts: interpreter
how to get: ftp pub/dts/mod-prolog.tar.Z from ftp.dcs.ed.ac.uk
description: SB-Prolog version 3.1 plus modules
ports: Sparc
contact: Brian Paxton <mprolog@dcs.ed.ac.uk>
restriction: GNU General Public License
updated: ?
language: ALF [prolog variant]
package: alf (Algebraic Logic Functional programming language)
version: ?
parts: runtime, compiler(Warren Abstract Machine)
author: Rudolf Opalla <opalla@julien.informatik.uni-dortmund.de>
how to get: ftp pub/programming/languages/LogicFunctional from
ftp.germany.eu.net
description: ALF is a language which combines functional and
logic programming techniques. The foundation of
ALF is Horn clause logic with equality which consists
of predicates and Horn clauses for logic programming,
and functions and equations for functional programming.
Since ALF is an integration of both programming
paradigms, any functional expression can be used
in a goal literal and arbitrary predicates can
occur in conditions of equations.
updated: 1992/10/08
language: CLP (Constraint Logic Programming language) [Prolog variant]
package: CLP(R)
version: 1.2
parts: runtime, compiler(byte-code), constraint solver
author: IBM
how to get: mail to Joxan Jaffar <joxan@watson.ibm.com>
description: CLP(R) is a constraint logic programming language
with real-arithmetic constraints. The implementation
contains a built-in constraint solver which deals
with linear arithmetic and contains a mechanism
for delaying nonlinear constraints until they become
linear. Since CLP(R) subsumes PROLOG, the system
is also usable as a general-purpose logic programming
language. There are also powerful facilities for
meta programming with constraints. Significant
CLP(R) applications have been published in diverse
areas such as molecular biology, finance, physical
modelling, etc. We are distributing CLP(R) in order
to help widen the use of constraint programming, and
to solicit feedback on the system.
restriction: free for academic and research purposes only
contact: Roland Yap <roland@bruce.cs.monash.edu.au>, Joxan Jaffar
ports: unix, msdos, OS/2
updated: 1992/10/14
language: Prolog (variant)
package: Aditi
version: Beta Release
parts: interpreter, database
author: Machine Intelligence Project, Univ. of Melbourne, Australia
how to get: send email to aditi@cs.mu.oz.au
description: The Aditi Deductive Database System is a multi-user
deductive database system. It supports base relations
defined by facts (relations in the sense of relational
databases) and derived relations defined by rules that
specify how to compute new information from old
information. Both base relations and the rules
defining derived relations are stored on disk and are
accessed as required during query evaluation. The
rules defining derived relations are expressed in a
Prolog-like language, which is also used for expressing
queries. Aditi supports the full structured data
capability of Prolog. Base relations can store
arbitrarily nested terms, for example arbitrary length
lists, and rules can directly manipulate such terms.
Base relations can be indexed with B-trees or
multi-level signature files. Users can access the
system through a Motif-based query and database
administration tool, or through a command line
interface. There is also an interface that allows
NU-Prolog programs to access Aditi in a transparent
manner. Proper transaction processing is not supported
in this release.
ports: Sparc/SunOS4.1.2 Mips/Irix4.0
contact: <aditi@cs.mu.oz.au>
updated: 1992/12/17
language: Lambda-Prolog
package: Prolog/Mali (PM)
version: ? 6/23/92 ?
parts: translator(C), linker, libraries, runtime, documentation
how to get: ftp pm/* from ftp.irisa.fr
author: Pascal Brisset <brisset@irisa.fr>
description: Lambda-Prolog, a logic programming language defined by
Miller, is an extension of Prolog where terms are
simply typed $\lambda$terms and clauses are higher
order hereditary Harrop formulas. The main novelties
are universal quantification on goals and implication.
references: + Miller D.A. and Nadathur G. "Higher-order logic
programming", 3rd International Conference on Logic
Programming, pp 448-462, London 1986.
+ Nadathur G. "A Higher-Order Logic as a Basis for Logic
Programming", Thesis, University of Pennsylvania, 1987.
requires: MALI-V06 abstract memory. MALI is available by anonymous ftp
from ftp.irisa.fr
ports: unix
discussion: prolog-mali-request@irisa.fr
contact: pm@irisa.fr
updated: 1992/07/06
language: Prolog (variant)
package: CORAL
version: ?
parts: interpreter, interface(C++), documentation
author: ?
how to get: ftp ? from ftp.cs.wisc.edu
description: The CORAL deductive database/logic programming system was
developed at the University of Wisconsin-Madison. The CORAL
declarative language is based on Horn-clause rules with
extensions like SQL's group-by and aggregation operators, and
uses a Prolog-like syntax. Many evaluation techniques are
supported, including bottom-up fixpoint evaluation and top-down
backtracking. A module mechanism is available; modules are
separately compiled, and different evaluation methods can be
used in different modules within a single program. Disk-resident
data is supported via an interface to the Exodus storage
manager. There is an on-line help facility.
requires: AT&T C++ 2.0 (G++ soon)
ports: Decstation, Sun4
updated: 1993/01/29
language: Prolog
package: BinProlog
version: 1.39
parts: compiler?
how to get: ftp BinProlog/* from clement.info.umoncton.ca
description: ?
ports: IBM-PC/386, Sun-4, Sun-3
contact: Paul Tarau <tarau@info.umoncton.ca>
updated: ?
language: prolog
package: SWI-Prolog
version: 1.6.12
author: Jan Wielemaker <jan@swi.psy.uva.nl>
how to get: ftp pub/SWI-Prolog from swi.psy.uva.nl
OS/2: ftp pub/toolw/SWI/* from mpii02999.ag2.mpi-sb.mpg.de
conformance: superset
features: "very nice Ed. style prolog, best free one I've seen"
ports: Sun-4, Sun-3 (complete); Linux, DEC MIPS (done but
incomplete, support needed); RS6000, PS2/AIX, Atari ST,
Gould PN, NeXT, VAX, HP-UX (known problems, support needed);
MSDOS (status unknown), OS/2
restriction: GNU General Public License
status: actively developed
discussion: prolog-request@swi.psy.uva.nl
contact: (OS/2) Andreas Toenne <atoenne@mpi-sb.mpg.de>
updated: 1993/03/05
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 1 07:00:48 EST 1993
Xref: iecc comp.compilers:4463 comp.lang.misc:9658 comp.sources.d:4723 comp.archives.admin:883 news.answers:6794
Newsgroups: comp.compilers,comp.lang.misc,comp.sources.d,comp.archives.admin,news.answers
Path: iecc!compilers-sender
From: David Muir Sharnoff <muir@idiom.berkeley.ca.us>
Subject: Catalog of compilers, interpreters, and other language tools [p3of3]
Message-ID: <free3-Apr-93@comp.compilers>
Followup-To: comp.archives.admin
Summary: monthly posting of free language tools that include source code
Keywords: tools, FTP, administrivia
Sender: compilers-sender@iecc.cambridge.ma.us
Supersedes: <free3-Mar-93@comp.compilers>
Reply-To: muir@idiom.berkeley.ca.us
Organization: University of California, Berkeley
References: <free2-Apr-93@comp.compilers>
Date: Thu, 1 Apr 1993 12:00:42 GMT
Approved: compilers@iecc.cambridge.ma.us
Expires: Sat, 1 May 1993 23:59:00 GMT
Archive-name: free-compilers/part3
Last-modified: 1993/03/24
Version: 3.2
language: Prolog
package: Frolic
version: ?
how to get: ftp pub/frolic.tar.Z from cs.utah.edu
requires: Common Lisp
contact: ?
updated: 1991/11/23
language: Prolog
package: ? Prolog package from the University of Calgary ?
version: ?
how to get: ftp pub/prolog1.1/prolog11.tar.Z from cpsc.ucalgary.ca
description: + delayed goals
+ interval arithmetic
requires: Scheme
portability: relies on continuations
contact: ?
updated: ?
language: Prolog
package: ? slog ?
version: ?
parts: translator(Prolog->Scheme)
author: dorai@cs.rice.edu
how to get: ftp public/slog.sh from titan.rice.edu
description: macros expand syntax for clauses, relations, etc. into Scheme
ports: Chez Scheme
portability: relies on continuations
updated: ?
language: Prolog
package: LM-PROLOG
version: ?
parts: ?
author: Ken Kahn and Mats Carlsson
how to get: ftp archives/lm-prolog.tar.Z from sics.se
requires: ZetaLisp
contact: ?
updated: ?
language: Prolog
package: Open Prolog
version: ?
parts: ?
how to get: ftp languages/open-prolog/* from grattan.cs.tcd.ie
description: ?
ports: Macintosh
contact: Michael Brady <brady@cs.tcd.ie>
updated: ?
language: Prolog
package: UPMAIL Tricia Prolog
version: ?
parts: ?
how to get: ftp pub/Tricia/README from ftp.csd.uu.se
description: ?
contact: <tricia-request@csd.uu.se>
updated: ?
language: Prolog
package: ?; ? (two systems)
version: ?; ?
parts: ?; ?
how to get: ftp ai.prolog/Contents from aisun1.ai.uga.edu
description: ?; ?
contact: Michael Covington <mcovingt@uga.cc.uga.edu>
ports: MSDOS, Macintosh; MSDOS
updated: ?; ?
language: Prolog
package: XWIP (X Window Interface for Prolog)
version: 0.6
parts: library
how to get: ftp contrib/xwip-0.6.tar.Z from export.lcs.mit.edu
description: It is a package for Prologs following the Quintus foreign
function interface (such as SICStus). It provides a (low-level)
Xlib style interface to X. The current version was developed
and tested on SICStus 0.7 and MIT X11 R5 under SunOS 4.1.1.
portability: It is adaptable to many other UNIX configurations.
contact: xwip@cs.ucla.edu
updated: 1993/02/25
language: Prolog
package: PI
version: ?
parts: library
how to get: ftp pub/prolog/ytoolkit.tar.Z from ftp.ncc.up.pt
description: PI is an interface between Prolog applications and X Windows
that aims to be independent of the Prolog engine, provided that
it has a Quintus foreign function interface (such as SICStus
and YAP). It is mostly written in Prolog and is divided into
two libraries: Edipo, the lower-level interface to the Xlib
functions; and Ytoolkit, the higher-level user interface
toolkit.
contact: Ze' Paulo Leal <zp@ncc.up.pt>
updated: 1993/03/02
language: Prolog
package: ISO draft standard
parts: language definition
how to get: ftp ? from ftp.th-darmstadt.de
updated: 1992/07/06
language: BABYLON (Prolog variant???)
package: BABYLON
version: ?
parts: development environment
how to get: ftp gmd/ai-research/Software/* from gmdzi.gmd.de
description: BABYLON is a development environment for expert systems. It
includes frames, constraints, a prolog-like logic formalism,
and a description language for diagnostic applications.
requires: Common Lisp
ports: many ?
contact: ?
updated: ?
language: Python
package: Python
version: 0.9.8
parts: interpreter, libraries, documentation, emacs macros
how to get: ftp pub/python* from ftp.cwi.nl
america: ftp pub/? from wuarchive.wustl.edu
author: Guido van Rossum <guido@cwi.nl>
description: Python is a simple, yet powerful programming language
that bridges the gap between C and shell programming,
and is thus ideally suited for rapid prototyping. Its
syntax is put together from constructs borrowed from a
variety of other languages; most prominent are
influences from ABC, C, Modula-3 and Icon. Python is
object oriented and is suitable for fairly large programs.
+ packages
+ exceptions
+ good C interface
+ dynamic loading of C modules
- arbitrary restrictions
discussion: python-list-request@cwi.nl
ports: unix and Macintosh
updated: 1993/01/09
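As a small illustrative sketch of the language's flavor described above
(functions, exceptions, and built-in dictionaries; modern syntax is used
here, and the 0.9.8 release differed in details):

```python
# A small sketch of Python's flavor: a function, a dictionary, and
# exception handling in a few lines. (Modern syntax; the 1993
# release differed in details.)
def word_counts(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

try:
    counts = word_counts("to be or not to be")
except TypeError:
    counts = {}          # a non-string argument raises TypeError
# counts["to"] == 2
```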
language: Ratfor
package: ? ratfor ?
version: ?
parts: translator(Ratfor->Fortran IV)
author: Brian Kernighan and P.J. Plauger (authors of the book
"Software Tools", in which Ratfor appeared)
how to get: comp.sources.unix archives volume 13
description: Ratfor is a front-end language for Fortran. It was designed
to add structured control flow to Fortran. It is
mainly of historical significance.
updated: ?
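The core translation idea, structured control flow compiled down to the
labeled IF/GOTO that Fortran IV actually provides, can be sketched as a
toy (a hypothetical minimal translator, not the real ratfor program):

```python
# Toy sketch of Ratfor's core idea: compile a structured "while"
# into labeled IF/GOTO statements for Fortran IV.
# (Hypothetical minimal translator, not the real ratfor program.)
def translate_while(cond, body_lines, label=10):
    out = [f"{label} IF (.NOT. ({cond})) GOTO {label + 1}"]
    out += [f"      {line}" for line in body_lines]   # loop body
    out += [f"      GOTO {label}",                    # back to the test
            f"{label + 1} CONTINUE"]                  # loop exit
    return out

code = translate_while("I .LT. N", ["I = I + 1"])
# code[0] == "10 IF (.NOT. (I .LT. N)) GOTO 11"
```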
language: Y (cross between C and Ratfor)
package: y+po
version: ?
parts: compiler
author: Jack W. Davidson and Christopher W. Fraser
how to get: ftp pub/y+po.tar.Z from ftp.cs.princeton.edu
description: The Davidson/Fraser peephole optimizer PO [1-3] (the source
of the GCC RTL idea and other optimization ideas), along with
the Y compiler (a cross between C and Ratfor), is ftpable from
ftp.cs.princeton.edu: /pub/y+po.tar.Z. It is a copy of the
original distribution from the University of Arizona in the
early 80's: totally unsupported, almost forgotten old code
(do not bug the authors), possibly of interest to
compiler/language hackers.
references: Jack W. Davidson and Christopher W. Fraser, "The Design and
Application of a Retargetable Peephole Optimizer", TOPLAS, Apr.
1980.
Jack W. Davidson, "Simplifying Code Through Peephole
Optimization" Technical Report TR81-19, The University of
Arizona, Tucson, AZ, 1981.
Jack W. Davidson and Christopher W. Fraser, "Register
Allocation and Exhaustive Peephole Optimization"
Software-Practice and Experience, Sep. 1984.
status: history
language: Relation Grammar
package: rl
version: ?
how to get: ftp rl/* from flash.bellcore.com
author: Kent Wittenburg <kentw@bellcore.com>
description: The RL files contain code for defining Relational
Grammars and using them in a bottom-up parser to
recognize and/or parse expressions in Relational
Languages. The approach is a simplification of that
described in Wittenburg, Weitzman, and Talley (1991),
Unification-Based Grammars and Tabular Parsing for
Graphical Languages, Journal of Visual Languages and
Computing 2:347-370.
This code is designed to support the definition and
parsing of Relational Languages, which are
characterized as sets of objects standing in
user-defined relations. Correctness and completeness
is independent of the order in which the input is given
to the parser. Data to be parsed can be in many forms
as long as an interface is supported for queries and
predicates for the relations used in grammar
productions. To date, this software has been used to
parse recursive pen-based input such as math
expressions and flowcharts; to check for data integrity
and design conformance in databases; to automatically
generate constraints in drag-and-drop style graphical
interfaces; and to generate graphical displays by
parsing relational data and generating output code.
ports: Allegro Common Lisp 4.1, Macintosh Common Lisp 2.0
requires: Common Lisp
updated: 1992/10/31
language: REXX
package: Regina ?
version: 0.03d
author: Anders Christensen <anders@pvv.unit.no>
how to get: ftp andersrexx/rexx-0.03d.tar.Z from rexx.uwaterloo.ca
or ftp ? from flipper.pvv.unit.no
ports: unix
discussion: comp.lang.rexx
updated: ?
language: REXX
package: ?
version: 102
author: ? al ?
how to get: ftp alrexx/rx102.tar.Z from rexx.uwaterloo.ca
or ftp ? from tony.cat.syr.edu
requires: C++
ports: unix
discussion: comp.lang.rexx
contact: ?
updated: 1992/05/13
language: REXX
package: imc
version: 1.3
parts: ?
how to get: ftp pub/freerexx/imc/rexx-imc-1.3.tar.Z from rexx.uwaterloo.ca
ports: SunOS
updated: ?
language: S/SL (Syntax Semantic Language)
package: ssl
version: ?
author: Rick Holt, Jim Cordy <cordy@qucis.queensu.ca> (language),
Rayan Zachariassen <rayan@cs.toronto.edu> (C implementation)
parts: parser bytecode compiler, runtime
how to get: ftp pub/ssl.tar.Z from neat.cs.toronto.edu
description: S/SL is a language explicitly designed for making efficient
recursive-descent parsers. Unlike most other languages,
practically the LEAST expensive thing you can do in S/SL is
recur. It is a small language that defines input/output/error
token names (& values), semantic operations (which are really
escapes to a programming language but allow good
abstraction in the pseudo-code), and a pseudo-code
program that defines a grammar by the token stream the
program accepts. Alternation, control flow, and
1-symbol lookahead constructs are part of the
language. An S/SL "implementation" is a
program that compiles this S/SL pseudo-code into a
table (think byte-codes) that is interpreted by the
S/SL table-walker (interpreter). The pseudo-code
language is LR(1), and the semantic mechanisms turn it
into LR(N) relatively easily.
+ more powerful and cleaner than yacc
- slower than yacc
reference: + Cordy, J.R. and Holt, R.C. [1980] Specification of S/SL:
Syntax/Semantic Language, Computer Systems Research
Institute, University of Toronto.
+ "An Introduction to S/SL: Syntax/Semantic Language" by
R.C. Holt, J.R. Cordy, and D.B. Wortman, in ACM Transactions
on Programming Languages and Systems (TOPLAS), Vol 4, No.
2, April 1982, Pages 149-178.
updated: 1989/09/25
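The table-walker idea described above can be sketched as a toy: the
"grammar" is a flat table of (opcode, argument) pairs interpreted by a
tiny engine (hypothetical opcodes and a toy grammar; real S/SL tables are
much richer):

```python
# Sketch of the S/SL table-walker idea. This toy table accepts the
# token stream: ID ( "=" ID )? ";"
# (Hypothetical opcodes; real S/SL tables are richer.)
TABLE = [
    ("accept", "ID"),    # input: identifier
    ("choice", "="),     # 1-symbol lookahead: optional "= ID" part
    ("accept", "ID"),
    ("label", None),     # join point after the optional part
    ("accept", ";"),
]

def walk(table, tokens):
    pos, pc = 0, 0
    while pc < len(table):
        op, arg = table[pc]
        if op == "accept":
            if pos >= len(tokens) or tokens[pos] != arg:
                return False
            pos += 1
        elif op == "choice":
            if pos < len(tokens) and tokens[pos] == arg:
                pos += 1     # take the branch: consume "=", fall through
            else:            # otherwise jump past the optional part
                pc = next(i for i, (o, _) in enumerate(table)
                          if o == "label")
                continue
        # "label" is a no-op jump target; just advance
        pc += 1
    return pos == len(tokens)
```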
language: Sather
package: Sather programming language and environment
version: 0.2i
parts: translator(C), debugger, libraries, documentation, emacs macros
author: International Computer Science Institute in Berkeley, CA
how to get: ftp pub/sather/sa-0.2i.tar.Z from ftp.icsi.berkeley.edu
europe: ftp pub/Sather/* from ftp.gmd.de
aus: ftp pub/sather/* from lynx.csis.dit.csiro.au
japan: ftp pub/lang/sather/* from sra.co.jp
conformance: reference implementation
description: Sather is a new object-oriented computer language
developed at the International Computer Science
Institute. It is derived from Eiffel and attempts to
retain much of that language's theoretical cleanliness
and simplicity while achieving the efficiency of C++.
It has clean and simple syntax, parameterized classes,
object-oriented dispatch, multiple inheritance, strong
typing, and garbage collection. The compiler generates
efficient and portable C code which is easily
integrated with existing code.
A variety of development tools including a debugger and browser
based on gdb and a GNU Emacs development environment
have also been developed. There is also a class library
with several hundred classes that implement a variety
of basic data structures and numerical, geometric,
connectionist, statistical, and graphical abstractions.
We would like to encourage contributions to the library
and hope to build a large collection of efficient,
well-written, well-tested classes in a variety of areas
of computer science.
ports: Sun-4 HP9000/300 Decstation5000 MIPS SonyNews3000 Sequent/Dynix
SCO SysVR3.2 NeXT (from others: RS6000 SGI)
portability: high
discussion: sather-request@icsi.berkeley.edu
bugs: sather-admin@icsi.berkeley.edu
status: actively developed.
updated: 1992/07/02
language: Scheme
package: Schematik
version: 1.1.5.2
parts: programming environment
author: Chris Kane, Max Hailperin <max@nic.gac.edu>
how to get: ftp /pub/next/scheme/* from ftp.gac.edu
europe: ftp /pub/next/ProgLang from ftp.informatik.uni-muenchen.de
description: Schematik is a NeXT front-end to MIT Scheme for
the NeXT. It provides syntax-knowledgeable text
editing, graphics windows, and user-interface to
an underlying MIT Scheme process. It comes packaged
with MIT Scheme 7.1.3 ready to install on the NeXT.
ports: NeXT, MIT Scheme 7.1.3
portability: requires NeXTSTEP
contact: schematik@gac.edu
updated: 1993/03/11
language: Scheme
package: T
version: 3.1
parts: compiler
author: ?
how to get: ftp pub/systems/t3.1 from ftp.ai.mit.edu
description: a Scheme-like language developed at Yale. T is
written in itself and compiles to efficient native
code.
(A multiprocessing version of T is available from
masala.lcs.mit.edu:/pub/mult)
ports: Decstation, Sparc, sun-3, Vax(unix), Encore, HP, Apollo,
Mac (A/UX)
contact: t-project@cs.yale.edu.
bugs: t3-bugs@cs.yale.edu
updated: 1991/11/26
language: Scheme
package: scm
version: 4b4
parts: interpreter, conformance test, documentation
author: Aubrey Jaffer <jaffer@zurich.ai.mit.edu>
conformance: superset of Revised^3.99 Report on the Algorithmic
Language Scheme and the IEEE P1178 specification.
how to get: ftp archive/scm/* from altdorf.ai.mit.edu
canada: ftp pub/oz/scheme/new from nexus.yorku.ca
restriction: GNU General Public License
contributions: send $$$ to Aubrey Jaffer, 84 Pleasant St., Wakefield, MA 01880
ports: unix, amiga, atari, mac, MSDOS, nos/ve, vms
updated: 1993/02/18
language: Scheme
package: Scheme Library (slib)
version: 1d0
parts: library, documentation
how to get: ftp archive/scm/slib1b*.tar.Z from altdorf.ai.mit.edu
description: SLIB is a portable Scheme library meant to provide
compatibility and utility functions for all standard Scheme
implementations.
ports: Scm4b, Chez, ELK 1.5, GAMBIT, MITScheme, Scheme->C,
Scheme48, T3.1.
status: actively developed
contact: Aubrey Jaffer <jaffer@zurich.ai.mit.edu>
updated: 1993/03/03
language: Scheme
package: Hobbit
version: release 1
parts: translator(->C), documentation
author: Tanel Tammet <tammet@cs.chalmers.se>
how to get: ftp archive/scm/hobbit1.tar.Z from altdorf.ai.mit.edu
description: The main aim of hobbit is to produce maximally fast C programs
which would retain most of the original Scheme program
structure, making the output C program readable and modifiable.
Hobbit is written in Scheme and is able to self-compile.
Hobbit release 1 works together with the scm release scm4b3.
Future releases of scm and hobbit will be coordinated.
requires: scm 4b3
updated: 1993/02/07
language: Scheme
package: siod (Scheme In One Day, or Scheme In One Defun)
version: 2.9
author: George Carrette <gjc@paradigm.com>
how to get: ftp src/lisp/siod-v2.8-shar from world.std.com
description: Small scheme implementation in C arranged as a set of
subroutines that can be called from any main program
for the purpose of introducing an interpreted extension
language. Compiles to ~20K bytes of executable. Lisp
calls C and C calls Lisp transparently.
ports: VAX/VMS, VAX UNIX, Sun3, Sun4, Amiga, Macintosh, MIPS, Cray
updated: 1992/09/01
language: Scheme
package: MIT Scheme (aka C-Scheme)
version: 7.2
parts: interpreter, large runtime library, emacs macros,
native-code compiler, emacs-like editor, source-level debugger
author: MIT Scheme Team (primarily Chris Hanson, Jim Miller, and
Bill Rozas, but also many others)
how to get: ftp archive/scheme-7.2 from altdorf.ai.mit.edu
DOS floppies ($95) and Unix tar tapes ($200) from
Scheme Team / c/o Prof. Hal Abelson / MIT AI Laboratory /
545 Technology Sq. / Cambridge, MA 02139
description: Scheme implementation with rich set of utilities.
conformance: full compatibility with Revised^4 Report on Scheme,
one known incompatibility with IEEE Scheme standard
ports: 68k (hp9000, sun3, NeXT), MIPS (Decstation, Sony, SGI),
HP-PA (600, 700, 800), Vax (Ultrix, BSD), Alpha (OSF),
i386 (DOS/Windows, various Unix)
bugs: bug-cscheme@zurich.ai.mit.edu
discussion: info-cscheme@zurich.ai.mit.edu
(cross-posted to comp.lang.scheme.c)
status: actively developed
updated: 1992/08/24
language: Scheme
package: Scheme->C
version: 15mar93
parts: translator(C)
author: Digital Western Research Laboratory; Joel Bartlett
how to get: ftp pub/DEC/Scheme-to-C/* from gatekeeper.dec.com
description: Translates Revised**4 Scheme to C that is then compiled
by the native C compiler for the target machine. This
design results in a portable system that allows either
stand-alone Scheme programs or programs written in both
compiled and interpreted Scheme and other languages.
documentation: send Subject "help" to WRL-Techreports@decwrl.dec.com
for technical report. Other documentation in
Scheme-to-C directory on gatekeeper.
conformance: superset of Revised**4
+ "expansion passing style" macros
+ foreign function call capability
+ interfaces to Xlib (Ezd & Scix)
+ records
ports: VAX/ULTRIX, DECstation ULTRIX, Alpha AXP OSF/1,
Microsoft Windows 3.1, Apple Macintosh 7.1,
HP 9000/300, HP 9000/700, Sony News, SGI Iris and
Harris Nighthawk and other UNIX-like m88k systems.
The 01nov91 version is also available on Amiga, SunOS,
NeXT, and Apollo systems.
status: actively developed, contributed ports welcomed
updated: 1993/03/15
language: Scheme
package: PC-Scheme
version: 3.03
parts: compiler, debugger, profiler, editor, libraries
author: Texas Instruments
how to get: ftp archive/pc-scheme/* from altdorf.ai.mit.edu
description: Written by Texas Instruments. Runs on MS-DOS 286/386 IBM PCs
and compatibles. Includes an optimizing compiler, an
emacs-like editor, inspector, debugger, performance testing,
foreign function interface, window system and an
object-oriented subsystem. Also supports the dialect used in
Abelson and Sussman's SICP.
conformance: Revised^3 Report, also supports dialect used in SICP.
ports: MSDOS
restriction: official version is $95, contact rww@ibuki.com
updated: 1992/02/23
language: Scheme
package: PCS/Geneva
version: ?
parts: compiler, debugger, profiler, editor, libraries
how to get: send email to schemege@uni2a.unige.ch
description: PCS/Geneva is a cleaned-up version of Texas Instruments' PC
Scheme developed at the University of Geneva. The main
extensions to PC Scheme are 486 support, BGI graphics, LIM-EMS
pagination support, line editing, and assembly-level
interfacing.
contact: schemege@uni2a.unige.ch
updated: ?
language: Scheme
package: Gambit Scheme System
version: 1.8.2
parts: interpreter, compiler, linker
author: Marc Feeley <feeley@iro.umontreal.ca>
how to get: ftp pub/gambit1.7.1/* from trex.iro.umontreal.ca
description: Gambit is an optimizing Scheme compiler/system.
conformance: IEEE Scheme standard and `future' construct.
restriction: Mac version of compiler & source costs $40.
ports: 68k: unix, sun3, hp300, bbn gp100, NeXT, Macintosh
updated: 1992/07/01
language: Scheme
package: Elk (Extension Language Kit)
version: 2.0
parts: interpreter
how to get: ftp pub/elk/elk-2.0.tar.Z from tub.cs.tu-berlin.de
usa: ftp contrib/elk-2.0.tar.Z from export.lcs.mit.edu
author: Oliver Laumann <net@cs.tu-berlin.de>, Carsten Bormann
<cabo@cs.tu-berlin.de> ?
description: Elk is a Scheme interpreter designed to be used as a
general extension language.
+ interfaces to Xlib, Xt, and various widget sets.
+ dynamic loading of extensions
+ almost all artificial limitations removed
conformance: Mostly R3RS compatible.
ports: unix, ultrix, vax, sun3, sun4, 68k, i386, mips, ibm rt,
rs6000, hp700, sgi, sony
updated: 1992/11/30
language: Scheme
package: XScheme
version: 0.28
parts: ?
author: David Betz <dbetz@apple.com>
how to get: ftp pub/scheme/* from nexus.yorku.ca
description: ?
discussion: comp.lang.lisp.x
contact: ?
updated: 1992/02/02
language: Scheme
package: Fools' Lisp
version: 1.3.2
author: Jonathan Lee <jonathan@scam.berkeley.edu>
how to get: ftp src/local/fools.tar.Z from scam.berkeley.edu
description: a small Scheme interpreter that is R4RS conformant.
ports: Sun-3, Sun-4, Decstation, Vax (ultrix), Sequent, Apollo
updated: 1991/10/31
language: Scheme
package: Scheme84
version: ?
parts: ?
how to get: Send a tape w/return postage to: Scheme84 Distribution /
Nancy Garrett / c/o Dan Friedman / Department of Computer
Science / Indiana University / Bloomington, Indiana. Call
1-812-335-9770.
description: ?
requires: VAX, Franz Lisp, VMS or BSD
contact: nlg@indiana.edu
updated: ?
language: Scheme
package: Scheme88
version: ?
parts: ?
how to get: ftp pub/scheme/* from nexus.yorku.ca
contact: ?
updated: ?
language: Scheme
package: UMB Scheme
version: ?
parts: ?, editor, debugger
author: William Campbell <bill@cs.umb.edu>
how to get: ftp pub/scheme/* from nexus.yorku.ca
conformance: R4RS Scheme
ports: ?
updated: ?
language: Scheme
package: PseudoScheme
version: 2.8
parts: translator(Common Lisp)
author: Jonathan Rees <jar@cs.cornell.edu>
conformance: R3RS except call/cc.
requires: Common Lisp
ports: Lucid, Symbolics CL, VAX Lisp, Explorer CL
announcements: info-clscheme-request@mc.lcs.mit.edu
updated: ?
language: Scheme
package: Similix
version: ?
parts: partial evaluator, debugger
how to get: ftp misc/Similix.tar.Z from ftp.diku.dk
description: Similix is an autoprojector (self-applicable partial
evaluator) for a higher order subset of the strict functional
language Scheme. Similix handles programs with user defined
primitive abstract data type operators which may process
global variables (such as input/output operators).
conformance: subset
contact: Anders Bondorf <anders@diku.dk>
requires: Scheme
ports: Chez Scheme, T
updated: 1991/09/09
language: Scheme
package: ? syntax-case ?
version: 2.1
parts: macro system, documentation
how to get: ftp pub/scheme/syntax-case.tar.Z from iuvax.cs.indiana.edu
author: R. Kent Dybvig <dyb@cs.indiana.edu>
description: We have designed and implemented a macro system that is
vastly superior to the low-level system described in
the Revised^4 Report; in fact, it essentially
eliminates the low level altogether. We also believe
it to be superior to the other proposed low-level
systems as well, but each of you can judge that for
yourself. We have accomplished this by "lowering the
level" of the high-level system slightly, making
pattern variables ordinary identifiers with essentially
the same status as lexical variable names and macro
keywords, and by making "syntax" recognize and handle
references to pattern variables.
references: + Robert Hieb, R. Kent Dybvig, and Carl Bruggeman "Syntactic
Abstraction in Scheme", IUCS TR #355, 6/92 (revised 7/3/92)
+ R. Kent Dybvig, "Writing Hygienic Macros in Scheme with
Syntax-Case", IUCS TR #356, 6/92 (revised 7/3/92).
ports: Chez Scheme
updated: 1992/07/06
language: Scheme
package: x-scm
version: ?
parts: ?
author: Larry Campbell <campbell@redsox.bsw.com>
how to get: alt.sources archive
description: x-scm is a bolt-on accessory for the "scm" Scheme interpreter
that provides a handy environment for building Motif and
OpenLook applications. (There is some support as well for raw
Xlib applications, but not enough yet to be useful.)
requires: scm, X
ports: ?
updated: 1992/08/10
language: Scheme, Prolog
package: "Paradigms of AI Programming"
version: ?
parts: book with interpreters and compilers in Common Lisp
author: Peter Norvig
how to get: bookstore, and ftp pub/norvig/* from unix.sri.com
updated: ?
language: Scheme
package: PSD (Portable Scheme Debugger)
version: 1.0
parts: debugger
author: Kellomäki Pertti <pk@cs.tut.fi>
how to get: ftp /pub/src/languages/schemes/psd.tar.Z from cs.tut.fi
description: source code debugging from emacs
requires: R4RS compliant Scheme, GNU Emacs.
restriction: GNU GPL
updated: 1992/07/10
language: Scheme
package: Tiny Clos
version: first release
how to get: ftp pub/mops/* from parcftp.xerox.com
description: A core part of CLOS (Common Lisp Object System) ported to
Scheme and rebuilt using a MOP (Metaobject Protocol).
This should be interesting to those who want to use MOPs
without using a full Common Lisp or Dylan.
ports: MIT Scheme 11.74
discussion: mailing list: mops, administered by gregor@parc.xerox.com
contact: Gregor Kiczales <gregor@parc.xerox.com>
updated: 1992/12/14
langauge: Scheme
package: VSCM
version: 93Jan26
parts: runtime, bytecode compiler
author: Matthias Blume <blume@kastle.Princeton.EDU> ?
how to get: ftp pub/scheme/imp/vscm93Jan26.tar.Z from nexus.yorku.ca
description: VSCM is an implementation of Scheme based on a virtual machine
written in ANSI C.
conformance: conforms to the R4RS report except non-integral number types
portability: very high
updated: 1993/01/26
language: Scheme
package: PSI
version: pre-release
parts: interpreter, virtual machine
author: Ozan Yigit <oz@ursa.sis.yorku.ca>, David Keldsen, Pontus Hedman
how to get: from author
description: I am looking for a few interested language hackers to play with
and comment on a scheme interpreter. I would prefer those who
have been hacking portable [non-scheme] interpreters for many
years. The interpreter is PSI, a portable scheme interpreter
that includes a simple dag compiler and a virtual machine. It
can be used as an integrated extension interpreter in other
systems, allows for easy addition of new primitives, and it
embodies some other interesting ideas. There are some unique
code debug/trace facilities, as well as acceptable performance
resulting from a fairly straightforward implementation.
Continuations are fully and portably supported, and perform
well. PSI is based on the simple compilers/vm in Kent
Dybvig's thesis.
compliance: R^4RS compatible with a number of useful extensions.
updated: 1993/02/19
language: sed
package: GNU sed
version: 1.11
parts: interpreter, ?
author: ?
how to get: ftp sed-1.11.tar.z from a GNU archive site
contact: ?
updated: 1992/05/31
language: Self
package: Self
version: 2.0
parts: ?, compiler?, debugger, browser
author: The Self Group at Sun Microsystems & Stanford University
how to get: ftp ? from self.stanford.edu
description: The Self Group at Sun Microsystems Laboratories,
Inc., and Stanford University is pleased to announce
Release 2.0 of the experimental object-oriented
exploratory programming language Self.
Release 2.0 introduces full source-level debugging
of optimized code, adaptive optimization to shorten
compile pauses, lightweight threads within Self,
support for dynamically linking foreign functions,
changing programs within Self, and the ability to
run the experimental Self graphical browser under
OpenWindows.
Designed for expressive power and malleability,
Self combines a pure, prototype-based object model
with uniform access to state and behavior. Unlike
other languages, Self allows objects to inherit
state and to change their patterns of inheritance
dynamically. Self's customizing compiler can generate
very efficient code compared to other dynamically-typed
object-oriented languages.
discussion: self-request@self.stanford.edu
ports: Sun-3 (no optimizer), Sun-4
contact: ?
updated: 1992/08/13
language: SGML (Standardized Generalized Markup Language)
package: sgmls
version: 1.1
parts: parser
author: James Clark <jjc@jclark.com> and Charles Goldfarb
how to get: ftp pub/text-processing/sgml/sgmls-1.0.tar.Z from ftp.uu.net
uk: ftp sgmls/sgmls-1.1.tar.Z from ftp.jclark.com
description: SGML is a markup language standardized in ISO 8879.
Sgmls is an SGML parser derived from the ARCSGML
parser materials which were written by Charles
Goldfarb. It outputs a simple, easily parsed, line
oriented, ASCII representation of an SGML document's
Element Structure Information Set (see pp 588-593
of ``The SGML Handbook''). It is intended to be
used as the front end for structure-controlled SGML
applications. SGML is an important move in the
direction of separating information from its
presentation, i.e. making different presentations
possible for the same information.
bugs: James Clark <jjc@jclark.com>
ports: unix, msdos
updated: 1993/02/22
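The "simple, easily parsed, line oriented" output style the entry
describes can be sketched: each element open, run of character data, and
element close becomes one prefixed line (a simplified ESIS-style toy
emitter, not sgmls itself or its exact format):

```python
# Simplified sketch of line-oriented ESIS-style output: "(" opens
# an element, "-" carries character data, ")" closes an element.
# (Toy emitter over a nested tuple tree, not sgmls itself.)
def emit(node, out=None):
    out = [] if out is None else out
    name, children = node
    out.append("(" + name)            # element start
    for child in children:
        if isinstance(child, str):
            out.append("-" + child)   # character data
        else:
            emit(child, out)          # nested element
    out.append(")" + name)            # element end
    return out

doc = ("MEMO", [("TO", ["staff"]), ("BODY", ["hello"])])
lines = emit(doc)
# lines[0] == "(MEMO"
```

A structure-controlled application can then be driven by reading these
lines one at a time, which is the front-end role the entry describes.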
language: Korn Shell
package: SKsh
version: 2.1
author: Steve Koren <koren@hpfcogv.fc.hp.com>
parts: interpreter, utilities
how to get: ftp pub/amiga/incom*/utils/SKsh021.lzh from hubcap.clemson.edu
description: SKsh is a Unix ksh-like shell which runs under AmigaDos.
It provides a Unix-like environment but supports many
AmigaDos features such as resident commands, ARexx, etc.
Scripts can be written to run under either ksh or SKsh,
and many of the useful Unix commands such as xargs, grep,
find, etc. are provided.
ports: Amiga
updated: 1992/12/16
language: Korn Shell
package: bash (Bourne Again SHell)
version: 1.12
parts: parser(yacc), interpreter, documentation
how to get: ftp bash-1.12.tar.Z from a GNU archive site
author: Brian Fox <bfox@vision.ucsb.edu>
description: Bash is a POSIX-compatible shell with full Bourne shell
syntax and some C-shell commands built in. The Bourne Again
Shell supports emacs-style command-line editing, job control,
functions, and on-line help.
restriction: GNU General Public License
bugs: gnu.bash.bug
updated: 1992/01/28
language: Korn Shell
package: pd-ksh
version: 4.8
author: Simon J. Gerraty <sjg@zen.void.oz.au>
how to get: ?
description: ?
contact: Simon J Gerraty <sjg@melb.bull.oz.au> (zen.void.oz.au is down)
updated: ?
language: csh (C-Shell)
package: tcsh
version: 6.03
parts: interpreter
author: Christos Zoulas <christos@ee.cornell.edu>
how to get: ftp ? from ftp.spc.edu
description: a modified C-Shell with history editing
ports: unix, OpenVMS
updated: 1992/12/16
language: rc (Plan 9 shell)
package: rc
version: 1.4
parts: interpreter
author: Byron Rakitzis <byron@netapp.com>
how to get: comp.sources.misc volume 30; or ftp pub/shells/* from
ftp.white.toronto.edu
description: a free implementation of the Plan 9 shell.
discussion: rc-request@hawkwind.utcs.toronto.edu
updated: 1992/05/26
language: es (a functional shell)
package: es
version: 0.8
parts: interpreter
author: Byron Rakitzis <byron@netapp.com>, Paul Haahr <haahr@adobe.com>
how to get: ftp ftp.white.toronto.edu:/pub/es/es-0.8.tar.Z
description: shell with higher order functions
updated: 1993/03/22
language: Simula
package: Lund Simula
version: 4.07
author: ?
how to get: ftp misc/mac/programming/+_Simula/* from rascal.ics.utexas.edu
description: ?
contact: Lund Software House AB / Box 7056 / S-22007 Lund, Sweden
updated: 1992/05/22
language: Simula
package: Cim
version: 1.10
parts: translator(->C), ?
author: Sverre Johansen, Stein Krogdahl and Terje Mjos
how to get: ftp cim/* from ftp.ifi.uio.no
description: Cim is a compiler for the programming language Simula,
from the Department of Informatics, University of Oslo.
It offers a class concept, separate compilation with
full type checking, an interface to external C routines,
an application package for process simulation,
and a coroutine concept.
Cim is a Simula compiler whose portability is based on
the C programming language. The compiler and the
run-time system is written in C, and the compiler
produces C-code, that is passed to a C-compiler for
further processing towards machine code.
conformance: except unspecified parameters to formal or virtual procedures
ports: Vax (Ultrix,VMS), 68020/30 (SunOS,Next,HPUX), sparc (Sunos),
mips (SGI,Dec,CD), 9000s705 (HPUX), alpha (OSF/1),
m88k (Triton,Aviion), Apollo, Cray (YMP), Encore Multimax,
9000s800 (HPUX), 386/486 (LINUX,SCO,Interactive),
Atari (MINIX) and Commodore Amiga (AmigaDos).
contact: cim@ifi.uio.no
updated: 1993/02/25
language: SISAL 1.2
package: The Optimizing SISAL Compiler
version: 12.0
parts: compiler?, manuals, documentation, examples, debugger,...
author: David C. Cann <cann@sisal.llnl.gov>
how to get: ftp pub/sisal from sisal.llnl.gov
description: Sisal is a functional language designed to be competitive
with Fortran and other imperative languages for scientific jobs.
In particular, OSC uses advanced optimizing techniques to
achieve fast speeds for computation-intensive programs.
It also features routines for making efficient use
of parallel processors, such as that on the Cray.
ports: ?
updated: ?
language: Smalltalk
package: Little Smalltalk
version: 3
author: Tim Budd <budd@cs.orst.edu> ?
how to get: ftp pub/budd/? from cs.orst.edu
ports: unix, pc, atari, vms
status: ?
updated: ?
language: Smalltalk
package: GNU Smalltalk
version: 1.1.1
parts: ?
author: Steven Byrne <sbb@eng.sun.com>
how to get: ftp smalltalk-1.1.1.tar.Z from a GNU archive site
description: ?
discussion: ?
bugs: gnu.smalltalk.bug
contact: ?
updated: 1991/09/15
language: Smalltalk
package: msgGUI
version: 1.0
parts: library
author: Mark Bush <bush@ecs.ox.ac.uk>
how to get: ftp pub/Packages/mst/mstGUI-1.0.tar.Z from ftp.comlab.ox.ac.uk
description: GUI for GNU Smalltalk.  This package contains the basics
for creating window applications in the manner available in
other graphically based Smalltalk implementations.
updated: 1992/12/14
language: Smalltalk
package: Mei
version: 0.50
parts: interpreters(Lisp,Prolog), examples, libraries, tools, editor,
browser
author: Atsushi Aoki <aoki@sra.co.jp> and others
how to get: ftp pub/goodies/misc/Mei.tar.Z from mushroom.cs.man.ac.uk
us: ftp pub/MANCHESTER/misc/Mei from st.cs.uiuc.edu
jp: ftp pub/lang/smalltalk/mei/Mei0.50.tar.Z from srawgw.sra.co.jp
description: Mei is a set of class libraries for Objectworks Smalltalk
Release 4.1.  It includes: 1. Grapher Library (useful for
drawing diagrams); 2. Meta Grapher Library (a grapher to
develop graphers); 3. Drawing tools and painting tools
(structured diagram editors and drawing editors); 4. GUI editor
(graphical user interface builder); 5. Lisp interpreter;
6. Prolog interpreter; 7. Pluggable gauges; 8. Extended
browser (package, history, recover, etc.)
restriction: GNU General Public License
requires: Objectworks Smalltalk Release 4.1
contact: Watanabe Katsuhiro <katsu@sran14.sra.co.jp>
updated: 1993/01/20
language: Snobol4
package: SIL (Macro Implementation of SNOBOL4)
version: 3.11
how to get: ftp snobol4/* from cs.arizona.edu
contact: snobol4@arizona.edu
updated: 1986/07/29
language: Snobol4
package: vanilla
version: ?
author: Catspaw, Inc.
how to get: ftp snobol4/vanilla.arc from cs.arizona.edu
contact: ?
ports: MSDOS
updated: 1992/02/05
language: SR (Synchronizing Resources)
package: sr
version: 2.0
parts: ?, documentation, tests
how to get: ftp sr/sr.tar.Z from cs.arizona.edu
description: SR is a language for writing concurrent programs.
The main language constructs are resources and
operations. Resources encapsulate processes and
variables they share; operations provide the primary
mechanism for process interaction. SR provides a novel
integration of the mechanisms for invoking and
servicing operations. Consequently, all of local and
remote procedure call, rendezvous, message passing,
dynamic process creation, multicast, and semaphores are
supported.
reference: "The SR Programming Language: Concurrency in Practice",
by Gregory R. Andrews and Ronald A. Olsson, Benjamin/Cummings
Publishing Company, 1993, ISBN 0-8053-0088-0
contact: sr-project@cs.arizona.edu
discussion: info-sr-request@cs.arizona.edu
ports: Sun-4, Sun-3, Decstation, SGI Iris, HP PA, HP 9000/300,
NeXT, Sequent Symmetry, DG AViiON, RS/6000, Multimax,
Apollo, and others.
updated: 1992/09/01
language: Standard ML
package: SML/NJ (Standard ML of New Jersey)
version: 0.93
parts: compiler, libraries, extensions, interfaces, documentation,
build facility
author: D. B. MacQueen <dbm@research.att.com>, Lal George
<george@research.att.com>, J. H. Reppy <jhr@research.att.com>,
A. W. Appel <appel@princeton.edu>
how to get: ftp dist/ml/* from research.att.com
description: Standard ML is a modern, polymorphically typed, (impure)
functional language with a module system that supports flexible
yet secure large-scale programming. Standard ML of New Jersey
is an optimizing native-code compiler for Standard ML that is
written in Standard ML. It runs on a wide range of
architectures. The distribution also contains:
+ an extensive library - The Standard ML of New Jersey Library,
including detailed documentation.
+ CML - Concurrent ML
+ eXene - an elegant interface to X11 (based on CML)
+ SourceGroup - a separate compilation and "make" facility
ports: M68K, SPARC, MIPS, HPPA, RS/6000, I386/486
updated: 1993/02/18
language: Concurrent ML
package: Concurrent ML
version: 0.9.8
parts: extension
how to get: ftp pub/CML* from ftp.cs.cornell.edu or get SML/NJ
description: Concurrent ML is a concurrent extension of SML/NJ, supporting
dynamic thread creation, synchronous message passing on
synchronous channels, and first-class synchronous operations.
First-class synchronous operations allow users to tailor their
synchronization abstractions for their application. CML also
supports both stream I/O and low-level I/O in an integrated
fashion.
bugs: sml-bugs@research.att.com
requires: SML/NJ 0.75 (or later)
updated: 1993/02/18
language: Standard ML
package: sml2c
version: ?
parts: translator(C), documentation, tests
how to get: ftp /usr/nemo/sml2c/sml2c.tar.Z from dravido.soar.cs.cmu.edu
linux: ftp pub/linux/smlnj-0.82-linux.tar.Z from ftp.dcs.glasgow.ac.uk
author: School of Computer Science, Carnegie Mellon University
conformance: superset
+ first-class continuations,
+ asynchronous signal handling
+ separate compilation
+ freeze and restart programs
history: based on SML/NJ version 0.67 and shares front end and
most of its runtime system.
description: sml2c is a Standard ML to C compiler. sml2c is a batch
compiler and compiles only module-level declarations,
i.e. signatures, structures and functors. It provides
the same pervasive environment for the compilation of
these programs as SML/NJ. As a result, module-level
programs that run on SML/NJ can be compiled by sml2c
without any changes. It does not support SML/NJ style
debugging and profiling.
ports: IBM-RT Decstation3100 Omron-Luna-88k Sun-3 Sun-4 386(Mach)
portability: easy, easier than SML/NJ
contact: david.tarditi@cs.cmu.edu anurag.acharya@cs.cmu.edu
peter.lee@cs.cmu.edu
updated: 1991/06/27
language: Standard ML
package: The ML Kit
version: 1
parts: interpreter, documentation
author: Nick Rothwell, David N. Turner, Mads Tofte <tofte@diku.dk>,
and Lars Birkedal at Edinburgh and Copenhagen Universities.
how to get: ftp diku/users/birkedal/* from ftp.diku.dk
uk: ftp export/ml/mlkit/* from lfcs.ed.ac.uk
description: The ML Kit is a straight translation of the Definition of
Standard ML into a collection of Standard ML modules. For
example, every inference rule in the Definition is translated
into a small piece of Standard ML code which implements it. The
translation has been done with as little originality as
possible - even variable conventions from the Definition are
carried straight over to the Kit. The Kit is intended as a
tool box for those people in the programming language community
who may want a self-contained parser or type checker for full
Standard ML but do not want to understand the clever bits of a
high-performance compiler. We have tried to write simple code
and modular interfaces.
updated: 1993/03/12
language: TCL (Tool Command Language)
package: TCL
version: 6.6
parts: interpreter, libraries, tests, documentation
how to get: ftp tcl/tcl6.6.tar.Z from sprite.berkeley.edu
msdos: ftp ? from cajal.uoregon.edu
macintosh: ftp pub/ticl from bric-a-brac.apple.com
examples: ftp tcl/* from barkley.berkeley.edu
author: John Ousterhout <ouster@cs.berkeley.edu>
description: TCL started out as a small language that could be
embedded in applications.  It has since been extended
into more of a general-purpose, shell-style programming
language.  TCL is like a text-oriented Lisp, but lets
you write algebraic expressions for simplicity and to
avoid scaring people away.
+ may be used as an embedded interpreter
+ exceptions, packages (called libraries)
- only a single name-space
+ provide/require
- no dynamic loading ability
? - arbitrary limits ?
- three variable types: strings, lists, associative arrays
bugs: ?
discussion: comp.lang.tcl
ports: ?
updated: 1993/02/23
language: TCL
package: BOS - The Basic Object System
version: 1.31
parts: library
author: Sean Levy <Sean.Levy@cs.cmu.edu>
how to get: ftp tcl/? from barkley.berkeley.edu
description: BOS is a C-callable library that implements the
notion of objects and uses Tcl as its interpreter
for interpreted methods (you can have "compiled"
methods in C, and mix compiled and interpreted
methods in the same object, plus lots more stuff).
I regularly (a) subclass and (b) mixin existing
objects using BOS to extend, among other things,
the set of tk widgets (I have all tk widgets wrapped
with BOS "classes"). BOS is a class-free object
system, also called a prototype-based object system;
it is modeled loosely on the Self system from
Stanford.
updated: 1992/08/21
language: TCL
package: Wafe
version: 0.94
parts: interface
author: Gustaf Neumann <neumann@dec4.wu-wien.ac.at>
how to get: ftp pub/src/X11/wafe/wafe-0.94.tar.Z from ftp.wu-wien.ac.at
description: Wafe (Widget[Athena] front end) is a package that implements
a symbolic interface to the Athena widgets (X11R5) and
OSF/Motif. A typical Wafe application consists of two
parts: a front-end (Wafe) and an application program which
runs typically as a separate process. The distribution
contains sample application programs in Perl, GAWK, Prolog,
TCL, C and Ada talking to the same Wafe binary.
discussion: send "subscribe Wafe <Your Name>" to listserv@wu-wien.ac.at
updated: 1993/02/13
language: TCL
package: Cygnus Tcl Tools
version: Release-930124
author: david d 'zoo' zuhn <zoo@cygnus.com>
how to get: ftp pub/tcltools-* from cygnus.com
description: A rebundling of Tcl and Tk into the Cygnus GNU build
framework with 'configure'.
updated: 1993/01/24
language: Tiny
package: Omega test, Extended Tiny
version: 3.0.0
parts: translator(fortran->tiny), tiny interpreter?, analysis tools
author: William Pugh <pugh@cs.umd.edu> and others
how to get: ftp pub/omega from ftp.cs.umd.edu
description: The Omega test is implemented in an extended version of
Michael Wolfe's tiny tool, a research/educational tool
for examining array data dependence algorithms and
program transformations for scientific computations.
The extended version of tiny can be used as an
educational or research tool.  The Omega test is a system
for performing symbolic manipulations of conjunctions
of linear constraints over integer variables.  The Omega
test dependence analyzer is a system built on top
of the Omega test to analyze array data dependences.
contact: omega@cs.umd.edu
updated: 1992/12/14
language: Extended Tiny
package: Extended Tiny
version: 3.0 (Dec 12th, 1992)
parts: programming environment, dependence tester, tests
translator(Fortran->tiny), documentation, tech. reports
author: original author: Michael Wolfe <cse.ogi.edu>,
extended by William Pugh et al. <pugh@cs.umd.edu>
how to get: ftp pub/omega from cs.umd.edu
description: A research/educational tool for experimenting with
array data dependence tests and reordering transformations.
It works on a language called tiny, which does not have procedures,
goto's, pointers, or other features that complicate dependence
testing. The original version of tiny was written by Michael
Wolfe, and has been extended substantially by a research group
at the University of Maryland. Michael Wolfe has made further
extensions to his version of tiny.
contact: Omega test research group <omega@cs.umd.edu>
ports: Any unix system (xterm helpful but not required)
updated: 1993/01/23
language: troff, nroff, eqn, tbl, pic, refer, Postscript, dvi
package: groff
version: 1.07
parts: document formatter, documentation
author: James Clark <jjc@jclark.com>
how to get: ftp groff-1.07.tar.z from a GNU archive site
description: [An absolutely fabulous troff --muir]
restriction: GNU General Public License
requires: C++
updated: 1993/03/03
language: UNITY
package: MasPar Unity
version: ?
parts: ?
author: ?
how to get: ftp pub/maspar/maspar_unity* from SanFrancisco.ira.uka.de
contact: Lutz Prechelt <prechelt@ira.uka.de> ?
updated: ?
language: UNITY
package: HOL-UNITY
version: 2.1
parts: verification tool
how to get: ?
contact: Flemming Andersen <fa@tfl.dk> ?
language: Verilog, XNF
package: XNF to Verilog Translator
version: ?
parts: translator(XNF->Verilog)
author: M J Colley <martin@essex.ac.uk>
how to get: ftp pub/dank/xnf2ver.tar.Z from punisher.caltech.edu
description: This program was written by a postgraduate student as part
of his M.Sc. course; it was designed to form part of a larger
system operating with the Cadence Edge 2.1 framework.  This
should be borne in mind when considering the construction
and/or operation of the program.
updated: ?
language: VHDL
package: ALLIANCE
version: 1.1
parts: compiler, simulator, tools and environment, documentation
how to get: ftp pub/cao-vlsi/alliance from ftp-masi.ibp.fr
description: ALLIANCE 1.1 is a complete set of CAD tools for teaching
digital CMOS VLSI design in universities.  It includes a VHDL
compiler and simulator, logic synthesis tools, automatic place
and route, etc.  ALLIANCE is the result of a ten-year effort
at University Pierre et Marie Curie (PARIS VI, France).
ports: Sun4, also not well supported: Mips/Ultrix, 386/SystemV
discussion: alliance-request@masi.ibp.fr
contact: cao-vlsi@masi.ibp.fr
updated: 1993/02/16
language: Web
package: web2c
version: 5-851d
parts: translator(C)
how to get: ftp TeX/web2c.tar.Z from ics.uci.edu
de: ftp pub/tex/src/web2c/web2c.tar.Z from ftp.th-darmstadt.de
description:
contact: Karl Berry <karl@claude.cs.umb.edu>
updated: 1993/02/22
language: Web
package: Web
version: ?
parts: translator(Pascal)
author: Donald Knuth
how to get: ftp ? from labrea.stanford.edu
description: Donald Knuth's literate programming system, in which
you write the source code and documentation together.
contact: ?
updated: ?
-------------------------------------------------------------------------------
------------------------------ archives ---------------------------------------
-------------------------------------------------------------------------------
language: Ada
package: AdaX
description: an archive of X libraries for Ada. Includes Motif
[note, I chose this server out of many somewhat randomly.
Use archie to find others --muir]
how to get: ftp pub/AdaX/* from falcon.stars.rosslyn.unisys.com
contact: ?
language: APL, J
package: APL, J, and other APL Software at Waterloo
how to get: ftp languages/apl/index from watserv1.waterloo.edu
contact: Leroy J. (Lee) Dickey <ljdickey@math.waterloo.edu>
language: C, C++, Objective C, yacc, lex, postscript,
sh, awk, smalltalk, sed
package: the GNU archive sites
description: There are many sites which mirror the master gnu archives
which live on prep.ai.mit.edu. Please do not use
the master archive without good reason.
how to get: ftp pub/gnu/* from prep.ai.mit.edu
USA: ftp mirrors4/gnu/* from wuarchive.wustl.edu
ftp pub/src/gnu/* from ftp.cs.widener.edu
ftp gnu/* from uxc.cso.uiuc.edu
ftp mirrors/gnu/* from col.hp.com
ftp pub/GNU/* from gatekeeper.dec.com
ftp packages/gnu/* from ftp.uu.net
Japan: ftp ? from ftp.cs.titech.ac.jp
ftp ftpsync/prep/* from utsun.s.u-tokyo.ac.jp
Australia: ftp gnu/* from archie.au
Europe: ftp gnu/* from src.doc.ic.ac.uk
ftp pub/GNU/*/* from ftp.informatik.tu-muenchen.de [re-org'ed]
ftp pub/gnu/* from ftp.informatik.rwth-aachen.de
ftp pub/gnu/* from nic.funet.fi
ftp pub/gnu/* from ugle.unit.no
ftp pub/gnu/* from isy.liu.se
ftp pub/gnu/* from ftp.stacken.kth.se
ftp pub/gnu/* from sunic.sunet.se [re-org'ed]
ftp pub/gnu/* from ftp.win.tue.nl
ftp pub/gnu/* from ftp.diku.dk
ftp software/gnu/* from ftp.eunet.ch
ftp gnu/* from archive.eu.net [re-org'ed]
note: Many gnu files are now compressed with gzip.  You can
tell a gzip'ed file because it has a lower-case .z rather
than the capital .Z that compress uses.  Gzip is available
from these same archives.
language: lisp
package: MIT AI Lab archives
description: archive of lisp extensions, utilities, and libraries
how to get: ftp pub/* from ftp.ai.mit.edu
contact: ?
language: lisp
package: Lisp Utilities collection
how to get: ftp /afs/cs.cmu.edu/user/mkant/Public/Lisp from ftp.cs.cmu.edu
contact: cl-utilities-request@cs.cmu.edu
language: Scheme
package: The Scheme Repository
description: an archive of scheme material including a bibliography,
the R4RS report, sample code, utilities, and implementations.
how to get: ftp pub/scheme/* from nexus.yorku.ca
contact: Ozan S. Yigit <scheme@nexus.yorku.ca>
language: Smalltalk
package: Manchester Smalltalk Goodies Library
description: a large collection of libraries for smalltalk.
Created by Alan Wills, administered by Mario Wolczko.
how to get: ftp uiuc/st*/* from st.cs.uiuc.edu
uk: ftp uiuc/st*/* from mushroom.cs.man.ac.uk
contact: goodies-lib@cs.man.ac.uk
language: Tcl
package: Tcl/Tk Contrib Archive
description: An archive of Tcl/tk things.
how to get: ftp tcl/* from barkley.berkeley.edu
contact: Jack Hsu <tcl-archive@barkley.berkeley.edu>
-------------------------------------------------------------------------------
----------------------------- references --------------------------------------
-------------------------------------------------------------------------------
name: Catalog of embeddable Languages.
author: Colas Nahaboo <colas@bagheera.inria.fr>
how to get: posted to comp.lang.misc,comp.lang.tcl
description: Descriptions of languages from the point of view of
embedding them.
version: 2
updated: 1992/07/09
name: Compilers bibliography
author: Cheryl Lins <lins@apple.com>
how to get: ftp pub/oberon/comp_bib_1.4.Z from ftp.apple.com
description: It includes all the POPLs, PLDIs, Compiler Construction,
TOPLAS, and LOPLAS, plus various articles and papers from
other sources on compilers and related topics.
version: 1.4
updated: 1992/10/31
name: Language List
author: Bill Kinnersley <billk@hawk.cs.ukans.edu>
how to get: posted regularly to comp.lang.misc; ftp from
primost.cs.wisc.edu or idiom.berkeley.ca.us
description: Descriptions of almost every computer language there is.
Many references to available source code.
version: 1.7 ?
updated: 1992/04/05
name: The Lisp FAQs
author: Mark Kantrowitz <mkant+@cs.cmu.edu>
how to get: posted regularly to comp.lang.lisp,news.answers,comp.answers
description: details of many lisps and systems written in lisps
including many languages not elsewhere.
version: 1.30
updated: 1993/02/08
name: Survey of Interpreted Languages
author: Terrence Monroe Brannon <tb06@CS1.CC.Lehigh.ED>
how to get: Posted to comp.lang.tcl,comp.lang.misc,comp.lang.perl,
gnu.emacs.help,news.answers; or ftp
pub/gnu/emacs/elisp-ar*/pack*/Hy*Act*F*/survey-inter*-languages
from archive.cis.ohio-state.edu.
description: Detailed comparison of a few interpreters: Emacs Lisp,
Perl, Python, and Tcl.
version: ?
updated: ?
name: The Apple II Programmer's Catalog of Languages and Toolkits
author: Larry W. Virden <lvirden@cas.org>
description: a survey of language tools available for the Apple ][.
how to get: posted to comp.sys.apple2, comp.lang.misc; ftp from
idiom.berkeley.ca.us
version: 2.0
updated: 1993/02/12
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 1 15:38:23 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: assmann@karlsruhe.gmd.de (Uwe Assmann)
Subject: Theory on loop transformations
Message-ID: <93-04-006@comp.compilers>
Keywords: theory, optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: GMD Forschungsstelle an der Universitaet Karlsruhe
Date: Thu, 1 Apr 1993 08:20:25 GMT
Approved: compilers@iecc.cambridge.ma.us
I remember that M. Wolfe (and probably others) have tried to develop a
theory of loop transformations that treats a loop transformation as a
matrix: whenever a transformation is performed, it amounts to a matrix
operation on the loop.  Does anybody have references or new information?
In general, I am interested in a general theory of loop transformations.
--
Uwe Assmann
GMD Forschungsstelle an der Universitaet Karlsruhe
Vincenz-Priessnitz-Str. 1
7500 Karlsruhe GERMANY
Email: assmann@karlsruhe.gmd.de Tel: 0721/662255 Fax: 0721/6622968
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 1 15:39:31 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: mwolfe@dat.cse.ogi.edu (Michael Wolfe)
Subject: High Performance Compilers Summer Courses
Message-ID: <93-04-007@comp.compilers>
Keywords: courses
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Oregon Graduate Institute - Computer Science & Engineering
Date: Thu, 1 Apr 1993 17:32:08 GMT
Approved: compilers@iecc.cambridge.ma.us
Once again, the Oregon Graduate Institute is offering
Summer Intensive Workshops on High Performance Compilers
Analysis and Optimization for Modern Architectures
Monday-Friday, August 16-20, 1993
Advanced Analysis and Optimizing for Parallelism
Monday-Friday, August 23-27, 1993
with Michael Wolfe
Hotel Vintage Plaza
Portland, Oregon
The past two years' courses have been well attended and received. This
year's offerings build on previous courses, but are updated and expanded.
The full details of these courses are available electronically
via anonymous ftp at:
% ftp cse.ogi.edu [or ftp 129.95.40.2]
Name: anonymous
Password: myname@where.am.i
ftp> cd pub
ftp> cd HPC
ftp> get brochure
ftp> quit
%
Or send email to
mwolfe@cse.ogi.edu (for technical inquiries)
lpease@admin.ogi.edu (for registration or a printed brochure)
What follows is the (lengthy) course outline for the two courses.
==================================================================
ANALYSIS AND OPTIMIZATION FOR MODERN ARCHITECTURES
Monday-Friday, August 16-20, 1993
Course Outline:
MONDAY:
1. Pipelined Processor Architecture
. Architectural Options
- pipelined functional units
- large register file (usually)
- vector instruction set
- cache memory
. Control Unit Options
- register windows
- VLIW/super-scalar control unit
- register renaming
- multithreading
. Representative Architectures
Multiflow Trace/300, Cydrome Cydra 5, IBM RS/6000, Intel i860,
Cray C90, Digital Alpha, Hewlett Packard PA-RISC
2. Compiler Framework
- front end
- high-level optimizations
- back-end optimizations
- code generation
- peephole optimizations
- basic blocks, control flow graph
. Basic Data Structures
- tuple, list, tree, graph
3. Data Flow Analysis
. Dataflow Problems
- live variables
- reaching definitions
- dominators
. Applications
- dead code elimination
- use-def chains, constant propagation
- loop discovery
. Dataflow Framework
- lattice framework
- iterative algorithm
. Complications
- aliasing (pointers, reference formals)
- volatile variables
. Other Dataflow Solution Methods
- syntax-based elimination methods
- interval analysis
- slotwise analysis
4. Dominator Analysis
- dominator trees
- more details on loop discovery
- unreachable code elimination
TUESDAY:
5. Machine Independent Optimizations
- constant propagation
- constant folding
- copy propagation
- constant conditional elimination
- common subexpression elimination
6. Loop Optimizations
- code floating
- strength reduction (induction variables)
- linear test replacement
- partial redundancy elimination
7. Procedure Optimizations
- leaf procedures
- tail recursion, tail calls
8. Improving Optimizations
- loop rotation
- code replication (tracing)
- procedure integration
WEDNESDAY:
9. Local Register Allocation
- minimizing register utilization
10. Register Allocation via Coloring
- spill heuristics
- coloring algorithms
- allocating register pairs
- register coalescing
- splitting live ranges
11. Other Register Allocation Heuristics
- hierarchical allocation
- vector register allocation
- register allocation in loops
THURSDAY:
12. Instruction Scheduling
- basic block scheduling
- extended basic block formation
- filling delay slots
13. Scheduling in Loops
- software pipelining
- polycyclic scheduling
- loop unrolling, peeling
- register assignment in loops
- vector instruction scheduling, chaining
FRIDAY:
14. Peephole Optimizations
- tail merging
- jump optimizations
- optimizing for branch prediction
- instruction placement
15. Interacting with Debuggers
- currency of values in registers
- currency of values in memory
- reporting values of variables
- reporting position
16. Instruction Selection
- hand-written code generators
- table-driven code generators
17. Engineering a Real Compiler
- software engineering
- time to market
- managing data structures
- interactive data structure browser
- graphical display of data structures
- 'source-level' debugging
==================================================================
ADVANCED ANALYSIS AND OPTIMIZING FOR PARALLELISM
Monday-Friday, August 23-27, 1993
Course Outline:
MONDAY:
1. Parallel Computer Architecture
. Architectural Options
- vector instruction sets
- multiple CPUs sharing memory
- multiple operations per instruction
- super-scalar control unit
- super-pipelined data unit
- cache memory organization
- cache coherence mechanisms
- massively parallel SIMD and MIMD
- message passing systems
- latency tolerating systems
- locality-based systems
. Current Examples
Intel i860, IBM RS/6000, Multiflow Trace/300, Cray C-90, Alliant FX/80,
Intel iPSC/860, Thinking Machines CM-2 and CM-5, MasPar MP-2
2. Compiler Framework
. Compiler Structure
- front end
- high-level optimizations
- back-end optimizations
- code generation
- peephole optimizations
. Internal Data Structures
- basic blocks, control flow graph
3. Basic Data Structures and Concepts
- lists, trees, graphs
- formal introduction into graphs
- graph algorithms (traversal, spanning trees, cycles)
4. Dataflow Problems in a Lattice Context
- monotone dataflow framework
- reaching definitions, live variables
- iterative solution
5. Control Flow Analysis
- dominators
- control dependence
- identifying loops
6. Static Single Assignment (Factored Use-Def Chains)
- conversion to static single assignment
- conditional constant propagation
- induction variable identification
- comparison to classical methods
- handling parallel language extensions
7. Sparse Dataflow Evaluation Graphs
- constructing the sparse graph
TUESDAY:
8. Data Dependence Analysis Techniques
. Introduction
- types: flow, anti, output dependence
- abstractions: distance, direction vectors
. Subscript Analysis
- formulating a dependence equation
- handling symbolic variables
. Solving the Dependence Equation
- single variable exact test
- Banerjee's inequalities,
- Stanford sieve
- Lambda, Delta, Omega, Power tests
. Complications
- I/O dependence
- aliasing, EQUIVALENCE, COMMON
- structure aliasing
. Beyond Classical Methods
- array kill analysis
- pointers, dynamic aliasing
- argument aliasing
. Program Dependence Graph
WEDNESDAY:
9. Non-Loop Parallelism
. Hierarchical Task Graph
10. Program Restructuring for Shared Memory
- parallelization
- scheduling parallel code
- induction variables, temporaries, etc.
- distribution
- alignment
- index set splitting
- synchronization
- interchanging
- optimizing for memory locality
- linear index set transformations
- privatization
- performance modeling
11. Restructuring for Vector Computers
- vectorization
- scalar expansion
- reductions
- strip mining
12. Restructuring for SIMD Computers
- strip mining, combing
- scalarization, fusion
13. Restructuring for Scalar Computers
- tiling for cache locality
- stride sensitivity
- handling nontightly nested loops
14. Other Restructuring Transformations
- loop fusion
- loop rotation
- handling nontightly nested loops
THURSDAY:
15. Interprocedural Analysis
. Summarizing the Effects of Procedures
- flow insensitive USE and MOD information
- summarizing USE and MOD of arrays
. Interprocedural Constant Propagation
. Interprocedural Alias Analysis
16. High Performance Fortran Language
. Parallelism
- array assignment, FORALL
. Data Distribution
- Align, Distribute, Template
. Expected Behavior
. Complications
- pointer assignments
- dynamic realignment
- inherited alignments
17. Local HPF Analysis
. Conformance
. Alignment Analysis
18. HPF Code Generation
. Generate a Node Program
- local vs. global index sets and indices
- local data allocation
. Communication
- where to place communication
- communication vectorization
FRIDAY:
19. Optimizing HPF Node Programs
. Index Set Calculation
- recovering induction variables
- finite state machine analysis
- guard regions
- overlapping communication
20. Interprocedural Analysis for HPF
. Reaching Distributions
- procedure cloning
. Guard Region Analysis
21. Other Parallel Languages
. Dataparallel C
- front-end/back-end model
- communication classification
. SISAL
- sequentialize to reduce data copying
- dynamic task scheduling
. Crystal
- automatic data alignment
- parallel code generation
- elimination of temporal domain storage
22. Current Research
. Extensions to SSA model
- better handling of aliases
. Data Restructuring
- sparse array implementation
- influencing sparse array representation
- compression of sparse index sets
. Low Level Extensions to HPF
- MetaMP extensions
- interface to HPF
- optimize MetaMP portion of program
. C++ Optimizations
- achieve performance of Fortran from C++
- expose C++ methods to optimizations
23. Engineering a Real Compiler
- software engineering
- time to market
- managing data structures
- interactive data structure browser
- graphical display of data structures
- 'source-level' debugging
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 1 23:27:17 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: cosc19py@jane.uh.edu (93S07005)
Subject: Semantic actions in LR parser
Message-ID: <93-04-008@comp.compilers>
Keywords: LR(1), parse, question, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Houston
Date: Fri, 2 Apr 1993 02:45:00 GMT
Approved: compilers@iecc.cambridge.ma.us
Hi,
I am wondering whether, for an LR grammar, all semantic actions must be
placed at the tail end of productions, rather than at arbitrary
positions as in an LL grammar.
e.g. A -> b B c d C D e {semantic action}
Do any LR parsers exist without this restriction?  Yacc accepts actions
at any position, but it transforms such productions into many pseudo
productions, which seems unnatural.  Apart from this approach, are
there any parsers that can achieve it?
Any pointers would be appreciated.
Sue
[I can't see how an LR parser can execute the action until the entire
RHS has been read, since before that it doesn't know what rule it's going
to recognize. -John]
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Fri Apr 2 15:35:36 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: jjan@cs.rug.nl (Beeblebrox)
Subject: Re: Theory on loop transformations
Message-ID: <93-04-009@comp.compilers>
Keywords: theory, optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Dept. of Computing Science, Groningen University
References: <93-04-006@comp.compilers>
Date: Fri, 2 Apr 1993 14:31:41 GMT
Approved: compilers@iecc.cambridge.ma.us
From: assmann@karlsruhe.gmd.de (Uwe Assmann)
> ... M. Wolfe (and probably others) have tried to develop a theory on loop
>transformations that looked at the loop transformation as a matrix.
Yep.  I found it in the ACM SIGPLAN '91 Conference proceedings, pp. 30-45;
the authors are Michael E. Wolf (not Wolfe) and Monica S. Lam
(Stanford University).
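To make the idea concrete, here is a small sketch (mine, not from the
paper) of treating loop interchange as a unimodular matrix acting on
dependence distance vectors; in this framework a transformation is
legal when every transformed distance vector remains lexicographically
positive:

```python
# Sketch: a loop transformation as a unimodular matrix T acting on
# iteration-space vectors (i, j).  Loop interchange is the permutation
# matrix [[0, 1], [1, 0]].

def apply(T, v):
    """Multiply a 2x2 integer matrix T by a vector v."""
    return (T[0][0] * v[0] + T[0][1] * v[1],
            T[1][0] * v[0] + T[1][1] * v[1])

def lex_positive(v):
    """True if v is lexicographically positive (first nonzero entry > 0)."""
    for x in v:
        if x != 0:
            return x > 0
    return False

interchange = ((0, 1), (1, 0))

# Dependence distance (1, 0) (carried by the outer loop) becomes (0, 1)
# after interchange -- still lexicographically positive, so interchange
# is legal for this dependence.
print(apply(interchange, (1, 0)))                 # (0, 1)
print(lex_positive(apply(interchange, (1, 0))))   # True

# Distance (1, -1) becomes (-1, 1): not lexicographically positive,
# so interchange would be illegal here.
print(lex_positive(apply(interchange, (1, -1))))  # False
```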
--
Jan Jongejan email: jjan@cs.rug.nl
Dept. Comp.Sci.,
Univ. of Groningen,
Netherlands.
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Fri Apr 2 15:39:59 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: roy@prism.gatech.edu (Roy Mongiovi)
Subject: Re: Semantic actions in LR parser
Message-ID: <93-04-010@comp.compilers>
Keywords: LR(1), parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Georgia Institute of Technology
References: <93-04-008@comp.compilers>
Date: Fri, 2 Apr 1993 15:01:24 GMT
Approved: compilers@iecc.cambridge.ma.us
LR parsers can only perform semantic actions when the recognize a handle
(right-hand side). You can either split up the right-hand sides into
pieces so that the pieces end where you need the semantic actions, or you
can stick in epsilon productions whose only purpose is to cause semantic
actions.
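That epsilon-production trick can be mechanized. Below is a small Python
sketch (my own illustration, with made-up rule and action names, not part
of Roy's post) that rewrites embedded action markers into fresh marker
nonterminals with epsilon productions -- roughly what yacc does internally
for mid-rule actions:

```python
def lift_actions(rules):
    """Rewrite rules whose right-hand sides embed action markers.

    Each rule is (lhs, [symbols]); an embedded action is written as
    ("ACTION", name).  Every such action is replaced by a fresh marker
    nonterminal with a single epsilon production, so an LR parser can
    fire the action when it reduces the marker.
    """
    out = []       # transformed rules
    markers = {}   # marker nonterminal -> action name
    counter = [0]

    def fresh(action):
        counter[0] += 1
        m = "$M%d" % counter[0]
        markers[m] = action
        out.append((m, []))          # epsilon production for the marker
        return m

    for lhs, rhs in rules:
        new_rhs = [fresh(sym[1])
                   if isinstance(sym, tuple) and sym[0] == "ACTION"
                   else sym
                   for sym in rhs]
        out.append((lhs, new_rhs))
    return out, markers

# stmt -> IF expr {check_cond} THEN stmt   becomes
#   $M1  -> epsilon        (reducing $M1 runs check_cond)
#   stmt -> IF expr $M1 THEN stmt
rules = [("stmt", ["IF", "expr", ("ACTION", "check_cond"), "THEN", "stmt"])]
new_rules, markers = lift_actions(rules)
```

Since the marker's epsilon production is the only place the marker appears,
the parser fires the associated action exactly where the marker sat in the
original rule.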
--
Roy J. Mongiovi Systems Support Specialist Information Technology
Georgia Institute of Technology, Atlanta, Georgia 30332-0715
roy@prism.gatech.edu
From compilers Fri Apr 2 15:48:13 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: dsehr@gomez.intel.com (David Sehr)
Subject: Re: Theory on loop transformations
Message-ID: <93-04-011@comp.compilers>
Keywords: optimize, theory
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Architecture & Software Technology, Intel Corp, Santa Clara, CA
References: <93-04-006@comp.compilers>
Date: Fri, 2 Apr 1993 16:31:04 GMT
Approved: compilers@iecc.cambridge.ma.us
Uwe Assmann (assmann@karlsruhe.gmd.de) wrote:
> I remember that M. Wolfe (and probably others) have tried to develop a
> theory on loop transformations that looked at the loop transformation as a
> matrix.
A colleague here at Intel has been working on exactly that problem for
several years. His name is Utpal Banerjee, and he has recently
published a book describing the approach you mention (using unimodular
matrices for dependence testing and transformation). The full info:
Loop Transformations for Restructuring Compilers: the Foundations
Utpal Banerjee
Kluwer Academic Publishers
ISBN: 0-7923-9318-X
Other possibilities to pursue are the recent papers of William Pugh of
the University of Maryland (pugh@cs.umd.edu), and Paul Feautrier of the
University of P. et M. Curie (feautrier@masi.ibp.fr).
Hope this helps.
David
--
David C. Sehr, Intel Corporation
2200 Mission College Blvd., M/S RN6-18
Santa Clara, CA 95052-8119
From compilers Fri Apr 2 15:49:23 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: colas@opossum.inria.fr (Colas Nahaboo)
Subject: Regexps from shell wildcards
Message-ID: <93-04-012@comp.compilers>
Keywords: lex, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Koala Project, Bull Research France
Date: Fri, 2 Apr 1993 16:31:24 GMT
Approved: compilers@iecc.cambridge.ma.us
Is there an algorithm to convert shell-expressions into regular
expressions? (i.e. generate the string ".*[.]c" from the input "*.c")
In the same vein, is there an algorithm to generate case-independent
regular expressions from normal ones? (i.e. generate the string
"[aA][bB][cC][eEfFgG]*" from the input "abc[efg]*")
--
Colas Nahaboo, Koala (Bull Research). colas@koala.inria.fr
From compilers Fri Apr 2 15:55:17 EST 1993
Xref: iecc comp.object:9759 comp.arch:27570 comp.compilers:4471
Newsgroups: comp.object,comp.arch,comp.compilers
Path: iecc!compilers-sender
From: "Steven A. Moyer" <sam2y@koa.cs.virginia.edu>
Subject: Re: non-caching load and GC
Message-ID: <93-04-013@comp.compilers>
Keywords: architecture, GC
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Virginia Computer Science Department
References: <C4ppA5.BLx.1@cs.cmu.edu>
Date: Fri, 2 Apr 1993 18:53:03 GMT
Approved: compilers@iecc.cambridge.ma.us
monnier+@cs.cmu.edu (Stefan Monnier) writes:
>I have the impression that it could be interesting for a GC to use loads
>that don't necessarily bring the data in the cache (like the
>pipelined-float-load of the i860) in order not to completely flush the
>cache while garbage collecting (this seems most interesting when sweeping,
>but could be useful for copying collectors also).
>
>I haven't found any paper discussing this kind of 'optimisation'.
>
>Does such a paper exist ? Or is the idea just dumb ?
>Which processors have such a load ?
I'll address these questions in reverse order:
1) Currently the only microprocessor with such an instruction is the i860.
2) Utilizing a non-caching load instruction in a manner complementary to
caching is in fact a good idea and can significantly improve effective
memory bandwidth for many computations.
3) My very recently completed dissertation develops a family of compiler
optimizations, called 'access ordering', that exploit a non-caching
load instruction. These algorithms are applicable to stream-oriented
computations (loosely, vectorizable) and are derived in the context of
scientific computing. Essentially, one would like to apply blocking
techniques to cache multiply-referenced data items and then use non-caching
load instructions to reference single-visit items.
Note: the technique is also very useful for implementing the
'copy optimization' by using non-caching loads to reference
items to be copied to the contiguous memory region.
Applying non-caching loads in such a fashion requires much more than
simply replacing load instructions that reference single-visit items
with non-caching loads; one must then consider the effect of the
observed reference sequence on the other side of the cache. Access
ordering algorithms perform loop unrolling and reorder non-caching
loads to exploit the underlying memory system characteristics (e.g.,
architecture and component type).
The work presented in my dissertation focuses on reordering algorithms
for a number of common architecture/component pairs; performance models
are derived for the resulting sequences of non-caching accesses.
Combining caching and non-caching accesses is discussed in a general
way, but is not formalized (hey, ya got to draw the line somewhere :-)
Papers on access ordering have not reached print yet, but if you are
interested in obtaining further information you can get two pertinent
techreports via anonymous ftp to uvacs.cs.virginia.edu; the reports
are the compressed postscript files:
pub/techreports/IPC-92-02.ps.Z (single module architecture)
pub/techreports/IPC-92-12.ps.Z (interleaved architecture)
Warning: these reports contain old (and not particularly well done, I'll
admit) notation; all the notation in the actual dissertation
has been significantly altered and matured and results have a
much greater degree of formality. Furthermore, by altering
notation many of the equations simplified significantly.
Therefore, if you find this optimization to be useful for your
work, it is recommended you contact me to receive a copy
of the complete (and *much* improved) dissertation text.
I hope this helps in addressing your question. I'm afraid I'm not that
familiar with the state of the art in GC algorithms, but from your
description it sounds as if access ordering techniques may be helpful.
Steve
--
Steve Moyer
Computer Science Department
University of Virginia
From compilers Sat Apr 3 20:09:49 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: karsten@tfl.dk (Karsten Nyblad)
Subject: Re: Semantic actions in LR parser
Message-ID: <93-04-014@comp.compilers>
Keywords: LR(1), parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: TFL
References: <93-04-008@comp.compilers> <93-04-010@comp.compilers>
Date: Sat, 3 Apr 1993 09:37:19 GMT
Approved: compilers@iecc.cambridge.ma.us
roy@prism.gatech.edu (Roy Mongiovi) writes:
>LR parsers can only perform semantic actions when they recognize a handle
>(right-hand side). You can either split up the right-hand sides into
>pieces so that the pieces end where you need the semantic actions, or you
>can stick in epsilon productions whose only purpose is to cause semantic
>actions.
Hi,
That is wrong. When there is only one item in the kernel of a state, the
production of that item will always be reduced at a later point. So you
can allow actions to be executed when pushing a terminal or nonterminal
brings you to a state that has only one item in its kernel.
Even that can be generalized. All items in the kernel of a state have the
same symbol before the dot of the item, where the dot denotes the point up
to which the parser has accepted the symbols of the item's production. If
the same action has been specified, on all productions of the items in the
kernel of a state, for pushing the symbol before the dot, then that action
can be executed by the parser.
That can be generalized further. If the first terminals of the symbols
following the dot differ, they can be used to select the action to be
taken.
Example: Assume the productions
    A -> A B { action 1 } C
    B -> B { action 1 } D
    C -> B { action 2 } E
are part of a grammar, and the actions are to be executed when B is
pushed. In a state with the kernel
    A -> A B . C
    B -> B . D
action 1 can be taken. In a state with the kernel
    B -> B . D
    C -> B . E
both actions 1 and 2 could be taken, and thus the action selection is
ambiguous. That ambiguity can be resolved if D and E do not start with
the same terminals, e.g., if they are different terminals.
Karsten Nyblad
TFL, A Danish Telecommunication Research Lab.
karsten@tfl.dk
From compilers Sat Apr 3 20:10:25 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: pugh@cs.umd.edu (Bill Pugh)
Subject: Re: Theory on loop transformations
Message-ID: <93-04-015@comp.compilers>
Keywords: theory, optimize, bibliography
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
References: <93-04-006@comp.compilers>
Date: Sat, 3 Apr 1993 16:43:26 GMT
Approved: compilers@iecc.cambridge.ma.us
assmann@karlsruhe.gmd.de (Uwe Assmann) writes:
>I remember that M. Wolfe (and probably others) have tried to develop a
>theory on loop transformations ...
>In general, I am interested in a general theory on loop transformations.
Actually, it was M. E. Wolf, not M. Wolfe (confusing, isn't it?).
Here are a couple of references on the subject.
Bill Pugh
----------
U. Banerjee.
Unimodular transformations of double loops.
In {\em Proc. of the 3rd Workshop on Programming Languages and
Compilers for Parallel Computing}, Irvine, CA, August 1990.
Paul Feautrier.
Some efficient solutions to the affine scheduling problem, part i,
one-dimensional time.
Technical Report 92.28, IBP/MASI, April 1992.
Paul Feautrier.
Some efficient solutions to the affine scheduling problem, part ii,
multi-dimensional time.
Technical Report 92.78, IBP/MASI, Oct 1992.
Wayne Kelly and William Pugh,
Generating Schedules and Code within a
Unified Reordering Transformation Framework,
Technical Report CS-TR-2995,
Dept. of Computer Science, University of Maryland, College Park,
November, 1992
K. G. Kumar, D. Kulkarni, and A. Basu.
Deriving good transformations for mapping nested loops on hierarchical
parallel machines in polynomial time.
In {\em Proc. of the 1992 International Conference on
Supercomputing}, July 1992.
Wei Li and Keshav Pingali.
A singular loop transformation framework based on non-singular
matrices.
In {\em 5th Workshop on Languages and Compilers for Parallel
Computing}, Yale University, August 1992.
Lee-Chung Lu.
A unified framework for systematic loop transformations.
In {\em Proceedings of Third ACM SIGPLAN Symp. on the Principles \&
Practice of Parallel Programming}, April 1991.
William Pugh.
Uniform techniques for loop optimization.
In {\em 1991 International Conference on Supercomputing}, pages
341--352, Cologne, Germany, June 1991.
J. Ramanujam.
Non-unimodular transformations of nested loops.
In {\em Supercomputing '92}, November 1992.
Vivek Sarkar and Radhika Thekkath.
A general framework for iteration-reordering loop transformations.
In {\em ACM SIGPLAN'92 Conference on Programming Language Design and
Implementation}, San Francisco, California, Jun 1992.
Michael E. Wolf and Monica S. Lam.
A data locality optimizing algorithm.
In {\em ACM SIGPLAN'91 Conference on Programming Language Design and
Implementation}, 1991.
Michael E. Wolf and Monica S. Lam.
A loop transformation theory and an algorithm to maximize
parallelism.
In {\em IEEE Transactions on Parallel and Distributed Systems}, July
1991.
From compilers Sat Apr 3 20:11:17 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Wei Li <wei@cs.cornell.EDU>
Subject: Re: Theory on loop transformations
Message-ID: <93-04-016@comp.compilers>
Keywords: theory, optimize, bibliography, FTP
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Cornell University, CS Dept., Ithaca, NY
References: <93-04-006@comp.compilers>
Date: Sat, 3 Apr 1993 18:19:48 GMT
Approved: compilers@iecc.cambridge.ma.us
assmann@karlsruhe.gmd.de (Uwe Assmann) writes:
|> In general, I am interested in a general theory on loop transformations.
We have a matrix-oriented approach to loop transformations that uses
non-singular matrices to represent loop transformations. Non-singular
matrices generalize the unimodular approach (unimodular matrices are a
special case of non-singular matrices in which the determinant is 1 or
-1). Some important transformations such as loop tiling can only be
modeled by non-singular matrices. Furthermore, we provide a completion
algorithm that makes the theory easier to use in practice. In
transformations for parallelism and data locality, it is very useful to
have such a completion algorithm. Our work was presented at the 5th
Compiler Workshop at Yale last year. A journal version is to appear soon
in IJPP.
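As a toy illustration of the distinction (a sketch of mine, not code from
the papers): unimodularity is just a determinant check on the integer
transformation matrix, and applying the transformation maps iteration
vectors through the matrix.

```python
def det2(T):
    """Determinant of a 2x2 integer matrix given as [[a, b], [c, d]]."""
    return T[0][0] * T[1][1] - T[0][1] * T[1][0]

def is_unimodular(T):
    """Unimodular = integer matrix with determinant +1 or -1."""
    return det2(T) in (1, -1)

def apply_to(T, iv):
    """Map an iteration vector (i, j) through the transformation T."""
    i, j = iv
    return (T[0][0] * i + T[0][1] * j,
            T[1][0] * i + T[1][1] * j)

interchange = [[0, 1], [1, 0]]   # swap the two loops: det = -1, unimodular
skew        = [[1, 0], [1, 1]]   # loop skewing:       det = +1, unimodular
scale       = [[2, 0], [0, 1]]   # det = 2: non-singular but NOT unimodular;
                                 # matrices like this are what the
                                 # generalization admits
```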
We have used the non-singular matrix framework to develop optimizations
for data locality in our compiler for NUMA parallel machines. You can
find how the transformation matrix is constructed automatically. The
algorithms are in the paper that appeared in ASPLOS V (ACM SIGPLAN
Notices, Vol 27, Number 9, Sep. 1992).
The papers can also be found via ftp from Cornell (ftp.cs.cornell.edu,
pub/TyphoonCompiler/papers-ps/).
---------------------------------------------------------------------------
file: framework.ps
"A Singular Loop Transformation Framework
Based on Non-singular Matrices"
by Wei Li and Keshav Pingali
---------------------------------------------------------------------------
file: asplos92.ps
"Access Normalization: Loop Restructuring for NUMA Compilers"
by Wei Li and Keshav Pingali
---------------------------------------------------------------------------
file: pnuma.ps
"Loop Transformations for NUMA Machines"
by Wei Li and Keshav Pingali
SIGPLAN Notices, January 1993
-- Wei Li
Department of Computer Science
Cornell University
Ithaca, NY 14853
From compilers Sat Apr 3 20:13:14 EST 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: mehrotra@csrd.uiuc.edu (Sharad Mehrotra)
Subject: Re: Theory on loop transformations
Message-ID: <93-04-017@comp.compilers>
Keywords: optimize, theory
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Illinois, Center for Supercomputing R&D
References: <93-04-006@comp.compilers> <93-04-011@comp.compilers>
Date: Sun, 4 Apr 1993 00:26:53 GMT
Approved: compilers@iecc.cambridge.ma.us
>[re references on theory of loop transformations]
Those are excellent pointers, but unimodular transformations are only one
aspect of the larger problem of automatic program parallelization. If you
are just getting started in the area, you might find the following CSRD
report (also to appear soon in the Proceedings of the IEEE) useful:
Banerjee, U., Eigenmann, R., Nicolau, A., and Padua, D.,
"Automatic Program Parallelization", CSRD TR 1250, November 1992.
The report contains a timely survey of the field and a bibliography with
161 citations.
Many CSRD Tech Reports are available for anonymous ftp from host
sp2.csrd.uiuc.edu (128.174.153.4) in directory CSRD_Info/reports. We'll
try and arrange to put the PostScript for this report there soon. If it's
not available in a few days, email reinhart@csrd.uiuc.edu, and ask for a
paper copy by snail mail.
From compilers Sun Apr 4 22:19:43 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Warner Losh <imp@Boulder.ParcPlace.COM>
Subject: Re: Regexps from shell wildcards
Message-ID: <93-04-018@comp.compilers>
Keywords: lex
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: ParcPlace Boulder
References: <93-04-012@comp.compilers>
Date: Mon, 5 Apr 1993 00:01:56 GMT
Approved: compilers@iecc.cambridge.ma.us
>Is there an algorithm to convert shell-expressions into regular
>expressions? (i.e. generate the string ".*[.]c" from the input "*.c")
It is fairly straightforward to do this conversion for /bin/sh. Just
change '*' to '.*' and quote all the meta characters that have no special
meaning in /bin/sh, but do in the regexp package you are using. However,
if you wanted to do /bin/csh shell expressions, then you'll find that
things like "*.{c,C,H,h,cf}" cause problems and cause the output string
length to grow wildly.
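As a sketch of how mechanical the sh case is: Python's standard library
now bundles exactly this conversion as fnmatch.translate (note that, like
the naive substitution, the result lets '*' match '/', which real shell
pathname globbing does not):

```python
import fnmatch
import re

# fnmatch.translate turns a shell-style pattern into an anchored regular
# expression: '*' becomes a '.*'-style wildcard, '?' a single-character
# wildcard, and other metacharacters are quoted.  Caveat: like fnmatch
# itself, the result lets '*' match '/'.
rx = re.compile(fnmatch.translate("*.c"))
```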
>In the same vein, is there an algorithm to generate case-independent
>regular expressions from nomal ones? (i.e. generate the string
>"[aA][bB][cC][eEfFgG]*" from the input "abc[efg]*")
I think this also falls within the realm of brute-force algorithms. Since
there are only two states (inside and outside of square brackets), a
single pass through the string, copying to a destination string, should do
the trick.
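That single pass might look like this (a Python sketch of my own, handling
only plain character classes -- no '^', no ranges):

```python
def casefold_regex(pattern):
    """Make a simple regex case-independent in one pass.

    Tracks one bit of state: inside or outside a [...] class.  Outside,
    each letter becomes a two-letter class; inside, both cases of each
    letter are emitted.
    """
    out = []
    in_class = False
    for ch in pattern:
        if ch == "[" and not in_class:
            in_class = True
            out.append(ch)
        elif ch == "]" and in_class:
            in_class = False
            out.append(ch)
        elif ch.isalpha():
            if in_class:
                out.append(ch.lower() + ch.upper())
            else:
                out.append("[" + ch.lower() + ch.upper() + "]")
        else:
            out.append(ch)
    return "".join(out)
```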
Warner
From compilers Mon Apr 5 23:18:03 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: kanze@us-es.sel.de (James Kanze)
Subject: Re: Regexps from shell wildcards
Message-ID: <93-04-019@comp.compilers>
Keywords: lex
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-012@comp.compilers> <93-04-018@comp.compilers>
Date: Mon, 5 Apr 1993 11:21:54 GMT
Approved: compilers@iecc.cambridge.ma.us
Warner Losh writes:
|> >Is there an algorithm to convert shell-expressions into regular
|> >expressions? (i.e. generate the string ".*[.]c" from the input "*.c")
|> It is fairly straightforward to do this conversion for /bin/sh. Just
|> change '*' to '.*' and quote all the meta characters that have no special
|> meaning in /bin/sh, but do in the regexp package you are using.
Plus, change '.' to '[.]', if not already in []'s, and '?' to '.'.
(Note that this will require some state to determine whether one is
already in []'s or not.)
|> However, if you wanted to do /bin/csh shell expressions, then you'll find
|> that things like "*.{c,C,H,h,cf}" cause problems and cause the output
|> string length to grow wildly.
What's wrong with "*.{c,C,H,h,cf}" becoming ".*[.](c|C|H|h|cf)"? In
sum, just replace the braces with parentheses and the commas within them
with '|'. The output string doesn't grow at all. (Of course, some older
regexp programs, like grep, can't handle '|'.)
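That substitution is only a few lines; a Python sketch of my own, handling
one level of braces and ignoring quoting:

```python
def csh_braces_to_regex(pattern):
    """Convert a csh-style pattern with one level of braces into an
    (extended) regex fragment: '*' -> '.*', '.' -> '[.]',
    '{a,b}' -> '(a|b)'.  A sketch only: no nesting, no quoting.
    """
    out = []
    in_braces = False
    for ch in pattern:
        if ch == "*":
            out.append(".*")
        elif ch == ".":
            out.append("[.]")
        elif ch == "{":
            in_braces = True
            out.append("(")
        elif ch == "}":
            in_braces = False
            out.append(")")
        elif ch == "," and in_braces:
            out.append("|")       # commas separate alternatives in braces
        else:
            out.append(ch)
    return "".join(out)
```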
--
James Kanze email: kanze@us-es.sel.de
GABI Software, Sarl., 8 rue du Faisan, F-67000 Strasbourg, France
From compilers Mon Apr 5 23:19:50 EDT 1993
Xref: iecc comp.theory:5894 comp.databases:19193 comp.compilers:4478
Newsgroups: comp.theory,comp.databases,comp.compilers
Path: iecc!compilers-sender
From: assmann@karlsruhe.gmd.de (Uwe Assmann)
Subject: Graphs generated by predicates
Message-ID: <93-04-020@comp.compilers>
Followup-To: comp.theory
Keywords: theory, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: GMD Forschungsstelle an der Universitaet Karlsruhe
Date: Mon, 5 Apr 1993 13:28:03 GMT
Approved: compilers@iecc.cambridge.ma.us
I wonder whether there is a classification of graphs with different edge
colors based on the 'generating predicates'.
By this I mean that a graph with different edge colors is described by its
vertices and its relations (which represent the edge colors); the
relations, however, can be described as binary predicates. Consider the
famous 'ancestor example', which describes the transitive closure of the
'son' relation:
ancestor(A,D) :- son(A,D).
ancestor(A,D) :- ancestor(A,A1), son(A1,D).
That means that the ancestor relation (ancestor edges) can be defined in
terms of the son relation, or equivalently the ancestor graph in terms of
the son graph. Now my question is: is there a classification of graphs
that takes into account which forms of predicates generate which forms of
graphs?
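For the concrete ancestor rules above, a least-fixpoint computation in
Python (a sketch, with made-up example data) makes the generation
explicit:

```python
def ancestors(son):
    """Least fixpoint of
        ancestor(A,D) :- son(A,D).
        ancestor(A,D) :- ancestor(A,A1), son(A1,D).
    given the 'son' relation as a set of (parent, child) pairs.
    """
    ancestor = set(son)              # first rule: base case
    changed = True
    while changed:                   # iterate the second rule to a fixpoint
        changed = False
        for (a, a1) in list(ancestor):
            for (x, d) in son:
                if x == a1 and (a, d) not in ancestor:
                    ancestor.add((a, d))
                    changed = True
    return ancestor

son = {("abraham", "isaac"), ("isaac", "jacob")}
```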
--
Uwe Assmann
GMD Forschungsstelle an der Universitaet Karlsruhe
Vincenz-Priessnitz-Str. 1
7500 Karlsruhe GERMANY
Email: assmann@karlsruhe.gmd.de Tel: 0721/662255 Fax: 0721/6622968
From compilers Mon Apr 5 23:20:28 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: macrakis@osf.org (Stavros Macrakis)
Subject: Re: Regexps from shell wildcards
Message-ID: <93-04-021@comp.compilers>
Keywords: lex
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: OSF Research Institute
References: <93-04-012@comp.compilers> <93-04-018@comp.compilers>
Date: Mon, 5 Apr 1993 16:39:07 GMT
Approved: compilers@iecc.cambridge.ma.us
colas@opossum.inria.fr (Colas Nahaboo) asks for an algorithm to convert
shell-expressions into regular expressions.
Warner Losh <imp@Boulder.ParcPlace.COM> answers:
Just change '*' to '.*' and quote all the meta characters...
You also need to "anchor" the beginning and end with "^" and "$", since
shell patterns must match the whole filename, and Unix regular expressions
match any substring.
...to do /bin/csh shell expressions, then you'll find that things like
"*.{c,C,H,h,cf}" cause problems and cause the output string length
to grow wildly.
"^.*\.(c|C|H|h|cf)$" causes no problems. Of course, this requires
full regular expressions (egrep, emacs), not brain-dead subsets (grep,
ex).
-s
From compilers Mon Apr 5 23:21:44 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Paul Robinson <tdarcos@mcimail.com>
Subject: Serendipitious Compiler Stuff
Message-ID: <93-04-022@comp.compilers>
Keywords: tools, FTP
Sender: compilers-sender@iecc.cambridge.ma.us
Reply-To: Paul Robinson <tdarcos@mcimail.com>
Organization: Compilers Central
Date: Mon, 5 Apr 1993 15:35:42 GMT
Approved: compilers@iecc.cambridge.ma.us
The term "serendipity" refers to the finding of something of value when
looking for something else. (Like striking oil while digging for gold.)
While looking for something else on Oak.Oakland.Edu (a Unix mirror site
for wsmr-simtel20.army.mil), I found some other things in the FTP
directories which are compiler-related. The following items are ones I
found of interest (unimportant replies deleted):
% ftp oak.oakland.edu
user anonymous
pass e-mail@address.domain
ftp> cd pub/unix-c/languages
ftp> dir
drwxr-xr-x 2 1716 0 512 Jan 23 1991 ada
drwxr-xr-x 2 1716 0 512 Jan 23 1991 assembler
drwxr-xr-x 2 1716 0 512 Jan 23 1991 basic
drwxr-xr-x 2 1716 0 2560 Jan 23 1991 c
drwxr-xr-x 2 1716 0 512 Apr 28 1992 cplusplus
drwxr-xr-x 2 1716 0 512 Jan 23 1991 forth
drwxr-xr-x 2 1716 0 512 Jan 23 1991 fortran
drwxr-xr-x 2 1716 0 512 Jan 23 1991 fp
drwxr-xr-x 2 1716 0 512 Jan 23 1991 icon
drwxr-xr-x 2 1716 0 512 Jan 23 1991 lisp
drwxr-xr-x 2 1716 0 512 Jan 23 1991 logo
drwxr-xr-x 2 1716 0 512 Jan 23 1991 modula-2
drwxr-xr-x 2 1716 0 512 Jan 23 1991 occam
drwxr-xr-x 2 1716 0 512 Jan 23 1991 ops5
drwxr-xr-x 2 1716 0 512 Jan 23 1991 pascal
drwxr-xr-x 2 1716 0 512 Jan 23 1991 prolog
drwxr-xr-x 2 1716 0 512 Jan 23 1991 smalltalk
drwxr-xr-x 2 1716 0 512 Jan 23 1991 sr
ftp> dir assembler
-rw-r--r-- 1 1716 0 21678 Mar 3 1989 asm80.tar-z
-rw-r--r-- 1 1716 0 22548 Mar 3 1989 cross6502.tar-z
-rw-r--r-- 1 1716 0 29323 Mar 3 1989 cross6809.tar-z
-rw-r--r-- 1 1716 0 13651 Mar 3 1989 dis6502.tar-z
-rw-r--r-- 1 1716 0 38833 Mar 3 1989 dis68000.tar-z
-rw-r--r-- 1 1716 0 60509 Mar 3 1989 dis68k.tar-z
-rw-r--r-- 1 1716 0 36112 Mar 3 1989 dis88.tar-z
-rw-r--r-- 1 1716 0 45217 Mar 3 1989 disasm.tar-z
-rw-r--r-- 1 1716 0 22287 Feb 2 1990 disz80.tar-z
-rw-r--r-- 1 1716 0 40213 Mar 3 1989 genasm.tar-z
-rw-r--r-- 1 1716 0 29053 Mar 3 1989 hp41.tar-z
-rw-r--r-- 1 1716 0 48624 Mar 3 1989 zmac.tar-z
ftp> dir basic
-rw-r--r-- 1 1716 0 111041 Mar 3 1989 basic.tar-z
ftp> dir pascal
-rw-r--r-- 1 1716 0 9437 Mar 3 1989 iso-pascal.tar-z
-rw-r--r-- 1 1716 0 18332 Mar 3 1989 karel.tar-z
-rw-r--r-- 1 1716 0 552692 May 17 1990 p2c.tar-z
-rw-r--r-- 1 1716 0 16479 Mar 3 1989 pstrings.tar-z
-rw-r--r-- 1 1716 0 185937 Mar 3 1989 ptoc.tar-z
-rw-r--r-- 1 1716 0 69213 Mar 3 1989 software-tools.tar-z
-rw-r--r-- 1 1716 0 43187 Mar 3 1989 turbo-tools.tar-z
ftp> dir fortran
-rw-r--r-- 1 1716 0 414721 Oct 30 1990 f2c.tar-z
-rw-r--r-- 1 1716 0 203613 May 17 1990 floppy.tar-z
-rw-r--r-- 1 1716 0 9978 Mar 3 1989 fxref.tar-z
-rw-r--r-- 1 1716 0 49162 Mar 3 1989 prep.tar-z
-rw-r--r-- 1 1716 0 34723 Aug 30 1990 psdraw.tar-z
-rw-r--r-- 1 1716 0 22888 Mar 3 1989 ratfor.tar-z
The stuff in the "C" directory appears to mostly be libraries for C
programs. The "p2c.tar-z" and "f2c.tar-z" files are the Pascal to C and
Fortran to C programs. I have picked up the Basic one (basic.tar-z) and
it claims to be a public domain version of DEC's MU-Basic with Microsoft
Basic mixed together. (MU Basic was written in PDP-11 assembler; I've
seen it.)
So some of these files may be of interest to people wanting to understand
how compilers work.
Note that these files end in ".z", NOT ".Z", so you need gzip (GNU zip) to
decompress them, NOT compress. Note this is not the ZIP archive format;
gzip is the GNU replacement for compress, but it creates files in a
different format (it will also extract compress's ".Z" files, or so it
claims).
-----
Paul Robinson -- TDARCOS@MCIMAIL.COM
From compilers Mon Apr 5 23:24:03 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Todd Hodes <tdh6t@helga3.acc.virginia.edu>
Subject: Re: Wanted: Regular Expression -> Finite Automata C code =-
Message-ID: <93-04-023@comp.compilers>
Keywords: lex, question, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Virginia Computer Science Department
Date: Mon, 5 Apr 1993 17:05:45 GMT
Approved: compilers@iecc.cambridge.ma.us
I wanted to use the code in Sedgewick's 'Algorithms in C' book,
but found the following bug:
When it parses REs of the form a(b+c), i.e. a terminal concatenated
with a union clause, the output is as follows:
State : 0 1 2 3 4 5 6
Char : a b c -
Next1 : 1 2 3 6 5 6 0
Next2 : 1 2 3 6 2 6 0
Weeeeell, this ain't quite right. State #1 should go on an input
of "a" to state #4 in addition to or instead of state #2.
Anyone with an idea how to salvage his code or new code would be quite
a savior. This is for a technical report to be given to the world before
summer. It is a teaching tool implementing Hopcroft and Ullman's RE ->
NFA-w-epsilons -> NFA -> DFA "circle of equivalence" transforms with a
graphical interface in SUIT under X. Everything works except Sedgewick's
code, and I dread rewriting it, figuring that if Sedgewick got it wrong,
it must be HARD! (I haven't even found his bug yet.) I already tried
writing it once (iteratively, even). Figures that the only code I steal
doesn't work. :)
I've already had the following ideas tossed at me:
1) Use Henry Spencer's regexp [not exactly what I need -
just union, concatenation and closure is fine]
2) Pillage source from Unix utilities (flex, lex, grep)
[ughh -- haven't tried this yet... again, full Unix REs
are too much]
3) Give up ;>
(Thanks to Jonathan A. Chandross <jac@yoko.rutgers.edu> for the pointer to
this group and some additional info about how bugs have been found in the
code before and posted here.)
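Since full Unix REs aren't needed, a from-scratch Thompson construction
covering just union ('+', as in Sedgewick), concatenation, closure, and
parentheses is compact. A Python sketch of my own (not Sedgewick's code),
with an epsilon-NFA simulator to check it:

```python
def regex_to_nfa(pattern):
    """Thompson-style construction for regexes over literals with
    '+' (union), concatenation, '*' (closure), and parentheses.
    Returns (start, accept, trans) where trans maps a state to a list
    of (symbol, next_state); symbol None is an epsilon move.
    """
    trans = {}
    counter = [0]
    pos = [0]

    def new_state():
        counter[0] += 1
        trans.setdefault(counter[0], [])
        return counter[0]

    def add(s, sym, t):
        trans[s].append((sym, t))

    def peek():
        return pattern[pos[0]] if pos[0] < len(pattern) else None

    def eat(ch):
        assert peek() == ch
        pos[0] += 1

    def parse_expr():                      # expr := term ('+' term)*
        frags = [parse_term()]
        while peek() == "+":
            eat("+")
            frags.append(parse_term())
        if len(frags) == 1:
            return frags[0]
        s, a = new_state(), new_state()    # one fork/join around all branches
        for fs, fa in frags:
            add(s, None, fs)
            add(fa, None, a)
        return (s, a)

    def parse_term():                      # term := factor factor ...
        s, a = parse_factor()
        while peek() not in (None, "+", ")"):
            s2, a2 = parse_factor()
            add(a, None, s2)               # concatenate with an epsilon move
            a = a2
        return (s, a)

    def parse_factor():                    # factor := base '*'*
        if peek() == "(":
            eat("(")
            s, a = parse_expr()
            eat(")")
        else:
            s, a = new_state(), new_state()
            add(s, pattern[pos[0]], a)     # single literal
            pos[0] += 1
        while peek() == "*":
            eat("*")
            s2, a2 = new_state(), new_state()
            add(s2, None, s)               # enter the loop
            add(s2, None, a2)              # or skip it entirely
            add(a, None, s)                # repeat
            add(a, None, a2)               # or leave
            s, a = s2, a2
        return (s, a)

    start, accept = parse_expr()
    return start, accept, trans

def nfa_match(nfa, text):
    """Simulate the epsilon-NFA; True iff the whole text matches."""
    start, accept, trans = nfa

    def closure(states):
        stack, seen = list(states), set(states)
        while stack:
            s = stack.pop()
            for sym, t in trans[s]:
                if sym is None and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    current = closure({start})
    for ch in text:
        current = closure({t for s in current
                           for sym, t in trans[s] if sym == ch})
    return accept in current
```

On the troublesome case from the post, regex_to_nfa("a(b+c)") accepts
"ab" and "ac" and rejects "a".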
Thanks,
T.
--
Todd Hodes, tdh6t@virginia.edu
[This came up last month, but without a whole lot of helpful suggestions.
-John]
From compilers Mon Apr 5 23:27:19 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: "Paul Purdom" <pwp@cs.indiana.edu>
Subject: Semantic Actions and LR(k)
Message-ID: <93-04-024@comp.compilers>
Keywords: LR(1), parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Computer Science, Indiana University
References: <93-04-008@comp.compilers> <93-04-010@comp.compilers>
Date: Mon, 5 Apr 1993 17:45:48 GMT
Approved: compilers@iecc.cambridge.ma.us
I hear that there has been some discussion of LR(k) parsing and semantic
action routines concerning the question of where one can place calls to
the routines without causing parsing difficulties.
People interested in this question may wish to see the article Semantic
Routines and LR(k) Parsers by Cynthia Brown and myself, Acta Informatica
14 (1980) p299-315.
One might wish to call a routine before the first symbol of the right side
of a production (thereby simulating LL(k) with an LR(k) parser), or after
the i-th symbol for any i up to and including the length of the production.
The above paper shows that each position is either:
1. Forbidden, the grammar is no longer parseable (with an LR(k) parser) if a
routine is called there. The only forbidden positions are position
zero of left-recursive productions.
2. Contingent, the grammar is parseable provided the same routine is called
at certain other positions in the grammar.
3. Free, the grammar is parseable whether or not the same routine is called
at other positions.
The paper has an algorithm for classifying positions.
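The 'forbidden' class, at least, is easy to check mechanically. A Python
sketch of mine (not the Brown/Purdom algorithm; it ignores nullable
leading symbols, so it only considers the first symbol of each right-hand
side) that flags left-recursive nonterminals:

```python
def left_recursive(grammar):
    """Nonterminals that are left-recursive, directly or indirectly.

    grammar maps a nonterminal to a list of right-hand sides (tuples of
    symbols).  Position zero of any production of such a nonterminal is
    a 'forbidden' spot for a semantic action in the sense above.
    """
    # edge A -> B if some production A -> B alpha starts with nonterminal B
    begins = {a: {rhs[0] for rhs in rhss if rhs and rhs[0] in grammar}
              for a, rhss in grammar.items()}
    result = set()
    for a in grammar:
        # depth-first search for a path from A's leading symbols back to A
        stack, seen = list(begins[a]), set()
        while stack:
            b = stack.pop()
            if b == a:
                result.add(a)
                break
            if b not in seen:
                seen.add(b)
                stack.extend(begins[b])
    return result

grammar = {
    "expr":   [("expr", "+", "term"), ("term",)],   # directly left recursive
    "term":   [("factor", "*", "term"), ("factor",)],
    "factor": [("(", "expr", ")"), ("id",)],
}
```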
A more refined analysis has been done by Michael R. Anderson. The last I
knew (1992), his email address was mra@opal.idbsu.edu.
From compilers Mon Apr 5 23:28:23 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: mauney@csljon.csl.ncsu.edu (Jon Mauney)
Subject: Re: Semantic actions in LR parser
Message-ID: <93-04-025@comp.compilers>
Keywords: LR(1), parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: NCSU
References: <93-04-008@comp.compilers> <93-04-014@comp.compilers>
Date: Mon, 5 Apr 1993 17:39:52 GMT
Approved: compilers@iecc.cambridge.ma.us
roy@prism.gatech.edu (Roy Mongiovi) writes:
>LR parsers can only perform semantic actions when they recognize a handle
>... you can stick in epsilon productions whose only purpose is to cause
>semantic actions.
karsten@tfl.dk (Karsten Nyblad) writes:
>... So, you can allow for actions to be executed when you push a terminal or
>nonterminal and that brings you to a state that has only one item in kernel.
>Even that can be generalized. ...
These are the same thing. Creating a nonterminal, with an epsilon
production, for each action, and inserting them at the places Nyblad
suggests will not cause parse conflicts (and inserting them in places he
implicitly disallows will cause conflicts). It is simply a matter of
syntactic sugar whether the tool does this for you, or makes you do it
manually. (There is also the question of whether the tool can recongnize
that two actions are identical.)
--
Jon Mauney mauney@csc.ncsu.edu
Mauney Computer Consulting (919)828-8053
From compilers Mon Apr 5 23:29:53 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: henry@zoo.toronto.edu (Henry Spencer)
Subject: Re: Regexps from shell wildcards
Message-ID: <93-04-026@comp.compilers>
Keywords: lex
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: U of Toronto Zoology
References: <93-04-012@comp.compilers>
Date: Mon, 5 Apr 1993 20:57:57 GMT
Approved: compilers@iecc.cambridge.ma.us
colas@opossum.inria.fr (Colas Nahaboo) writes:
>Is there an algorithm to convert shell-expressions into regular
>expressions? (i.e. generate the string ".*[.]c" from the input "*.c")
The mapping is fairly trivial, but depends on the exact shell syntax you
are interested in. In general, all the constructs are present in both
forms, and you can just map construct-by-construct, but you have to watch
details. For example, mapping shell "*" to regexp ".*" is wrong, because
shell "*" does not match "/". If you write down the exact rules for the
shell syntax you're using, transforming it to regular expressions is
typically easy.
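To make the construct-by-construct idea concrete, here is a minimal sketch
(my own Python, not from the post) that handles only literals, '*', and
'?', and applies the rule that shell wildcards must not match '/':

```python
import re

def glob_to_regex(pat):
    """Translate a simple sh-style glob, construct by construct.
    Only literals, '*', and '?' are handled; bracket expressions and
    the leading-dot rule are left out of this sketch."""
    out = []
    for ch in pat:
        if ch == '*':
            out.append('[^/]*')        # shell '*' never matches '/'
        elif ch == '?':
            out.append('[^/]')
        else:
            out.append(re.escape(ch))  # everything else is literal
    return '^' + ''.join(out) + '$'

# '*.c' matches 'main.c' but, unlike the naive '.*' mapping, not 'src/main.c'
pat = re.compile(glob_to_regex('*.c'))
```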
>In the same vein, is there an algorithm to generate case-independent
>regular expressions from normal ones? (i.e. generate the string
>"[aA][bB][cC][eEfFgG]*" from the input "abc[efg]*")
Again, the real question is defining what you mean by "case-independent
regular expression". It's not trivial; does [^x]y match Xy? As I recall,
those of us on the POSIX.2 regular-expressions working group noticed this
question too late, and the standard as shipped will be rather vague on the
subject. Our informal conclusion, which we hope will make it into an
eventual tidying-up of the standard, was that the right way for
case-independent regular expressions to act is based on a model in which
case distinctions vanish from the alphabet. You can't take that too
literally or complexities arise, but it's a good guide. So no,
case-independent [^x]y does not match Xy, because the [^x] covers all
kinds of X's, be they uppercase or lowercase.
Again, once you have defined what you're talking about, implementation
is easy. For the case-distinctions-vanish model, any literal letter x
becomes [xX], and the contents of a bracket expression [xyz] are augmented
with any case counterparts of the things in it, giving [xyzXYZ]. The hard
thing to do in a portable way, actually, is to find out which characters
have case counterparts and what they are.
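The case-distinctions-vanish rewriting can be sketched like this (my own
Python, restricted to literals and simple bracket expressions; a real
implementation would need a full regex parser and locale-aware case
mapping):

```python
def casefold_regex(pat):
    """Rewrite a restricted regex so case distinctions vanish from the
    alphabet: each literal letter x becomes [xX], and the letters inside
    a bracket expression gain their case counterparts."""
    out, i = [], 0
    while i < len(pat):
        c = pat[i]
        if c == '[':                   # augment a [...] bracket expression
            j = pat.index(']', i + 1)
            inner = pat[i + 1:j]
            extra = ''.join(ch.swapcase() for ch in inner
                            if ch.isalpha() and ch.swapcase() not in inner)
            out.append('[' + inner + extra + ']')
            i = j + 1
        elif c.isalpha():              # literal letter x -> [xX]
            out.append('[' + c + c.swapcase() + ']')
            i += 1
        else:                          # metacharacters pass through
            out.append(c)
            i += 1
    return ''.join(out)

# 'abc[efg]*' -> '[aA][bB][cC][efgEFG]*'
# '[^x]y'     -> '[^xX][yY]', so it does not match 'Xy', as argued above.
```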
--
Henry Spencer @ U of Toronto Zoology, henry@zoo.toronto.edu utzoo!henry
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 6 10:27:01 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: assmann@karlsruhe.gmd.de (Uwe Assmann)
Subject: SUMMARY: Loop transformations with unimodular matrices
Message-ID: <93-04-027@comp.compilers>
Keywords: optimize, summary, theory
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: GMD Forschungsstelle an der Universitaet Karlsruhe
References: <93-04-020@comp.compilers>
Date: Tue, 6 Apr 1993 10:48:59 GMT
Approved: compilers@iecc.cambridge.ma.us
Here comes a summary of answers concerning the description of loop
transformations with unimodular matrices. Thanks to all who have
responded.
------------------------------------------------------------------------------
From: pugh@cs.umd.edu (Bill Pugh)
Actually, it was M. E. Wolf, not M. Wolfe (confusing, isn't it?). Here
are a couple of references on the subject.
U. Banerjee.
Unimodular transformations of double loops.
In {\em Proc. of the 3rd Workshop on Programming Languages and
Compilers for Parallel Computing}, Irvine, CA, August 1990.
Paul Feautrier.
Some efficient solutions to the affine scheduling problem, part i,
one-dimensional time.
Technical Report 92.28, IBP/MASI, April 1992.
Paul Feautrier.
Some efficient solutions to the affine scheduling problem, part ii,
multi-dimensional time.
Technical Report 92.78, IBP/MASI, Oct 1992.
Wayne Kelly and William Pugh,
Generating Schedules and Code within a
Unified Reordering Transformation Framework,
Technical Report CS-TR-2995,
Dept. of Computer Science, University of Maryland, College Park,
November, 1992
K. G. Kumar, D. Kulkarni, and A. Basu.
Deriving good transformations for mapping nested loops on hierarchical
parallel machines in polynomial time.
In {\em Proc. of the 1992 International Conference on
Supercomputing}, July 1992.
Wei Li and Keshav Pingali.
A singular loop transformation framework based on non-singular
matrices.
In {\em 5th Workshop on Languages and Compilers for Parallel
Computing}, Yale University, August 1992.
Lee-Chung Lu.
A unified framework for systematic loop transformations.
In {\em Proceedings of Third ACM SIGPLAN Symp. on the Principles \&
Practice of Parallel Programming}, April 1991.
William Pugh.
Uniform techniques for loop optimization.
In {\em 1991 International Conference on Supercomputing}, pages
341--352, Cologne, Germany, June 1991.
J. Ramanujam.
Non-unimodular transformations of nested loops.
In {\em Supercomputing `92}, November 1992.
Vivek Sarkar and Radhika Thekkath.
A general framework for iteration-reordering loop transformations.
In {\em ACM SIGPLAN'92 Conference on Programming Language Design and
Implementation}, San Francisco, California, Jun 1992.
Michael E. Wolf and Monica S. Lam.
A data locality optimizing algorithm.
In {\em ACM SIGPLAN'91 Conference on Programming Language Design and
Implementation}, 1991.
Michael E. Wolf and Monica S. Lam.
A loop transformation theory and an algorithm to maximize
parallelism.
In {\em IEEE Transactions on Parallel and Distributed Systems}, July
1991.
------------------------------------------------------------------------------
From: wak@cs.UMD.EDU (Wayne Kelly)
@inproceedings{Ban90,
author = "U. Banerjee",
title = "Unimodular Transformations of Double Loops",
booktitle = "Proc. of the 3rd Workshop on Programming Languages and
Compilers for Parallel Computing",
month = aug,
year = 1990,
address = "Irvine, CA" }
@INPROCEEDINGS{WL91,
author = {Michael E. Wolf and Monica S. Lam},
title = {A Data Locality Optimizing Algorithm},
booktitle = {ACM SIGPLAN'91 Conference on Programming Language
Design and Implementation},
year = 1991
}
@INPROCEEDINGS{WL91b,
author = {Michael E. Wolf and Monica S. Lam},
title = {A loop transformation theory and an algorithm to maximize
parallelism},
booktitle = {IEEE Transactions on Parallel and Distributed Systems},
month = {July},
year = 1991
}
Unimodular transformations form a unified framework able to describe
any transformation that can be obtained by composing loop
interchange, loop skewing, and loop reversal. Unfortunately, unimodular
transformations are limited in two ways: they can only be applied to
perfectly nested loops, and all statements in the loop nest are
transformed in the same way. They therefore cannot represent some
important transformations such as loop fusion, loop distribution, and
statement reordering.
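As an illustrative sketch (my own Python, not code from any of the cited
frameworks): each elementary transformation of a 2-deep perfect nest is an
integer matrix with determinant +1 or -1, and composing transformations is
matrix multiplication, so a whole sequence collapses into a single
unimodular matrix applied to each iteration vector.

```python
# Elementary unimodular transformations of a 2-deep perfect loop nest,
# written as 2x2 integer matrices acting on the iteration vector (i, j).
interchange = [[0, 1], [1, 0]]   # (i, j) -> (j, i)
reversal    = [[1, 0], [0, -1]]  # (i, j) -> (i, -j)
skew        = [[1, 0], [1, 1]]   # (i, j) -> (i, i + j)

def matmul(a, b):
    """2x2 integer matrix product."""
    return [[sum(a[r][k] * b[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def apply(t, i, j):
    """Map iteration (i, j) to its transformed coordinates."""
    return (t[0][0] * i + t[0][1] * j, t[1][0] * i + t[1][1] * j)

# Interchange followed by skewing collapses into one matrix, and the
# composition is still unimodular (|det| = 1), so iteration counts
# are preserved.
t = matmul(skew, interchange)
assert abs(det(t)) == 1
```

Dropping the determinant restriction to arbitrary non-singular integer
matrices no longer preserves the iteration count, which is why
transformations such as tiling need the more general framework.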
We have developed a framework that generalizes unimodular transformations.
Our framework can represent a much broader set of reordering
transformations, including any transformation that can be obtained from
some combination of: loop interchange, loop reversal, loop skewing,
statement reordering, loop distribution, loop fusion, loop scaling, loop
alignment, index set splitting, loop blocking, loop interleaving, loop
coalescing. I would be happy to send you a copy of our paper if you are
interested.
---------------------------------------------------------------------------
From: wei@cs.cornell.EDU (Wei Li)
We have a matrix-oriented approach to loop transformations that uses
non-singular matrices to represent loop transformations. Non-singular
matrices generalize the unimodular approach (unimodular matrices are a
special case of non-singular matrices in which the determinant is 1 or
-1). Some important transformations such as loop tiling can only be
modeled by non-singular matrices. Furthermore, we provide a completion
algorithm that makes the theory easier to use in practice. In
transformations for parallelism and data locality, it is very useful to
have such a completion algorithm. Our work was presented at the 5th
Compiler Workshop at Yale last year. A journal version is to appear soon
in IJPP.
We have used the non-singular matrix framework to develop optimizations
for data locality in our compiler for NUMA parallel machines. You can
find how the transformation matrix is constructed automatically. The
algorithms are in the paper that appeared in ASPLOS V (ACM SIGPLAN
Notices, Vol 27, Number 9, Sep. 1992).
The papers can also be found via ftp from Cornell (ftp.cs.cornell.edu,
pub/TyphoonCompiler/papers-ps/).
file: framework.ps
"A Singular Loop Transformation Framework
Based on Non-singular Matrices"
by Wei Li and Keshav Pingali
file: asplos92.ps
"Access Normalization: Loop Restructuring for NUMA Compilers"
by Wei Li and Keshav Pingali
file: pnuma.ps
"Loop Transformations for NUMA Machines"
by Wei Li and Keshav Pingali
SIGPLAN Notices, January 1993
-- Wei Li
Department of Computer Science
Cornell University
Ithaca, NY 14853
---------------------------------------------------------------------------
From: mehrotra@csrd.uiuc.edu (Sharad Mehrotra)
Those are excellent pointers, but unimodular transformations are only one
aspect of the larger problem of automatic program parallelization. If you
are just getting started in the area, you might find the following CSRD
report (also to appear soon in the Proceedings of the IEEE) useful:
Banerjee, U., Eigenman, R., Nicolau, A., and Padua, D.,
"Automatic Program Parallelization", CSRD TR 1250, November 1992.
The report contains a timely survey of the field and a bibliography with
161 citations.
Many CSRD Tech Reports are available for anonymous ftp from host
sp2.csrd.uiuc.edu (128.174.153.4) in directory CSRD_Info/reports. We'll
try and arrange to put the PostScript for this report there soon. If it's
not available in a few days, email reinhart@csrd.uiuc.edu, and ask for a
paper copy by snail mail.
---------------------------------------------------------------------------
From: dsehr@gomez.intel.com (David Sehr)
A colleague here at Intel has been working on exactly that problem for
several years. His name is Utpal Banerjee, and he has recently published
a book describing the approach you mention (using unimodular matrices for
dependence testing and transformation). The full info:
Loop Transformations for Restructuring Compilers: the Foundations
Utpal Banerjee
Kluwer Academic Publishers
ISBN: 0-7923-9318-X
Other possibilities to pursue are the recent papers of William Pugh of the
University of Maryland (pugh@cs.umd.edu), and Paul Feautrier of the
University of P. et M. Curie (feautrier@masi.ibp.fr).
David C. Sehr, Intel Corporation
2200 Mission College Blvd., M/S RN6-18
Santa Clara, CA 95052-8119
---------------------------------------------------------------------------
From: paik@mlo.dec.com (Samuel S. Paik)
Access Normalization: Loop Restructuring for NUMA Compilers. Wei Li,
Keshav Pingali. Proceedings of the Fifth International Conference on
Architectural Support for Programming Languages and Operating Systems,
SIGPLAN Notices, Vol. 27, No. 9, Sept 1992, pp. 285-295. ACM.
Generalizes Banerjee's work on unimodular matrices for modeling
loop transformations to invertible matrices, and applies this to
restructuring loops for NUMA multiprocessors. Also available as a
Cornell University CS technical report.
---------------------------------------------------------------------------
From: Francois IRIGOIN <irigoin@cri.ensmp.fr>
Some loop transformations are equivalent to a change of basis. To preserve
the number of iterations, the change of basis matrix has to be unimodular.
This idea has been around implicitly for at least 6 years. Utpal Banerjee
has a paper on the subject. He's also written a book. Perhaps, you should
get in touch with him: <banerjee@csrd.uiuc.edu>.
Non-unimodular matrices are useful for tiling transformations.
Some loop transformations cannot be put in this framework: loop
distribution and loop alignment are good examples.
I'd like to know why you are interested in this subject. We've been
active in this field for many years but did not really manage to get in
touch with people in Germany, except the SUPERB team, Hans Zima/Michael
Gerndt.
Also, there is a workshop in June in Germany (Dagstuhl) about scheduling.
Simple schedules lead to matrix transformations; complex ones cover the
other loop transformations.
Francois Irigoin tel. +33 1 64 69 48 48
Centre de Recherche en Informatique fax. +33 1 64 69 47 09
Ecole des Mines de Paris e-mail: irigoin@cri.ensmp.fr
35 rue Saint Honore irigoin@fremp11.bitnet
77300 FONTAINEBLEAU
FRANCE
------------------------------------------------------------------------------
From: jrbd@craycos.com (James Davies)
I have a paper here for ASPLOS V (1992), published by ACM, which seems
related; it's called "Access Normalization: Loop restructuring for NUMA
Compilers", by Wei Li and Keshav Pingali of Cornell University, and
discusses using invertible matrices to model loop transformations. They
refer to a paper by Utpal Banerjee in Proceedings of the Workshop on
Advances in Languages and Compilers for Parallel Processing, August 1990,
called "Unimodular Transformations of Double Loops", which is supposed to
have also used matrices to model loop transforms. (I don't know who
published this workshop, all I have is the reference in the Li-Pingali
paper.)
------------------------------------------------------------------------------
From: vadik@cs.UMD.EDU (Vadim Maslov)
Ask Dr. Pugh (pugh@cs.umd.edu) and/or Mr. Kelly (wak@cs.umd.edu) from
University of Maryland. They have a theory that goes beyond unimodular
transformations. There are a couple of papers on it which are
electronically available.
------------------------------------------------------------------------------
From: Joe Hummel <jhummel@esp.ICS.UCI.EDU>
Actually, I think it was Utpal Banerjee with his work on unimodular
transformations. His new book, which is just being published, should have
lots of info on this. Utpal also has a paper, I think it appeared in the
last year or two in the IEEE Trans on Parallel and Distributed Computing.
------------------------------------------------------------------------------
From: Lode Nachtergaele <nachterg@imec.be>
M.E. Wolf, M. Lam, "A loop transformation theory and an algorithm to
maximize parallelism", IEEE transactions on parallel and distributed
systems, Vol.2, October 1991
Look also to the work going on at the university of Maryland. The papers
and reports can be ftp'ed from : ftp.cs.umd.edu
Lode Nachtergaele
IMEC V.Z.W.
Kapeldreef 75
3001 Heverlee
Belgium
Phone : +32 (0)16 28.15.12
E-mail: nachterg@imec.be
------------------------------------------------------------------------------
Yep. I found it in ACM SIGPLAN'91 Conference: p.30-45,
authors Michael E. Wolf (not Wolfe) and Monica S. Lam (Stanford Uni.)
--
Jan Jongejan email: jjan@cs.rug.nl
Dept. Comp.Sci.,
Univ. of Groningen,
Netherlands.
--
Uwe Assmann
GMD Forschungsstelle an der Universitaet Karlsruhe
Vincenz-Priessnitz-Str. 1
7500 Karlsruhe GERMANY
Email: assmann@karlsruhe.gmd.de Tel: 0721/662255 Fax: 0721/6622968
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 6 14:52:06 EDT 1993
Xref: iecc comp.arch:27674 comp.compilers:4486 comp.object:9795
Newsgroups: comp.arch,comp.compilers,comp.object
Path: iecc!compilers-sender
From: "Steven A. Moyer" <sam2y@server.cs.virginia.edu>
Subject: Utilization of Non-caching Access Instructions
Message-ID: <93-04-028@comp.compilers>
Originator: sam2y@koa.cs.Virginia.EDU
Keywords: optimize, architecture, GC, report, FTP
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Virginia Computer Science Department
References: <C4ppA5.BLx.1@cs.cmu.edu> <93-04-013@comp.compilers>
Date: Tue, 6 Apr 1993 14:22:32 GMT
Approved: compilers@iecc.cambridge.ma.us
In following up a thread on the utilization of non-caching load
instructions (a la the i860) for implementing GC algorithms, I discussed a
general optimization for increasing effective memory bandwidth that
utilized such an instruction. The tech reports I cited contained some
older work, and I received many requests to make available the newer,
recently completed dissertation text.
I have made the complete text a technical report and have placed it in an
anonymous ftp directory located at uvacs.cs.virginia.edu. The report is
the compressed postscript file:
pub/techreports/CS-93-18.ps.Z
I hope this information proves useful; comments are certainly welcome.
And yes, I've learned my lesson about posting references to older material
;-)
Steve
-------------------------------------------------------------------------
Abstract:
Access Ordering and Effective Memory Bandwidth
High-performance scalar processors are characterized by multiple pipelined
functional units that can be initiated simultaneously to exploit
instruction level parallelism. For scientific codes, the performance of
these processors depends heavily on memory bandwidth. To achieve peak
processor rate, data must be supplied to the arithmetic units at the peak
aggregate rate of consumption.
Access ordering, a loop optimization that reorders non-caching accesses to
better utilize memory system resources, is a compiler technology that
addresses the memory bandwidth problem for scalar processors executing
scientific codes. For a given computation, memory architecture, and
memory device type, an access ordering algorithm determines a well-defined
interleaving of vector references that maximizes effective bandwidth.
Consequently, analytic models of performance can also be derived.
Access ordering is fundamentally different from, though complementary to,
both caching and access scheduling techniques that attempt to overlap
computation with memory latency. Simulation results demonstrate that
for a given computation, access ordering can significantly increase
effective bandwidth over that achieved by the natural reference sequence.
--
Steve Moyer
Computer Science Department
University of Virginia
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 6 17:25:27 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: clark@zk3.dec.com (Chris Clark USSG)
Subject: Semantic actions in LR parser
Message-ID: <93-04-029@comp.compilers>
Keywords: LR(1), parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-008@comp.compilers> <93-04-010@comp.compilers>
Date: Tue, 6 Apr 1993 15:26:32 GMT
Approved: compilers@iecc.cambridge.ma.us
It was asked:
> I am just wondering if we have to put all semantic actions on the tail
> parts of some productions for LR grammar instead of any positions in LL
> grammar?
Most LR parsers (e.g., yacc) introduce pseudo productions only because it
is convenient to do so. It simplifies the LR engine by allowing it to
fold the action selection into the reduction code. It's actually a fairly
clever hack to recognize that you can simulate shift actions by
introducing pseudo productions. A similar hack gives you a "poor person's
ELR parser" allowing regular expressions on the RHS. However, in both
cases, I feel there is a better way, which we used in our product,
Yacc++(R) and the Language Objects Library.
In Yacc++, we model our LR engine as an abstract machine, and the
generator outputs "assembly language" for it. Doing so makes it
straightforward to put actions with shifts as well as with reduces.
Essentially, as you are building your dotted items for a state, some of
them may include actions. You just encode those actions as part of your
shift transitions. If you model it right, it is fairly simple. When you
build the dotted productions for a state, you are essentially simulating
running a bunch of LL parsers in parallel. Thus, you can do anything in
an LR parser that you can in an LL parser, except that it's difficult to
program in recursive descent--because your programming language probably
does not allow running a bunch of routines in parallel and selecting the
one which succeeds. (I guess recursive descent in Prolog would work fine,
aside from the backtracking overhead.)
One curious feature which falls out is that if you can detect that two
actions are the same, you can execute the action even if the eventual
reduces are parts of two distinct productions. That allows you to defer
(and eliminate) some conflicts. (In our model, two actions are the same if
they are character for character the same string.) Eliminating the pseudo
productions also means smaller state tables.
I hope this helps.
Disclaimer, signature, et al.
Chris Clark
I am biased in favor of parser generators and work for,
Compiler Resources, Inc.
3 Proctor St.
Hopkinton, MA 01748
(508) 435-5016 fax: (508) 435-4847
For a technical literature packet (including a price list) send email
to: bz%compres@primerd.prime.com
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 6 18:15:54 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: keerthi@leland.Stanford.EDU (Keerthi Angammana)
Subject: Looking for a MATLAB parser
Message-ID: <93-04-030@comp.compilers>
Keywords: parse, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: DSG, Stanford University, CA 94305, USA
Date: Tue, 6 Apr 1993 15:32:12 GMT
Approved: compilers@iecc.cambridge.ma.us
Hi,
Does anybody know if a parser (or at least a complete grammar) for
MATLAB is available for free someplace?
thanks
-keerthi
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 6 18:17:01 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Wei Li <wei@cs.cornell.EDU>
Subject: Re: Theory on loop transformations
Message-ID: <93-04-031@comp.compilers>
Keywords: theory, optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Cornell University, CS Dept., Ithaca, NY
References: <93-04-006@comp.compilers>
Date: Tue, 6 Apr 1993 16:25:16 GMT
Approved: compilers@iecc.cambridge.ma.us
assmann@karlsruhe.gmd.de (Uwe Assmann) writes:
|> I remember that ... Whenever a transformation is performed this amounts to
|> a matrix operation over the loop.
I find the matrix approach easy to use, and have successfully used it for
improving data locality on parallel machines with memory hierarchies such
as the BBN Butterfly and KSR1. All you need to do is to construct one
matrix that represents the transformation.
My question to folks who have implemented loop transformations:
if you have done it in the traditional way, i.e. a sequence of loop
transformations as opposed to the matrix approach, what is your experience
with coming up with the right sequence of transformations to apply?
We can start with a small set of transformations such as loop interchange,
loop skewing, loop distribution/jamming etc.
Thanks.
--
Wei Li
Department of Computer Science
Cornell University
Ithaca, NY 14853
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 6 18:17:50 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: bourne-linda@CS.YALE.EDU (Linda Bourne)
Subject: Alan J. Perlis Symposium
Message-ID: <93-04-032@comp.compilers>
Keywords: conference
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Yale CS Mail/News Gateway
Date: Tue, 6 Apr 1993 10:20:14 GMT
Approved: compilers@iecc.cambridge.ma.us
PROGRAMMING LANGUAGES:
THE NEXT GENERATION
Alan J. Perlis Symposium
Sponsored by the
Department of Computer Science
Yale University
April 29, 1993
9:45 Opening Remarks
Drew McDermott
Chairman
10:00 Languages for Multi-levelled Computers
Used as Models, Tools, and Toys
Peter Naur
University of Copenhagen
11:00 Concurrent Logic Programming:
Past, Present, and Future
Ehud Shapiro
Weizmann Institute of Science
1:30 Object-Oriented Programming and C++
Bjarne Stroustrup
AT&T Bell Laboratories
2:30 Total Functional Programming
David Turner
University of Kent
4:00 Panel Discussion
Following the panel discussion a public reception will be held at the
Department of Computer Science, Arthur K. Watson Hall.
All talks are free and open to the public.
Yale School of Organization and Management
135 Prospect Street
Room B74
Corner Sachem and Prospects Streets
New Haven, Connecticut
For information please call (203) 432-1246
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 7 09:03:32 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: gnb@leo.bby.com.au (Gregory N. Bond)
Subject: Re: Regexps from shell wildcards
Message-ID: <93-04-033@comp.compilers>
Keywords: lex
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Burdett, Buckeridge & Young, Melbourne, Australia
References: <93-04-012@comp.compilers> <93-04-018@comp.compilers>
Date: Tue, 6 Apr 1993 23:23:09 GMT
Approved: compilers@iecc.cambridge.ma.us
Warner Losh <imp@Boulder.ParcPlace.COM> writes:
if you wanted to do /bin/csh shell expressions, then you'll find that
things like "*.{c,C,H,h,cf}" cause problems and cause the output string
length to grow wildly.
Worse than that, the csh {foo,bar} construct is not a file glob and
in general has semantics that cannot be duplicated with REs:
- Order is preserved, so *.{h,c} is NOT the same as *.[hc]
- It is expanded regardless of matches, so "echo {foo,bar}.c" will work
whether or not foo.c or bar.c exist.
Of course, in any one application these may not be a problem, and
more-or-less mechanical conversion to (foo|bar) might be acceptable.
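For what it's worth, the mechanical brace-to-alternation rewrite might
look like this (my own Python sketch; as noted above, a regex group only
matches, so it does not reproduce csh's expand-regardless behavior):

```python
import re

def braces_to_alternation(pat):
    """Mechanically rewrite csh-style {foo,bar} groups as regex
    alternation (foo|bar).  Innermost groups are rewritten first, one
    nesting level per pass; the loop runs until no braces remain.
    Commas are assumed to separate alternatives only."""
    prev = None
    while prev != pat:
        prev = pat
        pat = re.sub(r'\{([^{}]*)\}',
                     lambda m: '(' + m.group(1).replace(',', '|') + ')',
                     pat)
    return pat

# '*.{c,h}' -> '*.(c|h)'   (the '*' itself still needs glob translation)
```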
Just as a hint, here is some perl code I use to convert sh-type globs
to REs in a Perl package. The input glob pattern is known to contain
no '/' characters (the handling of which is "interesting" recursion).
I make no promises about this, but it hasn't failed me yet.
# Convert shell-style glob pattern to regex
$pat =~ s/[.=<>+_\\-]/\\$&/g;
$pat =~ s/\?/./g;
$pat =~ s/\*/.*/g;
# Hide leading . from wildcards
$pat =~ s/^\.\*/[^.].*/; # .* -> [^.].*
$pat =~ s/^\.([^\*])/[^.]$1/; # .x -> [^.]x
$pat =~ s/^\*/[^.]*/;
# Anchor the pattern
$pat = "^$pat\$";
# could do some optimising here, but leave it to perl!
# e.g. "^.*" => ""
# ".*$" => ""
--
Gregory Bond <gnb@bby.com.au>
Burdett Buckeridge & Young Ltd Melbourne Australia
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 7 09:04:08 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: darte@ens.ens-lyon.fr (Alain Darte)
Subject: Re: SUMMARY: Loop transformations with unimodul
Message-ID: <93-04-034@comp.compilers>
Keywords: optimize, bibliography
Sender: compilers-sender@iecc.cambridge.ma.us
Reply-To: darte@ens.ens-lyon.fr
Organization: Ecole Normale Superieure de Lyon
References: <93-04-027@comp.compilers>
Date: Wed, 7 Apr 1993 08:24:19 GMT
Approved: compilers@iecc.cambridge.ma.us
Let me add some references that could be of interest concerning our work
here in Lyon.
Alain Darte. Regular partitioning for synthesizing fixed-size systolic
arrays. {\em INTEGRATION, The VLSI Journal}, 12:293--304, December 1991.
Alain Darte, Leonid Khachiyan and Yves Robert. Linear scheduling is nearly
optimal. {\em Parallel Processing Letters} 1(2):73--81, December 1991.
Alain Darte and Yves Robert. Scheduling uniform loop nests. In R. Melhem
ed., {\em ISMM Conference on Parallel and Distributed Systems}, ISMM Press
(1992), 75-82.
Alain Darte, Tanguy Risset and Yves Robert. Loop nest scheduling and
transformations. In J.J. Dongarra and B. Tourancheau eds., {\em
Environments and Tools for Parallel Scientific Computing}, Elsevier
Science Publishers (1993).
Alain Darte and Yves Robert. Mapping uniform loop nests onto distributed
memory architectures. Technical Report 93-03, LIP, January 1993.
Submitted. Available via anonymous ftp at lip.ens-lyon.fr in
/pub/LIP/RR/RR93 (file RR93-03.ps.Z).
Thanks to Uwe Assmann for his summary and the interesting references he
gives.
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 7 21:50:41 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: nachterg@imec.be (Lode Nachtergaele)
Subject: papers about loop transformations
Message-ID: <93-04-035@comp.compilers>
Keywords: optimize, bibliography, theory
Sender: compilers-sender@iecc.cambridge.ma.us
Reply-To: nachterg@imec.be
Organization: IMEC vzw Leuven
References: <93-04-020@comp.compilers>
Date: Wed, 7 Apr 1993 14:31:42 GMT
Approved: compilers@iecc.cambridge.ma.us
Hello world,
Uwe Assmann asked for the precise pointer to the paper of M.E. Wolf, but
people mailed their reference lists on this subject. So here is our list
of references.
1. Loop data flow analysis.
a. group P. Feautrier (Univ. P.et M.Curie, Paris): paper in International
Journal of Parallel Programming Vol. 20 No. 1, Feb. 1991 : "Dataflow
Analysis of Array and Scalar References" This handles formal data flow
analysis for sets of nested loops with manifest affine index expressions
including manifest conditions. Equations are set up to calculate
dependencies which are solved with their PIP (Parametric Integer
Programming) package. The result could be written out in an applicative
form (but it is currently not).
b. group Pugh (Univ. of Maryland): papers in Supercomputing'91 and
Sigplan'92. This also handles formal data flow analysis for sets of
nested loops with manifest affine index expressions but it can include
also user specified assertions on non-manifest conditions which make it
more general (though not general enough for all our applications).
Equations are set up to check dependencies which are passed to their
"omega"-test based on Fourier-Motzkin elimination. The result can be
written out in an (more or less) applicative form.
c. group Monica Lam (Stanford University): thesis of Dror Eliezer Maydan,
"Accurate analysis of array references", CSL-TR-92-547 (STAN-CS-92-1449),
September 1992
In addition there are a few older refs worth mentioning:
d. J.Bu (Delft): papers ISCAS/ICASSP'88, thesis 90 Provides data flow
analysis on restricted set of nested loops.
e. S. Rajopadhye (University of Oregon): paper in IMEC Workshop on Formal
Methods, November 89 "Algebraic transformations in Systolic Arrays
synthesis: A case study" Provides some algebraic transformations on Affine
Recurrence Equations to change dependencies. The transformations rely on
algebraic characteristics of the operators.
2. Analysis techniques to check whether (a) particular class(es) of loop
trafos can be applied. No full data flow analysis happens in this case.
a. U. Banerjee (?): paper '89 in Proc. 2nd Workshop Languages Compilers
Parallel Computing : "A theory of loop permutation" Here, sets of nested
URE loops are checked for a potential permutation. This can also be
solved if data flow analysis is performed first to derive a single
assignment form.
b. D.Padua, M. J. Wolfe (University of Illinois): papers Proc ACM'86 -
Supercomputing'90. This provides a survey of loop trafo analysis
techniques (under restrictions) + a way to check loop interchanging for
manifest affine index expressions. Also the decomposition of unimodular
loop trafos into skew/reversal/interchange is proposed here (though not
proven).
c. Randy Allen, Ken Kennedy (Rice University, Houston): paper in IEEE
Transactions on Computers, Vol. 41, No. 10, October 1992, "Vector
Register Allocation". Interesting survey paper on register allocation
techniques which includes analysis on when loop trafos can be applied.
3. Extraction of parallelism in the presence of nested loops:
a. Polychronopoulos (University of Illinois): paper in IEEE Transactions
on Computers, Vol. 37, No. 8, August 1988 "Compiler Optimizations for
Enhancing Parallelism and their impact on architecture design" Survey of
loop trafo classes with some analysis on how to extract parallelism but
not really automated.
b. M.J. Wolfe (Oregon Graduate Institute of Science and Technology):
paper in The Journal of Supercomputing, 4:321-344, 1990: "Data
dependence and Program Restructuring". Maximal parallelism is found for
sets of UREs with lexicographically ordered index expressions (quite
restricted).
c. Pugh-Wonnacott (Univ. of Maryland): report UMIACS-TR-92-126
(CS-TR-2994). They find maximal parallelism for sets of nested loops
with manifest affine index expressions, but including the user-specified
assertions on non-manifest conditions.
d. Shang-Fortes (?): paper in Algorithms and Parallel VLSI Architectures
II, P. Quinton and Y. Robert (eds.), 1992. Here too, parallelism is
detected for sets of nested loops with manifest affine index
expressions.
4. Methods to perform piece-wise linear scheduling.
a. M.E. Wolf, M. Lam (Stanford University): paper in IEEE Transactions
on Parallel and Distributed Systems, Vol. 2, No. 4, October 1991, "A
loop transformation theory and an algorithm to maximize parallelism".
Very good paper on unimodular transformation of a loop structure. A
general algorithm to generate the loop structure after transformation is
proposed.
b. L.Thiele (Univ. Saarland), "On the design of piecewise regular
processor arrays", ISCAS'89, pp. 2239-2242. Original work on this topic,
but not really automated.
c. Alain Darte, Tanguy Risset, Yves Robert, "Loop nest scheduling and
transformations", to appear in Environments and tools for Parallel
Scientific Computing, J.J. Dongarra and B.Tourancheau eds, North Holland,
1993
d. Leslie Lamport: "The parallel execution of do loops", Communications
of the ACM, 17(2):83-93, February 1974.
e. W. Kelly, W. Pugh (Univ. of Maryland): report on ftp.cs.umd.edu,
UMIACS-TR-92-126 (CS-TR-2995), November 1992, "Generating Schedules and
Code within a Unified Reordering Transformation Framework". Describes an
algorithm to compute transformations to obtain maximum parallelism.
Gives a method to generate code after transformation.
f. C.-H. Huang, P. Sadayappan, "Communication-Free Hyperplane Partitioning
of Nested Loops", Languages and Compilers for parallel Computing, Fourth
International Workshop, Santa Clara, California, August 1991
g. M. Neeracher, R. Ruhl, "Automatic Parallelization of LINPACK Routines
on Distributed Memory Parallel Processors", Proceedings 7th
International Parallel Programming Symposium, April 1993
5. Singular loop transformation papers, ftp'ed from
ftp.cs.cornell.edu:pub/TyphoonCompiler/papers-ps/ (Department of
Computer Science, Cornell University):
a. Wei Li, Keshav Pingali, "A singular loop transformation framework
based on non-singular matrices", Proceedings of the Fifth Annual
Workshop on Language and Compilers for Parallelism, New Haven, August
1992
b. Wei Li, Keshav Pingali, "Access Normalization : Loop restructuring for
NUMA Compilers", ACM SIGPLAN Notices, Vol 27, Number 9, Sep. 1992
c. Wei Li, Keshav Pingali, "Loop transformation for NUMA Machines", to
appear in SIGPLAN Notices, 1993
d. R. Johnson, Wei Li, Keshav Pingali, "An executable representation of
distance and direction", Languages and Compilers for parallel Computing,
Fourth International Workshop, Santa Clara, California, August 1991
6. The work in the following three references is related to automated
control flow optimization for DSP memory management. In the papers a
method is presented to automatically generate a sequence of unimodular
transformations in order to optimize memory needs.
Group of F.V.M. Catthoor (IMEC, Belgium) :
M.F.X.B. van Swaaij, F.H.M. Franssen, F.V.M. Catthoor, H.J. De Man.
"Modeling data flow and control flow for high level memory
management", European Design Automation Conference, pp. ,1992.
M.F.X.B. van Swaaij, F.H.M. Franssen, F.V.M. Catthoor, H.J. De Man.
"Modeling data flow and control flow for DSP system synthesis",
VLSI Design Methodologies for DSP Systems, M. Bayoumi editor,
Kluwer, 1993.
M.F.X.B. van Swaaij, F.H.M. Franssen, F.V.M. Catthoor, H.J. De Man.
"Automating high level control flow transformations for
DSP memory management",
Proceedings of the IEEE Workshop on VLSI signal processing, 1992.
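A toy sketch of the unimodular-transformation idea that runs through
many of the references above (my own illustration, taken from none of
the papers): a transformation of a 2-deep loop nest is an integer matrix
T with |det T| = 1 applied to the iteration vector; interchange,
reversal and skewing are the elementary cases, and any product of them
is unimodular again, so the transformed nest scans exactly the same
integer points.

```python
def matmul(A, B):
    """Product of 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def apply(T, points):
    """Map every iteration vector (i, j) through T."""
    return sorted((T[0][0] * i + T[0][1] * j, T[1][0] * i + T[1][1] * j)
                  for i, j in points)

# The elementary unimodular transformations of a 2-deep loop nest:
INTERCHANGE = [[0, 1], [1, 0]]    # swap the two loops
REVERSAL    = [[-1, 0], [0, 1]]   # run the outer loop backwards
SKEW        = [[1, 0], [1, 1]]    # j' = i + j

# Any product of them is unimodular again, |det T| = 1 ...
T = matmul(SKEW, matmul(REVERSAL, INTERCHANGE))

# ... so T maps the iteration space bijectively onto its image:
orig = [(i, j) for i in range(3) for j in range(3)]
image = apply(T, orig)
```

Because T is integral with |det T| = 1, it is a bijection on Z^2, so the
rewritten loop bounds enumerate the same iterations in a new order; the
papers above are largely about choosing T and regenerating the bounds.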
=============================================================================
Papers we are still looking for :
a. C.Ancourt, F.Irigoin, "Scanning polyhedra with DO loops", Third ACM
symposium on Principles and Practice of Parallel Programming, p.39-50,
April 1991
b. L.Lu, "A Unified framework for systematic loop transformations", Third
ACM Symposium on Principles and Practice of parallel Programming, p.
28-38, April 1991
Frank Franssen
IMEC Laboratory,
Kapeldreef 75,
B-3001 Leuven,
Belgium
Email: franssen@imec
tel: ++32-16-281512
fax: ++32-16-281515
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 7 21:55:03 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: wand@ccs.northeastern.edu (Mitchell Wand)
Subject: Re: Semantic actions in LR parser
Message-ID: <93-04-036@comp.compilers>
Keywords: LALR, parse, bibliography, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: College of Computer Science, Northeastern University
References: <93-04-008@comp.compilers> <93-04-029@comp.compilers>
Date: Wed, 7 Apr 1993 17:24:25 GMT
Approved: compilers@iecc.cambridge.ma.us
clark@zk3.dec.com (Chris Clark USSG) writes:
> One curious feature which falls out, is that if you can detect two actions
> are the same you can execute the action even if the eventual reduces are
> parts of two distinct productions. That allows you to defer (and
> eliminate) some conflicts. (In our model, two actions are the same if
> they are character for character the same string.) Eliminating the pseudo
> productions, also means smaller state tables.
Ahh, this sounds like you've rediscovered a result similar to the one in
Brown, C. & Purdom, P. "Semantic Routines and LR(k) Parsers," Acta
Informatica 14 (1980), 299--315.
I suppose 1980 is before the beginning of time for a lot of folks.
I wonder if Chris could compare his results with the Brown-Purdom results.
--Mitch
--
Mitchell Wand
College of Computer Science, Northeastern University
360 Huntington Avenue #161CN, Boston, MA 02115 Phone: (617) 437 2072
Internet: wand@ccs.northeastern.edu Fax: (617) 437 5121
[In 1980 yacc was already eight years old. -John]
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 7 21:55:44 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: jwe@emx.cc.utexas.edu (John W. Eaton)
Subject: Re: Looking for a MATLAB parser
Message-ID: <93-04-037@comp.compilers>
Keywords: parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: The University of Texas at Austin, Austin, Texas
References: <93-04-030@comp.compilers>
Date: Wed, 7 Apr 1993 22:05:04 GMT
Approved: compilers@iecc.cambridge.ma.us
keerthi@leland.Stanford.EDU (Keerthi Angammana) writes:
> Does anybody know if a parser (or at least a complete grammar) for
> MATLAB is available for free someplace ?
I'm working on an interpreter called Octave that's very much like Matlab.
The parser is built using flex and bison and the whole thing is
distributed under the terms of the GNU Copyleft.
The underlying numerical solvers are currently standard Fortran ones like
Lapack, Linpack, Odepack, the Blas, etc., packaged in a library of C++
classes (see the files in the libcruft and liboctave subdirectories). If
possible, the Fortran subroutines are compiled with the system's Fortran
compiler, and called directly from the C++ functions. If that's not
possible, they are translated with f2c and compiled with a C compiler.
Better performance is usually achieved if the intermediate translation to
C is avoided.
The library of C++ classes may also be useful by itself; it is
distributed under the same terms as Octave.
Octave has been compiled and tested with g++-2.3.3 and libg++-2.3 on a
SPARCstation 2 running SunOS 4.1.2, an IBM RS/6000 running AIX 3.2, a
DECstation 5000/240 running Ultrix 4.2a, and an i486 system running Linux
SLS 0.99-47. It should be possible to build on almost all systems where
gcc and g++ are available.
If you are on the Internet, you can copy the latest distribution version
of Octave from the file /pub/octave/octave-M.N.tar.Z, on the host
ftp.che.utexas.edu. This is a compressed tar file, so be sure to use
binary mode for the transfer. M and N stand for version numbers; look at
a listing of the directory through ftp to see what version is available.
After you unpack the distribution, be sure to look at the files README and
INSTALL.
--
John W. Eaton
jwe@che.utexas.edu
Department of Chemical Engineering
The University of Texas at Austin
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 8 11:41:53 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: ccsis@sunlab1.bath.ac.uk (Icarus Sparry)
Subject: Re: Serendipitious Compiler Stuff
Message-ID: <93-04-038@comp.compilers>
Keywords: tools, FTP
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Bath University Computing Services, UK
References: <93-04-022@comp.compilers>
Date: Thu, 8 Apr 1993 11:13:25 GMT
Approved: compilers@iecc.cambridge.ma.us
Paul Robinson <tdarcos@mcimail.com> writes:
>Note that these files end in ".z" NOT ".Z" so you need GNUZIP to decompress
>them, NOT compress. ...
No, these are '.Z' (compress) files. Simtel-20 runs an operating system
which does not distinguish between cases. For simplicity most (all?)
mirror sites map everything to lower case. Simtel-20 filenames can only
have a single '.' in them, so the common extension is '.tar-z'. Such
files should be renamed to have an extension of '.tar.Z' and then
treated in the normal manner.
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 8 17:11:24 EDT 1993
Xref: iecc comp.arch:27742 comp.lang.functional:2718 comp.lang.lisp:7184 comp.lang.scheme:5462 comp.parallel:5681 comp.compilers:4497
Newsgroups: comp.arch,comp.lang.functional,comp.lang.lisp,comp.lang.scheme,comp.parallel,comp.compilers
Path: iecc!compilers-sender
From: Lori Lynn Avirett-Mackenzie <lori@au-bon-pain.lcs.mit.edu>
Subject: FPCA 93 Advance Program Information
Message-ID: <93-04-039@comp.compilers>
Keywords: conference, parallel, functional
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: MIT Lab for Computer Science, Cambridge, Mass.
Date: Thu, 8 Apr 1993 21:59:09 GMT
Approved: compilers@iecc.cambridge.ma.us
ANNOUNCEMENT: Programming Language Conferences, Copenhagen, June 9-16, 1993
===========================================================================
June 9-11: FPCA (Functional Programming Languages and Computer Architecture)
June 12 : SIPL (State in Programming Languages)
June 14-16: PEPM (Symposium on Partial Evaluation and Semantics Based Program
Manipulation)
The advance program for the above ACM-sponsored meetings is being sent
out as a supplement to ACM SIGPLAN Notices. It contains detailed
programs for FPCA and PEPM plus information about registration and
accommodation. The advance program is also available by request from
Lisa Wiese
Attn.: FPCA '93
DIKU, University of Copenhagen
Universitetsparken 1
DK-2100 Copenhagen East
Denmark
Tel. +45-35 32 14 13
Email: wiese@diku.dk
or in .dvi and .ps format via anonymous ftp from DIKU; see the
following sample session:
> ftp ftp.diku.dk
Name: anonymous
Password: <type your internet email address here>
ftp> binary
ftp> cd pub/diku/semantics
ftp> get FPCA-SIPL-PEPM.dvi (or: get FPCA-SIPL-PEPM.ps)
ftp> bye
or in .dvi and .ps format via anonymous ftp from MIT; see the
following sample session:
ftp jj.lcs.mit.edu
Name: anonymous
Password: <type your internet email address here>
ftp> binary
ftp> cd fpca93
ftp> get FPCA-SIPL-PEPM.dvi (or: get FPCA-SIPL-PEPM.ps)
ftp> bye
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 8 19:53:46 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Steven Novack <snovack@enterprise.ICS.UCI.EDU>
Subject: Assembly hacker vs. compiler revisited
Message-ID: <93-04-040@comp.compilers>
Keywords: assembler, optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-02-105@comp.compilers> <93-02-122@comp.compilers>
Date: Thu, 8 Apr 1993 20:12:16 GMT
Approved: compilers@iecc.cambridge.ma.us
A month or so ago there was a debate in this group about the question
of by how much, if at all, a good assembly language programmer could
beat the best compiler. Someone on the assembly side mentioned an
example wherein hand-coding a data compression algorithm achieved an
eight-fold improvement over compiling the same algorithm written in a
high-level language.
I would greatly appreciate receiving more information on this example, or
any others, in which hand-coding provides significant improvements over
compiling high-level implementations. I'm interested in any aspect of
this from where benefits are obtained throughout an application, all the
way down to little, but useful ``tricks'' that would be missed by most
compilers.
Thanks in advance,
Steve Novack
Dept. of Information and Computer Science
University of California, Irvine, CA 92717
snovack@ics.uci.edu
(714) 725-2248
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Sun Apr 11 11:41:40 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: megatest!plethorax!djones@uu2.psi.com (Dave Jones)
Subject: Re: Wanted: Regular Expression -> Finite Automata C code =-
Message-ID: <93-04-041@comp.compilers>
Keywords: lex, DFA
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-023@comp.compilers>
Date: Fri, 9 Apr 1993 11:39:29 GMT
Approved: compilers@iecc.cambridge.ma.us
tdh6t@helga3.acc.virginia.edu (Todd Hodes):
> I wanted to use the code in Sedgewick's 'Algorithms in C' book,
> but found the following bug: ...
Last month I posted an article pointing out two bugs in Sedgewick's Pascal
algorithm for matching the NFA against an input string. The bug Todd
reports is in the algorithm for building the NFA, in the "C" book, which I
have never seen.
Helpful suggestion: Don't try to salvage the Sedgewick algorithms. See
the New Dragon Book, "Compilers: Principles, Techniques, and Tools", by
Aho, Sethi, and Ullman; Addison Wesley; Reading, Mass.; 1988; ISBN
0-201-10088-6.
Both procedures are sketched out almost correctly in algorithms 3.3 and
3.4. Algorithm 3.4 is not quite right, but you'll figure it out when you
start to implement it. As written, it terminates only when the input
string has been read to the end. It should terminate either then or when
the NFA is in a state in which it can make no move on the current input
character. In the language the book uses, that will be when
e-closure(move(S,a)) is empty.
Also, every time a final state is encountered, you will want to remember
the number of characters that have been processed at that point. The last
number remembered will be the length of the longest match.
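Both repairs -- stopping as soon as the state set empties, and
remembering the last accepting position -- fit in a small sketch of the
on-the-fly subset simulation (my own Python sketch in the Dragon Book's
terms; the NFA encoding via epsilon and character move tables, and the
toy NFA itself, are invented for illustration):

```python
def e_closure(states, eps):
    """All NFA states reachable from `states` by epsilon moves alone."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in seen:   # without this check, an epsilon cycle
                seen.add(t)     # would loop forever
                stack.append(t)
    return seen

def longest_match(start, accept, eps, move, text):
    """Length of the longest prefix of `text` the NFA accepts, or -1.
    Terminates early when e-closure(move(S, a)) is empty."""
    states = e_closure({start}, eps)
    best = 0 if accept & states else -1
    for n, ch in enumerate(text, 1):
        states = e_closure({t for s in states
                              for t in move.get((s, ch), ())}, eps)
        if not states:          # no move on the current character: stop
            break
        if accept & states:     # remember the last accepting position
            best = n
    return best

# A toy NFA for (ab|a): start state 0, accepting state 5.
EPS  = {0: (1, 3), 4: (5,)}
MOVE = {(1, "a"): (2,), (2, "b"): (5,), (3, "a"): (4,)}
```

On "abx" this stops at the 'x' and reports the longer of the two accepts
it saw ("ab", length 2) rather than the first one ("a").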
One key difference between the Dragon Book algorithm and the Sedgewick
algorithm is the use of the bit-vector to prevent putting the same NFA
state into the "next state" set more than once. Come to think of it, do
you need another bit-vector to keep from putting the same state into the
"current state" set more than once when calculating the e-closure? Hmm.
The other Sedgewick problem is stopping when a final NFA state is first
encountered. In other words, it finds the shortest match -- seldom what
is wanted, particularly for expressions that can match the empty string!
I haven't verified the bug you report against the other Sedgewick
algorithm, so I don't know what the difference is, but I think the Dragon
algorithm 3.3 is correct, if memory serves. It's been about 10 years since
I looked at it though (in the old Dragon book).
Good luck.
Dave
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Sun Apr 11 11:44:25 EDT 1993
Xref: iecc comp.lang.c++:34615 comp.compilers:4500
Newsgroups: comp.lang.c++,comp.compilers
Path: iecc!compilers-sender
From: Mayan Moudgill <moudgill@cs.cornell.EDU>
Subject: A C++ Parser toolkit
Message-ID: <93-04-042@comp.compilers>
Summary: a toolkit for quickly implementing a parser (including CSGs)
Keywords: C++, parse, tools, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Cornell Univ. CS Dept, Ithaca NY 14853
Date: Sun, 11 Apr 1993 02:43:30 GMT
Approved: compilers@iecc.cambridge.ma.us
I've implemented a parser/scanner/text-matcher :) that allows a programmer
to quickly specify a grammar, and to attach actions to productions.
For instance, the following code:
int name(Parse& P)
{
Token t;
P, IDENT(t);
if( P && StbFind(t) ) {
return 1;
}
return 0;
}
int stmt(Parse & P)
{
Token t;
P, MATCH(name), "=", NUMBER(val);
}
matches an identifier (i.e. [a-zA-Z_][a-zA-Z_0-9]*), '=', number string,
but only if identifier is already in the symbol-table.
NOTES: It works on a Sun 4.0 with C++ 3.0. It _MIGHT_ work on some
other OSes. I've used mmap() to implement it. Your OS might have the
function; then again it might not. Even if it does, its parameters
might not be the same.
P.S. It's also ftp'able as pub/Parse.shar from ftp.cs.cornell.edu
:)
Mayan
[I've put this in the compilers FTP archives at primost.cs.wisc.edu as
c++kit.Z. If you can't FTP, it's available by e-mail from the mail server
at compilers-server@iecc.cambridge.ma.us; send "send c++kit" to retrieve it.
Please FTP if you can, the mail server is linked to the outside world by a
single dial-up modem. -John]
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Sun Apr 11 11:48:44 EDT 1993
Xref: iecc comp.compilers:4501 misc.jobs.offered:26691
Newsgroups: comp.compilers,misc.jobs.offered
Path: iecc!compilers-sender
From: compilers-jobs@iecc.cambridge.ma.us
Subject: Compiler positions available for week ending April 11
Message-ID: <93-04-043@comp.compilers>
Keywords: jobs
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Sun, 11 Apr 1993 15:47:31 GMT
Approved: compilers@iecc.cambridge.ma.us
This is a digest of ``help wanted'' and ``position available'' messages
received at comp.compilers during the preceding week. Messages must
advertise a position having something to do with compilers and must also
conform to the guidelines periodically posted in misc.jobs.offered.
Positions that remain open may be re-advertised once a month. To respond
to a job offer, send mail to the author of the message. To submit a
message, mail it to compilers@iecc.cambridge.ma.us.
-------------------------------
From: reinders@SSD.intel.com (James Reinders)
Subject: Intel Supercomputer Job Openings
Organization: Supercomputer Systems Division (SSD), Intel
Date: Fri, 9 Apr 1993 21:38:53 GMT
The Supercomputing Systems Division of Intel has positions available now
in Beaverton, Oregon for Senior Software Engineers, Compilers.
A more detailed description is attached.
Please mail resumes (no FAXes, no phone calls, no e-mail, please):
James Reinders
c/o Intel Corporation
Mail Stop CO4-02
5200 N.E. Elam-Young Parkway
Hillsboro, OR 97124-6497
- james
------------------------------------------------------------------------------
Senior Software Engineers, Compilers
Position(s): Design and implement compilers for Intel Parallel
Supercomputers, both scalar and parallelizing compiler positions.
Education: M.S. or PhD in Computer Engineering or Computer Science.
Experience: minimum of 3 years of experience in compiler development
and system architecture.
Skills: Experience with parallel or vector supercomputers required.
Proficiency in compiler technology, computer architecture,
parallel architectures, C, FORTRAN, UNIX and UNIX tools.
Must have strong communication and analytical skills, and team
software development experience.
--
:: James R. Reinders reinders@ssd.intel.com ::
:: Intel Supercomputer Systems, M/S C04-02 ::
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Mon Apr 12 17:05:10 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: parrt@ecn.purdue.edu (Terence J Parr)
Subject: Re: A C++ Parser toolkit
Message-ID: <93-04-044@comp.compilers>
Keywords: tools, PCCTS
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-042@comp.compilers>
Date: Mon, 12 Apr 1993 15:25:55 GMT
Approved: compilers@iecc.cambridge.ma.us
I'm very pleased by the posting of Mayan Moudgill
<moudgill@cs.cornell.EDU>; people are beginning to see that semantic
predicates are the way to recognize context-sensitive constructs rather
than having the lexer change the token type (ack!). Mayan writes:
> For instance, the following code:
>
> int name(Parse& P)
> {
> Token t;
>
> P, IDENT(t);
> if( P && StbFind(t) ) {
> return 1;
> }
> return 0;
> }
>
> int stmt(Parse & P)
> {
> Token t;
>
> P, MATCH(name), "=", NUMBER(val);
> }
>
> matches an identifier (i.e. [a-zA-Z_][a-zA-Z_0-9]*), '=', number string,
> but only if identifier is already in the symbol-table.
In PCCTS, we would write something akin to:
name : << IsVAR(LATEXT(1)) >>? IDENT
;
stat : name "=" NUMBER
;
where <<IsVAR(LATEXT(1))>>? is a semantic predicate; IsVAR is some
user-defined function and LATEXT(1) is the text of the first token of
lookahead. This example behaves exactly as Mayan outlines. We call this
a *validation* semantic predicate (we have syntactic predicates in the
next release of PCCTS). Predicates can also be used to distinguish
between two syntactically ambiguous productions (*disambiguating* semantic
predicates). E.g., let's add a production to stat to match a type name
followed by a declarator.
name : << IsVAR(LATEXT(1)) >>? IDENT
;
type : << IsTYPE(LATEXT(1)) >>? IDENT
;
stat : name "=" NUMBER
| type declarator
;
In this case, IDENT predicts both productions of stat and k=1 lookahead is
syntactically insufficient. However, ANTLR (the parser-generator of
PCCTS) finds 2 *visible* predicates (one in name and the other in type)
that can be used to semantically disambiguate the productions of stat.
Hence, it *hoists* the predicates for use in the prediction expressions
for stat, thus, resolving the conflict. Note that, using k=2, ANTLR could
uniquely predict stat's productions without predicates and would not hoist
the visible predicates.
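The effect of a hoisted disambiguating predicate can be mimicked in a
hand-written recursive-descent parser. A minimal Python sketch of the
same stat example (my own illustration, not PCCTS-generated code; the
token encoding and the `types` symbol table standing in for an
IsTYPE-style lookup are invented):

```python
class Parser:
    """Tiny hand-written recursive-descent sketch.  Tokens are
    (text, kind) pairs; `types` plays the role of the symbol table a
    predicate like IsTYPE(LATEXT(1)) would consult."""

    def __init__(self, tokens, types):
        self.toks = list(tokens) + [("<eof>", "EOF")]
        self.pos = 0
        self.types = types

    def la(self):                   # text of the 1-token lookahead
        return self.toks[self.pos][0]

    def la_kind(self):
        return self.toks[self.pos][1]

    def eat(self):
        text, _ = self.toks[self.pos]
        self.pos += 1
        return text

    def stat(self):
        # Both alternatives start with IDENT, so k=1 syntax alone cannot
        # decide; the hoisted predicate (is the lookahead a known type
        # name?) does the disambiguation.
        if self.la_kind() == "IDENT" and self.la() in self.types:
            return ("decl", self.eat(), self.eat())   # type declarator
        name = self.eat()                             # name "=" NUMBER
        assert self.eat() == "="
        return ("assign", name, self.eat())
```

With "T" registered as a type, the token stream T x parses as a
declaration, while x = 3 parses as an assignment; the syntactic conflict
is resolved purely semantically, just as ANTLR does by hoisting.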
PCCTS is in the public domain and may be obtained by sending email to
pccts@ecn.purdue.edu with a blank "Subject:" line.
Terence Parr
Purdue University
School of Electrical Engineering
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Mon Apr 12 17:07:03 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: henry@zoo.toronto.edu (Henry Spencer)
Subject: Re: Wanted: Regular Expression -> Finite Automata C code =-
Message-ID: <93-04-045@comp.compilers>
Keywords: DFA, lex
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: U of Toronto Zoology
References: <93-04-023@comp.compilers> <93-04-041@comp.compilers>
Date: Mon, 12 Apr 1993 19:50:34 GMT
Approved: compilers@iecc.cambridge.ma.us
megatest!plethorax!djones@uu2.psi.com (Dave Jones) writes:
>Also, every time a final state is encountered, you will want to remember
>the number of characters that have been processed at that point. The last
>number remembered will be the length of the longest match.
Unfortunately, this doesn't generalize to most real-life applications,
where the match is not anchored at the start of the string. It's easy to
generalize the matching algorithm so you still make only one pass, by
considering the possibility of a fresh start at each character, and this
is definitely the way to do it -- you don't want to retry the match at
each possible starting position! Alas, the bookkeeping for longest match
falls apart. You don't know when the particular match that corresponds to
a new final state started. Consider the RE "(abcde|cd|ef)" and the string
"xabcdefy". You reach a final state after seeing the "d", another after
the "e", and another after the "f", but the last one is not the longest.
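Henry's example is easy to check mechanically. A brute-force sketch of
my own (trying every start position outright, which is exactly what the
one-pass simulation is meant to avoid, but fine for showing the order in
which the final states fire):

```python
ALTS = ("abcde", "cd", "ef")     # the branches of (abcde|cd|ef)
TEXT = "xabcdefy"

# Every (start, end) offset pair at which some alternative matches,
# ordered by where the match *finishes*:
matches = sorted(((s, s + len(a)) for a in ALTS
                  for s in range(len(TEXT)) if TEXT.startswith(a, s)),
                 key=lambda m: m[1])

# A one-pass simulation sees a final state fire at each end offset ...
ends = [e for _, e in matches]
# ... but the match that finishes last is not the longest one:
last = max(matches, key=lambda m: m[1])
longest = max(matches, key=lambda m: m[1] - m[0])
```

The accepts fire at offsets 5, 6 and 7 (after the "d", the "e" and the
"f"), and the match that finishes last is "ef", not the longest match
"abcde" -- which is why the simple bookkeeping falls apart once the
match is unanchored.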
--
Henry Spencer @ U of Toronto Zoology, henry@zoo.toronto.edu utzoo!henry
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 13 11:18:18 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Thomas Johnsson <johnsson@cs.chalmers.se>
Subject: Effectiveness of coloring: Chaitin-style vs Chow-style
Message-ID: <93-04-046@comp.compilers>
Keywords: optimize, registers, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Tue, 13 Apr 1993 08:39:09 GMT
Approved: compilers@iecc.cambridge.ma.us
Here is something that has puzzled me, regarding the effectiveness of
Chaitin style graph coloring vs Chow style priority based coloring.
In Chaitin's original coloring scheme, nodes which have degree < N (N
being the number of available registers) are deleted from the graph,
together with the incident arcs, because such nodes are trivially
colorable. If one comes to a point where all nodes have degree >= N, some
node is chosen for spilling, and the deletion continues (In a modification
of this, a node is simply chosen for deletion anyway -- I think this is
what Preston Briggs does). Nodes are colored in reverse order in which
they were deleted from the graph.
In Chow-style priority-based coloring, nodes are basically colored in
order of highest priority first, where the priority is set by the
estimated runtime gain from allocating the variable to a register.
In the (modified) Chaitin scheme, coloring order basically becomes: high
pressure regions first -- an order which is different from the priority
order.
All other things being equal (i.e., ignoring issues of subsumption, live
range splitting, etc.) I would have thought that it would always be more
beneficial to color the high gain nodes first, irrespective of how many
other nodes the node-to-be-colored conflicts with.
True or not?
-- Thomas Johnsson (johnsson@cs.chalmers.se)
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 13 11:33:45 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: wjackson@pinnacle.cerc.wvu.edu
Subject: request C code which translates source into PDG
Message-ID: <93-04-047@comp.compilers>
Keywords: optimize, analysis, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Tue, 13 Apr 1993 14:37:13 GMT
Approved: compilers@iecc.cambridge.ma.us
Has anyone implemented algorithms which convert a source program into a
Program Dependence Graph? We are currently working to implement such
algorithms as outlined in the 1987 paper "The Program Dependence Graph and
Its Use in Optimization" by Ferrante, Ottenstein, and Warren. If anyone
else has done this or another implementation and has code available, we
would appreciate the information.
Thanks,
Walter Jackson
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 13 16:55:57 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: preston@dawn.cs.rice.edu (Preston Briggs)
Subject: Re: Effectiveness of coloring: Chaitin-style vs Chow-style
Message-ID: <93-04-048@comp.compilers>
Keywords: optimize, registers
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Rice University, Houston
References: <93-04-046@comp.compilers>
Date: Tue, 13 Apr 1993 19:37:54 GMT
Approved: compilers@iecc.cambridge.ma.us
Thomas Johnsson <johnsson@cs.chalmers.se> writes:
[asking about Chow and Chaitin]
>All other things being equal (i.e., ignoring issues of subsumption, live
>range splitting, etc) I would have thought that it would always be more
>beneficial to color the high gain nodes first, irrespective of how many
>other nodes that the node-to-be-colored conflicts with.
I disagree. If you waste colors on expensive (but trivially colorable)
nodes, you may have to spill in situations where no spilling was required.
Look at it like this. Both Chaitin and Chow divide the nodes in the graph
into two sets: trivial and non-trivial. Then they try and color the
non-trivial nodes first. Chow says all the nodes with degree < k (where k
is the number of registers) are trivially colorable. Chaitin finds his
sets of trivial nodes by removing from the graph all nodes of degree < k.
He continues removing nodes until no nodes of degree < k remain in the
graph. Chaitin's set always includes all the nodes in Chow's set. Thus,
Chow may spill nodes that Chaitin would consider trivial.
Within the remainder of the graph (the non-trivial part), Briggs et al.
(following Chaitin) first remove nodes with low spill cost and high
degree. Thus nodes with high spill cost tend to remain in the graph
longer and be colored earlier, just like Chow.
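The containment Preston describes is easy to see on a small example (my
own sketch, with an invented path-shaped interference graph and k = 2
registers):

```python
def chow_trivial(adj, k):
    """Chow: trivially colorable = degree < k in the original graph."""
    return {n for n, nbrs in adj.items() if len(nbrs) < k}

def chaitin_trivial(adj, k):
    """Chaitin: repeatedly remove nodes whose *remaining* degree < k."""
    live = {n: set(nbrs) for n, nbrs in adj.items()}
    removed = set()
    changed = True
    while changed:
        changed = False
        for n in list(live):
            if len(live[n]) < k:
                for m in live[n]:         # removing n lowers its
                    live[m].discard(n)    # neighbors' degrees
                del live[n]
                removed.add(n)
                changed = True
    return removed

# Interference graph A-B-C-D-E (a path), two registers:
adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"},
       "D": {"C", "E"}, "E": {"D"}}
```

Here Chow's static test marks only A and E trivial, while Chaitin's
iterative removal empties the whole graph with no spills at all; in
general the Chaitin set always contains the Chow set, as described
above.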
Preston Briggs
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 14 10:41:36 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: M.J. Ratcliffe <Michael.Ratcliffe@ecrc.de>
Subject: PARLE'93: Advanced Programme Corrections
Message-ID: <93-04-049@comp.compilers>
Keywords: conference, CFP, parallel
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: ECRC GmbH, Arabellast. 17, D-8000 Munich 81
References: <93-03-047@comp.compilers>
Date: Tue, 13 Apr 1993 21:18:30 GMT
Approved: compilers@iecc.cambridge.ma.us
We have recently spotted some bugs in the printed version of the PARLE'93
Advanced Programme. Here are the most important points for those of you
receiving the printed version, followed by a reposting of the full
information for reference. Please note that the early registration
deadline is nearing (April 30th).
a) the tutorial attendance fee is DM275
b) the reduced registration fee is available for all members of CEPIS
(Council of European Professional Informatics Societies)
organisations (i.e. AFCET, AFIN, AICA, BCS, CCS, DD, DND, FESI,
FIPA, GCS, GI, ICS, ITG im VDE, NGI, NJSzT, OCG, PIPS, SAI,
SI, VRI). Please include your membership number and relevant
member organisation on the registration form.
--michael
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------------------+
| |
| A D V A N C E D P R O G R A M M E |
| |
| & |
| |
| C A L L F O R P A R T I C I P A T I O N |
| |
| |
+-----------------------------------------------+
PARALLEL ARCHITECTURES PPPP AA RRRR L EEEE 999 333
P P A A R R L E 9 9 3
AND LANGUAGES PPP AAAA RRR L EEE 9999 33
P A A R R L E 9 3
EUROPE P A A R R LLLL EEEE 999 333
Marriott Hotel, Munich, Germany
June 14 - 18, 1993
OVERVIEW
PARLE is an international, European-based conference focusing on the
parallel processing subdomain of Informatics/Information Technology. It
is organised annually on a non-profit-making basis to act as a European
forum for interchange between those working or interested in the domain,
from both academia and industry.
Ever increasing demands are being made on computer technology to provide
the processing power necessary to help understand and master the
complexity of natural phenomena and engineering structures. Within human
organisations ever more processing power is needed to master the
increased information flow. Many so-called 'Grand Challenges' have been
identified as being orders of magnitude beyond even the most powerful
computers available today.
Parallel processing technology offers a solution to providing the
necessary power and is therefore generally recognised as a strategically
important technology. By taking many basic processing devices and
connecting them together the potential exists of being able to achieve a
performance many times that of an individual device. However it is still
an important topic of research to discover how to do this optimally and
then to be able to effectively exploit the potential power through real
applications solving real-world problems.
WHO SHOULD ATTEND?
PARLE'93 has been designed to be attractive for a very wide-ranging
audience. It will appeal to academic researchers and students working in
the field of parallel processing or related areas. It will also appeal to
industrial workers in the area wishing to discover the latest ideas,
techniques and approaches. The Tutorial Programme offers an opportunity
for anyone interested in learning the state-of-the-art in some of the
newest, most exciting areas of parallel processing from world renowned
experts. A special Esprit project poster display will provide an overview
of many projects funded by the European Commission. The Industrial
Programme, consisting of both Vendor Presentations and an Exhibition,
offers the opportunity to gain first-hand experience of the commercial
products available today.
PARLE'93 offers an unrivalled opportunity for anyone interested to learn
more about parallel processing, whether expert or novice.
PROGRAMME OVERVIEW
+------------+------------+------------+------------+------------+
|   Monday   |  Tuesday   | Wednesday  |  Thursday  |   Friday   |
| 14th June  | 15th June  | 16th June  | 17th June  | 18th June  |
+------------+------------+------------+------------+------------+
|     T      |                                      |            |
|     U      |                                      |            |
|     T      |            PAPER SESSIONS            | SATELLITE  |
|     O      |                                      |   EVENTS   |
|     R      |                                      |            |
|     I      +------------+------------+------------+------------+
|     A      |                                                   |
|     L      |               INDUSTRIAL EXHIBITION               |
|     S      |                                                   |
+------------+------------+------------+------------+------------+
             | RECEPTION  |  BANQUET   |
             +------------+------------+
INDUSTRIAL PROGRAMME
The Industrial Programme associated with PARLE'93 consists of an
Exhibition and Vendor Sessions. The following companies have confirmed
their participation:
Convex Intel
Cray Research MasPar
Distributed Software Limited Meiko
DSM Computer nCUBE
Encore Pallas
GENIAS Software GmbH Parsytec
IBM Siemens-Nixdorf Information Systems
ICL Transtech Parallel Systems
SATELLITE EVENTS
Three satellite events are being organised in association with PARLE'93:
 * A meeting for Esprit projects involved in parallel processing, planned
   by the European Commission.
 * An Intel European Users' Group meeting.
 * A meeting of CONVEX's SCWG (Scalable Computing Working Group) on
   compatible cluster and MPP programming.
TUTORIALS
A choice of four parallel tutorials is being offered. Each tutorial will
last a full day in order to provide the time necessary for an in-depth
coverage:
TUTORIAL 1: MOLECULAR BIOINFORMATICS
Presenter: Thomas Lengauer, GMD, St. Augustin, Germany
Objectives
This tutorial will give an overview of Molecular Bioinformatics, its
promises, its limitations, and its scientific challenges. Molecular
Bioinformatics is an area in applied computer science which is in general
terms concerned with the development of methods and tools for analysing,
understanding, reasoning about and, eventually, designing large
biomolecules such as DNA, RNA, and proteins, with the aid of the
computer. The complexity of these molecules necessitates sophisticated
algorithmic and organizational models and techniques in order to keep the
computational requirements within acceptable ranges. High-performance
computing and parallel computation are an important aspect of Molecular
Bioinformatics. On the one hand, chemists have a well-developed tradition
of using high-performance computers. On the other hand, many of the
problems involved can only be tackled if the highest computing power
available is used effectively.
Contents
I   Introduction to the biomolecular background.
II  Alignment of biomolecular sequences (DNA, RNA, Proteins) and the
    detection of evolutionary relationships between such sequences
    (phylogenetic trees).
III Modeling of large biomolecules, including the prediction and analysis
    of secondary and higher-level structure as well as spatial
    conformations (folding).
IV  Molecular dynamics and simulations of interactions between
    biomolecules.
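As a small taste of the algorithmic side of item II, here is a minimal global-alignment scorer in the Needleman-Wunsch dynamic-programming style. This is an invented illustration, not tutorial material: the scoring values (match +1, mismatch -1, gap -1) are arbitrary, and real alignment tools are far more elaborate.

```python
# Minimal global sequence alignment score, dynamic-programming style.
# dp[i][j] holds the best score for aligning s[:i] against t[:j].

def alignment_score(s, t, match=1, mismatch=-1, gap=-1):
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):          # aligning a prefix of s to nothing
        dp[i][0] = i * gap
    for j in range(1, n + 1):          # aligning a prefix of t to nothing
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # match/mismatch
                           dp[i - 1][j] + gap,      # gap in t
                           dp[i][j - 1] + gap)      # gap in s
    return dp[m][n]

print(alignment_score("GATTACA", "GCATGCU"))
```

Recovering the alignment itself would additionally require a traceback through the dp table; phylogenetic-tree construction builds on many such pairwise scores.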
Biography
Thomas Lengauer is a director of the Institute of Foundations of
Information Technology at the Federal Research Center for Computer
Science (GMD) in St. Augustin, Germany. He is also Professor of Computer
Science at the University of Bonn. His research interests are in the area
of efficient algorithms and their application in science and technology.
Applications besides molecular biology that he is interested in are:
circuit design and layout, parallel computation, and cutting problems in
manufacturing.
TUTORIAL 2: NEURAL NETWORKS, FROM THEORY TO APPLICATIONS
Presenter: Francoise Fogelman Soulie, LRI, University of Orsay, France
Objectives
Participants in this tutorial will learn about advanced Neural Network
techniques. Efficient methods will be given for training the complex,
hybrid architectures required for real-world problems. Through a survey
of applications, the tutorial will assess the technology's realizations
and provide a development methodology. Parallel implementations of Neural
Networks will be discussed.
Contents
I   General introduction to Neural Networks. Overview of industrial
    implementations.
II  Survey of major Neural Network learning algorithms: Adaline,
    Perceptron, Multi-Layer Networks, Learning Vector Quantisation,
    Radial Basis Functions, Topological Maps.
III Neural Networks and Pattern Recognition: theoretical links with the
    Bayesian Classifier, computing and generalisation capabilities of
    Multi-Layer Networks, links of Multi-Layer Networks with Principal
    Component Analysis and Discriminant Analysis.
IV  Architectures for real-world applications. The need for modular
    design. Cooperation of hybrid techniques (Neural Networks and
    conventional). Training algorithms and examples.
V   Development methodology of Neural Network applications: detailed case
    studies will be presented to illustrate the development methodology,
    from various sectors: optical character recognition, security,
    finance, High Energy Physics, control, biology and medicine.
VI  Parallel implementations: algorithms, hardware, languages.
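As a flavour of the learning algorithms surveyed in item II, here is a toy Python sketch of the classic perceptron learning rule on an invented, linearly separable data set (the two-input AND function). The learning rate and epoch count are arbitrary choices for the example; real applications use the far richer architectures covered in the tutorial.

```python
# Toy perceptron: nudge the weights toward each misclassified example
# until the (linearly separable) data are all classified correctly.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out          # 0 when correctly classified
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Invented data set: the AND function, which is linearly separable.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([predict(x) for x, _ in AND])  # reproduces [0, 0, 0, 1]
```

By the perceptron convergence theorem this loop terminates with a separating line for any linearly separable data; XOR, famously, would never converge and needs the multi-layer networks of item III.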
Biography
Francoise Fogelman Soulie is Professor of Computer Science at LRI,
University of Orsay. She has participated in various ESPRIT projects
concerned with Neural Networks and their applications. She was a
co-founder of Mimetics. She is vice-president of the European Neural
Networks Society, and an editor for a number of Neural Networks journals.
Her research interests are on Neural Networks and their use, in
conjunction with other methods (eg Pattern Recognition and AI), to
develop industrial applications.
TUTORIAL 3: OBJECT-ORIENTED CONCURRENT PROGRAMMING
Presenter: Jean-Pierre Briot, Dept. of Information Science, The
University of
Tokyo, Japan and LITP, Institut Blaise Pascal, Paris, France.
Objectives
This tutorial introduces the basic concepts and methodology of
object-oriented concurrent programming. Object-oriented concurrent
programming is a natural integration of object-oriented programming and
concurrent programming. The resulting programming methodology allows
large programs to be decomposed into a collection of small modules that
run and interact concurrently and which are capable of exploiting
parallel hardware. The tutorial covers a progressive introduction to the
concepts, a number of program examples, and a survey of the current
state of the art.
Contents
I Introduction
II From Passive Objects to Active Objects
III Concepts
IV Example of Programming Language: ConcurrentSmalltalk
V Programming Methodology and Examples in ConcurrentSmalltalk
VI Implementing Active Objects: Actalk
VII The Actor Model of Computation
VIII Survey/Comparison of Main OOCP Languages
IX Present and Future, Conclusion, and Bibliography
Biography
Jean-Pierre Briot is a researcher at LITP, University of Paris-VI. He is
currently visiting the Department of Information Science, University of
Tokyo. He participated in the design of several OOCP languages and
platforms, and their applications to computer music and distributed AI.
He co-headed an ESPRIT parallel computing action to develop an OOCP
programming environment for multiprocessors. He has performed tutorials
on OOCP at the main OOP conferences (TOOLS, ECOOP and OOPSLA).
TUTORIAL 4: AUTOMATIC PARALLELISATION FOR DISTRIBUTED-MEMORY SYSTEMS
Presenter: Hans P. Zima, Institute for Software Technology and Parallel
Systems,
University of Vienna, Austria
Objectives
In this tutorial, we describe the current state of the art in compiling
procedural languages (in particular, Fortran) for distributed-memory
multiprocessing systems (DMMPs), analyze the limitations of these
approaches, and outline future research. We introduce the language
extensions of Vienna Fortran which allow the user to write programs for
DMMPs using global addresses, and to specify the distribution of data
across the processors of the machine. We also introduce Message Passing
Fortran (MPF), a Fortran extension that allows the formulation of
explicitly parallel programs which communicate via explicit message
passing.
Contents
I   Current programming practice on DMMPs by means of a simple example.
II  A programming model is introduced and an informal overview of the
    relevant elements of Vienna Fortran and MPF is provided.
III Basic parallelisation is described by discussing the individual
    phases involved in the translation from Vienna Fortran to MPF. This
    includes a discussion of the procedures and various optimisation
    techniques.
IV  Additional optimisation methods and advanced parallelisation
    techniques, including run-time analysis.
V   An overview of related work and a discussion of open research
    issues.
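The data-distribution idea underlying this style of compilation can be illustrated with a rough Python sketch of a block distribution and the "owner computes" rule, where each assignment is executed by the processor that owns the left-hand-side element. The function names and parameters are invented for the example and say nothing about Vienna Fortran's actual syntax.

```python
# Block distribution of an n-element global array over p processors:
# each processor owns one contiguous block of ceiling(n/p) elements.

def block_owner(i, n, p):
    """Processor owning global index i of an n-element array."""
    block = (n + p - 1) // p           # ceiling(n / p) elements per block
    return i // block

def local_indices(proc, n, p):
    """Global indices owned by processor proc (owner-computes work set)."""
    block = (n + p - 1) // p
    lo = proc * block
    hi = min(lo + block, n)
    return range(lo, hi)

n, p = 10, 4
print([block_owner(i, n, p) for i in range(n)])
print([list(local_indices(q, n, p)) for q in range(p)])
```

A compiler targeting message passing then restricts each processor's loops to its own index set and inserts sends/receives wherever a right-hand-side reference touches an element some other processor owns.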
Biography
Hans P. Zima is Professor for Computer Science, University of Vienna,
Austria, and Adjunct Professor, Computer Science, Rice University, USA.
He is also Director of the Austrian Center for Parallel Computation
(ACPC). He guided the development of Vienna Fortran, one of the major
inputs for the High Performance Fortran effort. His main research
interests are in the field of advanced programming environments for
parallel machines, in particular automatic parallelisation for
distributed-memory machines, performance analysis, and knowledge-based
transformation systems. He currently leads research efforts in the
context of the ESPRIT III projects PREPARE and PPPE.
PAPER PRESENTATIONS
TUESDAY, 15th JUNE 1993
09.00 - 09.30: Welcome
09.30 - 10.30: Plenary Session: The Rubbia Committee Report
B. Hertzberger, University of Amsterdam, The Netherlands
10.30 - 11.00: BREAK
11.00 - 12.30: Parallel Sessions
Track a: Architectures: Virtual Shared Memory
---------------------------------------------
Simulation-based Comparison of Hash Functions for Emulated Shared Memory,
    J. Keller & C. Engelmann, Universität des Saarlandes, Germany
Task Management, Virtual Shared Memory, and Multithreading in a
    Distributed Memory Implementation of Sisal, M. Haines & W. Böhm,
    Colorado State University, USA
Simulating the Data Diffusion Machine, E. Hagersten, M. Grindal,
    A. Landin, A. Saulsbury, B. Werner & S. Haridi, SICS, Sweden
Track b: Functional Programming
-------------------------------
2DT-FP: An FP Based Programming Language for Efficient Parallel
    Programming of Multiprocessor Networks, R. Wilhelm, Y. Ben Asher,
    G. Rünger & A. Schuster, Universitaet des Saarlandes, Germany
The Data-Parallel Categorial Abstract Machine, Gaëtan Hains &
    Christian Foisy, Universite de Montreal, Canada
Data Parallel Implementation of Extensible Sparse Functional Arrays,
    J. T. O'Donnell, University of Glasgow, UK
12.30 - 14.00: LUNCH: Bavarian Specialities
14.00 - 15.30: Parallel Sessions
Track a: Interconnection Networks: Embeddings
---------------------------------------------
Embeddings of Tree-Related Networks in Incomplete Hypercubes, S. Oehring,
    S. K. Das, Universitaet Wuerzburg, Germany
Static and Dynamic Performance of the Möbius Cubes, P. Cull, S. Larson,
    Oregon State University, USA
Optimal Mappings of an m-Dimensional FFT Communication to a k-Dimensional
    Mesh for Arbitrary m and k, Z. G. Mou, X. Wang, Brandeis University, USA
Track b: Vendor Session 1
-------------------------
15.30 - 16.00: BREAK
16.00 - 18.00: Parallel Sessions
Track a: Language Issues
------------------------
Implicit Parallelism: The United Functions and Objects Approach, J.
Sargeant,
University of Manchester, UK
Detection of Reductions in Sequential Programs, X. Redon & P. Feautrier,
Universite
Versailles-St. Quentin, France
Parallel Programming Using Skeleton Functions, R. L. While, J. Darlington,
A.J. Field, P.G. Harrison et al., Imperial College, UK
Data-parallel portable software platform: principles and implementation,
A. V. Shafarenko, C. Sutton & V. B. Muchnick, University of Surrey,
UK
Track b: Concurrency: Responsive Systems
-----------------------------------------
A Compositional Approach to Fault-Tolerance Using Specification
Transformation,
D. Peled & M. Joseph, AT&T Bell Laboratories, USA
Concurrent METATEM - A Language for Modelling Reactive Systems, M. Fisher,
University of Manchester, UK
Trace-based Compositional Reasoning about Fault Tolerant Systems, Henk
Schepers,
Jozef Hooman, Eindhoven University of Technology, The Netherlands
A Kahn principle for networks of nonmonotonic real-time processes,
    R. K. Yates & G. R. Gao, McGill University, Canada
20.00: RECEPTION Munich Town Hall
WEDNESDAY, 16th JUNE 1993
9.00 - 9.30: Bavarian Secretary of State, Dr. Wiesheu
9.30 - 11.00: Plenary Session
11.00 - 11.30: BREAK
11.30 - 13.00: Parallel Sessions
Track a: Interconnection Networks: Routing
------------------------------------------
Adaptive Multicast Wormhole Routing in 2D Mesh Multicomputers, A.-H.
Esfahanian,
X. Lin, P. K. McKinley, Michigan State University, USA
The Impact of Packetization in Wormhole-Routed Networks, J. H. Kim & A.
A. Chien,
University of Illinois at Urbana-Champaign, USA
Grouping Virtual Channels for Deadlock-Free Adaptive Wormhole Routing,
    Z. Liu, Royal Institute of Technology, Sweden
Track b: Logic Programming
--------------------------
Monaco: A High-Performance Flat Concurrent Logic Programming System, Evan
Tick,
University of Oregon, USA
Exploiting Recursion-Parallelism in Prolog, J. Bevemyr, T. Lindgren & H.
Millroth,
Uppsala University, Sweden
Why and How in the ElipSys OR-parallel CLP System, A. Véron, K. Schuerman,
    M. Reeve & L.-L. Li, ECRC, Germany
13.00 - 14.00: LUNCH
14.00 - 15.30: Parallel Sessions
Track a: Poster Session
-----------------------
Track b: Vendor Session 2
-------------------------
15.30 - 16.00 BREAK
16.00 - 18.00 Parallel Sessions
Track a: Architectures: Caches
------------------------------
Skewed-associative Caches, A. Seznec & F. Bodin, IRISA, France
Trace-Splitting for the Parallel Simulation of Cache Memory, N.
Ironmonger,
ETH Zentrum, Switzerland
Locality and False Sharing in Coherent-Cache Parallel Graph Reduction, A.
Bennett &
P. Kelly, Imperial College, UK
SLiD - A Cost-Effective and Scalable Limited-Directory for Cache Coherence,
    G. Chen, New York University, USA
Track b: Concurrency: Semantics
-------------------------------
Actor Programs Formal Development Using Structured Algebraic Petri Nets,
N. Guelfi &
D. Buchs, University of Geneva, Switzerland
A Parallel Programming Style and Its Algebra of Programs, C. Hankin, D.
Le Metayer &
D. Sands, Imperial College, UK
B(PN) - a Basic Petri Net Programming Notation, E. Best & R. P. Hopkins,
    Universitaet Hildesheim, Germany
A Calculus of Value Broadcasts, K. V. S. Prasad, Chalmers University of
Technology,
Sweden
18.30 - 19.30: Poster Session
20.00: BANQUET Marriott Hotel
THURSDAY, 17th JUNE 1993
9.00 - 10.00: Parallel Sessions
Track a: Tools
--------------
TRAPPER: A Graphical Programming Environment for Industrial
High-Performance
Applications, C. Scheidler, L. Schaefers & O. Kraemer-Fuhrmann,
Daimler-Benz AG,
Germany
Control and Data Flow Visualization for Parallel Logic Programs on a
Multi-window
Debugger HyperDEBU, J. Tatemura, H. Koike & H. Tanaka, University of
Tokyo
Track b: Neural Networks
------------------------
Artificial Neural Networks for the Bipartite and K-partite Subgraph
    Problems, J-S. Lai & S-Y. Kuo, National Taiwan University, R. O. China
Homogeneous Neuronlike Structures for Optimisation Variational Problem
Solving,
I. A. Kalyayev, Scientific Research Institute of Multiprocessor
Computing,
Russia
10.00 - 10.30: BREAK
10.30 - 12.30: Parallel Sessions
Track a: Scheduling
-------------------
Effectiveness of Heuristics and Simulated Annealing for the Scheduling of
    Concurrent Tasks - An Empirical Comparison, Z. Liu & C. Coroyer,
    INRIA, France
Task Scheduling with Restricted Preemptions, K. Ecker & R. Hirschberg,
Technische
Universitaet Clausthal, Germany
Effects of Job Size Irregularity on the Dynamic Resource Scheduling of a
    2-D Mesh Multicomputer, D. Min & M. W. Mutka, Michigan State
    University, USA
Static Allocation of Tasks on Multiprocessor Architectures with
    Interprocessor Communication Delays, S. Norre, Université Blaise
    Pascal - Clermont-Ferrand II, France
Track b: Specification & Verification
-------------------------------------
PEI: a single unifying model to design parallel programs, E. Violard &
G-R. Perrin,
University of Franche-Comte, France
Correctness of Automated Distribution of Sequential Programs, Cyrille
Bareau,
Benoit Caillaud, Claude Jard, Rene Thoraval, IRISA, France
Compositionality Issues of Concurrent Object-Oriented Logic Languages, E.
Pimentel,
J. M. Troya, Universidad de Malaga, Spain
Using State Variables for the Specification and Verification of TCSP
Processes,
Luis M. Alonso, R. Pena Mari, Universidad del Pais Vasco, Spain
12.30 - 14.00: LUNCH
14.00 - 15.30: Parallel Sessions
Track a: Algorithms
-------------------
A Parallel Reduction of Hamiltonian Cycle to Hamiltonian Path in
Tournaments,
E. Bampis, M. El Haddad, Y. Manoussakis & M. Santha, Universite de
Paris-Sud,
France
A Unifying Look at Semigroup Computations on Meshes with Multiple
Broadcasting,
S. Olariu, D. Bhagavathi, W. Shen & L. Wilson, Old Dominion
University, USA
A fast, simple algorithm to balance a parallel multiway merge, R.S.
Francis,
I. D. Mathieson & L. J. H. Pannan, CSIRO, Australia
Track b: Vendor Session 3
-------------------------
15.30 - 16.00: BREAK
16.00 - 17.30: Parallel Sessions
Track a: Architectures: Fine Grain Parallelism
----------------------------------------------
Some Design Aspects for VLIW Architectures Exploiting Fine-Grained
Parallelism,
W. Karl, Technische Universitaet Muenchen, Germany
Load Balanced Optimisation of Virtualised Algorithms for implementation
on Massively
Parallel SIMD Architectures, C. A. Farell & D. H. Kieronska, Curtin
University,
Australia
Performance evaluation of WASMII: a data flow machine based on the
virtual hardware,
X.-P. Ling & H. Amano, Keio University, Japan
Track b: Databases
------------------
On the Performance of Parallel Join Processing in Shared Nothing Database
    Systems, E. Rahm, R. Marek, Universität Kaiserslautern, Germany
Processing Transactions on Grip, a Parallel Graph Reducer, G. Akerholt,
K. Hammond,
S. Peyton Jones & P. Trinder, Glasgow University, UK
Arithmetic for Parallel Linear Recursive Query Evaluation in Deductive
Databases,
J. Robinson & S. Lin, University of Essex, UK
POSTER PAPERS
Computing the Complete Orthogonal Decomposition Using a SIMD Array
Processor,
E. J. Kontoghiorghes, Queen Mary and Westfield College, UK
A Dynamic Load Balancing Strategy for Massively Parallel Computers, D.
Talia,
M. Cannataro & Y. D. Sergeyev, CRAI, Italy
Issues in Event Abstraction, T. Kunz, Technische Hochschule Darmstadt,
Germany
Modelling Replicated Processing, M. Koutny, L. V. Mancini & G. Pappalardo,
University of Newcastle upon Tyne, UK
Procedures for Folding Transformations, M. Gusev & D. J. Evans,
Loughborough
University, UK
Performance Analysis of M3S: a Serial Multiported Memory Multiprocessor,
Ch. Rochange, P. Sainrat & D. Litaize, Universite Paul Sabatier,
France
Multi-Criteria: Degrees of Recoverability in Distributed Databases,
    M. Nygård, The Norwegian Institute of Technology, Norway
Deadlock-Free Adaptive Routing Algorithms for the 3D-Torus: Limitations
and
Solutions, P. Lopez & J. Duato, Universidad Politecnica de Valencia,
Spain
Convergence of Asynchronous Iterations of Least Fixed Points, J. Wei, GMD
Karlsruhe,
Germany
Efficient Parallel Simulation of Neural Networks, A. Zell, H. Bayer, R.
Huebner,
N. Mache & M. Vogt, Stuttgart Universitaet, Germany
Improve Instruction Bandwidth through Compact Encoding, O. S. Schoepke,
University
of Bath, UK
LU-Decomposition on a Massively Parallel Transputer System, S. Luepke,
Technische
Universitaet Hamburg, Germany
PSEE: Parallel System Evaluation Environment, E. Luque, R. Suppi &
    J. Sorribes, Universitat Autonoma de Barcelona, Spain
An Algorithm for distributing a finite transition system on a
shared/distributed
memory system, P. Caspi & A. Girault, LGI/IMAG, France
Implementation of a Digital Modular Chip for a Reconfigurable Artificial
Neural
Network, P. Plaskonos, S. Pakzad, B. Jin & A. R. Hurson,
Pennsylvania State
University, USA
Article-Acquisition: A Scenario for Non-Serializability in a Distributed
    Database, M. Nygård, The Norwegian Institute of Technology, Norway
An Empirical Study of Vision Programs for Data Dependence Analysis, L. A.
Barragan &
A. Roy, Universidad de Zaragoza, Spain
Cyclic Weighted Reference Counting without Delay, R. E. Jones & R. D.
Lins,
University of Canterbury, UK
Parallel Optimisation of Join Queries Using an Enhanced Iterative
Improvement,
M. Spiliopoulou, Y. Cotronis & M. Hatzopoulos, University of Athens,
Greece
Distributed Data Structures in Distributed Shortest Path Algorithms, J.
L. Traeff,
University of Copenhagen, Denmark
A Routing and Broadcasting Scheme on Faulty Star Graphs, N. Bagherzadeh &
N. Nassif,
University of California, USA
A Disabling of Event Structures, N. Anisimov, Far East Division of the
Russian
Academy of Sciences, Russia
Barrier Semantics in Very Weak Memory, R.S. Francis & A. N. Pears, LaTrobe
University, Australia
Using Hammock Graphs to Eliminate Nonstructured Branch Statements, F.
Zhang &
E. H. D'Hollander, University of Ghent, Belgium
Performance Modeling of Micro-kernel Thread Schedulers for Shared Memory
Multiprocessors, W. Van de Velde, J. Opsommer & E. H. D'Hollander,
University
of Ghent, Belgium
A Semantic Model of Data Flow Networks based on Process Algebras,
    C. Bernardeschi, A. Bondavalli & L. Simoncini, CNUCE-CNR, Italy
On the Time Complexity of Parallel Algorithms for Lattice Basis Reduction,
    C. Heckler & L. Thiele, Universität des Saarlandes, Germany
Computer Vision Applications Experience with Actors, M. Di Santo, F.
Arcelli,
M. De Santo & A. Picariello, Universita di Salerno, Italy
Grid Massively Parallel Processor, V.P. Il'in & Y. I. Fet, Siberian
Division of the
Russian Academy of Sciences, Russia
STEERING COMMITTEE
The PARLE conferences are controlled by a Steering Committee consisting
of some of the most renowned European experts:
Werner Damm (U. of Oldenburg, D)        Jean-Claude Syre (Bull SA, F)
Jose Delgado (INESC, P)                 Jorgen Staunstrup (TU Denmark, DK)
Lucio Grandinetti (U. of Calabria, I)   Mateo Valero (U. of Catalunya, E)
Constantin Halatsis (U. of Athens, GR)  Thierry Van der Pyl (DGXIII, CEC)
Ron Perrott (U. of Belfast, UK)         Pierre Wolper (U. of Liege, B)
Martin Rem (TU Eindhoven, NL)
ORGANISING COMMITTEE
PARLE'93 is being organised by a committee consisting of:
Arndt Bode (TU Munich, D)          Masaru Kitsuregawa (U. of Tokyo, J)
Werner Damm (U. of Oldenburg, D)   Rudi Kober (Siemens/ZFE, D)
Doug DeGroot (TI/CSC, USA)         Michael Ratcliffe (ECRC, D)
Ulrike Jendis (ECRC, D)            Mike Reeve (ECRC, D)
Peter Kacsuk (KFKI, H)             Gottfried Wolf (DLR, D)
GENERAL INFORMATION, TERMS AND CONDITIONS
OFFICIAL CONFERENCE ADDRESS
ECRC GmbH (PARLE'93), Arabellastrasse 17, 8000 Munich 81, Germany
tel. ++49 89 926990    fax. ++49 89 92699-170
CONFERENCE VENUE
Marriott Hotel, Berlinerstrasse 93, 8000 Munich 40, Germany
tel: ++49 89 360020 fax: ++49 89 3600220
CONFERENCE DATE
June 14-18, 1993
LANGUAGE
The official conference language is English
ENTRY VISA
Non-EEC residents may require an entry visa to enter Germany. Please
check with your local German consulate as early as possible.
ACCESS TO MUNICH CITY CENTRE
By plane: Munich International Airport is directly connected by the S8
commuter line; a station is located in the central area of the airport.
Leave the train at 'Marienplatz'. The journey takes about 45 minutes.
By train: Take any 'S' commuter train from the central station in the
direction of 'Ostbahnhof' as far as 'Marienplatz'. Look out for the green
signs showing a white letter 'S'.
ACCESS TO THE MARRIOTT HOTEL
By public transport: Proceed from 'Marienplatz' on the underground line
'U6' in the direction of 'Kieferngarten' as far as 'Nordfriedhof'. Leave
the station by the exit at the rear end of the train and follow the signs
to the Marriott Hotel.
By car: The hotel is located near the Schwabing exit of National Highway
9 (Nürnberg-Munich)
REGISTRATION PROCEDURE
Advanced: The enclosed tear-off form and payment must be received by the
organisers by April 30th, 1993. Completed forms should be sent to the
official conference address given above.
On-site: The Registration Desk will be open on Sunday, June 13th, between
18.00 and 20.00, and on each day of the conference from 8.30 until 17.00.
Fees: Registration fees include access to all conference sessions, one
copy of the conference proceedings, access to the Industrial Exhibition,
lunches, coffee breaks, and a ticket for the conference reception and
banquet. The special student rate does not include a copy of the
proceedings or a ticket for the banquet.
PAYMENT
Payment will only be accepted in German Marks. Fees may be paid by cash,
credit card (VISA or JCB), Eurocheque or direct bank credit transfer to:
bank: Dresdner Bank, Munich, Germany
sorting code: 700 800 00
account number: 3008 630/01
account holder: ECRC GmbH (wgn. PARLE'93)
Please be sure to ask your bank to indicate your name and 'PARLE'93'
along with any payments. Any banking charges incurred for processing your
payments will be charged directly to you.
CANCELLATIONS
Refunds of 50% of the fee already paid can only be made if a written
request is received by April 30th, 1993.
Should the PARLE'93 conference be cancelled for reasons beyond the
organisers' control, liability is limited to the paid registration fees.
PROCEEDINGS
Extra copies of the conference proceedings will be available from the
Reception Desk at a specially reduced conference price.
SOCIAL EVENTS
Reception: The official reception will be held in the historic 'Alte
Rathaussaal', the ancient seat of the Mayor of Munich, on June 15th.
Banquet: A banquet has been organised for June 16th.
Excursions: A wide variety of picturesque and interesting tours can be
booked through the Marriott Hotel.
HOTEL ROOM RESERVATIONS AT THE MARRIOTT HOTEL
A special discount rate is available for people attending PARLE'93 and
their guests. This is only available by returning the attached tear-off
registration form to the organisers before April 30th, 1993.
SPONSORSHIP
The organisers of PARLE'93 are grateful to be able to acknowledge the
support of the following organisations: AFCET, CEPIS, the Commission of
the European Communities, The Dresdner Bank AG, ECRC GmbH, GI, ITG, the
Technische Universitaet Muenchen.
CONFERENCE ATTENDANCE FEES
The registration fees include access to all conference sessions, the
Industrial Exhibition, lunches, coffee breaks, a copy of the conference
proceedings, a ticket for the banquet and reception.
                                        Advanced Registration  Late Registration
                                        (before April 30th)    (after April 30th)
Members of supporting organisations
  (AFCET, CEPIS, GI, ITG)                      DM725                 DM825
Students*                                      DM350                 DM450
Normal registration fee                        DM825                 DM925

*this special student rate does not include a copy of the conference
 proceedings or a ticket for the banquet. Student status will only be
 acknowledged if your registration is accompanied by a copy of a valid
 student identity card.
Tutorial attendance fee DM275 for one tutorial
Extra tickets for the banquet DM125 each
Extra tickets for the reception DM50 each
-----------------------------------------------------------------------
PARLE'93 REGISTRATION FORM
Please write clearly and return to the official conference address given
above.
PERSONAL INFORMATION
-----------------------------------------------------------------------
last name first name
-----------------------------------------------------------------------
address
-----------------------------------------------------------------------
-----------------------------------------------------------------------
country
-----------------------------------------------------------------------
telephone fax
-----------------------------------------------------------------------
email
-----------------------------------------------------------------------
HOTEL RESERVATIONS FOR THE MARRIOTT
The special PARLE'93 rates of DM195 per night (single room) or DM215 per
night (double room) are only available when your completed registration
form is returned by April 30th, 1993. The information given in this part
will be given to the Marriott Hotel and processed directly by them. Any
changes in your requirements should be referred to the Marriott Hotel.
Payment should be made directly to the hotel on your departure.
Please Reserve:
single room(s) for the nights of to June 1993
inclusive
---- ------ ------
double room(s) for the nights of to June 1993
inclusive
---- ------ ------
REGISTRATION FEES
Student registrations should be accompanied by a copy of a student
identity card valid for the current academic year. The conference fees
are given above.
conference registration fee: DM
-----------
tutorial registration fee: DM for tutorial number
-----------
-----
extra banquet tickets: DM
----- -----------
extra reception tickets: DM
----- -----------
TOTAL SUM DUE: DM
===========
PAYMENT
Registration is only valid once the full payment has been received.
Credit card payments are only possible for VISA or JCB account holders.
I prefer to pay: (please tick as appropriate)
by Eurocheque (enclosed) by direct transfer by
credit card
---- ---- ----
If you prefer to pay by credit card, please complete the authorisation
below:
I authorise you to charge the following credit card with the sum of DM
----------
-----------------------------------------------------------------------
card type: VISA or JCB card number
-----------------------------------------------------------------------
card expiry date holder's name
-----------------------------------------------------------------------
date signature
-----------------------------------------------------------------------
Please complete the following if you are paying a reduced fee for being a
member of one of the PARLE'93 supporting organisations (i.e. AFCET, ITG,
GI)
I am a member of (name of
organisation)
-------------------------------------
-----------------------------------------------------------------------
membership number
-----------------------------------------------------------------------
date signature
-----------------------------------------------------------------------
MISCELLANEOUS
Please tick as appropriate:
please do not include my address in any published mailing lists
----
please provide me with a receipt for the registration fees
----
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 14 19:12:10 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: jbowyer@cis.vutbr.cs (Bowyer Jeff)
Subject: Internationalization mailing list
Message-ID: <93-04-050@comp.compilers>
Keywords: i18n
Sender: compilers-sender@iecc.cambridge.ma.us
Reply-To: jbowyer@cis.vutbr.cs
Organization: Technical University of Brno, Czech Republic
Date: Wed, 14 Apr 1993 09:51:59 GMT
Approved: compilers@iecc.cambridge.ma.us
We want you to announce your work on our mailing list!
Do you use a program that has a non-English interface?
Have you converted any software to support more than one language for
its interface?
Will you sponsor a conference that might concern software with a
non-English interface?
Please tell us!
INSOFT-L@CIS.VUTBR.CS Internationalization of Software
Discussion List
Internationalization of software relates to two subjects:
1. Software that is written so a user can easily change the
language of the interface;
2. Versions of software, such as Czech WordPerfect, whose
interface language differs from the original product.
Topics discussed on this list will include:
-- Techniques for developing new software
-- Techniques for converting existing software
-- Internationalization tools
-- Announcements of internationalized public domain software
-- Announcements of foreign-language versions of commercial
software
-- Calls for papers
-- Conference announcements
-- References to documentation related to the
internationalization of software
This list is moderated.
To subscribe to this list, send an electronic mail message to
LISTSERV@CIS.VUTBR.CS with the body containing the command:
SUB INSOFT-L Yourfirstname Yourlastname
Owner:
Center for Computing and Information Services
Technical University of Brno
Udolni 19, 602 00 BRNO
Czech Republic
INSOFT-L-REQUEST@CIS.VUTBR.CS
From compilers Wed Apr 14 19:13:56 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: clark@zk3.dec.com (Chris Clark USSG)
Subject: Semantic actions in LR parser
Message-ID: <93-04-051@comp.compilers>
Keywords: LALR, parse, bibliography
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-008@comp.compilers> <93-04-036@comp.compilers>
Date: Wed, 14 Apr 1993 14:56:35 GMT
Approved: compilers@iecc.cambridge.ma.us
I was asked:
> Ahh, this sounds like you've rediscovered a result similar to the one in
> Brown, C. & Purdom, P. "Semantic Routines and LR(k) Parsers,"
> Acta Informatica 14 (1980), 299--315.
And was courteously sent a copy of the paper by the authors.
It is a very interesting paper, and it details three types of places for
(shift) action code: places where you can always place action code, places
where you can never place it, and places where you can sometimes place it.
The set of places you can always place action code matches the places you
can put action code in your grammar with yacc and not produce a conflict.
The set of places you can sometimes place action code matches the places
where in Yacc++(R) the action code must match on several possibilities.
(This is the same in Karsten Nyblad's parser generator.)
The set of places you can never place action code will always cause a
conflict. However, as Karsten describes, the parser generator can attempt
to postpone the action by one step of look-ahead without too much trouble.
(Yacc++ does this too, and there is a caution in the manual about
interactions with the lexer, since if the action code is designed to cause
the lexer to return a different token and it is postponed, the directive
to the lexer won't get to the lexer until after the lexer has returned the
token!)
The paper provides some interesting algorithms for calculating where
action code can be placed. Of course, in a pragmatic sense, it is just
easier to run it through the parser generator of your choice and find out
whether it gets conflicts. However, if you think the place where your
action code is placed should be legal, the algorithms in the paper can
tell you which case applies: you are right, but you need a better parser
generator; you need to put some other matching action code in; or you are
wrong and you need to rethink your semantics.
The most interesting aspect of the algorithms is that they will tell you
if adding matching action code into another production will have a
cascading effect producing new conflicts to be resolved with more matching
action code and whether the cascade will converge and eventually result in
a legal grammar or not. Our parser generator doesn't tell you that. For
most common cases, it isn't an issue, because the places to add action
code converge in one step, but for unusual grammars it would help.
I hope this was interesting.
Chris Clark
I am biased in favor of parser generators and work for,
Compiler Resources, Inc.
3 Proctor St.
Hopkinton, MA 01748
(508) 435-5016 fax: (508) 435-4847
For a technical literature packet (including a price list) send email
to: bz%compres@primerd.prime.com
From compilers Wed Apr 14 19:14:53 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Todd Hodes <tdh6t@onyx.cs.virginia.edu>
Subject: Re: Wanted: Regular Expression -> Finite Automata C code =-
Message-ID: <93-04-052@comp.compilers>
Keywords: DFA, books
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Virginia Computer Science Department
References: <93-04-023@comp.compilers> <93-04-041@comp.compilers>
Date: Wed, 14 Apr 1993 20:06:03 GMT
Approved: compilers@iecc.cambridge.ma.us
megatest!plethorax!djones@uu2.psi.com (Dave Jones) writes:
>
>Last month I posted an article pointing out two bugs in Sedgewick's Pascal
>algorithm for matching the NFA against an input string. The bug Todd
>reports is in the algorithm for building the NFA, in the "C" book, which I
>have never seen.
They are exactly identical.
>Helpful suggestion: Don't try to salvage the Sedgewick algorithms. See
>the New Dragon Book ... Both procedures are sketched out almost correctly
>in algorithms 3.3 and 3.4.
I just have the old one. From what I understand, its treatment
of this topic is the same. There is no implementation in it, just as
there was no implementation in the book I was originally working from,
Hopcroft & Ullman's "Intro to Automata Theory, Languages, & Computation"
(or some such). This problem must have been implemented bunches of times
in the past, and is obviously tricky. Algorithm 3.3 is correct -- the
problem is parsing the string into its constituents, which is equivalent
to parsing a CFG. I was hoping to find working code, rather than reinvent
the wheel.
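For reference, the two pieces involved (a Thompson-style NFA construction in the spirit of algorithm 3.3, and the epsilon-closure simulation of 3.4) fit in a short program. The following is an illustrative Python sketch for a tiny regex dialect (literals, parentheses, '|', '*') — written for clarity, not taken from any of the books mentioned:

```python
# Thompson construction for a tiny regex dialect, plus NFA simulation.
# States are integers; eps[s] lists epsilon-successors, sym[s] is an
# optional (char, target) transition.

class NFA:
    def __init__(self):
        self.eps = []
        self.sym = []

    def state(self):
        self.eps.append([]); self.sym.append(None)
        return len(self.eps) - 1

def parse(regex):
    """Recursive-descent regex parser; returns (nfa, (start, accept))."""
    nfa, pos = NFA(), [0]

    def atom():
        if regex[pos[0]] == '(':
            pos[0] += 1
            frag = alt()
            pos[0] += 1                      # skip ')'
        else:                                # literal character
            s, t = nfa.state(), nfa.state()
            nfa.sym[s] = (regex[pos[0]], t)
            pos[0] += 1
            frag = (s, t)
        while pos[0] < len(regex) and regex[pos[0]] == '*':
            pos[0] += 1
            s, t = nfa.state(), nfa.state()
            nfa.eps[s] += [frag[0], t]       # enter the loop, or skip it
            nfa.eps[frag[1]] += [frag[0], t] # repeat, or leave
            frag = (s, t)
        return frag

    def seq():
        frag = atom()
        while pos[0] < len(regex) and regex[pos[0]] not in '|)':
            nxt = atom()
            nfa.eps[frag[1]].append(nxt[0])  # concatenate fragments
            frag = (frag[0], nxt[1])
        return frag

    def alt():
        frag = seq()
        while pos[0] < len(regex) and regex[pos[0]] == '|':
            pos[0] += 1
            other = seq()
            s, t = nfa.state(), nfa.state()
            nfa.eps[s] += [frag[0], other[0]]
            nfa.eps[frag[1]].append(t); nfa.eps[other[1]].append(t)
            frag = (s, t)
        return frag

    return nfa, alt()

def matches(regex, text):
    nfa, (start, accept) = parse(regex)
    def close(states):
        stack, seen = list(states), set(states)
        while stack:
            for t in nfa.eps[stack.pop()]:
                if t not in seen:
                    seen.add(t); stack.append(t)
        return seen
    cur = close({start})
    for ch in text:
        cur = close({nfa.sym[s][1] for s in cur
                     if nfa.sym[s] and nfa.sym[s][0] == ch})
    return accept in cur
```

For example, matches("(ab)*", "abab") holds while matches("(ab)*", "aba") does not; error handling for malformed patterns is deliberately omitted.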
Thanks for the pointer, although it wasn't exactly what I needed.
T.
--
Todd Hodes, hodes@cs.Virginia.edu
From compilers Wed Apr 14 19:20:06 EDT 1993
Xref: iecc comp.lang.c++:34711 alt.sources:6157 comp.compilers:4511
Newsgroups: comp.lang.c++,alt.sources,comp.compilers
Path: iecc!compilers-sender
From: Mayan Moudgill <moudgill@cs.cornell.EDU>
Subject: C++ Parser toolkit: crude mmap added.
Message-ID: <93-04-053@comp.compilers>
Summary: Added a crude mmap() for those systems that don't have it.
Keywords: tools, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Cornell Univ. CS Dept, Ithaca NY 14853
References: <93-04-042@comp.compilers>
Date: Wed, 14 Apr 1993 20:55:56 GMT
Approved: compilers@iecc.cambridge.ma.us
I've added a crude mmap() for those systems that don't have it. It works
by new'ing the necessary space, and then reading the file into it. So,
EOF will not match (OUCH!!!). I'll try and work out something better when
I have the time.
As usual, it's available for anon ftp as pub/Parser.shar from
ftp.cs.cornell.edu :)
Mayan
[Sounds to me like you forgot to set _end, the pointer to the end of the file
buffer. -John]
From compilers Wed Apr 14 19:20:53 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: cliffc@rice.edu (Cliff Click)
Subject: Re: request C code which translates source into PDG
Message-ID: <93-04-054@comp.compilers>
Keywords: tools, analysis, optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Center for Research on Parallel Computations
References: <93-04-047@comp.compilers>
Date: Wed, 14 Apr 1993 23:02:47 GMT
Approved: compilers@iecc.cambridge.ma.us
wjackson@pinnacle.cerc.wvu.edu writes:
> Has anyone implemented algorithms which convert a source program into a
> Program Dependence Graph? We are currently working to implement such
> algorithms as outlined in the 1987 paper "The Program Dependence Graph and
> Its Use in Optimization" by Ferrante, Ottenstein, and Warren.
I've implemented conversion of a low-level 3-address intermediate
representation to something similar to a PDG. I do analysis and
optimizations on this form, then produce something close to assembly out.
Paul Havlak (paco@cs.rice.edu) has implemented conversion of Fortran
(really an AST of the Fortran program) to something similar to a PDG.
Paul does analysis and optimizations, then outputs readable Fortran.
Our forms are more like each other than they are like a PDG. We both have
fewer restrictions on the scheduling of computations than the PDG does,
and use this freedom to do better analysis.
Because of our different purposes the kinds and extents of our analyses
differ, but the basic graph formats are very similar. We both found this
kind of IR is very easy to analyze and optimize.
I can send you C++ code for what I do, but it's probably less useful than a
careful description of how you would go about doing it.
Cliff
--
cliffc@cs.rice.edu
Massively Scalar Compiler Group
From compilers Thu Apr 15 00:04:28 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: paco@legia.cs.rice.edu (Paul Havlak)
Subject: Re: request C code which translates source into PDG
Message-ID: <93-04-055@comp.compilers>
Keywords: tools, analysis, optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Rice University
References: <93-04-047@comp.compilers> <93-04-054@comp.compilers>
Date: Thu, 15 Apr 1993 02:49:05 GMT
Approved: compilers@iecc.cambridge.ma.us
I have code inside the ParaScope programming environment for parallel
Fortran that builds any or all of the following from a Fortran AST:
* Control flow graph
* with pre- and post-dominator trees
* with Tarjan interval tree (nested loops)
* Unfactored control dependence graph (no region nodes)
* edge labels according to corresponding branch values
* edge levels according to carrying loop
* Static Single-Assignment form with def-use chains
* with or without arrays modeled as scalars
* interprocedural variables modeled separately or as
one glob
* with or without def-def chains (output dependences)
* optionally converted to Gated Single-Assignment form
(a variant on the version in Ballance et al.,
SIGPLAN 90)
* Hashed value numbers based on SSA or GSA form
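As one concrete reference point for the control dependence items above, the standard Ferrante/Ottenstein/Warren construction can be sketched with set-based iterative postdominators. This is an illustrative toy (not ParaScope code), assuming a single-exit CFG, given as {node: [successors]}, in which every non-exit node has a successor:

```python
# Postdominators and unfactored control dependences for a small CFG.

def postdoms(succ, exit_node):
    """Iterative set-based postdominator computation."""
    nodes = set(succ)
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = {n} | set.intersection(*(pdom[s] for s in succ[n]))
            if new != pdom[n]:
                pdom[n] = new
                changed = True
    return pdom

def control_deps(succ, exit_node):
    """w is control dependent on u iff some edge u->v exists such that
    w postdominates v but does not strictly postdominate u."""
    pdom = postdoms(succ, exit_node)
    deps = {n: set() for n in succ}
    for u in succ:
        for v in succ[u]:
            for w in pdom[v] - (pdom[u] - {u}):
                deps[w].add(u)
    return deps
```

On the diamond CFG 1->{2,3}, 2->4, 3->4, nodes 2 and 3 come out control dependent on the branch at 1, and the join node 4 on nothing, as expected.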
There are several complete or nearly-complete program
representations in ParaScope:
* Control dependence graph and SSA form
* Control dependence graph and data dependence graph
(like a PDG, although the data dependence computation
still has technical limitations)
* Gated Single-Assignment form
Many more features to come, perhaps even documentation :-). Of my
code, only the value numbering is in C++ and relatively abstract; the
rest is in C with more details about our AST implementation than I
would put in today.
All this is a very small part of the ParaScope infrastructure.
ParaScope source is available for a relatively small fee -- for
details send mail to softlib@cs.rice.edu (works like netlib) with the
line:
send catalog
-- and another with the line:
send license
Regards,
Paul
--
Paul Havlak Dept. of Computer Science
Graduate Student Rice University, Houston TX 77251-1892
PFC/ParaScope projects (713) 527-8101 x2738 paco@cs.rice.edu
From compilers Thu Apr 15 09:59:32 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: xorian@solomon.technet.sg (Xorian Technologies)
Subject: Semantic actions in LR parser
Message-ID: <93-04-056@comp.compilers>
Keywords: parse, LR(1)
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-008@comp.compilers> <93-04-010@comp.compilers>
Date: Thu, 15 Apr 1993 09:35:59 GMT
Approved: compilers@iecc.cambridge.ma.us
roy@prism.gatech.edu (Roy Mongiovi) writes:
>LR parsers can only perform semantic actions when they recognize a handle
>(right-hand side). You can either split up the right-hand sides into
>pieces so that the pieces end where you need the semantic actions, or you
>can stick in epsilon productions whose only purpose is to cause semantic
>actions.
karsten@tfl.dk (Karsten Nyblad) writes:
>Even that can be generalized. All items of the kernel of a state have the
>same symbol before the . of the item, where the . denotes the point until
>which the parser has accepted the symbols of the production of the item.
>If the same action has been specified on all productions of the items of
>the kernel of a state on pushing the symbol before the ., then that action
>can be executed by the parser.
The principal problem with the above analysis is that it does not address
the grammar from the point of view of the language developer. In
particular, language developers would prefer not to think in terms of
states and items much less their manipulation.
Fortunately, in practice, the problem is usually not nearly so arbitrary
as the above discussion would imply (though, certainly in theory, it can
be). In fact, the problems associated with intermediate parser actions
(namely reduce-reduce conflicts) are more often than not *introduced* by
the traditional grammar formalism.
As I am certain Chris Clark is aware, there is a more natural approach to
grammar specification which better allows for intermediate actions in
productions: regular expressions. By regular expressions, though, I don't
just mean a preprocessor which converts regular expression grammars into
fixed expression grammars. That is what causes the problems in the first
place. Instead, to work, the regular expression must be taken right down
into the parser engine.
The result of using regular expressions to specify grammars is that
seemingly different productions are coalesced into a single production.
The implication is obvious: fewer items and, therefore, less chance of
conflicts associated with intermediate actions. Of course, this requires
that a single action be specified for the new single production but that
was already a requirement as previously discussed. In practice, even naive
users tend to write regular expression grammars that avoid the
redundancies (and intermediate reductions) of fixed expression grammars
and are therefore more suitable for intermediate actions, and the
availability of regular expressions provides a more natural mechanism for
the resolution of conflicts that might arise.
The essential problem is that an intermediate action, no matter how it is
implemented, introduces a requirement for the parser to decide what
production it is working on. This is inherently contrary to the nature of
LR parsing which seeks to defer the decision until reduction. Regular
expression grammars avoid this dilemma by eliminating many shift-reduce
choices by converting them into paths in the regular expression.
To see just how far this can be taken, consider your favorite grammar.
Rewrite (top to bottom or vice versa) by making macro substitutions by
hand to bring productions together. The only circumstance where
productions cannot be so merged is when a nonterminal is used more than
once in a nontrivial construct. For example, most of the productions of an
ANSI C expression can be brought together into a single regular expression
of nested repetitions. Similarly, much of the complexity of declarations
can be removed.
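To make the repetition idea concrete: in a hand-written parser the same effect shows up as loops, where each loop body is one of the "paths in the regular expression" and the value-folding action runs at a fixed, unambiguous point. An illustrative Python sketch (not LADE, whose actual notation is not shown here):

```python
# expr   := term (('+'|'-') term)*
# term   := factor (('*'|'/') factor)*
# factor := NUMBER | '(' expr ')'
# The while-loops are the regular-expression repetitions; the folding
# action fires once per iteration, with no intermediate reductions.

import re

def evaluate(src):
    toks = re.findall(r'\d+|[()+\-*/]', src) + ['$']
    pos = [0]
    def peek(): return toks[pos[0]]
    def take(): pos[0] += 1; return toks[pos[0] - 1]
    def factor():
        if peek() == '(':
            take(); v = expr(); take()   # consume '(' expr ')'
            return v
        return int(take())
    def term():
        v = factor()
        while peek() in '*/':            # repetition, not recursion
            op = take()
            v = v * factor() if op == '*' else v // factor()
        return v
    def expr():
        v = term()
        while peek() in '+-':
            op = take()
            v = v + term() if op == '+' else v - term()
        return v
    return expr()
```

Here evaluate("2+3*4") folds left-to-right inside the loops and yields 14 with the usual precedence, and no epsilon productions were needed to place the actions.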
None of this, of course, is new in theory but there are few production
quality compiler-compilers which support regular expressions. One such
product is LADE which not only allows regular expressions but provides
considerable semantic support for them so that you don't have to pick
through the parse stack trying to figure out, for example, whether or not
an optional construct occurred.
I strongly recommend that anyone interested in the issue of attaching
intermediate actions to productions look into regular expression grammars
and their facility for supporting, at the user level, what is needed at
the machine level.
(For more information on LADE, email: xorian@solomon.technet.sg or fax:
(65)253-7709.)
From compilers Fri Apr 16 12:10:32 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: ghiya@sally.cs.mcgill.ca (Rakesh Ghiya )
Subject: Reference formals in Pascal.
Message-ID: <93-04-057@comp.compilers>
Keywords: Pascal, question, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Thu, 15 Apr 1993 17:13:17 GMT
Approved: compilers@iecc.cambridge.ma.us
Hi all,
I wanted to know how exactly reference formals are implemented in
Pascal compilers: is the reference formal allocated space on the stack
which contains the address of the corresponding actual parameter, with
every access to the formal parameter being directed to the actual
parameter through this address (like dereferencing in C); or is it
implemented in some other way?
I would appreciate any feedback I receive on this.
Thanks a bunch,
Rakesh.
ghiya@cs.mcgill.ca
[I'm not aware of any other implementation with the required semantics.
Fortran arguments can be either reference or copy-in/copy-out, but Pascal
is not so forgiving. -John]
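A toy model may make the mechanism concrete. Treating the stack as a Python list and addresses as indices (slot numbers and layout all invented for illustration), the var formal's slot holds the address of the actual, and each access indirects through it:

```python
# Toy model of a Pascal "var" (reference) formal: the callee's stack slot
# for the formal holds the ADDRESS (here, an index) of the actual, and
# every read or write of the formal indirects through that slot.

stack = [0] * 8            # the pretend runtime stack

def caller():
    stack[0] = 10          # local variable x lives in slot 0
    callee(0)              # pass the address of x, not its value
    return stack[0]        # x was updated through the reference

def callee(addr_of_actual):
    stack[1] = addr_of_actual     # formal's slot holds the address
    stack[stack[1]] += 5          # "formal := formal + 5" dereferences
```

After caller() runs, the caller's variable holds 15: the write in the callee landed on the actual's slot, which is exactly the semantics Pascal var parameters require.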
From compilers Fri Apr 16 12:11:48 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: jimy@cae.wisc.edu
Subject: IR Transformations
Message-ID: <93-04-058@comp.compilers>
Keywords: optimize, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Thu, 15 Apr 1993 17:42:49 GMT
Approved: compilers@iecc.cambridge.ma.us
Could anyone give me pointers to papers on the subject of IR
transformations to suit code generation?
More specifically, suppose we want to generate code for x = 2 * a; The
problem is how to know, at the IR level (before code is generated), that
the above expression can be rewritten as x = a + a (assuming + is cheaper
than *). The problem is that the above transformation may be context
dependent and not always desirable. In other words, sometimes 2*a may be
covered more cheaply than a+a, e.g. by a shift left, which some processors
accomplish for free, bundled in a previous instruction.
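One way this is often handled is to leave 2*a alone in the IR and let instruction selection choose among equivalent tilings by cost; an illustrative Python sketch with invented instruction names and costs (a real back end would take both from the target description, adjusting for context such as a free bundled shift):

```python
# Defer "2*a vs. a+a vs. a<<1" to instruction selection: each candidate
# is an equivalent instruction sequence paired with its (made-up) cost,
# and the selector simply keeps the cheapest.

def select(candidates):
    """Return the instruction sequence with the lowest cost."""
    seq, _cost = min(candidates, key=lambda c: c[1])
    return seq

CANDIDATES = [
    (['mul x, a, 2'], 4),    # literal multiply
    (['add x, a, a'], 1),    # strength-reduced add
    (['shl x, a, 1'], 1),    # shift left by one
]
```

With these numbers the add wins (ties go to the earliest candidate); in a context where the shift is free, lowering its cost to 0 would flip the choice without touching the IR.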
Thanks for any help
Jim Yu
jimy@eckert.ece.wisc.edu
From compilers Fri Apr 16 12:16:39 EDT 1993
Xref: iecc comp.arch:27986 comp.parallel:5740 comp.compilers:4517
Newsgroups: comp.arch,comp.parallel,comp.compilers
Path: iecc!compilers-sender
From: stamos@bert.eecs.uic.edu (Jerry Stamatopoulos)
Subject: ISCA Workshop on Coordination - Prelim. Program and Regist. Material
Message-ID: <93-04-059@comp.compilers>
Keywords: conference, parallel
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Illinois at Chicago
Date: Fri, 16 Apr 1993 01:42:08 GMT
Approved: compilers@iecc.cambridge.ma.us
Workshop on Fine-Grain Massively Parallel Coordination
Intl. Symp. on Computer Architecture
San Diego, California
May 15, 1993
ORGANIZERS:
===========
Jon A. Solworth (Chair), University of Illinois at Chicago
Andrew Chien, University of Illinois at Urbana-Champaign
Gary Koob, Office of Naval Research
LOCAL ARRANGEMENTS
==================
Jerry Stamatopoulos, isca-ws@parsys.eecs.uic.edu
PRELIMINARY PROGRAM
===================
9:00 - 9:15 Opening remarks
9:15 - 9:45 "Long-lived Communication Management for iWarp"
Susan Hinrichs (CMU)
9:45 - 10:00 "Raising the Level of Abstraction of the Distributed Memory
Paradigm: The Abstract Topology Model"
W.K. Giloi and A. Schramm (Technical Univ. of Berlin)
10:00 - 10:15 "Cache Coherent Atomic Synchronization Primitives"
David V. James (Apple Computer)
10:15 - 10:30 "Compile-time Analysis to Implement Point-to-point
Synchronization in Parallel Programs"
John Nguyen (MIT)
10:30 - 11:00 Break
11:00 - 11:30 "Using Barrier Synchronization To Support Massive Fine-Grain
Parallelism Across Conventional Microprocessors"
H.G. Dietz, S. Ramakrishnan, D.G. Meyer, W.E. Cohen,
T.J. Parr, J.B. Sponaugle, R.W. Quong, A.K. Srikanth,
T.M. Chung, G. Krishnamurthy, C. Liou, et al. (Purdue Univ.)
11:30 - 11:45 "Do&Merge and its Implementation"
James M. Stichnoth (CMU)
11:45 - 12:00 "Smoke: A Parallel Processor Array with Multiple Communication
and Control Modes"
Abhaya Asthana and Boyd T. Mathews (AT&T Bell Labs)
12:00 - 12:30 "Optimal Phase Barrier Synchronization in k-ary n-cube
Wormhole-routed Systems using Multirendezvous Primitives"
Dhabaleswar K. Panda (Ohio-State Univ.)
12:30 - 2:00 Lunch
2:00 - 2:30 "How To Port Sequential Programs to Fine-Grain Machines"
William J. Dally (MIT)
2:30 - 2:45 "Network-based Coordination of Asynchronously Executing
Processes with Caches"
Craig Williams and Paul Reynolds, Jr. (Univ. of Virginia)
2:45 - 3:00 "Dynamic and Static Scheduling of Integrated Network
Barriers (INB's)"
Jon A. Solworth and Jerry Stamatopoulos (Univ. of Illinois)
3:00 - 3:30 "On Designing Processor-Network Interface for Fine-Grained
Distributed-Memory Multicomputers"
Yunn-Yen Chen and Chung-Ta King (National Tsing Hua Univ.)
3:30 - 4:00 Break
4:00 - 4:30 "Workload Characterization for Thread Placement on Multi-
threaded Architectures"
Radhika Thekkath and Susan J. Eggers (Univ. of Washington)
4:30 - 4:45 "Justifying Cache Memories for Data Flow Architectures"
Ponnarasu Shanmugam, Shirish Andhare, and Krishna M. Kavi (UTA)
4:45 - 5:30 Future Issues in Coordination
ISCA'93 WORKSHOPS
Sheraton Harbor Island, San Diego, CA
May 14-15, 1993
Workshop 2 (Sat only): Fine-Grain Massively Parallel Coordination
Jon A. Solworth (Chair), University of Illinois at Chicago
Andrew Chien, University of Illinois at Urbana-Champaign
Gary Koob, Office of Naval Research
WORKSHOPS REGISTRATION FORM
---------------------------
First Name ___________________ Last Name _______________________ Title ___
Company/Institution ______________________________________________________
Address ___________________________________________________________________
City ________________________________ State ____________ Zip _____________
Country _____________________________ Telephone __________________________
E-mail ___________________________________________________________________
Badge Name _______________________________________________________________
Fees Prior to April 26 After April 26
Member Nonmember Member Nonmember
----------------- -----------------
Workshop 2 (1 day) 100 100 120 120
Workshop number ________
TOTAL FEE ENCLOSED (US$) ________________
Make checks payable to ACM ISCA WORKSHOPS (credit card payments are
not acceptable for workshops) and forward with completed
registration form to:
ACM ISCA WORKSHOPS
c/o Pat Harris
Department of Information
and Computer Science
University of California
Irvine, CA 92717
USA
e-mail: harris@ics.uci.edu
Early registration discounts are extended to forms received by April 26,
1993. ACM or IEEE-CS members must include membership number to receive
member discount. Cancellations must be made in writing and received by
May 1, 1993. Workshops may be cancelled due to lack of registration. The
fee includes coffee breaks, lunches, and reception.
From compilers Fri Apr 16 12:18:07 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: krishna@cs.unm.edu (Ksheerabdhi Krishna)
Subject: Re: request C code which translates source into PDG
Message-ID: <93-04-060@comp.compilers>
Keywords: optimize, analysis
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Computer Science Department, University of New Mexico
References: <93-04-047@comp.compilers>
Distribution: usa
Date: Thu, 15 Apr 1993 19:32:30 GMT
Approved: compilers@iecc.cambridge.ma.us
> Has anyone implemented algorithms which convert a source program into a
> Program Dependence Graph?
Karl and Steven Ellcey did implement this; the implementation is described
in a paper by both of them in "Software Practice and Experience":
Experience Compiling Fortran to Program Dependence Graphs
by Karl Ottenstein and Steven Ellcey, vol22(1), 41-62, Jan 1992.
> If anyone else has done this or another implementation and has code
> available, we would appreciate the information.
They did their implementation in Modula-2, but in Sun's Modula-2
(different libraries), and to the best of my knowledge Sun has discontinued
its Modula-2 line, at least as of the last time I checked.
-begin-plug-
I am working on an implementation (restricted C -> PDG) right now, which
should be available sometime later this year.
-end-plug-
Cheers,
From compilers Fri Apr 16 12:19:16 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: roger@ac.upc.es
Subject: dependence graphs for vector machines
Message-ID: <93-04-061@comp.compilers>
Keywords: optimize, vector, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Fri, 16 Apr 1993 17:18:52 GMT
Approved: compilers@iecc.cambridge.ma.us
We are two PhD students working on instruction scheduling for vector
processors. We would like to test different register allocators and
instruction scheduling algorithms on a set of FORTRAN vectorizable loops.
^^^^^^^^^^^^
In order to do that we need to have the (low-level) dependence graph of each
loop, something like:
VLD VLD VLD VLD
\ / \ /
\ / \ /
\ / \ /
VADD VADD
\ /
\ /
\ /
\ /
\ /
\ /
VMUL
|
|
VST
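As an illustration of approach 2, a graph shaped like the one above can be emitted directly from the parsed expression tree of a loop body such as x(i) = (a(i)+b(i)) * (c(i)+d(i)). A toy Python sketch (tree format and node names invented; it sidesteps the aliasing and optimization issues raised below):

```python
# Build a low-level dependence DAG (VLD/VADD/VMUL/VST) from an expression
# tree.  Leaves are array names (vector loads); interior nodes are
# operator tuples.  Node ids are list indices; edges run producer->consumer.

OPS = {'+': 'VADD', '*': 'VMUL'}

def build_dag(tree, nodes, edges):
    """Emit nodes for `tree` bottom-up; return the id of its result."""
    if isinstance(tree, str):              # operand -> vector load
        nodes.append('VLD ' + tree)
    else:
        op, left, right = tree
        l = build_dag(left, nodes, edges)
        r = build_dag(right, nodes, edges)
        nodes.append(OPS[op])
        edges += [(l, len(nodes) - 1), (r, len(nodes) - 1)]
    return len(nodes) - 1

def loop_body_dag(rhs, lhs):
    nodes, edges = [], []
    root = build_dag(rhs, nodes, edges)
    nodes.append('VST ' + lhs)             # store the final result
    edges.append((root, len(nodes) - 1))
    return nodes, edges
```

For the tree ('*', ('+', 'a', 'b'), ('+', 'c', 'd')) stored to 'x', this yields four VLDs feeding two VADDs, which feed the VMUL and then the VST, matching the picture above.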
Our problem is how to obtain these graphs from the Fortran loops. We are
thinking of the following solutions:
1) Compile the loops with a real vectorizing compiler ( the Convex one in
our case) and get the assembler produced. Eliminate all the scalar code
from the assembler and keep only the vector part. Write a little program
that constructs the dependence graph from that ( renaming registers to
eliminate artificial dependencies ).
Problems with this approach:
- Difficult to handle memory aliases (i.e. is VLD 300(a3),r1
dependent on VLD 500(a3),r2 or not ? )
- The graph produced is? (terribly?) biased by the compiler used
2) Given that we are interested in loops, and usually the body of a loop is
"simple" (mainly expressions and assignment), write a tiny compiler
that parses the loops and generates the graph.
Problems with this approach:
- We won't be able to perform all the usual optimizations that a
compiler does (cse, strength reduction ...) so we'll probably end up
with a graph not very representative of real programs
3) Is there any tool that does what we want ?
4) Does the latest GNU gcc version (2.3.3?) vectorize for the Convex ?
5) Is there source code for a vectorizing compiler freely available ?
6) Any other suggestions ?
Thanks,
Roger Espasa
e-mail: roger@ac.upc.es
Marta Jimenez
e-mail: marta@ac.upc.es
From compilers Fri Apr 16 12:19:53 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: tbr@tfic.bc.ca (Tom Rushworth )
Subject: LALR(k) lookahead set algorithms
Message-ID: <93-04-062@comp.compilers>
Keywords: LALR, bibliography
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Fri, 16 Apr 1993 03:40:00 GMT
Approved: compilers@iecc.cambridge.ma.us
There was a series of papers several years back in TOPLAS and SIGPLAN Notices
about improved algorithms for LALR(1) lookahead set generation:
1982 October - TOPLAS vol 4 # 4,
"Efficient Computation of LALR(1) Look-Ahead Sets",
DeRemer and Pennello
1985 January - TOPLAS vol 7 # 1,
"A New Analysis of LALR Formalisms",
Park, Choe and Chang
1986 July - SIGPLAN Notices vol 21 # 7, (Proceedings of the SIGPLAN'86
Symposium on Compiler Construction)
"Unifying View of recent LALR(1) Lookahead Set Algorithms"
Ives
1987 April - SIGPLAN Notices vol 22 # 4,
"Remarks on Recent Algorithms for LALR Lookahead Sets"
Park and Choe
1987 August - SIGPLAN Notices vol 22 # 8,
"Response to Remarks on Recent Algorithms for LALR Lookahead Sets"
Ives
I implemented the original DeRemer and Pennello algorithm, but I'm rusty
enough on the subject that I found the later papers heavy going. Does
anyone know if Fred Ives published a more detailed paper later? The
August'87 paper mentions a paper submitted to TOPLAS, but if it was
accepted, I've missed it.
The point of all this is that I'm thinking of implementing either the
Park, Choe and Chang algorithm or the Ives algorithm, and I want as clear
an explanation as I can get. Any pointers or suggestions will be
appreciated!
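For anyone wanting a concrete anchor while reading these papers: the common core is the "digraph" algorithm, which, given a relation R and initial sets F0, computes F(x) = F0(x) ∪ ⋃{ F(y) : x R y } with a Tarjan-style traversal that gives every strongly connected component one shared set. An illustrative Python sketch (node names and the relation encoding are my own, not the papers'):

```python
# The "digraph" closure at the heart of DeRemer & Pennello's LALR(1)
# lookahead computation.  rel maps x to the y's with x R y; f0 gives the
# initial sets; nodes in the same SCC end up with identical results.

def digraph(nodes, rel, f0):
    INF = float('inf')
    stack = []
    n = {x: 0 for x in nodes}              # 0 = unvisited
    f = {x: set(f0[x]) for x in nodes}
    def traverse(x):
        stack.append(x)
        d = len(stack)
        n[x] = d
        for y in rel.get(x, []):
            if n[y] == 0:
                traverse(y)
            n[x] = min(n[x], n[y])
            f[x] |= f[y]
        if n[x] == d:                      # x is the root of its SCC
            while True:
                top = stack.pop()
                n[top] = INF
                f[top] = set(f[x])         # whole SCC shares one set
                if top == x:
                    break
    for x in nodes:
        if n[x] == 0:
            traverse(x)
    return f
```

On a two-node cycle a R b, b R a with b R c, both a and b end up with the union of all three initial sets while c keeps its own, which is exactly the SCC-collapsing behavior the papers rely on.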
----
Tom Rushworth (604) 733-0731 [FAX: 733-0634] e-mail: tbr@tfic.bc.ca {VE7TBR}
Timberline Forest Inventory Consultants
From compilers Fri Apr 16 16:56:33 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: goer@midway.uchicago.edu (Richard L. Goerwitz)
Subject: FIRST() algorithm
Message-ID: <93-04-063@comp.compilers>
Keywords: LR(1), question
Sender: compilers-sender@iecc.cambridge.ma.us
Reply-To: goer@midway.uchicago.edu
Organization: University of Chicago
Date: Fri, 16 Apr 1993 16:16:37 GMT
Approved: compilers@iecc.cambridge.ma.us
Perhaps this is not a puzzle for old hands. For me, though, it's
a real conundrum. Where is the breakdown in the following sequence?
1) LR-family parser generators associate actions with reductions
2) To calculate when a reduction must take place, one uses (among
other things) the FIRST() algorithm we all know and love
3) FIRST() does not work with left recursive rules, so we must
first convert left recursion to right recursion
4) The only left recursion -> right recursion algorithms I know
are only guaranteed to work on grammars without epsilon moves
5) To remove epsilon moves from a grammar that specifies actions
for epsilon moves will alter the action-reduction associations in
an unacceptable way
6) hence I can't do step 1 except for grammars that don't specify
actions for epsilon moves (and, by implication, for grammars that
have left recursion, since conversion to right recursion
introduces new nonterminals and action->epsilon-move associations).
Where's the weak link in my reasoning?
--
-Richard L. Goerwitz goer%midway@uchicago.bitnet
goer@midway.uchicago.edu rutgers!oddjob!ellis!goer
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Sun Apr 18 20:10:19 EDT 1993
Xref: iecc comp.compilers:4522 misc.jobs.offered:27038
Newsgroups: comp.compilers,misc.jobs.offered
Path: iecc!compilers-sender
From: compilers-jobs@iecc.cambridge.ma.us
Subject: Compiler positions available for week ending April 18
Message-ID: <93-04-064@comp.compilers>
Keywords: jobs
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Sun, 18 Apr 1993 12:00:03 GMT
Approved: compilers@iecc.cambridge.ma.us
This is a digest of ``help wanted'' and ``position available'' messages
received at comp.compilers during the preceding week. Messages must
advertise a position having something to do with compilers and must also
conform to the guidelines periodically posted in misc.jobs.offered.
Positions that remain open may be re-advertised once a month. To respond
to a job offer, send mail to the author of the message. To submit a
message, mail it to compilers@iecc.cambridge.ma.us.
-------------------------------
Subject: Position Available, C++, Pittsburgh, PA.
Date: Fri, 16 Apr 93 16:36:07 -0400
From: Sam Harbison <harbison@tartan.com>
SENIOR MEMBER, TECHNICAL STAFF-- C++; Debuggers; Pittsburgh, PA.
This is a software engineering and programming job in a new group
developing a C++ programming environment for embedded systems. The job
involves individual technical contributions and work in groups.
The successful applicant is expected to be--or quickly become--thoroughly
familiar with all aspects of the C++ language, and will be the group's
primary resource on C++. He or she will assume overall responsibility for
designing, developing, and/or modifying a proprietary embedded-systems
symbolic debugger to work with C++, by designing and implementing a C++
subset interpreter in the debugger. Subsequently, the applicant will
assume other tasks involving C++ technical leadership in the project.
Requirements: M.S. or Ph.D. in computer science; 4 or more years of
experience in programming languages and their implementations. Thorough
knowledge of C++, and excellent software engineering skills. Some
exposure to embedded systems. Ability to work well individually and in a
group. Project management experience is a plus.
If interested, send your resume to:
Personnel Dept.
Tartan, Inc.
300 Oxford Drive
Monroeville, PA 15146
(412) 856-3600
--or email to harbison@tartan.com
Expect no response until the end of April, due to travel schedules.
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Sun Apr 18 20:11:00 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: mauney@csljon.csl.ncsu.edu (Jon Mauney)
Subject: Re: FIRST() algorithm
Message-ID: <93-04-065@comp.compilers>
Keywords: LR(1)
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: NCSU
References: <93-04-063@comp.compilers>
Date: Sun, 18 Apr 1993 20:50:11 GMT
Approved: compilers@iecc.cambridge.ma.us
goer@midway.uchicago.edu (Richard L. Goerwitz) writes:
>Perhaps this is not a puzzle for old hands. For me, though, it's
>a real conundrum. Where is the breakdown in the following sequence?
> 3) FIRST() does not work with left recursive rules, so we must
> first convert left recursion to right recursion
>Where's the weak link in my reasoning?
First works fine with left-recursion. First is a set of terminals, and
therefore of finite size; it is easily calculated with an iterative
algorithm.
You are probably thinking of the LL-family use of First to build the
Predict() function, which does have trouble with left-recursion.
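A minimal sketch of the iterative computation (hypothetical grammar and
names, not code from either post) shows why left recursion is harmless:
the sets only grow, so the loop simply runs to a fixed point:

```python
# Iterative FIRST-set computation.  Terminals map to themselves; ""
# stands for epsilon.  A textbook fixed-point sketch, nothing more.
def first_sets(grammar, terminals):
    FIRST = {t: {t} for t in terminals}
    for nt in grammar:
        FIRST[nt] = set()
    changed = True
    while changed:
        changed = False
        for nt, productions in grammar.items():
            for prod in productions:
                before = len(FIRST[nt])
                if not prod:                      # epsilon production
                    FIRST[nt].add("")
                for sym in prod:
                    FIRST[nt] |= FIRST[sym] - {""}
                    if "" not in FIRST[sym]:
                        break
                else:
                    if prod:                      # every symbol nullable
                        FIRST[nt].add("")
                if len(FIRST[nt]) != before:
                    changed = True
    return FIRST

# Left-recursive grammar:  E -> E + T | T ;  T -> id
grammar = {"E": [["E", "+", "T"], ["T"]], "T": [["id"]]}
fs = first_sets(grammar, {"+", "id"})
```

FIRST(E) comes out as {id} even though E is left recursive; the
left-recursive production just contributes nothing until FIRST(E) has
been seeded by the other production.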
--
Jon Mauney mauney@csc.ncsu.edu
Mauney Computer Consulting (919)828-8053
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Mon Apr 19 10:20:12 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: xorian@solomon.technet.sg (Xorian Technologies)
Subject: Semantic predicates into grammar specifications
Message-ID: <93-04-066@comp.compilers>
Keywords: parse, tools
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-044@comp.compilers>
Date: Mon, 19 Apr 1993 01:17:31 GMT
Approved: compilers@iecc.cambridge.ma.us
parrt@ecn.purdue.edu (Terence J Parr) writes:
>I'm very pleased by the posting of Mayan Moudgill
><moudgill@cs.cornell.EDU>; people are beginning to see that semantic
>predicates are the way to recognize context-sensitive constructs rather
>than having the lexer change the token type (ack!).
>
>We call this a *validation* semantic predicate (we have syntactic
>predicates in the next release of PCCTS). Predicates can also be used to
>distinguish between two syntactically ambiguous productions
>(*disambiguating* semantic predicates).
Terence only begins to scratch the surface of what can be accomplished by
the introduction of semantic predicates into grammar specifications. His
examples concentrated on lookahead (looking up the next symbol in the
symbol table), but this is by far the least interesting use of semantic
predicates.
Semantic predicates, as defined in LADE, allow you to attach arbitrary
computations to any production: the production will only be reduced if the
predicate returns true. The predicates have access not only to the
lookahead symbol(s), but to the entire state of lexical, syntactic and
semantic analysis.
Here are two examples of what can be achieved:
The predicate has access, for example, to the synthesized attributes of
the production's symbols. You might restrict a particular production to
allow only expressions of a particular type, or only type expressions
which denote a class. This is of particular importance in such languages
as C++, where type expression equivalence is not name based.
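As an illustration only (the posting does not show the LADE API, so the
symbol table and names below are invented), a predicate consulting
semantic state can pick between two productions for the same surface
syntax:

```python
# Disambiguating "name ( arg )" with a semantic predicate: the chosen
# production depends on the declared type of `name`, i.e. on semantic
# state, not on syntax alone.
symtab = {"f": "function", "v": "array"}

def classify_apply(name):
    # the predicate: consult the state of semantic analysis
    kind = symtab.get(name)
    if kind == "function":
        return "call"          # reduce the function-call production
    if kind == "array":
        return "index"         # reduce the array-indexing production
    raise SyntaxError(name + " is neither callable nor indexable")
```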
In C++, there are many pathological ambiguities which are unresolvable for
lr(k) or ll(k) for any k. They require, instead, an unbounded lookahead.
This can be accomplished by predicating the reduction of a production on a
successful recursive parsing of the forward stream. In essence, the
current parse is suspended and a secondary, simplified, parse is
initiated. If the forward parse succeeds, then the production is reduced
and the primary parsing proceeds as usual from the original point. If the
forward parse fails, another alternative is considered.
As a general rule, though, you should only use predicates for the purpose
of disambiguation; if there is no alternative syntactic interpretation of
a string, let the error handling fall to the semantic analysis. But where
syntactic ambiguities occur, whether shift-reduce or reduce-reduce,
semantic predicates are a powerful tool for their resolution and certainly
a much cleaner and simpler approach than hacking the lexer.
A more interesting question arises with respect to language design:
should languages be intentionally designed to take advantage of semantic
predicates? A purist might be inclined to answer no, but consider that the
typical programmer does not look at a program and see only the syntactic
information; he does not look at it with PDA-eyes. Instead, he not only
sees the semantic information that has passed before but also that which
lies ahead. (My favorite example is the function call/array reference
ambiguity in Ada. No rational Ada programmer would see it as an ambiguity,
for he would have to practice extreme myopia to fail to notice the
declarations that preceded the use.) If anything, semantic predicates
allow for more natural languages to be defined, languages which are not so
syntactically contrived. That, I think, is the best argument for a closer
look at semantic predicates in grammar specifications.
For more info on LADE, fax or send us an email.
Xorian Technologies
email: xorian@solomon.technet.sg
Fax: +65 253-7709
Tel: +65 255-7151
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 20 10:07:12 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: parrt@ecn.purdue.edu (Terence J Parr)
Subject: Re: Semantic predicates into grammar specifications
Message-ID: <93-04-067@comp.compilers>
Keywords: parse, tools
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-044@comp.compilers> <93-04-066@comp.compilers>
Date: Mon, 19 Apr 1993 20:45:31 GMT
Approved: compilers@iecc.cambridge.ma.us
I thank xorian@solomon.technet.sg (Xorian Technologies) for furthering my
brief introduction to predicates. A couple of comments on his summary:
As with the LADE system, our semantic predicates have access to all
information about the current parse, results of user actions, current
lexical state, etc. (although LL(k) parsers know more about their *exact*
position in the parse than LR(k) parsers do).
Of particular interest in Xorian's posting is:
> In C++, there are many pathological ambiguities which are unresolvable for
> lr(k) or ll(k) for any k. They require, instead, an unbounded lookahead.
> This can be accomplished by predicating the reduction of a production on a
> successful recursive parsing of the forward stream. In essence, the
> current parse is suspended and a secondary, simplified, parse is
> initiated. If the forward parse succeeds, then the production is reduced
> and the primary parsing proceeds as usual from the original point. If the
> forward parse fails, another alternative is considered.
Quoting from Ellis and Stroustrup's ARM where they discuss some rather
nasty C++ ambiguities:
"In a parser with backtracking the disambiguating rule can be stated very
simply:
[1] If it looks like a *declaration*, it is; otherwise
[2] if it looks like an *expression*, it is; otherwise
[3] it is a syntax error."
PCCTS notation for Stroustrup's solution is simply:
stat: (declaration)?
| expression
;
PCCTS-generated parsers are completely deterministic, LL(k), until you
enter a (...)? block which can be viewed as a guess block (backtracking).
Note that this guess block is NOT a simplified parse; hence, you will be
doing arbitrary lookahead with full CFG power (not regular expressions,
for example). The full form of our (...)? blocks are:
(syntactic_predicate)? conditional_production
where syntactic_predicate can be any EBNF grammar construct (except a new
rule definition). If the EBNF grammar fragment is matched on the input
stream, the conditional_production is then applied. The short form, as
employed above, is
(grammar_fragment)?
which is really
(grammar_fragment)? grammar_fragment
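What such a guess block boils down to can be sketched as an ordinary
recursive-descent parser that saves the input cursor, speculates, and
backtracks on failure (token kinds and rules below are invented; this is
not actual PCCTS output):

```python
# A hand-written rendition of:  stat: (declaration)? | expression ;
class Parser:
    def __init__(self, tokens):
        self.toks, self.pos = tokens, 0

    def expect(self, kind):
        if self.pos >= len(self.toks) or self.toks[self.pos][0] != kind:
            raise SyntaxError("expected " + kind)
        tok = self.toks[self.pos]
        self.pos += 1
        return tok

    def stat(self):
        save = self.pos
        try:
            return self.declaration()  # the guess, with full CFG power
        except SyntaxError:
            self.pos = save            # backtrack, try the alternative
            return self.expression()

    def declaration(self):             # e.g.  int x ;
        t = self.expect("TYPE")
        n = self.expect("ID")
        self.expect("SEMI")
        return ("decl", t[1], n[1])

    def expression(self):              # e.g.  x ;
        n = self.expect("ID")
        self.expect("SEMI")
        return ("expr", n[1])

result = Parser([("ID", "x"), ("SEMI", ";")]).stat()
```

The try/restore pair is exactly Stroustrup's rule: if it parses as a
declaration it is one; otherwise the input position is rewound and the
expression alternative is attempted.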
In summary, I strongly advocate the use of semantic and syntactic (guessing)
predicates in deterministic parsers and am happy that LADE and others are
of the same mind.
Semantic predicates in PCCTS were released in December 1992 as version
1.06. Version 1.07, which includes the syntactic predicates, will be out
this Summer. Information about PCCTS can be obtained by mailing to
pccts@ecn.purdue.edu with a blank "Subject:" line.
Terence Parr
Purdue University Electrical Engineering
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 20 10:08:26 EDT 1993
Xref: iecc comp.compilers:4526 comp.sys.sun.misc:7764
Newsgroups: comp.compilers,comp.sys.sun.misc
Path: iecc!compilers-sender
From: hpage@access.digex.com (Howard W Page)
Subject: Sun C and Fortran options
Message-ID: <93-04-068@comp.compilers>
Followup-To: comp.sys.sun.misc
Summary: Looking for Sun optimization flags that work best.
Keywords: C, Fortran, performance, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: AIB Software, Inc.
Date: Tue, 20 Apr 1993 01:13:26 GMT
Approved: compilers@iecc.cambridge.ma.us
I'm doing research on the Sun C and Fortran compilers, trying to determine
the relative effect of the optimization flags. In addition to the
documented flags I've also found the loop unrolling option invoked by
-Qoption iropt -lN, where N is the level of unrolling. Are there other
supposedly hidden flags that people know about that I might try? Please
respond to hpage@digex.com.
Thanks
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 20 10:12:06 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Sanjay Jinturkar <sj3e@server.cs.virginia.edu>
Subject: Run time optimizations
Message-ID: <93-04-069@comp.compilers>
Keywords: optimize, question, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Virginia Computer Science Department
Date: Tue, 20 Apr 1993 02:10:16 GMT
Approved: compilers@iecc.cambridge.ma.us
Compilers generate very conservative code. In the absence of some
information, the compiler assumes the worst case and generates code
accordingly. How about generating two pieces of code -- one conservative
and the other with some aggressive optimizations -- and then making a
check at run time (about the information that was missing at compile
time) to see which piece of code should be executed? An example use of
such a technique would be an optimization which is safe only in the
absence of aliasing: the aliasing information could be checked at run
time and the appropriate piece of code executed. Will such techniques
pay? Is there some previous work in this area? If so, could someone give
some pointers to it?
Thanks in advance.
-Sanjay
[People have done this from time to time. The HP3000 APL system in the late
1970s generated code on the fly. The first time you ran a function, it
generated very optimistic code that assumed that the arguments to the
function would always be of the same type and shape as they were on the first
call, with "signature" code at the beginning to check that the assumptions
were satisfied. If not, it fell back into the compiler which generated
slower but more general code. It worked pretty well considering how slow
the underlying machine was. -John]
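The alias-check variant can be rendered as a toy sketch (an invented
example, not from any real compiler; in a compiler both versions would
be generated code, here they are Python stand-ins):

```python
# Two-version scheme: the aggressive copy is only used when a cheap
# run-time test shows the operands are not aliased.
def copy_conservative(dst, src, n):
    for i in range(n):                 # element by element, always safe
        dst[i] = src[i]

def copy_aggressive(dst, src, n):
    dst[:n] = src[:n]                  # bulk move (think memcpy / vector code)

def copy(dst, src, n):
    if dst is src:                     # the run-time aliasing check
        copy_conservative(dst, src, n)
    else:
        copy_aggressive(dst, src, n)
```

The check itself must be cheap relative to the code it guards, which is
why this pays off mainly for loops.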
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 20 10:13:23 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: chris@stokeisland.ohi.com (Chris Traynor)
Subject: Pascal grammar
Message-ID: <93-04-070@comp.compilers>
Keywords: Pascal, parse, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Tue, 20 Apr 1993 07:07:29 GMT
Approved: compilers@iecc.cambridge.ma.us
All:
Quite a while ago I saw someone ask for a Pascal grammar. The
replies indicated that they could hack one out or get a partial one from
the p2c utility. Well, I just saw a public domain grammar for ISO
Pascal at ftp.uu.net -- the path is
usenet/comp.sources.unix/volume4/iso_pascal.Z. The file is unfortunately
dated 1986, but should be the best starting point for a lexer and parser.
Hope this helps that person.
Cheers,
Christopher Traynor, |Object Horizons, Ltd.
[How much has Pascal changed since 1986? -John]
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 20 10:13:56 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: dekker@dutiag.twi.tudelft.nl (Rene Dekker)
Subject: research on transformational systems
Message-ID: <93-04-071@comp.compilers>
Keywords: translator, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Delft University of Technology
Date: Tue, 20 Apr 1993 08:23:43 GMT
Approved: compilers@iecc.cambridge.ma.us
As a PhD project I am working on transformational systems. These are
systems that can perform transformations from one language into another.
Generally you describe these transformations by rules acting on abstract
syntax trees. Examples of the kind of transformations that can be
described are: optimization, code generation, parallelization, and
reverse engineering. Examples of transformational systems are PUMA (GMD
Karlsruhe), SAFE/TI (ISI), and HOPE (Darlington).
I am looking for literature on transformational systems. In particular,
any survey articles and recent research are welcome.
Thanks,
Rene.
--
Rene Dekker dekker@dutiba.tudelft.nl +3115-783850
Delft University of Technology Technical Mathematics and Informatics
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 20 10:16:55 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Paul.Klint@cwi.nl (Paul Klint)
Subject: CFD: European Conferences on Programming Research
Message-ID: <93-04-072@comp.compilers>
Followup-To: poster
Keywords: conference, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: CWI, Amsterdam
Date: Tue, 20 Apr 1993 13:27:38 GMT
Approved: compilers@iecc.cambridge.ma.us
CALL FOR DISCUSSION
===================
Should we restructure the European
----------------------------------
Conferences in
---------------
Programming Research?
---------------------
Paul Klint (Paul.Klint@cwi.nl)
Spring 1993
Abstract
Among some European researchers in the areas of programming languages,
semantics and programming, the concern is growing that this field is not
adequately represented in a major, high quality conference in Europe.
The many conferences being organized in this and related fields (e.g.,
PLILP, ESOP, CC, PARLE, TAPSOFT) are competing with each other in a too
small market, and therefore they have a hard job building up sufficient
critical mass to become competitive with the big conferences in the USA.
Can we join efforts and change this situation? In this note I propose
to create a European organization (tentatively called EAPLS --- European
Association for Programming Languages and Systems --- patterned after
the EATCS) that aims at creating a platform of researchers who can
restructure the European conferences in the desired direction. This note
explains this idea in more detail and invites you to participate in a
debate on this matter.
1. BACKGROUND
=============
The concern about the future direction of the ``Programming Languages
and Systems'' area has been expressed in several programme committees of
European conferences in this field. As far as I can reconstruct history,
the following people were involved in these discussions: Harald
Ganzinger, Chris Hankin, Berthold Hoffman, Neil Jones, Uwe Kastens,
Peter D. Mosses, Alan Mycroft, and Reinhard Wilhelm. It was the latter
who contributed the analysis in the next section and suggested that I
take the initiative to create a European organization in this field.
2. THE CURRENT SITUATION
========================
The situation concerning European conferences in the areas of
Programming Languages, Semantics, and Programming is not satisfactory.
Different groups of scientists have established conference and workshop
series which compete for a too small market. Of these, only ICALP has a
(more or less) permanent carrier, i.e. the EATCS. The others were
established by ESPRIT projects or Basic Research Actions, groups in or
among the national computer science organizations, or just groups of
cooperating individuals.
ICALP (International Colloquium on Automata, Languages, and Programming)
is the most established series; however, in the course of time it has
suffered several turns of focus. The last conferences were dominated by
papers on algorithms and complexity; few papers had to do with automata,
some with semantics of programming languages, and none with programming.
ESOP (European Symposium on Programming) was established in 1985 and has
been organized in 1986 (Saarbruecken), 1988 (Nancy), 1990 (Copenhagen),
and 1992 (Rennes). Its strong areas are semantics, types, and functional
programming.
PLILP (Programming Language Implementation and Logic Programming) has
been organized in 1988 (Orleans), 1990 (Linkoeping), 1991 (Passau), and
1992 (Leuven). The striving for a palindromic name has caused a bias
towards logic programming mixed with a certain amount of implementation
matters.
CC (Compiler Construction) is the continuation of a workshop series in
the former German Democratic Republic. Its 1992 instance was run in
Paderborn. It is completely devoted to language implementation.
PARLE (Parallel Architectures and Languages Europe) is organized in
Eindhoven by Philips. It has taken place in odd years since 1987. As the
name states, it covers the combination of parallel languages and
architectures.
TAPSOFT (Theory and Practice of Software ...)
ALP (Algebraic and Logic Programming)
The competition of too many conferences for a rather small supply of
scientific results has prevented any of the series from really reaching
a high international standing. Frankly stated, ESOP never really makes
it to the POPL level, PLILP never to the level of ICLP, CC never to that
of ACM SIGPLAN PLDI, etc.
This is made even worse when two of the conferences fix their deadlines
on the same day, e.g. March 1, 1992 for CC'92 and PLILP'92.
3. PLAN OF ACTION
=================
We could remedy the situation sketched above by creating a scientific
organization similar to EATCS that takes the responsibility to
restructure the current situation and work in the direction of a major,
high quality conference. The profile of such an organization is sketched
in the next section.
Of course, this change cannot be achieved by force but only by
persuasion. Ideally, we start by synchronizing the dates and places of
some of the conferences. And indeed, the programme committees of ESOP,
CC and CAAP have already agreed to such a synchronization and will hold
their conferences in the same week in Edinburgh in 1994.
Each conference can then keep its own identity but profit from the
presence of the others (number of attendees, increased possibilities to
get funding and sponsoring, discounts on facilities, resources available
for common events, etc.). Later on, if it turns out that this
synchronization is beneficial, we may transform this set of cooperating
conferences into a single conference with separate sections.
I propose the following plan of action to investigate whether there is
enough support for this line of development.
o April-May 1993: Discussion (by E-mail) of this document among
colleagues.
o June 1993: If the outcome of the discussion is positive: make a final
proposal for EAPLS, and request final comments.
o July-August 1993: Official creation of EAPLS, establish board and
scientific council.
4. PROFILE OF EAPLS
===================
Aims
----
o Act as an international professional non-profit organization
representing the interests of its members.
o Promote research and education in the area of ``Programming'', here
understood as the design, specification and implementation of
programming languages and systems.
o Promote the exchange of ideas and results in the area of Programming.
o Organize an annual international conference on Programming Languages
and Systems and publish the proceedings.
Actions
-------
o Organize and sponsor summer schools.
o Sponsor specialist workshops and national meetings.
o Sponsor scientific publications in the field.
o Cooperate with related scientific and national societies and
institutions.
Members
-------
o Researchers.
o Students.
Organization
------------
o A small board consisting of a president, vice-president, treasurer and
secretary.
o A larger scientific council with members from the European countries.
5. HOW CAN YOU PARTICIPATE?
===========================
Of course, you may want to react directly to the text of this document itself.
In addition, here is a list of explicit questions:
1. Do you agree/disagree with the observations in Section 2 concerning
the current situation of European conferences in the field of
Programming?
2. Do you support/oppose the idea of creating an organization like
EAPLS as sketched in Section 4?
3. Do you have additional suggestions for the aims, actions, members,
or organization of EAPLS?
Alan Mycroft has arranged for EAPLS to be set up as a 'mailbase'. A
mailbase is a database for mail-directed communication and, inter alia,
manages membership, message archival etc.
I invite you all to join EAPLS by sending a one-line message of the form
join eapls <firstname> <lastname>
to the e-mail address
mailbase@mailbase.ac.uk
(the 'subject' field is ignored).
On joining you will receive, by e-mail, documentation on use of mailbase
(including how to remove yourself from the list and to investigate other
lists).
>>>>>>After joining<<<<<< you can communicate with all other members by
mailing to eapls@mailbase.ac.uk and communicate with its administrator
by mailing to eapls-request@mailbase.ac.uk.
And last, but not least:
o forward copies of this note to colleagues of yours who might be
interested.
o send us details about workshops and conferences you plan to organize
so that we can produce an overview of planned activities.
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Tue Apr 20 16:10:25 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: beaver@broue.rot.qc.ca (Andre Boivert)
Subject: Lexical Analyzer for F77
Message-ID: <93-04-073@comp.compilers>
Keywords: Fortran, lex, question, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Tue, 20 Apr 1993 16:44:21 GMT
Approved: compilers@iecc.cambridge.ma.us
I am looking for a lexical analyzer for Fortran 77 (!!!). I started to
write one using Lex, but it seems that this is not the best way to go
(is that right?).
I heard of a program called 'fortlex' that could do the job.
If you have any sources (preferably in C), algorithms, or references
that could help me, I would greatly appreciate it.
Thank you.
Andre Boisvert
beaver@rot.qc.ca
[My Fortran subset parser in the compilers archives does a fairly respectable
job of tokenizing Fortran. You can't tokenize it without doing a certain
amount of parsing as well, e.g., "10e5" is a floating point number except
in the context "do 10e5 = 1,100" where it is the statement number 10 and
the variable name e5, or "do 10e5 = 1.100" where do10e5 is a variable name.
I'd say it's not as bad as it sounds, but it is. In the full F77 parser I
wrote for INfort 15 years ago there were at least 12 separate lexical kludges
like that. -John]
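The DO-statement ambiguity mentioned above can be sketched as a tiny
classifier in the spirit of Sale's algorithm (toy code, not a real
Fortran lexer): strip blanks, then look for a comma outside parentheses
after the "=":

```python
# "do 10 e5 = 1,100" is a DO loop; "do 10 e5 = 1.100" assigns to the
# variable DO10E5, because Fortran 77 ignores blanks in fixed form.
def classify(stmt):
    s = stmt.replace(" ", "").upper()
    if s.startswith("DO") and "=" in s:
        depth = 0
        for ch in s[s.index("=") + 1:]:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch == "," and depth == 0:
                return "do-loop"       # found the loop-bound comma
    return "assignment"
```

This only settles statement classification; a full lexer still needs
the surrounding kludges the note above alludes to.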
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 21 19:24:34 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: hagerman@ece.cmu.edu (John Hagerman)
Subject: Control Dependencies for Loops
Message-ID: <93-04-074@comp.compilers>
Keywords: analysis, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Carnegie Mellon University
Date: Tue, 20 Apr 1993 21:31:36 GMT
Approved: compilers@iecc.cambridge.ma.us
This definition of control dependence is fairly typical, right?
DEP(x,y) iff !POST-DOM(y,x)
and there exists a path P=<x,...,y> such that
for all z in P (except x,y), POST-DOM(y,z)
Consider the following loop:
while (E) do S;
and the corresponding CFG:
[START]
|
v
[E]<-+
| |
v |
+-<?> |
| | |
| v |
| [S] |
| | |
| +---+
v
[END]
The above definition specifies that DEP(<?>,[E]) and DEP(<?>,[S]). But it
seems like I should only be concerned with the dependencies within a
single iteration, so why have DEP(<?>,[E]) at all? Is it only an artifact
of the definition? If I change the definition so that backedges are not
permitted in P, do I shoot myself?
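The claimed dependences can be checked mechanically on this five-node
CFG. The sketch below (illustrative only) computes post-dominators by
the usual intersection dataflow and uses the standard successor form of
the definition, which is equivalent to the path form given above:

```python
# CFG from the post: START -> E -> ? ; ? -> S, END ; S -> E (backedge)
edges = {"START": ["E"], "E": ["?"], "?": ["S", "END"],
         "S": ["E"], "END": []}

def postdominators(edges, exit_node):
    nodes = set(edges)
    pdom = {n: set(nodes) for n in nodes}      # start from "everything"
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = set.intersection(*(pdom[s] for s in edges[n])) | {n}
            if new != pdom[n]:
                pdom[n] = new
                changed = True
    return pdom

pdom = postdominators(edges, "END")

def dep(x, y):
    # y is control dependent on x: y does not post-dominate x, but
    # does post-dominate some successor of x
    return y not in pdom[x] and any(y in pdom[s] for s in edges[x])
```

Both DEP(<?>,[E]) and DEP(<?>,[S]) hold here, exactly as the definition
predicts: [E] fails to post-dominate <?> only because of the backedge.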
Thanks - John
--
hagerman@ece.cmu.edu
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 21 19:25:20 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Yutaka_Kuroda@QM.SRI.COM
Subject: RPG II to C conversion tool
Message-ID: <93-04-075@comp.compilers>
Keywords: RPG, C, translator, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: SRI International
Date: Tue, 20 Apr 1993 23:47:09 GMT
Approved: compilers@iecc.cambridge.ma.us
I am looking for a tool that converts AS/400 RPG II to C language on
UNIX. Does anybody know of one? Alternatively, it could be a company
that has done a conversion like this before.
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 21 19:26:52 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: jwe@emx.cc.utexas.edu (John W. Eaton)
Subject: Re: Lexical Analyzer for F77
Message-ID: <93-04-076@comp.compilers>
Keywords: Fortran, lex
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: The University of Texas at Austin, Austin, Texas
References: <93-04-073@comp.compilers>
Date: Wed, 21 Apr 1993 00:25:39 GMT
Approved: compilers@iecc.cambridge.ma.us
beaver@broue.rot.qc.ca (Andre Boivert) writes:
> I am looking for a lexical analyzer for Fortran 77 (!!!).
The paper
J. K. Slape and P. J. L. Wallis, A Modification of Sale's Algorithm
to Accommodate Fortran 77, The Computer Journal, Volume 34, Number 4,
1991.
describes a technique for classifying Fortran statements and includes code
(about 350 lines of Fortran) to do it.
Unfortunately, it isn't complete -- it classifies statement functions as
assignments, and there are several restrictions, such as requiring that
simple gotos have at least the first digit of the label on the initial
line, and that a logical if statement whose executable statement part
begins with the letters `then' have at least one more non-blank
character on the initial line.
Depending on what you need to do, these restrictions may be acceptable,
and you might be able to use this technique to greatly simplify your
parser.
Another possibility might be to use the GNU Fortran front end. It's in
alpha test now and isn't generally available yet -- ask
fortran@gnu.ai.mit.edu for more information.
--
John W. Eaton, jwe@che.utexas.edu
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 21 19:28:52 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Ariel Meir Tamches <tamches@wam.umd.edu>
Subject: predicate parsing
Message-ID: <93-04-077@comp.compilers>
Keywords: parse, tools
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-044@comp.compilers> <93-04-066@comp.compilers>
Date: Wed, 21 Apr 1993 05:33:45 GMT
Approved: compilers@iecc.cambridge.ma.us
I would just like to add my two cents' worth to the predicate parsing
discussion that has been taking place between Terence Parr
(parrt@ecn.purdue.edu) and Xorian Technologies (xorian@solomon.technet.sg)
regarding their new PCCTS and LADE parsing tools.
One thing that I haven't seen noted is that it's just about impossible to
add predicate parsing to any type of bottom-up parsing engine. A formal
argument could be made by looking at LR handle generation (it depends on
the PDA stack, not leaving much room for predicates) but a more intuitive
one would simply note that top-down parsers, such as LL, have a control
flow completely analogous to "real-world" programming languages, such as C
(think "recursive descent"). If you have any version of PCCTS, examine
the C code it produces; it will look suspiciously like that which you
would have created had you been writing a recursive-descent parser from
scratch in C. Each grammar rule is represented by exactly one C function,
which among other things made it easy, even in PCCTS 1.00, to have
inherited and synthesized attributes. Inherited attributes are
call-by-value parameters to that procedure; synthesized attributes are
call-by-reference.
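The correspondence is easy to see in a minimal hand-written sketch (the
mini-grammar and names here are invented for illustration, not taken from
any actual PCCTS output): the inherited attribute arrives as a value
parameter, and the synthesized attribute is returned through a pointer.

```c
#include <assert.h>

/* Sketch grammar: expr -> DIGIT { '+' DIGIT }
   The inherited attribute `scale` flows down as a value parameter;
   the synthesized attribute `sum` flows back up by reference. */
static const char *cursor;

static void expr(int scale, int *sum)
{
    *sum = (*cursor++ - '0') * scale;       /* first DIGIT */
    while (*cursor == '+') {                /* { '+' DIGIT } */
        cursor++;
        *sum += (*cursor++ - '0') * scale;
    }
}
```

Called as `cursor = "1+2+3"; expr(10, &total);`, the rule leaves
`total == 60` -- the inherited scale applied to every synthesized digit.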
Sure, Yacc has synthesized attributes, and there is a proof that inherited
attributes can be coaxed with the addition of extra temporary rules. The
dragon book states (page 341) that "The impression that top-down parsing
allows more flexibility for translation is shown to be false by a proof in
Brosgol [1974] that a translation scheme based on an LL(1) grammar can be
simulated during LR(1) parsing. Independently, Watt [1977] used marker
nonterminals to ensure that the values of inherited attributes appear on a
stack during parsing." I would have to contest this assertion, especially
in the light of semantic predicates, which the book seems never to have
heard of. The Dragon book may be right about being able to coax inherited
attributes in Yacc, but I don't think it can take the next big step:
having such attributes take an active role in parsing decisions.
To clarify, consider a conceptually simple but extremely powerful addition
to PCCTS: Taking a look at the beautiful recursive-descent C code produced
by PCCTS, one can't help but wonder if we can tap into the power of C from
our parser. Sure, Yacc has C actions, but they are a "scam" when it comes
to making parsing decisions. They simply can't take part in parsing,
unless one resorts to horrifying hacks such as tinkering with Yacc's PDA
stack from within a Yacc action. Predicate parsing was relatively simple
to add to pccts 1.06 (you can clarify me on this if I'm wrong, Terence)
because all it has to do is copy the predicate right into the C code,
overriding the normal "default predicate", which is simply good ol' LL(k)
tests of FIRST(), etc.
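To make the "copy the predicate right into the C code" idea concrete,
here is a hedged sketch (the typedef table and all names are hypothetical
stand-ins, not PCCTS's actual generated code): the default lookahead test
on FIRST sets is simply replaced by an arbitrary C expression that can
consult semantic context.

```c
#include <string.h>

/* Hypothetical one-entry "typedef table" standing in for semantic
   context that lookahead token sets alone cannot capture. */
static int in_typedef_table(const char *tok)
{
    return strcmp(tok, "T") == 0;
}

/* Choosing between two alternatives that begin with the same token
   class: the default predicate would be `la1 in FIRST(declaration)`;
   the semantic predicate probes the symbol table instead. */
static int choose_declaration(const char *la1)
{
    return in_typedef_table(la1);   /* semantic predicate */
}
```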
It's hard to imagine how one could insert arbitrary predicates into a
bottom-up parser, for the same reason it's hard for bottom-up parsers to
have inherited attributes: their control flow is "screwy". Top-down is
the way to go. People used to (and maybe still do) laugh at LL parsers;
after all, LR(k) is a proper superset of LL(k). But when we consider how
easy it is to add predicates and inherited attributes to LL, we see that
"conventional wisdom" has been wrong.
On a theoretical note, it is very easy to prove that the latest version of
PCCTS (1.06 - 1.07 [as long as it has predicates]) is Turing-equivalent.
(I did it by writing a TM simulator in PCCTS, which I believe is a
sufficient condition by Church's Thesis.)
From another angle, consider hacks used by Yacc to get complex
(non-LALR(1)) grammars to parse: I'm still trying to decipher the tricks
and hacks used in GNU C++ and Jim Roskind's Yacc grammars. Bottom-up
parsing is simply not the answer; with predicates used in the new breed of
top-down parsers, we now have a superior alternative.
Ariel Tamches
University of Maryland, College Park
tamches@cs.umd.edu
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 21 19:29:29 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: uysal@cs.umd.edu (Mustafa Uysal)
Subject: Dynamic Slices...
Message-ID: <93-04-078@comp.compilers>
Keywords: debug, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Date: Wed, 21 Apr 1993 11:51:29 GMT
Approved: compilers@iecc.cambridge.ma.us
Are there any references dealing with the generation of dynamic slices?
In particular, I'm interested in generation of a (backward) dynamic slice
given the state of the program execution (ie. variables, program counter,
etc).
The idea is that, when a person writes a program that crashes, one
can(?) generate a slice that captures the part of the program causing the
crash in that particular execution. Then the programmer may concentrate on
this slice in the debugging phase. However, the information available at
the time of the crash is only the "core dump" (plus the source code). My
question is: is it possible to generate such slices (in a reasonable
time), and if so, could you point me to relevant references?
Thanks in advance,
Mustafa Uysal
(E-mail: uysal@cs.umd.edu)
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 21 19:31:27 EDT 1993
Xref: iecc comp.arch:28218 comp.compilers:4537
Newsgroups: comp.arch,comp.compilers
Path: iecc!compilers-sender
From: S_JUFFA@IRAV1.ira.uka.de (|S| Norbert Juffa)
Subject: Optimal code sequences for signed int div/rem by powers of 2
Message-ID: <93-04-079@comp.compilers>
Keywords: arithmetic, optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Karlsruhe, FRG
Date: Wed, 21 Apr 1993 13:49:45 GMT
Approved: compilers@iecc.cambridge.ma.us
This article is somewhat of a followup to the recent discussion about
integer division here in comp.arch. I am also cross-posting to comp.compilers
because this stuff may be relevant for people involved with compiler code
generation.
Many modern RISC CPUs do not have an integer division instruction (SPARC
V7, Alpha), have only a division step instruction (AMD 29K), or have a
rather slow HW-division (microSPARC). Even if there is a division
instruction, it may fail to produce a remainder (SPARC V8).
Therefore, people are looking for alternatives to using division. Fast
alternatives are especially feasible if the divisor is known at compile
time (e.g. multiplication by reciprocal of divisor). Quite a few posts in
the recent discussion were involved with speeding up division by powers of
two known at compile time. This is really easy when dealing with unsigned
integers (-> shift right), but requires some correction steps for signed
integers.
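The correction steps can be written compactly in C before looking at any
particular instruction set. This is a sketch, assuming 32-bit `int32_t`
and an arithmetic right shift on negative values (true of the machines
discussed here, but implementation-defined in C):

```c
#include <assert.h>
#include <stdint.h>

/* Truncating division of x by 1<<n: add (1<<n)-1 to negative
   dividends before the arithmetic shift, so the result rounds
   toward zero instead of toward minus infinity. */
static int32_t div_pow2(int32_t x, int n)
{
    int32_t bias = (x >> 31) & ((INT32_C(1) << n) - 1);
    return (x + bias) >> n;
}

/* Matching remainder, with sign(rem) == sign(dividend). */
static int32_t rem_pow2(int32_t x, int n)
{
    int32_t mask = (INT32_C(1) << n) - 1;
    int32_t bias = (x >> 31) & mask;
    return ((x + bias) & mask) - bias;
}
```

Division by -2^n then follows from the identity x / -(1<<n) = -(x / (1<<n)).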
Checking compiler output for 80x86 produced by Microsoft C 7.0 and for
SPARC by gcc 2.2.3 and the Solaris cc compiler, I find they don't use all
the optimizations possible for signed integers when the divisor is a power
of two. In particular, when the divisor is a negated power of two (i.e.
-2^i), all of these compilers will resort to divide and remainder
instructions or subroutines, although the shifting and masking approach
used for unsigned integers could be used with a few correction steps. The
Microsoft C 7.0 compiler doesn't optimize any division/remainder by 2^i or
-2^i for signed integers.
I wonder what the fastest instruction sequences for signed integer
division by +/- 2^n are for different machines. I know from the discussion
there are even machines that have signed integer division by powers of two
in HW (i960).
I am including the routines I came up with for 80x86 and SPARC for doing
signed integer division/remainder by +/- 2^i.
Norbert Juffa (s_juffa@iravcl.ira.uka.de)
CODE FOR Intel 80x86 (can easily be changed to 32-bit version for >= 386)
=========================================================================
/ 2: CMP AX, 8000h ; CY = 1, if dividend >= 0
SBB AX, -1 ; inc AX, if dividend < 0
SAR AX, 1 ; now do right shift
/ 2^n: CWD ; DX = FFFFh if dividend < 0
AND DX, 2^n-1 ; mask correction
ADD AX, DX ; apply correction if necessary
SAR AX, n ; now do right shift
/ -2: CMP AX, 8000h ; CY = 1, if dividend >= 0
SBB AX, -1 ; inc AX, if dividend < 0
SAR AX, 1 ; now do right shift
NEG AX ; use (x div -2) = - (x div 2)
/ -2^n: CWD ; DX = FFFFh if dividend < 0
AND DX, 2^n-1 ; mask correction
ADD AX, DX ; apply correction if necessary
SAR AX, n ; now do right shift
NEG AX ; use (x div -2^n) = - (x div 2^n)
% 2, % -2: CWD ; generate mask, FFFFh if divd < 0, else 0
AND AX, 1 ; mask out remainder
XOR AX, DX ; negate
SUB AX, DX ; remainder if dividend < 0
% 2^n, % -2^n: CWD ; generate mask, FFFFh if divd < 0, else 0
AND DX, 2^n-1 ; mask correction
ADD AX, DX ; apply pre-correction if necessary
AND AX, 2^n-1 ; mask out remainder
SUB AX, DX ; apply post-correction if necessary
CODE FOR SPARC
==============
Dividing a signed integer by a positive power of two (1<<n) known at
compile time.
1) for i/2: addcc %o1,%o1,%g0 ! carry if %o1 < 0
addx %o1,%g0,%o1 ! inc %o1, if %o1 < 0
sra %o1,1,%o1 ! do the shift
Although this code sequence uses three instructions, just like what is
produced by gcc now, it has the advantage that it doesn't need an
additional register. It has the disadvantage of destroying the
condition codes.
2) for i/(1<<n): sra %o1,31,%o2 ! %o2 = 0xffffffff if %o1 < 0
srl %o2,32-n,%o2 ! (1<<n)-1, if %o1<0, else 0
add %o1,%o2,%o1 ! apply correction
sra %o1,n,%o1 ! do the shift
The advantage of this sequence is that it doesn't use a branch and has
no problem with n>=13 since it doesn't use immediate constants for the
correction step, also it doesn't destroy the condition codes.
Dividing a signed integer by a negated positive power of two (-(1<<n))
should make use of the identity x / (-(1<<n)) = -(x / (1<<n)) and then
apply the code given above. Currently, gcc 2.2.3 generates calls to .div.
1) for i/-2: addcc %o1,%o1,%g0 ! carry if %o1 < 0
addx %o1,%g0,%o1 ! inc %o1, if %o1 < 0
sra %o1,1,%o1 ! i / 2
neg %o1 ! -(i / 2)
2) for i/(-(1<<n)):sra %o1,31,%o2 ! %o2 = 0xffffffff if %o1 < 0
srl %o2,32-n,%o2 ! (1<<n)-1, if %o1<0, else 0
add %o1,%o2,%o1 ! apply correction
sra %o1,n,%o1 ! i / (1<<n)
neg %o1 ! -(i / (1<<n))
Computing the remainder (% operator) of a division of a signed integer by
a positive power of two (1<<n) known at compile time. Currently, gcc uses
calls to .rem.
1) for i%2, i%(-2):
        sra %o1,31,%o2 ! 0xFFFFFFFF, if %o1 < 0, else 0
and %o1,1,%o1 ! mask out remainder
xor %o1,%o2,%o1 ! negate remainder if quotient
sub %o1,%o2,%o1 ! negative (sign(quot)=sign(rem)!)
2) for i%(1<<n), i%(-(1<<n)):
sub %g0,1,%o2 ! 0xffffffff
srl %o2,32-n,%o2 ! (1<<n)-1
sra %o1,31,%o3 ! 0xffffffff, if %o1 < 0, else 0
and %o3,%o2,%o3 ! (1<<n)-1, if %o1 < 0, else 0
add %o1,%o3,%o1 ! apply correction if necessary
and %o1,%o2,%o1 ! mask out remainder bits
sub %o1,%o3,%o1 ! apply correction if necessary
This instruction sequence is branch free, doesn't destroy the condition
codes and has no problem with n>=13, since it doesn't use immediate
constants in the correction step.
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Wed Apr 21 19:31:55 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: corbett@lupa.Eng.Sun.COM (Robert Corbett)
Subject: Re: Pascal grammar
Message-ID: <93-04-080@comp.compilers>
Keywords: Pascal, parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Sun
References: <93-04-070@comp.compilers>
Date: Wed, 21 Apr 1993 21:04:11 GMT
Approved: compilers@iecc.cambridge.ma.us
>[How much has Pascal changed since 1986? -John]
ISO Pascal was revised in 1990. I believe there were about fifty changes,
ranging from minor to insignificant. As I recall, none of the changes
affected the syntax of the language.
The ISO approved a standard for Extended Pascal in 1991. Extended Pascal
has lots of syntactic extensions.
Yours truly,
Robert Corbett
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 11:37:42 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: paco@ariel.cs.rice.edu (Paul Havlak)
Subject: Re: Control Dependencies for Loops
Message-ID: <93-04-081@comp.compilers>
Keywords: analysis
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Rice University
References: <93-04-074@comp.compilers>
Date: Thu, 22 Apr 1993 12:15:47 GMT
Approved: compilers@iecc.cambridge.ma.us
hagerman@ece.cmu.edu (John Hagerman) writes:
>This definition of control dependence is fairly typical, right?
>
> DEP(x,y) iff !POST-DOM(y,x)
> and there exists a path P=<x,...,y> such that
> for all z in P (except x,y), POST-DOM(y,z)
Yes, this is pretty standard, although for this version to be precise,
POST-DOM must be strict; i.e., POST-DOM(x,x) is false for all x.
> [START]
> |
> v
> [E]<-+
> | |
> v |
> +-<?> |
> | | |
> | v |
> | [S] |
> | | |
> | +---+
> v
> [END]
>
>The above definition specifies that DEP(<?>,[E]) and DEP(<?>,[S]). But it
>seems like I should only be concerned with the dependencies within a
>single iteration, so why have DEP(<?>,[E]) at all? Is it only an artifact
>of the definition? ...
It's not an artifact, it's the whole point. Control dependences,
defined as above, are a powerful abstraction because they can be handled
very similarly to data dependences. Like data dependences, control
dependences are either loop-independent or carried by a particular loop.
In your example, DEP(<?>,[E]) is loop-carried and DEP(<?>,[S]) is
loop-independent.
> ... If I change the definition so that backedges are not
>permitted in P, do I shoot myself?
Loop-carried control dependences, together with other control and data
dependences, can create dependence cycles (recurrences). So they are
essential for many purposes. Recurrences must be broken by
transformations before a loop can be run in parallel.
However, if you never perform a transformation that could violate a
loop-carried control dependence, you may "ignore" them because they are
implicitly respected.
Hope this helps,
Paul
--
Paul Havlak Dept. of Computer Science
Graduate Student Rice University, Houston TX 77251-1892
PFC/ParaScope projects (713) 527-8101 x2738 paco@cs.rice.edu
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 11:38:15 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: paco@ariel.cs.rice.edu (Paul Havlak)
Subject: Re: Dynamic Slices...
Message-ID: <93-04-082@comp.compilers>
Keywords: debug
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Rice University
References: <93-04-078@comp.compilers>
Date: Thu, 22 Apr 1993 12:44:03 GMT
Approved: compilers@iecc.cambridge.ma.us
uysal@cs.umd.edu (Mustafa Uysal) writes:
> Are there any references dealing with the generation of dynamic slices?
>In particular, I'm interested in generation of a (backward) dynamic slice
>given the state of the program execution (ie. variables, program counter,
>etc).
Vernon Lee did some work on dynamic slicing in his dissertation at
Rice University, advised by Hans Boehm. Vernon is now at Zycad
(spelling?) and Hans at Xerox PARC, but I don't have their addresses
handy.
Vernon was working on constructive real arithmetic. The
representation allows computation to an arbitrary number of digits
(assuming sufficient computational resources). If, for some reason, one
needs more precision on a value already computed, one would like to
recompute on a backward dynamic slice rather than repeat the whole
program.
I think this tech report is Vernon's dissertation:
Rice COMP TR91-159
Optimizing Programs Over the Constructive Reals,
Vernon A. Lee Jr., April 1991. ($15.00)
A short presentation of the work is in
Vernon Lee and Hans Boehm, "Optimizing Programs over the
Constructive Reals," SIGPLAN '90, pages 102-111.
Good luck,
Paul
--
Paul Havlak Dept. of Computer Science
Graduate Student Rice University, Houston TX 77251-1892
PFC/ParaScope projects (713) 527-8101 x2738 paco@cs.rice.edu
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 19:04:08 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: hagerman@ece.cmu.edu (John Hagerman)
Subject: More on Control Dependencies for Loops
Message-ID: <93-04-083@comp.compilers>
Keywords: analysis, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Carnegie Mellon University
References: <93-04-074@comp.compilers>
Date: Thu, 22 Apr 1993 17:03:55 GMT
Approved: compilers@iecc.cambridge.ma.us
A couple of days ago I asked why "loop-carried" control dependencies
should be included. I got a couple of responses by mail saying that
my basic blocks were wrong. I don't think this has anything to do
with my question; the point is that I can construct a loop that will
have such a dependence. Here's another try:
do S while (E);
This has a control dependence from E to S through the backedge. I
hope this comment helps to clarify my question...
Thanks - John
--
hagerman@ece.cmu.edu
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 19:04:39 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: krishna@cs.unm.edu (Ksheerabdhi Krishna)
Subject: Re: Run time optimizations
Message-ID: <93-04-084@comp.compilers>
Keywords: optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Computer Science Department, University of New Mexico
References: <93-04-069@comp.compilers>
Date: Thu, 22 Apr 1993 15:54:49 GMT
Approved: compilers@iecc.cambridge.ma.us
David Keppel, Susan Eggers and Robert Henry have made a convincing case
for run-time code generation (RTCG) -- it came out as a U. of Washington
tech report. I can't recall the number, but it might have some references
you are looking for.
Partial evaluation is one way of doing some optimizations at compile time
which might be difficult to do (actually, impossible) at run-time. As
compilers get better and better, this is a technique that will find its
way in. A good reference here is a paper by Weise, Ruf, Seligman and
Conybeare called - "Automatic Online Partial Evaluation" in FPCA 91.
ksh
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 19:05:15 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: martin@CS.UCLA.EDU (david l. martin)
Subject: Test suite for C/C++ compilers (?)
Message-ID: <93-04-085@comp.compilers>
Keywords: C, testing, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: UCLA, Computer Science Department
Date: Thu, 22 Apr 1993 18:05:34 GMT
Approved: compilers@iecc.cambridge.ma.us
I need to assemble a set of C and C++ source code to test the
capabilities of a compiler. We are solely concerned with verifying that
it correctly identifies the full range of legal constructs (and gives
reasonable warnings/errors for questionable/illegal constructs); we
are NOT concerned with whether it generates fast or compact code.
Is there any public domain body of code which provides a good test
suite?
Any comments or help with this greatly appreciated.
Thanks.
- Dave Martin
- martin@cs.ucla.edu
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 19:07:09 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: dave@cs.arizona.edu (Dave Schaumann)
Subject: Re: predicate parsing
Message-ID: <93-04-086@comp.compilers>
Keywords: parse, tools
Sender: compilers-sender@iecc.cambridge.ma.us
Reply-To: dave@cs.arizona.edu (Dave Schaumann)
Organization: University of Arizona
References: <93-04-044@comp.compilers> <93-04-077@comp.compilers>
Date: Thu, 22 Apr 1993 17:52:36 GMT
Approved: compilers@iecc.cambridge.ma.us
tamches@wam (Ariel Meir Tamches) writes:
>It's hard to imagine how one can insert arbitrary predicates in a
>bottom-up parser;
LR parsing is based on running a dfa which recognizes handles of the
grammar, and issues the appropriate shift, reduce, or accept action on
each iteration.
It seems to me that we could augment this model to include test-reduce
actions to the PDA. For instance, when you generate the parse tables for
something like this:
var_name : identifier
;
type_name : identifier
;
a reduce/reduce ambiguity is introduced -- when the parser sees an
identifier, it doesn't know whether to reduce on the type_name rule or the
var_name rule. This is where the test-reduce action would come in. The
easiest way to implement this would be to associate a predicate with each
ambiguous production:
var_name : is_var_name ( identifier )
;
type_name : is_type_name( identifier )
;
Then, when the test-reduce action is encountered, code could be executed
to test the predicate of each choice in turn until one succeeds, then
reduce on the associated rule. Notice that we only need to use this in
the case of ambiguous rules; other rules can be recognized with the usual
reduce action.
Of course, faster and more elegant solutions are possible, but I think
this demonstrates that predicates can be practically implemented in an LR
parser.
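The test-reduce idea can be mocked up in a few lines of C (the predicates
and the "type table" here are hypothetical stand-ins, not generated parser
code): on a reduce/reduce conflict, each candidate production's predicate
is tried in turn, and the first success selects the rule to reduce by.

```c
#include <string.h>

enum rule { R_VAR_NAME, R_TYPE_NAME };

/* Hypothetical predicate: is this identifier a known type name? */
static int is_type_name(const char *id)
{
    static const char *types[] = { "node_t", "symbol_t", 0 };
    for (int i = 0; types[i]; i++)
        if (strcmp(types[i], id) == 0)
            return 1;
    return 0;
}

/* The test-reduce action: try each ambiguous production's predicate
   in order; reduce by the first that succeeds, falling back to the
   last alternative. */
static enum rule test_reduce(const char *id)
{
    if (is_type_name(id))
        return R_TYPE_NAME;
    return R_VAR_NAME;
}
```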
>[...]top-down is the way to go. People used to (and maybe still do) laugh
>at LL parsers; after all, LR(k) is a proper superset of LL(k). But when
>we consider how easy it is to add predicates and inherited attributes to
>LL, we see that "conventional wisdom" has been wrong.
Of course, LL is fine if you can do without left recursion, and you can
always determine what rule to choose next based on FIRST sets. In a few
cases, this is not a problem. In other cases, it forces you to mutilate
your grammar to satisfy the needs of the parser. Certainly, you can use
the well-known algorithms to do left-factoring, and left-recursion
removal. But you are then forced to use a grammar that is one step
further removed from your original choice for each left-factoring and
each left-recursion removal you perform.
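The cost is easy to see on the standard example (an invented mini-grammar
with single-digit operands, for illustration only). A left-recursive rule
such as E -> E '-' NUM | NUM cannot be parsed top-down directly; after
mechanical left-recursion removal, the tail rule becomes a loop in the
recursive-descent code, which happens to preserve the left associativity
the original rule expressed directly:

```c
#include <assert.h>

/* E -> E '-' NUM | NUM        (left recursive, not LL)
   becomes, after left-recursion removal,
   E -> NUM E' ;  E' -> '-' NUM E' | epsilon
   which in recursive descent is just a loop. */
static const char *src;

static int expr(void)
{
    int v = *src++ - '0';        /* NUM */
    while (*src == '-') {        /* the E' tail, iterated */
        src++;
        v -= *src++ - '0';
    }
    return v;
}
```

With `src = "9-3-2"` this yields 4, the left-associative reading
(9-3)-2, rather than 8 for 9-(3-2).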
>From another angle, consider hacks used by Yacc to get complex
>(non-LALR(1)) grammars to parse: I'm still trying to decipher the tricks
>and hacks used in GNU C++ and Jim Roskind's Yacc grammars.
I think that the problems of the various C++ grammars belong to C++ far
more than they belong to Yacc or LR parsing.
--
Dave Schaumann dave@cs.arizona.edu
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 19:11:15 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: girkar@kpc.com (Milind Girkar)
Subject: Re: Control Dependencies for Loops
Message-ID: <93-04-087@comp.compilers>
Keywords: analysis, bibliography
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: KPC
References: <93-04-074@comp.compilers>
Date: Thu, 22 Apr 1993 17:59:45 GMT
Approved: compilers@iecc.cambridge.ma.us
hagerman@ece.cmu.edu (John Hagerman) writes:
<control dependences due to back edges>
>If I change the definition so that backedges are not
>permitted in P, do I shoot myself?
Something along these lines has been tried in:
1. Cytron, R., M. Hind and W. Hsieh, Automatic Generation of DAG
Parallelism, Proc. of the 1989 SIGPLAN Conference on Programming Language
Design and Implementation, July 89, pp 54-68.
2. Hsieh, W., Extracting parallelism from sequential programs, MS thesis,
Dept. of Electrical Engineering and Computer Science, MIT, May 1988.
- Milind
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 19:12:32 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: donawa@bluebeard.CS.McGill.CA (Chris DONAWA)
Subject: Re: IR Transformations
Message-ID: <93-04-088@comp.compilers>
Keywords: code
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: SOCS, McGill University, Montreal, Canada
References: <93-04-058@comp.compilers>
Date: Wed, 21 Apr 1993 14:35:53 GMT
Approved: compilers@iecc.cambridge.ma.us
jimy@cae.wisc.edu wrote:
: Could anyone give me pointers to papers on the subject of IR
: transformations to suit code generation?
: More specifically, suppose we want to generate code for x = 2 * a; The
: problem is how to know, at the IR level (before code is generated), that
: the above expression can be rewritten as x = a + a (assuming + is cheaper
: than *). The problem is that the above transformation may be context
: dependent and not always desirable. In other words, sometimes 2*a may be
: covered cheaper than a+a, e.g. by a shift left, which some processors
For the lower level intermediate representation in our C compiler (the
McCAT C compiler), we use Bernstein's algorithm for integer
multiplications with constants. The work is described in:
@Article{Bernstein86,
Author = "Robert Bernstein",
Title = "Multiplication by Integer Constants",
Journal = "Software--Practice and Experience",
Volume = 16,
Number = 7,
Pages = "641-652",
Month = "July",
Year = 1986
}
Essentially, any integer multiplied by an integer constant can be replaced
by a series of shift/add/subtract combinations. The algorithm tries to
find the best combination that does not exceed a cost (specified by you,
usually the cost of an integer multiply).
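As a rough illustration of the underlying identity (this is not
Bernstein's actual search, which also uses subtraction and factoring to
find cheaper chains), the plain binary decomposition already turns any
multiply by a constant into shifts and adds:

```c
#include <assert.h>

/* Naive shift/add expansion of a * c for a compile-time constant c.
   Bernstein's algorithm searches for shorter chains, e.g. using
   a*15 = (a<<4) - a instead of three separate adds. */
static long mul_const(long a, unsigned long c)
{
    long r = 0;
    int shift = 0;
    while (c) {
        if (c & 1)
            r += a << shift;    /* add this power-of-two term */
        c >>= 1;
        shift++;
    }
    return r;
}
```

A code generator would emit the corresponding shift/add instructions
rather than run this loop, of course; the loop just demonstrates that the
decomposition is exact.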
There are some slight typos in the published implementation. Preston
Briggs posted a corrected version of the algorithm, which formed the basis
of our converter. If you'd like, I can mail it to interested folks or
make it available for ftp.
--
Christopher M. Donawa
Advanced Compilers, Architectures and Parallel Systems Group (ACAPS)
McGill University, Montreal PQ.
donawa@cs.mcgill.ca
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 19:13:03 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Stephen J Bevan <bevan@computer-science.manchester.ac.uk>
Subject: predicate parsing
Message-ID: <93-04-089@comp.compilers>
Keywords: parse, bibliography
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-077@comp.compilers>
Date: Thu, 22 Apr 1993 17:59:56 GMT
Approved: compilers@iecc.cambridge.ma.us
Ariel Meir Tamches <tamches@wam.umd.edu> writes:
One thing that I haven't seen noted is that it's just about impossible to
add predicate parsing to any type of bottom-up parsing engine.
Watt did further work on affix/attribute directed parsing :-
author= D. A. Watt
title= Rule Splitting and Attribute-Directed Parsing
crossref= Jones80
pages= 363--392
The following paper in the same proceedings would also seem to be
relevant :-
author= Neil D. Jones and Michael Madsen
title= Attribute-Influenced LR Parsing
crossref= Jones80
pages= 393--407
title= Proceedings of a Workshop on Semantics-Directed Compiler Generation
year= 1980
editor= Neil D. Jones
publisher= Springer-Verlag
month= jan
note= LNCS 94
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 23:00:05 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Jim Reis <reis@sparc0a.cs.uiuc.edu>
Subject: dataflow analysis in C compilers
Message-ID: <93-04-090@comp.compilers>
Keywords: C, optimize, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Thu, 22 Apr 1993 23:13:09 GMT
Approved: compilers@iecc.cambridge.ma.us
I'm trying to find out how sophisticated C compilers are in their dataflow
analysis; especially in terms of interprocedural and aliasing algorithms.
Does anyone know the state of the following in available (i.e. not
research) C compilers?
1) Using an aliasing algorithm for intraprocedural dataflow analysis.
2) Interprocedural dataflow analysis.
3) Using an aliasing algorithm for interprocedural dataflow analysis.
Thanks,
Jim Reis
reis@cs.uiuc.edu
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Thu Apr 22 23:04:07 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: pardo@cs.washington.edu (David Keppel)
Subject: Re: Run time optimizations
Message-ID: <93-04-091@comp.compilers>
Keywords: optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Computer Science & Engineering, U. of Washington, Seattle
References: <93-04-069@comp.compilers>
Date: Fri, 23 Apr 1993 01:43:57 GMT
Approved: compilers@iecc.cambridge.ma.us
In <93-04-069@comp.compilers>
Sanjay Jinturkar <sj3e@server.cs.virginia.edu> writes:
>[Generate optimistic and conservative code and do runtime checks?]
john@iecc.cambridge.ma.us writes:
>[Or dynamically deoptimize code as the assumptions fail.]
I know of three general techniques:
* Statically generate optimistic and conservative code, with runtime tests
to decide which to use. Sometimes, checks can be CSE'd to further improve
performance. Sometimes the optimized and conservative code are the same
except that, e.g., the code must run sequentially to be conservative but
can usually be run in parallel. Examples: loop unrolling, with a prologue
to ``peel'' off some number of iterations, then execute an optimized
(unrolled) loop.
Runtime disambiguation [Nicolau 89], which checks for aliasing then
branches to the appropriate statically-generated case. ``Runtime
compilation'' [Saltz, Berryman & Wu 90], which delays loop scheduling
until runtime indirection values are known (e.g., x[i] = y[a[i]] +
z[b[i]], `a' and `b' unknown at compile time) and a schedule is generated
before the loop body. Inline the expected common part of a function,
resorting to function calls for uncommon cases [Chow 88], aka ``shrink
wrapping''.
* Look at runtime values and decide how to generate code that is more
optimized than the general case but still conservative given the runtime
values. Examples: Bitblt [Pike, Locanthi & Reiser 85], OS system calls
[Pu, Massalin & Ioannidis 88], cache simulation [Przybylski, Horowitz &
Hennessy 88], virtual machines [Deutsch & Schiffman 84], executing
user-supplied commands [Chamberlin et. al. 81], etc.
* Generate optimistic code with runtime checks. When runtime checks
fail, regenerate the code. There are three general approaches here:
discard the old code and generate code optimized to the new values
[Johnston 79]; generate less-optimized code [Saal & Weiss 79] that may
still be better-optimized than conservative static code; or cache several
optimized versions and select between them, generating new versions when
the cache lookup fails [Hölzle, Chambers & Ungar 91].
Time for a plug: Since I'm a leading advocate of runtime code generation,
I suggest you (everybody!) rush right out and pick up a copy of ``A Case
for Runtime Code Generation,'' [Keppel, Eggers and Henry 91], available
via anonymous ftp from `cs.washington.edu' (128.95.1.4) in the tech
reports subdirectory, number 91-11-04.
;-D on ( Optimizing for the com onion case ) Pardo
%A D. D. Chamberlin
%A M. M. Astrahan
%A W. F. King
%A R. A. Lorie
%A J. W. Mehl
%A T. G. Price
%A M. Schkolnick
%A P. Griffiths Selinger
%A D. R. Slutz
%A B. W. Wade
%A R. A. Yost
%T Support for Repetitive Transactions and Ad Hoc Queries in System R
%J ACM Transactions on Database Systems
%V 6
%N 1
%D March 1981
%P 70-94
%A Fred Chow
%T Minimizing Register Usage Penalty at Procedure Calls
%J Sigplan 88 Conference on Programming Language Design and
Implementation
%D 1988
%K shrink wrapping
%A Peter Deutsch
%A Alan M. Schiffman
%T Efficient Implementation of the Smalltalk-80 System
%J 11th Annual Symposium on Principles of Programming Languages
(POPL-11)
%D January 1984
%P 297-302
%A Urs Hölzle
%A Craig Chambers
%A David Ungar
%T Optimizing Dynamically-Typed Object-Oriented Languages With
Polymorphic Inline Caches
%R Proceedings of the European Conference on Object-Oriented
Programming (ECOOP)
%D July 1991
%A Ronald L. Johnston
%T The Dynamic Incremental Compiler of APL\e3000
%I Association for Computing Machinery (ACM)
%J APL Quote Quad
%V 9
%N 4
%D June 1979
%P 82-87
%A David Keppel
%A Susan J. Eggers
%A Robert R. Henry
%T A Case for Runtime Code Generation
%R UWCSE 91-11-04
%I University of Washington Department of Computer Science and
Engineering
%D November 1991
%A Alexandru Nicolau
%T Run-Time Disambiguation: Coping with Statically Unpredictable
Dependencies
%J IEEE Transactions on Computers
%V 38
%N 5
%D May 1989
%P 663-678
%A Rob Pike
%A Bart N. Locanthi
%A John F. Reiser
%T Hardware/Software Trade-offs for Bitmap Graphics on the Blit
%J Software - Practice and Experience
%V 15
%N 2
%P 131-151
%D February 1985
%A Steven Przybylski
%A Mark Horowitz
%A John Hennessy
%T Performance Tradeoffs in Cache Design
%J Proceedings of the 15th Annual International Symposium on Computer
Architecture
%D May 1988
%P 290-298
%A Calton Pu
%A Henry Massalin
%A John Ioannidis
%T The Synthesis Kernel
%J Computing Systems
%V 1
%N 1
%D Winter 1988
%P 11-32
%A H. J. Saal
%A Z. Weiss
%T A Software High Performance APL Interpreter
%J APL Quote Quad
%V 9
%N 4
%D June 1979
%P 74-81
%A Joel Saltz
%A Harry Berryman
%A Janet Wu
%T Multiprocessors and Runtime Compilation
%J Proceedings of the International Workshop on Compilers for Parallel
Computers
%C Paris
%D December 1990
--
Send compilers articles to compilers@iecc.cambridge.ma.us or
{ima | spdcc | world}!iecc!compilers. Meta-mail to compilers-request.
From compilers Fri Apr 23 12:57:15 EDT 1993
Xref: iecc comp.arch:28277 comp.compilers:4550
Newsgroups: comp.arch,comp.compilers
Path: iecc!compilers-sender
From: markt@harlqn.co.uk (Mark Tillotson)
Subject: Re: Optimal code sequences for signed int div/rem by powers of 2
Message-ID: <93-04-092@comp.compilers>
Keywords: arithmetic, optimize
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Harlequin Limited, Cambridge, England
References: <93-04-079@comp.compilers>
Date: Fri, 23 Apr 1993 11:03:40 GMT
Approved: compilers@iecc.cambridge.ma.us
S_JUFFA@IRAV1.ira.uka.de (|S| Norbert Juffa) wrote:
> Therefore, people are looking for alternatives to using division. Fast
> alternatives are especially feasible if the divisor is known at compile
> time (e.g. multiplication by reciprocal of divisor). Quite a few posts in
> the recent discussion were involved with speeding up division by powers of
> two known at compile time. This is really easy when dealing with unsigned
> integers (-> shift right), but requires some correction steps for signed
> integers.
I think you are perhaps unjustifiably forcing on us a definition of signed
integer division that is neither mathematically elegant nor what
people usually want (if they *ever* want signed integer division). Many
languages cop out of actually defining a semantics for signed integer
division anyway!
I think we all agree that integer division and modulo are related thus:
(a / b) * b + (a MOD b) == a
This by itself is under-constrained, but the simplest, most elegant way to
constrain it is to limit the values of (a MOD b) to a contiguous range of
b distinct values (usually 0 .. b-1).
To do otherwise makes correctness proofs more complex and makes it easier
to introduce bugs. I see it as a failing in a language to constrain the
semantics to disallow this definition. If you feel strongly that
a/b == -(-a/b) then you are automatically saying that
(a+nb) MOD b /= a MOD b
which is very counter-intuitive to me!!
M. Tillotson Harlequin Ltd.
markt@uk.co.harlqn Barrington Hall,
+44 223 872522 Barrington, Cambridge CB2 5RG
From compilers Fri Apr 23 12:58:05 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: isckbk@nuscc.nus.sg (Kiong Beng Kee)
Subject: Re: predicate parsing
Message-ID: <93-04-093@comp.compilers>
Keywords: LALR, parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: National University of Singapore
References: <93-04-077@comp.compilers>
Date: Fri, 23 Apr 1993 13:36:32 GMT
Approved: compilers@iecc.cambridge.ma.us
tamches@wam.umd.edu (Ariel Meir Tamches) writes:
: It's hard to imagine how one can insert arbitrary predicates in a
: bottom-up parser; for the same reason it's hard for bottom-up parsers to
: have inherited attributes - their control flow is "screwy" - top-down is
The predicates used in LADE are called 'conditionals'. Since bottom-up
parsing is happy to accommodate more than one possibility, it does not need
predicates to tell it which nonterminal to predict. However, it uses
conditionals to dynamically resolve shift-reduce and reduce-reduce
conflicts (rather than rules at table generation time as in YACC).
Inherited attributes in LADE are somewhat implicit, and roughly equivalent
to global variables with (invisible) mechanisms for saving and restoring
such values. It gets the job done efficiently, and better than YACC does,
but maybe not as nicely as a top-down parser would.
Compared with a top-down approach, I suspect that the LADE approach
requires fewer predicates.
--
Kiong Beng Kee
Dept of Information Systems and Computer Science
National University of Singapore
Lower Kent Ridge Road, SINGAPORE 0511
From compilers Fri Apr 23 17:11:54 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: jim@float.co.uk (James Cownie)
Subject: Re: Run time optimizations
Message-ID: <93-04-094@comp.compilers>
Keywords: optimize, vector
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Meiko World.
References: <93-04-069@comp.compilers>
Date: Fri, 23 Apr 1993 20:51:26 GMT
Approved: compilers@iecc.cambridge.ma.us
The technique of generating multiple possible implementations of a
particular piece of source, and then choosing the correct one at run time
based on the actual circumstances present at the time is fairly standard
in high performance vectorising compilers. These often generate both
vector and scalar code and then choose which to use based on the loop
length at run time. The technique is also used to allow vectorisation of
operations which may have a data dependency for some values of a variable,
but not for others e.g.
do i = 2,10
a(i+j) = a(i)+1
end do
which certainly vectorises if j <= 0 or j >= 9 but does not for 1 <= j <= 8
-- Jim
James Cownie
Meiko Limited, 650 Aztec West, Bristol BS12 4SD, England
Meiko Inc., Reservoir Place, 1601 Trapelo Road, Waltham MA 02154
Phone : +44 454 616171 or +1 617 890 7676
FAX : +44 454 618188 or +1 617 890 5042
E-Mail: jim@meiko.co.uk or jim@meiko.com
From compilers Fri Apr 23 17:13:10 EDT 1993
Xref: iecc comp.compilers:4553 comp.lang.lisp:7322 comp.lang.misc:9758 comp.lang.prolog:5390 comp.lang.smalltalk:5661 comp.lang.scheme:5574 comp.lang.icon:844 comp.lang.apl:2225
Newsgroups: comp.compilers,comp.lang.lisp,comp.lang.misc,comp.lang.prolog,comp.lang.smalltalk,comp.lang.scheme,comp.lang.icon,comp.lang.apl
Path: iecc!compilers-sender
From: gudeman@cs.arizona.edu (David Gudeman)
Subject: Representations of Dynamic Type Information
Message-ID: <93-04-095@comp.compilers>
Keywords: question, types
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: U of Arizona CS Dept, Tucson
Date: Fri, 23 Apr 1993 21:04:06 GMT
Approved: compilers@iecc.cambridge.ma.us
I'm preparing a document that is intended to be an encyclopedic summary
of all of the different ways of encoding the type of a value in
dynamically typed languages. In order to make this list as comprehensive
as possible, I thought I'd post to some relevant groups on the net to see
if anyone has a technique I'm not aware of. Instead of asking for
techniques and having to go through all of the responses looking for new
ones, I thought I'd list the ones I know about and hope that some
implementers will be willing to go through my list looking for their own
favorite technique, and let me know if it is missing. Also, many of these
techniques are either folk-lore, or (probably re-)invented by me, so I
would appreciate knowing if anyone can give me references that have some
claim to originality.
I don't need to know about cdr-coding or other methods to represent data,
I'm only interested in methods to encode dynamic type information.
If you can help I prefer to get replies by mail. If you post a follow-up
to this article, please notice that there are a lot of groups in the
subject line, and some replies may not be relevant for some groups (I
recommend comp.lang.misc for general discussions of representational
schemes).
Any and all help is greatly appreciated.
The LIST:
tagged-words (type information is in the machine word)
tag fields (word is broken up into tag and data fields)
tag field in high end (most-significant bits) of word
use tag of all zeros for one type to avoid tagging cost
negative ints get a tag of all ones, non-negative ints
get a tag of all zeros
use sign bit for one type
use sign-bit = 0 for one type and optimize another type by
giving it the tag of all ones in the
high end and tagging by negation.
tag field in low end of word
use two-bit tags to represent word pointers to avoid shifting
use the tag 00 for word pointers to save the cost of tagging
use all zeros to optimize integer arithmetic
optimize integer arithmetic by adding/subtracting tag
after subtraction/addition
tag field in both ends of word
various combinations of tricks from the other two options
partitioning by magnitude (type is encoded in the magnitude
of the word used to represent it)
simple range tests to identify types
segments with statically determined types
segments with dynamically determined types
use BIBOP to identify types
identify type in the segment itself
first word of segment
last word of segment
object-pointers (untagged pointers referring to self-identifying blocks
on the heap)
combinations of this scheme with the tagged-word scheme
descriptors (two-word data elements divided into a type word and a
value word)
encoding a qualifier in a descriptor
encoding a cons cell in a descriptor
--
David Gudeman
gudeman@cs.arizona.edu
From compilers Sat Apr 24 11:43:44 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: haahr@mv.us.adobe.com (Paul Haahr)
Subject: Re: Run time optimizations
Message-ID: <93-04-096@comp.compilers>
Keywords: optimize, architecture
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
References: <93-04-069@comp.compilers> <93-04-091@comp.compilers>
Date: Sat, 24 Apr 1993 03:18:33 GMT
Approved: compilers@iecc.cambridge.ma.us
David Keppel <pardo@cs.washington.edu> writes:
> * Look at runtime values and decide how to generate code that is more
> optimized than the general case but still conservative given the runtime
> values. Examples: Bitblt [Pike, Locanthi & Reiser 85]
One data point here: i had written, back when i was in school, a 680n0 (n
>= 2) compile-on-the-fly bitblt. It ran at (roughly) memory bandwidth on
my 68020-based sun3/60, and generally moved 3-5x as many bits per second
as an ``interpretive'' bitblt. I was pretty pleased with this code. ;-)
I recently compiled it and timed it on my 68040-based NeXTstation.
the compile-on-the-fly version and the interpretive version ran
in the same amount of time, reproducible within 10%. In some cases,
the compiling bitblt was faster, in others it was slower, seemingly
due to the i-cache flushing that was going on before jumping into
the actual copying routine.
My explanations for what's going on, which are all in the form of
first impressions and could easily be completely mistaken, are:
+ The 68040 outruns the memory by a lot. No matter how much
other work you do in bitblt, performance is most directly
related to memory bandwidth and you have cycles to spare.
+ The 68040 is (internally) clock-doubled, so you already
have twice as many instruction cycles as you have chances
to get at the memory buses. That means that a loop with
two memory references that are not coming out of cache can
effectively have two other completely ``free'' instructions.
This makes loop-unrolling, one of the prime advantages of
the compile-on-the-fly code, much less important.
+ There's much more of a penalty on the 68040 for flushing
the icache: the cache is an order of magnitude larger and
almost certainly has useful code from outside the bitblt
in it, not to mention advantages stemming from repeated
bitblts being cached.
+ The icache is bigger, so you don't need the incredible
density of the compile-on-the-fly code to keep the entire
inner loop in cache. (In fact, I suspect that the inner
loop typically fits entirely in the 68020's 256-byte icache,
but i'm not sure.)
Anyway, I don't want to disparage the approach of run-time code
generation, but do want to remind people that as hardware changes,
engineering trade-offs change.
paul
From compilers Sat Apr 24 14:14:45 EDT 1993
Xref: iecc comp.compilers:4555 comp.unix.ultrix:17442
Newsgroups: comp.compilers,comp.unix.ultrix
Path: iecc!compilers-sender
From: raynor@cs.scarolina.edu (Harold Brian Raynor)
Subject: How to force gcc to dump core on FP error
Message-ID: <93-04-097@comp.compilers>
Summary: How can I make GCC coredump on FP error
Keywords: GCC, debug, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: USC Department of Computer Science
Date: Sat, 24 Apr 1993 17:27:16 GMT
Approved: compilers@iecc.cambridge.ma.us
Is there any compiler switch that will force either GCC or the MIPS C
compiler (supplied with Ultrix) to coredump when a floating point error
occurs?
I am writing a 3D Graphics package and at the moment am having a lot of
things calculated to be Infinity or NaN. This should never happen in the
program.
Without dissecting every line of code with a debugger (or putting a TON of
printfs), it would be nice if I could just get a core dump whenever this
happened. Then I would know exactly where it occurred. It does this with
integer divide by zeros, etc. There outta be a way to do it with FP (all
are floats, no doubles).
I am using a DECstation 2100 (believe it is MIPS 2000).
Any help would be greatly appreciated...
Thanks,
Brian Raynor
From compilers Mon Apr 26 12:51:52 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Mark_Kuschnir@gec-epl.co.uk
Subject: C Preprocessor
Message-ID: <93-04-098@comp.compilers>
Keywords: C, parse, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Mon, 26 Apr 1993 08:10:54 GMT
Approved: compilers@iecc.cambridge.ma.us
Is it more usual to concatenate adjacent string literals in the
preprocessing pass or afterwards ?
From compilers Mon Apr 26 12:59:03 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: simonh@swidev.demon.co.uk (Simon Huntington)
Subject: Re: predicate parsing
Message-ID: <93-04-099@comp.compilers>
Keywords: parse
Sender: compilers-sender@iecc.cambridge.ma.us
Reply-To: simonh@swidev.demon.co.uk
Organization: SoftWare Interrupt Developments
References: <93-04-077@comp.compilers>
Date: Fri, 23 Apr 1993 15:27:51 GMT
Approved: compilers@iecc.cambridge.ma.us
Regarding predicate parsing:
What exactly are predicates? I've had a look at the *excellent* PCCTS
which uses predicates, but I don't see how they can help very much. I'm
trying to write a C++ parser (something simple to start with :-)), but
predicate parsing would have to look-ahead many tokens to decipher
ambiguities wouldn't it?
I wrote a backtracking LALR parser which basically recursively 'trial'
parses (similar method to Gary Merrill, but trial parsing is specified
with the grammar). I've managed to get it to parse almost all C++,
including templates and exceptions, but it's pretty big (>950 states). I
also needed error repair which is why I **had** to write my own parser.
I'd have liked to use LL since it is much easier to understand but seemed
to run into so many problems. Firstly, I wanted it to be as fast as
possible. I can write the parser driver in assembler to read the tables.
Second, I needed error repair. LL parsers seem to have a hard time
repairing errors. Thirdly, I couldn't parse-out those lovely ambiguities
in C++ :-).
.. So, would predicates help or what? Are they for simpler things?
--
James Huntington
Software Interrupt Developments, Leeds, UK.
From compilers Mon Apr 26 13:00:06 EDT 1993
Xref: iecc comp.unix.wizards:14952 comp.sys.sun.misc:7878 comp.compilers:4558
Newsgroups: comp.unix.wizards,comp.sys.sun.misc,comp.compilers
Path: iecc!compilers-sender
From: troy@molson.ho.att.com
Subject: Finding the return address in a Sparc stack frame
Message-ID: <93-04-100@comp.compilers>
Followup-To: comp.unix.wizards
Keywords: sparc, debug
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: AT&T Bell Laboratories, Holmdel NJ
Date: Mon, 26 Apr 1993 15:31:50 GMT
Approved: compilers@iecc.cambridge.ma.us
Hi,
I'm trying to hack a debugging version of malloc that tracks who
called it for each piece of memory doled out. To determine who
called it, I want to figure out the return address by following
the argument's address to the stack frame.
My starting point is a version that runs on 386's and 3B2's,
which has the following unstructured hack. "nbytes" is the
argument to malloc.
#ifdef debug
/* reuse nextfree as pointer to caller */
struct header **argptr = (struct header **)&nbytes;
#if defined(i386)
blk->nextfree = *(argptr-1);
#else
blk->nextfree = *(argptr+1); /*u3b2*/
#endif
#endif
I'm having trouble interpreting the Sparc stack frame. I've examined
stack frames using gdb and the structure found in /usr/include/frame.h
(below), but can't find anything that looks correct in fr_savpc.
struct frame {
int fr_local[8]; /* saved locals */
int fr_arg[6]; /* saved arguments [0 - 5] */
struct frame *fr_savfp; /* saved frame pointer */
int fr_savpc; /* saved program counter */
char *fr_stret; /* struct return addr */
int fr_argd[6]; /* arg dump area */
int fr_argx[1]; /* array of args past the 6th*/
};
I also could not decipher the gdb source. I'm guessing that there is
some indirection at some level that I'm missing. I've seen references
to "stack windows", but I've yet to find a detailed explanation.
Any help with this problem would be appreciated.
-troy
troy@molson.ho.att.com
From compilers Tue Apr 27 16:46:57 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: mauney@csljon.csl.ncsu.edu (Jon Mauney)
Subject: Re: predicate parsing
Message-ID: <93-04-101@comp.compilers>
Keywords: parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: NCSU
References: <93-04-077@comp.compilers> <93-04-099@comp.compilers>
Date: Tue, 27 Apr 1993 03:43:47 GMT
Approved: compilers@iecc.cambridge.ma.us
simonh@swidev.demon.co.uk (Simon Huntington) writes:
>I'd have liked to use LL since it is much easier to understand but seemed
>to run into so many problems. Firstly, I wanted it to be as fast as
>possible. I can write the parser driver in assembler to read the tables.
>Second, I needed error repair. LL parsers seem to have a hard-time
>repairing errors. Thirdly, I couldn't parse-out those lovely ambiguities
>in C++ :-).
Can't let this go by without comment.
A) LL parsers can be as fast as anything else.
2) LL Parsers are to be *preferred* when repair is an issue.
The simple structure of the data on the stacks makes life
much easier.
I'm not going to stick my neck out on III, however.
--
Jon Mauney mauney@csc.ncsu.edu
Mauney Computer Consulting (919)828-8053
From compilers Wed Apr 28 12:40:36 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: jeremy@sw.oz.au (Jeremy Fitzhardinge)
Subject: Re: Run time optimizations
Message-ID: <93-04-102@comp.compilers>
Keywords: optimize, architecture
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Softway Pty Ltd
References: <93-04-069@comp.compilers> <93-04-096@comp.compilers>
Date: Wed, 28 Apr 1993 11:23:27 GMT
Approved: compilers@iecc.cambridge.ma.us
haahr@mv.us.adobe.com (Paul Haahr) writes:
>[runtime code gen for bitblt was a win on the 68020, no better than
>interpreted on 68040, probably due to cache effects]
>Anyway, I don't want to disparage the approach of run-time code
>generation, but do want to remind people that as hardware changes,
>engineering trade-offs change.
Good points. However, it does depend on what you are generating code for.
I spent quite a lot of time playing with Byron Rakitzis' pico
implementation, in particular an optimiser pass.
Pico was originally written at Bell Labs. It took as input a C-like
language that describes a set of transformations to be performed on an
image. Transformations can involve simple arithmatic, logical ops, polar
or rectangular coords, trig operations, conditionals, etc. It compiled
the user input into native machine code and ran it. The Bell
implementation generated Vax and WD32000 code I think; Byron's more
limited implementation generates Sparc and Mips code.
Byron's original code was a literal translation into assembly with no
attempt at optimisation. I added strength reduction, constant folding,
loop invariant motion and simple peephole optimisation. After I'd
finished, there was no way an interpretive version was going to get within
an order of magnitude of the compiled code on a Sparc Station 1. There
were all the same sorts of tradeoffs as in your case, but because there were at
least 512x512 operations (for that sized image) there was a high payoff
for reducing loop overhead, as well as operation time per pixel.
For blit-type operations, there are relatively few variations, so it can
pay just to have one function per operation, each of which encompasses all
the loops. Pico, by its nature, can't do that, so either you interpret
the user input or compile it. The runtime compiler code is not as good
as, say, gcc, but it generates quickly and its results are always going to
be faster than a gcc-compiled interpreter. (Just to blur the distinction,
I added a "portable option" that would generate C source as output, and
pass it to gcc, then map the resulting .o into the address space and run
it. This would have worked modulo bugs in Sun's runtime linking
routines.)
In conclusion, runtime code generation pays off when there are too many
possibilities at runtime to encode into the compiled source.
J
From compilers Thu Apr 29 01:53:15 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: hadad@cs.uh.edu (Ben S. Hadad)
Subject: automata book recommendations wanted
Message-ID: <93-04-103@comp.compilers>
Keywords: books, theory, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Computer Science dept., Univ. of Houston (Main Campus)
Date: Wed, 28 Apr 1993 17:01:33 GMT
Approved: compilers@iecc.cambridge.ma.us
Folk:
I am looking for a book on Automata Theory as part of a
self-study program in preparation for the Computer Science
section of the GRE. I already have the excellent book by
Hopcroft and Ullman, "Intro to Automata Theory, Languages, and
Computation", which I find admirable in its rigor and
thoroughness, but a bit short on solved problems. Any other
introductory texts out there that you folk can recommend? If
you'd e-mail me with your answer, I'll post a summary when the
answers stop coming in.
Thanks in advance for the advice.
Yours,
Ben Hadad, aka
hadad@cs.uh.edu
From compilers Thu Apr 29 01:55:44 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: isckbk@nuscc.nus.sg (Kiong Beng Kee)
Subject: Re: predicate parsing
Message-ID: <93-04-104@comp.compilers>
Keywords: parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: National University of Singapore
References: <93-04-099@comp.compilers>
Date: Wed, 28 Apr 1993 18:43:27 GMT
Approved: compilers@iecc.cambridge.ma.us
simonh@swidev.demon.co.uk (Simon Huntington) writes:
: What exactly are predicates? I've had a look at the *excellent* PCCTS
: which uses predicates, but I don't see how they can help very much. I'm
: trying to write a C++ parser (something simple to start with :-)), but
: predicate parsing would have to look-ahead many tokens to decipher
: ambiguities wouldn't it?
I see predicates as being used to help the parser resolve ambiguities. In
top-down parsing, they point out the path to take for rules with common
prefixes. A multi-token lookahead could be one way; semantic attributes
are another.
In bottom-up parsing, these predicates point out which reduction to take
(when there is a reduce-reduce conflict) or whether to reduce at all (when
there is a shift-reduce conflict).
Intuitively, one needs fewer and simpler(?) predicates in bottom-up
parsing because they are invoked later, when the context is better known,
rather than in the top-down case, where they must guide the parser from
the start.
: I wrote a backtracking LALR parser which basically recursively 'trial'
: parses (similar method to Gary Merrill, but trial parsing is specified
: with the grammar). I've managed to get it to parse almost all C++,
: including templates and exceptions, but it's pretty big (>950states). I
: also needed error repair which is why I **had** to write my own parser.
I know that xorian@solomon.technet.sg has a C++ parser, and I heard it is
small. However, I do not know how many states. They also incorporate the
FMQ method of error repair into the parser generation process, but I think
the FMQ method for bottom-up parsing (as in Fischer's book) is somewhat
less clever than the one for top-down parsing. Top-down parsing always
has the benefit when it comes to error repair since the parsing context
is already on the stack. The same cannot be said for bottom-up parsing --
so I do not see how LL parsers are disadvantaged in error repair.
--
Kiong Beng Kee
Dept of Information Systems and Computer Science
National University of Singapore
Lower Kent Ridge Road, SINGAPORE 0511
From compilers Thu Apr 29 01:56:21 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: max@nic.gac.edu (Max Hailperin)
Subject: test at top ==> test at bottom
Message-ID: <93-04-105@comp.compilers>
Keywords: optimize, question
Sender: compilers-sender@iecc.cambridge.ma.us
Reply-To: Max Hailperin <max@nic.gac.edu>
Organization: Gustavus Adolphus College, St. Peter, MN
Date: Wed, 28 Apr 1993 19:15:10 GMT
Approved: compilers@iecc.cambridge.ma.us
While working my way through the literature on certain forms of loop
optimization, it occurred to me that I haven't seen much in the literature
on transformations that will turn test-at-the-top loops into
test-at-the-bottom loops. Yet unless you've done so, you can't safely
assume the loop will be done at least once.
I know some compilers may leave the loop structurally alone but prove
separately that it is done at least once, or may move some invariants out
of loops even without being sure that the loop is done at least once.
These aren't the approaches I'm interested in.
What I *am* interested in is approaches that literally transform the
structure of the loop. The basic approach seems to be to make two copies
of the test block (one for when the loop is initially entered, one for
subsequent iterations) and then try to optimize away the copy that is done
on the first iteration. This description leaves open all sorts of
questions about how to efficiently do it and how to avoid doing it where
it would involve replicating lots of code.
If you know of any good material on this topic, would you be so kind as to
send me a citation?
Thanks.
From compilers Thu Apr 29 01:56:59 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: wendt@CS.ColoState.EDU (alan l wendt)
Subject: Chop Available for FTP
Message-ID: <93-04-106@comp.compilers>
Keywords: code, FTP, available
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Colorado State University, Computer Science Department
Date: Wed, 28 Apr 1993 20:42:17 GMT
Approved: compilers@iecc.cambridge.ma.us
The source code for the chop fast automatically-generated code generator
system is now available for anonymous ftp. Chop is described in "Fast
Code Generation Using Automatically-Generated Decision Trees", ACM SIGPLAN
'90 PLDI, and other publications cited there.
The current revision, 0.6, is interfaced with Fraser and Hanson's lcc
front end. The result is a very fast C compiler with good code
selection and no global optimization.
Project Status: Chop compiles and runs a number of small test programs on
the Vax. I'm currently updating the NS32k and 68K retargets for lcc
compatibility. After I get them working, I'll work on getting the system
to compile itself, get struct assignments working, improve the code
quality and compile speed, and run the SPEC benchmarks. That will be rev
1.0. This is rev 0.6.
Rev 0.6 is available by ftp from beethoven.cs.colostate.edu. Download the
file "~ftp/pub/chop/0.6.tar.Z".
Alan Wendt
From compilers Thu Apr 29 01:58:01 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Trevor Jenkins <tfj@apusapus.demon.co.uk>
Subject: predicate parsing
Message-ID: <93-04-107@comp.compilers>
Keywords: parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Job hunters use Parachute
References: <93-04-101@comp.compilers>
Date: Wed, 28 Apr 1993 22:43:28 GMT
Approved: compilers@iecc.cambridge.ma.us
mauney@csljon.csl.ncsu.edu writes:
> A) LL parsers can be as fast as anything else.
Indeed Fraser and ? at Bell Labs wrote a recursive descent (LL(1)) based
parser for their new C compiler. (from memory) They said "LALR parsers (ie
yacc generated ones) are too slow".
> 2) LL Parsers are to be *preferred* when repair is an issue.
> The simple structure of the data on the stacks makes life
> much easier.
It is generally acknowledged that in an LR based parser error recovery is
real messy. As to the data on the stack, that has nothing to do with the
error recovery procedure of either LL or LR parsers.
> I'm not going to stick my neck out on III, however.
I'm not a C++ theologian, but both of you seem to be confusing language
with grammar. Just because a language is described by an LR(1) grammar
does not necessarily mean that the LANGUAGE is not LL(1). If it can be
described by an LL(1) grammar but the language designer decides to publish
a grammar in the LR family, that is their choice, but it does not restrict
an implementation to using that grammar form.
Regards, Trevor.
--
Trevor Jenkins
134 Frankland Rd, Croxley Green, RICKMANSWORTH, WD3 3AU, England
email: tfj@apusapus.demon.co.uk phone: +44 (0)923 776436 radio: G6AJG
From compilers Thu Apr 29 23:39:00 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: mauney@csljon.csl.ncsu.edu (Jon Mauney)
Subject: Re: predicate parsing
Message-ID: <93-04-108@comp.compilers>
Keywords: parse
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: NCSU
References: <93-04-101@comp.compilers> <93-04-107@comp.compilers>
Date: Thu, 29 Apr 1993 12:59:05 GMT
Approved: compilers@iecc.cambridge.ma.us
mauney@csljon.csl.ncsu.edu writes:
> 2) LL Parsers are to be *preferred* when repair is an issue.
> The simple structure of the data on the stacks makes life
> much easier.
Trevor Jenkins <tfj@apusapus.demon.co.uk> writes:
>It is generally acknowledged that error recovery in an LR-based parser is
>really messy. As for the data on the stack, that has nothing to do with
>the error recovery procedure of either LL or LR parsers.
Having implemented error-repair for both LL and LR parsers, I must
disagree. Since the parse stack contains the definition of the valid
continuations of the input, I consider it to be essential to good error
repair. Even quick and dirty panic-mode recovery methods look at the
stack to the extent of skipping input to a symbol that can be read, and/or
popping to a configuration that can read the current symbol.
>I'm not a C++ theologian, but both of you seem to be confusing language
>with grammar. Just because a language is described by an LR(1) grammar
>does not necessarily mean that the LANGUAGE is not LL(1). If it can be
>described by an LL(1) grammar but the language designer decides to publish
>a grammar in the LR family, that is their choice, but it does not restrict
>an implementation to using that grammar form.
Being a proponent of LL parsing, I frequently make the same speech to
people who state that X cannot be done with LL(1). In this case, however,
numerous people have complained about the difficulty in writing an LALR
grammar for C++. Not having tried it myself (yet), I'm not going to claim
that it is easy to do with LL. (In theory, of course, C++ contains the
dangling-else problem and is not an LL language, but that one is easy to
work around.)
--
Jon Mauney mauney@csc.ncsu.edu
Mauney Computer Consulting (919)828-8053
From compilers Thu Apr 29 23:40:05 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: preston@dawn.cs.rice.edu (Preston Briggs)
Subject: Re: test at top ==> test at bottom
Message-ID: <93-04-109@comp.compilers>
Keywords: optimize, bibliography
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Rice University, Houston
References: <93-04-105@comp.compilers>
Date: Thu, 29 Apr 1993 15:24:33 GMT
Approved: compilers@iecc.cambridge.ma.us
Max Hailperin <max@nic.gac.edu> writes:
>While working my way through the literature on certain forms of loop
>optimization, it occurred to me that I haven't seen much in the literature
>on transformations that will turn test-at-the-top loops into
>test-at-the-bottom loops. Yet unless you've done so, you can't safely
>assume the loop will be done at least once.
I haven't ever seen much besides 1-liners saying "just do it." Depending
on the source language, there may be several approaches. It's possible to
get the front end to generate the desired code for DO loops and WHILE
loops (DO - WHILEs or REPEAT - UNTILs are already fine). For example,
instead of translating
a
WHILE p DO b END
c
into
a
L1: if (!p) goto L2
b
goto L1
L2: c
we'd prefer to see
a
if (!p) goto L2
L1: b
if (p) goto L1
L2: c
which (as Hailperin points out) allows much more optimization. If you've
got a language like Fortran or C that allows programmers to build loops
out of goto's, then the optimizer will have to fix loops on its own. I
can think of a couple of approaches.
1) If the loop header block has a successor block outside
the loop, then clone the loop header block.
2) Peel an entire iteration of the loop.
Let's reconsider our example, rewritten slightly
a
goto L1
L1: if (p) goto L2 else goto L3
L2: b
goto L1
L3: c
The loop header is L1 and the other block in the loop is L2.
Cloning the header gives
a
goto L1
L1: if (p) goto L2 else goto L3
L2: b
goto L1'
L1': if (p) goto L2 else goto L3
L3: c
Note that we leave the original loop header block in place and make a copy
of it (called L1'). All back edges (edges from within the loop to the
loop header) are repointed to the clone.
When we straighten the mess out a little, we get
a
if (p) goto L2 else goto L3
L2: b
if (p) goto L2 else goto L3
L3: c
which is just what's desired.
The second approach, peeling an iteration of the loop, is rather more
violent. On the other hand, if we peel and then perform global common
subexpression elimination (using value numbering, for example), then we
effectively move all loop-invariant code in front of the loop.
Peeling our example gives
a
goto L1
L1: if (p) goto L2 else goto L3
L2: b
goto L1'
L1': if (p) goto L2' else goto L3
L2': b
goto L1'
L3: c
Cleaning it up a little would give
a
if (p) goto L2 else goto L3
L2: b
L1': if (p) goto L2' else goto L3
L2': b
goto L1'
L3: c
which is obviously not the same result as achieved by cloning. However,
the higher-level goal is achieved; that is, we have a place to put
loop-invariant code (and everything else). The downside of this approach
loop-invariant code (and everything else). The downside of this approach
is that we can consume lots of space (if not in the final result, at least
in our intermediate representation).
A relevant paper is
title="Code Motion of Control Structures in High-Level Languages",
author="Ron Cytron and Andy Lowry and F. Kenneth Zadeck",
booktitle=popl13,
year=1986,
month=jan,
pages="70--85"
Preston Briggs
From compilers Thu Apr 29 23:40:48 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: oz@ursa.sis.yorku.ca (Ozan S. Yigit)
Subject: Re: automata book recommendations wanted
Message-ID: <93-04-110@comp.compilers>
Keywords: theory, bibliography
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: York U. Student Information Systems Project
References: <93-04-103@comp.compilers>
Date: Thu, 29 Apr 1993 17:09:10 GMT
Approved: compilers@iecc.cambridge.ma.us
Ben S. Hadad writes [in part]:
I am looking for a book on Automata Theory as part of a
self-study program ...
There is an excellent book by Lewis+Papadimitriou[1], and there
is the classic volume by Minsky[2].
oz
---
[1] Harry R. Lewis and Christos H. Papadimitriou.
Elements of the theory of computation.
Prentice-Hall, Englewood Cliffs, N.J.
1981.
[2] Marvin L. Minsky.
Computation: finite and infinite machines.
Prentice-Hall, Englewood Cliffs, N.J.
1967.
From compilers Thu Apr 29 23:55:39 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: oz@ursa.sis.yorku.ca (Ozan S. Yigit)
Subject: parsing references [req]
Message-ID: <93-04-111@comp.compilers>
Keywords: parse, question, comment
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: York U. Student Information Systems Project
Date: Thu, 29 Apr 1993 17:17:40 GMT
Approved: compilers@iecc.cambridge.ma.us
I would like to have a good bibliography on parsing. I have various
bits and pieces of references, but I would prefer something much more
comprehensive. If you have such a bibliography [in any format] you
would like to share, I would much appreciate it.
please reply via e-mail. thanks in advance.
oz
---
electric: oz@sis.yorku.ca, ph:[416] 736 2100 x 33976
[I'm always happy to add bibliographies to the compilers archive. Feel
free to send 'em in. -John]
From compilers Fri Apr 30 00:05:28 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Boum Belkhouche <bb@rex.cs.tulane.edu>
Subject: CFP: IEEE CS Computer Languages Conf., France May 1994
Message-ID: <93-04-112@comp.compilers>
Keywords: CFP
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Thu, 29 Apr 1993 17:06:03 GMT
Approved: compilers@iecc.cambridge.ma.us
CALL FOR PAPERS
IEEE Computer Society 1994 International Conference on Computer Languages
Toulouse, France, May 16-19, 1994
Sponsored by the IEEE Computer Society Technical Committee on Computer
Languages In cooperation with ACM SIGPLAN and IRIT
Areas of particular interest include but are not limited to:
Implementation/optimization
Theory/semantics
Abstract interpretation
Dataflow analysis
Partial evaluation
Parallel/distributed languages
Object-oriented languages
Functional/logic languages
Multiparadigm languages
Real-time/fault-tolerant languages
Reqm'ts/design/specification languages
Visual/graphical languages
Application-specific languages
Papers should be at most 20 double-spaced pages (10 pt on 16 pt) in
length, should have an abstract of approx. 250 words, and a separate cover
page indicating the title, authors, and a list of keywords. Papers will be
judged on the basis of their relevance, significance, originality,
correctness, and clarity. Accepted papers will appear in a full
proceedings published by the IEEE Computer Society Press, for which
authors will be expected to sign a copyright release form. A selection of
best papers will be published in a journal.
Submit 5 copies of papers or 3 copies of panel session proposals by
September 24, 1993, to Dr. Henri Bal, program committee chair. Submissions
should be accompanied by a cover letter that includes a return mailing
address, telephone number and email address. Authors will be notified of
acceptance or rejection by December 8, 1993. Camera ready versions of
accepted papers will be due January 12, 1994.
For more information contact: bb@cs.tulane.edu
or Dr. Henri Bal
Vrije Universiteit
Dept. of Mathematics and Computer Science
De Boelelaan 1081a
1081 HV Amsterdam, The Netherlands
+31 20 5485574, bal@cs.vu.nl
Fax: +31 20 6427705
From compilers Fri Apr 30 00:06:44 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: andrewd@winnie.cs.adelaide.edu.au (Andrew Dunstan,,2285592,)
Subject: Re: predicate parsing
Message-ID: <93-04-113@comp.compilers>
Keywords: parse, performance
Sender: compilers-sender@iecc.cambridge.ma.us
Reply-To: andrewd@cs.adelaide.edu.au
Organization: The University of Adelaide
References: <93-04-101@comp.compilers>
Date: Thu, 29 Apr 1993 12:37:13 GMT
Approved: compilers@iecc.cambridge.ma.us
> A) LL parsers can be as fast as anything else.
This is an understatement. Other things being equal, they are faster than
bottom-up parsers. (See Fischer & LeBlanc for an analysis.)
> 2) LL Parsers are to be *preferred* when repair is an issue.
> The simple structure of the data on the stacks makes life
> much easier.
Right, again, but for the wrong reason. Intuitively, LL parsers provide
better error recovery possibilities because they are predictive, i.e. you
know where you are going, whereas in LR parsers, you don't always know
for sure where you are going till you've got there. The power of this
parsing method comes precisely from the ability to delay this decision.
[III is how to deal with ambiguities in C++]
Let's go back to the start of this thread. Using predicates in parsing,
either predictive or not, can add to the power of the parsing method.
Perhaps more importantly, they can reduce the extent to which the grammar
must be massaged in order to fit the parsing method. This is significant
in difficult grammars such as C++ and Ada.
At the GNAT project at NYU (which will produce GNU Ada), Robert Dewar is,
as I understand it, using a recursive-descent parser with backtracking,
written in Ada83, which has excellent recovery characteristics and is
blindingly fast. So these methods are not just of "academic" interest.
--
# Andrew Dunstan
# net:
# adunstan@steptoe.adl.csa.oz.au
# or: andrewd@cs.adelaide.edu.au
From compilers Fri Apr 30 00:07:31 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: psychlo@bagabolts.eecs.umich.edu (John-David Wellman)
Subject: Code Scheduling for Multi-Issue Machines
Message-ID: <93-04-114@comp.compilers>
Keywords: optimize, architecture, question
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: University of Michigan EECS Dept., Ann Arbor
Date: Thu, 29 Apr 1993 21:22:27 GMT
Approved: compilers@iecc.cambridge.ma.us
Hi,
We are working on several projects here, and many of them would benefit
from some insights into the compilation of code for a multi-issue
architecture. We understand that most compiler work is pretty generally
applicable, but there are some issues in the jump from a single-issue
machine to a multiple-issue machine, and yet we are having some difficulty
finding a good set of publications which address these issues. Thus, this
is a general call for either references which discuss the generation of
code schedules for a superscalar or multi-issue machine (and particularly
one which can have more than two instructions issue, and is statically
scheduled), or general insights and knowledge (experiences) about this
subject.
Thank you for any help you can provide. Please either post or send
email (or both, if you'd like).
---
John-David Wellman -- (psychlo@eecs.umich.edu)
EECS Graduate Research Assistant
The University of Michigan
From compilers Fri Apr 30 00:11:59 EDT 1993
Xref: iecc comp.object:10055 comp.compilers:4573 comp.benchmarks:3123 comp.databases.object:111
Newsgroups: comp.object,comp.compilers,comp.benchmarks,comp.databases.object
Path: iecc!compilers-sender
From: "Jeff P. Lankford" <jpl@nrtc.northrop.com>
Subject: Have CORBA IDL compiler -- will trade 4 regression test suite
Message-ID: <93-04-115@comp.compilers>
Followup-To: comp.object
Keywords: benchmarks
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Northrop Research and Technology Center
Distribution: comp
Date: Thu, 29 Apr 1993 22:04:26 GMT
Approved: compilers@iecc.cambridge.ma.us
I'm finishing up an IDL compiler and am seeking a test set of IDL
"programs". The examples in the CORBA spec manual are skimpy, and
I'd like some hefty input to really stress the compiler. Any
suggestions? Please reply directly via e-mail.
jpl
From compilers Fri Apr 30 00:19:25 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: gudeman@cs.arizona.edu (David Gudeman)
Subject: Summary: Representations of Dynamic Type Information
Message-ID: <93-04-116@comp.compilers>
Keywords: types, summary
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: U of Arizona CS Dept, Tucson
References: <93-04-095@comp.compilers>
Date: Fri, 30 Apr 1993 02:31:35 GMT
Approved: compilers@iecc.cambridge.ma.us
Well, thanks for all of the responses. I got several new ideas, and got
reminded of several things I had forgotten to include. It would be a
little impractical for me to "summarize" the entire report here, since it
is 26 pages long. However, there is a fairly complete draft available (at
26 pages it ought to be fairly complete :-), and if anyone is interested
in reading it, it is available in compressed postscript by anonymous ftp
from cs.arizona.edu in the file "/usr/ftp/janus/jc/tags.ps.Z". If anyone
has trouble getting this or needs an uncompressed file, or wants the latex
source, let me know. I haven't had time to discuss all of the ideas I got
from the net, so I've included draft notes in the text to give a hint
about these (and I've tried to credit the people who wrote me). A few of
the draft notes are essentially the remainders of my outline. Since I
have to go on to other work, the report is probably going to remain in
this unfinished state for a while.
For those who are curious, but not curious enough to read a 26-page
report, here is a rough outline of the methods (this does not include all
of the tricks and special cases):
tagged-words (type information is in the machine word)
tag fields (word is broken up into tag and data fields)
tag field in high end (most-significant bits) of word
use tag of all zeros for one type to avoid tagging cost
negative ints get a tag of all ones, non-negative ints
get a tag of all zeros
use sign bit for one type
use sign-bit = 0 for one type and optimize another type by
giving it the tag of all ones in the
high end and tagging by negation.
tag field in low end of word
use n-bit tags to represent pointers to data aligned on 2^n-byte boundaries
this allows tagging without shifting
use a tag of all zeros to avoid tagging and untagging
use tags that contain only one non-zero bit to make testing faster
use all zeros to optimize integer arithmetic
optimize integer arithmetic by adding/subtracting tag
after subtraction/addition
tag field in both ends of word
various combinations of tricks from the other two options
partitioning by pattern (type is encoded in the representation of the value,
no boxing/unboxing is done, usually words are
partitioned into ranges of numbers they rep.)
simple range tests to identify types
segments with statically determined types (a segment is a range of
numbers that can be identified by an
initial field in the bit-patterns used to
represent them).
segments with dynamically determined types
use BIBOP (table indexed by segments) to identify types
identify type in the segment itself
first word of segment
last word of segment
on a segmented architecture like the 80x86, one location has
many addresses so the (machine) segment
part of the address can be set to the
(representation) segment of the data type
being represented, and the offset can be
set in such a way that any object can be in
any machine segment.
on machines that ignore the high bits of a pointer, these bits can
be used for free boxing/unboxing
make everything a legal IEEE float, using the NaN bit patterns to
encode all other types of data.
object-pointers (untagged pointers referring to self-identifying blocks
on the heap)
combinations of this scheme with the tagged-word scheme
descriptors (two-word data elements divided into a type word and a
value word)
encoding a qualifier (address+length representation of a sequence)
in a descriptor
encoding a cons cell in a descriptor
direct representation of floats
segregated type codes (type information is kept elsewhere)
type information for globals kept in a global area
type information for locals kept in stack frame
type information kept in header of aggregate objects
Thanks to Paul Tarau, Richard Fateman, Mikael Pettersson, Nick Thompson,
Andrew Wright, Benjamin Goldberg, Aubrey Jaffer, Eric Benson, Marc
Brandis, Kelvin Nilsen, Stavros Macrakis, Alexandr Kopilovich, Pekka
Pirinen, William Griswold, David Keppel, Marc-Michael Brandis, Mark
Tillotson, Richard Brooksby, cowan@magpie.linknet.com, Hintermeier Claus,
Hendrik Boom, and Tim Lindholm for your help.
--
David Gudeman
gudeman@cs.arizona.edu
From compilers Fri Apr 30 12:02:07 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: Pat Terry <CSPT@giraffe.ru.ac.za>
Subject: COCO/R bug fix
Message-ID: <93-04-117@comp.compilers>
Keywords: tools, LL(1)
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: Compilers Central
Date: Fri, 30 Apr 1993 20:37:26 GMT
Approved: compilers@iecc.cambridge.ma.us
John Gough has reported a bug in Coco/R (the LL(1) parser generator that
originated from Hanspeter Mossenbock in Zurich). It exists in my MS-DOS
port (version 1.27), which I know several readers of this group have used,
and I suspect exists in the Oberon version too. A simple input that will
set it off is the following:
COMPILER E
PRODUCTIONS
E = % | % .
END E.
The problem occurs when there are unrecognisable symbols (like %) in the
alternatives for a production. It does not always cause trouble; when it
does, the system loops infinitely.
The fix is as follows:
In CRP.MOD the code for PROCEDURE Term currently allows one to leave the
procedure without assigning proper values to the parameters gL and gR if
an unrecognisable terminal is encountered. A simple extra line sorts that
out:
PROCEDURE Term (VAR gL, gR: INTEGER);
VAR
gL2, gR2: INTEGER;
BEGIN
gL := 0; gR := 0; (* <= =============== add line here *)
IF In(symSet[2], sym) THEN (* This is the DOS version; Oberon
one is slightly different here *)
Factor(gL, gR);
The simplest fix is to alter CR.ATG. If you do this as below you can
generate a new CRP.MOD file by a bootstrap process:
CR.ATG : we need to get the extra line generated ======
so the attribute grammar needs fixing |
|
Term<VAR gL, gR: INTEGER> (.VAR |
gL2, gR2: INTEGER;.) |
= (.gL := 0; gR := 0.) <----- add here
( Factor <gL, gR>
{ Factor <gL2, gR2> (.CRT.ConcatSeq(gL, gR, gL2, gR2).)
}
| (.gL := CRT.NewNode(CRT.eps, 0, 0); gR := gL.)
).
Pat Terry, Computer Science, Rhodes University, GRAHAMSTOWN 6140, RSA
cspt@alpha.ru.ac.za or cspt@giraffe.ru.ac.za or pdterry@psg.com
From compilers Fri Apr 30 12:02:40 EDT 1993
Newsgroups: comp.compilers
Path: iecc!compilers-sender
From: jourdan@minos.inria.fr (Martin Jourdan)
Subject: Re: Dynamic Slices...
Message-ID: <93-04-118@comp.compilers>
Keywords: debug
Sender: compilers-sender@iecc.cambridge.ma.us
Organization: INRIA, Rocquencourt, France
References: <93-04-078@comp.compilers>
Date: Fri, 30 Apr 1993 15:09:34 GMT
Approved: compilers@iecc.cambridge.ma.us
Peter Fritzson and his team at Linköping Univ. in Sweden have done quite
a bit of work on dynamic slicing. In particular, they have recently
published a survey on static and dynamic slicing which might be of
interest to you. Contact Peter (paf@ida.liu.se).
Martin Jourdan <Martin.Jourdan@inria.fr>, INRIA, Rocquencourt, France.
Phone +33-1-39-63-54-35, fax +33-1-39-63-53-30