Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!destroyer!ubc-cs!unixg.ubc.ca!kakwa.ucs.ualberta.ca!alberta!arms
From: arms@cs.UAlberta.CA (Bill Armstrong)
Subject: Re: Neural Nets and Brains
Message-ID: <arms.711935064@spedden>
Sender: news@cs.UAlberta.CA (News Administrator)
Nntp-Posting-Host: spedden.cs.ualberta.ca
Organization: University of Alberta, Edmonton, Canada
References: <1992Jul21.162033.57397@cc.usu.edu> <1992Jul23.013755.18847@hubcap.clemson.edu> <arms.711907358@spedden> <BILL.92Jul23135614@ca3.nsma.arizona.edu>
Date: Thu, 23 Jul 1992 23:44:24 GMT
Lines: 76

bill@nsma.arizona.edu (Bill Skaggs) writes:

>arms@cs.UAlberta.CA (Bill Armstrong) writes:

> >First off, isn't it rather strange that the most widespread
> >artificial model of neural operation, the multilayer perceptron,
> >uses continuous quantities on its connections, while the dendrites
> >and axons of neurons use "zero or one" type action potentials?

> As a matter of fact, the
> McCulloch-Pitts model used binary neurons.

Right, I should have said neural learning, not neural operation.

> >Until physiological psychologists start studying adaptive logic
> >networks, can anyone expect much progress on understanding the
> >brain?

> ... There is no reason to think that continuous models are
> incapable of shedding any light on nervous systems. For theoretical
> work they have some real advantages -- among the most important being
> that they make possible certain learning rules, such as backprop, that
> cannot be used with binary models.

The theory of multilayer perceptrons as used in BP is horrifying. You
need Kolmogorov's superposition theorem just to argue that such nets
can approximate arbitrary continuous functions, and nobody can apply
that theorem in practice. In contrast, every logic designer who has
heard of CNF and DNF finds it *obvious* that an adaptive logic net can
synthesize any Boolean function. So BP nets lose in a BIG way on the
theory side, sorry.
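
To make the DNF point concrete, here is a toy C sketch of my own (an
illustration, not code from atree; the majority function is just an
arbitrary example). Any N-input Boolean function given as a truth
table is realized by ORing together one AND-term (minterm) per row
where the table reads 1:

#include <stdio.h>

/* Toy DNF synthesis (illustration only, not atree code): any N-input
 * Boolean function, given as a truth table, is realized by ORing one
 * AND-term (minterm) per row where the function is 1. */

#define N 3

/* Truth table of an example function: majority(x2,x1,x0).
 * The row index is the input pattern read as a binary number. */
static const int table[1 << N] = { 0, 0, 0, 1, 0, 1, 1, 1 };

int dnf_eval(const int x[N])
{
    int row, i, result = 0;
    for (row = 0; row < (1 << N); row++) {
        int minterm = table[row];      /* skip rows where f is 0 */
        for (i = 0; i < N && minterm; i++)
            minterm = (x[i] == ((row >> i) & 1));
        result = result || minterm;    /* OR the minterms together */
    }
    return result;
}

int main(void)
{
    int x[N] = { 1, 0, 1 };            /* x0=1, x1=0, x2=1 */
    printf("f = %d\n", dnf_eval(x));   /* majority of 1,0,1 -> 1 */
    return 0;
}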

As for learning, BP is just gradient descent, and it leads to networks
that are *extremely* inefficient -- like trying to write C programs
with no control statements like "if" or "for" or "while". For
adaptive algorithms in logic networks that are far superior to
backprop, you can take a look at the atree release 2.6 adaptive logic
network simulator.
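
To see what "just gradient descent" means, here is a minimal C sketch
of a backprop-style update for a single sigmoid unit (my own
illustration; the weights, learning rate, and training pair are
invented for the example). The whole learning rule is one chain-rule
gradient and one step downhill:

#include <stdio.h>
#include <math.h>

#define NIN 2

double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

int main(void)
{
    double w[NIN] = { 0.1, -0.2 };   /* invented initial weights */
    double bias = 0.0, rate = 0.5;   /* invented learning rate */
    double x[NIN] = { 1.0, 0.0 };    /* one training input ...   */
    double target = 1.0;             /* ... and its desired output */
    int i, step;

    for (step = 0; step < 100; step++) {
        double z = bias, y, delta;
        for (i = 0; i < NIN; i++)
            z += w[i] * x[i];
        y = sigmoid(z);
        /* gradient of (1/2)(y - target)^2 w.r.t. z, by the chain rule */
        delta = (y - target) * y * (1.0 - y);
        for (i = 0; i < NIN; i++)
            w[i] -= rate * delta * x[i];   /* descend the gradient */
        bias -= rate * delta;
    }
    printf("output after training: %f\n",
           sigmoid(bias + w[0] * x[0] + w[1] * x[1]));
    return 0;
}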

Sorry, but after you have looked at ALN software, you may no longer
feel non-logical nets have any real advantages at all.
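
For readers who have never seen a logic network, here is a toy C
sketch of the flavor (my own sketch, *not* atree's data structures or
its adaptation algorithm): a binary tree whose internal nodes are AND
or OR gates and whose leaves read input bits, possibly complemented.
An ALN adapts the node functions during training; here they are
simply fixed by hand:

#include <stdio.h>

enum kind { LEAF, AND_NODE, OR_NODE };

struct node {
    enum kind kind;
    int var;                 /* LEAF: index of the input bit   */
    int complemented;        /* LEAF: read the bit inverted?   */
    struct node *left, *right;
};

int eval(const struct node *n, const int x[])
{
    int v;
    switch (n->kind) {
    case LEAF:
        v = x[n->var];
        return n->complemented ? !v : v;
    case AND_NODE:
        return eval(n->left, x) && eval(n->right, x);
    default: /* OR_NODE */
        return eval(n->left, x) || eval(n->right, x);
    }
}

int main(void)
{
    /* hand-built tree computing (x0 AND NOT x1) OR x2 */
    struct node l0 = { LEAF, 0, 0, 0, 0 };
    struct node l1 = { LEAF, 1, 1, 0, 0 };
    struct node l2 = { LEAF, 2, 0, 0, 0 };
    struct node a  = { AND_NODE, 0, 0, &l0, &l1 };
    struct node r  = { OR_NODE,  0, 0, &a,  &l2 };
    int x[3] = { 1, 0, 0 };
    printf("f(1,0,0) = %d\n", eval(&r, x));   /* prints 1 */
    return 0;
}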

*****
Here is a typical session with ftp, aimed at retrieving the adaptive
logic network (ALN) software. Unix prompts are "%", ftp prompts are
"ftp>", and editorial comments are in "[]".
% ftp [start ftp]
ftp> open menaik.cs.ualberta.ca [or "open 129.128.4.241"]
Name (menaik.cs.ualberta.ca:arms): anonymous
Password: [type your login id here, if you like]
ftp> cd pub
ftp> type binary [we are dealing with a compressed file]
ftp> get atree2.tar.Z
ftp> quit
% uncompress atree2.tar.Z
% tar -xvf atree2.tar [and there you have it; similarly for
  atree2.ps.Z (a compressed PostScript document),
  alnlpst-1991.Z (the alnl mailing list archive for 1991), and
  atre26.exe (the atree release 2.6 code for use with Windows 3.0 or
  3.1 on the IBM-PC and compatibles). Release 2.5 is now obsolete.]

NB: If you want to economize on transmission time, you can omit the source
code by getting a26exe.exe instead of atre26.exe.

--
***************************************************
Prof. William W. Armstrong, Computing Science Dept.
University of Alberta; Edmonton, Alberta, Canada T6G 2H1
arms@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071