- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!usc!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!yale!gumby!destroyer!ubc-cs!unixg.ubc.ca!kakwa.ucs.ualberta.ca!alberta!arms
- From: arms@cs.UAlberta.CA (Bill Armstrong)
- Subject: Re: Neural Nets and Brains
- Message-ID: <arms.712337444@spedden>
- Sender: news@cs.UAlberta.CA (News Administrator)
- Nntp-Posting-Host: spedden.cs.ualberta.ca
- Organization: University of Alberta, Edmonton, Canada
- References: <BILL.92Jul23135614@ca3.nsma.arizona.edu> <arms.711935064@spedden> <50994@seismo.CSS.GOV> <2905@mdavcr.mda.ca>
- Date: Tue, 28 Jul 1992 15:30:44 GMT
- Lines: 43
-
- garry@mdavcr.mda.ca (Gary Holmen) writes:
-
- >After reading this message I decided to go home and run the ALN software
- >on my IBM PC. It is only a 25 MHz 386 with 2 Meg of memory and the
- >package seemed to work fine. I trained 3 trees on the times tables and
- >it finished in about 45 mins. The answer I received was 1*7 which is about
- >the same accuracy I've seen from the BP algorithms I've used.
-
- He continues, referring to Mike Black's test:
-
- >My guess is that the problem with your answer had to do with the
- >random_walk portion of the algorithm rather than the ALN's themselves.
- >I believe that this has been noted for modification in future releases.
-
- The fact is that ALNs do not interpolate. How could they if they
- don't do arithmetic? ALNs generalize by being insensitive to
- perturbations of the inputs. The answer you obtained is just what I
- would expect: the answer to 1*6, which was missing from the training
- set, was generalized from 1*7, which was in it. Mike's problem, where
- he obtained ~35 (was it?), is possibly due to a bad coding of the
- output variable (not enough bits).
-
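- To make the point about insensitivity concrete, here is a toy sketch
- (mine, not code from the ALN package): a tree of logic gates that
- happens to ignore the low-order bit of its input code cannot tell the
- code for 6 (0110) from the code for 7 (0111), so a query on the
- untrained 1*6 returns exactly what was learned for 1*7.
- 
-     /* toy.c -- a hand-built gate tree that ignores input bit 0 */
-     #include <stdio.h>
- 
-     typedef enum { AND, OR, LEAF } Kind;
- 
-     typedef struct Node {
-         Kind kind;
-         int bit;                    /* for LEAF: which input bit to read */
-         struct Node *left, *right;  /* for AND/OR */
-     } Node;
- 
-     int eval(const Node *n, unsigned x)
-     {
-         if (n->kind == LEAF) return (x >> n->bit) & 1u;
-         if (n->kind == AND)  return eval(n->left, x) && eval(n->right, x);
-         return eval(n->left, x) || eval(n->right, x);
-     }
- 
-     int main(void)
-     {
-         Node b1 = { LEAF, 1, 0, 0 }, b2 = { LEAF, 2, 0, 0 };
-         Node root = { AND, 0, &b1, &b2 };  /* tests only bits 1 and 2 */
- 
-         printf("tree(6) = %d\n", eval(&root, 6));  /* 0110 -> 1 */
-         printf("tree(7) = %d\n", eval(&root, 7));  /* 0111 -> 1: same */
-         return 0;
-     }
- 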
- By the way, just 3 trees won't do it. I suppose you mean 3 trees per bit
- of the output code?
-
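- (For what it's worth, a hypothetical sketch of that decoding: with
- several trees voting on each output bit, the integer answer is the
- bit-weighted sum of the majority votes. The names and the voting
- scheme here are my assumptions, not the ALN release; tree_output is
- a stand-in for evaluating a trained tree.)
- 
-     #include <stdio.h>
- 
-     #define OUT_BITS      7  /* 7 bits cover products up to 10*10 = 100 */
-     #define TREES_PER_BIT 3  /* assumed: 3 trees vote on each output bit */
- 
-     /* Stand-in for the t-th trained tree for bit b on coded input x;
-      * here it just "knows" the product so the decoding can be run. */
-     int tree_output(int b, int t, unsigned x)
-     {
-         unsigned product = (x >> 4) * (x & 0xFu);  /* two 4-bit operands */
-         (void)t;                                   /* all trees agree here */
-         return (product >> b) & 1u;
-     }
- 
-     /* Majority-vote each bit, then weight bit b by 2^b. */
-     unsigned decode(unsigned x)
-     {
-         unsigned answer = 0;
-         int b, t, votes;
-         for (b = 0; b < OUT_BITS; b++) {
-             for (votes = 0, t = 0; t < TREES_PER_BIT; t++)
-                 votes += tree_output(b, t, x);
-             if (2 * votes > TREES_PER_BIT)
-                 answer |= 1u << b;
-         }
-         return answer;
-     }
- 
-     int main(void)
-     {
-         unsigned x = (1u << 4) | 7u;  /* code the query 1*7 */
-         printf("decoded answer for 1*7: %u\n", decode(x));
-         return 0;
-     }
- 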
- In the next version of our software, we are scrapping the random walk
- technique. It works, but it doesn't fit into a safe design
- methodology. Interpolation will be handled by smoothing the "blocky"
- functions that ALNs produce. (There is a close relation to BP here.
- We will use something like the derivative of the usual sigmoid as a
- kernel, but chopped to a bounded support and made infinitely often
- differentiable. The use of a finite support makes the result a lot
- faster to compute.)
-
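- In case the kernel idea is unclear, here is a minimal sketch of the
- smoothing step as described above: a C-infinity "bump" kernel of
- bounded support (which looks much like a chopped sigmoid derivative)
- convolved with a blocky function. The constants and the quadrature
- are my own choices for illustration, not the forthcoming release.
- 
-     /* smooth.c -- compile with: cc smooth.c -lm */
-     #include <stdio.h>
-     #include <math.h>
- 
-     #define HALF_WIDTH 1.0  /* support is [-HALF_WIDTH, HALF_WIDTH] */
-     #define STEPS      200  /* quadrature steps for the convolution */
- 
-     /* k(u) = exp(-1/(1 - u*u)) on (-1,1), 0 outside: infinitely often
-      * differentiable, every derivative vanishing at u = +/-1. */
-     double kernel(double u)
-     {
-         if (u <= -1.0 || u >= 1.0) return 0.0;
-         return exp(-1.0 / (1.0 - u * u));
-     }
- 
-     /* A "blocky" function of the kind an ALN produces: a unit step. */
-     double blocky(double x) { return x < 0.0 ? 0.0 : 1.0; }
- 
-     /* Normalized convolution over the finite support (midpoint rule);
-      * the bounded support is what keeps this cheap: only a small
-      * window of the blocky function contributes at each point. */
-     double smoothed(double x)
-     {
-         double num = 0.0, den = 0.0;
-         int i;
-         for (i = 0; i < STEPS; i++) {
-             double u = -1.0 + (2.0 * i + 1.0) / STEPS;
-             double w = kernel(u);
-             num += w * blocky(x - HALF_WIDTH * u);
-             den += w;
-         }
-         return num / den;
-     }
- 
-     int main(void)
-     {
-         int i;
-         for (i = 0; i <= 8; i++) {
-             double x = -2.0 + 0.5 * i;
-             printf("x = %5.2f  blocky = %g  smoothed = %g\n",
-                    x, blocky(x), smoothed(x));
-         }
-         return 0;
-     }
- 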
- > ... Look at them objectively and learn their advantages
- >and disadvantages so that we can use neural networks to their full potential.
-
- Thanks. One can't ask for more.
- --
- ***************************************************
- Prof. William W. Armstrong, Computing Science Dept.
- University of Alberta; Edmonton, Alberta, Canada T6G 2H1
- arms@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071
-