- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!tsioutsias-dimitris
- From: tsioutsias-dimitris@CS.YALE.EDU (Dimitris Tsioutsias)
- Subject: What can NN really do.
- Message-ID: <1992Jul23.024136.6125@cs.yale.edu>
- Sender: news@cs.yale.edu (Usenet News)
- Nntp-Posting-Host: systemsx-gw.cs.yale.edu
- Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
- Date: Thu, 23 Jul 1992 02:41:36 GMT
- Lines: 30
-
- I've been reading the postings in this newsgroup for some time, and
- I have the feeling that many people have turned (or been forced to
- turn) to NNs without realizing what actually can and cannot be done
- with them. Many seem to think that if you set up a bp net and feed
- it some data, you have solved all your problems!
-
- In fact, NNs are just a test bed for a far more complex (and thus
- elusive, rules and all) prototype of knowledge acquisition: the brain.
- Just as powerful supercomputers run simulations of the evolution of
- the Universe based on well-established physical laws, but on a much
- smaller and simplified scale, NNs are merely tools for emulating the
- behaviour of the living tissue that makes up the brain.
-
- The point to understand here is that what matters is not so much
- whether you use 10 or 100 units here or there, but the unraveling of
- the RULES hidden behind them: HOW these neurons interact with each
- other in such a massively parallel and complicated way, as happens in
- the brain itself.
-
- So, try to experiment with the rules and dynamics of the nets, and
- not merely with a good or better data set. After all, the human brain
- can generalize easily, can be trained on poor training sets, and can
- even learn more from "false" data, which enables it to extrapolate,
- analyze and LEARN. Don't lose the forest behind the first few trees!
-
-
- ->dimitris
-
-
-