Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!elroy.jpl.nasa.gov!NewsWatcher!user
From: sgrenander@NASAMAIL.JPL.NASA.GOV (Sven Grenander)
Subject: Re: How to train a lifeless network (of "silicon atoms")?
Message-ID: <sgrenander-231192125621@128.149.34.44>
Followup-To: comp.ai.neural-nets
Sender: news@elroy.jpl.nasa.gov (Usenet)
Nntp-Posting-Host: 128.149.34.44
Organization: Jet Propulsion Laboratory
References: <1992Nov21.002654.13198@news.columbia.edu> <1992Nov22.182325.24185@dxcern.cern.ch> <1992Nov22.215822.7238@news.columbia.edu>
Date: Mon, 23 Nov 1992 21:01:45 GMT
Lines: 30

In article <1992Nov22.215822.7238@news.columbia.edu>,
rs69@cunixb.cc.columbia.edu (Rong Shen) wrote:
>
> In article <1992Nov22.182325.24185@dxcern.cern.ch> block@dxlaa.cern.ch (Frank Block) writes:
>
> (junk deleted)
>
> >What you normally do during training is to present (taking your example) the
> >words 'hello' and 'goodbye' alternately. You should not train the net first
> >just on one and then, when it has learned to recognize it, on the other.
> >The training is a statistical process which in the end (let's hope) converges
> >to a good set of weights (a compromise which recognizes all patterns in an
> >optimal way).
>
> Thanks, Frank.
>
> If I feed the words alternately, how would I train the network
> to recognize 99,999 words? Would not the 99,999th word erase the 1st
> word?
>
> --
> rs69@cunixb.cc.columbia.edu

It would if your learning rate is too high. Using BrainMaker I have found
that I have to reduce the learning (training?) rate from the default 1.0
to as little as 0.01 when presenting a large training set (~50,000 separate
facts). I suspect that if all these training facts were as different as
'hello' and 'goodbye', the learning rate might have to be reduced even more.
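
To see both effects in a toy setting, here is a rough sketch in
Python/NumPy (a plain delta rule on made-up 8-bit word encodings, not
BrainMaker; the vectors, rates, and presentation counts below are
arbitrary illustrations):

import numpy as np

rng = np.random.default_rng(0)

# Made-up, overlapping bit patterns standing in for two word encodings.
patterns = {
    "hello":   np.array([1., 1., 0., 1., 0., 0., 1., 0.]),
    "goodbye": np.array([1., 0., 0., 1., 1., 0., 1., 1.]),
}
targets = {
    "hello":   np.array([1., 0.]),
    "goodbye": np.array([0., 1.]),
}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(presentations, lr):
    # Single-layer sigmoid net trained with the delta rule.
    W = rng.normal(0.0, 0.1, size=(2, 8))
    for word in presentations:
        x, t = patterns[word], targets[word]
        y = sigmoid(W @ x)
        W += lr * np.outer((t - y) * y * (1 - y), x)
    return W

def report(label, W):
    for word in patterns:
        y = sigmoid(W @ patterns[word])
        print(f"{label:12s} {word:8s} -> {np.round(y, 2)}")

# Blocked: all of 'hello' first, then all of 'goodbye'.
report("blocked", train(["hello"] * 5000 + ["goodbye"] * 5000, lr=1.0))
# Interleaved, with a much smaller learning rate.
report("interleaved", train(["hello", "goodbye"] * 5000, lr=0.01))

In the blocked run the 'goodbye' updates drag the shared weights away
from what 'hello' needed, which is exactly the erasure being asked
about; the interleaved run with the small rate settles into the
compromise Frank described. The bigger and more varied the training
set, the smaller the rate that seems to be needed.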

-Sven