Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!psinntp!news.columbia.edu!cunixb.cc.columbia.edu!rs69
From: rs69@cunixb.cc.columbia.edu (Rong Shen)
Subject: Re: How to train a lifeless network (of "silicon atoms")?
Message-ID: <1992Nov22.215822.7238@news.columbia.edu>
Sender: usenet@news.columbia.edu (The Network News)
Nntp-Posting-Host: cunixb.cc.columbia.edu
Reply-To: rs69@cunixb.cc.columbia.edu (Rong Shen)
Organization: Columbia University
References: <1992Nov21.002654.13198@news.columbia.edu> <1992Nov22.182325.24185@dxcern.cern.ch>
Date: Sun, 22 Nov 1992 21:58:22 GMT
Lines: 19

In article <1992Nov22.182325.24185@dxcern.cern.ch> block@dxlaa.cern.ch (Frank Block) writes:

(junk deleted)

>What you normally do during training is to present (taking your example) the
>words 'hello' and 'goodbye' alternately. You should not train the net first
>just on one and then, when it has learned to recognize it, on the other.
>The training is a statistical process which in the end (let's hope) converges
>to a good set of weights (a compromise which recognizes all patterns in an
>optimal way).

Thanks, Frank.

If I feed the words alternately, how would I train the network
to recognize 99,999 words? Wouldn't training on the 99,999th word erase
what the net learned for the 1st?

--
rs69@cunixb.cc.columbia.edu