- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!munnari.oz.au!sol.deakin.OZ.AU!fulcrum.oz.au!steve
- From: steve@fulcrum.oz.au (Steve Taylor)
- Subject: Re: How to train a lifeless network (of "silicon atoms")?
- Message-ID: <1992Nov23.232129.13840@fulcrum.oz.au>
- Organization: The Fulcrum Consulting Group
- References: <1992Nov21.002654.13198@news.columbia.edu>
- Date: Mon, 23 Nov 1992 23:21:29 GMT
- Lines: 25
-
- rs69@cunixb.cc.columbia.edu (Rong Shen) writes:
-
- >Your Highness:
- Hey, I like it.
-
- > Suppose you have a neural network and you want to train it to
- >perform a task; for the moment, let's say the task is to recognize
- [... deleted ...]
- >Therefore, it is extremely likely that one training session will erase
- >the efforts of previous sessions.
-
- > My question is, What engineering tricks shall we use to
- >overcome this apparent difficulty?
- One training session will indeed undo previous ones. The trick is to interleave
- your sets of training data so the net never gets a chance to concentrate on
- solving just part of your problem.
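- To illustrate the interleaving trick (this is my own minimal sketch, not
- code from the original poster): a single perceptron unit trained with the
- classic perceptron rule, where two "sub-problems" are merged round-robin
- so that every epoch touches both, rather than presenting one set after
- the other and letting the later updates overwrite the earlier ones.
-
- ```python
- def interleave(set_a, set_b):
-     """Round-robin merge of two training sets, so each pass over the
-     merged data alternates between the two sub-problems."""
-     merged = []
-     for a, b in zip(set_a, set_b):
-         merged.extend([a, b])
-     return merged
-
- def train(patterns, epochs=50, lr=1.0):
-     """Train one threshold unit with the perceptron learning rule."""
-     w, b = [0.0, 0.0], 0.0
-     for _ in range(epochs):
-         for x, target in patterns:
-             y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
-             err = target - y
-             w[0] += lr * err * x[0]
-             w[1] += lr * err * x[1]
-             b += lr * err
-     return w, b
-
- def predict(w, b, x):
-     return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
-
- # Two sub-problems that together make up logical AND:
- task_a = [((0, 0), 0), ((1, 1), 1)]
- task_b = [((0, 1), 0), ((1, 0), 0)]
-
- # Interleaved training sees both tasks in every epoch, so neither
- # set of updates gets to erase the other.
- w, b = train(interleave(task_a, task_b))
- ```
-
- With real nets you'd shuffle the combined set each epoch rather than
- strictly alternate, but the point is the same: never feed the net a long
- unbroken run of one sub-problem.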
-
- It's definitely in the "feature, not bug" category too, as it means that a net
- can adapt itself to a new problem when required.
-
- >rs69@cunixb.cc.columbia.edu
-
- Steve
- steve@fulcrum.oz.au