Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!cs.utexas.edu!zaphod.mps.ohio-state.edu!caen!destroyer!ubc-cs!unixg.ubc.ca!kakwa.ucs.ualberta.ca!uofapsy.uucp!mike
From: mike@psych.ualberta.ca (Mike Dawson)
Subject: Re: Reducing Training time vs Generalisation
Message-ID: <mike.714067317@psych.ualberta.ca>
Keywords: back propagation, training, generalisation
Sender: news@psych.ualberta.ca
Organization: Psychology, University of Alberta, Edmonton
References: <1992Aug16.063825.15300@julian.uwo.ca> <1992Aug16.213939.15944@ccu1.aukuni.ac.nz> <arms.714014919@spedden>
Date: Mon, 17 Aug 1992 16:01:57 GMT
Lines: 23

Bill Armstrong's point about defining generalization is an extremely
good one. In my view, most ANN researchers view generalization from
an "output performance" perspective, as Bill notes: you train a
net on a subset of the possible stimuli, and then measure the
network's responses to the remaining ones.

In a 1989 review of pattern classification by ANNs, Lippmann points
out an equally valid notion of generalization, in terms of "learning
performance". Specifically, you train a network on some subset of
stimuli, and then train the network on the remaining patterns. The
issue is whether learning the new set of patterns is helped
in any way by the previous training. In the memory literature in
cognitive psychology, this effect is called "savings".

Given that it may be extremely difficult to interpolate judgements
about a function because of its complexity, or its potential
discontinuity, generalization in neural networks may be more
appropriately viewed from this savings perspective.
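
To make the two notions concrete, here is a minimal sketch in Python
using a single perceptron unit learning boolean OR. The perceptron, the
learning rate, and the particular pattern split are my own illustrative
choices, not anything taken from Lippmann's review or Bill's post. The
unit is trained on a subset of the patterns; its responses to the
held-out patterns give the "output performance" measure, and the number
of epochs needed to learn the remaining patterns, with versus without
the previous training, gives the "savings" measure.

```python
def step(net):
    """Threshold activation: fire if net input is positive."""
    return 1 if net > 0 else 0

def train(weights, patterns, lr=1.0, max_epochs=100):
    """Perceptron learning rule; returns the number of epochs
    until a complete error-free pass through the patterns."""
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for x, target in patterns:
            err = target - step(sum(w * xi for w, xi in zip(weights, x)))
            if err != 0:
                errors += 1
                for i, xi in enumerate(x):
                    weights[i] += lr * err * xi
        if errors == 0:
            return epoch
    raise RuntimeError("did not converge")

def accuracy(weights, patterns):
    """Proportion of patterns classified correctly (no learning)."""
    hits = sum(step(sum(w * xi for w, xi in zip(weights, x))) == t
               for x, t in patterns)
    return hits / len(patterns)

# Boolean OR; each input vector has a constant bias input (1) appended.
subset_a = [((0, 0, 1), 0), ((1, 1, 1), 1)]   # training patterns
subset_b = [((0, 1, 1), 1), ((1, 0, 1), 1)]   # remaining patterns

pretrained = [0.0, 0.0, 0.0]
train(pretrained, subset_a)

# "Output performance" generalization: responses to held-out patterns.
output_generalization = accuracy(pretrained, subset_b)

# "Learning performance" (savings): epochs needed to learn the
# remaining patterns with versus without the previous training.
epochs_with_pretraining = train(pretrained, subset_b)
epochs_from_scratch = train([0.0, 0.0, 0.0], subset_b)
print(output_generalization, epochs_with_pretraining, epochs_from_scratch)
```

With this particular split the pretrained unit happens to classify the
held-out patterns correctly straight away and shows positive savings;
other splits, or a function that is not linearly separable, need not
behave so cleanly.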
--
Michael R.W. Dawson              email: mike@psych.ualberta.ca
Biological Computation Project, Department of Psychology
University of Alberta, Edmonton, AB CANADA T6G 2E9
Tel: +1 403 492 5175             Fax: +1 403 492 1768