- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!brunix!cs.brown.edu!mpp
- From: mpp@cs.brown.edu (Michael P. Perrone)
- Subject: Re: Wild values (was Reducing Training time ...)
- Message-ID: <1992Aug20.212905.6383@cs.brown.edu>
- Sender: news@cs.brown.edu
- Organization: Center for Neural Science, Brown University
- References: <9208201551.AA02766@neuron.siemens.com>
- Date: Thu, 20 Aug 1992 21:29:05 GMT
- Lines: 16
-
- >A wild idea for people trying to avoid wild values (e.g. for safety
- >critical applications etc.): Once the network has been trained and the
- >weights are fixed, it should be possible to determine the maximum and
- >minimum output values for all inputs. This can be done with normal
- >gradient ascent and descent. Simply calculate partials of the output(s)
- >with respect to the input(s). Convergence should be much faster than
- >training networks in the first place due to the generally smaller number
- >of inputs than weights. Multiple runs from different starting positions
- >or the use of stochastic techniques likely to converge to global
- >maxima/minima can reduce the chance of not seeing a wild value that
- >actually exists. With a small number of inputs (definitely 1, maybe
- >a few more) analytical techniques should be able to provably determine the
- >global maximum and minimum.
-
- Your idea is not wild at all. The same approach has been used by statisticians to
- evaluate the effect of various inputs on statistical models.
-
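- A minimal sketch of the gradient search described in the quoted text (not code from
- either poster): with the trained weights held fixed, run gradient ascent and descent
- on the inputs from several random starting points to estimate the network's maximum
- and minimum output. The tiny tanh network, its "trained" weights, the step size, and
- the input box below are all made-up placeholders for illustration.
-
-   import numpy as np
-
-   rng = np.random.default_rng(0)
-
-   # Fixed (pretend "trained") weights: 2 inputs -> 3 hidden units -> 1 output.
-   W1 = rng.normal(size=(3, 2))
-   b1 = rng.normal(size=3)
-   W2 = rng.normal(size=(1, 3))
-   b2 = rng.normal(size=1)
-
-   def forward(x):
-       """Network output for input vector x (weights held fixed)."""
-       h = np.tanh(W1 @ x + b1)
-       return (W2 @ h + b2)[0]
-
-   def grad_wrt_input(x):
-       """Partials of the output with respect to the inputs."""
-       h = np.tanh(W1 @ x + b1)
-       # d(out)/dx = W2 diag(1 - h^2) W1
-       return ((W2 * (1.0 - h**2)) @ W1).ravel()
-
-   def extremize(sign, restarts=10, steps=500, lr=0.05, box=3.0):
-       """Gradient ascent (sign=+1) or descent (sign=-1) on the inputs,
-       restricted to [-box, box]^2, with random restarts to reduce the
-       chance of missing a wild value that actually exists."""
-       best = -np.inf
-       for _ in range(restarts):
-           x = rng.uniform(-box, box, size=2)
-           for _ in range(steps):
-               x = np.clip(x + lr * sign * grad_wrt_input(x), -box, box)
-           best = max(best, sign * forward(x))
-       return sign * best
-
-   print("estimated max output:", extremize(+1))
-   print("estimated min output:", extremize(-1))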