- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!gatech!destroyer!ubc-cs!unixg.ubc.ca!kakwa.ucs.ualberta.ca!alberta!arms
- From: arms@cs.UAlberta.CA (Bill Armstrong)
- Subject: Re: Wild values (was Reducing Training time ...)
- Message-ID: <arms.714513377@spedden>
- Sender: news@cs.UAlberta.CA (News Administrator)
- Nntp-Posting-Host: spedden.cs.ualberta.ca
- Organization: University of Alberta, Edmonton, Canada
- References: <9208201551.AA02766@neuron.siemens.com>
- Date: Sat, 22 Aug 1992 19:56:17 GMT
- Lines: 53
-
- kpfleger@NEURON.SIEMENS.COM (Karl Pfleger) writes:
-
- >1 quick point and 1 wild idea:
-
- >First, if one desires to avoid wild output values for certain regions of
- >input space, one ought to have training pairs from that region of input
- >space in the training set. The point about desiring certain behavior on
- >0 to 1 and not including any training pairs from that region has already
- >been made.
-
- I agree that this would be desirable. In some cases you don't have
- the training points, and in other cases you could never get them
- because there are just too many points in the space. High-dimensional
- real-valued data will always have the latter problem.
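- 
- (A back-of-the-envelope illustration of the size problem, in Python; the
- numbers are my own, not from the original discussion. A lattice with k
- points per axis in d dimensions has k^d points.)
- 
- # k points per axis on a lattice in d dimensions gives k**d points,
- # so exhaustive coverage of a high-dimensional input space is hopeless.
- k, d = 10, 20
- print(k ** d)   # prints 10**20, i.e. a 1 followed by 20 zeros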
-
- >Vaguely similar: since the inputs that will be thrown at the system in actual
- >use will have some probability distribution based on whatever the system is
- >doing, the training set should be generated by sampling the same
- >distribution, or something as close to it as possible, NOT by picking a
- >few values by hand or by using a lattice or points regularly spaced
- >(unless that represents the distribution well).
-
- The same example of unsafe behavior still works (neglecting numerical
- errors) if you take a random sample of a few points from any
- distribution you like, provided you are not lucky enough to draw
- points inside [0,1]. As I said, the fact that the training and test
- points are integers is not significant. The example of "wild"
- behavior is not that easy to defeat.
-
-
- >I have a much more difficult time picturing wild values coming from a
- >network trained on a significant number of random, real inputs than I do
- >coming from a network trained on a handful of regularly spaced
- >integers.
-
- Try it and see. You'll still have the problem.
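- 
- For anyone who wants to try it, here is a rough sketch in Python/NumPy
- (purely illustrative: the architecture, targets, learning rate and the
- sampling intervals that avoid [0,1] are my own choices, not the example
- from earlier in this thread). How far the output strays on [0,1]
- depends on the initialization, which is exactly the point: nothing in
- the training data constrains it there.
- 
- import numpy as np
- 
- rng = np.random.default_rng(0)
- 
- # Training inputs drawn from a distribution that happens to miss [0,1].
- x = np.concatenate([rng.uniform(-5.0, -1.0, 30), rng.uniform(2.0, 6.0, 30)])
- y = np.sin(x)                              # any smooth target will do
- 
- # One hidden layer of tanh units, trained by plain gradient descent on MSE.
- H = 20
- W1 = rng.normal(0, 1.0, (H, 1)); b1 = np.zeros(H)
- W2 = rng.normal(0, 1.0, (1, H)); b2 = np.zeros(1)
- 
- def forward(xs):
-     h = np.tanh(W1 @ xs[None, :] + b1[:, None])      # (H, N)
-     return (W2 @ h + b2[:, None]).ravel(), h
- 
- lr = 0.01
- for step in range(20000):
-     out, h = forward(x)
-     err = out - y
-     gW2 = (err[None, :] @ h.T) / len(x)              # backprop, output layer
-     gb2 = np.array([err.mean()])
-     dh = (W2.T @ err[None, :]) * (1.0 - h**2)        # backprop, hidden layer
-     gW1 = (dh @ x[:, None]) / len(x)
-     gb1 = dh.mean(axis=1)
-     W2 -= lr * gW2; b2 -= lr * gb2
-     W1 -= lr * gW1; b1 -= lr * gb1
- 
- # Look at what the net does where it saw no data at all.
- xt = np.linspace(0.0, 1.0, 101)
- out_gap, _ = forward(xt)
- print("target range on training data:", y.min(), y.max())
- print("net output range on [0,1]:    ", out_gap.min(), out_gap.max())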
-
- >A wild idea for people trying to avoid wild values (e.g. for safety
- >critical applications etc.): Once the network has been trained and the
- >weights are fixed, ...
-
- Your idea needs work, but calculating the partial derivatives and
- deriving a Lipschitz condition with a reasonable constant could work.
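- 
- For what it's worth, here is a minimal sketch of the kind of bound I
- mean, for one hidden layer of standard sigmoid units (Python/NumPy, my
- own illustration, not a worked-out proposal): the sigmoid's slope is at
- most 1/4, so 1/4 times the product of the weight matrices' operator
- norms bounds the net's Lipschitz constant, and that in turn bounds how
- far the output can move between nearby inputs.
- 
- import numpy as np
- 
- def lipschitz_bound(W1, W2):
-     # Upper bound on the Lipschitz constant of x -> W2 @ sigmoid(W1 @ x + b):
-     # product of the largest singular values, times the max sigmoid slope 1/4.
-     return 0.25 * np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)
- 
- rng = np.random.default_rng(0)
- W1 = rng.normal(0.0, 1.0, (20, 5))   # hidden x input weights (made up)
- W2 = rng.normal(0.0, 1.0, (1, 20))   # output x hidden weights (made up)
- L = lipschitz_bound(W1, W2)
- print("|f(x) - f(x')| <= %.3f * ||x - x'||" % L)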
-
- Thanks for your comments.
-
- Bill
-
- --
- ***************************************************
- Prof. William W. Armstrong, Computing Science Dept.
- University of Alberta; Edmonton, Alberta, Canada T6G 2H1
- arms@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071
-