- Path: sparky!uunet!charon.amdahl.com!pacbell.com!mips!swrinde!sdd.hp.com!caen!destroyer!ubc-cs!alberta!ttlg!f36.n342.z1.UUCP!Monroe.Thomas
- From: Monroe.Thomas@f36.n342.z1.UUCP (Monroe Thomas)
- Newsgroups: comp.ai.neural-nets
- Subject: Re: Wild values
- Message-ID: <13.2A92A69A@ttlg.UUCP>
- Date: 19 Aug 92 19:51:02 GMT
- Sender: ufgate@ttlg.UUCP (newsout1.26)
- Organization: FidoNet node 1:342/36 - Through the Looking, Edmonton Alta
- Lines: 31
-
- I think that the whole point of Bill's argument is that you can't
- guarantee that the output of a BP net on an input between two
- neighbouring training points a and b will lie between the true
- function values f(a) and f(b), unless you carefully craft your network
- using a priori knowledge of the function.
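A tiny hand-constructed sketch illustrates the point. The weights below are picked by hand (not produced by training) so that a 1-2-1 sigmoid net agrees almost exactly at the two "training points" a=0 and b=1, yet swings far outside [f(a), f(b)] in between:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def net(x):
    # Toy 1-2-1 sigmoid net with hand-picked (illustrative) weights:
    # two steep sigmoids centered inside the interval [0, 1].
    h1 = sigmoid(20.0 * (x - 0.25))
    h2 = sigmoid(20.0 * (x - 0.75))
    return 5.0 * h1 - 5.0 * h2

# Nearly identical at the two endpoints...
print(net(0.0), net(1.0))   # both approximately 0.033
# ...but a large bump ("wild value") halfway between them.
print(net(0.5))             # approximately 4.93
```

Nothing in the endpoint values hints at the bump; only the weights (or an exhaustive scan) reveal it.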
-
- So, eliminating "wild values" that can exist between training points
- in a BP net becomes:
-
- 1) a design process... or,
-
- 2) you can exhaustively test your net on all possible points on
- the input domain to ensure no "wild values" exist on output...
- or,
-
- 3) you can examine the weights of your BP net after training to
- ensure that they will not cause "wild values".
-
- Option 2 is out of the question for large input spaces... option 3
- gets real hard, real fast as the number of elements and layers
- increases (i.e., the verification problem is NP-complete).
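For a one-dimensional input, option 2 can at least be sketched: sample the net on a fine grid between two training points and flag any output that leaves the interval spanned by the endpoint values. The `net` below is a hand-picked stand-in for a trained BP net, and the grid size and tolerance are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def net(x):
    # Stand-in for a trained BP net (hand-picked weights for illustration).
    h1 = sigmoid(20.0 * (x - 0.25))
    h2 = sigmoid(20.0 * (x - 0.75))
    return 5.0 * h1 - 5.0 * h2

def wild_points(a, b, steps=1000, tol=0.1):
    """Sample the net on a grid over [a, b] and collect points whose
    output falls outside the interval spanned by the endpoint values
    (widened by a small tolerance)."""
    lo = min(net(a), net(b)) - tol
    hi = max(net(a), net(b)) + tol
    bad = []
    for i in range(steps + 1):
        x = a + (b - a) * i / steps
        y = net(x)
        if not (lo <= y <= hi):
            bad.append((x, y))
    return bad

print(len(wild_points(0.0, 1.0)))  # many grid points are flagged as "wild"
```

This also shows why option 2 blows up: the grid size grows exponentially with the input dimension, so an exhaustive scan is only feasible for toy domains.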
-
- Does anyone have a "safe" design process, one that guarantees, a
- priori, that the trained BP net will never produce "wild values"?
- One partially satisfactory method would be to bound the
- magnitude of the weights... but this can lead to poor function
- fitting, and may prevent convergence.
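The weight-bounding idea can be read as a projection step after each update: clip every weight back into [-c, c]. The sketch below uses a numerical gradient in place of backprop, and the architecture, data, and bound are all illustrative assumptions, not a recommended recipe:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def net(w, x):
    # 1-2-1 sigmoid net; w = [a1, b1, a2, b2, v1, v2, c]
    a1, b1, a2, b2, v1, v2, c = w
    return v1 * sigmoid(a1 * x + b1) + v2 * sigmoid(a2 * x + b2) + c

DATA = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]  # toy training set (assumed)
BOUND = 2.0                                   # assumed cap on |weight|

def loss(w):
    return sum((net(w, x) - t) ** 2 for x, t in DATA)

def step(w, lr=0.1, eps=1e-5):
    # Numerical gradient descent (stands in for backprop), followed by
    # projecting every weight back into [-BOUND, BOUND].
    g = []
    for i in range(len(w)):
        wp = list(w); wp[i] += eps
        wm = list(w); wm[i] -= eps
        g.append((loss(wp) - loss(wm)) / (2 * eps))
    w = [wi - lr * gi for wi, gi in zip(w, g)]
    return [max(-BOUND, min(BOUND, wi)) for wi in w]

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(7)]
for _ in range(200):
    w = step(w)

print(max(abs(wi) for wi in w) <= BOUND)  # True: magnitudes stay bounded
```

The clipping keeps the net's slopes (and hence its excursions between training points) bounded, but, as noted above, a bound tight enough to rule out wild values may also be too tight to fit the target function.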
-
- -Monroe
-
- * OLX 2.2 * "I'm going to kill Dracula," said Tom painstakingly.
-