Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!gumby!destroyer!ubc-cs!alberta!arms
From: arms@cs.UAlberta.CA (Bill Armstrong)
Subject: Re: Reducing Training time vs Generalisation
Message-ID: <arms.714214353@spedden>
Keywords: back propagation, training, generalisation
Sender: news@cs.UAlberta.CA (News Administrator)
Nntp-Posting-Host: spedden.cs.ualberta.ca
Organization: University of Alberta, Edmonton, Canada
References: <arms.714091659@spedden> <36944@sdcc12.ucsd.edu> <arms.714146123@spedden> <36967@sdcc12.ucsd.edu> <1992Aug18.231650.27663@cs.brown.edu>
Date: Wed, 19 Aug 1992 08:52:33 GMT
Lines: 21

mpp@cns.brown.edu (Michael P. Perrone) writes:

>The example given of a "wild" solution to a backprop problem
>( f(x) = 40 [ 1/(1 + e^(40*(x - 1/4))) + 1/(1 + e^(-40*(x - 3/4))) - 1 ] )
>is certainly a valid solution.  But whether gradient descent from an
>initially "well-behaved" f(x) (e.g. one with suitable bounded derivatives)
>would fall into the "wild" local minima is not clear.

It is an absolute minimum, not a local minimum.
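
For anyone who wants to see what that function actually does, evaluating
the quoted formula at a few points is enough.  Python here is purely for
illustration; the function is transcribed exactly as quoted above:

import math

def f(x):
    # The "wild" solution quoted above.
    return 40.0 * (1.0 / (1.0 + math.exp( 40.0 * (x - 0.25)))
                   + 1.0 / (1.0 + math.exp(-40.0 * (x - 0.75)))
                   - 1.0)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, f(x))

The output stays within a few thousandths of 0 at x = 0 and x = 1, is
about -20 at x = 1/4 and x = 3/4, and bottoms out near -40 around
x = 1/2.  That dip is the "wild" behaviour in question.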

>This example is more on the lines of an existence proof than a
>constructive proof: Wild minima can exist but is gradient descent
>likely to converge to them?

Why don't you try it?  The problem is trivial and easy to set up.  I
have done it, and it does converge.
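
In case it helps, a bare-bones way to set it up is sketched below
(Python just for illustration).  The training set here is only a
placeholder -- substitute whatever sample points were used earlier in
the thread -- and the initial weights are small random values, i.e. a
"well-behaved" starting f(x):

import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder training set (an assumption; the thread's original data
# may differ): the net is asked to output 0 at both ends of [0, 1].
data = [(0.0, 0.0), (1.0, 0.0)]

# 1-2-1 net: out(x) = w1*sigmoid(a1*x + b1) + w2*sigmoid(a2*x + b2) + c.
# The quoted wild solution corresponds to hidden weights -40 and 40,
# hidden biases 10 and -30, output weights 40 and 40, output bias -40.
random.seed(0)
a1, b1, a2, b2 = [random.uniform(-1.0, 1.0) for _ in range(4)]
w1, w2, c = [random.uniform(-1.0, 1.0) for _ in range(3)]

lr = 0.5
for epoch in range(20000):
    for x, y in data:
        h1 = sigmoid(a1 * x + b1)
        h2 = sigmoid(a2 * x + b2)
        out = w1 * h1 + w2 * h2 + c
        d = out - y                      # dE/d_out for squared error
        # Plain online gradient descent: compute all gradients, then step.
        g_w1 = d * h1
        g_w2 = d * h2
        g_c  = d
        g_a1 = d * w1 * h1 * (1.0 - h1) * x
        g_a2 = d * w2 * h2 * (1.0 - h2) * x
        g_b1 = d * w1 * h1 * (1.0 - h1)
        g_b2 = d * w2 * h2 * (1.0 - h2)
        w1 -= lr * g_w1; w2 -= lr * g_w2; c -= lr * g_c
        a1 -= lr * g_a1; a2 -= lr * g_a2
        b1 -= lr * g_b1; b2 -= lr * g_b2

# Look at what the trained net does between the training points.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    h1 = sigmoid(a1 * x + b1)
    h2 = sigmoid(a2 * x + b2)
    print(x, w1 * h1 + w2 * h2 + c)

Whether the weights drift toward the large-weight configuration or
settle somewhere tamer depends on the data, the learning rate and the
starting point, so treat the numbers above as knobs to play with.
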
--
***************************************************
Prof. William W. Armstrong, Computing Science Dept.
University of Alberta; Edmonton, Alberta, Canada T6G 2H1
arms@cs.ualberta.ca   Tel(403)492 2374   FAX 492 1071