Path: sparky!uunet!ogicse!uwm.edu!zaphod.mps.ohio-state.edu!news.acns.nwu.edu!casbah.acns.nwu.edu!ecm
From: ecm@casbah.acns.nwu.edu (Edward Malthouse)
Newsgroups: comp.ai.neural-nets
Subject: Cybenko's Paper
Message-ID: <1992Sep8.151142.21643@news.acns.nwu.edu>
Date: 8 Sep 92 15:11:42 GMT
Article-I.D.: news.1992Sep8.151142.21643
Sender: usenet@news.acns.nwu.edu (Usenet on news.acns)
Organization: Northwestern University, Evanston Illinois.
Lines: 33

I have read Cybenko's paper "Approximation by Superpositions of a Sigmoidal
Function," which appeared in Math. Control Signals Systems, 2: 303-314 (1989),
and have seen it referenced in several other papers. Cybenko shows that a
3-layer, feedforward neural network with sigmoidal activation functions in
the hidden layer and linear activation functions in the output layer can
uniformly approximate any continuous function defined on the unit hypercube
in R^n and mapping into R^1. Cybenko defines a sigmoidal function as one
whose limit is one as x approaches infinity and zero as x approaches
negative infinity. I have read that certain feedforward networks which do
not fit this form can often produce better training results. The deviations
from Cybenko's result are as follows (a numerical sketch follows the list):

- Include direct connections between the input and output layers.

- Use nonlinear activation functions on the output nodes.

- Use the hyperbolic tangent function, which maps into [-1,1], as the
  activation function.

- Include more than one output node, so that mappings from R^p into R^q,
  q > 1, can be approximated.
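
To make the question concrete: Cybenko's approximants are finite sums of
the form G(x) = SUM_{j=1..N} alpha_j * sigma(w_j . x + theta_j) with a
linear output layer. Below is a minimal numerical sketch, assuming Python
with NumPy; the random hidden weights and the least-squares fit of the
output weights are conveniences for illustration, not Cybenko's
construction, and the target sin(2*pi*x) is an arbitrary choice. The
second pass uses tanh to show the [-1,1] variant from the list.

  import numpy as np

  def sigma(z):
      # Cybenko's sigmoidal condition: sigma(z) -> 1 as z -> +infinity
      # and sigma(z) -> 0 as z -> -infinity; the logistic function works.
      return 1.0 / (1.0 + np.exp(-z))

  rng = np.random.default_rng(0)
  N = 50                                    # number of hidden units
  x = np.linspace(0.0, 1.0, 200)            # grid on the unit interval (n = 1)
  f = np.sin(2.0 * np.pi * x)               # an arbitrary continuous target

  w = rng.normal(0.0, 10.0, size=N)         # random hidden weights w_j
  theta = rng.uniform(-10.0, 10.0, size=N)  # random hidden biases theta_j

  for act in (sigma, np.tanh):              # logistic, then the tanh variant
      H = act(np.outer(x, w) + theta)       # hidden activations, (200, N)
      alpha, *_ = np.linalg.lstsq(H, f, rcond=None)  # fit linear output layer
      err = np.max(np.abs(H @ alpha - f))   # sup-norm error on the grid
      print(act.__name__, "max |G - f| =", err)

Increasing N makes the sup-norm error small in practice; the theorem itself
only asserts that parameters achieving any given accuracy exist, not that a
particular fitting procedure will find them.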

Has there been any work since Cybenko generalizing the result and showing
that networks with these modifications can also uniformly approximate an
arbitrary continuous function?

Please e-mail responses and I will summarize if there is interest.

Thank you in advance for your help.

Ed Malthouse
Department of Statistics
Northwestern University
