Path: sparky!uunet!pipex!bnr.co.uk!uknet!strath-cs!robert@cs.strath.ac.uk
From: robert@cs.strath.ac.uk (Robert B Lambert)
Newsgroups: comp.ai.neural-nets
Subject: Perceptron Generalization
Message-ID: <11363@baird.cs.strath.ac.uk>
Date: 6 Jan 93 15:16:58 GMT
Sender: robert@cs.strath.ac.uk
Organization: Comp. Sci. Dept., Strathclyde Univ., Glasgow, Scotland.
Lines: 32

Hi.

A question to all you NN gurus out there.

To achieve a ((1 - E) x 100)% recognition rate on a test set, where E is the
acceptable fraction of errors on the test set, a two-layer perceptron
(1 hidden layer) with W network weights requires m training examples
(where training is done with the back-propagation algorithm) such that
m > W/E. [Baum and Haussler, 1989]
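
Here is a rough sketch of how I read that rule of thumb, in Python (just
illustrative and my own wording, not from the paper; the function name and
the example values W = 84, E = 0.1 are mine):

    import math

    def min_training_examples(num_weights, error_fraction):
        """Smallest integer m with m > W / E (the rule of thumb above)."""
        return math.floor(num_weights / error_fraction) + 1

    # Example: with W = 84 weights and E = 0.1 (10% errors tolerated on the
    # test set), the rule asks for on the order of 840 training examples.
    print(min_training_examples(84, 0.1))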

To what extent does this result carry over to the generalization performance
of perceptrons with more than one hidden layer?

Given 10 inputs, does a 3-layer net with 6-6-4 nodes (hidden-hidden-output)
require as many training cases as a 2-layer net with 6-4 nodes (hidden-output)?
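
To make that concrete, here is how I am counting parameters for the two nets
above (again Python, again just a sketch; counting weights plus biases in
fully connected layers is my own assumption about what W should include, and
E = 0.1 is an arbitrary example value):

    def count_parameters(layer_sizes, include_biases=True):
        """Number of parameters in a fully connected feed-forward net.

        layer_sizes lists the widths from input to output, e.g. [10, 6, 4].
        """
        weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
        if include_biases:
            weights += sum(layer_sizes[1:])
        return weights

    E = 0.1  # acceptable fraction of test-set errors

    for sizes in ([10, 6, 4], [10, 6, 6, 4]):
        W = count_parameters(sizes)
        print(sizes, "W =", W, " rule of thumb: m >", W / E)

Taken at face value the rule would ask for proportionally more examples for
the deeper net (136 vs. 94 parameters here), but whether the bound applies at
all with more than one hidden layer is exactly what I am asking.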

I would appreciate any information on the generalization performance of
perceptron nets with >1 hidden layer. I will post a summary of the replies
(if there are any).

Thanks, Robert.

[Baum and Haussler, 1989] - E. B. Baum and D. Haussler, "What size net gives
                            valid generalization?", Neural Computation,
                            vol. 1, pp. 151-160, 1989.

+-----------------------------+-----------------------------------------------+
| Robert B Lambert            | E-Mail : robert@cs.strath.ac.uk               |
| Dept. Computer Science,     +-----------------------------------------------+
| University of Strathclyde,  | The brain's function is to cool the blood     |
| Glasgow, Scotland.          |                          - Aristotle 400 B.C. |
+-----------------------------+-----------------------------------------------+