Path: sparky!uunet!ogicse!das-news.harvard.edu!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!danz
From: danz+@CENTRO.SOAR.CS.CMU.EDU (Dan Zhu)
Newsgroups: comp.ai.neural-nets
Subject: some basic questions
Message-ID: <C0E58K.KwL.1@cs.cmu.edu>
Date: 5 Jan 93 17:14:36 GMT
Article-I.D.: cs.C0E58K.KwL.1
Sender: news@cs.cmu.edu (Usenet News System)
Organization: School of Computer Science, Carnegie Mellon
Lines: 22
Originator: danz@CENTRO.SOAR.CS.CMU.EDU
Nntp-Posting-Host: centro.soar.cs.cmu.edu

I have some questions about input and output representation.

- Is a symmetric sigmoid function, with range (-0.5, 0.5) or (-1, 1),
  always better than the asymmetric one with range (0, 1)? Any references?
- Should I apply the same kind of scaling to the input representation as
  well? (See the sketch right after this list.)
- What would be a good cutoff point for testing the network from time to
  time to avoid overtraining? (A rough early-stopping sketch follows below.)
- I remember reading something like "a three-layer network (i.e., with one
  hidden layer) is sufficient for the network to generalize...". Could
  anyone point me to the exact reference for this?
- Also, are there any recent references with guidance on selecting the
  number of hidden nodes, the learning rate, and the momentum, and on
  choosing the initial weights?
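
To make the two scaling questions concrete, here is a minimal sketch in
Python with NumPy (the function names are mine, purely for illustration):

    import numpy as np

    # Asymmetric sigmoid: the usual logistic function, range (0, 1).
    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Symmetric sigmoid with range (-1, 1): this is just tanh.
    def symmetric_tanh(x):
        return np.tanh(x)

    # Symmetric sigmoid with range (-0.5, 0.5): a shifted logistic.
    def shifted_logistic(x):
        return logistic(x) - 0.5

    # Input-side scaling: center each input feature and rescale to
    # roughly unit variance before training.
    def scale_inputs(X):
        return (X - X.mean(axis=0)) / X.std(axis=0)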
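And for the overtraining question, this is the kind of cutoff scheme I
have in mind: hold out a validation set, check its error periodically,
and stop once it stops improving. Everything below (the toy sin() data,
the network size, learning rate, and patience values) is made up purely
for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic regression task: learn y = sin(x) from noisy samples,
    # with the last 50 points held out as a validation set.
    X = rng.uniform(-3, 3, size=(200, 1))
    T = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
    X_tr, T_tr, X_val, T_val = X[:150], T[:150], X[150:], T[150:]

    # One hidden layer of tanh units, linear output.
    n_hidden, lr = 10, 0.05
    W1 = rng.standard_normal((1, n_hidden)) * 0.1
    b1 = np.zeros(n_hidden)
    W2 = rng.standard_normal((n_hidden, 1)) * 0.1
    b2 = np.zeros(1)

    def forward(X):
        H = np.tanh(X @ W1 + b1)
        return H, H @ W2 + b2

    best_val, patience, bad = np.inf, 10, 0
    for epoch in range(2000):
        # One full-batch gradient step (backprop through both layers).
        H, Y = forward(X_tr)
        dY = (Y - T_tr) / len(X_tr)       # grad of (mean squared error)/2
        dW2, db2 = H.T @ dY, dY.sum(0)
        dZ = (dY @ W2.T) * (1 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
        dW1, db1 = X_tr.T @ dZ, dZ.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

        # The "cutoff point": check validation error every 10 epochs and
        # stop once it has failed to improve `patience` checks in a row.
        if epoch % 10 == 0:
            val_err = np.mean((forward(X_val)[1] - T_val) ** 2)
            if val_err < best_val:
                best_val, bad = val_err, 0
            else:
                bad += 1
                if bad >= patience:
                    print("stopping at epoch", epoch)
                    break

The interval and patience values are arbitrary; the point is only that
the held-out error, not the training error, decides when to stop.
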
Thanks in advance!