- Newsgroups: sci.math.stat
- Path: sparky!uunet!stanford.edu!EE.Stanford.EDU!usenet
- From: art@playfair.Stanford.EDU (Art Owen)
- Subject: Neural Nets & Statistics
- Message-ID: <1992Aug25.174906.5193@EE.Stanford.EDU>
- Sender: usenet@EE.Stanford.EDU (Usenet)
- Organization: Stanford University
- Date: Tue, 25 Aug 92 17:49:06 GMT
- Lines: 15
-
- One way for statisticians to think of neural nets
- is as "recursive generalized linear models".
- If you are familiar with glms, then consider a
- glm in which each predictor may in fact be the
- result of a glm on another set of predictors.
-
- I wouldn't claim that the above covers all neural
- models, but it does cover "feedforward neural nets
- with a single hidden layer", which are among the
- ones most often used on statistical problems.
-
- To explain glms to a connectionist, you can
- say they are feedforward nets with "no hidden layer".
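-
- A minimal sketch of that correspondence (hypothetical weights, numpy
- only; the logistic link is one choice among several): each hidden unit
- is a logistic-link GLM on the raw predictors, and the output is a
- logistic-link GLM on the hidden values.

```python
import numpy as np

def logistic(z):
    # Inverse of the logistic link: g^{-1}(z) = 1 / (1 + e^{-z})
    return 1.0 / (1.0 + np.exp(-z))

def glm_predict(X, coef, intercept):
    # A logistic-link GLM prediction: g^{-1}(X @ beta + b).
    # With no hidden layer, this IS the "feedforward net".
    return logistic(X @ coef + intercept)

def feedforward_net(X, W_hidden, b_hidden, w_out, b_out):
    # Each column of W_hidden defines a GLM on the raw predictors;
    # the hidden values H are those GLMs' fitted means...
    H = logistic(X @ W_hidden + b_hidden)
    # ...and the output is one more GLM whose predictors are H.
    return logistic(H @ w_out + b_out)
```

- (Weights here are placeholders; in practice they would be estimated,
- e.g. by maximum likelihood / backpropagation.)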
-
- Art Owen
-