Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!mcsun!dxcern!dxlaa.cern.ch!block
From: block@dxlaa.cern.ch (Frank Block)
Subject: Re: NN analysis
Message-ID: <1992Nov13.151134.15587@dxcern.cern.ch>
Sender: news@dxcern.cern.ch (USENET News System)
Reply-To: block@dxlaa.cern.ch (Frank Block)
Organization: CERN, European Laboratory for Particle Physics, Geneva
References: <1dvfvkINN2tv@manuel.anu.edu.au>
Date: Fri, 13 Nov 1992 15:11:34 GMT
Lines: 67


In article <1dvfvkINN2tv@manuel.anu.edu.au>, shuping@andosl.anu.edu.au (Shuping RAN) writes:
|> Hello,
|>
|> Currently I am trying to understand how a trained NN performs
|> a given task: what its internal functionality is, and whether its
|> internal parameters relate to the real parameters of the given
|> problem in some way.
|>
|> Could someone give me some ideas on how to analyse a trained NN,
|> or some references?
|> Thank you in advance,
|>
|> -Shuping RAN
|>
--
First of all: you're addressing an extremely non-trivial problem with your
question. Here are the approaches I know of for getting at what a net is
in fact doing, or what it has learned.

1.) Network inversion
    An algorithm generates input patterns which drive a previously
    trained network to a desired, predefined output. This seems to be
    a good way of finding 'essential features'.

    J.Kindermann and A.Linden, "Inversion of Multilayer Networks",
    Complex Systems, around 1990/91. Just have a look; there are not
    too many issues to check.


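The inversion idea can be sketched in a few lines. This is a minimal toy in plain NumPy (random "trained" weights and a squared-error objective stand in for a real trained net), not Kindermann and Linden's exact algorithm: freeze the weights and run gradient descent on the *input* until the output matches the target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen weights of a small 2-4-1 net (random here, standing in for a
# previously trained network)
W1 = rng.normal(size=(2, 4))
b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 1))
b2 = rng.normal(size=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)           # hidden activations
    return sigmoid(h @ W2 + b2), h     # network output, hidden layer

target = np.array([0.9])               # desired, predefined output
x = rng.normal(size=2)                 # start from a random input pattern
y0, _ = forward(x)                     # output before inversion

for step in range(2000):
    y, h = forward(x)
    # Backpropagate the squared error all the way into the input;
    # the weights themselves are never updated.
    d_out = (y - target) * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    x -= 1.0 * (d_hid @ W1.T)          # gradient step on the input

y_final, _ = forward(x)
print("inverted input:", x)
print("output moved from", y0, "towards", y_final)
```

The input you end up with is one pattern the net considers a prototypical "0.9" — looking at several such inversions from different starting points is what reveals the essential features.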
2.) Rule induction using RuleNet
    Check the hidden units' activities for correlations using statistical
    methods such as Principal Component and Canonical Discriminant
    Analysis.

    C.McMillan, M.C.Mozer and P.Smolensky, "Rule Induction through
    Integrated Symbolic and Subsymbolic Processing", to appear in
    'Advances in Neural Information Processing Systems IV'. This one I
    also got via ftp, I don't remember from where. The authors also read
    this newsgroup...ask them.


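The statistical half of this is easy to try. A minimal sketch, assuming you can record the hidden layer's response to each pattern (the activation matrix below is synthetic); this is plain PCA via SVD, not RuleNet itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for recorded hidden activations: 100 patterns on
# 8 hidden units whose variance actually lives in 2 underlying directions.
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 8))
H = latent @ mixing + 0.05 * rng.normal(size=(100, 8))

# PCA via SVD of the centred activation matrix
Hc = H - H.mean(axis=0)
U, S, Vt = np.linalg.svd(Hc, full_matrices=False)
explained = S**2 / np.sum(S**2)

print("variance explained per component:", np.round(explained, 3))
# If a couple of components dominate, the hidden layer is using an
# effectively low-dimensional code; the rows Vt[:2] are those directions.
```

When a few components explain nearly all the variance, projecting the activations onto them gives you a small set of internal "concepts" to correlate with the problem's real parameters.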
3.) Statistical analysis of hidden nodes
    By setting up a special architecture and imposing some restrictions
    on the learning process, the resulting learned weights can be
    directly mapped to rules.

    S.Dennis and S.Phillips, "Analysis Tools for Neural Networks".
    I got it by ftp, I don't know from where, but the authors should be
    reading this newsgroup and can tell you.


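To illustrate the weight-to-rule mapping, here is a hypothetical toy (the feature names, weight values, and threshold are all invented; Dennis and Phillips' actual tools will differ): if the restricted training pushes weights towards {-1, 0, +1}, each hidden unit can be read off as a conjunction over named input features.

```python
import numpy as np

# Named input features of the (hypothetical) problem
features = ["red", "round", "heavy"]

# Near-discrete learned weights for two hidden units (columns), made up
# for the demo; each column will become one IF-THEN rule.
W = np.array([[ 0.9, -0.1],
              [ 1.1,  0.0],
              [-1.0,  0.95]])

def unit_to_rule(weights, threshold=0.5):
    """Map one unit's weight vector to a human-readable conjunction:
    strong positive weight -> feature required, strong negative ->
    feature must be absent, near-zero -> feature ignored."""
    terms = []
    for name, w in zip(features, weights):
        if w > threshold:
            terms.append(name)
        elif w < -threshold:
            terms.append("not " + name)
    return "IF " + " AND ".join(terms) + " THEN unit fires"

rules = [unit_to_rule(W[:, j]) for j in range(W.shape[1])]
for r in rules:
    print(r)
```

The restrictions during learning are what make this reading legitimate; thresholding the weights of an unconstrained net this way can easily produce rules the net does not actually follow.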
4.) Self-organizing networks for analysis of hidden-layer activations
    This is a nice idea somebody mentioned to me which, as far as I
    know, hasn't been tried yet.

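A sketch of how that could look, under assumed details (the activations are synthetic and the map is a tiny 1-D Kohonen net): train a self-organizing map on recorded hidden activations, so similar internal states land on nearby map nodes and can be inspected cluster by cluster.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fake hidden activations: two clusters of internal states in 4 dimensions
A = np.vstack([rng.normal(0.2, 0.05, size=(50, 4)),
               rng.normal(0.8, 0.05, size=(50, 4))])

n_nodes = 6                                   # 1-D map with 6 nodes
codebook = rng.uniform(0, 1, size=(n_nodes, 4))

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)               # decaying learning rate
    radius = max(1.0, 3.0 * (1 - epoch / 30)) # shrinking neighbourhood
    for x in rng.permutation(A):
        # Best-matching node for this activation vector
        winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
        for j in range(n_nodes):
            # Pull the winner and its map neighbours towards x
            influence = np.exp(-((j - winner) ** 2) / (2 * radius ** 2))
            codebook[j] += lr * influence * (x - codebook[j])

# Map every activation vector to its best-matching node
assignments = np.array([np.argmin(np.linalg.norm(codebook - x, axis=1))
                        for x in A])
print("node assignments, first cluster: ", assignments[:50])
print("node assignments, second cluster:", assignments[50:])
```

If the map is doing its job, the two families of internal states end up on different parts of the map, and each node's codebook vector is a prototype internal state you can then probe.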
Hope this helps a bit. Anyway, let me know when you find the solution to
this problem :-)

Frank

===============================================================================
Frank Block
Div. PPE, CERN                                 e-mail: BLOCKF@vxcern.cern.ch
CH-1211 Geneve 23, Switzerland                         BLOCKF@cernvm.cern.ch
===============================================================================