- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!gmd.de!jargon!al
- From: al@jargon.gmd.de (Alexander Linden)
- Subject: Re: Network Inversion
- Message-ID: <al.714496545@jargon>
- Sender: news@gmd.de (USENET News)
- Nntp-Posting-Host: jargon
- Organization: GMD, Sankt Augustin, Germany
- References: <BtD8FA.KL.1@cs.cmu.edu>
- Date: Sat, 22 Aug 1992 15:15:45 GMT
- Lines: 37
-
- sef@sef-pmax.slisp.cs.cmu.edu writes:
-
- >Most backprop-type networks are information-losing systems: many distinct
- >inputs give rise to the same output. You can't invert this kind of net
- >into another feed-forward net (of the usual kind) because you just end up
- >with a mushy average of the possible inputs for a given output.
-
- >-- Scott
-
- We should not talk about inversion in this case; rather, it is a kind
- of search for (possibly spurious) minima in input space.
-
- It gives no mushy averages; instead, you can trace out class boundaries.
-
- Assume a net that is already trained to classify digits:
-
- You can start with an input I(0), let's say the bitmap of a three.
- The output of the net tells you it is a "3". Now you iteratively
- change I(0) along the gradient dE/dI, with E = (T-O)**2, where the
- target T says the output should be a "9", for example. You then see
- a sequence I(1), I(2), ... which gradually looks more like a "9"
- than a "3". If your classifier is not sufficiently good, some small
- distortions of the three are enough to give a "9" at the output.
- See Linden and Kindermann (1989), Int. Joint Conf. on Neural Networks,
- Washington D.C., for illustrative examples.
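-
- The procedure above can be sketched in a few lines of numpy. This is
- only a toy illustration: the fixed random weights here stand in for a
- net already trained on digit bitmaps, the 16-pixel "input" stands in
- for the bitmap of a three, and the network shape, learning rate, and
- step count are all made-up values, not anything from the post.
-
- ```python
- import numpy as np
-
- rng = np.random.default_rng(0)
-
- # Stand-in for a *trained* classifier: one hidden sigmoid layer.
- # (Random weights just make the sketch self-contained and runnable.)
- W1, b1 = rng.normal(size=(16, 4)), np.zeros(4)
- W2, b2 = rng.normal(size=(4, 10)), np.zeros(10)
-
- def sigmoid(x):
-     return 1.0 / (1.0 + np.exp(-x))
-
- def forward(I):
-     h = sigmoid(I @ W1 + b1)      # hidden activations
-     O = sigmoid(h @ W2 + b2)      # class outputs
-     return h, O
-
- def input_gradient(I, T):
-     """dE/dI for E = sum((T - O)**2), by hand-coded backprop."""
-     h, O = forward(I)
-     dO = 2.0 * (O - T) * O * (1 - O)   # through output sigmoid
-     dh = (dO @ W2.T) * h * (1 - h)     # through hidden sigmoid
-     return dh @ W1.T, np.sum((T - O) ** 2)
-
- # Start from some input I(0) and push the output toward class "9".
- I = rng.uniform(size=16)          # stand-in for the bitmap of a "3"
- T = np.zeros(10); T[9] = 1.0      # target: the output should say "9"
-
- eta, errors = 0.5, []
- for step in range(200):
-     g, E = input_gradient(I, T)
-     errors.append(E)
-     I = I - eta * g               # I(t+1) = I(t) - eta * dE/dI
-
- _, O = forward(I)
- print("final error:", errors[-1], "winning class:", O.argmax())
- ```
-
- The sequence I(1), I(2), ... is exactly the successive values of I in
- the loop; the error E shrinks as the input drifts toward something
- the net calls a "9".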
-
- Alexander
-
- Alexander Linden | TEL. (49 or 0) 2241/14-2435, FAX. -2618 or -2889
- GMD - AI Research Division | TELEX 889469 gmd d
- P. O. BOX 1316 | email: A.Linden@gmd.de
- D-5205 Sankt Augustin 1, FRG
- --
- Alexander Linden | TEL. (49 or 0) 2241/14-2435, FAX. -2618 or -2889
- GMD - AI Research Division | TELEX 889469 gmd d
- P. O. BOX 1316 | email: A.Linden@gmd.de
- D-5205 Sankt Augustin 1, FRG
-