- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!decwrl!access.usask.ca!kakwa.ucs.ualberta.ca!alberta!arms
- From: arms@cs.UAlberta.CA (Bill Armstrong)
- Subject: Re: Network Inversion
- Message-ID: <arms.714522787@spedden>
- Sender: news@cs.UAlberta.CA (News Administrator)
- Nntp-Posting-Host: spedden.cs.ualberta.ca
- Organization: University of Alberta, Edmonton, Canada
- References: <BtCqHo.AtH.1@cs.cmu.edu>
- Date: Sat, 22 Aug 1992 22:33:07 GMT
- Lines: 26
-
- tjochem+@CS.CMU.EDU (Todd Jochem) writes:
-
- >I'm looking for references to network inversion papers. The basic idea
- >I'm interested in is presenting the network's output with a signal and,
- >by back-propagating this signal through the layer(s), recreating an input
- >which could have created the applied output signal.
-
- This is intractable for a general multi-layer perceptron.
-
- Proof: It suffices to show it for the special case of ALNs (adaptive
- logic networks), which are trees of nodes realizing AND, OR, LEFT and
- RIGHT functions, with leaves connected to input bits and their
- complements. The same input bit or complement may be sent to many
- leaves. It is enough to consider a two-layer tree of ORs feeding into
- an AND. Finding an input that back-propagates a 1 through such a tree
- is exactly CNF-satisfiability, which is NP-complete.
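- The reduction above can be sketched in a few lines of Python (my
- illustration, not from the original post). Each OR node is a clause
- over literals (+i for input bit i, -i for its complement), and the
- root AND requires every clause to be satisfied, so inverting the tree
- for output 1 is exactly finding a satisfying assignment -- here by
- brute force, exponential in the number of inputs:

```python
from itertools import product

def invert_aln(clauses, n_inputs):
    """Search for an input vector driving the AND-of-ORs tree to 1,
    i.e. a satisfying assignment of the corresponding CNF formula.
    Brute force over all 2^n_inputs vectors, matching the hardness
    argument: no known shortcut exists in general."""
    for bits in product([0, 1], repeat=n_inputs):
        if all(any(bits[l - 1] == 1 if l > 0 else bits[-l - 1] == 0
                   for l in clause)
               for clause in clauses):
            return bits
    return None

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(invert_aln(clauses, 3))  # -> (0, 0, 1)
```

- An unsatisfiable formula such as (x1) AND (NOT x1) corresponds to a
- tree whose 1-output simply has no preimage, and the search returns None.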
-
- Comment: the difficulty arises because back-propagated signals often
- converge at the same node, either at the inputs (ALNs) or in hidden
- layers (MLPs), and when they do, they are likely to carry
- contradictory values.
-
- --
- ***************************************************
- Prof. William W. Armstrong, Computing Science Dept.
- University of Alberta; Edmonton, Alberta, Canada T6G 2H1
- arms@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071
-