- Path: sparky!uunet!usc!sdd.hp.com!swrinde!mips!mips!munnari.oz.au!network.ucsd.edu!sdcc12!cs!demers
- From: demers@cs.ucsd.edu (David DeMers)
- Newsgroups: comp.ai.neural-nets
- Subject: Re: Network Inversion
- Message-ID: <37125@sdcc12.ucsd.edu>
- Date: 21 Aug 92 23:48:34 GMT
- References: <BtCqHo.AtH.1@cs.cmu.edu>
- Sender: news@sdcc12.ucsd.edu
- Organization: CSE Dept., U.C. San Diego
- Lines: 87
- Nntp-Posting-Host: beowulf.ucsd.edu
-
- In article <BtCqHo.AtH.1@cs.cmu.edu> tjochem+@CS.CMU.EDU (Todd Jochem) writes:
- >I'm looking for references to network inversion papers. The basic idea
- >I'm interested in is presenting the network's output with a signal and
- >by back-propagating this signal through the layer(s), recreating an input
- >which could have created the applied output signal. I think that this could
- >be pretty tough to do with some network configs. because of underdetermined
- >linear equations, but would like pointers to any work that addresses this
- >topic anyway. I'll summarize to the net any responses I get. Thanks,
-
- There has been a fair amount of work done on this, or at least
- on the differential version: backpropagating errors to the inputs.
- Differentially, if you consider the network as computing y = f(x),
- then by backpropagating an error dy through the network you get
- dx = J^t(x) dy (where J^t(x) is the transpose of the Jacobian at x).
- There is a fairly standard control technique which uses this form.
- See Jordan, Michael I. & David Rumelhart, "Forward Models: Supervised
- Learning with a Distal Teacher" (probably in Cognitive Science,
- early 1992; I have a preprint ...), where an NN is used for system-ID
- purposes (to construct a forward model), and the model is then used
- to control a real physical system (e.g. inverse kinematics &
- dynamics).
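-
- (A quick toy sketch of the dx = J^t(x) dy idea, mine, not from any
- of these papers, in Python/numpy: backprop an output-space error dy
- to an input-space dx through a little two-layer net, then "invert"
- the net by gradient descent on |f(x) - y_target|^2. The weights and
- the target are made up purely for illustration.)
-
-     import numpy as np
-
-     # Toy two-layer net y = f(x) = W2 tanh(W1 x); weights are arbitrary.
-     rng = np.random.default_rng(0)
-     W1 = rng.normal(size=(5, 3))          # hidden x input
-     W2 = rng.normal(size=(2, 5))          # output x hidden
-
-     def forward(x):
-         h = np.tanh(W1 @ x)
-         return W2 @ h, h
-
-     def backprop_to_input(x, dy):
-         # dx = J^t(x) dy, computed by the usual backward pass
-         _, h = forward(x)
-         dh = W2.T @ dy                    # back through the output layer
-         dh = dh * (1.0 - h**2)            # back through tanh
-         return W1.T @ dh                  # back through the input layer
-
-     # Differential "inversion": gradient descent on |f(x) - y_target|^2
-     y_target = np.array([0.3, -0.1])
-     x = np.zeros(3)
-     for _ in range(200):
-         y, _ = forward(x)
-         x -= 0.1 * backprop_to_input(x, y - y_target)
-
-     print(np.round(forward(x)[0], 3), "vs target", y_target)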
-
- Hal White and Ron Gallant have a paper in Neural Networks 5, no. 1,
- "On Learning the Derivatives of an Unknown Mapping with Multilayer
- Feedforward Networks", which shows that the assumption Jordan &
- Rumelhart make in the above paper (that the derivatives of the
- model, i.e. the NN, approximate the derivatives of the true function)
- is reasonable, and that methods exist which guarantee (asymptotically)
- that one can approximate the derivatives arbitrarily well from
- (x,y) data.
-
- Oops, actually this is a follow-up to
- Hornik, Stinchcombe & White, "Universal Approximation of an
- Unknown Mapping and its Derivatives Using Multilayer Feedforward
- Networks", Neural Networks 3, 551.
-
- Mike Dyer and Risto Miikkulainen have also looked at the
- input deltas, for some purpose I've forgotten; I've forgotten
- the cite too, but it should be easy to find...
-
- Mike Rossen has a paper in NIPS 3 on closed-form inversion
- of a feedforward network, but his method is restricted to
- networks in which layer n has no more units than layer n - 1,
- and yields a pseudo-inverse solution (since the inverse
- problem is underconstrained, there will typically be an
- infinite number of solutions forming some submanifold
- in the input space). I think his paper could be generalized,
- however.
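-
- (A toy illustration of that pseudo-inverse point, mine, not Rossen's
- construction: for a single underdetermined linear layer y = Wx, the
- Moore-Penrose pseudo-inverse picks the minimum-norm preimage, and
- adding any null-space direction gives another equally valid one,
- i.e. the solutions form an affine subspace of the input space.
- Python/numpy, with W and y made up:)
-
-     import numpy as np
-
-     # Underdetermined linear layer: 2 outputs, 4 inputs, so the
-     # preimages of y form a 2-D affine subspace of input space.
-     rng = np.random.default_rng(1)
-     W = rng.normal(size=(2, 4))
-     y = np.array([1.0, -0.5])
-
-     x_min = np.linalg.pinv(W) @ y        # minimum-norm preimage
-     print(np.allclose(W @ x_min, y))     # True
-
-     # Any null-space direction added to x_min is another preimage.
-     _, _, Vt = np.linalg.svd(W)
-     null_dir = Vt[-1]                    # W @ null_dir is ~ 0
-     print(np.allclose(W @ (x_min + 3.0 * null_dir), y))   # still True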
-
- I'm not sure how to do direct inversion by BP; how do you
- backpropagate a scalar signal?
-
- In any event, if the mapping is from R^n to R^n there
- will normally be a finite set of solutions (assuming
- the mapping is between compact manifolds), and if the
- mapping is from R^n to R^m where n > m, then there will
- normally be a finite set of (n-m)-dimensional manifolds
- as solutions to the inverse problem.
- See, e.g., Guillemin & Pollack, "Differential Topology".
-
- So the global difficulty is to pick a solution from the finite set,
- and the local problem is to pick a point from the manifold.
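-
- (A concrete toy case of both situations, using planar-arm forward
- kinematics rather than anything from the thread: a 2-link arm maps
- R^2 (joint angles) to R^2 (hand position), and a reachable target
- generically has exactly two preimages, elbow-up and elbow-down;
- add a third link and the map is R^3 -> R^2, so the preimage becomes
- a 1-dimensional curve in joint space. Python/numpy, link lengths
- and target made up:)
-
-     import numpy as np
-
-     L1, L2 = 1.0, 1.0
-
-     def fk2(q):
-         # forward kinematics of a 2-link planar arm
-         return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
-                          L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])
-
-     target = np.array([1.2, 0.5])
-     c2 = (target @ target - L1**2 - L2**2) / (2*L1*L2)   # cos(elbow)
-     for q2 in (np.arccos(c2), -np.arccos(c2)):           # the two preimages
-         q1 = (np.arctan2(target[1], target[0])
-               - np.arctan2(L2*np.sin(q2), L1 + L2*np.cos(q2)))
-         print(np.round(fk2([q1, q2]), 3))                # both hit the target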
-
- I'm trying to invert what appear to be non-invertible functions
- for my thesis...
-
- What I do is analyze the forward function (from (x,y) pairs),
- assuming it is smooth, and split it up into a finite set of
- trivial fiber bundles, each of which can be parameterized and
- its inverse approximated directly. The good news is that it
- works; the bad news is that it is at least exponential in
- the dimensionality of the input space (but what isn't?). More
- good news is that for robotics, as you know, 4 dimensions is
- useful (a redundant positioner) and 7 really valuable (e.g. the
- Robotics Research K-1207 or a similar redundant manipulator).
-
- Please pass along other references & replies!
-
- Thanks,
- Dave
-
-
- --
- Dave DeMers ddemers@UCSD demers@cs.ucsd.edu
- Computer Science & Engineering C-014 demers%cs@ucsd.bitnet
- UC San Diego ...!ucsd!cs!demers
- La Jolla, CA 92093-0114 (619) 534-0688, or -8187, FAX: (619) 534-7029
-