Path: sparky!uunet!gatech!rutgers!igor.rutgers.edu!planchet.rutgers.edu!nanotech
From: sinster@cse.ucsc.edu (Darren Senn)
Newsgroups: sci.nanotech
Subject: Re: Multiple Experiences
Message-ID: <Aug.14.00.12.54.1992.412@planchet.rutgers.edu>
Date: 14 Aug 92 04:12:55 GMT
Sender: nanotech@planchet.rutgers.edu
Organization: University of California, Santa Cruz (CE/CIS Boards)
Lines: 56
Approved: nanotech@aramis.rutgers.edu

In article <Aug.9.20.33.10.1992.15549@planchet.rutgers.edu> hsr4@vax.oxford.ac.uk (Auld Sprurklie) writes:
>In article <Aug.6.20.51.51.1992.8765@planchet.rutgers.edu>, sinster@cse.ucsc.edu (Darren Senn) writes:
>>[...]
>> In order for one memory of a network to be interdependent with another
>> memory of the network, there must be some kind of feedback loop. Merely
>> disabling this loop will allow an outside observer to examine memories
>> independently of one another. Each memory can't be accessed, however,
>> without first knowing the triggering association. I'm using "memory" here
>> to refer to any stored information, not merely remembered experiences.
>> Unless you do something strange in the construction/definition of the
>> network, the memories aren't interdependent: only their storage is.
>
>Although my knowledge of NNs is small (possibly even trivial, although it is
>growing), from what I've learned so far it would appear that all memories are
>interdependent to an extent, in that the addition of a new memory (by learning)
>results perhaps in a change to weights, thresholds, maybe even to transfer
>functions until the entire net stabilises again.
>
>In that case, the original values held by (associated with) one or more neurons
>will have changed. Addition of another memory will result in further changes,
>such that if one were to attempt to remove an earlier memory, those memories
>'experienced' subsequently would need to be re-presented and the net stabilised
>each time, in order that a 'hole' should not be left (the memories lying before
>and after the removed memory might experience a kind of leap between them - on
>the basis that memories tend to follow sequentially in presentation and are
>therefore likely to be associated anyway).
>[...]

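Incidentally, the global weight shift described above is easy to see in a
toy Hebbian (outer-product) store. A quick Python/numpy sketch -- the sizes
are arbitrary, and this is only a caricature of learning, not any particular
training rule from the literature:

import numpy as np

rng = np.random.default_rng(0)
n = 64                                  # neurons (arbitrary)
old = rng.choice([-1, 1], size=(2, n))  # two memories already stored
W = (old.T @ old).astype(float) / n     # Hebbian outer-product storage

new = rng.choice([-1, 1], size=n)       # learn one more memory
dW = np.outer(new, new) / n             # the Hebbian update for it
print(np.mean(dW != 0))                 # 1.0: every single weight moves

Every entry of dW is +-1/n, so storing one new pattern perturbs every
connection in the net -- exactly the kind of global settling described above.
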
The memories (associations) stored by a neural network are very strongly
distinguished from the means by which those memories are stored (the
connection weights, thresholds -- which are just connections to always-on
neurons -- and transfer functions). This is where a neural network gets
its fault tolerance. I can take a suitably large network and change a
subset of its connection weights to random values without causing its
output to leave whatever tolerance I choose. The smaller the network or
the tighter my tolerance, the fewer weights I can change. It's important
to remember that any particular memory stored in a neural network is
distributed through _all_ of the neurons. Some play a larger role than
others, to be sure, but the distribution is still complete.

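To make the fault-tolerance claim concrete, here is a small Python/numpy
sketch using a Hopfield-style net (synchronous updates for simplicity; the
pattern count, damage fraction, and noise level are all arbitrary choices
of mine):

import numpy as np

rng = np.random.default_rng(1)
n = 200
memories = rng.choice([-1, 1], size=(3, n))    # three stored patterns
W = (memories.T @ memories).astype(float) / n  # every memory is spread over all weights
np.fill_diagonal(W, 0.0)

def recall(W, x, steps=20):
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)        # threshold each neuron
    return x

damaged = W.copy()
hit = rng.random(W.shape) < 0.15               # randomize 15% of the weights
damaged[hit] = rng.normal(0.0, 1.0 / n, size=hit.sum())

cue = memories[0].copy()
cue[:20] *= -1                                 # a corrupted cue for memory 0
print(np.mean(recall(damaged, cue) == memories[0]))  # typically prints 1.0

Push the damage fraction up, or shrink n, and recall starts to fail: the
smaller the network (or the tighter your tolerance), the less damage it
absorbs.
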
A useful image to keep in mind is the graph of the energy of association
for various memories. Imagine a hypersurface in (n+1)-space, where n is
the number of weights in the network; the last axis is the total "energy
of association" in the network. Each memory that the network is required
to learn creates a (generally complex) 'dimple' in the surface. The
"lowest" point in that dimple represents the optimal set of weights that
will allow a network to learn that association. Since we usually want a
network to learn a large number of associations, the hypersurface becomes
very warped. The object of most learning algorithms is to find the point
on that complex surface that minimizes the energy without getting trapped
in a local minimum.

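You can play with a one-weight caricature of that search in Python. The
energy function below is invented purely for illustration (a real net has
n of these axes, one per weight), and random restarts stand in for the many
fancier ways of escaping local minima:

import numpy as np

def energy(w):
    # Two dimples: a shallow local minimum near w = -0.8,
    # and the deep (global) one near w = 3.1.
    return 0.1 * w**4 - 0.3 * w**3 - 0.5 * w**2 + 1.0

def grad(w, eps=1e-5):
    # Numerical derivative; good enough for a toy.
    return (energy(w + eps) - energy(w - eps)) / (2 * eps)

def descend(w, lr=0.01, steps=2000):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

print(descend(-2.0))  # slides into the shallow local minimum (~ -0.8)
print(descend(0.5))   # finds the deep minimum (~ 3.1)

# Random restarts: a crude but honest way to avoid getting trapped.
starts = np.random.default_rng(2).uniform(-3.0, 3.0, 10)
best = min((descend(s) for s in starts), key=energy)
print(best, energy(best))
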
--
Darren Senn                                      Phone: (408) 479-1521
sinster@scintilla.capitola.ca.us                 Snail: 1785 Halterman #1
Don't forget -- wherever you go, there you are.  Santa Cruz, Ca 95062