Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!usc!sol.ctr.columbia.edu!destroyer!ubc-cs!unixg.ubc.ca!kakwa.ucs.ualberta.ca!alberta!arms
From: arms@cs.UAlberta.CA (Bill Armstrong)
Subject: Re: BEP question
Message-ID: <arms.715877524@spedden>
Sender: news@cs.UAlberta.CA (News Administrator)
Nntp-Posting-Host: spedden.cs.ualberta.ca
Organization: University of Alberta, Edmonton, Canada
References: <1992Sep7.064241.10764@afterlife.ncsc.mil>
Date: Mon, 7 Sep 1992 14:52:04 GMT
Lines: 79

hcbarth@afterlife.ncsc.mil (Bart Bartholomew) writes:

> I have been pondering a problem in NNs that I think should
>work, but I don't know how to implement it. While I will describe
>a specific problem, the technique (if there is one) should have
>a fairly wide use.
> Everyone is just born knowing that sine waves are mean zero,
>variance one.

Do you mean "variance one-half"?

The mean-square deviation of sin x from its mean 0 (using a uniform
distribution on x from 0 to 2*pi) is 1/2. It can't possibly be 1,
because then sin x would have to be at its maximum absolute deviation
of 1 throughout the whole interval!

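A quick check: sin^2 x = (1 - cos 2x)/2, and the cosine term
integrates to zero over a full period, so the variance is
(1/(2*pi)) * integral_0^{2*pi} sin^2 x dx = 1/2.  To see it
numerically, a few lines of Python (just a sketch):

    import numpy as np

    # sample sin(x) uniformly over one full period
    x = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
    s = np.sin(x)
    print(s.mean())  # ~0.0
    print(s.var())   # ~0.5
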
>If gaussian noise is added, the variance goes up.
>Some people use the variance as a measure of the signal-to-noise ratio.
> Seems like we should be able to do an epochal forward pass
>in which we accumulate the output points, compute the variance,
>develop an error term from the putative norm, and backpropagate that.
> One can, of course, see other such error terms. In effect,
>we have some black box in the forward pass to develop the error for
>the backward pass.

I assume the backpropagation is for some specific purpose. One goal
could be to guide a learning procedure in some machine; another could
be to determine which input variables have the most effect on the
output. Since your example only has one variable, I have to assume a
learning system, possibly a standard MLP.
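
For concreteness, here is a minimal sketch of the epoch-level error I
take Bart to be proposing (my notation, not his; it assumes a single
output unit and a target variance of 1/2):

    import numpy as np

    def variance_error(outputs, target_var=0.5):
        # outputs: network outputs accumulated over one epoch
        y = np.asarray(outputs, dtype=float)
        n = y.size
        v = y.var()  # sample variance (mean squared deviation)
        err = (v - target_var) ** 2
        # dVar/dy_i = 2*(y_i - mean)/n, so by the chain rule the
        # per-pattern quantity to backpropagate, in place of the
        # usual (y_i - t_i), is:
        grad = 2.0 * (v - target_var) * 2.0 * (y - y.mean()) / n
        return err, grad

Note that this error is perfectly differentiable in the outputs,
which already takes some force out of the objection quoted below.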

> The mathematicians (I am most assuredly *not*) will probably
>object if the error term is not differentiable. My wetware feeling
>(notoriously unreliable) is that if a given function is not nicely
>differentiable, there is some reasonable transformation of it that is.
> At this point we come to the inevitable clash with the nuts-
>and-bolts of the problem in deciding how to actually implement
>something like this.
> Your comments please.
> Bart
>--

On the contrary. If you want a proof that differentiability is not
necessary, just look at ALNs (adaptive logic networks). They do a
form of logical backprop where either something affects the output or
it doesn't, because everything is zeros and ones. The following is
the whole idea behind backprop: change the weights which have a
significant effect on the error for the particular input pattern
being given. I like Bernard Widrow's intuition on this one: change
the weights in such a way as to "least disturb" the learning of other
patterns. BP does this by getting the greatest effect on the error
from the least perturbation.

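The flavor of that zero-or-one credit assignment fits in a few lines
(a toy illustration only, not the actual ALN training rule):

    def flips_output(f, bits, i):
        # does flipping input bit i change the boolean output of f?
        flipped = list(bits)
        flipped[i] = 1 - flipped[i]
        return f(bits) != f(flipped)

    AND = lambda b: b[0] & b[1]
    print(flips_output(AND, [1, 1], 0))  # True: this input matters here
    print(flips_output(AND, [0, 1], 1))  # False: output is 0 either way
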
Differentiability is just one way of finding out how things affect an
output, whether those "things" are the weights during learning or the
inputs themselves, as in cases where you might want to control
something (like a truck backer-upper).

Now, how would you implement an alternative approach in general?
Well, you could do it in combinational logic for speed, as in ALNs,
but considering the effect of a change on the output is only one of
the heuristics used in ALNs. In general, you could try perturbations
of the weights (or whatever parameters you have) to see which change
affects the output most. There is also no reason why the appropriate
weight(s) to change, and by how much, couldn't be predicted by
another NN, as you suggest, if I have understood your remarks.

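A generic finite-difference version of the perturbation idea might
look like this (a sketch; net_fn and the parameter vector w stand in
for whatever model you actually have):

    import numpy as np

    def most_influential(net_fn, w, x, eps=1e-3):
        # perturb each parameter in turn and record how much the
        # output moves; the biggest mover is the one to adjust
        base = net_fn(w, x)
        effect = np.zeros(w.size)
        for i in range(w.size):
            wp = w.copy()
            wp[i] += eps
            effect[i] = (net_fn(wp, x) - base) / eps
        return int(np.argmax(np.abs(effect))), effect

Nothing here requires net_fn to be differentiable, only evaluable, so
the same loop works for a network of hard threshold units.
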
So Bart, though your innate knowledge about sine waves seems to be
twice as great as most people's, I think you are absolutely correct
in the ideas you are suggesting.

Bill

--
***************************************************
Prof. William W. Armstrong, Computing Science Dept.
University of Alberta; Edmonton, Alberta, Canada T6G 2H1
arms@cs.ualberta.ca Tel(403)492 2374 FAX 492 1071
