- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!elroy.jpl.nasa.gov!ucla-cs!maui.cs.ucla.edu!edwin
- From: edwin@maui.cs.ucla.edu (Edwin Tisdale)
- Subject: Re: function approximation with neural nets : what is right ?
- Message-ID: <1992Sep10.182015.19429@cs.ucla.edu>
- Sender: usenet@cs.ucla.edu (Mr Usenet)
- Nntp-Posting-Host: maui.cs.ucla.edu
- Organization: UCLA Computer Science Department
- References: <1992Sep10.153652.12503@noose.ecn.purdue.edu>
- Date: Thu, 10 Sep 92 18:20:15 GMT
- Lines: 25
-
- In article <1992Sep10.153652.12503@noose.ecn.purdue.edu>
- kavuri@lips2.ecn.purdue.edu (Surya N Kavuri ) writes:
- >
- > In using a neural net to approximate a function, it is done
- > without any a priori information on what class of functions we are
- > looking for. For example, if I know the function is some
- > polynomial, there is no way to impose this condition on the net.
- >
- If you **know** that the function you are trying to imitate is best
- approximated by a polynomial then you **should** use a polynomial.
- The polynomial is just another kind of network. The non-linear
- units are just the powers (x^0, x^1, x^2, ...) of the input variable.
- People sometimes call these things Sigma-Pi units.
- The weights are just the coefficients (a_0, a_1, a_2, ...) which can
- be estimated using conventional polynomial regression.
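-
- To make that concrete, here is a minimal sketch in Python/NumPy of
- treating the powers of x as fixed non-linear units and fitting the
- weights by least squares. The cubic target and the noise level are
- made up for illustration:

```python
import numpy as np

# Hypothetical target: noisy samples of a cubic, y = 2 - x + 0.5*x^3.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = 2.0 - x + 0.5 * x**3 + rng.normal(scale=0.01, size=x.size)

# Design matrix whose columns are the non-linear "units" x^0..x^3;
# solving the least-squares problem gives the weights a_0..a_3.
X = np.vander(x, N=4, increasing=True)
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
```

- Each column of the Vandermonde matrix plays the role of one fixed
- unit, and the fitted coefficients are the network weights.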
- The problem is a little harder if you don't have much data to "train"
- with and you don't know which (or even how many) terms (units) to
- include in your network. You might try using variable exponents (x^w).
- But then you need a learning algorithm to propagate errors backward to
- the exponents. In short, you should always try to use the network
- architecture that is most appropriate for your application. But
- unless you have some good reason to believe one is better than
- another, any of the black-box models will do, so just try the one
- that is most convenient. Hope this helps, Bob Tisdale
- UCLA-CSD.
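-
- The variable-exponent idea above can be sketched as plain gradient
- descent on both the coefficient and the exponent. This is only an
- illustration (the one-term model y = a*x^w, the data, and the
- learning rate are all made up), and it assumes positive inputs so
- that x^w and its derivative with respect to w are well defined:

```python
import numpy as np

# Hypothetical model f(x) = a * x**w; both the coefficient a and the
# exponent w are learned by propagating errors backward to them.
rng = np.random.default_rng(1)
x = rng.uniform(0.5, 2.0, size=200)   # positive inputs so x**w is safe
y = 3.0 * x**1.7                      # "true" function: a = 3, w = 1.7

a, w = 1.0, 1.0                       # initial guesses
lr = 0.02
for _ in range(20000):
    err = a * x**w - y
    # Chain rule: d(a*x^w)/da = x^w,  d(a*x^w)/dw = a * x^w * ln(x)
    grad_a = 2.0 * np.mean(err * x**w)
    grad_w = 2.0 * np.mean(err * a * x**w * np.log(x))
    a -= lr * grad_a
    w -= lr * grad_w
```

- The ln(x) factor is what "propagating errors backward to the
- exponents" amounts to for this unit; with several terms and noisy
- data the error surface gets harder, which is the difficulty the post
- alludes to.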
-