- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!cs.yale.edu!tsioutsias-dimitris
- From: tsioutsias-dimitris@CS.YALE.EDU (Dimitris Tsioutsias)
- Subject: Re: ALN vs BP
- Message-ID: <1992Jul24.234400.2369@cs.yale.edu>
- Sender: news@cs.yale.edu (Usenet News)
- Nntp-Posting-Host: topaz.systemsx.cs.yale.edu
- Organization: Yale University Computer Science Dept., New Haven, CT 06520-2158
- Date: Fri, 24 Jul 1992 23:44:00 GMT
- Lines: 22
-
-
- <<<<<<<<
- From: tedwards@src.umd.edu (Thomas Grant Edwards)
- Message-ID: <1992Jul24.194122.27019@src.umd.edu>
-
- In article <1992Jul24.053623.22636@cs.yale.edu> tsioutsias-dimitris@CS.YALE.EDU (Dimitris Tsioutsias) writes:
- >It seems that after the backprop fans, we now have the ALN ones. Why
- >is each group (or any other that shows strong support) trying to pass
- >its nets off as the dominant ones?
-
- Yeah, but there are a lot of people who want to throw nets at real problems
- today, and it would be a little silly for anyone to throw gradient-descent
- MLP nets at any real problem and expect results.
- >>>>>>>>>>
-
-
- It depends on the 'real problem'; such a generalization is
- a bit too sweeping. (Although PURE gradient descent is
- definitely not the best thing to do in most situations.)
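- The point about pure gradient descent can be made concrete. A minimal
- sketch (modern Python, not part of the original thread; the function and
- all parameter values are illustrative): on a badly scaled quadratic,
- plain gradient descent crawls along the shallow direction, while adding
- a standard momentum term converges far faster at the same step size.

```python
# Illustrative example: minimize f(x, y) = 0.5*(x**2 + 100*y**2),
# an ill-conditioned quadratic, with and without momentum.

def grad(x, y):
    """Gradient of f(x, y) = 0.5*(x**2 + 100*y**2)."""
    return x, 100.0 * y

def pure_gd(steps=200, lr=0.015):
    """Plain gradient descent: step along the negative gradient."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= lr * gx
        y -= lr * gy
    return x, y

def momentum_gd(steps=200, lr=0.015, mu=0.9):
    """Heavy-ball momentum: accumulate a velocity across steps."""
    x, y = 1.0, 1.0
    vx = vy = 0.0
    for _ in range(steps):
        gx, gy = grad(x, y)
        vx = mu * vx - lr * gx
        vy = mu * vy - lr * gy
        x += vx
        y += vy
    return x, y

print("pure GD:    ", pure_gd())      # still visibly far from (0, 0) in x
print("momentum GD:", momentum_gd())  # much closer to the minimum
```

- After 200 steps at the same learning rate, plain descent has barely
- moved along the low-curvature x direction, while the momentum variant
- is near the minimum in both coordinates.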
-
- ->dimitris
-
-