
Path: sparky!uunet!usc!elroy.jpl.nasa.gov!ames!network.ucsd.edu!sdcc12!cs!demers
From: demers@cs.ucsd.edu (David DeMers)
Newsgroups: comp.ai.neural-nets
Subject: Re: Reducing Training time vs Generalisation
Keywords: back propagation, training, generalisation
Message-ID: <36944@sdcc12.ucsd.edu>
Date: 18 Aug 92 04:08:56 GMT
References: <arms.714014919@spedden> <36931@sdcc12.ucsd.edu> <arms.714091659@spedden>
Sender: news@sdcc12.ucsd.edu
Organization: CSE Dept., U.C. San Diego
Lines: 17
Nntp-Posting-Host: beowulf.ucsd.edu

In article <arms.714091659@spedden> arms@cs.UAlberta.CA (Bill Armstrong) writes:
...

>PS I hope you don't take the "violins" comment too much to heart, but
>the truth is that with a least squared error criterion on the training
>set, I can get the optimal learned function to create a disaster very
>easily.

No offense, certainly.  I guess I just don't understand what you
mean by "disaster" nor what you've meant in previous postings
about "wild" results...

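For what it's worth, Bill's PS can be illustrated outside the thread. A minimal Python sketch (my own stand-in example, not anything posted here): an interpolating polynomial is one least-squares optimum, since it drives training-set squared error to zero, yet it can swing wildly between the training points (Runge's phenomenon):

```python
# Hypothetical illustration: a fit with zero squared error on the
# training set (an interpolating polynomial) that still behaves
# "wildly" between the training points.

def lagrange(xs, ys, x):
    """Evaluate the unique interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Training data: 11 evenly spaced samples of the smooth target
# f(x) = 1 / (1 + 25 x^2) on [-1, 1].
xs = [-1.0 + 0.2 * k for k in range(11)]
f = lambda x: 1.0 / (1.0 + 25.0 * x * x)
ys = [f(x) for x in xs]

# Squared error on the training set is essentially zero...
train_err = max(abs(lagrange(xs, ys, x) - y) for x, y in zip(xs, ys))
print("worst training-point error:", train_err)

# ...but between the samples the "optimal" fit oscillates hard,
# especially near the ends of the interval.
grid = [-1.0 + i * 0.002 for i in range(1001)]
worst = max(abs(f(x) - lagrange(xs, ys, x)) for x in grid)
print("worst off-sample error:", worst)
```

Sampling at Chebyshev points, or fitting a lower-degree polynomial by least squares instead of interpolating exactly, tames the oscillation; that is the usual capacity-control argument for why training error alone is a poor guide to generalisation.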
-- 
Dave DeMers             ddemers@UCSD   demers@cs.ucsd.edu
Computer Science & Engineering    C-014        demers%cs@ucsd.bitnet
UC San Diego                    ...!ucsd!cs!demers
La Jolla, CA 92093-0114    (619) 534-0688, or -8187, FAX: (619) 534-7029