
Path: sparky!uunet!usc!sdd.hp.com!swrinde!elroy.jpl.nasa.gov!ames!lll-winken!tazdevil!henrik
From: henrik@mpci.llnl.gov (Henrik Klagges)
Newsgroups: comp.ai.neural-nets
Subject: Re: Reducing Training time vs Generalisation
Keywords: back propagation, training, generalisation
Message-ID: <?.714068811@tazdevil>
Date: 17 Aug 92 16:26:51 GMT
References: <1992Aug16.063825.15300@julian.uwo.ca> <1992Aug16.213939.15944@ccu1.aukuni.ac.nz> <arms.714014919@spedden>
Sender: usenet@lll-winken.LLNL.GOV
Lines: 13
Nntp-Posting-Host: tazdevil.llnl.gov

arms@cs.UAlberta.CA (Bill Armstrong) writes:

>It would be interesting to hear Scott Fahlmann's ideas on how to get
>good generalization.  Then you might find out why you had problems
>using Cascade Correlation.

Could you elaborate a little on that point, i.e. why you had problems
with CC?

Cheers, Henrik

IBM Research
Massively Parallel Group at Lawrence Livermore