
Path: sparky!uunet!zephyr.ens.tek.com!uw-beaver!micro-heart-of-gold.mit.edu!news.bbn.com!usc!wupost!ukma!seismo!lll-winken!tazdevil!henrik
From: henrik@mpci.llnl.gov (Henrik Klagges)
Newsgroups: comp.ai.neural-nets
Subject: Re: neural nets and generalization (was Why not trees?)
Message-ID: <?.711993807@tazdevil>
Date: 24 Jul 92 16:03:27 GMT
References: <arms.711643374@spedden> <4458@rosie.NeXT.COM>
Sender: usenet@lll-winken.LLNL.GOV
Lines: 16
Nntp-Posting-Host: tazdevil.llnl.gov

paulking@next.com (Paul King) writes:
>of events transform an input pattern into an output pattern.  The
>"goal" of the neural net is not only to memorize the input-to-output
                                         ^^^^^^^^
>mappings,

If the black box 'memorizes' the patterns (literally), you are lost,
as lookup tables are pretty useless. I have found that high information
compression rates (rule of thumb: # of float input values / # of float
weights) lead to good generalization.

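The rule of thumb above can be made concrete. A minimal sketch (not from the original post; the layer sizes and training-set size are made-up examples) of the ratio "# of float input values in the training set / # of float weights" for a fully connected net:

```python
def num_weights(layers):
    """Free parameters (weights plus biases) of a fully
    connected net with the given layer sizes, e.g. [8, 5, 1]."""
    return sum((n_in + 1) * n_out for n_in, n_out in zip(layers, layers[1:]))

def compression_ratio(n_patterns, n_inputs, layers):
    """Float values presented to the net divided by float weights;
    larger ratios force the net to compress rather than memorize."""
    return (n_patterns * n_inputs) / num_weights(layers)

# An 8-5-1 net has (8+1)*5 + (5+1)*1 = 51 weights; 1000 patterns of
# 8 floats each give a ratio of 8000 / 51, i.e. roughly 157.
ratio = compression_ratio(n_patterns=1000, n_inputs=8, layers=[8, 5, 1])
```

A ratio near (or below) 1 means the net has enough weights to store the training data outright, i.e. to act as a lookup table.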
--

Cheers, Henrik
MPCI at LLNL
IBM Research