Path: sparky!uunet!stanford.edu!ames!lll-winken!tazdevil!henrik
From: henrik@mpci.llnl.gov (Henrik Klagges)
Newsgroups: comp.ai.neural-nets
Subject: Re: Reducing Training time vs Generalisation
Message-ID: <?.714340972@tazdevil>
Date: 20 Aug 92 20:02:52 GMT
References: <Bt9GIx.9In.1@cs.cmu.edu> <arms.714289771@spedden>
Sender: usenet@lll-winken.LLNL.GOV
Lines: 31
Nntp-Posting-Host: tazdevil.llnl.gov

arms@cs.UAlberta.CA (Bill Armstrong) writes:

>>Sure, but they will have no incentive to do this unless the data, in some
>>sense, forces them to. You could always throw a narrow Gaussian unit into
>>the net, slide it over between any two training points, and give it an
>>output weight of 10^99. But it would be wrong.

>A good reason not to use such units, eh!
Hm, Bill - this reasoning applies to your example problem as well, which
is pretty much constructed ad hoc. Just add a training point at the
center - or break the symmetry some other way - and your extremum is gone (!).

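To see the rogue-unit effect numerically, here is a quick sketch
(Python/NumPy; the center/width/weight values are made up, not Bill's
exact construction):

import numpy as np

# Five training points on the identity function y = x.
x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

# A narrow Gaussian unit slid between the points at 1 and 2,
# with an absurd output weight - wrong, but costless in training.
def rogue(x, center=1.5, width=0.01, weight=1e99):
    return weight * np.exp(-((x - center) / width) ** 2)

print(rogue(x_train))   # ~[0 0 0 0 0]: training error is untouched
print(rogue(1.5))       # 1e99: generalisation between the points is ruined
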
>I seem to recall going over this before, and I believe what is
>required to upset the scheme is to have a lot of training points which
>force training to fit a solution having a peak. I.e. if six points

If there are many points suggesting a peak, then the peak is real. Have
you ever done a coupled-pendulum experiment and plotted phase lag across
the resonance? Perfect peaks - and only a few points needed to
generate/plot them!

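For the record, the textbook curve behind that experiment (standard
driven-oscillator formulas, not data from an actual lab run; omega0 and
gamma are made-up values):

import numpy as np

omega0, gamma = 1.0, 0.05           # natural frequency, damping (made up)
omega = np.linspace(0.5, 1.5, 11)   # only 11 driving frequencies

# Standard steady-state response of a driven, damped oscillator.
amplitude = 1.0 / np.sqrt((omega0**2 - omega**2)**2 + (gamma * omega)**2)
phase_lag = np.arctan2(gamma * omega, omega0**2 - omega**2)

for w, a, p in zip(omega, amplitude, phase_lag):
    print(f"omega={w:.2f}  amp={a:8.2f}  lag={p:.2f} rad")
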
Concerning lazy evaluation: it is difficult to program on parallel
machines. It amounts to a kind of runtime load unbalancing that
constantly switches off tasks (= subtree evaluations) of varying size,
essentially at random. There is no way to do that in SIMD, and it is
darn difficult to do in MIMD at all, especially with high efficiency.

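To make the imbalance concrete, a toy model (Python, my own
construction - nothing like our real codes): lazily evaluating a random
AND/OR tree short-circuits subtrees of unpredictable size, so the work
per evaluation - and hence the load per processor - varies wildly:

import random

def make_tree(depth):
    # Random AND/OR tree with boolean leaves.
    if depth == 0:
        return random.random() < 0.5
    return (random.choice(["and", "or"]),
            make_tree(depth - 1), make_tree(depth - 1))

def eval_lazy(node, count):
    # Short-circuit evaluation: skip the right subtree whenever the
    # left operand already decides the result.
    count[0] += 1
    if isinstance(node, bool):
        return node
    op, left, right = node
    l = eval_lazy(left, count)
    if (op == "and" and not l) or (op == "or" and l):
        return l
    return eval_lazy(right, count)

for trial in range(5):
    random.seed(trial)
    count = [0]
    eval_lazy(make_tree(12), count)   # 8191 nodes in the full tree
    print(f"trial {trial}: visited {count[0]} of 8191 nodes")
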
Cheers, Henrik
IBM Research Division physics group Munich
massively parallel group at Lawrence Livermore