Path: sparky!uunet!charon.amdahl.com!pacbell.com!mips!sdd.hp.com!elroy.jpl.nasa.gov!lll-winken!tazdevil!henrik
From: henrik@mpci.llnl.gov (Henrik Klagges)
Newsgroups: comp.ai.neural-nets
Subject: Re: Reducing Training time vs Generalisation
Message-ID: <?.714342847@tazdevil>
Date: 20 Aug 92 20:34:07 GMT
References: <Bt9GIx.9In.1@cs.cmu.edu> <arms.714289771@spedden>
Sender: usenet@lll-winken.LLNL.GOV
Lines: 22
Nntp-Posting-Host: tazdevil.llnl.gov

arms@cs.UAlberta.CA (Bill Armstrong) writes:

>You have a lot more experience than I do with sigmoid type nets, so
>what you have just said is extremely significant, in that you are
Correct. It is a very interesting finding. The same happened during
our experiments.

>coming closer all the time to a logical net. If you are able to
>replace sigmoids with sharp thresholds, and not change the output of
>the net significantly, then you are really using threshold *logic*
>nets.

Well, if that replaceability holds ... it would be great! I don't
think it does, though; I'd need a few more experiments to be sure.
An initial look suggests that the weights & sigmoids cannot easily
(straightforwardly) be replaced with 'gates'.

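To see why the substitution is not straightforward, here is a minimal
sketch (my own toy example, with hand-picked weights, not anything from
the experiments above): a two-layer net whose hidden unit lands near the
middle of the sigmoid. Swapping the sigmoid for a hard threshold snaps
that graded hidden value to 1, and the output unit then flips class.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(x):
    # The sharp-threshold 'gate' replacement under discussion.
    return 1.0 if x >= 0.0 else 0.0

def net(x, act):
    # One hidden unit feeding one output unit; weights chosen by hand
    # so the hidden pre-activation sits close to zero.
    h = act(1.0 * x - 0.2)     # hidden unit, pre-activation 0.1 for x=0.3
    return act(2.0 * h - 1.2)  # output unit

x = 0.3
print(net(x, sigmoid))  # ~0.463 -> rounds to class 0
print(net(x, step))     # 1.0    -> class 1
```

The sigmoid net passes the intermediate value 0.525 downstream, which the
output unit maps below 0.5; the thresholded net rounds that hidden value
to 1 first and ends up on the other side of the decision boundary. Only
when every unit is saturated do the two versions reliably agree.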
Cheers, Henrik
IBM Research Division Physics Group Munich
Massively Parallel Group at Lawrence Livermore