Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!usc!rpi!newsserver.pixel.kodak.com!psinntp!psinntp!afs!dave
From: dave@afs.com (David J. Anderson)
Subject: failsafe NNs
Message-ID: <1992Aug25.154640.2817@afs.com>
Sender: dave@afs.com
Date: Tue, 25 Aug 1992 15:46:40 GMT
Lines: 19

Having looked at neural nets from the periphery for a while, I'm wondering
whether anyone has looked at NNs from the angle of how they break down.

As an analogy, it seems to me that every so often I'm faced with a concept
so radically new, so different, that a good deal of the training I've had
no longer holds, and a new training/exploration mode must be entered to get
my wetnet back up to speed.

Granted, wetnets are a great deal more complex than the NNs we've created
so far. But I believe the analogy applies: what happens when the inputs to
a NN are so weird that they could not have been adequately prepared for?
If we start using NNs for `smarter' applications, and the penalty for
being wrong gets nastier, how can we assure that the NN will respond to
such inputs appropriately?

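To make the question a bit more concrete, one crude answer would be a
"reject option" bolted onto the net: measure how far a new input sits
from anything in the training set, and have the system refuse to answer
rather than guess when it is too far out. The sketch below is purely
illustrative -- the net, the distance measure, and the threshold are all
made up:

# Toy sketch only: a "reject option" in front of a trained net.
# The net, the distance measure, and the threshold are invented
# for illustration -- nothing here comes from a real system.
import math

def distance(a, b):
    """Euclidean distance between two input vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def respond(net, training_inputs, x, threshold=1.0):
    """Answer only when x resembles something seen in training;
    otherwise refuse rather than guess."""
    nearest = min(distance(x, t) for t in training_inputs)
    if nearest > threshold:
        return None      # too far from anything it was trained on: fail safe
    return net(x)        # familiar territory: trust the net's answer

Obviously a single distance threshold is a blunt instrument, but it
captures what I mean by responding appropriately to weird inputs:
knowing when not to answer.
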
____________________________________________________________________
Dave Anderson | ...consequently, society expects all earnestly
the dman      | responsible communication to be crisply brief...
dave@afs.com  | we are not seeking a license to ramble wordily.