- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!usc!sol.ctr.columbia.edu!destroyer!gumby!yale!yale.edu!jvnc.net!nuscc!ntuix!ntuvax.ntu.ac.sg!cyxzhou
- From: cyxzhou@ntuvax.ntu.ac.sg
- Subject: Question: Forced learning
- Message-ID: <1992Nov5.154452.1@ntuvax.ntu.ac.sg>
- Lines: 12
- Sender: news@ntuix.ntu.ac.sg (USENET News System)
- Nntp-Posting-Host: v9001.ntu.ac.sg
- Reply-To: comp.ai.neural-nets
- Organization: Nanyang Technological University - Singapore
- Date: Thu, 5 Nov 1992 07:44:52 GMT
-
- We have used a feed-forward backprop network to simulate ore grade
- distribution. The problem is very simple. Input: (x,y) coordinates; output:
- Z (grade). While the results were very good, there was a problem: data
- values that did not occur with high frequency were scaled down. In reality,
- these values at certain points (x,y) are often known to be correct. My
- question: is there any way to force the NN to remember these values as
- being correct during the training process? I would welcome any comments or
- suggestions. You may send your reply to: cyxzhou@ntuvax.ntu.ac.sg.
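
One common way to bias a backprop network toward samples that are known to be correct (this is a suggested technique, not something described in the post) is to weight the squared-error loss per sample, so that misfitting a trusted (x,y) point is penalized more heavily than misfitting an ordinary one. The sketch below uses synthetic "grade" data, an assumed 2-8-1 tanh network, and an assumed 10x weight on a few trusted indices purely for illustration:

```python
import numpy as np

# Illustrative sketch: weight the squared-error loss per sample so that
# points whose values are known to be correct count more during training.
# The data, network size, and weighting factor are all assumptions.

rng = np.random.default_rng(0)

# Toy ore-grade data: inputs are (x, y) coordinates, target is grade z.
X = rng.uniform(0.0, 1.0, size=(50, 2))
z = (np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]).reshape(-1, 1)

# Mark a few samples as known-correct and give them a larger loss weight.
w = np.ones((50, 1))
known = [3, 17, 42]          # indices of trusted (x, y) points (assumed)
w[known] = 10.0              # 10x penalty for misfitting these samples

# One hidden layer, tanh activation, linear output.
H = 8
W1 = rng.normal(0, 0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, size=(H, 1)); b2 = np.zeros(1)

lr = 0.05
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)          # forward pass
    out = h @ W2 + b2
    err = out - z
    # Weighted MSE gradient: each sample's error is scaled by its weight,
    # so the trusted points dominate the update.
    g = w * err / len(X)
    gW2 = h.T @ g;  gb2 = g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)    # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
known_err = np.abs(pred[known] - z[known]).mean()
print("mean abs error at known points:", known_err)
```

Other options in the same spirit would be duplicating the trusted samples in the training set, or clamping/post-correcting the output at those points after training; the loss-weighting approach keeps the fitted surface smooth around them.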
-
- Best regards.
-
- Zhou Yingxin
-