Article 9142 of comp.ai.neural-nets:
Path: serval!netnews.nwnet.net!ogicse!network.ucsd.edu!usc!cs.utexas.edu!uunet!pipex!uknet!cam-eng!rss
From: rss@eng.cam.ac.uk (R.S. Shadafan)
Newsgroups: comp.ai.neural-nets
Subject: Technical Report
Keywords: Sequential, Dynamic, Neural Network, Input Space, RLS, LMS, Vowels, Wheat.
Message-ID: <1993May18.093824.10137@eng.cam.ac.uk>
Date: 18 May 93 09:38:24 GMT
Article-I.D.: eng.1993May18.093824.10137
Sender: rss@eng.cam.ac.uk (R.S. Shadafan)
Organization: cam.eng
Lines: 76
Nntp-Posting-Host: tw800.eng.cam.ac.uk
The following technical report is available by anonymous ftp from the
archive of the Speech, Vision and Robotics group at Cambridge
University, UK.
A Dynamic Neural Network Architecture
by Sequential Partitioning of the Input Space
Raed Shadafan and M. Niranjan
Technical Report CUED/F-INFENG/TR 127
Cambridge University Engineering Department
Trumpington Street
Cambridge CB2 1PZ
England
Abstract
We present a sequential approach to training multilayer perceptrons
for pattern classification applications. The network is presented with
each item of data only once, and its architecture is dynamically
adjusted during training. At the arrival of each example, a decision
is made, based on three heuristic criteria, whether to increase the
complexity of the network or simply to train the existing nodes. These
criteria measure the position of the new item of data in the input
space with respect to the information currently stored in the network.

During the training process, each layer is treated as an independent
entity with its own input space. By adding a node to a layer, the
algorithm effectively adds a hyperplane to that layer's input space,
and hence a new partition of it. When the existing nodes are
sufficient to accommodate the incoming input, the relevant hidden
nodes are trained accordingly.
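To give a feel for the control flow, here is a minimal, self-contained
sketch in Python of the grow-or-train loop described above. The single
distance-threshold test is a hypothetical stand-in for the report's
three covariance-based criteria, and the prototype nudge is a
stand-in for the RLS training sketched further below:

    import numpy as np

    class GrowingLayer:
        def __init__(self, threshold=1.0):
            self.centres = []        # one stored centre per hidden node
            self.threshold = threshold

        def present(self, x):
            # Each example is seen exactly once: grow or train.
            if not self.centres or self._novelty(x) > self.threshold:
                # New region of the input space: add a node, i.e. a
                # new hyperplane / partition anchored near this example.
                self.centres.append(x.copy())
            else:
                # Existing partitions accommodate x: train the nearest
                # node (a placeholder for the RLS update).
                i = int(np.argmin([np.linalg.norm(x - c)
                                   for c in self.centres]))
                self.centres[i] += 0.1 * (x - self.centres[i])

        def _novelty(self, x):
            # Placeholder criterion: distance to the nearest node.
            return min(np.linalg.norm(x - c) for c in self.centres)

    layer = GrowingLayer(threshold=1.5)
    for x in np.random.default_rng(0).normal(size=(200, 2)):
        layer.present(x)
    print(len(layer.centres), "nodes allocated")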
Each hidden unit in the network is trained in closed form by means of
a Recursive Least Squares (RLS) algorithm. A local covariance matrix
of the data is maintained at each node and the closed form solution is
recursively updated. The three criteria are computed from these
covariance matrices at minimal computational cost.
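For concreteness, the generic exponentially-weighted RLS recursion
(a textbook sketch, not necessarily the report's exact formulation)
maintains, for each node, a weight vector w and the inverse P of the
local covariance matrix, and updates both at each new input x with
target d:

    import numpy as np

    class RLSNode:
        # One hidden unit trained in closed form by recursive least
        # squares. This is the standard RLS recursion, shown only to
        # illustrate the kind of per-node update the report describes.

        def __init__(self, dim, lam=1.0, delta=100.0):
            self.w = np.zeros(dim)        # weight vector
            self.P = delta * np.eye(dim)  # inverse local covariance
            self.lam = lam                # forgetting factor (1.0 = none)

        def update(self, x, d):
            Px = self.P @ x
            k = Px / (self.lam + x @ Px)      # gain vector
            self.w += k * (d - self.w @ x)    # correct a priori error
            self.P = (self.P - np.outer(k, Px)) / self.lam  # rank-1 update

    node = RLSNode(dim=2)
    rng = np.random.default_rng(1)
    for _ in range(100):
        x = rng.normal(size=2)
        node.update(x, x @ np.array([2.0, -1.0]))  # fit a linear target
    print(node.w)  # approaches [2, -1]

Since each node already carries its P matrix, quantities of this kind
can be read off without extra passes over the data, which is the
source of the low cost noted above.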
The performance of the algorithm is illustrated on two problems. The
first is the two-dimensional Peterson & Barney vowel data. The second
is a 32-dimensional data set used for wheat classification. The
sequential nature of the algorithm lends itself to an efficient
hardware implementation in the form of systolic arrays, and the
incremental training idea is more biologically plausible than
iterative methods.
************************ How to obtain a copy ************************
a) Via FTP:
unix> ftp svr-ftp.eng.cam.ac.uk
Name: anonymous
Password: (type your email address)
ftp> cd reports
ftp> binary
ftp> get shadafan_tr127.ps.Z
ftp> quit
unix> uncompress shadafan_tr127.ps.Z
unix> lpr shadafan_tr127.ps (or however you print PostScript)
b) Via postal mail:
Request a hardcopy from
Raed Shadafan or M. Niranjan,
Speech Laboratory,
Cambridge University Engineering Department,
Trumpington Street,
Cambridge CB2 1PZ,
England.
or email us: rss@eng.cam.ac.uk or niranjan@eng.cam.ac.uk