Article 9211 of comp.ai.neural-nets:
Path: serval!netnews.nwnet.net!hubble.asymetrix.com!kuhub.cc.ukans.edu!moe.ksu.ksu.edu!zaphod.mps.ohio-state.edu!darwin.sura.net!news-feed-1.peachnet.edu!umn.edu!math.fu-berlin.de!fauern!rrze.uni-erlangen.de!late.e-technik.uni-erlangen.de!cia
Newsgroups: comp.ai.neural-nets
Subject: NEW BOOK Neural Networks for Optimization and Signal Processing
Message-ID: <1tobsdEps7@uni-erlangen.de>
From: cia@late.e-technik.uni-erlangen.de (Andreas Cichocki)
Date: Sun, 23 May 1993 17:22:21 GMT
Organization: LATE, Uni-Erlangen, Germany
Keywords: New Book
Summary: Information about new book: NEURAL NETWORKS for OPTIMIZATION and Signal Processing
NNTP-Posting-Host: late3.e-technik.uni-erlangen.de
Lines: 386
New book information:
NEURAL NETWORKS for OPTIMIZATION and SIGNAL PROCESSING
by Andrzej Cichocki and Rolf Unbehauen
(John Wiley, ISBN 0 471 93010 5, 524pp, April 1993)
OVERVIEW
This book has been written primarily for those who want to apply
artificial neural networks in practice, especially for the on-line
solution of a wide spectrum of optimization, matrix algebra and signal
processing problems. The book shows that artificial neural networks
can be used effectively for solving many scientific and engineering
problems which are formulated mainly as optimization (nonlinear
programming) problems.
The book does not aim at mathematical rigour, but at conveying understanding
through mathematics, i.e. algorithms and associated functional block
diagrams. In other words, the emphasis is on the mathematical models
of neural networks and the architectures (network structures)
associated with them. Many computer simulation results presented in
the book illustrate that the architectures developed are
feasible in practice. Most of the computer simulation results were
obtained by using the easily available and popular simulation programs
MATLAB, TUTSIM and SPICE. However, one can use other specialized computer
programs to check the validity and performance of the described
algorithms.
The book complements other books on the subject of artificial neural
networks rather than competing with them.
This book is not concerned with networks of real
biological neurons but rather with artificial neural networks in
which much of the analogy and inspiration comes from neuroscience.
The subject is approached from the standpoint of an engineer looking for
real-life applications. The book rests on the premise that artificial
neural networks are useful, important and interesting for such
applications as optimization, signal processing, optimal control and
identification. Particularly strong emphasis is given to optimization
problems, not only because neural networks make it possible to obtain
solutions in real time, but also because they allow new techniques and
architectures to be developed and stimulate the reader to be creative in
visualizing new approaches. The book provides coverage of artificial
neural network applications in a variety of problems of both
theoretical and practical interest.
The book is partly a textbook and partly a monograph. It is a textbook
because it gives a detailed introduction to artificial neural network
models and basic learning algorithms. It is simultaneously a monograph
because it presents several new results and ideas of the authors, a further
development and explanation of existing models and because new procedures
for optimization problems are brought together and published in the book
for the first time. As a result of its twofold character the book is
likely to be of interest to graduate and postgraduate students and to
engineers and scientists working in the field of computer science,
optimization, operations research, signal processing, identification or
control theory. The book has been written with engineering in mind, so
it can be used as a textbook for computer science and engineering
courses in artificial neural networks. Furthermore, the book may also
be of interest to researchers working in different areas of science
since a number of new results and concepts have been included which may
be advantageous for further work. Although no single book can fully
satisfy every reader interested in this fascinating area, all who seek
basic explanations, ideas, guidance and enlightenment on matters having
to do with analog artificial neural networks may find something of
value. The book can be read sequentially, but this is not necessary,
since each chapter is essentially self-contained, with as few
cross-references as possible.
This book is virtually self-contained and there are no prerequisites
for reading it, except for some mathematical background.
The book consists of nine chapters.
Chapter 1 summarizes the mathematical background which may
be helpful in the study of artificial neural networks and may provide
convenient sources for repetitive reference to standard facts. The
purpose of this chapter is to provide a selected list of mathematical
notations, definitions and results which are used frequently.
Chapter 2 is concerned principally with basic models of artificial
neural networks. Mathematical descriptions, functional block diagrams,
representations and some electronic implementations are described.
Chapter 3 provides an insight into learning algorithms, both
supervised and unsupervised. First, various methods and techniques
are described which transform unconstrained
optimization problems into equivalent systems of differential
equations. Such systems of differential equations often constitute
basic neural network algorithms to solve specified computational
problems in real time. The second objective in this chapter is to
explain in detail the basic back-propagation algorithm and its
modifications and improvements. The third objective is to give a
unified view of local learning by a single neuron. A generalized
learning algorithm is described, and basic algorithms (e.g. the
generalized LMS, TLS, Hebbian, Oja's and perceptron learning rules)
are discussed as special cases.
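The idea of transforming an unconstrained optimization problem into a system of differential equations can be sketched in a few lines. The toy below (in Python rather than the MATLAB/TUTSIM/SPICE used in the book; the objective function and all gains are illustrative assumptions, not taken from the book) discretizes the gradient flow dx/dt = -mu * grad E(x) by Euler steps:

```python
# Euler-discretized gradient flow dx/dt = -mu * grad E(x): the basic
# "neural" dynamical system for unconstrained minimization.
# Illustrative objective: E(x) = (x0 - 1)^2 + 2*(x1 + 0.5)^2.

def grad_E(x):
    # gradient of the illustrative objective E above
    return [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)]

def gradient_flow(x, mu=0.05, steps=2000):
    # mu plays the role of the network's time constant
    for _ in range(steps):
        g = grad_E(x)
        x = [xi - mu * gi for xi, gi in zip(x, g)]
    return x

x_min = gradient_flow([0.0, 0.0])  # converges toward the minimizer (1, -0.5)
```

The equilibrium points of the continuous-time system are exactly the stationary points of E, which is what lets an analog circuit integrating these equations "compute" the minimizer in real time.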
Chapter 4 discusses various general architectures and
VLSI circuit implementations of neuron-like analog processors for
linear and quadratic programming and linear complementarity problems.
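As a rough illustration of how such an analog processor can attack a linear program, the sketch below (a Python toy, not the book's circuits; the problem data, penalty weight and step size are all illustrative assumptions) runs gradient descent on a quadratic-penalty energy for a two-variable LP:

```python
# Quadratic-penalty "neural" gradient network for a tiny linear program:
# minimize c.x subject to a.x = b and x >= 0.
# Violations of x >= 0 are penalized via min(0, x_i)^2 terms.

def lp_penalty_flow(c, a, b, K=100.0, mu=0.002, steps=20000):
    x = [0.5] * len(c)
    for _ in range(steps):
        viol = sum(ai * xi for ai, xi in zip(a, x)) - b  # equality violation
        # gradient of c.x + (K/2)*viol^2 + (K/2)*sum(min(0, x_i)^2)
        x = [xi - mu * (ci + K * ai * viol + K * min(0.0, xi))
             for xi, ci, ai in zip(x, c, a)]
    return x

# minimize x0 + 2*x1 subject to x0 + x1 = 1, x >= 0; exact optimum is (1, 0)
x = lp_penalty_flow([1.0, 2.0], [1.0, 1.0], 1.0)
```

With a finite penalty weight K the equilibrium sits within O(1/K) of the true optimum; the book's augmented-Lagrangian variants remove this bias.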
In Chapter 5 various network architectures of simple
neuron-like processors are proposed for the on-line solution of a
system of linear equations with real constant and/or time-variable
coefficients. Various new algorithms and associated network structures
are developed. Special emphasis is given to ill-conditioned
problems. The properties and performance of the networks are
illustrated by computer simulation results.
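A minimal sketch of the underlying idea, assuming the simplest least-squares energy E(x) = ||Ax - b||^2 / 2 (the matrix, right-hand side and gains below are illustrative, not examples from the book):

```python
# Least-squares gradient network for A x = b: the state follows
# dx/dt = -mu * A^T (A x - b), here discretized by Euler steps.

def solve_linear(A, b, mu=0.05, steps=5000):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        # gradient g = A^T r
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] - mu * g[j] for j in range(n)]
    return x

# 2x2 example with exact solution x = (1, 2)
x = solve_linear([[2.0, 1.0], [1.0, 3.0]], [4.0, 7.0])
```

For overdetermined or inconsistent systems the same dynamics settle at the least-squares solution, which is why this structure is the workhorse of the chapter.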
The objective of Chapter 6 is to apply and extend the methods
and techniques described in the previous chapters to a large class
of linear matrix algebra problems, for example matrix inversion
and pseudo-inversion, singular value decomposition (SVD), the eigenvalue
problem, QR factorization, solving the Lyapunov and Riccati matrix equations,
and principal component analysis (PCA). A review of recent algorithms is
also given.
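For the PCA case, the flavour of such learning algorithms can be conveyed by Oja's normalized Hebbian rule, w <- w + eta * y * (x - y * w) with y = w.x, which extracts the first principal component. The toy data set and learning rate below are illustrative assumptions, not the book's examples:

```python
# Oja's rule: a single linear neuron whose weight vector converges
# (up to sign) to the unit-norm leading eigenvector of the input covariance.

import random

random.seed(0)
# zero-mean 2-D data stretched along the (1, 1) direction plus small noise
data = [(t + 0.1 * random.gauss(0, 1), t + 0.1 * random.gauss(0, 1))
        for t in [random.gauss(0, 1) for _ in range(2000)]]

w = [1.0, 0.0]
eta = 0.01
for x in data * 5:  # a few passes over the data
    y = w[0] * x[0] + w[1] * x[1]          # neuron output
    w = [w[0] + eta * y * (x[0] - y * w[0]),   # Hebbian term minus
         w[1] + eta * y * (x[1] - y * w[1])]   # self-normalizing decay

# w ends up close to the leading eigenvector, roughly (0.707, 0.707)
```

The `- y * w` decay term is what keeps ||w|| near 1 without any explicit renormalization step.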
Chapter 7 deals with "standard" constrained optimization
problems (nonlinear programming) where both the objective functions
and the constraints are generally nonlinear. As a special class of
problems we discuss minimax and nonlinear least absolute value
problems. For each optimization problem considered in this chapter,
appropriate computational algorithms and
associated neuron-like optimizer solvers are developed.
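One technique covered in the book's table of contents, the augmented Lagrange multiplier method, can be sketched for a toy equality-constrained problem (the problem, gains and iteration counts below are illustrative assumptions, not the book's examples):

```python
# Augmented-Lagrangian gradient dynamics for:
# minimize x^2 + y^2 subject to x + y = 1 (exact optimum at (0.5, 0.5)).
# Inner loop: gradient descent on the augmented Lagrangian.
# Outer loop: multiplier update lam <- lam + K * violation.

def aug_lagrangian(K=10.0, mu=0.01, inner=2000, outer=10):
    x, y, lam = 0.0, 0.0, 0.0
    for _ in range(outer):
        for _ in range(inner):
            viol = x + y - 1.0
            gx = 2.0 * x + lam + K * viol
            gy = 2.0 * y + lam + K * viol
            x, y = x - mu * gx, y - mu * gy
        lam += K * (x + y - 1.0)  # multiplier update after inner convergence
    return x, y

x, y = aug_lagrangian()  # converges to (0.5, 0.5) with lam -> -1
```

Unlike a pure penalty method, the multiplier update drives the constraint violation to zero without K having to grow unboundedly.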
Chapter 8 discusses various algorithms and realization
techniques of artificial neural networks for the real-time estimation
of parameters and the reconstruction of signals corrupted by noise
and distorted by parasitic components. Different criteria for the
optimal estimation of signal parameters are discussed. Furthermore,
neural network models for system identification and time-series
prediction are described. Special emphasis is given to blind
identification, applied to the separation of independent
source signals. Exemplary computer simulation experiments illustrate
the validity and performance of the described neural network algorithms.
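The flavour of such on-line parameter estimation can be conveyed by a single adaptive (LMS) neuron tracking the coefficients of a sinewave model y(t) = a*sin(wt) + b*cos(wt), which is linear in (a, b). The true parameters, noise level, sampling and learning rate below are all illustrative assumptions:

```python
# On-line LMS estimation of sinewave parameters from noisy samples.
# Because the model is linear in (a, b), one adaptive neuron suffices.

import math
import random

random.seed(1)
a_true, b_true, w = 2.0, -1.0, 3.0     # illustrative "unknown" signal
a, b, eta = 0.0, 0.0, 0.05             # estimates and learning rate
for k in range(20000):
    t = 0.01 * k
    s, c = math.sin(w * t), math.cos(w * t)
    y = a_true * s + b_true * c + 0.05 * random.gauss(0, 1)  # noisy sample
    e = y - (a * s + b * c)                  # prediction error
    a, b = a + eta * e * s, b + eta * e * c  # LMS update

# a and b settle near the true values 2.0 and -1.0
```

Replacing the squared-error criterion with a robust or minimax one, as the chapter does, changes only the function of the error e in the update.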
Chapter 9 considers difficult discrete and combinatorial
optimization problems. To illustrate the different methods and techniques,
many well-known combinatorial problems are discussed,
e.g. the quadratic assignment problem, the graph partition problem
and the travelling salesman problem. Special emphasis is given to
competition-based neural networks, the Hopfield analog neural network
and its modifications, and to the mean-field annealing approach.
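The energy-descent idea behind such networks can be sketched with a discrete Hopfield-style network bipartitioning a tiny graph (the 4-node graph below is an illustrative assumption, not an example from the book):

```python
# Discrete Hopfield-style energy descent for a toy graph bipartition:
# minimize E(s) = sum_{i<j} w_ij * s_i * s_j over states s_i in {-1, +1}
# by asynchronous sign updates, which never increase the energy.

# symmetric weights; edges 0-1, 1-2, 2-3, 3-0 (a 4-cycle)
w = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

s = [1, 1, 1, 1]                          # initial state
for _ in range(10):                       # a few asynchronous sweeps
    for i in range(4):
        field = sum(w[i][j] * s[j] for j in range(4))
        s[i] = -1 if field > 0 else 1     # flip to lower the local energy

E = sum(w[i][j] * s[i] * s[j] for i in range(4) for j in range(i + 1, 4))
# the 4-cycle is bipartite, so the descent reaches the perfect cut, E = -4
```

Such descent only guarantees a local minimum of E; this is exactly the limitation that the simulated-annealing and mean-field-annealing approaches in the chapter are designed to mitigate.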
Each chapter concludes with a brief literature survey and a list of
sources for further reading. In addition to the illustrative worked
examples, each chapter ends with a set of questions and
problems whose aim is not only to illustrate but also to extend the
material in the text. Some problems require the use of a PC
and a suitable simulation program, e.g. TUTSIM, PSI or SIMULINK (MATLAB).
CONTENTS
I. "Mathematical Preliminaries of Neurocomputing" 1
"Linear Matrix Algebra" 1
"Matrix Representations and Notations" 1
"Inner and Outer Product" 3
"Linear Independence of Vectors" 4
"Rank of a Matrix" 5
"Positive and Negative Definite Matrices" 5
"The Inverse and Pseudoinverse of Matrices" 5
"Orthogonality, Unitary Matrices, Conjugate Vectors" 7
"Eigenvalues and Eigenvectors" 7
"Vector and Matrix Norms" 9
"Singular Value Decomposition (SVD)" 10
"Condition Numbers" 12
"The Kronecker Product" 14
"Elements of Multivariable Analysis" 15
"Sets" 15
"Functions" 16
"Differentiation of a Scalar Function with Respect to a Vector" 17
"The Hessian Matrix" 17
"The Jacobian Matrix" 18
"Chain Rule" 19
"Taylor Series Expansion and Mean Value Theorems" 20
"Lyapunov's Direct Method" 21
"Unconstrained Optimization Algorithms" 22
"Necessary and Sufficient Conditions for an Extremum" 22
"Dynamic Gradient Systems" 23
"Newton's Methods" 25
"The Quasi-Newton Methods" 27
"The Conjugate Gradient Method" 28
"Constrained Nonlinear Programming Problems" 29
"Kuhn-Tucker Conditions" 29
"Lagrange Multipliers and Kuhn-Tucker Conditions
for Constrained Minimization with Mixed Constraints" 31
"Duality - the Primal and Dual Optimization Problems" 32
" Questions and Problems for Chapter 1" 34
"References and Sources for Further Reading" 37
II. "Architectures and Electronic Implementation of Neural Network Models" 38
"Biological (Real) Neuron" 39
"Basic Models of Artificial Neurons" 41
"Basic (Formal) Neuron Model - McCulloch-Pitts Model" 46
"Fukushima Model of the Neuron" 49
"Adaline" 51
"Single-Layer Perceptron" 57
"The Hopfield Model of the Artificial Neuron" 60
"The Grossberg Model" 62
"Generalized Neuron Model" 63
"Artificial Neuron with Oscillatory Output" 64
"Discrete-Time Models of Artificial Neurons" 65
"Artificial Neural Network Models" 67
"Basic Features and Classification of ANNs" 67
"Feedforward Multi-Layer Perceptron" 71
"Architecture of the Three-Layer Perceptron" 71
"The Hopfield Artificial Neural Network and its Modifications" 74
"Analog Models" 74
"Discrete-Time Hopfield Models of ANNs" 84
" Questions and Problems for Chapter 2" 88
"References and Sources for Further Reading" 88
III. "Unconstrained Optimization and Learning Algorithms" 92
"The Use of Systems of Ordinary Differential Equations in
Unconstrained Optimization Problems Trajectory-Following Methods" 93
"Basic Iterative Gradient Descent Algorithms" 93
"Continuous-Time Realization of Iterative Algorithms" 95
"Basic Gradient Systems" 97
"Continuous-Time Algorithm with Prespecified Convergence Speed" 100
"Unconstrained Optimization by Applying a System of
Second-Order Differential Equations" 106
"Branin's Method" 108
"Optimization Networks Using a Combination of Deterministic and Random Search
- Stochastic Gradient Algorithms" 111
"Boltzmann Machine and Simulated Annealing" 117
"Mean-Field Annealing Algorithm" 121
"Back-Propagation Learning Algorithms" 126
"Learning of the Single Layer Perceptron" 126
"Standard Back-Propagation Algorithm for the Multilayer Perceptron" 131
"Back-Propagation Algorithm with Momentum Updating" 136
"Batch Learning Algorithm" 137
"Back-Propagation Algorithm with Adaptive Learning Rates:
the Delta-Bar-Delta Algorithm" 139
"Back-Propagation Algorithm with a Variable Number of Neurons
in the Hidden Layers" 140
"Back-Propagation Algorithms with Non-Euclidean Error Signals" 142
"Generalized Learning Algorithm for a Single Neuron" 146
"Generalized LMS Learning Rule" 148
"Potential Learning Rule" 152
"Correlation Learning Rule" 152
"Hebbian Learning Rule" 152
"Oja's Learning Rule" 153
"Standard Perceptron Learning Rule" 154
"Generalized Perceptron Learning Rule" 154
" Questions and Problems for Chapter 3" 155
"References and Sources for Further Reading" 157
IV. "Neural Networks for Linear, Quadratic Programming
and Linear Complementarity Problems" 161
"Formulations of the Problems" 162
"Linear Programming (LP)" 162
"Quadratic Programming (QP)" 166
"Linear Complementarity Problems (LCP)" 168
"Neural Networks for Linear Programming (LP) Problems" 174
"Linear Programming with Inequality Constraints" 174
"Linear Programming Problem in Standard Form" 183
"Linear Programming with Bounded Design Variables" 186
"Algorithm for Simultaneously Solving Primal and Dual Linear
Programming Problems" 191
"Transportation Problem" 193
"Neural Networks for Convex Quadratic Programming Problems" 194
"Equality Constrained QP Problems" 196
"Inequality Constrained QP Problem" 200
"Mixed Equality- and Inequality-Constrained QP Problem" 204
"Neural Networks for Linear Complementarity Problems" 207
" Questions and Problems for Chapter 4" 215
"References and Sources for Further Reading" 220
V. "A Neural Network Approach to the On-Line Solution of a
System of Linear Algebraic Equations and Related Problems" 223
"Formulation of the Problem for Systems of Linear Equations" 224
"Least-Squares Problems" 226
"Basic Features of the Linear Least-Squares Solution
of a System of Linear Equations" 226
"Basic Circuit Structure - Ordinary Least-Squares Criterion" 228
"Robust Circuit Structure by Using the Iteratively
Reweighted Least-Squares Criterion" 231
"Special Cases with Simpler Architectures" 233
"Improved Circuit Structures for Ill-Conditioned Problems" 238
"Preconditioning" 239
"Regularization" 240
"Augmented Lagrangian with Regularization" 243
"Iterative Algorithms and Discrete-Time Circuit Models" 249
"Minimax Solution of Overdetermined Systems of Linear Equations" 257
"Neural Network Architecture by Using Quadratic Penalty Function Terms" 258
"Neural Network Architecture by Employing the Exact Penalty Method" 262
"Neural Network Architecture by Using the Augmented
Lagrange Multiplier Method" 264
"Neural Network Model by Using the Winner-Take-All Subnetwork" 265
"Least Absolute Deviations Solution of Systems of Linear Equations" 268
"Neural Network Architecture by Using the Linear Programming Approach" 269
"Neural Network Architectures by Using a Smooth Approximation
and the Augmented Lagrange Technique" 272
"Neural Network Model by Using the Inhibition Principle" 274
"Realizations of Four-Quadrant Voltage Dividers" 279
"Neural Networks for Discrete Hartley and Fourier Transforms" 282
"Learning Algorithm for Solving Large (Partitioned) Linear Systems" 285
"Learning Algorithms for the Total Least Squares Problem" 294
" Questions and Problems for Chapter 5" 288
"References and Sources for Further Reading" 293
VI. "Neural Networks for Matrix Algebra Problems" 297
"Matrix Inversion" 298
"LU Decomposition" 304
"QR Factorization" 307
"Spectral Factorization - Symmetric Eigenvalue Problem" 311
"Singular Value Decomposition (SVD)" 320
"Neural Networks for Solving Generalized Matrix Equations,
Especially Lyapunov's Equation" 326
"Neural Network Learning Algorithms for Adaptive Estimation
of Principal Components" 329
"Estimation of the First Principal Component-
Normalized Hebbian Learning Algorithm" 332
"Estimation of Several Principal Components - Generalized Hebbian Algorithm" 335
"Linear Feature Extraction by Using Normalized Hebbian
and Anti-Hebbian Learning Rules" 338
"Learning Algorithm for Estimation of the Constrained Principal Components" 340
" Questions and Problems for Chapter 6" 342
"References and Sources for Further Reading" 344
VII. "Neural Networks for Continuous, Nonlinear, Constrained
Optimization Problems" 347
"Formulation of the Constrained Problems and their Characteristics" 348
"Constrained Optimization Problems with Simple Bounds" 351
"The Exterior Penalty Function Methods" 352
"Equality-Constrained Problems" 353
"Neural Networks for the Inequality-Constrained Minimization Problem" 355
"Barrier Function Methods" 362
"The Ordinary Lagrange Multiplier Method" 364
"The Augmented Lagrange Multiplier Methods" 366
"Equality Constrained Problem" 367
"Inequality-Constrained Problems" 369
"The General Constrained Optimization Problem with Mixed Constraints" 373
"Two-Sided Inequality-Constrained Optimization Problems" 373
"The Least Absolute Deviations ( L1-Norm) Optimization Problems" 377
"Minimax Optimization Problems" 385
"Multicriterion Optimization Problems" 392
"Neural Networks for Optimization Problems with Linear Constraints"
" Questions and Problems for Chapter 7" 396
"References and Sources for Further Reading" 402
VIII. "Neural Networks for Estimation, Identification and Prediction" 406
"On-Line Estimation of the Parameters of a Sinewave Distorted by
a DC Exponential Signal and Corrupted by Noise" 406
"Formulation of the Problem" 406
"Robust Estimation of the Parameters Using the Iteratively Reweighted
Least-Squares Criterion" 409
"Least Absolute Deviation Estimation Problem" 411
"Minimax Estimation Problem" 414
"Signal Decomposition and the Fourier Network" 416
"Decomposition of a Continuously Measured Signal by a Set of
Orthogonal Basis Functions" 416
"Fourier Network" 420
"Linear Regression and Identification of Linear Systems" 423
"Linear Regression" 423
"Discrete-Time Linear Models for System Identification" 427
"Time Series Prediction and Nonlinear System Identification" 430
"Time Series Forecasting Using Neural Networks Approach" 431
"Nonlinear System Identification" 441
"Blind Identification of Independent Source Signals" 442
"Formulation of the Problem" 442
"Basic Structures of a Two-Cell Neural Network" 445
"Learning Algorithm" 448
"General Structure of a Neural Network" 450
"Computer Simulation Results" 452
" Questions and Problems for Chapter 8" 454
"References and Sources for Further Reading" 456
IX. "Neural Networks for Discrete and Combinatorial Optimization Problems" 462
"Energy Functions for Combinatorial Optimization Problems" 463
"Equations of Motion for Combinatorial Optimization Problems" 466
"Quadratic Zero-One Programming Problem" 476
"The Quadratic Assignment Problem" 479
"Graph Bipartition Problem" 482
"Graph Partition Problem" 485
"The Traveling Salesman Problem" 488
" Questions and Problems for Chapter 9" 493
"References and Sources for Further Reading" 496
Appendix A
"List of Major Symbols Used" 500
Appendix B
"Table of Basic Functional Building-Blocks" 502
"Table of Symbols of Logic Gates and Flip-Flops" 503
Subject Index 505
John Wiley, ISBN 0 471 93010 5, 524pp, April 1993, $60.50
Teubner Verlag, ISBN 3 519 06444 8