Path: sparky!uunet!charon.amdahl.com!pacbell.com!sgiblab!sdd.hp.com!hplabs!ucbvax!CATTELL.PSYCH.UPENN.EDU!neuron-request
From: neuron-request@CATTELL.PSYCH.UPENN.EDU ("Neuron-Digest Moderator")
Newsgroups: comp.ai.neural-nets
Subject: Neuron Digest V10 #15
Message-ID: <22697.721089431@cattell.psych.upenn.edu>
Date: 6 Nov 92 22:37:11 GMT
Sender: daemon@ucbvax.BERKELEY.EDU
Reply-To: "Neuron-Request" <neuron-request@cattell.psych.upenn.edu>
Distribution: world
Organization: University of Pennsylvania
Lines: 460

Neuron Digest   Friday, 6 Nov 1992   Volume 10 : Issue 15

Today's Topics:
    New version of Learning Vector Quantization PD program package
    Info on intelligent agents?
    Economics and Neural Nets bibliography, addendum
    Effectiveness of the latest ANNSs
    Production scheduling systems?
    non-linear dynamical modelling?
    Modeling question
    Job at Booz, Allen & Hamilton
    Request for advice - sound localization
    Algorithms for massively parallel machines?
    Postdocs at Rockefeller


Send submissions, questions, address maintenance, and requests for old
issues to "neuron-request@cattell.psych.upenn.edu". The ftp archives are
available from cattell.psych.upenn.edu (130.91.68.31). Back issues
requested by mail will eventually be sent, but may take a while.

----------------------------------------------------------------------

Subject: New version of Learning Vector Quantization PD program package
From: lvq@cochlea.hut.fi (LVQ_PAK)
Date: Sun, 11 Oct 92 10:57:21 +0700


************************************************************************

                               LVQ_PAK

                                 The

                    Learning Vector Quantization

                           Program Package

                    Version 2.1 (October 9, 1992)

                          Prepared by the
                    LVQ Programming Team of the
                 Helsinki University of Technology
           Laboratory of Computer and Information Science
              Rakentajanaukio 2 C, SF-02150 Espoo
                              FINLAND

                      Copyright (c) 1991, 1992

************************************************************************

Public-domain programs for Learning Vector Quantization (LVQ)
algorithms are available via anonymous FTP on the Internet.

"What is LVQ?", you may ask --- see the following reference:
Teuvo Kohonen. The self-organizing map. Proceedings of the IEEE,
78(9):1464-1480, 1990.

In short, LVQ is a family of methods for statistical pattern
recognition in which each class is described by a relatively small
number of codebook vectors, placed within the class zone so that the
decision borders are approximated by the nearest-neighbor rule. Unlike
normal k-nearest-neighbor (k-nn) classification, the original samples
are not used as codebook vectors themselves; instead they tune the
codebook vectors. LVQ is concerned with the optimal placement of these
codebook vectors into class zones.
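
For readers who want the flavour of the simplest of the algorithms, here
is a minimal sketch of the LVQ1 update rule in Python (an illustration
only, not code from the package; all names are the editor's own):

```python
import numpy as np

def lvq1(samples, labels, codebooks, codebook_labels, alpha=0.05, epochs=10):
    """Sketch of LVQ1: for each training sample, find the nearest
    codebook vector and pull it toward the sample if the class labels
    match, or push it away if they do not."""
    m = codebooks.astype(float).copy()
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            c = np.argmin(np.linalg.norm(m - x, axis=1))  # nearest codebook vector
            if codebook_labels[c] == y:
                m[c] += alpha * (x - m[c])   # correct class: pull toward sample
            else:
                m[c] -= alpha * (x - m[c])   # wrong class: push away
    return m
```

Run on labelled data, each codebook vector drifts toward the samples of
its own class and away from the others, so the nearest-codebook decision
borders come to approximate the class borders.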

This package contains all the programs necessary for the correct
application of certain LVQ algorithms in an arbitrary statistical
classification or pattern recognition task. Three of the algorithms,
LVQ1, LVQ2.1, and LVQ3, have been selected for this package.

This code is distributed without charge on an "as is" basis. There is
no warranty of any kind by the authors or by Helsinki University of
Technology.

In implementing the LVQ programs we have tried to keep the code as
simple as possible, so the programs should compile on various machines
without any machine-specific modifications. All programs are written
in ANSI C. The programs are available in two archive formats, one for
the UNIX environment, the other for MS-DOS. Both archives contain
exactly the same files.

These files can be accessed via FTP as follows:

1. Create an FTP connection from wherever you are to the machine
   "cochlea.hut.fi". The Internet address of this machine is
   130.233.168.48, for those who need it.

2. Log in as user "anonymous" with your own e-mail address as password.

3. Change the remote directory to "/pub/lvq_pak".

4. At this point FTP should be able to get a listing of the files in
   this directory with DIR and fetch the ones you want with GET. (The
   exact FTP commands you use depend on your local FTP program.)
   Remember to use binary transfer mode for compressed files.

The lvq_pak program package includes the following files:

- Documentation:
    README              short description of the package
                        and installation instructions
    lvq_doc.ps          documentation in PostScript format
    lvq_doc.ps.Z        same as above but compressed
    lvq_doc.txt         documentation in ASCII format

- Source file archives (which contain the documentation, too):
    lvq_p2r1.exe        self-extracting MS-DOS archive file
    lvq_pak-2.1.tar     UNIX tape archive file
    lvq_pak-2.1.tar.Z   same as above but compressed

An example of FTP access is given below:

    unix> ftp cochlea.hut.fi    (or 130.233.168.48)
    Name: anonymous
    Password: <your email address>
    ftp> cd /pub/lvq_pak
    ftp> binary
    ftp> get lvq_pak-2.1.tar.Z
    ftp> quit
    unix> uncompress lvq_pak-2.1.tar.Z
    unix> tar xvfo lvq_pak-2.1.tar

See the file README for further installation instructions.

All comments concerning this package should be addressed to
lvq@cochlea.hut.fi.

************************************************************************


------------------------------

Subject: Info on intelligent agents?
From: Nick Vriend <VRIEND@IFIIUE.FI.CNR.IT>
Date: Mon, 12 Oct 92 16:16:48

Nick Vriend
European University Institute
C.P. 2330
50100 Firenze Ferrovia
Italy
EARN/Bitnet: <VRIEND@IFIIUE.FI.CNR.IT>

As a PhD student of economics at the European University Institute in
Florence (Italy), finishing a thesis on 'Decentralized Trade', I am
interested in getting in contact with people who are working on the
following topic: DECENTRALIZED TRADE WITH ARTIFICIALLY INTELLIGENT
AGENTS. A basic characteristic of decentralized economies is that each
individual agent has only very limited knowledge of his relevant
environment. Each agent acts and observes his outcomes in the market
(which depend on the actions of the other participants). Thus, each
individual agent learns independently, using only a success measure of
his own actual performance (e.g. profits, utility).

At the moment I am applying Classifier Systems and Genetic Algorithms
to model the learning process of each individual agent, but (given the
inherent problem of misspecification in decentralized economies
mentioned above) Neural Networks seem very promising. However, the
application of Neural Networks appears more complex, as in a
decentralized economy nobody would be able to tell each agent what his
"target" or "correct" decision would have been. Therefore, the
machines have to learn without supervision (as in e.g. Barto, Sutton &
Anderson (1983): Neuronlike Adaptive Elements That Can Solve Difficult
Learning Control Problems. IEEE Transactions on Systems, Man, and
Cybernetics, 13). Hence, the topic I am interested in might be
restated as: REINFORCEMENT LEARNING BY INTERACTING MACHINES.
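
[A minimal sketch of what such learning from one's own payoff alone
might look like, assuming only that each agent keeps a value estimate
per action and nudges it toward the payoff it actually observed; the
class and its parameters are hypothetical, not taken from any of the
cited papers. - Ed.]

```python
import random

class ReinforcementAgent:
    """An agent that learns only from its own realised payoff; no
    teacher ever tells it what the 'correct' action would have been."""
    def __init__(self, n_actions, alpha=0.1, epsilon=0.1):
        self.values = [0.0] * n_actions   # payoff estimate per action
        self.alpha, self.epsilon = alpha, epsilon
    def act(self):
        # epsilon-greedy: mostly exploit the best estimate, sometimes explore
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])
    def learn(self, action, payoff):
        # move the value estimate toward the observed payoff
        self.values[action] += self.alpha * (payoff - self.values[action])
```

In a market simulation, several such agents would each call `act()`,
receive a profit that depends on everyone's actions, and call `learn()`
with only that profit.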


------------------------------

Subject: Economics and Neural Nets bibliography, addendum
From: Duarte Trigueiros <dmt@sara.inesc.pt>
Date: Wed, 14 Oct 92 11:27:46 -0200

In addition to Paul Refenes' list, I would like to mention the paper
Bob and I wrote on the automatic forming of ratios as internal
representations of the MLP. This paper shows that the problem of
discovering the appropriate ratios for performing a given task in
financial statement analysis can be simplified by using some specific
training schemes in an MLP.

@inproceedings( xxx ,
   author    = "Trigueiros, D. and Berry, R.",
   title     = "The Application of Neural Network Based Methods to the
                Extraction of Knowledge From Accounting Reports",
   booktitle = "Organisational Systems and Technology: Proceedings of the
                $24^{th}$ Hawaii International Conference on System
                Sciences",
   year      = 1991,
   pages     = "136-146",
   publisher = "IEEE Computer Society Press, Los Alamitos, (CA) US.",
   editor    = "Nunamaker, E. and Sprague, R.")

I also noticed that Paul didn't mention Utans and Moody's "Selecting
Neural Network Architectures via the Prediction Risk: An Application
to Corporate Bond Rating Prediction" (1991), which has been published
somewhere and has, or had, a version in the neuroprose archive as
utans.bondrating.ps.Z. This paper is especially recommended, as the
early literature on financial applications of NNs paid little
attention to things like cross-validation. The achievements, of
course, were appallingly brilliant.

Finally, I gathered from Paul's list of articles that there is a book
of readings entitled "Neural Network Applications in Investment and
Finance". Paul is the author of an article in chapter 27. The
remaining twenty-six or so chapters may well contain interesting
material for completing this search.

When the original request for references appeared in another list I
answered it, so I must apologise for mentioning our reference again
here. I did so because Paul's list of references could give the
impression, through no fault of his, of being an attempt to be
exhaustive.

---------------------------------------------------
Duarte Trigueiros,
INESC, R. Alves Redol 9, 2. 1000 Lisbon, Portugal
e-mail: dmt@sara.inesc.pt     FAX +351 (1) 7964710
---------------------------------------------------


------------------------------

Subject: Effectiveness of the latest ANNSs
From: Fernando Passold <EEL3FPS%BRUFSC.bitnet@UICVM.UIC.EDU>
Organization: Universidade Federal de Santa Catarina/BRASIL
Date: Thu, 15 Oct 92 14:39:35 -0300

I would like to begin a little discussion questioning the
effectiveness of the latest Artificial Neural Network Simulators
(ANNSs).

Most ANNSs perform serial computation while trying to emulate the
processing of natural (biological) neural networks (NNNs).

I would like to draw attention to the fact that this computation is
done serially: even the more advanced ANNSs that exploit parallel
and/or concurrent processing still compute one synapse at a time, or
synapses in blocks (packets), which is different from what occurs in
NNNs. In an NNN, the activation potentials of many neurons (membrane
voltage changes) evolve and are processed at the same time (involving
different latencies), rather than the outcome of each synapse being
computed step by step (as in conventional ANNSs).

The question I would like to raise is the following: imagine that one
of the neurons in a specific NNN suddenly shows a longer latency in
its response, compared with its neighbours. Would this 'failed' neuron
not carry the network to a completely different result than would
otherwise be expected? Does the timing of activations (spike timing)
between the neurons of a specific network not deserve much more
attention than it receives in the mere emulation performed by most of
our latest ANNSs, even those with parallel and/or concurrent
processing?
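
[The effect being asked about can be seen even in a toy network. The
sketch below is an editorial illustration, not a model of real neuronal
latencies: two mutually inhibitory threshold units reach a different
final state depending only on whether both respond at the same instant
or one lags the other. - Ed.]

```python
import numpy as np

# Two mutually inhibitory threshold units: each unit's next state is
# the sign of its weighted input from the other unit.
W = np.array([[0, -1],
              [-1, 0]])

def step_synchronous(s):
    # both units evaluate their inputs at the same instant
    return np.sign(W @ s)

def step_delayed(s):
    # unit 0 responds first; unit 1 "lags" and already sees unit 0's new state
    s = s.copy()
    s[0] = np.sign(W[0] @ s)
    s[1] = np.sign(W[1] @ s)
    return s

s0 = np.array([1, 1])
sync = step_synchronous(step_synchronous(s0))
lagged = step_delayed(step_delayed(s0))
```

The synchronous schedule oscillates between [1, 1] and [-1, -1], while
the lagged schedule settles into the fixed point [-1, 1]: update timing
alone changes the outcome.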

Would this synchronism (mentioned above) not be responsible for our
primitive intuitive notions of velocity and time (epistemologically
speaking)?

Maybe this discussion would be of greatest interest to people working
on Constructivist AI (or Constructivist Connectionist AI).

I would be glad to receive opinions and/or results from other
researchers with this in mind (preferably via the Neuron Digest list),
including results from 'neural boards' using analog implementations,
DSP implementations, neuron chips, or transputers.

Fernando Passold
(Master degree student)
Biomedical Engineering Group
Santa Catarina Federal University
BRAZIL
E-mail: EEL3FPS@BRUFSC.BITNET


------------------------------

Subject: Production scheduling systems?
From: shim@educ.emse.fr
Date: 15 Oct 92 20:03:48 +0000

Can anyone suggest some references on applying neural networks to
production scheduling systems, or people who have worked in this area?
Also, could someone forward me Mr. Yoon-Pin Simon Foo's email address?
(I think he is, or was, at the University of South Carolina.)

Thanks in advance.


------------------------------

Subject: non-linear dynamical modelling?
From: ABDEMOHA@th.isu.edu
Organization: Idaho State University
Date: 15 Oct 92 20:42:39 -0700

Dear Sir,
I would like to inquire about the use of neural networks in modelling
non-linear dynamical systems. This may also include the ability of
neural networks to approximate systems governed by partial
differential equations that are known in advance.
Mohamed A. Abdel-Rahman
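
[As a small illustration of the idea, the sketch below (an editorial
example with arbitrarily chosen network size and learning rate) trains
a one-hidden-layer network by plain gradient descent to approximate the
one-step map of the logistic system x_{t+1} = r x_t (1 - x_t). - Ed.]

```python
import numpy as np

rng = np.random.default_rng(0)
r = 3.7
x = rng.uniform(0.0, 1.0, 500)
t = r * x * (1.0 - x)               # target: next state of the system

X = (2.0 * x - 1.0).reshape(1, -1)  # centre the inputs around zero
T = t.reshape(1, -1)

n_hidden, lr = 16, 0.1
W1 = rng.normal(0.0, 1.0, (n_hidden, 1)); b1 = np.zeros((n_hidden, 1))
W2 = rng.normal(0.0, 0.5, (1, n_hidden)); b2 = np.zeros((1, 1))

for _ in range(5000):
    H = np.tanh(W1 @ X + b1)                  # hidden-layer activations
    err = (W2 @ H + b2) - T                   # output error
    gW2 = err @ H.T / X.shape[1]              # backprop: output weights
    gb2 = err.mean(axis=1, keepdims=True)
    dH = (W2.T @ err) * (1.0 - H ** 2)        # backprop: hidden layer
    gW1 = dH @ X.T / X.shape[1]
    gb1 = dH.mean(axis=1, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float((((W2 @ np.tanh(W1 @ X + b1) + b2) - T) ** 2).mean())
```

Feeding the network's output back in as the next input then gives a
learned iterated map that can be compared with the true dynamics.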


------------------------------

Subject: Modeling question
From: ABDEMOHA@th.isu.edu
Organization: Idaho State University
Date: 15 Oct 92 20:46:33 -0700

Dear Sir:
I was trying to use a backpropagation neural network to approximate a
certain function. I found that the resulting network could approximate
the high-valued output points better than the low-valued points. I
think this may be due to the fact that the error surface is an
absolute one. Have there been any attempts to construct a relative
error surface (i.e. |(Fa(w) - Fd(w))/Fa(w)|)?
Mohamed A. AbdelRahman
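
[One way to pose the suggestion concretely is to change only the
output-layer error and its gradient. The sketch below is an editorial
illustration; for numerical safety it divides by the desired output Fd
plus a small constant, a common variant of the Fa normalisation in the
message. - Ed.]

```python
def absolute_error(y, t):
    # the usual squared-error surface: 0.5 * (Fa - Fd)^2
    return 0.5 * (y - t) ** 2

def relative_error(y, t, eps=1e-8):
    # relative surface: 0.5 * ((Fa - Fd) / Fd)^2, so low-valued targets
    # are weighted as heavily as high-valued ones
    return 0.5 * ((y - t) / (t + eps)) ** 2

def relative_error_delta(y, t, eps=1e-8):
    # gradient dE/dy, to feed into the usual backprop chain in place of (y - t)
    return (y - t) / (t + eps) ** 2
```

With targets 0.01 and 1.0 and the same absolute miss of 0.005, the
absolute losses are identical, but the relative loss penalises the miss
on the small target far more, which is exactly the asymmetry the
message describes.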


------------------------------

Subject: Job at Booz, Allen & Hamilton
From: brettle@picard.ads.com (Dean Brettle)
Date: Mon, 19 Oct 92 18:15:51 -0500


NEURAL NETWORK DEVELOPERS

Booz, Allen & Hamilton, a world leader in using technology to solve
problems for government and industry, has immediate openings for
experienced neural network developers in our Advanced Computational
Technologies group.

If chosen, you will help develop, implement and evaluate neural
network architectures for image and signal processing, automatic
target recognition, parallel processing, speech processing, and
communications. This will involve client-funded work as well as
internal research and development. To qualify, you must have
experience in neural network theory, implementation, and testing, as
well as with C, UNIX, and X11; parallel processing experience is a
plus.

Equal Opportunity Employer. U.S. citizenship may be required.
Candidates selected will be subject to a background investigation and
must meet eligibility requirements for access to classified
information.

ENTRY-LEVEL CANDIDATES should have a BS or MS degree in either
computer science, applied mathematics, or some closely related
discipline and experience implementing neural network paradigms.

MID-LEVEL CANDIDATES should have a BS or MS degree in either computer
science, applied mathematics, or some closely related discipline with
>3 years experience. Candidates must possess a working knowledge of
popular neural network models and experience implementing several
neural network paradigms.

SENIOR-LEVEL CANDIDATES should have an MS or Ph.D. degree in either
computer science, mathematics, computational neuroscience, electrical
engineering or some closely related discipline. Published work and/or
presentations in the neural network field are highly desirable.
Experience applying neural network technology to real-world problems
and in developing neural network programs (including marketing,
proposal writing, and technical & contractual management) is required.

Booz, Allen offers a competitive salary, excellent benefits package,
challenging work environment and ample opportunities to advance your
career. Please send a resume to Dean Brettle either by email to
brettle@picard.ads.com, fax to (703)902-3663, or surface mail to Booz,
Allen & Hamilton Inc., 8283 Greensboro Drive, Room 594, McLean, VA
22102.


------------------------------

Subject: Request for advice - sound localization
From: net@sdl.psych.wright.edu (Mr. Happy Net)
Date: Tue, 20 Oct 92 02:17:24 -0500

Dear Sir,
At Wright State University, we are working on developing an artificial
neural net model of human sound localization. One of our objectives
has been to show that ANNs adhere to the Duplex Theory of
Localization, in that they make use of high-frequency intensity cues
over low-frequency intensity cues, and low-frequency temporal cues
over high-frequency temporal cues. We have chosen to use the
backpropagation algorithm distributed in the NeuralShell package
available from Ohio State University (anonymous ftp
quanta.eng.ohio-state.edu).

One of our approaches has been to train ANNs with low-, mid-, or
high-band filtered signals. A problem with this is that in humans, our
"net" learns to deal with broad-band signals by selecting which
portions of the signal to base judgments on. On the other hand, if we
train an ANN with broad-band signals, we would like to uncover which
portions of the input spectrum most heavily affect the ANN's
decisions. This is difficult to do because we cannot merely zero out
portions of the input spectrum and test the ANN's performance, as
doing so provides false cues indicating that the signal is coming from
either directly in front of or behind the head. I would greatly
appreciate any suggestions on how to analyze the "weighting" the net
gives to different portions of its input.

Jim Janko
net@sdl.psych.wright.edu
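
[One family of approaches that avoids zeroing inputs outright is
sensitivity analysis: estimate how strongly each input dimension sways
the trained net's output, e.g. by finite differences around real
stimuli. The sketch below is an editorial illustration, not tied to
NeuralShell; `net` stands for any trained network treated as a function
from an input vector to a scalar output. - Ed.]

```python
import numpy as np

def input_saliency(net, inputs, delta=1e-3):
    """Average finite-difference sensitivity of `net`'s output to each
    input dimension, taken over a set of representative stimuli.
    Large values mark the portions of the input the net leans on."""
    sal = np.zeros(inputs.shape[1])
    for x in inputs:
        base = net(x)
        for i in range(len(x)):
            xp = x.copy()
            xp[i] += delta                         # tiny perturbation, not a zeroed channel
            sal[i] += abs(net(xp) - base) / delta  # local sensitivity estimate
    return sal / len(inputs)
```

Because the perturbations are tiny, the stimuli stay physically
plausible and no false front/back cue is introduced.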



------------------------------

Subject: Algorithms for massively parallel machines?
From: "Rogene Eichler" <eichler@pi18.arc.umn.edu>
Date: Wed, 21 Oct 92 12:13:49 -0600

I am looking for references describing optimization algorithms for
backprop-type networks on either the CM-200 or CM-5, i.e. which
algorithms best exploit the massive parallelism?

Thanks!
- Rogene

eichler@ahpcrc.umn.edu
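
[Not a reference, but for concreteness: the pattern most often mapped
onto SIMD machines of this kind is data parallelism, where each
processor holds a full copy of the weights and a slice of the training
batch, and partial gradients are combined by a global sum. An editorial
sketch for a single linear layer, simulating the processors with array
slices. - Ed.]

```python
import numpy as np

def layer_grad(W, X, T):
    # serial reference: squared-error gradient for one linear layer
    Y = W @ X
    return (Y - T) @ X.T / X.shape[1]

def data_parallel_grad(W, X, T, n_proc=4):
    # each "processor" sees one slice of the batch and the full weights;
    # partial gradients are sum-reduced, then normalised by batch size
    slices = np.array_split(np.arange(X.shape[1]), n_proc)
    partial = [layer_grad(W, X[:, s], T[:, s]) * len(s) for s in slices]
    return sum(partial) / X.shape[1]
```

The parallel result matches the serial gradient exactly, so the scheme
changes only where the work is done, not what is computed.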
------------------------------

Subject: Postdocs at Rockefeller
From: Zhaoping Li <zl%venezia.ROCKEFELLER.EDU@ROCKVAX.ROCKEFELLER.EDU>
Date: Thu, 22 Oct 92 11:53:40 -0500



                        ROCKEFELLER UNIVERSITY

anticipates the opening of one or two positions in its Computational
Neuroscience Laboratory. The positions are at the postdoctoral level
and are for one year, renewable to two, starting in September 1993.
The focus of the research in the lab is on understanding the
computational principles of the nervous system, especially the sensory
pathways. The work involves analytical and computational approaches,
with a strong emphasis on connections with real neurobiology. Members
of the lab include J. Atick, Z. Li, K. Obermayer, N. Redlich, and
P. Penev. The lab also maintains strong interactions with other labs
at Rockefeller University, including the Gilbert and Wiesel labs and
the biophysics labs.

Interested candidates should submit a C.V. and arrange to have three
letters of recommendation sent to

    Prof. Joseph J. Atick
    Head, Computational Neuroscience Lab
    The Rockefeller University
    1230 York Avenue
    New York, NY 10021 USA

The Rockefeller University is an affirmative action/equal opportunity
employer, and welcomes applications from women and minority
candidates.


------------------------------

End of Neuron Digest [Volume 10 Issue 15]
*****************************************