-
-
- ---------
- | Neuro |
- ---------
-
- A neural network simulator
- Version 1.0
-
- by
- Berthold Ruf
- and
- Ulrich Wisser
-
-
-
- Contents:
- =========
- 0. ............. Copyright
- 1. ............. Introduction
- 2. ............. Getting started
- 3. ............. Description of the menu items
- 3.1 .................. Project
- 3.2 .................. Action
- 3.3 .................. Settings
- 4. ............. Notes on the internal structure of the program
- 5. ............. Known Bugs
- 6. ............. Appendix
-
-
-
- 0. Copyright:
- =============
-
- This program is shareware and NOT (!) public domain. When copying this
- program be sure to include the following files:
-
- -neuro the program
- -neuro.doc this documentation
- -req.library copy this file to your libs: directory
- Thanks to Bruce & Colin for the excellent
- library. Their address is:
- Bruce Dawson, Colin Fox
- Box, 1215 Daviestreet.
- Vancouver, B.C., V6E 1N4
- -data/Numbers          demonstrates how the net can recognize digits;
-                        works very well
- -data/Encoder          example of finding an internal binary-like
-                        representation
- -data/Letters          the letters in this 5x5 representation are all
-                        quite similar and many patterns share the same
-                        pixels; this example shows how hard it is for a
-                        neural network to learn patterns like these
- -data/Pattern05x05.06  very easy to learn, try it yourself!
- -data/Pattern10x10.04  works quite well in Hopfield mode
-
-
- IT IS FORBIDDEN TO DISTRIBUTE THE PROGRAM WITH ANY OF THESE FILES MISSING
- OR ALTERED! REMEMBER THAT THIS PROGRAM IS STILL UNDER COPYRIGHT AND FOR
- THIS REASON PROTECTED BY LAW! THE AUTHORS DO NOT GUARANTEE ANY FUNCTION
- OF THE PROGRAM AND ARE NOT LIABLE FOR ANY DAMAGE RESULTING FROM THE USE
- OF THIS PROGRAM!
-
-
- If you like this program and use it regularly, please send DM 20
- (Europe) or $20 (overseas) to:
-
- Berthold Ruf / Ulrich Wisser
- Mauerstrasse 44
-
- W-5100 Aachen
- Germany
-
-
- You will get a free update to the next version as soon as it is available.
-
- If you have any questions concerning our program or suggestions for
- improving it, please write or e-mail us. Our Internet addresses are:
-
- ruf@cip-s01.informatik.rwth-aachen.de
- wisser@cip-s01.informatik.rwth-aachen.de
-
- voice : +49-241-33176
-
-
- Many thanks to our beta testers, who helped to improve the program.
- AnTu in particular did a great job; he also created the nice icon.
-
-
- 1. Introduction:
- ================
-
- The program "Neuro" is a neural network simulator. It learns patterns and
- recognizes them, also if the input is incomplete! It can for example learn
- the pixel representation of some letters. After the user entered a part of
- a pattern the network will complete this pattern.
-
- Used in the "Backpropagation" mode, it finds internal representations for
- patterns. For example, the net is able to "discover" the binary code when
- it has to learn 8 patterns but has only 3 bits (hidden units) to
- represent them internally (known as the encoder problem; see the included
- example file "Encoder").
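-
- As a worked illustration (this is the textbook encoder problem, not a
- dump of the included file): with 8 one-out-of-8 input patterns and 3
- hidden units, one solution the net can settle on is the ordinary binary
- code, e.g.
-
-     pattern 1  ->  hidden units approx. 0 0 0
-     pattern 2  ->  hidden units approx. 0 0 1
-     ...
-     pattern 8  ->  hidden units approx. 1 1 1
-
- Which code the net actually finds depends on the random initialization
- of the weights (see "Settings/Random-Gen.").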
-
-
- Two types of neural networks are implemented in this version:
- -Backpropagation
- -Hopfield
- In the next version we also plan to implement the Boltzmann machine.
-
- It is not possible to give a full explanation here; if you are interested
- in the details, please refer to the books listed in the appendix. The
- backpropagation algorithm is probably the one that gives the best
- results: the net is able to learn most pattern-association problems very
- well. The Hopfield net also often produces good results, though it
- cannot distinguish between a pattern and its inverse, so you might
- sometimes get an inverted pattern as an answer. Unfortunately it has a
- very small capacity (as a rule of thumb, only about 0.14 times the number
- of units), so you cannot store too many patterns; if you try, you will
- get a warning message.
-
- The program offers the opportunity to explore the possibilities (but also
- the limitations) of neural networks.
-
-
- 2. Getting started:
- ===================
-
- Before starting you should copy the "req.library" to your libs: directory.
- The program is started from CLI (or shell) with the command
-
- neuro
-
- or from the Workbench by double-clicking the icon. No parameters are
- necessary.
-
- For a first try, select "Load all" in the menu "Project" and load the
- file "data/Numbers". You have now loaded a neural network which has
- learned to recognize the ten digits (0-9). To see them, select "Info" in
- the menu "Project". Now press a key or a mouse button and select the
- item "Question" in the menu "Action". You will now see three grids. The
- left one is for your input, the middle one shows the internal
- representation of your input and the right one shows the answer of the
- network. Click on some squares of the input grid so that it looks
- similar to one of the digits shown in the info. Now click "Question" in
- order to see the answer of the network. If it does not look similar to
- one of the digits, you can take this answer again as an input by
- selecting the gadget "Retake". You can also find out how certain the
- network is about its answer by clicking "Black&White", which then changes
- to "Grey": some squares will become grey. The darker a square is, the
- more certain the network is that this square has to be black.
-
-
- 3. Description of the menu items:
- =================================
-
- 3.1 Project:
- ----------------------
-
- -Load: You can load either the patterns only, or the patterns plus all
- parameters and the weights of the matrix (the result of the learning
- process). After the program has been started, you must either load
- something or generate a new pattern file with the included editor (see
- below).
- When you select "Load patterns", an ASCII file containing the description
- of the patterns to be learned is loaded. For a detailed description of
- the structure of this ASCII file see chapter 4.
- The file loaded by "Load all" is not an ASCII file. It contains the
- patterns, the mode of the net (Backpropagation/Hopfield), the weights and
- the current values of the parameters, which can be changed in the
- "Settings/Parameter" menu.
-
- -Save: With "Save patterns" you save only the patterns; this is necessary
- only if you have generated or changed patterns with the editor. After
- learning, you should choose "Save all" in order to save the results of
- the learning process.
-
- -Clear: This will reset the complete program. All patterns and learning
- results will be deleted!
-
- -Edit: This is a very simple editor, but it works and it should suffice
- to create or change patterns.
- --New: If you want to create a completely new pattern file, select
- "Edit new" (the old patterns in memory will then be forgotten!). You
- have to tell the editor how many rows and columns you want to use for
- your patterns. Then click with the mouse on those squares which you
- want to be black in your pattern. When finished, select "Take" in order
- to use this pattern. The grid then becomes empty and you can create the
- next one. You can page through the patterns forwards and backwards
- using the "Prev" and "Next" gadgets. If you want to remove a pattern
- from the list, choose "Remove". To clear the grid, choose "Clear". A
- maximum of 30 patterns can be edited.
- --Old: You can later change your edited patterns, or a loaded pattern
- file, by selecting "Edit old", which always refers to the patterns
- currently in memory.
-
- -Info: Gives information about free memory and about the size, number,
- parameters and layout of the patterns. When the mode is set to
- Backpropagation it also displays the learning error. The net has
- performed its task very well when this error is close to zero. It can
- take a few seconds to calculate, so don't panic!
-
- -About: To see the beautiful greeting-picture again.
-
- -Quit: should be no problem!
-
-
- 3.2 Action:
- -----------
-
- -Learn: Selecting "Learn" starts either the backpropagation or the
- Hopfield learning algorithm, depending on your choice of mode.
-
- If the learning process completes normally, you will get the message
- "Learning finished!!!". If you have chosen the backpropagation
- algorithm, you have to enter the number of learning iterations, i.e. how
- often a (randomly chosen) pattern is presented to the net in order to
- learn it. In most cases you will need 500 to 20000 iterations to get
- good results. It can sometimes take a few hours (depending on the size
- of the patterns) to finish. A display shows how many iterations are
- still to do; it is refreshed only every 10 iterations (in order to save
- time). By pressing any key, a requester will appear and ask whether you
- really want to abort. Select "Abort" to abort or "Continue" to (guess
- what) continue. The results up to this moment will not be lost! In
- Hopfield mode it is only necessary to call this menu item once per
- pattern file; it does not make sense to call "Learn" several times here.
-
- -Question: After selecting "Question", a window will appear, displaying
- the mode of the network in the first line. Below it you see three grids:
- the left one is for your input, the right one is used by the network to
- display its answer to your question, and the middle one displays the
- hidden units of the network (not in Hopfield mode), which show the
- internal representation of the current pattern.
-
- Click with the left mouse button on those squares of the input grid that
- you want to be black. After clicking the "Question" gadget the network
- will begin its search. In Hopfield mode the current energy value is
- shown on the left side; here it can take a while until you see the answer.
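-
- (Neuro does not document the exact formula, but the energy shown is
- presumably the usual Hopfield energy of the current state s with weights
- w and thresholds Theta,
-
-     E = -1/2 * sum(i,j) w(i,j)*s(i)*s(j) + sum(i) Theta(i)*s(i),
-
- which, for the standard update rule, never increases as the units are
- updated, so the displayed value should drop until the net settles on its
- answer.)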
-
- "Clear" will clear all three grids.
-
- "Re-Take" (in Backpropagation-mode) will take the current output again as
- a input and automatically start the query.
-
- "Black&White" / "Grey" is only available in the Backpropagation-mode. The
- hidden units and the output units contain internally values between 0.0
- and 1.0. In "Black&White"-mode all values higher than 0.5 are displayed
- black the other values are displayed white. In "Grey"-mode the values
- from 0.0 to 0.1 are displayed white, the values from 0.1 to 0.2 in the
- brightest grey and so on. So in this mode you get more information
- espacially about how certain the net is about its answer.
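-
- As a rough sketch (in C; this is only an illustration of the mapping
- described above, not Neuro's actual source code), the eleven grey levels
- and the black/white threshold could be computed like this:
-
-     /* Map an activation in [0.0, 1.0] to a grey level 0..10, where   */
-     /* 0 means white and 10 means black (eleven colours, see the      */
-     /* "Settings/Colour" menu item).                                  */
-     int grey_level(double value)
-     {
-         int level = (int)(value * 10.0); /* 0.0-0.1 -> 0, 0.1-0.2 -> 1, ... */
-         if (level > 10) level = 10;
-         if (level < 0)  level = 0;
-         return level;
-     }
-
-     /* In "Black&White" mode the value is simply thresholded at 0.5.  */
-     int is_black(double value)
-     {
-         return value > 0.5;
-     }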
-
- "Go-On" (only in Hopfield-mode) : If the network has found a solution
- which does not correspond to one of the given patterns, it is possible to
- "unlearn" this pattern by clicking "Go-On". This will reduce the
- probability of finding this solution. You should not use this option too
- often. The efficiency of this unlearning-procedure is given by the
- parameter Mu (see below).
-
-
-
- 3.3 Settings:
- -------------
-
- -Parameter: Here you can set several parameters for the currently
- selected mode. For every parameter there is a default value, which is
- displayed here and used by the algorithm unless you change it. If you
- enter an erroneous value outside the allowed interval, the screen will
- flash and the entered value will be replaced by the default value.
- --Backpropagation: Eta determines the learning rate and must have a
- value between 0 and 1. The bigger Eta, the faster the net will learn,
- but with a value of Eta close to 1 there is the danger that the net will
- not learn all patterns. It is sometimes a good strategy to begin with
- large values and then to reduce Eta to a very small value. Theta
- determines the bias of each unit (see the literature) and should have
- small positive or negative values. The number of hidden units tells the
- net how many units it may use to find an internal representation of the
- patterns. By default there is one hidden unit per pattern. (See the
- sketch after these parameter descriptions for the standard formulas in
- which Eta, Theta and Mu appear.)
- --Hopfield: Theta: as in Backpropagation. Mu: must be between 0 and 1
- and controls the unlearning procedure invoked by "Go-On" in the question
- requester. With Mu=1.0 the current output pattern will be completely
- deleted; unfortunately this also reduces the probability of finding the
- other patterns. With Mu=0.0 "Go-On" causes no changes in the weights; in
- this case the net will only continue its search.
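-
- As a rough sketch (in C) of where these parameters typically appear:
- these are the textbook forms of the rules, not Neuro's actual source
- code, so the implementation may differ in its details.
-
-     #include <math.h>
-
-     /* Backpropagation: a unit's output is a sigmoid of its net input  */
-     /* shifted by its bias Theta.                                      */
-     double unit_output(double net_input, double theta)
-     {
-         return 1.0 / (1.0 + exp(-(net_input - theta)));
-     }
-
-     /* Delta rule for the weight from a unit with activation `in` to   */
-     /* an output unit with activation `out` and desired value          */
-     /* `target`: the step size is proportional to the learning rate    */
-     /* Eta.                                                            */
-     double weight_change(double eta, double target, double out, double in)
-     {
-         double delta = (target - out) * out * (1.0 - out);
-         return eta * delta * in;
-     }
-
-     /* Hopfield unlearning ("Go-On"): weaken a spurious pattern s      */
-     /* (values +1/-1) by subtracting a Hebbian term scaled by Mu.      */
-     void unlearn(double *w, int n, const int *s, double mu)
-     {
-         int i, j;
-         for (i = 0; i < n; i++)
-             for (j = 0; j < n; j++)
-                 if (i != j)
-                     w[i * n + j] -= mu * (double)(s[i] * s[j]);
-     }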
-
- -Mode: Here you can set the program to one of the two implemented
- network types (the Boltzmann machine will follow in our next version).
- The behaviour of the complete program depends on your choice in this
- menu item.
-
- -Status: This is of interest only to specialists. It displays the
- weight matrix in a very (!) simple way. In our next version we plan to
- improve this display. You see a lot of "+" and "-", where "+" stands
- for excitatory connections (weight >= 0.0) and "-" for inhibitory
- connections (weight < 0.0).
-
- -Random-Gen.: The random number generator is always initialized by a
- special variable called Seed. In order to change the random aspects of the
- networks (e.g. when initializing the weights with random values) you can
- change the Seed-value. Seed must be in the range of 0..65536.
-
- -Colour: If you like, you can change the colours. The last eleven
- colours are used for the grey-scale representation of the answers in
- Backpropagation mode and should change continuously (use "Spread").
-
- -Close WB: Closes the Workbench if there are no open windows with
- running programs. The Workbench will open again when you quit the
- program. This option may speed up the program a bit.
-
- 4. Notes on the internal structure of the program
- =================================================
-
- The pattern file is an ASCII file which has the following structure:
-
- 1st line: number of patterns
- 2nd line: number of columns
- 3rd line: number of rows
- Each following line contains one row of a pattern, consisting only of
- the characters '.' and '*'. The patterns are separated by one blank
- line; even the last pattern needs this blank line.
-
- If you have, for example, two patterns with 7 rows and 8 columns, the
- pattern file looks like this:
- 2
- 8
- 7
- ...**...
- ..***...
- .****...
- ...**...
- ...**...
- ...**...
- ...**...
- .******.
-
- ..****..
- .**..**.
- .....**.
- ....**..
- ...**...
- ..**....
- .**.....
- .******.
-
- (eof)
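-
- For illustration, here is a minimal sketch (in C) of a reader for this
- format. It is not part of Neuro, just a demonstration of the structure
- described above:
-
-     #include <stdio.h>
-
-     /* Read a Neuro pattern file: a 3-number header followed by the   */
-     /* patterns, each given as rows of '.' and '*' plus a blank line. */
-     int main(int argc, char **argv)
-     {
-         FILE *f;
-         int npat, cols, rows, p, r;
-         char line[256];
-
-         if (argc != 2 || (f = fopen(argv[1], "r")) == NULL) {
-             fprintf(stderr, "usage: readpat <patternfile>\n");
-             return 1;
-         }
-         fscanf(f, "%d %d %d ", &npat, &cols, &rows);
-         for (p = 0; p < npat; p++) {
-             for (r = 0; r < rows; r++) {
-                 if (fgets(line, sizeof(line), f) == NULL) {
-                     fprintf(stderr, "file too short\n");
-                     return 1;
-                 }
-                 /* line[0..cols-1] now holds row r of pattern p        */
-             }
-             fgets(line, sizeof(line), f);  /* skip the blank separator */
-         }
-         printf("%d patterns, %d columns x %d rows\n", npat, cols, rows);
-         fclose(f);
-         return 0;
-     }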
-
- The naming convention for the pattern files is:
- "Pattern" + #Rows + "x" + #Columns + "." + #Patterns.
- So "Pattern10x08.15" stands for a pattern file with 15 patterns, having
- 10 rows and 8 columns.
-
- The files created by "Save all" can't be edited by an editor.
-
-
- 5. Known Bugs:
- ==============
-
- When entering erroneous parameters into a gadget, the cursor in this gadget
- will not disappear.
- If you should find more bugs, we'd be happy to get a note from you!
-
- 6. Appendix:
- ============
-
- Interesting books:
-
- J. McClelland, D. Rumelhart: Parallel Distributed Processing:
- Explorations in the Microstructure of Cognition. The MIT Press, 1986.
-
- E. Schoeneburg: Neuronale Netzwerke. Markt und Technik, 1990. (in German)
-
-