- Path: sparky!uunet!mcsun!uknet!warwick!str-ccsun!strath-cs!robert
- From: robert@cs.strath.ac.uk (Robert B Lambert)
- Newsgroups: comp.ai.neural-nets
- Subject: Re: Correctness of NNs
- Message-ID: <10091@baird.cs.strath.ac.uk>
- Date: 27 Jul 92 11:58:43 GMT
- References: <10088@baird.cs.strath.ac.uk> <arms.712022732@spedden>
- Organization: Comp. Sci. Dept., Strathclyde Univ., Glasgow, Scotland.
- Lines: 136
-
- In article <arms.712022732@spedden> arms@cs.UAlberta.CA (Bill Armstrong) writes:
- >robert@cs.strath.ac.uk (Robert B Lambert) writes:
- >
- >>Surely to be able to state that network X is 100% reliable, the entire input
- >>set must be known. From experience, any pattern recognition task which has a
- >>fully defined input set with appropriate responses can most easily be solved
- >>with a look-up table.
- >
- >You are right, in theory, about table look up. But try to do that for
- >a 32*32 grid of pixels. You would have to store 2^1024 values, which
- >is not possible in this physical universe, and certainly is not
- >economical. This is typical of high-dimensional problems.
- >
- >So you can't store all outputs, you have to do some things by
- >"generalization" of some kind, based on a stored state derived form
- >some "training set", say.
-
- Interesting argument. If you have an input grid of 32x32 binary pixels, then
- indeed you have 2^1024 possible input patterns, which is physically impossible
- to implement as a look-up table. However, suppose we are recognizing handwritten
- characters and we have a database of 100,000 characters. We could use a look-up
- table (size 12 Mbytes) which would give 100% correct classification for each
- character in the database. If all future recognition is restricted to this set,
- the look-up table provides the simplest (and probably cheapest) solution.
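As a minimal sketch of the look-up-table approach (the packing scheme and names here are illustrative, not from the thread): each 32x32 binary grid packs into 128 bytes, so 100,000 keyed entries cost on the order of the 12 Mbytes quoted above, and recall within the database is exact.

```python
def pack(grid):
    """Pack a 32x32 grid of 0/1 ints into 128 bytes for use as a dict key."""
    bits = [b for row in grid for b in row]            # 1024 bits
    n = int("".join(map(str, bits)), 2)
    return n.to_bytes(128, "big")

table = {}  # pattern -> label, built from the character database

def classify(grid):
    # Perfect recall for stored patterns; None for anything unseen.
    return table.get(pack(grid))

# A pattern from the database is recalled with 100% accuracy.
grid_a = [[(r + c) % 2 for c in range(32)] for r in range(32)]
table[pack(grid_a)] = "A"
assert classify(grid_a) == "A"
```

Any pattern outside the stored set simply has no entry, which is exactly why the table is useless once new characters appear.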
-
- However, such a system is virtually useless if new characters are presented.
- A better solution is a system capable of generalization. Such a system can be
- implemented which gives 100% correct classification for the 100,000 characters
- and gives a good classification rate for unseen characters. But what about
- non-characters? Our original training set is a tiny percentage of all possible
- input patterns. If we fire random patterns at the input, what will the network
- do?
-
- Any neural network which can only classify inputs into a number of pre-defined
- categories will fail dramatically in the real world.
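The failure mode is easy to demonstrate with a toy model (arbitrary fixed weights, not any particular network from this discussion): a classifier forced to choose one of N categories will assign a label to pure noise every time, with no way to say "none of the above".

```python
import random

random.seed(0)
N_CLASSES, N_INPUTS = 10, 1024          # e.g. digits 0-9 on a 32x32 grid

# Toy single-layer "network" with arbitrary fixed weights.
weights = [[random.uniform(-1, 1) for _ in range(N_INPUTS)]
           for _ in range(N_CLASSES)]

def classify(pattern):
    """Pick the class with the highest score; there is no reject option."""
    scores = [sum(w * x for w, x in zip(row, pattern)) for row in weights]
    return scores.index(max(scores))

# Fire random patterns at the input: every one receives a category,
# however meaningless the input is.
for _ in range(5):
    noise = [random.randint(0, 1) for _ in range(N_INPUTS)]
    assert 0 <= classify(noise) < N_CLASSES
```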
-
- >>I had thought that the principal strength of neural networks (including the
- >>brain) was the ability to form adaptive responses/rules based on a subset of
- >>all possible system inputs. If responses are wrong, a neural network has the
- >>ability to correct itself, whilst improving its response rate on subsequent
- >>new inputs.
- >
- >Sounds ok. But, between the lines I read that you think you could
- >correct any errors on huge input spaces like the 2^1024 - sized one
- >above. That can't be done, in general, simply because you don't have
- >enough memory to do it. Certain functions you will never be able to
- >compute in our universe.
-
- True, and the same applies to the correctness of most computer software. However,
- if a network is able to identify inputs which do not lie within expected ranges,
- appropriate action can be taken; i.e., if the input is not a numeric character,
- do not attempt to categorize it.
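One way to take that "appropriate action" is to bolt a rejection test onto the classifier, so inputs far from anything expected are refused rather than forced into a category. A minimal sketch, assuming a nearest-prototype classifier with a Hamming-distance threshold (both are illustrative choices, not from the post):

```python
def hamming(a, b):
    """Count differing positions between two equal-length binary patterns."""
    return sum(x != y for x, y in zip(a, b))

def classify_with_reject(pattern, prototypes, max_dist):
    """Nearest-prototype classifier that refuses out-of-range inputs.

    prototypes: dict mapping label -> binary pattern
    max_dist:   reject anything farther than this from every prototype
    """
    best_label, best_dist = None, None
    for label, proto in prototypes.items():
        d = hamming(pattern, proto)
        if best_dist is None or d < best_dist:
            best_label, best_dist = label, d
    if best_dist > max_dist:
        return None        # "not a numeric character: do not categorize"
    return best_label

protos = {"0": [0, 0, 0, 0], "1": [1, 1, 1, 1]}
assert classify_with_reject([1, 1, 1, 0], protos, max_dist=1) == "1"
assert classify_with_reject([1, 0, 1, 0], protos, max_dist=1) is None
```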
-
- >>With respect to safety in critical applications, how do you prove a system to
- >>be correct? You first have to determine every possible input to that system.
- >
- >I agree, you have to specify a superset of the inputs you can get.
- >
- >>It is never possible to eliminate errors from any real system no matter how
- >>good it looks on paper. The current approach to this problem is redundancy.
- >>Build a number of systems from different component running different software
- >>and make sure they all produce the same response during use. Is this not one
- >>of the strengths of NNs? If a cell fails or a connection is broken, the
- >>degradation of the response to each input is slight.
- >
- >Sounds great, but doesn't work. There is a probability greater than
- >zero in a large network that you will get very ungraceful behaviour.
- >
- >I think your statement about "slight" degradation is quite wrong. You
- >Should be able to cook up cases where one connection being wrong
- >throws off a whole net. For example, if an output unit has a weight
- >that erroneously acts twice as large as it should be, you may not have
- >such a graceful degradation.
-
- Does this apply to the human brain? I think not. The human nervous system has
- massive redundancy with the expectation of component and connection failures
- over time.
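The contrast between the two positions can be made concrete with a toy redundancy model (the numbers are illustrative): when a decision is averaged over many redundant units, one total failure shifts the output only slightly, whereas with a single unit the same fault is fatal.

```python
def vote(outputs):
    """Average the outputs of redundant units computing the same response."""
    return sum(outputs) / len(outputs)

# 100 redundant units all producing the correct response of 1.0.
units = [1.0] * 100

# One unit fails completely (its output drops to 0.0): slight degradation,
# and any decision thresholded at 0.5 is unchanged.
units[0] = 0.0
assert vote(units) == 0.99

# With no redundancy, the identical fault destroys the response.
assert vote([0.0]) == 0.0
```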
-
- Much of my argument and reasoning is based on the study of the human brain.
- ANNs are currently very primitive and limited in their application. The current
- argument over analogue vs digital transmission of information is an excellent
- example of the problem neural network researchers face. The individual neurons
- in the brain are causal devices: their state is based on the history of firing
- of the cell and the history of received pulses. BP nets try to model the net
- behaviour of each neuron by considering the effective output (i.e. an output
- signal proportional to input). ALNs, like many other models, capture only the
- binary outputs of biological neurons.
-
- It is too easy to get caught up with the workings of individual neurons and
- forget the connectivity. ANNs (including ALNs) do not model the biological brain.
- There is strong evidence for cooperation between large numbers of neurons in the
- brain utilizing very high levels of feedback. Such `groupings' act together
- to generate a global response to a given stimulus. For this cooperative grouping
- to fail, a large percentage of the neurons within the group must fail.
-
- If we want networks to both solve real world problems and improve our
- understanding of the human brain, we should be considering the higher level
- behaviour and connectivity of networks and not their basic components.
-
- >But how can you claim that x-technology is able to produce
- >sensible responses to new inputs as a general rule? Show me a proof,
- >say that fuzzy logic always produces sensible responses to new inputs,
- >and I'll eat my hat. First, there's a complete, all encompassing
- >definition of "sensible", then....
- >
- >For those who are just getting into the discussion: please do not
- >think that ALNs are safe but BP nets aren't. Both can produce
- >unexpected values not detected by testing. I intend to show that ALNs
- >can be used with a design methodology that will lead to safe systems.
- >I will leave it up to the BP people to worry about the safety of their
- >systems. Up to now, it seems they won't even admit there is a
- >problem.
- >
-
- I agree with you to a point. Simple NNs, by their very nature, are unreliable.
- My comments about sensible responses do not refer to any single network, but
- rather to a principle we need to work towards. The human brain is capable of
- identifying stimuli which do not conform to the norm. The action taken is based
- on higher-level reasoning and experience and can usually be described as sensible.
-
- What is the future of ANNs? If they are used only in situations where a
- fully definable input-output set exists, they have no future, as this is the
- application where conventional computer technology excels. If ANNs are to be
- used for real-world control and recognition tasks, we must face up to the fact
- that such networks, while able to give the best performance, can never be 100%
- reliable, as it is simply not possible to account for all possible inputs.
-
- If ANNs were used to replace drivers in cars and the number of fatalities were
- reduced (though not eliminated), would this be acceptable?
-
- -------------------------
- Robert B Lambert
- University of Strathclyde
- Scotland, UK.
-
- robert@cs.strath.ac.uk
-