- Path: sparky!uunet!ukma!usenet.ins.cwru.edu!agate!doc.ic.ac.uk!uknet!bhamcs!axs
- From: axs@cs.bham.ac.uk (Aaron Sloman)
- Newsgroups: comp.ai.philosophy
- Subject: Re: Colbys-Weizenbaum etc Belated Replies (2)
- Message-ID: <C1JHov.MHH@cs.bham.ac.uk>
- Date: 28 Jan 93 01:05:18 GMT
- Sender: news@cs.bham.ac.uk
- Organization: School of Computer Science, University of Birmingham, UK
- Lines: 370
- Nntp-Posting-Host: emotsun
-
- Another belated attempt to catch up on comments on some of my earlier
- postings. This has two replies to Jeff Dalton.
-
- > From: jeff@aiai.ed.ac.uk (Jeff Dalton)
- > Message-ID: <8197@skye.ed.ac.uk>
- > Date: 19 Jan 93 18:16:51 GMT
- > Organization: AIAI, University of Edinburgh, Scotland
-
- > In article <C0yDFC.35x@cs.bham.ac.uk> axs@cs.bham.ac.uk (Aaron Sloman) writes:
- > >jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
- > >> In article <C0rpCA.Lz3@cs.bham.ac.uk> axs@cs.bham.ac.uk (Aaron Sloman) writes:
- [AS]
- > >> >There are some people who see gods in lightning and other natural
- > >> >phenomena. We don't bewail the existence of those phenomena: we
- > >> >educate people.
- [JD]
- > >> What would people active in AI and Cog Sci say in this education?
- >
- > >> Would they say that thermostats are conscious (as Dave Chalmers
- > >> maintains)?
-
- [AS]
- > >I think he's changed his mind.
- > >
- > >But I would rather warn people against assuming that questions like
- > >"Is X conscious" have any clear meaning, with true or false answers.
- > >(I've written about this previously so won't go on now.)
-
- [JD]
- > There's certainly a way of looking at them that makes it seem that
- > way. However, whenever I see arguments along those lines, I wonder
- > why they're being made. It doesn't look like that great a problem to
- > me. I'm a realist about consciousness. That is, I think there's a
- > fact of the matter as to whether something is conscious or not
- > (although there may be borderline cases).
-
- If you are talking about a person who has been asleep and you are now
- asking whether he's awake, then I agree there's a fact of the matter.
- If you are talking about someone who has been drugged and are asking
- whether he's recovered, then there may be a fact of the matter if it's
- the kind of drug whose effects are clear-cut (like putting you to
- sleep).
-
- But there are lots of cases where it's not clear that there's a fact of
- the matter. For some nice examples regarding feeling pain read "Why
- you can't make a computer that feels pain", in D.C.Dennett's book
- Brainstorms. Roughly, I interpret his answer as being: "because the
- concept of feeling pain is so incoherent and full of inconsistent
- criteria that there's no way of deciding whether or not the computer
- feels pain." Of course that doesn't mean that toothache doesn't exist
- in humans.
-
- Similarly, I think we can ask whether a chimp is still asleep or has
- regained consciousness. More generally, there are many categories of
- things of type X such that for individuals of type X there are clear
- transitions between being conscious and being unconscious, though for
- all of them there may also be unclear cases where our criteria come
- apart (like the individual who writhes and screams during the operation,
- but immediately afterwards remembers absolutely nothing). Let's call
- categories like X "mindful" categories: people are mindful, and I claim
- chimps are. I don't have any views regarding amoebas.
-
- When you start asking which CATEGORIES of things are mindful, i.e. which
- can be conscious and which can't, as opposed to which things in some
- particular mindful category are or aren't conscious, then it is not
- clear that there is any fact of the matter. Is a fly conscious? An
- amoeba? A two month human embryo? A machine of type so and so?
-
- I often compare this with the following: we can ask whether it's
- noon or midnight or 5pm in London, or in New York, or Moscow, because
- the question is clear cut. If you point at a bit of the moon and ask
- is it noon, or midnight or 5pm there, you may get different answers,
- depending on whether you use as your criterion the angular elevation of
- the sun above the horizon at that point on the moon, or whether you use
- a criterion based on projecting our timezones outwards perpendicular to
- the earth's surface, as you probably would for someone flying a plane or
- a balloon.
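-
- Here's the contrast as a toy calculation (my own illustration; the
- geometry is deliberately crude, and the numbers mean nothing beyond
- showing that the two criteria diverge):
-
-     # Toy model: "what time is it?" at a point on the moon, under two
-     # incompatible criteria. All geometry is crude and illustrative.
-
-     def solar_elevation_time(lunar_long_deg, subsolar_long_deg):
-         """Criterion A: local solar time, from the sun's position
-         over the moon's own horizon (sub-solar point = noon)."""
-         offset = (lunar_long_deg - subsolar_long_deg) % 360
-         return (12 + offset * 24 / 360) % 24
-
-     def projected_timezone_time(earth_long_below_deg, utc_hour):
-         """Criterion B: project Earth's time zones outward along
-         radial lines; the lunar point inherits the zone of the Earth
-         longitude 'beneath' it."""
-         zone = round(earth_long_below_deg / 15)
-         return (utc_hour + zone) % 24
-
-     # The same lunar point, at the same moment, under both criteria:
-     print(solar_elevation_time(40, -20))     # criterion A: 16.0 (4pm)
-     print(projected_timezone_time(-75, 12))  # criterion B: 7 (7am)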
-
- There's NO fact of the matter about which is the right way to decide
- what time it is at a particular place on the moon. Similarly there's no
- right way to decide how to project our criteria for saying whether
- something is conscious or not to other types of animals or machines,
- where the criteria fall apart.
-
- > ...There's a problem about
- > whether we can know which is the case,
-
- Not when there's nothing to be the case, because the question is
- muddled, incoherent, etc.
-
- > and there's a problem of making
- > it clear just what we're talking about by using the word "conscious"
- > (especially since it has a number of different meanings). But it
- > seems to me that our aim ought to be to solve these problems as best
- > we can rather than to argue that we cannot.
-
- Yes, I think we ought to solve the problems. Here's how: we first
- produce a good theoretical overview of the types of designs that can
- produce various kinds of behavioural capabilities more or less like
- ours. We then analyse the different kinds of capabilities that are
- capable of being generated by different kinds of mechanisms covered by
- the overview. We then give technical names for these different sorts of
- capabilities (and states, and processes, etc.). We could then CHOOSE one
- of these as closest to the ordinary use of the word "conscious" and use
- that as our new technical definition of the word. It would then be a
- fact of the matter whether amoebae, flies, mice, chimps, etc. etc. were
- or were not conscious.
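-
- Here's a toy sketch of what such a stipulated definition might look
- like (all the capability names and attributions below are invented
- for illustration; nothing hangs on them):
-
-     # Hypothetical illustration of the strategy: enumerate the
-     # capabilities that different designs can support, then STIPULATE
-     # a technical predicate. Every name here is made up.
-
-     CAPABILITIES = {
-         "thermostat": {"reactive-control"},
-         "fly":        {"reactive-control", "sensorimotor-integration"},
-         "chimp":      {"reactive-control", "sensorimotor-integration",
-                        "self-monitoring", "attention-switching"},
-     }
-
-     # One CHOSEN subset, picked as closest to the ordinary word
-     # "conscious"; another choice would draw the line elsewhere.
-     CONSCIOUS_V1 = {"self-monitoring", "attention-switching"}
-
-     def conscious_v1(kind):
-         return CONSCIOUS_V1 <= CAPABILITIES[kind]
-
-     for kind in CAPABILITIES:
-         print(kind, conscious_v1(kind))  # thermostat and fly False,
-                                          # chimp True
-
- Once the choice has been made, the question has a definite answer; it
- was the choice itself that was previously unsettled.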
-
- Compare: we used to have words for different kinds of stuff, but they
- had no clear, precise definitions to cover all cases. The development
- of physics and chemistry provided a good generative basis for a taxonomy
- of kinds of stuff (chemical elements of various sorts, isotopes,
- chemical compounds, mixtures, solutions, alloys, etc. etc.). Within this
- array of kinds generated by the new science, some kinds came very close
- to previous non-technical uses of words like "water" (H2O), "salt"
- (NaCl), "carbon", "iron", etc.
-
- It then became a fact of the matter whether something was water or not,
- or water plus impurities, etc., whereas previously there were
- ill-defined boundaries.
-
- Of course, someone who thinks he knows what consciousness "REALLY" is
- from first-hand introspective experience will reject all this. What
- I've found is that that sort of tendency is very widespread,
- deep-rooted, and hard to argue against. In fact it's very like a
- religious conviction in some cases. Philosophical therapy over a long
- period sometimes cures such people. But only sometimes. (It took me a
- long time to get rid of that delusion.)
-
- [JD]
- > Earlier you wrote:
- (as Eliza might have said :->)
- [AS]
- > People who use the words "conscious", "consciousness" in technical
- > discussions, generally *think* they know what they are talking
- > about, but are normally totally muddled, between all sorts of
- > different interpretations sometimes mixed up with mumbo jumbo
- > concepts to do with magical inaccessible entities and processes.
-
- Well, I've tried to unravel some of that, in my remarks above.
-
- >
- > My impression is that people generally do know what they're talking
- > about, are not totally muddled,
-
- Well, it's hard to be sure without discussing with them at length, and
- asking them to define precisely what the question is that they are
- asking about fleas, mice, embryos, or machines, when they ask whether
- these are capable of being conscious (i.e. are "mindful" categories).
- I don't dispute that they understand the ordinary use of English
- phrases like "He's regaining consciousness now".
-
- Also the fact that people *claim* to know what they are talking about,
- or *appear* to know what they are talking about, doesn't mean that they
- *do* know what they are talking about. There are lots of cases where
- people don't discover till long afterwards, or sometimes never at all,
- that they have been using incoherent concepts. I think Aristotle thought
- he knew what he was talking about when he wondered where the "natural"
- place was for earth, water, fire, air, etc. Before Einstein some people
- may have thought they understood the concept of continuing identity of a
- spatial location (as opposed to a continuing set of spatial relations to
- other things).
-
- We are not authorities on whether we attach definite meanings to our
- words, for we easily deceive ourselves.
-
- > ...and don't mix in mumbo jumbo
- > concepts or magical entities and processes. (I'm leaving out
- > "inaccessible" here.) Of course, when there is muddle and magic,
- > it's worth pointing this out. But I'm very suspicious of this
- > sort of general accusation, divorced from all concrete instances.
-
- Point taken. I took various verbal cues in previous contributions as
- evidence, because they looked like the turns of phrase I've often heard
- used by people who I had talked to in more depth, and (in my view) found
- to be muddled, etc. and often unwittingly committed to the existence of
- some kind of special stuff divorced from the realm of scientific
- investigation, inaccessible from "outside", etc. etc., and somehow
- (magically?) capable of producing events in the physical world.
-
- Even distinguished scientists can fall into this sort of trap: that
- was my conclusion after reading the well-known book on consciousness
- and the self by Eccles and Popper (whose exact title I forget).
-
- [AS]
- > >I'll backtrack a little: if the vendors know there's a good chance
- > >that because of inadequacies in the current educational system users
- > >of a package are likely to be fooled into attributing capabilities
- > >to the machine which it really does not have, then perhaps the
- > >machine should from time to time print out a suitable warning
-
- [JD]
- > Humm. I think there's a chance that education in AI will lead
- > people to attribute capabilities to the machine which it really does
- > not have. I gave an example a while back of a class in Edinburgh
- > where students argued, with the support of the instructor, that
- > Eliza has some real understanding, although only a small amount of
- > it. (Ie, that the difference between Eliza and, say, a human
- > was essentially only a matter of degree.)
-
- [AS]
- A tendency to believe that understanding is all-or-nothing (and
- similarly for consciousness, etc. etc.) often goes with the sort of
- muddle I was talking about. If a detailed analysis of human
- understanding of language shows that it involves 173 different
- capabilities, and if Eliza has 4 of them, well then I'd say there's
- nothing wrong with saying that it has a small degree of understanding.
- (I've enlarged on this in my paper on understanding in IJCAI-85.)
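-
- In code, the claim is hardly mysterious (173 and 4 are the made-up
- numbers from the paragraph above, used only for illustration):
-
-     # Understanding as a bundle of capabilities, not all-or-nothing.
-     HUMAN_CAPABILITIES = frozenset(range(173))    # the analysed bundle
-     ELIZA_CAPABILITIES = frozenset({0, 1, 2, 3})  # e.g. keyword
-                                                   # spotting, pronoun
-                                                   # reflection, ...
-
-     def degree_of_understanding(caps):
-         return len(caps & HUMAN_CAPABILITIES) / len(HUMAN_CAPABILITIES)
-
-     print(degree_of_understanding(ELIZA_CAPABILITIES))
-     # ~0.023: a small degree of understanding, but not zero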
-
- Saying it's "only" a matter of degree may imply that there's a continuum
- of cases. If that's what you are objecting to, I agree. Design space is
- full of DIScontinuities, and some of them make a very large difference
- to the capabilities of instances of designs.
-
- But arguing about which subset of capabilities is really required for
- understanding (or even a "teeny bit of understanding") is like arguing
- over which criterion is the real one for deciding on the time on the
- moon. It's just a silly argument as there is no answer.
-
- [JD]
- > I think there's a very strong tendency to see some real understanding
- > behind any use of natural language.
-
- That's because there usually is: e.g. in the case of the computer
- program there's the understanding of the programmers that lies behind
- what the computer prints out. If you are tempted to argue about whether
- the machine *itself* has "real" understanding as opposed to "fake"
- understanding, or "counterfeit" understanding, or "primitive"
- understanding, or "partial" understanding, etc. then I think it's a
- pointless argument.
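-
- For instance, the whole of an Eliza-style exchange can be driven by
- rules like the minimal sketch below (a crude imitation of Weizenbaum's
- published technique, not his actual code): whatever understanding
- there is of English, or of therapy, went in when a person wrote the
- rules.
-
-     import re
-
-     # The programmer's understanding is frozen into pattern + template.
-     RULES = [
-         (re.compile(r"\bI am (.*)", re.I),
-          "Why do you say you are {0}?"),
-         (re.compile(r"\bmy mother\b", re.I),
-          "Tell me more about your family."),
-     ]
-
-     def reply(sentence):
-         for pattern, template in RULES:
-             m = pattern.search(sentence)
-             if m:
-                 return template.format(*m.groups())
-         return "Please go on."
-
-     print(reply("I am very unhappy"))
-     # -> Why do you say you are very unhappy?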
-
- Of course there's a different point, namely that people can be deceived
- by appearances, just as people have been deceived into thinking the
- earth is flat, the sun goes round the earth, the gods are punishing
- them, etc. etc. But I didn't think you were talking about that.
-
- > ..We can see the other side of this
- > when we consider how hard it often was for people to see how a
- > computer could use symbols that are meaningful to us rather than being
- > confined to numbers. For instance, there are people in offices
- > everywhere who have had to divide all recognized work activities into
- > numerically designated categories and who accepted the explanation
- > that they had to do this because the computer required it.
-
- This is beyond my experience. There are, I admit, lots of people who use
- numbers gratuitously instead of informative descriptions, e.g. many
- would-be scientists (e.g. psychologists who suffer from physics-envy).
- I don't see how teaching people about AI will confuse them about this.
-
- > Education ought to be able to do something about this, but I think
- > that education would have to be somewhat "anti-AI" (and anti-TT), at
- > least as the debate is currently conducted.
-
- Strange to read that coming from you, given that you, of all people,
- know full well that computers are capable of doing all kinds of
- non-numerical things, and that AI in particular has all sorts of
- non-numerical techniques. But I expect I've misinterpreted you.
-
- Now your second article:
- > From: jeff@aiai.ed.ac.uk (Jeff Dalton)
- > Subject: Re: Colbys-Weizenbaum and Overcoming Depression 2.0
- > Message-ID: <8199@skye.ed.ac.uk>
- > Date: 19 Jan 93 19:20:36 GMT
-
- > In article <C0z7o1.ADI@cs.bham.ac.uk> axs@cs.bham.ac.uk (Aaron Sloman) writes:
- > >arodgers@dcs.qmw.ac.uk (Angus H Rodgers) writes:
- > >>
- > >> I'll change the question, then, to one which you are less likely
- > >> to find ambiguous: would it be necessary for the mechanical
- > >> psychotherapist to be able to *feel* anything? (With apologies to
- > >> Roger Penrose, and Hans Christian Andersen.)
- [AS]
- > >Well, the English word "feel" is almost as bad as the word
- > >"conscious". Are you asking whether the machine would need to be
- > >able to
- > > feel the surface it is resting on
- > > feel the breeze
- > > feel itself falling
- > > feel how far it is from the wall
- > > feel the roughness of the wall
- > > feel that it is getting close to finding an answer to a question
- > > feel that it is getting hotter
- > > feel hot
- > > feel tired
- > > feel that the weather is likely to turn wet
- > > feel sure that Fermat's last theorem is provable
- > > feel angry (compare "be angry", which is quite different)
- > > feel depressed (compare "be depressed")
- > > feel inclined to refuse to answer
- > > feel unwilling to give up
-
- [JD]
- > This looks to me like it might be a solvable problem.
-
- Yes - that's my point. For a subset of very specific cases at least we
- can begin to specify criteria that might be used for discriminating
- those cases.
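-
- Several of the senses listed above reduce to checkable conditions on a
- machine's sensors and state. A hypothetical sketch (the Robot class
- and all its sensor names are invented; nothing here touches the
- contested "REALLY feels" sense, which is exactly the point):
-
-     class Robot:
-         def __init__(self, contact_force, acceleration, skin_temp_c):
-             self.contact_force = contact_force  # newtons, touch sensors
-             self.acceleration = acceleration    # m/s^2, accelerometer
-             self.skin_temp_c = skin_temp_c      # surface thermometers
-
-     def feels_the_surface(r):    # "feel the surface it is resting on"
-         return r.contact_force > 0.1
-
-     def feels_itself_falling(r): # "feel itself falling": in free fall
-         return abs(r.acceleration) < 0.5  # an accelerometer reads ~0
-
-     def feels_hot(r):            # "feel hot"
-         return r.skin_temp_c > 40.0
-
-     r = Robot(contact_force=2.0, acceleration=9.8, skin_temp_c=22.0)
-     print(feels_the_surface(r),      # True
-           feels_itself_falling(r),   # False
-           feels_hot(r))              # False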
-
- But then someone comes along and says: "Ah, it may feel in that sense,
- but does it REALLY feel?" And then you probe a bit and find that they
- can't define what they mean except by saying "Well, YOU can feel things,
- can't you? So you know what feeling is from your own introspective
- experience", etc., and then the argument gets bogged down because they
- don't appreciate that you cannot define anything by pointing at an
- instance, even pointing inwardly. (Compare consciousness.)
-
- [JD]
- > BTW, doesn't Dennett have a paper on whether a robot can feel pain
- > (in _Brainstorms_)? Does he manage to pin "feel pain" down well enough?
-
- Yes, the paper is relevant -- referred to above. My interpretation is
- that he shows how incoherent the concept is as soon as you move beyond
- the "standard" situations in which we learn to use the word "pain".
-
- [AS]
- > >Such questions are often not so much a healthy expression of
- > >intellectual curiosity as an attempt to use rhetorical questions to
- > >challenge what is felt as a threat of some kind (e.g. Weizenbaum's
- > >threat that taking AI seriously somehow degrades mankind).
-
- [JD]
- > I think this is a rather dubious tactic, Aaron. You don't come
- > right out and say "you're just using rhetorical questions against
- > what you see as a threat" (perhaps because you don't think it's so),
- > but you try to create the impression that there's something suspect
- > about "such questions".
-
- I wrote "often". I don't know what proportion of cases need this
- diagnosis. It's certainly true of very many people I have argued
- with in real life. It's hard to be sure from typed contributions
- in comp.ai.philosophy! So perhaps I over-react to some of the familiar
- sounding turns of phrase and rhetorical questions.
-
- [AS]
- > >I was referring in a rather sloppy fashion to an old philosophical
- > >notion -- some people think that the only way you can possibly know
- > >what "consciousness" is, is by having it, and that's because the
- > >relevant states and processes are assumed not to be part of the
- > >physical universe but somehow part of a different (spiritual,
- > >mental) realm of being, not accessible via physical perception or
- > >measurement, but only via a kind of introspection.
- > >
- > >Often such thinking implies a notion of a kind of magic whereby
- > >these non-physical mental or spiritual entities can cause
- > >occurrences in the physical world or possibly vice versa.
- [JD]
- > Tar them with dualism, eh? I don't think this sort of thinking is
- > actually very prevalent on the net (at least in AI newsgroups). At
- > least I haven't noticed anyone defending substance dualism (although
- > Dave Chalmers defended property dualism).
-
- Dualism of a sort often lurks unacknowledged beneath the surface of
- hostility to AI.
- (Though not in the case of Searle or Penrose.)
-
- Scepticism about AI is different from hostility: I am deeply agnostic
- about what AI can or cannot achieve. But I think the best way to find
- out is to engage in AI research with an open mind. Currently I don't
- think we even understand what the problems are, let alone whether AI can
- solve them.
-
- But doing AI is the best way to learn.
-
- Aaron
- --
- Aaron Sloman,
- School of Computer Science, The University of Birmingham, B15 2TT, England
- EMAIL A.Sloman@cs.bham.ac.uk OR A.Sloman@bham.ac.uk
- Phone: +44-(0)21-414-3711 Fax: +44-(0)21-414-4281
-