- Path: sparky!uunet!spool.mu.edu!agate!doc.ic.ac.uk!warwick!uknet!edcastle!cam
- From: cam@castle.ed.ac.uk (Chris Malcolm)
- Newsgroups: comp.ai.philosophy
- Subject: Re: Colbys-Weizenbaum and Overcoming Depression 2.0
- Message-ID: <30581@castle.ed.ac.uk>
- Date: 21 Jan 93 12:24:44 GMT
- References: <8172@skye.ed.ac.uk> <C0yDFC.35x@cs.bham.ac.uk> <93Jan17.180655est.47622@neat.cs.toronto.edu>
- Organization: Edinburgh University
- Lines: 150
-
- In article <93Jan17.180655est.47622@neat.cs.toronto.edu> mstede@cs.toronto.edu (Manfred Stede) writes:
-
- >The origin of this discussion, however, was a program aiming to be an
- >artificial psychotherapist. Someone's posting advised to boil this
- >application down to the absolute minimal requirement: a tool that you
- >can talk to, that munches on the words for a second, and throws some-
- >thing back at you, such that some of your words reappear, and you do
- >that for a while, and you achieve self-elucidation. Well, if this is the
- >purpose, and I don't care about what the program utters anyways, I might
- >as well talk to my cat. Or to myself, which may in fact be much more
- >useful, because I could give myself sensible answers, couldn't I.
- >No need for a self-elucidation program.
-
- But the trouble with talking to yourself, or your cat, as anyone who
- has actually seriously tried this knows only too well, is that you (or
- your cat) don't ask yourself unexpected questions. That is why ancient
- systems of psychotherapy make such use of random oracles (reading
- bones, cards, tea-leaves, etc.). And that is the reason why puzzle
- solvers who are stuck are advised to try "silly" ideas, and to try
- random mutations of suggestive material, in an effort to jolt the mind
- out of its rut. It's the old generate-&-test paradigm. The human mind
- is a lot better at testing than generating, however, hence the
- popularity of artificial aids in the generation part of the problem
- solving process. That's why a silly clueless word-crunching program
- can be a useful adjunct to self-elucidation. And silly and clueless as
- it may be, it would be hard for it not to be better than opening the
- Bible (or whatever) at random, for the utility of which there is a
- great deal of evidence.
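The generate-&-test division of labour described above can be sketched in a few lines. This is a hypothetical illustration, not any particular oracle: the machine cheaply generates possibly silly prompts, and the human mind does all the testing.

```python
import random

# A minimal sketch of generate-&-test as described in the text:
# the program is the (dumb) generator, the user is the (smart) tester.
# The prompt fragments below are invented for illustration.

FRAGMENTS = ["reverse it", "what would a child do?", "remove one part",
             "combine two failed ideas", "ask the opposite question"]

def generate_prompt(rng: random.Random) -> str:
    """Return a random, possibly irrelevant nudge for a stuck solver."""
    return rng.choice(FRAGMENTS)

rng = random.Random(0)          # seeded so runs are repeatable
prompts = [generate_prompt(rng) for _ in range(3)]
```

The point, as with reading tea-leaves, is not that the generator is clever, but that it is cheap and unexpected.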
-
- Your kind of criticism is based on the presumption that _if_ a
- psychotherapeutic program is any good, then it _must_ possess, within
- itself, certain psychotherapeutic qualities. An obvious one of these,
- which the program can't avoid lacking, is understanding the distressed
- person. But a therapeutic program which does its work by engaging in a
- dialogue with a person is a participant in a process involving a very
- capable mind -- the mind of the human user. It can help that mind if
- it can provide facilities which that mind can find helpful. These can
- be very simple facilities indeed. For example, because I have a poor
- memory, I find pencil and paper very helpful in solving problems of
- all kinds. Including difficult problems involving sexual relationships
- with others. But there is absolutely _no_ problem solving ability in
- the paper and pencil I use. For a start they know nothing about sex.
- Yet I find them invaluable aids.
-
- >If the standards remain a little higher and we truly want an advice-
- >giving system, then one of my objections is that with the current state
- >of NLP it just won't work and would not be of any use: There is no
- >sublanguage to be pinned down here, to which you could connect your KB
- >such that you get some inferences right and come back with useful
- >advice. The program designer cannot anticipate what the problem of the
- >patient is, what s/he will be talking about, and how s/he will be talking
- >about it. And the user is bound to notice at some point how little that
- >dialogue partner actually grasps; frustration is a likely reaction.
-
- It is very easy to use simple ELIZA-like techniques to create a
- program capable of imparting useful information in the form of a
- discussion, or a question and answer session. Of course the program
- often makes mistakes, misunderstanding the user and making a
- completely irrelevant response. But people do that too. We respond by
- rephrasing our questions. Sometimes we have to do this a number of
- times. It works with ELIZA-ish programs too. In fact, it is much
- easier to rephrase your questions to get the answer you want,
- precisely because there is no (mis)understanding involved. Just as a
- hyper-card system can help you to learn something better than a plain
- book, so an ELIZA-like system can do a bit better than a hyper-card
- system. In this sort of application the user doesn't get frustrated by
- noticing the mechanics of the system; the user learns how to use the
- way the system works to get what she wants out of it.
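The "simple ELIZA-like techniques" mentioned above amount to keyword spotting plus pronoun reflection. A minimal sketch follows; the patterns and canned replies are invented for illustration and are not Weizenbaum's originals.

```python
import re

# Keyword-spotting with pronoun reflection, ELIZA-style.
# Words swapped so that echoed fragments read naturally back to the user.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are",
               "you": "I", "your": "my"}

# Each rule: a pattern to spot, and a reply template that may echo
# a reflected fragment of the user's own words.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)\bmother\b(.*)", re.I), "Tell me more about your family."),
]

def reflect(phrase: str) -> str:
    """Swap first- and second-person words in an echoed fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(utterance: str) -> str:
    """Return a reply for the first matching rule, else a stock nudge."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # the "irrelevant but acceptable" fallback
```

For example, `respond("I feel trapped by my job")` yields "Why do you feel trapped by your job?" -- and when nothing matches, the stock fallback is exactly the kind of irrelevant-but-harmless response the user learns to rephrase around.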
-
- Of course if a user thought the program was really _understanding_
- their problems, then frustration and tears are inevitable. But this is
- simply due to misunderstanding by the user. The solution is education
- -- which can in this case be easily provided by the supplier or the
- machine.
-
- >If the sublanguage cannot be restricted due to the domain (as it is
- >the case with the psychotherapist), it becomes quite difficult, if not
- >impossible, to explain to someone with no linguistics background what
- >the program can handle and what not.
-
- You don't need a linguistics background to understand how an
- ELIZA-like system works! And adding simple parsing of logical
- relations between clauses is easy to implement, and to understand as a
- user. I once built an ELIZA-like system whose mission in life was to
- explain how it worked, and to engage in AI/philosophy of mind debates.
- School kids played with it for hours, enjoyed discovering how it
- worked, and had no trouble understanding it. Once they had grasped
- its capabilities, a favourite trick was to engage in a carefully
- staged dialogue which would convince a visiting teacher or parent that
- it really understood! (It was particularly good at Chinese Room
- arguments :-)
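The "simple parsing of logical relations between clauses" mentioned above can be as crude as splitting on connective words. This sketch is invented for illustration; it tags a clause pair with the relation its connective signals.

```python
# Crude clause-relation spotting: find the first connective word and
# split the sentence around it. The connective list and relation labels
# are invented for illustration.
CONNECTIVES = {"because": "cause", "so": "consequence",
               "but": "contrast", "if": "condition"}

def split_clauses(sentence: str):
    """Return (relation, left clause, right clause) for the first
    connective found, or None if the sentence is a single clause."""
    words = sentence.lower().rstrip(".!?").split()
    for i, w in enumerate(words):
        if w in CONNECTIVES and i > 0:   # need a non-empty left clause
            return (CONNECTIVES[w],
                    " ".join(words[:i]),
                    " ".join(words[i + 1:]))
    return None
```

So "I argue with him because he ignores me" comes apart as a cause relation between two clauses, each of which the ELIZA machinery can then echo or question separately.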
-
- The program doesn't have to make a good response every time. Irrelevant
- responses are quite acceptable provided there are ways the user can
- work round to getting the required response. A program which interacts
- with a _very_ sophisticated language user can employ the abilities of
- the user.
-
- >No reason to get AI involved in this business, no need to advertise
- >anything as a computer program that analyzes your psychiatric problems
- >for you and helps you solve them.
-
- This is your misconception. You think that if AI is involved there
- must be some element of "real" analysis and understanding involved.
- But as I have argued, an _aid_ to self-elucidation need not duplicate
- human cognitive capabilities, since it can borrow them; it only needs
- to be able to offer useful facilities which the user can employ. You
- would then like to say "Ah, but in this case there is no AI, it's just
- a program, an animated book." What you don't realise is that this kind of
- animated book can usefully employ the techniques of AI to better do
- its job of being an animated book. AI can be (and is routinely) used
- in applications without a trace of, or the slightest intention to
- have, "real" understanding. That's a research goal about which there
- is a lot of debate :-) It has nothing to do with the business of
- building useful interactive books or self-elucidation aids, since
- useful versions of these can be built without "real" understanding.
-
- Of course they would be even better with "real" understanding. But,
- like today's calculators, they can do a pretty useful job without it.
-
- It interests me that Weizenbaum's secretary was smart enough to realise
- the therapeutic possibilities of ELIZA, whereas Weizenbaum was
- outraged by what he mistakenly considered to be the philosophical
- implications of what she was doing. As a general rule women find it
- much easier than men to use oracles like Tarot-reading gypsies,
- horoscopes, I Ching, etc., to help them elucidate their problems,
- while men are much more likely to be outraged by what they
- (mistakenly) think are the philosophical implications of asking a pack
- of cards a question.
-
- In psychotherapeutic programs, have we discovered a use of computers
- which women will take to quite naturally, but which men will have
- great conceptual difficulties with?
-
- (There's an interesting philosophy of mind issue behind all this.
- Manfred's view depends on seeing intelligence etc. as being primarily
- a property of an agent, whereas I see it as primarily a property of
- the interactive process between an agent and something else. I no more
- expect to find intelligence _inside_ myself than I expect to find it
- in the Chinese Room :-)
-
- --
- Chris Malcolm cam@uk.ac.ed.aifh +44 (0)31 650 3085
- Department of Artificial Intelligence, Edinburgh University
- 5 Forrest Hill, Edinburgh, EH1 2QL, UK DoD #205
-