Newsgroups: comp.ai.philosophy
Path: sparky!uunet!inmos!fulcrum!bham!bhamcs!axs
From: axs@cs.bham.ac.uk (Aaron Sloman)
Subject: Re: Colbys-Weizenbaum etc Belated Replies (1)
Message-ID: <C1JEA5.M30@cs.bham.ac.uk>
Sender: news@cs.bham.ac.uk
Nntp-Posting-Host: emotsun
Organization: School of Computer Science, University of Birmingham, UK
Date: Wed, 27 Jan 1993 23:51:41 GMT
Lines: 228

Various people commented on some of my earlier postings and I have been
too busy to reply. So here goes. First:

From: mstede@cs.toronto.edu (Manfred Stede)
Message-ID: <93Jan17.180655est.47622@neat.cs.toronto.edu>
Date: 17 Jan 93 23:07:18 GMT
Organization: Department of Computer Science, University of Toronto

[AS]
> > > >If some people are so badly educated that they take the use of "I"
> > > >in computer print-out to mean the machine has a mind, rather than
> > > >just being an extension of the use of "I" when a book includes
> > > >something like "in that situation I would advise you to take plenty
> > > >of rest...." or whatever, then we need to improve the education of
> > > >the general public.
[MS]
> > > It's not just the word "I". If the rest of the dialogue were
> > > obviously mechanical, perhaps no one would be fooled.
[AS]
> > Yes - but the point is the same. I see no more reason why what's
> > printed out by an expert system should use clumsy stilted English
> > than what's printed in a book. Of course, it depends on what the
> > task of the program is.
[MS]
> But the book is a fairer partner: it only talks to you. The dialogue
> program pretends to "understand".

That depends on the program. Some books "pretend" to talk to the reader
because they include, for instance, rhetorical questions. People who
don't understand the relevant literary conventions may be confused by
this. That's no reason for banning such books. Some books pretend to be
based on divine revelation. That's a lot worse. But I'd still rather
educate people than ban the books.

Just because a program interacts with the user does not mean it is
pretending to do something it doesn't do. To a limited extent a database
program understands, insofar as it produces the required information in
answer to questions. Why should that be OK and not something that helps
people with personal problems? I think it's nothing but sheer prejudice
and a Weizenbaum-type romantic human-centrism that objects to this.

Incidentally, "understands" is not an all-or-nothing concept, as I tried
to show in:

    A. Sloman, `What enables a machine to understand?', in
    Proceedings of the 9th International Joint Conference on AI,
    Los Angeles, August 1985.

> What it does (if it's a little
> smarter than Eliza) is likely to be a syntactic analysis and some
> semantics on top of it, which disambiguates words, attaches PPs
> correctly etc. This is all right as long as we deal with a well-defined
> sublanguage that the program has been designed to handle. There are
> domains where it makes a lot of sense to assume such a sublanguage,
> and several expert systems with NL interfaces exploit these. I claim
> that AI/NLP can be quite useful in such domains.
>
> The origin of this discussion, however, was a program aiming to be an
> artificial psychotherapist. Someone's posting advised to boil this
> application down to the absolute minimal requirement: a tool that you
> can talk to, that munches on the words for a second, and throws
> something back at you, such that some of your words reappear, and you
> do that for a while, and you achieve self-elucidation. Well, if this is
> the purpose, and I don't care about what the program utters anyways, I
> might as well talk to my cat. Or to myself, which may in fact be much
> more useful, because I could give myself sensible answers, couldn't I.

Speak for yourself, but don't assume everyone else has your reactions to
everything. People like you would feel no need to interact with the type
of program you are discussing. Agreed. But that has no implications at
all for people who are not like you. Why should you object to others who
find it soothing, refreshing, insightful, encouraging, or helpful in
some way? (Compare writing in a diary: "Dear diary ....". Some may
prefer talking to a machine that produces responses.)

Actually even a simple Eliza can produce more surprising responses than
the sort of system you've described. Without having interacted with
Colby's system, I have no idea how much more advanced it is than Eliza.
I thought you were objecting *in principle* to any attempt to offer
people any kind of psychotherapy via a program using current expert
system/natural language technology. If you are not objecting in
principle but only saying you wouldn't find it useful, then perhaps we
have no disagreement.
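
(To make this concrete: an Eliza-style responder is little more than
keyword-triggered pattern substitution. The fragment below is a minimal
sketch of that idea in Python; the rules and wording are invented purely
for illustration, and it is not Weizenbaum's or Colby's actual code.)

    # A minimal Eliza-style responder: keyword-triggered pattern
    # substitution, nothing more.  The rules below are invented
    # purely for illustration.
    import random
    import re

    RULES = [
        (r"\bI am (.*)", ["How long have you been {0}?",
                          "Why do you say you are {0}?"]),
        (r"\bmy (\w+)",  ["Tell me more about your {0}.",
                          "Does your {0} matter to you?"]),
    ]

    def respond(utterance):
        # The first rule whose pattern matches supplies the reply.
        for pattern, templates in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return random.choice(templates).format(*match.groups())
        return "Please go on."  # default when no keyword fires

    print(respond("I am unhappy about my job"))

Even a couple of rules like these can echo back enough of the user's
own words to produce responses that feel unexpectedly apt.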

> No need for a self-elucidation program.

Not for you, perhaps. So what?

> If the standards remain a little higher and we truly want an advice-
> giving system, then one of my objections is that with the current state
> of NLP it just won't work and would not be of any use: There is no
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> sublanguage to be pinned down here, to which you could connect your KB
> such that you get some inferences right and come back with useful
> advice. The program designer cannot anticipate what the problem of the
> patient is, what s/he will be talking about, and how s/he will be talking
> about it. And the user is bound to notice at some point how little that
> dialogue partner actually grasps; frustration is a likely reaction.

If your claim is an empirical one - that all contemporary programs of
this sort will actually be useless - then only empirical investigation
can establish it, not armchair (typing-chair) pronouncements. Anything
else is pure prejudice, unless you have a more advanced theory of how
the human mind works than anyone else I know has, on the basis of which
you can demonstrate exactly why nobody in need of therapy will be
helped. If you had such a theory, it could probably be the basis for the
design of a good therapy program.

You find it surprising that anyone should be helped by a limited
"language understander". I find it surprising that some adults should be
comforted by clutching a teddy bear. All that proves is that neither of
us has a good theory of how human minds in general work.

> (That's the technical point. Aside from that -- doesn't psychological
> advice depend on comprehension abilities that go a little further than
> literal "understanding" (getting the predicate/argument structure right)
> of sentences?)

If the advice is supposed to be that of a scientist prescribing remedies
based on knowledge of the system needing repair then you are right. If
the "advice" is just a tool that makes some people better, then HOW it
could work may be as mysterious to you as the effect of music, or drugs,
or religion, or reading poetry. But that doesn't mean it can't work.

[AS]
> > I'll backtrack a little: if the vendors know there's a good chance
> > that because of inadequacies in the current educational system users
> > of a package are likely to be fooled into attributing capabilities
> > to the machine which it really does not have, then perhaps the
> > machine should from time to time print out a suitable warning
[MS]
> This may not be so easy. In the restricted-domain system, the program
> designer can anticipate what people will say and not say. If there is
> need for a warning, it may be sth like "don't use complicated sentences,
> don't abbreviate 'something' to 'sth', and talk only about bathtub-fixing
> (or whatever it is) and nothing else."

That's not the sort of warning I meant. Rather something like:

    PLEASE REMEMBER THAT YOU ARE TALKING TO A COMPUTER PROGRAM, NOT
    A PERSON. THIS PROGRAM MERELY FOLLOWS RULES THAT WERE DESIGNED
    TO GENERATE HELPFUL INTERACTIONS, BUT IT DOES NOT REALLY
    UNDERSTAND WHAT YOU ARE SAYING, AND SOME OF THE RULES MAY
    NOT WORK WELL IN YOUR CASE.

    IT IS POSSIBLE THAT YOU HAVE PROBLEMS REQUIRING PROFESSIONAL
    HELP RATHER THAN THIS KIND OF COMPUTER THERAPIST.

and occasionally shorter or longer reminders. Of course people should
have the option to turn this patronising stuff off, if they don't need
the reminder (or think it insults their intelligence).

[MS]
> If the sublanguage cannot be restricted due to the domain (as is
> the case with the psychotherapist), it becomes quite difficult, if not
> impossible, to explain to someone with no linguistics background what
> the program can handle and what not.

So what? Someone given the above warning does not need an explanation of
the precise details of the program's scope and limits, if talking to the
program helps them, i.e. makes them feel better.

You can tell someone that it's unsafe to drive round certain corners
faster than 40 km/h without having to go into a detailed analysis of
friction, momentum, centrifugal force, etc.

If the program were giving technical advice on how to cure diseases by
administering potentially very dangerous drugs, or how to maintain a car
so that it is safe to drive, etc., then I would be MUCH more concerned
that the technical limits, and especially the limits of reliability of
the program, were made absolutely clear to the user. (The same applies
to advice given in a book. A book on baby care must make it clear that
in certain cases the infant should be taken to a proper doctor for
diagnosis, since the diagnosis cannot be done by an untrained person
using the book.)

If it turned out that for some users the program produced suicidal or
other dangerous tendencies, then that would be a reason for restricting
its use. If the results are only either helpful or neutral, as I
suspect, then much too much fuss is being made.

[MS]
> One conceivable, if humble, first
> step is to list upon request all the words in the system's dictionary,
> but this doesn't help much, either, since it's unclear what these words
> "mean" to the system, that is: what inferences does the system draw when
> these words appear (depending on the context, if the system is smart
> enough). -- Yes, there are expert systems that try to "explain" how
> they came up with their answer (by explicating the rule-chain), but
> this again presupposes a well-restricted domain and language.

I think you are approaching this whole thing from the point of view of
an expert in the field who wants to know things about the program they
are interacting with. There's no reason at all to assume that every
customer will share your concerns, or should share them, any more than
everyone who buys a car needs to know how a carburettor works, etc.

[MS]
> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
[JD]
> > In fact, it can be fun (at least) to "talk" with a program that
> > constructs replies by randomly linking overlapping subsequences
> > of earlier input (sort of like an interactive dissociated press).
[MS]
> Absolutely right. But then let's keep it that way and advertise such
> "dialogue programs" as "a toy you can have loads of fun with, because
> it pretends to have some idea of what you're saying, yet it's
> completely dumb. Come and find out how dumb it is."
>
> No reason to get AI involved in this business, no need to advertise
> anything as a computer program that analyzes your psychiatric problems
> for you and helps you solve them.
> Otherwise, the message will be "Come and find out how dumb AI is."

Here you seem to be saying: none of this stuff works. Well, if you have
tried it out in a systematic way on the sorts of people it was
intended for, that's fine, and you should publish a detailed report
debunking it.

On the other hand, if you haven't done that, you have no right to assume
that the people who use such software will gain no benefit from it, or
that in doing so they will be deceived in some way.
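
(As an aside, the "dissociated press" trick Jeff mentions is easy to
illustrate. The sketch below is one way to do it in Python, not
anyone's actual program; the two-word window and all the wording are
arbitrary choices made for the example.)

    # A rough "dissociated press": replies are stitched together from
    # overlapping word pairs seen in earlier input.  The two-word
    # window is an arbitrary illustrative choice.
    import random
    from collections import defaultdict

    def build_chains(history):
        # Map each adjacent word pair to the words seen following it.
        chains = defaultdict(list)
        words = " ".join(history).split()
        for i in range(len(words) - 2):
            chains[(words[i], words[i + 1])].append(words[i + 2])
        return chains

    def babble(history, length=12):
        chains = build_chains(history)
        if not chains:
            return "Go on."
        pair = random.choice(list(chains))  # random starting pair
        output = list(pair)
        for _ in range(length):
            followers = chains.get(tuple(output[-2:]))
            if not followers:               # dead end: stop early
                break
            output.append(random.choice(followers))
        return " ".join(output)

    history = ["I am unhappy about my job",
               "my job leaves me no time for my family"]
    print(babble(history))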

Aaron
--
Aaron Sloman,
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sloman@cs.bham.ac.uk OR A.Sloman@bham.ac.uk
Phone: +44-(0)21-414-3711 Fax: +44-(0)21-414-4281