- Path: sparky!uunet!noc.near.net!hri.com!ukma!psuvax1!psuvm!auvm!FAC.ANU.EDU.AU!ANDALING
- From: andaling@FAC.ANU.EDU.AU ((Avery Andrews))
- Newsgroups: bit.listserv.csg-l
- Subject: language, discreteness
- Message-ID: <9211082223.AA15164@fac.anu.edu.au>
- Date: 9 Nov 92 14:23:40 GMT
- Sender: "Control Systems Group Network (CSGnet)" <CSG-L@UIUCVMD.BITNET>
- Lines: 58
- Comments: Gated by NETNEWS@AUVM.AMERICAN.EDU
-
- [Avery Andrews (921108.1244)]
- (Bill Powers (921107.0800))
-
- > I have some questions about the "knowledge base." I gather that what
- > you mean by a knowledge base is a set of statements describing
- > perceptions, rather than the perceptions themselves. I can see how it
- > would be possible to mark a list of statements as "said" or "not
- > said," but how would you mark the perceptions to which they refer?
-
- The perceptions being described are typically long-gone in these
- situations, so I don't see that there's any sense in which they
- can be marked at all. It's my impression, though I don't know
- the area well, that there is a certain amount of evidence that memory
- really is symbolic to a considerable degree, and so, when one is
- talking on the basis of what one remembers, it is quite appropriate
- to mark statements as `said' or `not said', at least as a first
- approximation.
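-
- (A tiny sketch, purely illustrative and not from the original discussion:
- statements in a symbolic memory tagged as said/not said, with no need to
- mark the long-gone perceptions themselves. The example statements are
- invented.)
-
```python
# Illustrative only: a symbolic memory as a list of statements,
# each carrying a said / not-said mark.  The perceptions the
# statements describe are not stored at all.
memory = [
    {"statement": "the cat is on the mat", "said": True},
    {"statement": "the mat is red",        "said": False},
]

# Retrieve what remains to be said.
unsaid = [m["statement"] for m in memory if not m["said"]]
print(unsaid)  # ['the mat is red']
```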
-
- As for modelling, what you sense as missing is any idea of how
- linguistic tasks could be done with brain-like-as-we-know-it hardware.
- The tricky bit, I think, is how to deal with the fact that we seem
- to be able to build up networks in which arbitrarily large numbers
- of individuals are classified and related with a finite number
- of properties and relationships. E.g. we can learn of any
- number of women that they are daughters of, say, Adeline, any number
- of men that they are (current or former) husbands of these women,
- etc. & people can learn this kind of stuff very quickly--much faster
- than a connectionist network could be trained.
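-
- (To make the point concrete -- a minimal sketch, not anything from the
- post itself, with invented relation and person names: a fixed, finite set
- of relations over an open-ended set of individuals, where each new fact
- is stored in a single update rather than learned by repeated training.)
-
```python
# Sketch: a finite set of relation names over an open-ended set
# of individuals.  Each fact is learned in one step ("one-shot"),
# unlike a connectionist network that needs many training passes.
class RelationalStore:
    RELATIONS = {"daughter_of", "husband_of"}  # finite, fixed

    def __init__(self):
        self.facts = set()  # (relation, a, b) triples

    def learn(self, relation, a, b):
        if relation not in self.RELATIONS:
            raise ValueError("unknown relation: " + relation)
        self.facts.add((relation, a, b))  # single-step update

    def holds(self, relation, a, b):
        return (relation, a, b) in self.facts

kb = RelationalStore()
kb.learn("daughter_of", "Mary", "Adeline")  # any number of women...
kb.learn("husband_of", "Tom", "Mary")       # ...and their husbands
print(kb.holds("daughter_of", "Mary", "Adeline"))  # True
```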
-
- I don't think anyone has a clue as to how to do this with
- neurologically realistic hardware (wetware?), just as nobody in the
- nineteenth century had any idea how to reduce chemistry to physics,
- though people seem to have assumed that it could be done somehow.
- Maybe this means that PCT should ignore language entirely
- until this problem is solved, but I think that the idea of closing loops
- through the environment has enough power to be worth trying to apply to
- language even without neurologically concrete modelling.
-
- And, it seems to me that trying to build models with the emphasis on
- interaction & minimal representation might turn up some constraints that
- would make a neurologically realistic model easier to attain. At
- any rate, it's the best we can do at the moment.
-
- The point of my little piece on discreteness is not that discreteness
- is necessarily problematic for PCT, but that a different collection
- of things might be important in a regime where disturbances are
- highly limited (e.g. once something has been said, nothing can
- make it unsaid), and where reference levels can be attained
- *exactly*. In particular, it is not at all clear to me that the
- usual equations have much significance--what makes equations
- interesting is the possibility of solving them non-trivially,
- but I don't see how to do this in this sort of domain: the perception
- that cancels the error signal is to hear somebody saying what is
- to be said, & that's a lot less exciting than the notion of the
- equilibrium state of a feedback system, at least to me (since the
- solution is a trivial restatement of the problem).
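-
- (An illustration of that triviality -- a sketch I'm adding here, not
- anything from PCT proper, with invented utterances: in a discrete regime
- the "equilibrium" is just an exact match between reference and
- perception, so the solution merely restates the problem.)
-
```python
# Sketch of a discrete control loop: the reference is the
# utterance-to-be-said, the perception is what has been heard,
# and the "error" is just the mismatch.  Once the reference is
# attained *exactly*, the error is empty -- a trivial restatement
# of the problem.
def discrete_control(reference, heard_so_far):
    said = set(heard_so_far)          # "said" / "not said" marks
    error = [u for u in reference if u not in said]
    return error                      # acting = saying what's left

ref = ["hello", "world"]
print(discrete_control(ref, []))                  # ['hello', 'world']
print(discrete_control(ref, ["hello"]))           # ['world']
print(discrete_control(ref, ["hello", "world"]))  # []  (error cancelled)
```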
-
- Avery.Andrews@anu.edu.au
-