- Comments: Gated by NETNEWS@AUVM.AMERICAN.EDU
- Path: sparky!uunet!news.gtech.com!noc.near.net!mars.caps.maine.edu!news.yale.edu!yale!gumby!destroyer!sol.ctr.columbia.edu!zaphod.mps.ohio-state.edu!darwin.sura.net!paladin.american.edu!auvm!PARC.XEROX.COM!SIBUN
- X-Delivery-Notice: SMTP MAIL FROM does not correspond to sender.
- X-else-reply-to: sibun@parc.xerox.com
- Fake-Sender: sibun@parc.xerox.com
- Message-ID: <92Nov5.124237pst.29192@hmmm.parc.xerox.com>
- Newsgroups: bit.listserv.csg-l
- Date: Thu, 5 Nov 1992 12:42:31 PST
- Sender: "Control Systems Group Network (CSGnet)" <CSG-L@UIUCVMD.BITNET>
- From: Penni Sibun <sibun@PARC.XEROX.COM>
- Subject: Re: language
- In-Reply-To: marken@aero.org's message of Wed,
- 4 Nov 1992 14:47:23 -0800 <92Nov4.163443pst.12160@alpha.xerox.com>
- Lines: 58
-
- (penni sibun 921105.1300)
-
- [From Rick Marken (921104.1400)]
-
- penni sibun (921103.1600) on the relation between her model and pct model
-
- >i think the major difficulty in mapping bet. yr model and mine is the
- >``intended meaning'' part.
-
- I agree -- intentions (reference signals) ARE the ONE BIG difference
- between PCT and ALL other models of living systems.
-
- nonsense. intentions are one of the many banes of traditional ai.
-
- re: below. i can't make much sense of yr cryptic equations. i don't
- think you answered my question. i didn't ask whether you thought
- there were hidden loops in my model. i asked how you could make a pct
- model that accounted for the same phenomena. obviously, i believe
- that the phenomena my model captures are important. i'd like to see
- how you think pct could account for the same process. could you
- please try again?
-
- > this is a basic sketch of my model: i cast
- >the issue as deciding what to say next.
- > another way to characterize this might be 1) where are we in the
- >linguistic structure (ws) and 2) where are we in the structure of
- >``meaning(s)'' (?) we are talking about.
-
- In order to be able to answer these two questions, doesn't the model
- have to be able to perceive the state of ws and "where we are in
- the structure of meanings"? That is, 1) and 2) describe perceptual
- variables, right?
-
- sure. that's not the issue. i'm not interested in quibbling whether
- something is a perception or a perception of a perception.
-
- > ignoring the rest of the
- >context for the nonce, what is said next is determined both by 1) and
- >by 2). this implies that it is *not* the case that the ``meaning'' is
- >selected and then the linguistic structure is found to fit it, and the
- >next bit of language is produced.
-
- Your second sentence is not a correct description of the PCT model of
- meaning generation as I described it in my diagram; the PCT model
- says that a linguistic structure (ws) is generated which moves perceived
- meaning into a match with reference (selected or intended) meaning.
- You say that 1) and 2) [the questions quoted above] determine what is
- said next. To me, it sounds like the perception of 1) where you are in
- the linguistic structure (call this p.ws) and 2) where you are in the
- structure of meanings (call this p.m) determines what you say next
- (call this o). So, your model seems to be o = f(p.ws,p.m) -- definitely
- not a control model. But you may have a control model without knowing
- it; what you say next may be ws and m rather than o -- so (ws,m) =
- f(p.ws,p.m) AND (p.ws,p.m) = g(ws,m) -- closed loop.
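The contrast Rick draws can be sketched as two toy update rules, using his symbols (p.ws = perceived place in the linguistic structure, p.m = perceived place in the meaning structure, o = what is said next). All function names, numbers, and update rules below are illustrative assumptions, not anything from the exchange itself:

```python
def open_loop(p_ws, p_m):
    # Rick's first reading: o = f(p_ws, p_m). Output is a pure
    # function of the two perceptions; nothing is fed back, so
    # there is no control.
    return p_ws + p_m

def closed_loop(reference_m, steps=50, gain=0.1):
    # The PCT reading: each next bit of language (ws) is generated
    # so that perceived meaning p_m is driven toward the reference
    # (intended) meaning -- a closed loop.
    p_m = 0.0
    for _ in range(steps):
        error = reference_m - p_m   # intended minus perceived meaning
        ws = gain * error           # speech output reduces the error
        p_m += ws                   # speaking changes what is perceived
    return p_m

# In the closed-loop version, perceived meaning converges on the
# intended meaning regardless of where it started.
print(closed_loop(1.0))
```

The point of the sketch is only that the second function contains a loop (output feeds back into perception) while the first does not, which is the distinction Rick's f and g equations are gesturing at.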
-
- cheers.
-
- --penni
-