- Comments: Gated by NETNEWS@AUVM.AMERICAN.EDU
- Path: sparky!uunet!zaphod.mps.ohio-state.edu!darwin.sura.net!paladin.american.edu!auvm!PARC.XEROX.COM!SIBUN
- X-Delivery-Notice: SMTP MAIL FROM does not correspond to sender.
- X-else-reply-to: sibun@parc.xerox.com
- Fake-Sender: sibun@parc.xerox.com
- Message-ID: <92Nov19.123638pst.29194@hmmm.parc.xerox.com>
- Newsgroups: bit.listserv.csg-l
- Date: Thu, 19 Nov 1992 12:36:33 PST
- Sender: "Control Systems Group Network (CSGnet)" <CSG-L@UIUCVMD.BITNET>
- From: Penni Sibun <sibun@PARC.XEROX.COM>
- Subject: Re: visual-linguistic-motor model (or, vision, instruction,
- & action)
- In-Reply-To: "William T. Powers"'s message of Thu,
- 19 Nov 1992 07:45:10 -0800
- <92Nov19.085552pst.11662@alpha.xerox.com>
- Lines: 23
-
- (penni sibun 921119.1300)
-
- We obviously don't need to figure out how words are recognized or
- produced; we can just say that they are. The associative part looks easy
- to implement in a limited situation.
- I think we could make up a working model that could do the following:
- given the name of an object, turn and look at it; given a picture of an
- object, name it and turn to look at its prototype in the room; given a
- direction in which or a location at which to look, know that the object
- seen is a familiar object and name it. We would have to stipulate object
- recognition. Perhaps we could work in kinesthetic body configuration
- sensing, but we would have to stipulate, for the time being, conversion
- of visual images to objective space. But even with these limitations, I
- think this might be an interesting model. I invite anyone who is
- interested to be the first one to get such a model working.
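- A minimal sketch of the stipulated associative part (all names here are
- illustrative, not anything from the message): object recognition and the
- conversion of images to objective space are simply assumed, so each known
- object is stored with a name, a recognized "prototype" tag, and a direction
- to turn toward in the room, and the three behaviors above become lookups.

```python
from dataclasses import dataclass

@dataclass
class KnownObject:
    name: str
    prototype: str    # stands in for stipulated object recognition
    direction: float  # degrees to turn to face the object (stipulated
                      # conversion of visual images to objective space)

class AssociativeModel:
    """Associates names, recognized prototypes, and directions."""

    def __init__(self):
        self.by_name = {}
        self.by_prototype = {}
        self.by_direction = {}

    def learn(self, obj: KnownObject):
        self.by_name[obj.name] = obj
        self.by_prototype[obj.prototype] = obj
        self.by_direction[round(obj.direction)] = obj

    def look_at(self, name):
        """Given the name of an object, turn and look at it."""
        return self.by_name[name].direction

    def name_and_locate(self, prototype):
        """Given a picture of an object (already recognized as a
        prototype), name it and turn to look at it in the room."""
        obj = self.by_prototype[prototype]
        return obj.name, obj.direction

    def identify(self, direction):
        """Given a direction in which to look, report whether the
        object seen there is familiar, and name it if so."""
        obj = self.by_direction.get(round(direction))
        return obj.name if obj else None

model = AssociativeModel()
model.learn(KnownObject("cup", "cup-image", 90.0))
print(model.look_at("cup"))
print(model.name_and_locate("cup-image"))
print(model.identify(90.0))
```

- In this limited situation the associations really are just table lookups;
- all of the hard problems live inside the stipulated recognition step.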
-
- chapman's system Sonja does just this sort of thing. chapman also
- finesses the hard problems you suggest finessing, viz., object
- recognition and language use.
-
- cheers.
-
- --penni
-