- Comments: Gated by NETNEWS@AUVM.AMERICAN.EDU
- Path: sparky!uunet!uvaarpa!darwin.sura.net!paladin.american.edu!auvm!CCB.BBN.COM!BNEVIN
- Return-Path: <@VMD.CSO.UIUC.EDU:bnevin@ccb.bbn.com>
- Message-ID: <CSG-L%93012810523974@VMD.CSO.UIUC.EDU>
- Newsgroups: bit.listserv.csg-l
- Date: Thu, 28 Jan 1993 11:31:01 EST
- Sender: "Control Systems Group Network (CSGnet)" <CSG-L@UIUCVMD.BITNET>
- From: "Bruce E. Nevin" <bnevin@CCB.BBN.COM>
- Subject: theories of language
- Lines: 249
-
- [From: Bruce Nevin (Thu 930128 08:29:27)]
-
- (Bill Powers (930127.1430) ) --
-
- >Interesting. Here we are again. Operator grammar explains
- >language. Generative grammar explains language. Pick one to
- >believe, and the other is wrong. Obviously they can't both be
- >right. What basis can there be for anyone to believe either side?
-
- Neither "explains language" in a way that is satisfactory from a
- PCT perspective. Either could be used as a starting place for
- outlining a PCT explanation. My argument is not so much that
- Generative theory is wrong, but that it is *much* more difficult
- to go anywhere using it as a starting place. (I would argue that
- various flavors of Generative theory are wrong about certain
- things, but those things are by and large irrelevant or secondary
- to our concerns here.)
-
- Let's go back to your (930126.0830) generalizations:
-
- > One of the logical errors that's easy to make in learning control
- > theory is to suppose that because a particular state of the
- > environment is observed to be involved in control of a higher-
- > level perception, ONLY that state of the environment will result
- > when the perception is controlled.
-
- > I've seen this in linguistics. In a top-down model, some global
- > feature of a sentence is specified. This feature is then
- > exemplified by some element of a specific sentence at a lower
- > level. But why that sentence, and not a totally different one
- > that is also an example of the higher form? In fact the detailed
- > sentences used as examples vary all over the place, so there is
- > clearly no constraint on which sentence is to be used as an
- > example. This is a major problem for top-down models (at least as
- > far as implementing them is concerned).
-
- > A control-theoretic model of language production works the other
- > way: it selects sentences until one is found that can be
- > PERCEIVED as having a certain form matching a specified form.
- > None of the degrees of freedom of the sentence matter except the
- > combination that results in satisfying the reference-form; in all
- > other respects the sentence is free to vary. Any old words that
- > can be perceived as an NP will satisfy the requirement that an NP
- > appear in the final result in that position. But you can't go the
- > other way: you can't write an algorithm that will start with the
- > specification that a noun phrase be uttered, and come up with a
- > specific utterable noun phrase. The specification "NP" doesn't
- > care which noun phrase is found; therefore it can't specify ANY
- > noun phrase.
-
- > You linguists out there have heard me harp on this before. So far
- > you haven't dealt with the basic problem. It's part of the same
- > problem that Rick is talking about.
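- 
- To make this concrete before I reply: here is a toy sketch, in
- Python, of the generate-and-test loop you describe (entirely my
- own construction; the word lists and the perceiving function are
- invented for illustration):
- 
-     import random
- 
-     # Toy "input function": perceive a word sequence as an NP if
-     # it looks like a determiner plus noun, or a bare noun.
-     DETERMINERS = {"the", "a", "some"}
-     NOUNS = {"dog", "idea", "sentence"}
-     WORDS = list(DETERMINERS | NOUNS | {"ran", "green"})
- 
-     def perceived_as_np(words):
-         if len(words) == 2:
-             return words[0] in DETERMINERS and words[1] in NOUNS
-         return len(words) == 1 and words[0] in NOUNS
- 
-     # Control loop: vary the output until it can be PERCEIVED as
-     # matching the reference form "NP"; in every other respect
-     # the output is free to vary.
-     def produce_np():
-         while True:
-             candidate = random.sample(WORDS, random.choice([1, 2]))
-             if perceived_as_np(candidate):
-                 return candidate
- 
-     print(produce_np())   # e.g. ['the', 'idea'] -- any NP will do
- 
- The specification "NP" selects no noun phrase in particular; it
- only rejects what cannot be perceived as one.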
-
- The segue from your reply to Rick's (930125.1500) on output and
- perception "bushes" is actually a non sequitur. A
- phrase-structure tree with NPs, VPs, etc. is a structural
- description that applies to an indefinitely (but not infinitely)
- large set of sentences. No one has ever claimed that one gets
- from a phrase-structure tree to a particular sentence in that
- set, top-down. That is not what the term "generative" means.
- (Basically, it just means "explicit," aside from its role as a
- trademark.) Indeed, that is one of the conceptual difficulties
- of "lexical insertion" in classical transformational-generative
- grammar. How do the words come in, that is, with what
- motivation?
-
- The bottom-up control-theoretic approach that you counterpose to
- this is, I think, what Avery has been proposing. (Correct me if
- I'm wrong, Avery, or elaborate if you will.) The words come up
- anyhow, in association with non-word perceptions. There are
- certain dependencies among the words that are correlated with
- dependencies among the perceptions. (These dependencies among
- the perceptions do not themselves all necessarily have to be
- perceived for such correlation to come about, I believe.) But
- without further constraint they might come out in a way that was
- not English (or the given language). This further constraint
- comes from perceptual control with reference to the syntactic
- structures.
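- 
- A minimal sketch of that division of labor, again mine and with
- invented associations, assuming words simply come up from
- non-word perceptions and a syntactic check filters the result:
- 
-     import itertools
-     import random
- 
-     # Invented associations: non-word perceptions call up words.
-     ASSOCIATIONS = {
-         "furry-thing": ["dog", "the dog"],
-         "running-event": ["runs", "ran"],
-     }
- 
-     def words_for(percepts):
-         # The words come up anyhow, in association with the
-         # non-word perceptions.
-         return [random.choice(ASSOCIATIONS[p]) for p in percepts]
- 
-     def perceived_as_english(words):
-         # Stand-in for perceptual control with reference to
-         # syntactic structure: crudely, subject before verb.
-         return words[0] in ("dog", "the dog")
- 
-     def utter(percepts):
-         # Without the check the words might come out in an order
-         # that is not English; the check is the added constraint.
-         for order in itertools.permutations(percepts):
-             candidate = words_for(list(order))
-             if perceived_as_english(candidate):
-                 return " ".join(candidate)
- 
-     print(utter(["running-event", "furry-thing"]))  # "the dog ran"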
-
- (Penni proposed that you don't even need the syntactic
- structures. This is because some aspects of structure,
- particularly discourse structure, follow from the correlation
- with dependencies among non-word perceptions. You can seem to
- get away with it for simple texts like hers if you build in
- English word order without noticing that you have. A simplified
- sublanguage like this could be of considerable practical value in
- computer applications that restrict dialog or limit the
- complexity of what the user could input.)
-
- With a Phrase-Structure basis, the programs to generate all and
- only the permissible structures needed to provide these reference
- perceptions are quite complicated. They require control of
- perceptions that are quite abstract. It has been argued
- convincingly (translating into PCT idiom) that infants cannot
- possibly develop perceptual control of such complicated and
- abstract perceptions on the basis of generalization and
- abstraction from their observations of language in use around
- them, and that therefore the hairy parts of it must be hard-wired
- in the human genome. This is known as the argument from the
- poverty of the stimulus, or poverty of data.
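- 
- For contrast with what follows, a toy of the sort of abstract
- reference structure at issue -- a recognizer for an invented
- phrase-structure fragment (categories, rules, and lexicon all
- made up; the real programs are far hairier):
- 
-     # Toy fragment: S -> NP VP, NP -> D N | N, VP -> V NP | V
-     RULES = {
-         "S": [["NP", "VP"]],
-         "NP": [["D", "N"], ["N"]],
-         "VP": [["V", "NP"], ["V"]],
-     }
-     LEXICON = {"the": "D", "dog": "N", "cat": "N", "chased": "V"}
- 
-     def parses(cat, words):
-         # Yield what remains of `words` after recognizing `cat`
-         # as a prefix; full recognition leaves [] behind.
-         if words and LEXICON.get(words[0]) == cat:
-             yield words[1:]
-         for expansion in RULES.get(cat, []):
-             rests = [words]
-             for sub in expansion:
-                 rests = [r2 for r in rests for r2 in parses(sub, r)]
-             yield from rests
- 
-     sent = "the dog chased the cat".split()
-     print(any(rest == [] for rest in parses("S", sent)))   # True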
-
- With a word-dependency/reduction basis (operator grammar), there
- are no abstract structures. Start with the word dependencies, as
- above. The programs for reducing the output form of
- low-information words are simple. The programs for doing so in
- conformity with current convention are a bit more complex in the
- places at which they are arbitrary, as are the programs for
- strategies to avoid various pitfalls, but no more complex or
- difficult to acquire than those for many sorts of nonverbal
- control. The ill-defined and variable set of sentences results
- as a byproduct of these several kinds of perceptual control,
- rather than by control with respect to a well-defined set of
- abstract sentence structures.
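- 
- A toy of the word-dependency side, with the one reduction
- invented for illustration: start from an operator-argument
- dependency and linearize, reducing the low-information words in
- the output form:
- 
-     # The operator "red" on the argument "house" -- a word
-     # dependency, with no abstract node labels.
-     LOW_INFO = {"which", "is"}
- 
-     def full_form(op, arg):
-         # Unreduced linearization: "house which is red"
-         return [arg, "which", "is", op]
- 
-     def reduced_form(words):
-         # Zero the low-information words and front the operator:
-         # "house which is red" -> "red house"
-         kept = [w for w in words if w not in LOW_INFO]
-         return list(reversed(kept))
- 
-     s = full_form("red", "house")
-     print(" ".join(s))                  # house which is red
-     print(" ".join(reduced_form(s)))    # red house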
-
- A PCT explanation of language can be undertaken with either
- approach.
-
- > In Operator grammar you need to
- >know what words mean in order to distinguish operators from
- >arguments
-
- No, you don't.
-
- >Suppose I were to give proponents of either side a sentence like
- >"Word1 Word222 Word17 Word9237 Word1403 ...".
-
- No one can develop a grammar on the basis of one sentence. This
- is in no way different from being handed something in a language
- you don't know, say, this sentence from an article on the front
- page of a Greek newspaper on my desk now:
-
- I apoxorisantes i ekdhyoxthendes ipurghi ke poli vuleftes
- thetun evtheos ke aprokaluptos to erotema: pera apo
- linovouleftikus tipus, pu dhinun iperoxi dhio psifon, me
- pya praghmatiki pliopsifia kiverna ti xora o prothipurgos
- k. K. Mitsotakis.
- 
- (Roughly: "The ministers who resigned or were expelled, and many
- deputies, pose the question directly and openly: beyond
- parliamentary formalities, which give a margin of two votes, with
- what real majority does the Prime Minister, Mr. K. Mitsotakis,
- govern the country?")
-
- It is only by finding commonalities across many utterances,
- starting of course with short ones, that one distinguishes word
- classes, and that is on the basis of what can co-occur with what.
- One would find many instances of words ending in -es in proximity,
- often with the word i or tis before them, and others ending in
- -os with o before, and so on. One can identify affixes and words
- and their gross combinatorial possibilities (word classes)
- without any translation.
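- 
- A sketch of that discovery procedure, with an invented toy corpus
- (real work needs many utterances, of course): group words by the
- little words that precede them, consulting no meanings at all:
- 
-     from collections import defaultdict
- 
-     # Invented transliterated "utterances" in an unknown language.
-     CORPUS = [
-         "i ipurghi thetun to erotema",
-         "i vuleftes thetun to erotema",
-         "o prothipurgos kiverna ti xora",
-         "o vuleftis kiverna ti xora",
-     ]
- 
-     # Words that can occur after the same neighbor fall into one
-     # rough class; "#" marks the start of an utterance.
-     classes = defaultdict(set)
-     for utterance in CORPUS:
-         words = utterance.split()
-         for prev, word in zip(["#"] + words, words):
-             classes[prev].add(word)
- 
-     for prev, members in sorted(classes.items()):
-         print(prev, sorted(members))
-     # "i" collects ipurghi/vuleftes, "o" collects prothipurgos/
-     # vuleftis: gross word classes, without any translation.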
-
- With a translation, one can use one's control of one's own
- language (and its metalanguage) as a basis for controlling the
- new one. Naturally, this often results in conflict. Infants are
- not troubled with this sort of conflict, and that, together with
- the fact that they work at it pretty near full time for a couple
- of years, integrated with learning all sorts of perceptual
- control, accounts for the difference between child language
- acquisition, and Genie, and adult second-language acquisition.
-
- The partial order of word dependencies, operators over arguments,
- before imposition of a linear order of words, derives from
- control of meanings (correlation with dependencies among non-word
- perceptions). But one's growing control of what words can be
- placed where in linearized order is based in large measure on
- language-specific convention (for the linearization of the
- operator word with respect to its argument words, plus a few
- analogic extensions). The remainder falls out as a byproduct of
- reductions.
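- 
- In sketch form, the two steps look like this (the order table is
- invented; only the convention, not the dependency, fixes where
- the operator goes):
- 
-     # An operator over its arguments: a partial order, not yet a
-     # word order.
-     dependency = ("sees", ("John", "Mary"))
- 
-     def linearize(dep, convention):
-         # A language-specific convention places the operator
-         # relative to its arguments; the dependency itself does not.
-         op, (a1, a2) = dep
-         orders = {
-             "svo": [a1, op, a2],   # English-like
-             "sov": [a1, a2, op],   # Japanese-like
-             "vso": [op, a1, a2],   # Welsh-like
-         }
-         return " ".join(orders[convention])
- 
-     print(linearize(dependency, "svo"))   # John sees Mary
-     print(linearize(dependency, "sov"))   # John Mary sees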
-
- >we expect the model to work correctly no matter what specific
- >meanings we give to these variables.
-
- >A model of language constructed in the same spirit as the PCT
- >model would not need a lexicon or empirical data on the way
- >specific words are used.
-
- The meanings are non-word perceptions with which words and some
- phrases (speaking loosely) are associated. They are inside the
- model insofar as control involving the non-word perceptions is
- modelled. The "lexicon" comprises the word-perceptions, the
- association of words with non-word perceptions, the control of
- reductions in word shapes (morphophonemics). These are all
- perceptions whose control is modelled. If we model language
- learning, then empirical data are both outside the model and
- inside it. They are
-
- > needed to
- >arrive at such a model, but once the model was constructed it
- >would no longer be cast in terms of specific observations.
- >Instead, there would be underlying principles that apply to any
- >way of hooking symbols to experiences, whatever the symbols and
- >whatever the experiences. In fact I doubt that this underlying
- >model would be a linguistic model at all: it would simply be one
- >application of a single model of perceptual control, the same
- >model that explains all behavior.
-
- I still don't understand why you believe we disagree about this.
- Try again?
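- 
- Perhaps a toy helps: here is what I mean by the lexicon being
- perceptions whose control is in the model (the entries and the
- reduction table are invented):
- 
-     # Word-perceptions associated with non-word perceptions,
-     # plus morphophonemic reductions of word shape.
-     LEXICON = {
-         "not":  {"percept": "negation"},
-         "will": {"percept": "future"},
-     }
-     REDUCTIONS = {("will", "not"): "won't", ("is", "not"): "isn't"}
- 
-     def say(words):
-         # Control of reductions in word shape: reduce adjacent
-         # pairs where convention permits.
-         out, i = [], 0
-         while i < len(words):
-             if tuple(words[i:i + 2]) in REDUCTIONS:
-                 out.append(REDUCTIONS[tuple(words[i:i + 2])])
-                 i += 2
-             else:
-                 out.append(words[i])
-                 i += 1
-         return " ".join(out)
- 
-     print(say(["he", "will", "not", "go"]))   # he won't go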
-
- >It's this concept of an underlying model that seems to me to be
- >missing from both the Operator and the Generative approaches. If
- >the apparent laws of language depend on specific word meanings,
- >they are not laws of language but only happenstance. Any other
- >laws would be just as likely, for all we can explain why they
- >exist.
-
- This concept of an underlying model can be added to either approach.
- The results in operator grammar do not depend on specific word
- meanings; rather, an account of meanings of words (and of
- sentences, and of texts) is a result in operator grammar, an
- account that fits well with PCT. Some aspects of language
- structure are due to correlation of word-perceptions with
- non-word perceptions. Is this what you are rejecting here?
-
- In Generative Grammar, much has been made of the autonomy of
- linguistic form from meaning, though Chomsky has been backing
- away from that recently. Still, it is not clear to me how you
- see that the "apparent laws of language" according to Generative
- theory depend upon specific word meanings.
-
- (Avery Andrews 930128.1120) --
-
- >There's something about the Harrissian approach that I just don't
- >get, & until I either get it or figure out what is wrong with it
- >I have no intention of saying much of anything against it.
-
- >In general, I think that current linguistic theories should be regarded
- >as just being resources that are out there, and that from a PCT
- >perspective, one should just take whatever insights, if any, that
- >they might seem to offer.
-
- Harris has been ignored for 40 years for (academic-)political
- reasons. That's a shame. It makes his work effectively
- unavailable as a resource and makes his insights invisible. If
- you or anyone shows me something that is wrong with it, that will
- be a good outcome. I'll just keep putting it out there and
- trying to make it as vulnerable as possible. If you can outline
- what it is that you don't get, I might be able to help you expose
- it better.
-
- > And, at the moment, I see Motor Control
- >as a *much* higher priority target.
-
- I agree. I don't see practical modelling of language being
- feasible unless control up to category level is modelled or
- simulated. I doubt that practical modelling of control at
- category level or higher is feasible without modelling language
- or covertly smuggling language-borne understandings into the
- model.
-
- Bruce
- bn@bbn.com
-