Webbed Reprint Collection
William H. Calvin
University of Washington
Smaller than a Sentence
What are the basic differences between protolanguage and real language? Let's look at one property of the stringing-words-together process that produces protolanguage. I pointed out a while back that protolanguage characteristically consists almost exclusively of nouns and verbs, without any modifiers -- if adverbs appear, they are usually whole-utterance modifiers, not modifiers of single words. If adjectives appear, they are a few of the more common ones, probably acquired with nouns as unanalyzed chunks, like idioms. But what this means is that all units are of equal value, just as you would expect them to be if they are all hung on the same clothesline.
Put it another way: in protolanguage, all words are equal; like runners in a race, it's every word for itself. But if protolanguage is a footrace, language is a team sport, like football. The teams are phrases, and like any team, not all the players are equal -- there's a captain, and there are just regular players. In language we call these "heads" and "modifiers." You can always tell what the head is by asking what the phrase is about. Is the phrase "a young teacher of algebra from Oklahoma" about a teacher, algebra, or Oklahoma? A teacher, obviously -- all the other words modify the word "teacher."
The way we diagram sentences reflects this. Take "John kissed Mary." This could be either a true-language sentence or a protolanguage utterance. Don't get the idea that protolanguage has to consist entirely of mangled utterances like "John kissed" or "kissed Mary." It will probably contain a majority of these, but there's nothing to prevent something that looks like a proper sentence from popping out now and then (though likely missing that -ed for past tense). The only difference, for reasons we'll get to in a moment, is that it will sound like "John . . . kissed . . . Mary" rather than "JohnkissedMary."
So this is how "John kissed Mary" gets put together in the two modes:
Now if they made you do this sort of thing in school you may well be thinking, "These are just drawings, they don't have anything to do with how sentences are produced." But I think that's wrong. I think that these diagrams really show you what happens in the brain. If the brain is working in protolanguage mode, each word is sent separately to the part of the brain that controls the motor organs of speech, and each word is uttered separately.
When I first arrived in Hawaii, back in 1972, one of the things that struck me most forcibly was the difference in speed between the old-time immigrants who'd come to the island as young adults and spoke pidgin, and their children, born in Hawaii, who spoke creole (which in Hawaii is also called "pidgin," just to confuse things a bit more!). On top of all the other differences in their speech, the old-timers spoke about three times slower than their own kids. For instance, here's an old-timer trying to describe one of those clock/thermometers you often see on the sides of city buildings:
If you've ever been in a foreign country where you spoke only a few words of the language, you'll know how it feels to speak protolanguage -- anguished search for a word, struggle to pronounce it, anguished search for the next word, and so on.
Absolutely. But if the brain is working in language mode, words are put together in whole phrases and clauses and even sentences before they're sent to the speech organs to be pronounced. That's why, when you're speaking your own native language, the words come out like a blue streak.
The second diagram illustrates another important fact. If you take it from the bottom up, rather than the top down, it reflects not just the fact that the brain puts words together but the order in which it does so. That is to say, "kissed" and "Mary" are joined before "John" is joined to "kissed Mary."
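The contrast between the two modes can be sketched in a few lines of code (my notation, not the authors': a flat list for the clothesline of protolanguage, nested pairs for the bottom-up tree):

```python
# Protolanguage mode: a flat string of equal units, every word for itself.
proto = ["John", "kissed", "Mary"]

# Language mode: a tree built from the bottom up -- "kissed" and "Mary"
# are joined into a phrase first, then "John" is joined to that phrase.
vp = ("kissed", "Mary")      # inner phrase assembled first
sentence = ("John", vp)      # subject attached to the whole phrase

print(sentence)              # ('John', ('kissed', 'Mary'))
```

The nesting is the whole point: in the second structure, "kissed Mary" exists as a unit that "John" attaches to, whereas the flat list has no units bigger than the word.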
Which brings us to parsing.
The word "parse" has come in for some pretty vicious abuse lately. As a result of Clinton's impeachment trial, people talk about speakers "parsing" words like "sex" or "alone" in the sense of determining, sometimes quite arbitrarily, how those words should be interpreted. This usage is daft in two ways. First, you can't parse single words -- you can only parse sentences. Second, parsing isn't something speakers do, it's something hearers do. A hearer parses a sentence (quite unconsciously -- unless it's in a syntax class!) by deciding what that sentence's structure is.
Of course, that's not quite the whole story. If I say "Would you mind stopping that noise?" you don't respond by thinking, "Ah! An auxiliary verb followed by a second-person pronoun subject of the main verb 'mind,' followed in turn by a participial verb that takes a noun phrase consisting of noun and determiner for its object," and leave it at that. You parse sentences to find out their meaning. You need to know that I am speaking to you, that I want you to do something, and what it is that I want you to do. I suppose it's this rather indirect link with meaning that folk have taken as license to abuse the poor word.
Anyway, parsing is something we all do every time anything is uttered. But it works quite differently depending on whether what's uttered is language or protolanguage. In fact, if it's protolanguage, it's a good question whether you can be said to parse at all. You can't decide what the structure is if there isn't any structure. What you do is just the second part of the job, trying to determine the meaning directly from the individual words. This of course is much harder than it is when there's structure there to help you. You have to use all your knowledge of who's speaking and what's happening and what the world in general is like in order to figure out what is meant.
Suppose you hear a protolanguage utterance like "John kissed." You might think, that's easy -- all I have to do is figure out who John is most likely to have kissed. But suppose the speaker is a pidgin speaker from Japan. It's possible in that case that the meaning is "somebody kissed John," because verbs come at the end of the sentence in Japanese, and pidgin speakers sometimes (but pretty unpredictably) carry over features of their native languages into their pidgin. This is just one of the many reasons you can't hope to interpret protolanguage without taking lots of context into account (and doing plenty of guesswork, too).
Now take an actual headline I saw in the Denver Post the other day: "Spy Charges Dog Inspectors." You can't understand this sentence unless you get the structure right, and know that "Charges" is here a noun, not a verb; that "Spy Charges" is a subject; and that "Dog" is a verb. Of course you may have first spotted an alternative parse: "Spy" as subject, "Charges" as verb, "Dog Inspectors" as object. If you don't get the structure, you can't get the right meaning.
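The two competing structures can be written out explicitly (a sketch in my own notation; no claim that this is how a real parser would represent them):

```python
# The headline as a plain string -- ambiguous until structure is assigned.
headline = "Spy Charges Dog Inspectors"

# Parse 1: the intended reading -- "Spy Charges" is the subject, "Dog" the verb.
parse_1 = {"subject": "Spy Charges", "verb": "Dog", "object": "Inspectors"}

# Parse 2: the reading many people spot first.
parse_2 = {"subject": "Spy", "verb": "Charges", "object": "Dog Inspectors"}

# Both parses cover exactly the same words in the same order;
# nothing in the string itself chooses between them -- only context can.
for p in (parse_1, parse_2):
    assert " ".join(p[k] for k in ("subject", "verb", "object")) == headline
```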
Here you may reasonably object, "Well, you need context just as much here. If you didn't know that the story under the headline concerned weapons inspectors in Iraq, you might assume that some spy had leveled unspecified charges against people whose job it was to inspect dogs, or had made them pay him some money." That's perfectly true; the headline had me baffled until I looked at the text. But two things make this case very different.
First, you very seldom need context to get the meaning of a true-language utterance, whereas you almost always need context to get the meaning of a protolanguage utterance (when I reread transcripts of pidgin speakers that I myself have recorded and transcribed, I often have no idea what they're talking about, although I can remember they made perfect sense at the time). Second, and much more important, you're using context in quite different ways. With the headline, you're using context to choose between two equally grammatical structures; with protolanguage, you're using context to try and get any meaning at all.
This particular contrast between language and protolanguage shows up best when you look at what linguists call "empty categories." An empty category is where some unit of a sentence isn't overtly expressed. Take a sentence like "Bill wanted to go." "Wanted" has an overt subject, but "go" doesn't have an overt subject, though we know that it must have a subject, and that its subject must be "Bill." Empty categories are rather like protons. You can't see any protons in this page you're reading, but you know they're there because your physics teacher told you so. Your English teacher should have told you the same thing about "missing" subjects and objects, but probably didn't (even though to me they're among the most fascinating things about language, I'm not going to force them on you here; if you choose, you can read more about them in the appendix).
Again, there's a superficial resemblance between language and protolanguage that masks a profound difference. Protolanguage too has "missing" things, such as a missing subject in "kissed Mary" and a missing object in "John kissed." But the antecedents of these empty categories -- the people or things they refer to -- can't be found anywhere in the utterance. To know what those missing items refer to, you have to take into account who and what you're talking about and, on that basis and your general knowledge, you have to work out who or what the speaker is most likely to be talking about. In real language, the antecedent is always there somewhere in the sentence, and there are rules to help you find it.
You can read more about those rules in the appendix. Enough for now to note that you can't just assume that the nearest noun is the antecedent of the empty category. That's true in "Bill wanted to go" and "Bill wanted Helen to go," but not in "Helen was the one that Bill wanted to go." In both the last two sentences, "Helen" is the subject of "go," but in the first she's next to the verb and in the second she's far from it and "Bill" is much nearer. The rules that fix the reference of empty categories are not simple, not obvious, and above all, not consciously applied. You just somehow know that, despite the distance between "Helen" and "go," it's her that, hopefully, will do the going.
Now we come to what's maybe the most crucial difference between language and protolanguage: the existence in the former of phrases and clauses that are entirely absent from the latter. Such intermediate units cause problems. For instance, how are we going to tell where they begin or end? It's easy enough in
The pink shirt is dirty.
It's less easy in
The pink shirt you made me buy is dirty.
and harder still in
The pink shirt you made me buy when we stopped off . . .
The trouble is, a phrase can be indefinitely long, and can include any number of things that might seem, to an outside observer, to have nothing to do with whatever is the head of the phrase.
The only way you can know where things begin and end is by knowing what phrases and clauses are. And, unfortunately for the common-sense, gradual-evolutionist view that maybe first phrases developed, then clauses (or vice versa), the two can only be defined in terms of one another (a phrase without a clause makes almost as little sense as a clause without a phrase).
What this means is that a clause is a clause because it has the right number of phrases ("Fred put his new credit card into his wallet," rather than "Fred put his new credit card," where there is a phrase too few, or "Fred put his sister his new credit card into his wallet," where there is one too many). And a phrase is a phrase because it expresses a participant in the action of the verb and because it occupies a particular position in a clause (say, between the verb and "into his wallet" for "his new credit card"). And the two are even more entangled than that. A phrase can contain a clause, which in turn includes phrases of its own, as in
The pink shirt that you made me buy is dirty.
where "The pink shirt that you made me buy" contains the clause "(that) you made me buy," and where this clause, in turn, contains several phrases (to syntacticians, "you" and "me" are just as much phrases as "The pink shirt" or "The tall blond man with one black shoe" -- a phrase is anything that has a head, regardless of whether that head has any modifiers). The fact that these two units, intermediate between word and sentence, can operate in this way is what gives language one of its most striking characteristics, its infinite recursivity.
In his book The Language Instinct, Steven Pinker refers to what the Guinness Book of Records claimed as the longest English sentence: a 1,300-word monster by William Faulkner beginning "They both bore it as though in deliberate flagellant exaltation . . ." Pinker correctly pointed out that he could break that record by simply writing "Faulkner wrote, 'They both bore it as though in deliberate flagellant exaltation . . .'"
What's happening here is that Pinker is converting Faulkner's 1,300-word monster into a mere phrase, a noun-phrase object whose function is no different from that of "a book" in "Faulkner wrote a book." And as Pinker points out, anyone with ambitions to get into the Guinness Book could do so by adding "Pinker wrote that Faulkner wrote . . ." or "Who cares that Pinker wrote that Faulkner wrote . . ." The process is truly an infinite one, limited only by our shortish immediate memories and the difficulty of making infinite sense.
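This record-breaking recipe is easy to mechanize. A sketch (the function name is mine) of how each embedding turns the previous record-holder into a mere clausal object of a new, longer sentence:

```python
def embed(frame, sentence):
    """Downgrade a whole sentence into the clausal object of a new one."""
    return f"{frame} that {sentence}"

record = "they both bore it as though in deliberate flagellant exaltation"
record = embed("Faulkner wrote", record)   # beats Faulkner's record
record = embed("Pinker wrote", record)     # beats Pinker's
record = embed("Who cares", record)        # and so on, without limit
print(record)
# Who cares that Pinker wrote that Faulkner wrote that they both bore it
# as though in deliberate flagellant exaltation
```

Each call strictly lengthens the sentence while keeping it grammatical, which is all "infinite recursivity" amounts to: the output of the rule is always a legal input to the rule.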
But where did phrases and clauses come from? If they're as closely interlinked as I've suggested, how can one be the hen and the other the egg? All we've seen so far suggests that they were born as twins, and that some third thing has to underlie both phrases and clauses. And indeed it does. That thing is what is known as "argument structure."
When you get down to it, the basic task of language is telling you who did what to whom (as well as when, where, how, and occasionally why). These "WH-words," as linguists call them (although "how" has its W at the wrong end), just about exhaust the questions you can ask -- even in plain old "Yes-No" questions, you're asking WHether something happened or not. We can conclude from this that there's a limit to the number of participants there can be in any action, process, or state. Or at least that there's a limit to the number we can talk about. We can talk about who performed an action, or who underwent it, or to whom it was directed, or for whose benefit it was performed, or when, where, or how it was performed.
But there's no way we can talk directly about who observed it, or who discussed it. If I say "Bill kicked the cat," you know without more ado that Bill performed the action and the cat underwent it. But there's no way I can say anything like "Bill kicked the cat blik me," meaning "Bill kicked the cat observed by me," or "Bill kicked the cat plok us," meaning "Bill kicked the cat discussed by us." Things like that can of course be expressed -- you can express anything in language, given time, patience, and ingenuity -- but they have to be expressed indirectly: "I observed Bill kicking the cat" or "We discussed the fact that Bill had kicked the cat." In other words, we have to downgrade the original sentence into some kind of phrase or clause, then insert it into another clause.
Now you'll have noticed that each of the participants in these states or actions has a specific role to play. There are agents that perform actions, patients or themes that undergo them, goals to which they are directed, and so on. These roles are known as "thematic roles." A thematic role plus the noun phrase to which that role is attached make up what is known as an "argument." And argument structure -- the system that determines when and where arguments can appear in language -- represents the crucial link between word meaning (semantics) and sentence structure (syntax). Not every syntactician would make argument structure central to an account of syntax as it is today. But that's irrelevant. How something started is often very different from what it has become -- for instance, try describing modern computers in the terms appropriate for their ancestors of just forty or fifty years ago.
Before there was syntax, there was only semantics. So, if you are looking for the very first stages in the development of syntax, you have to look in semantics for whatever is the most syntaxlike thing. Argument structure is the most plausible candidate. It involves meaning (the meanings of the thematic roles, agent and so on, and their relation to the verb meaning) but it can be readily mapped onto linguistic output to provide that output with structure, along the lines described below.
The first thing to note is that not all arguments are equal. Some make an obligatory appearance, others only an optional one. It's as if a team had a small core of seasoned players who appeared predictably while the remainder sat on the bench awaiting a call. For instance, if you use the verb "kick," you are obliged to mention a kicker and a kickee. You're not obliged to mention where the kicking was done, or how, or when, or for whom (even if it was done on behalf of someone else), although of course you can whenever you need to. Likewise, if you use the verb "sleep," all you need do is name who slept -- you don't need to say who was slept with, or for how long the person slept. That is to say, every verb demands that a certain number (no fewer than one, no more than three) of the participants be expressed.
Is the fact that verbs are divided into three classes (on the basis of the number of arguments that obligatorily accompany them) a fact of nature or an artifact of analysis? Do all states, processes, and actions in the world fall into one of these groups because of the nature of reality, or does the structure of the human mind impose its own pattern? This is a philosophical issue, and fortunately I don't think we need answer it here. You can be sure, whatever human language you may meet, that the verb equivalent to the English "sleep" will take one obligatory argument, the verb equivalent to "break" will take two, and the verb equivalent to "give" will require three.
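The three argument classes amount to nothing more than a small lookup table plus a well-formedness check, which can be sketched like this (a toy illustration with a made-up four-verb lexicon, not anyone's actual grammar):

```python
# Each verb carries the number of arguments it obligatorily assigns.
ARITY = {"sleep": 1, "kick": 2, "break": 2, "give": 3}

def check(verb, args):
    """Report whether a verb's obligatory arguments are all present."""
    need = ARITY[verb]
    if len(args) < need:
        return f"ill-formed: {need - len(args)} argument(s) too few"
    if len(args) > need:
        return f"ill-formed: {len(args) - need} argument(s) too many"
    return "well-formed"

print(check("give", ["Fred", "his new credit card", "his wallet"]))  # well-formed
print(check("give", ["Fred", "his new credit card"]))                # one too few
print(check("sleep", ["Mary"]))                                      # well-formed
```

Note that the optional participants (when, where, for whom) never appear in the table at all; only the obligatory core is counted.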
You've heard about "false friends" in language learning: words that sound like words in your language but mean something quite different in the other. Well, the division of verbs into three argument classes is a true friend, and like all true friends, seldom fully appreciated and too often taken for granted.
But the importance of argument structure goes far beyond that. If you know how many obligatory arguments each verb takes, and where your language puts them, then you can easily process sentences that would have had you buffaloed if all you had was protolanguage. Take for example a sentence we looked at in the previous chapter: "The boy you saw kissed the girl he liked." Parsing this with the above in mind, we look immediately at the verb "kissed" and know that it must take two arguments. Because the language is English, and because we know the way English maps argument structure onto phrase structure, we know that "kissed" will be followed by a theme (whoever got kissed) and preceded by an agent (whoever did the kissing). But this isn't a simple "X kissed Y," because there are two extra verbs, "saw" and "liked," which should have their own arguments. So you look for these.
Start with "liked." That takes two obligatory arguments, but there's only one there. However, you know that the other must be there, somewhere, even if you can't see it, because the nature of argument structure tells you so. For every invisible argument there's a visible argument in the same sentence that refers to the same person or thing. Often (see the appendix for a more detailed treatment) you'll find that visible argument immediately to the left of the leftmost obligatory argument of the verb you're working on: in this case, "the girl."
Now you turn to the first part of the sentence. Here, again, the verb "saw" should have two arguments but has only one, "you." Again you know it must be there, and must be linked in reference to the argument on the left of the leftmost argument of "saw" ("you"). That argument is "the boy."
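The left-of-the-leftmost-argument heuristic just applied can be written out as code (a deliberately toy version, working on hand-tagged phrases; the real rules are more involved, as the appendix discusses):

```python
# Hand-tagged units of "The boy you saw kissed the girl he liked".
units = [("the boy", "NP"), ("you", "NP"), ("saw", "V"),
         ("kissed", "V"), ("the girl", "NP"), ("he", "NP"), ("liked", "V")]

def antecedent(units, verb_index):
    """For a verb missing its object, return the noun phrase immediately
    to the left of the verb's leftmost visible argument."""
    leftmost_visible = verb_index - 1       # the NP just before the verb
    return units[leftmost_visible - 1][0]   # the NP before that one

print(antecedent(units, 2))   # missing object of "saw"
print(antecedent(units, 6))   # missing object of "liked"
# 'the boy' and 'the girl' respectively
```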
You have successfully parsed "The boy you saw kissed the girl he liked," finding that it contains one main clause and two subordinate clauses modifying the heads "boy" and "girl." And in so doing you have arrived at its correct meaning -- which of course is what the whole exercise is about. It's easy for people who work on syntax to get wrapped up in it and think that maybe it's not just everything, it's the only thing. Of course it's not. It's a mechanism, a means to an end, what allows you to move on to the next task.
But without that means, there wouldn't be an end. Syntax is the magic key that unlocks the floodgates of language, unleashes the irresistible torrent of words that has swept us to where we are today. But where did that key come from, and how did we come by it?
Let me briefly sum up where we're at right now. I've just said that the core of syntax must contain the means for producing phrases and clauses, because these are the indispensable units intermediate between word and complete utterance. These units are indispensable because without them we could not produce true sentences, or indeed any kind of long and/or complex utterance that could be understood. Now phrases and clauses derive from argument structure -- from the fact that verbs can only assign a limited number of arguments, and that every verb falls into one of three classes that assign one, two, or three obligatory arguments, respectively.
Naturally you'll want to know where argument structure came from and how we came to fashion our utterances in the ways that argument structure dictated. But before I can get to that, we'll need to look at what goes on in the brain when we use language.
So, over to you, Bill.
WHC: From what (words to syntax) and why (evolutionary) considerations, it's apparent that we need to know a lot more about how brains categorize an entity or a state of affairs, how this memory is retrieved and linked to others, and how we cope with the inevitable ambiguities. Both emergents (like crystals) and conversions of function (like curb cuts) could, at different times, be part of the story.
We're most accustomed to noun attributes (Derek's fruit with a color attribute, a shape attribute, the sound it makes when falling off the tree, and so forth). But they're all optional -- you'll forgive me if I mention an apple without telling you its color or size. Verbs too have optional attributes, such as time and place, but each verb has one or more obligatory attributes. How that's implemented in the brain is surely a key question.
If I say (as the billboard ads have taken to doing) "Give him," you'll go looking for three noun phrases. You will happily infer that it's an imperative construction and supply the missing "you," but the lack of a noun for the theme will disturb you, and you'll search for what you missed (supplied, in the ad, by a picture or logo). It's a technique for grabbing people skimming over the ad and bringing them to a screeching halt, making them pay attention, thanks to a subconscious process that rings alarm bells. We talk of computers "hanging up," and this is a prime example of a hung psychological process that might give us some clues to the circuitry someday.
By this point, I'm certainly curious about how the brain can do all of this, what circuits constitute the algorithm. I'm not sure that I can fully answer it (please don't ask me where the Empty Categories are located!), but let me creep up on the problem of brain circuits for structuring sentences by introducing language and memory circuits, Darwinian processes, and the brain's long-distance problem. Then we'll be able to speculate more intelligently about what neural machinery might have been co-opted for syntax.
Copyright ©2000 by William H. Calvin and Derek Bickerton