Figure 10.6. A nominal topic gradient and a verbal relation gradient combine rhythmically to generate a sentence.
in outputting the articles a and the or Prof. Plum or Mrs. White. All such detail is suppressed in figure 10.6 in order to more clearly illustrate the essential organizational principle of the topic/relation dipole.
The fundamental order in figure 10.6 is established by the topic gradient
P > W > H > K: Prof. Plum is output first. (This topic gradient is contextual and
transient. It is defined in STM and diagrammed in figure 10.8 with ordered
STM arrows.) After cerebellar deperseveration and bottom-up rebounds have
deactivated P and T, R rebounds into activity. V is then activated. Loc (location)
and Inst (instrument) are “primed” (subliminally activated), because kill has
a learned association with these case roles in long-term memory (LTM). Killed
is output, and V and R are deactivated. The T/R dipole rebounds again. Mrs.
White, the next most active nominal element in the topic gradient, is activated
and output. The T/R dipole then rebounds again. Loc and Inst are both equally
activated by R, since in the learned relation gradient, either could be output
next, but under the current topic gradient, H > K, so Loc is more primed than
Inst, and in is output next.
The T/R dipole rebounds again, and the hall is activated and output. The
T/R dipole rebounds to R, activating Inst and outputting with. Finally, the T/R
dipole rebounds one last time, outputting a knife.
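Here is a minimal sketch, in code, of the walkthrough above. It is not the author's simulation: the activation values, the tie-breaking rule, and the pairing of case roles with their fillers are illustrative assumptions; only the relative ordering P > W > H > K and the verb's priming of Loc and Inst are taken from the text.

```python
# A minimal sketch of the topic/relation (T/R) dipole of figure 10.6.
# Activation numbers and role/filler pairings are illustrative assumptions.

topic_stm = {"Prof. Plum": 4, "Mrs. White": 3, "the hall": 2, "a knife": 1}  # P > W > H > K
verb = "killed"
case_roles = {"Loc": ("in", "the hall"), "Inst": ("with", "a knife")}  # marker, filler

output = []
pole = "T"
verb_done = False
while topic_stm:
    if pole == "T":
        # T phase: output the most active nominal still in STM,
        # then deperseverate (inhibit) it.
        word = max(topic_stm, key=topic_stm.get)
        output.append(word)
        del topic_stm[word]
    else:
        if not verb_done:
            output.append(verb)          # R phase, first rebound: the verb itself
            verb_done = True
        elif case_roles:
            # Loc and Inst are equally primed by the verb; the topic gradient
            # (H > K) breaks the tie, so the role whose filler is more active wins.
            role = max(case_roles, key=lambda r: topic_stm.get(case_roles[r][1], 0))
            output.append(case_roles.pop(role)[0])   # output "in" or "with"
    pole = "R" if pole == "T" else "T"   # the dipole rebounds each foot

print(" ".join(output))
# Prof. Plum killed Mrs. White in the hall with a knife
```

Run as written, the sketch emits each word on an alternating T or R phase and prints Prof. Plum killed Mrs. White in the hall with a knife, as in figure 10.6.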
Pronouns
In chapters 8 and 9, we saw how the performed motor nodes of serial lists are
deperseverated by inhibitory feedback and rebounds. This process seems also
to explain fundamental universal features of pronouns, clitics, and other pronounlike words. Pronouns and related pro-forms are found in all natural languages, and figure 10.7 explains why sentences of the type
Sid hit himself. (10.37)
are universally preferable to sentences of the form
*Sid hit Sid. (10.38)
Figure 10.7 models pronominalization subnetworks where (a) Sid saw Bill, and
then either (b) Sid hit Bill or (c) Sid hit Sid.
For simplicity, figure 10.7a collapses the topic and relation gradients of
figure 10.6 into a single topic-relation gradient. After (a) Sid saw Bill, /bIl/ is
deperseverated, so that in (b) the semantic relation hit (Sid, Bill) is expressed as Sid hit him. (By the same principles, He hit him is also predicted; for clarity, figure 10.7 only diagrams one pronoun.)
Figure 10.7. Simple pronominalization.
Figure 10.7c considers the semantic relation hit (Sid, Sid). After /sId/ is initially pronounced and deperseverated, only the motor plans for /hIm/ and /hImsɛlf/ can be activated without contrastive stress. Although /hIm/ is the more frequent (and so has the larger LTM trace), /hImsɛlf/ is also activated by T₁, the primary topic, Sid. By contrast, in (b), because Bill is not the primary topic, /hIm/ is output. There is a certain similarity between this explanation and the generative notion of traces.
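The competition in figure 10.7 can be sketched the same way. The activation values below are illustrative assumptions (only their relative sizes matter): /hIm/ starts with the larger LTM trace, but /hImsɛlf/ also receives activation from T₁, so the reflexive wins only when the patient is the same, already deperseverated concept as the primary topic.

```python
# A minimal sketch of the pronoun competition in figure 10.7.
# Activation values are illustrative assumptions.

def choose_proform(patient, primary_topic):
    # LTM trace: /hIm/ is the more frequent form, so it starts out stronger.
    activation = {"/hIm/": 1.0, "/hImsɛlf/": 0.8}
    # The reflexive is also fed by T1, the primary topic, but only when the
    # patient is coreferential with that (already deperseverated) topic.
    if patient == primary_topic:
        activation["/hImsɛlf/"] += 0.5
    return max(activation, key=activation.get)

print(choose_proform(patient="Bill", primary_topic="Sid"))  # /hIm/     -> "Sid hit him"
print(choose_proform(patient="Sid", primary_topic="Sid"))   # /hImsɛlf/ -> "Sid hit himself"
```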
When a linguistic element was moved, generative linguists believed that it left behind a residual trace. Thus, in 10.39 a trace tᵢ of Neilᵢ was believed to remain in the embedded clause of the surface structure:
Neilᵢ was believed [ tᵢ = Neilᵢ ] to have destroyed the evidence. (10.39)
Traces explained why, after hearing 10.39, one can reply Neil without any hesitation to the question Who might have destroyed the evidence? But since nothing moves, adaptive grammar analyzes this trace as simply a “null pronoun,” the completely inhibited motor plan of its antecedent.⁹
The Scope of Negation
Adaptive grammar also offers an explanation of the “scoping of negation.”
Consider 10.40, for which four interpretations (10.41–10.44) are possible:
John didn’t eat the pizza quickly. (10.40)
John didn’t (NEG eat the pizza quickly). (10.41)
John didn’t (NEG eat) the pizza quickly. (10.42)
John didn’t eat the (NEG pizza) quickly. (10.43)
John didn’t eat the pizza (NEG quickly). (10.44)
Example 10.41 interprets NEG as negating the entire scope of the verb
phrase eat the pizza quickly, but it is more likely that John did eat the pizza—he just
didn’t eat the pizza quickly. Examples 10.42 and 10.43 are possible readings,
but normally would be spoken with contrastive stress on the italicized words.
The normally preferred specific reading is that quickly is being negated (10.44),
and this pattern is common enough that Ross (1978) proposed a “rightmost
principle of negation,” which assigns negation to the final constituent of a sentence. Adaptive grammar makes a similar analysis. In 10.40, quickly, eat, and pizza
are all activated in STM and so are potential “attachment points” for NEG. At
the end of the sentence, NEG would be applied globally, presumably as a burst
of nonspecific arousal (NSA), and the least-activated conceptual subnetwork, that
which encodes the newest information, is rebounded.
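The attachment step can be sketched as follows; the STM values are illustrative assumptions, chosen only so that earlier, more topical material is more active and the newest constituent is least active.

```python
# A minimal sketch of NEG attachment. STM values are illustrative assumptions.

stm = {"eat": 0.9, "pizza": 0.7, "quickly": 0.4}   # state at sentence end

def attach_neg(stm_activations):
    # The burst of nonspecific arousal rebounds the least-activated
    # conceptual subnetwork, i.e., the newest information.
    return min(stm_activations, key=stm_activations.get)

print("NEG attaches to:", attach_neg(stm))   # quickly  (reading 10.44)
```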
But once NEG is encountered in the sentence, how is NSA suppressed until
the end of the sentence? Is there a pushdown-store automaton in the human
brain after all? And when NSA is finally released, how is it constrained so as to
rebound only the rightmost element? Adaptive grammar has answers to these
questions, but they are not syntactic. They must wait until chapter 12.
Questions: Extraction and Barriers
Finally, we return to the questions raised by sentence 2.2/10.18.
Is₂ the man who is₁ dancing p₂ singing a song? (10.18)
Generative linguists thought the generation of 10.18 involved (a) the extraction of an element (is₂) from one place (p₂), (b) its “movement” to another place, and (c) an elaborate set of principled “barriers” which would, for example, prevent is₂ from moving to the front of the sentence. Figure 10.8 accounts for 10.18 without recourse to metaphors of movement.
English yes/no questions like 10.18 are initiated by an auxiliary verb. English Aux and related modal verbs carry the epistemological status of a proposition (Givón 1993). In English, this association between epistemological status (? in figure 10.8) and Aux is learned as part of the grammar, so in figure 10.8, LTM traces order Aux before the rest of the sentence, S. After Is is output, the Aux-S dipole rebounds, and S initiates activation of the T/R dipole at T. (The dashed LTM trace from S to R suggests that in VSO languages, if in fact there are such, S can learn to initiate activation of the T/R dipole at R.) Thereafter, the T/R dipole oscillates in phase with the foot dipole of chapter 9. As was mentioned in the discussion of figure 10.6, T and R need not rebound on every foot.
Figure 10.8. Generation of questions and relative clauses.
The first nominal concept to be activated, N₁, is the topic, man. All substantives can be phonologically realized as either a phonological form Φ or Pro. For simplicity, figure 10.8 only diagrams Φ and cerebellar deperseveration for the instance of man (i.e., /mæn/). At t₂, /mæn/ is output and deperseverated. Now the relative clause Sᵣₑₗ is activated. This activation is displayed with an STM arrow because relative clauses are not always attached to nominals. (The ordering of relative clauses, however, is language-dependent and must be learned at LTM traces, which, for simplicity, are not diagrammed in figure 10.8.)
The relative clause, Sᵣₑₗ, (re)activates the sentential rhythm dipole at T. In this case, the topic of the embedded relative clause is also the nominal concept man. Since Φ has been deperseverated, Pro now becomes active, and who is output at t₃. At t₄, the dipole switches back to R. Aux and V are activated and is dancing is output.
Bottom-up deperseveration and rebounds now deactivate Vp, Sᵣₑₗ, and N₁. The top-level dipole rebounds to R, and the top-level Vp is activated. Aux, however, has already been performed and deactivated, so V = singing is output at t₅. Finally, the top-level T/R dipole rebounds back to T; N₂ is activated and a song is output at t₆.
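Read as a timeline, the walkthrough can be restated as a short trace. The tuples below only transcribe the steps just described; the label t1 for the initial output of Is is an assumption, since the text numbers the steps from t₂.

```python
# A compact restatement of the figure 10.8 walkthrough as a trace.
# Nothing new is assumed beyond the step labels themselves.

trace = [
    ("t1", "Aux",           "Is"),          # Aux ordered first by the learned ?/Aux LTM trace
    ("t2", "T (N1 as Phi)",  "the man"),     # topic nominal output, then deperseverated
    ("t3", "T (N1 as Pro)",  "who"),         # Phi inhibited, so the pro-form is output
    ("t4", "R (Aux + V)",    "is dancing"),  # embedded Aux and V
    ("t5", "R (V only)",     "singing"),     # top-level Aux already performed and deactivated
    ("t6", "T (N2)",         "a song"),
]

for t, source, words in trace:
    print(f"{t}: {source:15} -> {words}")
print(" ".join(words for _, _, words in trace) + "?")
# Is the man who is dancing singing a song?
```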
Having dispensed with the need for generative linguistics in this chapter, I should
close by crediting generative theory with anticipating many of the key elements
of adaptive grammar. Figure 10.8, for example, builds on generative trees, which
were generally correct in their structure, if not in their operation. Generative
linguistics also correctly predicted the existence of an “abstract, autonomous”
grammar, a relational system which functions quite independently of “real-world,
substantive” cognition. However, the generative assumption that sentences are
generated by movement proved a bad choice of metaphor. Nothing moves. Language needs relevance, and syntax is ordered by topicality. To be useful for survival, grammar must relate to a topic; otherwise, it has no meaning.
• ELEVEN •
Truth and Consequences
Consider what effects, that might conceivably have practical
bearings, we conceive the object of our conception to have.
Then, our conception of these effects is the whole of our
conception of the object.
C. S. Peirce, the Pragmatic Maxim,
from “How to Make Our Ideas Clear” (1878)
In the last chapter we saw that the topic—that which we are talking about—
plays a privileged role in ordering our unfolding motor and language plans,
our sentences. But some topics never seem to arise. For example:
The King of France is bald. (11.1)
You think therefore I am. (11.2)
The human race has never existed. (11.3)
Every bachelor is an unmarried man. (11.4)
One would be very surprised to stray into a discussion on one of these topics at
a cocktail party. As we first noted in connection with 11.1, the problem seems
to be not so much that such sentences are false as that they are simply void.
They are meaningless. Even 11.4, which is very, very true, is very, very trite.
While it is easy to say that sentences like 11.1–11.4 are meaningless and
that topics must be meaningful, it is quite a bit more difficult to clarify just what
makes an idea meaningful, as Peirce’s above attempt illustrates.¹ So let us first try to clarify Peirce. Consider the following sentences:
Hands up or I’ll shoot! (11.5)
Global thermonuclear war will begin any minute. (11.6)
Unlike sentences 11.1–11.4, these sentences bear on matters of life and death.
Presumably, they have a great deal of what Peirce would call “practical bearing.” Being in the “future tense,” neither one would be strictly True before the
fact, but either would, in sincere context, be Very Meaningful. Truth and
Meaning are not necessarily the same thing.
Sentences 11.5–11.6 are over-the-top, “Hollywood” examples of Meaning,
and as a philosopher of science, Peirce would no doubt have found them crass.
Only in a footnote to a later (1893) edition of his essay did Peirce deign to
give popular expression to his “Pragmatic Maxim”:
Before we undertake to apply this rule, let us reflect a little upon what it implies. It has been said to be a skeptical and materialistic principle. But it is only an application of the sole principle of logic recommended by Jesus: “Ye may know them by their fruits,” and it is very intimately related with the ideas of the Gospel. We must certainly guard ourselves against understanding this rule in too
individualistic a sense. (Peirce quoted in Wiener 1958, 181n)
The too-individualistic sense against which Peirce warns us was William James’s sense of pragmatism. Born the first son of a wannabe Harvard professor (Henry James, the elder), William James succeeded where his father had not. In that previous heyday of American capitalism at the turn of the last century, James popularized Peirce’s notion of pragmatism with movie-marquee rhetoric: “the cash-value of true theories,” “truth is what works.” Blessed with this clear (some would say pandering) style, James succeeded in becoming a Harvard professor, celebrated as the “Father of American Pragmatism.”
By contrast, Peirce was the precocious son of a respected Harvard mathematics professor. He no longer aspired to status. In his 1859 Harvard class book
he inscribed the following:
1855 Graduated at Dixwell’s and entered College.
Read Schiller’s Aesthetic Letters & began the study of Kant.
1856 Sophomore: Gave up the idea of being a fast man and
undertook the pursuit of pleasure.
1857 Junior: Gave up the pursuit of pleasure and undertook to
enjoy life.
1858 Senior: Gave up enjoying life and exclaimed “Vanity of
vanities!”
Disdainful of vanity, Peirce was an intensely original thinker whose writing
seems always contorted to avoid the popular clichés of his day. No member of
the Get-along-Gang, Peirce was dismissed as arrogant and was little appreciated
in his own time. For many years, history regarded Peirce’s students and colleagues (including John Dewey, E. L. Thorndike, and his sometimes-antagonist Josiah Royce) more highly than Peirce himself. Had it not been for the patronage of the powerful and influential James, it is possible that Peirce’s work would have been totally lost. But as it happened, James’s patronage was also patronizing, and his popularization of Peirce’s pragmatism with overly simplistic formulae like “the true is the expedient” and “faith in a fact helps create the fact”
would have been plagiarism had it been more astute.
In Peirce’s view, James confused Truth and Meaning. Meaning resides in
the practical consequences of the objects of our conception, but what we find
meaningful may not be True. We are fallible. This insistence on “fallibility” led
Peirce to rename his philosophy “pragmaticism, which [is a term] ugly enough
to be safe from kidnappers” (Peirce 1905). As it happened, the times found
James’s “truth pays” more appealing than Peirce’s Jesus. “Truth pays” had more
“cash value.” Despite James’s patronage, Peirce died a failure by Hollywood
standards, impoverished and forgotten.
To be fair, we should note that from a psychologist’s perspective James’s jingles were perhaps defensible definitions of workaday truth, of the rationalizations and convenient fictions of everyday psychopathy. The difference between Truth and Meaning may be less of quality than it is of quantity. I suspect Peirce would not have objected so strongly if James had said, “What works for a long time is true.” James was a psychologist of his day, but Peirce was a scientist, and in the scientific ideal, eternal truths work eternally. The problem is
that even in science revolutions occur. An Einstein detects a small wrinkle in
space-time, and suddenly the entire edifice of Newtonian mechanics is reduced
to a convenient fiction of workaday physics. Science’s quest for long-term
replicability is certainly noble, but for the individual (and sometimes for the
species), survival often comes down to short-term, lower case, Jamesian truths.
If we can’t have truth, we must settle for meaning.
Truth and Survival in Science
In his classic study of scientific revolutions, Kuhn’s central example was the
Copernican Revolution (Kuhn 1957, 1962). He paints a picture of licensed Ptolemaic astronomers doodling with epicycles, while outside the ivied halls of the scientific establishment, Copernicus was meticulously noting small discrepancies in measurements and creating the future science of the cosmos. Kuhn examines the historical and sociological dynamics of these paradigm shifts in engaging detail, but for my money, he doesn’t sufficiently credit economics. The “cash
value” of Copernicus’s theory wasn’t in its Truth but in its Meaning.
In the fifteenth century, the expansion of maritime trade led intrepid sailors to challenge the popular notion of a flat Earth. Fifty years before Copernicus’s text was published in 1543, Columbus had already reached the East by sailing West, and twenty years before Copernicus, Magellan’s expedition had already circumnavigated the globe (1522). To be sure, Ptolemy thought the Earth was spherical, and the heliocentric system did not directly improve navigation, but it was still the prospect of riches from world trade and the accompanying need for improved navigation by the stars that paid the salaries of Ptolemaic and Copernican astronomers alike. Columbus and Magellan were the ones who conducted the empirical experiments with practical consequences. To paraphrase James, the meaningful
theory was what people would buy. By 1543, no one was buying Ptolemy, so
Copernicus could publish De revolutionibus orbium coelestium, claiming what experience had found meaningful to also be True. This is what got Galileo into trouble with the Church.² The Earth could be round and go around all it wanted, and
the Church didn’t really care how much money merchants made thereby; it only
cared that the heliocentric universe not be declared an Eternal Truth.
Cash value and Truth have been confused in linguistics, too. For Plato and
Aristotle, linguistics may have been basic research into eternal truths, but for
the Holy Roman Empire, linguistics had practical consequences. It meant language teaching and language learning: teaching and learning the Greek of
Scripture and the Latin of the Church. Grammar was a core course of the
medieval trivium, and linguists were primarily language teachers . . . at least
until the Reformation.
The Reformation was as much a linguistic revolution as it was a social,
political, and religious revolution. Luther’s original Ninety-Five Theses (1517)
are now largely forgotten, but his translation of the Bible into German (1534) remains a cultural bible.³ Coupled with Gutenberg’s invention of the printing press (ca. 1456), the mass-produced Lutheran Bible soon
had God speaking directly to the people—in German. Job prospects became
bleak for Latin and Greek teachers in Germany.
Although German had a Bible, it still lacked the cultural history and prestige the Romance languages had inherited from Latin. But after Jones’s theory of evolution (chapter 2), a new generation of linguists set to work reconstructing an earlier Germanic language, a sister to Latin, Greek, and Sanskrit. After
Napoleon’s demise, this newly discovered classical pedigree became German
nobility’s title to empire, and while demand may have dwindled for Latin and
Romance-language teachers, the aspiring young German philologist could hope
for a court appointment to study Germanic and “Aryan.” One such aspiring
young philologist was Jakob Grimm. In 1808, Grimm was appointed personal
librarian to the king of Westphalia. Germanic, unlike Latin and Greek, had
left no written literature from which it could be reconstructed, so Grimm and
his younger brother, Wilhelm, studied Germanic oral literature. In 1812, they
published their first collection of fairy tales, Kinder- und Hausmärchen (Children’s
and Home Tales). In 1830, Jakob and Wilhelm Grimm were given royal appointments to the University of Göttingen. Germany was no longer a third-world
country, and the Brothers Grimm were no longer publishing fairy tales. By 1835,
they had published Die deutsche Heldensage and Deutsche Mythologie (German Hero
Sagas and German Mythology).
At the same time that philology was being celebrated in Germany, linguists
were still being employed as language teachers in the United States. Needing
a steady influx of immigrants to settle the frontier and expand labor-intensive
industry, the young nation founded “grammar schools” which employed linguists to teach English as a second language (ESL)⁴ in a New World trivium of readin’, writin’, and ’rithmetic. In the United States, bilingualism had practical bearings, and language teaching was meaningful. It remained meaningful
until World War I limited immigration and the rise of communism discredited
bilingualism. To please their patrons and prove their patriotism, Americans
became monolingual, and soon language teacher–linguists were no longer
needed in the New World either.
After World War II and Hitler’s appropriation of the term “Aryan,” the job
market for philologists collapsed. But as the world’s only surviving economy,
the United States suddenly found itself an international power. United States
soldiers returning home from the war reported with surprise, “No one in Europe speaks English!” Within a decade, study of modern foreign languages
became required in every U.S. college and high school. At the same time, the
“baby boom” produced a 40% increase in the U.S. birthrate. Eventually, the
baby boom became a student boom, and the demand for linguists to teach
foreign languages redoubled. Suddenly, linguists could get jobs again.
Leadership in this new, foreign-language teaching movement came from
linguists trained in the incompatible methods of philology (the comparative
method and the contrastive analysis hypothesis) and psychology (habit formation and interference). As crude as those methods seem today, I still remember my first pattern practice drill in German:
Willi: Was gibt es denn zum Mittagessen? (“So what’s for lunch?”)
Hans: Wahrscheinlich Bratwurst. (“Probably bratwurst.”)
Willi: Ich habe Bratwurst nicht gern. (“I don’t like bratwurst.”)
But when the first cohort of multilingual U.S. students and I went abroad, eager
to strike up conversations about bratwurst, we found that everybody else in the
world had already learned English!
Almost simultaneously, oral contraceptives were invented and the baby
boom became a baby bust. Within a generation, English became the lingua
franca of the “new world order.” In the United States, there was suddenly no
longer a pressing national need for foreign languages. Before long, colleges
and universities had removed their foreign-language requirements. Soon there
were few foreign-language students, and there were fewer jobs for foreign-language teachers.⁵ Fortunately, there were other job opportunities for American linguists, but they were top secret.
At the heart of German war communications in World War II was the Enigma
Machine. The Enigma Machine was a kind of cryptographic cash register which
took in a message, letter by letter, and then, by a complex system of gears, put
out an elaborately transformed and encrypted code. For example, if today were
Tuesday and e were input as the 1037th letter of the message, then x might be
the output code. To defeat Germany, the Allies needed to defeat the Enigma
Machine, and they needed to do it fast. As it happened, in 1936 Alan Turing
published a paper which mathematically described a universal cryptographic
cash register, one which could be configured to emulate any kind of real cryptographic device. With the outbreak of hostilities, the cash value of Turing’s theory skyrocketed. The Germans’ Enigma Machine was a “black box”: from enemy actions, cryptographers could see what had gone in, and from intercepted enemy radio messages they could see what had come out, but they couldn’t see how it did it. The black box had to be “reverse-engineered.” To that end, the Allies immediately began a major war program to build a “Turing machine” which could emulate the Germans’ Enigma Machine. At the end of the war, the Turing machine was upstaged by the atomic bomb, but the generals knew that the triumph of the Allies was in large measure the triumph of the Turing machine and of a new linguistics, a computational linguistics.
In 1949, on behalf of the U.S. military and espionage establishments,
Warren Weaver of the Rand Corporation circulated a memorandum entitled
“Translation” proposing that the same military-academic complex which had
broken the Enigma code redirect its efforts to breaking the code of the Evil
Empire, the Russian language itself. Machine translation became a heavily
funded research project of both the National Science Foundation and the
military, with major dollar outlays going to the University of Pennsylvania and
the Massachusetts Institute of Technology. In 1952, Weaver outlined a strategy
before a conference of these new code-breakers. The strategy was to first analyze, or parse, Russian into a hypothetical, abstract, universal language, which
Weaver called machinese, and then to generate English from this machinese. At
MIT, the machine translation effort became organized under the leadership
of Yehoshua Bar-Hillel, and in 1955, Bar-Hillel hired a University of Pennsylvania graduate student who just happened to have written a dissertation outlining a theory for generating English from machinese. His name was Noam Chomsky, he called machinese “deep structure,” and his theory was “generative grammar.”
By 1965, however, Bar-Hillel had despaired of achieving useful machine
translation. The main problem was that the MIT Russian parsing team had “hit
a semantic wall.” It never succeeded in producing deep structures from which
Chomsky’s theory could generate English. In describing this semantic impasse,
Bar-Hillel noted how hard it would be for a machine to translate even a simple
sentence like
Drop the pen in the box. (11.7)
The problem was meaning. The problem with 11.7 was that drop, pen, and box all have three or more senses (meanings with a small m). Theoretically, some 3³ (= 27) different sentences could be generated from a deep structure containing just those three substantive terms. Consider for example *11.8 and 11.9:
*Drop the pen in the[det] box[verb]. (11.8)
?Drop the pen[playpen]/pen[ballpoint] in the box[trailer]/box[container]. (11.9)
Sentence *11.8 is fairly simple to solve. It can be rejected as an ungrammatical sentence by a simple generative grammar rule, something like a verb may not immediately follow a determiner. But 11.9 is more problematic. In 11.9, both pen and box are nouns. Each could be translated by two different Russian words, but how was a poor computer to know which one was the right one? Ostensibly for this reason the U.S. government gave up on machine translation in 1966
(ALPAC 1966). In point of fact though, the reason was more economic. As evil
as the Evil Empire might have been, the United States was at the time gearing
up its war on Vietnam, and Russian-English machine translation was not going
to be of immediate help. Some research had to be sacrificed for the war effort.
Physics, computer science, and mathematics all took one step back, and linguistics was volunteered.
Two years later, in 1968, three discoveries obviated the ALPAC report’s
criticism of machine translation: (1) Bobrow and Fraser’s description of the
augmented transition network, (2) Fillmore’s case grammar, and (3) Quillian’s
semantic networks.
In fact, Bar-Hillel was much too skeptical. Chomsky (1965) had already made considerable progress on sentences like 11.9 with his work on selectional restrictions. We can drop a pen[ballpoint] into a box[container], but we can’t drop a pen[prison] into a box[container]. If our friendly neighborhood lexicographer were to define pen[ballpoint] to have the feature +object and pen[prison] to have the “semantic feature” +institution, then a simple grammar rule restricting drop to the selection of a direct object which was either +object or –institution would reject *11.10:
*Drop the pen[prison] in the box. (11.10)
This is a kind of agreement rule. We could say the semantic features of the
verb must agree with the semantic features of the direct object, but this solution
still posed several technical problems. The first problem was finding a way to
compute agreement between separated phrases—so-called long-distance dependencies. Within just two years of the ALPAC report, Bobrow and Fraser (1969) solved the general problem of long-distance dependencies with the augmented transition network (ATN). Whereas the lambda calculus and the pushdown-store automaton had two Turing machines working together, the ATN formalism had three: one for program, one for data, and one for agreement.
A second problem arose when the verb and the direct object underwent a
passive transformation, as in 11.11:
The pen was dropped in the box. (11.11)
In this case, semantic agreement needs to be enforced between subject and
verb, not between verb and direct object. This problem was also solved in 1968
by Fillmore’s case grammar, which as we saw in chapter 10 replaced terms like
“subject” and “direct object” with terms more appropriate to computing semantic agreement on selectional restrictions, terms like “actor” and “patient.”
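A minimal sketch of such a feature-agreement check, stated over case roles so that the active 11.7 and the passive 11.11 are handled by the same rule. The feature inventory, lexicon entries, and the function and table names are illustrative assumptions, not Chomsky's or Fillmore's actual formalism.

```python
# A minimal sketch of selectional restrictions checked over case roles.
# Features, lexicon entries, and names are illustrative assumptions.

LEXICON = {
    "pen_ballpoint": {"+object"},
    "pen_prison":    {"+institution"},
    "box_container": {"+object"},
    "box_trailer":   {"+object"},
}

# drop requires its PATIENT to be a physical object, not an institution.
SELECTION = {"drop": {"patient": {"required": "+object", "forbidden": "+institution"}}}

def acceptable(verb, roles):
    """roles maps case roles (actor, patient, ...) to lexicon senses."""
    for role, constraint in SELECTION.get(verb, {}).items():
        features = LEXICON[roles[role]]
        if constraint["required"] not in features or constraint["forbidden"] in features:
            return False
    return True

# 11.7   Drop the pen in the box.         (patient = ballpoint pen)
print(acceptable("drop", {"patient": "pen_ballpoint"}))   # True
# *11.10 Drop the pen[prison] in the box.
print(acceptable("drop", {"patient": "pen_prison"}))      # False
# 11.11  The pen was dropped in the box.  (passive: surface subject is still the patient)
print(acceptable("drop", {"patient": "pen_ballpoint"}))   # True
```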
Finally in 1968, Quillian published his ideas on semantic networks. The gist
of Quillian’s idea is illustrated by figure 11.1. Pen has (at least) three senses.
Pen₁ is a tool. This is represented in figure 11.1 with an ISA link from pen₁ to tool. In figure 11.1, a tool also ISA instrument, which illustrates the easy linking of a semantic network to case grammar. Pen₂ ISA enclosure, as is pen₃. Pen₁, however, is FOR writing, while pen₂ is FOR animals and pen₃ is FOR criminals.
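The ISA and FOR links of figure 11.1 can be sketched as a tiny graph. The node and link names follow the description above; the query helper and output format are illustrative assumptions about how such a network might be traversed.

```python
# A tiny sketch of the figure 11.1 semantic network.
# The isa_chain helper is an illustrative assumption.

ISA = {
    "pen1": "tool", "tool": "instrument",      # pen1 ISA tool, tool ISA instrument
    "pen2": "enclosure", "pen3": "enclosure",  # pen2 and pen3 ISA enclosure
}
FOR = {"pen1": "writing", "pen2": "animals", "pen3": "criminals"}

def isa_chain(node):
    """Follow ISA links upward from a node."""
    chain = [node]
    while chain[-1] in ISA:
        chain.append(ISA[chain[-1]])
    return chain

for sense in ("pen1", "pen2", "pen3"):
    print(sense, "ISA", " ISA ".join(isa_chain(sense)[1:]), "| FOR", FOR[sense])
# pen1 ISA tool ISA instrument | FOR writing
# pen2 ISA enclosure | FOR animals
# pen3 ISA enclosure | FOR criminals
```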