Another pragmatic obstacle is syntactic ambiguity. There’s a scene in T2 where John tells the T-101, “You can’t keep going around killing people!” This sentence is syntactically ambiguous because, given the way the words are arranged, there are at least two acceptable interpretations of it. On the more natural reading, John is claiming that it is not permissible for the Terminator to kill people. On a slightly less natural reading, John is claiming that it is not permissible for the Terminator to go around and kill people, though it is permissible for the Terminator to kill people as long as he’s not going around while he kills. If the Terminator is going to understand this sentence, it must disambiguate it. But it’s important to note that each reading is acceptable given the Code Model because there is nothing contained in the sentence itself that would support one interpretation over the other.
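 
To make the point vivid, here is a minimal sketch in Python, offered entirely on our own initiative: the decode function and the two hand-listed readings are illustrative assumptions, not anything drawn from the films or from a real parser. The moral is simply that a Code-Model decoder can list the acceptable readings but finds nothing in the signal to choose between them.
 
    # A toy Code-Model decoder: given only the words, it can enumerate the
    # syntactically acceptable readings, but it has no basis for ranking them.
    def decode(sentence):
        readings = {
            "You can't keep going around killing people!": [
                "It is not permissible for you to kill people.",
                "It is not permissible for you to go around and kill people "
                "(killing while not going around would be fine).",
            ],
        }
        return readings.get(sentence, [])

    for reading in decode("You can't keep going around killing people!"):
        print(reading)
    # Both readings come back; the sentence alone cannot settle which one John meant.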
 
Two more pragmatic issues with natural language are referential ambiguity and underdetermination. A sentence exhibits a referential ambiguity when it is not clear from the meanings of the words of the sentence all by themselves what a word or phrase in that sentence refers to. Recall the scene late in the movie where the T-1000 is chasing John, Sarah, and the Terminator in a tractor-trailer carrying liquid nitrogen. The gang is in a junker that they stole from a man on the street. The T-1000 is gaining on them, and John screams, “Step on it!” In this sentence, the word “it” is referentially ambiguous. Naturally, we all know that the “it” refers to the gas pedal of their vehicle. John wants the Terminator to step on the pedal and speed up the vehicle. Again, there’s nothing in the signal that provides the referent of “it,” so the Code Model doesn’t explain how the Terminator is supposed to understand what “it” refers to.
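 
A sketch in the same toy Python style shows the problem (the little lexicon, the context dictionary, and the resolve_it heuristic are all our own hypothetical additions): word-by-word decoding leaves “it” dangling, and only facts from outside the sentence can supply a referent.
 
    # Word-by-word decoding of "Step on it!" leaves the pronoun unresolved.
    words = "Step on it!".rstrip("!").split()               # ['Step', 'on', 'it']
    lexicon = {"step": "place foot on", "on": "on", "it": "<some salient thing>"}
    print([lexicon[w.lower()] for w in words])               # "it" has no referent yet

    # An inferential resolver leans on facts that are nowhere in the signal.
    context = {"salient_goal": "outrun the T-1000",
               "salient_objects": ["gas pedal", "steering wheel", "radio"]}

    def resolve_it(context):
        # Hypothetical heuristic: pick the salient object that serves the goal.
        if context["salient_goal"] == "outrun the T-1000":
            return "gas pedal"
        return None

    print(resolve_it(context))                               # gas pedal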
 
Underdetermination occurs when a fully decoded sentence doesn’t provide enough evidence on its own to figure out what a speaker means. Even if you had a souped-up version of the Code Model, one that could resolve the above ambiguities, underdetermination might still be a problem. To see why, consider the scene discussed earlier when Sarah, John, and the T-101 are driving out to their gun supplier and they want to avoid the police. Sarah, not wanting the Terminator to speed, says to the Terminator, “Keep it under 65.” What, in this situation, does “under 65” mean? If the Terminator were just retrieving from its neural net processor the individual meanings of “under” and “65,” it would be very hard to see what Sarah is asking for. Does Sarah mean that the Terminator should keep the car under 65 years old? Under 65 pounds? Under 65 dollars? Under 65 degrees? These options make little or no sense. Again, we all understand that Sarah wants the Terminator to keep the car’s speed under 65 miles per hour. But the words “speed” and “miles per hour” are nowhere to be found in her sentence. What Sarah means is underdetermined by the sentence because, even though we humans easily understand what Sarah meant, absolute clarity would require us to add more to the sentence than the words provide.
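 
The same kind of sketch makes the gap explicit (the complete_utterance rule and the list of candidate dimensions are, again, our own illustrative assumptions): the decoded words fix the number but not what it measures, and only the hearer’s picture of the situation can supply the missing dimension.
 
    # "Keep it under 65": the words deliver the number, not the dimension.
    def complete_utterance(number, context, candidates):
        # Hypothetical completion rule: let what the hearer knows about the
        # situation pick a dimension; otherwise the utterance stays incomplete.
        if context.get("activity") == "driving" and "police" in context.get("worries", []):
            return f"keep the car's speed under {number} miles per hour"
        return f"keep <something> under {number} <one of: {', '.join(candidates)}?>"

    candidates = ["years old", "pounds", "dollars", "degrees", "miles per hour"]
    print(complete_utterance(65, {"activity": "driving", "worries": ["police"]}, candidates))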
 
Skynet Doesn’t Want Them to Do Too Much Thinking: The Inferential Model
 
T-101: Skynet presets the switch to read-only when we’re sent out alone.
 
Sarah: Doesn’t want you doing too much thinking, huh?
 
 
To understand most sentences in a natural language, we must overcome some or all of these pragmatic obstacles. If the Code Model were accurate, hearers wouldn’t be able to make sense of utterances involving any pragmatic features because this model requires only that a hearer process the words of an utterance. How would we interpret spoken words if we were simply left with the words and the word order of the sentences alone? If people were programmed to interpret others’ utterances in just this way, we would have a really hard time understanding one another. But, in real life, we seem to solve these pragmatic problems of ambiguity and underdetermination, and we communicate with ease.
 
So if the Code Model is inadequate, what linguistic communication theory best mirrors what we in fact do? In place of the Code Model, many philosophers of language and linguists have advocated the Inferential Model of communication.[8] Here, having the lexicon and syntax of a language is not enough to figure out what speakers mean when they are communicating. Instead, hearers must use this information as one piece of evidence among many other pieces of evidence, to infer what speakers mean. It’s not a matter of unpacking information from a signal; it is a matter of working out what a speaker means by appealing to a wider context like shared knowledge and assumptions in addition to the meanings of words.
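 
Sticking with Sarah’s “Keep it under 65,” a toy sketch of the difference might look like this (the evidence list, the candidate meanings, and the crude scoring rule are all our own constructions, not a claim about how any real system or theorist would implement inference): the decoded sentence comes in as just one piece of evidence among the rest.
 
    # Code Model: meaning = decode(signal).
    # Inferential Model: the decoded sentence is one piece of evidence among many.
    def infer_speaker_meaning(evidence, candidates):
        # Score each candidate by how much of the available evidence supports it.
        def score(candidate):
            return sum(1 for e in evidence if e in candidate["supported_by"])
        return max(candidates, key=score)["meaning"]

    evidence = ["the words: 'keep it under 65'",
                "we are driving a car",
                "we want to avoid the police"]
    candidates = [
        {"meaning": "keep the car under 65 years old",
         "supported_by": ["the words: 'keep it under 65'"]},
        {"meaning": "keep the car's speed under 65 miles per hour",
         "supported_by": ["the words: 'keep it under 65'",
                          "we are driving a car",
                          "we want to avoid the police"]},
    ]
    print(infer_speaker_meaning(evidence, candidates))   # the mph reading wins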
 
What exactly, then, are hearers inferring? In answering this question, the Oxford philosopher H. Paul Grice revolutionized the philosophy of language and linguistics. In his two famous essays “Meaning” and “Logic and Conversation,”[9] Grice distinguishes what a sentence means, on the one hand, from what a person means by using that sentence on a particular occasion. What a sentence means, according to Grice, is something like what the Code Model suggests; you might think of it as the literal meaning of the sentence. We will call what a sentence means its sentence-meaning. The sentence-meaning of “The Terminator is a killing machine” is that the Terminator is a killing machine. This means that sentences with pragmatic obstacles such as underdetermination may not have a sentence-meaning at all, or at least not a complete sentence-meaning.[10]
 
What a person means by using a sentence on a given occasion often greatly diverges from what that sentence means on a literal level with no context. We’ll call what a person means by saying a sentence on a given occasion the speaker’s meaning of that utterance.[11] If Sarah were to ask John if he thought the Terminator would be able to complete its mission, John might respond, “The Terminator is a killing machine.” In that case, the sentence-meaning is that the Terminator is a killing machine, but the speaker’s meaning is something like, “Sure, the Terminator can complete its mission.”
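 
Grice’s distinction can be pictured as a simple data structure (the field names and the little inference rule below are ours, purely for illustration): the sentence-meaning travels with the words, while the speaker’s meaning has to be worked out from the occasion of use.
 
    from dataclasses import dataclass

    @dataclass
    class Utterance:
        words: str
        sentence_meaning: str       # fixed by lexicon and syntax alone
        context: dict               # the occasion of use

        def speaker_meaning(self):
            # Hypothetical inference: read the utterance as an answer to
            # whatever question is on the table, if there is one.
            question = self.context.get("question_on_the_table")
            if question == "Can the Terminator complete its mission?":
                return "Sure, the Terminator can complete its mission."
            return self.sentence_meaning

    john = Utterance(
        words="The Terminator is a killing machine.",
        sentence_meaning="The Terminator is a killing machine.",
        context={"question_on_the_table": "Can the Terminator complete its mission?"},
    )
    print(john.speaker_meaning())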
 
Grice’s distinction is quite plausible. Recall the scene we discussed in which the Terminator picks John up and John cries, “Help! Help! I’m being kidnapped! Get this psycho off of me!” According to Grice, the sentence-meaning of John’s utterance is probably something like, “Assist John in removing himself from the psychologically disturbed individual holding John!” Because the Terminator is operating according to something like the Code Model, it is likely that this is what it interprets John as meaning.
 
But, as any good Gricean knows, a speaker’s meaning often far outstrips the sentence-meaning of that speaker’s actual words. So John probably means something more akin to “Terminator, I want you to let me go.” Since the T-101 is under the sway of the Code Model, it does not catch on to John’s speaker’s meaning and continues to grapple with John. Only when John explicitly exclaims “Let go of me!” does the Terminator react and drop John to the ground.
 
According to Grice, John did mean that the Terminator should let him go with his initial statement. The sentence-meaning of John’s initial utterance isn’t that the Terminator is to let him go, but the speaker’s meaning of it surely is. His first utterance was directed toward someone else, true, but it provided evidence of his desire to be released. So, if John did tell the T-101 to let him go at first, why did it take the second, more explicit, utterance to get the Terminator to release him? Because the speaker’s meaning and the sentence-meaning of John’s second utterance were both to the effect that the Terminator let him go, but only the speaker’s meaning of John’s first utterance possessed that meaning. If the Terminator were designed according to the Inferential Model, it would have been able to infer John’s speaker’s meaning from the first utterance. And notice that even when the T-101 interprets John’s second utterance, it still seems to fall short of John’s speaker’s meaning because it complies only with its literal meaning. That is, the T-101 literally lets John go, dropping him to the ground, when it’s obvious to normal English speakers that this is not what John meant.
 
Now, speaker’s meaning exists only in the speaker’s mind. This means that we have to guess at what people believe, desire, intend, wonder, and all the rest. These mental states constitute or determine what a speaker means. But our access to these mental items is forever indirect, mediated by the speaker’s publicly observable behavior. From our observations of another’s behavior, we infer what that person believes and desires. In other words, we are able to figure out what it’s like on the inside by using external clues.[12] Language-using behavior is no different. A particular sentence is one clue among many pieces of the puzzle that we must put together by way of inference. These inferences are rarely, if ever, conscious, so it may not seem to us that we’re making them. But that’s okay. They’re still happening.
 
How to Make the Terminator Less of a Dork
 
In one of our favorite scenes in T2, John asks the Terminator whether it could “you know, be more human and not such a dork all the time?” So, if we wanted to make the Terminator’s communicative behaviors more humanlike, we would want to build its capacities to process language according to the Inferential Model. In that case, we would need to supply the machine with more than just the lexicon and syntactical rules of a given language. Clearly, we would also need to program it with a great deal of information about human psychology.[13] It would need to have a mechanism, or more likely several mechanisms, that could piece together lots of information from the environment and about people in general to solve the problem of reading others’ minds.
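 
Put as a very rough sketch, with every class and attribute name below invented for the purpose, the upgrade amounts to giving the machine one more input than the Code Model ever allowed: a working model of the speaker’s mind.
 
    class ToyPsychologyModel:
        # A crude stand-in for information about the speaker's beliefs and goals.
        def best_explanation(self, literal_meaning, context):
            goal = context.get("speaker_goal")
            if goal:
                return f"{literal_meaning} (uttered in order to: {goal})"
            return literal_meaning

    class InferentialTerminator:
        def __init__(self, lexicon, psychology):
            self.lexicon = lexicon          # word meanings: the Code Model's whole toolkit
            self.psychology = psychology    # the extra ingredient: a model of other minds

        def interpret(self, utterance, context):
            literal = utterance             # stand-in for the parsed sentence-meaning
            return self.psychology.best_explanation(literal, context)

    t101 = InferentialTerminator(lexicon={}, psychology=ToyPsychologyModel())
    print(t101.interpret("Get this psycho off of me!",
                         {"speaker_goal": "get the Terminator to release him"}))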
 
The goal of human interpreters is to infer speakers’ meanings behind linguistic behavior, not the mere sentence-meanings. To complete our interpretative tasks, we exploit all sorts of evidence, including speakers’ gestures, tones, facial expressions, locations, psychological facts about what they believe and know, their goals and expectations, and more. We use all of this, coupled with word-meanings and sentence structures, to infer what speakers mean. We know John uses “it” to refer to the gas pedal when he screams, “Step on it!” because we know his goals and we know what it would take to accomplish them in this situation. We don’t reach this conclusion by working from the words alone.
 
While we’ve focused mostly on language comprehension or interpretation, much of what we say goes for language production as well. In order to comprehend, a hearer must rely upon his beliefs or assumptions about a speaker’s psychology. This also is true for speakers. Speakers use their assumptions about hearers when they select their words. We don’t say more than we have to; we don’t inform people of what we think they already know. Rather, we say what we think would be relevant to our hearers, given what we think they believe and what we are trying to accomplish. We say just enough to get our points across. Only a machine that’s tuned in to context and to human psychology—in other words, the same kind of information that hearers exploit in order to infer speakers’ meanings—would be capable of knowing how to respond in particular situations.
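 
The production side can be sketched in the same toy way, assuming, as a deliberate oversimplification, that “say just enough” means “leave out whatever the hearer already knows”:
 
    # Speakers filter what they say by what they think the hearer already knows.
    def choose_what_to_say(things_to_convey, hearer_already_knows):
        return [p for p in things_to_convey if p not in hearer_already_knows]

    to_convey = ["the T-1000 is chasing us",
                 "the T-1000 can imitate anything it touches"]
    hearer_knows = {"the T-1000 is chasing us"}
    print(choose_what_to_say(to_convey, hearer_knows))
    # Only the new, relevant information is worth saying.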
 
Only a linguistic communication theory that accommodates pragmatic aspects of language would make the Terminator less of a dork. We think the best theory we have going currently is the Inferential Model. So here’s a suggestion and a request for artificial intelligence researchers and for Skynet, if and when it comes online: use the Inferential Model in your machines, but please don’t use their linguistic prowess to hasten Judgment Day.[14]
 
NOTES
 
[1] Natural languages like these are significantly different from formal languages, such as the formal languages of mathematics or logic. Natural languages develop, as it were, naturally over time in human communities and are mainly used to communicate between language users. Formal languages, on the other hand, are constructed artificially with other, usually noncommunicative, ends in mind.
 
[2] The term “Code Model” first appeared in D. Sperber and D. Wilson, Relevance: Communication and Cognition (Cambridge: Harvard Univ. Press, 1986). The model received its first formal treatment in W. Weaver and C. E. Shannon, The Mathematical Theory of Communication (Urbana: Univ. of Illinois Press, 1949).
 
[3] See, for example, C. E. Shannon, “A Mathematical Theory of Communication,” Bell System Technical Journal 27, no. 3 (1948): 379-423; Shannon, “A Mathematical Theory of Communication,” Bell System Technical Journal 27, no. 4 (1948): 623-656; and Weaver and Shannon, The Mathematical Theory of Communication.
