
In some ways, a morphemic writing system has served the Chinese well, despite the inherent disadvantage that readers are at a loss when they face a new or rare word. Mutually unintelligible dialects can share texts (even if their speakers pronounce the words very differently), and many documents that are thousands of years old are readable by modern speakers. Mark Twain alluded to such inertia in our own Roman writing system when he wrote, “They spell it Vinci and pronounce it Vinchy; foreigners always spell better than they pronounce.”

Of course English spelling could be better than it is. But it is already much better than people think it is. That is because writing systems do not aim to represent the actual sounds of talking, which we do not hear, but the abstract units of language underlying them, which we do hear.

Talking Heads
 

For centuries, people have been terrified that their programmed creations might outsmart them, overpower them, or put them out of work. The fear has long been played out in fiction, from the medieval Jewish legend of the Golem, a clay automaton animated by an inscription of the name of God placed in its mouth, to HAL, the mutinous computer of 2001: A Space Odyssey. But when the branch of engineering called “artificial intelligence” (AI) was born in the 1950s, it looked as though fiction was about to turn into frightening fact. It is easy to accept a computer calculating pi to a million decimal places or keeping track of a company’s payroll, but suddenly computers were also proving theorems in logic and playing respectable chess. In the years following, there came computers that could beat anyone but a grand master, and programs that outperformed most experts at recommending treatments for bacterial infections and investing pension funds. With computers solving such brainy tasks, it seemed only a matter of time before a C3PO or a Terminator would be available from the mail-order catalogues; only the easy tasks remained to be programmed. According to legend, in the 1970s Marvin Minsky, one of the founders of AI, assigned “vision” to a graduate student as a summer project.

But household robots are still confined to science fiction. The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted—recognizing a face, lifting a pencil, walking across a room, answering a question—in fact solve some of the hardest engineering problems ever conceived. Do not be fooled by the assembly-line robots in the automobile commercials; all they do is weld and spray-paint, tasks that do not require these clumsy Mr. Magoos to see or hold or place anything. And if you want to stump an artificial intelligence system, ask it questions like, Which is bigger, Chicago or a breadbox? Do zebras wear underwear? Is the floor likely to rise up and bite you? If Susan goes to the store, does her head go with her? Most fears of automation are misplaced. As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

Understanding a sentence is one of these hard easy problems. To interact with computers we still have to learn their languages; they are not smart enough to learn ours. In fact, it is all too easy to give computers more credit at understanding than they deserve.

Recently an annual competition was set up for the computer program that can best fool users into thinking that they are conversing with another human. The competition for the Loebner Prize was intended to implement a suggestion made by Alan Turing in a famous 1950 paper. He suggested that the philosophical question “Can machines think?” could best be answered in an imitation game, where a judge converses with a person over one terminal and with a computer programmed to imitate a person on another. If the judge cannot guess which is which, Turing suggested, there is no basis for denying that the computer can think. Philosophical questions aside, it was apparent to the committee charged with overseeing the competition that no program could come close to winning the $100,000 prize, so they devised a $1,500 version that would be fairer to the state of the art. Each of the judges had to stick to a single topic of conversation selected by the programmer or by the human foil, whichever it was, and the judge was not allowed to engage in any “trickery or guile” such as repeating a question ten times or asking whether zebras wear underwear; the conversation had to be “natural.” After interacting with several programs and human foils for about seven minutes apiece, the judges ranked all the humans as more humanlike than any of the computers. About half the judges did, however, misidentify the winning program as human.

The accomplishment is less impressive than it sounds. The rules handcuffed the judges: “unnatural trickery or guile” is another way of referring to any attempt to determine whether one is conversing with a human or a machine, which is the whole point of the test! Also, the winning programmer shrewdly exploited the opportunity to designate the topic of conversation for his program. He chose “whimsical conversation,” which is a dubious example of a “topic,” and which, by definition, can be full of non sequiturs:

Judge: It has been a long day! Now I think I’m ready for some fun. The weather isn’t very pleasant today. Do you have any jokes that could lift the clouds?

Computer: Everybody talks about the weather but nobody seems to do much about it. Why do you need to tell me now you think you’re ready for some fun? Groannnnnn…

Judge: You’re trying to make this difficult for me aren’t you? I just explained that I was ready for fun!

Computer: Would you prefer it if it was not true that I are trying to make this difficult for you aren’t you? I just explained that I was ready for fun! I am, therfore I compute…What makes you think I am trying to make this difficult for you aren’t I?

Judge: I can’t say I understand the question…but the whimsy of a poet is one thing and the whimsy of most of us is another.

Computer: How do you know you can’t say you understand the question…but the whimsy of a poet is one thing and the whimsy of most of us is another.

 

In fact the programmer had simply updated a technique invented in 1965 by the computer scientist Joseph Weizenbaum. Weizenbaum wrote a rather stupid program called ELIZA, named after the Shaw heroine, which looked for a few key words in an input line, retrieved a canned response from a list, and inserted fragments of the input line into its response (with some minor substitutions, like “you are” for “I am”). It could thus crudely simulate a conversation. One version, called Doctor, mimicked a Rogerian psychotherapist, taking advantage of Carl Rogers’ philosophy that a therapist should recast patients’ statements back to them for their contemplation. A version of the program was later sold as a computer game in Radio Shack stores, and another comes packaged with my word processing program, GNU Emacs. (The documentation says, “If using Emacs becomes terribly frustrating and none of the techniques described above solve the problem, Emacs can still help you. Type ‘M-x doctor RET.’”) Here is a typical dialogue:

User: I have been rather depressed lately.

Computer: Are you depressed often?

User: Yes, almost all the time.

Computer: Why do you say yes almost all the time?

User: Right now I feel like I could cry.

Computer: Is it because right now you feel like you could cry that you came to me?

 

To Weizenbaum’s horror, one day he found his secretary transfixed at the terminal, pouring her heart out to the program.
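
The keyword-and-template trick is simple enough to sketch in a few lines of code. What follows is a minimal sketch of the general technique in Python, not Weizenbaum’s actual program; the keyword patterns and canned templates are invented for the illustration. The program looks for a keyword pattern in the input, swaps first-person words in the captured fragment for second-person ones, and splices the result into a prefabricated reply.

import re

# Invented keyword patterns and canned templates, in the spirit of ELIZA.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Is it because you feel {0} that you came to me?"),
    (re.compile(r"\bI have been (.*)", re.IGNORECASE), "Are you {0} often?"),
]

# Minor substitutions applied to the echoed fragment ("I am" becomes "you are", and so on).
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment):
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please go on."  # default reply when no keyword matches

print(respond("I have been rather depressed lately."))
# prints: Are you rather depressed lately often?

The clumsiness of the output makes the point: nothing in the program understands depression, or anything else. It merely shuffles the user’s own words into ready-made frames.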

Most computer scientists are annoyed by the Loebner Prize competition. They consider it a pointless publicity stunt, because it is an exercise in how to fool an amateur, not how to get computers to use language. (Artificial intelligence researchers and other professionals who are knowledgeable about language were not allowed to act as judges, and none bothered to compete; the submissions were from hobbyists.) It is about as productive as promoting biology by offering a prize to the designer of the most convincing silk flower, or running a space program by simulating a moon landing on a Hollywood back lot. There has been intensive research on computer language-understanding systems, but no serious engineer has the hubris to predict that the systems will duplicate the human ability anytime soon.

In fact, from a scientist’s perspective, people have no right to be as good at sentence understanding as they are. Not only can they solve a viciously complex task, but they solve it fast. Comprehension ordinarily takes place in “real time.” Listeners keep up with talkers; they do not wait for the end of a batch of speech and interpret it after a proportional delay, like a critic reviewing a book. And the lag between speaker’s mouth and listener’s mind is remarkably short: about a syllable or two, around half a second. Some people can understand and repeat sentences, shadowing a speaker as he speaks, with a lag of a quarter of a second!

Understanding understanding has practical applications other than building machines we can converse with. Human sentence comprehension is fast and powerful, but it is not perfect. It works when the incoming conversation or text is structured in certain ways. When it is not, the process can bog down, backtrack, and misunderstand. As we explore language understanding in this chapter, we will discover which kinds of sentences mesh with the mind of the understander. One practical benefit is a set of guidelines for clear prose, a scientific style manual, such as Joseph Williams’ 1990 Style: Toward Clarity and Grace, which is informed by many of the findings we will examine.

Another practical application involves the law. Judges are frequently faced with guessing how a typical person, such as a customer scanning a contract, a jury listening to instructions, or a member of the public reading a potentially libelous characterization, is likely to understand some ambiguous passage. Many of people’s habits of interpretation have been worked out in the laboratory, and the linguist and lawyer Lawrence Solan has explained the connections between language and law in his interesting 1993 book The Language of Judges, to which we will return.

 

 

How do we understand a sentence? The first step is to “parse” it. This does not refer to the exercises you grudgingly did in elementary school, which Dave Barry’s “Ask Mr. Language Person” remembers as follows:

Q. Please explain how to diagram a sentence.

A. First spread the sentence out on a clean, flat surface, such as an ironing board. Then, using a sharp pencil or X-Acto knife, locate the “predicate,” which indicates where the action has taken place and is usually located directly behind the gills. For example, in the sentence: “LaMont never would of bit a forest ranger,” the action probably took place in a forest. Thus your diagram would be shaped like a little tree with branches sticking out of it to indicate the locations of the various particles of speech, such as your gerunds, proverbs, adjutants, etc.

 

But it does involve a similar process of finding subjects, verbs, objects, and so on, one that takes place unconsciously. Unless you are Woody Allen speed-reading War and Peace, you have to group words into phrases, determine which phrase is the subject of which verb, and so on. For example, to understand the sentence The cat in the hat came back, you have to group the words the cat in the hat into one phrase, to see that it is the cat that came back, not just the hat. To distinguish Dog bites man from Man bites dog, you have to find the subject and object. And to distinguish Man bites dog from Man is bitten by dog or Man suffers dog bite, you have to look up the verbs’ entries in the mental dictionary to determine what the subject, man, is doing or having done to him.
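
To see what such a lookup might involve, here is a toy sketch in Python, invented for this illustration rather than a claim about the actual mental dictionary: a small table of verb entries recording which semantic role the grammatical subject and object carry, so that Man bites dog and Man is bitten by dog come out with different meanings even though man comes first in both.

# Toy "mental dictionary" entries, invented for this sketch: each verb form
# tells us which semantic role its grammatical subject and object play.
VERB_ENTRIES = {
    "bites":        {"subject": "agent",   "object": "patient"},
    "is bitten by": {"subject": "patient", "object": "agent"},
    "suffers":      {"subject": "patient", "object": "event"},
}

def roles(subject, verb, obj):
    entry = VERB_ENTRIES[verb]
    return {entry["subject"]: subject, entry["object"]: obj}

print(roles("man", "bites", "dog"))         # {'agent': 'man', 'patient': 'dog'}
print(roles("man", "is bitten by", "dog"))  # {'patient': 'man', 'agent': 'dog'}

Same word order, opposite verdicts about who did the biting; the difference comes entirely from the verb’s entry.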

Grammar itself is a mere code or protocol, a static database specifying what kinds of sounds correspond to what kinds of meanings in a particular language. It is not a recipe or program for speaking and understanding. Speaking and understanding share a grammatical database (the language we speak is the same as the language we understand), but they also need procedures that specify what the mind should do, step by step, when the words start pouring in or when one is about to speak. The mental program that analyzes sentence structure during language comprehension is called the parser.

The best way to appreciate how understanding works is to trace the parsing of a simple sentence, generated by a toy grammar like the one in Chapter 4, which I repeat here:

S → NP VP

“A sentence can consist of a noun phrase and a verb phrase.”

 

NP → (det) N (PP)

“A noun phrase can consist of an optional determiner, a noun, and an optional prepositional phrase.”
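
To make the tracing concrete, here is a sketch of a tiny recursive-descent parser for these rules, written in Python. The word lists and the extra PP and VP rules are assumptions added only so the example can run on the cat in the hat came back; they stand in for the fuller grammar of Chapter 4.

# Word classes assumed for this tiny demo.
LEXICON = {
    "det": {"the", "a"},
    "N":   {"cat", "hat", "dog", "man"},
    "P":   {"in", "on"},
    "V":   {"came", "bites"},
    "Prt": {"back"},
}

def parse(words):
    """S -> NP VP: a sentence is a noun phrase followed by a verb phrase."""
    np, i = parse_np(words, 0)
    if np is None:
        return None
    vp, i = parse_vp(words, i)
    if vp is None or i != len(words):
        return None
    return ("S", np, vp)

def parse_np(words, i):
    """NP -> (det) N (PP): optional determiner, a noun, optional prepositional phrase."""
    parts = []
    if i < len(words) and words[i] in LEXICON["det"]:
        parts.append(("det", words[i]))
        i += 1
    if i < len(words) and words[i] in LEXICON["N"]:
        parts.append(("N", words[i]))
        i += 1
    else:
        return None, i
    pp, j = parse_pp(words, i)
    if pp is not None:
        parts.append(pp)
        i = j
    return ("NP", *parts), i

def parse_pp(words, i):
    """PP -> P NP: a rule assumed here, not shown in the excerpt above."""
    if i < len(words) and words[i] in LEXICON["P"]:
        np, j = parse_np(words, i + 1)
        if np is not None:
            return ("PP", ("P", words[i]), np), j
    return None, i

def parse_vp(words, i):
    """VP -> V (Prt): a stripped-down verb phrase, also assumed for the demo."""
    if i < len(words) and words[i] in LEXICON["V"]:
        parts = [("V", words[i])]
        i += 1
        if i < len(words) and words[i] in LEXICON["Prt"]:
            parts.append(("Prt", words[i]))
            i += 1
        return ("VP", *parts), i
    return None, i

print(parse("the cat in the hat came back".split()))
# prints: ('S', ('NP', ('det', 'the'), ('N', 'cat'),
#                 ('PP', ('P', 'in'), ('NP', ('det', 'the'), ('N', 'hat')))),
#               ('VP', ('V', 'came'), ('Prt', 'back')))

Each procedure corresponds to one rule: it consumes the words its rule licenses and hands back a labeled phrase. That step-by-step recipe is exactly what the grammar itself, a static list of rules, does not supply.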
