The Shallows

Nicholas Carr
A Thing Like Me

It was one of the odder episodes in the history of computer science, yet also one of the more telling. Over the course of a few months in 1964 and 1965, Joseph Weizenbaum, a forty-one-year-old computer scientist at the Massachusetts Institute of Technology, wrote a software application for parsing written language, which he programmed to run on the university’s new time-sharing system. A student, sitting at one of the system’s terminals, would type a sentence into the computer, and Weizenbaum’s program, following a set of simple rules about English grammar, would identify a salient word or phrase in the sentence and analyze the syntactical context in which it was used. The program would then, following another set of rules, transform the sentence into a new sentence that had the appearance of being a response to the original. The computer-generated sentence would appear almost instantly on the student’s terminal, giving the illusion of a conversation.

In a January 1966 paper introducing his program, Weizenbaum provided an example of how it worked. If a person typed the sentence “I am very unhappy these days,” the computer would need only know that the phrase “I am” typically comes before a description of the speaker’s current situation or state of mind. The computer could then recast the sentence into the reply “How long have you been very unhappy these days?” The program worked, Weizenbaum explained, by first applying “a kind of template to the original sentence, one part of which matched the two words ‘I am’ and the remainder [of which] isolated the words ‘very unhappy these days.’” It then used an algorithmic “reassembly kit,” tailored to the template, that included a rule specifying that “any sentence of the form ‘I am BLAH’” should be “transformed to ‘How long have you been BLAH,’ independently of the meaning of BLAH.”[1]
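The template-and-reassembly mechanism Weizenbaum describes can be sketched in a few lines of modern Python. This is only an illustration of the idea, not Weizenbaum’s original code (ELIZA was written in his SLIP language); the function name, the rule list, and the fallback reply are all assumptions made for the example:

```python
import re

# One ELIZA-style rule: a decomposition template paired with a reassembly
# pattern. Whatever the template captures ("BLAH") is spliced into the
# reply, independently of its meaning -- the behavior Weizenbaum describes.
RULES = [
    (re.compile(r"^i am (.*)$", re.IGNORECASE),
     "How long have you been {0}?"),
]

def respond(sentence: str) -> str:
    """Apply the first matching template; fall back to a stock prompt."""
    text = sentence.strip().rstrip(".")
    for template, reassembly in RULES:
        match = template.match(text)
        if match:
            return reassembly.format(match.group(1))
    return "Please go on."

print(respond("I am very unhappy these days."))
# -> How long have you been very unhappy these days?
```

The rule never inspects what “BLAH” means; the appearance of understanding comes entirely from splicing the speaker’s own words into a canned frame.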

Weizenbaum’s application was a product of its time. During the 1950s and ’60s, the enthusiasm for computers, software programming, and artificial intelligence gave rise not only to the idea that the human brain is a type of computer but to the sense that human language is the output of one of the algorithms running inside that computer. As David Golumbia explains in The Cultural Logic of Computation, a new breed of “computational linguists,” led by Weizenbaum’s MIT colleague Noam Chomsky, posited that the form of the “natural language” that people speak and write reflects “the operation of the computer inside the human mind that performs all linguistic operations.”[2] In a 1958 article in the journal Information and Control, Chomsky had written that “one possible method for describing a grammar is in terms of a program for a universal Turing machine.”[3] What made the computationalist theory so compelling was that it came wrapped in a seductive “penumbra of technological newness,” writes Golumbia. It offered a “mechanic clarity,” replacing language’s human “messiness” with “a clean internal computer.”[4] By reverse-engineering the way people talk, you could discover language’s underlying code, which you could then replicate as software.

Weizenbaum named his program ELIZA, after Eliza Doolittle, the cockney flower girl who, in George Bernard Shaw’s Pygmalion, learns to speak proper English under the tutelage of the haughty phonetics professor Henry Higgins. To make the simulated conversations a little more interesting, Weizenbaum also gave his artificial interlocutor a persona—that of a Rogerian psychotherapist. The choice, as he explained in his paper, was a pragmatic one. Using a technique developed by the psychologist Carl Rogers in the 1940s, Rogerian therapists pretended, in their conversations with patients, to have no understanding of the world. For the most part, they simply parroted their patients’ statements back to them in the form of banal, open-ended questions or comments. Knowing that the naïveté was a pose, the patients were free to attribute to their therapists “all sorts of background knowledge, insights and reasoning ability.” The Rogerian persona, Weizenbaum recognized, had “a crucial psychological utility” for ELIZA: it suggested that the program’s vacuity masked some sort of real intelligence.

Weizenbaum provided a transcript of part of an exchange between ELIZA and a young woman who had been an early tester of the program (the machine’s responses are indented):

Men are all alike.
    In what way?
They’re always bugging us about something or other.
    Can you think of a specific example?
Well, my boyfriend made me come here.
    Your boyfriend made you come here.
He says I’m depressed much of the time.
    I am sorry to hear you are depressed.
It’s true. I am unhappy.
    Do you think coming here will help you to be not unhappy?
I need some help, that much seems certain.
    What would it mean to you if you got some help?
Perhaps I could learn to get along with my mother.
    Tell me more about your family.
My mother takes care of me.
    Who else in your family takes care of you?
My father.
    Your father.
You are like my father in some ways.
    What resemblance do you see?
You are not very aggressive but I think you don’t want me to notice that.
    What makes you think I am not very aggressive?
You don’t argue with me.
    Why do you think I don’t argue with you?
You are afraid of me.
    Does it please you to believe I am afraid of you?
My father is afraid of everybody.[5]

In commenting on his creation, which he acknowledged to be a modest and even slightly silly contribution to the fledgling field of natural-language processing, Weizenbaum observed how easy it is for computer programmers to make machines “behave in wondrous ways, often sufficient to dazzle even the most experienced observer.” But as soon as a program’s “inner workings are explained in language sufficiently plain to induce understanding,” he continued, “its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible. The observer says to himself ‘I could have written that.’” The program goes “from the shelf marked ‘intelligent’ to that reserved for curios.”[6]

But Weizenbaum, like Henry Higgins, was soon to have his equilibrium disturbed. ELIZA quickly found fame on the MIT campus, becoming a mainstay of lectures and presentations about computing and time-sharing. It was among the first software programs able to demonstrate the power and speed of computers in a way that laymen could easily grasp. You didn’t need a background in mathematics, much less computer science, to chat with ELIZA. Copies of the program proliferated at other schools as well. Then the press took notice, and ELIZA became, as Weizenbaum later put it, “a national plaything.”[7] While he was surprised by the public’s interest in his program, what shocked him was how quickly and deeply people using the software “became emotionally involved with the computer,” talking to it as if it were an actual person. They “would, after conversing with it for a time, insist, in spite of my explanations, that the machine really understood them.”[8] Even his secretary, who had watched him write the code for ELIZA “and surely knew it to be merely a computer program,” was seduced. After a few moments using the software at a terminal in Weizenbaum’s office, she asked the professor to leave the room because she was embarrassed by the intimacy of the conversation. “What I had not realized,” said Weizenbaum, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”[9]

Things were about to get stranger still. Distinguished psychiatrists and scientists began to suggest, with considerable enthusiasm, that the program could play a valuable role in actually treating the ill and the disturbed. In an article in the Journal of Nervous and Mental Disease, three prominent research psychiatrists wrote that ELIZA, with a bit of tweaking, could be “a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists.” Thanks to the “time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose.” Writing in Natural History, the prominent astrophysicist Carl Sagan expressed equal excitement about ELIZA’s potential. He foresaw the development of “a network of computer therapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist.”[10]

In his 1950 paper “Computing Machinery and Intelligence,” Alan Turing had grappled with the question “Can machines think?” He proposed a simple experiment for judging whether a computer could be said to be intelligent, which he called “the imitation game” but which soon came to be known as the Turing test. It involved having a person, the “interrogator,” sit at a computer terminal in an otherwise empty room and engage in a typed conversation with two hidden interlocutors, one an actual person and the other a computer pretending to be a person. If the interrogator was unable to distinguish the computer from the real person, then the computer, argued Turing, could be considered intelligent. The ability to conjure a plausible self out of words would signal the arrival of a true thinking machine.

To converse with ELIZA was to engage in a variation on the Turing test. But, as Weizenbaum was astonished to discover, the people who “talked” with his program had little interest in making rational, objective judgments about the identity of ELIZA. They wanted to believe that ELIZA was a thinking machine. They wanted to imbue ELIZA with human qualities—even when they were well aware that ELIZA was nothing more than a computer program following simple and rather obvious instructions. The Turing test, it turned out, was as much a test of the way human beings think as of the way machines think. In their Journal of Nervous and Mental Disease article, the three psychiatrists hadn’t just suggested that ELIZA could serve as a substitute for a real therapist. They went on to argue, in circular fashion, that a psychotherapist was in essence a kind of computer: “A human therapist can be viewed as an information processor and decision maker with a set of decision rules which are closely linked to short-range and long-range goals.”[11] In simulating a human being, however clumsily, ELIZA encouraged human beings to think of themselves as simulations of computers.

The reaction to the software unnerved Weizenbaum. It planted in his mind a question he had never before asked himself but that would preoccupy him for many years: “What is it about the computer that has brought the view of man as a machine to a new level of plausibility?”[12] In 1976, a decade after ELIZA’s debut, he provided an answer in his book Computer Power and Human Reason. To understand the effects of a computer, he argued, you had to see the machine in the context of mankind’s past intellectual technologies, the long succession of tools that, like the map and the clock, transformed nature and altered “man’s perception of reality.” Such technologies become part of “the very stuff out of which man builds his world.” Once adopted, they can never be abandoned, at least not without plunging society into “great confusion and possibly utter chaos.” An intellectual technology, he wrote, “becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure.”

That fact, almost “a tautology,” helps explain how our dependence on digital computers grew steadily and seemingly inexorably after the machines were invented at the end of the Second World War. “The computer was not a prerequisite to the survival of modern society in the post-war period and beyond,” Weizenbaum argued; “its enthusiastic, uncritical embrace by the most ‘progressive’ elements of American government, business, and industry made it a resource essential to society’s survival in the form that the computer itself had been instrumental in shaping.” He knew from his experience with time-sharing networks that the role of computers would expand beyond the automation of governmental and industrial processes. Computers would come to mediate the activities that define people’s everyday lives—how they learn, how they think, how they socialize. What the history of intellectual technologies shows us, he warned, is that “the introduction of computers into some complex human activities may constitute an irreversible commitment.” Our intellectual and social lives may, like our industrial routines, come to reflect the form that the computer imposes on them.[13]

What makes us most human, Weizenbaum had come to believe, is what is least computable about us—the connections between our mind and our body, the experiences that shape our memory and our thinking, our capacity for emotion and empathy. The great danger we face as we become more intimately involved with our computers—as we come to experience more of our lives through the disembodied symbols flickering across our screens—is that we’ll begin to lose our humanness, to sacrifice the very qualities that separate us from machines. The only way to avoid that fate, Weizenbaum wrote, is to have the self-awareness and the courage to refuse to delegate to computers the most human of our mental activities and intellectual pursuits, particularly “tasks that demand wisdom.”[14]
