Labyrinths of Reason
William Poundstone

The oldest picture of the mind is no picture at all. The caged parakeet that thinks its mirror image is another parakeet has no need of “mind” in formulating a worldview. This is not to say that the parakeet is stupid but only that it has no knowledge of self. The parakeet is aware of its bell and cuttlebone and the other objects of its world. This awareness may go so far as to predict the behavior of animate objects, such as the owner who fills the seed dispenser every morning. Is there anything the owner might do that would prove to the parakeet that he has a mind? No. The parakeet (a very intelligent parakeet, anyway) could ascribe any observed behavior to known and unknown causes and have no need to believe in mind.

It is worth noting that some extreme philosophical skeptics have espoused almost this view (note Hume’s skepticism of his own mind). What, then, makes us think that other people have minds such as our own? A big part of the answer is language. The more we communicate with others, the more we are led to believe they have minds.

A second way of thinking about mind is dualism. This is the belief that mind or spirit or consciousness is something distinct from matter. We all talk this way, whether we believe in dualism or not: someone is full of spirit; they give up the ghost; not enough money to keep body and soul together. Dualism is motivated by the realization that there are other minds and that minds more or less stick to bodies.

As biologists have learned more about the human body, they have been impressed with the fact that it is made out of substances not so different from nonliving matter. The body is mostly water. “Organic” compounds can be synthesized. Physical forces such as osmotic pressure and electrical conductivity work in cells and account for much of their function. Mechanistic models of the body and brain have been so successful in limited ways that it is extremely attractive to think that they might account for all the myriad workings of the brain. This, a third way of thinking about the mind, supposes that the brain is a “machine” or “computer” of sorts, and that consciousness is—somehow—a result of that machine’s operation.

Despite the modern trappings, mechanistic explanations for consciousness—and skepticism about them—are quite old. Gottfried Leibniz’s “thinking machine,” which he discussed in 1714, is equally cogent today:

Moreover, it must be avowed that perception and what depends upon it cannot possibly be explained by mechanical reasons, that is, by figure and movement. Supposing that there be a machine, the structure of which produces thinking, feeling, and perceiving; imagine this machine enlarged but preserving the same proportions, so that you might enter it as if it were a mill. This being supposed, you might enter its inside; but what would you observe there? Nothing but parts which push and move each other, and never anything that could explain perception.

Leibniz’s example is not especially compelling, but it does capture the malaise most of us feel about the mechanistic model. The thinking machine thinks, all right, but looking inside we find it as empty as a magician’s trick cabinet. What did you expect to see?

A concise counterargument to Leibniz is David Cole’s. Blow up a tiny drop of water until it is the size of a mill. Now the H₂O molecules are as big as the plastic models of H₂O molecules in a chemistry class. You could walk through the water droplet and never see anything wet.

The Paradox of Functionalism

Other thought experiments challenging the mechanistic model are much less easily refuted. One is Lawrence Davis’s “paradox of functionalism.”

Functionalism holds that a computer program that can do the same thing as a human brain must be comparable in every important respect, including having consciousness. The human brain can be idealized as a “black box” that receives sensory inputs from the nerve cells, manipulates this information in a certain way, and sends out impulses to the muscles. (Each vat in the brains-in-vats lab has two cables, one labeled “in” and one labeled “out.”) What if there were a computer that, given the same inputs, would always produce the same outputs as a human brain? Would that computer be conscious? It is like Einstein and Infeld’s sealed watch; one never really knows. Functionalism, however, says that it is most reasonable to believe that the computer would be conscious, to the extent that that term has any objective meaning at all. One should believe this for the same reason one believes other people have minds: because of the way they act.
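Here is a minimal sketch, in Python, of the functionalist picture of externally identical black boxes. The stimuli, responses, and function names are invented for illustration; nothing about them comes from the book.

    from typing import Callable

    Reflex = Callable[[str], str]

    def biological_box(stimulus: str) -> str:
        # Imagine neurons in here.
        return "withdraw hand" if stimulus == "hot stove" else "carry on"

    def silicon_box(stimulus: str) -> str:
        # Imagine a lookup table burned into a chip in here.
        table = {"hot stove": "withdraw hand"}
        return table.get(stimulus, "carry on")

    def externally_identical(a: Reflex, b: Reflex, probes: list[str]) -> bool:
        # All an outside observer can ever check: the same outputs for the same inputs.
        return all(a(p) == b(p) for p in probes)

    print(externally_identical(biological_box, silicon_box, ["hot stove", "breeze"]))  # True

How each box arrives at its output is, by construction, invisible from outside; that is exactly the position the functionalist starts from.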

Davis proposed his paradox in an unpublished paper delivered at a conference in 1974. It deserves more attention than it has received. Suppose, he says, we learned about the sensation of pain in all relevant detail. Then (if the functionalists are right) we could build a giant robot capable of feeling pain. Like Leibniz’s thinking machine, it is a really huge robot you can walk inside. The inside of the robot’s head looks just like a big office building. Instead of integrated circuits, there are people in suits sitting behind desks. Each desk has a telephone with several lines, and the phone network simulates the connections of the neurons in a brain capable of feeling pain. Each person has been trained to duplicate the function of a neuron. It is boring work, but they get a competitive salary and benefits package.

Suppose that right now the set of phone calls among the bureaucrats is that which has been identified with excruciating pain. The robot is in agony, according to functionalism. But where is the pain? You won’t find it on a tour of the office. All you will see is placid and disinterested middle management, sipping coffee and talking on the phone.

And the next time the robot is feeling unbearable pain, you visit and find that the people are holding the company Christmas party. Everybody’s having a real ball.

The Turing Test

I will defer comment on Davis’s paradox and go on to a closely related thought experiment, John Searle’s “Chinese room.” One further bit of background is necessary to appreciate the latter.

This is the “Turing test” of Alan Turing. In a 1950 essay, Turing asked whether computers can think. Turing argued that the question was meaningless unless one could point to something that a thinking agent can do that a nonthinking agent cannot. What would that difference be?

Already computers were performing calculations that had previously required the work of dedicated and intelligent human beings. Turing realized that the test would have to be something rather subtler than, say, playing a decent game of chess. Computers would soon do that, long before they would come close to being able to “think.” Turing proposed as a test what he called the “imitation game.”

A person sits at a computer terminal and directs questions to two beings, A and B, who are concealed in other rooms. One of the beings is a person, and the other is a sophisticated computer program alleged to be capable of thought. The questioner’s goal is to tell which is the human and which is the computer. Meanwhile both human and computer are trying their best to convince the questioner that they are human. It is like a television panel show where the point is to distinguish an unknown person from an impostor.

The fact that the questioner communicates only via the computer terminal prevents him from using anything but the actual text of replies. He cannot expect to discern mechanical-sounding synthesized speech or other irrelevant giveaways. The concealed human is allowed to say things like “Hey, I’m the human!” This perhaps would do little good, for the computer is allowed to say the same thing. The computer does not have to own up to being the computer, even when asked directly. Both parties are allowed to lie if and when they think it suits their purpose. Should the questioner ask for “personal” data like A’s mother’s maiden name or B’s shoe size, the computer is allowed to fabricate this out of whole cloth.

To “pass” this test, a computer program would have to be able to give such human responses that it is chosen as the human about half the time the game is played. If a computer could pass this test, Turing said, then it would indeed exhibit intelligence, insofar as intelligence is definable by external actions and reactions. This is no small claim.
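The imitation game can be written down as a small protocol. In the Python sketch below, the callables ask, judge, human, and machine are hypothetical stand-ins; only the structure of the game (concealed channels, text-only transcripts, a chance-level pass criterion) is taken from Turing’s description above.

    import random

    def imitation_game(ask, judge, human, machine, rounds=5):
        # ask() yields a question; human(q) and machine(q) return text replies;
        # judge(t0, t1) sees only the two transcripts and returns 0 or 1,
        # the channel it believes belongs to the human.
        sides = [human, machine]
        random.shuffle(sides)                    # conceal which channel is which
        transcripts = ([], [])
        for _ in range(rounds):
            q = ask()
            for t, respondent in zip(transcripts, sides):
                t.append((q, respondent(q)))
        picked = judge(*transcripts)             # 0 or 1
        return sides[picked] is machine          # True when the machine is taken for the human

    # A program "passes" if, over many sessions, this comes out True about half
    # the time: the questioner can do no better than chance.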

That said, could a computer think? Turing concluded that the original question of whether computers can think was “too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

In the years since Turing’s essay, it has become common in cognitive science to associate mental processes with algorithms. If you execute a certain algorithm for calculating the digits of pi, then some small part of your thought process is directly comparable to the action of a computer calculating pi via the same algorithm. A widespread and popular speculation is that intelligence and even consciousness are like computer programs that may “run” on different types of “hardware,” including the biological hardware of the brain. The functions of the neurons in your brain and their states and interconnections could in principle be exactly modeled by a marvelously complex computer program. If that program were run, even on a computer of microchips and wires, it would perhaps exhibit the same intelligence and even consciousness as you do.
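For concreteness, here is one such algorithm for the digits of pi, sketched in Python. The choice of Machin’s arctangent formula is mine, not the book’s; the point is only that the same sequence of operations yields the same digits no matter what carries it out.

    def arctan_inv(x, one):
        # arctan(1/x), scaled up by the integer `one`, summed from its Taylor series.
        power = one // x          # (1/x)^1, scaled
        total = power
        x2 = x * x
        n = 1
        while power:
            power //= x2          # next odd power of 1/x, scaled
            n += 2
            if n % 4 == 3:
                total -= power // n
            else:
                total += power // n
        return total

    def pi_digits(digits):
        # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
        one = 10 ** (digits + 10)                          # ten guard digits
        pi_scaled = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
        return pi_scaled // 10 ** 10                       # pi * 10**digits as an integer

    print(pi_digits(30))   # 3141592653589793238462643383279

Executed with pencil and paper or in silicon, the steps and the resulting digits are identical; that substrate independence is what the speculation above trades on.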

Mind has long been thought of as soul, as élan vital, as half of the Cartesian dualism. Much of the intellectual community has abandoned these models in favor of a mechanistic picture of consciousness. John Searle’s 1980 thought experiment caricatures the diminishing abode of mind into a shell game. If consciousness is nothing but algorithm, where does mind come in? Searle lifts the last shell and shows it to be empty.

The Chinese Room

Imagine that you are confined to a locked room. The room is virtually bare. There is a thick book in the room with the unpromising title What to Do If They Shove Chinese Writing Under the Door.

One day a sheet of paper bearing Chinese script is shoved underneath the locked door. To you, who know nothing of Chinese, it contains meaningless symbols, nothing more. You are by now desperate for ways to pass the time. So you consult What to Do If They Shove Chinese Writing Under the Door. It describes a tedious and elaborate solitaire pastime you can “play” with the Chinese characters on the sheet. You are supposed to scan the text for certain Chinese characters and keep track of their occurrences according to complicated rules outlined in the book. It all seems pointless, but having nothing else to do, you follow the instructions.

The next day, you receive another sheet of paper with more Chinese writing on it. This very contingency is covered in the book too. The book has further instructions for correlating and manipulating the Chinese symbols on the second sheet, and combining this information with your work from the first sheet. The book ends with instructions to copy certain Chinese symbols (some from the paper, some from the book) onto a fresh sheet of paper. Which symbols you copy depends, in a very complicated way, on your previous work. Then the book says to shove the new sheet under the door of your locked room. This you do.

Unknown to you, the first sheet of Chinese characters was a Chinese short story, and the second sheet was questions about the story, such as might be asked in a reading test. The characters you copied according to the instructions were (still unknown to you!) answers to the questions. You have been manipulating the characters via a very complex algorithm written in English. The algorithm simulates the way a speaker of Chinese thinks—or at least the way a Chinese speaker reads something, understands it, and answers questions about what he has read. The algorithm is so good that the “answers” you gave are indistinguishable from those that a native speaker of Chinese would give, having read the same story and been asked the same questions.
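Here is a drastically simplified sketch of that kind of rule-following, in Python. The two-entry rule table and the sample sentences are invented placeholders for Searle’s enormous book, which would have to handle whole stories and amount to a simulation of a mind.

    # The operator matches shapes and copies shapes; no meaning is ever consulted.
    RULE_BOOK = {
        ("小鸟飞走了", "谁飞走了?"): "小鸟",            # hypothetical (story, question) -> answer rules
        ("小鸟飞走了", "小鸟做了什么?"): "飞走了",
    }

    def slide_under_door(story: str, question: str) -> str:
        # Look the symbol strings up as opaque tokens and copy out the prescribed reply.
        return RULE_BOOK.get((story, question), "不知道")

    print(slide_under_door("小鸟飞走了", "谁飞走了?"))   # a correct-looking answer, zero understanding

From outside the door the reply looks informed; inside, the procedure is no different from matching one playing card to another by suit.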

The people who built the room claim that it contains a trained pig that can understand Chinese. They take the room to county fairs and let people on the outside submit a story in Chinese and a set of questions based on the story. The people on the outside are skeptical of the pig story. The answers are so consistently “human” that everyone figures there is really a Chinese-speaking person in there. As long as the room remains sealed, nothing will dissuade the outsiders from this hypothesis.

Searle’s point is this: Do you understand Chinese? Of course not! Being able to follow complex English directions is not the same as knowing Chinese. You do not know, and have not deduced, the meaning of a single Chinese character. The book of instructions is emphatically not a crash course in Chinese. It has taught you nothing. It is pure rote, and never does it divulge why you do something or what a given character means.

To you, it is all merely a pastime. You take symbols from the Chinese sheets and copy them onto blank sheets in accordance with the rules. It is as if you were playing solitaire and moving a red jack onto a black queen according to a card game’s rules. If, in solitaire, someone asked what a card “meant,” you would say it didn’t mean anything. Oh, sure, playing cards once had symbolic significance, but you would insist that none of that symbolism enters into the context of the game. A card is called a seven of diamonds just to distinguish it from the other cards and to simplify application of the game’s rules.

If you as a human can run through the Chinese algorithm and still not understand Chinese (much less experience the consciousness of a Chinese speaker), it seems all the more ridiculous to think that a machine could run through an algorithm and experience consciousness. Therefore, claims Searle, consciousness is not an algorithm.

Brains and Milk

Searle’s skepticism is considerably more generous than that of many who doubt that computers can think. His thought experiment postulates a working artificial intelligence algorithm. It is the set of instructions for manipulating the Chinese characters. Clearly the algorithm must encapsulate much, much more than Chinese grammar. It cannot be much less than a complete simulation of human thought processes, and it must contain the common knowledge expected of any human to boot.
