
The story can be any story, and the questions can demand any fact, conjecture, interpretation, or opinion about it. The questions are not (at least, need not be) multiple-choice or questions asking you to regurgitate or complete lines from the story. Searle gave as an example this brief short story: “A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying for the hamburger or leaving a tip.” One question is: “Did the man eat the hamburger?” Now, the story doesn’t say, nor will the Chinese character for “eat” even appear in the story. But anyone who understands the story will surmise that the man didn’t eat the hamburger.

Questions could ask if a Big Mac is a hamburger (no way of telling from the story; you just have to know it) or if the story made you sad (the word/character for “sad” not appearing); they could ask you to point out sentences that made you laugh or to write another story based on the same characters. The algorithm must interact with the story much as a human would. Were this algorithm written in a computer language like LISP or Prolog, it would pass the Turing test. Searle avoids the “black box” mystery of a computer executing a sophisticated algorithm by dropping it in a human’s lap.

Searle felt that the Turing test might not be all that it’s cracked up to be. A computer that can act just like a human would be a remarkable thing whether “conscious” or not, but it would confront us with the “other minds” problem in a more trenchant form. Even the skeptic does not doubt, in his nonphilosophical moments, that other people have minds. But we all have doubts about whether a machine can have a consciousness similar to ours.

Searle’s opinion of the matter is surprising. He believed that the brain is a machine of sorts, but that consciousness has something to do with the biochemical and neurological makeup of the brain. A computer of wires and integrated circuits, even one that exactly duplicated the function of all the neurons in a human brain, would not experience consciousness (though it would function just like the human brain and pass the Turing test). A Frankensteinian brain—made “from scratch” out of the same kinds of chemicals as real brains—might be capable of consciousness.

Searle compared artificial intelligence to a computer simulation of photosynthesis. A computer program might well simulate photosynthesis in full detail (say, by creating a realistic animation of chlorophyll molecules and photons on a display screen). Though the program contains all the relevant information, it would never produce real sugar as living plants do. Searle felt that consciousness was a biological by-product like sugar or milk.

Few philosophers agree with Searle on this point, but his thought experiment has generated debate as few others have. Let’s look at some of the reactions.

Reactions

One reaction is that the experiment is flatly impossible. There can be no such book as What to Do If They Shove Chinese Writing Under the Door. The way we interpret language and think cannot be expressed in cut-and-dried fashion; it is something we can never grasp well enough to put in a book. (Possibly, Berry’s paradox and Putnam’s Twin Earth lend credence to this position.) Therefore the algorithm won’t work. The “answers” will be nonsense or the stock phrases of a talking Chinese teddy bear. They won’t fool anyone.

This position is fine as far as it goes and cannot be refuted until the time, if any, that we have a working algorithm. Note only that Searle himself was willing to concede the possibility of the algorithm. And it is not strictly necessary to suppose that we will ever know how the brain as a whole works in order to conduct the experiment or one like it. You could implement Davis’s bureaucratic simulation. The human brain contains approximately 100 billion neurons. As far as we know, the function of each individual neuron is relatively simple: it waits for firing (electrical impulses) at its synapses, and when that firing meets certain logical criteria, it transmits an impulse of its own. Suppose that we determine the exact state of one person’s brain: the current states of all the neurons, the connections between them, and how each neuron works. Then all the world’s population might participate in an experiment to simulate that person’s brain. Each of the world’s 5 billion people would have to handle the actions of about twenty neurons. For every connection between neurons, there would be a string between the two persons representing those neurons. Tugging on a string would represent neural firing. Each person would operate his strings just as the neurons he represents would respond to firing. Again, however marvelous the simulation, no one would have any idea of what “thoughts” were being represented.
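
The per-neuron rule is simple enough to put into a few lines of code. The Python sketch below is only an illustration of the string-tugging scheme: the threshold rule, the names, and the toy three-neuron wiring are assumptions of mine, since the passage leaves each neuron’s “logical criteria” unspecified.

class Neuron:
    """One string-puller's job: watch input strings, decide whether to tug."""
    def __init__(self, threshold):
        self.threshold = threshold   # tugs needed in one tick to make this neuron fire
        self.inputs = []             # neurons whose strings arrive at our synapses

    def fires(self, fired_last_tick):
        # Count tugs from input neurons that fired on the previous tick.
        tugs = sum(1 for src in self.inputs if src in fired_last_tick)
        return tugs >= self.threshold

def simulate(neurons, initially_firing, ticks):
    fired = set(initially_firing)
    for _ in range(ticks):
        fired = {n for n in neurons if n.fires(fired)}
    return fired

# Toy wiring: a chain a -> b -> c. Tug a's string, and the impulse
# reaches c two ticks later.
a, b, c = Neuron(1), Neuron(1), Neuron(1)
b.inputs.append(a)
c.inputs.append(b)
print(c in simulate([a, b, c], {a}, ticks=2))   # True

Scaled up to 100 billion such objects, or 5 billion string-pulling people, the structure would not change, and nothing in it would reveal what “thoughts” were being represented.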

A second reaction is to agree with Searle: the algorithm would work, but no consciousness of Chinese would arise. Searle’s supporters cite the distinction between syntactic and semantic understanding. The rules in effect give the human a syntactic understanding of Chinese, but no semantic understanding. He would not know that a certain character meant “house” and another meant “water.” On this view, semantic understanding is essential for consciousness, and it is something computers can never have.

Nearly all those who disagree with Searle contend that there is some sort of consciousness buzzing around the Chinese room. It may be potential, incipient, slowed down, or brain-damaged, but it is there.

Chinese the Hard Way

Among the positions claiming a consciousness of Chinese, the simplest is that the subject would (contrary to Searle’s claim) learn Chinese after all. There is a continuum between syntactic and semantic understanding. Maybe the rules would become second nature if applied long enough. Maybe the subject would surmise the meaning of the symbols from the way they were manipulated.

The heart of the issue is whether it is ever necessary to be told that “water” means this, and “house” means this. Or can we gather what all words mean from how they are used? Even if you had never seen a zebra, that would not prevent you from having a semantic understanding of the word “zebra.” You have certainly never seen a unicorn, and still have a semantic understanding of the term.

Could you still have this semantic understanding if you have never seen a horse? Never seen an animal of any kind (not even a human)? At some degree of isolation from the object, you must wonder whether understanding would exist.

Suppose you were sick and missed the first day of arithmetic class, the day the teacher explained what numbers are. When you return to school, you’re afraid to ask what numbers are because everyone else seems to know. So you make an extra effort to learn all the subsequent material: the addition tables, fractions, and so on. You make such an effort that you end up the top student in math. But inside you feel you’re a fraud, for you still don’t know what numbers are. You only know how numbers work, how they interrelate with each other and everything else in the world!

One feels that this is all the understanding anyone can have of numbers (though possibly numbers differ from zebras in this respect). A similar case is Euclidean geometry. The study of geometry typically starts with the disclaimer that concepts such as “points” and “lines” will not be defined as such and should be taken to mean only what may be inferred from the axioms and theorems about them.
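
Arithmetic can be given exactly this treatment. Peano’s axioms, a standard formulation not quoted in this book, characterize the numbers entirely by how they interrelate; “number” and “successor” are left undefined, just as “point” and “line” are in geometry. In LaTeX form:

% "0", "number," and "successor" are undefined terms.
\begin{enumerate}
  \item $0$ is a number.
  \item Every number $n$ has a successor $S(n)$.
  \item $S(m) = S(n)$ implies $m = n$.
  \item $S(n) \neq 0$ for every number $n$.
  \item If $0$ has a property, and $S(n)$ has it whenever $n$ does,
        then every number has it (induction).
\end{enumerate}

Anything satisfying these five statements behaves exactly like the numbers; knowing how they interrelate is, on this view, knowing all there is to know.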

An objection to this position is that the human starts cranking out Chinese answers right away—before he could have memorized the rules or deduced the meanings of characters. For a long time, the questioners outside the room will be able to pose questions whose answers will be new words that the subject has never used before. (“What’s that condiment some people put on hamburgers, made from chopped-up pickle?” Will Searle’s subject be able to deduce the meaning of the character for “chowchow”?)

Dr. Jekyll and Mr. Hyde

Some claim that the human simulator would understand Chinese but not know it. David Cole compared Searle’s subject to a bilingual person with a peculiar type of brain damage that prevents him from being able to translate. Or (take your pick) he would be like a person with multiple personalities, a “split brain” patient, or an amnesiac.

Dr. Jekyll goes into the room, speaking only English. Running through the algorithm creates a Mr. Hyde who speaks Chinese. Jekyll doesn’t know about Hyde, and vice versa. Consequently the subject is unable to translate between English and Chinese. He is unaware of his Chinese ability, and even denies having it.

We have many mental capacities of which we are unaware. Right now your brainstem is regulating your breathing, eye blinking, and other automatic functions. Normally these functions are on autopilot, though you can take conscious control of them if desired. Other functions, like pulse, are more automatic and can be controlled consciously only to a degree, through biofeedback techniques. Still more automatic functions may not be controllable at all. All are under the guidance of your one brain.

If this is the case, why are the two linguistic personalities so poorly integrated? Maybe because of the bizarre way that a knowledge of Chinese has been “grafted” onto the subject’s brain.

The Systems Reply

In his original article, Searle anticipated several responses to his thought experiment. He called one the “systems reply.” This says that indeed the person would not “know” Chinese, but the process—of which the person is a part—might in principle. The person in Searle’s Chinese room is not analogous to our mind; he is analogous to a small but important part of a brain.

The systems reply is no straw man. Broadly interpreted, it is the most popular resolution of the paradox among cognitive scientists. Not even the most dogmatic mechanist presumes that individual neurons experience consciousness. The consciousness is in the process, of which the neurons are mere agents. The person in the locked room—the book of instructions—the sheets of paper shoved under the door—the pen the person writes with—are agents.

Searle’s counterargument to the systems reply went like this: All right, assume there is consciousness in the system composed of the human, the room, the books of instructions, the bits of scratch paper, the pencils, and anything else used. Tear down the walls of the room; let the human work alfresco. Have him memorize the instructions and do all the manipulations henceforth in his head. If the pencils are suspect, have him scratch out the answers with his fingernails. The system is reduced to just the human. Now does he understand Chinese? Of course not.

One danger with thought experiments is that their convenience can lead us astray. You must make sure that the reason you’re just fantasizing rather than doing the experiment isn’t something that would invalidate the fantasy. Most philosophers and scientists of the systems-reply camp feel that this is the essential problem with Searle’s Chinese room.

A Page from the Instructions

It may be helpful to analyze the mechanics of the situation in slightly greater detail. Reverse the situation so that the human is a Chinese speaker and reader who knows nothing of English or the Roman alphabet (this is more convenient here, for we will discuss how to understand English). Let the story be an English translation of Aesop’s fable of the fox and the stork, and let the next day’s batch of writing pose questions about both animals. Consider what the text of a (Chinese-language) book entitled What to Do If They Shove English Writing Under the Door must be like.

Part of the discussion must tell you how to recognize the word “fox.” We know that only words, rather than letters, have meaning in an English text. Therefore any algorithm that simulates reasoning about the characters and events of a story must isolate and recognize the words naming them. An English reader recognizes “fox” at a glance. Not so the Chinese reader. He must follow a laborious algorithm that may go something like this:

1. Scan the text for a symbol that looks like one of these symbols: [symbol chart not reproduced in this copy; evidently the letter “F” in its printed forms: F, f]
   If you find the symbol, go to step 2.
   If there is no such symbol in the text, go to the instructions.

2. If there is a blank space immediately to the right of the symbol, go back to step 1.
   If there is a symbol immediately to the right, compare it to these symbols: [evidently the forms of “O”: O, o]
   If the symbol matches, go to step 3.
   If not, go back to step 1.

3. If there is a blank space immediately to the right of the symbol in step 2, go back to step 1.
   If not, compare the symbol to these symbols: [evidently the forms of “X”: X, x]
   If it matches, go to step 4.
   If not, go back to step 1.

4. If there is a blank space or one of these symbols [evidently punctuation marks: . , ; : ! ?] immediately to the right of the symbol in step 3, go to the instructions.
   If there is a different symbol to the right, go back to step 1.
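
Rendered as a program, the four steps amount to a tiny pattern-matcher. The Python sketch below is mine, not the book’s, and it assumes the missing symbol charts showed the letter shapes F/f, O/o, and X/x plus ordinary punctuation marks.

WORD_END = set(' .,;:!?"') | {""}   # blank space, punctuation, or end of text

def find_fox(text):
    """Return the positions where 'fox' occurs, matched purely by shape."""
    hits = []
    for i, ch in enumerate(text):
        if ch not in "Ff":                       # step 1: look for an F-shape
            continue
        if text[i+1:i+2] not in ("O", "o"):      # step 2: an O-shape must follow
            continue
        if text[i+2:i+3] not in ("X", "x"):      # step 3: then an X-shape
            continue
        if text[i+3:i+4] in WORD_END:            # step 4: then a word boundary
            hits.append(i)                       # found "fox"; consult the instructions
    return hits

print(find_fox("The fox flattered the stork. Foxglove is a plant."))   # [4]

The point survives the translation: the matcher finds every “fox” without the least idea of what a fox is.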
