
If I were convinced that Newcomb’s situation truly existed, I would take box B only. I’m not saying that’s “right” but only that that’s what I’d do. This seems to be the most popular choice, and it is in keeping with the game-theoretic analysis of the prisoner’s dilemma, for what that’s worth. Newcomb felt you should take only box B. Many philosophers take the opposite position.

Nozick’s Two Principles of Choice

One of the most insightful analyses of the paradox is Robert Nozick’s “Newcomb’s Problem and Two Principles of Choice,” published in Essays in Honor of Carl G. Hempel (1969). Nozick pointed out that the paradox calls two time-tested principles of game theory into conflict. One principle is that of dominance. If a certain strategy is always better than another, whatever the circumstances, then it is said to dominate the other strategy and should be preferred over it. Here the strategy of taking both boxes dominates the strategy of taking B only. No matter what the psychic did, you are $1000 the richer for taking both boxes.
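The dominance comparison can be made mechanical. Here is a minimal sketch in Python; the payoff figures follow the $1000/$1 million version of the paradox discussed here:

```python
# Newcomb payoffs in dollars. Rows: your choice. Columns: the psychic's prediction.
payoff = {
    "take B only": {"predicted B only": 1_000_000, "predicted both": 0},
    "take both":   {"predicted B only": 1_001_000, "predicted both": 1_000},
}

def dominates(table, a, b):
    """True if choice a is never worse than b and strictly better in some state."""
    states = table[a].keys()
    return (all(table[a][s] >= table[b][s] for s in states)
            and any(table[a][s] > table[b][s] for s in states))

print(dominates(payoff, "take both", "take B only"))  # True: always $1000 richer
```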

Just as unquestioned is the principle of expected utility. It says that if you total the gains from alternative strategies (as done above), you should choose the one with a greater expected gain. No one had expected that these two principles could be in conflict.
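To see how the two principles collide, tally the expected utilities. A sketch, assuming a predictor who is right 90 percent of the time (the figure used later in this chapter):

```python
accuracy = 0.90  # assumed hit rate of the predictor

# Taking B only: with probability `accuracy` the psychic foresaw it, so B holds $1,000,000.
eu_b_only = accuracy * 1_000_000 + (1 - accuracy) * 0

# Taking both: with probability `accuracy` the psychic foresaw that too, so B is empty.
eu_both = accuracy * 1_000 + (1 - accuracy) * 1_001_000

print(eu_b_only, eu_both)  # about 900,000 vs about 101,000: expected utility favors B only
```

So dominance says take both boxes; expected utility says take B only.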

It’s not that simple, though. Whether a strategy dominates another can depend on how you look at the situation. Suppose you must choose between bets on two horses, Sea Biscuit and Hard Tack. It costs $5 to bet on Sea Biscuit, and you win $50 (plus the return of the original $5) if he wins. It costs $6 to bet on Hard Tack, and you are ahead $49 if he wins. This can be summarized in a table:

                        SEA BISCUIT WINS    HARD TACK WINS
Bet on Sea Biscuit:     Win $50             Lose $5
Bet on Hard Tack:       Lose $6             Win $49

What should you do here? Neither of the two permissible wagers dominates. Obviously, it’s better to bet on Sea Biscuit if Sea Biscuit wins, and on Hard Tack if Hard Tack wins. In this case you must use the expected utility principle, which looks at the probabilities of the two horses winning. Suppose that Hard Tack actually has a 90 percent chance of winning and Sea Biscuit only a 10 percent chance. Then you would certainly want to bet on Hard Tack.
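A quick calculation with those probabilities bears this out (a sketch in Python):

```python
p_hard_tack = 0.9    # Hard Tack's chance of winning
p_sea_biscuit = 0.1  # Sea Biscuit's chance of winning

# Net gains and losses come straight from the table above.
eu_sea_biscuit = p_sea_biscuit * 50 + p_hard_tack * (-5)  # about 0.5
eu_hard_tack   = p_hard_tack * 49 + p_sea_biscuit * (-6)  # about 43.5

print(eu_sea_biscuit, eu_hard_tack)  # Hard Tack is by far the better bet
```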

Now look at things a little differently. Instead of categorizing the possible states of affairs by the horse that wins, talk about your luck. Consider your gains or losses if you are lucky in your bet or unlucky in your bet:

                        YOUR HORSE WINS    YOUR HORSE LOSES
Bet on Sea Biscuit:     Win $50            Lose $5
Bet on Hard Tack:       Win $49            Lose $6

Now betting on Sea Biscuit does dominate betting on Hard Tack. If your horse wins, you are ahead a dollar, and if your horse loses, your loss is a dollar less.
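Running the same dominance test over both framings of the identical payoffs shows how the relabeling manufactures dominance (a self-contained sketch):

```python
def dominates(table, a, b):
    """a dominates b: never worse in any column, strictly better in at least one."""
    cols = table[a].keys()
    return (all(table[a][c] >= table[b][c] for c in cols)
            and any(table[a][c] > table[b][c] for c in cols))

# Framing 1: columns named by the winning horse.
by_winner = {
    "bet Sea Biscuit": {"Sea Biscuit wins": 50, "Hard Tack wins": -5},
    "bet Hard Tack":   {"Sea Biscuit wins": -6, "Hard Tack wins": 49},
}

# Framing 2: the same payoffs, columns named by your luck.
by_luck = {
    "bet Sea Biscuit": {"your horse wins": 50, "your horse loses": -5},
    "bet Hard Tack":   {"your horse wins": 49, "your horse loses": -6},
}

print(dominates(by_winner, "bet Sea Biscuit", "bet Hard Tack"))  # False: no dominance
print(dominates(by_luck,   "bet Sea Biscuit", "bet Hard Tack"))  # True: dominance appears
```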

Something is peculiar here. Both tables accurately describe the payoffs. The difference may suggest the one between Goodman’s “grue” and “green.” But the two ways of categorizing the situation (by the name of the winning horse and by whether your horse wins or loses) are natural ways of talking, not made-up categories like “grue” and “bleen.”

The conflict, Nozick surmised, comes from the fact that the second states (your horse wins/your horse loses) are not “probabilistically independent” of your decision. Your choice of which horse to bet on influences the chance of being lucky or unlucky. Sea Biscuit is the long shot. Bet on him, and chances are that you will be unlucky. Betting on the shoo-in, Hard Tack, raises the odds that you will be lucky.

From this Nozick concluded that it is valid to use the dominance principle only when one’s choice does not affect the outcome. Try out this rule on the paradox. The dominance principle, which tells you to take both boxes, is unreliable only if your choice can influence the psychic’s prediction. That would be possible only if there were backward causality, which is generally presumed impossible. So the rule endorses taking both boxes, yet the expected-utility reasoning loses none of its pull; the rule alone fails to resolve the paradox.

Nozick then considered other intriguing scenarios. It is possible that one’s choice has no causal effect on an outcome but is nonetheless probabilistically linked to it.

What about a hypochondriac who has memorized the symptoms of all known diseases and reasons thus: “I’m a little thirsty; I think I’ll have a glass of water. I’ve sure been drinking a lot of fluids lately. Oh-oh! Excessive thirst is a symptom of diabetes insipidus. Do I really want that glass of water? Guess not.”

Everyone agrees that this is ridiculous. Drinking water does not cause diabetes. It is the height of absurdity to base the choice of whether to have a glass of water on its pathological correlations. This is not to say that the pathological correlations aren’t legitimate. A desire for water is (very slight) confirmation for a hypothesis that one has a disease whose symptoms include a craving for water. The fallacy is basing a choice on the correlations. The hypochondriac is (literally) treating the symptoms, not the disease.

Nozick compared Newcomb’s situation to a prisoner’s dilemma with two identical twins. A prisoner and his identical twin are being held incommunicado, each independently considering whether to turn state’s evidence. Suppose, Nozick said, it has been established that behavior in prisoner’s dilemma situations is genetically determined. Some people’s genes cause them to cooperate; others are congenitally inclined to defect. Environment and other factors enter into it too, but say that one’s choice is 90 percent determined by the genes. Neither prisoner knows which gene he and his twin have. Each prisoner might reason like this: If I defect, chances are my twin brother will defect too, having identical genes. That will be bad for both of us. If I cooperate, my twin probably will as well—which isn’t a bad outcome at all. So I should cooperate (with the twin; refuse to turn state’s evidence).

The diagram looks like this. The outcomes are expressed for both twins in arbitrary units. “(0,10)” means the worst possible outcome for twin 1 and the best possible outcome for twin 2. The two genetically favored (?!?) outcomes, where both twins act identically, are marked with asterisks.

                                   TWIN 2 TURNS         TWIN 2 REFUSES
                                   STATE’S EVIDENCE     TO TALK
Twin 1 turns state’s evidence:     *1,1*                10,0
Twin 1 refuses to talk:            0,10                 *5,5*
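Plugging the 90 percent genetic correlation into the expected-utility rule shows why the twins’ reasoning is tempting (a sketch using the arbitrary units of the table):

```python
p_same = 0.9  # assumed chance your twin, sharing your genes, makes the same choice

# Twin 1's payoffs, read off the table: (turn, turn) = 1, (turn, refuse) = 10,
# (refuse, turn) = 0, (refuse, refuse) = 5.
eu_turn   = p_same * 1 + (1 - p_same) * 10  # about 1.9
eu_refuse = p_same * 5 + (1 - p_same) * 0   # about 4.5

print(eu_turn, eu_refuse)  # the correlation makes refusing to talk look better
```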

Is not this reasoning just as silly as the hypochondriac’s? Twin 1’s choice cannot affect twin 2’s decision, much less “reach back” and affect their genes. Either the twins have the gene or they don’t. Although cooperating may not be such a bad idea, it is unreasonable to use the genetic correlation to make the decision.

Nozick’s essay ends by asking how the Newcomb situation is any different from the twins’ reasoning. Nozick concluded that “if the actions or decisions to do the actions do not affect, help bring about, influence, etc., which state obtains, then whatever the conditional probabilities … one should perform the dominant action.” He thus recommends taking both boxes.

Must It Be a Hoax?

Martin Gardner made the interesting claim that the type of prediction required is impossible: Any real-world Newcomb experiment must be a hoax, or the evidence of the predictor’s accuracy must be invalid. Were he ever faced with a real Newcomb experiment, Gardner said, it would be “as if someone asked me to put 91 eggs in 13 boxes, so each box held seven eggs, and then added that an experiment had proved that 91 is prime. On that assumption, one or more eggs would be left over. I would be given a million dollars for each leftover egg, and 10 cents if there were none. Unable to believe that 91 is prime, I would proceed to put seven eggs in each box, take my 10 cents and not worry about having made a bad decision.”

If the experiment as stated is inherently impossible, it changes everything. No prediction means no paradox, and you certainly should take both boxes. Still, the practical difficulties of performing the experiment should be irrelevant. Even whether there is or is not such a thing as ESP or an omniscient being is probably beside the point. The question is whether there is any possible way of effecting that type of prediction. It could be that there is something self-contradictory about prediction of another person’s actions (especially where the person knows that his actions have been predicted).

No one can predict arbitrary human actions with the accuracy Newcomb’s paradox demands. This is rarely cited as a fundamental flaw in the situation, however. The idea that the human body, including the brain, is subject to the same physical laws as the rest of the universe is accepted as a commonplace in both the scientific and philosophic communities. If human actions are deterministic, then we must be open to the possibility of predicting them.

It seems to me that a Newcomb experiment could be carried out in practice. The method I propose is a frank cheat, but perhaps it does not fundamentally change the situation. Let the psychic be a fake who uses unknown trickery to accomplish the feat. The trickery need not (and must not) violate the rules. Possibly the psychic has discovered that, after mulling over the situation, 90 percent of the general public invariably takes box B only. In that case, he always predicts the subject will take B only, and he is right the claimed 90 percent of the time.
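A toy simulation makes the cheat concrete; the 90 percent figure from the text is the only input (a sketch):

```python
import random

random.seed(1)  # reproducible runs

def subject_choice():
    # Assumption from the text: 90 percent of the public takes box B only.
    return "B only" if random.random() < 0.9 else "both"

trials = 10_000
hits = sum(subject_choice() == "B only" for _ in range(trials))
print(hits / trials)  # close to 0.90: the fake psychic's "predictions" hold up
```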

After discussing the paradox in a 1973 issue of Scientific American, Martin Gardner reported that the people writing to the magazine favored taking box B only by a margin of 2.5 to 1. If the correspondents were typical, then anyone could predict correctly more than 70 percent of the time by always saying the subject will take box B only. A 70 percent accuracy is well above the 50.05 percent threshold needed for the paradox with amounts of $1000 and $1 million. There is even enough slack for a cagey “psychic” to throw in an occasional prediction of both boxes to throw onlookers off the track.
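The 50.05 percent threshold falls out of setting the two expected utilities equal (a quick check):

```python
# At the break-even accuracy p the two expected utilities are equal:
#   p * 1,000,000  =  p * 1,000 + (1 - p) * 1,001,000
# Solving gives p = 1,001,000 / 2,000,000.
p = 1_001_000 / 2_000_000
print(p)  # 0.5005: above this accuracy, taking B only has the higher expectation
```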

The subjects must, of course, remain ignorant of this method of “prediction.” In view of the success of many fake psychics (who likewise conceal their methods from their subjects), I think it possible that a charlatan could attain a track record of correct predictions sufficient to allow a Newcomb experiment.

Nonetheless, there is a larger and more interesting question of whether something as complex as human behavior can be predicted. Human beings are capable of defying predictions.

Two Types of Prediction

Science is good at predicting some things. Eclipses for the year 5000 A.D. can be predicted with certainty and relative ease. The morning weather report is often wrong by the afternoon. Why the disparity?

Evidently some phenomena are more predictable than others. This stems from the fact that there are two kinds of prediction. One variety uses modeling or simulation. You create a representation of the subject of the prediction that is as complex as the subject itself. The other, simpler kind of prediction uses “shortcuts” to accomplish the same thing.

What day of the week will it be 100 days from now? A calendar typifies the modeling approach. Each of the 100 future days is represented by a square of paper on a leaf of the calendar. Count forward 100 days, and read off the answer.

You could also recognize this shortcut: Divide 100 by 7, and take the remainder. The answer will be that many days of the week ahead of the current day. One hundred divided by 7 leaves a remainder of 2. If today is Monday, two days from now will be Wednesday. One hundred days from now will also be Wednesday.

Whenever possible, we prefer the shortcut method. What if you wanted to know the day of the week 1,000,000 days from now? There may not be any calendar on earth that covers that day. You would have to make your own calendars for the next several thousand years. The shortcut method avoids that kind of busywork. Dividing 1,000,000 by 7 and taking the remainder is scarcely more difficult than dividing 100.
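Both styles of prediction are easy to spell out; a sketch contrasting the calendar-style model with the remainder shortcut:

```python
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def by_model(today, n):
    """Calendar-style prediction: step through all n days one at a time."""
    i = DAYS.index(today)
    for _ in range(n):
        i = (i + 1) % 7
    return DAYS[i]

def by_shortcut(today, n):
    """Shortcut: only the remainder of n divided by 7 matters."""
    return DAYS[(DAYS.index(today) + n % 7) % 7]

print(by_model("Monday", 100), by_shortcut("Monday", 100))  # Wednesday Wednesday
print(by_shortcut("Monday", 1_000_000))                     # same answer, no calendar needed
```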

Unfortunately, we are often forced to resort to a model. Some phenomena allow no shortcuts in their prediction. No method, no model that is any simpler than the phenomenon itself, will predict it.

Chaos

Blow up a toy balloon without tying it, then let go. The balloon’s wild path around the room is unpredictable. Could you, by measuring the exact position and degree of inflation of the balloon at the moment of release, predict its path? Probably not. No matter how accurate your measurements, they wouldn’t be accurate enough.

Determining the initial state of the balloon and room entails a lot more information than has been mentioned here. The pressure, temperature, and velocity of the air at each point in the room would have to be known, for the balloon interacts with the air it passes through. Eventually the balloon will bump against walls or furniture, so an exact knowledge of everything in the room would be necessary.

Even this knowledge would fall short. The balloon would still go this way and that, and end up in a different spot each time it is released. This failure of prediction is remarkable in a way. The balloon does not invoke unknown laws of physics. Its motion is a matter of air pressure, gravity, and inertia. If we can predict the orbit of Neptune millennia into the future, why do we fail with a toy balloon?

The answer is chaos. This is a relatively new term for phenomena that are unpredictable though deterministic. Science mostly deals in the predictable. Yet the unpredictable is all around us: a crack of lightning, the spurting of a bottle of champagne, the shuffling of a deck of cards, the meandering of rivers. There is reason to consider chaos the norm and predictable phenomena the freaks.
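A standard illustration of deterministic unpredictability (using the logistic map, a stand-in not from this book, rather than the balloon itself) is how a minuscule measurement error swamps the forecast:

```python
def logistic(x, steps, r=4.0):
    """Iterate the deterministic map x -> r * x * (1 - x)."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

x0 = 0.3
print(logistic(x0, 50))          # the "true" trajectory
print(logistic(x0 + 1e-10, 50))  # measurement off by one part in ten billion:
                                 # after 50 steps the two predictions bear no relation
```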
