Labyrinths of Reason
by William Poundstone

Like the expectancy paradox, the paradox of the preface questions the role of deductive reasoning in situations involving inductive probabilities rather than certainties. Since probabilities rather than certainties are the lot of the scientist, it deserves a thoughtful response.

Our worldview is a set of beliefs, mostly justified and mostly true (so we think, anyway). The paradox of the preface asks whether it is possible to have justified beliefs that are logically contradictory. Note the paradox within the paradox. The author has a set of beliefs (that each statement, considered individually, is true, plus the belief that the book contains an error) that contains a contradiction. Suppose the book proper makes 1000 distinct assertions, which are mutually compatible. The claim of the preface (“At least one statement in this book is wrong”) is the 1001st assertion. This yields the most recherché of contradictions, for any 1000 of the 1001 statements are logically consistent, even though the complete set of 1001 is self-contradictory.

Probability enters more explicitly into the related “lottery paradox” of Henry E. Kyburg, Jr. (1961). No one who buys a lottery ticket can reasonably expect to win; the odds against it are too high. Yet everyone’s expectation of not winning conflicts with the fact that somebody will win. In practice the chain of suspect reasoning goes a step further. Many lottery players justify their wagers with “Someone has to win, so why not me?”—a fallacious rationale that is echoed in state lottery advertising. Kyburg felt that his paradox shows that one’s set of justified beliefs can be logically inconsistent.
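The threshold reading of “reasonable expectation” can be made concrete. The sketch below uses a hypothetical lottery with exactly one winner among a million tickets; the 0.99 acceptance threshold is an illustrative assumption, not Kyburg’s figure. Each individual belief (“this ticket loses”) clears the bar, while their conjunction (“every ticket loses”) is certainly false.

```python
# Lottery paradox, sketched: each ticket holder is individually justified
# in believing they will lose, yet someone must win.
N = 1_000_000  # hypothetical lottery: N tickets, exactly one winner

p_lose = 1 - 1 / N  # probability that any given ticket loses
print(f"P(a given ticket loses) = {p_lose:.6f}")

# Suppose "justified belief" means probability above some threshold
# (the 0.99 here is an arbitrary stand-in).
threshold = 0.99
justified_each_loses = p_lose > threshold  # True for every single ticket

# But the conjunction "every ticket loses" is false by construction:
# exactly one ticket wins, so its probability is zero.
p_all_lose = 0.0
print(justified_each_loses, p_all_lose)
```

Raising the threshold only delays the problem: for any threshold below 1, a large enough lottery reproduces the inconsistency.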

One subtext of Makinson’s and Kyburg’s paradoxes is the way a large number of beliefs may conceal contradictions. A single statement can subtly introduce contradiction in a set of millions. Take this sorites:

1. Alice is a logician.

2. All logicians eat pork chops.

3. All pork-chop eaters are Cretans.

4. All Cretans are liars.

5. All liars are cabdrivers.

…

999,997. All Texans are rich.

999,998. All rich people are unhappy.

999,999. All unhappy people smoke cigarettes.

1,000,000. Alice doesn’t smoke.

The dots signify that premises 6 through 999,996 are additional statements of the type “all X’s are Y’s,” so that one may ultimately conclude that all logicians are cigarette smokers, and from that, that Alice smokes. That contradicts the 1,000,000th premise, so the set is unsatisfiable (self-contradictory).

Nothing so remarkable there. The surprising thing is that removing any one premise makes the set satisfiable. Strike out premise 4. Then you can conclude that Alice is a Cretan, that all liars smoke, and that Alice does not smoke (and thus isn’t a liar).

In this example, the premises are in a neat order to facilitate seeing the contradiction. If the million premises were shuffled into random order, it would be a more arduous task to see that the set was self-contradictory. If some of the statements were more complicated, it would be more difficult yet. A set of beliefs is like the Borromean rings, or a mechanical puzzle where the removal of one piece causes all the others to fall apart. The influence of each assertion can “ripple out” and affect the whole set.
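The fragility of such a premise set can be checked mechanically. The sketch below is not from the book: the chain is compressed to the nine properties the text actually names, and each boolean “link” stands for one “all X are Y” premise. The full set contradicts “Alice doesn’t smoke,” while striking any one link restores satisfiability.

```python
# A miniature sorites: "Alice is a logician", a chain of "all X are Y"
# premises, and the final premise "Alice doesn't smoke".
chain = ["logician", "pork-chop eater", "Cretan", "liar",
         "cabdriver", "Texan", "rich", "unhappy", "smoker"]

def satisfiable(links):
    """The set is satisfiable iff the implication chain from 'logician'
    to 'smoker' is broken somewhere (Alice is a logician, not a smoker).
    links[i] == True means premise "all chain[i] are chain[i+1]" is kept."""
    reachable = {"logician"}          # properties Alice provably has
    for i, kept in enumerate(links):
        if kept and chain[i] in reachable:
            reachable.add(chain[i + 1])
    return "smoker" not in reachable

full = [True] * (len(chain) - 1)
print(satisfiable(full))              # the complete set is contradictory

# Removing any single "all X are Y" premise breaks the chain to "smoker".
for i in range(len(full)):
    pruned = full.copy()
    pruned[i] = False
    assert satisfiable(pruned)
```

With the premises in random order the same check still works, but a human reader would have to reconstruct the chain first, which is the book’s point.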

Pollock’s Gas Chamber

When embroiled in paradox, one’s impulse is to give up one or more of the original assumptions that have led to contradiction. The question is, how do you decide which belief to give up? John L. Pollock resolved the paradox of the preface through rules of confirmation he illustrated with this thought experiment:

A room is occasionally filled with a poisonous green gas. To warn those who might want to enter, the room is supplied with a warning system. The system (which was designed by a committee) works like this: A warning light is visible through a window in the door to the room. The light is green (for “go”) when it is safe to enter. It is white (the color of death in some Asian countries) when the room contains the deadly gas.

Unfortunately, the system is worthless because the green gas makes the light look green when it is actually white. The light always looks green, gas or no gas. The committee has remedied this horrible deficiency by mounting a closed-circuit television camera just inches from the warning light. The video signal goes to a color monitor outside the room. The monitor accurately reproduces the color of the warning light, whether the room contains the gas or not. A sign on the door warns the public to ignore the apparent color of the light through the window and instead consult the television monitor.

Pollock’s kludgy warning system is an allegory of our imperfect knowledge of the world. The light is green or white; we do not know which. It looks green through the window. That is prima facie evidence for believing it to be green. The light looks white on the TV screen. That is reason for believing it to be white. But if it is green it cannot be white, and vice versa. We must give up one of these initially credible suppositions.

Pollock notes that there is more than one way of rejecting a belief. You might say, “The light looks green through the window. I know from experience that most windows are made out of colorless glass that does not distort colors, and that air is colorless too. Therefore, the light’s appearance through the glass is justification for believing that it is green. If it’s green, it can’t be white. So it’s not white.”

Of course, you could just as easily say something like this: “The light is white on the television monitor. Things usually are the color they appear to be on color TV—that’s the whole point of having color TV. Therefore, the light’s appearance on the monitor is a good reason for believing that it’s white. If it’s white, it can’t be green. So it’s not green.”

We have a mini-paradox, in that reasoning from a small set of observations leads to contradiction. Each line of reasoning rebuts the other in what seems to be the strongest way possible.

The resolution is obvious. The light must really be white, as it appears on the TV screen. But we aren’t using the second argument above. The second argument is no stronger than the first—maybe slightly weaker. (When what you see on TV conflicts with what you see directly, you probably prefer the evidence of your own eyes.) There is another argument for the light being white, symbolized by the sign on the door.

All empirical beliefs are defeasible. It is always possible that you could learn something (a defeater) to invalidate a belief. There are two types of defeaters, rebutting defeaters and undercutting defeaters.

A rebutting defeater flatly asserts that the belief is wrong. Learning of a colony of white ravens in the Copenhagen zoo would be a rebutting defeater of the hypothesis that all ravens are black. You would still have all the evidence you ever had for this hypothesis (all the sightings of black ravens) and it would still “count,” yet you would be forced to admit that the hypothesis is false.

An undercutting defeater demonstrates that the evidence for the belief is invalid. Learning that you are actually a brain in a vat would be an undercutting defeater for everything you believe about the external world. An undercutting defeater puts the “evidence” for a belief in a new light, and shows that it cannot be used to justify the belief. The belief might still happen to be true, but the supposed evidence is bad.

It sounds like the rebutting defeater is the stronger of the two. Actually, said Pollock, undercutting defeaters take precedence over rebutting defeaters. It is like the difference between an interesting debate and a boring one: In the latter, the opponents alternate telling each other they’re wrong; in the former, they say why their opponent is wrong.

The empirically justified conclusions about the light (that it is green based on its appearance through the window; that it is white based on its appearance on the TV screen) are rebutting defeaters of each other. The situation is resolved only through the sign, an undercutting defeater. By explaining that the light may look deceptively green when seen through the green gas, it gives us reason to throw out one belief and keep the other.
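The precedence rule can be rendered as a toy model. The labels and sets below are an illustrative encoding of the gas-chamber scenario, not Pollock’s actual formal system: beliefs whose evidence is undercut are discarded first, and rebuttals are weighed only among the survivors.

```python
# Toy encoding of Pollock-style defeat for the gas-chamber scenario.
beliefs = {
    "light-is-green": "seen through the window",
    "light-is-white": "seen on the TV monitor",
}
# Rebutting defeat: each belief flatly denies the other.
rebuts = [("light-is-green", "light-is-white"),
          ("light-is-white", "light-is-green")]
# Undercutting defeat: the sign attacks the *evidence* for the window belief
# (the green gas tints what you see through the window).
undercuts = [("sign-on-door", "light-is-green")]

# Undercutters take precedence: discard any belief whose evidence is
# undercut, then look for rebuttals among the survivors.
undercut_targets = {target for _, target in undercuts}
surviving = {b for b in beliefs if b not in undercut_targets}
remaining_rebuttals = [(a, b) for a, b in rebuts
                       if a in surviving and b in surviving]
print(surviving, remaining_rebuttals)
```

With the window belief undercut, the stalemate of mutual rebuttals dissolves and only the TV-screen belief survives.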

This principle of the dominance of undercutting defeaters helps make sense of most of the paradoxes of this chapter (and the unexpected hanging as well). The argument of the author’s friend in the paradox of the preface is a rebutting defeater of the singled-out statement. It says the statement is wrong, without saying why. The reasoning is quite external to the statement. In fact, the content of the covered (and unread) statement never enters into it.

The author could cite an undercutting defeater for the friend’s argument. The friend’s reasoning rests on the belief that the book contains an error. Although there may be excellent empirical evidence for that belief (finding mistakes and typos in other books), it would surely be undermined were it known for a fact that all the book’s statements other than the one the friend singled out are correct. Then the only way the book could contain an error would be for the covered statement to be wrong—and there is no reason to believe that it is any more likely to be wrong than the other statements. When push comes to shove, you should go with the undercutting defeater.

The paradox of the preface is a facetious paradox. We knew all along that the friend’s argument was wrong; the riddle was to say exactly why. The expectancy paradox is a tougher nut to crack. Applying Pollock’s principle to it yields the following resolution (not necessarily the last word):

An argument that an experimental result is false is a rebutting defeater; showing that an experiment is invalid is an undercutting defeater. In case of conflict, Pollock would have us favor a demonstration that the experiment on the expectancy effect is invalid rather than that it is false.

Take the strong version of the paradox, where the blue-ribbon committee of famous scientists has supervised the experiment, and we therefore are assured of the experiment’s validity. The rebutting defeater is this: If the results are true, then the experiment must be invalid. But since we know the experiment is valid (thanks to the expert supervision), the results must not be true (by modus tollens).

The undercutting defeater goes: If the experiment is valid and true, then our subconscious expectations have compromised the experiment. Regrettably, we conclude that the experiment is invalid. (For what it’s worth, this seems the more reasonable of the two positions.)

And finally, for the unexpected hanging (which resembles the paradox of the preface in the plurality of days/statements): The prisoner’s reasoning rebuts the possibility of his being hanged on each of the seven days of the week. This set of beliefs creates its own undercutting defeater, for the executioner, aware of the prisoner’s beliefs, can hang him any day. Favoring the undercutting defeater gives us Quine’s position that the prisoner is wrong.

You might wonder when you can conclude that something is established beyond all doubt. The answer is: never. This is the trouble with accepting indefeasibility as a fourth criterion of knowledge. No belief is immune from defeaters—not even a belief that is a defeater.

A watchman approaches and inspects the monitor outside Pollock’s gas chamber. “Kids!” he mutters. “They think it’s some big joke to play with the controls of this thing! Just wait till someone gets killed, then they’ll do something,” he grouses, twiddling the dial and turning the image of the light bulb a vivid green.

1. There is a 1 in 10 chance it is correct, and a Gettier counterexample!

The “Thomson lamp” (after James F. Thomson) looks like any other lamp with a toggling on-off switch. Push the switch once and the lamp is on. Push it again to turn it off. Push it still another time to turn it on again. A supernatural being likes to play with the lamp as follows: It turns the lamp on for 1/2 minute, then switches it off for 1/4 minute, then switches it on again for 1/8 minute, off for 1/16 minute, and so on. This familiar infinite series (1/2 + 1/4 + 1/8 + …) adds up to unity. So at the end of one minute, the being has pushed the switch an infinite number of times. Is the lamp on or off at the end of the minute?

Now, sure, everyone knows that the lamp is physically impossible. Mundane physics shouldn’t hamper our imaginations, though. The description of the lamp’s operation is as logically precise as it can be. It seems indisputable that we have all the necessary information to say if the lamp would be on or off. It seems equally indisputable that the lamp has to be either on or off.

But to answer the riddle of the Thomson lamp would be preposterous. It would be tantamount to saying whether the biggest whole number is even or odd!
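The arithmetic behind the riddle is easy to verify with exact fractions. In the sketch below (50 pushes standing in for infinitely many), every finite number of pushes leaves the lamp in a definite state and the elapsed time strictly short of one minute; nothing in the finite stages determines a state at the minute itself.

```python
from fractions import Fraction

# Sum the switching intervals 1/2 + 1/4 + ... exactly: after n pushes the
# elapsed time is 1 - 1/2**n minutes, approaching but never reaching 1.
elapsed = Fraction(0)
state_on = False                      # lamp starts off
for n in range(1, 51):
    state_on = not state_on           # each push toggles the lamp
    elapsed += Fraction(1, 2 ** n)    # the nth interval lasts 1/2**n minute

print(elapsed)                        # 1 - 1/2**50: still short of a minute
print(state_on)                       # parity of the push count so far
```

The parity of the push count decides the state at every finite stage; asking for the state at the full minute is asking for the parity of an infinite count, which is exactly the biggest-whole-number question.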

The Pi Machine

The unease is greater yet with the “pi machine.” This amazing device looks like an old-fashioned cash register. Switch it on, and the pi machine swiftly calculates the digits of pi (the length of a circle’s circumference when its diameter is 1). Pi is an endless, nonrepeating string of digits: 3.14159265 … The pi machine telescopes this infinity by computing each successive digit in only half the time of the preceding one. As it determines each digit, the numeral pops up in a window at the top of the machine. Only the single most recently calculated digit is visible at any instant.
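The machine’s timing rests on the same geometric series as the lamp’s switchings. A small sketch (the half-second for the first digit is an arbitrary assumption) confirms that the total computing time is finite even though the digits are not:

```python
from fractions import Fraction

# Hypothetical pi-machine schedule: the nth digit takes half as long as
# the (n-1)th, with the first digit taking 1/2 second.
def time_for_first_n_digits(n):
    return sum(Fraction(1, 2 ** k) for k in range(1, n + 1))

print(time_for_first_n_digits(10))    # 1023/1024 of a second

# The total converges to 1 second, yet "the digit showing at t = 1" is
# undefined: pi never repeats, so no single digit is the last displayed.
```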
