
The term NP (“nondeterministic polynomial-time”) refers to something known as a nondeterministic computer: specifically, a nondeterministic version of the idealized computer known as a Turing machine, envisioned by computer pioneer Alan Turing. A nondeterministic computer is not quite what it may sound like. It sounds like a computer that works randomly or follows a less than exact “algorithm” (or a computer that has free will!).

The operation of a nondeterministic computer can be imagined this way: Instead of one computer we have a large and potentially infinite number of computers. Each computer is assigned one of the possible solutions of a problem, and is charged with checking out that solution.

If, for instance, the problem is to find a path through a maze, the ensemble of computers (actually robots in this case) starts at the entrance. Every time the army of robots comes to a fork in the labyrinth, they split up into as many parties as there are paths. The search parties keep splitting up at each new fork, and eventually all possible routes are explored.
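A minimal sketch of this splitting search in Python; the maze layout and the names entrance, goal, and the lettered junctions are invented purely for illustration:

    from collections import deque

    # A toy maze as a graph: each junction lists the junctions reachable from it.
    MAZE = {
        "entrance": ["A", "B"],
        "A": ["goal", "C"],
        "B": ["C"],
        "C": ["B", "goal"],
        "goal": [],
    }

    def explore_all_routes(maze, start="entrance", goal="goal"):
        """Mimic the army of robots: at every fork the search splits
        into one party per branch, so every route gets explored."""
        routes = []
        parties = deque([[start]])        # each entry is one party's path so far
        while parties:
            path = parties.popleft()
            node = path[-1]
            if node == goal:
                routes.append(path)       # this robot reached the goal
                continue
            for branch in maze[node]:
                if branch not in path:    # no walking in circles
                    parties.append(path + [branch])  # split: one party per branch
        return routes

    print(explore_all_routes(MAZE))
    # [['entrance', 'A', 'goal'], ['entrance', 'A', 'C', 'goal'],
    #  ['entrance', 'B', 'C', 'goal']]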

At least one of the robots will indeed travel from entrance to goal. Let our attention focus on that machine. How long did it take? Chances are, it didn’t take very long at all. Solutions to mazes are generally short; it is going down all the wrong paths and backtracking that makes them so difficult. The time a nondeterministic computer takes to “solve” a problem is exactly the time it takes to check a guessed solution.
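Checking a guessed route, by contrast, takes a single pass down the route. A sketch, reusing the toy MAZE graph above:

    def check_route(maze, route, start="entrance", goal="goal"):
        """Verify a guessed solution in time proportional to its length,
        not to the number of possible routes."""
        if not route or route[0] != start or route[-1] != goal:
            return False
        # Every consecutive pair of junctions must be joined by a corridor.
        return all(b in maze[a] for a, b in zip(route, route[1:]))

    print(check_route(MAZE, ["entrance", "A", "goal"]))   # True
    print(check_route(MAZE, ["entrance", "B", "goal"]))   # False: no such corridor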

The NP problems parallel the set of questions open to scientific inquiry. The scientist who tries to establish a new truth is in much the same position as the oracle above. Science is mostly concerned with hypotheses similar to answers to NP problems: hypotheses that may be verified or refuted readily.

There is an even more striking connection between the NP problems and science—logical deduction itself is an NP problem.

The Hardest Problem

What is the hardest problem in the class NP? In 1971 Stephen Cook proved that SATISFIABILITY is at least as hard as any problem in NP. His proof showed that no problem in NP can be any harder because all NP problems may be transformed into SATISFIABILITY problems.

This was followed in 1972 by Richard Karp’s discovery that many types of intractable problems share this distinction with SATISFIABILITY. Diverse problems from graph theory, logic, mathematical games, number theory, cryptography, and computer programming are just as hard as SATISFIABILITY. The class of hardest NP problems is called “NP-complete.” In the Venn diagram, NP-complete is a circle inside NP but outside P.

Strictly speaking, NP-complete is a shadow land that may not properly exist. It has not been proven that the NP-complete problems cannot be solved in polynomial time. The evidence is only empirical: For years, theorists and computer programmers have tried to come up with polynomial-time solutions to NP-complete problems, and have always failed. In practice, proving that a problem is NP-complete is considered strong evidence that it cannot be solved efficiently.

It remains barely conceivable that every problem in NP can actually be solved in polynomial time by some unknown super-algorithm. In that case, P, NP, and NP-complete would be identical, and would be represented as a single fused circle.

If an efficient solution, a magic key, exists, then there are virtually no limits on what we may deduce, Sherlock Holmes-like, from logical premises. If, on the other hand, there is no efficient solution to SATISFIABILITY and the NP-complete problems, there is a world of truths that, as a practical matter, must go unrecognized. It is strongly suspected that there is no magic key: We are all Dr. Watsons who miss the implications of much of what we see.

This means that there is a relatively sharp cutoff in the size of a solvable logic problem. Just as a maze larger than a certain size will be practically unsolvable, so will a logic problem of greater than a certain complexity. It evidently follows that our deductions about the real world are limited too.

A Catalogue of Experience

Paradox is a more significant and far-reaching concept than it may seem. If one holds beliefs that are contradictory, then one cannot have justification for at least some of those beliefs. Without justification there is no knowledge. Therefore, understanding a set of beliefs entails (at a minimum) being able to detect a contradiction in those beliefs. For that reason, the problem of detecting paradox, SATISFIABILITY, is a delimiter of knowledge. The difficulty of SATISFIABILITY is inherited by any attempt to understand implications fully.

Newton’s theory of universal gravitation was founded on nothing that the ancient Greeks didn’t know. The germ theory of disease could have been advanced and confirmed centuries before it was, if someone had made the right connections. It follows that there must be yet undiscovered generalizations that are “overdue” right now. Quite possibly, we have all the necessary facts needed to deduce how to prevent cancer or the location of a tenth planet, but no one is putting them together in the right order. More than that: Maybe we’re missing all sorts of logical conclusions about the world. They could be implicit in everything we see and hear, but might be just a little too complex to grasp.

“The grand aim of all science is to cover the greatest number of empirical facts by logical deduction from the smallest number of hypotheses or axioms,” Einstein wrote. Take the sum of all human experience: everything that anyone ever saw, felt, heard, tasted, or smelled from the Ice Age up to this instant. This is the starting point for any codification of knowledge. In principle this information could be assembled into a vast catalogue. Let the catalogue be a simple list of experiences, without any interpretation of them. Dreams, delusions, hallucinations, mirages, and optical illusions are tabulated in detail, side by side with “real” experiences. It is left to the reader of the catalogue to determine which (if any) are the real experiences.

In the catalogue of experience must be every observation on which the naturalistic sciences are founded. A description of every bird, star, fern, crystal, and Paramecium ever seen would be found somewhere. The catalogue would also contain the minutest details of every scientific experiment ever conducted. It would include both Michelson’s and Morley’s impressions of how the late afternoon sun glinted on their apparatus’s mirrors on certain dates in 1887; the color, size, shape, velocity, and acceleration of every apple Newton ever saw fall.

Science is not simply a catalogue of experience. For one thing, no human mind can comprehend the totality of human experience. The catalogue must contain everything you’ve ever experienced—which has occupied 100 percent of your attention for your whole life up to now! Science compresses the human experience (certain aspects of it, anyway) into a manageable form. What we are really interested in is understanding the world described by the catalogue. That means seeing the generalities, if sometimes mercifully forgetting specifics. A vexing question in the philosophy of science is to what extent this is possible.

Every experience constrains the truth values of some of the unknowns of the world, as in a logic puzzle. The relationships between unknowns could be quite subtle, of course, and everything would have to be qualified with ifs. Perhaps one of your experiences is hearing your friend Fred tell about how he saw the Loch Ness monster last Tuesday. The actual import of this experience might be something like:

If Fred wasn’t mistaken and Fred wasn’t lying and the external world isn’t an illusion, then the Loch Ness monster existed on Tuesday.

These if clauses are the inevitable auxiliary hypotheses that so complicate confirmation.
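For a computer, such an experience might be rendered as a single Boolean constraint. A sketch, with invented variable names; note that the one experience rules out just one of the sixteen possible states of the world:

    from itertools import product

    # The experience as a constraint: (not mistaken and not lying and not
    # illusion) implies nessie, which is equivalent to requiring that at
    # least one of the four unknowns be true.
    def consistent(mistaken, lying, illusion, nessie):
        return mistaken or lying or illusion or nessie

    # Which states of the world survive the experience?
    survivors = [s for s in product([False, True], repeat=4) if consistent(*s)]
    print(len(survivors))   # 15 of the 16 combinations remain possible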

Feed the catalogue of experience into a super-computer, and program it to look for deductions. This task requires only logic, and logic is one thing computers do superbly well. When it is through, the computer could even sort the list of deductions according to importance, measured by how many distinct experiences they account for. The deduction at the top of the list would be the most important thing humanly knowable.

Fanciful as this idea is, it is a framework for introducing some of the deepest concerns of the philosophy of science. Sorites, the basis of our science, can be recognized and tested for consistency in polynomial time. These easy logic problems are comparable to unicursal mazes (one or two paths per node), which are trivial, unlike “regular” mazes, where at least some nodes have three or more paths. More complex deductions, involving premises with three or more unknowns, require (impractical) exponential time. There may be a whole world of logical deductions—of interpretations of our sense experience—that is forever concealed from us.

Think of our experience as a maze and of logical truths about that experience as paths through the maze. The NP-completeness of SATISFIABILITY suggests that we will never exhaust all the potential routes.

A Computer as Big as the Universe

Computer scientists Larry J. Stockmeyer and Albert R. Meyer dramatized the intractability of NP problems in a fantasy about a computer the size of the universe. They showed that the universe is not big enough for us to answer many questions about the universe.

Suppose we try to make a list of accepted beliefs. Like Descartes, we want to start with a blank slate and be very careful about adding beliefs to the list. Before any belief is added, it is first checked against the beliefs already on the list to make sure that it doesn’t introduce a contradiction. This check is a SATISFIABILITY problem.

You might think that you could detect contradictions merely by running through the list and making sure that the proposed new belief does not directly contradict any belief already on the list. It’s not that simple, though.

Granted, a new belief might contradict an old one. If the new belief is “All ravens are black,” and one of the old ones is “No ravens are black,” then you have a contradiction right there. Much more treacherous is the type of contradiction that arises from three or more statements that are tenable individually. Usually the term “paradox” is reserved for these latter cases, where the contradiction is not immediately evident.

Suppose the new belief is “All grass is green.” The list could already contain this pair of statements:

All hay is brown.
Hay is grass.

Together with the new statement, this creates a contradiction. You might have missed it if you examined statements only in pairs: any two of the three offending beliefs are mutually compatible. To rule out cases like this, it is necessary to test each new belief with every other pair of statements in the list. This greatly multiplies the amount of checking to be done. And that still isn’t the end of it. There may be subtler paradoxes that pop up only when sets of four, five, or more statements are considered together. Adding a belief to a set of a million can create a contradiction even when the new belief is compatible with every set of 999,999.
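To see what the computer is up against, the grass/hay triple can be put in Boolean dress. In this sketch the variable names are invented, and consistency is tested by brute force over every truth assignment, so the exponential cost shows up in the 2^n assignments rather than in the subsets; the moral is the same:

    from itertools import product

    # hb = "hay is brown"; hg = "hay is green" (since hay is grass and
    # all grass is green). The third belief says nothing is both colors.
    beliefs = [
        lambda hb, hg: hb,                 # all hay is brown
        lambda hb, hg: hg,                 # hay is grass; all grass is green
        lambda hb, hg: not (hb and hg),    # brown and green exclude each other
    ]

    def satisfiable(beliefs, n_unknowns):
        """Brute-force SATISFIABILITY: try all 2**n truth assignments."""
        return any(all(b(*v) for b in beliefs)
                   for v in product([False, True], repeat=n_unknowns))

    for i in range(3):                     # any two of the three get along
        print(satisfiable(beliefs[:i] + beliefs[i+1:], 2))   # True, True, True
    print(satisfiable(beliefs, 2))         # all three together: False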

There is a lot of fact checking to be done, so clearly a computer is called for. We start with belief No. 1 (Cogito ergo sum?). For the computer’s benefit, the belief is encoded as a logical statement about Boolean unknowns. Next we get ready to add belief No. 2. We first instruct the computer to check it against belief No. 1 for possible contradiction. In this case there is only one logical test (belief No. 2 against belief No. 1).

Now the list has two beliefs and we want to add a third. The third belief must be checked three times: against No. 1, against No. 2, and against Nos. 1 and 2.

The fourth belief must be checked against seven sets: against Nos. 1, 2, and 3 collectively, against Nos. 1 and 2, against Nos. 1 and 3, against Nos. 2 and 3, and against each of the three beliefs individually.
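For a three-belief list, those seven sets can be enumerated mechanically, as this small sketch shows:

    from itertools import combinations

    current = ["No. 1", "No. 2", "No. 3"]
    # All nonempty subsets of the current list, by size.
    subsets = [c for r in (1, 2, 3) for c in combinations(current, r)]
    print(len(subsets))   # 7 sets to check the fourth belief against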

In fact, a new belief must be checked against every possible subset of the current list. The formula for the number of subsets of n things is an exponential function: 2^n. This formula counts the null set, which we don’t have to worry about. The number of nonempty subsets is 2^n − 1.

Assume that the beliefs, or some of them, are logically complex enough so that there is no way of avoiding an exponential-time algorithm. Then the number of required comparisons can be gauged from this table:

SIZE OF LIST    SUBSETS
1               1
2               3
3               7
4               15
5               31
10              1,023
100             1.27 × 10^30
1,000           10^301
10,000          10^3010

Even a fairly modest list of beliefs—100, say—has an astronomical number of subsets. To qualify a 101st belief, it must be checked against over 10^30 distinct subsets of the list.
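A few lines of Python reproduce the table; the order-of-magnitude entries simply count digits:

    for n in (1, 2, 3, 4, 5, 10, 100, 1_000, 10_000):
        subsets = 2**n - 1                  # nonempty subsets of an n-item list
        print(n, subsets if n <= 10 else f"about 10^{len(str(subsets)) - 1}")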

How can that be? Isn’t it obvious that you could write down 101 statements and quickly assure yourself that no paradox exists?

Indeed so. You could copy 100 random assertions from an encyclopedia, making sure that every sentence talks about something different. We are talking here of the more general case where the beliefs on the list will concern many of the same unknowns and be logically complex. The beliefs are allowed to intertwine like the premises in Carroll’s pork-chop problem. Then we must fall back on an algorithm—a slow algorithm.

How fast could a computer add beliefs to the list?

Stockmeyer and Meyer’s analysis had an “ideal” computer deciding the truth of certain mathematical statements with an exponential-time algorithm. Essentially the same reasoning applies to SATISFIABILITY problems. The power of any computer ultimately depends on the number of components it contains, Stockmeyer and Meyer said. The smaller the components, the more processing power can be packed into a given volume.
