
How does one go about solving such a problem? We have to examine all possible splittings of the set. For each splitting, we need to sum one part and see if it is equal to half of the sum of all the elements. How many such splittings are there? For each element of a given set, we can either place the element in one set or in the other set. So we have two possibilities for the first element and two possibilities for the second element and . . . and two possibilities for the nth element. This comes to

2 × 2 × ⋯ × 2 = 2^n

possibilities. We say that this problem demands an exponential amount of work.
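To make the brute-force procedure concrete, here is a minimal Python sketch (my own illustration, not taken from the book) that tries every splitting by deciding, for each element, which of the two parts it goes into:

    from itertools import product

    def can_partition(numbers):
        """Brute-force set partition: try all 2^n assignments of elements
        to two parts and check whether some splitting balances the sums."""
        total = sum(numbers)
        if total % 2 != 0:          # an odd total can never split evenly
            return False
        half = total // 2
        for choices in product([0, 1], repeat=len(numbers)):
            left = sum(x for x, side in zip(numbers, choices) if side == 0)
            if left == half:
                return True
        return False

    print(can_partition([3, 1, 1, 2, 2, 1]))   # True: {3, 2} versus {1, 1, 2, 1}

The loop over choices runs 2^n times, which is exactly the exponential amount of work just described.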

Let's say we have a hundred numbers that we want to partition into two groups. How many possible splittings would we have to examine? A calculation shows that there are

2^100 = 1,267,650,600,228,229,401,496,703,205,376

splittings. If we had a computer that could check a million splittings per second, we would still need

1,267,650,600,228,229,401,496,703,205,376 / 1,000,000 = 1,267,650,600,228,229,401,496,703 seconds.

Divide that by 60 to get 21,127,510,003,803,823,358,278 minutes. Divide again by 60 for 352,125,166,730,063,722,637 hours. That is 14,671,881,947,085,988,443 days or 40,196,936,841,331,475 years or 401,969,368,413,314 centuries. This is not exactly something for which we can simply wait!
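The chain of conversions is easy to check with a few lines of Python integer arithmetic (a quick illustration, not from the book):

    splittings = 2 ** 100
    seconds = splittings // 1_000_000   # a million splittings per second
    minutes = seconds // 60
    hours = minutes // 60
    days = hours // 24
    years = days // 365
    centuries = years // 100
    print(splittings, seconds, centuries)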

One might ponder different, faster methods for solving this problem. However, no known method will work in all instances and in a shorter time. The only known method that will always find the exact solution in all cases is a brute-force search through all 2^n subsets.

Subset Sum Problem

Similar to the set partition problem is the subset sum problem. Imagine that you are packing your car for a long weekend in the country. There are many things that you would like to take but, alas, your car has finite capacity. You would like to fill your car to its exact capacity. This common situation is the informal version of the subset sum problem: for a given set of whole numbers and a whole number C (for capacity), determine if there exists a subset of the set whose sum is exactly equal to C. For example, consider the set {34, 62, 85, 35, 18, 17, 52} and the capacity C = 115. If you play with the set for a long enough time, you will realize that the sum of the elements in the subset {62, 35, 18} equals 115. What would happen if we changed C to 114 or if we changed the given whole numbers?

In general, given a set of whole numbers and a whole number C, how should a computer go about solving this decision problem? As in the set partition problem, we can sift through all subsets and, for each one, sum up the elements to see if they add up to C. We have seen that for an n-element set, there are 2^n subsets. For larger inputs, this again leads to the same huge number of operations, and the same infeasible running time, as the set partition problem.
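As an illustration (mine, not the book's), here is the sift-through-all-subsets method in a few lines of Python, run on the example set above:

    from itertools import combinations

    def subset_sum(numbers, capacity):
        """Brute-force subset sum: examine every one of the 2^n subsets."""
        for size in range(len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == capacity:
                    return subset       # a witness that the answer is Yes
        return None                     # no subset works, so the answer is No

    print(subset_sum([34, 62, 85, 35, 18, 17, 52], 115))   # (62, 35, 18)

Changing the capacity to 114, or changing the numbers themselves, lets you answer the question posed above without any hand calculation.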

Satisfiability Problem

This problem deals with simple logical statements. We assume some basic knowledge of elementary logic and the connectives ∧ (and), ∨ (or), ~ (not), and → (implies). Consider the logical statement

(p ∨ ~q) → ~(p ∧ q).

This statement is either true or false depending on which values are assigned to the variables p and q. If, for example, we assign p = TRUE and q = FALSE, then the entire statement is true. If, however, we set p = TRUE and q = TRUE, then the entire statement is false. If a logical statement can be made true, then it is termed satisfiable. For a given statement in logic, the satisfiability problem asks whether there is a way to assign TRUE and FALSE to the variables such that the entire statement is true (i.e., satisfiable). We saw that the above logical statement is satisfiable. Consider the statement

a ∧ (a → b) ∧ (b → c) ∧ (~c).

For this statement to be true, we must have that a = TRUE and (a → b) = TRUE. By modus ponens, we have that b must be TRUE. Continuing with (b → c), we see that c needs to be TRUE. But the last conjunct, ~c, must also be TRUE, which forces c = FALSE. This contradiction means there is no way to make this statement logically true.

How does one go about solving the satisfiability problem? Most high school students know that to determine the value of a logical statement, one needs to construct a truth table. The decision version of this problem asks whether there is a row in the truth table of a logical statement that has the value TRUE.

One algorithm for solving this problem is to construct a truth table for a given statement and, on inspection of all the rows, return a Yes or No depending on whether any row has the value TRUE. However, constructing such truth tables will require a lot of work. If there are two variables in the given logical statement, there will be four rows in the truth table. If there are three variables, there will be eight rows. In general, for a statement with n different variables, the truth table will have 2^n rows. This exponential growth in the amount of work done is a sign that this problem is as impractical as the other four discussed in this section.
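For concreteness, here is a short Python sketch (my own, not the book's) that walks through all 2^n rows of a truth table and reports whether any of them makes a statement true. The two example statements from above are encoded as small Python functions:

    from itertools import product

    def is_satisfiable(statement, num_vars):
        """Brute-force satisfiability: try every one of the 2^n truth-table rows."""
        for row in product([False, True], repeat=num_vars):
            if statement(*row):
                return True        # found a row whose value is TRUE
        return False

    def implies(x, y):
        return (not x) or y

    def first(p, q):
        # (p v ~q) -> ~(p ^ q)
        return implies(p or (not q), not (p and q))

    def second(a, b, c):
        # a ^ (a -> b) ^ (b -> c) ^ (~c)
        return a and implies(a, b) and implies(b, c) and (not c)

    print(is_satisfiable(first, 2))    # True
    print(is_satisfiable(second, 3))   # False

The loop visits 2^n rows, which is exactly the exponential truth-table growth just described.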

Now that we've seen a few examples, let's come up with a nice definition. We will denote by NP the class of all decision problems that require 2^n, n!, or fewer operations to solve.6 A problem in NP will be called an “NP problem.” Since P is the class of problems that can be solved in a polynomial number of operations, and polynomials grow more slowly than exponential or factorial functions, we have that P is a subset of NP. What this means is that every “easy” problem is an element of the class of all “hard and easy” problems.

One might have doubts that there really is a difference between polynomial functions and exponential or factorial functions. Table 5.3 should dispel any doubts.

Table 5.3

Several values for polynomial and nonpolynomial functions

The first few columns show typical polynomial functions and their values for various n's. The last two columns show the values for the exponential and factorial functions. Although, for small values of n, some polynomial functions are larger than the two right-hand columns, already for n = 50 the polynomial functions cannot keep up with the other columns. It is this tremendous growth that seems to mark a clear demarcation between polynomial and nonpolynomial functions.
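If you want to recreate or extend such a table, a few lines of Python will do it. The particular columns below (n^2, n^5, 2^n, and n!) are my own choice for illustration and are not necessarily the exact columns of table 5.3:

    from math import factorial

    # Typical polynomial functions versus the exponential and factorial functions.
    print("n, n^2, n^5, 2^n, n!")
    for n in (1, 5, 10, 20, 50):
        print(n, n ** 2, n ** 5, 2 ** n, factorial(n))

Already at n = 50 the last two columns dwarf the polynomial ones.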

Since NP problems were defined in the early 1970s, researchers have found literally thousands of such problems. NP problems occur in every aspect of business, industry, computer science, mathematics, physics, and so on. They are about scheduling, routing, graph theory, combinatorics, and many other topics. Lately, many problems in biology related to gene sequencing have been shown to be NP problems. As biologists have been learning to read people's DNA and use this information to customize medicines, they have come to realize that some tasks seem to demand too many operations and too much time.

Not all limitations are bad. We saw in section 5.1 that multiplying two numbers is a polynomial problem. By contrast, factoring a number is rather difficult. Consider multiplying 4,871 by 7,237. Any intelligent fourth grader can, in short order, determine the result of 35,251,427. Now consider the number 38,187,637. This number is a product of two prime numbers (that is, numbers that cannot be further divided). Can you find those prime factors? It would take a human being or a computer a long time to realize that the factors of this number are 7,193 and 5,309. The reason for this is that although multiplication is an n^2 problem, factoring is an NP problem. This asymmetry is used to send secret messages over the Internet. We will not get into the gory details of how this is accomplished, but there is an algorithm that uses large numbers to transmit messages in secret. The algorithm was described in 1978 by Ron Rivest, Adi Shamir, and Leonard Adleman and is called RSA. If eavesdroppers were listening in on a conversation, they would not be able to understand the message (or obtain the credit card number given in the conversation) without factoring a very large number. Since this seems to be a very hard problem, our security is safe. If the brilliant Lex Luthor ever found an algorithm that could easily factor numbers, RSA would be “broken” and much of the World Wide Web's security would be useless.
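The asymmetry is easy to feel on a computer. Multiplying the two primes is instantaneous, while the most naive way to recover them, sketched below, is trial division. (Real attacks on RSA use far more sophisticated factoring algorithms, but even those take superpolynomial time on large numbers.)

    def trial_factor(m):
        """Naive factoring by trial division: try each candidate divisor in turn."""
        d = 2
        while d * d <= m:
            if m % d == 0:
                return d, m // d
            d += 1
        return m, 1       # no divisor found, so m itself is prime

    print(4871 * 7237)              # 35251427, the easy direction
    print(trial_factor(38187637))   # (5309, 7193), the slow direction

Trial division needs on the order of √m steps, which grows exponentially with the number of digits of m, whereas multiplying two d-digit numbers by the schoolbook method takes only about d^2 steps.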

We have seen that all five of the problems highlighted in this section are essentially unsolvable for any large-sized inputs—that is, they cannot be solved in a reasonable amount of time with our present-day computers. One might try to ignore these limitations by expecting that as computers become faster and faster, NP problems will become easier and easier to solve. Alas, such hopes are futile. As we saw before, the traveling salesman problem for a relatively small input size of n = 100 demands 2.9 × 10^142 centuries. Even if our future computers are 10,000 times faster than they are now, our problem will still demand 2.9 × 10^138 centuries. This remains an unacceptable amount of time. Conclusion: faster computers will not help us.

Similarly, as our technology and ability to multiprocess improve—that is, as our ability to use many processors working in parallel improves—we might shrug off the entire concept of an NP problem. One might think that such problems are hard for single processors performing one step at a time, but with many processors, the task might be accomplished in a reasonable period. This hope is also in vain. Even with 10,000 processors working in conjunction with each other, the time needed would not improve beyond 2.9 × 10^138 centuries, which is still an unacceptable time requirement. Let us go further. Scientists estimate that there are 10^80 atoms in the universe. Imagine that every single atom in the universe was a computer working on different parts of this problem. Splitting up the problem like this means that completing the entire problem will demand that all these computers work a measly 10^62 centuries. And this is only for an input of size 100. What about larger inputs? The point is that such problems are not solvable on any machine within any reasonable amount of time.

You may have noticed a media buzz on the subject of quantum computers. These are, at present, hypothetical devices that use some of the strange and counterintuitive aspects of quantum mechanics7 to improve our computational powers. One of the strangest notions of quantum mechanics is that subatomic objects can be in more than one place at the same time. While regular-sized objects are either in this or that position, subatomic objects have superposition—that is, they can be in many positions simultaneously. Regardless of any doubt you might have about such notions, superposition is an established fact of our universe. Quantum computer scientists try to use superposition to make better computers. A regular computer searches through a large number of possible solutions one at a time. In contrast, a quantum computer would put itself into a superposition of searching states and examine many possible solutions at one time. One might believe that the development of quantum computers will make NP problems easier to solve. Again, such hopes are dashed. It can be shown that when searching for a particular element in a list of size n, a quantum computer cannot do any better than √n steps. So when searching through all 2^n possible solutions to an NP problem, a quantum computer will be able to do this task in

√(2^n) = 2^(n/2)

steps. However, 2^(n/2) is still an exponential function, so even a quantum computer would need an unreasonable amount of time on large inputs.
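To get a feel for how little the square-root speedup changes the picture, here is a quick back-of-the-envelope calculation in Python (my own illustration, again assuming a million checks per second):

    n = 100
    classical = 2 ** n          # solutions a classical brute force must examine
    quantum = 2 ** (n // 2)     # roughly what a square-root quantum search needs
    print(classical)            # about 1.3 x 10^30
    print(quantum)              # about 1.1 x 10^15
    print(quantum / 1_000_000 / 60 / 60 / 24 / 365)   # still roughly 36 years

Shaving the exponent in half still leaves an exponential, and larger inputs quickly push even the quantum figure back into centuries.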
