Gödel showed that mathematics could not be both complete and consistent but had not definitively answered the third question, at least not for all mathematics. Even though a particular closed system of formal logic must contain statements that could neither be proved nor disproved from within the system, it might conceivably be decided, as it were, by an outside referee—by external logic or rules.
Alan Turing, just twenty-two years old, unfamiliar with much of the relevant literature, so alone in his work habits that his professor worried about his becoming “a confirmed solitary,” posed an entirely different question (it seemed): Are all numbers computable? This was an unexpected question to begin with, because hardly anyone had considered the idea of an uncomputable number. Most numbers that people work with, or think about, are computable by definition. The rational numbers are computable because they can be expressed as the quotient of two integers, a/b. The algebraic numbers are computable because they are solutions of polynomial equations. Famous numbers like π and e are computable; people compute them all the time. Nonetheless Turing made the seemingly mild statement that numbers might exist that are somehow nameable, definable, and not computable.
What did this mean? He defined a computable number as one whose decimal expression can be calculated by finite means. “The justification,” he said, “lies in the fact that the human memory is necessarily limited.” He also defined calculation as a mechanical procedure, an algorithm. Humans solve problems with intuition, imagination, flashes of insight—arguably nonmechanical calculation, or then again perhaps just computation whose steps are hidden. Turing needed to eliminate the ineffable. He asked, quite literally, what would a machine do? “According to my definition, a number is computable if its decimal can be written down by a machine.”
No actual machine offered a relevant model. “Computers” were, as ever, people. Nearly all the world’s computation was still performed through the act of writing marks on paper. Turing did have one information machine for a starting point: the typewriter. As an eleven-year-old sent to boarding school he had imagined inventing one. “You see,” he wrote to his parents, “the funny little rounds are letters cut out on one side slide along to the round along an ink pad and stamp down and make the letter, thats not nearly all though.”
Of course, a typewriter is not automatic; it is more a tool than a machine. It does not flow a stream of language onto the page; rather, the page shifts its position space by space under the hammer, where one character is laid down after another. With this model in mind, Turing imagined another kind of machine, of the utmost purity and simplicity. Being imaginary, it was unencumbered by the real-world details one would need for a blueprint, an engineering specification, or a patent application. Turing, like Babbage, meant his machine to compute numbers, but he had no need to worry about the limitations of iron and brass. Turing did not plan ever to build his machine.
He listed the very few items his machine must possess: tape, symbols, and states. Each of these required definition.
Tape is to the Turing machine what paper is to a typewriter. But where a typewriter uses two dimensions of its paper, the machine uses only one—thus, a tape, a long strip, divided into squares. “In elementary arithmetic the two-dimensional character of the paper is sometimes used,” he wrote. “But such a use is always avoidable, and I think that it will be agreed that the two-dimensional character of paper is no essential of computation.” The tape is to be thought of as infinite: there is always more when needed. But just one square is “in the machine” at any given time. The tape (or the machine) can move left or right, to the next square.
Symbols can be written onto the tape, one per square. How many symbols could be used? This required some thought, especially to make sure the number was finite. Turing observed that words—in European languages, at least—behaved as individual symbols. Chinese, he said, “attempts to have an enumerable infinity of symbols.” Arabic numerals might also be considered infinite, if 17 and 999,999,999,999,999 were treated as single symbols, but he preferred to treat them as compound: “It is always possible to use sequences of symbols in the place of single symbols.” In fact, in keeping with the machine’s minimalist spirit, he favored the absolute minimum of two symbols: binary notation, zeroes and ones. Symbols were not only to be written but also read from the tape—“scanned” was the word Turing used. In reality, of course, no technology could yet scan symbols written on paper back into a machine, but there were equivalents: for example, punched cards, now used in tabulating machines. Turing specified one more limitation: the machine is “aware” (only the anthropomorphic word would do) of one symbol at a time—the one on the square that is in the machine.
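His point about compound symbols is easy to check in modern notation: any numeral, however long, can be respelled as a sequence drawn from just two symbols. A minimal sketch in Python (an anachronism, naturally):

```python
# Any "single" numeral can be respelled as a sequence of two symbols.
for n in (17, 999_999_999_999_999):
    bits = bin(n)[2:]        # strip Python's "0b" prefix
    print(n, "->", bits)     # 17 -> 10001, and so on
```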
States required more explaining. Turing used the word “configurations” and pointed out that these resembled “states of mind.” The machine has a few of these—some finite number. In any given state, the machine takes one or more actions depending on the current symbol. For example, in state a, the machine might move one square to the right if the current symbol is 1, or move one square to the left if the current symbol is 0, or print 1 if the current symbol is blank. In state b, the machine might erase the current symbol. In state c, if the symbol is 0 or 1, the machine might move to the right, and otherwise stop. After each action, the machine finishes in a new state, which might be the same or different. The various states used for a given calculation were stored in a table—how this was to be managed physically did not matter. The state table was, in effect, the machine’s set of instructions.
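In modern terms such a table is simply a lookup structure. Here is one hypothetical rendering in Python of the three states just described, where each entry maps a state and a scanned symbol to a symbol to write, a head movement, and a next state; the next states, which the example above leaves open, are filled in as assumptions:

```python
# A hypothetical table for states a, b, c as described above.
# (state, scanned symbol) -> (symbol to write, head move, next state)
# The next states are assumptions; the text leaves them open.
BLANK = ' '
TABLE = {
    ('a', '1'):   ('1', +1, 'a'),    # on 1: move one square right
    ('a', '0'):   ('0', -1, 'a'),    # on 0: move one square left
    ('a', BLANK): ('1',  0, 'b'),    # on blank: print 1
    ('b', '0'):   (BLANK, 0, 'c'),   # state b: erase the current symbol
    ('b', '1'):   (BLANK, 0, 'c'),
    ('c', '0'):   ('0', +1, 'c'),    # state c: on 0 or 1, move right;
    ('c', '1'):   ('1', +1, 'c'),    # on anything else (no entry), stop
}
```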
And this was all.
Turing was programming his machine, though he did not yet use that word. From the primitive actions—moving, printing, erasing, changing state, and stopping—larger processes were built up, and these were used again and again: “copying down sequences of symbols, comparing sequences, erasing all symbols of a given form, etc.” The machine can see just one symbol at a time, but can in effect use parts of the tape to store information temporarily. As Turing put it, “Some of the symbols written down … are just rough notes ‘to assist the memory.’ ” The tape, unfurling to the horizon and beyond, provides an unbounded record. In this way all arithmetic lies within the machine’s grasp. Turing showed how to add a pair of numbers—that is, he wrote out the necessary table of states. He showed how to make the machine print out (endlessly) the binary representation of π. He spent considerable time working out what the machine could do and how it would accomplish particular tasks. He demonstrated that this short list covers everything a person does in computing a number. No other knowledge or intuition is necessary. Anything computable can be computed by this machine.
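What such a machine does can be acted out in a few lines. The sketch below runs a two-state table that prints 0 1 0 1 … without end, the binary expansion of 1/3; it is in the spirit of Turing’s own first example, not a transcription of his table, and every name in it is illustrative:

```python
from collections import defaultdict

# A two-state machine that writes 0 1 0 1 ... , the binary expansion
# of 1/3. A sketch in the spirit of Turing's first example.
TABLE = {
    ('A', ' '): ('0', +1, 'B'),  # state A on blank: print 0, move right
    ('B', ' '): ('1', +1, 'A'),  # state B on blank: print 1, move right
}

def run(table, steps, start='A'):
    tape = defaultdict(lambda: ' ')   # the "infinite" tape: blank by default
    head, state = 0, start
    for _ in range(steps):
        action = table.get((state, tape[head]))
        if action is None:            # no entry for this situation: stop
            break
        symbol, move, state = action
        tape[head] = symbol
        head += move
    return ''.join(tape[i] for i in sorted(tape))

print(run(TABLE, 12))   # -> 010101010101
```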
Then came the final flourish. Turing’s machines, stripped down to a finite table of states and a finite set of input, could themselves be represented as numbers. Every possible state table, combined with its initial tape, represents a different machine. Each machine itself, then, can be described by a particular number—a certain state table combined with its initial tape. Turing was encoding his machines just as Gödel had encoded the language of symbolic logic. This obliterated the distinction between data and instructions: in the end they were all numbers. For every computable number, there must be a corresponding machine number.
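One can taste the trick with a toy encoding, far cruder than Turing’s actual scheme of “description numbers”: spell the state table out as text, then read the bytes of that text as a single integer. The names below are illustrative:

```python
# A toy encoding (much cruder than Turing's "description numbers"):
# write the table out as text, then read that text as one big integer.
def machine_number(table):
    description = repr(sorted(table.items()))         # a canonical string
    return int.from_bytes(description.encode(), 'big')

TABLE = {('A', ' '): ('0', +1, 'B'), ('B', ' '): ('1', +1, 'A')}
print(machine_number(TABLE))   # one enormous integer: the whole machine
```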
Turing produced (still in his mind’s eye) a version of the machine that could simulate every other possible machine—every digital computer. He called this machine U, for “universal,” and mathematicians fondly use the name U to this day. It takes machine numbers as input. That is, it reads the descriptions of other machines from its tape—their algorithms and their own input. No matter how complex a digital computer may grow, its description can still be encoded on tape to be read by U. If a problem can be solved by any digital computer—encoded in symbols and solved algorithmically—the universal machine can solve it as well.
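In this toy rendering the universal machine is nothing but the interpreter loop again, fed another machine’s description as data rather than built into its wiring. The sketch assumes the toy encoding above; eval is acceptable in a sketch, nowhere else:

```python
from collections import defaultdict

def decode(n):
    # Undo the toy encoding above: integer -> text -> state table.
    text = n.to_bytes((n.bit_length() + 7) // 8, 'big').decode()
    return dict(eval(text))        # fine for a sketch, never in earnest

def universal(machine_number, steps, start='A'):
    # U: read another machine's description from its input, then act it out.
    table = decode(machine_number)
    tape, head, state = defaultdict(lambda: ' '), 0, start
    for _ in range(steps):
        action = table.get((state, tape[head]))
        if action is None:         # no entry: the simulated machine stops
            break
        tape[head], move, state = action
        head += move
    return ''.join(tape[i] for i in sorted(tape))

# With machine_number() from the sketch above:
# universal(machine_number(TABLE), 12)  ->  '010101010101'
```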
Now the microscope is turned onto itself. The Turing machine sets about examining every number to see whether it corresponds to a computable algorithm. Some will prove computable. Some might prove uncomputable. And there is a third possibility, the one that most interested Turing. Some algorithms might defy the inspector, causing the machine to march along, performing its inscrutable business, never coming to a halt, never obviously repeating itself, and leaving the logical observer forever in the dark about whether it would halt.
By now Turing’s argument, as published in 1936, has become a knotty masterpiece of recursive definitions, symbols invented to represent other symbols, numbers standing in for numbers, for state tables, for algorithms, for machines. In print it looked like this:
By combining the machines D and U we could construct a machine M to compute the sequence β′. The machine D may require a tape. We may suppose that it uses the E-squares beyond all symbols on F-squares, and that when it has reached its verdict all the rough work done by D is erased.… We can show further that there can be no machine E which, when applied with the S.D of an arbitrary machine M, will determine whether M ever prints a given symbol (0 say).
Few could follow it. It seems paradoxical—it is paradoxical—but Turing proved that some numbers are uncomputable. (In fact, most are.)
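The heart of the matter fits in a few modern lines. Suppose, for contradiction, that some inspector halts(program, argument) always delivered a verdict; the names here are hypothetical, and the body of halts is deliberately left unwritten, because the argument shows it cannot be written:

```python
def halts(program, argument):
    # Suppose this inspector always returns True or False according to
    # whether program(argument) would ever finish. (It cannot exist.)
    ...

def contrary(program):
    # Do the opposite of whatever the inspector predicts.
    if halts(program, program):
        while True:                # predicted to halt: loop forever instead
            pass
    return                         # predicted to run forever: halt at once

# Does contrary(contrary) halt? If the inspector says yes, contrary loops;
# if it says no, contrary halts. Either way the verdict is wrong, so no
# such inspector can exist, and some numbers escape computation.
```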
Also, because every number corresponds to an encoded proposition of mathematics and logic, Turing had resolved Hilbert’s question about whether every proposition is decidable. He had proved that the Entscheidungsproblem has an answer, and the answer is no. An uncomputable number is, in effect, an undecidable proposition.
So Turing’s computer—a fanciful, abstract, wholly imaginary machine—led him to a proof parallel to Gödel’s. Turing went further than Gödel by defining the general concept of a formal system. Any mechanical procedure for generating formulas is essentially a Turing machine.
Any formal system, therefore, must have undecidable propositions. Mathematics is not decidable. Incompleteness follows from uncomputability.
Once again, the paradoxes come to life when numbers gain the power to encode the machine’s own behavior. That is the necessary recursive twist. The entity being reckoned is fatally entwined with the entity doing the reckoning. As Douglas Hofstadter put it much later, “The thing hinges on getting this halting inspector to try to predict its own behavior when looking at itself trying to predict its own behavior when looking at itself trying to predict its own behavior when …”
A conundrum that at least smelled similar had lately appeared in physics, too: Werner Heisenberg’s new uncertainty principle. When Turing learned about that, he expressed it in terms of self-reference: “It used to be supposed in Science that if everything was known about the Universe at any particular moment then we can predict what it will be through all the future.… More modern science however has come to the conclusion that when we are dealing with atoms and electrons we are quite unable to know the exact state of them; our instruments being made of atoms and electrons themselves.”
A century had passed between Babbage’s Analytical Engine and Turing’s Universal Machine—a grand and unwieldy contraption and an elegant unreal abstraction. Turing never even tried to be a machinist. “One can picture an industrious and diligent clerk, well supplied with scratch paper, tirelessly following his instructions,” as the mathematician and logician Herbert Enderton remarked years later. Like Ada Lovelace, Turing was a programmer, looking inward to the step-by-step logic of his own mind. He imagined himself as a computer. He distilled mental procedures into their smallest constituent parts, the atoms of information processing.
Alan Turing and Claude Shannon had codes in common. Turing encoded instructions as numbers. He encoded decimal numbers as zeroes and ones. Shannon made codes for genes and chromosomes and relays and switches. Both men applied their ingenuity to mapping one set of objects onto another: logical operators and electric circuits; algebraic functions and machine instructions. The play of symbols and the idea of mapping, in the sense of finding a rigorous correspondence between two sets, had a prominent place in their mental arsenals. This kind of coding was not meant to obscure but to illuminate: to discover that apples and oranges were after all equivalent, or if not equivalent then fungible. The war brought both men to cryptography in its most riddling forms.
, in the sense of finding a rigorous correspondence between two sets, had a prominent place in their mental arsenals. This kind of coding was not meant to obscure but to illuminate: to discover that apples and oranges were after all equivalent, or if not equivalent then fungible. The war brought both men to cryptography in its most riddling forms.