One day ladies will take their computers for walks in the park and tell each other, ‘My little computer said such a funny thing this morning.’

ALAN TURING

I think the world market for computers is maybe … five.

THOMAS J. WATSON, Chairman of IBM, 1943

‘Computers are useless,’ said Pablo Picasso. ‘They can only give you answers.’ But what answers they give! In the past half a century, those answers have dramatically changed our world.1
The computer is unlike any other human invention. A washing machine is a washing machine is a washing machine. It is impossible to change it into a vacuum cleaner or a toaster or a nuclear reactor. But a computer can be a word processor or an interactive video game or a smart phone. And the list goes on and on and on. The computer’s unique selling point is that it can simulate any other machine. Although we have yet to build computers that can fabricate stuff quite as flexibly as human beings, it is merely a matter of time.2

Fundamentally, a computer is just a shuffler of symbols. A bunch of symbols goes in – perhaps the altitude, ground speed, and so on, of an aeroplane; and another bunch of symbols comes out – for instance, the amount of jet fuel to burn, the necessary changes to be made in the angle of ailerons, and so on. The thing that changes the input symbols into the output symbols is a program, a set of instructions that is stored internally and, crucially, is infinitely rewritable. The reason a computer can simulate any other machine is that it is programmable. It is the extraordinary versatility of the computer program that is at the root of the unprecedented, world-conquering power of the computer.

The first person to imagine an abstract machine that shuffles symbols on the basis of a stored program was the English mathematician Alan Turing, famous for his role in breaking the German ‘Enigma’ and ‘Fish’ codes, which arguably shortened the Second World War by several years.3

Turing’s symbol shuffler, devised in the 1930s, is unrecognisable as a computer. Its program is stored on a one-dimensional tape in binary – as a series of 0s and 1s – because everything, including numbers and instructions, can ultimately be reduced to binary digits. Precisely how it works, with a read/write head changing the digits one at a time, is not important. The crucial thing is that Turing’s machine can be fed a description of any other machine, encoded in binary, and then simulate that machine. Because of this unprecedented ability, Turing called it a Universal Machine. Today, it is referred to as a Universal Turing Machine.
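The flavour of it can be captured in a few lines of Python. What follows is a hypothetical miniature, not Turing’s own construction: a tape of symbols, a read/write head, and a table of rules saying what to write, which way to move and which state to enter next:

```python
# A minimal Turing machine: a tape, a read/write head, and a rule table
# mapping (state, symbol) -> (symbol to write, head move, next state).
def turing_machine(rules, tape, state="start"):
    tape, head = list(tape), 0
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write                # overwrite the symbol under the head
        head += 1 if move == "R" else -1  # step one cell right or left
        if head == len(tape):
            tape.append("_")              # extend the tape with a blank
    return "".join(tape)

# Example rule table: flip every binary digit, halting at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(turing_machine(flip, "1101_"))  # prints 0010__
```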

Bizarrely, Turing devised his machine-of-the-mind not to show what a computer can do but what it cannot do. He was at heart a pure mathematician. And what interested him, even before the advent of nuts-and-bolts hardware, was the ultimate limits of computers.

Remarkably, Turing very quickly found a simple task that no computer, no matter how powerful, could ever do. It is called the halting problem and it is easily stated. Computer programs can sometimes get caught in endless loops, running around the same set of instructions for ever like a demented hamster in a wheel. The halting problem asks: if a computer is given a computer program, can it tell, ahead of actually running the program, whether it will eventually halt – that is, whether it will avoid being caught in an interminable loop?

Turing, by clever reasoning, showed that deciding whether a program eventually halts or goes on for ever is logically impossible and therefore beyond the capability of any conceivable computer. In the jargon, it is ‘uncomputable’.4
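The nub of Turing’s reasoning can even be sketched in code. Suppose, purely hypothetically, that a perfect halts() function existed; the following deliberately incomplete Python sketch shows how it would defeat itself:

```python
# Turing's argument in miniature. Suppose a perfect oracle existed that
# could decide, without running it, whether program(data) ever halts:
def halts(program, data):
    ...  # assumed infallible; no such function can actually be written

# Then we could build this spoiler:
def spoiler(program):
    if halts(program, program):  # if the oracle says 'it halts'...
        while True:              # ...loop for ever;
            pass                 # otherwise, halt at once.

# Does spoiler(spoiler) halt? If the oracle answers yes, spoiler loops
# for ever; if it answers no, spoiler halts. Either way the oracle is
# wrong, so no halts() can exist: the problem is uncomputable.
```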

Thankfully, the halting problem turns out not to be typical of the kind of problems we use computers to solve. Turing’s limit on computers has, therefore, not held us back. And, despite their rather surprising birth in the abstract field of pure mathematics as machines of the imagination, computers have turned out to be immensely practical devices.

A vast numerical irrigation system

Computers, like the Universal Turing Machine, use binary. Binary was invented by Gottfried Leibniz, a seventeenth-century German mathematician who clashed bitterly with Isaac Newton over who had invented calculus. Binary is a way of representing numbers as strings of 0s and 1s. Usually, we use decimal, or base 10. The right-hand digit represents the 1s, the next digit the 10s, the next the 10 × 10s, and so on. So, for instance, 9217 means 7 + 1 × 10 + 2 × (10 × 10) + 9 × (10 × 10 × 10). In binary, or base 2, the right-hand digit represents the 1s, the next digit the 2s, the next the 2 × 2s, and so on. So, for instance, 1101 means 1 + 0 × 2 + 1 × (2 × 2) + 1 × (2 × 2 × 2), which in decimal is 13.
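The same arithmetic can be checked in a few lines of Python:

```python
# The worked examples above, verified in Python.
print(7 + 1*10 + 2*(10*10) + 9*(10*10*10))  # 9217, the decimal expansion
print(1 + 0*2  + 1*(2*2)   + 1*(2*2*2))     # 13, the value of binary 1101
print(int("1101", 2))                       # 13, parsing a base-2 string directly
print(bin(13))                              # '0b1101', converting back again
```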

Binary can be used to represent not only numbers but also instructions. It is merely necessary to specify that this particular string of binary digits, or bits, means add; this one means multiply; this one means ‘execute these instructions and go back to the beginning and execute them again’; and so on. And it is not only numbers and program instructions that can be represented in binary. Binary can encode anything – from the information content of an image of Saturn’s rings sent back by the Cassini spacecraft to the information content of a human being (although this is somewhat beyond our current capabilities). This has led some physicists, drunk on the information revolution, to suggest that binary information is the fundamental bedrock of the Universe out of which physics emerges. ‘It from bit,’ as the American physicist John Wheeler memorably put it.5
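A small illustration of that claim: even ordinary text inside a computer is already nothing but bits, as a line of Python makes plain:

```python
# Any data in a computer is bits underneath: here the text 'Hi' becomes
# two bytes, shown as sixteen binary digits.
bits = "".join(format(byte, "08b") for byte in "Hi".encode("utf-8"))
print(bits)  # 0100100001101001
```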

Binary is particularly suitable for use in computers because representing 0s and 1s in hardware requires only devices that can be set to two distinct states. Take the storage of information. This can be done with a magnetic medium, tiny regions of which can be magnetised in one direction to represent a 0 and in the opposite direction to represent a 1. Think of an array of miniature compass needles. To manipulate the information, on the other hand, an electronic device with two distinct states is needed. Such a device is the transistor.

Imagine a garden hose through which water is flowing. The water comes from a source and ends up in a drain. Now imagine stepping on the middle of the hose. The flow of water chokes off. Essentially, this is all a transistor in a computer does.6 Except of course it controls not a flow of water but a flow of electrons – an electrical current. And, instead of a foot, it has a gate. Applying a voltage to the gate controls the flow of electrons from the source to the drain as surely as stepping on a hose controls the flow of water.7 When the current is switched on, it can represent a 1 and when it is off a 0. Simples.

A modern transistor (on a microchip) actually looks like a tiny T. The top crossbar of the T is the source/drain (the hose) and the upright of the T is the gate (the foot).

Now imagine two transistors connected together – that is, the source of one transistor is connected to the drain of another. This is just like having a hose beside which you and a friend are standing. If you step on the hose, no water will flow. If your friend steps on it, no water will flow either. If you both stand on the hose, once again no water will flow. Only if you do not stand on the hose and your friend does not stand on the hose will water flow. In the case of the transistor, electrons will flow only if there is a certain voltage on the first gate and the same voltage on the second gate.

It is also possible to connect up transistors so that electrons will flow if there is a particular voltage on the first gate or on the second gate. Such AND and OR gates are just two possibilities among a host of logic gates that can be made from combinations of transistors. Just as atoms can be combined into molecules, and molecules into human beings, transistors can be combined into logic gates, and logic gates into things such as adders, which sum two binary numbers. And, by combining millions of such components, it is possible to make a computer. ‘Computers are composed of nothing more than logic gates stretched out to the horizon in a vast numerical irrigation system,’ said Stan Augarten, an American writer on the history of computing.8
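That layering – switches into gates, gates into adders – can be sketched in Python. The gate functions below are illustrative stand-ins for transistor circuits:

```python
# Two-state switches combined into logic gates, and gates into a half
# adder that sums two binary digits: the layering described above.
def AND(a, b): return a & b   # output 1 only if both inputs are 1
def OR(a, b):  return a | b   # output 1 if either input is 1
def XOR(a, b): return a ^ b   # output 1 if exactly one input is 1

def half_adder(a, b):
    # The sum bit is the XOR of the inputs; the carry bit is their AND.
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, carry = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {carry}")
# 1 + 1 gives sum 0, carry 1: binary 10, i.e. decimal 2
```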

Cities on chips

Transistors are made from one of the most common and mundane substances on the planet: sand. Or, rather, they are made from silicon, the second most abundant element in the Earth’s crust and one component of the silicon dioxide of sand. Silicon is neither a conductor of electricity – through which electrons flow easily – nor an insulator – through which electrons cannot flow. Crucially, however, it is a semiconductor. Its electrical properties can be radically altered merely by doping it with a tiny number of atoms of another element.

Silicon can be doped with atoms such as phosphorus and arsenic, which bond with it to leave a single leftover electron that can be given up, or donated. This transforms it into a conductor of negative electrons, or an n-type material. But silicon can also be doped with atoms such as boron and gallium, which bond with silicon and leave room for one more electron. Bizarrely, the empty space where an electron isn’t can move through the material exactly as if it is a positively charged electron. This transforms the silicon into a conductor of positive holes, or a p-type material.

A transistor is created simply by making a pnp or an npn sandwich – most commonly an npn. You do not need to know any more than this to grasp the basics of transistors (in fact, you already know more than you need).9

In the beginning, when transistors were first invented, they had to be linked together individually to make logic gates and computer components such as adders. But the computer revolution has been brought about by a technology that creates, or integrates, billions upon billions of transistors simultaneously on a single wafer, or chip, of silicon. The ‘Very Large Scale Integration’ of such integrated circuits is complex and expensive.10 But, in a nutshell, it involves etching a pattern of transistors on a wafer of silicon, then, layer by layer, depositing doping atoms, microscopic wires, and so on.

To make a computer you need a computer. It is only with computer-aided design that it is possible to create a pattern of transistors as complex as a major city. Such a pattern is then made into a mask. Think of it as a photographic negative. By shining light through the mask onto a wafer of silicon, it is possible to create an image of the pattern of transistors. But a pattern of light and shadows is just that – a pattern of light and shadows. The trick is to turn it into something real. This can be done if the surface of the silicon wafer is coated with a special chemical that undergoes a chemical change when struck by light. Crucially, light makes the photoresistant material resistant to attack by acid.11 So, when acid is applied to the silicon wafer in the next step of the process, the silicon is eaten away, or etched, everywhere except where the light falls. Hey presto, the image of the mask has been turned into concrete – or, rather, silicon – reality.

There are many other ingenious steps in the process, which might involve using many masks to create multiple layers, spraying the wafer with dopants and spraying on microscopic gold connecting wires, and so on. But, basically, this is the idea. The technique of photolithography quickly and elegantly impresses the pattern of a complex electric circuit onto the wafer. It creates a city on a chip.

Probably, most people think that microchips originate in the US or in Japan or South Korea. Surprisingly, they are born in Britain. The company behind the designs of the overwhelming majority of the chips in the world’s electronic devices is based in Cambridge. ARM started out as Acorn Computers in 1985. While the big chip-makers like Intel in the US concentrated on making faster and more compact chips for desktop computers, or PCs, ARM struck out in a completely different direction. It put entire computers on a chip. This made possible the vast numbers of compact and mobile electronic devices from SatNavs to games consoles to mobile phones. It moved chips from dedicated and unwieldy computers into the everyday world.

Big bang computing

The limit on how small components can be made on a chip is determined by the kind of light that is shone through a mask. Chip-makers have made ever smaller components – packing in more and more transistors – by using light with a shorter wavelength, such as ultraviolet or X-rays, which can squeeze through smaller holes. They have even replaced light with beams of electrons since electrons have a shorter wavelength than light.12 And chips have become ever more powerful.

In 1965, Gordon Moore, one of the founders of the American computer chip-maker Intel, pointed out that the computational power available at a particular price – or, equivalently, the number of transistors on a chip – appears to double roughly every eighteen months.13 ‘If the automobile had followed the same development cycle as the computer, a Rolls-Royce would today cost $100, get a million miles per gallon, and explode once a year, killing everyone inside,’ observed Robert X. Cringely, technology columnist on InfoWorld magazine.14
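The power of the law lies in the compounding: a doubling every eighteen months multiplies quickly, as a short Python calculation shows:

```python
# Moore's law as arithmetic: doubling every 18 months means that over
# n years the transistor count grows by a factor of 2 ** (n / 1.5).
for years in (3, 15, 30):
    print(f"after {years} years: x{2 ** (years / 1.5):,.0f}")
# after 3 years: x4; after 15 years: x1,024; after 30 years: x1,048,576
```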

Ever since it was formulated, people have been claiming, decade after decade, that Moore’s law is about to break down. So far, everyone has been wrong.

Undoubtedly, however, Moore’s law will break down one day. It is a sociological law – a law of human ingenuity. But even human ingenuity cannot do the impossible. There are physical limits, set by the laws of nature and impossible to circumvent, that ultimately determine what computers can do.
