The Beginning of Infinity: Explanations That Transform the World

Everyone’s first thought was that unbounded knowledge-creation is possible only in a universe that does not recollapse. However, on analysis it turned out that the reverse is true: in universes that expand for ever, the inhabitants would run out of energy. But the cosmologist Frank Tipler discovered that in certain types of recollapsing universes the Big Crunch singularity is suitable for performing the faster-and-faster trick that we used in Infinity Hotel: an infinite sequence of computational steps could be executed in a finite time before the singularity, powered by the ever-increasing tidal effects of the gravitational collapse itself. To the inhabitants – who would eventually have to upload their personalities into computers made of something like pure tides – the universe would last for ever, because they would be thinking faster and faster, without limit, as it collapsed, and storing their memories in ever smaller volumes so that access times could also be reduced without limit. Tipler called such universes ‘omega-point universes’. At the time, the observational evidence was consistent with the real universe being of that type.
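A minimal sketch of the arithmetic behind that faster-and-faster trick, using an illustrative halving schedule rather than anything from Tipler’s actual model: if T is the time remaining before the singularity and the n-th computational step is arranged to take half as long as its predecessor, then infinitely many steps fit into the finite interval, since

\[
\sum_{n=1}^{\infty} \frac{T}{2^{n}} = T\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots\right) = T .
\]

Subjectively each step counts the same, so the inhabitants would experience an unbounded number of thoughts even though only a finite amount of external time elapses.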

A small part of the revolution that is currently overtaking cosmology is that the omega-point models have been ruled out by observation. Evidence – including a remarkable series of studies of supernovae in distant galaxies – has forced cosmologists to the unexpected conclusion that the universe not only will expand for ever but has been expanding at an accelerating rate. Something has been counteracting its gravity.

We do not know what. Pending the discovery of a good explanation, the unknown cause has been named ‘dark energy’. There are several proposals for what it might be, including effects that merely give the appearance of acceleration. But the best working hypothesis at present is that in the equations for gravity there is an additional term, of a form first mooted by Einstein in 1917 and then dropped because he realized that his explanation for it was bad. It was proposed again in the 1980s as a possible effect of quantum field theory, but again there is no theory of the physical meaning of such a term that is good enough to predict, for instance, its magnitude. The problem of the nature and effects of dark energy is no minor detail, nor does anything about it suggest a perpetually unfathomable mystery. So much for cosmology being a fundamentally completed science.

Depending on what dark energy turns out to be, it may well be possible to harness it in the distant future, to provide energy for knowledge-creation to continue for ever. Because this energy would have to be collected over ever greater distances, the computation would have to become ever slower. In a mirror image of what would happen in omega-point cosmologies, the inhabitants of the universe would notice no slowdown, because, again, they would be instantiated as computer programs whose total number of steps would be unbounded. Thus dark energy, which has ruled out one scenario for the unlimited growth of knowledge, would provide the literal driving force of another.
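The mirror-image arithmetic, again with an illustrative doubling schedule: if the n-th step instead takes time T·2^n, because energy must be gathered from ever greater distances, the first k steps are completed by time

\[
\sum_{n=1}^{k} T\,2^{n} = T\left(2^{k+1}-2\right),
\]

which is finite for every k. So every one of the infinitely many steps is eventually executed, even though the computation slows without limit.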

The new cosmological models describe universes that are infinite in their spatial dimensions. Because the Big Bang happened a finite time ago, and because of the finiteness of the speed of light, we shall only ever see a finite portion of infinite space – but that portion will continue to grow for ever. Thus, eventually, ever more unlikely phenomena will come into view. When the total volume that we can see is a million times larger than it is now, we shall see things that have a probability of one in a million of existing in space as we see it today. Everything physically possible will eventually be revealed: watches that came into existence spontaneously; asteroids that happen to be good likenesses of William Paley; everything. According to the prevailing theory, all those things exist today, but many times too far away for light to have reached us from them – yet.
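The ‘one in a million’ figure is just expected-value arithmetic, with the numbers serving only as an illustration: if a phenomenon has probability p of occurring somewhere in a region the size of today’s observable volume, then in a volume k times larger the expected number of occurrences is roughly

\[
\mathbb{E}[\text{occurrences}] \approx k\,p ,
\]

which first reaches order one when k is about 1/p. A phenomenon with p = 10^{-6} therefore becomes expected once the visible volume has grown about a million-fold.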

Light becomes fainter as it spreads out: there are fewer photons per unit area. That means that ever larger telescopes are needed to detect a given object at ever larger distances. So there may be a limit to how distant – and therefore how unlikely – a phenomenon we shall ever be able to see. Except, that is, for one type of phenomenon: a beginning of infinity. Specifically, any civilization that is colonizing the universe in an unbounded way will eventually reach our location.
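The faintness referred to above is the inverse-square spreading of light; for this sketch I ignore cosmological redshift and expansion, which make distant objects fainter still. A source of luminosity L delivers a flux at distance d of

\[
F = \frac{L}{4\pi d^{2}} ,
\]

so detecting a given object at twice the distance requires roughly four times the collecting area.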

Hence a single infinite space could play the role of the infinitely many universes postulated by anthropic explanations of the fine-tuning coincidences. In some ways it could play that role better: if the probability that such a civilization could form is not zero, there must be infinitely many such civilizations in space, and they will eventually encounter each other. If they could estimate that probability from theory, they could test the anthropic explanation.

Furthermore, anthropic arguments could not only dispense with all those parallel universes,* they could dispense with the variant laws of physics too. Recall from Chapter 6 that all the mathematical functions that occur in physics belong to a relatively narrow class, the analytic functions. They have a remarkable property: if an analytic function is non-zero at even one point, then over its entire range it can pass through zero only at isolated points. So this must be true of ‘the probability that an astrophysicist exists’ expressed as a function of the constants of physics. We know little about this function, but we do know that it is non-zero for at least one set of values of the constants, namely ours. Hence we also know that it is non-zero for almost any values. It is presumably unimaginably tiny for almost all sets of values – but, nevertheless, non-zero. And hence, almost whatever the constants were, there would be infinitely many astrophysicists in our single universe.
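The property being invoked can be stated as follows, here only as a sketch for a function of a single real parameter: if f is analytic on a connected interval and is not identically zero there, for instance because f(x_0) ≠ 0 at our own values x_0 of the constants, then

\[
\{\,x : f(x) = 0\,\} \ \text{consists of isolated points only.}
\]

Applied to f = ‘the probability that an astrophysicist exists’ as a function of the constants, this is why a single non-zero value, ours, implies a non-zero value at almost every other setting of the constants.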

Unfortunately, at this point the anthropic explanation of fine-tuning has cancelled itself out: astrophysicists exist whether there is fine-tuning or not. So, in the new cosmology even more than in the old one, the anthropic argument does not explain the fine-tuning. Nor, therefore, can it solve the Fermi problem, ‘Where are they?’ It may turn out to be a necessary part of the explanation, but it can never explain anything by itself. Also, as I explained in Chapter 8, any theory involving an anthropic argument must provide a measure for defining probabilities in an infinite set of things. It is unknown how to do that in the spatially infinite universe that cosmologists currently believe we live in.

That issue has a wider scope. For example, there is the so-called ‘quantum suicide argument’ in regard to the multiverse. Suppose you want to win the lottery. You buy a ticket and set up a machine that will automatically kill you in your sleep if you lose. Then, in all the histories in which you do wake up, you are a winner. If you do not have loved ones to mourn you, or other reasons to prefer that most histories not be affected by your premature death, you have arranged to get something for nothing with what proponents of this argument call ‘subjective certainty’. However, that way of applying probabilities does not follow directly from quantum theory, as the usual one does. It requires an additional assumption, namely that when making decisions one should ignore the histories in which the decision-maker is absent. This is closely related to anthropic arguments. Again, the theory of probability for such cases is not well understood, but my guess is that the assumption is false.
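In lottery terms, and with an illustrative figure for the odds, the additional assumption amounts to replacing the measure of winning histories by a probability conditioned on your own survival:

\[
\mu(\text{win}) \approx 10^{-7}, \qquad P(\text{win} \mid \text{you wake up}) = 1 ,
\]

since the machine ensures that the only histories in which you wake up are those in which you won. The question is whether the conditional quantity, rather than the measure, is the right one to use in decision-making.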

A related assumption occurs in the so-called ‘simulation argument’, whose most cogent proponent is the philosopher Nick Bostrom. Its premise is that in the distant future the whole universe as we know it is going to be simulated in computers (perhaps for scientific or historical research) many times – perhaps infinitely many times. Therefore virtually all instances of us are in those simulations and not the original world. And therefore we are almost certainly living in a simulation. So the argument goes. But is it really valid to equate ‘most instances’ with ‘near certainty’ like that?

For an inkling of why it might not be, consider a thought experiment. Imagine that physicists discover that space is actually many-layered like puff pastry; the number of layers varies from place to place; the layers split in some places, and their contents split with them. Every layer has identical contents, though. Hence, although we do not feel it, instances of us split and merge as we move around. Suppose that in London space has a million layers, while in Oxford it has only one. I travel frequently between the two cities, and one day I wake up having forgotten which one I am in. It is dark. Should I bet that I am much more likely to be in London, just because a million times as many instances of me ever wake up in London as in Oxford? I think not. In that situation it is clear that counting the number of instances of oneself is no guide to the probability one ought to use in decision-making. We should be counting histories, not instances. In quantum theory, the laws of physics tell us how to count histories by measure. In the case of multiple simulations, I know of no good argument for any way of counting them: it is an open question. But I do not see why repeating the same simulation of me a million times should in any sense make it ‘more likely’ that I am a simulation rather than the original. What if one computer uses a million times as many electrons as another to represent each bit of information in its memory? Am I more likely to be ‘in’ the former computer than the latter?

A different issue raised by the simulation argument is this: will the universe as we know it really be simulated often in the future? Would that not be immoral? The world as it exists today contains an enormous amount of suffering, and whoever ran such a simulation would be responsible for recreating it. Or would they? Are two identical instances of a quale the same thing as one? If so, then creating the simulation would not be immoral – no more so than reading a book about past suffering is immoral. But in that case how different do two simulations of people have to be before they count as two people for moral purposes? Again, I know of no good answer to those questions. I suspect that they will be answered only by the explanatory theory from which AI will also follow.

Here is a related but starker moral question. Take a powerful computer and set each bit randomly to 0 or 1 using a quantum randomizer. (That means that 0 and 1 occur in histories of equal measure.) At that point all possible contents of the computer’s memory exist in the multiverse. So there are necessarily histories present in which the computer contains an AI program – indeed, all possible AI programs in all possible states, up to the size that the computer’s memory can hold. Some of them are fairly accurate representations of you, living in a virtual-reality environment crudely resembling your actual environment. (Present-day computers do not have enough memory to simulate a realistic environment accurately, but, as I said in Chapter 7, I am sure that they have more than enough to simulate a person.) There are also people in every possible state of suffering. So my question is: is it wrong to switch the computer on, setting it executing all those programs simultaneously in different histories? Is it, in fact, the worst crime ever committed? Or is it merely inadvisable, because the combined measure of all the histories containing suffering is very tiny? Or is it innocent and trivial?
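A rough way of quantifying ‘very tiny’, with the memory size chosen purely for illustration: since each bit is 0 or 1 in histories of equal measure, any one specific N-bit memory configuration occupies histories of combined measure 2^{-N}, and a set S of configurations occupies measure |S|/2^{N}. Even for a memory of a single kilobyte,

\[
2^{-N} = 2^{-8192} \approx 10^{-2466} ,
\]

and the moral question above turns on whether |S|/2^{N}, for the set S of configurations that encode suffering beings, is small enough to be negligible.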

An even more dubious example of anthropic-type reasoning is the ‘doomsday argument’. It attempts to estimate the life expectancy of our species by assuming that the typical human is roughly halfway through the sequence of all humans. Hence we should expect the total number who will ever live to be about twice the number who have lived so far. Of course this is prophecy, and for that reason alone cannot possibly be a valid argument, but let me briefly pursue it in its own terms. First, it does not apply at all if the total number of humans is going to be infinite – for in that case every human who ever lives will live unusually early in the sequence. So, if anything, it suggests that we are at the beginning of infinity.
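For what the argument is worth on its own terms, and taking the commonly cited estimate (not a figure from this book) that roughly 10^{11} humans have lived so far, the halfway assumption gives

\[
N_{\text{total}} \approx 2\,N_{\text{so far}} \approx 2 \times 10^{11} ,
\]

whereas if the total is infinite the argument yields no such bound at all, since every human is then unusually early in the sequence.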

Also, how long is a human lifetime? Illness and old age are going to be cured soon – certainly within the next few lifetimes – and technology will also be able to prevent deaths through homicide or accidents by creating backups of the states of brains, which could be uploaded into new, blank brains in identical bodies if a person should die. Once that technology exists, people will consider failing to make frequent backups of themselves far more foolish than they now consider failing to back up their computers. If nothing else, evolution alone will ensure that, because those who do not back themselves up will gradually die out. So there can be only one outcome: effective immortality for the whole human population, with the present generation being one of the last that will have short lives. That being so, if our species is nevertheless going to have a finite lifetime, then knowing the total number of humans who will ever live provides no upper bound on that lifetime, because it cannot tell us how long the potentially immortal humans of the future will live before the prophesied catastrophe strikes.

In 1993 the mathematician Vernor Vinge wrote an influential essay entitled ‘The Coming Technological Singularity’, in which he estimated that, within about thirty years, predicting the future of technology would become impossible – an event that is now known simply as ‘the Singularity’. Vinge associated the approaching Singularity with the achievement of AI, and subsequent discussions have centred on that. I certainly hope that AI is achieved by then, but I see no sign yet of the theoretical progress that I have argued must come first. On the other hand, I see no reason to single out AI as a mould-breaking technology: we already have billions of humans.

Most advocates of the Singularity believe that, soon after the AI breakthrough, superhuman minds will be constructed and that then, as Vinge put it, ‘the human era will be over.’ But my discussion of the universality of human minds rules out that possibility. Since humans are already universal explainers and constructors, they can already transcend their parochial origins, so there can be no such thing as a superhuman mind as such. There can only be further automation, allowing the existing kind of human thinking to be carried out faster, and with more working memory, and delegating ‘perspiration’ phases to (non-AI) automata. A great deal of this has already happened with computers and other machinery, as well as with the general increase in wealth, which has multiplied the number of humans who are able to spend their time thinking. This can indeed be expected to continue. For instance, there will be ever-more-efficient human–computer interfaces, no doubt culminating in add-ons for the brain. But tasks like internet searching will never be carried out by super-fast AIs scanning billions of documents creatively for meaning, because they will not want to perform such tasks any more than humans do. Nor will artificial scientists, mathematicians and philosophers ever wield concepts or arguments that humans are inherently incapable of understanding. Universality implies that, in every important sense, humans and AIs will never be other than equal.
