
To keep us from reacting to this horror, we are granted an existence in a simulated reality, accessed via a direct connection to our brains. All our conscious experiences, then, are nothing more than the product of a computer program.

It’s not an unprecedented idea. Philosophers since Descartes have argued about whether our perception of reality could be the product of deception, and science-fiction writers have used a similar premise many times. In 1966, for example, Philip K. Dick published a story where people bought ‘implanted memories’ that enabled them to experience things they had never done. The TV series Doctor Who introduced a massive computer system called the ‘Matrix’ in 1976; this could also be directly connected to the brain to allow out-of-body experiences.

 

But the 1999 movie The Matrix obviously hit the screens at just the right time. Within a few years of its release, physicists were discussing the idea at scientific conferences, and every time they did, it was the movie that was referenced. Strange as that may seem, there was good reason. The idea that we live in a simulated reality was one of the few plausible answers to a very old question that had just resurfaced in physics.

 

Looking out at the universe, astronomers have noticed something strange. They almost hesitate to mention it, but it is like an elephant in the room, and has to be acknowledged. This universe is remarkably good for us. Change it a little bit – tweak one of the laws of nature, say – and we simply wouldn’t have arisen. It is almost as if the universe were purposely designed for our habitation. If that is the case, could the designer be a race of super-intelligent beings who have some reason – maybe work, maybe pleasure – to will our existence?

 

It’s a big ‘if’, of course – perhaps the biggest ‘if’ in physics. The discussion of that ‘if’ even has a name: the ‘anthropic principle’. It’s a misnomer really. For starters, it’s more of a suggestion than a principle. And, although anthropic means ‘human-centred’, that’s not really what it’s about. The person who coined the term, astrophysicist Brandon Carter, meant it to encompass not just human life, but the existence of intelligent life in general.

 

Carter came up with the anthropic principle at a time when physicists were coming to terms with a new paradigm: the Big Bang. Until the idea of a beginning to the universe was widely accepted, physicists had assumed there was no such thing as a ‘special’ time in the universe’s history. The universe had always existed, and would always exist, pretty much as it is.

 

With the discovery of the cosmic microwave background radiation in 1965, though, everything changed. Once the radiation was recognized as an echo of the moment of creation, the universe was seen to have an unfolding history, punctuated by significant events. The trouble was, one of the central premises of astronomy has always been the Copernican principle, which asserts that humans hold no special place in space or in time. With the Big Bang, the Copernican principle was under threat.

 
A special universe?
 

But, Carter said, whatever our prejudices, we have to acknowledge there is something special about our relationship with the universe. ‘Although our situation is not necessarily central, it is inevitably privileged to some extent,’ he told an assembly of scientists in 1974. That privilege comes, first, through the laws that govern the universe’s evolution.

 

There are a number of reasons why one might think that these laws were designed to give us a comfortable existence. The first is the rather convenient strength of gravity. After the Big Bang, space was expanding, forcing all the particles of matter further and further away from each other. The force of gravity was working against that expansion, however: the mutual gravitational attraction of the particles pulled them towards each other.

 

There are three ways in which this could have worked out. First, the expansion of space could have overwhelmed gravity’s pull. In this scenario, known as the ‘open’ universe, every particle of matter would be pushed further and further apart, and the increasing separation would make the gravitational pull weaker and weaker. In this situation, galaxies – maybe even the stars themselves – would not have formed.

 

What if gravity’s pull overwhelmed the push of expanding space? Then stars and galaxies might have briefly formed, but the strength of gravity means they would have quickly collapsed in on themselves and each other, and the universe would have imploded in a huge gravitational crunch. This is the ‘closed’ universe.

 

The third, ‘critical’ scenario involves a delicate balance between push and pull. Here the density of matter in the universe is such that, just after the Big Bang, the gravitational pull almost perfectly offsets the expansion of space. It pulls matter together just enough for stars to form, and for the stars to gather into galaxies. Thanks to their mutual gravitational pull, the expansion of the space between them is slowed, and the universe is granted a long and fruitful life.

 
A cosmic coincidence
 

So, what is the difference between these scenarios? When astronomers crunch the numbers, they start with the critical universe. The key quantity is the density of matter in the universe, expressed through a parameter called ‘Omega’ – the ratio of the actual density to the critical density, so that a critical universe has Omega equal to exactly one. It turns out that, for the critical scenario to occur, Omega had to sit at that value one second after the Big Bang with astonishing precision. Had it been greater or less than one by as little as one part in a million billion, the universe would have either crunched closed or flung matter far apart long before life could establish itself in the benign environment surrounding a young star such as our sun.
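
For readers who want to see where the critical balance comes from, here is a minimal sketch in Python. It evaluates the standard Friedmann-equation expression for the critical density, rho_crit = 3H²/8πG; the round value of 70 km/s/Mpc for the Hubble constant is an illustrative assumption, not a precise measurement.

```python
import math

# Critical density: the matter density at which gravity's pull exactly
# balances the expansion of space (the 'critical' scenario, Omega = 1).
# Friedmann equation: rho_crit = 3 * H^2 / (8 * pi * G).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22       # one megaparsec in metres
H0 = 70e3 / MPC      # assumed Hubble constant, ~70 km/s/Mpc, in s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)

print(f"critical density ~ {rho_crit:.1e} kg per cubic metre")    # ~ 9.2e-27
print(f"... about {rho_crit / 1.67e-27:.0f} hydrogen atoms per cubic metre")
```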

 

It’s not the only cosmic coincidence. The strength of gravity is conveniently but finely balanced against the initial expansion of space, allowing stars like our sun to form. Now consider the efficiency with which the sun releases energy by fusing hydrogen atoms to form helium. The efficiency is around 0.007. That is, when the combined mass of the hydrogen atoms is compared to the mass of the newly formed helium, 0.7 per cent has disappeared. That missing mass is converted into the energy – mainly heat – that powers life on Earth.
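
The 0.007 figure can be verified with nothing more than a table of atomic masses. A quick check in Python, using standard values in unified atomic mass units:

```python
# Verifying the 0.007 efficiency of hydrogen fusion from standard atomic
# masses: four hydrogen atoms go in, one helium-4 atom comes out, and the
# missing mass leaves as energy.

m_H  = 1.00794       # mass of a hydrogen atom, u
m_He = 4.002602      # mass of a helium-4 atom, u

mass_lost  = 4 * m_H - m_He         # mass that 'disappears' in the fusion
efficiency = mass_lost / (4 * m_H)  # fraction of the input mass released

print(f"efficiency = {efficiency:.4f}")   # 0.0072, i.e. about 0.7 per cent

# Via E = mc^2, fusing one kilogram of hydrogen therefore releases:
c = 2.998e8                               # speed of light, m/s
print(f"energy per kg ~ {efficiency * c**2:.1e} J")   # ~ 6.5e14 joules
```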

 

So how much leeway is there here? Raising the efficiency of transformation means allowing a slightly stronger ‘glue’ between the particles in the nucleus of an atom. If the efficiency were higher than 0.008, all the hydrogen created in the Big Bang would have been turned into helium almost immediately, and there would be none left to burn in stars. It would give a dead universe, in other words. Going the other way, lowering the efficiency to 0.006 would mean nuclear glue so weak that helium would never form, and the sun would never ignite. Again, no life would be possible.

 

Then there is the fact that the electric force is around 10⁴⁰ times bigger than gravity. This gives atoms their fundamental characteristics. There is a mutual attraction between the positively charged nucleus and the negatively charged orbiting electrons, and a second, far feebler mutual attraction due to gravity. Alter the ratio between the two forces by a small amount, and you change the characteristics of atoms so much that it alters the characteristics of stars – go one way and you create a universe where planets don’t form around stars like our sun. Go in the other direction and you threaten the existence of the supernovae that forged the carbon atoms that underpin the chemistry of life. There are other examples. Reduce the neutron mass by a fraction of 1 per cent, for example, and no atoms would form at all.
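
The 10⁴⁰ figure is straightforward to check. Both forces fall off with the square of the distance, so for a proton–electron pair their ratio is a pure number, independent of separation. A quick calculation with standard constants gives around 2 × 10³⁹, consistent with the round 10⁴⁰ quoted above:

```python
# Ratio of the electric force to the gravitational force between a proton
# and an electron. Both fall off as 1/r^2, so the separation cancels and
# the ratio is a pure number.

k   = 8.988e9     # Coulomb constant, N m^2 C^-2
G   = 6.674e-11   # gravitational constant, N m^2 kg^-2
e   = 1.602e-19   # elementary charge, C
m_p = 1.673e-27   # proton mass, kg
m_e = 9.109e-31   # electron mass, kg

ratio = (k * e**2) / (G * m_p * m_e)
print(f"electric / gravitational ~ {ratio:.1e}")   # ~ 2.3e39
```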

Monkeying with the universe
 

It all sounds like a fix, doesn’t it? The great English astronomer Fred Hoyle thought so. He once complained that the universe was so bio-friendly that it looked like ‘a put-up job’. Someone or something, he suggested, was ‘monkeying’ with the laws of physics to facilitate the production of life.

 

So what does a scientist do about this? Besides saying God did it – which leads scientists nowhere in the quest for an answer – there are three options. The first option is to turn the problem on its head: we wouldn’t be around to worry about these things if the universe were any different. Of course it is so precisely balanced for life; we could not be in a universe that was any different. Such an approach forces us to consider the existence of other universes where the laws of physics give different values for those crucial numbers. Besides being dead universes, however, they are also scientific dead ends. We cannot access them, so we have to content ourselves with never finding a satisfactory answer to the question of our universe’s fine-tuning for life. A second approach is similarly unsatisfactory: we put the fine-tuning down to the existence of a supernatural designer, a being that transcends the natural laws. Here, too, we have no hope of discerning whether the approach is the right one.

 

The third option is the one we have been working towards: that the universe is so well suited to our existence because it was designed for our existence. The designers in this case are not deities. They are beings like us. Only much, much more advanced in their control of technology. So advanced, in fact, that they can create two amazing things. First, beings that exhibit what we consider to be consciousness. And second, a world for those beings to experience with their consciousness. This is the logical sequence known as the simulation argument. The first person to pull it all together was a philosopher called Nick Bostrom. In 2001, he began circulating a paper entitled ‘Are You Living in a Computer Simulation?’. His answer was, yes, quite possibly.

 
Creating the world anew
 

Bostrom’s argument is fairly straightforward. Stop and think about the computing power now at your disposal. Compare that to the power available a decade ago. What about two decades ago? Now translate that into the future. If our civilization survives the next millennium, the computing power available to its population will be of a magnitude that is unimaginable to us today.

 

Now come back to the present. What is one of the most popular types of computer games? Simulation. Take the extraordinary success of the simulation Second Life, for example. It gives people the opportunity for an alternative existence – an opportunity that millions have grabbed with both hands. Other simulation games allow you to play the deity, controlling others, or just watching how their fates unfold. Something about the human mind loves to get involved in another world. And why should things be any different one thousand years into the future?

 

 

Bostrom’s argument is that at least one of the following three propositions has to be true. The first is that humans are overwhelmingly likely to become extinct before reaching a level of sophistication at which they are able to run computer simulations – virtual realities – that would mirror what we experience as reality. The second is that any such civilizations that do survive are extremely unlikely to run such simulations. The third is that we are almost certainly living in such a computer simulation.

 

The first proposition seems unlikely: there is no a priori reason why we will necessarily wipe ourselves out, or be wiped out. The second seems even more unlikely: our own delight in simulations gives no grounds for supposing that, given even more simulating power, we won’t use it. Which leaves the third proposition. Given that we are talking about a far future, in which a vast number of civilizations spread throughout the ‘original’ universe will be running simulations, what are the chances that we are in that original universe and not in a simulation? Infinitesimal. In other words, we are almost certainly living in a simulation.
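
That ‘infinitesimal’ is just a matter of counting observers. Here is a toy sketch of the arithmetic; the function and the numbers fed into it are illustrative assumptions, not figures from Bostrom’s paper:

```python
# The observer-counting behind the third proposition. If surviving
# civilizations run simulations at all, simulated minds quickly outnumber
# unsimulated ones, and the odds of being in the 'original' universe
# collapse. All numbers here are illustrative assumptions.

def fraction_simulated(real_civs, sims_per_civ, pop_ratio=1.0):
    """Fraction of all observers who live inside a simulation.

    real_civs    -- number of 'original', unsimulated civilizations
    sims_per_civ -- simulations each civilization runs
    pop_ratio    -- population of a simulation relative to a real world
    """
    simulated = real_civs * sims_per_civ * pop_ratio
    return simulated / (simulated + real_civs)

for sims in (0, 1, 1_000, 1_000_000):
    share = fraction_simulated(real_civs=1_000, sims_per_civ=sims)
    print(f"{sims:>9} simulations per civilization -> {share:.6f} simulated")
```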

 

It’s not something to get depressed about – the world is as real as it has ever been. What’s more, unlike the idea of a universe run by a supernatural god, the simulation argument just might be open to testing. The first point to recognize is that it does indeed answer the question about fine-tuning. The simulation’s creators must have a reason to create it. It seems sensible to suggest, therefore, that the overwhelming majority of simulations will have to work well enough to be interesting to their creators and users. Our experience with creating simulated environments suggests that this means populating them with beings that can enjoy their ‘existence’, which, in turn, tends to involve an ability to interact with the simulated world and its inhabitants.

 

A plausible simulation will therefore encourage the development of something we would regard as complex life. As we have seen with our look at the laws of nature, that gives a fairly narrow range of possibilities for the set-up. That at least provides a plausible explanation for the fine-tuning. Now we have to look for a scientific test for such an explanation. Again, this can be found within our own experience of creating simulations.
