Three Roads to Quantum Gravity
A string seen through a Planck-scale magnifying glass is found to consist of discrete bits, rather like a wooden toy snake.
There turns out to be a simple way to express the fact that there is a minimum size that can be probed in string theory. In ordinary quantum theory the limitations to what can be observed are expressed in terms of the Heisenberg uncertainty principle. This says that
Δx > (h/Δp)
where Δx is the uncertainty in position, h is Planck’s famous constant and Δp is the uncertainty in momentum. String theory amends this equation to
Δx > (h/Δp) + CΔp
where C is another constant that has to do with the Planck scale. Now, without this new term one can make the uncertainty in position as small as one likes, by making the uncertainty in momentum large. With the new term in the equation one cannot do this, for when the uncertainty in momentum becomes large enough the second term comes in and forces the uncertainty in position to start to increase rather than decrease. The result is that there is a minimum value to the uncertainty in position, and this means that there is an absolute limit to the precision with which any object can be located in space.
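Where the minimum lies can be seen by treating the right-hand side as a function of Δp: it is smallest when Δp = √(h/C), at which point the bound reads Δx > 2√(hC), and with C a Planck-scale constant this minimum is itself of Planck-scale size. The short sketch below (in Python, with h and C set to 1 in arbitrary units purely for illustration) scans over Δp and confirms this numerically:

    import math

    h = 1.0  # Planck's constant, in arbitrary units for this illustration
    C = 1.0  # the new Planck-scale constant, also arbitrary here

    def position_bound(dp):
        # The modified uncertainty relation: Δx > (h/Δp) + C·Δp
        return h / dp + C * dp

    # Scan momentum uncertainties over many orders of magnitude
    smallest = min(position_bound(10 ** (k / 100)) for k in range(-300, 301))

    print(smallest)                  # approximately 2.0
    print(2 * math.sqrt(h * C))      # the analytic minimum, 2·sqrt(h·C)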
This tells us that M theory, if it exists, cannot describe a world in which space is continuous and one can pack an infinite amount of information into any volume, no matter how small. This suggests that whatever it is, M theory will not be some direct extension of string theory, as it will have to be formulated in a different conceptual language. The present formulation of string theory is likely, then, to be a transitional stage in which elements of a new physics are mixed up with the old Newtonian framework, according to which space and time are continuous, infinitely divisible and absolute. The problem that remains is to separate the old from the new and find a coherent way to formulate a theory using only those principles that are supported by the experimental physics of the twentieth and twenty-first centuries.
III
THE PRESENT FRONTIERS
CHAPTER 12
THE HOLOGRAPHIC PRINCIPLE
In Part II we looked at three different approaches to quantum gravity: black hole thermodynamics, loop quantum gravity and string theory. While each takes a different starting point, they all agree that when viewed on the Planck scale, space and time cannot be continuous. For seemingly different reasons, at the end of each of these roads one reaches the conclusion that the old picture according to which space and time are continuous must be abandoned. On the Planck scale, space appears to be composed of fundamental discrete units.
Loop quantum gravity gives us a detailed picture of these units, in terms of spin networks. It tells us that areas and volumes are quantized and come only in discrete units. String theory at first appears to describe a continuous string moving in a continuous space. But a closer look reveals that a string is actually made of discrete pieces, called string bits, each of which carries a discrete amount of momentum and energy. This is expressed in a simple and beautiful way as an extension of the uncertainty principle, which tells us that there is a smallest possible length.
Black hole thermodynamics leads to an even more extreme conclusion, the Bekenstein bound. According to this principle the amount of information that can be contained in any region is not only finite, it is proportional to the area of the boundary of the region, measured in Planck units. This implies that the world must be discrete on the Planck scale, for were it continuous any region could contain an infinite amount of information.
It is remarkable that all three roads lead to the general conclusion that space becomes discrete on the Planck scale. However, the three different pictures of quantum spacetime that emerge seem rather different. So it remains to join these pictures together to make a single picture which, when we understand it, will become the one final road to quantum gravity.
At first it may not be obvious how to do this. The three different approaches investigate different aspects of the world. Even if there is one ultimate theory of quantum gravity, there will be different physical regimes, in which the basic principles may manifest themselves differently. This seems to be what is happening here. The different versions of discreteness arise from asking different questions. We would find an actual contradiction only if, when we asked the same question in two different theories, we got two different answers. So far this has not happened, because the different approaches ask different kinds of question. It is possible that the different approaches represent different windows onto the same quantum world - and if this is so, there must be a way of unifying them all into a single theory.
If the different approaches are to be unified, there must be a principle which expresses the discreteness of quantum geometry in a way that is consistent with all three approaches. If such a principle can be found, then it will serve as a guide to combining them into one theory. In fact, just such a principle has been proposed in recent years. It is called the holographic principle.
Several different versions of this principle have been proposed by different people. After a lot of discussion over the last few years there is still no agreement about exactly what the holographic principle means, but there is a strong feeling among those of us in the field that some version of the holographic principle is true. And if it is true, it will be the first principle which makes sense only in the context of a quantum theory of gravity. This means that even if it is presently understood as a consequence of the principles of general relativity and quantum theory, there is a chance that in the end the situation will be reversed and the holographic principle will become part of the foundations of physics, from which quantum theory and relativity may both be deduced as special cases.
The holographic principle was inspired first of all by the Bekenstein bound, which we discussed in Chapter 8. Here is one way to describe the Bekenstein bound. Consider any physical system, made of anything at all - let us call it The Thing. We require only that The Thing can be enclosed within a finite boundary, which we shall call The Screen (Figure 39). We would like to know as much as possible about The Thing. But we cannot touch it directly - we are restricted to making measurements of it on The Screen. We may send any kind of radiation we like through The Screen, and record whatever changes result on The Screen. The Bekenstein bound says that there is a general limit to how many yes/no questions we can answer about The Thing by making observations through The Screen that surrounds it. The number must be less than one-quarter of the area of The Screen, in Planck units. What if we ask more questions? The principle tells us that either of two things must happen. Either the area of The Screen will increase, as a result of doing an experiment that asks questions beyond the limit; or the experiments we do that go beyond the limit will erase, or invalidate, the answers to some of the previous questions. At no time can we know more about The Thing than the limit, imposed by the area of The Screen.
The argument for the Bekenstein bound. We observe The Thing through The Screen, which limits the amount of information we can receive about The Thing to what can be represented on The Screen.
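To get a feeling for the numbers involved, here is a rough back-of-the-envelope calculation (a sketch in Python; the one-metre radius is an arbitrary choice, made purely for illustration) of how many yes/no questions the bound permits for a spherical Screen:

    import math

    planck_length = 1.616e-35   # the Planck length, in metres
    radius = 1.0                # radius of The Screen, in metres (arbitrary example)

    area = 4 * math.pi * radius ** 2          # area of The Screen, in square metres
    area_in_planck_units = area / planck_length ** 2
    max_questions = area_in_planck_units / 4  # the bound: one-quarter of the area

    print(f"about {max_questions:.1e} yes/no questions")   # roughly 1.2e70

Even for a Screen only a metre across, the limit is astronomically large, which is why it plays no role in everyday physics.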
What is most surprising about this is not just that there is a limit on the amount of information that can be coded into The Thing - after all, if we believe that the world has a discrete structure then this is exactly what we should expect. It is that we would normally expect the amount of information that can be coded into The Thing to be proportional to its volume, not to the area of a surface that contains it. For example, suppose that The Thing is a computer memory. If we continue to miniaturize computers more and more, we shall eventually be building them purely out of the quantum geometry of space - and that has to be the limit of what can be done. Imagine that we can then build a computer memory out of nothing but the spin network states that describe the quantum geometry of space. The number of different such spin network states can be shown to be proportional to the volume of the world that the state describes. (The reason is that there are so many states per node, and the volume is proportional to the number of nodes.) The Bekenstein bound does not dispute this, but it asserts that the amount of information that we outside observers could extract is proportional to the area and not the volume. And the area is proportional not to the number of nodes of the network, but to the number of edges that go through The Screen (Figure 40). This tells us that the most efficient memory we could construct out of the quantum geometry of space is achieved by constructing a surface and putting one bit of memory in every region 2 Planck lengths on a side. Once we have done this, building the memory into the third dimension will not help.
A spin network, which describes the quantum geometry of space, intersects a boundary such as a horizon in a finite number of points. Each intersection adds to the total area of the boundary.
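The two ways of stating the limit fit together, as a line of arithmetic shows (a sketch only; the 'cells' here are simply the regions 2 Planck lengths on a side mentioned above, used as counting devices, not an official construction of the theory):

    # One bit per cell measuring 2 Planck lengths on a side:
    area_per_bit = 2 * 2    # = 4 Planck units of area for each bit

    def bits_on_screen(screen_area_in_planck_units):
        # Tiling The Screen with such cells gives one bit per 4 Planck units
        # of area - that is, one-quarter of the area in Planck units, which
        # is exactly the Bekenstein bound.
        return screen_area_in_planck_units / area_per_bit

    print(bits_on_screen(4.8e70))   # ~1.2e70 bits for the one-metre Screen above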
This idea is very surprising. If it is to be taken seriously, there had better be a good reason for it. In fact there is, for the Bekenstein bound is a consequence of the second law of thermodynamics. The argument that leads from the laws of thermodynamics to the Bekenstein bound is not actually very complicated. Because of its importance I give a form of it in the box below.
There are at least two more good reasons to believe in the Bekenstein bound. One is that the relationship between Einstein’s theory and the bound can be turned around. In the argument for the Bekenstein bound as I present it in the box, the bound is partly a consequence of the equations of Einstein’s general theory of relativity. But, as Ted Jacobson has shown in a justly famous paper, the argument can be turned on its head so that the equations of Einstein’s theory can be derived by assuming that the laws of thermodynamics and the Bekenstein bound are true. He does this by showing that the area of The Screen must change when energy flows through it, because the laws of thermodynamics require that some entropy flows along with the energy. The result is that the geometry of space, which determines the area of The Screen, must change in response to the flow of energy. Jacobson shows that this actually implies the equations of Einstein’s theory.
Argument for the Bekenstein bound
Let us suppose that The Thing is big enough to be described both in terms of an exact quantum description and in terms of an averaged, macroscopic description. We shall argue by contradiction, which means that we first assume the opposite of what we are trying to show. Thus we assume that the amount of information required to describe The Thing is much larger than the area of The Screen. For simplicity, we assume that The Screen is spherical.
We know that The Thing is not a black hole, because the entropy of any black hole that can fit inside The Screen corresponds to an area less than that of The Screen, and so is lower than the area of The Screen in Planck units. If we assume that the entropy of a black hole counts the number of its possible quantum states, this is much less than the information contained in The Thing.
It then follows (from a theorem of classical general relativity) that The Thing has less energy than a black hole that would just fit inside The Screen. Now, we can slowly add energy to The Thing by dripping it slowly through The Screen. We shall reach some point by which we shall have given it so much energy that, by that same theorem, it must collapse to a black hole. But then we know that its entropy is given by one-quarter of the area of The Screen. Since that is lower than the entropy of The Thing initially, we have managed to lower the entropy of a system. This contradicts the second law of thermodynamics.
We dripped the energy in slowly to ensure that nothing surprising happens outside The Screen which might increase the entropy strongly somewhere else. There seem to be no loopholes in this argument. Therefore, if we believe the second law of thermodynamics, we must believe that the most entropy that we, outside The Screen, can attribute to The Thing is one-quarter of the area of The Screen. And because entropy is a count of answers to yes/no questions, this implies the Bekenstein bound as we have stated it.
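Compressed into symbols (a schematic summary only, writing S for entropy and A for the area of The Screen in Planck units): the second law demands that the total entropy cannot decrease while the energy is dripped in, so S(The Thing) ≤ S(final black hole) = A/4. Any Thing whose information exceeded one-quarter of the area of its Screen would let us violate the second law simply by collapsing it in this way.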
