Authors: Michio Kaku, Robert O'Keefe
Today, we know that the cosmological constant is very close to zero. If there were a small negative cosmological constant, then gravity would be powerfully attractive and the entire universe might be, say, just a few feet across. (By reaching out with your hand, you should be able to grab the person in front of you, who happens to be yourself.) If there were a small positive cosmological constant, then gravity would be repulsive and everything would be flying away from you so fast that its light would never reach you. Since neither nightmarish scenario occurs, we are confident that the cosmological constant is extremely tiny or even zero.
But this problem resurfaced in the 1970s, when symmetry breaking was being intensively studied in the Standard Model and GUT theory. Whenever a symmetry is broken, a large amount of energy is dumped into the vacuum. In fact, the amount of energy flooding the vacuum is 10¹⁰⁰ times larger than the experimentally observed amount. In all of physics, this discrepancy of 10¹⁰⁰ is unquestionably the largest. Nowhere in physics do we see such a large divergence between theory (which predicts a large vacuum energy whenever a symmetry is broken) and experiment (which measures zero cosmological constant in the universe). This is where Coleman's wormholes come in; they are needed to cancel the unwanted contributions to the cosmological constant.
According to Hawking, there may be an infinite number of alternative universes coexisting with ours, all of which are connected by an infinite web of interlocking wormholes. Coleman tried to add up the contribution from this infinite series. After the sum was performed, he found a surprising result: The wave function of the universe prefers to have zero cosmological constant, as desired. If the cosmological constant were zero, the wave function became exceptionally large, meaning that there was a high probability of finding a universe with zero cosmological constant. Moreover, the wave function of the universe quickly vanished
if the cosmological constant became nonzero, meaning that there was zero probability for that unwanted universe. This was exactly what was needed to cancel the cosmological constant. In other words, the cosmological constant was zero because that was the most probable outcome. The only effect of having billions upon billions of parallel universes was to keep the cosmological constant zero in our universe.
Because this was such an important result, physicists immediately began to leap into the field. “When Sidney came out with this work, everyone jumped,” recalls Stanford physicist Leonard Susskind.¹³
In his typical puckish way, Coleman published this potentially important result with a bit of humor. “It is always possible that unknown to myself I am up to my neck in quicksand and sinking fast,” he wrote.¹⁴
Coleman likes to impress audiences vividly with the importance of this problem: the chance of a cosmological constant canceling to one part in 10¹⁰⁰ is fantastically small. “Imagine that over a ten-year period you spend millions of dollars without looking at your salary, and when you finally compare what you earn with what you spent, they balance out to the penny,” he notes.¹⁵
Thus his calculation, which shows that you can cancel the cosmological constant to one part in 10¹⁰⁰, is a highly nontrivial result. To add frosting to the cake, Coleman emphasizes that these wormholes also solve another problem: They help to determine the values of the fundamental constants of the universe. Coleman adds, “It was a completely different mechanism from any that had been considered. It was Batman swinging in on his rope.”¹⁶
But criticisms also began to surface; the most persistent criticism was that he assumed that the wormholes were small, on the order of the Planck length, and that he forgot to sum over large wormholes. According to the critics, large wormholes should also be included in his sum. But since we don’t see large, visible wormholes anywhere, it seems that his calculation has a fatal flaw.
Unfazed by this criticism, Coleman shot back in his usual way: choosing outrageous titles for his papers. To prove that large wormholes can be neglected in his calculation, he wrote a rebuttal to his critics with the title “Escape from the Menace of the Giant Wormholes.” When asked about his titles, he replied, “If Nobel Prizes were given for titles, I’d have already collected mine.”¹⁷
If Coleman’s purely mathematical arguments are correct, they would give compelling evidence that wormholes are an essential feature of all physical processes, and not just some pipe dream. It would mean that wormholes connecting our universe with an infinite number of dead universes are essential to prevent our universe from wrapping itself up
into a tight, tiny ball, or from exploding outward at fantastic rates. It would mean that wormholes are the essential feature making our universe relatively stable.
But as with most developments that occur at the Planck length, the final solution to these wormhole equations will have to wait until we have a better grasp of quantum gravity. Many of Coleman’s equations require a means of eliminating the infinities common to all quantum theories of gravity, and this means using superstring theory. In particular, we may have to wait until we can confidently calculate finite quantum corrections to his theory. Many of these strange predictions will have to wait until we can sharpen our calculational tools.
As we have emphasized, the problem is mainly theoretical. We simply do not have the mathematical brainpower to break open these well-defined problems. The equations stare at us from the blackboard, but we are helpless to find rigorous, finite solutions to them at present. Once physicists have a better grasp of the physics at the Planck energy, then a whole new universe of possibilities opens up. Anyone, or any civilization, that truly masters the energy found at the Planck length will become the master of all fundamental forces. That is the next topic to which we will turn. When can we expect to become masters of hyperspace?
What does it mean for a civilization to be a million years old? We have had radio telescopes and spaceships for a few decades; our technical civilization is a few hundred years old … an advanced civilization millions of years old is as much beyond us as we are beyond a bush baby or a macaque.
Carl Sagan
PHYSICIST Paul Davies once commented on what to expect once we have solved the mysteries of the unification of all forces into a single superforce. He wrote that
we could change the structure of space and time, tie our own knots in nothingness, and build matter to order. Controlling the superforce would enable us to construct and transmute particles at will, thus generating exotic forms of matter. We might even be able to manipulate the dimensionality of space itself, creating bizarre artificial worlds with unimaginable properties. Truly we should be lords of the universe.¹
When can we expect to harness the power of hyperspace? Experimental verification of the hyperspace theory, at least indirectly, may come in the twenty-first century. However, the energy scale necessary to manipulate (and not just verify) ten-dimensional space-time, to become “lords of the universe,” is many centuries beyond today’s technology. As we have seen, enormous amounts of matter-energy are necessary to
perform near-miraculous feats, such as creating wormholes and altering the direction of time.
To be masters of the tenth dimension, either we must encounter intelligent life within the galaxy that has already harnessed these astronomical energy levels, or we must struggle for several thousand years before we attain this ability ourselves. For example, our current atom smashers or particle accelerators can boost the energy of a particle to over 1 trillion electron volts (the energy created if an electron were accelerated by 1 trillion volts). The largest accelerator is currently located in Geneva, Switzerland, and operated by a consortium of 14 European nations. But this energy pales before the energy necessary to probe hyperspace: 10¹⁹ billion electron volts, or a quadrillion times larger than the energy that might have been produced by the SSC.
A quadrillion (1 with 15 zeros after it) may seem like an impossibly large number. The technology necessary to probe this incredible energy may require atom smashers billions of miles long, or an entirely new technology altogether. Even if we were to liquidate the entire gross national product of the world and build a super-powerful atom smasher, we would not be able to come close to this energy. At first, it seems an impossible task to harness this level of energy.
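The gulf between these energy scales can be sketched in a few lines of Python. The 10¹⁹-billion-electron-volt figure is the Planck energy from the text; the 40 TeV used for the SSC is an assumed design collision energy, so treat the second ratio as an order-of-magnitude estimate only:

```python
# Order-of-magnitude comparison of accelerator energies with the Planck
# energy, the scale needed to probe hyperspace directly.
PLANCK_ENERGY_EV = 1e19 * 1e9    # "10^19 billion electron volts" = 10^28 eV
TODAY_EV = 1e12                  # ~1 trillion electron volts (1 TeV)
SSC_EV = 40e12                   # assumed SSC design collision energy, ~40 TeV

ratio_today = PLANCK_ENERGY_EV / TODAY_EV
ratio_ssc = PLANCK_ENERGY_EV / SSC_EV
print(f"Planck energy vs. today's machines: {ratio_today:.0e}")
print(f"Planck energy vs. the SSC:          {ratio_ssc:.1e}")
```

Under these assumptions the Planck energy outstrips today's accelerators by a factor of about 10¹⁶, and the never-built SSC by a few hundred trillion, i.e. roughly a quadrillion.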
However, this number does not seem so ridiculously large if we understand that technology expands exponentially, which is difficult for our minds to comprehend. To understand how fast exponential growth is, imagine a bacterium that splits in half every 30 minutes. If its growth is unimpeded, then within a few weeks this single bacterium will produce a colony that will weigh as much as the entire planet Earth.
Although humans have existed on this planet for perhaps 2 million years, the rapid climb to modern civilization within the last 200 years was possible because the growth of scientific knowledge is exponential; that is, its rate of expansion is proportional to how much is already known. The more we know, the faster we can know more. For example, we have amassed more knowledge since World War II than all the knowledge amassed in our 2-million-year evolution on this planet. In fact, the amount of knowledge that our scientists gain doubles approximately every 10 to 20 years.
Thus it becomes important to analyze our own development historically. To appreciate how technology can grow exponentially, let us analyze our own evolution, focusing strictly on the energy available to the average human. This will help put the energy necessary to exploit the ten-dimensional theory into proper historical perspective.
Today, we may think nothing about taking a Sunday drive in the country in a car with a 200-horsepower engine. But the energy available to the average human during most of our evolution on this planet was considerably less.
During this period, the basic energy source was the power of our own hands, about one-eighth of a horsepower. Humans roamed the earth in small bands, hunting and foraging for food in packs much like animals, using only the energy of their own muscles. From an energy point of view, this changed only within the last 100,000 years. With the invention of hand tools, humans could extend the power of their limbs. Spears extended the power of their arms, clubs the power of their fists, and knives the power of their jaws. In this period, their energy output doubled, to about one-quarter of a horsepower.
Within the past 10,000 or so years, the energy output of a human doubled once again. The main reason for this change was probably the end of the Ice Age, which had retarded human development for thousands of years.
Human society, which consisted of small bands of hunters and gatherers for hundreds of thousands of years, changed with the discovery of agriculture soon after the ice melted. Roving bands of humans, no longer having to follow game across the plains and forests, settled in stable villages where crops could be harvested year-round. Also, with the melting of the ice sheet came the domestication of animals such as horses and oxen; the energy available to a human rose to approximately 1 horsepower.
With the beginning of a stratified, agrarian life came the division of labor, until society underwent an important change: the transition to a slave society. This meant that one person, the slave owner, could command the energy of hundreds of slaves. This sudden increase in energy made possible inhuman brutality; it also made possible the first true cities, where kings could command their slaves to use large cranes, levers, and pulleys to erect fortresses and monuments to themselves. Because of this increase in energy, out of the deserts and forests rose temples, towers, pyramids, and cities.
From an energy point of view, for about 99.99% of the existence of humanity on this planet, the technological level of our species was only one step above that of animals. It has only been within the past few hundred years that humans have had more than 1 horsepower available to them.
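The 99.99% figure follows directly from the two time spans in the paragraph, taking "a few hundred years" as an assumed round 200:

```python
# Fraction of human history spent with no more than ~1 horsepower
# available per person (both figures taken from the text).
HUMAN_HISTORY_YEARS = 2_000_000
HIGH_ENERGY_YEARS = 200   # assumed "past few hundred years"

fraction_low_energy = 1 - HIGH_ENERGY_YEARS / HUMAN_HISTORY_YEARS
print(f"{fraction_low_energy:.2%}")
```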
A decisive change came with the Industrial Revolution. Newton’s discovery of the universal laws of gravitation and motion made it possible to reduce mechanics to a set of well-defined equations. Thus Newton’s classical theory of the gravitational force, in some sense, paved the way for the modern theory of machines. This helped to make possible the widespread use of steam-powered engines in the nineteenth century; with steam, the average human could command tens to hundreds of horsepower. For example, the railroads opened up entire continents to development, and steamships opened up modern international trade. Both were energized by the power of steam, heated by coal.
It took over 10,000 years for humanity to create modern civilization over the face of Europe. With steam-driven and later oil-fired machines, the United States was industrialized within a century. Thus the mastery of just a single fundamental force of nature vastly increased the energy available to a human being and irrevocably changed society.
By the late nineteenth century, Maxwell’s mastery of the electromagnetic force once again set off a revolution in energy. The electromagnetic force made possible the electrification of our cities and our homes, exponentially increasing the versatility and power of our machines. Steam engines were now being replaced by powerful dynamos.
Within the past 50 years, the discovery of the nuclear force has increased the power available to a single human by a factor of a million. Because the energy of chemical reactions is measured in electron volts, while the energy of fission and fusion is measured in millions of electron volts, we have a millionfold increase in the power available to us.