But What If We're Wrong?

Chuck Klosterman

But what if we told that seventeenth-century shepherd something even crazier?

What if we told him that he did not exist? And that his sheep didn't exist, and neither did the pasture he was standing in, nor the moon, nor the sun? Or even the person who was telling him this?

This is not merely a shift in the world around us.

This is a shift in “the (simulated) world around us.”

[6]
Like most people who enjoy dark rooms and Sleep's Jerusalem, I dig the simulation argument. It is, as far as I can tell, the most reasonable scientific proposition no one completely believes. I have yet to encounter anyone who totally buys it; even the man most responsible for its proliferation places the likelihood of its validity at roughly 20 percent. But even a one-in-five chance presents the potential for a paradigm shift greater than every other historical shift combined. It would place the Copernican Revolution on a par with the invention of Velcro.

The man to whom I refer is existential Swedish philosopher Nick Bostrom, currently directing the Future of Humanity Institute at the University of Oxford. He's relatively young (born in '73), balding, and slightly nervous that the human race will be annihilated by robots. Yet it is his simulation hypothesis (building off the earlier work of Austrian roboticist Hans Moravec) that really moves the stoner needle. The premise of his hypothesis started showing up in places like The New York Times around 2007, and it boils down to this: What we believe to be reality is actually a computer simulation, constructed in a future where artificial intelligence is so advanced that those living inside the simulation cannot tell the difference. Essentially, we would all be characters in a supernaturally sophisticated version of The Sims or Civilization, where the constructed characters—us—are self-aware and able to generate original thoughts and feelings. But none of this would be real, in the way that term is traditionally used. And this would be true for all of history and all of space.

What Bostrom is asserting is that there are three possibilities about the future, one of which must be true. The first possibility is that the human race becomes extinct before reaching the stage where such a high-level simulation could be built. The second possibility is that humans do reach that stage, but for whatever reason—legality, ethics, or simple disinterest—no one ever tries to simulate the complete experience of civilization. The third possibility is that we are living in a simulation right now. Why? Because if it's possible to create this level of computer simulation (and if it's legally and socially acceptable to do so), there won't just be one simulation. There will be an almost limitless number of competing simulations, all of which would be disconnected from each other. A computer program could be created that does nothing except generate new simulations, all day long, for a thousand consecutive years. And once those various simulated societies reach technological maturity, they would (assumedly) start creating simulations of their own—simulations inside of simulations. Eventually, we would be left with the one original “real” reality, along with billions and billions of simulated realities. Simple mathematical odds tell us that it's far more likely our current reality would fall somewhere in the latter category. It's certainly possible that we are living through the immature stages of the original version, but the chance is ultra-remote.

If you're the type of person who first read about the simulation argument in 2007 and stopped thinking about it by 2008, your reaction to the previous paragraph is probably, “This incomprehensible nonsense again?” If you've never heard of the simulation argument before today, you're probably trying to imagine how any of this could possibly be true. There's always an entrenched psychological hurdle with this hypothesis—it's just impossible for any person to circumvent the sense that what appears to be happening is really happening, and that the combination of strangeness and comfort within this experience makes the sensation of “being alive” too uncanny to be anything but genuine. But this sensation can't be trusted (in fact, it might be baked into the simulation). And what's most compelling about this concept is how rational it starts to seem, the longer you think about it. Bostrom is a philosopher, but this hypothesis is not really an extension of philosophy. This is not a situation where we start from the premise that we don't exist and demand someone prove that we do. It follows a basic progression:

  1. We have computers, and these computers keep getting better.
  2. We can already create reality simulations on these computers, and every new generation of these simulations dramatically improves.
  3. There is no reason to believe that these two things will stop being true.

In a limited capacity, artificial intelligence already exists. Even if mankind is never able to create a digital character that's fully conscious, it seems possible that mankind could create a digital character that assumes it is conscious, within the context of its program. Which actually sounds a lot like the experience we're all having here, right now, on “Earth.” That sounds a lot like life.

Certainly, it takes a mental leap to imagine how this circumstance would have transpired. But that leap is less than you might think. Here's one possible scenario, described by Brian Greene while discussing a collection of (roughly) twenty numbers that seem to dictate how the universe works. These are constants like “the mass of an electron” and “the strength of gravity,” all of which have been precisely measured and never change. These twenty numbers appear inconceivably fine-tuned—in fact, if these numbers didn't have the exact value that they do, nothing in the universe would exist. They are so perfect that it almost appears as if someone set these numbers. But who could have done that? Some people would say God. But the simulation hypothesis presents a secular answer: that these numbers were set by the simulator.

“That's a rational possibility: that someday, in the future, we'll be able to simulate universes with such verisimilitude that the beings within those simulations believe they are alive in a conventional sense. They will not know that they are inside a simulation,” says Greene. “And in that case, there is a simulator—maybe some kid in his garage in the year 4956—who is determining and defining the values of the constants in this new universe that he built on a Sunday morning on a supercomputer. And within that universe, there are beings who will wonder, ‘Who set the values of these numbers that allow stars to exist?' And the answer is the kid. There was an intelligent being outside that universe who was responsible for setting the values for these essential numbers. So here is a version of the theological story that doesn't involve a supernatural anything. It only involves the notion that we will be able to simulate realistic universes on futuristic computers.”

Part of what makes the simulation argument so attractive is the way its insane logic solves so many deep, impossible problems. Anything we currently classify as unexplainable—ghosts, miracles, astrology, demonic possession—suddenly has a technological explanation: They are bugs in the program (or, in the case of near-death experiences, cheat codes). Theologians spend a lot of time trying to figure out how a righteous God could allow the Holocaust to happen, but that question disappears when God is replaced by Greene's teenager in the year 4956 (weird kids love death). Moreover, the simulation hypothesis doesn't contradict God's existence in any way (it just inserts a middle manager).

The downside to the simulation hypothesis is that it appears impossible to confirm (although maybe not totally impossible). Such a realization wouldn't be like Jim Carrey's character's recognition of his plight in The Truman Show, because there would be no physical boundary to hit; it would be more like playing Donkey Kong and suddenly seeing Mario turn toward the front of the monitor in order to say, “I know what's going on here.” Maybe speculating on the mere possibility of this simulacrum is the closest we could ever come to proving that it's real. But this is a book, so those limitations don't apply. For my purposes, the how is irrelevant. I'm just going to pretend we all collectively realized that we are simulated digital creatures, living inside a simulated digital game. I'm going to pretend our reality is a sophisticated computer simulation, and that we all know this.

If this were true, how should we live? Or maybe: How should we “live”?

[7]
Imagine two men in a bar, having (in Neil deGrasse Tyson's parlance) a “beer conversation.” One man believes in God and the other does not, and they are debating the nature of morality. The man who believes in God argues that without the existence of a higher power, there would be no reason for living a moral life, since this would mean ethics are just slanted rules arbitrarily created by flawed people for whatever reason they desire. The man who does not believe in God disagrees and insists that morality matters only if its tenets are a human construct, since that would mean our ethical framework is based not on a fear of supernatural punishment but on a desire to give life moral purpose. They go back and forth on this for hours, continually restating their core position in different ways. But then a third man joins their table and explains the new truth: It turns out our moral compass comes from neither God nor ourselves. It comes from Brenda. Brenda is a middle-aged computer engineer living in the year 2750, and she designed the simulation that currently contains all three of their prefab lives. So the difference between right and wrong does come from a higher power, but that higher power is just a mortal human. And the ethical mores ingrained in our society are not arbitrary, but they're also not communal or fair (they're just Brenda's personal conception of what a society should believe and how people should behave).

The original two men finish their beers and exit the tavern. Both are now aware they've been totally wrong about everything. So what do they do now? For a moment, each man is overcome with suicidal tendencies. “If we are not even real,” they concurrently think, “what is the meaning of any of this?” But these thoughts quickly fade. For one thing, learning you're not real doesn't feel any different from the way you felt before. Pain still hurts, even though no actual injury is being inflicted; happiness still feels good, even if the things making you happy are as fake as you are. The “will to live” still subsists, because that will was programmed into your character (and so was a fear of death). Most critically, the question of “What is the meaning of any of this?” was just as present yesterday as it is today—the conditions are different, but the confusion is the same.

Even if you're not alive, life goes on. What changes is the purpose.

Think of a video game that immerses the player in an alternative reality—I'll use Grand Theft Auto as an example, simply because of its popularity. When a casual gamer plays any new version of GTA, they typically work through three initial steps. The first is to figure out the various controls and to develop a general sense of how to move around the virtual sandbox. The second is a cursory examination of the game's espoused plot, done mostly to gauge its level of complexity and to get a fuzzy sense of how long it will take to complete. And then—and particularly if the game looks like it will be time-consuming and hard—the player enters a third phase: a brief, chaotic attempt at “breaking the game.” Can I drive my car into the ocean? Can I shoot people who are trying to help me? Can I punch animals? What, exactly, are the limits here?

When I first played the crime-solving video game L.A. Noire, I realized that the main character (voiced by Mad Men's Ken Cosgrove) would sometimes fall through the floor of certain buildings and disappear into the middle of the earth. I had no idea why this happened, so I spent a lot of time searching for floors to inexplicably fall through. And if I knew that my actual life was similarly unreal, I'd do the same thing: I'd look for ways to break the simulation. Obviously, I could not be as militant as I was while playing L.A. Noire, because I wouldn't have unlimited lives. I wouldn't want my character to die. When I use my Xbox, I'm an extension of the simulator (which means I could let my little Cosgrove fall through the floor a hundred times, knowing he'd always return). Were I actually living inside the simulation hypothesis, I'd be a one-time avatar. So the boundaries I would try to break would not be physical. In fact, I'd say the first principle to adopt in this scenario would be the same as the one we use in regular life—don't get terminated. Stay alive. But beyond that? I'd spend the rest of my “life” trying to figure out what I can't do. What are the thoughts I can't have? What beliefs are impossible for me to understand or express? Are there aspects of this simulation that its creator never considered? Because if this simulation is all there is (and there's no way to transcend beyond it), I would have to look for the only possible bright side: A simulated world is a limited world. It's a theoretically solvable world, which is not something that can be said of our own.

The only problem is that anyone capable of building such a world would likely consider this possibility, too.

“You could try to ‘break' the simulation, but if the simulators did not want the simulation to be broken, I would expect your attempts to fail,” Bostrom e-mails me from England. I suspect this is not the first time he's swatted this argument into the turf. “I figure they would be vastly superior to us in intelligence and technological capability (or they could not have created this kind of simulation in the first place). And so they could presumably prevent their simulated creatures from crashing the simulation or discovering its limitations.”
