But What If We're Wrong?
Chuck Klosterman

[8]
Not everyone agrees with this, or with me. “I don't think purest distillation is how giant fields get replaced by one single figure,” novelist Jonathan Lethem contends. “I think the one single figure isn't the inventor or the purest distillation, but the most embracing and mercurial, and often incredibly prolific.” Ryan Adams disputes Berry on similar grounds: “If you're looking for a cultural highlight that will still be talked about later, it would be a symptom of the thing that was set in motion—not the inventor itself. We talk about Twitter all the time, but rarely about the person who designed it.” Interestingly (or maybe unavoidably), Lethem and Adams both think the better answer is Bob Dylan. But something tells me that their dual conclusion is too rooted in the world we still inhabit. It seems self-evident only because Dylan still feels culturally present.

I keep imagining a college classroom in five hundred years, where a hipster instructor is leading a tutorial filled with students. These students relate to rock music with the same level of fluency as the music of Mesopotamia: It's a style of music they've learned to recognize, but just barely (and only because they've taken this specific class). Nobody in the room can name more than two rock songs, except the professor. He explains the sonic structure of rock, and its origins, and the way it served as cultural currency, and how it shaped and defined three generations of a global superpower. He attempts to personify the concept of rock through the life of a rock person. He shows the class a photo—or maybe a hologram—of this chosen individual.
This is the guy. This is the image of what rock was, and what rock is.

Will that image be a Jewish intellectual from Minnesota who never rocked?

I don't think so. And if it is, I don't know if that means things went wrong or right. Both, probably.

“Merit”

Right about now, were I reading this book (as opposed to writing it), I'd probably be asking myself the following reasonable questions: “But what about the merit of these things? Shouldn't we emphasize that? Isn't merit the most reliable criterion for longevity?” Were I the type of person predisposed toward disagreeing with whatever I was reading, I might suspect the author viewed the actual quality of these various artifacts as tangential to their ultimate value, and that all the author's suppositions inevitably suggest that what things actually are matters less than random social conditions and capricious assessments from people who don't necessarily know what they're talking about.

If that is what you assume, here is my response: You're right. Not totally, but mostly.

[This is not what people want to hear.]

I realize my partial dismissal of “merit” as a vital element of the historical record is problematic (even to me). Part of this problem is philosophical—it's depressing to think quality doesn't necessarily matter. Another part is practical—whenever we consider any specific example, it does seem like merit matters, in a way that feels too deep-seated to ignore. William Shakespeare is the most famous playwright who's ever lived, and his plays (or at least the themes and the language of those plays) still seem better than those of his peers. Citizen Kane is a clichéd response within any debate about the greatest film of all time, but it's also a legitimate response—it's a groundbreaking movie that can be rewatched and reevaluated dozens of times, somehow improving with every fresh viewing. It doesn't seem arbitrary that we all know who Vincent van Gogh is, or Pablo Picasso, or Andy Warhol. In the broadest possible sense, merit does play a key role: The work has to be good enough to enter the critical conversation, whatever that conversation happens to be. But once something is safely inside the walls of that discussion, the relative merits of its content matter much less. The final analysis is mostly just a process of reverse engineering.

Take architecture: Here we have a creative process of immense functional consequence. It's the backbone of the urban world we inhabit, and it's an art form most people vaguely understand—an architect is a person who designs a structure on paper, and that design emerges as the structure itself. Architects fuse aesthetics with physics and sociology. And there is a deep consensus over who did this best, at least among non-architects: If we walked down the street of any American city and asked people to name the greatest architect of the twentieth century, most would say Frank Lloyd Wright. In fact, if someone provided a different answer, we'd have to assume we've stumbled across an actual working architect, an architectural historian, or a personal friend of Frank Gehry. Of course, most individuals in those subsets would cite Wright, too. But in order for someone to argue in favor of any architect except Wright (or even to be in a position to name three other plausible candidates), that person would almost need to be an expert in architecture. Normal humans don't possess enough information to nominate alternative possibilities. And what emerges from that social condition is an insane kind of logic: Frank Lloyd Wright is indisputably the greatest architect of the twentieth century, and the only people who'd potentially disagree with that assertion are those who legitimately understand the question.

History is defined by people who don't really understand what they are defining.

As a brick-and-mortar visionary, Wright was dazzling. He was also prolific, which matters almost as much. He championed the idea of “organic architecture,” which—to someone who doesn't know anything about architecture, such as myself—seems like the condition all architecture should aspire to. But I know these imperative perspectives have no origin in my own brain. The first time I ever heard Frank Lloyd Wright's name, I was being told he was brilliant, which means the first time I looked at a building he designed, I thought either, “That is what brilliance looks like,” or “This is what everyone else recognizes as brilliance.” I knew he was considered “prolific” long before I ever wondered how many buildings an architect needed to design in order to be considered average, much less productive. I believe all architecture should aspire to be in harmony with its habitat, because (a) it seems like a good line of reasoning, and (b) that was Wright's line of reasoning. Yet I am certain—certain—that if I had learned that Wright had instead pioneered the concept of “inorganic architecture,” based on a premise that architecture should be an attempt to separate the rational world of man from the uncivilized creep of nature . . . not only would I agree with those thoughts, but I would actively see that philosophy, fully alive within his work (even if the buildings he designed were exactly the same as they are now).

I don't believe all art is the same. I wouldn't be a critic if I did. Subjective distinctions can be made, and those distinctions are worth quibbling about. The juice of life is derived from arguments that don't seem obvious. But I don't believe subjective distinctions about quality transcend to anything close to objective truth—and every time somebody tries to prove otherwise, the results are inevitably galvanized by whatever it is they get wrong.[30]

In 1936, a quarterly magazine called The Colophon polled its subscribers (of whom there were roughly two thousand, although who knows how many actually voted) about what contemporary writers they believed would be viewed as canonical at the turn of the twenty-first century. The winner was Sinclair Lewis, who had won the Nobel Prize for literature just five years earlier. Others on the list include Willa Cather, Eugene O'Neill, George Santayana, and Robert Frost. It's a decent overview of the period. Of course, what's more fascinating is who was left off: James Joyce, F. Scott Fitzgerald, and Ernest Hemingway (although the editors of The Colophon did include Hemingway on their own curated list). Now, the predictive time frame we're dealing with—sixty-four years—is not that extreme. It's possible that someone who voted in this poll was still alive when the century turned. I also suspect several of the 1936 writers who still seem like valid picks today will be barely recognizable in another sixty-four years and totally lost in 640. That's just how history works. But the meaningful detail to glean from such a list is the probable motives used by the voters, since that's how we dissect their reasonable mistakes. For example: Edna St. Vincent Millay is fourth on the Colophon list, and Stephen Vincent Benét is ninth. Both were known primarily as poets—Millay won the Pulitzer Prize in 1923 and Benét in '29. Benét was something of a Rock Star Poet (at least at the time of the poll) and is retroactively described by the Poetry Foundation as “more widely read than Robert Frost.” Yet of the three poets on this list, only Frost remains familiar. Now, the fact that Colophon voters went only one-for-three in their poet prognostication is not what matters here; what matters is that they voted for three poets. If such a poll were taken today, it's hard to imagine how far down the list one would have to scan before finding the name of even one. A present-day Colophon would need to create a separate category exclusively for poetry, lest it not be recognized at all. So what we see with this 1936 list is people selecting artists under the assumption that 1936 is the end of time, and that the temporary tastes and obsessions of 1936 would remain historically universal. The poll operates from the perspective that poetry is roughly half as important as prose, which is how the literary world thought in 1936. These voters were okay at gauging the relative timelessness of the various literary works, but they were terrible at projecting what the literary world would be like in the year 2000 (when the planet's best-selling, highest-profile book of poetry was A Night Without Armor, written by Alaskan pop star Jewel). The forces shaping collective memory are so complicated and inconsistent that any belief system dogmatically married to the concept of “merit” ends up being a logical contention that misses the point entirely. It's like arguing that the long-term success of any chain restaurant is a valid reflection of how delicious the food is.

Do you unconsciously believe that Shakespeare was an objectively better playwright than his two main rivals, Christopher Marlowe and Ben Jonson? If so, have no fear—as far as the world is concerned, he was. If you want to prove that he was, all you need to do is go through the texts of their respective plays and find the passages that validate the opinion most of the world already accepts. It will not be difficult, and it will feel like the differences you locate are a manifestation of merit. But you will actually be enforcing a tautology: Shakespeare is better than Marlowe and Jonson because Shakespeare is more like Shakespeare, which is how we delineate greatness within playwriting. All three men were talented. All three had enough merit to become the historical equivalent of Shakespearean, had history unspooled differently. But it didn't. It unspooled the way we understand it, and Shakespeare had almost nothing to do with that. He is remembered in a way that Marlowe and Jonson are not, particularly by those who haven't really thought about any of these guys, ever.

To matter forever, you need to matter to those who don't care. And if that strikes you as sad, be sad.

Burn Thy Witches

I've written about pop music for over twenty years, productively enough to deliver musicology lectures at universities I could've never attended. I've been identified as an expert in rock documentaries broadcast in countries where I do not speak the language. I've made a lot of money in a profession where many talented peers earn the adult equivalent of birdseed. I've had multiple conversations about the literal meaning of the Big Star single “September Gurls,” chiefly focused on who the September girls were, what they allegedly did, and why the word “girls” needed to be misspelled. I own a DVD about the prehistory of Quiet Riot and I've watched it twice. Yet any time I write about popular music—and even if the sentiment I articulate is something as banal and innocuous as “The Beach Boys were pretty great”—many, many people will tell me I don't know anything about music, including a few people I classify as friends. Even though every concrete signifier suggests my understanding of rock music is airtight and stable, I live my life with an omnipresent sensation of low-level anxiety about all the things I don't know about music. This is a reflection of how the world works and how my brain works.

So now I'm going to write about fucking physics.

And here are my qualifications for doing so: I took physics as a senior in high school and did not fail.

That's it. That's as far as it goes. I know how a fulcrum works and I know how to make the cue ball roll backward when I shoot pool. I know that “quantum mechanics” means “extremely small mechanics.” I understand the concepts of lift and drag just enough to be continually amazed every time an airplane doesn't crash during takeoff. But that's the extent of my expertise. I don't own a microscope or a Bunsen burner. So when I write about science, I'm not really writing about “science.” I'm not pretending to refute anything we currently believe about the natural world, particularly since my natural inclination is to reflexively accept all of it. I am, however, willing to reconsider the idea of science, and the way scientific ideas evolve. Which—in many contradictory ways—is at the center of every question this book contains.

There is, certainly, an unbreachable chasm between the subjective and objective world. A reasonable person expects subjective facts to be overturned, because subjective facts are not facts; they're just well-considered opinions, held by multiple people at the same time. Whenever the fragility of those beliefs is applied to a specific example, people bristle—if someone says, “It's possible that Abraham Lincoln won't always be considered a great president,” every presidential scholar scoffs. But if you remove the specificity and ask, “Is it possible that someone currently viewed as a historically great president will have that view reversed by future generations?” any smart person will agree that such a scenario is not only plausible but inevitable. In other words, everyone concedes we have the potential to be subjectively wrong about anything, as long as we don't explicitly name whatever that something is. Our sense of subjective reality is simultaneously based on an acceptance of abstract fallibility (“Who is to say what constitutes good art?”) and a casual certitude that we're right about exclusive assertions that feel like facts (“The Wire represents the apex of television”).

But the objective world is different. Here, we traffic in literal facts—but the permanence of those facts matters less than the means by which they are generated. What follows is an imperfect example, but it's one of the few scientific realms that I (and many people like me) happen to have an inordinate amount of knowledge about: the Age of Dinosaurs.

In 1981, when I was reading every dinosaur book I could locate, the standard belief was that dinosaurs were cold-blooded lizards, with the marginalized caveat that “some scientists” were starting to believe they may have been more like warm-blooded birds. There were lots of reasons for this alternative theory, most notably the amount of time in the sun required to heat the blood of a sixty-ton sauropod and the limitations of a reptilian two-chambered heart. But I rejected these alternatives. When I was nine, people who thought dinosaurs were warm-blooded actively made me angry. By the time I hit the age of nineteen, however, this line of thinking had become accepted by everyone, myself included. Dinosaurs were warm-blooded, and I didn't care that I'd once thought otherwise. Such intellectual reinventions are just part of being interested in a group of animals that were already extinct ten million years before the formation of the Himalayan mountains. You naturally grow to accept that you can't really know certain things everyone considers absolute, since these are very hard things to know for sure. For almost one hundred years, one of the earmarks of a truly dino-obsessed kid was his or her realization that there actually wasn't such a thing as a brontosaurus—that beast was a fiction, based on a museum's nineteenth-century mistake. The creature uninformed dilettantes referred to as a “brontosaurus” was technically an “apatosaurus” . . . until the year 2015. In 2015, a paleontologist in Colorado declared that there really was a species of dinosaur that should rightfully be classified as a brontosaurus, and that applying that name to the long-necked animal we imagine is totally acceptable, and that all the dolts[31] who had used the wrong term out of ignorance for all those years had been correct the whole time. What was (once) always true was (suddenly) never true and then (suddenly) accidentally true.

Yet these kinds of continual reversals don't impact the way we think about paleontology. Such a reversal doesn't impact the way we think about anything, outside of the specialized new data that replaced the specialized old data. If any scientific concept changes five times in five decades, the perception is that we're simply refining what we thought we knew before, and every iteration is just a “more correct” depiction of what was previously considered “totally correct.” In essence, we anchor our sense of objective reality in science itself—its laws and methods and sagacity. If certain ancillary details turn out to be specifically wrong, it just means the science got better.

But what if we're really wrong, about something really big?

I'm not talking about things like the relative blood temperature of a stegosaurus or whether Pluto can be accurately classified as a planet, or even the nature of motion and inertia. What I'm talking about is the possibility that we think we're playing checkers when we're really playing chess. Or maybe even that metaphor is too conservative for what I'm trying to imagine—maybe we think we're playing checkers, but we're actually playing goddamn Scrabble. Every day, our understanding of the universe incrementally increases. New questions are getting answered. But are these the right questions? Is it possible that we are mechanically improving our comprehension of principles that are all components of a much larger illusion, in the same way certain eighteenth-century Swedes believed they had finally figured out how elves and trolls caused illness? Will our current understanding of how space and time function eventually seem as absurd as Aristotle's assertion that a brick doesn't float because the ground is the “natural” place a brick wants to be?

No. (Or so I am told.)

“The only examples you can give of complete shifts in widely accepted beliefs—beliefs being completely thrown out the window—are from before 1600,” says superstar astrophysicist Neil deGrasse Tyson. We are sitting in his office in the upper deck of the American Museum of Natural History. He seems mildly annoyed by my questions. “You mentioned Aristotle, for example. You could also mention Copernicus and the Copernican Revolution. That's all before 1600. What was different from 1600 onward was how science got conducted. Science gets conducted by experiment. There is no truth that does not exist without experimental verification of that truth. And not only one person's experiment, but an ensemble of experiments testing the same idea. And only when an ensemble of experiments statistically agree do we then talk about an emerging truth within science. And that emerging truth does not change, because it was verified. Previous to 1600—before Galileo figured out that experiments matter—Aristotle had no clue about experiments, so I guess we can't blame him. Though he was so influential and so authoritative, one might say some damage was done, because of how much confidence people placed in his writing and how smart he was and how deeply he thought about the world . . . I will add that in 1603 the microscope was invented, and in 1609 the telescope was invented. So these things gave us tools to replace our own senses, because our own senses are quite feeble when it comes to recording objective reality. So it's not like this is a policy. This is, ‘Holy shit, this really works. I can establish an objective truth that's not a function of my state of mind, and you can do a different experiment and come up with the same result.' Thus was born the modern era of science.”

This is all accurate, and I would never directly contradict anything Neil deGrasse Tyson says, because—compared to Neil deGrasse Tyson—my skull is a bag of hammers. I'm the functional equivalent of an idiot. But maybe it takes an idiot to pose this non-idiotic question: How do we know we're not currently living in our own version of the year 1599?

According to Tyson, we have not reinvented our understanding of scientific reality since the seventeenth century. Our beliefs have been relatively secure for roughly four hundred years. That's a long time—except in the context of science. In science, four hundred years is a grain in the hourglass. Aristotle's ideas about gravity were accepted for more than twice that long. Granted, we're now in an era where repeatable math can confirm theoretical ideas, and that numeric confirmation creates a sense that—this time—what we believe to be true is not going to change. We will learn much more in the coming years, but mostly as an extension of what we already know now. Because—this time—what we know is actually right.

Of course, we are not the first society to reach this conclusion.

[2]
If I spoke to one hundred scientists about the topic of scientific wrongness, I suspect I'd get one hundred slightly different answers, all of which would represent different notches on a continuum of confidence. And if this were a book about science, that's what I'd need to do. But this is not a book about science; this is a book about continuums. Instead, I interviewed two exceptionally famous scientists who exist (or at least appear to exist) on opposite ends of a specific psychological spectrum. One of these was Tyson, the most conventionally famous astrophysicist alive.[32] He hosted the Fox reboot of the science series Cosmos and created his own talk show on the National Geographic Channel. The other was string theorist Brian Greene at Columbia University (Greene is the person mentioned in this book's introduction, speculating on the possibility that “there is a very, very good chance that our understanding of gravity will not be the same in five hundred years”).

Talking to only these two men, I must concede, is a little like writing about debatable ideas in pop music and interviewing only Taylor Swift and Beyoncé Knowles. Tyson and Greene are unlike the overwhelming majority of working scientists. They specialize in translating ultra-difficult concepts into a language that can be understood by mainstream consumers; both have written bestselling books for general audiences, and I assume they both experience a level of envy and skepticism among their professional peers. That's what happens to any professional the moment he or she appears on TV. Still, their academic credentials cannot be questioned. Moreover, they represent the competing poles of this argument almost perfectly. Which might have been a product of how they chose to hear the questions.

When I sat down in Greene's office and explained the premise of my book—in essence, when I explained that I was interested in considering the likelihood that our most entrenched assumptions about the universe might be wrong—he viewed the premise as playful. His unspoken reaction came across as “This is a fun, non-crazy hypothetical.” Tyson's posture was different. His unspoken attitude was closer to “This is a problematic, silly supposition.” But here again, other factors might have played a role: As a public intellectual, Tyson spends a great deal of his time representing the scientific community in the debate over climate change. In certain circles, he has become the face of science. It's entirely possible Tyson assumed my questions were veiled attempts at debunking scientific thought, prompting him to take an inflexibly hard-line stance. (It's also possible this is just the stance he always takes with everyone.) Conversely, Greene's openness might be a reflection of his own academic experience: His career is punctuated by research trafficking on the far edges of human knowledge, which means he's accustomed to people questioning the validity of ideas that propose a radical reconsideration of everything we think we know.

One of Greene's high-profile signatures is his support for the concept of “the multiverse.” Now, what follows will be an oversimplification—but here's what that connotes: Generally, we work from the assumption that there is one universe, and that our galaxy is a component of this one singular universe that emerged from the Big Bang. But the multiverse notion suggests there are infinite (or at least numerous) universes beyond our own, existing as alternative realities. Imagine an endless roll of bubble wrap; our universe (and everything in it) would be one tiny bubble, and all the other bubbles would be other universes that are equally vast. In his book The Hidden Reality, Greene maps out nine types of parallel universes within this hypothetical system. It's a complicated way to think about space, not to mention an inherently impossible thing to prove; we can't get (or see) outside our own universe any more than a man can get (or see) outside his own body. And while the basic concept of a limited multiverse might not seem particularly insane, the logical extensions of what a limitless multiverse would entail are almost impossible to fathom.
