This is a level of scrutiny that can't be applied to the distant past, for purely practical reasons. Most of history has not been videotaped. But what's interesting is our communal willingness to assume most old stories may as well be true, based on the logic that (a) the story is already ancient, and (b) there isn't any way to confirm an alternative version, despite the fact that we can't categorically confirm the original version, either.
A week before Manhattan cops were being attacked by hammer-wielding schizophrenics, Seymour Hersh published a ten-thousand-word story in the London Review of Books headlined "The Killing of Osama bin Laden." Hersh's wide-ranging story boiled down to this: The accepted narrative of the 2011 assassination of bin Laden was a fabrication, deliberately perpetrated by the Obama administration. It was not a clandestine black ops attack by Navy SEALs, working off the CIA's meticulous intelligence gathering; it was the result of a former Pakistani intelligence officer exchanging the whereabouts of bin Laden for money, thereby allowing the SEALs to just walk into his compound and perform an execution. It was not a brazen military gamble; the government of Pakistan knew it was going to happen in advance and quietly allowed the cover-up. During the first thirty-six hours after the story's publication, it felt like something unthinkable was suddenly transparent: Either we were being controlled by a shadow government where nothing was as it seemed, or the finest investigative reporter of the past half century had lost his goddamn mind. By the end of the week, most readers leaned in the direction of the latter. Some of this was due to a follow-up interview Hersh gave to Slate that made him seem unreliable, slightly crazy, and very old. But most of the skepticism came from a multitude of sources questioning the validity of specific particulars in Hersh's account, even though the refutation of those various details did not really contradict the larger conspiratorial thesis. Hersh's alternative narrative was scrutinized far more aggressively than the conventional narrative, even though the mainstream version of bin Laden's assassination was substantially more dramatic (if film director Kathryn Bigelow had used Hersh's story as the guide for Zero Dark Thirty, it might have qualified as mumblecore).
By the first week of June, "The Killing of Osama bin Laden" had been intellectually discarded by most people in the United States. Every subsequent conversation I had about the Hersh story (and I had many) drifted further and further from seriousness. More than a year later, journalist Jonathan Mahler wrote a story for The New York Times Magazine reexamining the dispute from a media perspective. "For many," wrote Mahler, "[the official bin Laden story] exists in a kind of liminal state, floating somewhere between fact and mythology." Considering what can be conclusively verified about the assassination, that's precisely where the story should float. But I don't believe that it does. Judging from the (mostly incredulous) reaction to Mahler's story, I don't think a sizable chunk of US citizenry distrusts the conventional depiction of how bin Laden was killed. This acceptance is noteworthy for at least two reasons. The first is that, had this kind of alternative story emerged from a country like Russia, and if the man orchestrating the alleged conspiracy was Vladimir Putin, nobody in America would question it at all. It would immediately be accepted as plausible, and perhaps even probable. The second is a discomfiting example of how "multiple truths" don't really mesh with the machinations of human nature: Because we were incessantly told one version of a story before hearing the second version, it's become impossible to overturn the original template. It was unconsciously assumed that Hersh's alternative story had to both prove itself and disprove the primary story, which automatically galvanizes the primary version as factual. It took only four years for that thinking to congeal. Extrapolate that phenomenon to forty years, or to four hundred years, or to four thousand years: How much of history is classified as true simply because it can't be sufficiently proven false? In other words, there's no way we can irrefutably certify that an event from 1776 didn't happen in the manner we've always believed, so there's no justification for presenting a counter-possibility. Any counter-possibility would have to use the same methodology, so it would be (at best) equally flawed. This becomes more and more ingrained as we move further and further from the moment of the event. So while it's absurd to think that all of history never really happened, it's almost as absurd to think that everything we know about history is real. All of which demands a predictable question: What significant historical event is most likely wrong? And not because of things we know that contradict it, but because of the way wrongness works.
We understand the past through the words of those who experienced it. But those individuals aren't necessarily reliable, and we are reminded of this constantly. The average person can watch someone attack a cop with a hammer and misdescribe what he saw twenty minutes after it happened. But mistakes are only a fraction of the problem. There's also the human compulsion to lie, and not just for bad reasons, but for good reasons, and sometimes for no reasons beyond a desire to seem interesting. When D. T. Max published his posthumous biography of David Foster Wallace, it was depressing to discover that many of the most memorable, electrifying anecdotes from Wallace's nonfiction were total fabrications. Of course, that accusation would be true for countless essays published before the fact-checking escalation of the Internet. The defining works of Joseph Mitchell, Joan Didion, and Hunter Thompson all contain moments of photographic detail that would never withstand the modern verification process; we've just collectively decided to accept the so-called larger truth and ignore the parts that skew implausible. In other words, people who don't know better are often wrong by accident, and people who do know better are sometimes wrong on purpose, and whenever a modern news story explodes, everyone recognizes that possibility. But we question this far less when the information comes from the past. It's so hard to get viable info about pre-twentieth-century life that any nugget is reflexively taken at face value. In Ken Burns's documentary series The Civil War, the most fascinating glimpses of the conflict come from personal letters written by soldiers and mailed to their families. When these letters are read aloud, they almost make me cry. I robotically consume those epistles as personal distillations of historical fact. There is not one moment of The Civil War that feels false. But why is that? Why do I assume the things Confederate soldiers wrote to their wives might not be wildly exaggerated, or inaccurate, or straight-up untruths? Granted, we have loads of letters from lots of unrelated Civil War veterans, so certain claims and depictions can be fact-checked against each other. If multiple letters mention that there were wheat weevils in the bread, we can concede that the bread was infested with wheat weevils. But the American Civil War isn't exactly a distant historical event (amazingly, a few Civil War veterans were still alive in the 1950s). The further we go back, the harder it becomes to know how seriously any eyewitness account can be taken, particularly in cases where the number of accounts is relatively small.
There's a game I like to play with people when we're at the bar, especially if they're educated and drunk. The game has no name, but the rules are simple: The player tries to answer as many of the following questions as possible, without getting one wrong, without using the same answer twice, and without looking at a phone. The first question is, "Name any historical figure who was alive in the twenty-first century." (No one has ever gotten this one wrong.) The second question is, "Name any historical figure who was alive in the twentieth century." (No one has ever gotten this one wrong, either.) The third question is, "Name any historical figure who was alive in the nineteenth century." The fourth question is, "Name any historical figure who was alive in the eighteenth century." You continue moving backward through time, in centurial increments, until the player fails. It's mildly shocking how often highly intelligent people can't get past the sixteenth century; if they make it down to the twelfth century, it usually means they either know a lot about explorers or a shitload about popes. What this game illustrates is how vague our understanding of history truly is. We know all the names, and we have a rough idea of what those names accomplished, but how much can that be trusted if we can't even correctly identify when they were alive? How could our abstract synopsis of what they did be internalized if the most rudimentary, verifiable detail of their lives seems tricky?
It's hard to think of a person whose portrait was painted more often than Napoleon's. We should definitely know what he looked like. Yet the various firsthand accounts of Napoleon can't even agree on his height, much less his actual appearance. "None of the portraits that I had seen bore the least resemblance to him," insisted the poet Denis Davydov when he met Napoleon in 1807. Here again, we're only going back about two hundred years. What is the realistic probability that the contemporary understanding of Hannibal's 218 BC crossing of the Alps on the back of war elephants is remotely accurate? The two primary texts that elucidate this story were both composed decades after it happened, by authors who were not there, with motives that can't be understood. And there's no conspiracy here; this is just how history is generated. We know the story exists and we know how the Second Punic War turned out. To argue that we know (really, truly know) much more than that is an impossibly optimistic belief. But this is the elephant-based Hannibal narrative we've always had, and any story contradicting it would be built on the same kind of modern conjecture and ancient text. As far as the world is concerned, it absolutely happened. Even if it didn't happen, it happened.
This is the world that is not there.
Television is an art form where the relationship to technology supersedes everything else about it. It's one realm of media where the medium is the message, without qualification. TV is not like other forms of consumer entertainment: It's slipperier and more dynamic, even when it's dumb. We know people will always read, so we can project the future history of reading by considering the evolution of books. (Reading is a static experience.) We know music will always exist, so we can project a future history of rock 'n' roll by placing it in context with other genres of music. The internal, physiological sensation of hearing a song today is roughly the same as it was in 1901. (The ingestion of sound is a static experience.) The machinery of cinema persistently progresses, but how we watch movies in public (and the communal role cinema occupies, particularly in regard to dating) has remained weirdly unchanged since the fifties. (Sitting in a dark theater with strangers is a static experience.) But this is not the case with television.
Both collectively and individually, the experience of watching TV in 2016 already feels totally disconnected from the experience of watching TV in 1996. I doubt the current structure of television will exist in two hundred fifty years, or even in twenty-five. People will still want cheap escapism, and something will certainly satisfy that desire (in the same way television does now). But whatever that something is won't be anything like the television of today. It might be immersive and virtual (like a Star Trek-ian holodeck) or it might be mobile and open-sourced (like a universal YouTube, lodged inside our retinas). But it absolutely won't be small groups of people, sitting together in the living room, staring at a two-dimensional thirty-one-inch rectangle for thirty consecutive minutes, consuming linear content packaged by a cable company.
Something will replace television, in the same way television replaced radio: through the process of addition. TV took the audio of radio and added visual images. The next tier of innovation will affix a third component, and that new component will make the previous iteration obsolete. I have no idea what that third element will be. But whatever it is will result in a chronological "freezing" of TV culture. Television will be remembered as a stand-alone medium that isn't part of any larger continuum: the most dominant force of the latter half of the twentieth century, but a force tethered to the period of its primacy. And this will make retroactive interpretations of its artistic value particularly complicated.
Here's what I mean: When something fits into a lucid, logical continuum, it's generally remembered for how it (a) reinterprets the entity that influenced its creation, and (b) provides influence for whatever comes next. Take something like skiffle music: a musical genre defined by what it added to early-twentieth-century jazz (rhythmic primitivism) and by those individuals later inspired by it (rock artists of the British Invasion, most notably the Beatles). We think about skiffle outside of itself, as one piece of a multidimensional puzzle. That won't happen with television. It seems more probable that the entrenched memory of television will be like those massive stone statues on Easter Island: monoliths of creative disconnection. Its cultural imprint might be akin to the Apollo space program, a zeitgeist-driving superstructure that (suddenly) mattered more than everything around it, until it (suddenly) didn't matter at all. There won't be any debate over the importance of TV, because that has already been assured (if anything, historians might exaggerate its significance). What's hazier are the particulars. Which specific TV programs will still matter centuries after the medium itself has been replaced? What TV content will resonate with future generations, even after the technological source of that content has become nonexistent?
These are queries that require a thought experiment.
[2]
Let's pretend archaeologists made a bizarre discovery: The ancient Egyptians had television. Now, don't concern yourself with how this would have worked. Just pretend it (somehow) happened, and that the Egyptian relationship to television was remarkably similar to our own. Moreover, this insane archaeological discovery is also insanely complete: We suddenly have access to all the TV shows the Egyptians watched between the years 3500 and 3300 BC. Every frame of this library would be (on some level) interesting. However, some frames would be way more interesting than others. From a sociological vantage point, the most compelling footage would be the national news, closely followed by the local news, closely followed by the commercials. But the least compelling material would be whatever the Egyptians classified as their version of "prestige" television.
The ancient Egyptian Breaking Bad, the ancient Egyptian House of Cards, the ancient Egyptian rendering of The Americans (which I suppose would be called The Egyptians and involve promiscuous spies from Qatna): these would be of marginal significance. Why? Because the aesthetic strengths that make sophisticated TV programs superior to their peers do not translate over time. Looking backward, no one would care how good the acting was or how nuanced the plots were. Nobody would really care about the music or the lighting or the mood. These are artful, subjective qualities that matter in the present. What we'd actually want from ancient Egyptian television is a way to look directly into the past, in the same manner we look at Egyptian hieroglyphics without fixating on the color palette or the precision of scale. We'd want to see what their world looked like and how people lived. We would want to understand the experience of subsisting in a certain place during a certain time, from a source that wasn't consciously trying to illustrate those specific traits (since conscious attempts at normalcy inevitably come with bias). What we'd want, ultimately, is "ancillary verisimilitude." We'd want a TV show that provided the most realistic portrait of the society that created it, without the self-aware baggage embedded in any overt attempt at doing so. In this hypothetical scenario, the most accurate depiction of ancient Egypt would come from a fictional product that achieved this goal accidentally, without even trying. Because that's the way it always is, with everything. True naturalism can only be a product of the unconscious.
So apply this philosophy to ourselves, and to our own version of televised culture: If we consider all possible criteria, what were the most accidentally realistic TV shows of all time? Which American TV programs, if watched by a curious person in a distant future, would latently represent how day-to-day American society actually was?
This is the kind of question even people who think about television for a living don't think about very often. When I asked The Revolution Was Televised author Alan Sepinwall, he noted the "kitchen-sink realism" of sitcoms from the seventies (the grimy aesthetics of Taxi and the stagnation of Barney Miller, a cop show where the cops never left the office). New Yorker TV critic Emily Nussbaum suggested a handful of shows where the dialogue captured emotional inarticulation without the crutch of clichés (most notably the mid-nineties teen drama My So-Called Life). Still, it's hard to view any of the programs cited by either as vehicles for understanding reality. This is not their fault, though: We're not supposed to think about TV in this way. Television critics who obsess over the authenticity of picayune narrative details are like poetry professors consumed with penmanship. To attack True Detective or Lost or Twin Peaks as "unrealistic" is a willful misinterpretation of the intent. We don't need television to accurately depict literal life, because life can literally be found by stepping outside. Television's only real-time responsibility is to entertain. But that changes as years start to elapse. We don't reinvestigate low culture with the expectation that it will entertain us a second time; the hope is that it will be instructive and revelatory, which sometimes works against the intentions of the creator.
Take, for example, a series like Mad Men: Here was a show set in the New York advertising world of the 1960s, with a dogged emphasis on precise cultural references and era-specific details. The unspoken goal of Mad Men was to depict how the sixties "really" were. And to the present-day Mad Men viewer, that's precisely how the show came across. The goal was achieved. But Mad Men defines the difference between ancillary verisimilitude and premeditated reconstruction. Mad Men cannot show us what life was like in the sixties. Mad Men can only show how life in the sixties came to be interpreted in the twenty-first century. Sociologically, Mad Men says more about the mind-set of 2007 than it does about the mind-set of 1967, in the same way Gunsmoke says more about the world of 1970 than the world of 1870. Compared to The Andy Griffith Show or Gilligan's Island, a mediated construct like Mad Men looks infinitely more authentic, but it can't be philosophically authentic, no matter how hard it tries. Its well-considered portrait of the sixties can't be more real than the accidental sixties rooted in any 1964 episode of My Three Sons. Because those 1964 accidents are what 1964 actually was.
[3]
My point is not that we're communally misguided about which TV series are good, or that prestige programming should be ignored because the people who make it are too aware of what they're doing. As a consumer, I'd argue the opposite. But right now, I'm focused on a different type of appreciation. I'm trying to think about TV as a dead medium: not as living art, but as art history (a process further convoluted by the ingrained reflex to never think about TV as "art," even when it clearly is). This brand of analysis drives a certain type of person bonkers, because it ignores the conception of taste. Within this discussion, the quality of a program doesn't matter; the assumption is that the future person considering these artifacts won't be remotely concerned with entertainment value. My interest is utility. It's a formalist assessment, focusing on all the things a (normal) person is not supposed to (normally) be cognizant of while watching any given TV show. Particularly . . . (1) the way the characters talk, (2) the machinations of the world those characters inhabit, (3) the way the show is photographed and staged, and (4) the extent to which the public recognition of its authenticity informs its success.
That first quality is the most palpable and the least quantifiable. If anyone on a TV show employed the stilted, posh, mid-Atlantic accent of stage actors, it would instantly seem preposterous; outside a few notable exceptions, the goal of televised conversation is fashionable naturalism. But vocal delivery is only a fraction of this equation. There's also the issue of word choice: It took decades for screenwriters to realize that no adults have ever walked into a tavern and said, "I'll have a beer," without noting what specific brand of beer they wanted (an interaction between Kyle MacLachlan and Laura Dern in the 1986 theatrical film Blue Velvet is the first time I recall seeing the overt recognition of this). What's even harder to compute is the relationship between a period's depiction of conversation and the way people of that period were talking in real life. Did the average American father in 1957 truly talk to his kids the way Ward Cleaver talked to Wally and the Beaver? It doesn't seem possible, but it was, in all likelihood, the way 1957 suburban fathers imagined they were speaking.
The way characters talk is connected to the second quality, but subtly. I classify "the machinations of the world" as the unspoken, internal rules that govern how characters exist. When these rules are illogical, the fictional world seems false; when the rules are rational, even a sci-fi fantasy realm can seem plausible. Throughout the 1970s, the most common narrative trope on a sitcom like Three's Company or Laverne & Shirley was "the misunderstanding": a character infers incorrect information about a different character, and that confusion drives the plot. What always felt unreal about those scenarios was the way no one ever addressed these misunderstandings aloud, even when that was the obvious solution. The flawed machinations of the seventies sitcom universe required all misunderstandings to last exactly twenty-two minutes. But when a show's internal rules are good, the viewer is convinced that they're seeing something close to life. When the rom-com series Catastrophe debuted on Amazon, a close friend tried to explain why the program seemed unusually true to him. "This is the first show I can ever remember," he said, "where the characters laugh at each other's jokes in a non-obnoxious way." This seemingly simple idea was, in fact, pretty novel; prior to Catastrophe, individuals on sitcoms constantly made hilarious remarks that no one seemed to notice were hilarious. For decades, this was an unspoken, internal rule: No one laughs at anything. So seeing characters laugh naturally at things that were plainly funny was a new level of realness.
The way a TV show is photographed and staged (this is point number three) is an industrial attribute that takes advantage of viewers' preexisting familiarity with the medium: When a fictional drama is filmed like a news documentary, audiences unconsciously absorb the action as extra-authentic (a scene shot from a single mobile perspective, like most of Friday Night Lights, always feels closer to reality than scenes captured with three stationary cameras, like most of How I Met Your Mother). It's a technical choice that aligns with the fourth criterion, the extent to which the public recognition of authenticity informs the show's success (a realization that didn't happen in earnest until the 1980s, with shows like Hill Street Blues). Now, it's possible that, in two hundred fifty years, those last two points may be less meaningful to whoever is excavating these artifacts. Viewers with no relationship to TV won't be fooled by the perspective of the camera, and people living in a different time period won't intuitively sense the relationship between the world they're seeing and the world that was. But these points will still matter a little, because all four qualities are interrelated. They amplify each other. And whatever television program exemplifies these four qualities most successfully will ultimately have the most usefulness to whatever future people end up watching it. For these (yet-to-be-conceived) cultural historians, TV will be a portal into the past. It will be a way to psychically contact the late twentieth century with an intimacy and depth that can only come from visual fiction, without any need for imagination or speculation. It won't be a personal, interpretive experience, like reading a book; it will be like the book is alive. Nothing will need to be mentally conjured. The semi-ancient world will just be there, moving and speaking in front of them, unchanged by the sands of time.