
To make matters worse, a now-discredited physician, Andrew Wakefield, published a scientific paper in 1998 claiming a link. The British Medical Journal declared his work fraudulent, and twelve years after it appeared, the journal that originally published it, the Lancet, retracted it. His medical license was revoked. Wakefield was a surgeon, not an expert in epidemiology, toxicology, genetics, neurology, or any specialization that would have qualified him as an expert on autism.

Post hoc, ergo propter hoc reasoning caused people to believe that the correlation implied causation. Illusory correlation caused them to focus only on the coincidence of some people developing autism who also had the vaccine. The testimony of a computer scientist and a physician caused people to be persuaded by association. Belief perseverance caused people who initially believed the link to cling to their beliefs even after the evidence had been withdrawn.

Parents continue to blame the vaccine for autism, and many have stopped vaccinating their children. This has led to several outbreaks of measles around the world. All because of a spurious link, the failure of a great many people to distinguish between correlation and causation, and the failure to form beliefs based on what is now overwhelming scientific evidence.

KNOWING WHAT YOU DON’T KNOW

. . . as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.
—U.S. Secretary of Defense Donald Rumsfeld

This is clearly tortured language, and it obscures the meaning of the sentence. There’s no reason for the repetitive use of the same word, and the secretary might have been clearer if he had said instead, “There are things we know, things we are aware that we do not know, and some things we aren’t even aware that we don’t know.” There’s a fourth possibility, of course—things we know that we aren’t aware we know. You’ve probably experienced this—someone asks you a question and you answer it, and then say to yourself, “I’m not even sure how I knew that.”

Either way, the fundamental point is sound, you know? What will really hurt you, and cause untold amounts of damage and inconvenience, are the things you think you know but don’t (per Mark Twain’s/Josh Billings’s epigraph at the beginning of this book), and the things that you weren’t even aware of that are supremely relevant to the decision you have ahead (the unknown unknowns). Formulating a proper scientific question requires taking account of what we know and what we don’t know. A properly formulated scientific hypothesis is falsifiable—there are steps we can take, at least in theory, to test the true state of the world, to determine if our hypothesis is true or not. In practice, this means considering alternative explanations ahead of time, before conducting the experiment, and designing the experiment so that the alternatives are ruled out.

If you’re trying out a new medicine on two groups of people, the experimental conditions have to be the same in order to conclude that medicine A is better than medicine B. If all the people in group A get to take their medicine in a windowed room with a nice view, and the people in group B have to take it in a smelly basement lab, you’ve got a confounding factor that doesn’t allow you to conclude the difference (if you find one) was due solely to the medication. The smelly basement problem is a known known. Whether medicine A works better than medicine B is a known unknown (it’s why we’re conducting the experiment). The unknown unknown here would be some other potentially confounding factor. Maybe people with high blood pressure respond better to medicine A in every case, and people with low blood pressure respond better to medicine B. Maybe family history matters. Maybe the time of day the medication is taken makes a difference. Once you identify a potential confounding factor, it very neatly moves from the category of unknown unknown to known unknown. Then we can modify the experiment, or do additional research that will help us to find out.
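
One standard safeguard against confounding factors you haven’t even thought of is random assignment: when people end up in the two groups by chance, unmeasured factors such as blood pressure, family history, or the time of day tend to even out across the groups instead of piling up in one of them. A minimal sketch, in Python, with an invented participant roster:

    import random

    # Invented roster of trial participants; in a real study these would be
    # the enrolled patients.
    participants = ["participant_{}".format(i) for i in range(1, 201)]

    # Shuffle and split: each person is equally likely to land in either group,
    # so hidden factors (the unknown unknowns) are balanced between the groups
    # on average, even though we never measured them.
    random.shuffle(participants)
    group_a = participants[:100]   # will receive medicine A
    group_b = participants[100:]   # will receive medicine B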

The trick to designing good experiments—or evaluating ones that have already been conducted—comes down to being able to generate alternative explanations. Uncovering unknown unknowns might be said to be the principal job of scientists. When experiments yield surprising results, we rejoice because this is a chance to learn something we didn’t know. The B-movie characterization of the scientist who clings to his pet theory to his last breath doesn’t apply to any scientist I know; real scientists know that they only learn when things don’t turn out the way they thought they would.

In a nutshell:

  1. There are some things we know, such as the distance from the Earth to the sun. You may not be able to generate an answer without looking it up, but you are aware that the answer is known. This is Rummy’s known known.
  2. There are some things that we don’t know, such as how neural firing leads to feelings of joy. We’re aware that we don’t know the answer to this. This is Rummy’s known unknown.
  3. There are some things that we know, but we aren’t aware that we know them, or forget that we know them. What is your grandmother’s maiden name? Who sat next to you in third grade? If the right retrieval cues help you to recollect something, you find that you knew it, although you didn’t realize ahead of time that you did. Although Rumsfeld doesn’t mention them, this is an unknown known.
  4. There are some things that we don’t know, and we’re not even aware we don’t know them. If you’ve bought a house, you’ve probably hired various inspectors to report on the condition of the roof, the foundation, and the existence of termites or other wood-destroying organisms. If you had never heard of radon, and your real estate agent was more interested in closing the deal than protecting your family’s health, you wouldn’t think to test for it. But many homes do have high levels of radon, a known carcinogen. This would count as an unknown unknown (although, having read this paragraph, it is no longer one). Note that whether you’re aware or unaware of an unknown depends on your expertise and experience. A pest-control inspector would tell you that he is only reporting on what’s visible—it is known to him that there might be hidden damage to your house, in areas he was unable to access. The nature and extent of this damage, if any, is unknown to him, but he’s aware that it might be there (a known unknown). If you blindly accept his report and assume it is complete, then you’re unaware that additional damage could exist (an unknown unknown).

We can clarify Secretary Rumsfeld’s four possibilities with a four-fold table:

What we know that we know:               GOOD—PUT IT IN THE BANK
What we know that we don’t know:         NOT BAD, WE CAN LEARN IT
What we don’t know that we know:         A BONUS
What we don’t know that we don’t know:   DANGER—HIDDEN SHOALS

The unknown unknowns are the most dangerous. Some of the biggest human-caused disasters can be traced to these. When bridges collapse, countries lose wars, or home purchasers face foreclosure, it’s often because someone didn’t allow for the possibility that they don’t know everything, and they proceeded along blindly thinking that every contingency had been accounted for. One of the main purposes of training someone for a PhD, a law or medical degree, an MBA, or military leadership is to teach them to identify and think systematically about what they don’t know, to turn unknown unknowns into known unknowns.

A final class, which Secretary Rumsfeld also didn’t talk about, is incorrect knowns—things that we think are so, but aren’t. Believing false claims falls into this category. One of the biggest causes of bad, even fatal, outcomes is belief in things that are untrue.

BAYESIAN THINKING IN SCIENCE AND IN COURT

Recall from Part One the idea of Bayesian probability, in which you can modify or update your belief about something based on new data as it comes in, or on the prior probability of something being true—the probability you have pneumonia given that you show certain symptoms, or the probability that a person will vote for a particular party given where they live.

In the Bayesian approach, we assign a subjective probability to the hypothesis (the prior probability), and then modify that probability in light of the data collected (the posterior probability, because it’s the one you arrive at after you’ve conducted the experiment). If we had reason to believe the hypothesis was true before we tested it, it doesn’t take much evidence for us to confirm it. If we had reason to believe the hypothesis unlikely before we tested it, we need more evidence.
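
As a rough sketch of the arithmetic, here is Bayes’s rule in Python, applied to the pneumonia example above; the numbers are invented purely for illustration:

    # Bayes's rule: P(H given E) = P(E given H) * P(H) / P(E), where
    # P(E) = P(E given H) * P(H) + P(E given not-H) * P(not-H).
    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        p_evidence = (p_evidence_if_true * prior
                      + p_evidence_if_false * (1.0 - prior))
        return p_evidence_if_true * prior / p_evidence

    # Invented numbers: 2 percent of patients walking into a clinic have pneumonia
    # (the prior); 90 percent of pneumonia patients show a particular cluster of
    # symptoms; 10 percent of patients without pneumonia show the same symptoms.
    print(posterior(0.02, 0.90, 0.10))   # about 0.155: the symptoms raise 2 percent to roughly 15.5 percent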

Unlikely claims, then, according to a Bayesian perspective, require stronger proof than likely ones. Suppose your friend says she saw something flying right outside the window. You might entertain three hypotheses, given your own recent experiences at that window: It is a robin, it is a sparrow, or it is a pig. You can assign probabilities to these three hypotheses. Now your friend shows you a photo of a pig flying outside the window. Your prior belief that pigs fly is so small that the posterior probability is still very small, even with this evidence. You’re probably now entertaining new hypotheses that the photo was doctored, or that there was some other kind of trickery involved. If this reminds you of the fourfold tables and the likelihood that someone has breast cancer given a positive test, it should—the fourfold tables are simply a method for performing Bayesian calculations.
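
The same arithmetic, again with invented numbers, shows why the photograph barely moves you: even if a genuine flying pig would produce such a photo a thousand times more readily than trickery would, a minuscule prior keeps the posterior minuscule.

    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        p_evidence = (p_evidence_if_true * prior
                      + p_evidence_if_false * (1.0 - prior))
        return p_evidence_if_true * prior / p_evidence

    # Prior that a pig really flew past the window: essentially zero, say 1 in 100 million.
    # The photo is assumed to be 1,000 times more likely if the pig really flew (0.99)
    # than if it did not (0.00099, e.g., a doctored photo or some other trickery).
    print(posterior(1e-8, 0.99, 0.00099))   # about 1e-5: still only about 1 in 100,000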

Scientists should set a higher threshold for evidence that goes against standard theories or models than for evidence that is consistent with what we know. Following thousands of successful trials for a new antiretroviral drug in mice and monkeys, when we find that it works in humans we are not surprised—we’re willing to accept the evidence following standard conventions for proof. We might be convinced by a single study with only a few hundred participants. But if someone tells us that sitting under a pyramid for three days will cure AIDS, by channeling qi into your chakras, this requires stronger evidence than a single experiment because it is far-fetched and nothing like it has ever been demonstrated before. We’d want to see the result replicated many times and under many different conditions, and ultimately, a meta-analysis.

The Bayesian approach isn’t the only way that scientists deal with unlikely events. In their search for the Higgs boson, physicists set a threshold (using conventional, not Bayesian, statistical tests) 50,000 times more stringent than usual—not because the Higgs was unlikely (its existence was hypothesized for decades) but because the cost of being wrong is very high (the experiments are very expensive to conduct).

The application of Bayes’s rule can perhaps best be illustrated with an example from forensic science. One of the cornerstone principles of forensic science was developed by the French physician and lawyer Edmond Locard: Every contact leaves a trace. Locard stated that either the wrongdoer leaves signs at the scene of the crime or has taken away with her—on her person, body, or clothes—indications of where she has been or what she has done.

Suppose a criminal breaks into the stables to drug a horse the night before a big race. He will leave some traces of his presence at the crime scene—footprints, perhaps skin, hair, clothing fibers, etc. Evidence has been transferred from the criminal to the scene of the crime. And similarly, he will pick up dirt, horsehair, blanket fibers, and such from the stable, and in this way evidence has been transferred from the crime scene to the criminal.

Now suppose someone is arrested the next day. Samples are taken from his clothing, hands, and fingernails, and similarities are found between these samples and samples taken at the crime scene. The district attorney wants to evaluate the strength of this evidence. The similarities may exist because the suspect is guilty. Or perhaps the suspect is innocent, but was in contact with the guilty party—that contact too would leave a trace. Or perhaps the suspect was in another barn, interacting innocently with another horse, which would account for the similarities.

Using Bayes’s rule allows us to combine objective probabilities, such as the probability of the suspect’s DNA matching the DNA found at the crime scene, with personal, subjective views, such as the credibility of a witness, or the honesty and track record of the CSI officer who had custody of the DNA sample. Is the suspect someone who has done this before, or someone who knows nothing about horse racing, has no connection to anyone involved in the race, and has a very good alibi? These factors help us to determine a prior, subjective probability that the suspect is guilty.

If we take literally the assumption in the American legal system that one is innocent until proven guilty, then the prior probability of a suspect being guilty is zero, and any evidence, no matter how damning, won’t yield a posterior probability above zero, because you’ll always be multiplying by zero. A more reasonable way to establish the prior probability of a suspect’s guilt is to consider anyone in the population equally likely. Thus, if the suspect was apprehended in a city of 100,000 people, and investigators have reason to believe that the perpetrator was a resident of the city, the prior probability of the suspect being guilty is 1 in 100,000. Of course, evidence can narrow the population—we may know, for example, that there were no signs of forced entry, and so the suspect had to be one of fifty people who had access to the facility.
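
Here is a sketch of that bookkeeping in Python, using the odds form of Bayes’s rule (posterior odds equal prior odds times the likelihood ratio of the evidence); the likelihood ratio of 500 for the trace-evidence match is a hypothetical figure, chosen only for illustration:

    # Odds form of Bayes's rule: posterior odds = prior odds * likelihood ratio,
    # where the likelihood ratio is P(evidence if guilty) / P(evidence if innocent).
    def update_odds(prior_odds, likelihood_ratio):
        return prior_odds * likelihood_ratio

    def odds_to_probability(odds):
        return odds / (1.0 + odds)

    city_odds = 1 / 99999      # 1 in 100,000 residents: odds of 1 to 99,999
    narrowed_odds = 1 / 49     # no forced entry narrows the pool to 50 people: odds of 1 to 49

    print(odds_to_probability(city_odds))   # 0.00001: the city-wide prior probability

    # A hypothetical trace-evidence match that is 500 times more likely if the
    # suspect was in the stable than if he was not:
    final_odds = update_odds(narrowed_odds, 500)
    print(odds_to_probability(final_odds))  # about 0.91, up from a prior of 1 in 50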
