A Field Guide to Lies: Critical Thinking in the Information Age
DIMMOCK:
But the gun . . . why—
HOLMES:
He was waiting for the killer. He’d been threatened.

Note that Sherlock uses the phrase "highly unlikely." This signals that he's not using deduction. And it's not induction because he's not going from the specifics to the general—in a way, he's going from one set of specifics (the observations he makes in the victim's flat) to another specific (ruling it murder rather than suicide). Abduction, my dear Watson.

Arguments

When evidence is offered to support a statement, these combined statements take on a special status—what logicians call an argument. Here, the word "argument" doesn't mean a dispute or disagreement with someone; it means a formal logical system of statements. Arguments have two parts: evidence and a conclusion. The evidence can be one or more statements, or premises. (A statement without evidence, or without a conclusion, is not an argument in this sense of the word.)

Arguments set up a system. We often begin with the conclusion—I know this sounds backward, but it's how we typically speak; we state the conclusion and then bring out the evidence.

Conclusion: Jacques cheats at pool.
Evidence (or premise): When your back was turned, I saw him move the ball before taking a shot.

Deductive reasoning follows the process in the opposite direction.

Premise: When your back was turned, I saw him move the ball before taking a shot.
Conclusion: Jacques cheats at pool.

This is closely related to how scientists talk about the results of experiments, which are a kind of argument, again in two parts.

Hypothesis = H
Implication = I
H: There are no black swans.
  I: If H is true, then neither I nor anyone else will ever see a black swan.
But I is not true. My uncle Ernie saw a black swan, and then took me to see it too.
Therefore, reject H.
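
This schema is the classical rule of modus tollens. As a compact sketch in logical notation (the symbols are my gloss, not the book's):

    \[
      (H \rightarrow I) \;\land\; \neg I \;\;\therefore\;\; \neg H
    \]

In words: if the hypothesis H were true, the implication I would have to hold; I turned out to be false; therefore H must be false.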

A Deductive Argument

The germ theory of disease was discovered through the application of deduction. Ignaz Semmelweis was a Hungarian physician who conducted a set of experiments (twelve years before Pasteur’s germ and bacteria research) to determine what was causing high mortality rates at a maternity ward in the Vienna General Hospital. The scientific method was not well established at that point, but his systematic observations and manipulations helped not only to pinpoint the culprit, but also to advance scientific knowledge. His experiments are a model of deductive logic and scientific reasoning.

Built into the scenario was a kind of control condition: The Vienna General had two maternity wards adjacent to each other, the first division (with the high mortality rate) and the second division (with a low mortality rate). No one could figure out why infant and mother death rates were so much higher in one ward than the other.

One explanation offered by a board of inquiry was that the configuration of the first division promoted psychological distress: Whenever a priest was called in to give last rites to a dying woman, he had to pass right by maternity beds in the first division to get to her; this was preceded by a nurse ringing a bell. The combination was believed to terrify the women giving birth and therefore make them more likely victims of this “childbed fever.” The priest did not have to pass by birthing mothers in the second division when he delivered last rites because he had direct access to the room where dying women were kept.

Semmelweis proposed a hypothesis and implication that described an experiment:

H: The presence of the ringing bell and the priest increases chances of infection.
  I: If the bell and priest are not present, infection is not increased.

Semmelweis persuaded the priest to take an awkward, circuitous route to avoid passing the birthing mothers of the first division, and he persuaded the nurse to stop ringing the bell. The mortality rate did not decrease.

I is not true.
Therefore H is false.

We reject the hypothesis after careful experimentation.

Semmelweis entertained other hypotheses. It wasn’t overcrowding, because, in fact, the second division was the more crowded one. It wasn’t temperature or humidity, because they were the same in the two divisions. As often happens in scientific discovery, a chance event, purely serendipitous, led to an insight. A good friend of Semmelweis’s was accidentally cut by the scalpel of a student who had just finished performing an autopsy. The friend became very sick, and the subsequent autopsy revealed some of the same signs of infection as were found in the women who were dying during childbirth. Semmelweis wondered if there was a connection between the particles or chemicals found in cadavers and the spread of the disease. Another difference between the two divisions that had seemed irrelevant now suddenly seemed relevant: The first division staff were medical students, who were often performing autopsies or cadaver dissections when they were called away to deliver a baby; the second division staff were midwives who had no other duties. It was not common practice for doctors to wash their hands, and so Semmelweis proposed the following:

H: The presence of cadaverous contaminants on the hands of doctors increases chances of infection.
  I: If the contaminants are neutralized, infection is not increased.

•   •   •

Of course, an alternative I was possible too: If the workers in the two divisions were switched (if midwives delivered in division one and medical students in division two), infection would be decreased. This is a valid implication too, but for two reasons switching the workers was not as good an idea as getting the doctors to wash their hands. First, if the hypothesis was really true, the death rate at the hospital would remain the same—all Semmelweis would have done was to shift the deaths from one division to another. Second, when not delivering babies, the doctors still had to work in their labs in division one, and so there would be an increased delay for both sets of workers to reach mothers in labor, which could contribute to additional deaths. Getting the doctors to wash their hands had the advantage that if it worked, the death rate throughout the hospital would be lowered.

Semmelweis conducted the experiment by asking the doctors to disinfect their hands with a solution containing chlorine. The mortality rate in the first division dropped from 18 percent to under 2 percent.
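
To get a feel for the size of that effect, here is a rough back-of-the-envelope calculation; it is only a sketch using the rounded rates quoted above, not figures from Semmelweis's records:

    # Rough relative-risk arithmetic, using only the rounded
    # mortality rates quoted above (18% before, under 2% after).
    before = 0.18   # first-division mortality before chlorine hand-washing
    after = 0.02    # approximate mortality after

    print(round(before / after, 1))    # -> 9.0: roughly a ninefold drop in risk
    print(round(before - after, 2))    # -> 0.16: 16 percentage points, absolute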

LOGICAL FALLACIES

Illusory Correlation

The brain is a giant pattern detector, and it seeks to extract order and structure from what often appear to be random configurations. We see Orion the Hunter in the night sky not because the stars were organized that way but because our brains can project patterns onto randomness.

When that friend phones you just as you're thinking of them, that kind of coincidence is so surprising that your brain registers it. What it doesn't do such a good job of is registering all the times you didn't think of someone and they called you. You can think of this like one of those fourfold tables from Part One. Suppose it's a particularly amazing week filled with coincidences (a black cat crosses your path as you walk by a junkyard full of broken mirrors; you make your way up to the thirteenth floor of a building and find the movie Friday the 13th playing on a television set there). Let's say you get twenty phone calls that week and two of them were from long-lost friends whom you hadn't thought about for a while, but they called within ten minutes of you thinking of them. That's the top row of your table: twenty calls, two that you summoned using extrasensory signaling, eighteen that you didn't. But wait! We have to fill in the bottom row of the table: How many times were you thinking about people and they didn't call, and—here's my favorite—how many times were you not thinking about someone and they didn't call?

                       Was I Thinking About Them Just Before?
                              YES        NO       Total
  Someone Phoned   YES          2        18          20
                   NO          50       930         980
  Total                        52       948       1,000
To fill out the rest of the table, let's say there are 52 times in a week that you're thinking about people, and 930 times in a week when you are not thinking about people. (This last one is just a crazy guess, but if we divide up the 168-hour week into ten-minute increments, that's 1,008 slots in all; setting aside the 20 in which someone phoned leaves about 980 thoughts, and we already know that 50 of those were about people who didn't phone you, leaving 930 thoughts about things other than people. This is probably an underestimate, but the point is made with any reasonable number you care to put here—try it yourself.)

The brain really only notices the upper left-hand square and ignores the other three, much to the detriment of logical thinking (and to the encouragement of magical thinking). Now, before you book a trip to Vegas to play the roulette wheel, let's run the numbers. What is the probability that someone will call given that you just thought about them? It's only two out of fifty-two, or 4 percent. That's right, 4 percent of the time when you think of someone they call you. That's not so impressive.
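
If you want to check the arithmetic yourself, here is a minimal sketch; the counts are the chapter's illustrative numbers, not real data:

    # Fourfold table from above; counts are illustrative, not real data.
    thought_and_called = 2      # thinking of them, and they phoned
    thought_no_call = 50        # thinking of them, no call
    no_thought_called = 18      # not thinking of them, they phoned
    no_thought_no_call = 930    # not thinking of them, no call

    # P(call | you were just thinking of them) = 2/52
    p = thought_and_called / (thought_and_called + thought_no_call)
    print(round(p * 100))       # -> 4 (percent)

    # The "weird" cell as a share of all 1,000 ten-minute slots
    total = (thought_and_called + thought_no_call
             + no_thought_called + no_thought_no_call)
    print(thought_and_called / total)   # -> 0.002, two-tenths of 1 percent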

What might account for the 4 percent of the times when this coincidence occurs? A physicist might just invoke the 1,000 events in your fourfold table and note that only two of them (two-tenths of 1 percent) appear to be “weird” and so you should just expect this by chance. A social psychologist might wonder if there was some external event that caused both you and your friend to think of each other, thus prompting the call. You read about the terrorist attacks in Paris on November 13, 2015. Somewhere in the back of your mind, you remember that you and a college friend always talked about going to Paris. She calls you and you’re so surprised to hear from her you forget the Paris connection, but she is reacting to the same event, and that’s why she picked up the phone.

If this reminds you of the twins-reared-apart story earlier, it should. Illusory correlation is the standard explanation offered by behavioral geneticists for the strange confluence of behaviors, such as both twins scratching their heads with their middle finger, or both wrapping tape around pens and pencils to improve their grip. We are fascinated by the contents of the upper left-hand cell in the fourfold table, fixated on all the things that the twins do in common. We tend to ignore all the things that one twin does and the other doesn’t.

Framing of Probabilities

After that phone call from your old college friend, you decide to go to Paris on vacation for a week next summer. While standing in front of the Mona Lisa, you hear a familiar voice and look up to see your old college roommate Justin, whom you haven't seen in years.
"I can't believe it!" Justin says. "I know!" you say. "What are the odds that I'd run into you here in Paris, standing right in front of the Mona Lisa! They must be millions to one!"

Yes, the odds of running into Justin in front of the Mona Lisa are probably millions to one (they'd be difficult to calculate precisely, yet any calculation you do would make clear that this was very unlikely). But this way of framing the probability is fallacious. Let's take a step back. What if you hadn't run into Justin just as you were standing in front of the Mona Lisa, but as you were in front of the Venus de Milo, in les toilettes, or even as you were walking in the entrance? What if you had run into Justin at your hotel, at a café, or at the Eiffel Tower? You would have been just as surprised. For that matter, forget about Justin—if you had run into anyone you knew during that vacation, anywhere in Paris, you'd be just as surprised. And why limit it to your vacation in Paris? It could be on a business trip to Madrid, while changing planes in Cleveland, or at a spa in Tucson. Let's frame the probability this way: Sometime in your adult life, you'll run into someone you know where you wouldn't expect to run into them. Clearly the odds of that happening are quite good. But the brain doesn't automatically think this way—cognitive science has shown us just how necessary it is for us to train ourselves to avoid squishy thinking.
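
The reframing works because even a minuscule chance per occasion, compounded over a lifetime of occasions, yields a high chance of at least one "impossible" encounter. A minimal sketch, with every number invented for illustration:

    # At least one surprising encounter over many opportunities.
    # Both numbers below are invented for illustration only.
    p = 1e-5              # chance of a surprising encounter on any one occasion
    occasions = 100_000   # roughly four occasions a day over 70 years

    # P(at least one) = 1 - P(none at all)
    print(1 - (1 - p) ** occasions)   # -> about 0.63: better than even odds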

Framing Risk

A related problem in framing probabilities is the failure to frame risks logically. Even counting the airplane fatalities of the 9/11 attacks in the United States, air travel remained (and continues to remain) the safest transportation mode, followed closely by rail transportation.
The chances of dying on a commercial flight or train trip are next to zero. Yet, right after 9/11, many U.S. travelers avoided airplanes and took to the highways instead. Automobile deaths increased dramatically. People followed their emotional intuition rather than a logical response, oblivious to the increased risk. The rate of vehicular accidents did not increase beyond baseline, but the sum of people who died in all transportation-related accidents increased as more people chose a less safe mode of travel.
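
The arithmetic behind that last sentence is simple: total deaths are each mode's per-mile fatality rate times the miles traveled on it, so shifting travel toward the riskier mode raises the total even though neither rate changes. A minimal sketch, with all numbers invented for illustration:

    # Modal shift: both per-mile fatality rates stay at baseline,
    # yet total deaths rise as travel moves to the riskier mode.
    # All numbers are invented for illustration only.
    RATE_AIR = 0.07   # deaths per billion passenger-miles (invented)
    RATE_CAR = 7.0    # deaths per billion passenger-miles (invented)

    def total_deaths(air_bn_miles, car_bn_miles):
        return RATE_AIR * air_bn_miles + RATE_CAR * car_bn_miles

    print(round(total_deaths(600, 2600)))  # before the shift: 18242
    print(round(total_deaths(550, 2650)))  # 50bn miles moved to cars: 18589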
