BOOK: A Field Guide to Lies: Critical Thinking in the Information Age

Medical Decision Making

This way of visualizing conditional probabilities is useful for medical decision making. If you take a medical test, and it says you have some disease, what is the probability you actually have the disease? It’s not 100 percent, because the tests are not perfect—they produce false positives (reporting that you have the disease when you don’t) and false negatives (reporting that you don’t have the disease when you do).

The probability that a woman has breast cancer is 0.8 percent. If she has breast cancer, the probability that a mammogram will indicate it is only 90 percent because the test isn’t perfect and it misses some cases. If the woman does not have breast cancer, the probability of a positive result is 7 percent. Now, suppose a woman, drawn at random, has a positive result—what is the probability that she actually has breast cancer?

We start by drawing a fourfold table and filling in the possibilities: The woman actually has breast cancer or doesn’t, and the test can report that she does or that she doesn’t.
To make the numbers work out easily—to make sure we’re dealing with whole numbers—let’s assume we’re talking about 10,000 women.

That’s the total population, and so that number goes in the lower right-hand corner of the figure, outside the boxes.

                               Test Result
                             YES       NO
Actually Has          YES
Breast Cancer         NO
                                             10,000

Unlike the hamburger-ketchup example, we fill in the margins first, because that’s the information we were given. The probability of breast cancer is 0.8 percent, or 80 out of 10,000 women. That number goes in the margin of the top row. (We don’t yet know how to fill in the boxes, but we will in a second.) And because the two row totals have to add up to 10,000, we know that the margin for the bottom row has to equal 10,000 − 80 = 9,920.

                               Test Result
                             YES       NO
Actually Has          YES                        80
Breast Cancer         NO                      9,920
                                             10,000

We were told that the probability that the test will show a positive if breast cancer exists is 90 percent. Because probabilities have to add up to 100 percent, the probability that the test will not show a positive result if breast cancer exists has to be 100 percent − 90 percent, or 10 percent. For the eighty women who actually have breast cancer (the margin for the top row), we now know that 90 percent of them will have a positive test result (90% of 80 = 72) and 10 percent will have a negative result (10% of 80 = 8). This is all we need to fill in the boxes on the top row.

                               Test Result
                             YES       NO
Actually Has          YES     72        8        80
Breast Cancer         NO                      9,920
                                             10,000

We’re not yet ready to calculate the answer to questions such as “What is the probability that I have breast cancer given that I had a positive test result?” because we need to know how many people will have a positive test result. The missing piece of the puzzle is in the original description: 7 percent of women who don’t have breast cancer will still show a positive result. The margin for the lower row tells us 9,920 women don’t have breast cancer; 7 percent of them = 694.4. (We’ll round to 694.) That means that 9,920 − 694 = 9,226 goes in the lower right square.

                               Test Result
                             YES       NO
Actually Has          YES     72        8        80
Breast Cancer         NO     694    9,226     9,920
                             766    9,234    10,000

Finally, we add up the columns.

If you’re among the millions of people who think that having a positive test result means you definitely have the disease, you’re wrong. The conditional probability of having breast cancer given a positive test result is the upper left square divided by the left column’s margin total, or 72/766. The good news is that, even with a positive mammogram, the probability of actually having breast cancer is only 9.4 percent. This is because the disease is relatively rare (less than 1 in 100) and the test for it is imperfect.

                               Test Result
                             YES       NO
Actually Has          YES     72        8        80
Breast Cancer         NO     694    9,226     9,920
                             766    9,234    10,000
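The table’s bookkeeping can be checked in a few lines of code. This is only a sketch of the chapter’s arithmetic in Python; the variable names are mine, not the book’s:

```python
# Fourfold-table arithmetic for the mammogram example.
population = 10_000
base_rate = 0.008       # P(breast cancer) = 0.8 percent
sensitivity = 0.90      # P(positive test | cancer)
false_alarm = 0.07      # P(positive test | no cancer)

has_cancer = base_rate * population               # 80 women
no_cancer = population - has_cancer               # 9,920 women

true_positives = sensitivity * has_cancer         # 72
false_positives = round(false_alarm * no_cancer)  # 694.4, rounded to 694

all_positives = true_positives + false_positives  # 766 women test positive
p_cancer_given_positive = true_positives / all_positives
print(f"P(cancer | positive) = {p_cancer_given_positive:.1%}")  # 9.4%
```

The last division is exactly the upper-left square over the left column’s margin total, 72/766.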

Conditional Probabilities Do Not Work Backward

We’re used to certain symmetries in math from grade school: If x = y then y = x. 5 + 7 = 7 + 5. But some concepts don’t work that way, as we saw in the discussion above on probability values (if the probability of a false alarm is 10 percent, that doesn’t mean that the probability of a hit is 90 percent).

Consider the statistic:

Ten times as many apples are sold in supermarkets as in roadside stands.

A little reflection should make it apparent that this does not mean you’re more likely to find an apple on the day you want one by going to the supermarket: The supermarket may have more than ten times the number of customers as roadside stands have, but even with its greater inventory, it may not keep up with demand. If you see a random person walking down the street with an apple, and you have no information about where they bought it, the probability is higher that they bought it at a supermarket than at a roadside stand.
We can ask, as a conditional probability, what is the probability that this person bought it at a supermarket given that they have an apple?

P(was in a supermarket | found an apple to buy)

It is not the same as what you might want to know if you’re craving a Honeycrisp:

P(found an apple to buy | was in a supermarket)

This same asymmetry pops up in various disguises, in all manner of statistics. If you read that more automobile accidents occur at seven p.m. than at seven a.m., what does that mean? Here, the language of the statement itself is ambiguous. It could either mean you’re looking at the probability that it was seven p.m. given that an accident occurred, or the probability that an accident occurred given that it was seven p.m. In the first case, you’re looking at all accidents and seeing how many were at seven p.m. In the second case, you’re looking at how many cars are on the road at seven p.m., and seeing what proportion of them are involved in accidents.

Perhaps there are far more cars on the road at seven p.m. than any other time of day, and far fewer accidents per thousand cars. That would yield more accidents at seven p.m. than any other time, simply due to the larger number of vehicles on the road. It is the accident rate that helps you determine the safest time to drive.

Similarly, you may have heard that most accidents occur within three miles of home. This isn’t because that area is more dangerous per se; it’s because the majority of trips people take are short ones, and so the three miles around the home is much more traveled. In most cases, these two different interpretations of the statement will not be equivalent:

P(7 p.m. | accident) ≠ P(accident | 7 p.m.)
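The point can be made concrete with made-up traffic counts (the numbers below are purely illustrative, not from the text):

```python
# Hypothetical counts: more cars and more accidents at 7 p.m.,
# but a lower per-car accident rate than at 7 a.m.
cars_7am, accidents_7am = 2_000, 10
cars_7pm, accidents_7pm = 10_000, 20

total_accidents = accidents_7am + accidents_7pm

p_7pm_given_accident = accidents_7pm / total_accidents  # 20/30, about 0.67
p_accident_given_7pm = accidents_7pm / cars_7pm         # 0.002
p_accident_given_7am = accidents_7am / cars_7am         # 0.005

# Most accidents happen at 7 p.m., yet 7 a.m. is the riskier time per car.
print(p_7pm_given_accident > 0.5, p_accident_given_7pm < p_accident_given_7am)
```

With these counts, most accidents occur at seven p.m., yet the accident rate is two and a half times higher at seven a.m.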

The consequences of such confusion are hardly just theoretical: Many court cases have hinged on a misapplication of conditional probabilities, confusing the direction of what is known. A forensics expert may compute, correctly, that the probability of the blood found at the crime scene matching the defendant’s blood type by chance is only 1 percent. This is not at all the same as saying that there is only a 1 percent chance the defendant is innocent. What? Intuition tricks us again. The forensics expert is telling us the probability of a blood match given that the defendant is innocent:

P(blood match | innocence)

Or, in plain language, “the probability that we would find a match if the defendant were actually innocent.” That is not the same as the number you really want to know: the probability that the defendant is innocent given that the blood matched:

P(blood match | innocence) ≠ P(innocence | blood match)

Many innocent citizens have been sent to prison because of this misunderstanding. And many patients have made poor decisions about medical care because they thought, mistakenly, that

P(positive test result | cancer) = P(cancer | positive test result)

And it’s not just patients—doctors make this error all the time (in one study, 90 percent of doctors treated the two different probabilities the same). The results can be horrible.

One surgeon persuaded ninety women to have their healthy breasts removed if they were in a high-risk group. He had noted that 93 percent of breast cancers occurred in women who were in this high-risk group. Given that a woman had breast cancer, there was a 93 percent chance she was in this group: P(high-risk group | breast cancer) = .93. Using a fourfold table for a sample of 1,000 typical women, and adding the additional information that 57 percent of women fall into this high-risk group, and that the probability of a woman having breast cancer is 0.8 percent (as mentioned earlier), we can calculate P(breast cancer | high-risk group), which is the statistic a woman needs to know before consenting to the surgery (numbers are rounded to the nearest integer):

                             High-Risk Group
                             YES       NO
Actually Has          YES      7        1          8
Breast Cancer         NO     563      429        992
                             570      430      1,000

The probability that a woman has cancer, given that she is in this high-risk group, is not 93 percent, as the surgeon erroneously thought, but only 7/570, or about 1 percent. The surgeon overestimated the cancer risk by nearly one hundred times the actual risk. And the consequences were devastating.
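The same fourfold-table arithmetic can be sketched in Python, using only the numbers given in the text (the variable names are mine):

```python
# 1,000 typical women; figures from the text, rounded to whole women.
population = 1_000
high_risk = round(0.57 * population)      # 570 women in the high-risk group
cancer = round(0.008 * population)        # 8 women with breast cancer
cancer_in_group = round(0.93 * cancer)    # 7 of the 8 are in the group

# What the surgeon knew was P(high-risk | cancer) = .93.
# What the patient needs is P(cancer | high-risk):
p_cancer_given_high_risk = cancer_in_group / high_risk   # 7/570
print(f"P(cancer | high-risk) = {p_cancer_given_high_risk:.1%}")  # about 1%, not 93%
```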

