Mind Hacks™: Tips & Tools for Using Your Brain

Authors: Tom Stafford, Matt Webb

Chapter 7. Reasoning: Hacks 70–74

We consider ourselves pretty rational animals, and we can indeed be pretty logical when we put our minds to it. But you only have to scratch the surface to find out how easily we’re misled by numbers [Use Numbers Carefully], and it’s well-known that statistics are really hard to understand [Think About Frequencies Rather than Probabilities]. So how good are we at being rational? It depends: our logic skills aren’t too hot, for instance, until we need to catch people who might be cheating on us [Detect Cheaters] instead of just logically solving sums. And that’s the point. We have a very pragmatic kind of rationality, solving complex problems as long as they’re real-life situations.

Pure rationality is overrated anyway. Figuring out logic is slow going when we can have gut feelings instead, and that’s a strategy that works. Well, the placebo effect [Fool Others into Feeling Better] works at least — belief is indeed a powerful thing. And we have a strong bias toward keeping the status quo [Maintain the Status Quo] too. It’s not rational, that’s for sure, but don’t worry; the “If it ain’t broke, don’t fix it” policy is a pragmatic one, at least.

Use Numbers Carefully
Our brains haven’t evolved to think about numbers. Funny things happen to them as they
go into our heads.

Although we can instantly appreciate how many items comprise small groups (small meaning four or fewer [Count Faster with Subitizing]), reasoning about bigger numbers requires counting, and counting requires training. Some cultures get by with no specific numbers higher than 3, and even numerate cultures took a while to invent something as fundamental as zero.1

So we don’t have a natural faculty to deal with numbers explicitly; that’s a cultural invention that’s hitched onto natural faculties we do have. The difficulty we have when thinking about numbers is most apparent when you ask people to deal with very large numbers, with very small numbers, or with probabilities [Think About Frequencies Rather than Probabilities].

This hack shows where some specific difficulties with numbers come from and gives you
some tests you can try on yourself or your friends to demonstrate them.

The biases discussed here, and in some of the other hacks in this chapter, don’t affect
everyone all the time. Think of them as forces, like gravity or tides. All things being
equal, they will tend to push and pull your judgments, especially if you aren’t giving your
full attention to what you are thinking about.

In Action

How big is:

9 x 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1

How about:

1 x 2 x 3 x 4 x 5 x 6 x 7 x 8 x 9

Since you’ve got both in front of you, you can easily see that they are equivalent and so must equal the same number. But try this: ask someone the first version. Tell
her to estimate, not to calculate — have her give her answer within 5 seconds. Now find
another person and ask him to estimate the answer for the second version. Even if he sees
the pattern and thinks to himself “ah, 9 factorial,” unless he has the answer stored in
his head, he will be influenced by the way the sum is presented.

Probably the second person you asked gave a smaller answer, and both people gave
figures well below the real answer (which is a surprisingly large 362,880).
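If you want to check the arithmetic yourself, a couple of lines of Python will do it — both presentations are the same product, so they must come out equal:

```python
import math

# Both presentations are the same product, 9 factorial.
descending = math.prod(range(9, 0, -1))  # 9 x 8 x ... x 1
ascending = math.prod(range(1, 10))      # 1 x 2 x ... x 9

print(descending, ascending, math.factorial(9))  # 362880 362880 362880
```

Either way, the result dwarfs the estimates people typically give.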

How It Works

When estimating numbers, most people start with a number that comes easily to mind — an
“anchor” — and adjust up or down from that initial base. The initial number that comes to
mind is really just your first guess, and there are two problems. First, people often fail
to adjust sufficiently away from the first guess. Second, the guess can be easily
influenced by circumstances. And the initial circumstance, in this case, is the number at
the beginning of the sum.

In the previous calculations, anchors people tend to use are higher or lower depending
on the first digit of the multiplication (which we read left to right). The anchors then
unduly influence the estimate people make of the
answer to the calculation. We start with a higher anchor for the first series
than for the second. When psychologists carried out an experimental test of these two
questions, the average estimate for the first series was 4200, compared to only 500 for
the second.

Both estimates are well below the correct answer. Because the series as a whole is
made up of small numbers, the anchor in both cases is relatively low, which biases the
estimate most people make to far below the true answer.

In fact, you can give people an anchor that has nothing to do with the task you’ve set for them, and it still biases their reasoning. Try this experiment, which is discussed in Edward Russo and Paul Schoemaker’s book Decision Traps.2

Find someone — preferably not a history major — and ask her for the last three digits of
her phone number. Add 400 to this number, then ask, “Do you think Attila the Hun was defeated in Europe before or after X?” where X is the year you got by adding 400 to the telephone number. Don’t say whether she got it right (the correct answer is A.D.
451) and then ask “In what year would you guess Attila the Hun was defeated?” The answers
you get will vary depending on the initial figure you gave, even though it is based on
something completely irrelevant to the question — her own phone number!

When Russo and Schoemaker performed this experiment on a group of 500 Cornell
University MBA students, they found that the number derived from the phone digits acted as
a strong anchor, biasing the placing of the year of Attila the Hun’s defeat. The
difference between the highest and lowest anchors corresponded to a difference in the
average estimate of more than 300 years.

In Real Life

You can see charities using this anchoring and adjustment hack when they send you
their literature. Take a look at the “make a donation” section on the back of a typical
leaflet. Usually this will ask you for something like “$50, $20, $10, $5, or an amount of
your choice.” The reason they suggest $50, $20, $10, then $5 rather than $5, $10, $20,
then $50 is to create a higher anchor in your mind. Maybe there isn’t ever much chance
you’ll give $50, but the “amount of your choice” will be higher because $50 is the first
number they suggest.

Maybe anchoring explains why it is common to price things at a cent below a round
number, such as at $9.99. Although it is only 1 cent different from $10, it feels (if you
don’t think about it much) closer to $9 because that’s the anchor first established in
your mind by the price tag.

Irrelevant anchoring and insufficient adjustment are just two examples of difficulties we have when thinking about numbers. (Think About Frequencies Rather than Probabilities discusses extra difficulties we have when thinking about a particularly common kind of number: probabilities.)

The difficulty we have with numbers is one of the reasons people so often try to con
you with them. I’m pretty sure in many debates many of us just listen to the numbers
without thinking about them. Because numbers are hard, they lend an air of authority to an
argument and can often be completely misleading or contradictory. For instance, “83% of
statistics are completely fictitious” is a sentence that could sound convincing if you
weren’t paying attention — so watch out! That we still experience such biases, despite most of us having had a decade or so of math classes whose major goal is to teach us to think carefully about numbers, shows just how unintuitive this kind of reasoning is.

The lesson for communicating is that you shouldn’t use numbers unless you have to. If
you have to, then provide good illustrations, but beware that people’s first response will
be to judge by appearance rather than by the numbers. Most people won’t automatically scrutinize the figures you give unless they are motivated to, either on their own or by the discussion you build around the figures.

End Notes
  1. The MacTutor History of Mathematics Archive: A History of Zero (http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Zero.html).
  2. Russo, J. E., and Schoemaker, P. J. H. (1989). Decision Traps. New York: Doubleday.
Think About Frequencies Rather than Probabilities
Probability statistics are particularly hard to think about correctly. Fortunately you
can make it easier by presenting the same information in a way that meshes with our
evolved capacity to reason about how often things happen.

Mark Twain once said, “People commonly use statistics like a drunk uses a lamppost: for support rather than for illumination.”1 Things haven’t changed. It’s strange, really, given how little people trust them, that statistics get used so much.

Our ability to think about probabilities evolved to keep us safe from rare events that
would be pretty serious if they did happen (like getting eaten) and to help us learn to make
near-correct estimates about things that aren’t quite so dire and at which we get multiple attempts (like estimating the chances of finding food in a particular part of the valley). So it’s not surprising that, when it comes to formal reasoning about single-case probabilities, our evolved ability to estimate likelihood tends to fail us.

One example is that we overestimate low-frequency events that are easily noticed. Just
ask someone if he gets more scared traveling in a car or by airplane. Flying is about the
safest form of transport there is, whether you calculate it by miles flown or trips made.
Driving is pretty risky in comparison, but most people would say that flying feels like the
more dangerous of the two.

Another thing we have a hard time doing is accounting for the basic frequency at which
an event occurs, quite aside from the specific circumstances of its occurrence on the
current occasion. Let me give an example of this in action...

In Action

This is a famous demonstration of how hard we find it to work out probabilities. When it was published in Parade magazine in 1990, the magazine got around 10,000 letters in response — 92% of which said that their columnist, Marilyn vos Savant, had reached the wrong conclusion.2 Despite the weight of correspondence, vos Savant had reached the correct conclusion, and here’s the confusing problem she put forward, based roughly on the workings of the old quiz show Let’s Make a Deal presented by Monty Hall.

Imagine you’re a participant on a game show, hoping to win the big prize. The final
hoop to jump through is to select the right door from a choice of three. Behind each door
is either a prize (one of the three doors) or a booby prize (two of the doors). In this
case, the booby prizes are goats.

You choose a door.

To raise the tension, the game-show host, Monty, looks behind the other doors and
throws one open (not yours) to reveal a goat. He then gives you the choice of sticking
with your choice or switching to the remaining unopened door.

Two doors are left. One must have a goat behind it, one must have a prize. Should you stick, or should you switch? Or doesn’t it matter?

Note

This is not a trick question, like some lateral thinking puzzles. It’s the
statistics that are tricky, not the wording.

Most people get this wrong — even those with formal mathematics training. Many of the thousands who wrote to Marilyn vos Savant at Parade were university professors who were convinced that she had got it wrong and insisted she was misleading the nation. Even the famous Paul Erdos, years before the Parade magazine incident, got the answer wrong, and he was one of the most talented mathematicians of the century (and the inspiration for Erdos numbers, which you may have heard of).3

The answer is that you should switch — you are twice as likely to win the prize if you
switch doors than if you stick with your original door. Don’t worry if you can’t see why
this is the right answer; the problem is famous precisely because it is so hard to get
your head around. If you did get this right, try telling it to someone else and then
explaining
why
switching is the right answer. You’ll soon see just
how difficult the concepts are to get across.

How It Works

The chance you got it right on the first guess is 1 in 3. Since by the time it comes
to sticking or switching, the big prize (often a car) must be behind one of the two
remaining doors, there must be a 2 in 3 chance that the car is behind the other door
(i.e., a 2 in 3 chance your first guess was wrong).

Our intuition seems compelled to ignore the prior probabilities and the effect that
the game show host’s actions have. Instead, we look at the situation as it is when we come
to make the choice. Two doors, one prize. 50-50 chance, right? Wrong. The host’s actions
make switching a better bet. By throwing away one dud door from the two you didn’t choose
initially, he’s essentially making it so that switching is like choosing between two doors and you win if the prize is behind either of them.

Another way to make the switching answer seem intuitive is to imagine the situation
with 1000 doors, 999 goats, and still just one prize. You choose a door (1 in 1000 chance
it’s the right door) and your host opens all the doors you didn’t choose, which have goats
behind them (998 goats). Stick or switch? Obviously you have a 999 in 1000 chance of
winning if you switch, even though as you make the choice there are two doors, one prize,
and one goat like before. This variant highlights one of the key distractions in the
original problem — the host knows where the prize is and acts accordingly to eliminate dud
doors. You choose without knowing where the prize is, but given that the host acts
knowing
where the prize is, your decision to stick or switch should
take that into account.
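If the argument still feels slippery, a simulation settles it empirically. Here’s a minimal Python sketch (not from the book) that plays the game many times for each strategy; the number of doors is a parameter, so you can try the 1000-door variant too:

```python
import random

def play(n_doors, switch, trials=100_000):
    """Return the fraction of games won over many trials."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        choice = random.randrange(n_doors)
        # Monty, knowing where the prize is, opens every dud door except
        # at most one, leaving only your door and one other closed. So
        # switching wins exactly when your first choice was wrong.
        if switch:
            wins += (choice != prize)
        else:
            wins += (choice == prize)
    return wins / trials

print(play(3, switch=False))     # ~0.33
print(play(3, switch=True))      # ~0.67
print(play(1000, switch=True))   # ~0.999
```

Sticking wins about a third of the time with three doors; switching wins the other two-thirds, and nearly always in the 1000-door version.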

Part of the problem is that we are used to thinking about probabilities as things attached to objects or events in simple one-to-one correspondence. But probabilities are simply statements about what can be known about uncertain situations. The probabilities themselves can be affected by factors that don’t actually affect the objects or events they label (like base rates and, in this case, the game show host’s actions).

Evolutionary psychologists Leda Cosmides and John Tooby4
argue that we have evolved to deal with frequency information when making
probability judgments, not to do abstract probability calculations. Probabilities are not
available directly to perception, whereas how often something happens is. The availability
of frequencies made it easier for our brains to make use of them as they evolved. Our
evolved faculties handle probabilities better as frequencies because this is the format of
the information as it is naturally present in the environment. Whether something occurs or
not can be easily seen (is it raining or is it not raining, to take an example), and
figuring out a frequency of this event is a simple matter of addition and comparison:
comparing the number of rainy days against the number of days in spring would
automatically give you a good idea whether this current day in spring is likely to be
rainy or not. One-off probabilities aren’t like this; they are a cultural invention — and
like a lot of cultural inventions, we still have difficulty dealing with them.
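The tally described above takes only a few lines to make explicit. Here’s a sketch with made-up counts (the numbers are hypothetical, purely to illustrate the frequency format):

```python
# Hypothetical observations: how many of the last 92 spring days were rainy?
rainy_days = 23
spring_days = 92

# The frequency format — "23 out of 92 days" — is directly observable;
# dividing it out gives the single-case probability only as a final step.
chance_of_rain = rainy_days / spring_days
print(f"{rainy_days} out of {spring_days} spring days were rainy "
      f"(about {chance_of_rain:.0%})")
```

The counting and comparison come naturally; it’s only the last line’s conversion to a one-off probability that is the cultural invention.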

The idea that we are evolved to make frequency judgments, not probability
calculations, is supported by evidence that we use frequencies as inputs and outputs for
our likelihood estimates. We automatically notice and remember the frequency of events
(input) and have subjective feelings of confidence that an event will or will not occur
(output).

If you rephrase the Monty Hall problem in terms of frequencies, rather than in terms of a one-off decision, people are more likely to get it right.5
Here’s a short version of the same problem, but focusing explicitly on
frequencies rather than one-off probabilities. Is it easier to grasp intuitively?

Take the same routine as before — three doors, one prize, and two duds. But this time
consider two different ways of playing the game, represented here by two players, Tom and
Helen. Tom always chooses one door and sticks with it. Helen is assigned the other two
doors. Monty always lets out a goat from behind one of these two doors, and Helen gets the
prize if it is behind the remaining door. They play the game, say, 30 times. How often is
it likely Tom will win the prize? How often is it likely Helen will win the prize? Given
this, which is the better strategy, Tom’s (stick) or Helen’s (switch)?
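You can play Tom and Helen’s 30 games in a few lines of Python (a sketch, not from the book) and simply count wins:

```python
import random

rounds = 30
tom_wins = helen_wins = 0
for _ in range(rounds):
    prize = random.randrange(3)      # which door hides the prize
    toms_door = random.randrange(3)  # Tom picks one door and sticks
    # Helen holds the other two doors; Monty lets a goat out of one of
    # them, so Helen wins whenever the prize is behind either of hers.
    if toms_door == prize:
        tom_wins += 1
    else:
        helen_wins += 1

print(tom_wins, helen_wins)  # typically around 10 and 20
```

Over 30 games Tom tends to win around 10 and Helen around 20, which is the two-thirds advantage of switching expressed as a frequency.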

In Real Life

An example of an everyday choice that is affected by our problems with probabilities
is thinking about weather forecasts. It can be simultaneously
true that the weather forecasts are highly accurate and that you shouldn’t
believe them. The following quote is from a great article by Neville Nicholls about errors and biases in our commonsense reasoning and how they affect the way we think about weather prediction:6

The accuracy of the United Kingdom 24-hour rain forecast is 83%. The climatological
probability of rain on the hourly timescale appropriate for walks is 0.08 (this is the
base rate). Given these values, the probability of rain, given a forecast of rain, is
0.30. The probability of no rain, given a forecast of rain, is 0.70. So, it is more
likely that you would enjoy your walk without getting wet, even if the forecast was for
rain tomorrow.

It’s a true statement but not easy to understand, because we don’t find probability
calculations intuitive. The trick is to avoid them. Often probability statistics can be
equally well-expressed using frequencies, and they will be better understood this way. We
know the probabilities concerning base rates will be neglected, so you need to be extra
careful if the message you are trying to convey relies on this information. It also helps
to avoid conditional probabilities — things like “the probability of X given Y” — and relative risks — “your risk of X goes down by Y% if you do Z.” People just don’t find it easy to think about information given in this way.7
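The figures in the Nicholls quote can be reproduced with Bayes’ rule. A minimal sketch, assuming the 83% accuracy applies symmetrically (the forecast is right 83% of the time whether it rains or not):

```python
accuracy = 0.83   # P(forecast rain | rain) = P(forecast dry | dry), assumed
base_rate = 0.08  # climatological P(rain) in any given hour

# P(forecast rain) = correct hits + false alarms
p_forecast_rain = accuracy * base_rate + (1 - accuracy) * (1 - base_rate)

# Bayes' rule: P(rain | forecast rain)
p_rain_given_forecast = accuracy * base_rate / p_forecast_rain
print(round(p_rain_given_forecast, 2))  # 0.3
```

The quoted 0.70 probability of no rain, given a forecast of rain, is just the complement: the low base rate means most rain forecasts for any particular hour are false alarms.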

End Notes
  1. Or at least it’s commonly attributed to Mark Twain. It’s one of those free-floating quotations.
  2. vos Savant, M. (1997). The Power of Logical Thinking. New York: St Martin’s Press.
  3. Paul Erdos published a colossal number of papers in his lifetime by collaborating with mathematicians around the world. If you published a paper with Erdos, your Erdos number is 1; if you published with someone who published with Erdos, it is 2. The mathematics of these indices of relationship can be quite interesting. See “The Erdos Number Project,” http://www.oakland.edu/enp.
  4. Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58(1), 1–73.
  5. Krauss, S., & Wang, X. T. (2003). The psychology of the Monty Hall problem: Discovering psychological mechanisms for solving a tenacious brain teaser. Journal of Experimental Psychology: General, 132(1), 3–22.
  6. Nicholls, N. (1999). Cognitive illusions, heuristics, and climate prediction. Bulletin of the American Meteorological Society, 80(7), 1385–1397.
  7. Gigerenzer, G., & Edwards, A. (2003). Simple tools for understanding risks: From innumeracy to insight. British Medical Journal, 327, 741–744 (http://bmj.bmjjournals.com/cgi/reprint/327/7417/741). This article is great on ways you can use frequency information as an alternative to help people understand probabilities.
See Also
  • A detailed discussion of the psychology of the Monty Hall dilemma, but one that doesn’t focus on the base-rate interpretation highlighted here, is given by Burns, B. D., & Wieth, M. (in press). The collider principle in causal reasoning: Why the Monty Hall dilemma is so hard. Journal of Experimental Psychology: General. More discussion of the Monty Hall dilemma, and a simulation that lets you compare the success of the stick and switch strategies, is at http://www.cut-the-knot.org/hall.shtml.
