Mind Hacks™: Tips & Tools for Using Your Brain
Authors: Tom Stafford, Matt Webb
When you’re speaking, written words can distract you. If you’re thinking
nonlinguistically, they can’t.
The Stroop Effect is a classic of experimental psychology. In fact, it’s more than a classic, it’s an industry. J. Ridley Stroop first did his famous experiment in 1935, and it’s been replicated thousands of times since then. The task is this: you are shown some words and asked to name the ink color the words appear in. Unfortunately, the words themselves can be the names of colors. You are slower, and make more errors, when trying to name the ink color of a word that spells the name of a different color. This, in a nutshell, is the Stroop Effect. You can read the original paper online at http://psychclassics.yorku.ca/Stroop.
To try out the Stroop Effect yourself, use the interactive experiment available at http://faculty.washington.edu/chudler/java/ready.html (you don’t need Java in your web browser to give this a go). Start the experiment by clicking the “Go to the first test” link; the first page will look like Figure 5-1, only (obviously) in color.
As fast as you’re able, read out loud the color of each word — not what it spells, but
the actual color in which it appears. Then click the Finish button and note the time it
tells you. Continue the experiment and do the same on the next screen. Compare the
times.
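If you’d rather mock this up at a terminal than in a browser, the two screens can be sketched with ANSI color codes. This is an illustrative toy, not the experiment at the URL above; the color set, trial counts, and function names are all arbitrary choices:

```python
import random

# Foreground ANSI color codes for a quick terminal mock-up of the two screens.
ANSI = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m", "yellow": "\033[33m"}
RESET = "\033[0m"

def make_trials(n, congruent, seed=None):
    """Return (word, ink) pairs; incongruent pairs never let word and ink match."""
    rng = random.Random(seed)
    names = sorted(ANSI)
    trials = []
    for _ in range(n):
        word = rng.choice(names)
        ink = word if congruent else rng.choice([c for c in names if c != word])
        trials.append((word, ink))
    return trials

def show(trials):
    """Print each word in its assigned ink color."""
    for word, ink in trials:
        print(f"{ANSI[ink]}{word.upper()}{RESET}")

# First screen: word and ink agree. Second screen: they conflict on every trial.
show(make_trials(5, congruent=True, seed=1))
print("---")
show(make_trials(5, congruent=False, seed=1))
```

Time yourself naming the ink colors aloud on each list; the conflicting one should take noticeably longer.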
The difference between the two tests is that whereas the ink colors and the words
correspond on the first screen, on the second they conflict for each word. It takes you
longer to name the colors on the second screen.
Although you attempt to ignore the word itself, it still breaks through, affecting your performance. It slows your response to the actual ink color and can even make you give an incorrect answer. You can get this effect with most people nearly all of the time, which is one reason why psychologists love it.
The other reason it’s a psychologist’s favorite is that, although the task is simple,
it involves many aspects of how we think, and the experiment has variations to explore
these. At first glance, the explanation of the task seems simple — we process words
automatically, and this process overrides the processing of color information. But this
isn’t entirely true, although that’s the reason still taught in many classes.
Reading the word interferes only if two conditions are fulfilled. First, the level and
focus of your attention have to be broad enough that the word can be unintentionally read.
Second, the response you are trying to give must be a linguistic one. In this case, the
required response is spoken, so it is indeed linguistic.
Avoiding reading is easier when the color to report is disentangled from the word. If
you have to respond to only the color of the first letter of each word and the rest are
black, the confusion is reduced. Ditto if the word and block of color are printed
separately. In these cases, we’re able to configure ourselves to respond to certain
stimuli (the color of the ink) and ignore certain others (the word). It’s only when we’re
not able to divide the two types of information that the Stroop Effect emerges.
It’s probably this kind of selective concentration that renders otherwise bizarre events invisible, as with inattention blindness [Make Things Invisible Simply by Concentrating (on Something Else)], when attention on a basketball game results in a gorilla walking unseen across the court.
The second condition, that the response is linguistic, is really a statement about the
compatibility between the stimulus and response required to it. Converting a written word
into its spoken form is easier than converting a visual color into its spoken form.
Because of immense practice, word shapes are already linguistic items, whereas color has
to be translated from the purely visual into a linguistic symbol (the sensation of red on
the eye, to the word “red”).
So the kind of response normally required in the Stroop Effect uses the same
code — language — as the word part of the stimulus, not the color part. When we’re asked to
give a linguistic label to the color information, it’s not too surprising that the
response-compatible information from the word part of the stimulus distracts us.
But by changing the kind of response required, you can remove the distracting effect.
You can demonstrate this by doing the Stroop Effect task from earlier, but instead of
saying the color out loud, respond by pointing to a square of matching color on a
printout. The interference effect disappears — you’ve stopped using a linguistic response
code, and reading the words no longer acts as a disruption.
Taking this one step further, you can reintroduce the effect by changing the task to its opposite — try responding to what the written word says and attempting to ignore the ink color (still pointing to colors on the chart rather than reading out loud). Suddenly pointing is hard again when the written word and ink color don’t match.²
You’re now getting the reverse effect because your response is in a code that is
different from the stimulus information you’re trying to use (the word) and the same as
the stimulus information you’re trying to ignore (the color).
Take-home message: more or less mental effort can be required to respond to the same
information, depending on how compatible the response is with the stimulus. If you don’t
want people to be distracted, don’t make them translate from visual and spatial
information into auditory and verbal information (or vice versa).
2. The myth of ballistic processing: Evidence from Stroop’s paradigm. Psychonomic Bulletin & Review, 8(2), 324–330. And: MacLeod, C. M., & MacDonald, P. A. (2000). Interdimensional interference in the Stroop Effect: Uncovering the cognitive and neural anatomy of attention. Trends in Cognitive Sciences, 4(10), 383–391.
You’re drawn to reach in the same direction as something you’re reacting to, even if
the direction is completely unimportant.
So much of what we do in everyday life is responding to something that we’ve seen or heard — choosing and clicking a button on a dialog box on a computer or leaping to turn the heat off when a pan boils over. Unfortunately, we’re not very good at reacting only to the relevant information. The form in which we receive it leaks over into our response.
For instance, if you’re reacting to something that appears on your left, it’s faster to respond with your left hand, and it takes a little longer to respond with your right. And this is true even when location isn’t important at all. In general, the distracting effect of location on responses is called the Simon Effect, named after J. Richard Simon, who first published on it in 1968 and is now Professor Emeritus at the University of Iowa.
The Simon Effect isn’t the only example of the notionally irrelevant elements of a stimulus leaking into our response. Similar is the Stroop Effect [Confuse Color Identification with Mixed Signals], in which naming an ink color nets a slower response if the ink spells out the name of a different color. And, although it’s brought about by a different mechanism, brighter lights triggering better reaction times [Why People Don’t Work Like Elevator Buttons] is similar in that irrelevant stimulus information modifies your response (this one is because a stronger signal evokes a faster neural response).
A typical Simon task goes something like this: you fix your gaze at the center of a
computer screen and at intervals a light flashes up, randomly on the left or the
right — which side is unimportant. If it is a red light, your task is to hit a button on
your left. If it is a green light, you are to hit a button on your right. How long it
takes you is affected by which side the light appears on, even though you are supposed to
be basing which button you press entirely on the color of the light. The light on the left
causes quicker reactions to the red button and slower reactions to the green button (good
if the light is red, bad if the light is green). Lights appearing on the right naturally
have the opposite effect. Even though you’re supposed to disregard the location entirely,
it still interferes with your response. The reaction times being measured are usually a half-second or less for this sort of experiment, and the location confusion extends them by roughly 5% (about 25 milliseconds on a half-second response).
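A toy simulation of the task just described makes that arithmetic concrete. The timing parameters here are illustrative assumptions of my own, not data from Simon’s experiments:

```python
import random
from statistics import mean

def simulate_simon(n_trials=1000, base_rt=0.475, simon_cost=0.025,
                   noise_sd=0.05, seed=7):
    """Simulate mean reaction times (seconds) for congruent vs. incongruent trials.

    Rule: red light -> press the left button, green light -> press the right.
    A trial is congruent when the light's side matches the required button.
    All timing parameters are made-up but plausible values.
    """
    rng = random.Random(seed)
    congruent, incongruent = [], []
    for _ in range(n_trials):
        side = rng.choice(["left", "right"])     # where the light flashes
        color = rng.choice(["red", "green"])     # determines the correct button
        button = "left" if color == "red" else "right"
        rt = base_rt + rng.gauss(0, noise_sd)    # baseline response time
        if side == button:
            congruent.append(rt)
        else:
            incongruent.append(rt + simon_cost)  # irrelevant location adds a delay
    return mean(congruent), mean(incongruent)

fast, slow = simulate_simon()
print(f"congruent:   {fast * 1000:.0f} ms")
print(f"incongruent: {slow * 1000:.0f} ms ({(slow / fast - 1) * 100:.1f}% slower)")
```

With these made-up numbers, the incongruent mean comes out around 5% slower, the same ballpark as the effect described above.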
It’s difficult to tell what these reaction times mean without trying the experiment,
but it is possible to feel, subjectively, the Simon Effect without equipment to measure
reaction time.
You need stimuli that can appear on the left or the right in equal measure. I popped outside for 10 minutes and sat at the edge of the road, looking across it, so traffic could come from either my left or my right. (Figure 5-2 shows the view from where I was sitting.) My task was to identify red and blue cars, attempting to ignore their direction of approach.
In choosing this task, I made use of the fact that color discrimination is poor in peripheral vision [See the Limits of Your Vision]. By fixing my gaze at a position directly opposite me, over the road, and refusing to move my eyes or my head, I would be able to tell the color of each car only as it passed directly in front of me. (If I had chosen to discriminate black cars from white cars, no color information would be required, so I would have been able to tell using my peripheral vision.) I wanted to do this so I wouldn’t have much time to do my color task, but would be able to filter out moving objects that weren’t cars (like people with strollers).
As a response, I tapped my right knee every time a red car passed and my left for blue
ones, trying to respond as quickly as possible.
After 10 minutes of slow but steady traffic, I could discern a slight bias in my
responses. My right hand would sometimes twitch a little if cars approached from that
direction, and vice versa.
Now, I wouldn’t be happy claiming a feeling of a twitchy hand as any kind of
confirmation of the Simon Effect. The concept of location in my experiment is a little
blurred: cars that appear from the right are also in motion to the left — which stimulus location should be interfering with my knee-tapping response?
But even though I can’t fully claim the entire effect, that a car on the right causes
a twitching right hand, I can still claim the basic interference effect: although I’d been
doing the experiment for 10 minutes, my responses were still getting mucked up
somehow.
To test whether my lack of agility at responding was caused by the location of the
cars conflicting with the location of my knees, I changed my output, speaking “red” or
“blue” as a response instead. In theory, this should remove the impact of the Simon Effect
(because I was taking away the left-or-right location component of my response), and I
might feel a difference. If I felt a difference, that would be the Simon Effect, and then
its lack, in action.
And indeed, I did feel a difference. Using a spoken output, responding to the color of
the cars was absolutely trivial, a very different experience from the knee tapping and
instantly more fluid.
For my traffic watching exercise, the unimportant factor of the location of the
colored cars was interfering with my tapping my left or right knee. Factoring out the
location variable by speaking instead of knee tapping — effectively routing around the Simon
Effect — made the whole task much easier.
Much like the Stroop Effect
[
Confuse Color Identification with Mixed Signals
]
(in which you
involuntarily read a word rather than sticking to the task of identifying the color of the
ink in which it is printed), the Simon Effect is a collision of different pieces of
information. The difference between the two is that the conflict in the Stroop Effect is
between two component parts of a stimulus (the color of the word and the word itself),
while, in the Simon Effect, the conflict is between the compatibility of stimulus and
response. You’re told to ignore the location of the stimulus, but just can’t help knowing
location is important because you’re using it in making your response.
The key point here is that location information is almost always important,
and so we’re hardwired to use it when available. In real life, and especially before the
advent of automation, you generally reach to the location of something you perceive in
order to interact with it. If you perceive a light switch on your left, you reach to the
left to switch off the lights, not to the right — that’s the way the world works. Think of
the Simon Effect not as location information leaking into our responses, but as the lack of a mechanism to specifically ignore location information. Such a mechanism has never really been needed.
Knowing that location information is carried along between stimulus and response is
handy for any kind of interface design. My range has four burners, arranged in a square.
But the controls for those burners are in a line. It’s because of the Simon Effect that I
have to consult the diagram next to the controls each and every time I use them, not yet
having managed to memorize the pattern (which, after all, never changes, so it should be
easy). When I have to respond to the pot boiling over at the top right, I have the
top-right location coded in my brain. If the controls took advantage of that instead of
conflicting, they’d be easier to use.
Dialog boxes on my computer (I run Mac OS X) are better aligned with keyboard
shortcuts than my stove’s controls with the burners. There are usually two controls: OK
and Cancel. I can press Return as a shortcut for OK and Escape as a shortcut for Cancel.
Fortunately, the right-left arrangement of the keys on my keyboard matches the right-left
arrangement of the buttons in the dialog (Escape and Cancel on the left, Return and OK on
the right). If they didn’t match, there would be a small time cost every time I attempted
to use the keyboard, and it’d be no quicker at all.
And a corollary: my response to the color of the cars in the traffic experiment was
considerably easier when it was verbal rather than directional (tapping left or right). To
make an interface more fluid, avoid situations in which the directions of stimulus and
response clash. For technologies that are supposed to be transparent and intuitive — like my
Mac (and my stove, come to that) — small touches like this make all the difference.