Mind Hacks™: Tips & Tools for Using Your Brain

Authors: Tom Stafford, Matt Webb
Minimize Imaginary Distances
If you imagine an inner space, the movements you make in it take up time according to
how large they are. Reducing the imaginary distances involved makes manipulating mental
objects easier and quicker.

Mental imagery requires the same brain regions that are used to represent real
sensations. If you ask someone to imagine hearing the first lines to the song “Purple Haze”
by Jimi Hendrix, the activity in her auditory cortex increases. If you ask someone to
imagine what the inside of a teapot looks like, his visual cortex works harder. If you put a
schizophrenic who is hearing voices into a brain scanner, when she hears voices, the parts
of the brain that represent language sounds really are active — she’s not lying; she really is
hearing voices.

Any of us can hear voices or see imaginary objects at will; it’s only when we lose the
ability to suppress the imaginings that we think of it as a problem.

When we imagine objects and places, this imagining creates mental space that is
constrained in many of the ways real space is constrained. Although you can imagine
impossible movements like your feet lifting up and your body rotating until your head floats
inches above the floor, these movements take time to imagine and the amount of time is
affected by how large they are.

In Action

Is the left shape in Figure 2-28 the same as the right shape?

Figure 2-28. Is the left shape the same as the right shape?

How about the left shape in Figure 2-29 — is it the same as the right shape?

Figure 2-29. Is the left shape the same as the right shape?

And is the left shape in Figure 2-30 the same as the one on the right?

Figure 2-30. Is the left shape the same as the right shape?

To answer these questions, you’ve had to mentally rotate one of each pair of
the shapes. The first one isn’t too hard — the right shape is the same as the left but
rotated 50°. The second pair is not the same; the right shape is the mirror inverse of the
left and again rotated by 50°. The third pair is identical, but this time the right shape
has been rotated by 150°. To match the right shape in the third example to the left shape,
you have to mentally rotate 100° further than to match the first two examples. It should
have taken you extra seconds to do this. If you’d like to try an online version, see the
demonstration at
http://www.uwm.edu/People/johnchay/mrp.htm
(requires Shockwave). When we tried it, the long version didn’t save our data
(although it claimed it did) so don’t get excited about being able to analyze your
results; at the moment, you can use it only to get a feel for how the experiment
works.

How It Works

These shapes are similar to the ones used by Roger Shepard and Jacqueline Metzler [1] in their seminal experiments on mental rotation. They found that the time taken to make a decision about the shapes was linearly related to the angle of rotation. Other studies have shown that mental actions almost always take up an amount of time that is linearly related to the amount of imaginary movement required.

This shows that mental images are analog representations of the real thing — we don’t
just store them in our head in some kind of abstract code. Also interesting is the fact
that mental motions take up a linearly increasing amount of time as mental distance
increases; in the original experiments by Shepard and Metzler, it was one extra second for
every extra 50°. This relationship implies that the mental velocity of our movements is
constant (unlike our actual movements, which tend to accelerate sharply at the beginning
and decelerate sharply at the end, meaning that longer movements have higher
velocities).
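That linear law is simple enough to write down. Here is a minimal sketch: the one-second-per-50° slope is Shepard and Metzler's figure, while the base decision time is an arbitrary placeholder of our own.

```python
# Shepard and Metzler's linear law: decision time grows with rotation angle
# at roughly one extra second per 50 degrees.
SECONDS_PER_DEGREE = 1.0 / 50.0

def predicted_decision_time(angle_degrees, base_time=1.0):
    """Time to judge a rotated pair: a fixed base time (placeholder value)
    plus time proportional to the angle to be mentally rotated through."""
    return base_time + angle_degrees * SECONDS_PER_DEGREE

# The 150-degree pair should take about 2 seconds longer than the 50-degree pair.
extra = predicted_decision_time(150) - predicted_decision_time(50)
print(round(extra, 2))  # 2.0
```

The constant slope is exactly what implies a constant mental rotation velocity: doubling the angle doubles the extra time.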

Further studies of mental rotation [2] showed that the mental image does indeed move through all the transitional points as it is rotated, and that, at least in some experiments, rotating complex shapes didn't take any longer than rotating simple shapes.

Other experiments [3] have also shown that moving your mind's eye over a mental space (such as an
imagined map) takes time that is linearly related to the imagined distance. If you “zoom
in” on a mental image, that takes time as well. So if you ask people to imagine an
elephant next to a rabbit, they will take longer to answer a question about the color of
the rabbit’s eyes than about the color of the elephant’s eyes. You can partially avoid
this zooming-in
time by getting people to imagine the thing really large to start with — asking
them to start, say, by imagining a fly and then the rabbit next to it.

Recent neuroimaging research [4] has shown that mentally rotating objects may involve different brain regions from mentally rotating your own body through space. Studies that compare the difficulty of the two have found that it is easier and faster to imagine yourself mentally rotating around a display of objects than it is to imagine the objects rotating around their own centers. [5] So if you are looking at a pair of scissors with the handle pointing away from you, it will be easier to imagine yourself rotating around the scissors to figure out whether they are left-handed or right-handed scissors, rather than imagining the scissors rotating around so that the handle faces you. And easiest of all is probably to imagine just your own hand rotating to match the way the handle is facing.

All this evidence suggests that mental space exists in analog form in our minds. It’s
not just statements about the thing, but a map of the thing in your mind’s eye. There is
some evidence, however, that the copy in your mind’s eye isn’t an exact copy of the visual
input — or at least that it can’t be used in exactly the same way as visual input can be.
Look at Figure 2-31, which shows an ambiguous figure that could be a duck or could be a rabbit. You'll see one of them immediately, and if you wait a few seconds, you'll spot the other one as well. You can't see both at once; you have to flip between them, and there will always be one you saw first (and which one you see first is the sort of thing you can affect by priming [Bring Stuff to the Front of Your Mind], exposing people to concepts that influence their later behavior).

If you flash a figure up to people for just long enough for them to see it and make one interpretation — to see a duck or a rabbit, but not both — then they can't flip their mental image in their mind's eye to see the other interpretation. If they say they saw a duck, then if you ask them if the duck could be a rabbit, they just think you're mad. [6]

Perceiving the ambiguity seems to require real visual input to operate on. Although you have the details of the image in your mind's eye, it seems you need to experience them anew, to refresh the visual information, to be able to reinterpret the ambiguous figure.

In Real Life

We use mental imagery to reason about objects before we move them or before we move
around them. Map reading involves a whole load of mental rotation, as does fitting
together things like models or flat-pack furniture. Assembly instructions that involve
rotating the object will be harder to compute, all other things being equal. But if you
can imagine the object staying in the same place with you rotating around it, you can
partially compensate for this. The easier it is to use mental rotation, the less physical
work we actually have to do and the more likely we are to get things right the first
time.

Figure 2-31. You can see this picture as a duck or a rabbit, but if you'd seen only one interpretation at the time, could you see the other interpretation in your mind's eye? [7]
End Notes
  1. Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171, 701–703.
  2. Cooper, L. A., & Shepard, R. N. (1973). Chronometric studies of the rotation of mental images. In W. G. Chase (ed.), Visual Information Processing, 75–176. New York: Academic Press.
  3. Kosslyn, S., Ball, T., & Reiser, B. (1978). Visual images preserve metric spatial information: Evidence from studies of image scanning. Journal of Experimental Psychology: Human Perception and Performance, 4, 47–60.
  4. Parsons, L. M. (2003). Superior parietal cortices and varieties of mental rotation. Trends in Cognitive Sciences, 7(12), 515–517.
  5. Wraga, M., Creem, S. H., & Proffitt, D. R. (2000). Updating displays after imagined object and viewer rotations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 151–168.
  6. Chambers, D., & Reisberg, D. (1985). Can mental images be ambiguous? Journal of Experimental Psychology: Human Perception and Performance, 11(3), 317–328.
  7. Fliegende Blätter (1892, No. 2465, p. 17). Munich: Braun & Schneider. Reprinted in: Jastrow, J. (1901). Fact & Fable in Psychology. London: Macmillan.
Explore Your Defense Hardware
We have special routines that detect things that loom and make us flinch in
response.

Typically, the more important something is, the deeper in the brain you find it, the
earlier in evolution it arose, and the quicker it can happen.

Avoiding collisions is pretty important, as is closing your eyes or tensing if you can’t
avoid the collision. What’s more, you need to do these things to a deadline. It’s no use
dodging after you’ve been hit.

Given this, it’s not surprising that we have some specialized neural mechanisms for
detecting collisions and that they are plugged directly into motor systems for dodging and
defensive behavior.

In Action

The startle reaction is pretty familiar to all of us — you blink, you flinch, maybe your
arms or legs twitch as if beginning a motion to protect your vulnerable areas. We’ve all
jumped at a loud noise or thrown up our arms as something expands toward us. It’s
automatic. I’m not going to suggest any try-it-at-home demonstrations for this hack.
Everyone knows the effect, and I don’t want y’all firing things at each other to see
whether your defense reactions work.

How It Works

Humans can show a response to a collision-course stimulus within 80 ms. [1] This is far too quick for any sophisticated processing. In fact, it's even too quick for any processing that combines information across both eyes.
It’s done, instead, using a classic hack — a way of getting good-enough 3D direction and
speed information from crude 2D input. It works like this: symmetrical expansion of
darker-than-background areas triggers the startle response.

“Darker-than-background” because this is a rough-and-ready way of deciding what to
count as an object rather than just part of the background. “Symmetrical expansion”
because this kind of change in visual input is characteristic of objects that are coming
right at you. If it’s not expanding, it’s probably just moving, and if it’s not expanding
symmetrically, it’s either changing shape or not moving on a collision course.
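To make the rule concrete, here is a toy detector of our own construction (not code from any published model): frames are grids of 0s and 1s, with 1 marking a darker-than-background pixel, and the startle fires only when the dark blob grows by roughly the same amount in both dimensions.

```python
def blob_extent(frame):
    """Bounding-box width and height of the dark (1) region."""
    rows = [i for i, row in enumerate(frame) if any(row)]
    cols = [j for row in frame for j, v in enumerate(row) if v]
    if not rows:
        return 0, 0
    return max(cols) - min(cols) + 1, max(rows) - min(rows) + 1

def startle(frame_then, frame_now, tolerance=1):
    """Fire if the dark blob expanded by roughly the same amount on both axes."""
    w0, h0 = blob_extent(frame_then)
    w1, h1 = blob_extent(frame_now)
    grew = w1 > w0 and h1 > h0
    symmetric = abs((w1 - w0) - (h1 - h0)) <= tolerance
    return grew and symmetric

# A centered dot that swells into a 3x3 patch: collision course, flinch.
then = [[0] * 5 for _ in range(5)]
then[2][2] = 1
now = [[1 if 1 <= i <= 3 and 1 <= j <= 3 else 0 for j in range(5)] for i in range(5)]
print(startle(then, now))   # True
# The same dot merely moving sideways: no expansion, no flinch.
moved = [[0] * 5 for _ in range(5)]
moved[2][4] = 1
print(startle(then, moved))  # False
```

Swapping the two frames (a contracting patch) also fails to fire, matching the asymmetry of the real mechanism.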

These kinds of stimuli capture attention [Grab Attention] and cause a startle response. Everything from reptiles to pigeons to human infants will blink and/or flinch their heads back when they see this kind of input. You don't get the same effects with contracting rather than expanding patches, or with light rather than dark patches. [2]

Looming objects always provoke a reaction, even if they are predictable; we don't learn to ignore them as we learn to ignore other kinds of event. [3]
This is another sign that they fall in a class for which there is dedicated
neural machinery — and the reason why is pretty obvious as well. A looming object is always
potentially dangerous. Some things you just shouldn’t get used to.

In pigeons, the cells that detect looming exist in the midbrain. They are very tightly tuned so that they respond only to objects that look as if they are going to collide — they don't respond to objects heading for a near miss, even if they are still within 5° of collision. [4]
These neurons fire at a consistent time before collision, regardless of the
size and velocity of the object.

This, and the fact that near misses don't trigger a response, shows that path and velocity information is extracted from the rate and shape of expansion. Now, this kind of calculation can be done cortically, using the comparison of information from both eyes, but for high-speed, non-tiny objects at anything more than 2 m away, it isn't. [5] You don't need to compare information from both eyes; the looming hack is quick and works well enough.
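The arithmetic behind that extraction can be sketched in a few lines. This is the standard "tau" idea — angular size divided by its rate of expansion equals time to collision — and the object sizes, distances, and speed below are made-up numbers for illustration.

```python
# Time to collision from the retinal image alone: angular size divided by
# its rate of expansion. No stereo, no knowledge of real size or distance.

def retinal_angle(diameter, distance):
    """Small-angle approximation of the visual angle (radians) an object subtends."""
    return diameter / distance

def tau(theta, theta_dot):
    """Estimated seconds to collision from angular size and expansion rate."""
    return theta / theta_dot

# Two very different objects, both approaching at 10 m/s:
# a 10 cm ball 5 m away, and a 2 m boulder 100 m away.
for diameter, distance in [(0.1, 5.0), (2.0, 100.0)]:
    speed, dt = 10.0, 0.001
    theta_now = retinal_angle(diameter, distance)
    theta_soon = retinal_angle(diameter, distance - speed * dt)
    theta_dot = (theta_soon - theta_now) / dt
    print(round(tau(theta_now, theta_dot), 2), "s estimated;", distance / speed, "s true")
```

Both estimates land within a few milliseconds of the true time, even though the two objects differ twenty-fold in size and distance: the expansion rate alone carries the timing, which is how a neuron could fire at a consistent time before collision without knowing either.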

End Notes
  1. Busettini, C., Masson, G. S., & Miles, F. A. (1997). Radial optic flow induces vergence eye movements with ultra-short latencies. Nature, 390(6659), 512–515.
  2. Nanez, J. E. (1988). Perception of impending collision in 3- to 6-week-old human infants. Infant Behaviour and Development, 11, 447–463.
  3. Caviness, J. A., Schiff, W., & Gibson, J. J. (1962). Persistent fear responses in rhesus monkeys to the optical stimulus of "looming." Science, 136, 982–983.
  4. Wang, Y., & Frost, B. J. (1992). Time to collision is signalled by neurons in the nucleus rotundus of pigeons. Nature, 356, 236–238.
  5. Rind, F. C., & Simmons, P. J. (1999). Seeing what is coming: Building collision-sensitive neurones. Trends in Neurosciences, 22, 215–220. (This reference contains some calculations showing exactly what size of approaching objects, at what distances, are suitable for processing using the looming system and what are suitable for processing by the stereo vision system.)
