Objects Move, Lighting Shouldn’t
Moving shadows make us see moving objects rather than assume moving light
sources.

Shadows are processed early as we make sense of objects; they’re one of the first things our visual system uses to work out shape. Fool Yourself into Seeing 3D further showed that our visual system makes the hardwired assumption that light comes from above. Another way shadows are used is to infer
movement, and with this, our visual system makes the further assumption that a moving shadow
is the result of a moving object, rather than being due to a moving light source. In theory,
of course, the movement of a shadow could be due to either cause, but we’ve evolved to
ignore one of those possibilities — rapidly moving objects are much more likely than rapidly
moving lights, not to mention more dangerous.

In Action

Observe how your brain uses shadows to construct the 3D model of a scene.
Watch the ball-in-a-box movie at:

Note

If you’re currently without Internet access, see Figure 2-12 for movie stills.

The movie is a simple piece of animation involving a ball moving back and forth twice across a 3D box. Both times, the ball moves diagonally across the frame. The first time, it appears to move along the floor of the box, with a drop shadow directly beneath and touching the bottom of the ball. The second time, the ball appears to move horizontally and float up off the floor, with the shadow following along on the floor. The ball actually takes the same path both times; it’s just the path of the shadow that changes (from diagonal along with the ball to horizontal). And it’s that change that alters your perception of the ball’s movement. (Figure 2-12 shows stills of the first (left) and second (right) times the ball crosses the box.)
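If you can’t get to the movie, here’s a minimal sketch that recreates the same setup; it assumes Python with NumPy and Matplotlib installed and is only an approximation of the animation described above, not the original demo. The ball follows an identical diagonal path in both panels; only the shadow’s path differs, yet the left panel reads as a ball moving back along the floor and the right panel as a ball floating upward.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

frames = 120
t = np.linspace(0.0, 1.0, frames)
ball_x = 0.1 + 0.8 * t     # identical diagonal path in both panels
ball_y = 0.26 + 0.5 * t

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
panels = []
for ax, mode in zip(axes, ("shadow under ball", "shadow along floor")):
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.set_xticks([])
    ax.set_yticks([])
    ax.set_title(mode)
    # The real demo renders a full 3D box; a single floor line is enough here.
    ax.plot([0, 1], [0.2, 0.2], color="black")
    shadow, = ax.plot([], [], "o", markersize=18, color="0.6")
    ball, = ax.plot([], [], "o", markersize=18, color="tab:blue")
    panels.append((ball, shadow, mode))

def update(frame):
    for ball, shadow, mode in panels:
        ball.set_data([ball_x[frame]], [ball_y[frame]])
        if mode == "shadow under ball":
            # Shadow stays glued beneath the ball: it reads as moving back
            # along the floor of the box.
            shadow.set_data([ball_x[frame]], [ball_y[frame] - 0.06])
        else:
            # Shadow slides along the floor while the ball's path is unchanged:
            # the same ball now reads as floating up off the floor.
            shadow.set_data([ball_x[frame]], [0.2])
    return [artist for p in panels for artist in p[:2]]

anim = FuncAnimation(fig, update, frames=frames, interval=40)
plt.show()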

Now watch the more complex “zigzagging ball” movie (http://www.kyb.tue.mpg.de/bu/demo/index.html; Figure 2-13 shows a still from the movie), again of a ball in motion inside a 3D box.

This time, while the ball is moving in a straight line from one corner of the box to
the other (the proof is in the diagonal line it follows), the shadow is darting about all
over the place. Here, there is even strong evidence that it’s the light source — and
thus the shadow — that’s moving: the shading and colors on the box change continuously and
in a way that is consistent with a moving light source rather than a zigzagging ball
(which doesn’t produce any shading or color changes!). Yet still you see a zigzagging
ball.

How It Works

Your brain constructs an internal 3D model of a scene as soon as you look at one, with
the influence of shadows on the construction being incredibly strong. You can see this in
action in the first movie: your internal model of the scene changes dramatically based
solely on the position and motion of a shadow.

Figure 2-12. Stills from the “ball-in-a-box” movie

I feel bad saying “internal model.” Given that most of the information about a scene is already in the universe, accessible if you move your head, why bother storing it inside your skull too? We probably store internally only what we need to, where ambiguities have had to be resolved. Visual data inside the head isn’t a photograph but a structured model existing in tandem with extelligence: information we can treat as part of our intelligence but that isn’t kept internally.

— T.S.

The second movie shows a couple more of the assumptions (of which there are many) the
brain makes in shadow processing. One assumption is that darker coloring means shadow.
Another is that light usually comes from overhead (these assumptions are so natural we
don’t even notice they’ve been made). Both of these come into play when two-dimensional
shapes — ordinary pictures — appear to take on depth with the addition of judicious shading
[Fool Yourself into Seeing 3D].

Figure 2-13. A still from the “zigzagging ball” movie¹

Based on these assumptions, the brain prefers to believe that the light source is
keeping still and the moving object is jumping around, rather than that the light source
is moving. And this despite all the cues to the contrary: the lighting pattern on the
floor and walls, the sides of the box being lit up in tandem with the shifting
shadow — these should be more than enough proof. Still, the shadow of the ball is all that
the brain takes into account. In its quest to produce a 3D understanding of a scene as
fast as possible, the brain doesn’t bother to assimilate information from across the whole
visual field. It simplifies things markedly by just assuming the light source stays
still.

It’s the speed of shadow processing you have to thank for this illusion. Conscious
knowledge is slower to arise than the hackish-but-speedy early perception and remains
influenced by it, despite your best efforts to see it any other way.

End Note
  1. Zigzagging ball animation thanks to D. Kersten (University of
    Minnesota, U.S.) and I. Bülthoff (Max-Planck-Institut für biologische Kybernetik,
    Germany)
See Also
  • The Kersten Lab (
    http://gandalf.psych.umn.edu/~kersten/kersten-lab
    ) researches vision, action, and the computational principles behind how we
    turn vision into an understanding of the world. As well as publications on the
    subject, their site houses demos exploring what information we can extract from what
    we see and the assumptions made. One demo of theirs, Illusory Motion from Shadows (
    http://gandalf.psych.umn.edu/~kersten/kersten-lab/images/kersten-shadow-cine.MOV
    ), demonstrates how the assumption that light sources are stationary can be
    exploited to provide another powerful illusion of motion.
  • Kersten, D., Knill, D., Mamassian, P., & Bülthoff, I. (1996). Illusory motion from shadows. Nature, 379(6560), 31.
Depth Matters
Our perception of a 3D world draws on multiple depth cues, as diverse as atmospheric haze and preconceptions of object size. We use them all together in vision, and individually in visual design and real life.

Our ability to see depth is an amazing feature of our vision. Not only does depth make what we see more interesting, it also plays a crucial, functional role. We use it to navigate our 3D world, and we can employ it in visual communication design to help organize what we see, thanks to depth’s ability to clarify through separation.¹

Psychologists call a visual trigger that gives us a sense of depth a depth cue. Vision science suggests that our sense of depth originates from at least 19 identifiable cues in our environment. We rarely see depth cues individually, since they mostly appear and operate in concert to provide depth information, but we can loosely organize them into several related groups:

Binocular cues (stereoscopic depth, eye convergence)

  • With binocular (two-eye) vision, the brain sees depth by comparing angle differences between the images from each eye (a rough numerical sketch of this follows the list). This type of vision is very important to daily life (just try catching a ball with one eye closed), but there are also many monocular (single-eye) depth cues. Monocular cues have the advantage that they are easier to employ for depth in images on flat surfaces (e.g., in print and on computer screens).

Perspective-based cues (size gradient, texture gradient, linear perspective)

  • The shape of a visual scene gives cues to the depth of objects within it.
    Perspective lines converging/diverging or a change in the image size of patterns that we
    know to be at a constant scale (such as floor tile squares) can be used to inform our
    sense of depth.

Occlusion-based cues (object overlap, cast shadow, surface shadow)

  • The presence of one object partially blocking the form of another and the cast shadows they create are strong cues to depth. See Fool Yourself into Seeing 3D for examples.

Focus-based cues (atmospheric perspective, object intensity, focus)

  • Greater distance usually brings with it a number of depth cues associated with
    conditions of the natural world, such as increased atmospheric haze and physical limits
    to the eye’s focus range. We discuss one of these cues, object intensity, next.

Motion-based cues (kinetic depth, a.k.a. motion parallax)

  • As you move your head, objects at different distances move at different relative speeds. This is a very strong cue, and it’s also the reason a spitting cobra sways its head from side to side to work out how far away its prey is (see the sketch after this list).
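Two of these cue families lend themselves to a quick back-of-the-envelope calculation. The following sketch is a simplified model rather than anything from the vision-science literature cited here; the eye separation and head speed are assumed values. It shows how binocular disparity shrinks with distance and how motion parallax makes near objects sweep across the visual field faster than far ones.

import math

EYE_SEPARATION_M = 0.065      # assumed typical interpupillary distance
HEAD_SPEED_M_PER_S = 0.5      # assumed sideways head movement for the parallax example

def binocular_disparity_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight to a point straight ahead;
    a larger angle means a stronger stereoscopic depth signal."""
    return math.degrees(2 * math.atan((EYE_SEPARATION_M / 2) / distance_m))

def parallax_deg_per_s(distance_m: float) -> float:
    """Apparent angular speed of a stationary object directly abeam of a
    sideways-moving head; nearer objects sweep past faster."""
    return math.degrees(HEAD_SPEED_M_PER_S / distance_m)

for d in (0.5, 2.0, 10.0, 100.0):
    print(f"{d:6.1f} m: disparity {binocular_disparity_deg(d):6.3f} deg, "
          f"parallax {parallax_deg_per_s(d):7.3f} deg/s")

Both numbers fall off rapidly with distance, which is one reason far-off scenes lean more heavily on cues such as haze and known size.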

There isn’t room to discuss all of these cues here, so we’ll look in detail at just two depth cues: object intensity and known size (a cue that is loosely connected to the perspective-based cue family). More information on depth cues and their use in information design can be found in the references at the end of this hack.

Object Intensity

Why do objects further away from us appear to be faded or faint? Ever notice that
bright objects seem to attract our attention? It’s all about intensity.

If we peer into the distance, we notice that objects such as buildings or mountains
far away appear less distinct and often faded compared to objects close up. Even the
colors of these distant objects appear lighter or even washed out. The reason for this is
something psychologists call atmospheric perspective or object intensity. It is a visual
cue our minds use to sense depth; we employ it automatically as a way to sort and
prioritize information about our surroundings (foreground as distinct from
background).

Designers take advantage of this phenomenon to direct our attention by using bold
colors and contrast in design work. Road safety specialists make traffic safety signs
brighter and bolder in contrast than other highway signs so they stand out, as shown in Figure 2-14. You too, in fact, employ the same principle when you use a highlighter to mark passages in a book. You’re using a depth cue to literally bring certain text into the foreground, to prioritize information in your environment.

Figure 2-14. Important street signs often use more intense colors and bolder contrast elements so they stand out from other signage²
In action

Close one eye and have a look at the two shaded blocks side by side in Figure 2-15. If you had to decide which block appears to be visually closer, which would you choose? The black block seems to separate and come forward from the gray block. It is as if our mind wants it to be in front.

Figure 2-15. Which block appears closer?
How it works

The reason for this experience of depth, based on light-dark value differences, is atmospheric perspective, and the science is actually quite simple. Everywhere in the air are dust or water particles that partially obscure our view of objects, making them appear dull or less distinct. Up close, you can’t see these particles, but as the space between you and an object increases, so does the number of particles in the line of sight. Together these particles cause a gradual haze to appear on distant objects. In the daytime, this haze on faraway objects appears white or blue as the particles scatter the natural light. Darker objects separate out and are perceived as foreground, lighter ones as background. At night, the principle is the same but the effect is reversed: objects that are lit appear to be closer, as shown in Figure 2-16. So, as a general rule of thumb, an object’s intensity compared to its surroundings helps us generate a sense of its position. Even colors have this same depth effect because of comparative differences in their value and chroma. The greater the difference in intensity between two objects, the more pronounced the sense of depth separation between them.

Figure 2-16. At night, lit objects appear closer
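In graphics terms, daytime atmospheric perspective behaves much like the standard exponential fog model. The following is a minimal sketch of that idea; the haze color and density constant are assumed values for illustration, not measurements. The further away an object is, the more its color is pulled toward the haze color, so a dark, high-contrast object gradually washes out into the pale background.

import math

HAZE_RGB = (0.85, 0.88, 0.95)   # assumed daytime haze color (pale blue-white)
DENSITY = 0.002                 # assumed scattering density per metre

def apply_haze(object_rgb, distance_m):
    """Blend an object's color toward the haze color with distance,
    mimicking daytime atmospheric perspective (exponential fog model)."""
    visibility = math.exp(-DENSITY * distance_m)   # 1.0 up close, tends to 0 far away
    return tuple(c * visibility + h * (1.0 - visibility)
                 for c, h in zip(object_rgb, HAZE_RGB))

dark_building = (0.20, 0.18, 0.15)   # a dark object: high contrast, reads as foreground
for d in (10, 200, 1000, 5000):
    r, g, b = apply_haze(dark_building, d)
    print(f"{d:5d} m away -> RGB ({r:.2f}, {g:.2f}, {b:.2f})")

Swapping the haze color for a dark night sky gives the reversed night-time version of the effect, in which lit objects keep their contrast and so read as closer.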

So how does intensity relate to attention? One view is that we pay more attention to
objects that are closer, since they are of a higher concern to our physical body. We
focus on visually intense objects because their association with the foreground
naturally causes us to assign greater importance to them. Simply put, they stand out in
front.

In real life

Since weather can affect the atmosphere’s state, it can influence perceived depth:
the more ambient the air particles, the more acute the
atmospheric perspective. Hence, a distance judged in a rainstorm, for
example, will be perceived as further than that same distance judged on a clear, sunny
day.

Known Size

How do we tell the distance in depth between two objects if they aren’t the same size?

We all know that if you place two same-size objects at different distances and look at them both, the object further away appears smaller. But have you ever been surprised by an object’s size when you first see it from afar and then discover it is much bigger up close? Psychologists call the phenomena at work here size gradient and known size. Size gradient states that as objects move further away, they shrink proportionally in our field of view. From these differences in relative size, we generate a sense of depth. This general rule holds true, but our prior knowledge of an object’s size can sometimes trip us up, because we use the known size of an object (or our assumptions about its size) to gauge the relative size of objects we see.
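Here’s a minimal sketch of the geometry behind size gradient and known size; the heights and distances are made-up illustration values. Angular size falls off roughly in proportion to distance, and if you assume you know an object’s true size you can invert the relationship to estimate how far away it is, so a wrong assumed size produces a wrong sense of distance.

import math

def angular_size_deg(true_size_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(true_size_m / (2 * distance_m)))

def estimated_distance_m(assumed_size_m: float, angular_deg: float) -> float:
    """Invert the relationship: infer distance from retinal size, *given* an
    assumed (known) object size. A wrong assumption gives a wrong distance."""
    return assumed_size_m / (2 * math.tan(math.radians(angular_deg) / 2))

elephant_height = 3.0    # metres (illustrative)
mouse_height = 0.05      # metres (illustrative)

angle = angular_size_deg(elephant_height, 500)   # an elephant seen from 500 m
print(f"Elephant at 500 m subtends {angle:.2f} degrees")

# Treat that same retinal image as if it belonged to a familiar small animal:
print(f"If you assume it's mouse-sized, it seems only "
      f"{estimated_distance_m(mouse_height, angle):.1f} m away")

The same visual angle is consistent with a huge animal far away or a tiny one close by; without a trustworthy known size or other depth cues, the brain can settle on the wrong reading.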

Being aware of a user’s knowledge of subjects and objects is key if comparative size
is an important factor. Many visual communication designers have discovered the peril of
forgetting to include scale elements in their work for context reference. A lack of
user-recognizable scale can render an important map, diagram, or comparative piece
completely useless. An unexpected change in scale can disorientate a user — or, if employed
right, can help grab attention.

In action

Have a look at the mouse and elephant in Figure 2-17. We know their true relative sizes from memory, even though the mouse appears gigantic in comparison.

But what about Figure 2-18, which shows a mouse and a zerk (a made-up animal)? Since we’ve never seen a zerk before, do we know which is truly bigger, or do we assume the scale we see is correct?

How it works

Our knowledge of objects and their actual size plays a hidden role in our perception of depth. Whenever we look at an object, our mind recalls memories of its size, shape, and form. The mind then compares this memory to what we see, using scale to calculate a sense of distance. This quick-and-dirty comparison can sometimes trip us up, however, especially when we encounter something unfamiliar. One psychologist, Bruce Goldstein, offers a cultural example of an anthropologist who met an African bushman living in dense rain forest. The anthropologist led the bushman out to an open plain and showed him some buffalo from afar. The bushman refused to believe the animals were large and insisted they must be insects. But when he was brought up close, he was astounded as they appeared to grow in size, and he attributed it to magic. The dense rain forest, with its limited viewing distances, along with the unfamiliar animal, had distorted his ability to sense scale.

Figure 2-17. An elephant and a mouse — you know from memory that elephants are bigger
Figure 2-18. A zerk and a mouse — since a zerk is made up, you can use only comparison with the mouse to judge size
In real life

Some designers have captured this magic to their benefit. The movie industry has often taken our assumptions of known size and captivated us by breaking them, making the familiar appear monstrous and novel. For example, through a distortion of scale and juxtaposition, we can be fooled into thinking that 50-foot ants are wreaking havoc on small towns and cities.
