In Pursuit of Silence
by George Prochnik

Heffner broke off our Socratic dialogue to announce that a family of deer had just stepped in front of her window despite the fact that a train was blasting past on the nearby rails (I could faintly hear its roar). She told me that she also regularly saw groundhogs and rabbits, along with the odd fox, the “dreaded muskrat,” and “platoons of evil raccoons.” They’re habituated to trains, she said, “so the animals don’t get frightened away—even though the noise is terrible.” Almost all of the Heffners’ research has been directed toward exploring an overarching, two-part theory: animals hear what they need to hear in order to survive; animals with the same kinds of lifestyles hear the same kinds of things. As it turns out, this amounts to a theory wherein skull size is destiny and sound localization is the raison d’être of the auditory apparatus.

In the 1960s, the Heffners were both students of the evolutionary neuroscientist R. Bruce Masterton. Henry co-authored and Rickye helped with statistical analysis on Masterton’s first landmark paper, “The Evolution of Human Hearing,” a comparative study of the hearing of eighteen mammals. They discovered that if you could measure the space between an animal’s ears, you could predict its high-frequency hearing with tremendous accuracy. This is because we graph the location of a sound source in space on the basis of the differences in the way sound waves strike each of our ears. The amount of inter-ear distance available to an animal will be the main factor determining what cues it can leverage to track which way a beating wing or falling paw is moving.

Since that study, the Heffners have looked at close to seventy additional species, and while the theory has been expanded and tweaked, the correlation remains unchanged: if you’ve got a big head, your high-frequency hearing is going to be less sensitive than if you have a small head. Most mammals take advantage of both time cues and “spectral differences”—changes in the intensity of the sound striking each ear successively—to identify the position of a sound source.

Temporal signals are straightforward. Given sufficient distance between the ears, a sound on one side is going to hit one ear before the other. Time delay is the strongest, most reliable cue. “But,” Heffner noted, “not everybody has that luxury.” I found myself patting my hands to the sides of my head as she spoke to try and gauge my own head span. “If you’re a mouse, think about how far apart the ears are,” she said. “Sound travels from one ear to the next in perhaps thirty microseconds and the nervous system just can’t calculate a sound source being off at three o’clock or two o’clock in that interval.” So what does the mouse do? “When the chips are down you use what you’ve got—intensity. There are plenty of good reasons for being a small animal and you want to occupy that niche as effectively as you can. So you hear really high frequencies to make as much use as you can of the intensity signals.”
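The arithmetic behind Heffner’s thirty-microsecond figure is easy to sketch. The snippet below is a rough straight-line estimate (the ear separations are illustrative assumptions, and real interaural delays also depend on how sound bends around the head):

```python
# Maximum interaural time difference (ITD): the longest possible gap
# between a sound's arrival at one ear and at the other, which occurs
# when the source sits directly to one side. This straight-line figure
# ignores diffraction around the skull, so treat it as order-of-magnitude.

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 °C

def max_itd_microseconds(ear_separation_m: float) -> float:
    """Time for sound to cross the distance between the ears, in microseconds."""
    return ear_separation_m / SPEED_OF_SOUND * 1e6

# Illustrative (assumed) ear separations:
print(f"mouse, ~1 cm between the ears:  {max_itd_microseconds(0.01):.0f} microseconds")
print(f"human, ~20 cm between the ears: {max_itd_microseconds(0.20):.0f} microseconds")
```

A centimetre-wide head yields a delay of about 29 microseconds, in line with Heffner’s estimate, while a human head gives the nervous system roughly twenty times as long to work with.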

The usefulness of the “sound shadow” cast by the head in localizing sound has been recognized since the mid-1870s. It was then that Lord Rayleigh, the indefatigable English physicist who discovered argon and explained the irregular flight of tennis balls, stood with his eyes shut at the center of a lawn in Cambridge surrounded by a ring of assistants brandishing tuning forks. When one assistant set his tuning fork vibrating at a sufficient intensity, Rayleigh could accurately identify that man’s position in the circle. If sound-wave cycles are sufficiently close together, he found, sound is louder in the first ear it hits, since the head blocks out the upper frequencies of the waves en route to the second ear. Rayleigh dubbed this variation in intensity the “binaural ratio,” with “binaural” signifying the employment of both ears. In the last century, the extent of this shadowing was calculated electronically. At one thousand cycles, the ear closest to the sound source receives the wave at a level eight decibels more intense than that of the farther ear. At ten thousand cycles this ratio jumps to a thirty-decibel difference.

Spectral difference is vital for creatures with tiny skulls, but time delay doesn’t work at low frequencies. When a sound wave is long enough, the whole wave can just “hang ten” around the skull and strike the second ear without being blocked at all. Masterton predicted that the smaller the skull, the higher the upper range of an animal’s hearing would be.
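Both Rayleigh’s shadowing figures and the “hang ten” effect come down to how a sound’s wavelength compares with the width of the skull. A quick sketch (the 20 cm head width is an illustrative assumption):

```python
# Wavelength versus head size: a head casts a meaningful "sound shadow"
# only when the wavelength is comparable to or shorter than the head
# itself; longer waves diffract around the skull and reach the far ear
# almost unattenuated.

SPEED_OF_SOUND = 343.0  # metres per second in air

def wavelength_cm(frequency_hz: float) -> float:
    """Wavelength of a pure tone in air, in centimetres."""
    return SPEED_OF_SOUND / frequency_hz * 100

for hz in (100, 1_000, 10_000):
    print(f"{hz:>6} Hz -> wavelength {wavelength_cm(hz):6.1f} cm")
# A ~20 cm human head barely obstructs the 343 cm wave at 100 Hz,
# partially blocks the 34 cm wave at 1,000 Hz (the 8 dB shadow quoted
# in the text), and strongly blocks the 3.4 cm wave at 10,000 Hz (the
# 30 dB shadow).
```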

Exceptions to the correlation between skull size and high-frequency hearing appear most notably in the case of pea-headed subterranean creatures like pocket gophers, which have poor high-frequency hearing. In these instances, the Heffners argue, having adapted themselves to “the one-dimensional world of an underground habitat,” sound localization becomes an empty exercise. Throughout the animal kingdom, the selective pressures on hearing concern the need to detect the nature and location of what’s out there snapping the twig. Rickye Heffner argues that much of the time when we try to use noise either to protect or frighten animals, we end up doing so on the basis of human psychology rather than evolutionary realities. Particular bugbears for her in this regard are ultrasonic deer whistles and flea collars. “Like the sound of a deer whistle on a truck is going to scare an animal more than the sound of the truck!” she scoffed. Our awareness that a sound is pitched outside our hearing range triggers in us an association with danger, she contends. Flea collars blast eighty decibels directly beneath the cat’s ear at a frequency the felines hear perfectly well and quite possibly find agonizing—despite the fact that there’s no evidence the fleas themselves even perceive, let alone are affected by, the sound. “There’s one born every minute,” Heffner sighed, “and most of them seem to own cats.”

THE EVOLUTIONARY PURSUIT OF SILENCE

A cross-section of the ear suggests an improbable patent application: a bagpipe, several models for the St. Louis Arch, and a couple of snapped rubber bands grafted onto a sea snail. Until recently, the ear was understood to operate on a model that might be abridged to the CBCs of hearing—channel, boost, convert—with those three steps corresponding to the external, middle, and inner ear respectively.
The outer ear channels and condenses sound waves from outside so that those waves strike against the eardrum. The eardrum then sends the mechanical energy of the sound into the middle ear, with its three tiny bones: the hammer, anvil, and stirrup. The wave amplifies as it passes along these vibration-friendly ossicles, the last of which is pressed flush against the oval window of the liquid-filled coil of the cochlea. This entrance to the cochlea marks the threshold of the inner ear. At the point where the stirrup goads the oval window, the pressure of the original force will have multiplied dramatically. The energized vibration now ripples into the fluid in the cochlea, triggering thousands of hair cells into motion. The movement of those cells transduces the vibration into an electrical signal that enters the auditory nerve, which, in turn, sends the sound into the brain. But this is not the whole story.
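The middle ear’s “boost” step can be put in rough numbers. Using commonly cited textbook values for the human ear (the specific figures below are assumptions, not from the text): the eardrum’s effective vibrating area is roughly seventeen times that of the oval window, and the ossicle chain adds about a 1.3x lever advantage. Since pressure is force per unit area, concentrating the eardrum’s force onto the much smaller oval window multiplies the pressure:

```python
import math

# Back-of-envelope middle-ear pressure gain, using commonly cited
# textbook values for the human ear (assumed, not from the text).
EARDRUM_AREA_MM2 = 55.0      # effective vibrating area of the eardrum
OVAL_WINDOW_AREA_MM2 = 3.2   # stirrup footplate on the oval window
LEVER_RATIO = 1.3            # mechanical advantage of the ossicle chain

# The same force concentrated on a smaller area means higher pressure,
# multiplied further by the ossicles' lever action.
pressure_gain = (EARDRUM_AREA_MM2 / OVAL_WINDOW_AREA_MM2) * LEVER_RATIO
gain_db = 20 * math.log10(pressure_gain)  # pressure ratio to decibels

print(f"pressure gain ≈ {pressure_gain:.0f}x (about {gain_db:.0f} dB)")
```

Under these assumed figures the stirrup delivers roughly twenty-two times the eardrum’s pressure to the cochlear fluid, which is what “multiplied dramatically” looks like in decibels: around 27 dB before the inner ear adds its own gain.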

Complications with the model begin at the innocent-seeming flaps on the sides of our heads. For all the confidence produced by three decades of work demonstrating correlation between skull size and high-frequency hearing, the visible ear throws a spanner into Heffner’s work. When she spoke about it, her voice took on a bitterness otherwise reserved for deer whistles and flea collars. “The pinnas,” she said, using the technical name for the external ear, “act as independent sound shadowers. They alter the degree of the sound shadow cast by the skull. This is part of their work as directional amplifiers. Animals point their ears at something and then they can hear it better.” But the extent of the pinna’s impact as a frontline amplification system continues to defy researchers. “The head is basically a lumpy sphere with two big funnels on it,” Heffner said. “Those funnels intensify a sound as it drops down toward the eardrums. But we’ve never tried to measure pinna dimensions because—what do you measure? Ideally, you’d get some of these animals and take several of their pinnas and measure the physical properties of what the pinnas do to sound coming into the sound canal, but it’s not remotely practical. Pinnas are very complicated shapes. Some are kind of flat. Some have big openings. There are all kinds of folds. And while animals with big heads generally have big pinnas, that’s not always the case. It’s known that the external folds help to augment sound and create a difference between what’s heard in each ear. So if you have a sound off to the right quadrant somewhere, and you’re a little bat with big ears full of fancy convoluted folds, sound coming in is going to have very different features than it does for a cow.”

The mysteries of the inner ear are still more pronounced. Jim Hudspeth, who works at Rockefeller University studying the molecular and biological basis of hearing, has shown that the motion of the hair cells not only converts the mechanical wave into an electrical signal that can be read by the auditory nerve in the brain; the various reactions set in motion by the oscillation of the hair cells also serve to magnify the sound. A huge “power gain” takes place, he says, within the inner ear itself. How exactly this happens is still not understood.

Regardless, we now know that all three parts of the ear play a dynamic role in boosting sound. If our auditory mechanism is working normally, Hudspeth told me, by the time we realize we’ve heard a sound it’s a hundred times louder than it was before it began bouncing around inside our ears. When you consider how little energy is released by a pin falling onto the floor, the amplification power of our ears is clear. Indeed, since so much of what the ear accomplishes involves making noise louder, it’s unsurprising that a majority of hearing problems represent an inability not to perceive sound but to properly amplify it.

People like to distinguish between the ears and the eyes by saying that the latter have lids. But, in fact, the amplification function of the middle ear is complemented by a series of equivalent mechanisms that mitigate the effects of a loud noise. Our middle-ear bones have small muscles attached to them that are part of a reflex to reduce the vibration of the bones under the impact of a loud sound. One of them jerks on the eardrum itself so that it tightens and vibrates less violently. Another yanks the stirrup back from the oval window. The eustachian tube performs a complicated maneuver to equalize air pressure. But why amplify to begin with if you’re only going to end up deadening the noise?

Because in nature, there aren’t very many loud sounds.

“Most animals don’t announce their presence if they can help it,” Heffner told me. “Even the famous roar of the lion is an exceptional event to threaten an intruder.” For the most part, animals move through space as quietly as possible. Today people make noise to reassert their importance, but for our predecessors silence was almost always the secret to survival. “That’s why kids today are at such risk,” Heffner added. “I can guarantee you they’re going to have hearing loss, because when you’re listening to headphones you don’t realize the volume. Continuing loud sounds put a stress on the auditory system because the middle-ear reflexes are constantly trying to protect you from them … If you’ve got a generally noisy environment, you don’t hear the twig snap. But really loud sounds are just going to knock you off your perch no matter how preoccupied you are.”

In 1961, Dr. Samuel Rosen, a New York ear specialist, wanted to measure the hearing of a people who had not “become adjusted to the constant bombardment of modern mechanization.” Rosen went off to visit the Mabaan tribe, some 650 miles southeast of Khartoum, in what was then one of the most noise-free regions of Africa. The Mabaans were notable among their neighbors for having neither drums nor guns. He went armed with 1,000 bottle caps, which he planned to distribute to tribe members as rewards for their participation in audio tests: Mabaan women, he had heard, liked to fix the caps to their ears and make necklaces from them.

Rosen discovered that the hearing of Mabaan tribe members at the age of seventy was often superior to that of Americans in their twenties. Some 53 percent of Mabaan villagers could discern sounds that only 2 percent of New Yorkers could hear. “Two Mabaans standing 300 feet apart, or the length of a football field, can carry on a conversation in a soft voice with their backs turned,” he reported. Rosen attributed the extraordinary preservation of hearing among the Mabaans to both their low-fat diet—which along with eliminating heart disease kept the cochlea well nourished—and the fact that they heard so little noise. The imbalance between noise and silence to which most of us who don’t live in remote tribal areas are subject dramatically accelerates the aging process of our hearing.

Without having recourse to the hearing power of the Mabaans, there are still a few groups of people who use their ears in a manner consistent with the evolutionary pursuit of silence.
