These functional studies suggest that the brain operates not like a single calculator of probabilities, but like a network of specialists, all submitting expert probability judgments in their several domains to our conscious, rational intelligence. A person does indeed behave like an economic unit, but less like an individual than like a corporation, with the conscious self (ensconced in its new corner office in the prefrontal cortex) as chief executive. It draws its information not from the outside world, but from other departments and regions, integrating these reports into goals, plans, and ambitions. The executive summaries that come into the conscious mind, like those from department heads, can often seem in competition with one another: the hypothalamus keeps putting in requests for more food, sleep, and sex; the occipital cortex respectfully draws your attention to that object moving on the horizon; the amygdala wishes to remind you that the last time you had oysters, you really regretted it.
In the best-organized world, all departments would work in cooperation for the greater good of the whole personality: emotion and judgment, reflex and deliberation would apportion experience, each according to its ability, leaving the rational mind to get on with strategic initiatives and other executive-corridor matters. But have you ever worked for such a smooth-running organization? Most of us, like most companies, muddle along at moderate efficiency. Memos from the affective system usually get priority treatment, automatic responses remain unexamined, and the conscious mind, like a weak chief executive, tries to take credit for decisions that are, in fact, unconscious desires or emotional reflexes: “Plenty of smokers live to be 90.” “He's untrustworthy because his eyes are too close together.” There may be a best way to be human, but we haven't all found it—which is why, like anxious bosses, our conscious minds often seek out advice from books, seminars, and highly paid consultants.
 
If we must abandon the classical idea of the rational mind as an individual agent making probability judgments in pursuit of maximum utility, we have to accept that its replacement is even more subtle and remarkable: our minds contain countless such agents, each making probability judgments appropriate to its own particular field of operation. When, say, you walk on stage to give a speech, or play the piano, or perform the part of Juliet, you can almost hear the babble of internal experts offering their assessments: “You'll die out there.” “You've done harder things before.” “That person in row two looks friendly.” “More oxygen! Breathe!”—and, in that cordial but detached boardroom tone: “It will all seem worthwhile when you've finished.”
How do they all know this? How do our many internal agents come to their conclusions—and how do they do it on so little evidence? Our senses are not wonderfully sharp; what's remarkable is our ability to draw conclusions from them. Such a seemingly straightforward task as using the two-dimensional evidence from our eyes to master a three-dimensional world is a work of inference that still baffles the most powerful computers.
Vision is less a representation than a hypothesis—a theory about the world. Its counterexamples, optical illusions, show us something about the structure and richness of that theory. For we come up against optical illusions not just in the traditional flexing cubes or converging parallel lines, but in every perspective drawing or photograph. In looking, we are making complex assumptions for which there are almost no data; so we can be wrong. The anthropologist Colin Turnbull brought a Pygmy friend out of the rain forest for the first time; when the man saw a group of cows across a field, he laughed at such funny-shaped ants. He had never had the experience of seeing something far off, so if the cows took up such a small part of his visual field, they must be tiny. The observer is the true creator.
Seeing may require a complex theory, but it's a theory that four-month-old infants can hold and act upon, focusing their attention on where they expect things to be. Slightly older children work with even more powerful theories: that things are still there when you don't see them, that things come in categories, that things and categories can both have names, that things make other things happen, that we make things happen—and that all this is true of the world, not just of me and my childish experience.
In a recent experiment, four-year-olds were shown making sophisticated and extended causal judgments based on the behavior of a “blicket detector”—a machine that did or did not light up depending on whether particular members of a group of otherwise identical blocks were put on top of it. It took only two or three examples for the children to figure out which blocks were blickets—and that typifies human cognition's challenge to the rules of probability. If we were drawing our conclusions based solely on the frequency of events, on association or similarity, we would need a lot of examples, both positive and negative, before we could put forward a hypothesis. Perhaps we would not need von Mises' indefinitely expanding collectives, but we would certainly need more than two or three trials. Even “Student” would throw up his hands at such a tiny sample. And yet, as if by nature, we see, sort, name, and seek for cause.
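To see how two or three trials can be enough once a prior theory about detectors is in hand, here is a minimal sketch in Python. It is not the experimenters' model; the block names, the trials, and the deterministic-detector assumption are invented for illustration.

```python
# A minimal sketch of Bayesian inference over "which blocks are blickets",
# assuming (as the children seem to) a deterministic detector that lights up
# whenever at least one blicket is placed on it. Blocks and trials are invented.
from itertools import product

blocks = ["A", "B", "C"]

# Hypotheses: every possible assignment of blicket / non-blicket to the blocks.
hypotheses = [dict(zip(blocks, bits)) for bits in product([False, True], repeat=len(blocks))]
prior = {i: 1.0 / len(hypotheses) for i in range(len(hypotheses))}

# A trial is (the set of blocks placed on the detector, did it light up?).
trials = [({"A", "B"}, True),   # A and B together: the machine lights
          ({"B"}, False)]       # B alone: nothing happens, so A must be the blicket

def likelihood(hyp, placed, lit):
    """Deterministic detector: it lights exactly when some placed block is a blicket."""
    predicted = any(hyp[b] for b in placed)
    return 1.0 if predicted == lit else 0.0

# Bayesian update: multiply each hypothesis's weight by the likelihood, then renormalize.
posterior = dict(prior)
for placed, lit in trials:
    posterior = {i: p * likelihood(hypotheses[i], placed, lit) for i, p in posterior.items()}
    total = sum(posterior.values())
    posterior = {i: p / total for i, p in posterior.items()}

# Marginal probability that each block is a blicket, after only two placements.
for b in blocks:
    print(f"P({b} is a blicket) = {sum(p for i, p in posterior.items() if hypotheses[i][b]):.2f}")
```

After just two placements the sketch is certain that A is a blicket and B is not, while C stays at even odds; most of the work is done by the assumed theory of how detectors behave, not by the quantity of data.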
Joshua Tenenbaum heads the Computational Cognitive Science Group at MIT. His interest in cognition bridges the divide between human and machine. One of the frustrations of recent technology, otherwise so impressive, has been the undelivered promise of artificial intelligence. Despite the hopes of the 1980s, machines not only do not clean our houses, drive for us, or bring us a drink at the end of a long day; they cannot even parse reality. They have trouble pulling pattern out of a background of randomness: “The thing about human cognition, from 2-D visual cognition on up, is that it cannot be deductive. You aren't making a simple, logical connection with reality, because there simply isn't enough data. All sorts of possible worlds could, for example, produce the same image on the retina. Intuitively, you would say—not that we know the axioms, the absolute rules of the visual world—but that we have a sense of what is likely: a hypothesis.
“In scientific procedure, you are supposed to assume the null hypothesis and test for significance. But the data requirements are large. People don't behave like that: you can see them inferring that one thing causes another when there isn't even enough data to show formally that they are even correlated. The model that can explain induction from few examples requires that we already have a hypothesis—or more than one—through which we test experience.” The model that Tenenbaum and his colleagues favor is a hierarchy of Bayesian probability judgments.
We first considered Bayes' theorem in the context of law and forensic science, where a theory about what happened needed to be considered in the light of each new piece of evidence. The theorem lets you calculate how your belief in a theory would change depending on how likely the evidence appears, given this theory—or given another theory. Bayesian reasoning remains unpopular in some disciplines, both because it requires a prior opinion and because its conclusions remain provisional—each new piece of evidence forces a reexamination of the hypothesis. But that's exactly what learning feels like, from discovering that the moo-cow in the field is the same as the moo-cow in the picture book to discovering in college that all the chemistry you learned at school was untrue. The benefit of the Bayesian approach is that it allows one to make judgments in conditions of relative ignorance, and yet sets up the repeated sequence by which experience can bolster or undermine our suppositions. It fits well with our need, in our short lives, to draw conclusions from slight premises.
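In symbols, the update described here is conventionally written as follows, with H standing for the theory, E for the new piece of evidence, and the rival theory lumped together as not-H (the notation is ours, not the book's):

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
```

The posterior P(H | E) then serves as the prior when the next piece of evidence arrives, which is the repeated, provisional sequence the paragraph describes.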
One reason for Tenenbaum and his group to talk about hierarchical Bayesian induction is that we are able to make separate judgments about several aspects of reality at once, not just the aspect the conscious mind is concentrating on. Take, for instance, the blicket detector. “It is an interesting experiment,” says Tenenbaum, “because you're clearly seeing children make a causal picture of the world—‘how it works,' not just ‘how I see it.' But there's more going on there—the children are also showing they have a theory about how detectors work: these machines are deterministic, they're not random, they respond to blickets even when non-blickets are also present. Behind that, the children have some idea of how causality should behave. They don't just see correlation and infer cause—they have some prior theory of how causes work in general.” And, one assumes, they have theories about how researchers work: asking rational questions rather than trying to trip you up—now, if it was your older sister . . .
This is what is meant by a Bayesian hierarchy: not only are we testing experience in terms of one or more hypotheses, we are applying many different layers of hypothesis. Begin with the theory that this experience is not random; pass up through theories of sense experience, emotional value, future consequences, and the opinions of others; and you find you've reached this individual choice: peach ice cream or chocolate fudge cake? Say you decide on peach ice cream and find, as people often claim, that it doesn't taste as good as you'd expected. You've run into a counterexample—but countering what? How does this hierarchy of hypothesis deal with the exception? How far back is theory disproved?
“In the scientific method, you're supposed to set up your experiment to disprove your hypothesis,” says Tenenbaum, “but that's not how real scientists behave. When you run into a counterexample, your first questions are: ‘Was the equipment hooked up incorrectly? Is there a calibration problem? Is there a flaw in the experimental design?' You rank your hypotheses and look at the contingent ones first, rather than the main one. So if that's what happens when we are explicitly testing an assumption, you can see that a counterexample is unlikely to shake a personal theory that has gone through many Bayesian cycles.”
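A toy calculation, with invented numbers, shows how this ranking falls out of the arithmetic: when a well-confirmed theory sits above a contingent hypothesis about the apparatus, a single counterexample lands mostly on the contingent layer.

```python
# A toy two-layer setup (invented numbers, not Tenenbaum's model): a well-confirmed
# theory sits above a contingent hypothesis about whether the apparatus misbehaved.

p_theory = 0.98      # prior belief in the main theory, after many Bayesian cycles
p_glitch = 0.15      # prior probability that the equipment glitches on this run

def p_disconfirm(theory_true, glitch):
    """Probability of seeing a counterexample under each combination."""
    if glitch:
        return 0.5                     # a flaky apparatus gives junk either way
    return 0.01 if theory_true else 0.9

# Enumerate the four joint possibilities and condition on the counterexample.
joint = {}
for theory_true in (True, False):
    for glitch in (True, False):
        prior = (p_theory if theory_true else 1 - p_theory) * \
                (p_glitch if glitch else 1 - p_glitch)
        joint[(theory_true, glitch)] = prior * p_disconfirm(theory_true, glitch)

total = sum(joint.values())
post_theory = sum(v for (t, g), v in joint.items() if t) / total
post_glitch = sum(v for (t, g), v in joint.items() if g) / total

print(f"P(theory still true | counterexample) = {post_theory:.2f}")   # about 0.83
print(f"P(equipment glitched | counterexample) = {post_glitch:.2f}")  # about 0.76
```

Here the suspicion of a glitch jumps from 15 percent to about 76 percent, while belief in the theory itself slips only from 98 to about 83; after many more confirming cycles the drop would be smaller still.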
Even the most open-minded of us don't keep every assumption in play, ready for falsification; as experience confirms assumptions, we pack our early hypotheses down into deep storage. We discard the incidental and encode the important in its minimum essential information. The conscious becomes the reflex; the hypothetical approaches certainty. Children ask “Whassat?” for about a year and then stop; naming is done—they can pick up future nouns automatically, in passing. They ask “Why?” compulsively for longer—but soon the question becomes rhetorical: “Why won't you let me have a motorcycle? It's because you want to ruin my life, that's why.”
This plasticity, this permanent shaping of cognition by experience, leaves physical traces that show up in brain scans. London taxi drivers have a bigger hippocampus—the center for remembered navigation—than the rest of us; violinists have bigger motor centers associated with the fingers of the left hand. The corporation headquartered in our skulls behaves like any company, allocating resources where they are most needed, concentrating on core business, and streamlining repetitive processes. As on the assembly line, the goal seems to be to drain common actions of the need for conscious thought—to make them appear automatic. In one delightfully subtle experiment, people were asked to memorize the position of a number of chess pieces on a board. Expert chess players could do this much more quickly and accurately than the others—but only if the arrangement of pieces represented a possible game situation. If not, memorizing became a conscious act, and the experts took just as long as duffers to complete it.
This combination of plasticity and a hierarchical model of probabilities may begin to explain our intractable national, religious, and political differences. Parents who have adopted infants from overseas see them grow with remarkable ease into their new culture—yet someone like Henry Kissinger, an immigrant to America at the age of 15, still retains a German accent acquired in less time than he spent at Harvard and the White House. A local accent, a fluent second language, a good musical ear, deep and abiding prejudice—we develop them young or we do not develop them at all; and once we have them they do not easily disappear. After a few cycles of inference, new evidence has little effect.
As Tenenbaum explains, Bayesian induction offers us speed and adaptability at the cost of potential error: “If you don't get the right data or you start with the wrong range of hypotheses, you can get causal illusions just as you get optical ones: conspiracy theories, superstitions. But you can still test them: if you think you've been passing all these exams because of your lucky shirt—and then you start failing—you might say, ‘Aha; maybe it's the socks.' In any case, you're still assuming that something causes it.” It's easy, though, to imagine a life—especially, crucially, a childhood—composed of all the wrong data, so that the mind's assumptions grow increasingly skew to life's averages and, through a gradual hardening of expectation, remain out of kilter forever.
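A small, hand-made illustration (not from the book) of how the wrong range of hypotheses manufactures a cause: the invented exam record below is unrelated to the socks and actually runs against the shirt, yet a posterior restricted to “shirt or socks” must still crown one of them.

```python
# A sketch of a causal illusion from a wrong hypothesis space. In the invented
# record below, passing is uncorrelated with the socks and anti-correlated with
# the shirt; if every hypothesis on the table names SOME cause, one still "wins".

# Each record: (wore the lucky shirt?, wore the lucky socks?, passed?)
exams = [
    (True,  True,  True),  (True,  True,  False),
    (True,  False, False), (True,  False, False),
    (False, True,  True),  (False, True,  False),
    (False, False, True),  (False, False, True),
]

def causal_likelihood(garment, exams):
    """P(record | 'this garment causes passing'): pass 0.9 with it, 0.3 without."""
    p = 1.0
    for shirt, socks, passed in exams:
        wore = shirt if garment == "shirt" else socks
        p_pass = 0.9 if wore else 0.3
        p *= p_pass if passed else 1.0 - p_pass
    return p

def chance_likelihood(exams):
    """P(record | 'nothing causes it'): pass or fail is a coin flip."""
    return 0.5 ** len(exams)

def posterior(likelihoods):
    total = sum(likelihoods.values())
    return {h: round(v / total, 2) for h, v in likelihoods.items()}

# With only causal hypotheses allowed, the socks get the credit (about 0.95),
# even though they are completely uncorrelated with passing.
print(posterior({"shirt": causal_likelihood("shirt", exams),
                 "socks": causal_likelihood("socks", exams)}))

# Let "it's just chance" compete, and it takes most of the posterior (about 0.91).
print(posterior({"shirt": causal_likelihood("shirt", exams),
                 "socks": causal_likelihood("socks", exams),
                 "chance": chance_likelihood(exams)}))
```

The illusion is not in the arithmetic, which is carried out faithfully, but in what was never allowed onto the list of hypotheses in the first place.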
It is a deep tautology that the mad lack common sense—since common sense is very much more than logic. The mentally ill often reason too consistently, but from flawed premises: After all, if the CIA were indeed trying to control your brain with radio waves, then a hat made of tinfoil might well offer protection. What is missing, to different degrees in different ailments, is precisely a sense of probability: Depression discounts the chance of all future pleasures to zero; mania makes links the sense data do not justify. Some forms of brain damage separate emotional from rational intelligence, reducing the perceived importance of future reward or pain, leading to reckless risk-taking. Disorders on the autistic spectrum prevent our gauging the likely thoughts of others; the world seems full of irrational, grimacing beings who yet, through some telepathic power, comprehend one another's behavior.
