The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology


 

 

 

Darwin, perhaps sensing the weakness of his main theory of the moral sentiments, threw in a second theory for good measure. During human evolution, he wrote in The Descent of Man, "as the reasoning powers and foresight ... became improved, each man would soon learn from experience that if he aided his fellow-men, he would commonly receive aid in return. From this low motive he might acquire the habit of aiding his fellows; and the habit of performing benevolent actions certainly strengthens the feeling of sympathy, which gives the first impulse to benevolent actions. Habits, moreover, followed during many generations probably tend to be inherited."
2

That last sentence, of course, is wrong. We now know that habits are passed from parent to child by instruction or example, not via the genes. In fact, no life experiences (except, say, exposure to radiation) affect the genes handed down to offspring. The very beauty of Darwin's theory of natural selection, in its strict form, was that it didn't require the inheritance of acquired traits, as had previous evolutionary theories, such as Jean-Baptiste de Lamarck's. Darwin saw this beauty, and stressed mainly the pure version of his theory. But he was willing, especially as he grew older, to invoke more dubious mechanisms to solve especially nettlesome issues, such as the origin of the moral sentiments.

In 1966, George Williams suggested a way to make Darwin's musings about the evolutionary value of mutual assistance more useful: take out not only the last sentence, but also the part about "reasoning" and "foresight" and "learning." In Adaptation and Natural Selection, Williams recalled Darwin's reference to the "low motive" of doing favors in hopes of reciprocation and wrote: "I see no reason why a conscious motive need be involved. It is necessary that help provided to others be occasionally reciprocated if it is to be favored by natural selection. It is not necessary that either the giver or the receiver be aware of this." He continued, "Simply stated, an individual who maximizes his friendships and minimizes his antagonisms will have an evolutionary advantage, and selection should favor those characters that promote the optimization of personal relationships."
3

Williams's basic point (which Darwin certainly understood, and stressed in other contexts)4 is one we've encountered before. Animals, including people, often execute evolutionary logic not via conscious calculation, but by following their feelings, which were designed as logic executers. In this case, Williams suggested, the feelings might include compassion and gratitude. Gratitude can get people to repay favors without giving much thought to the fact that that's what they're doing. And if compassion is felt more strongly for some kinds of people — people to whom we're grateful, for example — it can lead us, again with scarce consciousness of the fact, to repay kindness.

Williams's terse speculations were transmuted into a full-fledged theory by Robert Trivers. In 1971, exactly one hundred years after Darwin's allusion to reciprocal altruism appeared in The Descent of Man, Trivers published a paper titled "The Evolution of Reciprocal Altruism" in The Quarterly Review of Biology. In the paper's abstract, he wrote that "friendship, dislike, moralistic aggression, gratitude, sympathy, trust, suspicion, trustworthiness, aspects of guilt, and some forms of dishonesty and hypocrisy can be explained as important adaptations to regulate the altruistic system." Today, more than two decades after this nervy pronouncement, there is a diverse and still-growing body of evidence to support it.

 

 

GAME THEORY AND RECIPROCAL ALTRUISM

 

If Darwin were put on trial for not having conceived and developed the theory of reciprocal altruism, one defense would be that he came from an intellectually disadvantaged culture. Victorian England lacked two tools that together form a uniquely potent analytical medium: game theory and the computer.

Game theory was developed during the 1920s and thirties as a way to study decision making.5 It has become popular in economics and other social sciences, but it suffers from a reputation for being a bit too, well, cute. Game theorists cleverly manage to make the study of human behavior neat and clean, but they pay a high price in realism. They sometimes assume that what people pursue in life can be tidily summarized in a single psychological currency — pleasure, or happiness, or "utility"; and they assume, further, that it is pursued with unwavering rationality. Any evolutionary psychologist can tell you that these assumptions are faulty. Humans aren't calculating machines; they're animals, guided somewhat by conscious reason but also by various other forces. And long-term happiness, however appealing they may find it, is not really what they're designed to maximize.

On the other hand, humans are designed by a calculating machine, a highly rational and coolly detached process. And that machine does design them to maximize a single currency — total genetic proliferation, inclusive fitness.
6

Of course, the designs don't always work. Individual organisms often fail, for various reasons, to transmit their genes. (Some are bound to fail. That is the reason evolution so assuredly happens.) In the case of human beings, moreover, the design work was done in a social environment quite different from the current environment. We live in cities and suburbs and watch TV and drink beer, all the while being pushed and pulled by feelings designed to propagate our genes in a small hunter-gatherer population. It's no wonder that people often seem not to be pursuing any particular goal — happiness, inclusive fitness, whatever — very successfully.

Game theorists, then, may want to follow a few simple rules when applying their tools to human evolution. First, the object of the game should be to maximize genetic proliferation. Second, the context of the game should mirror reality in the ancestral environment, an environment roughly like a hunter-gatherer society. Third, once the optimal strategy has been found, the experiment isn't over. The final step — the payoff — is to figure out what feelings would lead human beings to pursue that strategy. Those feelings, in theory, should be part of human nature; they should have evolved through generations and generations of the evolutionary game.

Trivers, at the suggestion of William Hamilton, employed a classic game called the prisoner's dilemma. Two partners in crime are being interrogated separately and face a hard decision. The state lacks the evidence to convict them of the grave offense they committed but does have enough evidence to convict both on a lesser charge — with, say, a one-year prison term for each. The prosecutor, wanting a harsher sentence, pressures each man individually to confess and implicate the other. He says to each: If you confess but your partner doesn't, I'll let you off scot-free and use your testimony to put him away for ten years. The flip side of this offer is a threat: If you don't confess but your partner does, you go to prison for ten years. And if you confess and it turns out your partner confesses too, I'll put you both away, but only for three years.
7

If you were in the shoes of either prisoner, and weighed your options one by one, you would almost certainly decide to confess — to "cheat" on your partner. Suppose, first of all, that your partner cheats on you. Then you're better off cheating: you get three years in prison, as opposed to the ten you'd get if you stayed mum while he confessed. Now, suppose he doesn't cheat on you. You're still better off cheating: by confessing while he stays mum, you go free, whereas you'd get one year if you too kept your silence. Thus, the logic seems irresistible: betray your partner.
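The case-by-case reasoning above can be sketched in a few lines of code. This is an illustrative payoff table, not anything from the original text; the move labels "C" (stay silent) and "D" (confess) are conventional game-theory shorthand.

```python
# Prison terms for (you, partner), indexed by (your move, partner's move).
# "C" = stay silent (cooperate with your partner), "D" = confess (defect).
YEARS = {
    ("C", "C"): (1, 1),    # both stay mum: one year each on the lesser charge
    ("C", "D"): (10, 0),   # you stay mum, he confesses: ten years for you
    ("D", "C"): (0, 10),   # you confess, he stays mum: you walk free
    ("D", "D"): (3, 3),    # both confess: three years each
}

def best_reply(partner_move):
    """The move that minimizes your own prison time against a fixed partner move."""
    return min(["C", "D"], key=lambda me: YEARS[(me, partner_move)][0])

# Whichever move the partner makes, confessing is the better reply:
assert best_reply("C") == "D"
assert best_reply("D") == "D"
```

Confession is what game theorists call a dominant strategy: it is the better reply whether the partner stays silent or not, which is exactly the "irresistible logic" of the paragraph above.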

Yet if both partners follow this nearly irresistible logic, and cheat on each other, they end up with three years in jail, whereas both could have gotten off with one year had they stayed mutually faithful and kept their mouths shut. If only they were allowed to communicate and reach an agreement — then cooperation could emerge, and both would be better off. But they aren't, so how can cooperation emerge?

The question roughly parallels the question of how dumb animals, which can't make promises of repayment, or, for that matter, grasp the concept of repayment, could evolve to be reciprocally altruistic. Betraying a partner in crime while he stays faithful is like an animal's benefiting from an altruistic act and never returning the favor. Mutual betrayal is like neither animal's extending a favor in the first place: though both might benefit from reciprocal altruism, neither will risk getting burned. Mutual fidelity is like a single successful round of reciprocal altruism — a favor is extended and returned. But again: Why extend the favor if there's no guarantee of return?

The match between model and reality isn't perfect.8 With reciprocal altruism there is a time lag between the altruism and its reciprocation, whereas the players in a prisoner's dilemma commit themselves concurrently. But this is a distinction without much of a difference. Because the prisoners can't communicate about their concurrent decisions, each is in the situation faced by prospectively altruistic animals: unsure whether any friendly overture will be matched. Further, if you keep pitting the same players against one another, game after game after game — an "iterated prisoner's dilemma" — each can refer to the other's past behavior in deciding how to act toward him in the future. Thus each player may reap in the future what he has sown in the past — just as with reciprocal altruism. All in all, the match between model and reality is quite good. The logic that would lead to cooperation in an iterated prisoner's dilemma is fairly precisely the logic that would lead to reciprocal altruism in nature. The essence of that logic, in both cases, is non-zero-sumness.
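The iterated game can be sketched as a simulation. The two strategies below are illustrative stand-ins (one that never cooperates, one that mirrors its opponent's last move); the prison terms reuse the numbers from the interrogation story above.

```python
# A minimal iterated prisoner's dilemma, assuming the prison terms above.
YEARS = {("C", "C"): (1, 1), ("C", "D"): (10, 0),
         ("D", "C"): (0, 10), ("D", "D"): (3, 3)}

def always_defect(opponent_history):
    """Betray the partner every time, regardless of history."""
    return "D"

def reciprocate(opponent_history):
    """Cooperate first, then mirror whatever the opponent did last round."""
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    """Pit two strategies against each other; return total years served by each."""
    hist_a, hist_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)  # each sees the other's past
        ya, yb = YEARS[(a, b)]
        years_a, years_b = years_a + ya, years_b + yb
        hist_a.append(a)
        hist_b.append(b)
    return years_a, years_b

# Two reciprocators settle into mutual silence: 10 years each over 10 rounds.
assert play(reciprocate, reciprocate) == (10, 10)
# Two habitual defectors each rack up 30 years over the same 10 rounds.
assert play(always_defect, always_defect) == (30, 30)
```

The point of the sketch is only that repetition lets past behavior inform future play, so a strategy of conditional cooperation can outperform mutual betrayal over many rounds.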

 

 

NON-ZERO-SUMNESS

 

Suppose you are a chimp that has just killed a young monkey and you give some meat to a fellow chimp that has been short of food lately. Let's say you give him five ounces, and let's call that a five-point loss for you. Now, in an important sense, the other chimp's gain is larger than your loss. He was, after all, in a period of unusual need, so the real value of food to him — in terms of its contribution to his genetic proliferation — was unusually high. Indeed, if he were human, and could think about his plight, and were forced to sign a binding contract, he might rationally agree to repay five ounces of meat with, say, six ounces of meat right after payday next Friday. So he gets six points in this exchange, even though it cost you only five.

This asymmetry is what makes the game non-zero-sum. One player's gain isn't canceled out by the other player's loss. The essential feature of non-zero-sumness is that, through cooperation, or reciprocation, both players can be better off.9 If the other chimp repays you at a time when meat is bountiful for him and scarce for you, then he sacrifices five points and you get six points. Both of you have emerged from the exchange with a net benefit of one point. A series of tennis sets, or of innings, or of golf holes eventually produces only one winner. The prisoner's dilemma, being a non-zero-sum game, is different. Both players can win if they cooperate. If caveman A and caveman B combine to hunt game that one man alone can't kill, both cavemen's families get a big meal; if there's no such cooperation, neither family does.
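The point arithmetic of the chimp example is worth making explicit. The point values are the illustrative ones from the text, nothing more:

```python
# The food-sharing ledger from the chimp example (points are illustrative).
# Giving meat when you have plenty costs the giver 5 points; receiving it
# in a time of need is worth 6, because food matters more when you are short.
COST_TO_GIVER = 5
VALUE_TO_NEEDY_RECEIVER = 6

# One full round of reciprocal altruism: each chimp gives once (when flush)
# and receives once (when needy).
net_per_chimp = VALUE_TO_NEEDY_RECEIVER - COST_TO_GIVER
assert net_per_chimp == 1  # both players come out ahead: non-zero-sum
```

The asymmetry between cost and value is doing all the work: without it, one chimp's gain would exactly cancel the other's loss and the exchange would be zero-sum.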

Division of labor is a common source of non-zero-sumness: you become an expert hide-splicer and give me clothes, I carve wood and give you spears. The key here — and in the chimpanzee example above, as well as in much non-zero-sumness — is that one animal's surplus item can be another animal's rare and precious good. It happens all the time. Darwin, recalling an exchange of goods with the Fuegian Indians, wrote of "both parties laughing, wondering, gaping at each other; we pitying them, for giving us good fish and crabs for rags, &c.; they grasping at the chance of finding people so foolish as to exchange such splendid ornaments for a good supper."
10

To judge by many hunter-gatherer societies, division of economic labor wasn't dramatic in the ancestral environment. The most common commodity of exchange, almost surely, was information. Knowing where a great stock of food has been found, or where someone encountered a poisonous snake, can be a matter of life or death. And knowing who is sleeping with whom, who is angry at whom, who cheated whom, and so on, can inform social maneuvering for sex and other vital resources. Indeed, the sorts of gossip that people in all cultures have an apparently inherent thirst for — tales of triumph, tragedy, bonanza, misfortune, extraordinary fidelity, wretched betrayal, and so on — match up well with the sorts of information conducive to fitness.11 Trading gossip (the phrase couldn't be more apt) is one of the main things friends do, and it may be one of the main reasons friendship exists.

Unlike food or spears or hides, information is shared without being actually surrendered, a fact that can make the exchange radically non-zero-sum.12 Of course, sometimes information is of value only if hoarded. But often that's not the case. One Darwin biographer has written that, after scientific discussions between Darwin and his friend Joseph Hooker, "each vied with the other in claiming that the benefits he had received ... far outweighed whatever return he might have been able to make."
13

Non-zero-sumness is, by itself, not enough to explain the evolution of reciprocal altruism. Even in a non-zero-sum game, cooperation doesn't necessarily make sense. In the food-sharing example, though you gain one point from a single round of reciprocal altruism, you gain six points by cheating — accepting generosity and never returning it. So the lesson seems to be: if you can spend your life exploiting people, by all means do; the value of cooperation pales by comparison. Further, if you can't find people to exploit, cooperation still may not be the best strategy. If you're surrounded by people who are always trying to exploit you, then reciprocal exploitation is the way to cut your losses. Whether non-zero-sumness actually fuels the evolution of reciprocal altruism depends heavily on the prevailing social environment. The prisoner's dilemma will have to do more than simply illustrate non-zero-sumness if it is to be of much use here.
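The dependence on the social environment can be sketched with an expected-payoff calculation. The four per-encounter payoffs below reuse the illustrative point values from the food-sharing example (gain 1 from a completed exchange, gain 6 by exploiting, lose 5 by being exploited); the fractions of cooperators are hypothetical.

```python
# Per-encounter payoffs, in the points of the chimp example (illustrative):
MUTUAL_COOPERATION = 1   # favor extended and returned: +1 each
EXPLOIT = 6              # accept generosity and never repay it
BE_EXPLOITED = -5        # extend a favor that is never returned
MUTUAL_DEFECTION = 0     # no favors exchanged at all

def expected_payoff(i_cooperate, fraction_cooperators):
    """Average points per encounter, given how common cooperators are."""
    p = fraction_cooperators
    if i_cooperate:
        return p * MUTUAL_COOPERATION + (1 - p) * BE_EXPLOITED
    return p * EXPLOIT + (1 - p) * MUTUAL_DEFECTION

# Surrounded by exploiters, offering favors is a losing move:
assert expected_payoff(True, 0.0) < expected_payoff(False, 0.0)
# Even among reliable cooperators, per-encounter exploitation pays more:
assert expected_payoff(True, 1.0) < expected_payoff(False, 1.0)
```

On these one-shot numbers, defection always looks better, which is precisely why the prisoner's dilemma must do more than illustrate non-zero-sumness: something about repeated, discriminating interaction has to change the calculation.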
