Antifragile: Things That Gain from Disorder
Nassim Nicholas Taleb
Time for American policy makers to understand that the more they intervene in other countries for the sake of stability, the more they bring instability (except for emergency-room-style cases). Or perhaps time to reduce the role of policy makers in policy affairs.
One of life’s packages: no stability without volatility.
My definition of modernity is humans’ large-scale domination of the environment, the systematic smoothing of the world’s jaggedness, and the stifling of volatility and stressors.
Modernity corresponds to the systematic extraction of humans from their randomness-laden ecology—physical and social, even epistemological. Modernity is not just the postmedieval, postagrarian, and postfeudal historical period as defined in sociology textbooks. It is rather the spirit of an age marked by rationalization (naive rationalism), the idea that society is understandable, hence must be designed, by humans. With it was born statistical theory, hence the beastly bell curve. So was linear science. So was the notion of “efficiency”—or optimization.
Modernity is a Procrustean bed, good or bad—a reduction of humans to what appears to be efficient and useful. Some aspects of it work: Procrustean beds are not all negative reductions. Some may be beneficial, though these are rare.
Consider the life of the lion in the comfort and predictability of the Bronx Zoo (with Sunday afternoon visitors flocking to look at him in a combination of curiosity, awe, and pity) compared to that of his cousins in freedom. We, at some point, had free-range humans and free-range children before the advent of the golden period of the soccer mom.
We are moving into a phase of modernity marked by the lobbyist, the very, very limited liability corporation, the MBA, sucker problems, secularization (or rather reinvention of new sacred values like flags to replace altars), the tax man, fear of the boss, spending the weekend in interesting places and the workweek in a putatively less interesting one, the separation of “work” and “leisure” (though the two would look identical to someone from a wiser era), the retirement plan, argumentative intellectuals who would disagree with this definition of modernity, literal thinking, inductive inference, philosophy of science, the invention of social science, smooth surfaces, and egocentric architects. Violence is transferred from individuals to states. So is financial indiscipline. At the center of all this is the denial of antifragility.
There is a dependence on narratives, an intellectualization of actions and ventures. Public enterprises and functionaries—even employees of large corporations—can only do things that seem to fit some narrative, unlike businesses that can just follow profits, with or without a good-sounding story. Remember that you need a name for the color blue when you build a narrative, but not in action—the thinker lacking a word for “blue” is handicapped; not the doer. (I’ve had a hard time conveying to intellectuals the intellectual superiority of practice.)
Modernity widened the difference between the sensational and the relevant—in a natural environment the sensational is, well, sensational for a reason; today we depend on the press for such essentially human things as gossip and anecdotes and we care about the private lives of people in very remote places.
Indeed, in the past, when we were not fully aware of antifragility and self-organization and spontaneous healing, we managed to respect these properties by constructing beliefs that served the purpose of managing and surviving uncertainty. We imparted improvements to the agency of god(s). We may have denied that things can take care of themselves without some agency. But it was the gods that were the agents, not Harvard-educated captains of the ship.
So the emergence of the nation-state falls squarely into this progression—the transfer of agency to mere humans. The story of the nation-state is that of the concentration and magnification of human errors. Modernity starts with the state monopoly on violence, and ends with the state’s monopoly on fiscal irresponsibility.
We will discuss next two central elements at the core of modernity. Primo, in Chapter 7, naive interventionism, with the costs associated with fixing things that one should leave alone. Secundo, in Chapter 8 and as a transition to Book III, this idea of replacing God and the gods running future events with something even more religiously fundamentalist: the unconditional belief in the idea of scientific prediction regardless of the domain, the aim to squeeze the future into numerical reductions whether reliable or unreliable. For we have managed to transfer religious belief into gullibility for whatever can masquerade as science.
1 The financier George Cooper has revived the argument in The Origin of Financial Crises—the argument is so crisp that an old trader friend, Peter Nielsen, has distributed it to every person he knows.
2 Note these double standards on the part of Western governments. As a Christian, parts of Saudi Arabia are off-limits to me, as I would violate the purity of the place. But no public part of the United States or Western Europe is off-limits to Saudi citizens.
A tonsillectomy to kill time—Never do today what can be left to tomorrow—Let’s predict revolutions after they happen—Lessons in blackjack
Consider this need to “do something” through an illustrative example. In the 1930s, 389 children were presented to New York City doctors; 174 of them were recommended tonsillectomies. The remaining 215 children were again presented to doctors, and 99 were said to need the surgery. When the remaining 116 children were shown to yet a third set of doctors, 52 were recommended the surgery. Note that there is morbidity in 2 to 4 percent of the cases (today, not then, as the risks of surgery were very bad at the time), and that a death occurs in about one in every 15,000 such operations, and you get an idea about the break-even point between medical gains and detriment.
This story allows us to witness probabilistic homicide at work. Every child who undergoes an unnecessary operation has a shortening of her life expectancy. This example not only gives us an idea of harm done by those who intervene, but, worse, it illustrates the lack of awareness of the need to look for a break-even point between benefits and harm.
Let us call this urge to help “naive interventionism.” Next we examine its costs.
In the case of tonsillectomies, the harm to the children undergoing unnecessary treatment is coupled with the trumpeted gain for some others. The name for such net loss, the (usually hidden or delayed) damage from treatment in excess of the benefits, is iatrogenics, literally, “caused by the healer,” iatros being a healer in Greek. We will posit in Chapter 21 that every time you visit a doctor and get a treatment, you incur risks of such medical harm, which should be analyzed the way we analyze other trade-offs: probabilistic benefits minus probabilistic costs.
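The trade-off stated here, probabilistic benefits minus probabilistic costs, can be sketched in a few lines of code. This is a minimal illustration, not anything from the book: the function name and the benefit-side numbers are hypothetical, while the morbidity figure (2 to 4 percent) and the mortality figure (about one death per 15,000 operations) come from the tonsillectomy passage above.

```python
def expected_net_benefit(p_benefit, benefit,
                         p_morbidity, morbidity_cost,
                         p_death, death_cost):
    """Expected value of an intervention: probabilistic benefits
    minus probabilistic costs, all in the same arbitrary units."""
    return p_benefit * benefit - (p_morbidity * morbidity_cost
                                  + p_death * death_cost)

# An unnecessary operation has zero probability of benefit, so its
# expected value is pure iatrogenic cost. Morbidity (~3%) and mortality
# (1 in 15,000) are the passage's figures; the cost/benefit units are
# hypothetical placeholders.
unnecessary = expected_net_benefit(p_benefit=0.0, benefit=10.0,
                                   p_morbidity=0.03, morbidity_cost=50.0,
                                   p_death=1 / 15000, death_cost=100_000.0)
# 'unnecessary' is strictly negative: the child bears only risk, no gain.
```

The break-even point is simply where this expression crosses zero; below it, the healer is, in expectation, doing harm.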
For a classic example of iatrogenics, consider the death of George Washington in December 1799: we have enough evidence that his doctors greatly helped, or at least hastened, his death, thanks to the then standard treatment that included bloodletting (between five and nine pounds of blood).
Now these risks of harm by the healer can be so overlooked that, depending on how you account for it, until penicillin, medicine had a largely negative balance sheet—going to the doctor increased your chance of death. But it is quite telling that medical iatrogenics seems to have increased over time, along with knowledge, to peak sometime late in the nineteenth century. Thank you, modernity: it was “scientific progress,” the birth of the clinic and its substitution for home remedies, that caused death rates to shoot up, mostly from what was then called “hospital fever”—Leibniz had called these hospitals seminaria mortis, seedbeds of death. The evidence of increase in death rates is about as strong as they come, since all the victims were now gathered in one place: people were dying in these institutions who would have survived outside them. The famously mistreated Austro-Hungarian doctor Ignaz Semmelweis had observed that more women died giving birth in hospitals than giving birth on the street. He called the establishment doctors a bunch of criminals—which they were: the doctors who kept killing patients could not accept his facts or act on them since he “had no theory” for his observations. Semmelweis entered a state of depression, helpless to stop what he saw as murders, disgusted at the attitude of the establishment. He ended up in an asylum, where he died, ironically, from the same hospital fever he had been warning against.
Semmelweis’s story is sad: a man who was punished, humiliated, and even killed for shouting the truth in order to save others. The worst punishment was his state of helplessness in the face of risks and unfairness.
But the story is also a happy one—the truth came out eventually, and his mission ended up paying off, with some delay. And the final lesson is that one should not expect laurels for bringing the truth.
Medicine is comparatively the good news, perhaps the only good news, in the field of iatrogenics. We see the problem there because things are starting to be brought under control today; it is now just what we call the cost of doing business, although medical error still currently kills between three times (as accepted by doctors) and ten times as many people as car accidents in the United States. It is generally accepted that harm from doctors—not including risks from hospital germs—accounts for more deaths than any single cancer.

The methodology used by the medical establishment for decision making is still innocent of proper risk-management principles, but medicine is getting better. We have to worry about the incitation to overtreatment on the part of pharmaceutical companies, lobbies, and special interest groups and the production of harm that is not immediately salient and not accounted for as an “error.” Pharma plays the game of concealed and distributed iatrogenics, and it has been growing. It is easy to assess iatrogenics when the surgeon amputates the wrong leg or operates on the wrong kidney, or when the patient dies of a drug reaction. But when you medicate a child for an imagined or invented psychiatric disease, say, ADHD or depression, instead of letting him out of the cage, the long-term harm is largely unaccounted for.

Iatrogenics is compounded by the “agency problem” or “principal-agent problem,” which emerges when one party (the agent) has personal interests that are divorced from those of the one using his services (the principal). An agency problem, for instance, is present with the stockbroker and medical doctor, whose ultimate interest is their own checking account, not your financial and medical health, respectively, and who give you advice that is geared to benefit themselves. Or with politicians working on their career.
Medicine has known about iatrogenics since at least the fourth century before our era—primum non nocere (“first do no harm”) is a first principle attributed to Hippocrates and integrated in the so-called Hippocratic Oath taken by every medical doctor on his commencement day. It just took medicine about twenty-four centuries to properly execute the brilliant idea. In spite of the recitations of non nocere through the ages, the term “iatrogenics” only came into frequent use very, very late, a few decades ago—after so much damage had been done. I myself did not know the exact word until the writer Bryan Appleyard introduced me to it (I had used “harmful unintended side effects”). So let us leave medicine (to return to it in a dozen chapters or so), and apply this idea born in medicine to other domains of life. Since no intervention implies no iatrogenics, the source of harm lies in the denial of antifragility, and in the impression that we humans are so necessary to making things function.