Antifragile: Things That Gain from Disorder
Nassim Nicholas Taleb
The history of medicine is the story—largely documented—of the dialectic between doing and thinking—and how to make decisions under opacity. In the medieval Mediterranean, Maimonides, Avicenna, Al-Ruhawi, and the Syriac doctors such as Hunain Ibn Ishaq were at once philosophers and doctors. A doctor in the medieval Semitic world was called Al-Hakim, “the wise,” or “practitioner of wisdom,” a synonym for philosopher or rabbi (hkm is the Semitic root for “wisdom”). Even in the earlier period there was a crop of Hellenized fellows who stood in the exact middle between medicine and the practice of philosophy—the great skeptic philosopher Sextus Empiricus was himself a doctor and a member of the skeptical empirical school. So was Menodotus of Nicomedia, the experience-based predecessor of evidence-based medicine—on whom a bit more in a few pages. The works of these thinkers, or whatever remains extant, are quite refreshing for those of us who distrust those who talk without doing.
Simple, quite simple decision rules and heuristics emerge from this chapter. Via negativa, of course (by removal of the unnatural): only resort to medical techniques when the health payoff is very large (say, saving a life) and visibly exceeds its potential harm, such as incontrovertibly needed surgery or lifesaving medicine (penicillin). It is the same as with government intervention. This is squarely Thalesian, not Aristotelian (that is, decision making based on payoffs, not knowledge). For in these cases medicine has positive asymmetries—convexity effects—and the outcome will be less likely to produce fragility. Otherwise, in situations in which the benefits of a particular medicine, procedure, or nutritional or lifestyle modification appear small—say, those aiming for comfort—we have a large potential sucker problem (hence putting us on the wrong side of convexity effects). Actually, one of the unintended side benefits of the theorems that Raphael Douady and I developed in our paper mapping risk detection techniques (in Chapter 19) is an exact link between (a) nonlinearity in exposure or dose-response and (b) potential fragility or antifragility.
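A minimal numerical sketch of that link, using made-up dose-response curves (the functions, the dose level, and the noise are illustrative assumptions, not the formalism of the Chapter 19 paper): a concave response loses from variability in the dose, a convex one gains from it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dose-response curves (illustrative assumptions only):
# a concave response has diminishing returns, a convex one accelerating returns.
def concave_response(dose):
    return np.sqrt(dose)

def convex_response(dose):
    return dose ** 2 / 10.0

mean_dose = 5.0
# Noisy dosing around the same mean (variability, i.e., "disorder" in the exposure).
doses = np.clip(mean_dose + rng.normal(0.0, 2.0, size=200_000), 0.0, None)

for name, f in [("concave", concave_response), ("convex", convex_response)]:
    steady = f(mean_dose)          # outcome of the fixed, average dose
    variable = f(doses).mean()     # expected outcome under variable dosing (Jensen's inequality)
    print(f"{name:8s}  fixed dose: {steady:6.3f}   variable dose: {variable:6.3f}")

# The concave response does worse under variability (fragile to it);
# the convex one does better (antifragile to it).
```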
I also extend the problem to epistemological grounds and make rules for what should be considered evidence: as with whether a cup should be considered half-empty or half-full, there are situations in which we focus on absence of evidence, others in which we focus on evidence. In some cases one can be confirmatory, not in others—it depends on the risks. Take smoking, which was, at some stage, viewed as bringing small gains in pleasure and even health (truly, people thought it was a good thing). It took decades for its harm to become visible. Yet had someone questioned it, he would have faced the canned, naive-academized, and faux-expert response “do you have evidence that this is harmful?” (the same type of response as “is there evidence that polluting is harmful?”). As usual, the solution is simple, an extension of via negativa and Fat Tony’s don’t-be-a-sucker rule: the non-natural needs to prove its benefits, not the natural—according to the statistical principle outlined earlier that nature is to be considered much less of a sucker than humans. In a complex domain, only time—a long time—is evidence.
For any decision, the unknown will preponderate on one side more than the other.
The “do you have evidence” fallacy, mistaking no evidence of harm for evidence of no harm, is similar to the one of misinterpreting NED (no evidence of disease) as evidence of no disease. This is the same error as mistaking absence of evidence for evidence of absence, the one that tends to affect smart and educated people, as if education made people more confirmatory in their responses and more liable to fall into simple logical errors.
And recall that under nonlinearities, the simple statements “harmful” or “beneficial” break down: it is all in the dosage.
I once broke my nose … walking. For the sake of antifragility, of course. I was trying to walk on uneven surfaces, as part of my antifragility program, under the influence of Erwan Le Corre, who believes in naturalistic exercise. It was exhilarating; I felt the world was richer, more fractal, and when I contrasted this terrain with the smooth surfaces of sidewalks and corporate offices, those felt like prisons. Unfortunately, I was carrying something much less ancestral, a cellular phone, which had the insolence to ring in the middle of my walk.
In the emergency room, the doctor and staff insisted that I should “ice” my nose, meaning apply an ice-cold patch to it. In the middle of the pain, it hit me that the swelling that Mother Nature gave me was most certainly not directly caused by the trauma. It was my own body’s response to the injury. It seemed to me that it was an insult to Mother Nature to override her programmed reactions unless we had a good reason to do so, backed by proper empirical testing to show that we humans can do better; the burden of evidence falls on us humans. So I mumbled to the emergency room doctor, asking whether he had any statistical evidence of benefits from applying ice to my nose or if it resulted from a naive version of interventionism.
His response was: “You have a nose the size of Cleveland and you are now interested in … numbers?” I recall developing from his blurry remarks the thought that he had no answer.
Effectively, he had no answer, because as soon as I got to a computer, I was able to confirm that there is no compelling empirical evidence in favor of the reduction of swelling. At least, not outside of the very rare cases in which the swelling would threaten the patient, which was clearly not the case. It was pure sucker-rationalism in the mind of doctors, following what made sense to boundedly intelligent humans, coupled with interventionism, this need to do something, this defect of thinking that we knew better, and denigration of the unobserved. This defect is not limited to our control of swelling: this confabulation plagues the entire history of medicine, along with, of course, many other fields of practice. The researchers Paul Meehl and Robin Dawes pioneered a tradition to catalog the tension between “clinical” and actuarial (that is, statistical) knowledge, and to examine how many things believed to be true by professionals and clinicians aren’t so and don’t match empirical evidence. The problem is of course that these researchers did not have a clear idea of where the burden of empirical evidence lies (the difference between naive or pseudo-empiricism and rigorous empiricism)—the onus is on the doctors to show us why reducing fever is good, why eating breakfast before engaging in activity is healthy (there is no evidence), or why bleeding patients is the best alternative (they’ve stopped doing so). Sometimes I get the answer that they have no clue when they have to utter defensively “I am a doctor” or “are you a doctor?” But worse, I sometimes get letters of support and sympathy from the alternative medicine fellows, which makes me go postal: the approach in this book is ultra-orthodox, ultra-rigorous, and ultra-scientific, certainly not in favor of alternative medicine.
The hidden costs of health care are largely in the denial of antifragility. But it may not be just medicine—what we call diseases of civilization result from the attempt by humans to make life comfortable for ourselves against our own interest, since the comfortable is what fragilizes. The rest of this chapter focuses on specific medical cases with hidden negative convexity effects (small gains, large losses)—and reframes the ideas of iatrogenics in connection with my notion of fragility and nonlinearities.
The first principle of iatrogenics is as follows: we do not need evidence of harm to claim that a drug or an unnatural via positiva procedure is dangerous. Recall my comment earlier with the turkey problem that harm is in the future, not in the narrowly defined past. In other words, empiricism is not naive empiricism.
We saw the smoking argument. Now consider the adventure of a human-invented fat, trans fat. Somehow, humans discovered how to make fat products and, as it was the great era of scientism, they were convinced they could make it better than nature. Not just equal; better. Chemists assumed that they could produce a fat replacement that was superior to lard or butter from so many standpoints. First, it was more convenient: synthetic products such as margarine stay soft in the refrigerator, so you can immediately spread them on a piece of bread without the usual wait while listening to the radio. Second, it was economical, as the synthetic fats were derived from vegetables. Finally, what is worse, trans fat was assumed to be healthier. Its use propagated very widely, and after a few hundred million years of consumption of animal fat, people suddenly started getting scared of it (particularly something called “saturated” fat), mainly from shoddy statistical interpretations. Today trans fat is widely banned, as it turned out that it kills people: it is behind heart disease and cardiovascular problems.
For another murderous example of such sucker (and fragilizing) rationalism, consider the story of Thalidomide. It was a drug meant to reduce the nausea episodes of pregnant women. It led to birth defects. Another drug, Diethylstilbestrol, silently harmed the fetus and led to delayed gynecological cancer among daughters.
These two mistakes are quite telling because, in both cases, the benefits appeared to be obvious and immediate, though small, and the harm remained delayed for years, at least three-quarters of a generation. The next discussion will be about the burden of evidence, as you can easily imagine that someone defending these treatments would have immediately raised the objection, “Monsieur Taleb, do you have evidence for your statement?”
Now we can see the pattern: iatrogenics, being a cost-benefit situation, usually results from the treacherous condition in which the benefits are small, and visible—and the costs very large, delayed, and hidden. And of course, the potential costs are much worse than the cumulative gains.
For those into graphs, the appendix shows the potential risks from different angles and expresses iatrogenics as a probability distribution.
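A back-of-the-envelope sketch of why the asymmetry dominates; the probabilities and magnitudes below are invented placeholders, not clinical figures.

```python
# Toy expected-value comparison for an elective intervention.
# All numbers are invented for illustration; they are not clinical estimates.
small_visible_gain = 1.0        # modest comfort benefit, realized almost every time
p_gain = 0.95

large_hidden_loss = 1_000.0     # rare, delayed, severe harm
p_loss = 0.005

expected_payoff = p_gain * small_visible_gain - p_loss * large_hidden_loss
print(f"expected payoff: {expected_payoff:+.2f}")
# Negative: the rare, large loss swamps the routine, small gain.
```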
Second principle of iatrogenics: it is not linear. We should not take risks with near-healthy people; but we should take a lot, a lot more risks with those deemed in danger.
Why do we need to focus treatment on more serious cases, not marginal ones? Take this example showing nonlinearity (convexity). When hypertension is mild, say marginally higher than the zone accepted as “normotensive,” the chance of benefiting from a certain drug is close to 5.6 percent (only one person in eighteen benefits from the treatment). But when blood pressure is considered to be in the “high” or “severe” range, the chances of benefiting are now 26 and 72 percent, respectively (that is, one person in four and two persons out of three will benefit from the treatment). So the treatment benefits are convex to condition (the benefits rise disproportionally, in an accelerated manner). But consider that the iatrogenics should be constant for all categories! In the very ill condition, the benefits are large relative to iatrogenics; in the borderline one, they are small. This means that we need to focus on high-symptom conditions and ignore, I mean really ignore, other situations in which the patient is not very ill.
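The same arithmetic can be laid out in a few lines. The benefit rates are those quoted above; the harm rate standing in for the roughly constant iatrogenics is an assumed placeholder, not a clinical estimate.

```python
# Benefit rates by severity of hypertension, as quoted in the passage above.
benefit_rate = {"mild": 0.056, "high": 0.26, "severe": 0.72}

# Iatrogenics assumed roughly constant across categories; 5% is a purely
# illustrative placeholder.
harm_rate = 0.05

for severity, p_benefit in benefit_rate.items():
    nnt = 1.0 / p_benefit            # number needed to treat for one patient to benefit
    ratio = p_benefit / harm_rate    # benefit-to-iatrogenics ratio
    print(f"{severity:6s}  NNT = {nnt:4.1f}   benefit/harm = {ratio:4.1f}")

# mild: ~18 patients treated per beneficiary, and the benefit barely exceeds
# the assumed fixed harm; severe: ~1.4 per beneficiary, and the benefit dwarfs it.
```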
The argument here is based on the structure of conditional survival probabilities, similar to the one that we used to prove that harm needs to be nonlinear for porcelain cups. Consider that Mother Nature had to have tinkered through selection in inverse proportion to the rarity of the condition. Of the hundred and twenty thousand drugs available today, I can hardly find a via positiva one that makes a healthy person unconditionally “better” (and if someone shows me one, I will be skeptical of yet-unseen side effects). Once in a while we come up with drugs that enhance performance, such as, say, steroids, only to discover what people in finance have known for a while: in a “mature” market there is no free lunch anymore, and what appears as a free lunch has a hidden risk. When you think you have found a free lunch, say, steroids or trans fat, something that helps the healthy without visible downside, it is most likely that there is a concealed trap somewhere. Actually, in my days in trading, it was called a “sucker’s trade.”
And there is a simple statistical reason that explains why we have not been able to find drugs that make us feel unconditionally better when we are well (or unconditionally stronger, etc.): nature would have been likely to find this magic pill by itself. But consider that illness is rare, and the more ill the person the less likely nature would have found the solution by itself, in an accelerating way. A condition that is, say, three units of deviation away from the norm is more than three hundred times rarer than normal; an illness that is five units of deviation from the norm is more than a million times rarer!
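The rarity figures can be checked against standard Gaussian tail probabilities, assuming for the sake of the sketch that the trait in question is normally distributed.

```python
from math import erf, sqrt

def two_sided_tail(z):
    """Probability that a standard normal variable lies more than z deviations from the mean."""
    return 1.0 - erf(z / sqrt(2.0))

for z in (3, 5):
    p = two_sided_tail(z)
    print(f"{z} deviations: tail probability = {p:.2e}  (about 1 in {1 / p:,.0f})")

# Roughly 1 in 370 at three deviations and 1 in 1.7 million at five deviations,
# consistent with "more than three hundred times" and "more than a million times" rarer.
```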
The medical community has not modeled such nonlinearity of benefits to iatrogenics, and if they do so in words, I have not seen it formalized in papers, hence turned into a decision-making methodology that takes probability into account (as we will see in the next section, there is little explicit use of convexity biases). Even risks seem to be linearly extrapolated, causing both underestimation and overestimation, most certainly miscalculation of degrees of harm—for instance, a paper on the effect of radiation states the following: “The standard model currently in use applies a linear scale, extrapolating cancer risk from high doses to low doses of ionizing radiation.” Further, pharmaceutical companies are under financial pressures to find diseases and satisfy the security analysts. They have been scraping the bottom of the barrel, looking for disease among healthier and healthier people, lobbying for reclassifications of conditions, and fine-tuning sales tricks to get doctors to overprescribe. Now, if your blood pressure is in the upper part of the range that used to be called “normal,” you are no longer “normotensive” but “pre-hypertensive,” even if there are no symptoms in view. There is nothing wrong with the classification if it leads to a healthier lifestyle and robust via negativa measures—but what is behind such a classification, often, is a drive for more medication.