Antifragile: Things That Gain from Disorder
by Nassim Nicholas Taleb
There is a difference in medical research between (a) observational studies, in which the researcher looks at statistical relationships on his computer, and (b) the double-blind cohort experiments that extract information in a realistic way that mimics real life.
The former, that is, observation from a computer, produces all manner of results that tend to be, as last computed by John Ioannidis, now more than eight times out of ten, spurious—yet these observational studies get reported in the papers and in some scientific journals. Thankfully, these observational studies are not accepted by the Food and Drug Administration, as the agency’s scientists know better. The great Stan Young, an activist against spurious statistics, and I found a genetics-based study in The New England Journal of Medicine claiming significance from statistical data—while the results to us were no better than random. We wrote to the journal, to no avail.
Figure 18 shows the swelling number of potential spurious relationships. The idea is as follows. If I have a set of 200 random variables, completely unrelated to each other, then it would be near impossible not to find in it a high correlation of sorts, say 30 percent, but that is entirely spurious. There are techniques to control the cherry-picking (one of which is known as the Bonferroni adjustment), but even then they don’t catch the culprits—much as regulation doesn’t stop insiders from gaming the system. This explains why in the twelve years or so since we’ve decoded the human genome, not much of significance has been found. I am not saying that there is no information in the data: the problem is that the needle is buried in a haystack.
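The arithmetic behind the 200-variable claim is easy to reproduce. A minimal simulation sketch (the sample size of 30 observations per variable is an illustrative assumption, not from the text): even with columns that are independent by construction, the sheer number of candidate pairs guarantees many correlations above 30 percent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_obs = 200, 30                      # many variables, few observations each
data = rng.standard_normal((n_obs, n_vars))  # truly independent columns

# Correlation matrix of all 200 variables; keep only the off-diagonal entries.
corr = np.corrcoef(data, rowvar=False)
off_diag = corr[np.triu_indices(n_vars, k=1)]  # 19,900 candidate "relationships"

print(f"pairs examined: {off_diag.size}")
print(f"largest |correlation| found: {np.abs(off_diag).max():.2f}")
print(f"pairs with |correlation| above 0.30: {(np.abs(off_diag) > 0.30).sum()}")
```

Every one of those "relationships" is noise by construction, which is the point: the number of candidate pairs grows quadratically in the number of variables while the data per variable does not.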
Even experiments can be marred with bias: the researcher has the incentive to select the experiment that corresponds to what he was looking for, hiding the failed attempts. He can also formulate a hypothesis after the results of the experiment—thus fitting the hypothesis to the experiment. The bias is smaller, though, than in the previous case.
The fooled-by-data effect is accelerating. There is a nasty phenomenon called “Big Data” in which researchers have brought cherry-picking to an industrial level. Modernity provides too many variables (but too little data per variable), and the spurious relationships grow much, much faster than real information, as noise is convex and information is concave.
Increasingly, data can only truly deliver via negativa–style knowledge—it can be effectively used to debunk, not confirm.
The tragedy is that it is very hard to get funding to replicate—and reject—existing studies. And even if there were money for it, it would be hard to find takers: trying to replicate studies will not make anyone a hero. So we are crippled with a distrust of empirical results, except for those that are negative. To return to my romantic idea of the amateur and tea-drinking English clergyman: the professional researcher competes to “find” relationships. Science must not be a competition; it must not have rankings—we can see how such a system will end up blowing up. Knowledge must not have an agency problem.
Mistakes made collectively, not individually, are the hallmark of organized knowledge—and the best argument against it. The argument “because everyone is doing it” or “that’s how others do it” abounds. It is not trivial: people who on their own would not do something because they find it silly now engage in the same thing but in groups. And this is where academia in its institutional structure tends to violate science.
One doctoral student at the University of Massachusetts, Chris S., once came to tell me that he believed in my ideas of “fat tails” and my skepticism of current methods of risk management, but that it would not help him get an academic job. “It’s what everybody teaches and uses in papers,” he said. Another student explained that he wanted a job at a good university so he could make money testifying as an expert witness—they would not buy my ideas on robust risk management because “everyone uses these textbooks.” Likewise, I was asked by the administration of a university to teach standard risk methods that I believe are pure charlatanism (I refused). Is it my duty as a professor to get students a job at the expense of society, or to fulfill my civic obligations? Well, if the former is the case, then economics and business schools have a severe ethical problem. For the point is generalized, and that’s why economics hasn’t collapsed yet in spite of the obvious—and scientifically proven—nonsense in it. (In my “fourth quadrant” paper—see discussion in the Appendix—I show how these methods are empirically invalid, in addition to being severely mathematically inconsistent; in other words, a scientific swindle.) Recall that professors are not penalized when they teach you something that blows up the financial system, which perpetuates the fraud. Departments need to teach something so students get jobs, even if they are teaching snake oil—this got us trapped in a circular system in which everyone knows that the material is wrong but nobody is free enough or has enough courage to do anything about it.
The problem is that the last place on the planet where the “other people think so” argument can be used is science: science is precisely about arguments standing on their own legs, and something proven to be wrong empirically or mathematically is plain wrong, whether a hundred “experts” or three trillion disagree with the statement. And the very use of “other people” to back up one’s claims is indicative that the person—or the entire collective that composes the “other”—is a wimp. The appendix shows what has been busted in economics, and what people keep using because they are not harmed by error, and that’s the optimal strategy for keeping a job or getting a promotion.
But the good news is that I am convinced that a single person with courage can bring down a collective composed of wimps.
And here, once again, we need to go back into history for the cure. The scriptures were quite aware of the problem of the diffusion of responsibility and made it a sin to follow the crowd in doing evil—as well as to give false testimony in order to conform to the multitude.
I close Book VII with a thought. Whenever I hear the phrase “I am ethical” uttered, I get tense. When I hear about classes in ethics, I get even more tense. All I want is to remove the optionality, reduce the antifragility of some at the expense of others. It is simple via negativa. The rest will take care of itself.
1. It is a property of sampling. In real life, if you are observing things in real time, then large deviations matter a lot. But when a researcher looks for them, then they are likely to be bogus—in real life there is no cherry-picking, but on the researcher’s computer, there is.
As usual at the end of the journey, while I was looking at the entire manuscript on a restaurant table, someone from a Semitic culture asked me to explain my book standing on one leg. This time it was Shaiy Pilpel, a probabilist with whom I’ve had a two-decades-long calm conversation without a single episode of small talk. It is hard to find people knowledgeable and confident enough to like to extract the essence of things, instead of nitpicking.
With the previous book, one of his compatriots asked me the same question, but I had to think about it. This time I did not even have to make an effort.
It was so obvious that Shaiy summed it up himself in the same breath. He actually believes that all real ideas can be distilled down to a central issue that the great majority of people in a given field, by dint of specialization and empty-suitedness, completely miss. Everything in religious law comes down to the refinements, applications, and interpretations of the Golden Rule, “Don’t do unto others what you don’t want them to do to you.” This we saw was the logic behind Hammurabi’s rule. And the Golden Rule was a true distillation, not a Procrustean bed. A central argument is never a summary—it is more like a generator.
Shaiy’s extraction was: Everything gains or loses from volatility.
Fragility is what loses from volatility and uncertainty. The glass on the table is short volatility.
In the novel The Plague by Albert Camus, a character spends part of his life searching for the perfect opening sentence for a novel. Once he had that sentence, he had the full book as a derivation of the opening. But the reader, to understand and appreciate the first sentence, will have to read the entire book.
I glanced at the manuscript with a feeling of calm elation. Every sentence in the book was a derivation, an application, or an interpretation of the short maxim. Some details and extensions can be counterintuitive and elaborate, particularly when it comes to decision making under opacity, but at the end everything flows from it.
The reader is invited to do the same. Look around you, at your life, at objects, at relationships, at entities. You may replace volatility with other members of the disorder cluster here and there for clarity, but it is not even necessary—when formally expressed, it is all the same symbol. Time is volatility. Education, in the sense of the formation of character, personality, and acquisition of true knowledge, likes disorder; label-driven education and educators abhor disorder. Some things break because of error, others don’t. Some theories fall apart, not others. Innovation is precisely something that gains from uncertainty: and some people sit around waiting for uncertainty, using it as raw material, just like our ancestral hunters.
Prometheus is long disorder; Epimetheus is short disorder. We can separate people and the quality of their experiences based on exposure to disorder and appetite for it: Spartan hoplites contra bloggers, adventurers contra copy editors, Phoenician traders contra Latin grammarians, and pirates contra tango instructors.
It so happens that everything nonlinear is convex or concave, or both, depending on the intensity of the stressor. We saw the link between convexity and liking volatility. So everything likes or hates volatility up to a point. Everything.
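The link between convexity and liking volatility is Jensen’s inequality, and it can be checked numerically. A sketch, with the square and the square root chosen here purely as illustrative convex and concave responses (they are not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
x = 10 + 3 * rng.standard_normal(100_000)   # a volatile input around a mean of 10

convex  = lambda v: v ** 2                         # accelerating response: likes volatility
concave = lambda v: np.sqrt(np.clip(v, 0, None))   # damped response: hates volatility

# Jensen's inequality: volatility pushes the average of a convex response
# above the response to the average, and pulls a concave one below it.
print(convex(x).mean(), convex(x.mean()))    # mean of f(x) exceeds f(mean of x)
print(concave(x).mean(), concave(x.mean()))  # mean of f(x) falls short of f(mean of x)
```

Shrink the volatility (the factor 3) toward zero and both gaps vanish, which is the sense in which the gain or loss comes from disorder itself, not from the level of x.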
We can detect what likes volatility thanks to convexity or acceleration and higher orders, since convexity is the response by a thing that likes disorder. We can build Black Swan–protected systems thanks to detection of concavity. We can take medical decisions by understanding the convexity of harm and the logic of Mother Nature’s tinkering, on which side we face opacity, which error we should risk. Ethics is largely about stolen convexities and optionality.
More technically, we may never get to know x, but we can play with the exposure to x, barbell things to defang them; we can control a function of x, f(x), even if x remains vastly beyond our understanding. We can keep changing f(x) until we are comfortable with it by a mechanism called convex transformation, the fancier name for the barbell.
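The barbell idea can be sketched numerically: x itself stays opaque (simulated below as a fat-tailed Student-t draw, an illustrative assumption), and we choose f so that the exposure f(x) has a hard floor while keeping the upside. The floor level is a hypothetical choice, not from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
# x: an opaque, fat-tailed source of randomness we cannot predict or model
x = rng.standard_t(df=2, size=100_000)

# Convex transformation (the "barbell"): we never learn x, but we pick f so
# that our exposure f(x) is floored on the downside and unchanged above it.
floor = -1.0
f = lambda v: np.maximum(v, floor)

print(f"worst raw outcome:         {x.min():.1f}")    # can be arbitrarily bad
print(f"worst transformed outcome: {f(x).min():.1f}") # never below the floor
```

The exposure is now convex near the floor: losses are clipped while gains pass through untouched, which is exactly the payoff shape the maxim says gains from volatility.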
This short maxim also tells you where fragility supersedes truth, why we lie to children, and why we humans got a bit ahead of ourselves in this large enterprise called modernity.
Distributed randomness (as opposed to the concentrated type) is a necessity, not an option: everything big is short volatility. So is everything fast. Big and fast are abominations. Modern times don’t like volatility.
And the Triad gives us some indication of what should be done to live in a world that does not want us to understand it, a world whose charm comes from our inability to truly understand it.
The glass is dead; living things are long volatility. The best way to verify that you are alive is by checking if you like variations. Remember that food would not have a taste if it weren’t for hunger; results are meaningless without effort, joy without sadness, convictions without uncertainty, and an ethical life isn’t so when stripped of personal risks.
And once again, reader, thank you for reading my book.