Antifragile: Things That Gain from Disorder
By Nassim Nicholas Taleb
By grasping the mechanisms of antifragility we can build a systematic and broad guide to nonpredictive decision making under uncertainty in business, politics, medicine, and life in general—anywhere the unknown preponderates, any situation in which there is randomness, unpredictability, opacity, or incomplete understanding of things.
It is far easier to figure out if something is fragile than to predict the occurrence of an event that may harm it. Fragility can be measured; risk is not measurable (outside of casinos or the minds of people who call themselves “risk experts”). This provides a solution to what I’ve called the Black Swan problem—the impossibility of calculating the risks of consequential rare events and predicting their occurrence. Sensitivity to harm from volatility is tractable, more so than forecasting the event that would cause the harm. So we propose to stand our current approaches to prediction, prognostication, and risk management on their heads.
In every domain or area of application, we propose rules for moving from the fragile toward the antifragile, through reduction of fragility or harnessing antifragility. And we can almost always detect antifragility (and fragility) using a simple test of asymmetry: anything that has more upside than downside from random events (or certain shocks) is antifragile; the reverse is fragile.
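A minimal sketch may make the asymmetry test concrete; the payoff shapes below are hypothetical, chosen only for illustration. The idea: expose a response to a symmetric shock and compare what it gains on the upside with what it loses on the downside.

```python
# Sketch of the asymmetry test, under assumed payoff shapes:
# a convex response gains more from a favorable shock than it loses
# from an equal unfavorable one; a concave response does the reverse.

def antifragile_response(x):   # convex: upside accelerates
    return x + 0.5 * x * x

def fragile_response(x):       # concave: downside accelerates
    return x - 0.5 * x * x

def asymmetry(response, shock=1.0):
    """Upside minus downside for a symmetric shock of a given size."""
    upside = response(shock) - response(0.0)
    downside = response(0.0) - response(-shock)
    return upside - downside   # > 0: antifragile; < 0: fragile

print(asymmetry(antifragile_response))  # +1.0: more upside than downside
print(asymmetry(fragile_response))      # -1.0: more downside than upside
```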
Crucially, if antifragility is the property of all those natural (and complex) systems that have survived, depriving these systems of volatility, randomness, and stressors will harm them. They will weaken, die, or blow up. We have been fragilizing the economy, our health, political life, education, almost everything … by suppressing randomness and volatility. Just as spending a month in bed (preferably with an unabridged version of War and Peace and access to The Sopranos’ entire eighty-six episodes) leads to muscle atrophy, complex systems are weakened, even killed, when deprived of stressors. Much of our modern, structured world has been harming us with top-down policies and contraptions (dubbed “Soviet-Harvard delusions” in the book) which do precisely this: an insult to the antifragility of systems.
This is the tragedy of modernity: as with neurotically overprotective parents, those trying to help are often hurting us the most.
If almost everything top-down fragilizes and blocks antifragility and growth, everything bottom-up thrives under the right amount of stress and disorder. The process of discovery (or innovation, or technological progress) itself depends on antifragile tinkering and aggressive risk bearing rather than on formal education.
Which brings us to the largest fragilizer of society, and greatest generator of crises, absence of “skin in the game.” Some become antifragile at the expense of others by getting the upside (or gains) from volatility, variations, and disorder and exposing others to the downside risks of losses or harm. And such antifragility-at-the-cost-of-fragility-of-others is hidden—given the blindness to antifragility by the Soviet-Harvard intellectual circles, this asymmetry is rarely identified and (so far) never taught. Further, as we discovered during the financial crisis that started in 2008, these blowup risks-to-others are easily concealed owing to the growing complexity of modern institutions and political affairs. While in the past people of rank or status were those and only those who took risks, who had the downside for their actions, and heroes were those who did so for the sake of others, today the exact reverse is taking place. We are witnessing the rise of a new class of inverse heroes, that is, bureaucrats, bankers, Davos-attending members of the I.A.N.D. (International Association of Name Droppers), and academics with too much power and no real downside and/or accountability. They game the system while citizens pay the price.
At no point in history have so many non-risk-takers, that is, those with no personal exposure, exerted so much control.
The chief ethical rule is the following: Thou shalt not have antifragility at the expense of the fragility of others.
I want to live happily in a world I don’t understand.
Black Swans (capitalized) are large-scale unpredictable and irregular events of massive consequence—unpredicted by a certain observer, and such an unpredictor is generally called the “turkey” when he is both surprised and harmed by these events. I have made the claim that most of history comes from Black Swan events, while we worry about fine-tuning our understanding of the ordinary, and hence develop models, theories, or representations that cannot possibly track them or measure the possibility of these shocks.
Black Swans hijack our brains, making us feel we “sort of” or “almost” predicted them, because they are retrospectively explainable. We don’t realize the role of these Swans in life because of this illusion of predictability. Life is more, a lot more, labyrinthine than shown in our memory—our minds are in the business of turning history into something smooth and linear, which makes us underestimate randomness. But when we see it, we fear it and overreact. Because of this fear and thirst for order, some human systems, by disrupting the invisible or not so visible logic of things, tend to be exposed to harm from Black Swans and almost never get any benefit. You get pseudo-order when you seek order; you only get a measure of order and control when you embrace randomness.
Complex systems are full of interdependencies—hard to detect—and nonlinear responses. “Nonlinear” means that when you double the dose of, say, a medication, or when you double the number of employees in a factory, you don’t get twice the initial effect, but rather a lot more or a lot less. Two weekends in Philadelphia are not twice as pleasant as a single one—I’ve tried. When the response is plotted on a graph, it does not show as a straight line (“linear”), but rather as a curve. In such an environment, simple causal associations are misplaced; it is hard to see how things work by looking at single parts.
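For illustration only, here is a small sketch of what “nonlinear” means in this sense, with invented response curves standing in for the medication or the factory:

```python
import math

# Hypothetical response curves: doubling the dose does not double
# the effect. A concave response yields a lot less than double;
# a convex one, a lot more.

def concave_response(dose):    # diminishing returns (saturation)
    return math.sqrt(dose)

def convex_response(dose):     # compounding effects
    return dose ** 2

for response in (concave_response, convex_response):
    ratio = response(2.0) / response(1.0)
    print(f"{response.__name__}: double the dose -> {ratio:.2f}x the effect")
# concave_response: double the dose -> 1.41x the effect
# convex_response: double the dose -> 4.00x the effect
```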
Man-made complex systems tend to develop cascades and runaway chains of reactions that decrease, even eliminate, predictability and cause outsized events. So the modern world may be increasing in technological knowledge, but, paradoxically, it is making things a lot more unpredictable. Now for reasons that have to do with the increase of the artificial, the move away from ancestral and natural models, and the loss in robustness owing to complications in the design of everything, the role of Black Swans is increasing. Further, we are victims of a new disease, called in this book neomania, that makes us build Black Swan–vulnerable systems—“progress.”
An annoying aspect of the Black Swan problem—in fact the central, and largely missed, point—is that the odds of rare events are simply not computable. We know a lot less about hundred-year floods than five-year floods—model error swells when it comes to small probabilities. The rarer the event, the less tractable it is, and the less we know about how frequently it occurs—yet the rarer the event, the more confident these “scientists” involved in predicting, modeling, and using PowerPoint in conferences with equations on multicolor backgrounds have become.
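One way to see why model error swells in the tail is a sketch under an assumed power-law survival function, P(X > x) = x^(−α), with made-up parameter values: a small error in the tail exponent barely matters for common events but multiplies the estimate for rare ones.

```python
# Sketch under an assumed power-law tail, P(X > x) = x**(-alpha):
# a small error in alpha barely moves estimates of common events
# but multiplies estimates of rare ones.

def tail_probability(x, alpha):
    return x ** (-alpha)

for x in (10, 100, 1000):                  # increasingly rare events
    assumed = tail_probability(x, 2.0)     # the modeler's exponent
    actual = tail_probability(x, 2.2)      # a slightly different "true" one
    print(f"x = {x:>4}: assumed {assumed:.2e}, actual {actual:.2e}, "
          f"off by {assumed / actual:.1f}x")
# The misestimation factor grows with rarity: ~1.6x, ~2.5x, ~4.0x.
```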
It is of great help that Mother Nature—thanks to its antifragility—is the best expert at rare events, and the best manager of Black Swans; in its billions of years it succeeded in getting here without much command-and-control instruction from an Ivy League–educated director nominated by a search committee. Antifragility is not just the antidote to the Black Swan; understanding it makes us less intellectually fearful in accepting the role of these events as necessary for history, technology, knowledge, everything.
Consider that Mother Nature is not just “safe.” It is aggressive in destroying and replacing, in selecting and reshuffling. When it comes to random events, “robust” is certainly not good enough. In the long run everything with the most minute vulnerability breaks, given the ruthlessness of time—yet our planet has been around for perhaps four billion years, so robustness alone cannot be the answer: you would need perfect robustness for a crack not to end up crashing the system. Given the unattainability of perfect robustness, we need a mechanism by which the system regenerates itself continuously by using, rather than suffering from, random events, unpredictable shocks, stressors, and volatility.
The antifragile gains from prediction errors, in the long run. If you follow this idea to its conclusion, then many things that gain from randomness should be dominating the world today—and things that are hurt by it should be gone. Well, this turns out to be the case. We have the illusion that the world functions thanks to programmed design, university research, and bureaucratic funding, but there is compelling—very compelling—evidence to show that this is an illusion, the illusion I call lecturing birds how to fly.
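The claim that the antifragile gains from prediction errors is, at bottom, Jensen’s inequality: with a convex payoff, unbiased errors around a forecast raise the average outcome. A sketch with a hypothetical payoff:

```python
import random

random.seed(1)

# Hypothetical convex payoff: the squared term means errors of either
# sign add value on average (Jensen's inequality).
def convex_payoff(x):
    return x ** 2

forecast = 1.0
errors = [random.gauss(0.0, 0.5) for _ in range(100_000)]  # unbiased errors

error_free = convex_payoff(forecast)
with_errors = sum(convex_payoff(forecast + e) for e in errors) / len(errors)

print(error_free)    # 1.00: the payoff if the forecast were exactly right
print(with_errors)   # ~1.25: the prediction errors *helped*, on average
```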
Technology is the result of antifragility, exploited by risk-takers in the form of tinkering and trial and error, with nerd-driven design confined to the backstage. Engineers and tinkerers develop things while history books are written by academics; we will have to refine historical interpretations of growth, innovation, and many such things.
Fragility is quite measurable, risk not so at all, particularly risk associated with rare events.
I said that we can estimate, even measure, fragility and antifragility, while we cannot calculate risks and probabilities of shocks and rare events, no matter how sophisticated we get. Risk management as practiced is the study of an event taking place in the future, and only some economists and other lunatics can claim—against experience and the track record of such claims—to “measure” the future incidence of these rare events, with suckers listening to them. But fragility and antifragility are part of the current properties of an object, a coffee table, a company, an industry, a country, a political system. We can detect fragility, see it, even in many cases measure it, or at least measure comparative fragility with a small error, while comparisons of risk have been (so far) unreliable. You cannot say with any reliability that a certain remote event or shock is more likely than another (unless you enjoy deceiving yourself), but you can state with a lot more confidence that an object or a structure is more fragile than another should a certain event happen. You can easily tell that your grandmother is more fragile to abrupt changes in temperature than you, that some military dictatorship is more fragile than Switzerland should political change happen, that a bank is more fragile than another should a crisis occur, or that a poorly built modern building is more fragile than the Cathedral of Chartres should an earthquake happen. And—centrally—you can even make the prediction of which one will last longer.
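To make “comparative fragility” concrete, here is one possible reading as code; the structures and their response curves are invented for illustration. Probe each exposure with a shock in both directions and compare the second difference: the more negative it is, the more concave, hence fragile, the exposure to that variable.

```python
# Sketch of a comparative-fragility probe (hypothetical response
# curves): f(x + d) + f(x - d) - 2 * f(x) measures local convexity.
# Strongly negative => concave => harm accelerates => more fragile.

def second_difference(f, x, delta):
    return f(x + delta) + f(x - delta) - 2 * f(x)

def modern_building(stress):      # damage accelerates with stress
    return 100 - stress ** 3

def chartres_cathedral(stress):   # damage roughly proportional
    return 100 - 2 * stress

for name, f in (("modern building", modern_building),
                ("Chartres cathedral", chartres_cathedral)):
    print(f"{name}: {second_difference(f, x=2.0, delta=1.0):+.1f}")
# modern building: -12.0  (more fragile should an earthquake happen)
# Chartres cathedral: +0.0  (comparatively robust)
```

Note that no probability of the earthquake enters the comparison; only the shape of the response to it does.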
Instead of a discussion of risk (which is both predictive and sissy) I advocate the notion of fragility, which is not predictive—and, unlike risk, has an interesting word that can describe its functional opposite, the nonsissy concept of antifragility.
To measure antifragility, there is a philosopher’s-stone-like recipe using a compact and simplified rule that allows us to identify it across domains, from health to the construction of societies.
We have been unconsciously exploiting antifragility in practical life and, consciously, rejecting it—particularly in intellectual life.
Our idea is to avoid interference with things we don’t understand. Well, some people are prone to the opposite. The fragilista belongs to that category of persons who are usually in suit and tie, often on Fridays; he faces your jokes with icy solemnity, and tends to develop back problems early in life from sitting at a desk, riding airplanes, and studying newspapers. He is often involved in a strange ritual, something commonly called “a meeting.” Now, in addition to these traits, he defaults to thinking that what he doesn’t see is not there, or what he does not understand does not exist. At the core, he tends to mistake the unknown for the nonexistent.
The fragilista falls for the Soviet-Harvard delusion, the (unscientific) overestimation of the reach of scientific knowledge. Because of such delusion, he is what is called a naive rationalist, a rationalizer, or sometimes just a rationalist, in the sense that he believes that the reasons behind things are automatically accessible to him. And let us not confuse rationalizing with rational—the two are almost always exact opposites. Outside of physics, and generally in complex domains, the reasons behind things have had a tendency to make themselves less obvious to us, and even less to the fragilista. This property of natural things not to advertise themselves in a user’s manual is, alas, not much of a hindrance: some fragilistas will get together to write the user’s manual themselves, thanks to their definition of “science.”
So thanks to the fragilista, modern culture has been increasingly building blindness to the mysterious, the impenetrable, what Nietzsche called the Dionysian, in life.
Or to translate Nietzsche into the less poetic but no less insightful Brooklyn vernacular, this is what our character Fat Tony calls a “sucker game.”
In short, the fragilista (medical, economic, social planning) is one who makes you engage in policies and actions, all artificial, in which the benefits are small and visible, and the side effects potentially severe and invisible.
There is the medical fragilista who overintervenes in denying the body’s natural ability to heal and gives you medications with potentially very severe side effects; the policy fragilista (the interventionist social planner) who mistakes the economy for a washing machine that continuously needs fixing (by him) and blows it up; the psychiatric fragilista who medicates children to “improve” their intellectual and emotional life; the soccer-mom fragilista; the financial fragilista who makes people use “risk” models that destroy the banking system (then uses them again); the military fragilista who disturbs complex systems; the predictor fragilista who encourages you to take more risks; and many more.