Antifragile: Things That Gain from Disorder
Author: Nassim Nicholas Taleb
I can illustrate it best with the modus operandi of Greg Stemm, who specializes in pulling long-lost shipwrecks from the bottom of the sea. In 2007, he called his (then) biggest find “the Black Swan” after the idea of looking for positive extreme payoffs. The find was quite sizable, a treasure with precious metals now worth a billion dollars. His Black Swan is a Spanish frigate called
Nuestra Señora de las Mercedes,
which was sunk by the British off the southern coast of Portugal in 1804. Stemm proved to be a representative hunter of positive Black Swans, and someone who can illustrate that such a search is a highly controlled form of randomness.
I met him and shared ideas with him: his investors (like mine at the time, as I was still involved in that business) were for the most part not programmed to understand that for a treasure hunter, a “bad” quarter (meaning expenses of searching but no finds) was not indicative of distress, as it would be with a steady cash flow business like that of a dentist or prostitute. By some mental domain dependence, people can spend money on, say, office furniture and not call it a “loss,” rather an investment, but would treat cost of search as “loss.”
Stemm’s method is as follows. He does an extensive analysis of the general area where the ship could be. That data is synthesized into a map drawn with squares of probability. A search area is then designed on the principle that a square must be ruled out with certainty before the team moves on to a lower-probability one. It looks random but it is not. It is the equivalent of looking for a treasure in your house: each successive search carries an incrementally higher probability of yielding a result, but only if you can be certain that the areas you have already searched do not hold the treasure.
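The logic of this kind of search can be sketched in a few lines of code. This is a toy illustration, not Stemm’s actual procedure: the grid cells and their prior probabilities are invented, and the only assumption carried over from the text is that a searched square is cleared with certainty, so the remaining squares’ probabilities are renormalized upward after each miss.

```python
def search_order(prob_map):
    """Yield (cell, posterior probability) in the order a searcher would
    visit cells: highest-probability square first, beliefs updated after
    each square is conclusively cleared."""
    remaining = dict(prob_map)
    while remaining:
        total = sum(remaining.values())
        cell = max(remaining, key=remaining.get)   # most probable unsearched square
        yield cell, remaining[cell] / total        # posterior, given prior clears
        del remaining[cell]                        # certainty: the wreck is not here

# Invented probability map for four squares of the seabed
prior = {"A1": 0.40, "A2": 0.30, "B1": 0.20, "B2": 0.10}
for cell, p in search_order(prior):
    print(cell, round(p, 3))   # A1 0.4, A2 0.5, B1 0.667, B2 1.0
```

Note how the posterior rises with every cleared square — the “incrementally higher probability” the passage describes — which is exactly why the search looks random from outside but is tightly controlled from within.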
Some readers might not be too excited about the morality of shipwreck-hunting, and could consider that these treasures are national, not private, property. So let us change domain. The method used by Stemm applies to oil and gas exploration, particularly at the bottom of the unexplored oceans, with a difference: in a shipwreck, the upside is limited to the value of the treasure, whereas oil fields and other natural resources are nearly unlimited (or have a very high limit).
Finally, recall my discussion of random drilling in
Chapter 6
and how it seemed superior to more directed techniques. This optionality-driven method of search is not foolishly random. Thanks to optionality, it becomes tamed and harvested randomness.
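The asymmetry behind “tamed and harvested randomness” can be made concrete with a small simulation. All the numbers below are invented for illustration: each trial (a quarter of searching, a drilled well) costs a small, bounded amount, and a rare hit pays off hugely. Most periods look like losses — the treasure hunter’s “bad quarter” — yet the aggregate is positive, because the payoff is convex: capped downside, large upside.

```python
import random

COST_PER_TRIAL = 1.0     # bounded downside: you can only lose the search cost
HIT_PROBABILITY = 0.01   # rare positive Black Swan
HIT_PAYOFF = 500.0       # large upside when it arrives

def run_trials(n):
    """Total payoff of n independent trials under the asymmetric payoff above."""
    payoff = 0.0
    for _ in range(n):
        payoff -= COST_PER_TRIAL
        if random.random() < HIT_PROBABILITY:
            payoff += HIT_PAYOFF
    return payoff

random.seed(42)
print(run_trials(10_000))  # expected value per trial: 0.01 * 500 - 1 = +4
```

The point is not the specific numbers but the shape: the strategy loses small amounts almost all the time and still comes out ahead, which is why judging it by any one quarter’s cash flow, as Stemm’s investors did, misreads it entirely.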
Someone who got a (minor) version of the point that generalized trial and error has, well,
errors,
but without much grasp of asymmetry (or what, since
Chapter 12
, we have been calling optionality), is the economist Joseph Schumpeter. He realized that some things need to break for the system to improve—what is labeled
creative destruction
—a notion developed, among so many other ones, by the philosopher Karl Marx and a concept discovered, we will show in
Chapter 17
, by Nietzsche. But a reading of Schumpeter shows that he did not think in terms of uncertainty and opacity; he was completely smoked by interventionism, under the illusion that governments could innovate by fiat, something that we will contradict in a few pages. Nor did he grasp the notion of layering of evolutionary tensions. More crucially, both he and his detractors (Harvard economists who thought that he did not know mathematics) missed the notion of antifragility as asymmetry (optionality) effects, hence the philosopher’s stone—on which, later—as the agent of growth. That is, they missed half of life.
Now, since a very large share of technological know-how comes from the antifragility, the optionality, of trial and error, some people and some institutions want to hide the fact from us (and themselves), or downplay its role.
Consider two types of knowledge. The first type is not exactly “knowledge”; its ambiguous character prevents us from associating it with the strict definitions of knowledge. It is a way of doing things that we cannot really express in clear and direct language—it is sometimes called
apophatic
—but that we do nevertheless, and do well. The second type is more like what we call “knowledge”; it is what you acquire in school, can get grades for, can codify, what is explainable, academizable, rationalizable, formalizable, theoretizable, codifiable, Sovietizable, bureaucratizable, Harvardifiable, provable, etc.
The error of naive rationalism leads to overestimating the role and necessity of the second type, academic knowledge, in human affairs—and degrading the uncodifiable, more complex, intuitive, or experience-based type.
There is no proof against the statement that the role such explainable knowledge plays in life is so minor that it is not even funny.
We are very likely to believe that skills and ideas that we actually acquired by antifragile
doing,
or that came naturally to us (from our innate biological instinct), came from books, ideas, and reasoning. We get blinded by it; there may even be something in our brains that makes us suckers for the point. Let us see how.
I recently looked for definitions of technology. Most texts define it as
the application of scientific knowledge to practical projects
—leading us to believe in a flow of knowledge going chiefly, even exclusively, from lofty “science” (organized around a priestly group of persons with titles before their names) to lowly practice (exercised by uninitiated people without the intellectual attainments to gain membership into the priestly group).
So, in the corpus, knowledge is presented as derived in the following manner: basic research yields scientific knowledge, which in turn generates technologies, which in turn lead to practical applications, which in turn lead to economic growth and other seemingly interesting matters. The payoff from the “investment” in basic research will be partly directed to more investments in basic research, and the citizens will prosper
and enjoy the benefits of such knowledge-derived wealth with Volvo cars, ski vacations, Mediterranean diets, and long summer hikes in beautifully maintained public parks.
This is called the Baconian linear model, after the philosopher of science Francis Bacon; I am adapting its representation by the scientist Terence Kealey (who, crucially, as a biochemist, is a practicing scientist, not a historian of science) as follows:
Academia → Applied Science and Technology → Practice
While this model may be valid in some very narrow (but highly advertised) instances, such as building the atomic bomb, the exact reverse seems to be true in most of the domains I’ve examined. Or, at least, this model is not guaranteed to be true and, what is shocking, we have no rigorous evidence that it is true. It may be that academia helps science and technology, which in turn help practice, but in unintended, nonteleological ways, as we will see later (in other words, it is
directed research
that may well be an illusion).
Let us return to the metaphor of the birds. Think of the following event: A collection of hieratic persons (from Harvard or some such place) lecture birds on how to fly. Imagine bald males in their sixties, dressed in black robes, officiating in a form of English that is full of jargon, with equations here and there for good measure. The bird flies. Wonderful confirmation! They rush to the department of ornithology to write books, articles, and reports stating that the bird has obeyed them, an impeccable causal inference. The Harvard Department of Ornithology is now indispensable for bird flying. It will get government research funds for its contribution.
Mathematics → Ornithological navigation and wing-flapping technologies → (ungrateful) birds fly
It also happens that birds write no such papers and books, conceivably because they are just birds, so we never get their side of the story. Meanwhile, the priests keep broadcasting theirs to the new generation of humans who are completely unaware of the conditions of the pre-Harvard lecturing days. Nobody discusses the possibility of the birds’ not needing lectures—and nobody has any incentive to look at the number of birds that fly without such help from the great scientific establishment.
The problem is that what I wrote above looks ridiculous, but a change of domain makes it look reasonable. Clearly, we never think that it is thanks to ornithologists that birds learn to fly—and if some people do hold such a belief, it would be hard for them to convince the birds. But why is it that when we anthropomorphize and replace “birds” with “men,” the idea that people learn to do things thanks to lectures becomes plausible? When it comes to human agency, matters suddenly become confusing to us.
So the illusion grows and grows, with government funding, tax dollars, swelling (and self-feeding) bureaucracies in Washington all devoted to helping birds fly better. Problems occur when people start cutting such funding—with a spate of accusations of killing birds by not helping them fly.
As per the Yiddish saying: “If the student is smart, the teacher takes the credit.” These illusions of contribution result largely from confirmation fallacies: in addition to the sad fact that history belongs to those who can write about it (whether winners or losers), a second bias appears, as those who write the accounts can deliver confirmatory facts (what has worked) but not a complete picture of what has worked and what has failed. For instance, directed research would tell you what has worked from funding (like AIDS drugs or some modern designer drugs), not what has failed—so you may have the impression that it fares better than random.
And of course iatrogenics is never part of the discourse. They never tell you if education hurt you in some places.
So we are blind to the possibility of the alternative process, or the role of such a process, a loop:

Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship → Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship …
In parallel to the above loop,

Practice → Academic Theories → Academic Theories → Academic Theories → Academic Theories … (with of course some exceptions, some accidental leaks, though these are indeed rare and overhyped and grossly generalized).
Now, crucially, one can detect the scam in the so-called Baconian model by looking at events in the days that preceded the Harvard lectures on flying and examining the birds. This is what I accidentally found (indeed, accidentally) in my own career as practitioner turned researcher in volatility, thanks to some lucky turn of events. But before that, let me explain epiphenomena and the arrow of education.
The Soviet-Harvard illusion (lecturing birds on flying and believing that the lecture is the cause of these wonderful skills) belongs to a class of causal illusions called
epiphenomena
. What are these illusions? When you spend time on the bridge of a ship or in the coxswain’s station with a large compass in front, you can easily develop the impression that the compass is directing the ship rather than merely reflecting its direction.
The lecturing-birds-how-to-fly effect is an example of epiphenomenal belief: we see a high degree of academic research in countries that are wealthy and developed, leading us to think uncritically that research is the generator of wealth. In an epiphenomenon, you don’t usually observe A without observing B with it, so you are likely to think that A causes B, or that B causes A, depending on the cultural framework or what seems plausible to the local journalist.
One rarely has the illusion that, given that so many boys have short hair, short hair determines gender, or that wearing a tie causes one to become a businessman. But it is easy to fall into other epiphenomena, particularly when one is immersed in a news-driven culture.
And one can easily see the trap of having these epiphenomena fuel action, then justify it retrospectively. A dictator—just like a government—will feel indispensable because the alternative is not easily visible, or is hidden by special interest groups. The Federal Reserve Bank of the United States, for instance, can wreak havoc on the economy yet feel convinced of its effectiveness. People are scared of the alternative.
Whenever an economic crisis occurs, greed is pointed to as the cause, which leaves us with the impression that if we could go to the root of greed and extract it from life, crises would be eliminated. Further, we
tend to believe that greed is new, since these wild economic crises are new. This is an epiphenomenon: greed is much older than systemic fragility. It existed as far back as the eye can go into history. From Virgil’s mention of
greed of gold
and the expression
radix malorum est cupiditas
(from the Latin version of the New Testament), both expressed more than twenty centuries ago, we know that the same problems of greed have been propounded through the centuries, with no cure, of course, in spite of the variety of political systems we have developed since then. Trollope’s novel
The Way We Live Now,
published close to a century and a half ago, shows the exact same complaint of a resurgence of greed and con operators that I heard in 1988 with cries over the “greed decade,” or in 2008 with denunciations of the “greed of capitalism.” With astonishing regularity, greed is seen as something (a) new and (b) curable. A Procrustean bed approach: we cannot change humans as easily as we can build greed-proof systems, and nobody thinks of simple solutions.