Antifragile: Things That Gain from Disorder
Author: Nassim Nicholas Taleb
Now use this idea of cooking as a platform to grasp other pursuits: do other activities resemble it? If we put technologies under scrutiny, we would see that most do in fact resemble cooking a lot more than physics, particularly those in the complex domain.
Even medicine today remains an apprenticeship model with some theoretical science in the background, but made to look entirely like science. And if it leaves the apprenticeship model, it would be for the “evidence-based” method that relies less on biological theories and more on the cataloging of empirical regularities, the phenomenology I explained in Chapter 7. Why is it that science comes and goes while technologies remain stable?
Now, one can see a possible role for basic science, but not in the way it is intended to be.
3
For an example of a chain of unintended uses, let us start with Phase One, the computer. The mathematical discipline of combinatorics, here basic science, derived from propositional knowledge, led to the building of computers, or so the story goes. (And, of course, to remind the reader of cherry-picking, we need to take into account the body of theoretical knowledge that went nowhere.) But at first, nobody had an idea what to do with these enormous boxes full of circuits: they were cumbersome and expensive, and their applications were not widespread outside of database management, being good only for processing quantities of data. It is as if one needed to invent an application for the thrill of the technology. Baby boomers will remember those mysterious punch cards. Then someone introduced the console for input, with the aid of a screen monitor and a keyboard. This led, of course, to word processing, and the computer took off because of its fitness to word processing, particularly with the microcomputer in the early 1980s. It was convenient, but not much more than that, until some other unintended consequence came to be mixed into it. Now Phase Two, the Internet. It had been set up as a resilient military communication network, developed by a research unit of the Department of Defense called DARPA, and it got a boost in the days when Ronald Reagan was obsessed with the Soviets. It was meant to allow the United States to survive a generalized military attack. Great idea, but add the personal computer plus the Internet and we get social networks, broken marriages, a rise in nerdiness, and the ability for a post-Soviet person with social difficulties to find a matching spouse. All that thanks to initial U.S. tax dollars (or, rather, budget deficit) during Reagan's anti-Soviet crusade.
So for now we are looking at the forward arrow. Although science was of some use along the way, since computer technology relies on science in most of its aspects, at no point did academic science serve in setting its direction; rather it served as a slave to chance discoveries in an opaque environment, with almost no one but college dropouts and overgrown high school students along the way. The process remained self-directed and unpredictable at every step. And the great fallacy is to make it sound irrational. The irrational resides in not seeing a free option when it is handed to us.
China might be a quite convincing story, through the works of a genius observer, Joseph Needham, who debunked quite a few Western beliefs and figured out the powers of Chinese science. As China became a top-down mandarinate (that is, a state managed by Soviet-Harvard centralized scribes, as Egypt had been before), the players somehow lost the zest for bricolage, the hunger for trial and error. Needham’s biographer Simon Winchester cites the sinologist Mark Elvin’s description of the problem: the Chinese did not have, or, rather, no longer had, what he called the “European mania for tinkering and improving.” They had all the means to develop a spinning machine, but “nobody tried”—another example of knowledge hampering optionality. They probably needed someone like Steve Jobs—blessed with an absence of college education and the right aggressiveness of temperament—to take the elements to their natural conclusion. As we will see in the next section, it is precisely this type of uninhibited doer who made the Industrial Revolution happen.
We will next examine two cases, first, the Industrial Revolution, and second, medicine. So let us start by debunking a causal myth about the Industrial Revolution, the overstatement of the role of science in it.
Knowledge formation, even when theoretical, takes time, some boredom, and the freedom that comes from having another occupation, therefore allowing one to escape the journalistic-style pressure of modern publish-and-perish academia to produce cosmetic knowledge, much like the counterfeit watches one buys in Chinatown in New York City, the type that you know is counterfeit although it looks like the real thing. There were two main sources of technical knowledge and innovation in the nineteenth and early twentieth centuries: the hobbyist and the English rector, both of whom were generally in barbell situations.
An extraordinary proportion of work came out of the rector, the English parish priest with no worries, erudition, a large or at least comfortable house, domestic help, a reliable supply of tea and scones with clotted cream, and an abundance of free time. And, of course, optionality. The enlightened amateur, that is. The Reverends Thomas Bayes (as in Bayesian probability) and Thomas Malthus (Malthusian overpopulation) are the most famous. But there are many more surprises, cataloged in Bill Bryson’s Home, in which the author found ten times more vicars and clergymen leaving recorded traces for posterity than scientists, physicists, economists, and even inventors. In addition to the previous two giants, I randomly list contributions by country clergymen: Rev. Edmund Cartwright invented the power loom, contributing to the Industrial Revolution; Rev. Jack Russell bred the terrier; Rev. William Buckland was the first authority on dinosaurs; Rev. William Greenwell invented modern archaeology; Rev. Octavius Pickard-Cambridge was the foremost authority on spiders; Rev. George Garrett invented the submarine; Rev. Gilbert White was the most esteemed naturalist of his day; Rev. M. J. Berkeley was the top expert on fungi; Rev. John Michell helped discover Uranus; and many more. Note that, just as with our episode documented with Haug, organized science tends to skip the “not made here,” so the list of visible contributions by hobbyists and doers is most certainly shorter than the real one, as some academic might have appropriated the innovation of his predecessor.
4
Let me get poetic for a moment. Self-directed scholarship has an aesthetic dimension. For a long time I had on the wall of my study the following quote by Jacques Le Goff, the great French medievalist, who believes that the Renaissance came out of independent humanists, not professional scholars. He examined the striking contrast in period paintings, drawings, and renditions that compare medieval university members and humanists:
One is a professor surrounded and besieged by huddled students. The other is a solitary scholar, sitting in the tranquility and privacy of his chambers, at ease in the spacious and comfy room where his thoughts can move freely. Here we encounter the tumult of schools, the dust of classrooms, the indifference to beauty in collective workplaces,
There, it is all order and beauty,
Luxe, calme et volupté (luxury, calm, and sensual pleasure).
As to the hobbyist in general, evidence shows him (along with the hungry adventurer and the private investor) to be at the source of the Industrial Revolution. Kealey, who we mentioned was not a historian and, thankfully, not an economist, in The Economic Laws of Scientific Research questions the conventional “linear model” (that is, the belief that academic science leads to technology): for him, universities prospered as a consequence of national wealth, not the other way around. He went even further and claimed that, like naive interventions, such policies had iatrogenics, providing a negative contribution. He showed that in countries in which the government intervened by funding research with tax money, private investment decreased and moved away. For instance, in Japan, the almighty MITI (Ministry of International Trade and Industry) has a horrible record of investment. I am not using his ideas to prop up a political program against science funding, only to debunk causal arrows in the discovery of important things.
The Industrial Revolution, for a refresher, came from “technologists building technology,” or what he calls “hobby science.” Take again the steam engine, the one artifact that more than anything else embodies the Industrial Revolution. As we saw, we had a blueprint of how to build it from Hero of Alexandria. Yet the theory didn’t interest anyone for about two millennia. So practice and rediscovery had to be the cause of the interest in Hero’s blueprint, not the other way around.
Kealey presents a convincing—very convincing—argument that the steam engine emerged from preexisting technology and was created by uneducated, often isolated men who applied practical common sense and intuition to address the mechanical problems that beset them, and whose solutions would yield obvious economic reward.
Now, second, consider textile technologies. Again, the main technologies that led to the jump into the modern world owe, according to Kealey, nothing to science. “In 1733,” he writes, “John Kay invented the flying shuttle, which mechanized weaving, and in 1770 James Hargreaves invented the spinning jenny, which as its name implies, mechanized spinning. These major developments in textile technology, as well as those of Wyatt and Paul (spinning frame, 1758), Arkwright (water frame, 1769), presaged the Industrial Revolution, yet they owed nothing to science; they were empirical developments based on the trial, error, and experimentation of skilled craftsmen who were trying to improve the productivity, and so the profits, of their factories.”
David Edgerton did some work questioning both the link between academic science and economic prosperity and the idea that people in the past believed in the “linear model” (that is, that academic science was at the source of technology). People were no suckers in the nineteenth and twentieth centuries; we believe today that they believed in the said linear model then, but they did not. In fact academics were mostly just teachers, not researchers, until well into the twentieth century.
Now, instead of looking into a scholar’s writings to see whether he is credible or not, it is always best to consider what his detractors say; they will uncover what’s worst in his argument. So I looked for the detractors of Kealey, or people opposing his ideas, to see if they address anything of merit, and to see where they come from. Aside from some comments by Joel Mokyr, who, as I said, has not yet discovered optionality, and an attack by an economist of the type that doesn’t count, given the devaluation of the currency of the economics profession, the main critique against Kealey, published in the influential journal Nature by a science bureaucrat, was that he uses data from government-sponsored agencies such as the OECD in his argument against tax-funded research. So far, no substantive evidence that Kealey was wrong. But let us flip the burden of evidence: there is zero evidence that the opposite of his thesis is remotely right. Much of all of this is a religious belief in the unconditional power of organized science, one that has replaced unconditional religious belief in organized religion.
Note that I do not believe that the argument set forth above should logically lead us to say that no money should be spent by government. This reasoning is more against teleology than against research in general. There has to be a form of spending that works. By some vicious turn of events, governments have gotten huge payoffs from research, but not as intended: just consider the Internet. And look at the recapture we have had of military expenditures with innovations and, as we will see, medical cures. It is just that functionaries are too teleological in the way they look for things (particularly the Japanese), and so are large corporations. Most large corporations, such as Big Pharma, are their own enemies.
Consider “blue sky” research, whereby research grants and funding are given to people, not projects, and spread in small amounts across many researchers. The sociologist of science Steve Shapin, who spent time in California observing venture capitalists, reports that investors tend to back entrepreneurs, not ideas. Decisions are largely a matter of opinion strengthened with “who you know” and “who said what”: to use the venture capitalists’ lingo, you bet on the jockey, not the horse. Why? Because innovations drift, and one needs flâneur-like abilities to keep capturing the opportunities that arise, rather than staying locked up in a bureaucratic mold. The significant venture capital decisions, Shapin showed, were made without real business plans. So if there was any “analysis,” it had to be of a backup, confirmatory nature. I myself spent some time with venture capitalists in California, with an eye on investing myself, and sure enough, that was the mold.
Visibly the money should go to the tinkerers, the aggressive tinkerers who you trust will milk the option.
Let us use statistical arguments and get technical for a paragraph. Payoffs from research come from Extremistan; they follow a power-law type of statistical distribution, with big, near-unlimited upside but, because of optionality, limited downside. Consequently, the payoff from research should necessarily be linear in the number of trials, not in the total funds involved in the trials. Since, as in Figure 7, the winner will have an explosive, uncapped payoff, the right approach requires a certain style of blind funding. It means the right policy would be what is called “one divided by n” or “1/N” style, spreading attempts across as large a number of trials as possible: if you face n options, invest in all of them in equal amounts.
5
Small amounts per trial, lots of trials, broader than you want. Why? Because in Extremistan, it is more important to be in something in a small amount than to miss it. As one venture capitalist told me: “The payoff can be so large that you can’t afford not to be in everything.”
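The 1/N logic can be made concrete with a short simulation (my own sketch, not from the text; the hit rate and tail exponent are illustrative assumptions). Each bet has a bounded downside, its small stake, and a heavy-tailed Pareto upside. Spreading the same budget over more trials does not change the expected return, but it collapses the odds of missing the explosive winner entirely, which is the venture capitalist's point: you can't afford not to be in everything.

```python
import random

def run_portfolio(budget, n_bets, hit_rate=0.02, alpha=1.3, rng=None):
    """One portfolio: split `budget` equally over `n_bets` long-shot options.

    Each bet loses its stake (bounded downside) unless it hits, in which
    case it pays a Pareto-tailed multiple of the stake (near-unlimited,
    "Extremistan" upside).
    """
    rng = rng or random.Random()
    stake = budget / n_bets
    payoff = 0.0
    for _ in range(n_bets):
        if rng.random() < hit_rate:
            payoff += stake * rng.paretovariate(alpha)
    return payoff

if __name__ == "__main__":
    rng = random.Random(0)
    runs = 10_000
    # Same budget, increasingly fine-grained 1/N spreading: watch how
    # often the portfolio ends with nothing at all.
    for n_bets in (5, 50, 500):
        results = [run_portfolio(100.0, n_bets, rng=rng) for _ in range(runs)]
        missed = sum(r == 0.0 for r in results) / runs
        print(f"{n_bets:>3} bets: missed everything in {missed:.0%} of runs")
```

With a 2 percent hit rate, five concentrated bets miss everything roughly nine times out of ten, while five hundred small bets essentially never do; being "in something in a small amount" beats missing it.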