Authors: Nicholas Carr
The mental functions that are losing the “survival of the busiest” brain cell battle are those that support calm, linear thought—the ones we use in traversing a lengthy narrative or an involved argument, the ones we draw on when we reflect on our experiences or contemplate an outward or inward phenomenon. The winners are those functions that help us speedily locate, categorize, and assess disparate bits of information in a variety of forms, that let us maintain our mental bearings while being bombarded by stimuli. These functions are, not coincidentally, very similar to the ones performed by computers, which are programmed for the high-speed transfer of data in and out of memory. Once again, we seem to be taking on the characteristics of a popular new intellectual technology.
ON THE EVENING of April 18, 1775, Samuel Johnson accompanied his friends James Boswell and Joshua Reynolds on a visit to Richard Owen Cambridge’s grand villa on the banks of the Thames outside London. They were shown into the library, where Cambridge was waiting to meet them, and after a brief greeting Johnson darted to the shelves and began silently reading the spines of the volumes arrayed there. “Dr. Johnson,” said Cambridge, “it seems odd that one should have such a desire to look at the backs of books.” Johnson, Boswell would later recall, “instantly started from his reverie, wheeled about, and replied, ‘Sir, the reason is very plain. Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it.’”[55]
The Net grants us instant access to a library of information unprecedented in its size and scope, and it makes it easy for us to sort through that library—to find, if not exactly what we were looking for, at least something sufficient for our immediate purposes. What the Net diminishes is Johnson’s primary kind of knowledge: the ability to know, in depth, a subject for ourselves, to construct within our own minds the rich and idiosyncratic set of connections that give rise to a singular intelligence.
THIRTY YEARS AGO, James Flynn, then the head of the political science department at New Zealand’s University of Otago, began studying historical records of IQ tests. As he dug through the numbers, stripping out the various scoring adjustments that had been made through the years, he discovered something startling: IQ scores had been rising steadily—and pretty much everywhere—throughout the century. Controversial when originally reported, the Flynn effect, as the phenomenon came to be called, has been confirmed by many subsequent studies. It’s real.
Ever since Flynn made his discovery, it has provided a ready-made brickbat to hurl at anyone who suggests that our intellectual powers may be on the wane: If we’re so dumb, why do we keep getting smarter?
The Flynn effect has been used to defend TV shows, video games, personal computers, and, most recently, the Internet. Don Tapscott, in Grown Up Digital, his paean to the first generation of “digital natives,” counters arguments that the extensive use of digital media may be dumbing kids down by pointing out, with a nod to Flynn, that “raw IQ scores have been going up three points a decade since World War II.”[1]
Tapscott’s right about the numbers, and we should certainly be heartened by the rise in IQ scores, particularly since the gains have been sharpest among segments of the population whose scores have lagged in the past. But there are good reasons to be skeptical of any claim that the Flynn effect proves that people are “smarter” today than they used to be or that the Internet is boosting the general intelligence of the human race. For one thing, as Tapscott himself notes, IQ scores have been going up for a very long time—since well before World War II, in fact—and the pace of increase has remained remarkably stable, varying only slightly from decade to decade. That pattern suggests that the rise probably reflects a deep and persistent change in some aspect of society rather than any particular recent event or technology. The fact that the Internet began to come into widespread use only about ten years ago makes it all the more unlikely that it has been a significant force propelling IQ scores upward.
Other measures of intelligence don’t show anything like the gains we’ve seen in overall IQ scores. In fact, even IQ tests have been sending mixed signals. The tests have different sections, which measure different aspects of intelligence, and performance on them has varied widely. Most of the increase in overall scores can be attributed to strengthening performance in tests involving the mental rotation of geometric forms, the identification of similarities between disparate objects, and the arrangement of shapes into logical sequences. Tests of memorization, vocabulary, general knowledge, and even basic arithmetic have shown little or no improvement.
Scores on other common tests designed to measure intellectual skills also seem to be either stagnant or declining. Scores on PSAT exams, which are given to high school juniors throughout the United States, did not increase at all during the years from 1999 to 2008, a time when Net use in homes and schools was expanding dramatically. In fact, while the average math scores held fairly steady during that period, dropping a fraction of a point, from 49.2 to 48.8, scores on the verbal portions of the test declined significantly. The average critical-reading score fell 3.3 percent, from 48.3 to 46.7, and the average writing-skills score dropped an even steeper 6.9 percent, from 49.2 to 45.8.[2]
Scores on the verbal sections of the SAT tests given to college-bound students have also been dropping. A 2007 report from the U.S. Department of Education showed that twelfth-graders’ scores on tests of three different kinds of reading—for performing a task, for gathering information, and for literary experience—fell between 1992 and 2005. Literary reading aptitude suffered the largest decline, dropping twelve percent.[3]
There are signs, as well, that the Flynn effect may be starting to fade even as Web use picks up. Research in Norway and Denmark shows that the rise in intelligence test scores began to slow in those countries during the 1970s and ’80s and that since the mid-1990s scores have either remained steady or fallen slightly.[4]
In the United Kingdom, a 2009 study revealed that the IQ scores of teenagers dropped by two points between 1980 and 2008, after decades of gains.[5]
Scandinavians and Britons have been among the world’s pacesetters in adopting high-speed Internet service and using multipurpose mobile phones. If digital media were boosting IQ scores, you’d expect to see particularly strong evidence in their results.
So what is behind the Flynn effect? Many theories have been offered, from smaller families to better nutrition to the expansion of formal education, but the explanation that seems most credible comes from James Flynn himself. Early in his research, he realized that his findings presented a couple of paradoxes. First, the steepness of the rise in test scores during the twentieth century suggests that our forebears must have been dimwits, even though everything we know about them tells us otherwise. As Flynn wrote in his book What Is Intelligence?, “If IQ gains are in any sense real, we are driven to the absurd conclusion that a majority of our ancestors were mentally retarded.”[6]
The second paradox stems from the disparities in the scores on different sections of IQ tests: “How can people get more intelligent and have no larger vocabularies, no larger stores of general information, no greater ability to solve arithmetical problems?”[7]
After mulling over the paradoxes for many years, Flynn came to the conclusion that the gains in IQ scores have less to do with an increase in general intelligence than with a transformation in the way people think about intelligence. Up until the end of the nineteenth century, the scientific view of intelligence, with its stress on classification, correlation, and abstract reasoning, remained fairly rare, limited to those who attended or taught at universities. Most people continued to see intelligence as a matter of deciphering the workings of nature and solving practical problems—on the farm, in the factory, at home. Living in a world of substance rather than symbol, they had little cause or opportunity to think about abstract shapes and theoretical classification schemes.
But, Flynn realized, that all changed over the course of the last century when, for economic, technological, and educational reasons, abstract reasoning moved into the mainstream. Everyone began to wear, as Flynn colorfully puts it, the same “scientific spectacles” that were worn by the original developers of IQ tests.[8]
Once he had that insight, Flynn recalled in a 2007 interview, “I began to feel that I was bridging the gulf between our minds and the minds of our ancestors. We weren’t more intelligent than they, but we had learnt to apply our intelligence to a new set of problems. We had detached logic from the concrete, we were willing to deal with the hypothetical, and we thought the world was a place to be classified and understood scientifically rather than to be manipulated.”[9]
Patricia Greenfield, the UCLA psychologist, came to a similar conclusion in her Science article on media and intelligence. Noting that the rise in IQ scores “is concentrated in nonverbal IQ performance,” which is “mainly tested through visual tests,” she attributed the Flynn effect to an array of factors, from urbanization to the growth in “societal complexity,” all of which “are part and parcel of the worldwide movement from smaller-scale, low-tech communities with subsistence economies toward large-scale, high-tech societies with commercial economies.”[10]
We’re not smarter than our parents or our parents’ parents. We’re just smart in different ways. And that influences not only how we see the world but also how we raise and educate our children. This social revolution in how we think about thinking explains why we’ve become ever more adept at working out the problems in the more abstract and visual sections of IQ tests while making little or no progress in expanding our personal knowledge, bolstering our basic academic skills, or improving our ability to communicate complicated ideas clearly. We’re trained, from infancy, to put things into categories, to solve puzzles, to think in terms of symbols in space. Our use of personal computers and the Internet may well be reinforcing some of those mental skills and the corresponding neural circuits by strengthening our visual acuity, particularly our ability to speedily evaluate objects and other stimuli as they appear in the abstract realm of a computer screen. But, as Flynn stresses, that doesn’t mean we have “better brains.” It just means we have different brains.[11]
Not long after Nietzsche bought his mechanical writing ball, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at boosting the efficiency of the plant’s machinists. With the grudging approval of Midvale’s owners, Taylor recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement. By breaking down each job into a sequence of small steps and then testing different ways of performing them, he created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.[1]
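A minimal sketch, in Python, of the selection step behind such a time-and-motion study might look like the following: time each candidate way of doing a job, step by step, then standardize on the fastest. The method names and step timings here are hypothetical, invented purely for illustration.

```python
# A minimal sketch of a time-and-motion comparison: each candidate method
# is a list of measured step times (in seconds); the "one best method" is
# simply the one with the lowest total time. All figures are hypothetical.

timed_trials = {
    "method_a": [12.4, 8.1, 20.7, 5.3],
    "method_b": [10.9, 9.6, 18.2, 5.0],
    "method_c": [14.0, 7.7, 22.5, 4.8],
}

def one_best_method(trials):
    """Return the name of the method with the lowest total measured time."""
    return min(trials, key=lambda name: sum(trials[name]))

best = one_best_method(timed_trials)
print(best, round(sum(timed_trials[best]), 1), "seconds per cycle")
```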
More than a century after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.”[2]
Once his system was applied to all acts of manual labor, Taylor assured his many followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”[3]
Taylor’s system of measurement and optimization is still very much with us; it remains one of the underpinnings of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual and social lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient, automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best way”—the perfect algorithm—to carry out the mental movements of what we’ve come to describe as knowledge work.
Google’s Silicon Valley headquarters—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. The company, says CEO Eric Schmidt, is “founded around the science of measurement.” It is striving to “systematize everything” it does.[4] “We try to be very data-driven, and quantify everything,” adds another Google executive, Marissa Mayer. “We live in a world of numbers.”[5] Drawing on the terabytes of behavioral data it collects through its search engine and other sites, the company carries out thousands of experiments a day and uses the results to refine the algorithms that increasingly guide how all of us find information and extract meaning from it.[6] What Taylor did for the work of the hand, Google is doing for the work of the mind.
The company’s reliance on testing is legendary. Although the design of its Web pages may appear simple, even austere, each element has been subjected to exhaustive statistical and psychological research. Using a technique called “split A/B testing,” Google continually introduces tiny permutations in the way its sites look and operate, shows different permutations to different sets of users, and then compares how the variations influence the users’ behavior—how long they stay on a page, the way they move their cursor about the screen, what they click on, what they don’t click on, where they go next. In addition to the automated online tests, Google recruits volunteers for eye-tracking and other psychological studies at its in-house “usability lab.” Because Web surfers evaluate the contents of pages “so quickly that they make most of their decisions unconsciously,” remarked two Google researchers in a 2009 blog post about the lab, monitoring their eye movements “is the next best thing to actually being able to read their minds.”[7]
Irene Au, the company’s director of user experience, says that Google relies on “cognitive psychology research” to further its goal of “making people use their computers more efficiently.”[8] Subjective judgments, including aesthetic ones, don’t enter into Google’s calculations. “On the web,” says Mayer, “design has become much more of a science than an art. Because you can iterate so quickly, because you can measure so precisely, you can actually find small differences and mathematically learn which one is right.”[9] In one famous trial, the company tested forty-one different shades of blue on its toolbar to see which shade drew the most clicks from visitors. It carries out similarly rigorous experiments on the text it puts on its pages. “You have to try and make words less human and more a piece of the machinery,” explains Mayer.[10]
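For readers who want to see the mechanics, here is a minimal sketch, in Python, of a split test of the kind described above: users are deterministically bucketed into one of two page variants, and the resulting click-through rates are compared. The hash-based bucketing, the variant labels, the traffic figures, and the two-proportion z-test are illustrative assumptions, not Google’s actual procedure.

```python
import hashlib
from statistics import NormalDist

def assign_variant(user_id: str, experiment: str = "toolbar_color") -> str:
    """Deterministically bucket a user into variant 'A' or 'B' (illustrative)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def compare_ctr(clicks_a, views_a, clicks_b, views_b):
    """Compare two click-through rates with a simple two-proportion z-test."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = (pooled * (1 - pooled) * (1 / views_a + 1 / views_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical click counts for two shades of toolbar blue.
print(assign_variant("user-42"))
print(compare_ctr(clicks_a=3200, views_a=100_000, clicks_b=3420, views_b=100_000))
```

The point of such a test is not the statistics themselves but the policy they serve: whichever variant earns more clicks, by a margin unlikely to be chance, becomes the new default.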
In his 1993 book Technopoly, Neil Postman distilled the main tenets of Taylor’s system of scientific management. Taylorism, he wrote, is founded on six assumptions: “that the primary, if not the only, goal of human labor and thought is efficiency; that technical calculation is in all respects superior to human judgment; that in fact human judgment cannot be trusted, because it is plagued by laxity, ambiguity, and unnecessary complexity; that subjectivity is an obstacle to clear thinking; that what cannot be measured either does not exist or is of no value; and that the affairs of citizens are best guided and conducted by experts.”[11]
What’s remarkable is how well Postman’s summary encapsulates Google’s own intellectual ethic. Only one tweak is required to bring it up to date. Google doesn’t believe that the affairs of citizens are best guided by experts. It believes that those affairs are best guided by software algorithms—which is exactly what Taylor would have believed had powerful digital computers been around in his day.
Google also resembles Taylor in the sense of righteousness it brings to its work. It has a deep, even messianic faith in its cause. Google, says its CEO, is more than a mere business; it is a “moral force.”[12] The company’s much-publicized “mission” is “to organize the world’s information and make it universally accessible and useful.”[13] Fulfilling that mission, Schmidt told the Wall Street Journal in 2005, “will take, current estimate, 300 years.”[14] The company’s more immediate goal is to create “the perfect search engine,” which it defines as “something that understands exactly what you mean and gives you back exactly what you want.”[15]
In Google’s view, information is a kind of commodity, a utilitarian resource that can, and should, be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can distill their gist, the more productive we become as thinkers. Anything that stands in the way of the speedy collection, dissection, and transmission of data is a threat not only to Google’s business but to the new utopia of cognitive efficiency it aims to construct on the Internet.
GOOGLE WAS BORN of an analogy—Larry Page’s analogy. The son of one of the pioneers of artificial intelligence, Page was surrounded by computers from an early age—he recalls being “the first kid in my elementary school to turn in a word-processed document”[16]—and went on to study engineering as an undergraduate at the University of Michigan. His friends remember him as being ambitious, smart, and “nearly obsessed with efficiency.”[17] While serving as president of Michigan’s engineering honor society, he spearheaded a brash, if ultimately futile, campaign to convince the school’s administrators to build a monorail through the campus. In the fall of 1995, Page headed to California to take a prized spot in Stanford University’s doctoral program in computer science. Even as a young boy, he had dreamed of creating a momentous invention, something that “would change the world.”[18] He knew there was no better place than Stanford, Silicon Valley’s frontal cortex, to make the dream come true.
It took only a few months for Page to land on a topic for his dissertation: the vast new computer network called the World Wide Web. Launched on the Internet just four years earlier, the Web was growing explosively—it had half a million sites and was adding more than a hundred thousand new ones every month—and the network’s incredibly complex and ever-shifting arrangement of nodes and links had come to fascinate mathematicians and computer scientists. Page had an idea that he thought might unlock some of its secrets. He had realized that the links on Web pages are analogous to the citations in academic papers. Both are signifiers of value. When a scholar, in writing an article, makes a reference to a paper published by another scholar, she is vouching for the importance of that other paper. The more citations a paper garners, the more prestige it gains in its field. In the same way, when a person with a Web page links to someone else’s page, she is saying that she thinks the other page is important. The value of any Web page, Page saw, could be gauged by the links coming into it.
Page had another insight, again drawing on the citations analogy: not all links are created equal. The authority of any Web page can be gauged by how many incoming links it attracts. A page with a lot of incoming links has more authority than a page with only one or two. The greater the authority of a Web page, the greater the worth of its own outgoing links. The same is true in academia: earning a citation from a paper that has itself been much cited is more valuable than receiving one from a less cited paper. Page’s analogy led him to realize that the relative value of any Web page could be estimated through a mathematical analysis of two factors: the number of incoming links the page attracted and the authority of the sites that were the sources of those links. If you could create a database of all the links on the Web, you would have the raw material to feed into a software algorithm that could evaluate and rank the value of all the pages on the Web. You would also have the makings of the world’s most powerful search engine.
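To make the idea concrete, here is a toy sketch, in Python, of the kind of link analysis Page describes: each page’s score is fed by the pages that link to it, a linking page’s influence is divided among its outgoing links, and a damping factor keeps the scores from oscillating. The four-page “web,” the damping factor, and the iteration count are illustrative assumptions; the algorithm Google actually runs handles complications, such as dangling pages and enormous sparse matrices, that are omitted here.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy link-analysis ranking: a page's score is fed by the scores of pages
    linking to it, with each linking page's score split evenly among its
    outgoing links and damped so the scores settle."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# A hypothetical four-page web: each key links to the pages in its list.
toy_web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

In this tiny graph, page “c,” which collects links from three other pages, ends up with the highest score, and page “a,” which receives the whole of c’s vote, comes second: incoming links matter, and so does the authority of where they come from.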
The dissertation never got written. Page recruited another Stanford graduate student, a math prodigy named Sergey Brin who had a deep interest in data mining, to help him build his search engine. In the summer of 1996, an early version of Google—then called BackRub—debuted on Stanford’s Web site. Within a year, BackRub’s traffic had overwhelmed the university’s network. If they were going to turn their search service into a real business, Page and Brin saw, they were going to need a lot of money to buy computing gear and network bandwidth. In the summer of 1998, a wealthy Silicon Valley investor came to the rescue, cutting them a check for a hundred grand. They moved their budding company out of their dorms and into a couple of spare rooms in a friend-of-a-friend’s house in nearby Menlo Park. In September they incorporated as Google Inc. They chose the name—a play on googol, the word for the number ten raised to the hundredth power—to highlight their goal of organizing “a seemingly infinite amount of information on the web.” In December, an article in PC Magazine praised the new search engine with the quirky name, saying it “has an uncanny knack for returning extremely relevant results.”[19]
Thanks to that knack, Google was soon processing most of the millions—and then billions—of Internet searches being conducted every day. The company became fabulously successful, at least as measured by the traffic running through its site. But it faced the same problem that had doomed many dot-coms: it hadn’t been able to figure out how to turn a profit from all that traffic. No one would pay to search the Web, and Page and Brin were averse to injecting advertisements into their search results, fearing it would corrupt Google’s pristine mathematical objectivity. “We expect,” they had written in a scholarly paper early in 1998, “that advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.”[20]
But the young entrepreneurs knew that they would not be able to live off the largesse of venture capitalists forever. Late in 2000, they came up with a clever plan for running small, textual advertisements alongside their search results—a plan that would require only a modest compromise of their ideals. Rather than selling advertising space for a set price, they decided to auction the space off. It wasn’t an original idea—another search engine, GoTo, was already auctioning ads—but Google gave it a new spin. Whereas GoTo ranked its search ads according to the size of advertisers’ bids—the higher the bid, the more prominent the ad—Google in 2002 added a second criterion. An ad’s placement would be determined not only by the amount of the bid but by the frequency with which people actually clicked on the ad. That innovation ensured that Google’s ads would remain, as the company put it, “relevant” to the topics of searches. Junk ads would automatically be screened from the system. If searchers didn’t find an ad relevant, they wouldn’t click on it, and it would eventually disappear from Google’s site.
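Here is a minimal sketch, in Python, of that second criterion: order the ads by the product of the advertiser’s bid and the ad’s observed click-through rate, rather than by the bid alone. The advertisers, bids, and click counts are hypothetical, and real ad auctions involve pricing rules and quality signals far beyond this.

```python
# Hypothetical ads: each has a bid (what the advertiser pays per click) and
# an observed click history from past impressions.
ads = [
    {"advertiser": "acme",  "bid": 2.50, "clicks": 40,  "impressions": 10_000},
    {"advertiser": "bolt",  "bid": 1.10, "clicks": 300, "impressions": 10_000},
    {"advertiser": "crown", "bid": 4.00, "clicks": 5,   "impressions": 10_000},
]

def rank_ads(ads):
    """Order ads by expected value per impression: bid times click-through rate."""
    def score(ad):
        ctr = ad["clicks"] / ad["impressions"]
        return ad["bid"] * ctr
    return sorted(ads, key=score, reverse=True)

for ad in rank_ads(ads):
    print(ad["advertiser"], round(ad["bid"] * ad["clicks"] / ad["impressions"], 4))
```

Ranked by bid alone, “crown” would come first; weighting by click-through rate pushes the rarely clicked ad down and the frequently clicked one up, which is the screening effect the passage describes.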
The auction system, named AdWords, had another, very important result: by tying ad placement to clicks, it increased click-through rates substantially. The more often people clicked on an ad, the more frequently and prominently the ad would appear on search result pages, bringing even more clicks. Since advertisers paid Google by the click, the company’s revenues soared. The AdWords system proved so lucrative that many other Web publishers contracted with Google to place its “contextual ads” on their sites as well, tailoring the ads to the content of each page. By the end of the decade, Google was not just the largest Internet company in the world; it was one of the largest media companies, taking in more than $22 billion in sales a year, almost all of it from advertising, and turning a profit of about $8 billion. Page and Brin were each worth, on paper, more than $10 billion.