
But a technological solution to the problem of information overload was, Bush argued, on the horizon: “The world has arrived at an age of cheap complex devices of great reliability; and something is bound to come of it.” He proposed a new kind of personal cataloguing machine, called a memex, that would be useful not only to scientists but to anyone employing “logical processes of thought.” Incorporated into a desk, the memex, Bush wrote, “is a device in which an individual stores [in compressed form] all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.” On top of the desk are “translucent screens” onto which are projected images of the stored materials as well as “a keyboard” and “sets of buttons and levers” to navigate the database. The “essential feature” of the machine is its use of “associative indexing” to link different pieces of information: “Any item may be caused at will to select immediately and automatically another.” This process “of tying two things together is,” Bush emphasized, “the important thing.”[47]
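In modern terms, Bush’s “associative indexing” is a bidirectional link structure over stored records. A loose sketch in Python, offered only as an illustration (the memex itself was an imagined photomechanical desk, not software; the class and method names here are invented):

```python
# A loose modern rendering of Bush's "associative indexing": every stored
# item can be tied to any other, and selecting an item immediately recalls
# everything joined to it.

class Memex:
    def __init__(self):
        self.items = {}    # item id -> stored record (book, note, photograph)
        self.trails = {}   # item id -> ids of associated items

    def store(self, item_id, record):
        self.items[item_id] = record
        self.trails.setdefault(item_id, [])

    def tie(self, a, b):
        """Bush's 'important thing': join two items so each recalls the other."""
        self.trails[a].append(b)
        self.trails[b].append(a)

    def select(self, item_id):
        """Retrieve an item together with everything tied to it."""
        return self.items[item_id], [self.items[i] for i in self.trails[item_id]]

memex = Memex()
memex.store("article", "As We May Think")
memex.store("note", "marginal comment on trails")
memex.tie("article", "note")
print(memex.select("article"))  # ('As We May Think', ['marginal comment on trails'])
```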

With his memex, Bush anticipated both the personal computer and the hypermedia system of the World Wide Web. His article inspired many of the original developers of PC hardware and software, including such early devotees of hypertext as the famed computer engineer Douglas Engelbart and HyperCard’s inventor, Bill Atkinson. But even though Bush’s vision has been fulfilled to an extent beyond anything he could have imagined in his own lifetime—we are surrounded by the memex’s offspring—the problem he set out to solve, information overload, has not abated. In fact, it’s worse than ever. As David Levy has observed, “The development of personal digital information systems and global hypertext seems not to have solved the problem Bush identified but exacerbated it.”[48]

In retrospect, the reason for the failure seems obvious. By dramatically reducing the cost of creating, storing, and sharing information, computer networks have placed far more information within our reach than we ever had access to before. And the powerful tools for discovering, filtering, and distributing information developed by companies like Google ensure that we are forever inundated by information of immediate interest to us—and in quantities well beyond what our brains can handle. As the technologies for data processing improve, as our tools for searching and filtering become more precise, the flood of relevant information only intensifies. More of what is of interest to us becomes visible to us. Information overload has become a permanent affliction, and our attempts to cure it just make it worse. The only way to cope is to increase our scanning and our skimming, to rely even more heavily on the wonderfully responsive machines that are the source of the problem. Today, more information is “available to us than ever before,” writes Levy, “but there is less time to make use of it—and specifically to make use of it with any depth of reflection.”[49] Tomorrow, the situation will be worse still.

It was once understood that the most effective filter of human thought is time. “The best rule of reading will be a method from nature, and not a mechanical one,” wrote Emerson in his 1858 essay “Books.” All writers must submit “their performance to the wise ear of Time, who sits and weighs, and ten years hence out of a million of pages reprints one. Again, it is judged, it is winnowed by all the winds of opinion, and what terrific selection has not passed on it, before it can be reprinted after twenty years, and reprinted after a century!”[50] We no longer have the patience to await time’s slow and scrupulous winnowing. Inundated at every moment by information of immediate interest, we have little choice but to resort to automated filters, which grant their privilege, instantaneously, to the new and the popular. On the Net, the winds of opinion have become a whirlwind.
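The automated filters in question rank by machine-measurable signals, chiefly popularity and recency, rather than by time’s slow winnowing. A toy scorer in Python (the formula is invented for this sketch, not drawn from any real engine):

```python
import math, time

def filter_score(item, now=None, half_life_days=7.0):
    """Toy relevance score that privileges the new and the popular:
    log-damped popularity multiplied by exponential decay on age."""
    now = now or time.time()
    age_days = (now - item["posted_at"]) / 86400
    return math.log1p(item["popularity"]) * math.exp(-age_days * math.log(2) / half_life_days)

# A decade-old page that time has winnowed well scores near zero;
# a fresh, much-linked item floats to the top.
items = [
    {"title": "fresh hot take", "popularity": 900, "posted_at": time.time() - 3600},
    {"title": "classic essay", "popularity": 5000, "posted_at": time.time() - 10 * 365 * 86400},
]
print(max(items, key=filter_score)["title"])  # fresh hot take
```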

Once the train had disgorged its cargo of busy men and steamed out of the Concord station, Hawthorne tried, with little success, to return to his deep state of concentration. He glimpsed an anthill at his feet and, “like a malevolent genius,” tossed a few grains of sand onto it, blocking the entrance. He watched “one of the inhabitants,” returning from “some public or private business,” struggle to figure out what had become of his home: “What surprise, what hurry, what confusion of mind, are expressed in his movement! How inexplicable to him must be the agency which has effected this mischief!” But Hawthorne was soon distracted from the travails of the ant. Noticing a change in the flickering pattern of shade and sun, he looked up at the clouds “scattered about the sky” and discerned in their shifting forms “the shattered ruins of a dreamer’s Utopia.”

 

In 2007, the American Association for the Advancement of Science invited Larry Page to deliver the keynote address at its annual conference, the country’s most prestigious meeting of scientists. Page’s speech was a rambling, off-the-cuff affair, but it provided a fascinating glimpse into the young entrepreneur’s mind. Once again finding inspiration in an analogy, he shared with the audience his conception of human life and human intellect. “My theory is that, if you look at your programming, your DNA, it’s about 600 megabytes compressed,” he said, “so it’s smaller than any modern operating system, smaller than Linux or Windows…and that includes booting up your brain, by definition. So your program algorithms probably aren’t that complicated; [intelligence] is probably more about overall computation.”[51]
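Page’s “600 megabytes” is back-of-the-envelope arithmetic: the human genome runs to roughly three billion base pairs, and each of the four bases encodes two bits. A quick check in Python, with the base-pair count and compression ratio as assumed round numbers rather than Page’s stated inputs:

```python
# Back-of-the-envelope check on Page's "600 megabytes" figure.
base_pairs = 3.1e9        # approximate length of the human genome (assumed)
bits_per_base = 2         # four bases (A, C, G, T) -> 2 bits each
raw_mb = base_pairs * bits_per_base / 8 / 1e6
print(f"raw: {raw_mb:.0f} MB")                # ~775 MB uncompressed

# The genome is highly repetitive, so even a modest compression
# saving (an assumed ~25% here) lands near Page's round number.
print(f"compressed: {raw_mb * 0.75:.0f} MB")  # ~581 MB
```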

The digital computer long ago replaced the clock, the fountain, and the factory machine as our metaphor of choice for explaining the brain’s makeup and workings. We so routinely use computing terms to describe our brains that we no longer even realize we’re speaking metaphorically. (I’ve referred to the brain’s “circuits,” “wiring,” “inputs,” and “programming” more than a few times in this book.) But Page’s view is an extreme one. To him, the brain doesn’t just resemble a computer; it is a computer. His assumption goes a long way toward explaining why Google equates intelligence with data-processing efficiency. If our brains are computers, then intelligence can be reduced to a matter of productivity—of running more bits of data more quickly through the big chip in our skull. Human intelligence becomes indistinguishable from machine intelligence.

Page has from the start viewed Google as an embryonic form of artificial intelligence. “Artificial intelligence would be the ultimate version of Google,” he said in a 2000 interview, long before his company’s name had become a household word. “We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”[52] In a 2003 speech at Stanford, he went a little further in describing his company’s ambition: “The ultimate search engine is something as smart as people—or smarter.”[53] Sergey Brin, who says he began writing artificial-intelligence programs in middle school, shares his partner’s enthusiasm for creating a true thinking machine.[54] “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off,” he told a Newsweek reporter in 2004.[55] In a television interview around the same time, Brin went so far as to suggest that the “ultimate search engine” would look a lot like Stanley Kubrick’s HAL. “Now, hopefully,” he said, “it would never have a bug like HAL did where he killed the occupants of the spaceship. But that’s what we’re striving for, and I think we’ve made it part of the way there.”[56]

The desire to build a HAL-like system of artificial intelligence may seem strange to most people. But it’s a natural ambition, even an admirable one, for a pair of brilliant young computer scientists with vast quantities of cash at their disposal and a small army of programmers and engineers in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to, in Eric Schmidt’s words, “us[e] technology to solve problems that have never been solved before,”[57] and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?

Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by artificial intelligence is as unsettling as it is revealing. It underscores the firmness and the certainty with which Google holds to its Taylorist belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. “Human beings are ashamed to have been born instead of made,” the twentieth-century philosopher Günther Anders once observed, and in the pronouncements of Google’s founders we can sense that shame as well as the ambition it engenders.[58] In Google’s world, which is the world we enter when we go online, there’s little place for the pensive stillness of deep reading or the fuzzy indirection of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive—and better algorithms to steer the course of its thought.

“Everything that human beings are doing to make it easier to operate computer networks is at the same time, but for different reasons, making it easier for computer networks to operate human beings.”[59] So wrote George Dyson in Darwin Among the Machines, his 1997 history of the pursuit of artificial intelligence. Eight years after the book came out, Dyson was invited to the Googleplex to give a talk commemorating the work of John von Neumann, the Princeton mathematician who in 1945, building on the work of Alan Turing, drew up the first detailed plan for a modern computer. For Dyson, who has spent much of his life speculating about the inner lives of machines, the visit to Google must have been exhilarating. Here, after all, was a company eager to deploy its enormous resources, including many of the brightest computer scientists in the world, to create an artificial brain.

But the visit left Dyson troubled. Toward the end of an essay he wrote about the experience, he recalled a solemn warning that Turing had made in his paper “Computing Machinery and Intelligence.” In our attempts to build intelligent machines, the mathematician had written, “we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children.” Dyson then relayed a comment that “an unusually perceptive friend” had made after an earlier visit to the Googleplex: “I thought the coziness to be almost overwhelming. Happy Golden Retrievers running in slow motion through water sprinklers on the lawn. People waving and smiling, toys everywhere. I immediately suspected that unimaginable evil was happening somewhere in the dark corners. If the devil would come to earth, what place would be better to hide?”[60] The reaction, though obviously extreme, is understandable. With its enormous ambition, its immense bankroll, and its imperialistic designs on the world of knowledge, Google is a natural vessel for our fears as well as our hopes. “Some say Google is God,” Sergey Brin has acknowledged. “Others say Google is Satan.”[61]

So what is lurking in the dark corners of the Googleplex? Are we on the verge of the arrival of an AI? Are our silicon overlords at the door? Probably not. The first academic conference dedicated to the pursuit of artificial intelligence was held back in the summer of 1956—on the Dartmouth campus—and it seemed obvious at the time that computers would soon be able to replicate human thought. The mathematicians and engineers who convened the month-long conclave sensed that, as they wrote in a statement, “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”[62] It was just a matter of writing the right programs, of rendering the conscious processes of the mind into the steps of algorithms. But despite years of subsequent effort, the workings of human intelligence have eluded precise description. In the half century since the Dartmouth conference, computers have advanced at lightning speed, yet they remain, in human terms, as dumb as stumps. Our “thinking” machines still don’t have the slightest idea what they’re thinking. Lewis Mumford’s observation that “no computer can make a new symbol out of its own resources” remains as true today as when he said it in 1967.[63]

But the AI advocates haven’t given up. They’ve just shifted their focus. They’ve largely abandoned the goal of writing software programs that replicate human learning and other explicit features of intelligence. Instead, they’re trying to duplicate, in the circuitry of a computer, the electrical signals that buzz among the brain’s billions of neurons, in the belief that intelligence will then “emerge” from the machine as the mind emerges from the physical brain. If you can get the “overall computation” right, as Page said, then the algorithms of intelligence will write themselves. In a 1996 essay on the legacy of Kubrick’s 2001, the inventor and futurist Ray Kurzweil argued that once we’re able to scan a brain in sufficient detail to “ascertain the architecture of interneuronal connections in different regions,” we’ll be able to “design simulated neural nets that will operate in a similar fashion.” Although “we can’t yet build a brain like HAL’s,” Kurzweil concluded, “we can describe right now how we could do it.”[64]
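The strategy described here swaps symbolic programming for simulation: model enough neurons and their connections, and let intelligent behavior emerge from their collective activity. A heavily simplified sketch of the idea in Python, using a few leaky integrate-and-fire units with random weights (all parameters are invented for illustration; real whole-brain emulation would require billions of units and measured, not random, connectivity):

```python
import random

# A toy "simulate the neurons, let behavior emerge" network:
# leaky integrate-and-fire units wired with random weights.

N, THRESHOLD, LEAK = 50, 1.0, 0.9
weights = [[random.gauss(0, 0.3) for _ in range(N)] for _ in range(N)]
potential = [0.0] * N
fired = [random.random() < 0.2 for _ in range(N)]  # seed some initial activity

for step in range(20):
    new_fired = []
    for i in range(N):
        # Each unit leaks stored charge, then integrates input from
        # every unit that fired on the previous step.
        potential[i] = potential[i] * LEAK + sum(
            weights[j][i] for j in range(N) if fired[j]
        )
        spike = potential[i] >= THRESHOLD
        if spike:
            potential[i] = 0.0  # reset after firing
        new_fired.append(spike)
    fired = new_fired
    print(f"step {step:2d}: {sum(fired):2d} units fired")
```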
