The Human Age
by Diane Ackerman

A man with curly hair, chestnut-brown eyes, and a dimpled chin, Hod welcomes me into his cheerful office: tall windows, a work desk, a Dell computer with a triptych of screens, window boxes for homegrown tomatoes in summer, and a wall of bookshelves, atop which sits an array of student design projects. To me they look unfamiliar but strangely beautiful and compelling, like the merchandise in an extraterrestrial bazaar. A surprisingly tall white table and its chairs invite one to climb aboard and romp with ideas. At Lipson’s round table, if you’re under six feet tall, your feet will automatically leave the planet, which is good, I think, because even this limited levitation aids the imagination, untying gravity just enough to make magic carpet rides, wing-walkers, and spaceships humble as old rope. There’s a reason we cling to such elevating turns of phrase as “I was walking on air,” “That was uplifting,” “heightened awareness,” “surmounting obstacles,” or “My feet never touched the ground.” The mental mischief of creativity—which thrives on such fare as deep play, risk, a superfluity of ideas, the useful application of obsession, and willingly backtracking or hitting dead ends without losing heart—is also fueled by subtle changes in perception. So why not cast off mental moorings and hover a while each day?

What’s the next hack for a rambunctious species full of whiz kids with digital dreams? Lipson is fascinated by a different branch of the robotic evolutionary tree than the tireless servant, army of skilled hands, or savant of finicky incisions with which we have become familiar. Over ten million Roomba vacuum cleaners have already been sold to homeowners (who sometimes find them being ridden as child or cat chariots). We watch with fascination as robotic sea scouts explore the deep abysses (or sunken ships), and NOAA’s robots glide underwater to monitor the strength of hurricanes. Google’s robotics division owns a medley of firms, including some minting life-size humanoids—because, in public spaces, we’re more likely to ask a cherub-faced robot for info than a touchscreen. Both Apple and Amazon are diving into advanced robotics as well. The military has invested heavily in robots as spies, bionic gear, drones, pack animals, and bomb disposers. Robots already work for us with dedicated precision in factory assembly lines and operating rooms. In cross-cultural studies, the elderly will happily adopt robotic pets and even babies, though they aren’t keen on robot caregivers at the moment.

All of that, to Lipson, is child’s play. His focus is on a self-aware species, Robot sapiens. Our own lineage branched off many times from our apelike ancestors, and so will the flowering, subdividing lineage of robots, which perhaps needs its own Linnaean classification system. The first branch in robot evolution could split between AI and AL—artificial intelligence and artificial life. Lipson stands right at that fork in the road, whose path he’s famous for helping to divine and explore in one of the great digital adventures of our age. It’s the ultimate challenge, in terms of engineering, in terms of creation.

“At the end of the day,” he says with a nearly illegible smile, “I’m trying to recreate life in a synthetic environment—not necessarily something that will look human. I’m not trying to create a person who will walk out the door and say ‘Hello!’ with all sorts of anthropomorphic features, but rather features that are truly alive given the principles of life—traits and behaviors they have evolved on their own. I don’t want to build something, turn it on, and suddenly it will be alive. I don’t want to program it.”

A lot of robotics today, and a lot of science fiction, is about a human who schemes at a workbench in a dingy basement, digitally darning scraps, and then figuring out how to command his scarecrow to do his bidding. Or a mastermind who builds the perfect robots that eventually go haywire in barely discernible stages and start to massacre us, sometimes on Earth, often in space. It assumes an infinite power that humans have (and so can lose) over the machine.

Engineering’s orphans, Lipson’s brainchildren would be the first generation of truly self-reliant machines, gifted with free will by their soft, easily damaged creators. These synthetic souls would fend for themselves, learn, and grow—mentally, socially, physically—in a body not designed by us or by nature, but by fellow computers.

That may sound sci-fi, but Lipson is someone who relishes not only pushing the envelope but tinkering with its dimensions, fabric, inertia, and character. For instance, bothered by a question that nags sci-fi buffs, engineers, and harried parents alike—Where are all the robots we were told would be working for us by now?—he decided to go about robotics in a new way. And also in the most ancient of ways, by summoning the “mother of all designers, Evolution,” and asking a primordial soup of robotic bits and pieces to zing through millions of generations of fluky mutations, goaded by natural selection. Of course, natural evolution is a slapdash and glacially slow mother, yielding countless bottlenecks for every success story. But computers can be programmed to “evolve” at great speed with digital finesse, and adapt to all the rigors of their environment.
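The loop behind that kind of simulated evolution is simple to sketch, even if real evolutionary robotics is vastly more elaborate. The toy Python below is only an illustration of the general recipe—random variation, selection, repetition—with an invented “body plan” genome and a stand-in fitness function; it is not Lipson’s actual system.

```python
import random

GENOME_LEN = 8      # a toy "body plan": eight joint/limb parameters
POP_SIZE = 50       # how many candidate robots exist at once
GENERATIONS = 200
MUTATION_RATE = 0.1

def random_genome():
    """A random starting design: the 'primordial soup'."""
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome):
    """Stand-in for simulating the robot and scoring it (e.g., distance walked).
    Here we simply reward genomes whose parameters approach an arbitrary target."""
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome):
    """Copy a parent, randomly nudging a few parameters ('fluky mutations')."""
    return [g + random.gauss(0.0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Selection: keep the better-scoring half of the designs...
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # ...and refill the population with mutated offspring of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print("best fitness after", GENERATIONS, "generations:", fitness(best))
```

In research systems the fitness call is the expensive part—each candidate body is dropped into a physics simulator and judged on how well it moves—but the evolutionary scaffolding around it looks much like this.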

Would they be able to taste and smell? I wonder, realizing at once how outmoded the very question is. Taste buds rise like flaky volcanoes on different regions of the tongue, with bitter at the back, lest we swallow poisons. How hard would it be to evolve a suite of specialized “taste buds” that bear no resemblance to flesh? Flavor engineers at Nestlé in Switzerland have already created an electronic “taster” of espresso, which analyzes the gas different pulls of ristretto give off when heated, translating each bouquet of ions into such human-friendly, visceral descriptions as “roasted,” “flowery,” “woody,” “toffee,” and “acidy.”

However innovative, Lipson’s entities are still primitive when compared to a college sophomore or a bombardier beetle. But they’re the essential groundwork for a culture, maybe a hundred years from now, in which some robots will do our bidding, and others will share our world as a parallel species, one that’s creative and curious, moody and humorous, quick-witted, multitalented, and 100 percent synthetic. Will we regard them as life, as a part of nature, if they’re not carbon-based—as are all of Earth’s plants and animals? Can they be hot-blooded without blood? How about worried, petulant, sly, envious, downright cussed? The future promises fleets of sovereign silicants and, ultimately, self-governing, self-reliant robotic angels and varmints, sages and stooges. To be able to ponder such possibilities is a testament to the infinite agility of matter and its great untapped potential.

Whenever Lipson talks of robots being truly alive, gently stressing the word, I don’t hear Dr. Frankenstein speaking, at one in the morning, as the rain patters dismally against the panes,

when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs. How can I describe my emotions at this catastrophe, or how delineate the wretch whom with such infinite pains and care I had endeavoured to form?

As in the book’s epigraph, lines from Milton’s Paradise Lost: “Did I request thee, Maker, from my clay / To mould Me man?” Mary Shelley suggests that the parent of a monster is ultimately responsible for all the suffering and evil he has unleashed. From the age of seventeen to twenty-one, Shelley was herself consumed by physical creation and literally sparking life, becoming pregnant and giving birth repeatedly, only to have three of her four children die soon after birth. She was continually pregnant, nursing, or mourning—creating and being depleted by her own creations. That complex visceral state fed her delicately horrifying tale.

In her day, scientists were doing experiments in which they animated corpses with electricity, fleetingly bringing them back to life, or so it seemed. Whatever the image of Frankenstein’s monster may have meant to Shelley, it has seized the imagination of people ever since, symbolizing something unnatural, Promethean, monstrous that we’ve created by playing God, or from evil motives or through simple neglect (Dr. Frankenstein’s sin wasn’t in creating the monster but in abandoning it). Something we’ve created that, in the end, will extinguish us. And that’s certainly been a key theme in science-fiction novels and films about robots, androids, golems, zombies, and homicidal puppets. Such ethical implications aren’t Lipson’s concern; that’s mainly for seminars and summits in a future he won’t inhabit. But such discussions are already beginning on some campuses. We’ve entered the age of such college disciplines as “robo-ethics” and Lipson’s specialty, “evolutionary robotics.”

Has it come to this, I wonder, creating novel life forms to prove we can, because a restless mind, left to its own devices and given enough time, is bound to create equally restless devices, just to see what happens? It’s a new threshold of creators creating creative beings.

“Creating life is certainly a tall pinnacle to surmount. Is it also a bit like having children?” I ask Lipson.

“In a different way. . . . Having children isn’t so much an intellectual challenge, but other kinds of challenges.” His eyebrows lift slightly to underline the understatement, and a memory seems to flit across his eyes.

“Yes, but you set them in motion and they don’t remake themselves exactly, but . . .”

“You have very little control. You can’t program a child . . .”

“But you can shape its brain, change the wiring.”

“Maybe you can shape some of the child’s experiences, but there are others you can’t control, and a lot of the personality is in the genes: nature, not nurture. Certainly in the next couple of decades we won’t be programming machines, but . . . like children, exactly . . . we’ll shape their experiences a little bit, and they’ll grow on their own and do what they do.”

“And they’ll simply adjust to whatever job is required?”

“Exactly. Adaptation and adjustment, and with that will come other issues, and a lot of problems.” He smiles the smile of someone who has seen dust-ups on a playground. “Emotions will be a big part of that.”

“You think we’ll get to the point where machines have deep emotions?”

“They will have deep emotions,” Hod says, certain as the tides. “But they won’t necessarily be human emotions. And also machines will not always do what we want them to do. This is already happening. Programming something is the ultimate control. You get to make it do exactly what you want when you want it. This is how robots in factories are programmed to work today. But the more we give away some of our control over how the machine learns . . .”

As a cool gust of October air wafts through the screenless window, carrying a faint scent of crumbling magnolia leaves and damp earth, it trails gooseflesh across my wrist.

“Let me close the window.” Hod slides gingerly off the tall chair as if from a soda fountain seat and closes the gaping mouth of the window.

We were making eye contact; how did he notice my gooseflesh? Stare at something and only the center of your vision is in focus; the periphery blurs. Is his visual compass wider than most people’s, or is he just being a thoughtful host and, sensing a breeze himself, reasoning that since I’m sitting closer to the window I might be feeling chillier? As we talk, his astonishingly engineered biological brain—with its flexible, self-repairing, self-assembling, regenerating components that won’t leave toxic metals when they decompose—is working hard on several fronts: picturing what he wants to say in all of its complexity; rummaging through a sea of raw and thought-rinsed ideas; gauging my level of knowledge—very low in his field; choosing the best way to translate his thoughts into words for this newly met and unfamiliar listener; reading my unconscious cues; rethinking some of his words when they’re barely uttered; revising them right as they’re leaving his mouth, in barely perceptible changes to a word’s opening sound; choosing the ones most accurate on several levels (literally, professionally, emotionally, intellectually) whose meaning I may nonetheless give subtle signs of not really understanding—signs visible to him though unconscious to me, as they surface from a dim warehouse of my previous thoughts and experiences and a vocabulary in which each word carries its own unique emotional valence—while at the same time he’s also forming impressions of me, and gauging the impression I might be forming of him . . .

This is called a “conversation,” the spoken exchange of thoughts, opinions, and feelings. It’s hard to imagine robots doing the same on as many planes of meaning, layered emotions, and spring-loaded memories.

Beyond the windows with their magenta-colored accordion blinds, and the narrow Zen roof garden of rounded stones, twenty yards across the courtyard and street, behind a flimsy orange plastic fence, giant earth-diggers and men in hard hats are tearing up rock and soil with the help of machines wielding fierce toothy jaws. Such brutish dinosaurs will one day give way to rational machines that can transform themselves into whatever the specific task requires—perhaps the sudden repair of an unknown water pipe—without a boss telling them what to do. By then the din of jackhammers will also be antiquated, though I’m sure our hackles will still twitch at the scrape of clawlike metal talons on rock.

“When a machine learns from experience, there are few guarantees about whether or not it will learn what you want,” Lipson continues as he remounts his chair. “And it might learn something that you didn’t want it to learn, and yet it can’t forget. This is just the beginning.”

I shudder at the thought of traumatized robots.

He continues, “It’s the unspoken Holy Grail of a lot of roboticists—to create just this kind of self-awareness, to create consciousness.”

What do roboticists like Lipson mean when they speak of “conscious” robots? Neuroscientists and philosophers are still squabbling over how to define consciousness in humans and animals. On July 7, 2012, a group of neuroscientists met at the University of Cambridge to declare officially that nonhuman animals “including all mammals and birds, and many other creatures, including octopuses” are conscious. To formalize their position, they signed a document entitled “The Cambridge Declaration on Consciousness in Non-Human Animals.”
