On Monsters: An Unnatural History of Our Worst Fears

The current advances in robotics have led many people (and scores of Hollywood films) to envision a time when a race of artificial slaves will rise up and overthrow their human masters.[3] The imagined scenario raises all kinds of interesting questions about free will and the evolution of self-agency in a digital system. It also rehearses ethical questions about slavery, questions that have all too real and painful associations with our recent national history. But with regard to the monsters issue, it really turns on the fear we have that something we control will twist around and start to control us.

Before he was governor of California, Arnold Schwarzenegger was best known for playing an unhinged robot in three Terminator films. The original 1984 film, by James Cameron, has spawned one of the most lucrative franchises in robot-loathing history, including a Niagara of films, books, games, and comics. The story begins in the not-too-distant future, when robots rule the world. An artificial intelligence computer, named Skynet, evolves enough nefarious intentionality to pursue the annihilation of humankind. John Connor, a rebel human, develops an effective resistance movement called Tech-Com. As the tide turns toward human emancipation, the machines create a robot assassin, a terminator, to go back in time to kill John Connor’s mother before John is born. If we set aside the ludicrous time-travel aspect of the story, we have the always emotionally satisfying battle between man and machine. The Matrix series is yet another mythos about the dehumanizing power nascent in computers and robotic machines and our fears of being turned into mere nodes on a staggering digital grid (the old fear of reductionism).

A nice variation on the imagined future of human slavery is the 2008 Pixar film Wall-E, which humorously envisions a time when machines have made life entirely too comfortable for humans. In this vision of our future, humans have become so immersed in virtual reality that they do not really walk or stand up or even interact with each other. Humans have become infantilized by convenience-enhancing technology, and they live almost entirely in the digital simulacrum. Generations of life lived on comfy chairs sipping fattening drinks have actually modified the human body; everyone is rendered an obese blob, incapable of physical activity, whose interior mental life has been hijacked by the manipulations of mass-media consumer interests. The technology-based dehumanization is not one of oppressive slavery but of deadening anesthesia. When the humans eventually seek to reclaim their humanity, the computers conspire against them.

Isaac Asimov famously framed the anxiety about robot autonomy when he imbued all his fictional robots with the “three laws of robotics.”[4] In his 1950s stories Asimov gave us a more nuanced version of robots, one that diverged from the usual Frankenstein story of terrible comeuppance. Robots were programmed with the following laws: (1) a robot may not injure a human being, or, through inaction, allow a human being to come to harm; (2) a robot must obey orders given to it by a human being, except where such orders conflict with the first law; (3) a robot must protect its own existence, except where such protection conflicts with the first or second law. These rules produce some fairly sympathetic robots and some interesting scenarios that Asimov explored in his series, but the worst-case scenario, robot insurgence, has been the preferred version of our AI future.
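
Asimov’s laws amount to a fixed priority ordering over a robot’s possible actions: a lower law is consulted only when no higher law has already ruled the action out. A minimal sketch of that ordering, purely illustrative and with hypothetical action descriptions rather than anything drawn from Asimov’s fiction or from a real control system, might look like this:

```python
# Illustrative sketch only: Asimov's three laws treated as a fixed priority filter.
# The Action fields and the example actions are hypothetical, not from Asimov's
# stories or from any real robot control system.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False          # would carrying this out injure a human?
    allows_human_harm: bool = False    # would it let a human come to harm through inaction?
    prevents_human_harm: bool = False  # does it rescue a human from harm?
    ordered_by_human: bool = False     # was it commanded by a person?
    endangers_robot: bool = False      # would it damage or destroy the robot?

def permitted(a: Action) -> bool:
    # First Law: never injure a human or, through inaction, allow one to come to harm.
    if a.harms_human or a.allows_human_harm:
        return False
    # Second Law: obey human orders unless they conflict with the First Law
    # (such conflicts were already rejected above).
    # Third Law: protect the robot's own existence, unless self-sacrifice is
    # demanded by the First Law (saving a human) or the Second Law (an order).
    if a.endangers_robot and not (a.prevents_human_harm or a.ordered_by_human):
        return False
    return True

if __name__ == "__main__":
    print(permitted(Action("open the airlock on a crewed deck", harms_human=True)))       # False
    print(permitted(Action("stand idle while a person drowns", allows_human_harm=True)))  # False
    print(permitted(Action("walk into the fire as ordered",
                           ordered_by_human=True, endangers_robot=True)))                 # True
```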

All this makes some highly dubious assumptions about the emergence of conscious intentionality and free will in artificial intelligence. But lest we shrug it all off as sci-fi nonsense, we should take note that robot drones are already fighting battles in Iraq and Afghanistan. American uncrewed aerial vehicles (UAVs) and robot tanks have already been crucial soldiers in battles in Fallujah and in the Tora Bora caves. In fact, the U.S. Department of Defense projects that by 2015 one-third of its fighting force will be robot vehicles, and by 2035 it expects to have autonomous humanoid robots climbing onto the battlefield.[5]

Autonomous robots will be free-standing rather than cabled directly to a human operator. They will operate according to hierarchically stacked rules of behavior programmed by Boolean binary logic. Insect-size robots, and even some small humanoids, are already functioning according to this algorithmic intelligence in laboratories at MIT, Honda, the Space and Naval Warfare Systems Command, and Sony. They navigate difficult terrain by quickly processing logically programmed on-off directives; if blocked by a wall, they move left, then right, until the blockade is passed. Micro versions of these rules govern the adjustments of each robotic foot or tread. Likewise, most robots have already been successfully equipped with rules that instruct them to repower themselves when they run low on energy, giving them a kind of self-sufficient nutrition system. In addition to nutritive systems, many artificial organisms have been programmed to reproduce, either physically building others like themselves or digitally copying themselves, as in computer viruses. This means they have waded into the stream of chance variation and natural selection. They are evolving.
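
The “hierarchically stacked rules of behavior” described above can be pictured as an ordered list of if-then directives, consulted from the top until one fires. The sketch below is a toy illustration of that idea only; the sensor names, the battery threshold, and the actions are assumptions made for the example, not details of any robot at the laboratories mentioned:

```python
# Toy sketch of "hierarchically stacked" on/off behavior rules for a mobile robot.
# Sensor names, thresholds, and actions are invented for illustration.

def choose_action(sensors: dict) -> str:
    """Walk down a fixed priority stack of Boolean rules; the first rule whose
    condition is true decides the action for this control cycle."""
    rules = [
        # (condition, action), highest priority first
        (lambda s: s["battery"] < 0.15, "return to charger"),            # self-repowering ("nutrition") rule
        (lambda s: s["blocked_ahead"] and not s["blocked_left"], "turn left"),
        (lambda s: s["blocked_ahead"] and not s["blocked_right"], "turn right"),
        (lambda s: s["blocked_ahead"], "reverse"),
        (lambda s: True, "move forward"),                                 # default behavior
    ]
    for condition, action in rules:
        if condition(sensors):
            return action

# One control cycle: the path ahead and the left side are blocked, so the robot turns right.
print(choose_action({"battery": 0.80, "blocked_ahead": True,
                     "blocked_left": True, "blocked_right": False}))
```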

Dave Bullock spent time in 2008 with the Navy’s new MDARS-E armed robot and was unnerved by its sophisticated tracking ability.[6] After the Navy lab in San Diego explained to Bullock that their robot could track anything that moves, they assigned Bullock as the target for a demonstration. “Told that I was the target,” Bullock reported, “the unmanned vehicle trained its guns on me and ordered, ‘Stay where you are,’ in an intimidating robot voice. And yes, it was frightening.” The technical director at the lab, Bart Everett, assured Bullock, “We’re not building Skynet,” a reference to the malignant supercomputer in the Terminator films. But the new levels of robotic autonomy are provocative.

True autonomy exists when an agent can problem-solve beyond the parameters of preset programming. Many cognitive scientists doubt that this kind of free will can really emerge out of progressively more complex Boolean rule systems, but an increasing number of such scientists and philosophers are wondering if human autonomy isn’t just a very complex version of this digital processing. In this view, the difference between artificial and human intelligence is one of degree, not kind. Even simple insect robots and computer entities act in surprising and unpredicted ways when their programmers put them into novel environments. String enough of this novel problem-solving behavior together and the machines begin to seem like biological systems.

A simple example comes from Steven Levy’s book Artificial Life. Levy describes a common occurrence: a programmer who is surprised by the adaptive resources of his creation. When Mitchel Resnick designed a small robot to follow a straight line on the ground, he set the creature rolling without actually programming a protocol for when it reached the end of the line. “We wrote the program and it was following this line,” explained Resnick. “And all of a sudden it struck me that I had no idea what was going to happen when it reached the end of the line…. When it got to the end of the line it turned around and started following the line in the other direction! If we had planned for it to do something, that would have been the ideal thing for it to do.”[7]
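
Levy does not reproduce Resnick’s program, but the flavor of such a controller is easy to convey. The sketch below is a generic two-sensor line-following rule, an assumption for illustration rather than the actual code: nothing in it mentions the end of the line, which is exactly why the robot’s behavior there could surprise its own designer.

```python
# A generic two-sensor line-following rule (illustrative only; not Resnick's actual
# program). Each control cycle reads two on/off light sensors and returns a motor
# command. Note that no branch below refers to the end of the line: whatever the
# robot does there falls out of these same correction rules.
from typing import Tuple

def steer(left_on_line: bool, right_on_line: bool, last_turn: str) -> Tuple[str, str]:
    """Return (motor_command, remembered_turn_direction) for one control cycle."""
    if left_on_line and right_on_line:
        return "forward", last_turn          # centered: keep going
    if left_on_line and not right_on_line:
        return "turn left", "turn left"      # drifting right; correct to the left
    if right_on_line and not left_on_line:
        return "turn right", "turn right"    # drifting left; correct to the right
    # Both sensors have lost the line: keep turning the way we last corrected
    # until a sensor finds the line again.
    return last_turn, last_turn

# Example cycle: the right sensor has lost the line, so the rule corrects to the left.
print(steer(True, False, "turn left"))   # -> ('turn left', 'turn left')
```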

True robot autonomy may not be a realistic fear in the near future, but robot manipulation by an enemy force of human hackers is entirely realistic. A good hacker could access the control system and turn missiles, robots, whatever, back at us just by rewriting the target protocols. In this case, the monsters are not the robots and computers themselves; rather, the machines become the inflexible tools of human oppressors.

The uniquely frightening aspect of robot-based political and social control is that such mechanical police do not have empathic emotional checks on their behaviors. Will robots be able to follow the rules of the Geneva Conventions, for example? No amount of screaming, crying, or human desperation will touch the heart of a robot policeman because, of course, it has no heart. One hopes that even the most hardened human soldier or police officer can be touched in the throes of warfare by the pained entreaties of human suffering, although, as I’ve argued earlier, this is precisely the assumption (of natural human empathy) that many human monsters invalidate. Human monsters demonstrate their lack of empathy regularly. Perhaps robot soldiers, programmed with citizen-saving protocols, won’t do any worse than flesh-and-blood warriors.

CYBORGS
 

According to a recent issue of Nanotech Report, a newsletter analyzing investment opportunities for this cutting-edge technology, “Nanotechnology is about rebuilding mother nature atom by atom!”[8] This is a dramatic way of pointing out that such technological advancements as the scanning tunneling microscope and such processes as nanolithography have allowed us to manipulate nanoscale structures in ways previously unimaginable. We might, in theory, be able to design chemical programs that disassemble molecules or even organisms into their atomic parts and then rebuild those parts into better organisms or even different organisms. For example, we may be able to create populations of nanobots (little chemical factories) that will eventually live inside the bloodstream, releasing insulin into the veins of diabetics to control their blood glucose levels.

The U.S. National Nanotech Initiative, established by President Clinton and continued by President Bush, has already received 6.5 billion dollars in funding for nanotech research.[9] Originally formulated as a thought experiment by Richard Feynman, this cutting-edge attempt to manipulate micronature has recently brought together government, academia, and corporations like DuPont, IBM, and Sony, to name a few. Optimists like Marvin Minsky believe that the new technology will usher in a new and improved posthuman species.[10]

In this age of cloning, nanotech, genetic engineering, and neuropharmacology, a new breed of posthuman philosopher is emerging. It is not just the fiction writers and filmmakers who are intoxicated with the idea of transcending our human limits. A handful of forward-thinking, slightly lunatic artists, scientists, and cultural theorists are exploring the increasingly fuzzy boundary between technology and the biological body. Posthuman (or transhuman in the United Kingdom) refers to the idea that we will eventually transcend our frustratingly finite flesh. But we won’t have to wait for an afterlife to achieve this liberation; we will attain it by the application of new technology. Technology, these theorists believe, will usher in a superior life for our species; we will no longer be limited by the spatial and temporal constraints of our corporeal self. For many theorists, this transhumanism is already well under way.

Nick Bostrom, an Oxford professor and the founder of the World Transhumanist Association, has predicted many significant transformations in the near future. In a lecture at the Technology, Entertainment, and Design Conference in 2005, Bostrom predicted that we humans will soon be able to expand our palette of sensory faculties, rearrange our body morphology, and even alter our hormonal makeup to ensure that love will not fade over time.[11]
Erectile dysfunction drugs have already allowed men to stave off aging and performance anxiety; soon they will be able to tweak their chemistry to stay faithful and happily partnered. Bostrom believes that the new technology will allow us to alter our basic nature in ways that will enhance our experience—indeed, enhance our lives.

Kevin Warwick, a professor of cybernetics at the University of Reading, states, “I was born human. But this was an accident of fate—a condition merely of time and place. I believe it’s something we have the power to change.” To that end, he has implanted microchips in his body that communicate to computers in his lab, which respond by flipping on lights and opening doors when he approaches. He and his wife, Irena, plan to get his-and-hers implants that will send signals back and forth between computers and their nervous systems. Soon, he says, they will attempt to download and swap digital versions of their personal sensations and emotions. Warwick describes his wife’s intentions: “The way she puts it, is that if anyone is going to jack into my limbic system—to know definitively when I’m feeling happy, depressed, angry, or even sexually aroused—she wants it to be her.”[12]

The distinction between our physical self and our cyberself, stretched in all directions by Internet nodes and ubiquitous microprocessors, will blur irreversibly some day, the posthumanists explain. Our bodies will be accessorized with hardware and software improvements, our minds ready for uploading and downloading. Our intellectual aspirations will no longer be hindered by the wet sacks we currently call home.

In the next few years, the French performance artist Orlan will conclude her decades-long work-in-progress titled “The Reincarnation of St. Orlan” by having a team of plastic surgeons construct the largest nose that her face is capable of supporting. Under a local anesthetic, Orlan will lecture on postmodern theory, reading from Baudrillard, Kristeva, and Lacan, while surgeons flay her face and perform her rhinoplasty. She has already done this sort of surgery ten times. In New York Orlan had plastic structures implanted under the skin of her forehead so that it would approximate that of Leonardo’s Mona Lisa. In another operation she had her chin reconstructed on the model of Botticelli’s Venus. While these operations are performed she is dressed in outlandish costumes and her audience asks questions of her via fax machine, phone, and e-mail. When the surgeries are completed, the excess bits of skin and fat are stored in jars for display at future performances.
