FIGURE 6.6
Kismet, a robot designed for social interactions, looking surprised.
(Image courtesy of Cynthia Breazeal.)
Interacting with Kismet is a rich, engaging experience. It is difficult to believe that Kismet is all emotion, with no understanding. But walk up to it, speak excitedly, show it your brand-new watch, and Kismet
responds appropriately: it looks at your face, then at the watch, then back at your face again, all the time showing interest by raising its eyelids and ears, and exhibiting perky, lively behavior. Just the interested responses you want from your conversational partner, even though Kismet has absolutely no understanding of language or, for that matter, of your watch. How does it know to look at the watch? It doesn't, but it responds to movement, so it looks at your rising hand. When the motion stops, it gets bored and returns to looking at your eyes. It shows excitement because it detects the excitement in the tone of your voice.
Kismet's emotional system. The heart of Kismet's operation is in the interaction of perception, emotion, and behavior. (Figure redrawn, slightly modified, with permission of Cynthia Breazeal, from http://www.ai.mit.edu/projects/sociable/emotions.html.)
Note that Kismet shares some characteristics with Eliza. Thus, although this is a complex system, with a body (well, a head and neck), multiple motors that serve as muscles, and a complex underlying model of attention and emotion, it still lacks any true understanding.
Therefore, the interest and boredom that it shows toward people are simply programmed responses to changes, or the lack of them, in the environment, and to movement and the physical aspects of speech. Although Kismet can sometimes keep people entranced for long periods, the enchantment is somewhat akin to that of Eliza: most of the sophistication is in the observer's interpretation.
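To make the point concrete, here is a minimal sketch, in Python, of the kind of programmed response described above: gaze follows motion, "boredom" is a timer, and "excitement" tracks the energy of the voice. It is an illustration only, not Breazeal's actual architecture; the class, sensor inputs, and thresholds are all invented.

```python
# A minimal, hypothetical sketch of Kismet-like programmed responses:
# attention follows motion, arousal follows voice energy, and "boredom"
# is just a timer. Illustrative only; all names and thresholds are invented.

import time

class ToyAttentionSystem:
    def __init__(self, boredom_after=2.0):
        self.boredom_after = boredom_after   # seconds without motion before "boredom"
        self.last_motion_time = time.monotonic()
        self.gaze_target = "face"
        self.arousal = 0.0                   # 0 = calm, 1 = excited

    def update(self, motion_location, voice_energy):
        """motion_location: (x, y) of detected movement, or None;
        voice_energy: 0..1 estimate of how lively the speech sounds."""
        now = time.monotonic()
        if motion_location is not None:
            self.gaze_target = motion_location          # look at the moving hand or watch
            self.last_motion_time = now
        elif now - self.last_motion_time > self.boredom_after:
            self.gaze_target = "face"                   # nothing happening: back to the eyes
        # Excited speech raises arousal; silence lets it decay.
        self.arousal = 0.8 * self.arousal + 0.2 * voice_energy
        return self.gaze_target, self.arousal
```

Everything an observer reads as interest or boredom comes from this kind of simple bookkeeping.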
Aibo, the Sony robot dog, has a far less sophisticated emotional repertoire and intelligence than Kismet. Nonetheless, Aibo has also proven to be incredibly engaging to its owners. Many owners of the robot dog band together to form clubs: some own several robots. They trade stories about how they have trained Aibo to do various tricks. They share ideas and techniques. Some firmly believe that their personal Aibo recognizes them and obeys commands even though it is not capable of these deeds.
When machines display emotions, they provide a rich and satisfying interaction with people, even though most of the richness and satisfaction, most of the interpretation and understanding, comes from within the head of the person, not from the artificial system. Sherry Turkle, both an MIT professor and a psychoanalyst, has summarized these interactions by pointing out, “It tells you more about us as human beings than it does the robots.” Anthropomorphism again: we read emotions and intentions into all sorts of things. “These things push on our buttons whether or not they have consciousness or intelligence,” Turkle said. “They push on our buttons to recognize them as though they do. We are programmed to respond in a caring way to these new kinds of creatures. The key is these objects want you to nurture them and they thrive when you pay attention.”
CHAPTER SEVEN
The Future of Robots
SCIENCE FICTION CAN BE a useful source of ideas and information, for it is, in essence, detailed scenario development. Writers who have used robots in their stories have had to imagine in considerable detail just how they would function within everyday work and activities. Isaac Asimov was one of the earliest thinkers to explore the implications of robots as autonomous, intelligent creatures, equal (or superior) in intelligence and abilities to their human masters. Asimov wrote a sequence of novels analyzing the difficulties that would arise if autonomous robots populated the earth. He realized that a robot might inadvertently harm itself or others, either through its actions or, at times, through its lack of action. He therefore developed a set of postulates that might prevent these problems; but, as he did so, he also realized that they were often in conflict with one another. Some conflicts were simple: given a choice between preventing harm to itself or to a human, the robot should protect the human. But other conflicts were much more subtle, much more difficult. Eventually, he postulated three laws of robotics (laws one, two,
and three) and wrote a sequence of stories to illustrate the dilemmas that robots would find themselves in, and how the three laws would allow them to handle these situations. These three laws dealt with the interaction of robots and people, but as his story line progressed into more complex situations, Asimov felt compelled to add an even more fundamental law dealing with the robots' relationship to humanity itself. This one was so fundamental that it had to come first; but, because he already had a law labeled First, this fourth law had to be labeled Zeroth.
Asimov's vision of people and of the workings of industry was strangely crude. It was only his robots that behaved well. When I reread his books in preparation for this chapter, I was surprised at the discrepancy between my fond memories of the stories and my response to them now. His people are rude, sexist, and naïve. They seem unable to converse unless they are insulting each other, fighting, or jeering. His fictional company, the U.S. Robots and Mechanical Men Corporation, doesn't fare well either. It is secretive, manipulative, and has no tolerance for error: make one mistake and the company fires you. Asimov spent his entire life in a university. Maybe that is why he had such a weird view of the real world.
Nonetheless, his analysis of the reaction of society to robots—and of robots to humans—was interesting. He thought society would turn against robots; and, indeed, he wrote that "most of the world governments banned robot use on earth for any purpose other than scientific research between 2003 and 2007." (Robots, however, were allowed for space exploration and mining; in Asimov's stories these activities are already widespread in the early 2000s, which allows the robot industry to survive and grow.) The Laws of Robotics are intended to reassure humanity that robots will not be a threat and will, moreover, always be subservient to humans.
Today, even our most powerful and capable robots fall far short of those Asimov imagined. They do not operate for long periods without human control and assistance. Even so, the laws are an excellent tool for examining just how robots and humans should interact.
Asimov's Four Laws of Robotics
Zeroth Law:
A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
First Law:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics.
Second Law:
A robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.
Many machines already have key aspects of the laws hard-wired into them. Let's examine how these laws are implemented.
The Zeroth Law—that "a robot may not injure humanity, or, through inaction, allow humanity to come to harm"—is beyond current capability, for much the same reason that Asimov did not need this law in his early stories: determining just when an action, or a lack of action, will harm all of humanity requires truly sophisticated reasoning, probably beyond the abilities of most people.
The first law—that "a robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics"—could be labeled "safety." It isn't legal, let alone proper, to produce things that can hurt people. As a result, liability laws guarantee that robots—and machines in general—are designed with multiple safeguards to minimize the likelihood that their actions will harm anyone. Industrial and home robots have proximity and collision sensors. Even simple machines such as elevators and garage doors have sensors that stop them from closing on people. Today's robots try to avoid bumping into people or objects. Lawn mower and vacuum cleaner robots have sensing mechanisms that cause them to stop or back away whenever they bump into anything or come too close to an edge, such as a stairway. Industrial robots are often fenced off, so that people can't get near them while they are operating. Some have people detectors, so they stop when they sense someone nearby. Home robots have many mechanisms to minimize the chance of damage; but at the moment, most of them are so underpowered that they couldn't hurt anyone even if they tried. Moreover, the lawyers are very careful to guard against potential damage. One company sells a home robot that can be used to teach children by reading books to them and that can also serve as a home sentinel, wandering about the house, taking photographs of unexpected encounters and notifying its owners, by email if necessary (through its wireless internet connection, attaching the photographs to the message, of course). Despite these intended applications, the robot comes with stern instructions that it is not to be used near children, nor is it to be left unattended in the house.
A lot of effort has gone into implementing the safety provision of the first law. Most of this work can be thought of as applying to the visceral level, where fairly simple mechanisms shut the system down whenever a safety condition is violated.
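A minimal sketch of such a visceral-level safeguard might look like the following; the sensor names and thresholds are assumptions made for illustration, not any particular robot's firmware.

```python
# A generic sketch of a visceral-level safeguard: on every control cycle,
# check the safety sensors and stop if anything looks wrong. Sensor names
# and thresholds are invented for illustration.

from dataclasses import dataclass

MIN_CLEARANCE_M = 0.30   # stop if an obstacle is closer than this
CLIFF_THRESHOLD = 0.5    # reading above this suggests a drop-off, such as a stairway

@dataclass
class Sensors:
    proximity_m: float   # distance to the nearest obstacle
    bumper: bool         # True if the bumper is pressed
    cliff: float         # downward-looking edge sensor reading

def safe_to_move(s: Sensors) -> bool:
    """A visceral-level check: no reasoning or planning, just thresholds."""
    return (not s.bumper
            and s.proximity_m >= MIN_CLEARANCE_M
            and s.cliff <= CLIFF_THRESHOLD)

if __name__ == "__main__":
    readings = Sensors(proximity_m=0.12, bumper=False, cliff=0.1)
    print("move" if safe_to_move(readings) else "stop")   # -> stop: someone is too close
```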
The second part of the law—do not allow harm through inaction—is quite difficult to implement. If determining how a machine's actions might affect people is difficult, determining how the lack of an action might have an impact is even more so. This would be a reflective level implementation, for the robot would have to do considerable analysis and planning to determine when a lack of action would lead to harm. This is beyond the capabilities of most machines today.
Despite the difficulties, some simple solutions to the problem do exist. Many computers are plugged into "uninterruptible power supplies" to avoid loss of data in case of power failure. If the power failed and no action were taken, harm would occur; here, when the power fails, the power supply springs into action, switching to batteries and converting the battery voltage to the form the computer requires. It can also be set to notify people and to turn off the computer gracefully. Other safety systems are designed to act when normal processes have failed. Some automobiles have internal sensors that watch over the path of the car, adjusting engine power and braking to ensure that the auto keeps going as intended. Automatic speed control mechanisms attempt to keep a safe distance from the car in front, and lane-changing detectors are under investigation. All of these devices safeguard the car and its passengers when inaction would lead to an accident.
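The uninterruptible power supply reduces to a small monitoring loop: detect the loss of mains power, switch to the battery, notify someone, and shut the computer down cleanly before the battery runs out. The sketch below is schematic; the notify and shutdown functions are placeholders, not a real vendor's interface.

```python
# A schematic sketch of the "harm through inaction" logic of a UPS.
# The notify() and shutdown_gracefully() functions are placeholders,
# not a real UPS vendor's API.

import time

LOW_BATTERY_PERCENT = 20

def notify(message: str) -> None:
    print("UPS notice:", message)      # stand-in for an email or console alert

def shutdown_gracefully() -> None:
    print("Saving work and powering the computer off cleanly.")

def monitor(mains_ok, battery_percent) -> None:
    """mains_ok and battery_percent are callables reporting current readings."""
    on_battery = False
    while True:
        if not mains_ok():
            if not on_battery:
                on_battery = True      # the power supply "springs into action"
                notify("Mains power lost; running on battery.")
            if battery_percent() < LOW_BATTERY_PERCENT:
                shutdown_gracefully()  # act before inaction causes data loss
                return
        else:
            on_battery = False
        time.sleep(1)
```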
Today, these devices are simple and their mechanisms are built in. Still, one can see the beginnings of solutions to the inaction clause of the first law, even in these simple devices.
The second law—that "a robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First Law"—is about obeying people, in contrast to the first, which is about protecting them. In many ways, this law is trivial to implement, but for elementary reasons. Machines today do not have an independent mind, so they must obey orders: they have no choice but to follow the commands given them. If they fail, they face the ultimate punishment: they are shut off and sent to the repair shop.
Can a machine disobey the second law in order to uphold the first? Yes, but not with much subtlety. Command an elevator to take you to your floor, and it will refuse if it senses that a person or object is blocking the door. This, however, is the most trivial way to implement the law, and it fails as soon as the situation becomes at all complex. In practice, where a safety system prevents a machine from following orders, a person can usually override it and force the operation to take place anyway. This has been the cause of many an accident in trains, airplanes, and factories. Maybe Asimov was correct: we should leave some decisions up to the machines.
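The elevator example, together with the human override that so often defeats it, can be captured in a few lines. This is a toy illustration under assumed names; real elevator controllers are governed by far more elaborate safety requirements.

```python
# An illustrative sketch of the elevator example: the machine refuses an
# order that would harm someone (the door is blocked), unless a human
# explicitly overrides it. Names and behavior are invented for illustration.

def close_doors(door_blocked: bool, human_override: bool = False) -> str:
    if door_blocked and not human_override:
        return "refuse: reopening doors"              # the second law yields to the first
    if door_blocked and human_override:
        # The pattern behind many accidents: a person insists, the machine obeys.
        return "closing doors despite obstruction"
    return "closing doors"

print(close_doors(door_blocked=True))                        # refuse: reopening doors
print(close_doors(door_blocked=True, human_override=True))   # closing doors despite obstruction
```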
Some automatically deployed safety systems are an example of the "through inaction" clause of the law. Thus, if the driver of an automobile steps on the brakes rapidly but depresses the pedal only halfway, most autos will brake at only half strength. The Mercedes-Benz, however, treats this as "harm through inaction": when it detects a rapid brake application, it puts the brakes on full, assuming that the owner really wants to stop as soon as possible. This is a combination of the first and second laws: the first law, because it prevents harm to the driver; the second law, because it violates the "instructions" to apply the brakes at half strength. Of course, this may not really be a violation of the instructions: the robot assumes that full power was intended, even if not commanded. Perhaps the robot is invoking a new rule: "Do what I mean, not what I say," an old concept from some early artificial intelligence computer systems.
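Reduced to its essence, the behavior is a single inference: a fast pedal movement signals an emergency, so the system applies more force than was literally commanded. The sketch below is illustrative only; the threshold and function names are invented and do not describe the actual Mercedes-Benz implementation.

```python
# An illustrative reduction of brake assist: if the pedal moves quickly,
# assume an emergency and brake at full strength, even though the pedal is
# only half depressed. Threshold and names are invented for illustration.

PANIC_PEDAL_SPEED = 0.8   # pedal travel per second that suggests an emergency stop

def brake_command(pedal_position: float, pedal_speed: float) -> float:
    """pedal_position: 0..1 fraction of pedal travel; pedal_speed: travel per second.
    Returns the braking force to apply, 0..1."""
    if pedal_speed >= PANIC_PEDAL_SPEED:
        return 1.0              # "do what I mean": full braking, not what was commanded
    return pedal_position       # otherwise obey the literal instruction

print(brake_command(pedal_position=0.5, pedal_speed=0.2))   # 0.5: normal braking
print(brake_command(pedal_position=0.5, pedal_speed=1.5))   # 1.0: assumed emergency
```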
