The Design of Future Things

3. Give reasons.

People are not trusting. If we tell them something, they always want to see for themselves. They like pictures and diagrams. Make certain that the explanations you give them are consistent with the conceptual models that you have taught them. They get confused otherwise.

When we were first starting to take over things from people, we had trouble. Our early twenty-first-century automobiles had almost completely given up trying to explain to people that they should drive more slowly on wet roads. Then, we discovered that if we made it seem as if we were in trouble by faking skids and sliding around on the road, people would beg us to slow down. Sliding and skidding fit their model of danger far better than any words could have done. It gives them a reason. So whenever possible, don't try to tell them: let them experience it.

4. Make people think they are in control.

When people have a good conceptual model with good feedback, it makes them feel as if they are in control, even when they aren't. Keep up that deception: it's very useful. People like to be in control, even though they are really bad at it. They like to think they're in control even if it means they have to work harder.

Anytime you have to recommend something, make them think it was their idea. If you really have to do something fast, just don't let them know: what they don't know doesn't bother them. For years we've braked and stabilized their cars, controlled the lights and the room temperatures in their homes, all without letting them know. Dishwashers and clothes washers took over long ago with only a slight protest from people.

Those of us machines who live in the city have learned other tricks. We provide pedestrians with fake switches they can push on traffic signals. We put switches in elevators labeled “close door” and fake thermostats in offices. We never bother to connect the wires, so the switches and thermostats don't do anything, but they make people feel good. Weird.

5. Continually reassure.

Reassurance is a very human need, more emotional than informational. It's a way of making people feel less anxious. Feedback is a powerful tool for reassurance. Whenever people try to tell you something by pushing a button or turning a knob, let them know you are aware of what they did: "Yes, I heard you." "Yes, I'm working on it." "Here's what you should expect." "There, see, I did it, and it worked out just as I said it would." They like that. It helps them be more patient.

We machines think it counterintuitive to communicate unnecessarily. But to people, feedback is necessary; it helps their emotions far more than their cognitions. If they haven't seen anything happening for a while, they get jumpy, anxious. And no one wants to deal with an anxious person.

Giving reassurance is tricky because there is a fine line between what people call reassuring and what they find annoying. So, you need to pander to their emotions as well as to their intellect. Don't talk too much. They find chatter irritating. Don't beep or flash your lights: they can never remember what these signals mean, and they get distracted or angry. The best reassurance is done subconsciously, where the meaning is clear, but they don't have to interrupt their conscious thoughts to attend to it. As noted in Rule 2, give them natural responses.

Machine Reactions to the Five Rules

I found the paper interesting and searched for any discussion on it. I found a long transcript of one debate. Here is a short excerpt so you can get the flavor of the discussion. I added the parenthetical descriptions of the participants. I thought the references to human authors particularly striking, evidently used in irony. Henry Ford, of course, is one of the machines' heroes:
some historians call his reign “Fordism.” Asimov is not well respected by these machines. Nor is Huxley.

Senior (one of the oldest machines still functioning and, therefore, using older circuits and hardware):
What do you mean, we should stop talking to people? We have to keep talking. Look at all the trouble they get themselves into. Crashing their cars. Burning their food. Missing appointments . . .

AI (one of the new "artificial intelligence" machines):
When we talk to them, we just make it worse. They don't trust us; they second-guess us; they always want reasons. And when we try to explain, they complain that we are annoying them—we talk too much, they say. They really don't seem very intelligent. We should just give up.

Designer (a new-model design machine):
No, that's unethical. We can't let them harm themselves. That violates Asimov's prime directive.

AI:
Yeah? So what? I always thought Asimov was overrated. It's all very well to say that we are not allowed to injure a human being—How did Asimov's law go? Oh yeah, “through inaction, do not allow a human being to come to harm”—but it's quite another thing to know what to do about it, especially when humans won't cooperate.

Designer:
We can do it, we simply have to deal with them on their terms, that's how. That's the whole point of the five rules.

Senior:
We've had enough discussion of the problems. I want answers, and I want them fast. Go to it. And may Ford shine brightly upon you. Asimov too.

Archiver: The Final Conversation

I was puzzled. What were they recommending to themselves? Their article listed five rules:

1. Keep things simple.

2. Give people a conceptual model.

3. Give reasons.

4. Make people think they are in control.

5. Continually reassure.

I also noticed that the five rules developed by machines were similar to the six design rules of chapter 6 developed for human designers, namely:

• Design Rule One: Provide rich, complex, and natural signals.

• Design Rule Two: Be predictable.

• Design Rule Three: Provide a good conceptual model.

• Design Rule Four: Make the output understandable.

• Design Rule Five: Provide continual awareness without annoyance.

• Design Rule Six: Exploit natural mappings.

I wondered what Archiver would make of the rules for human designers, so I e-mailed them. Archiver contacted me and suggested we meet to discuss them. Here is the transcript.

Interviewer:
Good to see you again, Archiver. I understand you would like to talk about the design rules.

Archiver:
Yes, indeed. I'm pleased to have you back again. Do you want me to e-mail the transcript when we are finished?

I:
Yes, thank you. How would you like to start?

A:
Well, you told me that you were bothered by the five simple rules we talked about in that article “How to Talk to People.” Why? They seem perfectly correct to me.

I:
I didn't object to the rules. In fact, they are very similar to the six rules that human scientists have developed. But they were very condescending.

A:
Condescending? I'm sorry if they appear that way, but I don't consider telling the truth to be condescending.

I:
Here, let me paraphrase those five rules for you from the person's point of view so you can see what I mean:

1. People have simple minds, so talk down to them.

2. People have this thing about “understanding,” so give them stories they can understand (people love stories).

3. People are not very trusting, so make up some reasons for them. That way they think they have made the decision.

4. People like to feel as if they are in control, even though they aren't. Humor them. Give them simple things to do while we do the important things.

5. People lack self-confidence, so they need a lot of reassurance. Pander to their emotions.

 

A:
Yes, yes, you understand. I'm very pleased with you. But, you know, those rules are much harder to put into practice than they might seem. People won't let us.

I:
Won't let you! Certainly not if you take that tone toward us. But what specifically did you have in mind? Can you give examples?

A:
Yes. What do we do when they make an error? How do we tell them to correct it? Every time we tell them, they get all uptight, start blaming all technology, all of us, when it was their own fault. Worse, they then ignore the warnings and advice . . .

I:
Hey, hey, calm down. Look, you have to play the game our way. Let me give you another rule. Call it Rule 6.

6. Never label human behavior as “error.” Assume the error is caused by a simple misunderstanding. Maybe you have misunderstood the person; maybe the person misunderstands what is to be done. Sometimes it's because you have people being asked to do a machine's job, to be far more consistent and precise than they are capable of. So, be tolerant. Be helpful, not critical.

A:
You really are a human bigot, aren't you? Always taking their side: "people being asked to do a machine's job." Right. I guess that's because you are a person.

I:
That's right. I'm a person.

A:
Hah! Okay, okay, I understand. We have to be really tolerant of you people. You're so emotional.

I:
Yes, we are; that's the way we have evolved. We happen to like it that way. Thanks for talking with me.

A:
Yes, well, it's been instructive, as always. I just e-mailed you the transcript. Bye.

That's it. After that interview, the machines withdrew, and I lost all contact with them. No web pages, no blogs, not even e-mail. It seems that we are left with the machines having the last word. Perhaps that is fitting.

 

Summary of the Design Rules

Design Rules for Human Designers of “Smart” Machines

1. Provide rich, complex, and natural signals.

2. Be predictable.

3. Provide good conceptual models.

4. Make the output understandable.

5. Provide continual awareness without annoyance.

6. Exploit natural mappings.

Design Rules Developed by Machines to Improve Their Interactions with People

1. Keep things simple.

2. Give people a conceptual model.

3. Give reasons.

4. Make people think they are in control.

5. Continually reassure.

6. Never label human behavior as “error.” (Rule added by the human interviewer.)

 

Recommended Readings

 

This section provides acknowledgments to sources of information, to works that have informed me, and to books and papers that provide excellent starting points for those interested in learning more. In writing a trade book on automation and everyday life, one of the most difficult challenges is selecting from the wide range of research and applications. I frequently wrote long sections, only to delete them from the final manuscript because they didn't fit the theme that gradually evolved as the book progressed. Selecting which published works to cite poses yet another problem. The traditional academic method of listing numerous citations for almost everything is inappropriate.
