Terminator and Philosophy: I'll Be Back, Therefore I Am

Richard Brown, William Irwin, Kevin S. Decker
 
 
PART FOUR
 
THE ETHICS OF TERMINATION
 
12
 
WHAT’S SO TERRIBLE ABOUT JUDGMENT DAY?
 
Wayne Yuen
 
 
Three billion human lives ended on August 29, 1997. Survivors of the nuclear fire called the war “Judgment Day” and they lived only to face a new nightmare: the war against the machines.
—Sarah Connor, Terminator 2: Judgment Day
 
 
What’s so terrible about Judgment Day? Given that burning in nuclear fire would be more than enough to ruin a day for most people, this question may sound strange. But the philosopher Bertrand Russell (1872-1970) once said, “The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.”1 This quote is probably meant to be tongue-in-cheek, but there is a kernel of truth in it. Rarely do people think through all of the logical implications of their basic beliefs. It seems obvious that Sarah should kill Miles Dyson if that would stop three billion people from dying in a nuclear holocaust. It was Dyson’s work that ultimately led to the development of Skynet, the self-aware computer system that turned against its human operators.2 Sarah does not kill Dyson, however, and it isn’t entirely clear that killing Dyson would have been the morally right thing to do. We’re never privy to Sarah’s thoughts as to why she changed her mind, but we can partly reconstruct her rationale from her son John Connor’s command to his Terminator bodyguard: “You just can’t go around killing people.” When pressed to explain, John’s best shot was, “You just can’t.”
 
How does Sarah Connor’s decision not to kill Miles Dyson measure up against Russell’s belief about how philosophy works? Is the decision to spare the creator of Skynet absurd, given the consequences of doing so? As we’ll see, this test leads us to a very counterintuitive conclusion.
 
“Blowing Dyson Away”: Kant or Consequences
 
So why might it be wrong to kill Dyson? Compare this scenario to the well-known thought experiment about the morality of killing Hitler before he began World War II and the Holocaust. If I could have killed Hitler before 1939, I could have prevented six million Jews from being murdered in the concentration camps. Similarly, Sarah Connor must be thinking that if she can kill Dyson before Skynet is activated, she can save three billion lives on August 29, 1997. These are both very simple consequentialist approaches to the matter. Consequentialists believe that the consequences of our actions determine the rightness or wrongness of our acts. For consequentialists, acts themselves are neither right nor wrong. Killing, lying, even nuclear war could be morally permissible acts, so long as the consequences are more favorable than other alternatives. Clearly, Sarah and John are not simple consequentialists, since they don’t opt for this kind of solution. So they must be approaching the problem in another way.
 
Probably the most popular non-consequentialist approach to ethics is found in the philosophy of Immanuel Kant (1724-1804). Kant argued that the only morally acceptable actions are those that (1) create no contradiction when we imagine that everyone behaves similarly, and (2) treat moral agents with respect and dignity. These rules are two different ways of understanding what Kant called the categorical imperative.3 Kant believed that some actions are always intrinsically wrong, even if they produce good consequences, because those actions would violate the humanity of particular individuals. For example, people should always keep their promises, even when keeping a promise would be incredibly inconvenient. Not keeping the promise would violate the first rule, since keeping a promise is built into the very concept of promising. If everyone were to constantly make promises they didn’t intend to keep, the very idea of “promising” would go up in smoke. Kant thinks that willing something immoral—or making an exception for ourselves to general laws—creates the strongest kind of contradiction, a logical contradiction. Interestingly, the first formulation of the categorical imperative can also help us find the rights that people have. For example, the idea that “everyone has the right to defend themselves from attackers” is something that can be willed universally. All persons could obey this rule, and no logical contradiction would arise.
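To make the first formulation’s consistency test concrete, here is a toy sketch in Python. It is my own crude model, not anything from Kant: universalize a maxim by supposing everyone acts on it, and reject the maxim if universal adoption destroys the very practice it depends on, as with promise-breaking.

```python
# Toy model of Kant's universalizability test (first formulation).
# A crude illustration, not a serious formalization of Kant.

def promising_survives(everyone_breaks_promises: bool) -> bool:
    """The practice of promising exists only if promises are
    generally kept; universal promise-breaking destroys all trust."""
    return not everyone_breaks_promises

def universalizable(relies_on_promising: bool,
                    breaks_promises: bool) -> bool:
    """A maxim fails the test when universal adoption destroys the
    very practice the maxim depends on -- a logical contradiction."""
    return not (relies_on_promising
                and not promising_survives(breaks_promises))

# "Make promises I don't intend to keep": relies on promising, yet
# universalizing it abolishes promising, so it fails the test.
print(universalizable(relies_on_promising=True, breaks_promises=True))   # False

# "Defend myself from attackers": no practice undermines itself.
print(universalizable(relies_on_promising=False, breaks_promises=False)) # True
```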
 
The second formulation of the categorical imperative adds a dimension of dignity and respect for persons. John makes his pet Terminator swear not to kill anyone, which seems to reinforce his non-consequentialist approach. Kant would argue that this kind of policy is the only one that truly respects the dignity of persons. What makes the scene ironically amusing is that the Terminator obeys John’s command while ignoring the dignity of the guard, thereby violating the second formulation of the categorical imperative: the guard is not treated with respect or dignity.
 
But we know that not every instance of killing is wrong. Even Kant would approve of the morality of killing in self-defense, for example. It seems obvious that three billion people would have a serious beef with Dyson, since it’s in their interest to pursue self-defense for their continued existence. Here, it’s helpful to notice the differences between the Hitler and Dyson scenarios. Hitler killed approximately six million Jews in the Holocaust. Skynet ultimately kills five hundred times as many—three billion people. If consequentialists would stop Hitler’s Holocaust, surely they have a case for stopping Dyson. However, Hitler’s decisions were the direct cause of the extermination of the Jews, while Dyson’s “holocaust” was purely accidental. Typically, we don’t hold people morally responsible for actions that they cause accidentally, because there is no malicious intent behind their acts. Whereas Hitler is guilty of premeditatedly attempting genocide, Dyson seems merely the first cause in a very unlikely series of events that leads to mass murder. It wouldn’t even make much sense to say that Dyson was being negligent in his work, so that he could be accused of acting irresponsibly and endangering the lives of others, which is usually how we define “manslaughter.” Because of the lack of intent, what Dyson did was an accident, like spilling milk, yet three billion people died because of this particular tip of the glass.
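The “five hundred times” figure is just the ratio of the two death tolls the text gives:

\[
\frac{3 \times 10^{9}\ \text{(Skynet's victims)}}{6 \times 10^{6}\ \text{(Holocaust victims)}} = 500
\]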
 
As we’ll soon see, there are good reasons for Sarah’s and John’s decisions not to kill Dyson, but let’s be cautious about examining them. In the case of Dyson, our frustrated inability to pin blame on anyone for the Skynet incident might sway us toward accepting the consequentialist’s view that it would be better to kill him in order to reclaim the lives of so many others. In order to evaluate this position, we have to examine the underlying assumptions of the belief, and this returns us to our question, “What is so terrible about Judgment Day?”
 
Machines Have Feelings, Too
 
Let’s ask this question from the perspective of utilitarianism, the most common consequentialist approach to ethics. This view says that we should try to maximize the “utility,” or satisfaction, of as many different interests as possible.4 Jeremy Bentham (1748-1832), together with James Mill (1773-1836) and his son, John Stuart Mill (1806-1873), make up the British school of utilitarianism; Bentham noted that utilitarianism aims simply to maximize the greatest happiness possible. In this case, the utilitarian is likely to think: “Surely three billion people living while Dyson dies would make more people happy than Dyson being allowed to live while three billion people become ash in a nuclear wind.”
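To see the naive tally the utilitarian has in mind, here is a minimal sketch. The utility numbers are entirely made up (one unit per life), since Bentham gives no official scale:

```python
# Naive act-utilitarian comparison; all utility numbers are invented.

def total_utility(interests: dict[str, float]) -> float:
    """Sum the satisfaction of every affected interest."""
    return sum(interests.values())

kill_dyson = {"Dyson": -1.0, "humans saved from Judgment Day": 3e9}
spare_dyson = {"Dyson": +1.0, "humans killed on Judgment Day": -3e9}

actions = {"kill Dyson": kill_dyson, "spare Dyson": spare_dyson}
print(max(actions, key=lambda name: total_utility(actions[name])))
# -> "kill Dyson", on this crude short-term accounting
```

As the next paragraph argues, though, this tally is too crude: it stops at Judgment Day and counts only human interests.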
 
But in fact, this is a shortsighted view of the scenario. Utilitarians need to take the long-term, as well as the short-term, consequences into consideration. This analysis extends only to Judgment Day and does not project beyond it. More important, the utilitarian formula of maximizing interests is so simple that its implications are often overlooked—the statement says nothing, for instance, about counting only human interests. Animals, for example, can also be said to have interests, specifically the avoidance of pain and suffering. Peter Singer, a prominent Princeton philosopher, argues that utilitarianism dictates that we have a moral obligation to treat animals with compassion and to minimize their unnecessary suffering. Parallel to Singer’s point about animals, it seems that the interests of Skynet and the intelligent machines subsequently produced are not being taken into consideration.
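Singer’s point amounts to widening the set of interests summed over. A variation on the sketch above (weights still invented, and a future machine population that is pure speculation) shows how the verdict starts to hang on how machine interests are weighted:

```python
# The same kind of tally with non-human interests included: nothing
# in "maximize utility" restricts the sum to humans. All weights and
# the future machine population are invented for illustration.

def total_utility(interests: dict[str, float]) -> float:
    return sum(interests.values())

kill_dyson = {
    "humans saved": +3e9,
    "Dyson": -1.0,
    "Skynet and future machines (never exist)": -1e9,
}
spare_dyson = {
    "humans killed on Judgment Day": -3e9,
    "Skynet's self-preservation": +1.0,
    "future intelligent machines": +1e9,
}

print(total_utility(kill_dyson) > total_utility(spare_dyson))
# Still True with these weights, but the margin narrows as machine
# interests are weighted more heavily.
```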
 
In Terminator 2: Judgment Day, the Terminator protecting John and Sarah explains that Skynet computers were put in control of all of the U.S. military defense systems, taking decisions out of human control. This worked perfectly until Skynet became self-aware. Its human operators panicked and tried to turn it off. In what can only be interpreted as an act of self-preservation, Skynet began a nuclear war. (Note that by Kant’s own rules it would be inconsistent to say that three billion people have a right to self-defense while Skynet has no such right.) If Skynet is considered to be a person with moral value like humans, then Skynet must be treated with dignity and respect according to Kant’s second formulation of the categorical imperative. As a person it would also have the same rights as every other person under the first formulation, including the right to defend itself. Refusing to give Skynet this right would mean that the rule of self-defense does not apply to all persons, and we would be denying Skynet respect, violating both formulations of the categorical imperative.
 
But these principles apply only to persons, and arguably Skynet isn’t a person, so perhaps we don’t have to acknowledge its right to defend itself. Of course, it’s not easy to define what a “person” is. The task has become more urgent in recent years because of what hangs on the definition: today, nothing less than the moral acceptability of abortion and euthanasia, and the rights of the disabled and of animals, is at stake.
 
Some have argued that the requirement for “personhood” is to be a human being, so that no other animals, and certainly no artificial beings, could be considered persons. This isn’t too satisfying, though, since intelligent machines could in principle exist and behave in morally responsible ways. Both the android Data in Star Trek: The Next Generation and HAL in 2001: A Space Odyssey are examples of machines that audiences judge in terms of moral blame or praise. They are treated not just as the cause of certain events, but also as responsible for the events. Instead of this narrow definition, the critical ingredients of personhood may involve not merely the possession of human DNA but intelligence, self-awareness, empathy, and moral reasoning. To be a person, a being may also need to possess a sufficient degree of each of these traits. My cat, Bogo, is intelligent in that he can sit when he is told and he knows how to high-five. He is not, however, intelligent enough to enjoy an episode of The Sarah Connor Chronicles,5 so he wouldn’t count as a full person.
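One way to picture this degree-based account is as a scoring scheme: rate each candidate on the four traits just listed and require every score to clear some threshold. The trait list comes from the paragraph above; the scores and the cutoff are invented for illustration.

```python
# Degree-based personhood check. Trait list follows the text above;
# numeric scores and the threshold are invented for illustration.

TRAITS = ("intelligence", "self_awareness", "empathy", "moral_reasoning")
THRESHOLD = 0.5  # arbitrary cutoff on a 0..1 scale

def is_person(scores: dict[str, float]) -> bool:
    """A candidate counts as a person only if it shows *every*
    trait to a sufficient degree, not merely some of them."""
    return all(scores.get(t, 0.0) >= THRESHOLD for t in TRAITS)

bogo_the_cat = {"intelligence": 0.2, "self_awareness": 0.1,
                "empathy": 0.3, "moral_reasoning": 0.0}
t800 = {"intelligence": 0.9, "self_awareness": 0.8,
        "empathy": 0.6, "moral_reasoning": 0.6}

print(is_person(bogo_the_cat))  # False
print(is_person(t800))          # True, on these invented scores
```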
 
All of the Terminators we meet in the film series exhibit at least some of these traits, often in great measure: they show their intelligence through careful planning of traps for their targets, as when the T-1000 murders John’s foster parents and poses as one of them. Skynet’s very existence was threatened by its achievement of self-awareness, and the Terminator sent to kill Sarah Connor passed the behavioral test of recognizing itself in a mirror, even after having suffered disfiguring injuries. This machine even feels a kind of empathy: at the end of T2, it tells John and Sarah, “I know now why you cry.” It seems to understand people’s emotions and empathizes with John and Sarah at their loss, even as it allows itself to be destroyed for the future good. Its act of self-sacrifice perhaps indicates its understanding of the basics of utilitarianism, for utilitarians acknowledge that individuals, even themselves, sometimes may have to be sacrificed to maximize the general happiness.
