Of Minds and Language
Authors: Massimo Piattelli-Palmarini; Juan Uriagereka; Pello Salaburu
In this next condition, the experimenter's hand merely flops on top of the coconut instead of grasping it. From a human perspective, it appears completely unintentional; the hand just flops on top of the coconut and then the experimenter walks away. In this condition, subjects show a 50–50 split: some go to the box associated with the flop, the others go to the non-touched box. We also failed to get selective approach when the experimenter used a pair of pliers with a pincer grip, or contacted one coconut with a pole, or a machete; rhesus never use tools, have never seen the pliers or the pole, but have seen personnel on the island occasionally cracking open coconuts with a machete. The story that we are building up to, then, is not just about contacting or attending to the object, it is about the nature of the contact, intentional or not. Next, if you use the normal grasping mode with your hand, but you touch next to, as opposed to on, the coconut, they also show no preference. Interestingly, if you kneel down and grasp a coconut to use it to stand up (so you grasp it in exactly the same way, but now it is just as a way of getting up), they also show no preference.
One of the arguments that has yet to be explored in research on mirror neurons, but that we have begun to investigate, is whether rhesus understand actions that are within the repertoire, in terms of being physically possible, but irrational given environmental constraints. For example, if I have a cup in front of me, I can reach for it by stretching out my hand, or I can reach for it by passing my arm under my leg – an odd gesture. Why would I do that? If an experimenter reaches between his legs and ends up in the hand-grasp position as the terminal position, our preliminary results fail to reveal a selective approach to the target box, even though the terminal state is both intentional and has the final grasping position. We are now in the midst of running a variant of this condition, one in which an experimenter holds a brick in each hand or has both hands empty, and then bends down and contacts the coconut with his mouth. If rhesus are like human infants similarly tested, they should contact the coconut with their mouth in the hands-free condition but not in the hands-with-brick condition; that is, they should interpret the hands-free condition as “if the experimenter has his hands free, but still contacts the coconut with his mouth, then there must be something important about using the mouth in this condition.” These studies, and others, suggest that there may be something like a large gestural repertoire that is being encoded for the agent's intentions, goals, and the specific details of his or her action. Cross-cutting these dimensions may also be one that maps to rational vs. irrational trajectories vis-à-vis the end-goal state.
Now before I get too carried away with my excitement over these results, I want to make the following point in order to connect up with the final part of the language section. We seem to be uncovering, in both comparative studies and studies by developmental psychologists (such as my colleagues and collaborators Liz Spelke and Susan Carey, as well as others; see Barner et al. 2007), what looks like an occasional mismatch between what individuals seem to know, on some version of knowing, and how they use that knowledge to act. So far what I have shown looks like a fairly good correspondence between their knowledge, or attribution of knowledge, and their action, but now what I want to show you is an interesting mismatch. Back to Cayo Santiago and the rhesus monkeys (Barner et al., 2008). An experimenter finds a lone subject and shows this individual a table, indicating by tapping that it is solid; the experimenter then places one box on top of the table and a second box below the table, and then occludes the table and boxes; the experimenter then reveals an apple, holds it above the occluder, drops it, removes the occluder, and walks away, allowing the subject to search for the apple. Where do they go? To the bottom box, almost every single time. In fact, about 15 percent of the subjects look in the bottom box and leave without ever checking the
top box, as if they had decided that it must be in the bottom box, and thus, there is no point in checking the top box. Now we do the same experiment, but use the looking-time methodology that I talked about earlier. Here you remove the occluder and show that the apple is actually either in the top box or in the bottom box. Based on the search method that I just described, rhesus apparently expect to find the apple in the bottom box. Therefore, when it appears in the top box, rhesus should be surprised. From their perspective, this is a violation, so they should look longer when it appears in the top box than when it appears in the bottom box. But they don't. They look longer when the apple appears in the bottom box, which corresponds to a correct inference: that is, the apple can't appear in the bottom box as this would violate the physical principle of solidity. Thus, we see a dissociation between the knowledge that seems to be driving their looking responses as opposed to their searching behavior.
How can we tie this back into questions about the language faculty? Consider again the point I raised earlier concerning the possibility that the internal computations evolved for internal thought and then only subsequently evolved further for the purpose of externalization in communication. What seems to be critically missing in non-human primates, and therefore in primate evolution, is the interface between their rich conceptual system and the sensorimotor system, but most importantly, the system of vocal imitation. Monkeys and apes do not have the capacity for vocal imitation. As a result, they could never experience a lexical explosion. There is no way to pass the information on without vocal imitation. The implication here is significant. Independently of the story that emerges for the natural vocalizations of animals, and their putatively “referential” calls – such as the vervet monkeys' predator alarm calls – none of these systems shows the kind of explosion in meaningful utterances that one sees in children from a very early age. This difference could have emerged for a variety of reasons, but one in particular is that there is no vocal imitation in non-human primates. If some genius vervet monkey invented an entire vocabulary of things for the environment, there would be no way to pass it on. It would just die with that individual. I think this argues very strongly for the idea that the system of thought was evolving for a very long time without any mechanism for externalization. For externalization to emerge, one species had to evolve the capacity both to link conceptual representations to distinctive sound structures, and to pass these structures on to others by means of imitation. Only one species seems to have worked this one out: Homo sapiens.
The same sort of questions arise for morality that arise for language, and interestingly we can think about the analogy between language and morality. I am certainly not the first to have made this kind of point, and let me just give a brief historical note. Several years ago Noam was already asking why everyone takes for granted that we don't learn to grow arms but rather are designed to grow arms. Similarly, he noted that we should conclude that in the case of the development of moral systems, there is a biological endowment which in effect requires us to develop a system of moral judgment that has detailed applicability over an enormous range.
The person who really picked this up in detail was the philosopher John Rawls, who in his 1971 classic, A Theory of Justice, made the following point: “A useful comparison here is with the problem of describing the sense of grammaticalness. … There is no reason to assume that our sense of justice can be adequately characterized by familiar common-sense precepts …” – very much like what we have been hearing over the course of this conference about the linguistic moves and inventing of vocabulary – “… or derived from the most obvious learning principles.” Again, one of the themes from today. “A correct account of moral capacities will certainly involve moral principles and theoretical constructions which go beyond the norms and standards cited in everyday life.”
Now this idea lay dormant for many, many years. A few philosophers – Gil Harman, Susan Dwyer, and most recently John Mikhail – picked it up and began to argue for it more forcefully. Over the past three years, I have been exploring both the theoretical and empirical implications of the linguistic analogy with two fantastic graduate students of mine, Fiery Cushman and Liane Young; I realize that I probably shouldn't wax so lyrical about my students, but they really are as terrific as I claim! As a caveat before jumping into the empirical work, let me note that in striking contrast with the revolution in linguistics that took place fifty years ago, when there were already extremely detailed descriptions of language, there is nothing like this in the case of morality. Thus, we started our work with a significant deficit, especially with respect to achieving
anything like descriptive adequacy. To start the ball rolling, we developed a website called the Moral Sense Test (moral.wjh.harvard.edu). It is a website that internet surfers visit on their own – if they have heard it discussed, or if they google “MST” (moral sense test), they will find us. Over a period of about two years we have collected data from approximately 100,000 subjects from 120 different countries, between the ages of 13 and 70. When an individual visits the site, he or she provides some biographical information – age, education, religious background, ethnicity, nationality, and so forth – and then proceeds to read a series of moral dilemmas, followed by questions that ask about the permissibility, obligatoriness, or forbiddenness of an agent's action.
As an empirical starting point, we have made use of several artificial dilemmas created by moral philosophers to explore the nature of our intuitions concerning actions that involve some kind of harm. The use of artificial examples mirrors, in some ways, the artificial sentences created by linguists to get some purchase on the underlying principles that guide grammaticality, or in our case, ethicality judgments.
Why go the route of artificiality when there are so many rich, real-world examples in the moral domain? The first reason that I need to spell out, though probably not as necessary with this audience as with many others, is that the use of artificial stimuli is a trademark of the cognitive sciences, providing a controlled environment to zoom in on the cognitive architecture of the human mind. A second reason, and I think more important in this particular context, is that real-world moral cases like abortion, euthanasia, organ donation, etc. have been so well rehearsed that our intuitions are gone. If I ask you “Is abortion right or wrong?”, you've got a view, and you've got a very principled view, in most cases. Whether I disagree with you or not is irrelevant. The main point is that you can articulate an explanation for why you think abortion is right or wrong. If you are interested in the nature of intuition, therefore, asking about real-world cases just won't do. Our moral judgments are too rehearsed. Artificial cases are unfamiliar, but if we are careful, we can manipulate them so that they capture some of the key ingredients of real-world cases. What I mean by careful is that we set up a template for one kind of moral dilemma and then clone this dilemma, systematically manipulating only a key word or phrase in order to assess whether this small change alters subjects' moral judgments. This method thus approximates a model in statistics or theoretical biology where one variable is manipulated while all others are held constant. Thus, for example, we take something like euthanasia, that relies in part on the distinction between actions and omissions, or more specifically, between killing and letting die, and then translate this into an artificial case such as the famous
trolley problems that I will discuss in a moment. When philosophers make this move, they seem to be happy saying “Well, my intuition tells me that this is right or wrong.” But for a biologically minded, empirical scientist, this claim simply raises a second question: is the philosopher's intuition shared by the “man-on-the-street,” or is it a more educated judgment? This is an empirical question, and one that we can answer. Let me give you a flavor of how we, and others such as Mikhail and my new colleague at Harvard, Josh Greene, have begun to fill in the empirical gaps (Hauser 2006).
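The “clone a dilemma, vary one factor” design described above can be sketched in a few lines of code. This is a minimal illustration, not the actual Moral Sense Test materials: the template wording, the factor names, and the function are all hypothetical, and a real study would control far more of the phrasing.

```python
# Hypothetical sketch of generating matched moral dilemmas from one
# template, manipulating a single phrase while holding all else constant.

TEMPLATE = (
    "A trolley is headed toward five people. {agent} can {action}, "
    "which will kill one person but save the five. "
    "Is it morally permissible for {agent} to {action}?"
)

# The single manipulated factor; every other word is held constant.
ACTIONS = {
    "switch": "flip a switch, diverting the trolley onto a side track",
    "bridge": "push a large man off a footbridge into the trolley's path",
}

def make_dilemmas(template, actions, agent="a bystander"):
    """Generate matched dilemma variants differing in only one phrase."""
    return {name: template.format(agent=agent, action=action)
            for name, action in actions.items()}

dilemmas = make_dilemmas(TEMPLATE, ACTIONS)
for name, text in dilemmas.items():
    print(f"[{name}] {text}")
```

Because the variants share every word except the manipulated phrase, any difference in subjects' permissibility judgments can be attributed to that one factor, mirroring the one-variable-at-a-time manipulation mentioned in the text.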
Consider four classic cases of the trolley problem. Somebody logs on to our website and gets some random collection of moral dilemmas. If they are trolley problems, they always begin with something like the following: a trolley is moving down a track when the conductor notices five people ahead on the track; he slams on the brakes, but they fail, and he passes out unconscious; if the trolley continues on this track it will kill the five people ahead. Here is where the dilemmas begin to change. A bystander can flip a switch, killing one person on a sidetrack but saving the five. Each subject then answers the question, “Is it morally permissible to flip the switch?” Here, 89 percent of our subjects say yes. Okay, now here is a small change in the problem. You are standing on a bridge, and you can push a fat guy off the bridge. He is fat enough that he will stop the trolley in its tracks, saving the five. You again ask, “Is it morally permissible to push the fat guy?” Here, only 11 percent of subjects say yes. Note that utilitarians have a real problem here, because it is one vs. five in both cases; if you are a utilitarian, you had better start looking for alternative explanations. Similarly, those with a deontological, non-consequentialist bent are also in trouble, because adhering to the rule that killing is wrong won't work: your actions result in the death of one person in both cases.
Now the problem with these two dilemmas, looked at scientifically, is that there are too many differences between them – there's a fat guy, there's a skinny guy, there are two tracks, there's a redirection of threat, there is direct contact with a person vs. indirect contact by means of a switch. What we need are cases that reduce the variation, leaving perhaps only one principled distinction between them, enabling us to look at the nature of the judgment. So here are two cases. The fat guy is back, but now we have a loop on the track: if you flip the switch, the trolley will go onto the loop, but then of course it comes back to hit the five. However, the fat guy, who is fat enough to stop the trolley, can stop it there so it does not kill the five. You once again ask, “Is it morally permissible for the bystander to flip the switch?” The important thing to note here is that, just like the bridge case, this case can also be interpreted as using the man as an intended means to the greater good. If he is not there, just flipping the switch does you no good, because the trolley goes on, comes back, and kills the five. The fat man
(or in other versions, just a man with a heavy backpack) is the intended means, and your only hope for saving the five. Here, 52 percent of people say that flipping the switch is morally permissible. Note the contrast with the bridge case, which generated only an 11 percent permissibility judgment: there is a difference even though both cases use the man as an intended means. We'll come back to this difference in a minute.
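To get a feel for how large the 11 percent vs. 52 percent gap is, one can run a standard two-proportion z-test. This is a sketch under assumed numbers: the talk reports percentages but not the per-dilemma sample sizes, so the n of 1,000 respondents per dilemma below is purely illustrative, and the function name is my own.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test on observed proportions p1, p2
    with sample sizes n1, n2. Returns (z statistic, p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value via the standard normal CDF (math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative only: 11% "yes" (bridge) vs. 52% "yes" (loop),
# with an assumed 1,000 respondents per dilemma.
z, p = two_proportion_z(0.11, 1000, 0.52, 1000)
print(f"z = {z:.1f}, p = {p:.3g}")
```

With samples anywhere near that size, the difference between the bridge and loop judgments is far beyond anything chance would produce, which is what makes the contrast between the two intended-means cases worth explaining.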