Of Minds and Language
Edited by Massimo Piattelli-Palmarini, Juan Uriagereka, and Pello Salaburu
There are many interesting problems in the relation between grammatical form classically understood, and logical form in the old sense (i.e., the structure of the proposition, or truth conditions). I have tried to deal with some of these and I will mention a couple here. Consider (4):
(4)    An occasional sailor strolled by
Suppose I am saying what happened while we were sitting at the café. An assertion of (4) could mean that Jerry Fodor came by, because he is an occasional sailor, but that is not what I am likely to mean in the context. What I am likely to mean is, roughly speaking, that occasionally some sailor or another strolled by. So here we have a case where the apparent adjective, occasional, modifying sailor, is actually interpreted as an adverbial over the whole sentence. The case is the same for the average man, as in The average man is taller than he used to be, and things of that kind.
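The two readings of (4) can be set side by side in first-order notation; this is my own illustrative gloss of the contrast just described, not a formula from the text:

```latex
% Apparent (adjectival) reading -- not the one likely intended:
\exists x\,[\mathrm{sailor}(x)\wedge\mathrm{occasional}(x)\wedge\mathrm{strolled\text{-}by}(x)]
% Adverbial reading actually intended, with the "occasional" part
% scoped over the whole sentence:
\mathrm{Occasionally}\,\big(\exists x\,[\mathrm{sailor}(x)\wedge\mathrm{strolled\text{-}by}(x)]\big)
```

On the second gloss, the determiner-adjective combination an occasional contributes a sentence-level operator, which is what makes the mapping from surface syntax to logical form non-trivial here.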
Faced with such examples, there is a temptation to try to keep the syntax off which the semantic computation is run very simple, and correspondingly to complicate the semantics. I myself suspect that this move is a mistake, because complicating the semantics is bound to mean the introduction of a far more powerful logic than the language itself otherwise bears witness to. In fact, Montague's Intensional Logic, commonly deployed in semantics, is of order ω, and so allows all finite levels (Montague 1974). But it is important to stress, and I'd be willing to discuss this, that there is no evidence whatsoever for such a logic outside the domain of puzzles such as the one I have just posed, together with the assumption (unargued for) that the linguistic level that enters semantic computation is identical to superficial syntax. There is no independent evidence in language (or in mathematics, I think, though this is a matter of serious debate) for a strong higher-order logic. Rather, language allows first-order and some weak second-order quantification, and that's it (Higginbotham 1998). Appeal to higher-order logic in linguistics constitutes the importation of machinery that is not wanted for any purpose, except to keep the syntax simple. There must be a tradeoff between components: maybe the syntax is more abstract than you think.
There is also a temptation to suppose (maybe Jackendoff is guilty of this; he thinks he is not, but certainly some people are guilty of this¹) that once we go mentalistic about semantics, there are certain sets of problems that we don't have to worry about, like Santa Claus. So if somebody says,
(5)    It's perfectly true that Higginbotham is standing; it's also true that Santa Claus comes down the chimney on Christmas (that's what I tell my child).
you can have a semantics in which these things come off the assembly line together, and then we can have some further note about the fact that Santa Claus doesn't really exist. But I don't think you can do semantics in this way. I mean, again, that you can't do the semantics with your left hand, and then pretend by waving your right hand that you were really only talking. Moreover, it is very important to recognize that part of knowing the interpretation, the meaning of the word Santa Claus, is knowing that there is no such thing. That is important, and that means that the semantics should not treat the two clauses in (5) in parallel, but should take account of this further dimension of meaning.
Something similar might be said about the example of generics that came up in earlier discussion here. From the point of view of early learning, these surely must be among the simplest things learned. Dogs bark, cats meow, fire burns, and so forth. From the point of view of the full understanding, as we work up our system of the world, they are in fact extremely complicated, these generic sentences. And I do agree with the critical comment, made by Nick Asher among others (Asher and Morreau 1995), that the fashionable use of a made-up "generic quantifier" for the purpose of giving them an interpretation is not an advance. Rather, what you have to do is take dogs bark (if x is a dog, x barks), and you have to delimit through our understanding of the world what it is that will count as a principled objection to dogs bark, and what it is that will count as simply an exception. All of that is part of common-sense systematic knowledge. It can't be swept under the rug just on the grounds that you're doing Linguistics.
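The contrast at issue can be put in symbols; the notation below is my own illustrative gloss of the two approaches, not a formula from the discussion:

```latex
% "Dogs bark" read as a plain first-order conditional, with the
% sorting of principled objections from mere exceptions left to
% common-sense knowledge of the world, outside the logic:
\forall x\,[\mathrm{dog}(x)\rightarrow\mathrm{bark}(x)]
% The criticized alternative builds exception-tolerance into a
% made-up generic quantifier:
\mathrm{Gen}\,x\,[\mathrm{dog}(x)]\,[\mathrm{bark}(x)]
```

On the first approach the logical form stays simple, and the real work falls to the systematic common-sense knowledge that the passage says cannot be swept under the rug.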
So those are two kinds of things that I have been interested in, syntactic/semantic differences amongst languages and the nature of semantic computation, and the relations of semantics to our systematic beliefs about the world. I should say that Noam and I years ago used to have discussions about whether the semantics ought to give you a real theory of real honest-to-God truth about the kind of world in which we live, which is full of independent objects that don't at all depend on my existence or that of any mind for their existence, or whether in a way it is more mentalistic than that, as he suggested. And after an hour or so of conversation, we would agree that we had reached a point where the debate was merely philosophical in the old sense. That is to say, which view we took about this probably didn't matter much for the nature of our research, whether we should be full realists or not.
David Hume once said (in the Treatise), "'Tis in vain to ask Whether there be body or not," but he added that what we can ask is what causes us to believe in the existence of bodies.² So similarly, we might say that it's in vain to ask whether what we systematically take there to be really exists, but we can ask what causes us to think and speak as we do. If we can do that, if we can replace one kind of question with another, then perhaps the arguments about realism and anti-realism or mentalism in semantics will go away.
What is left to the future? I think there are many interesting questions. One of them, on which I think there has been almost no progress, is the nature of combinatorial semantics. We have the notion of truth, maybe, as a nice, primitive notion, but where does the notion of predication come from? You see, if you think about it, you couldn't possibly learn a language without knowing what's the subject and what's the predicate, because these play fundamentally different semantic roles. You can't make judgments without using predicates. On the other hand, you couldn't tell a child, "Now look here, in dogs bark, the word bark is the predicate and it's true or false of dogs." You couldn't do that because the child would have to already understand predication in order to understand what it was being told. Now sometimes this problem gets swept under the rug. I've had people say to me that it's simple enough, in that predicates refer to functions and names refer to their arguments. But that's not the answer to the question; that's the same question. And in fact Frege, who invented this way of talking, recognized it as the same question. What's the difference between the meaning of a functional expression and the meanings of its arguments? I guess I would like to see progress made on this kind of question, the question whether language as we have it, or perhaps must necessarily have it, must be cast in this mold, whether it must employ notions of subject, predicate, and quantification. So far we don't know any other way to do it. It would be nice to know where predication comes from and whether language makes predication possible or predication is merely an articulation of something more basic.
Those, then, are summaries of the kinds of things that I think we might try to think about in the syntax-semantics interface: when it comes to general principles, and where it is really special to language. There is also the clarification of the metaphysical questions that inevitably arise about the semantics. We have a semantics of events. "But tell me more about the nature of these objects," one might say. A theory of events for semantic purposes really doesn't tell you much about their nature, it's true. And in the further articulation of the subject, which will be a really very refined discipline showing how exactly these very simple (to us) and primitive concepts get articulated, we'll see their complexity.
Let me give you another very simple example, with which I'll conclude, having to do with the English perfect (Higginbotham 2007). Every native speaker of English knows that if I have a cup of coffee and I tap it with my hand and knock it over, what I say, what I must say, is (6):
(6)    I have spilled my coffee
That is, I must use the present perfect. If somebody gives me a mop and I mop the spill up and put the mop away, I can no longer say (6). Instead, I must say (7):
(7)    I spilled my coffee
These are the sort of data that led Otto Jespersen (who regarded English as a very conservative language, relative to other Germanic languages) to say that the English perfect is not itself a past tense, but talks about "present results of past events" (Jespersen 1942). That the perfect is thus restricted, if that is true, is a rather special property of English. If you try to work out the semantics of (6) versus (7), I think you do get somewhere if you think of the perfect as a purely aspectual element, shifting the predicate from past events to present states that are the results of those events. But the investigation requires very careful probing into exactly what you are warranted in asserting and exactly when. It is not at all a trivial matter. It takes much reflection if one is, so to speak, to get inside a language and to articulate its semantics self-consciously, even if it is one's native language. As a native speaker, you get things right without knowing what it is you are getting right. Conversely, non-native speakers often have a tin ear. The English present perfect is a good example of what goes without saying in one language, but is strange in another. If you take, say, ten Romance speakers who are speaking English as a second language, eleven of them will get it wrong: they always slip in the perfect forms when they're not warranted in English.
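The aspectual view of the perfect just described can be rendered in event-semantic notation; the following is my own illustrative formulation of that shift, not a formula from the text:

```latex
% (6) "I have spilled my coffee": a past event plus a result state
%     that still holds at speech time
\exists s\,\exists e\,[\,\mathrm{spill}(e, I, \text{my coffee}) \wedge e < \mathrm{now}
  \wedge \mathrm{result}(s, e) \wedge \mathrm{hold}(s, \mathrm{now})\,]
% (7) "I spilled my coffee": only the past event is asserted
\exists e\,[\,\mathrm{spill}(e, I, \text{my coffee}) \wedge e < \mathrm{now}\,]
```

On this sketch, mopping up the spill terminates the result state s, so (6) is no longer assertable while (7) remains true, which matches the reported judgments.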
I look, then, for considerable progress in the (as it were) backyard field of lexical semantics. I think that lexical semantics holds a great deal more promise, not only for clarifying common concepts expressed by nouns and verbs, but also for clarifying notions of aspect, tense, and so forth, than it has generally been credited with. And my hope is that as that research goes on, simultaneously with combinatorial semantics, we shall succeed in reducing the burden on the combinatorics.
But there is a fond memory, and a fond quote, here. My friend Gennaro Chierchia and I once had a conversation about some of these matters, and Gennaro said, “But Jim, if you're right, then the combinatorial semantics should be trivial.” And I replied, “That's right; that's the way I'd like it to be.”
Goodness knows how it will turn out.
PIATTELLI-PALMARINI: You say, and it is very interesting, that the English to doesn't exist in Italian, and probably the English past tense does not exist in Italian either. Now, you say that you would like such facts to be principled, not to be sort of isolated facts. Great, but my understanding of the minimalist hypothesis is that all parameters are now supposed to be morpho-lexical. Is this acceptable to you? One can stress that, even if it's lexical, the non-existence of English to in Italian looks like a lexical datum, and maybe also the non-existence of the English past tense in Italian may be an issue of auxiliaries. So all this can be principled even if it is morpho-lexical. Is it so?
HIGGINBOTHAM: Of course the absence of to (also into, onto, the motion sense of under (It. sotto), and so forth) has to be a matter of principle. I think the thing that was distinctive about the view that I was offering is that these words couldn't exist because a certain kind of combinatorics is not possible in Italian, specifically the combinatorics which says you take something which is not the syntactic head, and you make it the semantic head. That's something that is generally impossible, and it would be a principled absence, explained on the grounds of general language design. Conversely, to permit the semantic head to be other than the syntactic head would constitute an interface parameter that says: in this kind of language you are allowed to mesh the syntax with the semantics in such and such a way. But of course the working hypothesis in the field is that the combinatorics is universal. I would think that, like the compositionality hypothesis, it's probably very close to being true, but it's not entirely true, and it would be interesting to know where it breaks down. If I'm on the right track, it breaks down in this particular place.
BOECKX: I know that you have written on this extensively, but could you give me a one-line argument for thinking that the parameter that you are talking about by itself is semantic as opposed to syntactic? I guess it touches on the tradeoff between syntax and semantics and where the combinatorics, or the limitations of the combinations, come from.
HIGGINBOTHAM: Well, it's an interface phenomenon. The first part of the line of thought is the following, and is due to Jerry Fodor. Jerry pointed out that if you take a predicate and its result, and you modify the combination with an adverbial, then the position of adverbial modification becomes unique; the sentences are not ambiguous.³ So his original argument compared John caused Bill to die by X (some kind of action) versus John killed Bill by X. In John caused Bill to die by X the by-phrase may modify cause or die. But with kill, you only get the modification of the whole verb. And it's the same with causative verbs, like I sat the guests on the floor versus The guests sat on the floor. Now it's also the same with wipe the table clean. So if to I wipe the table clean you add some kind of adverbial phrase, it's only the whole business, the wiping clean, not the wiping or the being clean alone, that gets modified. That's at least a consideration in favor of saying that wipe clean is a complex verb, just as an ordinary predicate like cross the street, and that the event has two parts. You don't just have an event e of crossing. There's an e1 and an e2, where in the case of cross the street e1 is a process, say stepping off the curb and proceeding more or less across, and e2 is the onset of being on the other side of the street, the end of the process. Similarly, in the case of wipe clean, you have the process signaled by wipe and the onset of the end, signaled by clean. Once you have said that, you are not just putting the verb and the adjective together, you're not just saying I wiped the table until it became clean, you're actually creating a complex predicate, wipe-clean as it were. Then you would expect that, as in the case of kill or cross, you have only one position for adverbial modification, and that's in fact what you get. However, the capacity to form resultative predicates like wipe-clean is language-specific (common in English and Chinese, but not in Korean or Italian, for example). There is a combinatorial option, with effects on interpretation, available in some languages but not others. In that sense, the parametric difference between them is not purely syntactic, as it involves the interface between syntax and semantics.
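The two-part event structure described in this answer can be written out as a process-plus-onset pair; the notation is my own illustrative sketch of the e1/e2 idea, not a formula from the discussion:

```latex
% "cross the street": a process part e1 and the onset e2 of its result
\exists e_1\,\exists e_2\,[\,\mathrm{process}(e_1)\wedge\mathrm{cross}(e_1, x, \text{the street})
  \wedge\mathrm{onset}(e_2, \mathrm{on\text{-}other\text{-}side}(x))\wedge\mathrm{end}(e_2, e_1)\,]
% "wipe the table clean": the same pair, built compositionally from V + A
\exists e_1\,\exists e_2\,[\,\mathrm{wipe}(e_1, x, \text{the table})
  \wedge\mathrm{onset}(e_2, \mathrm{clean}(\text{the table}))\wedge\mathrm{end}(e_2, e_1)\,]
```

On this sketch an adverbial attaches to the pair as a whole, which is why wipe clean, like kill and cross, offers only one position for adverbial modification.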