Of Minds and Language

Authors: Pello Salaburu, Massimo Piattelli-Palmarini, Juan Uriagereka

(i)    need to follow from something;

(ii)    if interface systems are not the right place to look for them (and no empirical evidence from comparative cognition, to my knowledge, suggests they are), and

(iii)    syntax and semantics very closely “correspond,” then

(iv)    human syntax has to provide the vertical hierarchies in question; but,

(v)    it can do so only if it is multi-dimensional; and

(vi)    it can be multi-dimensional only if it does not reduce to Merge (though it may begin there, a point to which I return shortly).
8

In short, if we are to explain the semantic richness of language – and not merely its systematicity – we need a multi-layered architecture, like the one we found in the human number system (Uriagereka 1995; 2008: Chapters 7–8). The hierarchical architecture of the syntactic system will need to reflect the very architecture of meanings (or “thoughts”), as constructed by the computational system of language.

9.6 Deriving the ontology of language

The specific ontology of natural language might in principle be there for purely metaphysical reasons. A world out there might be assumed that inherently, or by itself, and independently of human cognition or even the existence of humans, is a very orderly place categorially: it comes structured into objects, events, propositions, and so on, all as a matter of metaphysical fact, viewed sub specie aeterni. But where does this ontology come from? And how do we approach it? Which generative process underlies it? On standard philosophical methodologies, it will follow from a systematization of our conceptual intuitions. But that begs the question: our conceptual intuitions are what we want to explain and to study in formal and generative terms. Will it not be inherently syntactic distinctions that we have to appeal to when starting to study these ontologies empirically, like that between a noun and a verb phrase, or a transitive and an unaccusative verb? Would we take ourselves to think about propositions if we were not creatures implementing a computational system that, specifically, and for unknown reasons, generated sentences? How would we characterize what a proposition is, if not by invoking syntactic distinctions, like that between an argument and an adjunct, an event and a proposition, a tensed proposition and one of which truth is predicated, and so on?

I do not question here that Merge is for real, in either arithmetic or language. The point is that it yields no ontologies, and therefore is only a subsystem of language. I even suspect that this subsystem is quite clearly identifiable. A linguistic subsystem that exhibits a systematic and discretely infinite semantics, but no ontology, is the adjunct system. When a syntactic object adjoins to a syntactic object, the latter's label merely reproduces, but there is no categorial change. Moreover, an adjunct structure like (13), at least when read with a slight intonation break after each adjunct, has a flat, conjunctive kind of semantics (see Pietroski 2002; Larson 2004):

(13)    (walk) quickly, alone, thoughtfully, quietly…

Walk quickly simply means, for some event, e, that e is a walking and it is quick:

(14)    [walking (e) & quick (e)]

The adjunct system, therefore, contains predicates and it can conjoin them, but no matter how many predicates we add adjunctively, no new ontology emerges. This is not the kind of structure or the kind of semantics that we need in order to make a judgment of truth, or to approach what I called the intentional dimension of language. It also yields no entailments: a solitary event of walking, say, entails nothing about whether it was quick or slow. Argument structures, by contrast, lack this conjunctive semantics, and they do generate entailments: [kill Bill], say, a verb phrase, does not mean that there was an event, and it was a killing and it was Bill. As for entailments, a killing of Bill not only necessarily entails Bill, as an event participant; it also entails, as an event, a state, like Bill's being dead.
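The conjunctive semantics in (14) can be given a toy formalization. The sketch below is purely illustrative and not from the text; all function and predicate names are hypothetical. It models an event as a set of monadic predicates that hold of it: adjunction is just adding another predicate, and a conjunctive formula is true of the event iff every conjunct holds of it. The point of the exercise is that no amount of adjunction changes what kind of object the event is.

```python
# Toy neo-Davidsonian model of adjunct semantics (illustrative sketch only).
# An event is modeled as a frozenset of monadic predicates holding of it.

def adjoin(event_predicates, modifier):
    """Adjunction conjoins one more predicate of the same event;
    the result is still 'an event', never a new category of object."""
    return event_predicates | {modifier}

def satisfies(event_predicates, formula):
    """A conjunctive formula [P1(e) & ... & Pn(e)] is true of e
    iff every conjunct is among the predicates holding of e."""
    return formula <= event_predicates

# (13)/(14): "walk quickly, alone" = [walking(e) & quick(e) & alone(e)]
e = frozenset({"walking"})
e = adjoin(e, "quick")
e = adjoin(e, "alone")

print(satisfies(e, {"walking", "quick"}))   # True: both conjuncts hold of e
print(satisfies(e, {"slow"}))               # False
# However many predicates we adjoin, e remains the same kind of object,
# and nothing beyond the conjuncts themselves is entailed: no new ontology.
```

The model also makes the limitation vivid: a set of predicates cannot express that the killing is inherently one *of Bill*, since set membership encodes conjunction only, not the argument-of relation.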

A killing that is inherently one of Bill is something that adjunct syntax cannot describe. Nor could a thought structured by adjunct syntax alone ever be about any such thing. The C-I systems would thus be deprived of such thoughts, or of the means of recognizing them, unless the computational system of language or something equivalent to it restructured them, in line with the novel architecture I proposed.

Perhaps, then, here, in the adjunctive subsystem, and only here, interface explanations work: for the only thing that the non-syntactic, external “semantic systems” have a chance of “motivating” or explaining is something that does not have much syntax. But adjuncts are precisely what has been argued to fall into this category (Ernst 2002). Therefore an interface explanation of the standard minimalist kind may rationalize the existence of adjuncts (or at least a sub-category of them) – and little else. In effect, adjuncts are mostly characterized negatively: basically, they have never really fitted into the apparatus of syntax that minimalism has tried to derive from “virtual conceptual necessities.” They do not receive theta-roles, and do not take part in the agreement system; as Chomsky puts it, moreover, adjunction of α to β does not change any properties of β, which behaves “as if α is not there, apart from semantic interpretation,” which makes adjunction a largely semantic phenomenon; as he further argues, the resulting structure is not the projection of any head, which makes adjunct syntax projection-free; and adjunction cannot encode the argument-of relation correlated with head–complement dependencies (see Chomsky 2004b: 117–118). These are properties we may expect of a system based on unidimensional Merge. Disparities with the principles of argument and A'-syntax, however, suggest a radical dichotomy between arguments and adjuncts: their mode of attachment and connectivity with the syntactic object to which they attach is radically different.
9
This syntactic dichotomy, if I am right about strict form–meaning correspondences above, should affect the principles of semantic interpretation for adjunct structures; as we have seen, it does.

In the “extended” argument system (extended to cover cartographic hierarchies, as in Cinque 1999), a form of hierarchy emerges that is completely different from the horizontal discrete infinity that adjuncts yield. We now see categories rigidly piling up on top of other categories, forming the quintessential V-N-T-C cycles that the sentential organization of language entails. This is not the kind of cycle that we see in a successor-function-based system: there, we can cycle indefinitely in generating the natural numbers by iteratively applying the operation “+1,” with each such application completing one cycle. In language, we are looking at a cycle that inherently constructs, ultimately, only one particular kind of object – a proposition – and that necessarily goes through a number of other objects along the way, such as an object, an event, a Tensed event, and so on.
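The contrast drawn here can be put schematically. In a successor-based system every cycle returns an object of the same single type, whereas the sentential cycle passes through categorially distinct objects and terminates in a proposition. A minimal sketch, with hypothetical labels chosen only for illustration:

```python
# Successor-style cycle: each application of "+1" completes a cycle,
# but the output is always the same kind of object -- a natural number.
def successor_cycle(n, steps):
    for _ in range(steps):
        n = n + 1          # one cycle per application of "+1"
    return n               # still just a number: mono-categorial

# Sentential cycle: each layer yields a categorially new kind of object,
# and the cycle inherently terminates in one kind of object: a proposition.
SENTENTIAL_CYCLE = ["N (object)", "V (event)", "T (tensed event)", "C (proposition)"]

def build_proposition():
    stages = []
    for layer in SENTENTIAL_CYCLE:
        stages.append(layer)   # categories rigidly piling up on one another
    return stages[-1]          # what is ultimately constructed

print(successor_cycle(0, 5))   # 5: another number, nothing categorially new
print(build_proposition())     # 'C (proposition)': a distinct endpoint
```

The design point is the asymmetry: `successor_cycle` is closed over one type, while `build_proposition` is forced through a fixed sequence of distinct types, mirroring the vertical hierarchy the text describes.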

Broadly speaking, what I suggest pursuing here, then, is an internalist direction in the explanation of semantics. Philosophy for the last one hundred years has pursued the opposite, externalist orientation: it has attempted to explain what we mean by what the world is like, or what objects it contains, and which physical relations (like causation) we stand in with respect to them.
10
Standard minimalist syntax, on the other hand, as I have pointed out, blames ontological cuts on language-external C-I systems. Neither option, I contend, seems as promising as what I have now proposed: to blame these cuts on syntax. The C-I systems are nonlinguistic ones. Ipso facto, to whatever extent the very identity of certain kinds of thoughts lies in the way they are universally structuralized in language, they wouldn't be found in the C-I systems. They would literally arise as, and only as, the computational system of language constructs them (i.e., endows them with the very structures and identities that define them in logical space). While the extent to which this has to happen is not total, it is not nil either. But then it will follow that for some of the thoughts we are thinking it will be true that we are only thinking them because the computational system of language makes them accessible to us. Fully propositional, truth-evaluated thoughts that can be placed in discourse (for example, to make a claim of truth) are a good candidate for such thoughts.

As for the externalist option above, modern physics has made virtually all of the intuitive categories that enter into our ordinary ways of understanding and talking obsolete. Early modern naturalists still found inconceivable a world where matter can act where it is not. But they did not conclude from this that such a world could not be real, but rather that one had to give up the hope that the world would validate or find much use for human conceptual intuitions. Soon after Newton, physicists even concluded that matter was unextended, throwing overboard the one crucial “essential” feature of matter that Descartes had kept. So the intuitive ontology of language is radically different from a physical ontology, and it is not that physical ontology that will explain what we take our expressions to mean, and what categorial distinctions they entail. These could in principle also come from an entirely different module of “thought,” but, as I have argued, this requires, in fairness, showing that a different computational system is operative there than in language. If, on the other hand, this presumed separate module merely recapitulates syntactic distinctions under another name, it becomes explanatorily vacuous.

9.7 Conclusions

The standard formulation of the Strong Minimalist Thesis (SMT) may have misled us: in its pursuit of principled explanations of why language is the way it is, it has tended to minimize the contribution of the syntax to what thoughts are assumed available to the C-I systems, and thus to deflate syntax to an only minimally hierarchical system that is mono-categorial in the way the natural number sequence is. But this strategy is likely to succeed only if all “vertical” hierarchical cuts, whose reality is empirically manifest in language, and which intimately correlate with syntactic complexity, are, implausibly, dumped on the nonlinguistic side of the interface, in the so-called conceptual–intentional (C-I) systems. To be specific, the proposition that “C-I incorporates a dual semantics, with generalized argument structure as one component, the other being discourse-related and scopal properties” (Chomsky 2005a) – hence that essentially the entire semantic potential of language is available independently of the very syntactic structures it is meant to explain or motivate – is very likely far too optimistic and unsupported by empirical evidence, as far as I can see (maybe even in principle, as there are many methodological obstacles in the empirical investigation of “thought without language”).

If that optimism is unwarranted, and from one point of semantic complexity onwards I have argued it likely will be, a proper explanation for such semantic knowledge has to come from the inherent organization of syntax itself. It has to be sought in a more internalist direction. Genuine hierarchy in the system calls for dimensional shifts in the derivational dynamics, of a kind that can create necessary entailments between different kinds of objects on formal grounds. This system will generate an ontology: ontological innovativeness will lie on the linguistic side of the semantic interface.

Discussion

LAKA: You are arguing that there should be no intentional interface. Everything that is a little complex or relational is part of syntax, roughly speaking. You also said that there might not be a conceptual interface either, and your examples were argument structure, discourse factors, and so forth. So my question is: what is your view of the relationship between syntax and concepts, just bare concepts? We know that animals have some sort of – I don't want to say brilliance, but something similar, maybe not the same as ours – and we have evidence that there are nonverbal creatures that have at least something we can call concepts.

HINZEN: On the bare concepts: if we accept that word meanings are atomic, then there are atomic concepts, and if not, we will need to reduce them further. If we wish to spell out the meaning of these atomic concepts, then any appeal to a notion of reference, in particular, is, I think, entirely circular, so I believe we are essentially stuck with those conceptual atoms. I don't think we can reduce them further; they are primitives of reality. As for the interface with the syntax, I suppose that they are carried into the syntax as indivisible units, but I do believe in what I have elsewhere called “exploding” the lexical atom. If we explode the lexical atom, we give it structure such that specific referential properties arise. The extent to which these bare concepts are shared, and how many of them are, is, I think, a totally open question. As Chomsky emphasized earlier here (see page 27 above), the basic properties of human words seem to be so different from anything else in existence in animal communication that I would say that, at this moment, it is a totally open issue. Maybe there are a few concepts and maybe there are none. So in fact the whole enterprise of motivating language externally or semantically, by conditions imposed on it, might actually stop at concepts already, not at more complex stuff like argument structures, say. Now, as for the D-structure-like forms of complexity: in my view, if you just have adjuncts and a very simple form of combination to work with, and a very simple form of semantics correlating with that, then complexity increases very significantly as you go on to something like argument structure, because then we have at least theta-roles – and their semantics is not a conjunctive semantics any more, as with adjuncts. So, for example, if you say “John runs,” this does not mean that there is a running and it is John. It is a running of John. There is something new there which I wouldn't know how to motivate from anything other than itself – and not from so-called interface conditions in particular. So maybe this is the place where motivations from the interface have to stop, but, as I said, maybe such motivations stop earlier, at the level of word meaning already. In any case, at some specific point the external motivation does certainly stop, and my point was that wherever it does, at that point we have to start thinking more deeply about what the properties of syntax are that give us these elements, which language itself has brought into being.
