Of Minds and Language
Authors: Massimo Piattelli-Palmarini; Juan Uriagereka; Pello Salaburu
Good sense is the most evenly distributed thing in the world; for everyone thinks himself so well supplied with it that even those who are the hardest to please in everything else do not usually desire more of it than they have. It is not likely that all are mistaken in this; rather, it testifies that the power of judging well and of distinguishing the true from the false, which is properly what is called good sense or reason, is naturally equal in all men.
Unveiling the basis for human judgments of truth would thus seem to be of prime philosophical importance and interest. In what follows I will describe some steps which I think are needed to understand the origin of truth, and hence of human intentionality, continuing to make an assumption I have made in these past years: that the computational system of language, the generative system of rules and principles that underlies the construction of expressions in any one human language, is causally responsible for how we think propositionally and why we have a concept of truth in the first place. I want to argue that if this is right, and the generative system of language underlies and is actually indistinguishable from the generative system that powers abstract thought, then today's most common and popular conception of the architecture of the language faculty is mistaken, as is our conception of the basic structure-building operation in language, the recursive operation Merge.
Today's “standard” theory of the architecture of the human language faculty has been arrived at principally through a consideration of which features and components this faculty has to have if it is to be usable, in the way we use language, at all. In particular, the standard view goes, there has to be:
(i) a computational, combinatorial system that combines expressions from a lexicon, LEX (i.e., a syntax) and employs a basic structure-building operation, Merge;
(ii) a realm of “meanings” or “thoughts” that this combinatorial system has to express or “interface with”;
(iii) a realm of sound, or gesture (as in sign languages), that the system has to equally interface with, else language could not be externalized (or be heard/seen).
If the syntax does nothing but construct interface representations, and there are no more than two interfaces, we get the picture shown in Fig. 9.1, where PHON and SEM are the relevant representations.
Fig. 9.1. The standard model.
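For concreteness, here is a minimal sketch of this standard picture in Python. Everything in it is illustrative: the toy lexicon, the representation of syntactic objects as nested sets, and the stand-in mappings `to_phon` and `to_sem` are assumptions made for exposition, not claims about the actual content of the interface mappings.

```python
# A minimal sketch of the "standard" architecture (Fig. 9.1): one
# combinatorial system (Merge), whose output is read off at TWO interfaces,
# PHON (sensorimotor) and SEM (conceptual-intentional).

LEX = {"the", "man", "left"}  # toy lexicon

def merge(a, b):
    """Merge: combine two syntactic objects into a new, unordered object."""
    return frozenset([a, b])

def to_phon(obj):
    """Stand-in for the mapping to the PHON interface (externalization)."""
    if isinstance(obj, str):
        return obj
    # Linear order is stipulated here purely so the example prints something.
    return " ".join(sorted(to_phon(x) for x in obj))

def to_sem(obj):
    """Stand-in for the mapping to the SEM interface ("meanings")."""
    if isinstance(obj, str):
        return ("CONCEPT", obj)
    return ("COMPOSE", tuple(to_sem(x) for x in obj))

dp = merge("the", "man")
tp = merge(dp, "left")
print(to_phon(tp))  # one interface representation (sound side)
print(to_sem(tp))   # the other interface representation (meaning side)
```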
From these first demarcations of structure, further consequences follow: in particular, whatever objects the computational system constructs need to satisfy conditions on “legibility” at the respective interfaces, imposed by the two relevant language-external systems (sensorimotor or “S-M” systems, on the one side, and systems of thought, “Conceptual-Intentional” or “C-I” systems, on the other). Ideally, indeed, whatever objects the syntax delivers at one of these interfaces should only contain features and structures that the relevant external system can “read” and do something useful with.
The “Strong Minimalist Thesis” (SMT) attempts to explain language from the very need for the language system to satisfy such interface conditions: language satisfies this thesis to whatever extent it is rationalizable as an optimal solution to conditions imposed by the interfaces. In the course of pursuing this thesis, these conditions have come to be thought of as very substantive indeed, and as effectively explaining much of the diversity of structures that we find in human syntax. For example, there is said to be a semantic operation of “predicate composition” in the language-external systems of “thought” with which language interfaces, and thus (or, therefore) there is an operation in the syntax, namely “adjunction,” which as it were “answers” that external condition. By virtue of that fact, it is argued, adjunction as a feature of syntax finds a “principled explanation”: its answering an interface condition is what rationalizes its existence (Chomsky 2004b).
This example illustrates a way in which empirically certified conditions in syntax are meant to correlate one-to-one with certain conditions inherent to the “semantic component” (or the so-called “Conceptual-Intentional Systems” thought to be there irrespective of language), and how we may argue for such optimality in an effort to give substance to the SMT.
The existence of a semantic interface that plays the explanatory role just sketched is often said to be a “virtual conceptual necessity,” hence to come “for free.” But note that all that is really conceptually necessary here (and even that is not quite necessary; it is just a fact) is that language is used. This is a much more modest and minimal requirement than that language interfaces with “outside systems” of thought which are richly structured in themselves (as richly as language is, in fact) so as to impose conditions on which contents language has to express. Language could be usable, and be used, even if such independently constituted systems did not exist and the computational system of language literally constructed all the semantic objects there are. As Chomsky points out in personal conversation, at least the outside systems would have to be rich enough to use the information contained in the representations that the syntax constructs. Even that, I argue here, is too strong, and the more radical option is that the outside systems simply do not exist.
The new architecture I have in mind is roughly as shown in Fig. 9.2, and I will try to motivate it in the next section.
Fig. 9.2. The “radical” model.
The differences from the previous architecture are quite obvious: there is now no semantic component, no independent generative system of “thought,” no “mapping” from the syntax to such a system, no semantic “interface.” There is a computational system (syntax), which constructs derivations; periodically, after each “phase” of a computation, the generated structure is sent off to the sensorimotor systems; and there are no structured semantic representations beyond the ones that the syntax is inherently tuned to construct.
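By way of contrast with the sketch given for Fig. 9.1, here is a minimal sketch of this radical picture, under the section's own assumptions: the derivation proceeds in phases, each completed phase is handed to the sensorimotor systems, and there is no `to_sem` mapping, because the syntactic object itself is the only “semantic representation” there is. The segmentation into phases shown below is a toy stipulation.

```python
# A minimal sketch of the "radical" architecture (Fig. 9.2): one interface
# only. Each phase is externalized to the sensorimotor (S-M) systems; no
# separate semantic component is ever consulted.

def merge(a, b):
    """Merge: combine two syntactic objects into a new, unordered object."""
    return frozenset([a, b])

def transfer_to_sm(phase_output):
    """Hand a completed phase over for externalization (the S-M side)."""
    print("externalize:", phase_output)

def derivation(phases):
    for phase in phases:        # periodic, phase-by-phase spell-out
        transfer_to_sm(phase)   # the single interface of this model
    # Nothing else happens: the phase outputs themselves exhaust "semantics".

# A toy derivation with two stipulated phases.
derivation([merge("left", merge("a", "fortune")),
            merge(merge("the", "man"), "left")])
```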
One way of putting this somewhat radical idea is in the form of a question: is syntax the dress or the skeleton of thought? Is syntactic complexity a contingent way of dressing up human thought, viewed as something independent of language, in a linguistic guise? Or is syntax what literally constructs a thought and gives it its essential shape, much as our bones give shape and structure to our body? If we stripped away syntax, would thought merely stand naked, or would it fall apart?
The former picture is by far the more conservative one, especially in the philosophical tradition, where, ever since Frege and Russell, sentence meanings have been regarded as language- and mind-independent “propositions” to which our brain, although they are external to it, somehow connects. Often they are thought to be deprived of structure altogether; sometimes they are thought to have a logical structure only. That they are not only structured, but can be deflated altogether into the structures that the system of human syntax provides, is, I think, a new idea.
Notice now that thought is as generative and discretely infinite as language is: there is no finite bound on the thoughts you can think, and every propositional thought (the kind of thought that can enter rational inferences) is a unique and discrete object. Such productivity is only possible if there is a generative system behind thought that powers it. Could that system really employ radically different generative principles from the ones that we now think the computational system of language (syntax) exhibits? Could it do that, after we have come to minimalize syntax in the course of the minimalist program, to the extent that only the barest essentials of a computational system that yields discrete infinity are left? If Merge, which is thought to be the basic computational operation of human syntax, is what is minimally needed to get a system with the basic properties of language, could it fail to exist in another system, the system of “thought,” that exhibits these very properties as well? Having seen, moreover, that it is the generative system of language that accounts for particularly the logical properties of linguistic expressions (Huang 1982), the ones that account for their behavior in rational inferences, can we really assume that the logical properties of “thought” are driven by an entirely different generative system? That there are two skeletons, rather than one?
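The appeal to discrete infinity can be made concrete. The following sketch assumes only that Merge is binary set formation over a finite lexicon; it shows that this single operation already generates an unboundedly growing set of discrete, unique objects. Representing Merge as `frozenset` formation is an expository choice, not a theoretical claim.

```python
# A minimal sketch of discrete infinity: even over a two-word lexicon,
# iterated binary Merge yields ever more distinct, discretely individuated
# objects, with no finite bound.

def merge(a, b):
    return frozenset([a, b])

def closure(atoms, steps):
    """All objects derivable from `atoms` in at most `steps` rounds of Merge."""
    objs = set(atoms)
    for _ in range(steps):
        new = {merge(a, b) for a in objs for b in objs if a != b}
        objs |= new
    return objs

for n in range(4):
    print(n, len(closure({"a", "b"}, n)))  # 2, 3, 5, 12, ...
# The count grows without bound as `steps` grows: one generative operation
# suffices for an infinity of discrete, unique objects.
```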
Notice also that language is compositional: sets of words which we informally call “sentences” contain other such sets, and the meaning of the sentences depends inherently on the interpretation of these subsets. These subsets are discrete syntactic objects in their own right, which have distinctive semantic interpretations themselves: thus, for example, a morpheme or word is interpreted differently from a sentence, a noun phrase or sentence differently from a verb phrase. Consider, to be specific, a set of words like (1):
(1) {the, man, who, left, a, fortune}
Some of its subsets, such as {the, man} or {a, fortune} or {left, {a, fortune}}, are discrete sub-units in the above sense. The first two have one type of semantic interpretation (they are, intuitively speaking, “object-denoting”); the third has a different type of interpretation (it is “event-denoting”). Other subsets, such as {left, a} or {man, who}, are no such units. These objects have no distinctive semantic interpretations at all (they are seriously incomplete), and they are no syntactic units either. This is an intriguing correlation that needs to be explained, along with the more general fact that “correspondences” between form and meaning are much more systematic than these sketchy remarks would let you suspect. They go far beyond ‘event’-denotations for VPs and ‘object’-denotations for NPs. A candidate initial list for a more detailed account of correspondences is (though I won't go into details here): Nouns correspond to kinds (‘man,’ ‘wolf,’ etc.), D(eterminer)P(hrase)s to objects (‘this man,’ ‘that wolf’), vPs (verbs with full argument structure, without Tense specification) to propositions/events (‘Caesar destroy Syracuse’), T(ense)P(hrase)s to tensed propositions/events, C(omplementizer)P(hrase)s to truth values, adjuncts to predicate compositions, bare Small Clauses to predications (Moro 2000), head-complement (H-XP) constructions to event-participants, possessive syntax to integral relations (Hinzen 2007a), and so on.
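To make the correlation vivid, here is an illustrative sketch for example (1). The constituency facts are hand-coded to match the text, and the category-to-interpretation table transcribes the candidate list just given; the point is simply that only syntactic units receive a semantic type, while subsets like {left, a} receive none.

```python
# An illustrative sketch of the form-meaning correlation described for (1).
# Constituency and labels are stipulated to follow the text, not computed.

# Constituents of {the, man, who, left, a, fortune}, with assumed labels.
CONSTITUENTS = {
    frozenset({"the", "man"}): "DP",            # object-denoting
    frozenset({"a", "fortune"}): "DP",          # object-denoting
    frozenset({"left", "a", "fortune"}): "vP",  # event-denoting
}

# Category-to-interpretation correspondences from the candidate list above.
INTERPRETATION = {
    "N": "kind", "DP": "object", "vP": "event/proposition",
    "TP": "tensed proposition", "CP": "truth value",
}

def interpret(words):
    """Return a semantic type for a word set iff it is a syntactic unit."""
    label = CONSTITUENTS.get(frozenset(words))
    if label is None:
        return "no unit, no interpretation"
    return INTERPRETATION[label]

print(interpret({"the", "man"}))            # -> object
print(interpret({"left", "a", "fortune"}))  # -> event/proposition
print(interpret({"left", "a"}))             # -> no unit, no interpretation
```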
One way of looking at lists such as this is to suppose that there exists an independently constituted semantic system or system of thought, which forces the syntax to pattern along units such as {left, {a, fortune}}, but not {left, a}, say. This is a rather unattractive view, as it presupposes the semantic objects in question and has nothing at all to offer by way of explaining them. It is like saying that there are sentences (CPs) because there are propositions they need to express. But what are propositions? They are the meanings, specifically, of sentences. So, a more attractive and intriguing view is to say that something else, internal to the syntax, forces it to pattern around certain interpretable units. This supposition is what grounds the architecture in Fig. 9.2.
To get there, suppose, to use traditional terminology at least for a moment (like a ladder, which we will soon throw away after use), that all linguistic representations are interface representations, hence that every syntactic representation, and every hierarchical unit in it, inherently subserves (computes) a semantic task. Different kinds of syntactic objects thus intrinsically correlate with different kinds of semantic objects, such that in the absence of their syntactic construction at the semantic interface, they would not exist. Their reality is at the interface and nowhere else. In that case we might as well give up speaking of an “interface” (now throw away our ladder), since on this strictly constructive view the only reality of semantic objects is due to the syntax itself. The phased dynamics of derivations is all there is. Certain semantic objects arise at phase boundaries and have an ephemeral existence at these very moments. No external demands are imposed on this dynamics. There are executive systems using the constructs in question, for sure, but now one wouldn't say that these systems have to “match” the constructs in richness or impose conditions on them, beyond those imposed by executive systems that place the semantic objects in question in discourse, in line with online constraints on the construction of an ongoing discourse model.
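As a closing illustration, here is a speculative sketch of that last point: on this picture the executive systems need not match the syntax in richness; they merely take the objects that arise at phase boundaries and place them in an ongoing discourse model. Representing the discourse model as a list of typed entries is an assumption made purely for illustration.

```python
# A speculative sketch: executive systems file phase-level objects into a
# growing discourse model. They track and use the objects; they impose no
# structural conditions on the syntax that built them.

discourse_model = []

def place_in_discourse(phase_object, kind):
    """An executive system files a phase-boundary object under a discourse slot."""
    discourse_model.append({"object": phase_object, "kind": kind})

# Two objects from the running example, with the kinds the correspondence
# list above would assign them.
place_in_discourse(frozenset({"the", "man"}), "object")
place_in_discourse(frozenset({"left", "a", "fortune"}), "event")

print(discourse_model)  # the executive side only accumulates and uses
```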