
Authors: Massimo Piattelli-Palmarini; Juan Uriagereka; Pello Salaburu


BOOK: Of Minds and Language
9.91Mb size Format: txt, pdf, ePub

An elementary fact about the language faculty is that it is a system of discrete infinity. In the simplest case, such a system is based on a primitive operation that takes objects already constructed, and constructs from them a new object. Call that operation Merge. There are more complex modes of generation, such as
the familiar phrase structure grammars explored in the early years of generative grammar. But a Merge-based system is the most elementary, so we assume it to be true of language unless empirical facts force greater UG complexity. If computation is efficient, then when X and Y are merged, neither will change, so that the outcome can be taken to be simply the set {X, Y}. That is sometimes called the No-Tampering condition, a natural principle of efficient computation, perhaps a special case of laws of nature. With Merge available, we instantly have an unbounded system of hierarchically structured expressions. For language to be usable, these expressions have to link to the interfaces. The generated expressions provide the means to relate sound and meaning in traditional terms, a far more subtle process than had been assumed for millennia. UG must at least include the principle of unbounded Merge.
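The two defining properties of Merge described above can be made concrete in a short sketch (mine, not the author's): Merge takes two already-constructed objects and forms the set {X, Y}, leaving both inputs unchanged (No-Tampering), and iterating the operation yields ever-deeper hierarchical structure. The lexical items used here are arbitrary placeholders.

```python
# Minimal sketch of Merge as set formation. frozenset is used so that
# merged objects are immutable and can themselves be merged.

def merge(x, y):
    """Return the set {x, y}; neither input is modified (No-Tampering)."""
    return frozenset({x, y})

a, b = "the", "book"
np = merge(a, b)          # {the, book}
vp = merge("read", np)    # {read, {the, book}} -- a hierarchy, not a string

# The inputs are untouched by the operation:
assert np == frozenset({"the", "book"})

# Iterating Merge gives discrete infinity: arbitrarily deep structures.
def depth(obj):
    """Depth of nesting in a merged object."""
    if isinstance(obj, frozenset):
        return 1 + max(depth(part) for part in obj)
    return 0

print(depth(vp))  # 2
```

Note that the output of Merge is a set, not a sequence: hierarchical structure comes for free, while linear order is left to externalization, in line with the asymmetry discussed later in the chapter.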

The conclusion holds whether recursive generation is unique to the language faculty or found elsewhere. If the latter, there still must be a genetic instruction to use unbounded Merge to form linguistic expressions. Nonetheless, it is interesting to ask whether this operation is language-specific. We know that it is not. The classic illustration is the system of natural numbers, raising problems for evolutionary theory noted by Alfred Russel Wallace. A possible solution is that the number system is derivative from language. If the lexicon is reduced to a single element, then unbounded Merge will easily yield arithmetic. Speculations about the origin of the mathematical capacity as an abstraction from language are familiar, as are criticisms, including apparent dissociation with lesions and diversity of localization. The significance of such phenomena, however, is far from clear. As Luigi Rizzi has pointed out,
they relate to use of the capacity, not its possession; for similar reasons, dissociations do not show that the capacity to read is not parasitic on the language faculty. The competence–performance distinction should not be obscured. To date, I am not aware of any real examples of unbounded Merge apart from language, or obvious derivatives from language, for example, taking visual arrays as lexical items.
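The suggestion above, that arithmetic is derivative from language once the lexicon is reduced to a single element, can be sketched as follows (a hedged illustration of my own, not the author's construction): with one lexical item, merging an object with itself yields a successor function, and the natural numbers fall out as depths of nesting.

```python
# Sketch: a one-element lexicon plus Merge yields the natural numbers.

def merge(x, y):
    """Merge as set formation: {x, y}, inputs unchanged."""
    return frozenset({x, y})

ONE = "x"  # the single lexical item

def successor(n):
    # Merging n with itself gives {n, n} = {n}: the next number.
    return merge(n, n)

def to_int(n):
    """Read the number off as the depth of nesting."""
    count = 1
    while n != ONE:
        (n,) = n   # unwrap the singleton set {n}
        count += 1
    return count

three = successor(successor(ONE))
print(to_int(three))  # 3
```

This mirrors the von Neumann-style construction of the numerals; the point is only that nothing beyond unbounded Merge and a trivial lexicon is needed.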

We can regard an account of some linguistic phenomena as principled insofar as it derives them by efficient computation satisfying interface conditions. A very strong proposal, called “the strong minimalist thesis,” is that all phenomena of language have a principled account in this sense, that language is a perfect solution to interface conditions, the conditions it must satisfy to some extent if it is to be usable at all. If that thesis were true, language would be something like a snowflake, taking the form it does by virtue of natural law, in which case UG would be very limited.

In addition to unbounded Merge, language requires atoms, or word-like elements, for computation. Whether these belong strictly to language or are appropriated from other cognitive systems, they pose extremely serious problems for the study of language and thought and also for the study of the evolution of human cognitive capacities. The basic problem is that even the simplest words and concepts of human language and thought lack the relation to mind-independent entities that has been reported for animal communication: representational systems based on a one–one relation between mind/brain processes and “an aspect of the environment to which these processes adapt the animal's behavior,” to quote Randy Gallistel (1990b). The symbols of human language and thought are sharply different.

These matters were explored in interesting ways by seventeenth- and eighteenth-century British philosophers, developing ideas that trace back to Aristotle. Carrying their work further, we find that human language appears to have no reference relation, in the sense stipulated in the study of formal systems, and presupposed – mistakenly I think – in contemporary theories of reference for language in philosophy and psychology, which take for granted some kind of word–object relation, where the objects are extra-mental. What we understand to be a house, a river, a person, a tree, water, and so on, consistently turns out to be a creation of what seventeenth-century investigators called the “cognoscitive powers,” which provide us with rich means to refer to the outside world from certain perspectives. The objects of thought they construct are individuated by mental operations that cannot be reduced to a “peculiar nature belonging” to the thing we are talking about, as David Hume summarized a century of inquiry. There need be no mind-independent entity to which these objects of thought bear some relation akin to reference, and apparently there is none in many simple cases (probably all). In this regard, internal conceptual symbols are like the phonetic units of mental representations, such as the syllable /ba/; every particular act externalizing this mental entity yields a mind-independent entity, but it is idle to seek a mind-independent construct that corresponds to the syllable. Communication is not a matter of producing some mind-external entity that the hearer picks out of the world, the way a physicist could. Rather, communication is a more-or-less affair, in which the speaker produces external events and hearers seek to match them as best they can to their own internal resources. Words and concepts appear to be similar in this regard, even the simplest of them. 
Communication relies on shared cognoscitive powers, and succeeds insofar as shared mental constructs, background, concerns, presuppositions, etc. allow for common perspectives to be (more or less) attained. These semantic properties of lexical items seem to be unique to human language and thought, and have to be accounted for somehow in the study of their evolution.

Returning to the computational system, as a simple matter of logic, there are two kinds of Merge, external and internal. External Merge takes two objects, say "eat" and "apples," and forms the new object that corresponds to "eat apples." Internal Merge – often called Move – is the same, except that one of the objects is internal to the other. So applying internal Merge to "John ate what," we form the new object corresponding to "what John ate what," in accord with the No-Tampering condition. As in the examples I mentioned earlier, at the semantic interface both occurrences of "what" are interpreted: the first occurrence as an operator and the second as the variable over which it ranges, so that the expression means something like: for which thing x, John ate the thing x. At the sensorimotor side, only one of the two identical syntactic objects is pronounced, typically the structurally most salient occurrence. That illustrates the ubiquitous displacement property of language: items are commonly pronounced in one position but interpreted somewhere else as well. Failure to pronounce all but one occurrence follows from third-factor considerations of efficient computation, since it reduces the burden of repeated application of the rules that transform internal structures to phonetic form – a heavy burden when we consider real cases. There is more to say, but this seems the heart of the matter.
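The external/internal Merge distinction can be sketched in the same set-based terms (again my illustration, under the simplifying assumption that words are atomic strings): internal Merge re-merges an element taken from inside the other argument, so the resulting object contains two occurrences of the same item, the displacement property.

```python
# Sketch: internal Merge as re-merging an element from inside its host.

def merge(x, y):
    """Merge as set formation: {x, y}, inputs unchanged (No-Tampering)."""
    return frozenset({x, y})

# External Merge builds the clause step by step:
vp = merge("ate", "what")
clause = merge("John", vp)        # {John, {ate, what}}

# Internal Merge: one argument ("what") is internal to the other.
# The result corresponds to "what John ate what":
question = merge("what", clause)  # {what, {John, {ate, what}}}

def occurrences(item, obj):
    """Count occurrences of item inside a (possibly nested) object."""
    if obj == item:
        return 1
    if isinstance(obj, frozenset):
        return sum(occurrences(item, part) for part in obj)
    return 0

# Both occurrences are present for the semantic interface
# (operator and variable); pronunciation keeps only one.
print(occurrences("what", question))  # 2
```

Nothing is copied or moved in any literal sense: the edge occurrence and the internal occurrence are the same object occurring twice, and it falls to externalization to decide which occurrence is pronounced.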

This simple example suggests that the relation of the internal language to the interfaces is asymmetrical. Optimal design yields the right properties at the semantic side, but causes processing problems at the sound side. To understand the perceived sentence

(7) What did John eat?

it is necessary to locate and fill in the missing element, a severe burden on speech perception in more complex constructions. Here conditions of efficient computation conflict with facilitation of communication. Universally, languages prefer efficient computation. That appears to be true more generally. For example, island conditions are at least sometimes, and perhaps always, imposed by principles of efficient computation. They make certain thoughts inexpressible, except by circumlocution, thus impeding communication. The same is true of ambiguities, as in the examples I mentioned earlier. Structural ambiguities often fall out naturally from efficient computation, but evidently pose a communication burden.

Other considerations suggest the same conclusion. Mapping to the sensorimotor interface appears to be a secondary process, relating systems that are independent: the sensorimotor system, with its own properties, and the computational system that generates the semantic interface, optimally insofar as the strong minimalist thesis is accurate. That's basically what we find. Complexity,
variety, effects of historical accident, and so on, are overwhelmingly restricted to morphology and phonology, the mapping to the sensorimotor interface. That's why these are virtually the only topics investigated in traditional linguistics, or that enter into language teaching. They are idiosyncrasies, so are noticed, and have to be learned. If so, then it appears that language evolved, and is designed, primarily as an instrument of thought. Emergence of unbounded Merge in human evolutionary history provides what has been called a "language of thought," an internal generative system that constructs thoughts of arbitrary richness and complexity, exploiting conceptual resources that are already available or may develop with the availability of structured expressions. If the relation to the interfaces is asymmetric, as seems to be the case, then unbounded Merge provides only a language of thought, and the basis for ancillary processes of externalization.

There are other reasons to believe that something like that is true. One is that externalization appears to be independent of sensory modality, as has been learned from studies of sign language in recent years. More general considerations suggest the same conclusion. The core principle of language, unbounded Merge, must have arisen from some rewiring of the brain, presumably the effect of some small mutation. Such changes take place in an individual, not a group. The individual so endowed would have had many advantages: capacities for complex thought, planning, interpretation, and so on. The capacity would be transmitted to offspring, coming to dominate a small breeding group. At that stage, there would be an advantage to externalization, so the capacity would be linked as a secondary process to the sensorimotor system for externalization and interaction, including communication. It is not easy to imagine an account of human evolution that does not assume at least this much. And empirical evidence is needed for any additional assumption about the evolution of language.

Such evidence is not easy to find. It is generally supposed that there are precursors to language proceeding from single words, to simple sentences, then more complex ones, and finally leading to unbounded generation. But there is no empirical evidence for the postulated precursors, and no persuasive conceptual argument for them either: transition from ten-word sentences to unbounded Merge is no easier than transition from single words. A similar issue arises in language acquisition. The modern study of the topic began with the assumption that the child passes through a one- and two-word stage, telegraphic speech, and so on. Again the assumption lacks a rationale, because at some point unbounded Merge must appear. Hence the capacity must have been there all along even if it only comes to function at some later stage. There does appear to be evidence about earlier stages: namely, what children produce. But that carries little weight. Children understand far more than what
they produce, and understand normal language but not their own restricted speech, as was shown long ago by Lila Gleitman and her colleagues.
For both evolution and development, there seems little reason to postulate precursors to unbounded Merge.

In the 1974 biolinguistics conference, evolutionary biologist Salvador Luria was the most forceful advocate of the view that communicative needs would not have provided “any great selective pressure to produce a system such as language,” with its crucial relation to “development of abstract or productive thinking.” His fellow Nobel laureate François Jacob added later that “the role of language as a communication system between individuals would have come about only secondarily, as many linguists believe,” perhaps referring to discussions at the symposia.
“The quality of language that makes it unique does not seem to be so much its role in communicating directives for action” or other common features of animal communication, Jacob continues, but rather “its role in symbolizing, in evoking cognitive images,” in “molding” our notion of reality and yielding our capacity for thought and planning, through its unique property of allowing “infinite combinations of symbols” and therefore “mental creation of possible worlds,” ideas that trace back to the seventeenth-century cognitive revolution and have been considerably sharpened in recent years.

We can, however, go beyond speculation. Investigation of language design can yield evidence on the relation of language to the interfaces. There is, I think, mounting evidence that the relation is asymmetrical in the manner indicated. There are more radical proposals under which optimal satisfaction of semantic conditions becomes close to tautologous. That seems to me one way to understand the general drift of Jim Higginbotham's work on the syntax–semantics border for many years.
And from a different point of view, something similar would follow from ideas developed by Wolfram Hinzen (2006a, 2007a; Hinzen and Uriagereka 2006), in line with Juan Uriagereka's suggestion that it is “as if syntax carved the path interpretation must blindly follow” (Uriagereka 1999).

The general conclusions appear to fit reasonably well with evidence from other sources. It seems that brain size reached its current level about 100,000 years ago, which suggests to some specialists that "human language probably evolved, at least in part, as an automatic but adaptive consequence of increased absolute brain size," leading to dramatic changes of behavior (quoting George Striedter, in Behavioral and Brain Sciences, February 2006, who adds qualifications about the structural and functional properties of primate brains). This "great leap forward," as some call it, must have taken place before about 50,000 years ago, when the trek from Africa began. Even if further inquiry extends the boundaries, it remains a small window in evolutionary time. The picture is consistent with the idea that some small rewiring of the brain gave rise to unbounded Merge, yielding a language of thought, later externalized and used in many ways. Aspects of the computational system that do not yield to principled explanation fall under UG, to be explained somehow in other terms, questions that may lie beyond the reach of contemporary inquiry, Richard Lewontin has argued.
Also remaining to be accounted for are the apparently human-specific atoms of computation, the minimal word-like elements of thought and language, and the array and structure of parameters, rich topics that I have barely mentioned.

