Of Minds and Language

Editors: Massimo Piattelli-Palmarini, Juan Uriagereka, Pello Salaburu

(f) implications of this model in general (a potential solution to constraining the abduction of generalizations, and learning grammar as intrinsically motivated problem solving)

This line of argument follows a general research program of isolating true linguistic universals.

The concept of “language” is like those of … “organ”, as used in biological science … grammatical structure “is” the language only given the child's intellectual environment … and the processes of physiological and cognitive development … Our first task in the study of a particular [linguistic] structure in adult language behavior is to ascertain its source rather than immediately assuming that it is grammatically relevant … Many an aspect of adult … linguistic structure is itself partially determined by the learning and behavioral processes that are involved in acquiring and implementing that structure … Thus, some formally possible structures will never appear in any language because no child can use [or learn] them. (Bever 1970: 279–280)

Here I focus on the dynamic role of the individual language learner in shaping properties of attested languages (aka E-languages). Certain linguistic universals that seem to be structural are in fact emergent properties of the interaction of genetic endowment, social context, and individual learning dynamics. My argument is this: Language acquisition recruits general mechanisms of growth, learning, and behavior in individual children: only those languages that comport with these mechanisms will be learned. I first review some non-syntactic universals, to outline relatively clear examples of the role of development, as background for the main focus of this paper.

18.2 Neurological foundations of language: the enduring case of cerebral asymmetries

The left hemisphere is the dominant neurological substrate for much of language – true for everyone, including the vast majority of left-handers (Khedr et al. 2002). This leads directly to post hoc propter hoc reasoning about the biological basis for language: the unique linguistic role of the left hemisphere reflects some unique biological property, which itself makes language possible. This argument has been further buttressed by claims that certain primates have left-hemisphere asymmetries for species-specific calls (Weiss et al. 2002), claims that infants process language more actively in the left hemisphere (Mehler et al. 2000), and demonstrations that artificial language learning selectively activates the left hemisphere (Musso et al. 2003; Friederici 2004, this volume). However plausible, this argument overstates the empirical case. First, we and others demonstrated that asymmetries involve differences in computational “style” (“propositional” in the left, “associative” in the right; Bever 1975; Bever and Chiarello 1974). In nonlinguistic mammals, the asymmetries may nonetheless parallel those for humans: for example, we have shown that rats learn mazes relying on serial ordering in the left hemisphere, and specific locations in the right (Lamendola and Bever 1997), a difference with the computational flavor of the human difference. Second, the facts about asymmetries for language could follow from a simple principle: the left hemisphere is slightly more powerful computationally than the right (Bever 1980). Even the simplest sentence involves many separate computations, which during acquisition compound a small incremental computational superiority into a large categorical superiority and apparent specialization. Thus the left hemisphere's unique relation to language function accumulates from a very small quantitative difference in the individual learner.
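The compounding claim can be illustrated with back-of-the-envelope arithmetic. The numbers below are invented for illustration, not measured values: the point is only that if each of the many computations in a sentence succeeds slightly more reliably in one hemisphere, the hemisphere-level difference grows multiplicatively.

```python
# Illustrative arithmetic (hypothetical reliabilities, not empirical data):
# a tiny per-computation advantage compounds across the many separate
# computations involved in processing even a simple sentence.
left_success, right_success = 0.95, 0.93   # assumed per-step reliability
n_steps = 50                               # assumed computations per sentence

p_left = left_success ** n_steps           # chance the whole sentence succeeds
p_right = right_success ** n_steps

# A 2-point per-step advantage becomes roughly a 3x sentence-level advantage.
print(p_left, p_right, p_left / p_right)   # ratio is about 2.9
```

The exact numbers are immaterial; any small per-step edge, raised to the power of the number of computations, yields the large categorical superiority described above.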

18.3 Heritable variation in the neurological representation of language

Loss of linguistic ability results from damage to specific areas of the left neocortex. The fact that normal language depends on (rather small) specific areas suggests that it may be critically “caused” by those areas. However, certain aspects of language may have considerable latitude in their neurological representation. For example, Luria and colleagues noted that right-handed patients with left-handed relatives (“FLH+”) recover faster from left-hemisphere aphasia, and show a higher incidence of right-hemisphere aphasia, than those without familial left-handers (“FLH—”) (Hutton et al. 1977). They speculated that FLH+ right-handers have a genetic disposition towards bilateral representation of language, which often surfaces in their families as explicit left-handedness. We have found a consistent behavioral difference between the two familial groups in how language is processed, which may explain Luria's observation. Normal FLH+ people comprehend language initially via individual words, while FLH— people give greater attention to syntactic organization (a simple demonstration is that FLH+ people read sentences faster and understand them better in a visual word-by-word paradigm than in a clause-by-clause paradigm; the opposite pattern occurs for FLH— people). The bilateral representation of language in FLH+ people may be specific to lexical knowledge, since acquiring it is computationally less demanding than acquiring syntactic structures, and hence more likely to find representation in the right hemisphere. On this view, FLH+ people have a more widespread representation of individual lexical items, and hence can access each word more readily, and more distinctly from syntactic processing, than FLH— people (Bever et al. 1987, 1989a; Townsend et al. 2001).

This leads to a prediction: lexical processing is more bilateral in FLH+ right-handers than in FLH— right-handers, but syntactic processing is left-hemisphered for all right-handers. Recently, we tested this using fMRI brain imaging of subjects while they reordered word sequences according to syntactic constraints or according to lexico-semantic relations between the words. We found that the lexical tasks activated the language areas bilaterally in FLH+ right-handers, but activated only the left-hemisphere areas in the FLH— right-handers; all subjects showed strong left-hemisphere dominance in the syntactic tasks (Chan et al. in preparation). This confirms our prediction, and supports our explanation for Luria's original clinical observations. It also demonstrates that there is considerable lability in the neurological representation of important aspects of language.

18.4 The critical period: differentiation and segregation of behaviors

The ostensible critical period for learning language is another lynchpin in arguments that language writ broadly (aka E-language) is (interestingly) innate. The stages of acquisition and the importance of exposure to language at characteristic ages are often likened to the stages of learning birdsong – a paradigmatic example of an innate capacity with many surface similarities to language (Michel and Tyler 2005). However, certain facts may indicate a somewhat less biologically rigid explanation. First, it seems to be the case that adult mastery of semantic structures in a second language is much less restricted than mastery of syntax, which in turn is less restricted than mastery of phonology (Oyama 1976). This decalage invites the interpretation that the critical period is actually a layering of different systems and corresponding learning sequences. Phonological learning involves both tuning perceptual systems and forming motor patterns, and is ordinarily accomplished very early; linguistically unique semantic knowledge may be acquired relatively late, draws on universals of thought, and hence shows relatively little sensitivity to age of acquisition.

Noam suggested (in email) a non-maturational interpretation of this decalage, based on the specificity of the stimulus that the child receives and the corresponding amount that must be innately available – rather than on different mechanisms of learning with different time courses. The semantic world is vast: much of semantics must be universally available innately, and hence a critical period for semantic acquisition is largely irrelevant. In contrast, all the phonological information needed for learning a language's sound system is available to the child, and can be learned completely in early childhood.

The notorious case is syntactic knowledge of an explicit language, which is neither determined by sensory/motor learning nor related directly to universals of thought. I have argued that the critical period for syntax learning is a natural result of the functional role that syntax plays in learning language – namely, it assigns consistent computational representations that solidify perceptual and productive behavioral systems, and reconciles differences in how those systems pair forms with meanings (Bever 1975, 1981). On this view, the syntactic derivational system for sentences is a bilateral filter on emerging perceptual and productive capacities: once those capacities are complete and in register with each other, further acquisition of syntax no longer has a functional role, and the syntax acquisition mechanisms decouple through disuse, not because of a biological or maturationally mechanistic change. (See Bever and Hansen 1988 for a demonstration of the hypothesis that grammars act as cognitive mediators between production and perception in adult artificial language learning.)

This interpretation is consistent with our recent finding that the age of the critical period differs as a function of familial handedness: FLH+ deaf children show a younger critical age for mastery of English syntax than FLH— children (Ross and Bever 2004). This follows from the fact that FLH+ people access the lexical structure of language more readily, and syntactic organization less readily, than FLH— people: FLH+ children acquire their knowledge of language with greater emphasis on lexically coded structures, and hence depend more on the period during which vocabulary grows most rapidly (between 5 and 10 years – itself possibly the result of changes in social exposure and emergence into the early teenage years). Consistent with my general theme, this attests to the role of general mechanisms of learning and individual neurological specialization in shaping how language is learned.

18.5 Language learning as hypothesis testing and the EPP

Of course, how language learning works computationally is the usual determinative argument that the capacity for language is innate and independent from individual mechanisms of learning or development. Typically cited problems for a general inductive experience-based empiricist learning theory are:

(3) a. The poverty of the stimulus: how do children go beyond the stimulus given?

b. The frame problem: how do children treat different instances as similar?

c. The motivational problem: e.g., what propels a 4-year-old to go beyond his already developed prodigious communicative competence?

d. The universals problem: how do all languages have the same universals?

Parameter-setting theory is a powerful schematic answer to all four questions at the same time. On this theory, a taxonomy of structural choices differentiates possible languages. For example, phrases are left- or right-branching; subjects can be unexpressed or not; wh-constructions either move the questioned constituent or leave it in situ. The language-learning child has innate access to these parameterized choices. Metaphorically, the child has a bank of dimensionalized “switches,” and “learning” consists of recognizing the critical data that set the position of each switch: the motivation to learn is moot, since the switches are thrown automatically when the appropriate data are encountered. This is a powerful scheme which technically can aspire to be explanatory in a formal sense, and it has made enormous contributions in defining the minimally required data (Lightfoot 1991; Pinker 1984; J. D. Fodor 1998, 2001; Fodor and Sakas 2004; Fodor, this volume): but it is also very far removed from the motivational and daily dynamics of individual children. We are left with an abstract schema and no understanding of what the individual child might be doing, why it might be doing it, and how that activity might itself constrain possible choices of parameters, and hence attested linguistic universals.
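The switch metaphor can be made concrete in a few lines of code. This is a toy illustration only – the parameter names and trigger cues below are invented for exposition, not drawn from any proposed learner:

```python
# Toy sketch of trigger-based parameter setting (illustrative only;
# parameter names and trigger cues are invented, not from the text).

# Each parameter starts unset (None); a trigger datum throws its switch.
PARAMETERS = {"head_initial": None, "null_subject": None, "wh_movement": None}

# Hypothetical mapping from surface cues in the input to settings.
TRIGGERS = {
    "VO order observed": ("head_initial", True),
    "subjectless finite clause observed": ("null_subject", True),
    "fronted wh-phrase observed": ("wh_movement", True),
}

def learn(params, input_cues):
    """'Learning' is just recognizing trigger data: no motivation is needed,
    because the switch is thrown automatically when the cue is encountered."""
    for cue in input_cues:
        if cue in TRIGGERS:
            name, value = TRIGGERS[cue]
            if params[name] is None:      # each switch is set once, then fixed
                params[name] = value
    return params

state = learn(dict(PARAMETERS), ["VO order observed", "fronted wh-phrase observed"])
print(state)  # head_initial and wh_movement set; null_subject still unset
```

Note what the sketch leaves out, which is exactly the point of the criticism above: there is no account of what the child is doing, or why, between trigger encounters.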

My hypothesis, and that of a few others who accept the idea that children in fact acquire a generative grammar (e.g., Gillette et al. 1999; Gleitman 1990, this volume; Papafragou et al. 2007), is that neither a parameter-setting scheme nor inductive learning alone is adequate to the facts. On this view, acquisition involves both the formation of statistical generalizations available to the child and the availability of structures to rationalize violations of those generalizations. A traditional view of this kind is “hypothesis testing,” which allows hypotheses to be inductively generated and deductively tested, and conversely.
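A minimal sketch of that inductive–deductive loop (entirely illustrative; the tiny corpus, the orders, and the function names are invented): a statistical generalization is induced from the input, and the sentences that violate it are flagged as exactly the cases that call for structural rationalization.

```python
# Toy sketch of "hypothesis testing" acquisition: induce a statistical
# generalization, then deductively test it against the data, collecting
# the violations that a structural account must rationalize.
from collections import Counter

def induce_word_order(corpus):
    """Inductive step: take the majority verb/object order as a candidate
    generalization over the input."""
    counts = Counter(order for order, _ in corpus)
    return counts.most_common(1)[0][0]

def find_violations(hypothesis, corpus):
    """Deductive step: sentences the generalization fails to cover become
    candidates for rationalization by additional structure."""
    return [sentence for order, sentence in corpus if order != hypothesis]

# Invented miniature corpus: (observed order, sentence) pairs.
corpus = [("VO", "eat apples"), ("VO", "see birds"), ("OV", "apples eat?")]
hypothesis = induce_word_order(corpus)        # "VO"
violations = find_violations(hypothesis, corpus)
print(hypothesis, violations)                 # VO ['apples eat?']
```

The cycle can then run in the other direction, as the text notes: a deductively available structure proposes a hypothesis, and the statistics test it.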

Now to the central thesis of this discussion: there is a model of acquisition that integrates inductive and deductive processes; such a model requires the existence of canonical forms in languages; this motivates the facts underlying the Extended Projection Principle, which requires that (almost) every sentence construction maintain a basic configurational property of its language. The exposition starts with a narrowly focused discussion of how inductive and deductive processes can be combined in a model of comprehension – itself experimentally testable and tested with adults. Then I suggest that this kind of model can be generalized to a model of acquisition, with corresponding empirical predictions – at least a few of which are confirmed.

18.6 Integrating derivations into a comprehension model

The first question is, do speakers actually use a psychological representation of generative grammar – a “psychogrammar” – of the particular form claimed in derivational models, or only a simulation of it? If adult speakers do not actually use the computational structures posited in generative grammars as part of their language behavior, we do not have to worry about how children might learn them. In fact, fifty years of research and intuition have established the following facts about adult language behavior (4):

(4) a. Syntactic processes in generative models are “psychologically real”: derivational representations are used in language comprehension and production (see Townsend and Bever 2001).

b. Syntactic processes are recursive and derivational: they range over entire sentences in a “vertical” fashion (as opposed to serial) with successive reapplications of computations to their own output. These properties have been true of every architectural variant of generative grammar, from Syntactic Structures (Chomsky 1957), to the minimalist program (Chomsky 1995).
