
Editors: Massimo Piattelli-Palmarini, Juan Uriagereka, Pello Salaburu

BOOK: Of Minds and Language

Information theory was taken to be a unifying concept for the behavioral sciences, along the lines of Warren Weaver's essay in Shannon and Weaver's famous monograph.[4]
Within the engineering professions, highly influential in these areas, it was a virtual dogma that the properties of language, maybe all human behavior, could be handled within the framework of Markov sources, in fact very elementary ones, not even utilizing the capacity of these simple automata to capture dependencies of arbitrary length. The restriction followed from the general commitment to associative learning, which excluded such dependencies. As an aside, my monograph Syntactic Structures in 1957 begins with observations on the inadequacy in principle of finite automata, hence Markovian sources, but only because it was essentially notes for courses at MIT, where their adequacy was taken for granted. For similar reasons, the monograph opens by posing the task of distinguishing grammatical from ungrammatical sentences, on the analogy of well-formedness in formal systems, then assumed to be an appropriate model for language. In the much longer and more elaborate unpublished monograph LSLT two years earlier, intended only for a few friends, there is no mention of finite automata, and a chapter is devoted to the reasons for rejecting any notion of well-formedness: the task of the theory of language is to generate sound–meaning relations fully, whatever the status of an expression. In fact, much important work then and since has had to do with expressions of intermediate status – the difference, say, between such deviant expressions as (1) and (2).

(1)   *which book did they wonder why I wrote

(2)   *which author did they wonder why wrote that book

The contrast – a subjacency violation in (1), an Empty Category Principle (ECP) violation in (2) – is still not fully understood.
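The remark above about Markov sources not even exploiting the capacity of finite automata can be made concrete. The following sketch is my own illustration, not from the text: a deterministic finite automaton with a fixed handful of states can enforce a dependency of arbitrary length (here, that the last symbol of a string over {a, b} must match the first), while nested dependencies of the form aⁿbⁿ lie beyond any finite-state device, as the separate counter-based checker makes explicit.

```python
# Illustrative sketch; the automaton and function names are my own, not from the text.

def run_dfa(transitions, start, accepting, s):
    """Simulate a deterministic finite automaton on string s."""
    state = start
    for ch in s:
        state = transitions.get((state, ch))
        if state is None:          # undefined transition: reject
            return False
    return state in accepting

# DFA for: non-empty strings over {a, b} whose last symbol matches the first.
# Five states suffice no matter how long the string is -- a dependency of
# arbitrary length handled with strictly finite memory.
T = {
    ("q0", "a"): "A_ok",    ("q0", "b"): "B_ok",
    ("A_ok", "a"): "A_ok",  ("A_ok", "b"): "A_bad",
    ("A_bad", "a"): "A_ok", ("A_bad", "b"): "A_bad",
    ("B_ok", "b"): "B_ok",  ("B_ok", "a"): "B_bad",
    ("B_bad", "b"): "B_ok", ("B_bad", "a"): "B_bad",
}
ACCEPT = {"A_ok", "B_ok"}

def is_anbn(s):
    """The language a^n b^n (n >= 1): recognizing it requires an unbounded
    count of the a's, which no fixed set of states can supply."""
    n = len(s) // 2
    return n >= 1 and s == "a" * n + "b" * n

# The DFA tracks the first/last dependency across 50 intervening symbols:
print(run_dfa(T, "q0", ACCEPT, "a" + "b" * 50 + "a"))  # True
print(is_anbn("aaabbb"), is_anbn("aaabb"))             # True False
```

The contrast mirrors the text's point: the elementary Markov models of the 1950s used even less than this finite-state power, while the nested dependencies of natural language demand more than any finite-state device provides.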

There were some prominent critics, like Karl Lashley, but his very important work on serial order in behavior,[5] undermining prevailing associationist assumptions, was unknown, even at Harvard, where he was a distinguished professor. Another sign of the tenor of the times.

This is a bit of a caricature, but not much. In fact it is understated, because the prevailing mood was also one of enormous self-confidence that the basic answers had been found, and what remained was to fill in the details in a generally accepted picture.

A few graduate students in the Harvard–MIT complex were skeptics. One was Eric Lenneberg, who went on to found the biology of language; another was Morris Halle. One change over the past fifty years is that we've graduated from sharing a cramped office to being in ample adjacent ones. From the early 1950s, we were reading and discussing work that was then well outside the canon: Lorenz, Tinbergen, Thorpe, and other work in ethology and comparative psychology. Also D'Arcy Thompson,[6] though regrettably we had not come across Turing's work in biology,[7] and his thesis that "we must envisage a living organism as a special kind of system to which the general laws of physics and chemistry apply… and because of the prevalence of homologies, we may well suppose, as D'Arcy Thompson has done, that certain physical processes are of very general occurrence." The most recent evaluation of these aspects of Turing's work that I've seen, by Justin Leiber,[8] concludes that Thompson and Turing "regard teleology, evolutionary phylogeny, natural selection, and history to be largely irrelevant and unfortunately effective distractions from fundamental ahistorical biological explanation," the scientific core of biology. That broad perspective may sound less extreme today after the discovery of master genes, deep homologies, conservation, optimization of neural networks of the kind that Chris Cherniak has demonstrated,[9] and much else, perhaps even restrictions of evolutionary/developmental processes so narrow that "replaying the protein tape of life might be surprisingly repetitive" (quoting a report on feasible mutational paths in Science a few weeks ago,[10] reinterpreting a famous image of Steve Gould's).

Another major factor in the development of the biolinguistic perspective was work in recursive function theory and the general theory of computation and algorithms, then just becoming readily available, making it possible to undertake more seriously the inquiry into the formal mechanisms of generative grammars that had been explored since the late 1940s.

These various strands could, it seemed, be woven together to develop a very different approach to problems of language and mind, taking behavior and corpora to be not the object of inquiry, as in the behavioral sciences and structural linguistics, but merely data, and not necessarily the best data, for discovery of the properties of the real object of inquiry: the internal mechanisms that generate linguistic expressions and determine their sound and meaning. The whole system would then be regarded as one of the organs of the body, in this case a cognitive organ, like the systems of planning, interpretation, reflection, and whatever else falls among those aspects of the world loosely "termed mental," which reduce somehow to "the organical structure of the brain." I'm quoting chemist/philosopher Joseph Priestley in the late eighteenth century, articulating a standard conclusion after Newton had demonstrated, to his great dismay and disbelief, that the world is not a machine, contrary to the core assumptions of the seventeenth-century scientific revolution. It follows that we have no choice but to adopt some non-theological version of what historians of philosophy call "Locke's suggestion": that God might have chosen to "superadd to matter a faculty of thinking" just as he "annexed effects to motion which we can in no way conceive motion able to produce" – notably the property of action at a distance, a revival of occult properties, many leading scientists argued (with Newton's partial agreement).

It is of some interest that all of this seems to have been forgotten. The American Academy of Arts and Sciences published a volume summarizing the results of the Decade of the Brain that ended the twentieth century.[11] The guiding theme, formulated by Vernon Mountcastle, is the thesis of the new biology that "Things mental, indeed minds, are emergent properties of brains, [though] these emergences are … produced by principles that … we do not yet understand."[12]

The same thesis has been put forth in recent years by prominent scientists and philosophers as an "astonishing hypothesis" of the new biology, a "radical" new idea in the philosophy of mind, "the bold assertion that mental phenomena are entirely natural and caused by the neurophysiological activities of the brain," opening the door to novel and promising inquiries, a rejection of Cartesian mind–body dualism, and so on. All, in fact, reiterate formulations of centuries ago, in virtually the same words, after mind–body dualism became unformulable with the disappearance of the only coherent notion of body (physical, material, etc.) – facts well understood in standard histories of materialism, like Friedrich Lange's nineteenth-century classic.[13]

It is also of some interest that although the traditional mind–body problem dissolved after Newton, the phrase "mind–body problem" has been resurrected for a problem that is only loosely related to the traditional one. The traditional mind–body problem developed in large part within normal science: certain phenomena could not be explained by the principles of the mechanical philosophy, the presupposed scientific theory of nature, so a new principle was proposed, some kind of res cogitans, a thinking substance, alongside of material substance. The next task would be to discover its properties and to try to unify the two substances. That task was undertaken, but was effectively terminated when Newton undermined the notion of material substance.

What is now called the mind–body problem is quite different. It is not part of normal science. The new version is based on the distinction between the first person and the third person perspective. The first person perspective yields a view of the world presented by one's own experience – what the world looks like, feels like, sounds like to me, and so on. The third person perspective is the picture developed in its most systematic form in scientific inquiry, which seeks to understand the world from outside any particular personal perspective.

The new version of the mind–body problem resurrects a thought experiment of Bertrand Russell's eighty years ago, though the basic observation traces back to the pre-Socratics. Russell asked us to consider a blind physicist who knows all of physics but doesn't know something we know: what it's like to see the color blue.[14] Russell's conclusion was that the natural sciences seek to discover "the causal skeleton of the world. Other aspects lie beyond their purview."

Recasting Russell's experiment in naturalistic terms, we might say that, like all animals, we are reflexively provided by our internal cognitive capacities with a world of experience – the human Umwelt, in ethological lingo. But being reflective creatures, thanks to the emergence of human intellectual capacities, we go on to seek a deeper understanding of the phenomena of experience. If humans are part of the organic world, we expect that our capacities of understanding and explanation have fixed scope and limits, like any other natural object – a truism that is sometimes thoughtlessly derided as "mysterianism," though it was understood by Descartes and Hume, among others. It could be that these innate capacities do not lead us beyond some theoretical understanding of Russell's causal skeleton of the world. In principle these questions are subject to empirical inquiry into what we might call "the science-forming faculty," another "mental organ," now the topic of some investigation – Susan Carey's work, for example (Carey 1985, 2001; Barner et al. 2005, 2007). But these issues are distinct from traditional dualism, which evaporated after Newton.

This is a rough sketch of the intellectual background of the biolinguistic perspective, in part with the benefit of some hindsight. Adopting this perspective, the term "language" means internal language, a state of the computational system of the mind/brain that generates structured expressions, each of which can be taken to be a set of instructions for the interface systems within which the faculty of language is embedded. There are at least two such interfaces: the systems of thought that use linguistic expressions for reasoning, interpretation, organizing action, and other mental acts; and the sensorimotor systems that externalize expressions in production and construct them from sensory data in perception. The theory of the genetic endowment for language is commonly called universal grammar (UG), adapting a traditional term to a different framework. Certain configurations are possible human languages, others are not, and a primary concern of the theory of human language is to establish the distinction between the two categories.

Within the biolinguistic framework, several tasks immediately arise. The first is to construct generative grammars for particular languages that yield the facts about sound and meaning. It was quickly learned that the task is formidable. Very little was known about languages, despite millennia of inquiry. The most extensive existing grammars and dictionaries were, basically, lists of examples and exceptions, with some weak generalizations. It was assumed that anything beyond could be determined by unspecified methods of “analogy” or “induction” or “habit.” But even the earliest efforts revealed that these notions concealed vast obscurity. Traditional grammars and dictionaries tacitly appeal to the understanding of the reader, either knowledge of the language in question or the shared innate linguistic capacity, or commonly both. But for the study of language as part of biology, it is precisely that presupposed understanding that is the topic of investigation, and as soon as the issue was faced, major problems were quickly unearthed.

The second task is to account for the acquisition of language, later called the problem of explanatory adequacy (when viewed abstractly). In biolinguistic terms, that means discovering the operations that map presented data to the internal language attained. With sufficient progress in approaching explanatory adequacy, a further and deeper task comes to the fore: to transcend explanatory adequacy, asking not just what the mapping principles are, but why language growth is determined by these principles rather than innumerable others that can easily be imagined. The question was premature until quite recently; it is now being addressed in what has come to be called the minimalist program, the natural next stage of biolinguistic inquiry, to which I'll briefly return.

Another question is how the faculty of language evolved. There are libraries of books and articles about evolution of language – in rather striking contrast to the literature, say, on the evolution of the communication system of bees. For human language, the problem is vastly more difficult for obvious reasons, and can be undertaken seriously, by definition, only to the extent that some relatively firm conception of UG is available, since that is what evolved.

Still another question is how the properties "termed mental" relate to "the organical structure of the brain," in Priestley's words.[15] And there are hard and important questions about how the internal language is put to use, for example in acts of referring to the world, or in interchange with others, the topic of interesting work in neo-Gricean pragmatics in recent years.

Other cognitive organs can perhaps be studied along similar lines. In the early days of the biolinguistic program, George Miller and others sought to construct a generative theory of planning, modeled on early ideas about generative grammar.[16]

Other lines of inquiry trace back to David Hume, who recognized that knowledge and belief are grounded in a "species of natural instincts," part of the "springs and origins" of our inherent mental nature, and that something similar must be true in the domain of moral judgment. The reason is that our moral judgments are unbounded in scope and that we constantly apply them in systematic ways to new circumstances. Hence they too must be founded on general principles that are part of our nature, though beyond our "original instincts," those shared with animals. That should lead to efforts to develop something like a grammar of moral judgment. That task was undertaken by John Rawls, who adapted models of generative grammar that were being developed as he was writing his classic Theory of Justice (1971) in the 1960s. These ideas have recently been revived and developed and have become a lively field of theoretical and empirical inquiry, which Marc Hauser discusses below.[17]
