Of Minds and Language
Authors: Pello Salaburu; Juan Uriagereka; Massimo Piattelli-Palmarini
LAKA: When you said that you think that children have a more refined singular/plural quantification system that is due to language (so the idea is that there is some conceptual part that is shared between rhesus monkeys and us humans, but there is also a difference between babies and rhesus monkeys), your hypothesis was that this has to do with language. I realize that you are not saying that babies' knowledge of quantification is driven by language directly. My question is, do you mean to say that human babies have this capacity because they are endowed with the language faculty, or do you mean to say that they will develop this faculty as language matures?
HAUSER: I think I was referring to the former. Due to the evolution of the language faculty, babies already have ontological commitments prior to the maturation of language.
HIGGINBOTHAM: I have two remarks. One is a detailed question on children and their behavior with respect to mass/count distinctions. You know there are languages in which there is simply no plural morphology at all, e.g. Chinese, where it appears vestigially in the personal pronouns, but that's it. Moreover, the nominal (like book, let's say) is number neutral, so if you say I bought book, that could be one, two, or any number of books. So you do not get morphological marking with this thing, although, in contrast to others, I think that it is pretty clear that you have exactly the same distinction. I mean, book is a count noun in Chinese, and stone is not a count noun, but a mass noun. But that suggests, now, that the distinction is fundamentally in place, independently of any question of anybody's morphology. But then I think you are going to have to ask yourself, with respect to human beings and with respect perhaps also to the animals, what is the peculiar status of the fact that you never get numerals with mass terms. Try saying three sands or three sand, or something like that, or in Chinese three stone – it makes no sense. One of the interesting questions, it seems to me, is why does it make no sense? (Of course not everybody agrees to that.) A possibility which I have explored, and other people are sympathetic to too, I think, is that it makes no sense because the realm of counting is simply alien to this. You do not have a domain of objects. There would be a fundamental and physical distinction there. That would be a kind of thinking that one could look for in children, I would think, and something that might provide insight into how the ontology really changes once you get language into the picture.
GELMAN: We actually have evidence to support that – Lila, myself, and two post-docs – which I will present.
URIAGEREKA: I am among the ones who are convinced that the FLB/FLN distinction is not only useful, but probably even right. But now we have another wonderful research program ahead, because as we get closer to understanding how FLN came to be, the big question is going to be: what about FLB? In other words, thought in animals, and so on.
HAUSER: I think one of the challenges for all of us – certainly one that rings through at this Conference – is that it has been hard for us experimental biologists to do the translation from the abstractions that linguists invoke to actually flesh out the research program. I think it is going to require multiple steps. What is exciting – and a significant historical change, I hope – is that the acrimonious debates of the past between biologists and linguists are hopefully gone. But I think it is going to require more than this for the research project to be fruitful. It is going to require a way of articulating what the relevant computational procedures are that enter into language (whether they are FLN or FLB doesn't matter), in such a way that there is a research program that can go forward both in ontogeny and phylogeny. That is a serious challenge. For example, I think that many of the comparative experiments conducted thus far have focused on fairly easy problems, easy at least from an experimental perspective. Take categorical perception: this was easy to look at in animals because you could use the same materials and methods that had been used with human infants. Similarly, it was relatively easy for my lab to explore the commonalities between rhythmic processing in human infants and tamarins because we could exploit the same test materials and habituation methods. But once you move to the domains of semantics and syntax, the methods are unclear, and even with some fairly solid experimental designs, the results are not necessarily clear. In the work that I have done with Fitch, for example, in which we tested tamarins on a phrase structure grammar, we now understand that neither negative nor positive evidence is really telling with respect to the underlying computation.
Added to this is the problem of methods that tap spontaneous abilities as opposed to those that entail training. I think both methods are useful, but they tap different problems. We must be clear about this. When the work on starlings was published, claiming that unlike tamarins, these songbirds can compute the phrase structure grammar, we were left with a mess because there are too many differences between the studies. For example, though both species were tested on the same A^nB^n grammar, the tamarins were tested with a non-training habituation-discrimination method whereas the starlings were operantly trained, requiring tens of thousands of trials of training before turning to the key transfer trials. Further, the tamarins were tested on speech syllables, whereas the starlings were tested on starling notes. And lastly, starlings are exquisite vocal learners, whereas tamarins show no sign of vocal learning. The fact that starlings can learn following massive training shows something potentially very interesting about learnability, on the one hand, and the computational system on the other. I think that is extremely interesting. But it might turn out that many of the most interesting computations observed in humans are available spontaneously, with no training or teaching required. Animals may require a serious tutorial. In the end, therefore, we need a comparative research program that specifies not only which kinds of computation we share with other animals, but also how they are acquired.
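To make the computational contrast concrete, here is a minimal illustrative sketch in Python. It is not taken from the tamarin or starling studies, and all function names are hypothetical; it simply contrasts the context-free A^nB^n pattern with a simple finite-state pattern such as (AB)^n, and shows what recognizing each one requires.

# Illustrative sketch only: contrasts the context-free A^nB^n pattern
# discussed above with a finite-state (AB)^n pattern. Function names are
# hypothetical and not taken from the original studies.

def make_anbn(n):
    """Build a sequence of n A-syllables followed by n B-syllables (A^nB^n)."""
    return ["A"] * n + ["B"] * n

def make_abn(n):
    """Build n alternating AB pairs, i.e. (AB)^n, a finite-state pattern."""
    return ["A", "B"] * n

def is_abn(seq):
    """A finite-state check: the sequence must simply alternate A, B, A, B, ..."""
    return len(seq) % 2 == 0 and seq == ["A", "B"] * (len(seq) // 2)

def is_anbn(seq):
    """Check for n As followed by n Bs; unlike (AB)^n, this means matching the
    number of As against the number of Bs, which no bounded-memory
    (finite-state) device can do for arbitrary n."""
    n = len(seq) // 2
    return len(seq) % 2 == 0 and seq == ["A"] * n + ["B"] * n

if __name__ == "__main__":
    print(make_anbn(3))                                  # ['A', 'A', 'A', 'B', 'B', 'B']
    print(make_abn(3))                                   # ['A', 'B', 'A', 'B', 'A', 'B']
    print(is_anbn(make_anbn(3)), is_anbn(make_abn(3)))   # True False
    print(is_abn(make_abn(3)), is_abn(make_anbn(3)))     # True False

Run as a script, the sketch prints the two string types and shows that each recognizer accepts its own pattern and rejects the other, which is the distinction at stake in the transfer tests described above.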
Gabriel Dover
It cannot be denied that the faculty of language is a part of human biological development in which the particular path taken by any one individual is influenced by a unique, interactive milieu of genetics, epigenetics, and environment. The same can be said of all other features of human biology, even though the operative poetics are not known in detail for any one process. Hence, unraveling (if that were at all possible) the route through which language gets established, whether as a problem of ontogeny or evolution, needs to take note of current advances in research into the ways of biology. No matter what the specific locus of attention might be (“broad” or “narrow” language faculty; “principles” or “parameters”; “I”- or “E”-language; “core” or “peripheral” domains; and so on), the same kinds of developmental and evolutionary factors will be concerned.
On this premise, I describe the sorts of features of evolved biological structures that dominate current research, and which can be expected to be no less involved with the biology of human language than any other known function, including consciousness and ultimately the biology of free will. But I'm getting ahead of myself.
Although it is often said (following the lead of Theodosius Dobzhansky) that nothing makes sense in biology except in the light of evolution, the problem is that not much makes sense in evolution. Contemporary structures and processes are the result of a three-and-a-half-billion-year span of time in which random and unpredictable perturbations have been the dominant contributions. Evolution is a consequence of three major recurrent operations (natural selection; genetic drift; molecular drive), each of which is essentially stochastic. Natural selection relies on the occurrence of spontaneous, undirected mutations alongside a fortuitous match (that is, a greater level of reproductive success) between such mutant phenotypes and a fluctuating environment. The process of genetic drift, whereby some mutations accumulate over others without interference from natural selection, depends on the vagaries of randomly fluctuating populations, whether of haploid gametes or diploid organisms. In essence, it is due to sampling errors. The process of molecular drive, whereby some genetic elements fluctuate in number in the germ line of single individuals, and may accumulate in a sexual population with the passing of the generations, depends on a variety of mechanisms of DNA turnover (for example, transposition, gene conversion, DNA slippage, unequal crossing over, and so on).
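As a purely illustrative aside, not part of the text above, the "sampling error" character of genetic drift can be made concrete with a minimal Wright-Fisher-style simulation; the function name and parameter values below are hypothetical, chosen only to show how a selectively neutral allele can be lost or fixed by chance alone.

# Minimal sketch, assuming a haploid Wright-Fisher-style population: each
# generation resamples the allele at random, with no selection acting at all.
import random

def wright_fisher(pop_size=100, start_freq=0.5, generations=1000, seed=1):
    """Track a neutral allele's frequency; drift here is nothing but
    binomial sampling error in a finite population."""
    random.seed(seed)
    freq = start_freq
    for gen in range(generations):
        # Each of the pop_size gene copies in the next generation is drawn
        # at random according to the current allele frequency.
        count = sum(random.random() < freq for _ in range(pop_size))
        freq = count / pop_size
        if freq in (0.0, 1.0):  # the allele has been lost or fixed by chance
            return gen + 1, freq
    return generations, freq

if __name__ == "__main__":
    gens, final = wright_fisher()
    print(f"neutral allele reached frequency {final} after {gens} generations")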
Each process is operationally independent of the other two, although there is a complex three-way interaction between them which has led to the evolution of bizarre structures and functions, not all of whose features are optimized solutions to problems of adaptation, the sole prerogative of natural selection (Dover 2000). Nevertheless, such seemingly exotic features have survived and continue to survive. This is life as the cookie crumbled.
This tripartite phenomenon of evolution impinges on our discussion regarding the existence of “laws of form” in biology and their lower-level reliance on the laws of physics and chemistry. Such a discussion in turn impinges on the conceptualization of the faculty of language (or, at minimum, recursive syntax) as an inevitably evolved universal structure, not unlike a “law of form.”
There are a number of key features that have come to the fore over the last decade in the study of biology. I describe them briefly in order to indicate the general territory from which an understanding of the ontogeny and evolution of language may one day emerge.
The newer concepts are given a number of names, of which modularity, redundancy, networks, turnover, and degeneracy take priority. The first, modularity, concerns the observation that at all levels of organization, from genes through to organs, a number of basic modular units can coalesce to form a higher-level structure, and that the arrangement of such units can vary from one structure to another. In other words, with reference to genes, the structure and subsequent function of a given gene (and its encoded protein) depend on the specific combination of units that have gone into its (evolved) making. Significantly, the modular units are frequently and widely shared by other, unrelated genes, and each unit may change in its number of copies from gene to gene – that is, the modular units are redundant and versatile. The combined effects of modularity and redundancy in biological structures are not unlike the game of Lego, in which many elaborate structures can be constructed from a few repetitive building blocks that can combine one with another in a bewildering number of permutations. Such flexibility, stemming from pre-existing modular units, raises the question of what "complexity" means as one moves up the tree of life to "higher organisms"; it also calls for considerable caution about the notion of "laws of form" (see below).
There is no average gene or protein with regard to the types, numbers, and distributions of units that go into their making. Importantly, each module contains the sequence information that determines to what other structures it binds, whether they are sequences of DNA/RNA, stretches of protein polypeptides, or other metabolites, and so on. Hence, multi-module proteins are capable of forming extensive networks of interaction, from those regulating the extent of gene expression in time and space, through to neuronal networks that lie at the basis of brain functions.
It is important to stress that biological interactions of whatever sort are the result of differences between the participating molecules with regard to the distribution of protons and electrons at the points of contact. In other words, the dynamics of all living processes are based on the expected laws of physics and chemistry, as is every other process in the universe (or at least in the single universe with which we are most familiar). Which particular interaction takes effect during ontogeny is a consequence of the perseverance of chemical contacts over evolutionary time. The argument that chemistry/physics provide invariant laws not "transgressable" by biology cannot lie at the level of protons and electrons – for without all the paraphernalia of fundamental physics there would be no biology. Hence, the locus of any such argument that biology reflects universal and rational laws of form, based on universal features of chemistry and physics, must be at a "higher" level. Is there, or could there be, a higher level in biology obeying universal decrees? Or does universality stop at the level of the differences in redox at the point of contact of our fundamental modules?
A population of biological molecules, or organisms, is unlike a population of water molecules in that there are no predictable regularities of events from which universal and timeless laws can be drawn. The liquidity of water is a property of a collection of water molecules; no single molecule is liquid. There
have been attempts to explain consciousness as an emergent property of a collective of neurons on the assumption that no single neuron is conscious. Setting aside recent hints in brain research that single neurons are more consciously expressive than has been assumed, the metaphoric, or perhaps even literal, comparison with water is illegitimate. The one certain point of biological evolution is that variation is the name of the game, as a combined result of well-characterized mutagenic processes amongst the genes, the random features of sexual reproduction, and the combinatorial flexibility of interacting modules. Hence, no two neurons, from the billions on hand, are alike with regard to their inputs and outputs. Whatever the explanation of consciousness turns out to be, it will need to take on board the massive, inbuilt variation of evolved modular systems and the interactive networks to which they give rise. Consciousness, based on this heaving sea of constantly variable interactions, does not appear to be fixed according to regular, predictable, and universal laws of form.