Of Minds and Language
Edited by Massimo Piattelli-Palmarini, Juan Uriagereka, and Pello Salaburu
The question ahead is whether the putative “viral” role of uninterpretable morphology, in more or less the sense I sketched, could be meaningfully connected to some real viral process. We shall see; if it can, that might shed some light on such old chestnuts as why the language faculty appears to be so unique, so nuanced, and to have emerged so rapidly within entire populations – the only place where language is useful.
I can't resist mentioning that the Beat generation may have had it roughly right when, in the voice of William Burroughs, it told us that “language is a virus from outer space.” I don't know about the “outer space” bit, but the viral part might not be as crazy as it sounds, given the observable fact that uninterpretable morphology is simply there, and the system goes to great lengths to eliminate it.
GALLISTEL: In computer science there is an important distinction between tail recursion and embedded recursion, because in tail recursion you don't need a stack. A stack is in effect a way of keeping track of which X is which, right? You keep track of it by where they are in the stack, and then the stack tells you where you are in your evaluation of that. And the whole point of reverse Polish notation in the development of the theory of computation was that it turned out you could do all of the arithmetic with this tail recursion. You could always execute your operations as they came up if you structured it the right way, and therefore you only needed a stack that was three-deep, or two-deep. Does that connect with the recursion that you see as central to language?
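To make Gallistel's contrast concrete, here is a minimal sketch (my illustration, not anything from the discussion) of reverse Polish evaluation in Python: every operator fires as soon as it arrives, so the operand stack stays only two or three items deep no matter how long the expression grows, whereas a deeply center-embedded expression tree would need a stack that grows with the nesting.

```python
# Minimal sketch of reverse Polish (postfix) evaluation. Each operator
# is executed as soon as it arrives, so the operand stack stays shallow
# -- Gallistel's point about tail recursion needing no growing stack.

def eval_rpn(tokens):
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # the operands are always the two top items,
            a = stack.pop()   # so there is no need to track "which X is which"
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

# (3 + 4) * 2 in postfix is "3 4 + 2 *"; the stack never exceeds two items.
print(eval_rpn("3 4 + 2 *".split()))  # 14
```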
URIAGEREKA: Well, I'm an observer here as well, but as far as I can see, the thought processes that you have shown us over the years will, I am convinced, require a lot of mind – even more mind than what you are assuming now. I mean, you could even go into quantifying, context-sensitivity, and so forth; one just has to examine each case separately. But I also think that Hauser, Chomsky, and Fitch raised a valid issue, and as you know, one of the central points in that paper was in terms of recursion.
But I don't think they fall into a contradiction, if we separate competence and performance. This is because in the case of the type of recursion you are talking about, not only is there recursion in the thought processes, but it is also a construct that somehow I am actually projecting outwards and that you are reconstructing as we speak. And I am projecting it into the one-dimensional speech channel, which would seem to involve a massive compression from what may well be multidimensional structuring to single-dimensional expression – Jim Higginbotham's point two decades ago.
If you have something like Kayne's LCA (the Linear Correspondence Axiom; Kayne 1994), you actually succeed in the task – humans do, anyway – or for that matter any similar, reliable procedure may do the trick. But I think that is what we are trying to see here. What is it that introduced that extra step, call it the LCA or whatever turns out to be correct, that allows you to reconstruct from my speech signal all my complicated recursions? So the only point in principle that I am raising is that I disagree with Jackendoff and Pinker when they criticize the paper on the basis of something like this puzzle.
Actually, I should say they don't exactly criticize the paper on the basis of what I said – theirs would have been a better argument, actually, if they had, but I won't go into that. At any rate, I disagree with their conclusion, and think that you can have a recursion that is solipsistic, literally trapped inside your mind, and I would be prepared to admit that other animals have that. The issue then is how we push that thing out, making it public, and that is where I think something like this uninterpretable morphology business might have a very interesting consequence, if you have to excise it along the lines I sketched. This is why Massimo and I have often used a virus image, because a virus is an element that is crucially not part of the system, and you want to kick it out. And the way the system kicks it out (I won't go into the details, but you have to use a procedure with very convoluted consequences, much like those in adaptive immunity) is that the mechanism “forces,” as a result of its workings, some kind of context-sensitive dependency. It is a bit like the RNA pseudoknots that result from retroviral interactions, if I understand this, which David Searls (2002) has shown to have mildly context-sensitive characteristics. Those presumably result from the need to eliminate the virus, or, if you wish, to modulate its activity.
The only new element in the system is on the one hand the extraneous virus, and on the other a certain topology that the system goes into in order to get rid of it – with lots of consequences. So I would argue that what Noam calls “edge features” – which at least in the early stages of minimalism he related to uninterpretable morphology – in fact are the actual push that took us to this new system, of successfully communicated recursive thought.
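The formal point behind the pseudoknot analogy can be made concrete (this sketch is my own illustration of the pattern, not Searls's actual grammar formalism): crossing dependencies of the shape a^m b^n a^m b^n lie beyond context-free power, yet a simple counting check recognizes them.

```python
import re

# Illustration of a crossing-dependency pattern of the kind used to
# model RNA pseudoknots: a^m b^n a^m b^n. No context-free grammar
# generates this language (the two a-runs must match across the
# intervening b-run), which is what "mildly context-sensitive" alludes to.

def is_pseudoknot_like(s):
    m = re.fullmatch(r"(a+)(b+)(a+)(b+)", s)
    if not m:
        return False
    a1, b1, a2, b2 = (len(g) for g in m.groups())
    return a1 == a2 and b1 == b2   # dependencies cross: a..b..a..b

print(is_pseudoknot_like("aabbbaabbb"))  # True:  a^2 b^3 a^2 b^3
print(is_pseudoknot_like("aabbaaabb"))   # False: the a-runs differ (2 vs 3)
```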
CHOMSKY: Well, the only comment I wanted to make is that there is a gap in the argument, which in fact is crucial, and that is that granting whatever richness you do for the kinds of things that Randy is talking about, still, to go from there to recursion requires that it be embedded in a bigger structure of the same kind and so on, indefinitely. There is no evidence for that. So however rich those thoughts or constructions may be, that's arbitrary; it doesn't carry us to recursion.
GELMAN: I actually want to repeat Randy's question in a somewhat different way. You can do the natural numbers with tail recursion, in terms of competence, production, and understanding – it is always an X, not a natural number. To my knowledge, you can't do linguistics without some kind of embedded recursion. It's axiomatic.
URIAGEREKA: That's right, so if language is more than just right-branching, you have a problem in communicating those structures. So your point is completely relevant, because if you think of left-branching together with right-branching – that's actually the place where something like Kayne's LCA gets messy. Kayne's LCA for right-branching would be trivial: you just map a simple-minded c-command structure involving only right branches to precedence among the terminals, and you're done. Then there's no issue. But the minute you have left-branching as well, then you have to have an induction step in the procedure, and here different authors attempt different things. In effect, you need to treat the complex left branch with internal structure as a terminal, and linearize that as a unit including all its hanging terminals, and of course introduce some sort of asymmetry so that the mutually c-commanding branches (to the left and to the right) do not interfere with each other's linearization. That asymmetry is stipulated by everyone, and it shows you that we are dealing with a very messy procedure.
So in essence that is the question – what carried humans to that next step, which somehow introduced some, hopefully elegant, version of the procedure to linearize complex branchings? The speculation I discussed here had to do with the elimination of uninterpretable features; there might be other rationalizations, but the general point remains. Now I think Noam's point is right: you're still concerned about how you got to that larger system to start with, and I have nothing to say about that. It is a great question, and I am presupposing that it may have been answered ancestrally for many other animals, not just humans.
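A toy rendering of the contrast just described, under my own simplifying assumptions (this is not Kayne's actual formulation): for a purely right-branching tree a memoryless left-to-right walk suffices, while a complex left branch forces an induction step that spells out the whole branch as a unit before anything to its right.

```python
# Toy sketch of the linearization point. Trees are (left, right) pairs;
# leaves are strings.

def linearize_right_branching(tree):
    # Pure right-branching: a memoryless loop suffices -- peel off the
    # leaf on the left, descend to the right, done.
    out = []
    while isinstance(tree, tuple):
        head, tree = tree
        out.append(head)          # assumes every left member is a bare leaf
    out.append(tree)
    return out

def linearize(tree):
    # General case: a complex left branch is linearized as a unit, all of
    # its terminals spelled out before the right branch -- the induction
    # step, with the left-before-right asymmetry simply stipulated.
    if isinstance(tree, str):
        return [tree]
    left, right = tree
    return linearize(left) + linearize(right)

print(linearize_right_branching(("the", ("dog", ("chased", "cats")))))
# ['the', 'dog', 'chased', 'cats']
print(linearize(((("the", "big"), "dog"), "barked")))
# ['the', 'big', 'dog', 'barked']
```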
CHOMSKY: Even with simple tail recursion, when you are producing the natural numbers, you remember the entire set that you produced. Suppose you keep adding numbers: you have to know that it is not like taking steps. When you are taking steps, one after another, the next step you take is independent of how many steps you've taken before it. However, if you really have a numbering system, by the time you get to 94, you have to know that the next one is going to be 95.
GELMAN: Right. Basically, what Noam is saying is that 94 has compressed everything up to 94, and the 1 that you now add gives you the next number, so you don't mix up the 1 you add with the first 1 that you counted.
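A two-line sketch of the contrast being drawn (my illustration only): each step is independent of the steps before it, whereas the successor operation must carry the whole count so far.

```python
# Sketch of the contrast: steps are memoryless, numbers are not.

def take_step():
    return "step"          # the next step ignores how many steps came before

def successor(n):
    return n + 1           # the next number depends on the entire count so far

count = 0
for _ in range(94):
    count = successor(count)   # "94 has compressed everything up to 94"
print(count, "->", successor(count))   # 94 -> 95
```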
HINZEN: I have a question about Uriagereka's conception of Case features. Given the number of times that you have suggested it, what is the actual difference between talking about uninterpretable Case features and talking about morphological features that get used to signal some kind of second-order semantics, some kind of second-order computation? Wouldn't it be the case that as you have this mechanics of elimination of these features, you get certain technical or semantic consequences as a sequel of that? So why would we be forced to set up the morphological features as uninterpretable, as opposed to giving them some other kind of interpretation?
URIAGEREKA: Well, in part the question is how you manage to have access to those higher-order interpretations, to put it in your own terms. There is a stage where, in one version of the hypothesis Massimo and I are pushing, you actually do not have access to that, and there is another stage where you do – I mean in evolution. Prior to the emergence of this crazy uninterpretable morphology you arguably wouldn't have needed this very warped syntax that emerges as a result of excising the virus. You could get away with something much simpler, for better and for worse. For better in the sense that you wouldn't have all these complications we have been talking about, which serious recursion brings in (and we only scratched the surface, because the minute you introduce displacement things get even more complicated); for worse also in the sense that maybe then you wouldn't have access to these kinds of higher-order structure that your question implies, which necessitates the convoluted syntax.
But maybe when you introduce this extra element in the system, whatever you call it – a virus, edge feature, or whatever – you get this kind of elaborate syntax, but now you also gain new interpretive possibilities. I actually read Noam's recent papers in that way as well. Perhaps I'm biased by my own take, but in essence, once you get to what he calls edge features, well that plainly brings with it another bundle of things, syntactically and, consequently, in the semantics as well, criterial stuff of the sort Luigi was talking about in his talk. And again, it's a very serious separate issue whether those other things have now been literally created, or whether they were there already, latent if you wish, and somehow you now have access to them as a result of the new syntax. I personally don't have much to say about that, although I know you have a view on this. What I am saying is compatible with both takes on the matter. Simply, without a complicated syntax, you are not going to get generalized quantification, unless you code all of that, more or less arbitrarily, in a semantics that is also generative. So complicated syntax is necessary, somewhere: separately or packed into the semantics itself. The question is, how do you get that complexity? And it seems that these “viral” elements have this intriguing warping consequence, which the language faculty may have taken advantage of.
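As one concrete rendering of “generalized quantification” (a standard textbook illustration, nothing specific to this exchange): determiners like every and most denote relations between sets, second-order objects in the sense that they quantify over sets rather than individuals, and most in particular is known not to be expressible with first-order quantifiers alone.

```python
# Standard textbook rendering of generalized quantifiers: a determiner
# denotes a relation between two sets (a second-order object).

dogs    = {"fido", "rex", "spot"}
barkers = {"fido", "rex", "ella"}

def every(A, B): return A <= B                   # every A is a B
def some(A, B):  return bool(A & B)              # some A is a B
def most(A, B):  return len(A & B) > len(A - B)  # more As are Bs than not

print(every(dogs, barkers))  # False: spot is not a barker
print(some(dogs, barkers))   # True
print(most(dogs, barkers))   # True: 2 of the 3 dogs bark
```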
Angela D. Friederici
In a recent paper on the faculty of language, Marc Hauser, Noam Chomsky, and Tecumseh Fitch (2002) asked three critical questions stated already in the title: What is it, who has it, and how did it evolve? In their answer to the “what-is-it” question, they formulated the hypothesis that the language faculty in the narrow sense comprises the core computational mechanism of recursion. In response to the “who-has-it” question, the hypothesis was raised that only humans possess the mechanism of recursion which, interestingly, is crucial not only for language but also, as they claim, maybe for music and mathematics – that is, three processing domains that seem to be specific to humans, at least as far as we know.
As a first attempt to provide empirical data with respect to the evolutionary part of the question, Tecumseh Fitch and Marc Hauser (2004) presented data from an experiment (see page 84 above) comparing grammar learning in cotton-top tamarin monkeys and in humans. In this study, they presented these two groups of “participants” with two types of grammars. The first was a very simple probabilistic grammar where a prediction could be made from one element to the next (AB AB AB ...), which they called a finite state grammar (FSG, Grammar I). They also tested a second, phrase structure grammar (PSG, Grammar II) whose underlying structure could be considered hierarchical. Interestingly enough, the cotton-top tamarins could learn the FSG but not the PSG, whereas humans easily learned both. So now, at least for a functional neuroanatomist, the question arises: what is the neurological underpinning for this behavioral difference? Certainly there is more to it than this one question, but today I can only deal with this one, and would be happy to discuss it with you.
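A sketch of the two grammar types in recognizer form (a common schematic reconstruction; the actual stimuli were spoken syllable sequences): the FSG needs no memory beyond the current position, while the PSG requires matching counts across the string, hence a counter or stack.

```python
import re

# Recognizers for the two artificial grammars, in schematic form.
# FSG (Grammar I):  (AB)^n   -- finite-state, no memory needed.
# PSG (Grammar II): A^n B^n  -- needs to match counts across the string.

def is_fsg(s):
    return re.fullmatch(r"(AB)+", s) is not None

def is_psg(s):
    m = re.fullmatch(r"(A+)(B+)", s)
    return m is not None and len(m.group(1)) == len(m.group(2))

print(is_fsg("ABABAB"), is_psg("ABABAB"))  # True False
print(is_fsg("AAABBB"), is_psg("AAABBB"))  # False True
```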
In this presentation I will propose that the human capacity to process hierarchical structures may depend on a brain region which is not fully developed in monkeys but is fully developed in humans, and that this phylogenetically younger piece of cortex may be functionally relevant for the learning of PSG. I think at this point we need to take a look at the underlying brain structure of the two species. Unfortunately, however, we do not have exact data on the neural structure of the brain of the cotton-top tamarin; for the moment we only have the possibility of comparing human and macaque brains. In a seminal study, Petrides and Pandya (1994) analyzed the cytoarchitectonic structure of the frontal and prefrontal cortices of the brain in humans and the macaque (see Fig. 13.1). Anterior to the central sulcus (CS) there is a large area which one could call, according to Korbinian Brodmann (1909), BA 6. This area is particularly large in humans and in the monkey. However, those areas that seem to be relevant for language, at least in the human brain, are pretty small in the macaque (see gray shaded areas BA44 and BA45B in Fig. 13.1). According to the shading scheme used here, the lighter the gray shading and the more anterior in the brain, the more granular the cortex.