Of Minds and Language

Authors: Massimo Piattelli-Palmarini; Juan Uriagereka; Pello Salaburu

Over many years we have got to the point where we can move to simpler systems of recursive generation of expressions, which have eliminated phrase structure grammar totally. What remains are the Merge-based systems. Remember that Merge is the simplest possible mode of recursive generation; you can't get below it. Phrase structure grammar is much more complex, concatenation is more complex, and anything else you can dream of is more complex. This is the absolute minimum, and if you can get to that, you're finished. It looks like you can get to that. Merge automatically gives hierarchically structured expressions; you can eliminate the structure by adding an associative principle, but if you don't tamper with it, it's just a structured expression. It could be that those are the only ones needed. I can't prove it, but it could be.
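
Concretely, Merge can be pictured as nothing more than binary set formation, Merge(X, Y) = {X, Y}. Here is a minimal sketch of that picture – my own illustration, not code from the text, with the lexical items and the set-based encoding chosen purely for exposition:

```python
# A minimal sketch of Merge as binary set formation: Merge(X, Y) = {X, Y}.
# The modeling choices (strings as lexical items, frozensets as syntactic
# objects) are illustrative assumptions, not claims from the text.

def merge(x, y):
    """Combine two syntactic objects into an unordered two-member set."""
    return frozenset([x, y])

# Lexical items: the atoms of the computation.
the, book, read = "the", "book", "read"

# Applying Merge recursively yields hierarchy with no further machinery:
dp = merge(the, book)   # {the, book}
vp = merge(read, dp)    # {read, {the, book}}

print(vp)  # the hierarchy comes for free from the nesting of the sets
```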

Cedric Boeckx brought up an important point (see Chapter 3), namely that even if we could show that everything is just Merge-based (language is a snowflake, Merge came along, everything else is there), there still has to be more to distinguish language from other things. He put it in terms of decomposing Merge, which I suspect is the wrong way to look at it, but you can look at it in terms of adding to Merge, which I think is the right way to look at it. There is something you add to Merge which makes it language-specific, and that says that reliance on Merge is (as Jim Higginbotham pointed out) “close to true” (see page 143) – so close to true that you think it's really true, but there are exceptions. It's close to true that Merge is always a head and another object (a head is just a lexical item, one of the atoms, so Merge is a lexical item and some other object). To the extent that this is true – and it's overwhelmingly true – you eliminate the last residue of phrase structure grammar (projections or labels), because the head is just the thing that you find by minimal search. So a simple computational principle, minimal search (which is going to be much more general), will capture everything about headedness, and the same thing works for both its internal operations and its external relations. That also works for internal Merge (Move). The only major exception that I know of is external arguments, and they have all sorts of other properties and problems that I talked about. So it looks like it's close to true and probably is true.
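
Continuing the same toy encoding (again my own sketch, not the book's algorithm), one can illustrate how minimal search finds the head of a {head, XP} structure without any projections or labels, and how {XP, YP} configurations such as external arguments fall outside it:

```python
# Toy illustration of headedness by minimal search, under the same
# sets-as-syntactic-objects assumption as above. Not the book's algorithm.

def label(obj):
    """Find a head by minimal search: if exactly one member of a
    set-structure is a lexical item (an atom), it serves as the label."""
    if isinstance(obj, str):
        return obj  # a lexical item labels itself
    atoms = [m for m in obj if isinstance(m, str)]
    if len(atoms) == 1:
        return atoms[0]  # the {head, XP} case: the head is found immediately
    # In {XP, YP} structures (e.g. external-argument configurations),
    # minimal search finds no unique head -- the residue the text flags.
    return None

vp = frozenset(["read", frozenset(["the", "book"])])
print(label(vp))  # 'read' -- the verb heads {read, {the, book}}
```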

Getting a little more explicit, Janet Fodor opened her main presentation (Chapter 17) with the sentence:

(5) Pat expects Sue to win.

This is what's called an ECM[22] structure, and the interesting thing about these structures, which are pretty rare (English has them, but a lot of other languages don't), is that “Sue to win” is a semantic unit, kind of like a clause. Yet “Sue” is treated as if it were in the upper sentence, as the object of “expect,” which it can't be, since Pat is not expecting Sue; she's expecting Sue to win. The way it functions (quantification scope and so on) is as if “Sue” were the object of “expect.” This is a problem that goes back to the work of Paul Postal from the early 1970s.[23] There has been a lot of work on it; it's very puzzling and it doesn't seem to make any sense. I tried for years to resist the evidence for it because it was so senseless, but by now it turns out that there's a principled argument that it has to be that way, just from straight computational optimality measures. I can't talk about it now, but it goes into phase theory, which minimizes computation, and feature inheritance, which is required to make it work. It's a slightly involved argument, but it goes back to just one principle, minimize computation, from which it turns out that “Sue” needs to be up there. If that's the case, the child doesn't have to learn it; it's just going to follow from the laws of nature, and you can knock out the problem of learning that. There is the parametric problem which Janet mentioned; that might be settled by earlier parameter settings, having to do with inflection and the like. That's the kind of example one should be looking for in trying to get over some of the massive difficulties of acquisition in terms of parameters.

The third of the problems that came up was Lila Gleitman's: how do you get the words? (See Chapter 16.) It doesn't really mean words, remember; it means the smallest meaning-bearing elements. In English they are word-like, but in other languages they may be stuck in the middle of long words, and so on. So how do you get the meanings of the words? One issue that comes up is whether there are parameters. Almost all the parameters that we know about are in the phonology and morphology. It's conceivable that there are none in the syntax, but are there parameters on the semantic side? There are some that Jim Higginbotham talked about, which are non-compositional, and those are very important, I think. But for the words themselves, are there parameters? The only thing I know about this has to do with what were once called semantic fields. Semantic field theory has been forgotten, but it was pretty important. The last work I've seen on it was by Stephen Ullmann, who was a linguist at Leeds around forty years ago.[24] A lot of this was done by German scholars years back, and the basic idea was to try to show that there are some semantic domains which are cut up differently by different languages, but they always fill the same semantic domain. It's analogous to structural phonology: there's some domain and you pick different options. One case of it is pretty well known: colors. There is a lot of work about how colors are cut up differently in different languages and what the principles are.

A more interesting case, which was studied by the German semanticists, is mental processes: words like “believe,” “know,” and “think.” It turns out that languages differ in how they break up the field. They seem to cover about the same domain, but in different ways. This is another one of those cases where the fact that English was studied first was very misleading, as English has a very idiosyncratic way of doing this. So the English word “know,” or even “believe,” is very hard to translate. “Belief” is almost impossible to translate, and “know a language” is not commonly said that way in other languages. They say you “speak a language,” “a language is in you,” or you “have a language,” whereas in English you say you know a language. And that has led down a huge garden path, making people think that “having” a language involves propositional attitudes – you only know something if there are beliefs, which must be verified beliefs, and so forth. You know the rest of the story. But nothing like that is true of “having” language; there's no propositional attitude, there are no beliefs, there's no verification, so none of these questions arise. If only we said “I have a language” instead of “I know a language,” all that probably would have been eliminated. The same is true of a word like “belief”; those who speak other languages recognize that they just don't have a word like that. But English happens to be a highly nominalizing language, so everything is nominalized and there are “beliefs.”

And that can lead to the idea that there's a belief, and a belief-desire psychology, and all sorts of other things which may or may not be true but don't have the obvious linguistic anchor in other languages. The difference between “I believe that” and “I think that” … there are languages that have a word that really means believe. Hebrew, for example, has a word “believe,” but it doesn't mean what English means by “believe”; it means something like “I have faith in it.” The word used for English “believe” is just “I think.” Lots of languages are like that. The point is that there is a semantic field there that's broken up in different ways, and you can be very seriously misled if you take one of the ways of breaking it up as if it had metaphysical implications. It doesn't tell us what the world is like; it's just the way we break up the field, which goes back to Hume's point.

Lila pointed out correctly that, in the learning of words, there are questions about lexical semantics and I don't know how to answer them, but the way to look at this heuristically might be to go back to something like field theory. Lila pointed out that the learning of words is very complex, which is okay, but I think it makes more sense to say that it's not hard enough.

For example, Lila correctly remarked that there is a cue, namely the reference to the world, which gives straightforward information, as in the case of “elephant.” But in fact it doesn't, for the reasons that Locke gave. An elephant is not that thing over there; rather, it is something that has psychic continuity, like Sylvester in my grandchildren's stories. And there's nothing in the thing that tells you that. Even for a real elephant in the zoo, there's nothing that tells you it's going to be the same elephant if it turns into a rock, as in a story. That's all the things we know, basically the expansion of Locke's point, and those are things that are foundational in a cognitive sense.

So you have this huge structure of semantic space, of perceptual space, that we don't know much about, and that's determining where these things are placed in it, and they don't end up having any ontological character. At this point the question of Jerry Fodor's atomism came up.[25] Jerry gives a strong argument that you can't define words; you can take almost any word you like and you're not going to find a full definition of it. His conclusion is that they're atoms, but that's too strict a demand; there are positions in between. We're familiar with them from phonology. Take, for example, my pronunciation of “ba.” There's never going to be a definition of that – it varies every time I talk, it's different if I have a cold, it's different from my brother's, and so on. And nobody expects us to have a definition of it, but we don't just say it's an atom. It's different from, say, “pa,” and it's different from Arabic, which doesn't have “pa,” and you can make all kinds of observations about “ba.” This is all within the context of distinctive feature theory, which imposes a kind of grid on these systems and identifies all these relations that are real, but they don't define the act; rather they give you some kind of framework within which the act takes place.

So you neither have a definition nor an atom; you have a structure, and it looks to me as if words are the same. To take Jerry's famous example: “kill” and “cause to die.”[26] He points out that they're not synonymous, but on the other hand there is something similar about them: there's a victim and he ends up dead. If we knew enough about the multi-dimensionality of this system, we'd probably say that's like a distinctive feature, and these things fit into a grid somehow. We don't get a full definition, but we do get structure, so there is something to look at between atomism and definitions.
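
As a toy picture of that intermediate position – my own construction, with invented feature names, not an analysis from the text – a small grid can register the relations between items without pretending to define any of them:

```python
# A toy "grid" (my own, not from the text): a few distinctive features
# position /b/ and /p/ relative to each other without defining any
# particular pronunciation, and a couple of crude, invented semantic
# features do the same for "kill" vs. "cause to die".

phonemes = {
    "b": {"voiced": True,  "labial": True},
    "p": {"voiced": False, "labial": True},
}

verbs = {
    "kill":         {"cause": True, "result_dead": True, "direct": True},
    "cause to die": {"cause": True, "result_dead": True, "direct": False},
}

def contrast(grid, a, b):
    """Return the features on which two items differ -- the relations the
    grid makes visible, without defining either item exhaustively."""
    return {f for f in grid[a] if grid[a][f] != grid[b][f]}

print(contrast(phonemes, "b", "p"))             # {'voiced'}
print(contrast(verbs, "kill", "cause to die"))  # {'direct'}
```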

Let's turn to the question which came up about ontology: if I say “there is a something or other,” can we introduce Quine's Principle of Ontological Commitment[27] (which Jim Higginbotham brought up – see page 154)? I think we can make some distinctions here, going back to Rochel Gelman's distinctions between core and HoW (see page 226). In the core system – the common-sense system – we'll get some answers, but we'll get different answers in the HoW systems. To take an example that's irritating me, take Madrid, where I wasted eight hours the other day, and take the sentence:

(6) Madrid exists.

(Unfortunately, it does; that's why I wasted eight hours at the airport there the other day. Incidentally, for any super Basque nationalists around here, the best argument I've heard for secession of the Basque Country is that you don't have to go through the Madrid airport to get here.) Madrid certainly exists, but I know that nothing exists that is simultaneously abstract and concrete. Yet Madrid is simultaneously abstract and concrete.[28] That's obvious, as Madrid could burn down to the ground, so it's obviously concrete, and it could be rebuilt somewhere else out of different materials, maybe two millennia later (like Carthage could be rebuilt), and it would still be Madrid, so it's highly abstract. I know perfectly well that nothing can exist that's simultaneously abstract and concrete, so I'm in a contradiction: Madrid exists and it can't exist. That may be true at the common-sense core level – my common-sense concepts can't deal with this situation. But that's fine, as there's no reason why they should. On the other hand, if I move to the HoW level, I'm not going to posit an entity, Madrid, at the same levels of abstraction from the physical world – and remember that anything you do with the HoW system is at some level of abstraction.

Gabby Dover raised the question whether there are laws of form (see Chapter 6) – you could similarly ask whether there are laws of nature. If you want to be just a string theorist or a quantum theorist, saying that there is nothing but strings or quarks, then there aren't any laws of nature of the kind usually assumed; there are just quarks moving around. Hilary Putnam once made a good comment about that: he said it's a boring law of nature that square pegs can't fit into round holes.[29] It's not a very interesting law, but it's a law of nature, yet you can't state it in quantum theory, so if you're a quantum theorist, it's not a law of nature but just some freak accident. But we know that that doesn't make sense; you can't even talk unless you pick some level of abstraction. Incidentally, Gabby Dover picked a level of abstraction – individuals and phenotypes – which makes sense, as you can say interesting things about them, but they are very abstract. An individual from the point of view of physics is an incredibly complex notion: particular individuals are changing every second, and so are phenotypes. Every time you take a breath, or think a thought, the phenotype is changing. However, we sensibly abstract away from all of that, and we're still going to be interested in what's inside our skin. That is an individual, and we keep it that way even though it changes and so on. There's nothing wrong with that, but it is a very high level of abstraction. It makes sense because it has internal coherence, you can make comments about it, and you can formulate the theory of evolution in terms of it, but the same is true of any other level of abstraction, so why do you have to pick that one?
