Of Minds and Language

Editors: Pello Salaburu, Massimo Piattelli-Palmarini, Juan Uriagereka

One such possible strategy discernible above is optimization “for free, directly from physics.” That is, as some structures develop, physical principles cause them automatically to be optimized. We reviewed above some evidence for arbor optimization via fluid dynamics, and for nematode ganglion layout optimization via “mesh of springs” force-directed placement simulation. As could be seen for each of the neural optimization examples above, some of this structure from physics depends in turn on exploiting anomalies of the computational order (Cherniak 2008). While neuron arbors seem to optimize on an embryological timescale, component placement optimization appears to proceed much more slowly, on an evolutionary timescale. For component placement optimization, there is the chicken-and-egg question of whether components begin in particular loci and make connections, or instead start with their interconnections and then adjust their positions, or some mix of both causal directions. It is worth noting that both a force-directed placement algorithm for ganglion layout, and also genetic algorithms for layout of ganglia and of cortex areas, suggest that simple “connections → placement” optimization processes can suffice.
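To give a concrete feel for the “mesh of springs” idea, here is a minimal sketch in Python of force-directed placement in one dimension. The components, connections, and fixed sensor/muscle anchors are entirely hypothetical, and the relaxation loop is only a toy of the kind of simulation referred to above, not Cherniak's actual model.

```python
# A minimal "mesh of springs" sketch (illustrative only; all names,
# positions, and connections are hypothetical). Movable components are
# pulled by spring-like connections toward each other and toward fixed
# anchors (sensors/muscles at set body positions); relaxation settles
# into a short-wire layout.

movable = ["G1", "G2", "G3"]                          # ganglia free to move
anchors = {"head_sensor": 0.0, "tail_muscle": 1.0}    # fixed body sites

connections = [
    ("head_sensor", "G1"), ("G1", "G2"),
    ("G2", "G3"), ("G3", "tail_muscle"), ("G1", "G3"),
]

# Initial 1-D placement: anchors fixed, ganglia start out of order.
pos = dict(anchors)
pos.update({"G1": 0.9, "G2": 0.1, "G3": 0.5})

def wiring_cost(p):
    """Total wire length (sum of connection spans) under placement p."""
    return sum(abs(p[a] - p[b]) for a, b in connections)

# Force-directed relaxation: each movable component moves to the mean
# position of everything it connects to (the equilibrium of equal springs).
for _ in range(100):
    for c in movable:
        nbrs = [b if a == c else a for a, b in connections if c in (a, b)]
        pos[c] = sum(pos[n] for n in nbrs) / len(nbrs)

print({c: round(p, 3) for c, p in pos.items() if c in movable})
print("wiring cost:", round(wiring_cost(pos), 3))
```

Each movable component settles at the equilibrium of the equal “springs” pulling on it, and total wire length shrinks as the layout relaxes.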

If the brain had unbounded connection resources, there would be no need or pressure to refine the employment of wiring. So, to begin with, the very fact of neural finitude appears to drive “save wire” fine-grained minimization of connections. Another part of the functional role of such optimization may lie in the picture here of “physics → optimization → neural structure”: optimization may be the means to anatomy. At least our own brain is often characterized as the most complex structure known in the universe. Perhaps the harmony of neuroanatomy and physics provides an economical means of self-organizing complex structure generation, easing brain structure transmissibility through the “genomic bottleneck” (Cherniak 1988, 1992) – the limited information-carrying capacity of the genome. This constitutes a thesis of non-genomic nativism: some innate complex biological structure is not encoded in DNA, but instead derives from basic physical principles (Cherniak 1992, 2005).

The moral concerns “pre-formatting” not only for evolutionary theory, but also for modeling the mind. Seeing neuroanatomy so intimately meshed with the computational order of the universe turns attention to constraints on the computationalist thesis of the hardware-independence of mind; the practical latitude for alternative realizations narrows.

Discussion

PARTICIPANT: I am a biologist and I'm interested in this concept of minimality or perfect design in terms of language. Coming from immunology, we have a mixture of very nice design and also huge waste. That is to say, every day you make a billion cells which you just throw in the bin because they make antibodies you don't need that day. And I am wondering whether in the brain there is a combination of huge waste in terms of enormous numbers of cells, and beautiful design of the cell itself and the way it copes with incoming information. Some neurons take something like 40,000 inputs, and there doesn't seem to be any great sense in having 40,000 inputs unless the cell knows how to make perfect use of them. And that seems to be something that very little is written about. The assumption is that the cell just takes inputs and adds them up and does nothing much with them. But I would suggest that there may be something much more interesting going on inside the cell, and that focusing on the perfect design of the cell might be more attractive and more productive than looking at perfect design in terms of the network as a whole, which is hugely wasteful in having far too many cells for what is needed. I wonder if you would like to comment on that.

CHERNIAK: Just to start by reviewing a couple of points my presentation garbled: anyone around biology, or methodology of biology, knows the wisdom is that evolution satisfices (the term “satisfice” is from Herbert Simon 1956). The design problems are so crushingly difficult that even with the Universe as Engineer, you can't optimize perfectly; rather, you just satisfice. And so, I remember literally the evening when we first pressed the button on our reasonably debugged code for brute-force search of ganglion layouts of that worm I showed you, to check on how well minimized the wiring was; I certainly asked myself what I expected. We had already done some of the work on neuron arbor optimization, and so I figured that the nematode (C. elegans) wiring would be doing better than a kick in the head, but that it would be like designing an automobile: you want the car to go fast, yet also to get good mileage – there are all these competing desiderata. So when our searches instead found perfect optimization, my reaction was to break out in a cold sweat. I mean, quite happily; obviously the result was interesting.

One open question, of course: it is easy to see why you would want to save wire; but why you would want to save it to the nth degree is a puzzle. One pacifier or comfort blanket I took refuge in was the work Randy Gallistel referred to on sensory optimalities (see “Foundational Abstractions,” this volume). Just in the course of my own education, I knew of the beautiful Hecht, Shlaer, and Pirenne (1942) experiments showing the human retina operating at absolute quantum limits. And the similar story, that if our hearing were any more sensitive, we would just be hearing Brownian motion: you can detect a movement of your eardrum that is less than the diameter of a hydrogen atom. A third sensory case (obviously, I'm scrambling to remember these) is olfactory sensitivity – the Bombyx silk moth, for example. Romance is a complicated project; the moths' “antennas” are actually noses that are able to detect single pheromone molecules. If you look at the titration, males are literally making Go/No-go decisions on single molecules in terms of steering when they are homing in like that. However, these are all peripheral cases of optimality, and they don't go interior; so that is one reason why I wanted to see if we could come up with mechanisms to achieve internal wiring minimization. Another reassurance we sought was to look at other cases of possible neural optimization. The claim cannot be that there is optimization everywhere; we cannot say that on the basis of what we are seeing. Rather, the issue is whether or not there are other reasonably clear examples of this network optimization. Now, some of the work that got lost in my talk improvisation is on cortex layout; so you are moving from the nematode's approximately one-dimensional nervous system to the essentially two-dimensional one of the cerebral cortex (which is much more like a microchip in terms of layout). And cortex results are similar to the worm's. For cortex, you need more tricks to evaluate wiring optimality. But still, when we search alternative layouts, we can argue that the actual layout of cat cortex attains wiring-minimization at least somewhere in the top one-billionth of all possible layouts. As an outside admirer, I find the single cell a prettier, less messy world than these multi-cellular systems. I would point out that the work I showed you on arbor optimization is at the single-cell level – actually at the sub-cellular level, in the sense that it is for the layout of single arbors. (The one caveat is that those arbors are approximately two-dimensional. The mathematics is somewhat simpler than for 3D.)
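As a rough illustration of what “searching alternative layouts” involves, the following Python sketch ranks a hypothetical component ordering by its total wire length against all alternative orderings, or against a random sample when exhaustive search is infeasible. Everything in it is made up for illustration; it is a toy of the evaluation step, not the nematode or cat-cortex analysis itself.

```python
# Illustrative sketch (not the search code described above): estimate how
# well an actual component ordering minimizes total wiring by comparing
# its cost against alternative layouts. Components, connections, and the
# "actual" order are hypothetical.
import random
from itertools import permutations

components = ["A", "B", "C", "D", "E", "F"]
connections = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"),
               ("E", "F"), ("A", "C"), ("B", "D")]
actual_order = ["A", "B", "C", "D", "E", "F"]   # layout to evaluate

def cost(order):
    """Total wire length with components at integer positions in `order`."""
    slot = {c: i for i, c in enumerate(order)}
    return sum(abs(slot[a] - slot[b]) for a, b in connections)

actual = cost(actual_order)

# With six components we can enumerate all 720 layouts exhaustively,
# the analogue of a brute-force ganglion-layout search.
all_costs = sorted(cost(list(p)) for p in permutations(components))
rank = sum(c < actual for c in all_costs) + 1
print(f"actual cost {actual}, rank {rank} of {len(all_costs)} layouts")

# For larger systems, random sampling gives a rough percentile instead.
samples = [cost(random.sample(components, len(components))) for _ in range(10_000)]
better = sum(c < actual for c in samples)
print(f"fraction of sampled layouts beating the actual one: {better / len(samples):.4f}")
```

The reported fraction is the analogue of the “top one-billionth” figure: the proportion of possible layouts that would beat the actual one on wire length.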

HAUSER: I may not have the story completely right, but I was reading some of the work of Adrian Bejan (Bejan and Marden 2006), an engineer at Duke, who has made somewhat similar kinds of arguments as you have about tree structure, and especially about the notion of optimal flow of energy or resources. In a section of one of his books, he makes the argument that there is a necessary binary bifurcation in many tree structures at a certain level of granularity. This is probably a leap, but in thinking about some of the arguments concerning tree structure in language, is it possible that there is more than mere metaphor here? In other words, could the fact that trees, lightning, neurons, and capillaries all show binary branching indicate that this is an optimal solution across the board, including the way in which the mind computes tree structures in language? Could this be the way language had to work?

CHERNIAK: Yes, that is a classic sort of inter-level connection, and I don't think it is just metaphorical. When we went into this field, all the network optimization theory, all the graph theory for arbors, had been done for what are called Steiner trees. (The usual history-of-mathematics story, misnamed after Jacob Steiner of the nineteenth century; but in fact you can find work on the idea going back to the Italian Renaissance, within the research program of Euclidean geometry.) The classical models assume trunks cost the same as branches, and so we had to retrofit four centuries of graph theory to cover cases where trunks cost more than branches – as they usually do in nature. So that is the one caveat on this. But if you go back to the classic uniform wire-gauge models, then the usual theorems are in fact that optimal trees will have such bifurcating nodes; this is a completely abstract result. A caution I hasten to add is: there is another type of tree, the minimal spanning tree. With Steiner trees, you are allowed to put in internodal junctions, and you get a combinatorial explosion of alternative topologies. The largest Steiner trees that have been solved by supercomputer have perhaps around a hundred nodes. There are more towns than that in Tennessee, so the computational limits on Steiner trees are very much like the traveling salesman problem. But if you instead look at this other type of tree (“minimal spanning tree” probably approximates a standard name), in this case junctions are only permitted at nodes or terminals, which is not of course what you see for neuron arbors. However, minimal spanning trees are incredibly fast to generate, and indeed the most beautiful algorithms in the universe that I know of are for generating minimal spanning trees. You see quarter-million-node sets being solved. Anyway, if you look at the neuron cell body, you can treat that one case as a local minimal spanning tree, and the theorem there is: not two, but six branches maximum. And indeed micrographs of retinal ganglion cells show six branches from the soma. Anyway, again, regarding your query, it's a theorem of graph theory that optimal Steiner trees have binary bifurcations. And, yes, I agree, this is germane to theorizing about tree structures in linguistics.
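Since minimal spanning trees come up here as the tractable contrast case, the following is a small hedged sketch of Prim's algorithm in Python on a hypothetical point set: edges may join only the given points, with no added Steiner junctions, which is what makes the construction so fast. It illustrates the general technique, not any specific algorithm cited in the discussion.

```python
# A minimal sketch of Prim's algorithm for a Euclidean minimum spanning
# tree: junctions are allowed only at the given points, unlike Steiner
# trees, which may add internodal junction points. The point set is
# hypothetical and purely illustrative.
import math

points = [(0.0, 0.0), (1.0, 0.2), (2.1, 0.0), (0.9, 1.1), (2.0, 1.3)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

in_tree = {0}          # grow the tree from an arbitrary starting point
edges = []
while len(in_tree) < len(points):
    # Greedily attach the outside point closest to any point already in the tree.
    i, j = min(
        ((i, j) for i in in_tree for j in range(len(points)) if j not in in_tree),
        key=lambda e: dist(points[e[0]], points[e[1]]),
    )
    in_tree.add(j)
    edges.append((i, j))

total = sum(dist(points[i], points[j]) for i, j in edges)
print("MST edges:", edges)
print("total length:", round(total, 3))
```

Even this naive version is fast for small sets; with the standard priority-queue refinement, spanning trees over very large point sets remain tractable, whereas exact Steiner trees blow up combinatorially, as noted above.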

PART II
On Language
CHAPTER 9
Hierarchy, Merge, and Truth

Wolfram Hinzen

9.1 The origin of truth

I'd like to speak about what I think is a rather novel problem on the scientific landscape, the origin and explanation of human semantics – the system of the kind of meanings or thoughts that we can express in language. In recent decades we have seen a very thorough description and systematization of semantics, using formal tools from logic, but moving from there to explanation requires, I believe, quite different tools and considerations. I'd like to offer some thoughts in this direction.

It is fairly clear that the realm of human meanings is highly systemic: you cannot know the meaning of only seventeen linguistic expressions, say, or 17,000. That's for the same reason that you can't know, say, only seventeen natural numbers, or 17,000. If you know one natural number – you really know what a particular number term means – then you know infinitely many: you master a generative principle. The same is true for your understanding of a single sentence: if you know one, you know infinitely many. So, this is what I call the systemic or “algebraic” aspect of number or language. The question, then, is where this system of meanings comes from, and how to explain it.

Actually, though, this systemic aspect of human meaning is not what is most interesting and mysterious about it. Even more mysterious is what I will call the intentional dimension of human semantics. You could, if you wanted to, simply use language to generate what I want to call a complex concept: you begin with “unicorn,” say, a noun. Then you modify it by, say, “bipedal,” which results in the object of thought “bipedal unicorn,” and then you can modify again, resulting in “sleepless bipedal unicorn,” “quick, sleepless, bipedal unicorn,” “bald, quick, sleepless, bipedal unicorn,” and so on, endlessly. Each of these constructions describes a discrete and novel object of thought, entirely irrespective of whether such an object ever existed or will exist: our conceptual combinatorics is unconstrained, except by the rules of grammar. It is unconstrained, in particular, by what is true, physically real, or by what exists. We can think about what does not exist, is false, and could only be true in a universe physically different from ours. We approach the intentional dimension of language or thought when we consider applying a concept to something (“this here is a bald, … bipedal unicorn”), or if we make a judgment of truth (“that there are no bipedal unicorns is true”).
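A toy sketch may help make the combinatorics vivid. Assuming nothing beyond the chapter's own example, the Python fragment below treats each modification step as merging a modifier with the concept built so far into a nested pair, so every step yields a new, hierarchically structured object of thought, with no constraint from what exists or is true; it is an illustration of the point, not a model from the chapter.

```python
# Toy illustration of unbounded concept formation: each step merges a
# modifier with the concept built so far, yielding a new nested object.
# The words are the chapter's own example; the representation (nested
# pairs) is just an illustrative choice.
concept = "unicorn"
for modifier in ["bipedal", "sleepless", "quick", "bald"]:
    concept = (modifier, concept)   # a discrete, novel "complex concept"
    print(concept)

# ('bipedal', 'unicorn')
# ('sleepless', ('bipedal', 'unicorn'))
# ('quick', ('sleepless', ('bipedal', 'unicorn')))
# ('bald', ('quick', ('sleepless', ('bipedal', 'unicorn'))))
```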

Crucially, there is an asymmetric relation between the (complex) concepts that we construct, on the one hand, and the judgments into which they enter, on the other. In order to apply a concept, we need to have formed the concept first; it is hard to see how we could refer to a person, say, without having a concept of a person. Similarly, in order to make a judgment of truth, we need to have assembled first the proposition that we make the judgment about. Progressing from the concept to the truth value also requires quite different grammatical principles, and for all of these reasons the distinction between conceptual and intentional information seems to be quite real (see further, Hinzen 2006a).

Our basic capacity of judgment, of distinguishing the true from the false, is likely a human universal, and I take it that few parents (judging from myself) find themselves in a position of actually having to explain to an infant what truth is. That understanding apparently comes quite naturally, as a part of our normal ontogenetic and cognitive maturation, and seems like a condition for learning anything. Descartes characterizes this ability in the beginning of his Discours (1637):
