Fig. 22.14. Grand averaged ERPs for the critical verb, time-locked to the onset of the verb complex, for the prosodically correct (solid line) and prosodically incorrect (dotted line) conditions. Stress is on the word ANNA in both conditions. Source: adapted from Steinhauer et al. 1999.
At this point we can formulate a tentative conclusion. We can say that auditory language comprehension is supported by separable, specific fronto-temporal networks for semantic and syntactic processes in the LH and for prosodic processes mainly in the RH. Syntactic structure-building precedes lexical-semantic processes and can block them: when word category information is incorrect, semantic integration is not licensed and thus is not carried out. During normal auditory language comprehension, syntactic processes interact with prosodic processes. A good prediction concerning the neural basis of this interaction might be that there must be interhemispheric communication in order to guarantee this very fast online interaction between syntactic and prosodic processes. But how can we test this?
Ultimate evidence for interhemispheric interaction comes from patients with lesions in the corpus callosum, the neural structure connecting the two hemispheres (CC patients) (Friederici et al. 2007b). These are very rare patients: in our patient pool of 1,500 we found only ten subjects with such lesions, but they are interesting to study. In our subjects the CC was not interrupted entirely but only at different portions (see Fig. 22.15), and that is very interesting for the following reason. We know that the two temporal areas, namely the left and right STG, are connected by fibers crossing the CC in its posterior portion (Huang et al. 2005). The prediction here is that if the prosodic mismatch effect at the verb, which we observed in the previous experiment with normals, really is due to an interaction between the LH and RH, then such an effect should not be observable in CC patients, particularly in those with lesions in the posterior portion of the CC. We also included patients with lesions in the anterior portion of the CC; note that those have larger lesions. Thus, if we found that the patients with lesions in the posterior portion, in contrast to those with anterior CC lesions, did not show the interaction effect, we could at least say it was not due to the size of the lesion.
Fig. 22.16 displays the results for the critical verb. For our control subjects an N400 can be observed. For the CC patients with anterior lesions, the N400 is somewhat reduced but still significant. In contrast, for those with lesions in the posterior CC, there is no effect whatsoever. From this finding we may conclude that, due to the lesions in the posterior portion of the CC, prosodic information (RH) cannot misguide the syntactic parser (LH). That is, patients with lesions in the posterior CC do not make a wrong prediction for a particular verb category and therefore do not show a prosody-induced mismatch effect.
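To make the analysis step concrete, here is a minimal sketch of how a grand-averaged ERP comparison of the kind shown in Figs. 22.14 and 22.16 can be computed. The sampling rate, electrode index, time window, and array shapes are illustrative assumptions, not the actual pipeline used in the studies cited.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the studies)
FS = 250                   # sampling rate in Hz
PZ = 12                    # index of electrode Pz in this hypothetical montage
N400_WIN = (0.30, 0.50)    # typical N400 time window in seconds post-onset

def grand_average(epochs_per_subject):
    """Average epochs within each subject, then average across subjects.

    epochs_per_subject: list of arrays, each (n_trials, n_channels, n_samples),
    time-locked to the onset of the critical verb complex.
    """
    subject_erps = [ep.mean(axis=0) for ep in epochs_per_subject]
    return np.mean(subject_erps, axis=0)      # (n_channels, n_samples)

def n400_amplitude(erp, fs=FS, window=N400_WIN, channel=PZ):
    """Mean amplitude at Pz in the N400 window, relative to stimulus onset."""
    start, stop = (int(t * fs) for t in window)
    return erp[channel, start:stop].mean()

# Condition effect = incorrect-prosody ERP minus correct-prosody ERP;
# a more negative value in the N400 window indicates an N400-like effect.
# erp_incorrect = grand_average(epochs_incorrect)
# erp_correct   = grand_average(epochs_correct)
# effect = n400_amplitude(erp_incorrect) - n400_amplitude(erp_correct)
```

On this sketch, the group comparison amounts to computing the effect separately for controls, anterior-lesion, and posterior-lesion patients and asking in which groups it differs reliably from zero.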
Fig. 22.15. Lesion location of the corpus callosum (CC) in the patients tested. Quantitative measures of lesions in the CC from the anterior to the posterior part are presented in the lower part of the figure. Source: adapted from Friederici et al. 2007b.
Fig. 22.16. Grand averaged ERPs for the critical verb complex in the prosodically incorrect (dotted line) and correct (solid line) condition for the different groups at electrode Pz. Source: adapted from Friederici et al. 2007b.
Fig. 22.17. Grand averaged ERPs for the critical verb complex in the semantically incorrect (dotted line) and correct (solid line) condition for the different groups at electrode Pz. Source: adapted from Friederici et al. 2007b.
But before this conclusion can be drawn, it must be demonstrated that the CC patients, and in particular those with lesions in the posterior portion, do show an N400 in principle, that is, when it is not dependent on prosodic information. To test this we used our sentence material that in previous experiments had elicited an N400. All our patient groups, and certainly the controls, show a nice N400 (see Fig. 22.17). From this we can conclude that auditory language comprehension is supported by separable, specific temporo-frontal networks for semantic and syntactic processes in the LH and for prosodic processes in the RH, and that the two hemispheres normally interact during the comprehension of spoken language. The posterior portion of the CC plays a crucial role in the interaction between syntactic and prosodic information.
Before ending, just a little experiment to entertain you on the interaction of prosody and semantics. Going beyond language as such, we can look at emotional prosody. Earlier we showed the interaction between the LH and RH with respect to structural issues, but how about semantics? As the only semantics really encoded in prosody is emotional information, we conducted a priming study (Schirmer et al. 2008) in which our subjects were presented with sentences that had either a happy or a sad intonation with quite neutral wording, for example:
(9) Ich komme gerade vom Essen
'I am just coming back from lunch'
So, what would happen once we primed target words with either a happy or a sad sentence prosody? The target words were either positive, like Geschmack (taste), or negative, like Übelkeit (nausea). Subjects had to listen to a sentence, then hear one of the two target words and make a lexical decision on that word. We varied the following parameter: there was either a 200 ms lag or a 750 ms lag between the sentence offset and the word onset, after which subjects did the lexical decision task. What one would expect, if the prosodic information is encoded by the semantic-conceptual system, is an N400. The observed results were different for men and women. Men did not show any N400 effect for the short interstimulus interval, while for the long interval they did. Women, in contrast, showed the semantic mismatch effect between the target word and the prior sentence already at the short interval. From this we tentatively concluded that semantic-emotional and prosodic-emotional processes interact during language comprehension, and that women use prosodic-emotional information earlier than men. You may reach your own conclusions on that. But now the question is: is it that men cannot process prosodic information early in principle [laughter], or can they just decide whether they want to do it or not? [laughter]
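As an illustration of the design just described, the sketch below builds a fully crossed, randomized trial list (sentence prosody x target valence x interstimulus interval). The item names, repetition count, and congruence coding are hypothetical placeholders standing in for the actual German stimuli.

```python
import itertools
import random

# Hypothetical placeholder items; the real study used spoken German sentences
PRIME_PROSODY = ["happy", "sad"]                       # emotional prosody of the prime sentence
TARGET_WORDS = {"positive": "Geschmack", "negative": "Uebelkeit"}
ISI_MS = [200, 750]                                    # lag between sentence offset and target onset

def build_trials(n_repeats=10, seed=0):
    """Fully crossed trial list for the prosody-semantics priming design."""
    rng = random.Random(seed)
    cells = list(itertools.product(PRIME_PROSODY, TARGET_WORDS.items(), ISI_MS))
    trials = []
    for _ in range(n_repeats):
        for prosody, (valence, word), isi in cells:
            trials.append({
                "prime_prosody": prosody,
                "target_valence": valence,
                "target_word": word,
                "isi_ms": isi,
                # congruent when prosodic emotion and target valence match
                "congruent": (prosody == "happy") == (valence == "positive"),
            })
    rng.shuffle(trials)
    return trials

trials = build_trials()
print(len(trials), trials[0])
```

The N400 comparison then contrasts incongruent against congruent trials separately for the 200 ms and 750 ms conditions and for each sex.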
In the next experiment we used the stimulus material with the short interval between the offset of the sentence and the target word. But now, in addition to the lexical decision task used in the previous experiment, all subjects also had to make an emotional judgment; that is, they had to pay attention to the emotional information. Not surprisingly, men now showed the N400 even with the short interval of 200 ms. So the conclusion is that women always process emotional prosody early [laughter], and that men only do so when required by the circumstances. I have to tell you we had a hard time trying to get that published [laughter]. We were even given the feedback that these findings and their interpretation were not politically correct. But these are the data.
With this talk I hope to have shown you that we can look at the brain as it processes language online. In the beginning we started with a model of language processing, and in the end I think we have a good idea of how these different processes are mapped spatially and temporally within the brain.
Let me stress that all this work would not be possible without excellent colleagues and particularly without the work of a lot of excellent Ph.D. students.
GLEITMAN: I was very puzzled, because although not brain scanned, perhaps I have been brainwashed by my very close colleagues, Trueswell and Tanenhaus, and others, who I suppose are talking about rapid online interaction between syntactic and semantic processes (for instance in studies that Merrill Garrett and colleagues are carrying out at the University of Arizona). These processes are incremental, and there is no prior stage of simply building structure.
FRIEDERICI: Yes, I think there are two issues here. Looking at the effects for local structure-building, they show up between 150 and 200 ms, prior to semantic processes. That is one issue. The other issue concerns the material used, and I have posed this question to Trueswell and Tanenhaus and everybody else working with their material. I always ask them about the prosody of their material. Mostly they use auditory input, as they also apply it in studies with children, and they always tell me that prosody is "normal," and I do not know what that means. I think even subtle prosodic cues in their material can influence where you attach the prepositional phrases and how you resolve the ambiguity.
GLEITMAN: Well, I do not want to badger, but the first studies they did were reading studies, eye-tracking reading, so there is no question of prosody there. It is self-paced reading, so they get the same results there. Those were their first results.
FRIEDERICI: Well, I think self-paced reading is not the same thing as looking at the brain directly. During self-paced reading you have to process the information and then you have to make a reaction. I think these reading data are compatible with the third phase in our model, where we assume that all information types interact. And this is around 500–600 ms.
PARTICIPANT: Thanks for your talk, it has been very enlightening. Do you see a connection between your findings and work about first-language acquisition, where the mother is speaking to her children and it is mainly language lessons?
FRIEDERICI: Well, I think it would take a complete lesson to give you the relevant data on acquisition. But to answer your question briefly: yes I do, in the following sense. First of all, the closure positive shift that we see with the processing of intonational phrase boundaries is also observed in very young infants. Secondly, we have recent data which I really think demonstrate that infants pick up acoustic, phonological information quite early. This is a collaborative study with Anne Christophe from Paris. What we have been looking at is the age at which infants are able to detect the stress pattern of their native language. In German, as in English, two-syllable words are mostly stressed on the first syllable, but in French the stress is on the second syllable. In a mismatch negativity paradigm, where you hear for example a succession of three stimuli and then a deviant stimulus, that is, stress on the first, first, first, and then on the second syllable, infants by the age of 4–5 months react to those deviant stimuli. Now here comes the interesting issue. The German infants are significantly more likely to react to the deviant with the stress on the second syllable than to the deviant with the stress on the first syllable. For the French infants by the age of 4–5 months we find the reverse pattern. So they do not react to all deviants in the experiment, but only to the deviants that are rare in their target language. So the input from the mother is really important during early acquisition.
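As a schematic way to think about the oddball structure just described, the sketch below generates a stimulus sequence in which runs of standards (stress on the first syllable, the German pattern) are occasionally interrupted by a deviant (stress on the second syllable). The deviant probability, minimum run length, and labels are illustrative assumptions, not the parameters of the actual infant study.

```python
import random

def oddball_sequence(n_trials=200, standard="stress-initial", deviant="stress-final",
                     deviant_prob=0.2, min_run=3, seed=0):
    """Oddball sequence in which each deviant is preceded by at least `min_run`
    standards, mirroring the 'first, first, first, then second syllable' structure."""
    rng = random.Random(seed)
    seq, since_deviant = [], 0
    while len(seq) < n_trials:
        if since_deviant >= min_run and rng.random() < deviant_prob:
            seq.append(deviant)
            since_deviant = 0
        else:
            seq.append(standard)
            since_deviant += 1
    return seq

print(oddball_sequence()[:12])
```

For the French infants the roles would simply be swapped, with the stress-final pattern serving as the standard and the stress-initial pattern as the deviant.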