
Trends in Neuroscience and Education 2 (2013) 7–12


2211-9493/$ - see front matter © 2013 Elsevier GmbH. All rights reserved.
http://dx.doi.org/10.1016/j.tine.2013.03.001

E-mail address: [email protected]

journal homepage: www.elsevier.com/locate/tine

Research in Perspective

Communicating brains from neuroscience to phenomenology

Manfred Spitzer, Ulm

Keywords: Communication; fMRI; Neuronal coupling; Phenomenology; Improvisation


Within the scientific community, language is often discussed with disregard for its communicative function: grammar, phonetics, semantics, pragmatics, i.e., the entire field of semiotics, takes language as a product. Psycholinguistics is about the production and comprehension of language, with the mental lexicon, articulation, lexical access, and other functions of a single mind. Similarly, for more than 150 years, good old hermeneutics in the humanities departments has been about the art of comprehending written text, and has the single person reading a book and thinking about its meaning as its subject of interest. Only in the 1950s did the philosopher Ludwig Wittgenstein initiate another way of looking at language: no longer regarded as a product, but rather as an action. People act with one another, and language is an action performed together. As there is no script, this action is improvised on the spot, and a dialogue on a theme is therefore quite like an improvised session in music.

Research in improvisation shows that skilled improvisers can accomplish better performance when two people agree to act jointly [14,15,20,21] than when one person leads and the other agrees to follow. In a joint motion paradigm, which was used to study the speed and complexity of the movements made spontaneously by two people in a motion interaction game, such joint action was accomplished only by people with years of training in improvisation, either in music or in theatre.

As regards language and dialogue, however, we can safely assume that, given the immense training human beings get when growing up immersed in their life-world and mother tongue, most humans are masters of conjoint language improvisation. Human beings spend several hours a day producing joint language actions, ranging from gossip, chatting, and small talk all the way to serious dialogue and discussion [11,13].

Upon jointly remembering a story, two or three people complete each other's sentences, mutually fill in words in each other's utterances, check and cross-check what the others are saying, and change the roles of speaker and listener with incredible ease and speed, as experimental studies have demonstrated [2,3,19]. This presupposes a degree of coordination and cooperation that amazes the scientists who investigate human communication. At the same time, such conjoint behavior is so common that its underlying complexity is hardly noticed by anyone engaged in such "routine" action.

In yet another experiment, two partners who could only listen to one another but were prevented from seeing each other had to communicate in order to solve a problem: each partner had a deck of cards, and the deck of partner 1 was randomly laid out and seen by partner 1. Partner 2 had to put his cards down in the same order, just by listening to the instructions of partner 1. On the first round this took quite some time, as each card had to be carefully described by partner 1 such that it could be identified by partner 2 and sorted according to the given order, which also had to be described [1]. However, upon repetition of the task (with the same cards, but in a different order) it was accomplished much faster, as the terms used for describing the cards had already been established. And when the task was performed a third time, both partners almost used "telegraph style" language for quick and efficient communication.

During the process of communication, language changes: it becomes increasingly efficient and synchronized, and displays convergence on different levels of description. In their paper "Two minds, one dialog: Coordinating speaking and understanding", the authors sum up their methods and findings as follows: "The combination of behavioral evidence in the context of an experimentally controlled setting, synchronized with speech documented in the transcript, has provided powerful evidence for common ground or partially and mutually shared mental representations that presumably accumulate in the minds of both partners as they interact (whether in a laboratory experiment or in everyday conversation). [...] The process typically results in entrainment, or convergence and synchronization between partners on various linguistic and paralinguistic levels, including in wording, syntax, speaking rate, gestures, eye-gaze fixations, body position, postural sway, and sometimes pronunciation" ([1], p. 306).

Using functional magnetic resonance imaging (fMRI), Stephens et al. [22] from Princeton University took the analysis of the process of communication between people one step further, as they characterized the brain activation patterns that correspond to successful communication in one speaker and eleven listeners. While in the scanner, the speaker had to produce a story from real life, on the spot (i.e., the story was not previously memorized), and was instructed "to speak as if telling the story to a friend" ([22], p. 14425). The story was recorded and then played to the listeners, one by one, in the MRI scanner, and brain activation was recorded, "thereby capturing the time-locked neural dynamics from both sides of the communication" ([22], p. 14425). Finally, the degree of listener comprehension was assessed using a detailed questionnaire.

Data analysis was not performed as usual in fMRI, i.e., by contrasting brain activity from various previously well-defined states of mind. Instead, the brain activity of the speaker over the time course of the story was correlated with the brain activity of the 11 listeners upon hearing the story. In this way, small brain volume elements (voxels) were identified that show a similar time course of activation.
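In essence, this is an inter-subject correlation computed voxel by voxel. A minimal sketch of the idea with synthetic data (the array shapes and all numbers are purely illustrative, not the study's actual pipeline):

```python
import numpy as np

def voxelwise_coupling(speaker, listeners):
    """Correlate the speaker's time course with the mean listener
    time course, separately for every voxel.

    speaker   : array (n_voxels, n_timepoints)
    listeners : array (n_subjects, n_voxels, n_timepoints)
    Returns an array of per-voxel Pearson r values.
    """
    mean_listener = listeners.mean(axis=0)            # average across listeners
    s = speaker - speaker.mean(axis=1, keepdims=True)
    l = mean_listener - mean_listener.mean(axis=1, keepdims=True)
    num = (s * l).sum(axis=1)
    den = np.sqrt((s ** 2).sum(axis=1) * (l ** 2).sum(axis=1))
    return num / den

# toy data: 100 voxels, 200 timepoints, 11 listeners
rng = np.random.default_rng(0)
speaker = rng.standard_normal((100, 200))
# first 50 voxels carry the speaker's signal plus noise; the rest are uncoupled
listeners = np.tile(speaker, (11, 1, 1)) + 2.0 * rng.standard_normal((11, 100, 200))
listeners[:, 50:, :] = rng.standard_normal((11, 50, 200))
r = voxelwise_coupling(speaker, listeners)
print(r[:50].mean(), r[50:].mean())  # coupled voxels show a clearly higher mean r
```

Voxels whose time course is shared across brains stand out by their high r, without any stimulus model being specified in advance.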

In previous studies of subjects watching a video, this method had already been used to find increased activity in the very brain area that processes faces whenever a face in portrait format was seen on the screen. The point here is that the face area can thus be detected without searching for it, i.e., without showing faces and non-faces and then contrasting brain activity. Instead, you look at common, time-locked activity among the subjects watching the video, and whenever there is a spot in the brain at a given point in time that gets activated more or less in all the subjects' brains, you look at the video and check what's there. Interestingly, using this method, a "hand-action area" was found, as there was correlated activation across subjects whenever a hand doing something was present on the screen [7].

Fig. 1. Principle of data analysis by Stephens and coworkers. The time course of activity in small volume elements (voxels) in the brain of the speaker is used to predict the time course of activity in corresponding voxels in the brains of the listeners (redrawn and adapted from [22]). Synchronization was achieved by using the timing of the words spoken on the recording.

This method of analyzing fMRI data is by now well established [6]. It has the advantage of being less constrained by the experimental design, which of course comes with the downside of a less straightforward interpretation of the data: the search space from which to take hypotheses can be huge. Moreover, upon watching a video not only stimulus-driven brain areas are active; there is also "intrinsic" brain activity, which has been intensely studied and described as default network activation [5]. You can even mathematically (Fourier-) transform the time course of brain activity and identify frequency components common to all subjects over time. Thus a hierarchy of temporal receptive windows in human cortex [8] has been described: occipital areas respond earlier, with fast activation frequencies, to stimulus characteristics, whereas fronto-temporal areas respond primarily with slower frequencies to high-level (e.g., social) aspects of stimuli [12].
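The frequency-based variant can be sketched in the same spirit: a slow "narrative" component shared by all subjects shows up as a common spectral peak. (Synthetic data; the 0.02 Hz component, the TR of 2 s, and all other numbers are made up for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_t, dt = 11, 256, 2.0            # 11 subjects, 256 volumes, TR = 2 s
t = np.arange(n_t) * dt

# every subject carries the same slow 0.02 Hz component plus individual noise
shared = np.sin(2 * np.pi * 0.02 * t)
signals = shared + 0.5 * rng.standard_normal((n_subj, n_t))

# Fourier-transform each subject's time course and average the power spectra
power = np.abs(np.fft.rfft(signals, axis=1)) ** 2
freqs = np.fft.rfftfreq(n_t, d=dt)
mean_power = power.mean(axis=0)
peak = freqs[np.argmax(mean_power[1:]) + 1]   # skip the DC bin
print(f"shared component peaks near {peak:.3f} Hz")
```

A frequency band that dominates the averaged spectrum across subjects corresponds to a time scale on which their brains follow the stimulus in common.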

In the study of Stephens et al. [22], the correlation method of data analysis was applied as follows: the time-locked common brain activity across all the listeners was calculated and then correlated with the time-locked brain activity of the speaker (see Fig. 1). At first blush, this appears to make little sense, as the brain activity during such different activities as speaking and listening is clearly different. So why should you look for common brain activity in speaker and listeners? If language is produced in motor language areas and understood in sensory language areas (after all, this distinction between Broca's and Wernicke's areas dates back more than a century and is one of the most well-established facts about brain function), such an analysis should lead to nothing!

This criticism, however, turns out to rest on a too simple-minded theory of the mindful brain. Moreover, given that the speaker hears what he says, his sensory language comprehension system should display activation quite similar to that of the same brain areas in the listeners [16,19]. Furthermore, we have known for some time that activity in cortical leg motor areas is increased just upon hearing the word "kick" [17]. Thus, the neuronal representation of the meaning of "kick" includes motor programs for the movement of kicking. Even phonemes are represented down to their motor aspects: if you listen to the labial closure consonant "p", lip motor neurons become active, whereas upon listening to a "t" (a tongue closure consonant), tongue motor neurons become active [18].

Fig. 2. Schematic rendition of the results of the study by Stephens et al. [22] (redrawn from their Fig. 2). Left hemisphere sagittal slices, from periphery towards center (top to bottom), with brain regions of significant speaker–listener neuronal coupling depicted in orange color (named from left to right). (a) Auditory areas; (b) superior temporal gyrus, temporo-parietal junction (both overlap with Wernicke's area; W); inferior occipital gyrus; (c) dorsolateral prefrontal cortex, Broca's area [B], parietal lobule; (d) orbitofrontal cortex, striatum; (e) medial prefrontal cortex, precuneus. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

In addition, we know that the production and comprehension of language are performed by a large number of interacting cortical areas. "Mirror neurons", "imitation centers", the "motor theory of language comprehension" [4], and a large number of related concepts support the view that the cortex is organized in a modular fashion, with a few dozen hierarchically organized and bidirectionally connected hidden layers sandwiched between areas devoted to sensory input and motor output.

To sum up the rationale for the data analysis: as speaker and listeners hear the spoken story in synchrony, we can safely assume that cortical areas involved in hearing should be active in synchrony. And if the production and understanding of language involve similar areas, up to the thought processes involved in planning and comprehending, then there should be even more overlap in brain activation between speaker and listeners. This was in fact the case: the analysis of speaker–listener neural coupling revealed areas of common activation over and above the expected cortical areas of auditory information processing (cf. Fig. 2).

Speaker–listener neural coupling turned out to be widespread, extending well beyond low-level auditory areas (Fig. 2a) and language comprehension areas (b), through motor language areas (c), all the way to areas related to thinking, planning, and cognitive control (c–e). In addition, areas in the right hemisphere homologous to the left-hemisphere language areas were found to be active, which is indicative of remote associative processes being involved (Weisbrod et al., 1998).

A second neuronal coupling analysis was performed between the listeners only, which yielded quite similar results, i.e., an extensive overlap with the maps shown in Fig. 2, although the areas were larger. This was to be expected, as all the listeners performed exactly the same task, and therefore their brain activation patterns over time should be similar.

The similarity of the speaker–listener neuronal coupling (indicative of communication) to the listener–listener neuronal coupling (indicative of mere comprehension) led the authors to conduct two more experiments. These served as controls and were performed for the sole purpose of demonstrating that the areas depicted in Fig. 2 do in fact serve the function of communication. For the first control, the entire experiment was repeated with a Russian speaker and 11 non-Russian-speaking listeners, with the same type of analysis. Neither speaker–listener coupling nor listener–listener coupling was found. Even when using lower significance thresholds, only low-level auditory areas were found. Thus the areas found in the first experiment (and shown in Fig. 2) can be inferred to be driven by the "successful processing of incoming information", as the authors note (p. 14426).

A second control experiment was performed, in which the speaker told another story in the scanner, and this activity was used for an analysis of neuronal coupling with the listeners' activity from the first experiment. No coupling was found, even though there was sending and receiving of information; but just as in the first control experiment, there was no communication between speaker and listeners. "We therefore conclude that coupling across interlocutors emerges only while engaged in shared communication", the authors note (p. 14427).

The authors did not stop their analyses here, but performed additional coupling analyses with the time shifted in 1.5 s intervals, such that the speaker's brain activation was correlated with the listeners' brain activation not just at the same time, but at four additional time points, with the listeners' brain activation lagging up to 6 s behind the speaker's. This makes sense, as the listeners' brains should quite naturally lag behind the speaker's in their activity, because a listener reproduces what a speaker produces. In addition, as it may be possible that the listener anticipates what the speaker is going to say, neuronal coupling was also analyzed with the temporal order reversed, i.e., with the listeners preceding the speaker.

Fig. 3. Neuronal coupling across all significantly coupled brain areas in the listener–listener (blue) and speaker–listener (red) coupling analyses, with time shifted up to 6 s in both directions from the moment of utterance (i.e., time point 0). Between the listeners, coupling is largest with their brains synchronized to the time of the utterance, whereas coupling between speaker and listeners is largest when the speaker precedes the listeners by 1.5–3 s (redrawn from [22], Fig. 3, p. 14428). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 4. Schematic rendition of the results of the study by Stephens et al. [22] (redrawn from their Fig. 3B). Left hemisphere sagittal slices (with the exception of the striatum in (d)), from periphery towards center (top to bottom), with brain regions of significant speaker–listener neuronal coupling. Areas with synchronous coupling are depicted in yellow, areas in blue depict areas in the listeners lagging behind the speaker, areas in red depict areas in the listener preceding the speaker. (a and b) auditory and perceptual language areas (yellow) are simultaneously active in speaker and listener; (b and e) brain areas engaged in deeper language processing lag behind in the listener (blue); (c–e) in areas of thinking and planning (mPFC, dlPFC, and striatum) the listeners' brain activity precedes the speaker's brain activity (red). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

As a control, this type of time-shifted neuronal coupling analysis was also performed on the listener–listener coupling. This showed that their brains functioned in sync, time-locked to the moment of the utterance (cf. Fig. 3, blue dotted line). In contrast, the speaker–listener time-shifted neuronal coupling analysis demonstrated that most areas in the listeners' brains indeed lagged behind the speaker's brain activity (Fig. 3, red line).

However, the general picture of the speech-comprehending brain lagging behind the speech-producing brain turned out to be more complex upon close scrutiny of single brain areas. Areas of early auditory processing became simultaneously active in speaker and listener, while areas of language comprehension lagged behind in the listener. As these two types of areas made up most of the activated areas, they determined the mean, depicted in red in Fig. 3. Most interestingly, however, there were also areas that were active at an earlier point in time in the listener compared to the speaker, i.e., whose activity in the listener preceded the activity in the corresponding areas of the speaker (cf. Fig. 4). These areas included the striatum as well as anterior frontal areas, such as the medial prefrontal cortex (mPFC) and the dorsolateral prefrontal cortex (dlPFC). Thus, the listener not only follows the speaker, but also anticipates what he is going to be told.
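The time-shifted analysis amounts to evaluating the speaker–listener correlation at a series of lags and asking where it peaks. A toy sketch with synthetic time courses (lags in sample units rather than seconds; all names and numbers are illustrative):

```python
import numpy as np

def lagged_coupling(speaker, listener, max_lag):
    """Pearson r between speaker and listener time courses for lags
    -max_lag..+max_lag (in samples). A positive lag means the listener
    follows the speaker."""
    rs = {}
    n = len(speaker)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            s, l = speaker[:n - lag], listener[lag:]
        else:
            s, l = speaker[-lag:], listener[:n + lag]
        rs[lag] = np.corrcoef(s, l)[0, 1]
    return rs

rng = np.random.default_rng(2)
speaker = rng.standard_normal(500)
# the listener's signal follows the speaker's by 2 samples, plus noise
listener = np.roll(speaker, 2) + 0.3 * rng.standard_normal(500)
rs = lagged_coupling(speaker, listener, max_lag=4)
best = max(rs, key=rs.get)
print(best)  # 2: coupling peaks with the listener lagging the speaker
```

A peak at a negative lag would correspond to the anticipatory coupling described above, with the listener's activity preceding the speaker's.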

The higher-level cognitive (thinking) part of comprehension is thus a highly active process of constructing in advance, while lower-level areas reconstruct from sensory data. This obviously serves understanding: once I know what I am going to be told next, I am in possession of a powerful filter that allows me to decode even the noisiest language input signals. This corresponds to the matched-filter theorem in information theory, which is used in advanced telecommunication technology. Further studies may investigate whether this effect of preceding coupling on comprehension depends upon the quality of the language input (hypothesis to be tested: the noisier the input, the larger the effect of such anticipatory filtering and therefore the larger the amount of preceding neuronal coupling in the listener).
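The matched-filter intuition can be made concrete in a few lines: correlating a noisy input with a template of the expected signal recovers the event even when the raw trace looks like noise. (A generic signal-processing toy, not a model of the fMRI data.)

```python
import numpy as np

rng = np.random.default_rng(3)
template = np.hanning(32)                 # the "expected" waveform
signal = np.zeros(1000)
signal[400:432] += template               # the event actually occurs at t = 400
noisy = signal + 0.5 * rng.standard_normal(1000)  # buried in noise

# matched filter = correlate the input with the expected template
score = np.correlate(noisy, template, mode="valid")
print(int(np.argmax(score)))              # ≈ 400: the event is recovered
```

Knowing what to expect (the template) is precisely the "powerful filter" the text describes: the detector fails if the template is wrong or missing.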

Most importantly, the amount of comprehension was assessed in each listener and turned out to be a function of neuronal coupling. If listeners were ranked according to their amount of story comprehension, this rank correlated with the extent (the number of voxels) of neuronal coupling between speaker and listener, with a significant r of 0.55. In other words, the more the brain of the speaker is in sync with that of the listener, the better the communication. "These findings suggest that the stronger the neural coupling between interlocutors the better the understanding" (Stephens et al., 2010, p. 3).
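The computation behind such a ranking analysis is a Spearman rank correlation. A sketch with entirely made-up per-listener numbers (eleven listeners, as in the study; neither the scores nor the voxel counts are the study's actual data):

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation: the Pearson r of the rank-transformed data
    (valid here because there are no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# hypothetical per-listener values: comprehension score and number of
# significantly coupled voxels
comprehension = np.array([12, 18, 9, 22, 15, 20, 7, 17, 11, 19, 14])
coupled_voxels = np.array([310, 520, 240, 610, 400, 480, 200, 450, 330, 560, 370])
print(round(spearman_r(comprehension, coupled_voxels), 2))
```

Ranking makes the measure robust to the (unknown) scaling of both the questionnaire scores and the voxel counts.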

This correlation even increases further, to r = 0.75 (p < 0.01), if only the areas of preceding activity in the listener are used for the computation of the correlation coefficient. Successful communication, it appears, is not just passive reception of the input, but rather consists of the listener's active planning of what is going to be said by the speaker in the imminent future. This takes time, and a good speaker will allow enough time for the listeners to produce anticipatory brain activity and hence accomplish their task. If the speaker talks too slowly, however, the fleeting, ephemeral, continuously adapting filtering activity will already have vanished and will not accomplish what it is supposed to do. In the real world, the good speaker will be in a flow-like state, fully immersed in the minds of the listeners, with close eye contact monitoring their level of attention, in order to maximize comprehension. In the ideal case, the speaker–listener dyad will be in a true dialogue, with both engaged in the art of improvisation and neither of them leading, and exactly therefore producing the maximal amount of communication.

Fig. 5. The mathematician and philosopher Edmund Husserl (1859–1938) founded the philosophical investigation of the mind he called phenomenology.

The study by Stephens et al. [22] sheds new light upon analyses of perceptual processes by the phenomenologist Edmund Husserl [10], dating back a century (Fig. 5). According to Husserl, perception should not be construed as passive reception but rather as active construction. In this process, the most recent conscious content as well as anticipated content play an important role. Husserl demonstrated this in his famous and very simple example of perceiving a tone: if we conceive of what we refer to as "now" as a mere point in time, a tone (which by definition is temporally extended) cannot exist. There needs to be an extended "horizon" of the immediate past and the imminent future, as well as some form of integration. Husserl used technical terms, retention and protention, for these two features of conscious perception of events extended in time: the immediate past and the imminent future, which both have to be integrated into the continuous present. Of course, this is even more the case when one listens to, and comprehends, a story.

As Husserl had been trained as a mathematician, the data and their analyses provided by Stephens and coworkers would have delighted him. In essence, the data give empirical substance and elaboration to what Husserl deduced from detailed description and careful analysis of human experience. The voice of the speaker "still sounds in me" (retention) when I, the listener, understand what he is saying, while I am already engaged in figuring out what the speaker is going to say next (protention). As Husserl [10] put it in his phenomenology of the internal consciousness of time: "Every memory contains expectations, the fulfillment of which leads to the present".

Some might say that phenomenology, i.e., the analysis of conscious acts by the thinking and experiencing subject with no further means than introspection and thought, is a century-old endeavor that has nothing to do with present-day neuroscience. But I think that this does not take philosophy seriously, but instead prevents true understanding and reflection. Once I realize

- that my brain processes information within distinct modules dedicated to specific functions (i.e., types of information processing and data analysis), which are networked together in a highly ordered way, such that spatio-temporal input patterns are processed in a way that can, at least in principle, be described by neural network theories of pattern recognition and generation;

- that stored information (in the form of synaptic connections shaped by past experience) is used continuously in the neural processing of input signals in order to predict what is most likely to happen next (in order to act, and not merely react);

- and that the firing patterns of high-level (cognitive) areas (sandwiched between sensory input and motor output) can, and do, adaptively change the firing patterns of the input modules, thereby adaptively filtering the input,

I understand Husserl in a deeper way than just by reciting his words. I may even have some means for clarifying an ambiguous point he makes. He writes that "protentions of what is to come are constituted as empty" ([10], p. 52) but later explains that the future horizon "is opened continuously and becomes more lively and richer" ([10], p. 53). Thus he clearly sees that protentions take part in determining conscious content, i.e., that they are not strictly "empty". Instead, they contribute to the process of comprehension.

Because Husserl paid close attention to the way we experience time, he would have been highly interested to know that, upon comprehending a story, our language comprehension brain areas lag behind and are engaged in retention (integrating new material with past experience), whereas brain areas related to planning and language production engage in opening up new horizons and thereby adaptively filter the input by continuous prediction. What we call "now" should not be sought in an "even higher" brain area; rather, it is present in the senses, i.e., in the activation patterns of low-level sensory cortical areas. The mathematician Husserl would have highly appreciated that we use mathematical theories to describe how retention and protention increase the reliability of our perceptual processes, and that protention in particular plays a major role in successful communication! He would not have engaged in power struggles between the humanities and the sciences about who has a say in the ultimate elucidation of human thought and inquiry. His interest was in the things themselves.

And these things could not be more interesting! Human beings spend a lot of time communicating (about 4 h per day; cf. [11,13]). Who would have thought that this most complex social behavior could be investigated in unprecedented detail using the methods of brain research? How would we have figured out, without these methods, that language comprehension entails active planning that precedes the speaker's mental processes? There is now a way to pinpoint the components of comprehension, their material substrate (brain modules), and the intricate interaction between these components in space and time. Even Husserl did not think he could carry out a phenomenological analysis of the communicative process between two people, as this was far too messy and complex to yield clear results. So he turned to the most simple phenomena of mental life. Systems neuroscience likewise started with bursts of light and sound, and back then we would not even have dreamt of studying movies and dialogue, empathy and trust, communication and social norm compliance. Systems neuroscience has since moved on to complex phenomena, and for some years now there has been social neuroscience, i.e., the careful analysis, with neuroscience methods, of human beings as interacting agents. Husserl would have been delighted! And as long as we make progress in understanding ourselves, he would have been fine with whatever means it takes. Philosophers and neuroscientists take note: no quarrels needed!

References

[1] Brennan SE, Galati A, Kuhlen AK. Two minds, one dialog: coordinating speaking and understanding. Psychology of Learning and Motivation 2010;53:301–44.

[2] Clark HH, Krych MA. Speaking while monitoring addressees for understanding. Journal of Memory and Language 2004;50:62–81.

[3] Ekeocha JO, Brennan SE. Collaborative recall in face-to-face and electronic groups. Memory 2008;16:245–61.

[4] Galantucci B, Fowler CA, Turvey MT. The motor theory of speech perception reviewed. Psychonomic Bulletin and Review 2006;13:361–77.

[5] Golland Y, Bentin S, Gelbard H, Benjamini Y, Heller R, Nir Y, Hasson U, Malach R. Extrinsic and intrinsic systems in the posterior cortex of the human brain revealed during natural sensory stimulation. Cerebral Cortex 2007;17:766–77.

[6] Hasson U, Malach R, Heeger D. Reliability of cortical activity during natural stimulation. Trends in Cognitive Sciences 2010;14:40–8.

[7] Hasson U, Nir Y, Fuhrmann G, Malach R. Intersubject synchronization of cortical activity during natural vision. Science 2004;303:1634–40.

[8] Hasson U, Yang E, Vallines I, Heeger DJ, Rubin N. A hierarchy of temporal receptive windows in human cortex. Journal of Neuroscience 2008;28:2539–50.

[10] Husserl E. Zur Phänomenologie des inneren Zeitbewusstseins (1905). Hg. R. Boehm, Husserliana Bd. X. Den Haag: M. Nijhoff; 1966.

[11] Kahneman D, Krueger AB, Schkade DA, Schwarz N, Stone A. A survey method for characterizing daily life experience: the day reconstruction method. Science 2004;306:1776–80.

[12] Kauppi J-P, Jääskeläinen IP, Sams M, Tohka J. Inter-subject correlation of brain hemodynamic responses during watching a movie: localization in space and frequency. Frontiers in Neuroinformatics 2010;4(article 5):1–10, http://dx.doi.org/10.3389/fninf.2010.00005.

[13] Kean S. Red in tooth and claw among the literati. Science 2011;332:654–6.

[14] Noy L, Dekel E, Alon U. The mirror game as a paradigm for studying the dynamics of two people improvising motion together. Proceedings of the National Academy of Sciences of the United States of America 2011;108:20947–52.

[15] Oullier O, de Guzman GC, Jantzen KJ, Lagarde J, Kelso JA. Social coordination dynamics: measuring human bonding. Social Neuroscience 2008;3(2):178–92.

[16] Pickering MJ, Garrod S. Do people use language production to make predictions during comprehension? Trends in Cognitive Sciences 2007;11:105–10.

[17] Pulvermüller F. Brain mechanisms linking language and action. Nature Reviews Neuroscience 2005;6:576–82.

[18] Pulvermüller F, Huss M, Kherif F, Martin FM, Hauk O, Shtyrov Y. Motor cortex maps articulatory features of speech sounds. Proceedings of the National Academy of Sciences of the United States of America 2006;103:7865–70.

[19] Richardson DC, Dale R. Looking to understand: the coupling between speakers' and listeners' eye movements and its relationship to discourse comprehension. Cognitive Science 2005;29:39–54.

[20] Sebanz N, Bekkering H, Knoblich G. Joint action: bodies and minds moving together. Trends in Cognitive Sciences 2006;10:70–6.

[21] Shockley K, Richardson D, Dale R. Conversation and coordinative structures. Topics in Cognitive Science 2009;1:305–19.

[22] Stephens GJ, Silbert LJ, Hasson U. Speaker–listener neural coupling underlies successful communication. Proceedings of the National Academy of Sciences of the United States of America 2010;107:14425–30.

[23] Weisbrod M, Maier S, Harig S, Himmelsbach U, Spitzer M. Lateralized semantic and indirect semantic priming effects in people with schizophrenia. British Journal of Psychiatry 1998;172:142–6.