

Cognitive Brain Research 12 (2001) 181–198
www.elsevier.com/locate/bres

    Research report

Auditory perception of laughing and crying activates human amygdala regardless of attentional state

Kerstin Sander*, Henning Scheich

    Leibniz Institute for Neurobiology, Brenneckestrasse 6, 39118 Magdeburg, Germany

    Accepted 20 March 2001

    Abstract

Adequate behavioral responses to socially relevant stimuli are often impaired after lesions of the amygdala. Such lesions concern especially the recognition of facial and sometimes of vocal expression of emotions. Using low-noise functional magnetic resonance imaging (fMRI), we investigated in which way the amygdala, auditory cortex and insula are involved in the processing of affective nonverbal vocalizations (Laughing and Crying) in healthy humans. The same samples of male and female Laughing and Crying were presented in different experimental conditions: simply listening to the stimuli, self-induction of the corresponding emotions while listening, and detection of artificial pitch shifts in the same stimuli. All conditions activated the amygdala similarly and bilaterally, whereby the amount of activation was larger in the right amygdala. The auditory cortex was more strongly activated by Laughing than by Crying, with a slight right-hemisphere advantage for Laughing, both likely due to acoustic stimulus features. The insula was bilaterally activated in all conditions. The mean signal intensity change with stimulation was much larger in the amygdala than in auditory cortex and insula. The amygdala results seem to be in accordance with the right-hemisphere hypothesis of emotion processing, which may not be applicable as strongly to the level of auditory cortex or insula. © 2001 Elsevier Science B.V. All rights reserved.

    Theme: Neural basis of behavior

    Topic: Neuroethology

    Keywords: Amygdala; Auditory cortex; Insula; Audition; Affect; fMRI; Nonverbal; Vocalization; Human

1. Introduction

A precondition for efficient communication is the ability to differentiate between socially relevant and irrelevant information. Responses to socially relevant stimuli are often inappropriate after bilateral lesions of the amygdala, indicating the prominent involvement of this structure in the processing of such signals in animals [46] as well as humans [4,20,21,27,53]. Thereby the human amygdala seems to be primarily involved in the recognition and expression of emotions [27]. Impairments of such functions were reported after bilateral as well as unilateral amygdala damage [1,21]. Especially the recognition of emotional information conveyed by facial [3] and in some instances by vocal expressions of emotion [66] was affected. Dysfunctions mainly concerned the recognition of fear [3,6,8,21,66], while the cognitive concept of fear of these patients (expressed in language) was not impaired [3,8,66].

Various imaging studies of the amygdala also provided insight into hemispheric and valence processing of emotional information. The activation of the amygdala by fearful, sad and happy facial expressions in healthy humans was revealed using functional magnetic resonance imaging (fMRI) [20,64,72] and positron emission tomography (PET) [53]. Results of these studies varied with respect to the laterality of amygdala activation, viz. left amygdala activation during the perception of fearful faces [53], left amygdala activation during the perception of both fearful and happy faces [64], bilateral amygdala activation in response to fearful faces, but only left amygdala activation in response to happy faces [20], and bilateral amygdala activation in response to fearful and happy faces, whereby signal intensity increased for fearful faces but decreased for happy faces [72].

*Corresponding author. Tel.: +49-391-6263-126; fax: +49-391-6263-328. E-mail address: [email protected] (K. Sander).

0926-6410/01/$ – see front matter © 2001 Elsevier Science B.V. All rights reserved. PII: S0926-6410(01)00045-3


As another type of visual stimulation, viewing emotional video clips resulted in a larger right amygdala PET activation than viewing emotionally neutral video clips [26]. In a fMRI study using pictures with emotional scenarios, the overall brain activity was lateralized to the left hemisphere for positive pictures and to the right hemisphere for negative pictures when experience of valence was equated for arousal [28], whereby no significant activation of the amygdala in response to negative pictures was found.

Taken together, these studies addressing the processing of visually presented emotional stimuli provide unequivocal support neither for the valence hypothesis of emotion processing (left hemisphere processing of positive emotions, right hemisphere of negative emotions) [33] nor for the lateralization hypothesis (right hemisphere specialization for emotion processing regardless of its valence) [16].

In the auditory domain the present status with respect to the above hypotheses and amygdala involvement is even less clear, presumably due to fewer studies. But both with dichotic listening to different emotionally toned sentences [40] as well as with dichotic listening to nonverbal emotional human voices (laughing, crying, shrieking) [29], a left ear advantage was found in support of the lateralization hypothesis.

In contrast to the numerous studies of visually expressed emotions, there are only a few studies supporting an amygdala involvement in the processing of emotionally relevant acoustic stimuli in humans. Impaired auditory recognition of prosodic and nonverbal expressions of fear and anger in a patient following bilateral amygdala damage was reported [66], but also intact recognition of these vocal expressions of fear in another patient with bilateral amygdala damage [8] (see also Ref. [2]). In a PET study with healthy humans, no significant activation of the amygdala was found during emotion identification in differentially intonated words [42]. This finding was attributed to the absence of emotional intonations of fear in the task, although negative emotional intonations such as surprise, anger, and disgust were tested. Likewise, in another PET study in which subjects listened to music of varying degree of dissonance (known to elicit negative emotional responses, i.e. unpleasantness), activated structures were primarily found in the right hemisphere for dissonant and consonant music, but no amygdala activation was seen [15]. In contrast, in a fMRI study an activation of the right human amygdala in response to nonverbal expressions of fear was demonstrated [57]. In addition, the authors also reported an activation lateralization to the right hemisphere, thus supporting the lateralization hypothesis of emotion processing.

With respect to learning, the role of the amygdala seems to be more evident. Buchel et al. [22] using fMRI demonstrated rapid habituation of human amygdala activation to acoustically conditioned stimuli in an aversive (auditory) trace conditioning experiment. Such a habituation of amygdala activation was previously shown in classical conditioning studies, in animals electrophysiologically with footshocks as unconditioned stimuli (auditory conditioning) [58], and with fMRI in humans with aversive tones, respectively, electric wrist shocks as unconditioned stimuli (visual conditioning) [23,49]. Auditory inputs to the amygdala are known [50], and may serve auditory conditioning as well as unconditioned processing of auditory information; e.g., animal studies revealed tone-responsive amygdala neurons in rats [50,58], cats [30,31], and primates [47] independent of fear conditioning.

To further elucidate the role of the amygdala in the processing of acoustically expressed emotions, the present study with low-noise fMRI [11,63] investigated affective nonverbal vocalizations (ANVOCs), such as laughing and crying, as stimuli in healthy humans. It is implied by general concepts of amygdala function [50] that such biologically fundamental emotional stimuli should activate this structure. The study pursued two general aims: (1) to test whether the amygdala is activated differentially by these positive and negative emotionally loaded ANVOCs, including lateralization questions; (2) to test whether an amygdala activation is modulated either by the emotional quality or by emotionally irrelevant cues in the ANVOCs.

Experiment I investigated whether the perception of stimuli here addressed as Laughing and Crying, simply played to the subjects, might be sufficient to activate the amygdala and whether there is any differential activation. The emphasis was not on determining the selectivity of amygdala activation by these stimuli with respect to other auditory stimuli, but rather on possible differential effects due to emotional valence. In experiment II, the same Laughing and Crying samples were used but, in order to scrutinize some of the results of experiment I, in a slightly modified block design of the fMRI experiment and with different tasks. One task was an emotional self-induction by imagining either happy or sad situations. The other task was to detect emotionally irrelevant cues, i.e. short pitch shifts introduced in the same laughing and crying stimuli. The hypothesis was that focusing on the emotional quality of Laughing and Crying would lead to a larger amygdala activation than performing an emotionally irrelevant detection task on these same stimuli. In experiment II, with a modified plane of imaging, auditory cortex and insular cortex in addition to the amygdala were analyzed. The insula is known to be responsive to acoustic stimulation [9,14,70]. Furthermore, in primates there exist anatomical connections between the amygdala and the auditory cortex [7,48] as well as between the amygdala and the insula [9].

2. Materials and methods

2.1. Experiment I

2.1.1. Subjects

Eleven right handers [56] with normal hearing participated in the first experiment (ten women, one man; mean age 20.4 years, range 19–23 years).


Subjects gave written informed consent to the study, which was approved by the ethical committee of the University of Magdeburg.

2.1.2. Stimuli and procedure

The acoustic stimuli were recordings of continuous laughing and crying of both a woman and a man produced by professional actors (Figs. 1 and 2). In the fMRI block design, a Laughing block was constructed of 10 s male laughing followed by 10 s female laughing, and so forth until 60 s were reached (Fig. 3A). These blocks were alternated with Silence blocks of the same duration and repeated six times (sequence 1). Sequence 2 was designed like sequence 1, but all but the first Silence block were replaced by Task blocks (see below). Furthermore, there was a corresponding sequence of Crying alternated with Silence (sequence 3) and Crying alternated with the Task (sequence 4). There were two alternative orders of sequences, A (laughing versus task, crying versus task, laughing versus silence, crying versus silence) and B (crying versus task, laughing versus task, crying versus silence, laughing versus silence), which were counterbalanced across subjects.

In the Task block, two acoustically presented numbers had to be subtracted from a first number. After a break of 11 s, another three numbers were presented acoustically, from which the correct result could be chosen. The correct number was indicated by key pressing with the right index finger. This calculation task with acoustically presented numbers was introduced to interrupt possible emotional aftereffects of Laughing or Crying in the alternating sequence of the experiment. In addition, the spoken numbers provided at least some neutral speech control to the ANVOCs. Silence blocks served as additional references.

Subjects were asked to simply listen to the stimuli and to perform the cognitive task when required. Immediately before and after the experiment, a questionnaire about the subjects' momentary mood was filled out (Mehrdimensionaler Befindlichkeitsfragebogen) [69]. Stimuli were presented binaurally through fMRI-compatible electrodynamic headphones [12] at an individually adjusted, comfortable listening level (between 68 and 75 dB SPL). Stimulus files were presented via a personal computer.

2.1.3. Data acquisition

Subjects were scanned in a Bruker Biospec 3 T/60-cm system with a birdcage head-coil and an asymmetric gradient system (30 mT/m). Coronal slices were measured using anatomical MRI (T1 weighted, five slices, slice thickness 8 mm, TR 25.3 ms, TE 6.0 ms, matrix 256×256, FOV 200 mm, flip angle 40°) with the same orientation as the functional images. Functional images were collected using a noise-reduced gradient echo sequence (58 dB SPL) [11,63] with the following measurement parameters: TR = 260.83 ms, TE = 41.2 ms, flip angle = 15°. Functional images of five contiguous coronal slices with slice thickness of 8 mm and in-plane resolution of 2.8×2.8 mm² (64×64 voxels, 18×18 cm² FOV) were scanned. Coronal slices were centered on the amygdala (Fig. 4A). During each reference and stimulus block, five images were collected for all slices, one image every 12 s. From the resulting 60 images, the first image of each block was excluded from data analysis to compensate for delayed hemodynamic responses.

2.1.4. Data analyses

Functional data were controlled for motion artifacts with the AIR package [73,74]. The criterion for excluding data was a deviation of more than 1° in one of the three rotation angles or a translation of more than half a voxel. This procedure reduced sample size and left complete data sets of six subjects for statistical analyses. Thereafter, the matrix size of the functional images was set to 128×128 by pixel replication. Images were spatially smoothed by a Gaussian filter to reduce in-plane noise (FWHM = 2 pixels of the extended matrix, i.e. 2.8 mm) and were linearly trend corrected.

The amygdala, including some periamygdaloid surroundings, as region of interest (ROI) of both hemispheres, was analyzed by correlation analysis to obtain statistical parametric maps [10,38]. Functional data analysis was carried out with the custom-made software package KHORFu [37]. In each participant, significantly activated voxels (P<0.05) were determined for the four above-described experimental conditions, e.g. Laughing versus Task, Laughing versus Silence, etc. The basis for further data analysis were intensity weighted volumes (IWV) in the ROI. This is the product of the absolute number of significantly activated voxels and their mean change in signal intensity. The IWV cannot generally be assumed to be normally distributed and were therefore analyzed with a three-way rank analysis of variance with data alignment (nonparametric ANOVA) [17]. The aim of the data alignment (in a multi-factorial design) was to correct the effect of one particular factor with respect to the possible effects of the other factors and corresponding interactions, except for error effects, before performing a Kruskal–Wallis H-test on the data. In a two-factorial design, for example, before the main effect of factor A was tested, the influence of factor B and of the interaction A×B was eliminated. This was done, in accordance with the linear model of an ANOVA, by subtracting from the original data the corresponding mean of the cell AB_ij and adding the mean of the column A_i. Through the data alignment, the main effects of the three factors Experimental Condition, Hemisphere, and Sequence Order (to check the counterbalancing of sequences), as well as their first and second order interactions, were assessed by the Kruskal–Wallis H-test (H-test) or the Mann–Whitney U-test (U-test). In our case these are the three two-way interactions of Experimental Condition by Hemisphere, Experimental Condition by Sequence Order, and Hemisphere by Sequence Order, as well as the three-way interaction of Experimental Condition by Hemisphere by Sequence Order.
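The preprocessing chain just described (motion-based exclusion, pixel replication to 128×128, Gaussian smoothing with FWHM = 2 pixels, linear trend correction) can be made concrete with a short sketch. This is a minimal illustration in Python with NumPy/SciPy under assumed data layouts, not the AIR/KHORFu code actually used in the study; all function and variable names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def exceeds_motion_limits(rotations_deg, translations_vox):
    """Exclusion criterion from Section 2.1.4: more than 1 degree on any
    rotation angle, or more than half a voxel of translation."""
    return (np.abs(rotations_deg).max() > 1.0 or
            np.abs(translations_vox).max() > 0.5)

def preprocess_slice(img64):
    """Replicate a 64x64 image to 128x128 pixels, then smooth in-plane
    with a Gaussian of FWHM = 2 pixels of the extended matrix."""
    img128 = np.kron(img64, np.ones((2, 2)))          # pixel replication
    sigma = 2.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # convert FWHM to sigma
    return gaussian_filter(img128, sigma=sigma)

def detrend_linear(timeseries):
    """Remove a linear trend from one voxel's signal time course."""
    t = np.arange(timeseries.size)
    slope, intercept = np.polyfit(t, timeseries, 1)
    return timeseries - (slope * t + intercept)
```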


Fig. 1. Exemplary spectrograms of Laughing and Crying of the woman. Sonograms are shown in A, corresponding oscillograms in B. The top channels of A and B display Laughing, the bottom channels Crying. Each stimulus here has a duration of 45 s. Laughing has a higher average sound level and a higher repetition frequency of elements than Crying. The bottom of panels A and B shows the time scale (0–45 s), the right side in A the frequency scale (0–20 kHz), in B the percentage of maximum intensity.


Fig. 2. Exemplary spectrograms of Laughing and Crying of the man. Sonograms are shown in A, corresponding oscillograms in B. The top channels of A and B display Laughing, the bottom channels Crying. Each stimulus here has a duration of 45 s. Laughing has a higher average sound level and a higher repetition frequency of elements than Crying. The bottom of panels A and B shows the time scale (0–45 s), the right side in A the frequency scale (0–20 kHz), in B the percentage of maximum intensity.


Fig. 3. (A) Schematic overview of the fMRI block design of stimulus presentation (Laughing or Crying) and reference blocks (Silence or Task). Each block comprised 60 s, the whole experiment 12 min. (B) Mean activation strength (IWV) and S.E.M. during the perception of Laughing and Crying (collapsed for the two reference blocks Silence and Task) are shown separately for the left and right amygdala. (C) Mean signal time courses of significantly activated voxels of the experimental conditions in the left and right amygdala. The mean deviation from the overall mean is given in percent. A moving average of three neighboring values was used. IWV = product of the absolute number of significantly activated voxels and their mean change in signal intensity. n=6; *P=0.002.
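The two quantities plotted in Fig. 3, the IWV and the smoothed signal time courses, are simple to state in code. The following is a minimal sketch assuming a boolean map of significantly activated voxels and their percent signal changes; the array names are illustrative, not taken from the paper's software.

```python
import numpy as np

def intensity_weighted_volume(sig_mask, signal_change):
    """IWV = (number of significantly activated voxels) x
    (their mean change in signal intensity)."""
    n_voxels = int(sig_mask.sum())
    if n_voxels == 0:
        return 0.0
    return n_voxels * signal_change[sig_mask].mean()

def moving_average3(timecourse):
    """Moving average over three neighboring values, as used for the
    signal time courses in Fig. 3C."""
    return np.convolve(timecourse, np.ones(3) / 3.0, mode="valid")
```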


Table 1
Experiment I: Results of the three-way rank analysis of variance with data alignment of the influence of the factors Experimental condition, Hemisphere, and Sequence order, as well as their first and second order interactions, on amygdala activation, as tested by the Kruskal–Wallis H-test and the Mann–Whitney U-test

Effect | Statistical characteristic value | P value
Experimental condition | χ² = 9.847, df = 3 | 0.012*
Hemisphere | Z = −2.882 | 0.002*
Sequence order | Z = −0.926 | 0.530
Experimental condition × Hemisphere | χ² = 19.726, df = 7 | 0.002*
Experimental condition × Sequence order | χ² = 1.890, df = 7 | 0.978
Hemisphere × Sequence order | χ² = 5.038, df = 3 | 0.171
Experimental condition × Hemisphere × Sequence order | χ² = 12.630, df = 15 | 0.693

* Significant results (n=6).
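The aligned rank ANOVA behind Table 1 follows the recipe given in Section 2.1.4: before testing a main effect, subtract each observation's cell mean and add back the marginal mean of the tested factor, then apply a Kruskal–Wallis H-test. Here is a minimal two-factor sketch, assuming the IWV values sit in a pandas DataFrame; the column names are illustrative assumptions.

```python
import pandas as pd
from scipy.stats import kruskal

def aligned_main_effect(df, value="iwv", factor="condition", other="hemisphere"):
    """Kruskal-Wallis H-test on data aligned for the main effect of `factor`.

    Alignment (two-factor case): subtract the cell mean of factor x other
    and add back the marginal mean of `factor`, which removes the influence
    of the other factor and of the interaction before ranking.
    """
    cell_mean = df.groupby([factor, other])[value].transform("mean")
    marginal_mean = df.groupby(factor)[value].transform("mean")
    aligned = df[value] - cell_mean + marginal_mean
    groups = [aligned[df[factor] == level] for level in df[factor].unique()]
    return kruskal(*groups)  # (H statistic, P value)
```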

Post-hoc tests were Bonferroni-corrected Mann–Whitney U-tests. The level of significance used throughout was five percent error probability, Bonferroni adjusted for multiple comparisons if necessary. The focus was on statistically significant results of the rank analysis of variance, including calculated P values. A summary of all calculated P values of the three-way nonparametric ANOVA is presented in Table 1. Despite the fact that statistical analyses were based on nonparametric procedures, diagrams show arithmetic means and standard errors of the mean (S.E.M.).

The values of the three scales of the mood questionnaire (good/bad mood, awakeness/tiredness, rest/restlessness), obtained directly before and after the experiment, were analyzed with the Wilcoxon matched-pairs test.

2.2. Experiment II

2.2.1. Subjects

Thirteen right handers [56] with normal hearing participated in the second experiment (nine women, four men; mean age 24.5 years, range 20–36 years). The subjects were different from those of experiment I. They gave written informed consent to the study, which was approved by the ethical committee of the University of Magdeburg.

2.2.2. Stimuli

Acoustic stimuli were the same recordings of laughing and crying as in experiment I (Figs. 1 and 2). But since the results of experiment I suggested some effect of the long 60-s sequences of stimulus and reference blocks, the block design was changed. In experiment II, blocks of 45 s duration consisted of either the stimuli Male Laughing, Female Laughing, Male Crying, Female Crying or silence. Each stimulus block was followed by Silence, but the sequence of ANVOCs was randomized. The complete sequence contained five repetitions of each Laughing and Crying. At the beginning of a sequence there was a baseline period (silence for 3 min). Sequences began either with Laughing or Crying and were counterbalanced for quality (Laughing, Crying) and gender (female, male voice) of the ANVOCs.

2.2.3. Tasks and procedure

In experiment II the emphasis lay on experimental modulations of the amygdala activation by the ANVOCs. Therefore, two Tasks with an identical time frame were carried out by each subject in two successive experimental sessions. In the emotion-induction Task, subjects were instructed to internally evoke the emotional state represented by the ANVOCs, for instance, to remember situations which were in accordance with the presented stimuli. In the pitch-shift detection Task, occasional pitch shifts of the same ANVOCs had to be detected. For this, spectral frequencies of the ANVOCs were increased artificially by about 30% for 2 s (CoolEditPro; Syntrillium Software Corporation). This was done randomly three to four times per stimulus block, giving a sum of 34 targets. The order of tasks was counterbalanced across subjects. The criterion for a correct response was strict: during the 2 s of each pitch shift a response had to appear, otherwise it was not registered as a correct response. The detection task was difficult, warranting optimal attention to the presented stimuli.

In experiment II, the same mood questionnaire (before and after the experimental sessions) [69] and stimulus delivery [12] as in experiment I were used.

2.2.4. Data acquisition

Scanner and scanning procedure were as in experiment I, but slice orientation was parallel to the left Sylvian fissure (Fig. 5A).

2.2.5. Data analyses

The data were controlled for motion artifacts with the AIR package [73,74] using the same criteria as in experiment I. By this procedure, complete data sets of nine subjects were left for statistical analyses. Extension of matrix size, spatial smoothing, and linear trend correction were made as in experiment I.

Besides regions of interest (ROI) for the amygdala complex, ROI for auditory cortex and insula of both hemispheres, defined as previously described, were analyzed [11,63].
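The pitch-shift targets of Section 2.2.3 raised the spectral frequencies of a 2-s segment by about 30%. A crude sketch of that frequency scaling via resampling follows; note that plain resampling also shortens the segment, whereas the CoolEditPro manipulation used in the study preserved duration, so this only illustrates the ~30% scaling, with hypothetical parameter names.

```python
import numpy as np

def pitch_shift_segment(audio, fs, t_start, factor=1.3, dur=2.0):
    """Return a copy of `audio` (1-D float array, sample rate `fs`) in which
    the segment starting at `t_start` is read out `factor` times faster,
    multiplying every spectral frequency in that segment by `factor`."""
    i0 = int(t_start * fs)
    i1 = int((t_start + dur) * fs)
    segment = audio[i0:i1]
    # Reading the samples at 1.3x speed scales all frequencies up by 30%
    # (and, unlike the duration-preserving tool, shrinks the segment).
    positions = np.arange(0.0, len(segment) - 1, factor)
    shifted = np.interp(positions, np.arange(len(segment)), segment)
    return np.concatenate([audio[:i0], shifted, audio[i1:]])
```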


Fig. 4. (A) Left parasagittal view of the amygdala (turquoise area) with five coronal slices centered on the amygdala (experiment I). (B) Amygdala activation (P<0.05) of a single subject listening to ANVOCs. While Laughing activated the amygdala bilaterally, Crying activated only the left amygdala in this subject. R, right; L, left hemisphere.


Table 2
Experiment II: Results of the four-way rank analysis of variance with data alignment of the influence of the factors Gender of subjects, Hemisphere, Stimulus, and Task, as well as their first order interactions, on amygdala, auditory cortex, and insula activation, as tested by the Kruskal–Wallis H-test and the Mann–Whitney U-test

Effect | Amygdala | Auditory cortex | Insula
Gender | Z = −0.258, P = 0.904 | Z = −2.066, P = 0.045* | Z = −1.807, P = 0.095
Hemisphere | Z = −3.311, P < 0.0001* | Z = −1.192, P = 0.254 | Z = −2.340, P = 0.017*
Stimulus | Z = −1.280, P = 0.224 | Z = −2.517, P = 0.012* | Z = −2.782, P = 0.005*
Task | Z = −0.927, P = 0.388 | Z = −1.192, P = 0.259 | Z = −0.044, P = 1.000
Gender × Hemisphere | χ² = 5.105, df = 3, P = 0.164 | χ² = 0.439, df = 3, P = 0.947 | χ² = 9.363, df = 3, P = 0.009*
Gender × Stimulus | χ² = 9.830, df = 3, P = 0.006* | χ² = 3.620, df = 3, P = 0.327 | χ² = 12.637, df = 3, P < 0.0001*
Gender × Task | χ² = 2.579, df = 3, P = 0.493 | χ² = 9.421, df = 3, P = 0.008* | χ² = 6.333, df = 3, P = 0.088
Hemisphere × Stimulus | χ² = 3.432, df = 3, P = 0.328 | χ² = 4.068, df = 3, P = 0.266 | χ² = 1.817, df = 3, P = 0.626
Hemisphere × Task | χ² = 0.050, df = 3, P = 0.997 | χ² = 0.667, df = 3, P = 0.884 | χ² = 2.502, df = 3, P = 0.478
Stimulus × Task | χ² = 7.267, df = 3, P = 0.062 | χ² = 1.692, df = 3, P = 0.652 | χ² = 23.249, df = 3, P < 0.0001*

* Significant results (n=9).

Data acquisition and statistical analyses were the same as in experiment I. In addition to the two main experimental conditions (Laughing versus Silence and Crying versus Silence), cortical activation was also directly compared between Laughing and Crying blocks. In experiment II, the IWV were analyzed with a four-way rank analysis of variance with data alignment [17]. The main effects of the four factors Task, Hemisphere, Stimulus, and Subjects' Gender (referred to as Gender in the following text), as well as their first order interactions, were analyzed by the Kruskal–Wallis H-test or the Mann–Whitney U-test. These first order interactions are the six two-way interactions of Task by Hemisphere, Task by Stimulus, Task by Gender, Hemisphere by Stimulus, Hemisphere by Gender, and Stimulus by Gender. A summary of all calculated P values of the four-way nonparametric ANOVA in each ROI is presented in Table 2. The mood questionnaire was analyzed as in experiment I with the Wilcoxon matched-pairs test.

3. Results

3.1. Experiment I

3.1.1. Performance and state

The mean percentage of correct responses to the cognitive task was 85.3% (S.E.M. = 2.9), indicating that the subjects attended carefully to the presented stimuli. The results of the mood questionnaire showed no significant differences between scale values obtained directly before and after the experiment for the scales good/bad mood and rest/restlessness. There was a significant difference between values of the scale awakeness/tiredness (Wilcoxon matched-pairs test, P=0.008). Subjects felt somewhat tired after the experiment, but their average score on this scale lay only slightly under the scale's mean (11.3 versus 12). These results show that there was apparently no interaction between experiencing the emotional stimuli in the experiment and subjects' momentary mood before and after the experiment.

3.1.2. Amygdala

Listening to both Laughing and Crying activated the amygdala (Figs. 3B and 4B). While the mean activation was bilateral, bilaterality was not seen in all subjects. In the nonparametric ANOVA, there was a main effect of Hemisphere (U-test, P=0.002): the activation of the right amygdala across experimental conditions and sequence orders was significantly stronger than that of the left amygdala, as illustrated in Fig. 3B. The main effect of Experimental Condition was also significant (H-test, P=0.012, data not shown). The Laughing versus Task condition resulted in the strongest activation and differed significantly from both Crying versus Task and Laughing versus Silence (U-tests, P=0.008 for both comparisons, Bonferroni corrected). However, the interaction of the factors Experimental Condition × Hemisphere was significant (H-test, P=0.002). This means that the influences of the two factors Experimental Condition and Hemisphere on amygdala activation cannot be considered as completely independent from each other. With respect to the complete results of the three-way nonparametric ANOVA see Table 1.

Fig. 5. (A) Slice orientation in experiment II parallel to the left Sylvian fissure, with one slice dorsally and three slices ventrally to the Sylvian fissure, including the amygdala, insula and auditory cortex. (B) Activation pattern (P<0.05) of a single subject listening to Laughing and Crying in the pitch-shift detection Task. The activation during Laughing is shown in the top slice for the amygdala (blue; slice 1), in the middle slice for the insula (green; slice 2), and in all slices (1–3) for the auditory cortex (yellow to red, graded significance). The corresponding ROI are schematically outlined as white rectangles. Significance level of activated voxels: yellow P<0.05, red P<0.00001. R, right; L, left hemisphere.
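The questionnaire and post-hoc statistics in this section reduce to two standard nonparametric tests. A minimal sketch with scipy.stats, using hypothetical scale and IWV values purely for illustration:

```python
from scipy.stats import wilcoxon, mannwhitneyu

# Paired mood-scale values per subject, before and after scanning
# (hypothetical numbers, not the study's data).
before = [14, 15, 13, 16, 12, 15]
after = [12, 13, 11, 14, 11, 13]
stat, p = wilcoxon(before, after)  # Wilcoxon matched-pairs test
print(f"awakeness/tiredness: P = {p:.3f}")

# Post-hoc comparisons used Mann-Whitney U-tests with Bonferroni
# correction: multiply each P value by the number of comparisons.
iwv_right = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]  # hypothetical IWV values
iwv_left = [0.4, 0.6, 0.5, 0.7, 0.3, 0.6]
u, p_u = mannwhitneyu(iwv_right, iwv_left)
n_comparisons = 2
print(f"right vs. left: corrected P = {min(1.0, p_u * n_comparisons):.3f}")
```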


3.1.3. Signal time courses

Fig. 3C shows that the mean time course of significantly activated voxels of the experimental conditions followed the block design of stimulation sufficiently well, i.e. it was neither particularly phasic at the onset of stimulation nor did it habituate during the course of the experiment. The signal time courses of the left and right amygdala did not differ significantly.

3.2. Experiment II

3.2.1. Performance and state

The mean detection rate was 64.4% (S.E.M. = 7.7) correct identifications of artificial pitch shifts, reflecting the difficulty of the task. In the first and second experimental session, the results of the mood questionnaire showed no significant differences between scale values obtained directly before and after the experiment for the three scales good/bad mood, awakeness/tiredness and rest/restlessness. This again showed that there was apparently no interaction between experiencing the emotional stimuli during the experiment and subjects' momentary mood before and after the experiment.

3.2.2. Amygdala

Attempts to self-induce emotions while listening to Laughing or Crying activated the amygdala, as did detecting pitch shifts in the same material, with no difference between the two Tasks and no difference between Laughing and Crying (Fig. 5B top, Fig. 6A). While the mean activation was bilateral, bilaterality was not seen in all subjects (Table 3). There was a main effect of Hemisphere: the right amygdala was significantly more activated than the left amygdala (U-test, P<0.0001). Furthermore, the Gender × Stimulus interaction was significant (H-test, P=0.006). For women, the amygdala activation was significantly larger when listening to Laughing in comparison to Crying (U-test, P=0.002; Fig. 7A). None of the other post-hoc comparisons reached significance. With respect to the complete results of the four-way nonparametric ANOVA see Table 2.

3.2.3. Auditory cortex

The perception of Laughing and Crying led to strong bilateral activation of the auditory cortex with no difference between the emotion-induction and pitch-shift detection Tasks (Fig. 5B bottom, Fig. 6C). Different from the amygdala, there was no main effect of Hemisphere for Laughing and Crying together. However, Laughing resulted in a slightly larger activation than Crying; thus, there was a main effect of Stimulus (U-test, P=0.012). Furthermore, when cortical activation was directly compared between Laughing and Crying blocks, a significant main effect of Hemisphere was observed for this comparison (U-test, P=0.023). The perception of Laughing activated the auditory cortex more strongly in the right than in the left hemisphere. This was different from amygdala and insula (see below).

Auditory cortex activation was also significantly larger in men than in women (U-test, P=0.045). But this main effect of Gender cannot be considered independently from Task, because the Gender × Task interaction was significant (H-test, P=0.008). For women, auditory cortex activation was significantly larger in the pitch-shift detection than in the emotion-induction Task (U-test, P=0.004; Fig. 7C), while none of the other post-hoc comparisons reached significance. With respect to the complete results of the four-way nonparametric ANOVA see Table 2.

3.2.4. Insula

In the insula, similar to amygdala and auditory cortex, there was no Task-specific effect (Fig. 5B middle, Fig. 6B). There was a main effect of Hemisphere (U-test, P=0.017): the insula was more strongly activated in the left than in the right hemisphere. But the Gender × Hemisphere interaction was also significant (H-test, P=0.009; Fig. 7B), with no significant post-hoc comparisons. Thus the influences of the factors Hemisphere and Gender on insula activation cannot be considered as completely independent from each other.

The main effect of Stimulus was also significant (U-test, P=0.005): the perception of Crying led to a larger activation than that of Laughing. But this main effect, too, cannot be considered as independent from the subjects' gender, because the Gender × Stimulus interaction was significant (H-test, P<0.0001). In women, perceiving Laughing led to a significantly larger activation than Crying (U-test, P=0.002; Fig. 7B). None of the other post-hoc comparisons reached significance.

Furthermore, the Stimulus × Task interaction was significant (H-test, P<0.0001). Laughing led to a significantly larger activation during the pitch-shift detection than during the emotion-induction Task (U-test, P=0.002), while the relationship in response to Crying was reversed (U-test, P<0.0001). In the emotion-induction Task, listening to Crying led to a significantly stronger activation than Laughing (U-test, P<0.0001), while in the pitch-shift detection Task the activation due to Laughing was significantly stronger than that due to Crying (U-test, P=0.004). The other post-hoc comparisons did not reach significance. This means that the influences of all three factors Gender, Task and Stimulus on insula activation cannot be considered as completely independent from each other. With respect to the complete results of the four-way nonparametric ANOVA see Table 2.

3.2.5. Signal time courses

The mean time course of significantly activated voxels in amygdala, auditory cortex and insula followed the block design of stimulation well, i.e. it was not phasic at the onset of stimulation. The mean time course in the amygdala was more irregular and had the largest amplitude (Fig. 8). Furthermore, there were no significant hemispheric differences in the amygdala, insula or auditory cortex in either of the two tasks.


Fig. 6. Mean activation strength (IWV) and S.E.M. of the analyzed ROI. Shown is the activation during listening to Laughing and Crying in the emotion-induction and in the pitch-shift detection Task in both hemispheres. (A) Activation of the amygdala, (B) the insula, and (C) the auditory cortex. Note the different ordinate scaling. The rank order of the amount of activation in these ROI for both tasks was auditory cortex > insula > amygdala (n=9).


Table 3
Experiment II: Number of subjects with unilateral and bilateral activation in amygdala, insula, and auditory cortex. Activation during listening to Laughing or Crying is shown for both tasks, the emotion-induction and the pitch-shift detection task (n=9)

                           | Laughing                  | Crying
                           | Left | Right | Bilateral  | Left | Right | Bilateral
Emotion-induction task
Amygdala                   | 2    | 5     | 1          | 6    | 6     | 4
Insula                     | 8    | 7     | 6          | 9    | 8     | 8
Auditory cortex            | 9    | 9     | 9          | 9    | 9     | 9
Pitch-shift detection task
Amygdala                   | 5    | 8     | 5          | 3    | 5     | 1
Insula                     | 9    | 9     | 9          | 9    | 8     | 8
Auditory cortex            | 9    | 9     | 9          | 9    | 9     | 9
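Table 3 tallies, per stimulus and task, how many subjects showed left, right, or bilateral activation in each ROI. A minimal sketch of such a tally, assuming per-subject counts of significantly activated voxels in each hemisphere and reading "Left"/"Right" as any significant activation in that hemisphere (the input numbers below are hypothetical):

```python
def tally_laterality(voxel_counts):
    """voxel_counts: list of (left, right) significant-voxel counts,
    one pair per subject. A subject counts toward 'Left' or 'Right'
    whenever that hemisphere shows any significant activation, and
    toward 'Bilateral' when both do."""
    return {
        "Left": sum(1 for l, r in voxel_counts if l > 0),
        "Right": sum(1 for l, r in voxel_counts if r > 0),
        "Bilateral": sum(1 for l, r in voxel_counts if l > 0 and r > 0),
    }

print(tally_laterality([(12, 30), (0, 8), (5, 0), (9, 14)]))
# {'Left': 3, 'Right': 3, 'Bilateral': 2}
```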

4. Discussion

4.1. Experiment I

Based on general concepts of amygdala function [50], clinical data [66], data on healthy humans [57] and electrophysiological animal data [30,47,50,58], we hypothesized that the amygdala may be activated by affective nonverbal vocalizations independent of any aversive conditioning paradigm [22,23,49,58]. The results of experiment I (and experiment II) show that the human amygdala is indeed strongly activated during the simple perception of Laughing and Crying. Thus, as shown by Scott et al. [66] with verbal and nonverbal vocalizations, the amygdala is also involved in the processing of socially relevant acoustic stimuli, not only of socially relevant visual stimuli [1,5,8].

With respect to the first question formulated in the Introduction, namely, whether the amygdala is differentially activated by positive and negative emotionally loaded ANVOCs, the present results suggest that there is no activation difference between Laughing and Crying (main effect). But in contrast to the main effect, a more detailed analysis of the experimental conditions used here, Laughing and Crying versus a cognitive Task or versus Silence, showed differential amygdala activations. For Laughing the activation was larger with respect to the following activation during the Task, while for Crying the activation was larger with respect to the following Silence. These differences might be a result of sequential influences of the experimental design. For instance, it may be assumed that the long 60-s sequences of the same stimulus led to different degrees of emotional habituation to Laughing and Crying. Then, such emotionally enduring effects of the ANVOCs after stimulation could be more or less influenced by the following reference blocks with different demands (Silence or Task). It seemed difficult to further investigate such complicated interactions, which were largely circumvented by the more simplified design of experiment II.

Laughing as well as Crying activated the amygdala bilaterally on average, and more strongly the right amygdala. This lateralized activation in favor of the right amygdala may relate to earlier findings on the processing of affective verbal and nonverbal stimuli with dichotic listening [29,40], of nonverbal vocalizations of fear with fMRI [57], as well as of affective visual stimuli with PET [24], indicating that the right hemisphere [29,40], respectively the right amygdala [24,57], is dominantly involved in the processing of affect (but see the electroencephalography study of Ref. [51]). The present results can therefore be interpreted to support the lateralization hypothesis of emotion processing, which assumes a right-hemisphere specialization for emotions regardless of valence [16].

The fact that simply listening to the affective stimuli caused amygdala activations throughout the stimulus blocks suggests that in the present experimental conditions the amygdala activation was rather robust. This is interesting, as it is known that amygdala neuronal responses to other stimuli are phasic [58,59] and that amygdala activation as measured with fMRI habituates fast [20,22,49].

4.2. Experiment II

Confirming the results of the first experiment, bilateral amygdala activation as well as a stronger right than left amygdala activation were obtained during the perception of Laughing and Crying in the second experiment. This result with a different group of subjects strongly supports the right-hemisphere hypothesis of emotion processing [16].

Our initial hypothesis was that focusing on the affective quality of the nonverbal vocalizations would lead to a larger amygdala activation than performing the emotionally irrelevant detection task on the same (emotion) stimuli. But in contrast, a focus on the emotional quality of the stimuli in the emotion-induction task did not lead to a stronger amygdala activation than in the pitch-shift detection task.


Fig. 7. Mean activation strength (IWV) and S.E.M. of the analyzed ROI. Shown is the activation of women and men listening to both Laughing and Crying in the emotion-induction and in the pitch-shift detection Task in both hemispheres. (A) Activation of the amygdala, (B) the insula, and (C) the auditory cortex. Note the different ordinate scaling (n=9).


Fig. 8. Mean signal time courses of significantly activated voxels in the emotion-induction (left) and in the pitch-shift detection task (right) for the left and right amygdala (A), insula (B), and auditory cortex (C). The mean deviation from the overall mean (without baseline) is given in percent. A moving average of three neighboring values was used (n=9).

From these results the conclusion may be drawn that amygdala activation by such biologically fundamental expressions of emotion is a robust phenomenon which is not weakened by a non-emotional distracting task. Thus, not only the emotional stimulus which is in the focus of attention activates the amygdala, but also the unattended emotional background is assessed by the amygdala. The relative independence of amygdala activation from attention to the ANVOCs is supported by the emotion model of LeDoux [50], via the fast thalamic route of acoustic information to the amygdala, and by the observation of automatic processing of facial emotion expressions in the amygdala [72].

The large amplitude of the mean time course of activation in the amygdala might reflect differences between activations of subcortical and cortical structures, insula and auditory cortex, in this experiment. We are presently investigating this possibility.


The auditory cortex was strongly activated in both hemispheres by Laughing and Crying. The stronger activation during Laughing, when both Laughing and Crying were compared to the silence blocks, may depend on the louder and more spectrotemporally modulated Laughing stimulus (on average, Laughing was 2 dB SPL louder than Crying). The two stimulus samples were not matched for such acoustic features in order to preserve their natural character. Furthermore, when auditory cortex activation during Laughing was directly referred to activation during Crying, a lateralization to the right auditory cortex was observed. This lateralization may reflect the functional specialization of the right auditory cortex for processing changes of pitch, timbre, and tonal patterns [52,60,62,75], in essence of frequency modulations [19,35] and similar cues of emotional prosody [43]. Another possibility is that Laughing engages more the described right-hemisphere dominant voice-selective areas [13], which are covered by our global auditory cortex analysis. Whether these two possibilities are mutually exclusive remains a question for future research.

At the level of the auditory cortex there were also no activation differences between the emotion-induction and the pitch-shift detection task. This points to a comparable amount of attention to the stimuli in these tasks.

Likewise, no activation differences between the two tasks were observed in the insula. The perception of Laughing and Crying activated the insula bilaterally, but no clear-cut effects of either hemisphere or stimulus were obtained. The insula activation is in line with data from animal [14] as well as human studies [36,68], which show insula activation in response to frequency modulations and with respect to speech production (articulation).

In all structures (amygdala, auditory cortex, and insula) there were influences of the subjects' gender on activation, as reflected by the interactions of Gender by Stimulus or Task or Hemisphere, as well as by the main effect of Gender in the auditory cortex. In the auditory cortex, men showed a larger overall activation, while gender interacted with task in such a way that women showed a larger activation in the pitch-shift detection task. In the amygdala and insula, gender interacted with the quality of the stimulus: women showed a larger activation during listening to Laughing, men during Crying (not significant). This is partially in line with a recent fMRI study, which showed significant right amygdala activation only for men and exclusively during negative affect [65]. A differential gender pattern of amygdala activation in relation to emotional memory was observed in a recent PET study [25], where activity of the right amygdala was correlated with enhanced memory for emotional films in men, but activity of the left amygdala in women.

Functional gender differences are known from the processing of emotion and language. In the context of emotion studies, defensiveness was shown to correlate with left frontal brain activation dominance in women, while high-defensive men exhibited a right frontal activation dominance [45]. The coping strategy defensiveness is characterized by the tendency to ignore stressful events, i.e. an orientation towards positive event aspects. Different cortical activation patterns were also demonstrated in women and men during remembering of sad and happy past life-events [39]. In contrast, Meyers and Smith [51] reported neither hemisphere nor gender differences in frontal brain areas during the processing of affective nonverbal stimuli. Gender differences are also known from language studies. Shaywitz et al. [67] reported left-sided activation in men during phonological processing, while women showed a more diffuse representation of this language function across hemispheres [61]. In contrast, Vignolo et al. [71] as well as Kimura [44] reported gender differences with respect to left intrahemispheric, not interhemispheric, differences.

The exact nature of the gender differences reported in the present experiment II remains unclear. They might be caused by differences in emotion processing (see differences in amygdala and insula activation) or attention, or by unknown differences in hemispheric specialization. Nonetheless, one has to be cautious not to over-interpret these gender differences because the experiment's sample size was rather small (six women, three men).

5. General discussion

The results of both experiments show that the human amygdala is strongly activated bilaterally by Laughing and Crying, with a stronger activation in the right hemisphere. These results are in accordance with the right-hemisphere hypothesis of emotion processing [16]. But they do not exclude the valence hypothesis, which was postulated with respect to lateralization of positive and negative emotional processing in left and right prefrontal cortex, respectively [34]. What seems to be necessary is the investigation of how prefrontal cortex and amygdala interact in the perception and production of emotion. Promising are data reported by Hariri et al. [41], who showed that during a matching task of facial expressions of fear and anger there was bilateral amygdala activation, but when subjects were required to label the emotional face expressions, amygdala activation diminished while right prefrontal cortex activation increased. To what extent this finding might be generalizable to facial expressions of happiness will have to be addressed by future research, as well as the implication that different brain structures seem to be more involved in perceptual (matching of facial expressions of emotion) than in cognitive aspects (labeling of facial expressions of emotion) of emotion processing, here the amygdala and right prefrontal cortex, respectively.

As the results of both experiments show, the activation of the amygdala obtained is a rather robust phenomenon.


Amygdala activation can be elicited by simply listening to Laughing and Crying, by self-induction of corresponding emotions, as well as by performing distracting tasks on these stimuli. Thus amygdala activation is not prevented by such distracting tasks. This implicit, unattended processing of emotion by the amygdala was also shown for visually presented, masked stimuli in other brain imaging studies [54,55,72]. Critchley et al. [32] using fMRI demonstrated larger amygdala activation during the condition of naming the gender of faces showing emotional expressions than during naming the facial emotion expression itself, with a stronger right amygdala activation in the former condition. This is not necessarily in contrast to our results of the second experiment (similar bilateral amygdala activation in both tasks, lateralized to the right hemisphere), because attention to some aspects of emotion-inducing stimuli, like the gender of facial expressions, may enhance the elicited amygdala activation. It seems unlikely that this discrepancy might be caused by the different communication channels addressed in these two studies, visual versus auditory, because, as Bradley and Lang [18] showed, emotional reactions to acoustically presented stimuli were essentially very similar to those elicited by visually presented stimuli.

In conclusion, with respect to those studies which reported an amygdala involvement in the perception of emotionally relevant acoustic stimuli, our findings on robust amygdala involvement in the perception of Laughing and Crying corroborate and extend the insights obtained by Scott et al. [66] on deficits in the recognition of verbal and nonverbal vocalizations after bilateral amygdala lesions, as well as on amygdala activation in response to certain auditory stimuli by Phillips et al. [57] and in response to an aversive auditory conditioning paradigm by Buchel et al. [22].

Acknowledgements

The present study was in part supported by the Deutsche Forschungsgemeinschaft (SFB 426). We thank an anonymous referee for constructive criticism on an earlier version of this manuscript.

References

[1] R. Adolphs, L. Cahill, R. Schul, R. Babinsky, Impaired declarative memory for emotional material following bilateral amygdala damage in humans, Learn. Mem. 4 (1997) 291–300.
[2] R. Adolphs, D. Tranel, Intact recognition of emotional prosody following amygdala damage, Neuropsychologia 37 (1999) 1285–1292.
[3] R. Adolphs, D. Tranel, A.R. Damasio, The human amygdala in social judgment, Nature 393 (1998) 470–474.
[4] R. Adolphs, D. Tranel, H. Damasio, A.R. Damasio, Impaired recognition of emotion in facial expression following bilateral damage to the human amygdala, Nature 372 (1994) 669–672.
[5] R. Adolphs, D. Tranel, H. Damasio, A.R. Damasio, Fear and the human amygdala, J. Neurosci. 15 (1995) 5879–5891.
[6] R. Adolphs, D. Tranel, S. Hamann, A.W. Young, A.J. Calder, E.A. Phelps, A. Anderson, G.P. Lee, A.R. Damasio, Recognition of facial emotion in nine individuals with bilateral amygdala damage, Neuropsychologia 37 (1999) 1111–1117.
[7] D.G. Amaral, Amygdalo-cortical interconnections in the primate brain, Learn. Mem. 6 (1990) 24–32.
[8] A.K. Anderson, E.A. Phelps, Intact recognition of vocal expressions of fear following bilateral lesions of the human amygdala, NeuroReport 9 (1998) 3607–3613.
[9] J.R. Augustine, Circuitry and functional aspects of the insular lobe in primates including humans, Brain Res. Rev. 22 (1996) 229–244.
[10] P.A. Bandettini, A. Jesmanowicz, E.C. Wong, J.S. Hyde, Processing strategies for time-course data sets in functional MRI of the human brain, Magn. Reson. Med. 30 (1993) 161–173.
[11] F. Baumgart, B. Gaschler-Markefski, M.G. Woldorff, H.-J. Heinze, H. Scheich, A movement-sensitive area in auditory cortex, Nature 400 (1999) 724–726.
[12] F. Baumgart, T. Kaulisch, C. Tempelmann, B. Gaschler-Markefski, C. Tegeler, F. Schindler, D. Stiller, H. Scheich, Electrodynamic headphones and woofers for application in magnetic resonance imaging scanners, Med. Phys. 25 (1998) 2068–2070.
[13] P. Belin, R.J. Zatorre, P. Lafaille, P. Ahad, B. Pike, Voice-selective areas in human auditory cortex, Nature 403 (2000) 309–312.
[14] A. Bieser, Processing of twitter-call fundamental frequencies in insula and auditory cortex of squirrel monkeys, Exp. Brain Res. 122 (1998) 139–148.
[15] A.J. Blood, R. Zatorre, P. Bermudez, A.C. Evans, Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions, Nat. Neurosci. 2 (1999) 382–387.
[16] J.C. Borod, Cerebral mechanisms underlying facial, prosodic, and lexical emotional expression: a review of neuropsychological studies and methodological issues, Neuropsychology 7 (1993) 445–463.
[17] J. Bortz, G.A. Lienert, K. Boehnke, Verteilungsfreie Methoden in der Biostatistik, Springer, Berlin, 1990.
[18] M.M. Bradley, P.J. Lang, Affective reactions to acoustic stimuli, Psychophysiology 37 (2000) 204–215.
[19] A. Brechmann, H. Scheich, The right human auditory cortex is predominantly involved in the discrimination of the direction of frequency modulated tones, NeuroImage 11 (2000) S799.
[20] H.C. Breiter, N.L. Etcoff, P.J. Whalen, W.A. Kennedy, S.L. Rauch, R.L. Buckner, M.M. Strauss, S.E. Hyman, B.R. Rosen, Response and habituation of the human amygdala during visual processing of facial expression, Neuron 17 (1996) 875–887.
[21] P. Broks, A.W. Young, E.J. Maratos, P.J. Coffey, A.J. Calder, C.L. Isaac, A.R. Mayes, J.R. Hodges, D. Montaldi, E. Cezayirli, N. Roberts, D. Hadley, Face processing impairments after encephalitis: amygdala damage and recognition of fear, Neuropsychologia 39 (1998) 59–70.
[22] C. Buchel, R.J. Dolan, J.L. Armony, K.J. Friston, Amygdala–hippocampal involvement in human aversive trace conditioning revealed through event-related functional magnetic resonance imaging, J. Neurosci. 19 (1999) 10869–10876.
[23] C. Buchel, J. Morris, R.J. Dolan, K.J. Friston, Brain systems mediating aversive conditioning: an event-related fMRI study, Neuron 20 (1998) 947–957.
[24] L. Cahill, R.J. Haier, J. Fallon, M.T. Alkire, C. Tang, D. Keator, J. Wull, J.L. McGaugh, Amygdala activity at encoding correlated with long-term, free recall of emotional information, Proc. Natl. Acad. Sci. USA 93 (1996) 8016–8021.
[25] L. Cahill, R.J. Haier, N.S. White, J. Fallon, L. Kilpatrick, C. Lawrence, S.G. Potkin, M.T. Alkire, Sex-related difference in amygdala activity during emotionally influenced memory storage, Learn. Mem. 75 (2001) 1–9.


[26] L. Cahill, J.L. McGaugh, Mechanisms of emotional arousal and lasting declarative memory, Trends Neurosci. 21 (1998) 294–299.
[27] A.J. Calder, A.W. Young, D. Rowland, D.I. Perrett, J.R. Hodges, N.L. Etcoff, Facial emotion recognition after bilateral amygdala damage: differentially severe impairment of fear, Cogn. Neuropsychol. 13 (1996) 699–745.
[28] T. Canli, J.E. Desmond, Z. Zhao, G. Glover, J.D.E. Gabrieli, Hemispheric asymmetry for emotional stimuli detected with fMRI, NeuroReport 9 (1998) 3233–3239.
[29] A. Carmon, I. Nachshon, Ear asymmetry in perception of emotional non-verbal stimuli, Acta Psychol. 37 (1973) 351–357.
[30] D.R. Collins, D. Paré, Reciprocal changes in the firing probability of lateral and central medial amygdala neurons, J. Neurosci. 19 (1999) 836–844.
[31] D.R. Collins, D. Paré, Spontaneous and evoked activity of intercalated amygdala neurons, Eur. J. Neurosci. 11 (1999) 3441–3448.
[32] H. Critchley, E. Daly, M. Phillips, M. Brammer, E. Bullmore, S. Williams, T. Van Amelsvoort, D. Robertson, A. David, D. Murphy, Explicit and implicit neural mechanisms for processing of social information from facial expressions: a functional magnetic resonance imaging study, Hum. Brain Mapp. 9 (2000) 93–105.
[33] R.J. Davidson, Cerebral asymmetry, emotion, and affective style, in: R.J. Davidson, K. Hugdahl (Eds.), Brain Asymmetry, MIT Press, Cambridge, MA, 1995, pp. 361–387.
[34] R.J. Davidson, W. Irwin, The functional neuroanatomy of emotion and affective style, Trends Cogn. Sci. 3 (1999) 11–21.
[35] P.L. Divenyi, A.J. Robinson, Nonlinguistic auditory capabilities in aphasia, Brain Lang. 37 (1989) 290–326.
[36] N.F. Dronkers, A new brain region for coordinating speech articulation, Nature 384 (1996) 159–161.
[37] B. Gaschler, F. Schindler, H. Scheich, KHORFu: a KHOROS-based functional image post processing system. A statistical software package for functional magnetic resonance imaging and other neuroimage data sets, in: Proceedings COMPSTAT, XII Symposium on Computational Statistics, Barcelona, 1996, pp. 57–58.
[38] B. Gaschler-Markefski, F. Baumgart, C. Tempelmann, F. Schindler, D. Stiller, H.-J. Heinze, H. Scheich, Statistical methods in functional magnetic resonance imaging with respect to nonstationary time-series: auditory cortex activity, Magn. Reson. Med. 38 (1997) 811–820.
[39] M.S. George, T.A. Ketter, P.I. Parekh, B. Horwitz, P. Herscovitch, R.M. Post, Brain activity during transient sadness and happiness in healthy women, Am. J. Psychiatry 152 (1995) 341–351.
[40] M.P. Haggard, A.M. Parkinson, Stimulus and task factors as determinants of ear advantages, Q. J. Exp. Psychol. 23 (1971) 168–177.
[41] A.R. Hariri, S.Y. Bookheimer, J.C. Mazziotta, Modulating emotional responses: effects of a neocortical network on the limbic system, NeuroReport 11 (2000) 43–48.
[42] S. Imaizumi, K. Mori, S. Kiritani, R. Kawashima, M. Sugiura, H. Fukuda, K. Itoh, T. Kato, A. Nakamura, K. Hatano, S. Kojima, K. Nakamura, Vocal identification of speaker and emotion activates different brain regions, NeuroReport 8 (1997) 2809–2812.
[43] Y. Joanette, P. Goulet, D. Hannequin (Eds.), Prosodic Aspects of Speech. Right Hemisphere and Verbal Communication, Springer, New York, 1990, pp. 132–159.
[44] D. Kimura, Neuromotor Mechanisms in Human Communication, Oxford University Press, New York, 1993.
[45] J.P. Kline, J.J.B. Allen, G.E. Schwartz, Is left frontal brain activation in defensiveness gender specific?, J. Abnorm. Psychol. 107 (1998) 149–153.
[46] A.S. Kling, L.A. Brothers, The amygdala and social behavior, in: J.P. Aggleton (Ed.), The Amygdala: Neurobiological Aspects of Emotion, Memory, and Mental Dysfunction, Wiley-Liss, New York, 1992, pp. 353–377.
[47] A.S. Kling, R.L. Lloyd, K.M. Perryman, Slow wave change in amygdala to visual, auditory, and social stimuli following lesions of the inferior temporal cortex in squirrel monkey (Saimiri sciureus), Behav. Neural Biol. 47 (1987) 54–72.
[48] A. Kosmal, M. Malinowska, D. Kowalska, Thalamic and amygdaloid connections of the auditory association cortex of the superior temporal gyrus of the rhesus monkey (Macaca mulatta), Acta Neurobiol. Exp. 57 (1997) 165–188.
[49] K.S. LaBar, J.C. Gatenby, J.C. Gore, J.E. LeDoux, E.A. Phelps, Human amygdala activation during conditioned fear acquisition and extinction: a mixed-trial fMRI study, Neuron 20 (1998) 937–945.
[50] J. LeDoux, Emotion circuits in the brain, Annu. Rev. Neurosci. 23 (2000) 155–184.
[51] M. Meyers, B.D. Smith, Hemispheric asymmetry and emotion: effects of nonverbal affective stimuli, Biol. Psychol. 22 (1986) 11–22.
[52] B. Milner, Laterality effects in audition, in: V. Mountcastle (Ed.), Interhemispheric Relations and Cerebral Dominance, Johns Hopkins University Press, Baltimore, MD, 1962, pp. 177–195.
[53] J.S. Morris, C.D. Frith, D.I. Perrett, D. Rowland, A.W. Young, A.J. Calder, R.J. Dolan, A differential neural response in the human amygdala to fearful and happy facial expressions, Nature 383 (1996) 812–815.
[54] J.S. Morris, A. Öhman, R.J. Dolan, Conscious and unconscious emotional learning in the human amygdala, Nature 393 (1998) 467–470.
[55] J.S. Morris, A. Öhman, R.J. Dolan, A subcortical pathway to the right amygdala mediating unseen fear, Proc. Natl. Acad. Sci. USA 96 (1999) 1680–1685.
[56] R.C. Oldfield, The assessment and analysis of handedness: the Edinburgh Inventory, Neuropsychologia 9 (1971) 97–113.
[57] M.L. Phillips, A.W. Young, S.K. Scott, A.J. Calder, C. Andrew, V. Giampietro, S.C.R. Williams, E.T. Bullmore, M. Brammer, J.A. Gray, Neural responses to facial and vocal expressions of fear and disgust, Proc. R. Soc. Lond. B 265 (1998) 1809–1817.
[58] G.J. Quirk, J.L. Armony, J.E. LeDoux, Fear conditioning enhances different temporal components of tone-evoked spike trains in auditory cortex and lateral amygdala, Neuron 19 (1997) 613–624.
[59] G.J. Quirk, C. Repa, J.E. LeDoux, Fear conditioning enhances short-latency auditory responses of lateral amygdala neurons: parallel recordings in the freely behaving rat, Neuron 15 (1995) 1029–1039.
[60] D.A. Robin, D. Tranel, H. Damasio, Auditory perception of temporal and spectral events in patients with focal left and right cerebral lesions, Brain Lang. 39 (1990) 539–555.
[61] R. Salmelin, A. Schnitzler, L. Parkkonen, K. Biermann, P. Helenius, K. Kiviniemi, K. Kuukka, F. Schmitz, H.J. Freund, Native language, gender, and functional organization of the auditory cortex, Proc. Natl. Acad. Sci. USA 96 (1999) 10460–10465.
[62] S. Samson, R.J. Zatorre, Contribution of the right temporal lobe to musical timbre discrimination, Neuropsychologia 32 (1994) 231–240.
[63] H. Scheich, F. Baumgart, B. Gaschler-Markefski, C. Tegeler, C. Tempelmann, H.J. Heinze, F. Schindler, D. Stiller, Functional magnetic resonance imaging of a human auditory cortex involved in foreground-background decomposition, Eur. J. Neurosci. 10 (1998) 803–809.
[64] F. Schneider, W. Grodd, U. Weiss, U. Klose, K.R. Mayer, T. Nägele, R.C. Gur, Functional MRI reveals left amygdala activation during emotion, Psychiatry Res. Neuroimag. Sect. 76 (1997) 75–82.
[65] F. Schneider, U. Habel, C. Kessler, J.B. Salloum, S. Posse, Gender differences in regional cerebral activity during sadness, Hum. Brain Mapp. 9 (2000) 226–238.
[66] S.K. Scott, A.W. Young, A.J. Calder, D.J. Hellawell, J.P. Aggleton, M. Johnson, Impaired auditory recognition of fear and anger following bilateral amygdala lesions, Nature 385 (1997) 254–257.
[67] B.A. Shaywitz, S.E. Shaywitz, K.R. Pugh, R.T. Constable, P. Skudlarski, R.K. Fulbright, R.A. Bronen, J.M. Fletcher, D.P. Shankweiler, L. Katz, J.C. Gore, Sex differences in the functional organization of the brain for language, Nature 373 (1995) 607–609.


[68] J. Shuren, Insula and aphasia, J. Neurol. 240 (1993) 216–218.
[69] R. Steyer, P. Schwenkmezger, P. Notz, M. Eid, Der Mehrdimensionale Befindlichkeitsfragebogen (MDBF), Hogrefe, Göttingen, 1997.
[70] L.A. Stowe, C.A. Broere, A.M. Paans, A.A. Wijers, G. Mulder, W. Vaalburg, F. Zwarts, Localizing components of a complex task: sentence processing and working memory, NeuroReport 9 (1998) 2995–2999.
[71] L.A. Vignolo, E. Boccardi, L. Caverni, Unexpected CT-scan findings in global aphasia, Cortex 22 (1986) 55–69.
[72] P.J. Whalen, S.L. Rauch, N.L. Etcoff, S.C. McInerney, M.B. Lee, M.A. Jenike, Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge, J. Neurosci. 18 (1998) 411–418.
[73] R.P. Woods, S.T. Grafton, C.J. Holmes, S.R. Cherry, J.C. Mazziotta, Automated image registration. I. General methods and intrasubject, intramodality validation, J. Comput. Assist. Tomogr. 22 (1998) 139–152.
[74] R.P. Woods, S.T. Grafton, J.D. Watson, N.L. Sicotte, J.C. Mazziotta, Automated image registration. II. Intersubject validation of linear and nonlinear models, J. Comput. Assist. Tomogr. 22 (1998) 153–165.
[75] R.J. Zatorre, A.C. Evans, E. Meyer, Neural mechanisms underlying melodic perception and memory for pitch, J. Neurosci. 14 (1994) 1908–1919.