
Multivariate Predictors of Music Perception and Appraisal by Adult Cochlear Implant Users

Kate Gfeller*†‡, Jacob Oleson§, John F. Knutson**, Patrick Breheny§, Virginia Driscoll‡, Carol Olszewski‡

J Am Acad Audiol 19:120–134 (2008)

DOI: 10.3766/jaaa.19.2.3

*School of Music, University of Iowa; †Department of Communication Sciences and Disorders, University of Iowa; ‡Iowa Cochlear Implant Clinical Research Center, University of Iowa Hospitals and Clinics, University of Iowa; §Department of Biostatistics, University of Iowa; **Department of Psychology, University of Iowa

Correspondence: Kate Gfeller, Department of Otolaryngology, 200 Hawkins Drive, 21201 PFP, Iowa City, IA 52242-1078

Portions of this article were presented in the keynote address for Music Perception for Cochlear Implant Workshops, University of Washington, Seattle, October 17, 2006, and the 9th International Conference on Cochlear Implants, Vienna, Austria, June 16, 2006.

This study was supported by grant 2 P50 DC00242 from the National Institute on Deafness and Other Communication Disorders, NIH; grant M01-RR-59 from the General Clinical Research Centers Program, National Center for Research Resources, NIH; Lions Clubs International Foundation; and Iowa Lions Foundation.

Abstract

The research examined whether performance by adult cochlear implant recipients on a variety of recognition and appraisal tests derived from real-world music could be predicted from technological, demographic, and life experience variables, as well as speech recognition scores. A representative sample of 209 adults implanted between 1985 and 2006 participated. Using multiple linear regression models and generalized linear mixed models, sets of optimal predictor variables were selected that effectively predicted performance on a test battery that assessed different aspects of music listening. These analyses established the importance of distinguishing between the accuracy of music perception and the appraisal of musical stimuli when using music listening as an index of implant success. Importantly, neither device type nor processing strategy predicted music perception or music appraisal. Speech recognition performance was not a strong predictor of music perception, and primarily predicted music perception when the test stimuli included lyrics. Additionally, limitations in the utility of speech perception in predicting musical perception and appraisal underscore the utility of music perception as an alternative outcome measure for evaluating implant outcomes. Music listening background, residual hearing (i.e., hearing aid use), cognitive factors, and some demographic factors predicted several indices of perceptual accuracy or appraisal of music.

Key Words: Cochlear implant, cognitive, music, speech perception

Abbreviations: AIC = Akaike's information criteria; CI = cochlear implant; CNC = Consonant-Nucleus Consonant Test; FMR = Familiar Melody Recognition test; GLMM = generalized linear mixed model; HINT = Hearing in Noise Test; MBQ = Musical Background Questionnaire; MEAM-I = Musical Excerpt Appraisal Measure, no lyrics; MEAM-L = Musical Excerpt Appraisal Measure, lyrics; MERT-I = Musical Excerpt Recognition Test, no lyrics; MERT-L = Musical Excerpt Recognition Test, lyrics; ROC = receiver operating characteristics; RPM = Raven Progressive Matrices; SLT = Sequence Learning Test; VMT = Visual Monitoring Task


The cochlear implant (CI) is a prosthetic hearing device developed primarily to assist persons who are severely to profoundly deaf with verbal communication. The device picks up acoustic signals through an externally worn microphone, and these signals are then processed to filter and extract those components of sound critically important for speech perception. Those components are conveyed via electrical signals to an array of electrodes in the cochlea, resulting in electrical stimulation of the auditory nerve. This signal is then transmitted to the central auditory pathway for interpretation. Although the device does not provide an exact replica of normal hearing, the majority of postlingually deafened implant recipients using modern CIs score above 80% on high-context sentences in quiet listening conditions, even without visual cues (Wilson, 2000).

Although CIs have been quite successful in providing implant recipients with speech perception, they are less effective in transmitting the fine structural features of sound that contribute to music perception (e.g., Gfeller et al, 2000a, 2002a, 2003, 2005, 2007; Leal et al, 2003; Kong et al, 2004; McDermott, 2004). Because people often listen to music for pleasure, perceptual accuracy (e.g., pitch perception, melody recognition) alone is not sufficient with regard to implant benefit; the quality of the sound, or how enjoyable the sound is, is also an important component of implant benefit.

When considering the evaluation of music perception and enjoyment by CI users, it is important to recognize that the word music represents a diverse body of styles and structural combinations of individual pitches played sequentially (melodies) and concurrently
(harmony) in different rhythmic patterns and tempos, by one or more timbres (e.g., instrumental solos and blends, such as bands or orchestras). Thus, one cannot presume that one single measure of music perception would fully represent the perceptual accuracy or appraisal (ratings of sound quality or enjoyment) experienced by cochlear implant recipients for music perception and/or enjoyment. Just as the assessment of speech recognition has involved a range of stimuli and response tasks (e.g., perception of isolated computer-generated sounds, closed-set word recognition, open-set word recognition, phoneme recognition, recognition of connected discourse), there are a variety of measures that reflect perceptual accuracy on several key structural features of music (e.g., pitch, melody, harmony, timbre, and combinations thereof).

Research indicates that some structural features of music are more effectively perceived than others through the implant. For example, CI recipients perform with similar accuracy as normal-hearing adults with regard to simple rhythmic patterns presented at a moderate rate (Dorman et al, 1991; Gfeller and Lansing, 1991, 1992; Schultz and Kerber, 1994; Gfeller et al, 1997); however, implant recipients are significantly less accurate than normal-hearing adults on pitch perception, including detecting pitch change (frequency difference limens) (McDermott, 2004; Gfeller et al, 2002a), identifying direction of pitch change (higher or lower) (Gfeller et al, 2002a; Laneau et al, 2004; McDermott, 2004), and discrimination of brief pitch patterns (Gfeller and Lansing, 1991, 1992). In most cases, recipients of CIs require considerably larger frequency differences than adults with normal hearing for detecting pitch change and direction of pitch change (e.g., Gfeller et al, 2002a, 2007; Kong et al, 2004; Looi et al, 2004; McDermott, 2004).

Because melodies and harmonies are comprised of sequential pitch patterns and several concurrently presented pitches, respectively, the poor transmission of pitch has negative implications for recognition of melodies with or without harmony (Schultz and Kerber, 1994; Pijl, 1997; Fujita and Ito, 1999; Gfeller et al, 2002a; Leal et al, 2003; Kong et al, 2004; McDermott, 2004). Thus, it is not surprising that implant recipients as a group are significantly less accurate than normal-hearing nonmusicians in closed- and open-set recognition of melodies, especially when lyrics or rhythmic cues are unavailable (Gfeller et al, 2002a, 2005; Kong et al, 2004). In addition, CI recipients are less accurate than normal-hearing listeners on timbre recognition (recognizing musical instruments by sound alone), and they tend to rate (appraise) the tone quality as less pleasant than do normal-hearing adults (Schultz and Kerber, 1994; Gfeller et al, 1998, 2000b; Fujita and Ito, 1999).

Thus, to illustrate how an everyday listening activity could be compromised for an implant user, a well-known song such as “Happy Birthday” may sound monotonic (only one pitch) or like random pitches or sounds and thus be unrecognizable. In addition, the tone quality may sound tinny, unnatural, or essentially like noise. Because music is a pervasive art form and a valued part of cultural rituals, social events, and emotional expression (Nielzen and Cesarec, 1982; Gfeller, 2001; Huron, 2004), and because implant recipients are likely to be exposed to music on a regular basis, improved music perception is often an expressed interest of CI users.

While the majority of implant recipients have significantly poorer accuracy and appraisal of music than normal-hearing people (e.g., Gfeller et al, 1997, 1998, 2002a; McDermott, 2004), there is considerable intersubject variability among implant recipients for various measures of music perception and appraisal. For example, in one experiment examining recognition of “real-world” musical excerpts (Gfeller et al, 2005), some individuals received 0% correct, and the mean score for 79 participants was 15.6%. However, there were individuals sometimes referred to as “star” users who achieved 94.1% correct on that same test—a level of accuracy that seems nearly impossible given the technical limitations of the device with regard to transmission of fine spectral features.

In addition to individual differences demonstrated in specific measures of music perception and appraisal, there can be significant intrasubject variability across indices of music perception. That is, individual CI recipients may do very poorly on pitch perception but show better performance on recognition of musical instruments (Gfeller et al, 1997, 1998, 2002b, 2005, 2006; Gfeller and Lansing, 1991, 1992).


Understanding the factors that contribute to the enormous variability among CI recipients with respect to music perception and enjoyment could have important implications for implant design as well as advising CI users with respect to musical opportunities and benefits. Thus, the purpose of this research was to identify those factors that are most strongly associated with greater perceptual accuracy and enjoyment of music by adult cochlear implant recipients. Although prior studies have examined the relations between a given outcome (e.g., melody recognition) and individual predictor variables (e.g., the relation between cognitive processing, hearing history, and melody recognition; Gfeller et al, 2002a), they have not provided a more comprehensive examination of the relations among multiple musical variables and multiple subject characteristics. Such multivariate considerations may help us better understand individual differences in music perception by CI users. Thus, this paper will examine the perception of real-world musical stimuli in a more comprehensive manner, using multiple linear regression and generalized linear mixed models to help identify those factors that, in combination, best predict performance in the recognition and appraisal of excerpts of musical stimuli of the sort heard in everyday life.

These analyses utilize data from a battery of tests used in prior experiments that examined perceptual accuracy and appraisal of a variety of stimuli ranging from simple and highly controlled computer-generated stimuli (i.e., pure tones) to complex blends of musical sounds (i.e., excerpts of “real-world” music on radio or in recordings) often heard in everyday life. We have also gathered data on individual difference variables (e.g., length of profound deafness; age at time of testing; speech and cognitive measures), particulars regarding assistive hearing device used (e.g., type of cochlear implant and strategy used; whether hearing aids are used), and life experiences (e.g., musical training and listening habits) that theoretically should have important relations with success in music perception and enjoyment. Thus, the factors within these analyses fall into three primary categories: individual differences, technical differences, and environmental influences (i.e., life experiences). The analyses used to examine these relationships were conducted using multiple linear regression and generalized linear mixed models. These analyses were not intended to focus on or model the hearing mechanism per se, and the relative contribution of peripheral and central auditory pathways. Rather, these analyses were designed to evaluate the relative importance of external factors that interface with the hearing mechanism and implant technology in determining music perception and appraisal.

METHODS

Participants

The participants in this study included 209 adult cochlear implant recipients with at least nine months implant experience (mean months of use = 42.3, SD = 45.4). The sample included 97 male and 112 female implant recipients ranging in age from 23.7 to 92.5 years (M = 60.2, SD = 15.4) with less than one year to 58 years (M = 10.8, SD = 12.9) of bilateral profound deafness prior to implantation. Age at time of testing, length of profound deafness, months of use, and year when implanted were all included as demographic predictor variables in the tested models.

Recipients used one of the following device types: CI22, CI24M, Contour, Ineraid, Clarion, HiFocus, CIIHF1/2, 90K (all 22 mm electrodes). The following speech processing strategies were used as appropriate to the device: MPEAK, SPEAK, ACE, Analog SAS, CIS, HiRes, and Conditioner (high-rate pseudospontaneous conditioner as described in Rubinstein et al, 1999; Hong and Rubinstein, 2003). The specific breakdown of device by strategy is shown in Table 1. In addition to including the different device and strategy types as independent variables, we also examined whether or not the individual was a bilateral CI user (Bilat, yes or no) and whether or not the individual used a hearing aid in addition to their CI (HA, yes or no).

The CI recipients were tested during their annual visit to our center. All were native English-speaking adults culturally affiliated with the United States, an important factor given that recognition of and response to particular musical selections is influenced by exposure to particular styles of music. Most of the implant recipients described themselves as having been informal music listeners prior to deafness. Only two of the participants
had college-level musical training, as determined through the Iowa Music Background Questionnaire (Gfeller et al, 2000a).

The sample in the present study was drawn from all adult CI recipients enrolled in a clinical trial from 1985 through 2006 who had participated in extensive pre- and post-implant assessment on measures of speech and cognition, as well as measures of music perception; measures of speech perception were made available to this project. This sample did not include those patients using short electrodes, which are based on a different principle of stimulation (acoustic plus electric), or persons with known severe intellectual deficits or other sensory limitations (e.g., blindness) that could preclude testing.

The CI recipients participated in different experiments for several projects within our center (audiology, electrophysiology, music) that had to be completed during one- or two-day visits. Because of personal circumstances, acute illnesses, and scheduling conflicts, not all participants completed all of the measures included in the present analyses. Importantly, because data are essentially missing at random and not due to the factors under investigation, the sample can be considered unselected with respect to the variables of interest in this research and representative of CI recipients at this center. Although some participants providing data for the present study were included in previously published papers investigating music perception by implant recipients, the full sample has not been described in previous papers.

Test Protocols

General descriptions of the music and cognitive and speech measures are provided below. The test content and protocols for many of the individual measures are described in full in previously published articles as noted.

Music Perception and Appraisal Tests

A number of tests were used to reflect different aspects of music perception and appraisal (liking or enjoyment), from highly controlled isolated elements to more complex and combined elements of music that more closely resemble real-world musical sounds.

Pitch Perception Test. A Pitch Ranking Test (PRT) provided a measure of pitch ranking for pure tones ranging from 131 to 1048 Hz (Gfeller et al, 2007). The PRT was tested in a sound field using a computer-controlled, two-alternative, forced-choice (2AFC) adaptive procedure. The participant was asked to determine if the second pitch in a pair of two pitches was higher or lower than the first pitch. The presentation level of the test tones (average level of 87 dB SPL) was randomized for each presentation over an 8 dB range to minimize loudness cues that might have resulted from standing waves in the double-walled sound booth. The mean presentation level for the test tones was adjusted individually for each cochlear implant participant at each test frequency to insure that the tones were presented at least 20 dB above the participant's sound-field threshold. The number of correct responses on pitch discrimination was determined as a function of interval size (1 to 5 semitones) and frequency (131 to 1048 Hz). An average number correct over all interval sizes and base frequencies was calculated, yielding an average range of 0 to 6 correct.
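To make the level-randomization step concrete, the fragment below sketches how a single trial's presentation level could be drawn. It is an illustrative reconstruction, not the authors' test software; the uniform-roving assumption and the function name are ours.

```python
import random

def trial_level_db(mean_level_db, sound_field_threshold_db, rove_range_db=8.0):
    """Presentation level for one 2AFC pitch-ranking trial.

    The individually adjusted mean level is raised, if necessary, so that tones
    sit at least 20 dB above the participant's sound-field threshold, then roved
    over an 8 dB range to minimize loudness cues (uniform roving assumed here).
    """
    base = max(mean_level_db, sound_field_threshold_db + 20.0)
    return base + random.uniform(-rove_range_db / 2.0, rove_range_db / 2.0)

# Example: nominal 87 dB SPL tones for a participant with a 55 dB SPL threshold
print(round(trial_level_db(87.0, 55.0), 1))
```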

Table 1. Sample Sizes for the Devices and Strategies Present in the Sample

Device     ACE  Analog  CIS  Cond*  HiRes  MPEAK  SAS  SPEAK  Total
90K          -       -    -      -      9      -    -      -      9
CI22         -       -    -      -      -      1    -     19     20
CI24M       21       -    3      -      -      -    -     26     50
CIIHF1       -       -   18      2     12      -    6      -     38
CIIHF2       -       -    1      -     10      -    -      -     11
Clarion      -       -   28      -      -      -    -      -     28
Contour     29       -    1      -      -      -    -      6     36
HiFocus      -       -    3      -      -      -    -      -      3
Ineraid      -       2   10      -      -      -    -      -     12
Total       50       2   64      2     31      1    6     51    207

Note: Strategy data is missing for two participants. *Cond = Conditioner.

Melody Recognition. Two measures of melody recognition were included in the analysis: a Familiar Melody Recognition test (FMR) (described in detail in Gfeller et al, 2002a) and a Musical Excerpt Recognition Test (MERT) (described in detail in Gfeller et al, 2005). The FMR tested recognition of 12 well-known children's or folk melodies that were prepared using MIDI (musical instrument digital interface) software and synthesizers. Administration of this test required prior familiarity with 75% of the test items (as determined through a 27-item checklist of songs well known in U.S. culture). Each item (melody only and melody plus harmony conditions, no lyrics) was heard two times, and response was open set. Scores were recorded as the percent correct of those songs known prior to testing (range of 0–100%) (Gfeller et al, 2002a).

The MERT tested recognition of excerpts of real-world melodies of the sort heard on the radio or in recordings. The stimuli for the MERT were excerpts from 24 real-world recordings of musical items of three different styles (pop, country, classical) determined through media rankings to be highly familiar to the U.S. population. Each participant listened to eight songs without lyrics (MERT-I) and 16 songs with lyrics (MERT-L). After listening to each song, they attempted to identify whether or not they recognized the item (coded as success or failure). Administration of this test required the participant to have prior familiarity with at least 25% of the test items (as determined through a 79-item checklist). The analyses included the data for only those songs that the participant reported knowing beforehand, so that different participants were tested on a differing number of songs. Separate scores (percent correct) were obtained for excerpts with and without lyrics (Gfeller et al, 2005).
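Because only excerpts the participant reported knowing beforehand enter the score, the denominator differs across participants. A minimal sketch of that scoring rule follows (hypothetical data structures, not the authors' scoring code):

```python
def excerpt_recognition_score(responses, known_items):
    """Percent correct over only the excerpts the participant knew beforehand.

    responses: dict mapping excerpt id -> True (recognized) or False
    known_items: set of excerpt ids the participant reported knowing before testing
    """
    eligible = [item for item in responses if item in known_items]
    if not eligible:
        return None  # familiarity criterion not met
    n_correct = sum(responses[item] for item in eligible)
    return 100.0 * n_correct / len(eligible)

# Two participants tested on different numbers of known songs
print(excerpt_recognition_score({"a": True, "b": False, "c": True}, {"a", "b", "c", "d"}))  # ~66.7
print(excerpt_recognition_score({"a": True, "b": True}, {"a", "b"}))                        # 100.0
```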

Timbre Recognition. The Timbre Recognition test (TR; described in detail in Gfeller et al, 2002c) presented recordings of eight real musical instruments playing a standardized solo melody of seven notes in length. Scores reflected the percent correct in a closed-set test (16 options) of those instruments the subjects reported knowing prior to testing (Gfeller et al, 2002c). Participation in this test required reporting prior familiarity with at least 50% of the instruments included in the test.

Appraisal Measure. A Musical Excerpt Appraisal Measure (MEAM) (described in detail in Gfeller et al, 2003) was designed to measure appraisal of the sound quality of real-world music when transmitted through the CI. Each participant listened to eight songs without lyrics (MEAM-I) and 16 songs with lyrics (MEAM-L). This measure yielded ratings for the pleasantness or likeability of those same excerpts presented in the MERT, that is, excerpts of recordings of real-world excerpts of pop, country, and classical melodies. The mean score of appraisal was obtained by using a visual analog scale with a range of 0–100.

The FMR, MERT, TR, and MEAM were administered in a sound-treated room via a Mac computer (model M3409) and Altec Lansing external speakers (model ACS 340). Participants responded using a Sony touch screen monitor (model CPD-2000SF). The test stimuli were transmitted through the microphone of the speech processor, which approximates to a considerable extent everyday listening experiences in quiet. The sound level was averaged at 70 dB SPL; however, implant recipients were permitted to adjust their processor for maximum comfort.

Musical Background Questionnaire (MBQ). The extent and type of formal musical training prior to implantation, as well as listening habits and enjoyment prior to and following implantation, were quantified through the MBQ (described in detail in Gfeller et al, 2000a). The questionnaire yielded four scores that were included in the model: two scores for formal musical training (one for elementary school [MT1] and a second for secondary education and beyond [MT2]), a score for music listening habits and enjoyment prior to implantation (MLE-pre), and a score for music listening habits and enjoyment following implantation (at the time of testing) (MLE-post).

Cognitive Measures

Because implant recipients must learn to use a new signal, there has long been an interest in identifying cognitive factors that might predict implant outcome. Because evidence regarding the role of general intelligence in implant outcome has been mixed (c.f. Knutson et al, 1991; Waltzman et al, 1995), we have focused our efforts on specific cognitive and attentional processes that might underlie the task confronted by an implant user attempting to perceive music. These processes include the ability to extract target features from an array of sequentially presented information, associative memory,
working memory, and the ability to process signal changes quickly and accurately. In the present study we have included several such cognitive tests to determine whether some measures of cognitive abilities, when combined with other predictive variables, enhance the multivariate prediction of the accuracy of music perception and the appraisal of music. All of the tests are administered in a visual modality so that scores are not confounded with audition.

Tests of Sequential Processing. Two tests were included to assess the ability of participants to accurately identify sequential information without placing a premium on speed of responding. The first is the Sequence Learning Test (SLT; Simon and Kotovsky, 1963). The SLT presents participants with a graded series of items, each of which presents a sequence of letters that evidences a discernable pattern. Correct identification of the pattern is evidenced by a participant correctly providing the next four letters in the sequence. The second test of sequence recognition is the Raven Progressive Matrices (RPM; Raven et al, 1977), a test in which the participant must correctly identify a two-dimensional progression and select the correct item that would be placed in a missing cell. Although developed as a nonverbal test of intelligence, the RPM was selected for use on the basis of studies of eye tracking which established that performance on the RPM reflects the accurate identification of sequential patterns (Carpenter et al, 1990).

Visual Monitoring Task (VMT). The VMT (Knutson et al, 1991) also requires participants to correctly identify sequential information, but the VMT requires rapid responding and working visual memory. On the VMT, participants view single numbers presented on a computer monitor at either a one/sec rate or a one/2 sec rate. When the currently presented number, combined with the preceding two numbers, produces an even-odd-even pattern, the participant is required to strike the space bar on a PC keyboard. Thus, a participant must continuously update their recollection of preceding sequences and respond to the current stimulus. Participant performance reflects correct responses and correct rejections as well as erroneous responses (i.e., false positive responses) and failures to respond (i.e., false rejections). These four response types are aggregated into a single score using receiver operating characteristic (ROC) analysis to provide a measure of signal discrimination that is independent of the participant's response biases and decision criterion. One commonly used measure of signal detectability is the proportion of the area of the graph that lies below the ROC curve (Gescheider, 1997), which is a distribution-free measure of signal detection (Green and Swets, 1988). In the present study, this area is computed from the empirical ROC points using the trapezoidal rule. Thus, for each presentation rate, the participant's score is the trapezoidal area under the ROC curve. The mean score from the two presentation rates was used in the predictive models tested.
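As an illustration of that scoring step, the trapezoidal area can be computed directly from the empirical ROC points. The sketch below assumes a single (false-alarm rate, hit rate) point per presentation rate plus the chance endpoints, which is one standard way to apply the rule rather than the study's exact computation.

```python
def trapezoidal_roc_area(points):
    """Area under an empirical ROC curve by the trapezoidal rule.

    points: iterable of (false_alarm_rate, hit_rate) pairs; the endpoints
    (0, 0) and (1, 1) are added automatically before integration.
    """
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# One presentation rate: 80% hits, 15% false positives
rate1 = trapezoidal_roc_area([(0.15, 0.80)])   # 0.825
# The other rate: 70% hits, 20% false positives
rate2 = trapezoidal_roc_area([(0.20, 0.70)])   # 0.75
print((rate1 + rate2) / 2.0)  # 0.7875, the mean score entered in the models
```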

Speech Measures

Speech perception data has been collected using a variety of speech measures since the inception of our clinical research center in 1985. The two tests included in this study were the Consonant-Nucleus Consonant Test (CNC) (Tillman and Carhart, 1966), and the Hearing in Noise Test (HINT) (Nilsson et al, 1994). These two measures have been used most frequently over this time span and are commonly used measures of speech reception among CI centers. Speech perception was measured in quiet using recorded CNC monosyllabic words and HINT sentences. CNC scoring was based on percent-correct performance at the word level, and the HINT sentences were scored by dividing the total number of key words correctly identified by the total number of key words possible. Two lists of CNC words and four lists of HINT sentences were presented to each participant. All speech perception lists were randomized between participants, and no participant received two of the same lists during this study. Speech recognition tests were presented at 70 dB(C) or 60 dB(A).
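The HINT score therefore reduces to a simple proportion; a minimal sketch of the key-word scoring described above (hypothetical counts, not the clinic's scoring software):

```python
def hint_proportion(key_words_correct, key_words_possible):
    """HINT score: key words correctly identified divided by key words possible."""
    return key_words_correct / key_words_possible

# Example: 152 of 200 key words repeated correctly across four sentence lists
print(hint_proportion(152, 200))  # 0.76
```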

Statistical Analysis

Multiple linear regression models were used to analyze the relations among the independent predictor variables and each of the continuous dependent variables. These dependent variables were PRT, FMR, TR, MEAM-L, and MEAM-I.
In order to represent categorical variables in the models, dummy variables were introduced. For these models there were 27 possible predictor variables. We wanted to identify the variables that best predicted each of the dependent variables. With the large number of predictor variables involved, we used univariate tests as a screening measure and selected only the variables that had p-values less than .15 for further study. These univariate tests were simple linear regression analyses for the continuous variables, t tests for dichotomous variables, and ANOVA tests for categorical variables. Variables that passed the screening measure were included in the multiple regression analyses. To identify the influential variables in the multiple regression analysis, and to eliminate the extraneous variables, we selected the best regression models on the basis of Akaike's information criterion (AIC; Akaike, 1973). The top-ranked AIC-selected model for each outcome was chosen as the final model for interpretation.
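The screen-then-select workflow can be sketched as follows. This is an illustrative Python/statsmodels reconstruction of the procedure just described (the original analyses used SAS PROC REG); the data frame and column names are hypothetical and assumed numeric, and for brevity every predictor is screened with a univariate regression, whereas the study used t tests and ANOVA for dichotomous and categorical predictors.

```python
from itertools import combinations

import statsmodels.api as sm

def screen_predictors(df, outcome, predictors, alpha=0.15):
    """Keep predictors whose univariate p-value falls below the screening cutoff."""
    kept = []
    for pred in predictors:
        sub = df[[outcome, pred]].dropna()
        fit = sm.OLS(sub[outcome], sm.add_constant(sub[pred])).fit()
        if fit.pvalues[pred] < alpha:
            kept.append(pred)
    return kept

def best_aic_model(df, outcome, predictors):
    """Exhaustive search over subsets of the screened predictors; lowest AIC wins."""
    best_fit = None
    for k in range(1, len(predictors) + 1):
        for subset in combinations(predictors, k):
            sub = df[[outcome, *subset]].dropna()
            fit = sm.OLS(sub[outcome], sm.add_constant(sub[list(subset)])).fit()
            if best_fit is None or fit.aic < best_fit.aic:
                best_fit = fit
    return best_fit
```

With a screened predictor set of modest size, the exhaustive subset search is feasible and amounts to ranking candidate models by AIC.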

Generalized linear mixed models (GLMM) were used to analyze the relations between the independent predictor variables and the musical excerpt recognition variables (MERT-L and MERT-I). GLMMs can be used in circumstances where the outcome measure is not normally distributed (e.g., categorical) and where there are repeated measurements per participant. In the present case, the tested model is also unbalanced because of the different number of excerpts used for each participant. The possibility of testing an unbalanced model is another benefit of the flexibility of a GLMM. The excerpts that were chosen were meant to be a representative sample of any excerpt with or without lyrics; thus, the variable excerpt was treated as a random effect in the analyses. The tested model included both fixed effects and random effects for a dichotomous outcome repeated over time; thus, it is a generalized linear mixed model with random effects for participant and song. There is no corresponding model selection criterion for this type of model, and stepwise variable selection was used.
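Written out, the model implied by this description is a logistic mixed model with crossed random effects for participant and excerpt; the notation below is ours, added for clarity:

```latex
% Y_ij = 1 if participant i recognized excerpt j, and 0 otherwise
\operatorname{logit} \Pr(Y_{ij} = 1) = \mathbf{x}_{ij}^{\top} \boldsymbol{\beta} + u_i + v_j,
\qquad u_i \sim N(0, \sigma_u^2), \quad v_j \sim N(0, \sigma_v^2)
```

where u_i is the participant random effect, v_j is the excerpt (song) random effect, and x_ij collects the fixed-effect predictors.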

Due to the nature of patient participation and the testing format as described previously, a large number of patients did not have time to take at least one of the tests. There is, therefore, a large degree of item nonresponse, and only 72 of the 209 participants took every test. Each selected model was then fit using the number of available cases for that analysis. In other words, we used the participants with measurements for all relevant dependent variables for that outcome. This approach resulted in significantly larger sample sizes than were used in model selection. The regression (PROC REG) and GLMM (PROC GLIMMIX) analyses were performed using SAS 9.1 software (SAS Institute, 2004).

RESULTS

Descriptive statistics summarizing the dependent and independent variables for the total sample of CI users contributing to the analyses are presented in Table 2. Preliminary analyses of the implant devices and strategies used by the individuals in the sample (see Table 1) failed to identify any statistically significant influence of device or strategy on music perception or appraisal. Thus, it is important to note that neither device nor strategy were influential variables in any of the models that were tested.

The results of the model selection procedures are shown in Table 3. For each outcome, the number of individuals used for the final analysis is presented, along with the coefficient of multiple determination (r2), the p-value for the overall F test, the variables present in the AIC-selected model, their regression coefficients, and the p-value of the t test for the individual parameters. Note that the GLMM analyses have neither corresponding r2 values nor an overall F test. All models were significant at the .05 level. In the results from the individual models that follow, it is worth noting that, because the models were selected using a subset of the subjects that are used to fit the resulting model (see "Statistical Analysis" section for details), some of the models selected contain covariates with nonsignificant p-values (p > .05). This is likely caused by random variation between the samples, but it could be due to systematic differences between the two sets of subjects with regard to the relationship in question. In either case, there is statistical support for the position that these variables are at least moderately linked to the outcome.


Pitch Ranking Test (PRT)

The t tests for the individual parameters of the regression models show that high school/adult music training (p ≤ .115), Visual Monitoring Task score (p < .001), and CNC score (p ≤ .019) were determined to be the best set of predictors of pitch recognition. Higher scores for all three of these measures are associated with superior pitch recognition test scores. For example, if all other covariates remained the same, an increase of 10% in an individual's VMT score increases the expected value of his/her PRT score by approximately seven points.
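That interpretation follows directly from the fitted coefficients reported in Table 3. The intercept is not reported there, so the sketch below (ours, for illustration) computes only the change in the predicted PRT score implied by a change in the predictors:

```python
# AIC-selected PRT model coefficients from Table 3
beta = {"VMT": 70.38, "MT2": 2.45, "CNC": 50.56}

def delta_prt(d_vmt=0.0, d_mt2=0.0, d_cnc=0.0):
    """Change in the predicted PRT score for given changes in the predictors,
    holding everything else constant."""
    return beta["VMT"] * d_vmt + beta["MT2"] * d_mt2 + beta["CNC"] * d_cnc

# A 10-percentage-point (0.10) increase in VMT score:
print(round(delta_prt(d_vmt=0.10), 2))  # 7.04, i.e., roughly seven points
```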

Familiar Melody Recognition (FMR)

The regression analysis for melody recognition reveals that wearing a hearing aid (p ≤ .083), high school/adult music training (p ≤ .014), and Visual Monitoring Task score (p < .001) yielded the best set of predictors of melody recognition. Higher FMR scores were associated with users who wore hearing aids, had more musical training, and had higher Visual Monitoring Task scores. For example, adjusted for VMT scores and musical training, hearing aid users recognized familiar melodies 9.4% more often than nonusers.

Musical Excerpt Recognition Test (MERT-I)—Instrumental

For song recognition without lyrics, use of a hearing aid (p ≤ .01), high school/adult music training (p < .001), and age (p < .001) proved to be the best model. Recognition of excerpts declined with age and was found to improve with hearing aid use and musical training.

Musical Excerpt Recognition Test (MERT-L)—Lyrics Present

For the recognition of songs with lyrics, the covariates months of use (p ≤ .028), music listening experience after implantation (p ≤ .022), speech perception (HINT sentence scores) (p ≤ .006), and age (p < .001) comprised the best model. Recognition of excerpts with lyrics declined with age but improved with increased months of CI use, postimplantation music listening experience, and better speech perception.

Table 2. Summary of the Variables Used for Model Selection

Variable   Meaning                                                  Obs.   Mean     Min.   Max.

Independent Variables
Device     Cochlear implant hardware                                209    -        -      -
Strategy   Cochlear implant software                                207    -        -      -
Bilat      Bilateral CI user (1 = yes, 0 = no)                      209    0.12     0      1
HA         Hearing aid user (1 = yes, 0 = no)                       147    0.21     0      1
LPD        Length of profound deafness (years)                      176    10.75    0      58
YrImplnt   Year of implant                                          209    1997.73  1984   2004
MOU        Months of use                                            209    42.26    9      255
MLE-pre    Music listening experience prior to implantation         204    5.12     0      8
MLE-post   Music listening experience after implantation            204    3.71     0      8
MT1        Amount of music training (elementary school)             194    5.35     0      27
MT2        Amount of music training (high school & adult)           180    1.20     0      20
VMT1       Visual Monitoring Task 1                                 166    0.70     0      1
VMT2       Visual Monitoring Task 2                                 165    0.63     0      1
RPM        Raven Progressive Matrices                               166    48.64    11     78
SCT        Sequence Completion Task                                 168    21.03    0      48
CNC        Consonant-Nucleus Consonant Test                         192    0.46     0      0.92
HINT       Hearing in Noise Test                                    195    0.76     0      1
Age        Age at time of test                                      209    60.18    23.7   92.5

Dependent Variables
PRT        Pitch Ranking Test (total out of 540)                    95     363.94   214    493
FMR        Familiar melody recognition (percent correct)            147    25.73    0      87.5
MERT-I     Melody recognition, instrumental (percent correct)       134    4.48     0      50
MERT-L     Melody recognition, lyrics present (percent correct)     134    13.82    0      87.5
TR         Timbre recognition (percent correct)                     177    43.88    6.7    100
MEAM-I     Melody appraisal, instrumental (range 0–100)             160    49.04    7.1    94.5
MEAM-L     Melody appraisal, with lyrics (range 0–100)              160    60.50    28.3   96.4

Timbre Recognition (TR)

Music listening experience after implantation (p ≤ .029), high school/adult music training (p ≤ .011), and speech perception (as measured by HINT sentence scores) (p < .001) were found to be the best predictors of timbre recognition. Higher measures for all three covariates were associated with increased timbre recognition.

Musical Excerpt Appraisal Measure (MEAM-I)—Instrumental

The predictor variables in the best model for instrumental song appraisal were revealed to be music listening experience after implantation (p ≤ .014) and the average Visual Monitoring Task score (p ≤ .076). Higher measures of both variables were linked to greater appraisal of instrumental music.

Musical Excerpt Appraisal Measure (MEAM-L)—Lyrics Present

Music listening experience prior to implantation (p ≤ .096), bilateral implantation (p ≤ .028), speech perception (HINT sentence scores) (p ≤ .02), and use of a hearing aid (p ≤ .065) proved to be the predictors yielding the best model for appraisal of music containing lyrics. Hearing aid users, as well as CI users with bilateral implantation, were associated with higher appraisal of music containing lyrics. Higher appraisal of music with lyrics was also associated with more accurate speech perception and music listening experience prior to implantation.

DISCUSSION

When speech perception is the criterion for implant benefit, accuracy of perception is the standard of measurement. When music perception is the criterion of implant benefit, accuracy of perception is only one criterion. The other criterion is the appraisal of the percept. That is, although the ability to recognize a musical excerpt can reflect an important outcome of implantation, whether the implant user evaluates the musical signal positively will determine whether they choose to listen to music and whether they evaluate the outcome of the implant favorably. For that reason, the current analyses distinguished between the ability to correctly identify musical stimuli and the subject's appraisal of those musical stimuli. By adopting a multiple regression strategy, it was possible to determine whether combinations of variables were better than individual predictors, but the strategy also made it possible to eliminate individual predictors that provide essentially redundant information. Importantly, for most of these analyses, the significant predictors of percept accuracy were not predictors of appraisal. Similarly, variables that predicted appraisal were not necessarily significant predictors of accuracy. Thus, not only are perceptual accuracy and perceptual appraisal different indices of implant benefit, these indices are predicted by different combinations of variables. These results not only add to the justification for considering accuracy and appraisal differently, they underscore the possibility that different processes may underlie the different aspects of musical perception by implant users.

Table 3. AIC-Selected Regression Models

Dependent variable   n     r2      P-value   Predictor variables   β        P-value
PRT                  98    0.209   <0.001    VMT                   70.38    <0.001
                                             MT2                    2.45     0.115
                                             CNC                   50.56     0.019
FMR                  86    0.314   <0.001    HA                     9.44     0.083
                                             MT2                    1.66     0.014
                                             VMT                   42.74    <0.001
MERT-I               89    -       -         HA                     0.12     0.01
                                             MT2                    0.02    <0.001
                                             Age                   -0.01    <0.001
MERT-L               129   -       -         MOU                    0.00     0.028
                                             MLE-post               0.02     0.022
                                             HINT                   0.21     0.006
                                             Age                   -0.01    <0.001
TR                   149   0.169   <0.001    MLE-post               1.87     0.029
                                             MT2                    1.46     0.011
                                             HINT                  21.43    <0.001
MEAM-I               144   0.068   0.007     MLE-post               1.61     0.014
                                             VMT                    7.97     0.076
MEAM-L               110   0.177   0.0004    MLE-pre                0.87     0.096
                                             Bilat                  6.25     0.028
                                             HINT                  10.88     0.02
                                             HA                     5.21     0.065

Note: r2 = coefficient of multiple determination; β = regression coefficient. The GLMM analyses (MERT-I and MERT-L) have no corresponding r2 or overall F test.

The relations between the dependent and predictor variables, and the clinical implications of these relations, are clarified by classifying predictors into three broad categories: technical factors, individual differences, and life experiences. With regard to technical factors (see Figure 1), neither device nor strategy were significant predictors of any of the five dependent variables, nor did they account for added variance in multiple regression analyses. Thus, the notion that some devices and some strategies would afford implant users greater capacity for musical perception and appreciation was not supported in these analyses. The use of hearing aids, however, was a significant contributing factor for several measures of recognition and appraisal of music, with and without lyrics. These results are consistent with research regarding bimodal stimulation for those CI users with a long electrode array who have usable acoustic hearing (most typically in the nonimplanted ear). Preliminary studies with such recipients indicate that the usable acoustic hearing (e.g., in a bimodal condition of a CI plus a contralateral hearing aid) can enhance some aspects of music listening such as pitch perception or melody recognition (Büchler et al, 2004; Dillier, 2004; Kong et al, 2005). With regard to more conventional internal arrays, this finding does suggest that implant users who can also use a hearing aid in either ear might enjoy enhanced music appreciation relative to persons who only use an implant. From the standpoint of evolving implant design, this finding can be seen as supporting recent efforts to develop implant designs or surgical techniques that help to preserve residual hearing (James et al, 2006).

Figure 1. Relations between dependent variables and factors related to technical differences. The arrows indicate the presence of statistically significant relations between the dependent variables and predictor variables classified as technical differences. The p value is indicated by the type of arrow. NS indicates variables are not significant.

With regard to individual differences (see Figures 2a and 2b) predicting music perception, those variables related to hearing history (e.g., length of implant experience) show an interesting contrast relative to speech reception data, in which improved speech perception scores are associated with more implant experience (cf. Tyler and Summerfield, 1996). As the present data indicate, greater length of implant experience with the CI is a predictor only for recognition of real-world melodies that also have linguistic information. In short, when musical stimuli have lyrics, some factors that have been shown to influence speech perception also influence accuracy of music perception. Greater age at time of testing being negatively correlated with recognition of the real-world melodies in this protocol merits further investigation. Although it might be assumed that older adults would have less familiarity with the selections in the music tests, the selections in those tests were based on considerable evidence that the melodies would be familiar to the population studied (Gfeller et al, 2005). Thus, while replication with other melodies might be considered, it is likely that other processes associated with aging might account for the results.

The limited contribution of performance on speech perception tests in predicting music perception and appraisal reinforces the notion that performance on music perception tests does not provide redundant information in the evaluation of implant success. As the analyses indicate, speech perception (i.e., performance on the HINT) contributes primarily to the prediction of those tests that include lyrics. Although the association between better speech perception and the perception of musical excerpts with lyrics is not surprising, it is the general absence of an association between speech perception and musical perception that is important to note. Quite simply, in isolation or in combination with other variables, strong performance on speech perception measures largely did not predict music perceptual accuracy or positive appraisal of music in the absence of lyrics.

There were two exceptions to the poor prediction of music perception without lyrics. The HINT scores combined with musical training and postimplant listening experience contributed to timbre recognition, and the CNC scores combined with a cognitive measure and musical training contributed to the prediction of pitch ranking. These results may point to some general ability to respond to acoustic events that is tapped by both speech perception tests and music perception tests. The link between speech perception and music perception involving lyrics and the absence of a link between speech perception and music perception without lyrics is quite consistent with research that distinguished between those features of speech and music that are more salient to understanding, in particular, spectral information being more important for perception of pitch and timbre, and broad temporal features being adequate for perception of speech (e.g., Qin and Oxenham, 2003; Kong et al, 2004). Yet, the prediction of timbre and pitch ranking by speech perception measures might seem to be inconsistent with such findings. It is important to note, however, that neither the HINT nor the CNC scores alone accounted for much variance in timbre recognition or pitch ranking. Thus, it seems probable that these measures all reflect a fundamental ability to respond to acoustic events.

Of the three cognitive measures we included in the tested models, the Visual Monitoring Task (VMT) emerged as the best predictor—perhaps because it not only requires correct identification of sequential information, as do the RPM and the SLT, but it also requires rapid responding and working memory, which may be more reflective of the task demands facing a CI recipient trying to extract meaningful information from a complex, rapidly changing but degraded musical signal. To the extent that the VMT has been an effective predictor of speech perception (cf. Knutson et al, 1991; Knutson, 2006), the utility of the VMT in contributing to the prediction of music perception and appraisal in the absence of lyrics suggests that some specialized cognitive processing and general working memory is critically important for the realization of maximum benefit from the existing implant designs. Although other research with pediatric implant recipients has implicated working memory in speech perception outcomes (e.g., Pisoni and Geers, 2000; Cleary et al, 2002; Pisoni and Cleary, 2003), that research has caused Cleary et al (2002) to argue that auditory working memory rather than general working memory underlies implant benefit. In the present study, using cross-modality (vision and hearing) prediction from the VMT, there is evidence that adult implant recipients with better visual working memory are likely to realize greater benefits from the implant, especially when other favorable predictors are present. Thus, additional work on the role of general and modality-specific working memory in implant benefit is indicated.

Figure 2 (panels a and b). Relations between dependent variables and factors related to individual differences. The arrows indicate the presence of statistically significant relations between the dependent variables and predictor variables classified as individual differences. The p value is indicated by the type of arrow. NS indicates variables are not significant.

With regard to life experience (see Figure 3), while general music education in grade school was not a significant predictor, we did find formal music training in high school, college, and beyond to be a significant predictor for four dependent variables with no lyrics. Importantly, in previous studies that we have conducted with smaller samples, we did not find formal training prior to implantation to be significantly correlated with greater accuracy on a number of measures of music perception. In the more comprehensive analyses with a larger sample size in the present study, formal music training at a more advanced level has now emerged as a strong predictor of success on tasks with no lyrics, in which the listener must rely more on spectral information. From a rehabilitative perspective, it is hopeful to note that more time spent in listening to music after implantation is associated with greater accuracy and appraisal of real-world music with and without lyrics. It is also important to note that the positive experience that comes with better accuracy and appraisal of music might also enhance the willingness of implant users to engage in music listening opportunities.

The key findings of this study have several implications for future research initiatives in the implant field. First of all, it is important to recognize that factors can contribute differentially to the variability among adult CI recipients with regard to music perception and music enjoyment. These results underscore the importance of evaluating both the accuracy of music perception and the degree of music enjoyment as distinguishable indices of implant outcome. Because perceptual acuity and appraisal emerged as two distinct aspects of music listening, they cannot be assessed with a single omnibus measure of music listening and perception.

The presence or absence of sung lyrics as an important moderator variable is not surprising, given that the CI has been designed primarily to assist in the communication of linguistic information. It is important to acknowledge, however, that not all CI recipients can extract the lyrics from music. To some extent, one might consider the instrumental accompaniment that typically supports a vocalist to be the equivalent of background noise (especially in the case of larger and louder accompaniments), since it can mask and thus compete with the perception of the target lyrics. In addition, some studies have shown that word recognition of sung lyrics can be more difficult than that of spoken lyrics (Hsiao et al, 2006); thus the changes in phoneme production that are common in singing (Vongpaisal et al, 2005) may result in reduced word recognition.


Figure 3. Relations between dependent variables and factors related to life experiences. The arrows indicate the presence of statistically significant relations between the dependent variables and predictor variables classified as life experiences. The p value is indicated by the type of arrow. NS indicates variables are not significant.


In short, the extent of success in music listening by CI users will vary depending upon a variety of factors, including the age of the listener, the response task (open or closed set), the familiarity or difficulty of the vocabulary, the hearing history of the listener, cognitive skills, and whether a musical accompaniment essentially overpowers the singer.

The emergence of acuity, appraisal, and lyrics as distinct aspects determining music perception by CI users has implications for selecting test stimuli as well as for the response tasks that might be incorporated in rehabilitative training programs. Furthermore, life experiences or individual characteristics significantly influenced all seven of the music perception indices examined in this study. While CI users tend to show significant improvements in speech recognition as a result of everyday listening experiences, we do not see significant improvements in music perception and enjoyment as a result of incidental exposure to music in everyday life. This has implications for counseling CI patients with regard to device benefit and making judicious choices about listening experiences, as well as for providing training to help people optimize CI benefit for music. Because not all CI recipients have usable residual hearing and long electrodes remain the primary commercial option for implant recipients, postimplant training may be one of the more viable options for improving music perception and enjoyment for those who use contemporary technology and aspire to having some music in their lives.

REFERENCES

Akaike H. (1973) Information theory as an extension of the maximum likelihood principle. In: Petrov VN, Csaki F, eds. Second International Symposium on Information Theory. Budapest, Hungary: Akademiai Kiado, 267–281.

Büchler M, Lai WK, Dillier N. (2004) Music perception with cochlear implants. Poster presented at ZNZ Symposium, Zurich. www.unizh.ch/orl/publications/.

Carpenter PA, Just MA, Shell P. (1990) What one intelligence test measures: a theoretical account of the processing in the Raven Progressive Matrices Test. Psychol Rev 97(3):404–431.

Cleary M, Pisoni DB, Kirk KI. (2002) Working memory spans as predictors of spoken word recognition and receptive vocabulary in children with cochlear implants. Volta Rev 102:259–280.

Dillier N. (2004) Combining cochlear implants and hearing instruments. In: Proceedings of the Third International Pediatric Conference, Nov 2004, Chicago, IL. Phonak, 163–172.

Dorman M, Basham K, McCandless G, Dove H. (1991) Speech understanding and music appreciation with the Ineraid cochlear implant. Hear J 44(6):32–37.

Fujita S, Ito J. (1999) Ability of Nucleus cochlear implantees to recognize music. Ann Otol Rhinol Laryngol 108:634–640.

Gescheider GA. (1997) Psychophysics: The Fundamentals. 3rd ed. Mahwah, NJ: L. Erlbaum Associates.

Gfeller KE. (2001) Aural rehabilitation of music listening for adult cochlear implant recipients: addressing learner characteristics. Music Ther Perspect 19:88–95.

Gfeller K, Christ A, Knutson J, Witt S, Mehr M. (2003) The effects of familiarity and complexity on appraisal of complex songs by cochlear implant recipients and normal-hearing adults. J Music Ther 40(2):78–112.

Gfeller K, Christ A, Knutson J, Witt S, Murray K, Tyler R. (2000a) Musical backgrounds, listening habits and aesthetic enjoyment of adult cochlear implant recipients. J Am Acad Audiol 11:390–406.

Gfeller K, Knutson JF, Woodworth G, Witt S, DeBus B. (1998) Timbral recognition and appraisal by adult cochlear implant users and normal-hearing adults. J Am Acad Audiol 9:1–19.

Gfeller K, Lansing C. (1991) Melodic, rhythmic and timbral perception of adult cochlear implant users. J Speech Hear Res 34:916–920.

Gfeller K, Lansing C. (1992) Musical perception of cochlear implant users as measured by the Primary Measure of Music Audiation: an item analysis. J Music Ther 29(1):18–39.

Gfeller K, Olszewski C, Rychener M, Sena K, Knutson JF, Witt S, Macpherson B. (2005) Recognition of "real-world" musical excerpts by cochlear implant recipients and normal-hearing adults. Ear Hear 26(3):237–250.

Gfeller K, Olszewski C, Turner C, Gantz B, Oleson J. (2006) Music perception with cochlear implants and residual hearing. Audiol Neurootol 11(Suppl. 1):12–15.

Gfeller K, Turner C, Oleson J, Zhang X, Gantz B, Froman R, Olszewski C. (2007) Accuracy of cochlear implant recipients on pitch perception, melody recognition and speech reception in noise. Ear Hear 28(3):412–423.

Gfeller K, Turner C, Woodworth G, Mehr M, Fearn R, Witt S, Stordahl J. (2002a) Recognition of familiar melodies by adult cochlear implant recipients and normal-hearing adults. Cochlear Implants Int 3:31–55.

Gfeller K, Witt S, Adamek M, Mehr M, Rogers J, Stordahl J. (2002b) The effects of training on timbre recognition and appraisal by postlingually deafened cochlear implant recipients. J Am Acad Audiol 13:132–145.

Gfeller K, Witt S, Stordahl J, Mehr M, Woodworth G. (2000b) The effects of training on melody recognition and appraisal by adult cochlear implant recipients. J Acad Rehabil Audiol 33:115–138.


Gfeller K, Witt S, Woodworth G, Mehr M, Knutson JF. (2002c) Effects of frequency, instrumental family, and cochlear implant type on timbre recognition and appraisal. Ann Otol Rhinol Laryngol 111:349–356.

Gfeller K, Woodworth G, Robin D, Witt S, Knutson JF. (1997) Perception of rhythmic and sequential patterns by normally hearing adults and adult cochlear implant users. Ear Hear 18:252–260.

Green DM, Swets JA. (1988) Signal Detection Theory and Psychophysics. Los Altos, CA: Peninsula Publishing. (Orig. pub. 1966.)

Hong RS, Rubinstein JT. (2003) High-rate conditioning pulse trains in cochlear implants: dynamic range measures with sinusoidal stimuli. J Acoust Soc Am 114(6):3327–3342.

Hsiao F, Gfeller K, Huang T, Hsu H. (2006) Perception of familiar melodies by Taiwanese pediatric cochlear implant recipients who speak a tonal language. Paper presented at the 9th International Conference on Cochlear Implants, Vienna, Austria.

Huron D. (2004) Is music an evolutionary adaptation? In: Peretz I, Zatorre R, eds. The Cognitive Neuroscience of Music. Oxford: Oxford University Press, 57–78.

James CJ, Fraysse B, Deguine O, Lenarz T, Mawman D, Ramos A, Ramsden R, Sterkers O. (2006) Combined electroacoustic stimulation in conventional candidates for cochlear implantation. Audiol Neurootol 11(Suppl. 1):57–62.

Knutson JF. (2006) Psychological aspects of cochlear implantation. In: Cooper H, Craddock L, eds. Cochlear Implants: A Practical Guide. 2nd ed. London: Whurr Publishers Limited, 151–178.

Knutson J, Hinrichs J, Tyler R, Gantz B, Schartz H, Woodworth G. (1991) Psychological predictors of audiological outcomes of multichannel cochlear implants: preliminary findings. Ann Otol Rhinol Laryngol 100:817–822.

Kong Y, Cruz R, Jones J, Zeng F. (2004) Music perception with temporal cues in acoustic and electric hearing. Ear Hear 25(2):173–185.

Kong Y, Stickney G, Zeng F. (2005) Speech and melody recognition in binaurally combined acoustic and electric hearing. J Acoust Soc Am 117:1351–1361.

Laneau J, Wouters J, Moonen M. (2004) Relative contributions of temporal and place pitch cues to fundamental frequency discrimination in cochlear implantees. J Acoust Soc Am 116(6):3606–3619.

Leal M, Shin Y, Laborde M, Calmels M, Vergas S, Lugardon S, Andrieu S, Deguine O, Fraysse B. (2003) Music perception in adult cochlear implant recipients. Acta Otolaryngol 123:826–835.

Looi V, McDermott H, McKay C, Hickson L. (2004) Pitch discrimination and melody recognition by cochlear implant users. Paper presented at the VIII Cochlear Implant Conference, Indianapolis, Indiana.

McDermott H. (2004) Music perception with cochlear implants: a review. Trends Amplif 8(2):49–81.

Nielzen S, Cesarec Z. (1982) Emotional experiences of music as a function of musical structure. Psychol Music 10:81–85.

Nilsson M, Soli SD, Sullivan JA. (1994) Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am 95:1085–1099.

Pijl S. (1997) Labeling of musical interval size by cochlear implant patients and normally hearing subjects. Ear Hear 18:364–372.

Pisoni DB, Cleary M. (2003) Measures of working memory span and verbal rehearsal speed in deaf children after cochlear implantation. Ear Hear 24:106S–120S.

Pisoni DB, Geers A. (2000) Working memory in deaf children with cochlear implants: correlations between digit span and measures of spoken language processing. Ann Otol Rhinol Laryngol 109:92–93.

Qin MK, Oxenham AJ. (2003) Effects of simulated cochlear-implant processing on speech reception in fluctuating maskers. J Acoust Soc Am 114:446–454.

Raven JC, Court JH, Raven J. (1977) Manual for Raven's Progressive Matrices and Vocabulary Scales. London: Lewis.

Rubinstein JT, Wilson BS, Finley CC, Abbas PJ. (1999) Pseudospontaneous activity: stochastic independence of auditory nerve fibers with electrical stimulation. Hear Res 127(1–2):108–118.

SAS Institute. (2004) SAS/STAT 9.1 User's Guide. Cary, NC: SAS Institute.

Schultz E, Kerber M. (1994) Music perception with the MED-EL implants. In: Hochmair-Desoyer LJ, Hochmair EC, eds. Advances in Cochlear Implants. Vienna, Austria: Menz, 326–332.

Simon H, Kotovsky K. (1963) Human acquisition of concepts for sequential patterns. Psychol Rev 70:543–546.

Tillman TW, Carhart R. (1966) An Expanded Test for Speech Discrimination Utilizing CNC Monosyllabic Words. Northwestern University Auditory Test No. 6 (Technical Report No. SAM-TR-66-55). Brooks Air Force Base, TX: USAF School of Aerospace Medicine.

Tyler RS, Summerfield AQ. (1996) Cochlear implantation: relationships with the research on auditory deprivation and acclimatization. Ear Hear 17:38S–50S.

Vongpaisal T, Trehub S, Schellenberg G. (2005) Challenges of decoding the words of songs. Paper presented at 10th Symposium on Cochlear Implants in Children, Dallas.

Waltzman SB, Fisher SG, Niparko JK, Cohen NL. (1995) Predictors of postoperative performance with cochlear implants. Ann Otol Rhinol Laryngol 165(Suppl.):15–18.

Wilson B. (2000) Cochlear implant technology. In: Kirk KI, Niparko JK, Mellon NK, Robbins AM, Tucci DL, Wilson BS, eds. Cochlear Implants: Principles and Practices. New York: Lippincott Williams and Wilkins, 109–118.
