SPEECH PERCEPTION AND AUDITORY PERFORMANCE IN HEARING-IMPAIRED ADULTS WITH A MULTICHANNEL COCHLEAR IMPLANT

Abstract in Finnish

TAINA VÄLIMAA

Department of Finnish, Saami and Logopedics and Department of Otorhinolaryngology, University of Oulu

OULU 2002



TAINA VÄLIMAA

SPEECH PERCEPTION AND AUDITORY PERFORMANCE IN HEARING-IMPAIRED ADULTS WITH A MULTICHANNEL COCHLEAR IMPLANT

Academic Dissertation to be presented with the assent of the Faculty of Humanities, University of Oulu, for public discussion in Raahensali (Auditorium L 10), Linnanmaa, on September 27th, 2002, at 12 noon.

OULUN YLIOPISTO, OULU 2002


Copyright © University of Oulu, 2002

Supervised by
Professor Matti Lehtihalmes
Professor Martti Sorri

Reviewed by
Professor Antti Iivonen
Professor Reijo Johansson

ISBN 951-42-6817-2 (URL: http://herkules.oulu.fi/isbn9514268172/)

ALSO AVAILABLE IN PRINTED FORMAT
Acta Univ. Oul. B 47, 2002
ISBN 951-42-6816-4
ISSN 0355-3205 (URL: http://herkules.oulu.fi/issn03553205/)

OULU UNIVERSITY PRESS
OULU 2002


Välimaa, Taina, Speech perception and auditory performance in hearing-impaired adults with a multichannel cochlear implant
Department of Finnish, Saami and Logopedics, University of Oulu, P.O. Box 1000, FIN-90014 University of Oulu, Finland
Department of Otorhinolaryngology, University of Oulu, P.O. Box 5000, FIN-90014 University of Oulu, Finland
Oulu, Finland 2002

Abstract

This work was aimed at studying speech perception and auditory performance in the everyday lives of Finnish-speaking postlingually severely or profoundly hearing-impaired adults before and after receiving a multichannel cochlear implant. The association between the formal speech perception results and auditory performance in everyday life was also determined, and an effort was made to define how well a smaller sample represents the nationwide results.

The patient series comprised a nationwide retrospective survey (N = 67), in which data on hearing level and word recognition were requested from the hospitals, and a prospective sample from the city of Oulu (N = 20), in whom hearing level, sentence, word and phoneme recognition and phoneme confusions were examined using standardised audiometric measures and formal speech perception tests in a study with a prospective repeated measures design. Categories of auditory performance in everyday life were assessed in both samples.

The median sound field hearing level at frequencies of 0.5, 1, 2 and 4 kHz for the subjects in the nationwide survey one year after the switch-on of the implant was comparable to the level of mild hearing impairment. All the subjects achieved at least some open-set word recognition auditorily only (mean 71%, 95% CI 61–81%). The results in the Oulu sample were in line with the nationwide survey. A majority of the subjects (31/40) were able to understand conversation without speechreading one year after switch-on.

Sentence recognition by the subjects in the Oulu sample improved most during the initial six months after the switch-on of the implant, whereas word and phoneme recognition improved steadily during the two-year follow-up period. Estimated average sentence recognition after two years was 89% (95% CI 71 to 106%), word recognition 73% (95% CI 58 to 87%), syllable recognition 53% (95% CI 42 to 63%), vowel recognition 80% (95% CI 68 to 92%) and consonant recognition 67% (95% CI 57 to 76%). Phonemes were confused most often with the phoneme closest in spectral energy distribution located at higher frequencies. The association between auditory performance in everyday life and the formal speech perception tests was high (rs > 0.81, p < 0.0001).

Systematic prospective assessment of speech perception with tests of differing difficulty is recommended for the follow-up of adult cochlear implant users.

Keywords: adult, auditory performance, cochlear implant, phoneme confusions, phoneme perception, postlingual hearing impairment


Välimaa, Taina, Speech perception and auditory performance in hearing-impaired adults with a multichannel cochlear implant
Department of Finnish, Saami and Logopedics, University of Oulu, P.O. Box 1000, FIN-90014 University of Oulu, Finland
Department of Otorhinolaryngology, University of Oulu, P.O. Box 1000, FIN-90014 University of Oulu, Finland
Acta Univ. Oul. B 47, 2002
Oulu, Finland

Tiivistelmä (Abstract in Finnish)

The aim of this work was to study the hearing level, speech reception and everyday auditory performance, with a multichannel cochlear implant, of Finnish-speaking adults with postlingually acquired severe or profound hearing impairment. The study also examined how well tests of speech reception describe coping in everyday life with the hearing made possible by the cochlear implant. A further aim was to determine how well a small sample represents the nationwide results.

The study comprised a retrospective nationwide sample (N = 67) and a prospective Oulu sample (N = 20). In the nationwide sample, data on hearing level and word recognition ability were collected from the subjects' medical records at the university hospitals. In the Oulu sample, hearing level, recognition of sentences, words and phonemes, and phoneme confusions were studied with audiometry and speech reception tests during a two-year follow-up. Everyday auditory performance was assessed in both samples using the categories of auditory performance.

The median sound field hearing threshold with the cochlear implant at frequencies of 0.5, 1, 2 and 4 kHz for the subjects in the nationwide sample was comparable to the level of mild hearing impairment one year after switch-on of the implant. All the subjects were able to recognise at least some words by hearing alone (mean 71%, 95% confidence interval 61–81%). The results of the Oulu sample and the nationwide sample were consistent. One year after switch-on, a majority of the subjects (31/40) were able to hold a conversation without the support of speechreading in a quiet environment.

Sentence recognition by the subjects in the Oulu sample improved most during the first six months. Word and phoneme recognition improved throughout the two-year follow-up. Two years after switch-on of the implant, the estimated average sentence recognition score was 89% (95% confidence interval 71–106%), word recognition 73% (95% CI 58–87%), syllable recognition 53% (95% CI 42–63%), vowel recognition 80% (95% CI 68–92%) and consonant recognition 67% (95% CI 57–76%). The subjects most often confused vowels and consonants with the phoneme closest in spectral energy distribution located at higher frequencies. The correlation between the categories of auditory performance and the speech reception tests was high (rs > 0.81, p < 0.0001).

Systematic follow-up of the hearing level and speech reception ability of adults receiving a cochlear implant, using tests of differing levels of difficulty, is important in support of planning comprehensive rehabilitation.

Keywords: adult, auditory performance, postlingual hearing impairment, speech recognition, cochlear implant, phoneme confusions, phoneme recognition


To Saana and Pentti


Acknowledgements

This work was carried out at the Department of Finnish, Saami and Logopedics and the Department of Otorhinolaryngology, University of Oulu, in 1996–2002.

I would like to start by thanking Professor Matti Lehtihalmes, my supervisor in Logopedics, for his valuable comments and support. I would also like to thank him for his understanding attitude towards my several periods as a researcher and consequent absence from teaching, and for providing facilities at the Department of Finnish, Saami and Logopedics. I wish to thank Professor Juhani Nuutinen, MD, former Head of the Department of Otorhinolaryngology, for his support when starting this project. I also express my gratitude to Professor Kalevi Jokinen, MD, present Head of the Department, for enabling this study to be carried out, and also for his warm and interested attitude towards my work as a novice researcher.

I owe my deepest and warmest gratitude to Professor Martti Sorri, MD, my supervisor in audiology, for teaching me the principles of audiology and patiently guiding me in scientific thinking and writing during our many supervisory hours and in the preparation of the original communications. He has always been available to help when I have needed advice and encouragement most, yet also allowing me to find the answers independently. I have had the privilege of learning from his ingenuity and versatility in scientific thinking and his pedagogical abilities, and of enjoying his sense of humour. I sincerely thank him for his support.

I wish to thank the official reviewers of this work, Docent Reijo Johansson, MD, University of Turku, and Professor Antti Iivonen, Ph.D., University of Helsinki, for their constructive criticism, which helped me to improve this thesis.

I would like to express my warmest gratitude to my co-author and tutor in phonetics, Dr. Taisto Määttä, Ph.D., Lecturer in Phonetics, for invaluable discussions and comments when preparing papers III and IV, which enabled me to deepen my knowledge in the field of phonetics. I would also like to thank him for his warm and supporting empathy. I also wish to acknowledge the invaluable work of Docent Heikki Löppönen, MD, my co-author when preparing papers III–V. The numerous discussions on diverse aspects of cochlear implantation and his patience in teaching me the programming of speech processors were of invaluable help. The biostatistician Mr. Arto Muhli is also warmly thanked for his valuable work and amazing expertise in statistics. His lovely dry humour in leading me into the world of data processing and statistical analyses was most enjoyable. Interdisciplinary co-operation is inspiring, and all the members of the Unit for Speech Reception Research are warmly thanked for creating a stimulating and pleasant atmosphere for diverse scientific work.

The flexible attitude towards my project on the part of the entire staff at the Department of Audiology is warmly acknowledged. Sister Irja Nuojua impressed me with her positive attitude towards research work and with her abilities in all the practical arrangements, as conveyed to me during our numerous telephone calls. I also thank all the audiology assistants and the entire staff for their co-operation. My sincere thanks are due to all the cochlear implant users themselves for participating in this study and for teaching me about the successes and difficulties of rehabilitation. The Departments of Otorhinolaryngology in Helsinki, Tampere and Turku are also warmly thanked for their positive co-operation.

My warmest thanks are due to my colleagues in the Department of Finnish, Saami and Logopedics. Their excellent sense of humour and the pleasant working atmosphere formed the basis of a productive research effort, for which I would like to thank them all. Professor Kari Suomi, Ph.D., is warmly thanked for his supporting attitude towards my work, and I would also like to express my thanks to Dr. Ritva Koivusaari, Ph.D., for being such an excellent teacher for me and showing me a positive attitude towards research work and life in general through her own example. Most especially, I would like to thank Dr. Sari Kunnari, Ph.D., my dearest friend and colleague, for her deep interest and support both in my studies and research and in my personal life. I have had the privilege of sharing a long and deep friendship with her, which has formed one of the very meaningful cornerstones of my life. Without her lively friendship, life would be considerably duller. I would also like to thank Dr. Kerttu Huttunen, Ph.D., most warmly for her friendship, for sharing supervision hours with me, and for showing through her own example that despite momentary despair, our efforts may bear fruit in due time. Warm thanks go to Ms. Tuula Okkonen, Phil.Lic., for sharing the joy and distress of daily work in our research room during the preparation of this thesis. I also wish to thank Ms. Kaisa Kosola for much practical help and support during these years, and to express my gratitude to Mr. Malcolm Hicks, M.A., for revising the language of this thesis.

I have been proud to be a member of a ‘Nice Team’. The abiding friendship that I have found with all the members of our team feels quite exceptional to me, and has allowed us to share the many aspects of daily living.

Finally, my loving thanks are due to my family, Pentti for his sincere support and encouragement during these years, and my daughter Saana, who has given me so much happiness and love. They have taught me the most important aspects of life, and I could never have enjoyed my scientific work without them. I also thank my parents with appreciation and gratitude for their endless support and belief in me. My warm thanks are also due to my brother, his wife, and my ‘home crew’ of friends for numerous relaxing discussions and laughs.

The Finnish Cultural Foundation, the Faculty of Humanities and the Department of Finnish, Saami and Logopedics at the University of Oulu, the Oulu University Scholarship Foundation, the Finnish Audiological Society, and Ulkokorva ry are warmly thanked for financial support during these years.

Oulu, August 2002
Taina Välimaa


Abbreviations

AC        air conduction
ACE       advanced combination encoders strategy
ABI       auditory brainstem implant
BC        bone conduction
BEHL      better ear hearing level
C         consonant
CA        continuous analogue strategy
CAP       categories of auditory performance
CI        confidence interval
CIS       continuous interleaved sampling strategy
CT        computerized tomography
CV        consonant-vowel syllable type
CVC       consonant-vowel-consonant syllable type
CVV       consonant-vowel-vowel syllable type
CVVC      consonant-vowel-vowel-consonant syllable type
dB        decibel
FFT       Fast Fourier Transform
F0/F1/F2  speech coding strategy extracting voice fundamental frequency, first formant and second formant
F1        first formant, first resonant peak caused by the vocal tract
F2        second formant, second resonant peak caused by the vocal tract
F3        third formant, third resonant peak caused by the vocal tract
HA        hearing aid
HAs       hearing aids
HAP™      hybrid analog pulsatile strategy
HI        hearing impairment
HIs       hearing impairments
HL        hearing level
HRQoL     health-related quality of life
HTL       hearing threshold level
Hz        hertz
IQ        intelligence quotient
kHz       kilohertz
LPC       Linear Predictive Coding
LTASS     long-term average speech spectra
mo.       month
mos.      months
Mpeak     multipeak strategy
NIH       National Institutes of Health
n-of-m    number of maxima strategy
PET       positron emission tomography
pps       pulses per second
PPS™      paired pulsatile sampler
PTA       pure tone average
QALY      quality-adjusted life-year
SAS       simultaneous analog stimulation
SNHI      sensorineural hearing impairment
S/N       signal-to-noise ratio
SPEAK     spectral peak strategy
SRT       speech recognition threshold; sentence recognition threshold
V         vowel
VC        vowel-consonant
VOT       voice onset time
WHO       World Health Organization
WRT       word recognition threshold


List of original communications

This thesis is based on the following communications, which will be referred to in the text by their Roman numerals.

I Välimaa T & Sorri M (2001) Speech perception and functional benefit after cochlear implantation: a multicentre survey. Scand Audiol 30: 112–118.

II Välimaa T & Sorri M (2000) Speech perception after multichannel cochlear implantation in Finnish-speaking postlingually deafened adults. Scand Audiol 29: 276–283.

III Välimaa T, Määttä T, Löppönen H & Sorri M (2002) Phoneme recognition and confusions with multichannel cochlear implants: Vowels. J Speech Lang Hear Res, in press.

IV Välimaa T, Määttä T, Löppönen H & Sorri M (2002) Phoneme recognition and confusions with multichannel cochlear implants: Consonants. J Speech Lang Hear Res, in press.

V Välimaa T, Sorri M & Löppönen H (2002) Association between categories of auditory performance and speech perception in Finnish adult cochlear implant users. Annals Oto-Rhino-Laryngol, submitted for publication.


Contents

Abstract
Tiivistelmä
Acknowledgements
Abbreviations
List of original communications
Contents
1 Introduction
2 Review of the literature
  2.1 Hearing impairments in adult life
    2.1.1 Definitions
    2.1.2 Aetiology and prevalence
    2.1.3 Auditory speech perception and sensorineural hearing impairment
      2.1.3.1 The Finnish phoneme system and main linguistic features
      2.1.3.2 Auditory speech perception
      2.1.3.3 Severe or profound hearing impairment and auditory speech perception
      2.1.3.4 Speech perception tests
  2.2 Cochlear implants
    2.2.1 Characteristics of cochlear implants
    2.2.2 Speech coding strategies for multichannel cochlear implants
    2.2.3 Indications for cochlear implantation
  2.3 Effect of a multichannel cochlear implant on speech perception
    2.3.1 Sentence and word perception
    2.3.2 Phoneme perception and phoneme confusions
    2.3.3 Factors affecting speech perception after multichannel cochlear implantation
    2.3.4 Functional plasticity after multichannel cochlear implantation
  2.4 Subjective effect of cochlear implantation and quality of life
    2.4.1 Self-assessment instruments for hearing-impaired adults
      2.4.1.1 Domain-specific instruments
      2.4.1.2 Generic health-related quality of life instruments
    2.4.2 Self-assessment by cochlear implant users
      2.4.2.1 Subjective effect of cochlear implantation on everyday life
      2.4.2.2 Health-related quality of life after cochlear implantation
3 Aims of the research
4 Subjects and methods
  4.1 Subjects in the nationwide survey
  4.2 Subjects in the Oulu sample
  4.3 Methods
    4.3.1 Nationwide survey
    4.3.2 Oulu sample
  4.4 Study design
  4.5 Statistical analyses
  4.6 Ethical considerations
5 Results and comments
  5.1 Hearing level, speech perception and auditory performance after cochlear implantation: the nationwide survey (I)
  5.2 Hearing level, speech perception and auditory performance after multichannel cochlear implantation: a prospective repeated measures study (II)
  5.3 Phoneme recognition and confusions with multichannel cochlear implants (III, IV)
    5.3.1 Vowel recognition and confusions (III)
    5.3.2 Consonant recognition and confusions (IV)
  5.4 Association between auditory performance and speech perception (V)
6 General discussion
  6.1 Validity of the present study
  6.2 Implications for the rehabilitation system
  6.3 Future research
7 Summary and conclusions
8 References
Appendices 1–4


1 Introduction

Severe or profound hearing impairment (HI) occurring in adult age affects auditory speech perception, communication abilities and the social-emotional capabilities and status of an adult person in society in many ways. Due to the restrictions on residual hearing capacity (Lamoré et al. 1990), the selection of hearing aids (HA) for severely and profoundly hearing-impaired listeners must meet specific requirements, as they may not be able to gain sufficient benefit, or any at all, from conventional types of HA (Byrne et al. 1990), or from a tactile device (Lynch et al. 1992, Galvin et al. 1999). Hence, cochlear implantation may be indicated for subjects with severe or profound bilateral HI, aided open-set sentence recognition scores of 30% or less and functioning residual auditory nerve fibres, as stated by the National Institutes of Health (NIH) Consensus Statement (Cochlear implants in adults and children 1995).

Speech perception in hearing-impaired adults may be assessed using continuous speech, sentences, isolated words, syllables or phonemes. Word recognition tests have been in most extensive use (Palva 1952, Jauhiainen 1974, Wiley et al. 1995), but continuous speech and sentences have been discussed as being more representative of realistic everyday communication than isolated words (Plomp & Mimpen 1979, Larsby & Arlinger 1994). Syllable tests have also been used as assessment methods in order to overcome learning effects caused by the limited number of items in a test set or the high linguistic redundancy caused by the use of meaningful words (Dubno & Dirks 1982). Because of the variation in methods available, open-set sentence and word recognition tests have been considered necessary in the minimal test battery for assessing auditory speech perception in postlingually hearing-impaired adult cochlear implant users (Luxford et al. 2001).

Postlingually severely or profoundly hearing-impaired adults have been found to be able to perceive speech auditorily only with a multichannel cochlear implant (Cohen et al. 1993), although significant subject-related and device-related variation has been found in the speech perception results (Blamey et al. 1996, van Dijk et al. 1999). Despite the positive results obtained with some Germanic and Romance languages, for example, numerous questions still remain to be answered. How well is it possible for a Finnish-speaking adult multichannel cochlear implant user to perceive sentences, words and phonemes auditorily only? Are there any specific patterns in speech perception abilities?


How do adult cochlear implant users perform in everyday communication situations? And how should the auditory speech perception abilities of adult multichannel cochlear implant users be assessed? Such information is needed for planning rehabilitation and for assessing the outcome.

The present work began from the need to expand our knowledge of the auditory speech perception abilities of Finnish-speaking severely or profoundly hearing-impaired adults before and after receiving a multichannel cochlear implant. When implantations began in Finland in 1995, it became evident that the extensive data on the speech perception of adult cochlear implant listeners that had been obtained for the Germanic and Romance languages were not directly applicable to Finnish. Also, a great deal of the research had been focused mainly on reporting percentage changes in speech perception after replacing the devices with a newer and more advanced model, without giving any detailed prospective results. Furthermore, only one report existed that dealt with the speech perception of Finnish adults using a single-channel cochlear implant (Rihkanen 1980). The purpose of the present work was to provide information on hearing level, sentence, word and phoneme perception, phoneme confusions and auditory performance in everyday life in Finnish-speaking postlingually severely or profoundly hearing-impaired adults using multichannel cochlear implants.


2 Review of the literature

2.1 Hearing impairments in adult life

2.1.1 Definitions

The term HI applies when defective functioning of the auditory system may be measured using physiological or psycho-acoustical techniques (e.g. audiometry) and the defect is expressed as a worsening of pure tone thresholds (EU Work Group 1996). The terms hearing disability, handicap, hearing handicap and deafness should be avoided as semantic equivalents to HI, as these have diverse meanings and definitions in different socio-cultural contexts (Davidson et al. 1989, EU Work Group 1996).

The assessment of HI aims at classifying the type, degree, configuration, aetiology and time of onset of the impairment, although especially the latter can often be difficult to determine. HI is usually considered prelingual when it begins before the age of two years (Parving & Christensen 1993, Paul & Quigley 1994) and postlingual when it begins after the most active period of speech and language development, i.e., after the age of five years (Lucks Mendel et al. 1999). Perilingual HI is considered to have its onset during the active period of speech and language development, between the ages of two to five years, although HI is sometimes defined as postlingual when it begins after the age of two years (Paul & Quigley 1994). Children are usually considered to have acquired most of the phonological rules of their language by six to eight years of age and most of the semantic rules by the age of four to five years (Ingram 1989, Crystal 1990), which clearly points to a difficulty in the division into prelingual vs. postlingual HI. When HI develops in adult life it may be classified as acquired (Martin 1982), regardless of the aetiology, i.e. this may include both hereditary and congenital conditions (Morrison 1993).

Postlingual HI will be taken here to refer to the onset of severe or profound HI after the age of five years.
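The age-of-onset definitions above can be summarised as a small sketch (the function name and the handling of boundary ages are my own reading of the quoted ranges):

```python
def onset_category(age_at_onset):
    """Categorise hearing impairment by age at onset in years, following
    the definitions quoted above: prelingual (< 2 years), perilingual
    (2 to 5 years), postlingual (> 5 years)."""
    if age_at_onset < 2:
        return "prelingual"
    if age_at_onset <= 5:
        return "perilingual"
    return "postlingual"

# Onset in adulthood is postlingual under the definition used in this thesis
print(onset_category(30))  # postlingual
```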

According to the changes occurring in the functioning of the hearing organs, HI can be classified as conductive, sensorineural and combined/mixed. Conductive HI is the result of a failure in sound conduction, i.e. attenuation caused by a disease in the outer and/or middle ear. Conductive HI cannot exceed 60 dB, and according to a proposal by the EU Work Group (1996), the bone conduction thresholds (BC) in conductive HI are normal (< 20 dB), but the air conduction (AC) thresholds are > 15 dB worse than these when averaged over frequencies of 0.5, 1 and 2 kHz. HI is classified as sensorineural (SNHI) when the defect is located in the cochlea (sensory) and/or along the auditory nerve and brainstem (neural). No appreciable air-bone gap between the AC and BC thresholds is present in SNHI (less than 15 dB averaged over frequencies of 0.5, 1 and 2 kHz), and both AC and BC thresholds are poorer than normal (worse than 20 dB HL) (EU Work Group 1996). Mixed or combined HI is a combination of the features of both conductive and sensorineural impairment, the BC thresholds being worse than 20 dB and the gap between the AC and BC thresholds being > 15 dB averaged over frequencies of 0.5, 1 and 2 kHz (EU Work Group proposal 1996). HI can be classified as central when referring to dysfunction of the central auditory nervous system (Silman & Silverman 1997).
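The EU Work Group (1996) criteria above amount to a simple decision rule over the averaged air-conduction and bone-conduction thresholds. A minimal sketch (function and variable names are my own; thresholds are given in dB HL at 0.5, 1 and 2 kHz):

```python
def classify_hi_type(ac, bc):
    """Classify HI as conductive, sensorineural or mixed from air-conduction
    (ac) and bone-conduction (bc) thresholds in dB HL at 0.5, 1 and 2 kHz,
    following the EU Work Group (1996) criteria described above."""
    ac_mean = sum(ac) / len(ac)      # AC average over 0.5, 1 and 2 kHz
    bc_mean = sum(bc) / len(bc)      # BC average over the same frequencies
    air_bone_gap = ac_mean - bc_mean

    if bc_mean <= 20 and air_bone_gap > 15:
        return "conductive"          # normal BC, AC > 15 dB worse
    if bc_mean > 20 and air_bone_gap > 15:
        return "mixed"               # impaired BC plus an air-bone gap
    if ac_mean > 20 and air_bone_gap < 15:
        return "sensorineural"       # both thresholds poor, no appreciable gap
    return "normal or unclassified"

# Normal bone conduction with a large air-bone gap points to conductive HI
print(classify_hi_type(ac=[55, 60, 50], bc=[10, 15, 10]))  # conductive
```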

Uniform definitions of the degrees of HI are important for both research and clinical purposes (ASHA 1998), but several classifications have been presented during the past decades. The definitions put forward by the World Health Organization (WHO) (1991) and the EU Work Group (1996) are currently in widespread use in research and clinical practice, the latter being based on the earlier proposals of Liu and Xu (1994) and Parving and Newton (1995) (Table 1). The recent definition proposed by the EU Work Group (1996) differs from an earlier one by the British Society of Audiology (1988) mainly in the frequencies included in the calculations, and these two differ from the WHO (1991) definition both in the frequencies and the degrees of average impairment. The aim of the EU Work Group proposal (1996) has been to obtain better uniformity and to improve the interchange of information, because variation in the definitions can result in differences in prevalences, for example, which can also affect the estimated need for rehabilitation services (Uimonen et al. 1997, Duijvestijn et al. 1999, Uimonen et al. 1999, Smith 2001).

Table 1. Three recent classifications of the grades of HI.

Grade of HI   British Society of Audiology (1988),   WHO (1991),      EU Work Group (1996),
              PTA 0.25–4 kHz                         BEHL 0.5–2 kHz   BEHL 0.5–4 kHz
Mild          20–40 dB                               26–40 dB         20 dB < x < 40 dB
Moderate      41–70 dB                               41–60 dB         40 dB ≤ x < 70 dB
Severe        71–95 dB                               61–80 dB         70 dB ≤ x < 95 dB
Profound      > 95 dB                                ≥ 81 dB          x ≥ 95 dB

x = BEHL 0.5–4 kHz.
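As a worked example, the EU Work Group (1996) column of Table 1 translates directly into a grading function (an illustrative sketch; the function name is my own):

```python
def eu_grade(behl):
    """Grade of HI from the better ear hearing level (BEHL, dB) averaged
    over 0.5, 1, 2 and 4 kHz, per the EU Work Group (1996) column of Table 1."""
    if behl >= 95:
        return "profound"
    if behl >= 70:
        return "severe"
    if behl >= 40:
        return "moderate"
    if behl > 20:
        return "mild"
    return "no impairment"

# A BEHL of 72 dB falls in the 70 dB <= x < 95 dB band
print(eu_grade(72))  # severe
```

Note that the boundary handling follows the table: 40 dB already counts as moderate, 70 dB as severe and 95 dB as profound, while 20 dB itself is not yet an impairment.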


Greater uniformity in the classification of audiometric configurations and the degrees of HI has been called for by many authors (cf. Liu & Xu 1994, Parving & Newton 1995, EU Work Group 1996, Kunst et al. 1998). The frequency range of 0.125–8 kHz can be divided into sub-classes of low (< 0.5 kHz), mid (0.5 kHz < x < 2 kHz) and high frequencies (2 kHz < x < 8 kHz), and the audiometric configurations into low-frequency ascending (> 15 dB difference between the poorer low-frequency thresholds and the higher frequencies), mid-frequency U-shaped (> 15 dB difference between the poorest mid-frequency thresholds and the better lower and higher frequencies), flat, gently sloping to high frequencies (a 15–29 dB difference between the mean of 0.5 and 1 kHz and the mean of 4 and 8 kHz), steeply sloping to high frequencies (a > 30 dB difference in the aforementioned mean values) and residual (Liu & Xu 1994, EU Work Group 1996). Although detailed, even this classification is unfortunately not unequivocal (Mäki-Torkko et al. 1998). Unambiguous classifications have been proposed recently for the audiometric configurations (Sorri et al. 2000), but these do not appear to have gained wide popularity.
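The slope-based part of this configuration scheme can be sketched as follows (a simplified illustration covering only the flat and sloping classes; a difference of exactly 30 dB, which the quoted ranges leave ambiguous, is treated here as steeply sloping):

```python
def slope_configuration(thresholds):
    """Classify an audiogram as flat, gently or steeply sloping to high
    frequencies, from thresholds (dB HL) keyed by frequency in kHz, using
    the difference between the mean of 0.5 and 1 kHz and the mean of
    4 and 8 kHz (Liu & Xu 1994, EU Work Group 1996)."""
    low = (thresholds[0.5] + thresholds[1]) / 2    # mean of 0.5 and 1 kHz
    high = (thresholds[4] + thresholds[8]) / 2     # mean of 4 and 8 kHz
    diff = high - low
    if diff >= 30:
        return "steeply sloping to high frequencies"
    if diff >= 15:
        return "gently sloping to high frequencies"
    return "flat"

# A typical high-frequency sloping audiogram
print(slope_configuration({0.5: 20, 1: 20, 4: 60, 8: 70}))  # steeply sloping to high frequencies
```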

HI can affect people's lives at many levels of functioning, and the effects of impairment on the ability to perform activities or fulfil social roles are individual in nature (Arnold 1998). The International Classification of Impairments, Disabilities and Handicaps (ICIDH, WHO 1980), which was aimed at defining the terms impairment (body level), disability (person level) and handicap (society level) and finding a consensus on the categorization of their domains, succeeded in classifying the previously ambiguous terminology regarding audiological measurements, but could not establish any clear consensus in the terminology (Stephens & Hétu 1991, Arnold 1998). The recent version, the International Classification of Functioning, Disability and Health (ICF, WHO 2001), outlines revisions to the original ICIDH classification and replaces the former definitions of impairment, disability and handicap with ones that refer to body functions and structures, activities and participation, and contextual factors of an environmental and personal nature. It presents a model in which the relationship between the primary dimensions of disabilities is considered interactive, and it is acknowledged that changes in one dimension can potentially influence other dimensions. It also aims at taking into consideration the complex interaction between physical and psychological factors on the one hand and social, environmental and personal factors on the other.

2.1.2 Aetiology and prevalence

The aetiologies of HI can be grouped into genetic and non-genetic with relevant subcategories, and also into prenatal, perinatal and postnatal, according to the most obvious time of insult (Davidson et al. 1989). It is also possible, of course, that the aetiology may remain unknown. The expression of auditory dysfunction in these categories is divided into early developing (e.g. Pendred's syndrome) or late developing, often delayed by a matter of years, as is especially common in adulthood HI (e.g. otosclerosis or Ménière's disease). Approximately 30 to 50% of childhood HIs are genetic, while in approximately 20 to 40% of cases the aetiology remains unknown (Parving 1993, Parving & Stephens 1997, Vartiainen et al. 1997, Mäki-Torkko et al. 1998). Unfortunately, no equivalent estimate can be made regarding adulthood HIs, due to the lack of population-based epidemiological studies.

Genetic HIs can be divided into autosomal dominant, autosomal recessive, X-linked dominant, X-linked recessive, mitochondrial (maternal inheritance) and polygenic, as well as unknown hereditary forms (EU Work Group 1996). Most genetic HIs are non-syndromal, but approximately 30% of cases are syndromal, including craniofacial abnormalities, for example (Königsmark & Gorlin 1976, Gorlin et al. 1995, Billings & Kenna 1999). Genetic HI may be expressed at birth, or the expression may be delayed into childhood or even adulthood, as is the case with otosclerosis, for example, which has been shown by some authors to be the single most common cause of HI among white adults (Tomek et al. 1998, Van Den Bogaert et al. 2001). Appropriate epidemiological data on factors causing HI in adults are currently needed (EU Work Group 1996).

National population-based studies on the prevalence of HI in adults are rare, and the prevalence figures presented are quite often based on local populations, as well as being affected by differing definitions of HI (Davis 1995, Uimonen et al. 1997, Uimonen et al. 1999, Smith 2001). The prevalence has been shown to increase with age (Davis 1989, Davis 1995, Quaranta et al. 1996, Karlsmose et al. 1999, Uimonen et al. 1999), just as the sensitivity of hearing in normal adult populations has been found to deteriorate progressively with age, and in both cases this advances more rapidly at higher frequencies (ISO 7029 1984, Cruickshanks et al. 1998). There are some reports of sex differences, with HI being more prevalent in males than in females (Davis 1989, 1995, Karlsmose et al. 1999), although other studies have found no clear sex differences (Quaranta et al. 1996).

The prevalence among adults aged from 41 to 50 years has been reported to vary from 8% to 11% when based on the criterion BEHL0.5–4kHz > 25 dB (Davis 1989, 1995, Quaranta et al. 1996, Karlsmose et al. 1999), and that in a Finnish adult population aged 45 years, based on a slightly different criterion, BEHL0.5–4kHz > 20 dB, has been found to be 6.6% (Uimonen et al. 1999). The prevalence of HI has been found to rise to 15.9–18.9% in the sixth decade (Davis 1989, 1995, Quaranta et al. 1996, Uimonen et al. 1999), and the overall prevalence of severe HI in adults (based on the 1996 EU Work Group proposal for the degrees of HI) has been reported to vary from 0.3% to 0.7% and that of profound HI from 0.1% to 0.2% (Davis 1995, Uimonen et al. 1999). A comparable overall prevalence of severe HI was found in Italy when the severity was defined as BEHL0.5–4kHz > 90 dB (0.32%, Quaranta et al. 1996).
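The BEHL criterion used in these figures can be made concrete as follows. The function names are illustrative, and the grade boundaries follow the EU Work Group (1996) proposal as it is commonly cited, so treat them as assumptions of this sketch:

```python
# Sketch of the better-ear hearing level (BEHL) criterion used in the
# prevalence figures above. Function names are illustrative; the grade
# boundaries follow the EU Work Group (1996) proposal as commonly cited.

def behl(left, right, freqs=(0.5, 1, 2, 4)):
    """BEHL0.5-4kHz: pure-tone average of the better ear over 0.5-4 kHz.
    left/right are dicts of thresholds in dB HL keyed by frequency in kHz."""
    pta = lambda ear: sum(ear[f] for f in freqs) / len(freqs)
    return min(pta(left), pta(right))

def eu_grade(behl_db):
    """Degree of impairment for a given BEHL (EU Work Group 1996 proposal)."""
    if behl_db >= 95:
        return "profound"
    if behl_db >= 70:
        return "severe"
    if behl_db >= 40:
        return "moderate"
    if behl_db >= 20:
        return "mild"
    return "no impairment"

ears = ({0.5: 70, 1: 80, 2: 85, 4: 90}, {0.5: 90, 1: 95, 2: 100, 4: 105})
print(eu_grade(behl(*ears)))  # severe
```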

2.1.3 Auditory speech perception and sensorineural hearing impairment

2.1.3.1 The Finnish phoneme system and main linguistic features

The core of the Finnish phoneme system is reported to be composed of 8 vowel phonemes /i y e ø æ u o ɑ/ and 11 consonant phonemes /p t k s h m n l r j v/ (Karlsson 1983, Sulkala & Karjalainen 1992). The Finnish vowels can be divided into five front vowels /i y e ø æ/ and three back vowels /u o ɑ/, and they can be classified as rounded /y ø u o/ or unrounded /i e æ ɑ/ (Table 2). The terms high, mid and low refer to the articulatory position of the tongue in the mouth while producing the vowels. The vowels are unambiguously represented in the orthography by the letters <i y e ö ä u o a>.

Table 2. The Finnish vowel system.

                 Front                       Back
           Unrounded   Rounded       Unrounded   Rounded
High           i          y                         u
Mid            e          ø                         o
Low            æ                         ɑ

The Finnish consonant system (Table 3) is difficult to describe unambiguously, since different systems exist side by side at present (Suomi 1996). The core system includes 11 consonant phonemes /p t k s h m n l r j v/, which are unequivocally represented in the orthography by the letters <p t k s h m n l r j v>. The consonant /ŋ/ (<ng> in the orthography) occurs in a word-medial position only, and is not phonemic in some dialects. The consonant /d/ (<d> in the orthography) occurs in native words in standard spoken Finnish, but is absent from most dialects, which indicates a marginal role in the Finnish phoneme system. The fricative /s/ is the only one with an oral constriction, while the fricative /h/ has a glottal constriction. The voiced stops /b ɡ/ and the fricatives /f ʃ/ (<b g f š> in the orthography) do not appear in original Finnish words but have recently entered the language and are used in loan words.

Table 3. The Finnish consonant system.

                                    Place of articulation
Manner of articulation   Labial   Dental/alveolar   Palato-velar   Glottal
Stops        voiceless     p            t                k
             voiced       (b)           d               (ɡ)
Fricatives                (f)           s               (ʃ)           h
Liquids      lateral                    l
             trill                      r
Nasals                     m            n                ŋ
Semivowels                 v                             j

Page 26: Speech perception and auditory performance in hearing

24

Phonemic quantity is distinctive for both vowels and consonants in Finnish (Wiik 1965), the phonetically long phonemes being described phonologically as double, i.e. sequences of two identical phonemes. The Finnish quantity system is maintained by no other feature but duration (Lehtonen 1970). All eight vowels, and all the consonants except /d h v j/, can be doubled. This is again represented in the orthography, as in the series of words <tuli> /tuli/ 'fire', <tuuli> /tuuli/ 'wind' and <tulli> /tulli/ 'customs', where the only distinction between the words is the quantity of either the first vowel or the medial consonant. Phonetically, no essential qualitative differences exist between short (single) and long (double) vowels (Wiik 1965, Iivonen & Laukkanen 1993), i.e. the acoustic differences in the two lowest formants, F1 and F2, are negligible, the values for the short vowels merely being slightly more centralised. Average values for F1 and F2 are presented in Table 4.

Table 4. The Finnish vowels, represented by the average values of the two lowest formants (F1, F2) in Hz (see Wiik 1965¹, Iivonen & Laukkanen 1993²).

                F1¹     F1²     F2¹     F2²

Front vowels
/i/             340     300     2355    2261
/y/             340     335     1920    1751
/e/             500     517     2070    1957
/ø/             510     488     1705    1692
/æ/             675     698     1825    1712

Back vowels
/u/             400     295      780     837
/o/             535     486      985    1142
/ɑ/             710     658     1345    1190
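As a toy illustration of how such formant averages can separate the vowels, the Wiik (1965) values from Table 4 can drive a nearest-neighbour classifier. The plain Euclidean distance in Hz is a simplifying assumption of this sketch: it overweights F2, and perceptual work usually measures distance on an auditory (e.g. Bark or mel) scale instead:

```python
# Toy nearest-neighbour vowel classifier over the Wiik (1965) average
# formant values from Table 4. Raw Euclidean distance in Hz is a
# simplifying assumption (it overweights F2); auditory scales such as
# Bark or mel would normally be used for perceptual distance.

FORMANTS_HZ = {            # vowel: (F1, F2), Wiik (1965) averages
    "i": (340, 2355), "y": (340, 1920), "e": (500, 2070),
    "ø": (510, 1705), "æ": (675, 1825),
    "u": (400, 780),  "o": (535, 985),  "ɑ": (710, 1345),
}

def nearest_vowel(f1, f2):
    """Return the Table 4 vowel whose (F1, F2) lies closest to the input."""
    return min(FORMANTS_HZ,
               key=lambda v: (FORMANTS_HZ[v][0] - f1) ** 2
                           + (FORMANTS_HZ[v][1] - f2) ** 2)

print(nearest_vowel(700, 1300))  # ɑ
```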

Little research has been done into the acoustic properties of the Finnish consonants, and energy concentrations have been estimated mainly from spectrographic and oscillographic studies (Palva 1958, Wiik 1966, Suomi 1980, Iivonen & Laukkanen 1993), or by sonagraphic and seeker-tone analysis (Sovijärvi 1938, 1964). Formant patterns for the resonants (Sovijärvi 1938, Wiik 1966) and energy concentrations for the voiceless consonants, together with indications of place of articulation (Sovijärvi 1964, Palva 1958, Suomi 1980, Iivonen & Laukkanen 1993), are presented in Table 5.


Table 5. Main formants or energy concentrations of the Finnish consonants (Sovijärvi 1938, Palva 1958, Sovijärvi 1964, Wiik 1966, Suomi 1980, Iivonen & Laukkanen 1993).

Manner of articulation      Formants or energy concentrations (Hz)      Comment

Stops        /p/            360–720, 1120–2850       No aspiration in Finnish. F2 locus a strong
             /t/            250–500, 2200–2800       indicator of the place of articulation.
             /k/            900–2250                 Average VOT in word-initial position 9 ms
             /d/            100–200, 1500–2800       for /p/, 11 ms for /t/ and 20 ms for /k/.

Fricatives   /s/            2500–3000
             /h/            7000–8000

Liquids      /l/, /r/       F1 250–550, F2 800–1800  F1 and F2 values strongly associated with
                                                     the adjacent vowel.

Nasals       /m/, /n/, /ŋ/  FN1 200–250 and          F2 locus a strong indicator of place of
                            FN2 1300–1600 typical    articulation.
                            of all nasals

Semivowels   /v/            200–500, 1400–8000       F2 the most characteristic acoustic
             /j/            250–500, 2100–6800       feature.

The Finnish language entails a fairly even distribution of vowels (approximately 49%) and consonants (approximately 51%) (Häkkinen 1978, Vainio 1996), and features ten syllable patterns defined as sequences of these. The five most prevalent syllable patterns are CV, CVC, CVV, CVVC and VC, accounting for 94% of those occurring in running text (Häkkinen 1978). The most prevalent word types are bisyllabic and polysyllabic, and continuous speech and text are rich in inflections, i.e. case endings and other suffixes (Karlsson 1983, Sulkala & Karjalainen 1992).

Despite the existence of language-specific phoneme patterns, recent studies have shown that the long-term average speech spectra (LTASS) of different languages (English, Swedish, Danish, German, French, Japanese, Cantonese, Mandarin, Russian, Welsh, Singhalese and Vietnamese) are very similar, even though they may include many deviations from the average, associated with differences between male and female speakers, for example (Byrne et al. 1994). Kiukaanniemi and Mattila (1980) found a considerable difference between the long-term speech spectra of Finnish and English (26.8% of the speech power at frequencies of 619–3000 Hz in Finnish vs. 46.2% in English), but their results were based on only one male speaker of each language. More cross-linguistic studies involving large populations are needed, since considerable inter-individual and intra-individual variation in LTASS has been found.
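The Finnish-vs-English comparison above reduces to measuring the fraction of total speech power that falls in a given band. A minimal sketch follows; the sampled frequencies and power values are invented for illustration, not measured speech data:

```python
# Fraction of total power falling in a frequency band, as in the
# Kiukaanniemi & Mattila (1980) comparison quoted above. The sampled
# spectrum below is an invented toy example, not measured speech data.

def band_power_fraction(freqs_hz, power, lo_hz, hi_hz):
    """freqs_hz and power are parallel sequences sampling a power spectrum."""
    total = sum(power)
    in_band = sum(p for f, p in zip(freqs_hz, power) if lo_hz <= f <= hi_hz)
    return in_band / total

freqs = [125, 250, 500, 1000, 2000, 4000, 8000]   # Hz (toy values)
power = [2.0, 4.0, 6.0, 4.0, 2.0, 1.0, 0.5]       # linear power units (toy)
share = band_power_fraction(freqs, power, 619, 3000)
print(round(share, 3))
```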


2.1.3.2 Auditory speech perception

Speech perception includes both the perception of a stimulus and its interpretation, the process being explained at present by means of various models and theories. The two major theories share the assumption that speech sounds are represented in the brain as abstract linguistic categories and perceived by auditory and phonetic coding operations, but they differ in the details of the process. Several other models of speech perception may be taken as further developments of these two. According to the auditory theory of speech perception, ordinary auditory processes are sufficient to explain the perception of speech (Fant 1967, Pisoni 1973, Cole & Scott 1974, Kuhl 1981, Lindblom 1990). The auditory appearances of acoustic patterns are registered and matched with phonetic, categorial labels that persist in the memory. It is not explicitly explained why different acoustic patterns qualify for the same label. The motor theory of speech perception is based on the assumption that the intended phonetic gestures of the speaker construct the basis for phonetic categories (Liberman 1957, Liberman & Mattingly 1985). Speech perception and speech production are intimately linked, and this link is biologically and innately specified. Pisoni and Sawusch (1975) assume in their information processing model that perception of the phonetic distinctive features also includes knowledge of the articulatory constraints. One further development of the auditory theory is the quantal theory of speech, which assumes that acoustic invariance is to be found in speech, yielding preferences for certain phonological oppositions in languages (Stevens 1989).

The categorial nature of speech perception was first introduced in experiments designed to determine how listeners classify consonants (Liberman et al. 1957). Listeners were found to be able to discriminate some of the consonants only as well as they could identify them as phonemes. The point on the continuum where each category was heard equally often was delimited as the phoneme boundary. The perception of vowels has been considered continuous, since the labelling of vowels is gradual and the phoneme boundaries are less sharply defined (Fry et al. 1962). Listeners have also proved able to detect smaller differences than were required for categorization. Whether speech perception is categorial, or whether there is a relatively continuous relationship between changes in stimuli and changes in perception, has been discussed extensively (Hary & Massaro 1982, Massaro & Hary 1984), yielding no clear consensus (Massaro 1994, Remez 1994). The question of whether the sentence, word, syllable or phoneme constitutes the smallest relevant unit of speech perception has also attracted extensive interest within speech perception research, but again no definite consensus has been reached (Liberman et al. 1957, Fry et al. 1962, Klatt 1979, Massaro 1994).

The term speech perception refers to the ability to understand speech, including continuous speech, sentences, words, syllables and phonemes (Nicolosi et al. 1983, Lucks Mendel & Danhauer 1997, Lucks Mendel et al. 1999). Discrimination refers to the process of distinguishing among speech sounds or words by labelling them as the same or different (Nicolosi et al. 1983), whereas recognition refers to the ability to repeat what is heard (Lucks Mendel & Danhauer 1997). We will use speech perception here as a comprehensive term that includes the perception of sentences, words, syllables and phonemes, and recognition as a term referring to measured auditory ability.


Speech segments can be described using the phonological categories of distinctive features, such as voicing, place of articulation and manner of articulation (cf. Jakobson et al. 1967, Fant 1967), and the features available for speech perception according to articulatory concepts, auditory distinctive features or acoustic characteristics of the stimuli. The existence of individual features as independent entities is supported by findings showing that the acoustic and phonetic cues that determine the perception of one feature are different from the cues for another feature (Liberman et al. 1958). Analyses of phoneme identification have shown that phonological features can be perceived as independent units (Miller & Nicely 1955), but a single phonological representation can have different phonetic, articulatory and acoustic realizations (Peterson & Barney 1952, Wiik 1965, Kewley-Port 1982, Iivonen & Laukkanen 1993). It is true, of course, that linguistic and semantic features are also involved in speech perception, as are factors associated with the speaker and the person's speaking style (Picheny et al. 1985, Picheny et al. 1986). It is assumed here, however, that the perceptual and acoustic features of the phonemes are merely available for the listener to use in the process of speech perception.

Most of the information needed for vowel recognition is conveyed by the two lowest formants (F1, F2) (Peterson & Barney 1952, Pols et al. 1969), while the third formant (F3) provides additional information on the roundness of the front vowels and on speaker-specific factors (Fujimura 1967). The acoustic features available for consonant recognition are low-frequency periodicity (for voiced stops), silent intervals (for voiceless stops), noise (for fricatives) and noise bursts (for stops), rapid formant movements in the vicinity of consonant release, i.e. formant transitions (for obstruents and resonants), and a formant structure resembling that of vowels (for laterals, nasals and semivowels), together with a special low-frequency nasal murmur for nasals (Halle et al. 1957, Harris et al. 1958, Stevens 1980, Benkí 2001).

The modelling of peripheral hearing has favoured a division of the frequency range over which the human ear is able to perceive tones and noises into critical bands (Greenwood 1961, Zwicker 1961). By estimating the length of the basilar membrane and its frequency sensitivity, frequency-position functions have been suggested that yield estimates corresponding quite closely to one another (Greenwood 1961, Moore & Glasberg 1983, Greenwood 1990). Hence, several theories have been devised to explain the perception of frequencies, also taking into consideration the tuning curves of the cochlear nerve fibres (Liberman 1982).

According to the place-pitch theory (tonotopy), the auditory nerve fibres that are sensitive to high frequencies are situated at the base of the cochlea and those that are sensitive to low frequencies at its apex (Greenwood 1961, Greenwood 1990). The volley theory suggests that frequency is determined by the rate at which the auditory nerve fibres fire, at rates proportional to the period of the input signal up to frequencies of 5 kHz (Zwicker & Terhardt 1980), i.e. the individual nerve fibres fire at each cycle of the stimulus at low frequencies (being phase-locked), whereas high frequencies are indicated by organized firing of groups of nerve fibres (see also 2.3.4, p 49).
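Greenwood's frequency-position function, referred to above, maps relative place along the basilar membrane to characteristic frequency. A sketch using the commonly cited human parameters (A = 165.4, a = 2.1, k = 0.88, with x expressed as a fraction of membrane length from the apex):

```python
# Greenwood's (1990) frequency-position function for the human cochlea,
# F = A * (10**(a*x) - k), with the commonly cited human parameter values.
# x is the relative distance from the apex (0.0) to the base (1.0).

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency in Hz at relative basilar-membrane position x."""
    return A * (10 ** (a * x) - k)

# Apex codes low frequencies, base codes high ones, matching the
# place-pitch (tonotopy) description in the text:
print(round(greenwood_frequency(0.0)))  # ~20 Hz at the apex
print(round(greenwood_frequency(1.0)))  # ~20700 Hz at the base
```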


2.1.3.3 Severe or profound hearing impairment and auditory speech perception

Different grades of HI have been found to yield different patterns of speech perception and phoneme confusions, while subjects performing at roughly the same level have been shown to have different phoneme confusion patterns (Bilger & Wang 1976). The patterns of perceptual phoneme confusions have nevertheless been found to be systematically related to audiometric configurations. Due to individual variability, it has been difficult to draw any definite conclusions on the speech perception patterns of hearing-impaired listeners.

Recent investigations into the residual hearing capacity of severely and profoundly hearing-impaired listeners have revealed that this capacity can be described by two independent factors: frequency discrimination at the higher frequencies and a decrease in the peripheral processing efficiency of the auditory system. Specifically, the PTA of the mid frequencies (PTA0.5–2kHz) has been found to have a close positive association with the speech recognition threshold (SRT) in quiet situations, and also a close negative association with phoneme discrimination. The PTA of the low frequencies (PTA0.125–0.5kHz) and the SRT have been found to have a close positive association with the difference limen for frequencies, which in turn has a close negative association with phoneme discrimination (Lamoré et al. 1985, 1990).

Due to restrictions imposed by the residual hearing capacity, the selection of HAs for listeners with severe or profound HI must meet specific requirements, and severely and profoundly hearing-impaired listeners may not be able to gain any benefit from conventional HAs. It may be difficult to amplify normal speech to an adequate loudness level, to measure the most comfortable level of listening, or to provide an audible signal at some frequencies when there is no measurable hearing at a frequency at all (Byrne et al. 1990). Since SNHI sloping to higher frequencies is a common audiometric configuration (Liu & Xu 1994, EU Work Group 1996), the extent to which the lower frequencies should be amplified relative to the higher ones has been an essential point of discussion within research. Excessive low-frequency amplification may prevent the use of residual hearing at high frequencies due to a masking effect, yielding further requirements for HA fitting (Byrne 1978). Furthermore, it has been found in some studies that the half-gain rule of amplification ceases to apply when the HTL exceeds 70 dB (Byrne et al. 1990).
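The half-gain rule mentioned above simply prescribes gain equal to half the hearing threshold level at each frequency. Flagging thresholds above 70 dB HL, where the text notes the rule has been found to cease to apply, is an illustrative choice of this sketch rather than part of any published fitting formula:

```python
# The classic half-gain rule referred to above: prescribed gain is half
# the hearing threshold level (HTL) at each frequency. Flagging HTLs
# above 70 dB HL, where the rule has been found to cease to apply, is an
# illustrative choice of this sketch, not part of a published formula.

def half_gain_prescription(thresholds):
    """thresholds: dict of HTL in dB HL keyed by frequency in kHz.
    Returns {freq: (gain_dB, rule_applicable)}."""
    return {f: (htl / 2, htl <= 70) for f, htl in thresholds.items()}

rx = half_gain_prescription({0.5: 40, 1: 60, 2: 80, 4: 90})
print(rx[1])  # (30.0, True)
print(rx[4])  # (45.0, False) - beyond the range where the rule holds
```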

Hence, for subjects with severe or profound SNHI steeply sloping to high frequencies, lowering of the frequency range of the speech signal by a certain constant factor (frequency transposition) has been tested experimentally as a means of providing listeners with acoustic information extracted from high frequencies by presenting it to the remaining hearing at lower frequencies (McDermott et al. 1999, McDermott & Dean 2000). There is limited evidence to suggest that listeners can benefit from frequency transposition. When a conventional HA does not provide a person with a profound HI with any hearing sensation, tactile devices have been used with the aim of providing some suprasegmental and segmental information on the speech signal (Galvin et al. 1999). The tactile-auditory-visual perception of continuous speech, monosyllabic words and phonetic contrasts in the speech signal has been found to be somewhat better than auditory-visual perception alone (Lynch et al. 1992, Waldstein & Boothroyd 1995, Galvin et al. 1999).

There are a few reports available on the speech perception abilities of Finnish listeners with normal or impaired hearing. Palva (1952) found by speech audiometry that a sound pressure level (SPL) of 27 dB yielded 50% correct word recognition for his word list no. I, and a level of 22 dB SPL yielded 50% recognition for lists nos. II and III, in the case of normal-hearing listeners, but he did not study speech perception in subjects with HI in any more detail. Jauhiainen (1974), in an experimental study of the auditory perception of isolated bisyllabic words by normal-hearing Finnish-speaking subjects, found that a level of 23.2 dB SPL yielded 50% recognition. He also experimented with the effect of sound pressure level on word recognition and on phoneme recognition and confusions, and found that vowels were somewhat easier to recognize than consonants, and that phoneme confusions seemed to be associated with first formant frequency differences, the manner of articulation and the place of articulation.

Kiukaanniemi and Määttä (1980) found in their experimental studies on phoneme recognition that the spectral energy distribution of Finnish phonemes is warped when filtering takes place in a frequency range of 0.25–4 kHz sloping by 24 dB SPL/octave. This filtering was found to obliterate part of the F1 band and distort the spectrum, thus causing phoneme confusions. Because of these alterations, vowels with F1 spectral energy below the cut-off frequency and near each other were found to be confused, /i/ and /y/ with /u/, and /e/ and /o/ with each other, for example. Subjects with real HI steeply sloping to higher frequencies (pure-tone thresholds corresponding to a linearly sloping HI from frequencies of 0.25 kHz + ½ octave to 4 kHz + ½ octave, sloping by 24 dB/octave) were also found to have poorer word recognition ability, especially at high sound pressure levels, than any other group of subjects with sensorineural HI sloping to higher frequencies (Kiukaanniemi 1980).

2.1.3.4 Speech perception tests

The speech perception of adults with HI may be measured at the continuous speech, sentence, word, syllable and phoneme levels. The use of continuous speech has been justified by the fact that it is more representative of realistic everyday communication than are isolated words (De Filippo & Scott 1978, Hygge et al. 1992, Larsby & Arlinger 1994), and a similar idea lies behind the use of sentences (Bench et al. 1979, Plomp & Mimpen 1979, Kollmeier & Wesselkamp 1997). At the same time, since at least some background noise is frequently present when communication takes place, noise has been used in tests to simulate communication in everyday noisy environments (Kalikow & Stevens 1977, Plomp & Mimpen 1979, Hagerman 1982, Nilsson et al. 1994, Kollmeier & Wesselkamp 1997). The measured outcome is expressed as a percentage recognition score or a recognition threshold in decibels (the signal level yielding 50% recognition, defined as the sentence recognition threshold [SRT] or word recognition threshold [WRT]). In tests with noise, the measured outcome is a speech recognition threshold in noise, i.e. the S/N ratio that yields 50% correct responses.
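The 50% thresholds described above are usually read off a measured psychometric function. A minimal sketch interpolating the 50% crossing from (level, score) pairs follows; linear interpolation and the example data are simplifying assumptions, since clinical procedures use adaptive tracking or curve fitting instead:

```python
# Minimal sketch of estimating a recognition threshold: the presentation
# level (or S/N ratio) at which the psychometric function crosses 50%
# correct. Linear interpolation between measured points is a simplifying
# assumption; clinical procedures use adaptive tracking or curve fitting.

def threshold_50(levels_db, scores_pct):
    """levels_db ascending; scores_pct is the percent-correct at each level."""
    points = list(zip(levels_db, scores_pct))
    for (l0, s0), (l1, s1) in zip(points, points[1:]):
        if s0 <= 50 <= s1:
            return l0 + (50 - s0) * (l1 - l0) / (s1 - s0)
    raise ValueError("50% point not bracketed by the measurements")

# Invented example data: word scores at three presentation levels
print(threshold_50([10, 20, 30], [20, 40, 80]))  # 22.5
```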


Word recognition tests have clearly been in the most extensive use in speech perception research. Word recognition in the Germanic languages is mainly assessed using monosyllabic test words, because of the dominance of monosyllables (Hirsh et al. 1952, Lidén 1954, Haspiel & Bloomer 1961, Boothroyd 1968, Bosman 1992), the customary instrument being the Freiburg monosyllabic word test (Bosman 1992), while in the English language area the Central Institute for the Deaf (CID) auditory test W-22 (CID-22, Hirsh et al. 1952), Northwestern University Auditory Test no. 6 (NU-6, Tillman & Carhart 1966, see Wilson et al. 1976, Martin et al. 1994, Lucks Mendel & Danhauer 1997) and short isophonemic word lists (Boothroyd 1968) are widely used. These tests can be scored on the basis of the number of correctly recognized words or phonemes. Scoring on a phoneme basis has been justified by the binomial model of test reliability, according to which the variance in a test score depends on both the level of performance and the number of stimuli the test includes (Thornton & Raffin 1978, Gelfand 1998). Phoneme scores are also thought to provide a more precise measure of the perception of acoustic cues in speech. Despite occasional cross-language similarities in phoneme patterns, word structures and phoneme realizations are highly language-dependent, inevitably yielding differences in test development. Bisyllabic words are preferred for word recognition assessment in languages where these are predominant (see also 2.1.3.1, p 25) (Palva 1952, Jauhiainen 1974, Zubick et al. 1983, Weisleder & Hodgson 1989).
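The binomial model invoked above can be made concrete: for a true proportion p and n scored items, the standard deviation of the observed score is sqrt(p(1 − p)/n), so scoring the more numerous phonemes of a list instead of its whole words shrinks the measurement error. A sketch, with invented item counts for illustration:

```python
# Binomial model of test-score variability (Thornton & Raffin 1978):
# the standard deviation of an observed proportion score is
# sqrt(p * (1 - p) / n) for n scored items. The item counts below are
# invented for illustration (25 words vs. 75 scored phonemes).
import math

def score_sd(p, n):
    """Standard deviation of a proportion score with n binomial items."""
    return math.sqrt(p * (1 - p) / n)

word_sd = score_sd(0.6, 25)      # scoring 25 words
phoneme_sd = score_sd(0.6, 75)   # scoring 75 phonemes from the same list
print(round(word_sd, 3), round(phoneme_sd, 3))  # 0.098 0.057
```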

To overcome some of the issues affecting the reliability of test methods, such as the learning effect, reduced reliability caused by the limited number of items in a test set, or high linguistic redundancy caused by the use of meaningful words, syllable tests have been used quite successfully as assessment methods (Dubno & Dirks 1982, Dubno et al. 1982, Danhauer et al. 1985, Butts et al. 1987, Bosman & Smoorenburg 1995). The time needed for administering the tests and the possible complexity involved in scoring the responses may have limited their clinical use, but they have been in quite extensive use for research purposes, especially in the English language area.

Technical advances in HAs and single-channel cochlear implants during the 1980s emphasised the need for adequate tests for assessing adults with profound HI and limited auditory capabilities (Owens et al. 1985a, Owens et al. 1985b), since open-set speech perception tests had turned out to be difficult and insensitive for measuring changes in speech perception. The Minimal Auditory Capabilities (MAC) test battery includes a wider range of tests, such as environmental sounds and speech intonation patterns as well as word and sentence recognition, and it has since been adapted for use in some other languages (Dillier & Spillman 1992). Progress in implant technology since the 1980s has provided adult cochlear implant users with possibilities for better auditory performance, thus enabling the use of open-set speech perception tests even in these cases (Luxford et al. 2001).

Speech perception tests used in a controlled environment aim at determining the degree and type of HI and the site of the lesion, planning auditory (re)habilitation, determining the effectiveness of hearing instruments and predicting performance in everyday listening situations (Griffiths 1967, Boothroyd 1968, Jerger et al. 1968). Speech perception comprises an important part of the assessment of (re)habilitation, measuring the effect of the impairment on performance (Gatehouse 1998), but a comprehensive assessment of the outcome should also include the contexts of the disability/handicap (i.e. activity limitation and participation restriction, WHO 2001) and even health-related quality of life.

The Finnish speech perception tests available for use with adults are shown in Table 6. Sentence recognition tests have been used previously for clinical and research purposes (e.g. at Helsinki University Central Hospital; Ludvigsen 1974, 1981, Finnish translation by Lonka 1993), and a sentence test has recently been developed and validated with listeners with different hearing impairments (Määttä et al. 2001). Word recognition tests (Palva 1952, Jauhiainen 1974) are in extensive clinical and research use, and speech-in-noise tests (Laitakari 1996, Holma et al. 1997) and distorted speech tests (Palva 1965) have been developed, but their clinical and research applications have been limited. Syllable tests have been experimented with for research purposes (Rihkanen 1988, Naumanen et al. 1997).

Table 6. The Finnish speech perception tests for adults.

Test type       Without noise                                         With noise
                Author                      Test                      Author              Test

Sentence test   Ludvigsen 1974, 1981;       Helen test                —                   —
                translated by Lonka 1993
                Helsinki University         Helsinki sentences
                Central Hospital            (clinical use)
                Määttä et al. 2001          Oulu sentences

Word test       Palva 1952                  Finnish speech            Laitakari 1996      Speech-in-noise test
                                            audiometry
                Jauhiainen 1974             Isolated bisyllabic       Holma et al. 1997   New speech-in-noise test
                                            Finnish words

Syllable test   Rihkanen 1988               Phoneme perception        —                   —
                                            test (research use)
                Naumanen et al. 1997        Nonsense syllable test
                                            (research use,
                                            semi-validated)


2.2 Cochlear implants

2.2.1 Characteristics of cochlear implants

Cochlear implant systems differ in electrode design (placement, number and configuration), type of stimulation and the type of signal processing used for coding the speech signal. Essential elements shared by all systems are shown in Fig. 1. The microphone senses sound pressure variations in the sound field and converts them into electrical waveforms (Pfingst 1986, Wilson 1993). These are processed in the speech processor, and the stimuli are delivered to an implanted single electrode or multielectrode array. The processed stimuli may be sent to the electrode/electrode array through a percutaneous connector (Parkin 1990) or a transcutaneous link (Bilger 1977, Eddington et al. 1978, Zierhofer et al. 1995). The now obsolete percutaneous system comprised an external stimulator that communicated directly with the implanted electrode/electrode array, whereas the transcutaneous link comprises an external encoder that encodes the stimuli to radio frequency and a transmission coil that sends these radio frequency signals to an internal implant consisting of a receiving coil that decodes the stimuli for further transmission to the electrode/electrode array.

[Figure: block diagram of the cochlear implant system. External parts: microphone → speech processor → either an external stimulator feeding the implanted electrode/electrode array through a percutaneous plug (now obsolete), or an external encoder feeding, across the skin, an internal implant (receiver, decoder & stimulator) connected to the electrode/electrode array.]

Fig. 1. Components of the cochlear implant system with a percutaneous stimulator (now obsolete) or a transcutaneous link (modified from Pfingst 1986, Wilson 1993).


Implant systems are defined as extracochlear when the implanted electrodes are placed on the medial wall or on the round window of the cochlea (Fourcin et al. 1979), and intracochlear when the electrode/electrode array is placed in the scala tympani (or scala vestibuli) or within the modiolus (Hochmair-Desoyer & Hochmair 1980, Clark et al. 1981, Zierhofer et al. 1995). Electrodes can also be placed on the cochlear nucleus, this being defined as an auditory brainstem implant (ABI) and used for subjects with a non-functioning auditory nerve (Otto et al. 1998, Otto et al. 2001).

Single electrode implant systems (Bilger 1977) were quite commonly used in the 1970s, but the advances subsequently achieved in multielectrode systems made the latter more common. Since intracochlear placement allows proximity to the spiral ganglion, multielectrode implants with scala tympani placement are currently widely used, having replaced the single electrode systems almost entirely (Eddington et al. 1978, Hochmair et al. 1980, Clark et al. 1981, Seligman & McDermott 1995, Zierhofer et al. 1995). Intracochlear placement of a multielectrode array is justified in terms of both the place-pitch theory (tonotopy) and the volley theory regarding the coding of frequencies by the normal cochlea (Loizou 1998), as it aims at simulating the tonotopy of the auditory system (see also 2.1.3.2, p 26).

Several stimulation configurations can be used in cochlear implants: monopolar, bipolar, pseudo-bipolar or tripolar. In a monopolar configuration the reference electrode is usually placed under the temporal muscle, acting as a ground for all the activated electrodes (Berliner et al. 1985), while in a bipolar configuration the active electrode and the reference (ground) electrode are placed close to each other, and in a pseudo-bipolar configuration each active channel has a bipolar pair of electrodes and a separate reference electrode. In tripolar stimulation three adjacent electrodes are selected, and the current is emitted by the two lateral ones and received by the central one (Miyoshi et al. 1999). This aims at minimizing the reduction in information transmission caused by channel interaction.

There are in general two types of electrical stimulation, depending on how the information is supplied to the electrodes: analogue stimulation, in which an electrical analogue of the acoustic waveform itself is presented to the electrodes (Eddington 1980), and pulsatile stimulation, in which the information is in the form of narrow pulses (Wilson et al. 1995).

2.2.2 Speech coding strategies for multichannel cochlear implants

The type of signal processing used for coding speech signals is defined as a speech coding strategy. The development of such strategies has received extensive attention in research circles during past decades. The most recent speech processing strategies for multichannel, multielectrode cochlear implants may be categorized according to their specific stimulus transmission properties, into simultaneous or non-simultaneous transmission of information or a combination of both, and into either analogue or pulsatile waveform presentation or a combination of both (Kessler 1999).

In the continuous interleaved sampling (CIS) strategy, one of the most common pulsatile strategies, the audio band is divided into sub-bands and interleaved biphasic pulses are generated from the envelopes of the band-pass filter outputs (Wilson et al. 1991). Full-wave rectification and low-pass filtering (cut-off frequency 200–400 Hz) are used to extract the envelopes of the filtered waveforms. The amplitude of each stimulus pulse is determined by a logarithmic transformation that compresses the signal appropriately to fit the dynamic range of the individual channel. The electrode pairs are activated with brief pulses sequentially at a relatively high stimulation rate in order to transmit the rapid temporal envelope variations contained in the speech signal, also taking into consideration the tonotopic principle. The rate of stimulation on each channel usually exceeds 800 pulses per second (pps). In the number-of-maxima (n-of-m) strategy, a total of m frequency bands are analysed and the n electrodes corresponding to the n highest energy bands are stimulated on a given processing cycle (Lawson et al. 1996). The CIS strategy is currently implemented in several multichannel cochlear implant systems, with slight variations in the stimulation rates and number of channels, for example (Zierhofer et al. 1995, Helms et al. 1997, Zierhofer et al. 1997, Kessler 1999, Cochlear Ltd. 1999).
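The per-channel processing chain described above (band-pass analysis, envelope extraction by rectification and low-pass filtering, logarithmic compression into each channel's electrical dynamic range) can be sketched in a few lines. This is an illustrative toy only, not any manufacturer's implementation: the FFT-masking filter bank, the frame-based envelope estimate and the compression constants are all simplifying assumptions introduced here.

```python
import numpy as np

def cis_channel_amplitudes(frame, n_channels=8, t_min=1.0, c_max=10.0):
    """One analysis frame of a CIS-style coder (illustrative sketch).

    Steps mirror the strategy described in the text:
      1. split the audio band into contiguous sub-bands (FFT masking here
         stands in for the real band-pass filter bank),
      2. estimate each band's envelope (full-wave rectification plus
         averaging stands in for the 200-400 Hz low-pass filtering),
      3. compress each envelope logarithmically into the channel's
         electrical range [t_min, c_max] (threshold..comfort level).
    Returns one stimulus amplitude per channel; a real processor would
    then deliver these as brief, sequentially interleaved biphasic pulses.
    """
    spectrum = np.fft.rfft(frame)
    edges = np.linspace(0, spectrum.size, n_channels + 1).astype(int)
    envelopes = np.empty(n_channels)
    for ch, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        band = np.zeros_like(spectrum)
        band[lo:hi] = spectrum[lo:hi]
        sub = np.fft.irfft(band, n=frame.size)   # band-limited waveform
        envelopes[ch] = np.abs(sub).mean()       # rectify + average
    norm = envelopes / (envelopes.max() + 1e-12)  # 0..1 acoustic range
    # logarithmic (compressive) map into [t_min, c_max]
    return t_min + (c_max - t_min) * np.log1p(100 * norm) / np.log1p(100)
```

For a frame containing a single pure tone, the channel whose sub-band contains the tone receives the comfort-level amplitude and the remaining channels stay near threshold, which is the intended effect of the per-channel compression.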

The spectral peak (SPEAK) strategy (also a pulsatile strategy, but based on the spectral composition), as used in the Nucleus Spectra22 processor, is based on the Spectral Maxima Sound Processor (McDermott et al. 1992). The SPEAK strategy uses a bank of 20 filters assigned to 20 stimulation channels (Seligman & McDermott 1995). Each filter is followed by an amplitude detector that detects the highest amplitude from the signal level and spectral composition. Six channels (1–10) with the highest amplitudes are selected on average, and the information is transmitted to the electrodes at an adaptive stimulation rate of 250 ± 100 pps per channel in tonotopic order. The output of the amplitude detectors is scanned constantly and new estimates for the maxima obtained. The SPEAK strategy is also currently implemented in multichannel cochlear implant systems (Seligman & McDermott 1995, Cochlear Ltd. 1999).

In the advanced combination encoders (ACE) strategy, the channels with the highest amplitudes are defined from the signal level and spectral composition by the Fast Fourier Transform technique (Cochlear Ltd. 1999). A maximum of ten channels with the highest amplitude may be selected from the 22 available, and short-duration pulses can be transmitted to the electrodes at a maximum stimulation rate of 14400 pps.
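The channel-selection step shared by the SPEAK, n-of-m and ACE strategies (stimulate only the n of m analysed bands with the highest energy in each processing cycle) reduces to a simple selection followed by a tonotopic sort. The sketch below is illustrative: the function name and the low-to-high ordering convention are assumptions, not part of any published device specification.

```python
def select_maxima(band_energies, n):
    """n-of-m channel selection as used by SPEAK/ACE/n-of-m-style
    strategies (illustrative sketch): from the m analysed frequency
    bands, pick the n with the highest energy for this processing cycle
    and return their channel indices in tonotopic (low-to-high) order."""
    m = len(band_energies)
    if not 0 < n <= m:
        raise ValueError("need 0 < n <= m")
    # rank channels by energy, keep the n strongest, restore band order
    ranked = sorted(range(m), key=lambda ch: band_energies[ch], reverse=True)
    return sorted(ranked[:n])
```

With 22 analysed bands and n = 10, as in ACE, this returns the ten channels to be pulsed during the cycle; the remaining twelve stay silent until the next spectral scan.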

Simultaneous Analog Stimulation (SAS) is a fully simultaneous strategy that initially digitises the analogue waveform and then reconverts it to analogue waveforms that are transmitted simultaneously to the electrode (pairs) in the electrode array (Kessler 1999, Zimmerman-Phillips & Murad 1999). The input signal is sampled at a rate of 91000 samples per second. The Paired Pulsatile Sampler (PPS™) is a CIS-like coding strategy in which two distant channels may be stimulated simultaneously, aiming mainly at improving the stimulation rate (13000 pps, Kessler 1999), while the Hybrid Analog Pulsatile (HAP™) strategy is a combination of the SAS and CIS strategies, with simultaneous analogue stimulation applied in the lower frequencies and pulsatile stimulation at high frequencies.

The different options aim at the best possible means of excitation for the remaining functional auditory nerve fibres, but it is widely recognised that individual differences in the number of surviving auditory nerve fibres that can be stimulated still have a major effect on the functioning of electrical stimulation of the auditory nerve.


2.2.3 Indications for cochlear implantation

Indications for implantation may be divided into audiological, electrophysical, medical, surgical, psychophysical and psychological, and linguistic ones (Cochlear implants 1988). Several indications for cochlear implantation have been employed in past decades, but no general criteria exist, since it has been difficult to predict the outcome after implantation. In general, it has been considered crucial that the subjects should have no usable residual hearing and gain no significant benefit from a conventional HA. The different definitions of these terms (residual hearing and no significant benefit from a conventional HA) have nevertheless made the criteria ambiguous.

The audiological indications recognised during the 1980s were profound postlingual bilateral HI, no open-set auditory speech perception and no improvement in lipreading with an appropriately fitted HA (Simmons et al. 1985, Brown et al. 1985, Cochlear implants 1988). Initially only adults were considered to be appropriate candidates for cochlear implantation. The electrophysical requirements included confirmation of functioning residual auditory nerve fibres, and the surgical requirements were that patients should be physically healthy, postlingually deafened adults with no congenital anomalies regarding the auditory nerve (e.g. its absence). The psychophysical, psychological and linguistic requirements were used mainly to exclude potential candidates having severe psychiatric disorders or mental retardation, for example. When implantation became acceptable for children, a minimum age limit of two years and a minimum of a six-month trial with appropriate amplification and rehabilitation before implantation were recommended (Cochlear implants 1988, Cochlear implants in adults and children 1995).

In view of the favourable open-set auditory speech perception results achieved after multichannel cochlear implantation (Wilson et al. 1991, Cohen et al. 1993, Schindler & Kessler 1993), the indications were extended to subjects with residual hearing and marginal or minor benefit from a conventional HA (Shallop et al. 1992, Summerfield & Marshall 1995), even though the definitions of these were still ambiguous.

The NIH Consensus Statement (Cochlear implants in adults and children 1995) further extended the audiological indications to subjects with bilateral severe or profound HI and aided open-set sentence recognition scores of 30% or less. It was also considered essential that the medical and surgical requirements for cochlear implant surgery were fulfilled. For children, commitment to habilitation after implantation and a minimum age of two years were considered essential. The discussion on cochlear implant candidacy is still in progress, and no clear consensus (or strict criteria) can yet be said to exist on hearing levels and open-set auditory speech perception before implantation (Kiefer et al. 1998, Rubinstein et al. 1999).


2.3 Effect of a multichannel cochlear implant on auditory speech perception

2.3.1 Sentence and word perception

Adults with postlingual severe or profound HI receiving a multichannel cochlear implant can be expected to be able to perceive speech auditorily only (Cohen et al. 1993, Cohen et al. 1997, Helms et al. 1997), although both subject-related and device-related factors cause significant variation in the speech perception results (Blamey et al. 1996, Rubinstein et al. 1999, van Dijk et al. 1999). A great deal of the research has focused mainly on reporting changes in speech perception performance after the device has been changed for a newer, more advanced model, and gives no prospective or follow-up results.

Despite these shortcomings, a number of the earlier studies listed in Table 7 show that adult multichannel cochlear implant users seem to reach average levels of 40–90% in open-set auditory sentence recognition and 14–50% in open-set auditory word recognition (in Germanic languages), obviously depending on the difficulty of the test. Two patterns of performance in sentence and word-level tests may be seen: either most of the improvement in speech perception takes place during the first six months after the switch-on of the implant, this rate of improvement in adults often being associated with a mean duration of profound HI of approximately 10 years, or else a steady or gradual improvement may be observed after six months of implant use, often associated with a longer duration of profound HI (Helms et al. 1997, Kessler et al. 1997, Geier et al. 1999). In general, overall positive outcomes in terms of sentence and word perception, with great individual variability, have been reported for adult multichannel cochlear implant users (Summerfield & Marshall 1995).

2.3.2 Phoneme perception and phoneme confusions

Several earlier reports show that adult multichannel cochlear implant users reach average levels of 60–80% in vowel identification and 50–75% in consonant identification (Table 8), with similar subject-related and device-related individual variability as in sentence and word recognition (Blamey et al. 1996, Rubinstein et al. 1999, van Dijk et al. 1999). Vowels thus seem to be somewhat easier to perceive than consonants.


Table 7. Previous reports on open-set auditory sentence and word recognition by adult multichannel cochlear implant users. Subject details, devices, tests, listening experience and scores are given as/if reported by the author/s [1].

Cohen et al. (1993). 30 + 30 subjects; postlingual deafness; mean duration of deafness 14 and 16 years. Devices: Nucleus 22 with WSP processor and feature-extraction strategy; Ineraid with analogue strategy. Tests: monosyllabic word test and sentence test in quiet (English); listening experience 24 mos. Scores: Nucleus, words 15% (min. 4, max. 38) and sentences 32% (min. 2, max. 76); Ineraid, words 15% (min. 4, max. 48) and sentences 36% (min. 2, max. 74).

Cohen et al. (1997). 18 subjects; profound deafness; mean duration 15.3 years. Devices: 1) Nucleus with Mpeak strategy, 2) Nucleus with SPEAK strategy. Tests: NU-6 words and Bamford-Kowal-Bench sentences in quiet (English); listening experience 4.5–7 yrs. with Mpeak and 3 mos. with the SPEAK strategy [2]. Scores: Mpeak, words 14% (min. 0, max. 34) and sentences 38% (min. 0, max. 96); SPEAK, words 22% (min. 0, max. 54) and sentences 46% (min. 2, max. 94).

Helms et al. (1997). 60 subjects; postlingual bilateral profound deafness; mean duration 5.3 years. Device: Med-El Combi 40 with CIS strategy. Tests: original Freiburger Einsilbertest and Innsbruck sentences in quiet (also translated into other languages); German n=50, English n=5, Polish n=4, Hungarian n=1. Scores, words: 42% at 3 mos. (n=49), 48% at 6 mos. (n=41), 54% at 12 mos. (n=25); sentences: 77% at 3 mos. (n=50), 84% at 6 mos. (n=41), 89% at 12 mos. (n=25).

Kessler et al. (1997). 238 subjects; postlingual bilateral profound deafness, PTA 0.5–2 kHz >90 dB; mean duration 11 years. Devices: Clarion® with CIS strategy (93%) or CA strategy (7%). Tests: CID sentences and NU-6 words in quiet (English); listening experience 12 mos. (n=120). Scores: sentences 69%, words 37%.

Gstöttner et al. (1998). 21 subjects; postlingual deafness; mean duration 10.7 years. Device: Med-El Combi 40 with CIS strategy. Tests: Freiburger monosyllables and Innsbruck sentences in quiet (German); listening experience 12 mos. [3]. Scores: words min. 17%, max. 75%; sentences 78.5% (min. 42, max. 100).

Parkinson et al. (1998). 16 subjects; postlingual deafness; mean duration not given. Devices: Nucleus with feature-extraction strategy; Nucleus with SPEAK strategy. Tests: NU-6 word recognition test and Iowa sentence test in quiet (English); listening experience min. 12 mos. (>2 yrs., n=15) with feature extraction and 6 mos. with SPEAK [4]. Scores: feature extraction, words 20% and sentences 50%; SPEAK, words 28.3% and sentences 66.4%.

Ziese et al. (2000). 6 subjects; postlingual deafness; mean duration 7.5 years. Device: Med-El Combi 40+ with CIS strategy (7 channels), CIS strategy (12 channels) and n-of-m strategy (7-of-12 channels). Tests: Freiburger monosyllables in quiet and HSM sentence test (German); 12 weeks at maximum with each strategy [5] (results aggregated from ABAB design by the authors). Scores [6]: CIS 7 channels, words 53% and sentences 89%; CIS 12 channels, words 51% and sentences 90%; n-of-m 7-of-12 channels, words 60% and sentences 90%.

Kiefer et al. (2001). 11 subjects; postlingual profound hearing loss; mean duration 20.4 years. Device: Nucleus CI24M with SPEAK, CIS and ACE strategies. Tests: Freiburg monosyllabic words, Innsbruck sentences and Göttingen sentences in quiet (German); listening experience 3 mos. [7]. Scores (words / Innsbruck sentences / Göttingen sentences): SPEAK 37.5% / 67.2% / 43.1%; CIS 42.1% / 61.8% / 40.2%; ACE 49.8% / 72.5% / 61.0%.

[1] As/if reported by the author/s.
[2] After 3 mos. of listening experience with the SPEAK strategy, having used the Mpeak strategy for at least 4.5 years.
[3] After 12 mos. of listening experience with the CIS strategy, 9 of the 21 subjects having used the analogue Vienna cochlear implant for an average of 8.11 years before reimplantation.
[4] After 6 mos. of listening experience with the SPEAK strategy, having used a feature-extraction strategy for a minimum of 12 mos.
[5] After 12 weeks of listening experience with each of the options, having used a 10 or 12 channel CIS strategy for a minimum of 3 mos., mean 12.3 mos.
[6] Values projected from a figure.
[7] After 3 mos. of listening experience with the best parameter settings within each strategy, having used either the SPEAK or the CIS strategy for a minimum of 3 mos. before initiation of the study, median 4 mos.


Some research has been focused either on comparing coding strategies or on determining the levels of the acoustic properties transferred. When comparing the relative merits of different speech coding strategies in the case of subjects who have had several years of experience with an earlier strategy, previous experience is always a confounding factor, as pointed out by Tyler et al. (1986). This may be seen especially when the period of accommodation to the new strategy is short. Significant variation in the duration of the subjects' listening experience with cochlear implants before the assessment of speech perception complicates any definition of the rate of rehabilitation in phoneme perception from reports in the literature (cf. min. 2.6 years, Skinner et al. 1996 and Skinner et al. 1999; 1–9 years, Pelizzone et al. 1999; 0.5–6 years, van Wieringen & Wouters 1999). Prospective studies are still needed to investigate the rehabilitation of phoneme perception and the rate of phoneme confusions.

An average vowel identification level of 49–59% has been reported with slightly older feature-extraction strategies (Blamey et al. 1987, Parkinson et al. 1998) (Table 8), while the average with the SPEAK strategy has been 60–80% (Whitford et al. 1995, Parkinson et al. 1998), and that with the CIS strategy (using a laboratory-designed Ineraid processor implementing the CIS strategy) was 78% (Pelizzone et al. 1999). Correspondingly, an average vowel identification level of 70% has been achieved with a Med-El Combi40 implant implementing the CIS strategy (Helms et al. 1997). The most common vowel confusions, even with the recent speech coding strategies, SPEAK, CIS and continuous analogue (CA), have been found to be between the closest vowels, ones in which the mean values for F1, F2 (and F3) and duration are close to each other (Skinner et al. 1996, van Wieringen & Wouters 1999). Confusions between the vowels of the words 'hod', 'hawd' and 'hard' in American English, and between the vowels /u o ɔ/, /y e i/ or /i ɪ/ in Dutch, have been reported, for example (Skinner et al. 1996, van Wieringen & Wouters 1999).

An average consonant identification level of 40–45% has been reported with feature-extraction strategies (Blamey et al. 1987, Parkinson et al. 1998), and some subjects have scored averages of 55–60% (Skinner et al. 1991, Doyle et al. 1995). An average level of 55–76% has been achieved with the SPEAK strategy (Whitford et al. 1995, Parkinson et al. 1998, Skinner et al. 1999), and 45–50% with the CA strategy (Doyle et al. 1995, Kompis et al. 1999, Pelizzone et al. 1999). The average level with the CIS strategy has been 65–70% (Helms et al. 1997, Gstöttner et al. 1998, Pelizzone et al. 1999). Features related to the voicing and duration of consonants have been reported to be perceived well (Blamey et al. 1987, Doyle et al. 1995), but the place of articulation seems to be more difficult to classify in the Germanic and Romance languages than the manner, the latter being difficult for some subjects (McKay & McDermott 1993, Pelizzone et al. 1999, Skinner et al. 1999, van Wieringen & Wouters 1999). Better performers have been reported to have fewer, and more consistent, confusions in phoneme identification than poorer performers (van Wieringen & Wouters 1999).


Table 8. Previous reports on phoneme perception and confusions in adult multichannel cochlear implant users. Subject details, devices, tests, listening experience, results and confusions are given as/if reported by the author/s [1].

Blamey et al. (1987). 28 subjects; mean duration of profound HI 16.6 years. Device: multiple-channel cochlear implant with formant-estimating strategy. Tests: set of 11 vowels in /hVd/ context and set of 12 consonants in /aCa/ context; closed-set; listening experience not given. Results: vowels 49%, consonants 37%; confusions not given.

Dorman et al. (1990). 10 subjects; duration 1–28 years. Device: Ineraid implant with analogue stimulation. Test: set of 16 disyllables in /aCa/ context; closed-set; listening experience not given. Results: vowels not tested, consonants 58%; confusions between nasals and semivowels and between places of articulation.

Tyler et al. (1992). 10 subjects; mean duration 8.2 years. Device: Ineraid implant with analogue stimulation. Test: set of 9 vowels in /hVd/ context; closed-set; listening experience min. 9 mos. Results: vowels 34%–93%, consonants not tested; confusions between the closest vowels.

Doyle et al. (1995). 14 subjects; duration not given. Devices: Nucleus with F0/F1/F2 strategy (n=5), Clarion® with CIS strategy (n=5), Clarion® with CA strategy (n=4). Test: medial consonant recognition test in /iCi/ context; closed-set; listening experience 14.4 mos. (F0/F1/F2 users), 9 mos. (CIS users), 14 mos. (CA users). Results: vowels not tested; consonants 54% (F0/F1/F2), 51% (CIS), 53% (CA); confusions not given.

Whitford et al. (1995). 24 subjects; mean duration 8.4 years. Device: Nucleus with SPEAK strategy. Tests: set of 11 vowels in /hVd/ context and set of 12 consonants in /aCa/ context; closed-set; listening experience 10 weeks [2]. Results: vowels 80% [3], consonants 70% [3]; confusions not given.

Skinner et al. (1996). 9 subjects; mean duration 4.5 years. Device: Nucleus with SPEAK strategy. Test: set of 14 vowels in /hVd/ context; closed-set; listening experience 10 weeks [4]. Results: vowels 73.4%, consonants not tested; confusions between the closest vowels (e.g. in the words hod, hawd, hard).

Helms et al. (1997). 60 subjects; mean duration 5.3 years. Device: Med-El Combi 40 with CIS strategy. Tests: set of 8 vowels in /bVb/ context and set of 16 consonants in /aCa/ context; digitized items fed directly into the processor. Results, vowels: 60% at 3 mos. (n=48), 65% at 6 mos. (n=39), 70% at 12 mos. (n=24); consonants: 56% at 3 mos. (n=48), 63% at 6 mos. (n=39), 65% at 12 mos. (n=24); confusions not given.

Parkinson et al. (1998). 16 subjects; duration not given. Device: Nucleus with SPEAK strategy. Tests: Iowa medial vowel test (9 alternatives in "hVd" context) and Iowa medial consonant test (13 alternatives in "eeCee" context); closed-set; listening experience 6 mos. [5]. Results: vowels 60.3%, consonants 55.1%; confusions not given.

Kompis et al. (1999). 3 subjects; duration not given. Device: Ineraid with CIS strategy. Tests: set of 8 vowels in /bVb/ context and set of 16 consonants in /aCa/ context; test design not given; listening experience 3 weeks [6]. Results: vowels 48% [3], consonants 48% [3]; confusions not given.

Pelizzone et al. (1999). 12 subjects; mean duration 13.3 years. Device: Ineraid with CIS strategy. Tests: set of 7 vowels presented alone and set of 14 consonants in /aCa/ context; closed-set; listening experience 12 mos. [7]. Results: vowels 78.3%, consonants 65.6%; confusions at 6 mos.: 1) manner, /m/–/l/ and /n/–/l/; 2) voicing, /k/–/ɡ/; 3) place, /p/–/t/, /b/–/d/, /d/–/ɡ/ and /m/–/n/.

Skinner et al. (1999). 9 subjects; mean duration 4.5 years. Device: Nucleus with SPEAK strategy. Test: set of 14 consonants in /aCa/ context; closed-set; listening experience 10 weeks [8]. Results: vowels not tested, consonants 76.2%; confusions: 1) manner, /v/–/b/–/ð/–/d/–/m/; 2) place, /p/–/t/–/k/.

van Wieringen & Wouters (1999). 25 subjects; mean duration 13 years. Device: Laura with CIS strategy. Tests: set of 10 vowels in /hVt/ context and set of 16 consonants in /aCa/ context; closed-set; listening experience 6 mos.–6 years. Results: vowels 42%, consonants 33%; confusions: 1) vowels, /u/–/o/–/ɔ/, /y/–/e/–/i/ and /i/–/ɪ/; 2) consonants, /p/–/t/–/k/, /m/–/n/, /v/–/f/, /s/–/f/, /x/–/ɣ/ and /l/–/m/–/n/.

Ziese et al. (2000). 6 subjects; mean duration 7.5 years. Device: Med-El Combi 40+ with CIS strategy (7 channels), CIS strategy (12 channels) and n-of-m strategy (7-of-12 channels). Tests: six randomized presentations of 8 vowels in /bVb/ context and four randomized presentations of 16 consonants in /aCa/ context; test design not given; 12 weeks at maximum with each strategy [9] (results aggregated from ABAB design by the authors). Results [3]: vowels 55% (7 channels), 63% (12 channels), 60% (7-of-12 channels); consonants 66% (7 channels), 68% (12 channels), 68% (7-of-12 channels); confusions not given.

Kiefer et al. (2001). 11 subjects; mean duration 20.4 years. Device: Nucleus CI24M with SPEAK, CIS and ACE strategies. Tests: four sets of 8 vowels in /bVb/ context and four sets of 16 consonants in /aCa/ context; test design not given; listening experience 3 mos. [10]. Results: vowels 55.2% (SPEAK), 58.2% (CIS), 58.3% (ACE); consonants 56.6% (SPEAK), 56.2% (CIS), 62.8% (ACE); confusions not given.

[1] As/if reported by the author/s.
[2] After 10 weeks of listening experience with the SPEAK strategy, having used the Mpeak strategy for 27 mos. on average (8–43 mos.).
[3] Results deduced from the figures.
[4] After 10 weeks of listening experience with the SPEAK strategy, having used the Mpeak strategy for at least 32 mos.
[5] After 6 mos. of listening experience with the SPEAK strategy, having used the Mpeak strategy for at least 12 mos.
[6] After 3 weeks of listening experience with the CIS strategy, having used the CA strategy for at least 3 years.
[7] After 12 mos. of listening experience with the CIS strategy, having used the CA strategy for 5.6 years on average (1–9 years).
[8] After 10 weeks of listening experience with the SPEAK strategy, having used the Mpeak strategy for at least 32 mos.
[9] After 12 weeks of listening experience with each of the options (CIS strategy with 7 channels, CIS strategy with 12 channels, n-of-m strategy with 7-of-12 channels), having used a 10 or 12 channel CIS strategy for a minimum of 3 mos., mean 12.3 mos.
[10] After 3 mos. of listening experience with the best parameter settings within each strategy, having used either the SPEAK or the CIS strategy for a minimum of 3 mos. before initiation of the study, median 4 mos.


2.3.3 Factors affecting speech perception after multichannel cochlear implantation

Short duration of profound HI and residual hearing (also severe HI) before cochlear implantation have been found to be among the strongest subject-related factors associated with good auditory speech perception after multichannel cochlear implantation (Shallop et al. 1992, Blamey et al. 1996, Kiefer et al. 1998, Rubinstein et al. 1999, van Dijk et al. 1999). Also age at implantation, IQ and preoperative CT scan findings, along with survival of spiral ganglion cells in profound SNHI and the time available for accommodation to electrical hearing with the device and settings in use, have been found to be associated with speech perception performance after implantation (Blamey et al. 1996, Waltzman et al. 1995, Tyler et al. 1986, Fu & Shannon 1999a).

Numerous investigations into differences in cochlear implant listeners' speech perception have focused on the settings used in the devices, and these device-related factors have been found to be closely associated with changes in auditory performance, each factor having different effects on aspects of speech perception (Blamey et al. 1996). Different numbers of active channels have been found to be needed for sentence recognition and for vowel and consonant identification, cochlear implant users having needed fewer active channels (4 channels) both for sentence recognition and consonant identification than for vowel identification (7 channels) according to some authors (Skinner et al. 1995, Fishman et al. 1997, Kiefer et al. 2000). Also, a higher stimulation rate has been found to enhance consonant identification more than vowel identification (Fu & Shannon 2000, Kiefer et al. 2000, Loizou et al. 2000, Kiefer et al. 2001). The acoustic dynamic range of the channels has been found to be associated especially with the ability to perceive speech in noise (Fu & Shannon 1999a, Zeng & Galvin 1999), and the electrode configuration, the overall frequency range and frequency allocation (Fu & Shannon 1999b, Friesen et al. 1999), the location and spacing of the electrodes in the cochlea (Fu & Shannon 1999c) and the automatic gain control settings in the processors (Stöbich et al. 1999) have all been found to be associated with speech perception.

Even so, it is acknowledged that individual variability in the outcome after implantation is considerable (Cochlear implants in adults and children 1995, Summerfield & Marshall 1995). Further developments in implant systems may enable even better coding and transmission of acoustic features, providing cochlear implant users with better possibilities for auditory speech perception (Wilson 1997).


2.3.4 Functional plasticity after multichannel cochlear implantation

Speech perception has been found previously to engage the same brain regions (primary auditory cortices, auditory association area, Wernicke’s area, Broca’s area) in postlingually deafened adult cochlear implant users and normal-hearing controls (using positron emission tomography, PET), whereas altered brain activation patterns (activation of Wernicke’s and Broca’s areas, but no activation in the primary auditory cortices) have been found in prelingually deaf cochlear implant users (Okazawa et al. 1996, Naito et al. 1997). Recent studies have suggested, however, that there are differences in brain activation patterns between cochlear implant users and normal-hearing controls, implying that successful speech perception can result from different neural processes in the two groups (Giraud et al. 2000, Giraud et al. 2001).

The posterior right superior temporal regions have been found to be more active in cochlear implant users after 18–24 months of listening experience with a cochlear implant (with excellent auditory speech perception of 95–100%) than in normal-hearing subjects, whereas the inferior temporal cortex regions (basal temporal language areas) have been found to be underactivated (Giraud et al. 2000). The former regions have been associated with phonological coding (Petersen et al. 1989, Démonet et al. 1992), and the latter with semantic processing (Hodges & Patterson 1997). Functional specialization in both the left posterior superior temporal cortex and Wernicke’s area has also been found to be less noticeable in cochlear implant users (Giraud et al. 2001). Differentiation between speech and non-speech sounds has been found to decrease in the auditory and association cortices, which are dedicated to complex sound analysis and phonological processing. Processing at early levels (primary auditory and auditory association cortices) in postlingually profoundly hearing-impaired subjects has been found to be compensated for by alternative cognitive strategies involving memory, the formation of visual representations of objects that normally produce the sound, and the automatic recruitment of visual attentional mechanisms to process associated visual cues (Giraud et al. 2001).

In view of the functional neuroanatomical activation observed during heard language (Binder et al. 2000), it has been suggested that more resources have been allocated to phonological analysis at the expense of semantic processing in the case of cochlear implant users (Giraud et al. 2000, Giraud et al. 2001). The memory-related regions (left posterior hippocampus, dorsal occipital regions), which have been found to be associated with verbal retrieval and memory-related imagery (Fletcher et al. 1995, Krause et al. 1998), have also been found to be activated more in cochlear implant users, although the longitudinal development of these patterns has yet to be investigated.


2.4 Subjective effect of cochlear implantation and quality of life

2.4.1 Self-assessment instruments for hearing-impaired adults

2.4.1.1 Domain-specific instruments

Demands for documenting the outcome of treatment have increased within the rehabilitation field during the past decade as concerns both the efficacy of the treatment (the extent to which it is beneficial) and its effectiveness (the extent to which the services are beneficial) (Frattali 1998), and this has emphasised the need for recognised, structured outcome measures.

Self-assessment methods have been used with hearing-impaired adults for a long time in order to obtain information on their ability to manage in everyday life and to provide information on both impairment and disability/handicap domains (impairment and activity limitation/participation restriction, WHO 2001). A domain (dimension) refers to a measured area of behaviour, such as hearing (Gordon et al. 1993). The need to agree on the concepts associated with separate hearing domains (WHO 1980, WHO 2001) has been shown to be important in order to overcome the difficulty of probing the separate dimensions (Stephens & Hétu 1991, Arnold 1998).

Some of the self-assessment instruments used with respect to hearing, mainly with adult populations, are presented in Table 9. These instruments have been used either alone or in combination with formal measures such as speech perception tests. Initially only a modest correlation between speech perception and self-assessed disability/handicap was reported (High et al. 1964), which led to discussion about misrepresentation of the responses to questionnaires. Later, however, speech perception results were found to correlate positively with self-assessed levels of disability/handicap, indicating a clear association between speech perception and disability in everyday life (Noble & Atherley 1970, Ewertsen & Birk-Nielsen 1973, Tannahill 1979, Kramer et al. 1996) and strengthening the credibility of self-assessment methods. Differences in the subjects’ grades of HI and in the sensitivity of the speech perception tests and inventories used are factors found to be associated with the reported correlations between self-assessment inventories and formal measures (e.g., the unresponsiveness of the inventory itself or of monosyllabic word tests for measuring changes in speech perception in subjects with different grades of HI).


Table 9. Widely used domain-specific self-assessment instruments.

Author (Year)                   Self-assessment instrument                     Acronym    Domain                          Items  Population
High et al. (1964)              Hearing handicap scale                         HHS        disability/handicap             20     adult
Noble & Atherley (1970)         Hearing measurement scale                      HMS        impairment/disability/handicap  42     adult
Ewertsen & Birk-Nielsen (1973)  Social hearing handicap index                  SHI        disability/handicap             21     adult
Jerger & Jerger (1979)          Quantifying auditory handicap                  QUAH       handicap                        40     adult
Giolas et al. (1979)            Hearing performance inventory                  HPI        impairment/disability/handicap  158    adult
Schow & Nerbonne (1982)         Self-assessment of communication               SAC        disability/handicap             10     adult
                                Significant other assessment of communication  SOAC       disability/handicap             10     adult
Ventry & Weinstein (1982)       Hearing handicap inventory for the elderly     HHIE       disability/handicap             25     elderly (> 65 years)
Kramer et al. (1995)            Amsterdam inventory of auditory disability     not given  disability/handicap             30     adult (< 65 years)
                                and handicap
Ringdahl et al. (1998)          Gothenburg profile                             GP         disability/handicap             20     adult
Hagerman (1999)                 Questionnaire for clinical tests of            NSH        disability/handicap             17     adult
                                hearing aids
Gatehouse (1999)                Glasgow hearing aid benefit profile            GHABP      disability/handicap             48     adult


2.4.1.2 Generic health-related quality of life instruments

Health-related quality of life (HRQoL) instruments may be divided into generic instruments, measuring health profiles or utility, and specific instruments, measuring health status specific to a certain area of interest, such as heart failure or asthma (Guyatt et al. 1993). The major advantage of generic instruments is the possibility of comparing quality of life across a variety of health areas, but their unresponsiveness (especially regarding hearing) may be a shortcoming, as reported earlier (Bess 2000, Krabbe et al. 2000, Karinen et al. 2001).

Several generic functional health measures have been used with hearing-impaired adults to assess the effectiveness of amplification (HA), for example (Bess 2000). The Sickness Impact Profile (SIP) (Gilson et al. 1975), the Self Evaluation of Life Function (SELF) (Linn & Linn 1984) and the Medical Outcomes Study Short Form (MOS SF-36) (Ware & Sherbourne 1992) have all been used to assess the effect of HI on functional health status. A clear negative association has been found between HI and health-related function, whereas there is a positive association between amplification and health-related functioning (Bess 2000).

Cost-utility analysis may be used to assess the value of different interventions on a uniform scale, regardless of whether the benefit accrued is additional years of life or improved quality of life (Guyatt et al. 1993). The utility measures of quality of life (usually yielding one utility value between 0.0 and 1.0) are derived from economic and decision theory and reflect the effect of treatment processes and the outcome of various interventions. The utility instruments also enable a cost-effectiveness analysis expressed in terms of quality-adjusted life-years (QALY), as has been used in a number of reports during the past decade (Harris et al. 1995, Cheng & Niparko 1999, Palmer et al. 1999). At the moment, the Nottingham Health Profile (NHP) (Hunt et al. 1980), the SF-36 (Brazier et al. 1992), the Health Utilities Index (HUI-III) (Feeny et al. 1995) and the EuroQoL (EQ-5D) (Brooks et al. 1996) are used for the assessment of health-related quality of life. Of these, the Nottingham Health Profile (Koivukangas et al. 1995) and the EQ-5D (Johnson et al. 2000) have been validated in Finland. These instruments provide information on multiple attributes of health, such as vision, hearing, speech production, ambulation, energy, social isolation and anxiety/depression.
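The QALY arithmetic behind such cost-utility comparisons can be sketched as follows. The utility values, time horizon and cost below are purely hypothetical illustrations, not figures from the studies cited above:

```python
def qalys_gained(utility_before: float, utility_after: float, years: float) -> float:
    """QALYs gained = change in utility (on the 0.0-1.0 scale) x years lived."""
    return (utility_after - utility_before) * years

def cost_per_qaly(total_cost: float, qalys: float) -> float:
    """Cost-utility ratio: money spent per quality-adjusted life-year gained."""
    return total_cost / qalys

# Hypothetical example: utility rises from 0.55 to 0.75 over a 20-year horizon,
# at a hypothetical total intervention cost of 40 000 currency units.
gained = qalys_gained(0.55, 0.75, 20.0)   # ~4.0 QALYs gained
ratio = cost_per_qaly(40_000.0, gained)   # ~10 000 per QALY
print(gained, ratio)
```

The single utility value per health state is what allows interventions as different as implantation and, say, cardiac treatment to be ranked on one scale.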

Profound HI in particular has been found to be associated with lack of energy, negative emotions and social isolation (Ringdahl & Grimby 2000), reducing the quality of life substantially (Cheng & Niparko 1999). Somewhat differing results, in terms of both quality of life and QALYs, have been obtained with the different instruments, however, and their responsiveness has been widely discussed (Cheng & Niparko 1999, Bess 2000, Krabbe et al. 2000, Karinen et al. 2001). Both specific self-assessment measures of outcome and generic health-related quality of life measures are currently recommended for use (Beck 2000).


2.4.2 Self-assessment by cochlear implant users

2.4.2.1 Subjective effect of cochlear implantation on everyday life

The speech perception abilities of adults with a cochlear implant have been studied extensively, but only a few attempts have been made to measure the subjective benefit of implantation (Rihkanen 1990, Kou et al. 1994). Self-assessment methods (instruments) have not been widely used in Finland in connection with cochlear implantation, one of the few instances being the work of Rihkanen (1990), who evaluated the subjective benefit felt by adult single-channel cochlear implant users (n=10) and their attitudes towards rehabilitation by mailing questionnaires to both the implant users and their families. He found that the implant users felt subjectively that they gained some benefit from the device in everyday situations (mainly improved perception of environmental sounds) and that they reported improvements in their handicap after implantation. A low correlation was nevertheless found between satisfaction and the results of formal audiological tests. The family members reported improvements in the cochlear implant users’ ability to monitor their own speech.

The subjective benefit obtained by postlingually hearing-impaired adults from multichannel cochlear implantation was found to be highest in the fields of speech recognition by speechreading, voice quality, independence and communication confidence (Kou et al. 1994). Slightly less benefit was reported in speech perception without speechreading. Subjective satisfaction was found to correlate well with independence and confidence, but a low correlation was found between satisfaction and objective sentence recognition scores (both auditory and auditory-visual).

2.4.2.2 Health-related quality of life after cochlear implantation

In general, HRQoL has been found to improve markedly after cochlear implantation by comparison with adults having profound HI who were still on the waiting list for cochlear implantation and served as controls (Harris et al. 1995, Summerfield & Marshall 1995, Cheng & Niparko 1999, Palmer et al. 1999). Several authors have reported positive changes in psychological well-being, a lesser sensation of pain, more favourable emotional reactions and mobility, improved overall health-utility indexes and an improvement in the ability to communicate in everyday situations (Harris et al. 1995, Maillet et al. 1995, Palmer et al. 1999, Karinen et al. 2001).

The recent discussion on the sensitivity of instruments has also touched upon the assessment of HRQoL in adult cochlear implant users after implantation, and there has been some deliberation as to whether specially constructed instruments are needed, since generic inventories may be quite unresponsive in measuring the changes in HRQoL after implantation, yielding differing economic QALY values (Carter & Hailey 1999, Krabbe et al. 2000, Karinen et al. 2001). Some instruments have been developed especially for adult cochlear implant users (Hinderink et al. 2000).


3 Aims of the research

Speech perception and auditory performance in the everyday lives of Finnish-speaking adult cochlear implant users were studied with the following specific aims:

1. To assess hearing level and word recognition in Finnish-speaking adults before and after cochlear implantation by means of a retrospective nationwide survey, and to determine how representative the results obtained with a smaller sample (from the city of Oulu) are of the nationwide situation.

2. To assess the auditory performance of Finnish-speaking adults in everyday life before and after cochlear implantation by means of a retrospective nationwide survey.

3. To examine hearing level, sentence, word and phoneme perception and auditory performance in everyday life in Finnish-speaking postlingually severely or profoundly hearing-impaired adults before and after multichannel cochlear implantation in the context of a prospective repeated measures research design.

4. To analyse vowel recognition and confusions in Finnish-speaking postlingually severely or profoundly hearing-impaired adults before and after multichannel cochlear implantation.

5. To analyse consonant recognition and confusions in Finnish-speaking postlingually severely or profoundly hearing-impaired adults before and after multichannel cochlear implantation.

6. To study the association between auditory performance in everyday life and formal speech perception results in adult multichannel cochlear implant users.


4 Subjects and methods

4.1 Subjects in the nationwide survey

The subjects in the nationwide survey (I) comprised 67 adults (32 females, 35 males) who had received an implant at four departments of otorhinolaryngology in university hospitals in Finland, those of Helsinki, Turku, Tampere and Oulu, between March 1994 and December 1999. The subjects treated at the Department of Otorhinolaryngology in Oulu met the following requirements for cochlear implantation: i) insufficient speech recognition with optimal amplification, ii) no medical or surgical contra-indications, iii) realistic expectations regarding the benefits of implantation, and iv) ability to participate in the follow-up and rehabilitation programme. The indications for cochlear implantation were not assessed separately in the case of the subjects attending the other departments of otorhinolaryngology.

The mean age of the subjects in the nationwide survey was 47 years (min. 22 years, max. 77 years) and the mean duration of profound HI (as reported in the case histories) was 13 years (min. 0 years, max. 51 years). When the pure tone threshold could not be measured at a certain frequency, a value of 130 dB HL was used in the calculations (British Society of Audiology 1988, see 4.2, p 56 and 4.3.2, p 58). The mean BEHL0.5–4 kHz before implantation was 113 dB (95% confidence interval [CI] 110 to 116 dB, n=66), and the mean PTA0.5–4 kHz with a HA in sound field was 68 dB (95% CI 59 to 78 dB, n=21). Of the 67 subjects, 48 used a HA before implantation. The aetiologies (as reported in the case histories) were otosclerosis in 10 cases, Usher’s syndrome in 6 cases, hereditary progressive SNHI in 1 case, unknown in 33 cases, and other causes such as chronic otitis media, meningitis, trauma, ototoxicity, noise-induced hearing loss or idiopathic sudden deafness in 17 cases. The implants, processors and speech coding strategies used are presented in Table 10.


Table 10. Implants, processors, speech coding strategies and active channels used (mode) in the cases included in the nationwide survey.

Implant          Processor            Speech coding strategy  Number of subjects  Mode of the active channels
Med-El Comfort   BTE (behind-the-ear) CA                      5                   1
Med-El C40 1     CIS-PRO+             CIS                     6                   8
Med-El C40GB     CIS-PRO+             CIS                     1                   4
Med-El C40+      CIS-PRO+             CIS                     14                  11
                                      n-of-m                  5                   9
Nucleus CI22M    Spectra22            SPEAK                   14                  19 and 20
Nucleus CI20+2   Spectra22            SPEAK                   5                   16
Cochlear CI24M   Sprint               SPEAK                   1                   21
                                      ACE                     13                  22
                                      CIS                     2                   22

1 One subject with no information available regarding speech coding strategy or number of active channels in the case history.

4.2 Subjects in the Oulu sample

The subjects included in the Oulu sample (II) originally comprised 20 consecutive adults (11 males, 9 females) who had received a multichannel cochlear implant at Oulu University Hospital, Finland, between April 1996 and January 1999, 19 of whom were able to participate fully (III, IV, V). Subject no. 14 had suffered a stroke in the right occipito-parietal region before implantation, which caused a distinct perseveration phenomenon during completion of the nonsense syllable test. No perseveration was detected during the assessment of hearing level, sentence perception or word perception, and her results were included in these analyses (I, II), but it became apparent during more sophisticated analyses of phoneme recognition and confusions, whereupon her results were excluded from all further analyses (III, IV, V) in order to avoid bias caused by neurological difficulties.

The mean age of the remaining 19 subjects at implantation was 45.2 years (min. 22 years, max. 66 years). Two had severe HI (BEHL0.5–4 kHz 71 to 95 dB) and were HA users (for definitions of the measurements, see 4.1, p 55 and 4.3.2, p 58), and 17 had profound HI (BEHL0.5–4 kHz > 95 dB), with a mean duration of 6.7 years. The mean BEHL0.5–4 kHz before implantation was 113 dB (95% CI 106 to 120 dB) and the mean PTA0.5–4 kHz with a HA in sound field was 62 dB (95% CI 53 to 72 dB, n=9). Details of the subjects in the Oulu sample (N=20), the implants, processors, speech coding strategies and active channels used and the duration of the follow-up period are presented in Table 11.


Table 11. Age, sex, aetiology and duration of profound hearing impairment of the 20 subjects, the devices, speech coding strategies and number of active channels used and follow-up in months.

No.  Age  Sex  Aetiology                  Duration 1  Implant         Processor 2  Strategy  Channels 3  Follow-up
1    47   M    unknown                    20          Med-El C40      CIS-PRO+     CIS       7           24
2    26   M    otosclerosis               5           Med-El C40GB    CIS-PRO+     CIS       4           24
3    43   M    otosclerosis               1           Med-El C40      CIS-PRO+     CIS       8           24
4    43   M    otosclerosis               4           Med-El C40      CIS-PRO+     CIS       8           24
5    62   M    otosclerosis               1           Med-El C40+     CIS-PRO+     CIS       11          24
6    45   M    otosclerosis               8           Med-El C40+     CIS-PRO+     CIS       11          24
7    22   M    Usher’s syndrome           6           Nucleus CI20+2  Spectra22    SPEAK     20          24
8    53   M    unknown                    9           Med-El C40+     CIS-PRO+     CIS       11          24
9    52   M    otosclerosis               3           Med-El C40+     CIS-PRO+     CIS       10          24
10   51   F    unknown                    0           Med-El C40+     CIS-PRO+     CIS       10          24
11   24   M    unknown                    2           Med-El C40+     CIS-PRO+     CIS       12          24
12   31   F    encephalitis               21          Med-El C40+     CIS-PRO+     CIS       12          24
13   42   M    unknown                    2           Nucleus CI22M   Spectra22    SPEAK     20          9
14   58   F    idiopathic sudden deafness 1           Med-El C40+     CIS-PRO+     CIS       10          18
15   54   F    unknown                    5           Med-El C40+     CIS-PRO+     CIS       8           12
16   52   F    Usher’s syndrome           15          Med-El C40+     CIS-PRO+     CIS 4     9           12
17   66   F    meningitis                 1           Cochlear C24M   Sprint       ACE       21          9
18   44   F    unknown                    0           Cochlear C24M   Sprint       ACE       22          6
19   49   F    unknown                    7           Cochlear C24M   Sprint       ACE       21          6
20   52   F    chronic otitis media       18          Med-El C40+     CIS-PRO+     CIS       10          6

1 Duration of profound HI in years according to the case histories, i.e., BEHL0.5–4 kHz > 95 dB for the first time in unaided pure tone threshold measurements.
2 Subjects 1–6 used a CIS-PRO processor until August 1997 and a CIS-PRO+ processor thereafter.
3 Number of active channels used most frequently during the follow-up.
4 CIS strategy used until 9 months after switch-on, then changed to n-of-m strategy.


4.3 Methods

4.3.1 Nationwide survey

Data on the adult subjects who received their implant at the four participating departments of otorhinolaryngology were requested from the hospitals by means of questionnaires specifically constructed for the purpose (Appendix 1). The initial aim was to compile data regarding the latest hearing level and speech perception after implantation, but since the number of subjects who had received a cochlear implant was limited (N=67), all the data on hearing level and speech perception after implantation were compiled at the departments of otorhinolaryngology.

The items in the questionnaire covered the age and sex of the subject, the aetiology and duration of HI and information on the implant, processor, speech coding strategy and number of active channels in use. Unaided and aided pure tone thresholds, word recognition auditorily only and categories of auditory performance (CAP) were recorded for the situation before implantation, and complications (as reported), pure tone thresholds in the sound field with the implant, word recognition auditorily only in the sound field and auditory performance were recorded for the situation after implantation. Word recognition had been assessed using standard Finnish word audiometric tests (Palva 1952, Jauhiainen 1974), which have proved in validation tests to be fairly comparable. Two of the subjects were Swedish-speaking and one was English-speaking, so the word recognition scores for these 3 subjects were not included in the analyses. CAP was assessed with a modified version (see 4.3.2, p 60) of an eight-step categorisation previously used for the assessment of children with implants (Archbold et al. 1995).

Due to differences in clinical routines between the departments of otorhinolaryngology, the data were pooled to represent evaluation points before implantation and 0–3, 4–6, 7–11, 12–23 and > 24 months afterwards. For the same reason, the sample sizes vary considerably between parameters.
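A minimal sketch of this pooling rule follows. The function name is illustrative, and the assignment of exactly 24 months to the "> 24" group is an assumption, since the interval labels in the text leave that boundary open:

```python
def evaluation_interval(months_after_implantation: int) -> str:
    """Pool an evaluation time into the intervals used in the nationwide survey."""
    if months_after_implantation < 0:
        return "before implantation"
    for upper, label in ((3, "0-3"), (6, "4-6"), (11, "7-11"), (23, "12-23")):
        if months_after_implantation <= upper:
            return label
    return "> 24"  # assumption: 24 months itself falls in the last group

print([evaluation_interval(m) for m in (-1, 2, 5, 9, 14, 30)])
```

Pooling of this kind trades temporal resolution for comparable sample sizes when clinics measure at different follow-up times.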

4.3.2 Oulu sample

Hearing level

Unaided and aided pure tone thresholds (0.125–8 kHz) were determined before implantation (aided for HA users) and afterwards (II, III, IV) according to the guidelines laid down in the ISO 389 (1991), ISO 8253-1 (1989) and EN ISO 8253-2 (1998) standards. When the pure tone threshold could not be measured at a certain frequency, a value of 130 dB HL was used in the calculations (British Society of Audiology 1988).


Discrimination of phoneme quantity

Discrimination of phoneme quantity was assessed with 12 bisyllabic minimal word pairs (II) in a closed-set test specially created for the present purpose (Appendix 2). The idea was that a difference in the quantity of the first vowel or medial consonant would change the meaning of the words (e.g., <tuli>, /tuli/, ’fire‘ vs. <tuuli>, /tuuli/, ’wind‘, or <kisa>, /kisɑ/, ’competition‘ vs. <kissa>, /kissɑ/, ’cat‘). As the quantity opposition exists in all vowels and the majority of the consonants (see 2.1.3.1, p 24), the phonemes to be varied were chosen to represent the extremes of the vowel quadrilateral (/i u ɑ/) and three different manners of articulation of consonants: stops (/k/), liquids (/r/) and fricatives (/s/).

The words were recorded by a male speaker and presented in a carrier phrase (e.g., <Kuulet sanan kisa.>, [kuːlet sɑnɑn kisɑ], ’You will hear the word competition.‘). The stimuli were presented in sound field via a loudspeaker and a Casio DAT DA-2 digital audiotape recorder at 70 dB SPL, measured with a Brüel & Kjær Sound Pressure Level Meter, Type 2235 (linear setting), at a distance of one metre (0° azimuth) from the loudspeaker, with the subjects sitting in a sound-insulated room. The order of the words was randomised, yielding two sequentially different sets, each word being presented twice during each test session, and the subjects were instructed to listen to the words and underline the one they heard. The test was dropped from the evaluation protocol used with an individual subject when the subject had reached the optimal level, which was considered to be over 95%.
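The presentation and scoring logic of the closed-set test can be sketched as follows. The function names are illustrative; the actual test used 12 recorded minimal pairs and printed response forms rather than anything computational:

```python
import random

def present_quantity_test(word_pairs, rng):
    """Build one randomised presentation order in which each word occurs twice."""
    words = [w for pair in word_pairs for w in pair] * 2
    rng.shuffle(words)
    return words

def percent_correct(responses, targets):
    """Closed-set score: proportion of underlined words matching the stimuli."""
    hits = sum(r == t for r, t in zip(responses, targets))
    return 100.0 * hits / len(targets)

def drop_from_protocol(score_percent: float) -> bool:
    """The test was dropped once a subject exceeded the 95% optimal level."""
    return score_percent > 95.0

# Two illustrative minimal pairs differing only in phoneme quantity:
pairs = [("tuli", "tuuli"), ("kisa", "kissa")]
print(present_quantity_test(pairs, random.Random(0)))
```

In a closed-set format the listener always chooses between the two members of a pair, so chance performance is 50%, not 0%.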

Sentence recognition

A translated form of the Helen test (Ludvigsen 1974, 1981; translated into Finnish by Lonka 1993, Appendix 3) was used for sentence recognition (II, V). This was chosen because of the lack of a validated sentence recognition test in the Finnish language at the time when the research began (Määttä et al. 2001). The Helen test is composed of 10 sets of 25 question sentences. There are five common question categories, and recognition of the sentences is based on both recognising the question category and the key words.

The sentences, again recorded by a male speaker, were presented in sound field via a loudspeaker and a Casio DAT DA-2 digital audiotape recorder at 70 dB SPL, measured with a Brüel & Kjær Precision Sound Pressure Level Meter, Type 2203 (slow linear setting), at a distance of one metre (0° azimuth) from the loudspeaker, with the subjects sitting in a sound-insulated room. The subjects listened to the question sentences and wrote the answers on a form. They were tested on nine test occasions during the follow-up, a separate set of sentences being used on each occasion in order to avoid the learning effect.

Word recognition

Two standard Finnish word audiometry lists (lists II and III: Palva 1952) were used to assess open-set word recognition (I, II, V), following the guidelines laid down in the ISO 8253-3 (1996) standard (open-set test material referring to a set of test items in which the number of alternative responses to each test item is unlimited).

Phoneme recognition and confusions

Because of the lack of validated phoneme recognition tests for the Finnish language (Määttä et al. 2001), this aspect was assessed using an open-set nonsense syllable test specially created for the follow-up of adult cochlear implant users (II, III, IV, V). The test comprised 100 syllables conforming to the phoneme and phonotactic system of the Finnish language (Häkkinen 1978, Karlsson 1983, Sulkala & Karjalainen 1992, Suomi 1996, see also 2.1.3.1, p 23) and representing combinations of all eight vowel phonemes /i y e ø æ u o ɑ/ and 11 of the 13 consonant phonemes /p t k s h m n l r j v/ in three syllable types, CV, CVV and VC (Appendix 4).
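As an illustration of the combinatorial space from which the 100 test syllables were drawn, the sketch below enumerates all CV, CVV and VC candidates. The orthographic stand-ins ö, ä and a are used for /ø æ ɑ/, and the phonotactic filtering that excluded illegal syllables in the actual test is not modelled here:

```python
import itertools

VOWELS = list("iyeöäuoa")       # stand-ins for /i y e ø æ u o ɑ/
CONSONANTS = list("ptkshmnlrjv")  # the 11 consonant phonemes used in the test

def candidate_syllables():
    """All CV, CVV (long vowel) and VC combinations of the two phoneme sets."""
    cv = [c + v for c, v in itertools.product(CONSONANTS, VOWELS)]
    cvv = [c + v + v for c, v in itertools.product(CONSONANTS, VOWELS)]
    vc = [v + c for v, c in itertools.product(VOWELS, CONSONANTS)]
    return cv + cvv + vc

syllables = candidate_syllables()
print(len(syllables))  # 11*8 + 11*8 + 8*11 = 264 candidates; the test used 100
```

The gap between the 264 formal candidates and the 100 syllables actually used reflects the phonotactic constraints (and test-length considerations) applied in constructing the material.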

The syllables were recorded in a carrier phrase (e.g., “Kuulet tavun [kɑ].”, [kuːlet tɑʋun kɑ], ’You will hear the syllable [kɑ].‘) by a male speaker and presented in sound field via a loudspeaker and a Casio DAT DA-2 digital audiotape recorder at 70 dB SPL, measured with a Brüel & Kjær Precision Sound Pressure Level Meter, Type 2203 (slow linear setting), at a distance of one metre (0° azimuth) from the loudspeaker, with the subjects sitting in a sound-insulated room. The subjects were instructed to listen to the syllables and write them down on a form.

Auditory performance

Auditory performance in everyday life (I, II, V) was assessed with a modified version of CAP, an eight-step categorisation previously used for evaluating the outcome of paediatric implantation (Archbold 1994, Archbold et al. 1995, Archbold et al. 1998). The method was slightly modified to meet the requirements of adult cochlear implant users (Appendix 1), the category “Recognition of some speech without speechreading” being adapted to include some open-set word recognition. The time scale was also slightly modified, in that a one-month assessment point was added to the protocol to reveal changes in auditory performance during the first month after switch-on of the implant. Validation of the modified method was not feasible within the scope of the present work, but the inter-user reliability of the original method has previously been found to be high (correlation coefficient between two judgements in the original CAP 0.97, Archbold et al. 1998).

The use of CAP was motivated by the occurrence of effects of HI at many levels of functioning, and by the individual nature of these effects (Stephens & Hétu 1991, Arnold 1998). Auditory performance was assessed before implantation (with a HA for HA users) and afterwards on the basis of data obtained from the case histories and interviews.

Validation of the speech perception tests for research purposes

Normal-hearing young adults aged 18–25 years (n=47, criterion for inclusion: pure-tone thresholds < 20 dB at frequencies 0.125–8 kHz) served as informants for the validation of the speech perception tests, i.e. discrimination of phoneme quantity, the sentence test, the bisyllabic word test and the syllable test, for research purposes. As in the case of the cochlear implant users in the Oulu sample, the sentences, words and syllables, recorded by a male speaker, were presented in sound field following the method used with the cochlear implant users.

The informants scored 100% for discrimination of phoneme quantity (n=30), 100% for sentence list no. 2 (n=30), 100% for sentence list no. 4 (n=17), 99.5% for sentence list no. 5 (n=17), 100% for sentence list no. 6 (n=17), 99.3% for sentence list no. 7 (n=17), 100% for sentence list no. 8 (n=17), 100% for sentence list no. 9 (n=17), 100% for sentence list no. 10 (n=17), 99.3% for bisyllabic word recognition (n=30) and 99.3% for syllable recognition (n=30). The results indicated that the tests were highly representative of the Finnish language.

Acoustic analyses of phoneme stimuli

An acoustic analysis of the phoneme stimuli (III, IV) was performed using a Computerized Speech Lab (CSL) system (model 4300B, Kay Elemetrics Corp.) at a sampling rate of 11 025 Hz with 16-bit resolution in order to describe their physical properties. A macro programme was created allowing simultaneous amplitude waveform display, sound energy calculation, spectrogram display, FFT (Fast Fourier Transform) short-time spectrum display and LPC (Linear Predictive Coding) display overlaid on the FFT spectrum display. The amplitude waveform display and spectrogram display were used for segmentation.

The vowels (III) were analysed from the stop-vowel-vowel and stop-vowel syllable types (n=24), their duration being measured from the first glottal pulse (after the explosion of the plosive) to the end of the last periodic glottal pulse. Sound energy calculations were made over the entire duration of the vowel. The three lowest formant frequencies (F1, F2, F3) were analysed at the temporal midpoint of the vowel by means of LPC and FFT spectrum analyses (30 ms formant response frame length, bandwidth cut-off 500 Hz, 12th-order filter, display range 4400 Hz).

The consonants (IV) were analysed in the CV syllable types at the extremes of the vowel space [i u ɑ]. The parameters analysed for the stops [p t k] were voice onset time (VOT) and the root-mean-square (rms) and peak amplitudes of the explosion burst. F1, F2 and F3 were analysed for the lateral [l], the nasals [m n] and the semivowels [j ʋ]. Nasal-to-vowel amplitude differences were calculated after measuring the amplitude values (rms and peak) of the nasals and the following vowels, and the transitions of the three lowest vowel formants (F1, F2, F3) were measured at 10 ms intervals for the stops, nasals and semivowels during the initial 50 ms period after the consonant. The sound energy distribution was measured for the fricatives [s h], and the numbers and durations of the prominent amplitude fluctuations were measured for the trill [r].
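The amplitude measures named above are straightforward to compute once the segment boundaries are known. The sketch below shows rms and peak amplitude and a nasal-to-vowel amplitude difference expressed in decibels; the toy sine segments are illustrative, not study data.

```python
import math

def rms(segment):
    """Root-mean-square amplitude of a waveform segment."""
    return math.sqrt(sum(s * s for s in segment) / len(segment))

def peak(segment):
    """Peak absolute amplitude of a waveform segment."""
    return max(abs(s) for s in segment)

def amplitude_difference_db(nasal, vowel):
    """Nasal-to-vowel rms amplitude difference in decibels
    (negative when the nasal is weaker than the following vowel)."""
    return 20.0 * math.log10(rms(nasal) / rms(vowel))

# Toy segments: two sinusoids over whole periods, the "nasal" at half
# the amplitude of the following "vowel".
n = 1000
nasal = [0.5 * math.sin(2 * math.pi * 10 * i / n) for i in range(n)]
vowel = [1.0 * math.sin(2 * math.pi * 10 * i / n) for i in range(n)]
diff_db = amplitude_difference_db(nasal, vowel)  # about -6.02 dB
```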


4.4 Study design

Nationwide survey

Data were requested retrospectively from the Departments of Otorhinolaryngology at the Helsinki, Turku and Tampere university hospitals with a questionnaire specially constructed for the purpose (I) (see 4.3.1, p 58). All the relevant data that were available were compiled from the case histories. Data on the 24 subjects at the Department of Otorhinolaryngology in Oulu who were participating in the nationwide survey were collected both retrospectively and prospectively from the case histories. Qualified personnel at the departments of otorhinolaryngology (the researcher herself, speech therapists and engineers) completed the questionnaires by April 2000, employing information derived from the clinical case histories and routines.

Oulu sample

The protocol for the Oulu sample followed the principles of a prospective repeated measures study (20 subjects in the Oulu sample, original communications II, III, IV, V). The follow-up consisted of nine test sessions, beginning with a preoperative session during which hearing level and speech perception were assessed without and, if the subject was a HA user, with a HA to determine the baseline for recognition abilities. After implantation, hearing level and speech perception were assessed at 3 days (hearing level and word perception) and 4 days (sentence and phoneme perception, discrimination of phoneme quantity), at 1 month (hearing level, discrimination of phoneme quantity, sentence, word and phoneme perception) and at 3, 6, 9, 12, 18 and 24 months after switch-on of the implant. Pure tone audiometry and word audiometry were performed at each evaluation point by audiology assistants. The researcher herself conducted the assessment of sentence, word and phoneme perception and discrimination of phoneme quantity at each evaluation session.

The listeners were instructed both orally and in writing to write down their answers even if they considered them unthinkable, and to leave an answer blank only if they did not perceive the stimulus at all, so that responding would have required total guessing. This was done to minimize the error caused by guessing. No feedback was provided to the subjects during the tests.

All the subjects used their multichannel cochlear implants after implantation. One preferred to use the cochlear implant and a HA simultaneously in everyday life, but used only the cochlear implant during the determinations of hearing level and testing of speech perception. Speech perception was determined at each session before reprogramming of the speech processor, i.e. with the programme the subject was familiar with. The subject's speech processor settings (programme, volume, microphone sensitivity) were also set for his/her normal conversational use in such a way as to take full advantage of the settings he/she was used to.

All the subjects received aural rehabilitation with their local speech and language therapist. Due to the decentralised rehabilitation system for hearing-impaired adults (and cochlear implant users), the extent of this rehabilitation was recorded as reported in the tertiary care case histories. The reported amounts varied considerably (min. 5 sessions, max. 30 sessions).

Because of the lack of validated phoneme recognition tests for the Finnish language at the time, the nonsense syllable test was developed at the initial stage of the follow-up, and since subjects nos. 1 and 2 served as pilot cases for this, no preoperative results for these subjects, or results of the test performed four days after switch-on for subject no. 2, were obtained with the final version of the test. Also, the follow-up period varied slightly between individuals (Table 10). These fluctuations in the protocol explain the slight variability in the sample sizes presented in the results.

Test-retest reliability in the Oulu sample

The reliability of the scores obtained in a test session was studied by the researcher herself at 3 months and 24 months after switch-on of the implant by assessing the speech perception of the subject on two consecutive days using the sentence test and the nonsense syllable test.

The subjects scored somewhat better on the second day in the sentence recognition test at 3 months, but the difference was not statistically significant, the scores being 64% (95% CI 38 to 89%, n=12/19) and 72% (95% CI 47 to 97%, n=12/19), respectively. No difference was noted at 24 months (87% [95% CI 65 to 109%, n=9/12] and 89% [95% CI 69 to 109%, n=9/12]). Likewise, no significant differences in the syllable recognition scores at 3 months were found between the two consecutive days, the scores being 25% (95% CI 13 to 38%, n=12/19) and 29% (95% CI 17 to 42%, n=12/19), respectively, and the same was true of nonsense syllable recognition at 24 months (54% [95% CI 36 to 72%, n=9/12] and 51% [95% CI 32 to 70%, n=9/12]). This indicates that no clear learning effect due to a restricted number of lists or items could be detected.

4.5 Statistical analyses

SPSS 10.0 for Windows® (SPSS Inc.®) was used for processing and statistical analysis of the data in the original communications I and II. Confidence intervals for the mean (95% CI) were calculated for the performance results (Gardner & Altman 1989). The subjects' responses to the nonsense syllable test were initially scored for correctly recognized syllables (II) to obtain the raw score for phoneme recognition in syllables.

Confidence intervals (95% CI) for proportions (Gardner & Altman 1989, Ranta et al. 1989) were calculated for phoneme recognition and confusions (III, IV, V) with the aid of the SAS Proprietary Software Release 8.1 (TS1M0).
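For a recognition score treated as a proportion, the traditional normal-approximation interval described by Gardner & Altman (1989) can be written in a few lines. Whether SAS used this or an exact method for the published CIs is not stated here, so the sketch below is illustrative only.

```python
import math

def proportion_ci(k, n, z=1.96):
    """95% confidence interval for a proportion k/n using the
    normal approximation (cf. Gardner & Altman 1989)."""
    p = k / n
    se = math.sqrt(p * (1.0 - p) / n)
    return p - z * se, p + z * se

# Example: 50 phonemes recognized correctly out of 100 presentations.
lo, hi = proportion_ci(50, 100)  # about (0.402, 0.598)
```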

For the analysis of the association between auditory performance in everyday life and the formal speech perception tests (V), the SAS Proprietary Software Release 8.2 (TS1M0) was used for processing and analysing the data. The SAS procedure Generalized Linear Model (GLM) was used to estimate the average level in CAP and in the sentence, word, syllable, vowel and consonant recognition scores during the follow-up. The best fit was found with a 3rd-degree polynomial model for repeated measurements. The 95% CIs for the separate performance measures were also estimated. The associations between CAP and the speech perception scores for sentence, word, syllable, vowel and consonant recognition were analysed using Spearman's rank order correlation coefficient (rs), and the associations between the speech perception results for sentence, word, syllable, vowel and consonant recognition using Pearson's product moment correlation coefficient (r).
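As a rough illustration of the trend modelling (not the SAS PROC GLM repeated-measures machinery, which also accounts for within-subject correlation), a 3rd-degree polynomial can be fitted to mean scores by ordinary least squares via the normal equations. The follow-up months and scores below are hypothetical illustrative values, not the study data.

```python
def fit_cubic(t, y):
    """Least-squares cubic fit y ≈ c0 + c1*t + c2*t^2 + c3*t^3,
    solved via the normal equations with Gaussian elimination."""
    # Build X^T X and X^T y for the basis (1, t, t^2, t^3).
    a = [[sum(ti ** (i + j) for ti in t) for j in range(4)] for i in range(4)]
    b = [sum((ti ** i) * yi for ti, yi in zip(t, y)) for i in range(4)]
    # Forward elimination with partial pivoting.
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 4):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * 4
    for r in range(3, -1, -1):
        s = sum(a[r][c] * coeffs[c] for c in range(r + 1, 4))
        coeffs[r] = (b[r] - s) / a[r][r]
    return coeffs

# Hypothetical mean scores at follow-up months (illustrative values).
months = [1, 3, 6, 9, 12, 18, 24]
scores = [33, 54, 74, 78, 80, 84, 87]
c = fit_cubic(months, scores)  # c[0]..c[3] describe the fitted trend
```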

The researcher herself carried out the statistical analyses on the hearing level, sentence, word and phoneme perception and phoneme confusion data, and on the results for auditory performance in everyday life (see sections 5.1–5.3, original communications I–IV). The statistical modelling using the SAS procedure GLM, the 3rd-degree polynomial model for repeated measurements, Spearman's rank order correlation coefficients (rs) and Pearson's product moment correlation coefficients (5.4, V) was performed by a statistician.

4.6 Ethical considerations

Permission to conduct the nationwide survey (I) was obtained from the respective universities or university hospitals. The Ethical Committee of the Helsinki University Eye and Ear Hospital approved the protocol as ethically acceptable for the subjects in Helsinki, the research plan was accepted by the Department of Otorhinolaryngology in Turku, and a statement by the Ethical Committee of the Faculty of Medicine, University of Oulu, was included as an appendix to the research plan. The researcher also signed a commitment to professional discretion.

For the subjects at the Department of Otorhinolaryngology in Oulu, the Ethical Committee of the Faculty of Medicine, University of Oulu, approved the research plan as ethically acceptable, and the research was carried out according to this plan. Written consent was also received from each subject at the first test session.

The normal-hearing young adults examined for the validation of the speech perception tests for research purposes were recruited from the population register and from the University of Oulu on a voluntary basis.


5 Results and comments

5.1 Hearing level, speech perception and auditory performance after cochlear implantation: the nationwide survey (I)

Data on hearing level, word recognition and auditory performance were compiled for 67 adults who had received a cochlear implant within a five-year period from 1994 to 1999 at the Hearing Centres of the Helsinki, Oulu, Turku and Tampere University Hospitals. Their median unaided pure tone thresholds at frequencies of 0.5, 1, 2 and 4 kHz before implantation were comparable to a level of profound HI (data available for 66 subjects, Fig. 3), the median aided pure tone thresholds with the best possible amplification (data available for 21 subjects) being comparable to a level of moderate to severe HI. Twelve months after switch-on of the implant, the median pure tone thresholds at the same frequencies were comparable to a level of mild HI (data available for 38 subjects).

Pooled word recognition scores in the national sample are presented in Fig. 4 and word recognition scores in the Oulu sample in Fig. 5. The mean word recognition score for the subjects in the national sample without a HA before implantation was 5% (95% CI 2 to 8%, n=50) and that for the Oulu sample 8% (95% CI 2 to 13%, n=24), while the corresponding scores for the subjects with a HA were 9% (95% CI 4 to 14%, n=34) and 13% (95% CI 3 to 24%, n=16), respectively. Three months after switch-on of the implant the mean word recognition scores were 54% for the national sample (95% CI 46 to 63%, n=38) and 53% for the Oulu sample (95% CI 41 to 66%, n=23). An improvement in mean word recognition scores over a longer period was seen in the subjects with more than 24 months of implant experience, the mean scores being 71% (95% CI 61 to 81%, n=22) and 73% (95% CI 60 to 86%, n=14), respectively.


Fig. 3. Median values for unaided (median for BEHL, n=66) and aided pure tone thresholds (median with HA, n=21) at frequencies of 0.25, 0.5, 1, 2, 3, 4 and 6 kHz before implantation, and pure tone thresholds with an implant (median with implant, n=38).

Fig. 4. Pooled word recognition scores (median and quartiles) without a HA (wo HA) and with a HA (for HA users: w HA) before implantation in the national sample, and scores after implantation.


Fig. 5. Word recognition scores (median and quartiles) without a HA (wo HA) and with a HA (for HA users: w HA) before implantation in the Oulu sample, and scores after implantation.

The effect of the duration of profound HI was assessed by dividing the subjects into those with 10 years or less of profound HI (min. 0 years, max. 10 years) and those with profound HI for 11 years or more (min. 11 years, max. 51 years). The mean word recognition score three months after switch-on was 59% for the former (95% CI 49 to 68%, n=27) and 44% for the latter (95% CI 25 to 63%, n=11), with corresponding scores of 61% and 46%, respectively, at six months, and 74% (95% CI 63 to 85%, n=16) and 64% (95% CI 34 to 93%, n=6) at two years, i.e. a reduction in variability.

The auditory performance of the adult cochlear implant users before and after implantation was assessed with CAP (Table 12), the preoperative assessment being based on the best functional situation, i.e. with a HA if one was used and without one if not. The columns represent the numbers of subjects contributing to each category at a given assessment session during the follow-up period. It may be seen that 31 of the 66 subjects (for whom data were available) were able to recognize environmental sounds or some speech without speechreading before implantation. The poorest performers (n=15) were not aware of environmental sounds, whereas 17 were, but were not able to respond to or recognize speech. Six months after switch-on of the implant, the majority of the subjects (40/48) were able to recognize some speech without speechreading, and 26 were able to use the telephone with a known speaker. One year after switch-on, 31 of the 40 subjects evaluated were able to understand conversation without speechreading.

Table 12. Categories of auditory performance (modified from Archbold et al. 1995) before and after cochlear implantation. The assessment sessions were: before implant (N=66) and 1 mo. (n=40), 3 mos. (n=60), 6 mos. (n=48), 12 mos. (n=40), 24 mos. (n=26) and 36 mos. (n=9) after implantation. The categories of performance, from highest to lowest, were: use of telephone with a known speaker; understanding of conversation without speechreading; understanding of common phrases without speechreading; recognition of some speech without speechreading; identification of environmental sounds; response to simple words (e.g., go); awareness of environmental sounds; no awareness of environmental sounds.

Comment

The median sound field hearing level of the adult cochlear implant users one year after switch-on was comparable to mild HI, being determined by the programming and settings of the devices. The pure tone threshold levels of the subjects were somewhat better than the hearing level of Finnish single-channel cochlear implant users as reported by Rihkanen (1988).

The finding that even the subjects who gained the least benefit from the cochlear implant achieved some open-set word recognition is in agreement with the results of Cohen et al. (1993), Helms et al. (1997) and Gstöttner et al. (1998), for example. In an earlier report from Finland, Rihkanen (1988) found that adult single-channel cochlear implant users reached a level of 55%–61% (n=9) in audio-visual word recognition and that HA users (n=10) performed better on audio-visual word recognition than the cochlear implant users. Auditory-only word recognition abilities were not assessed. By contrast with Rihkanen's subjects, the majority of the present subjects (62) had received multichannel cochlear implants with more advanced speech processors and coding strategies. The findings support previous observations that multichannel cochlear implants provide listeners with better possibilities for open-set auditory speech perception than do single-channel cochlear implants (Cohen et al. 1993, Summerfield & Marshall 1995, Holden et al. 1997, Hamzavi et al. 2000). The pooled results of the nationwide survey with regard to open-set auditory word recognition did not differ markedly from those of the Oulu sample, indicating that the latter is fairly representative of Finnish-speaking postlingually profoundly hearing-impaired multichannel cochlear implant users.

Contrary to previous observations on the Germanic languages (e.g., English, German, Dutch: Cohen et al. 1993, Kessler et al. 1997, Parkinson et al. 1998, van Dijk et al. 1999, Hamzavi et al. 2000), a tendency for higher word recognition scores was seen in the present results. It should be remembered, however, that the present results were achieved with open-set tests using mainly bisyllabic words (Palva 1952, Jauhiainen 1974), whereas the research based on the Germanic languages has mainly used monosyllabic word tests (cf. Hirsh et al. 1952, Haspiel & Bloomer 1961, Boothroyd 1968, Owens et al. 1985b). The predominance of bisyllabic words in the Finnish tests and the facts that word stress is regularly on the first syllable and that the unstressed syllables are not greatly reduced (Häkkinen 1978, Karlsson 1983) could have made it easier for the Finnish subjects to achieve good word recognition scores. Such results have to be approached cautiously, however, on account of individual differences between subjects and systematic language differences.

The results support previous findings that subjects with a longer duration of profound HI seem to require more time to achieve adequate open-set auditory word recognition (Kiefer et al. 1998, van Dijk et al. 1999, Geier et al. 1999). Kessler et al. (1997) suggested that the best performers reach a very high level in auditory sentence and word perception during the initial six months after switch-on, while the poorer performers, who in their case had a longer duration of profound HI, improve either steadily or very gradually after six months of implant use. Furthermore, the present results show that adult cochlear implant users reach quite high levels of auditory performance in everyday life within a short time of switch-on (as assessed with CAP, Archbold et al. 1995). To the author's knowledge, no attempt has been made previously to assess the auditory performance of adult cochlear implant users with a corresponding classification.

5.2 Hearing level, speech perception and auditory performance after cochlear implantation: a prospective repeated measures study (II)

The subjects in the prospective repeated measures study (II) comprised 20 consecutive adults (11 males, 9 females) who received a multichannel cochlear implant at the Oulu University Hospital, Finland, between April 1996 and January 1999 (4.2, Table 10). Their hearing level, speech perception and auditory performance were assessed before implantation and at 3 days (hearing level and word perception), 4 days (sentence, word and phoneme perception) and 1, 3, 6, 9, 12, 18 and 24 months after switch-on of the implant. The pure tone threshold values in sound field at frequencies of 0.5, 1, 2 and 4 kHz for the 15 subjects with at least 12 months of implant experience were comparable to the level of mild HI (Fig. 6), while the PTA 0.5–4 kHz values in sound field stabilised during the first month after switch-on of the implant.

The closed-set discrimination of phoneme quantity in words had already reached a mean level of 96% one month after switch-on of the implant (95% CI 94 to 99%, median 98%, n=13), as shown in Fig. 7.

Fig. 6. Individual and median pure tone thresholds at frequencies of 0.25, 0.5, 1, 2, 3, 4 and 6 kHz 12 months after implantation.


Fig. 7. Closed-set discrimination of phoneme quantity in words (median and quartiles) without a HA (wo HA) and with a HA (for HA users: w HA) before implantation, and scores after implantation.

The mean sentence recognition score four days after switch-on was 33% (95% CI 18 to 50%, median 24%, n=19), but there was considerable variation, since the range of scores was 100% (Fig. 8). Most improvement in the ability to recognize sentences took place during the initial six months after switch-on, the mean sentence recognition score at that point being 74% (95% CI 56 to 91%, median 92%, n=17), while improvement over a longer period of time was accompanied by a decrease in variability.

The mean open-set word recognition scores showed a steady improvement during the first six months, the mean at that assessment session being 55% (95% CI 42 to 70%, min. 10%, max. 92%, n=18) (Fig. 9). Continuous improvement was seen over the whole follow-up period, however, as the mean score after 12 months was 67% (95% CI 51 to 82%, n=14) and that after two years 72% (95% CI 57 to 88%, n=12).

Phoneme recognition in nonsense syllables improved steadily over the entire follow-up period (Fig. 10), the mean score at six months being 34% (95% CI 24 to 45%, median 26%, n=17) and that at two years 52% (95% CI 39 to 66%, median 54%, n=12). The highest individual value for phoneme recognition in nonsense syllables was 80%, achieved 24 months after switch-on, which indicates the difficulty of the recognition test.


Fig. 8. Sentence recognition (median and quartiles) without a HA (wo HA) and with a HA (for HA users: w HA) before implantation, and scores after implantation.

Fig. 9. Word recognition (median and quartiles) without a HA (wo HA) and with a HA (for HA users: w HA) before implantation, and scores after implantation.


Fig. 10. Phoneme recognition in syllables (median and quartiles) without a HA (wo HA) and with a HA (for HA users: w HA) before implantation, and scores after implantation.

The auditory performance of the adult multichannel cochlear implant users was assessed with CAP before and after implantation (Table 13). The columns represent the numbers of subjects contributing to each category at a given assessment session during the follow-up period. Before implantation, 9 of the 20 subjects were able to recognize environmental sounds or some speech without speechreading, whereas the poorest performers (7 subjects) were not aware of environmental sounds. Four of the 20 subjects were able to recognize some speech without speechreading and one was able to understand some common phrases without speechreading. Six months after switching on the implant all the subjects were able to recognize some speech without speechreading, and 14 of the 20 were able to use the telephone with a known speaker. One year after switch-on, 13 of the 16 subjects examined were able to use the telephone with a known speaker in everyday life.

Table 13. Categories of auditory performance (modified from Archbold et al. 1995) before and after multichannel cochlear implantation. The assessment sessions were: before implant (N=20) and 1 mo. (n=20), 3 mos. (n=20), 6 mos. (n=20), 12 mos. (n=16) and 24 mos. (n=12) after implantation. The categories of performance were the same as in Table 12, from use of telephone with a known speaker down to no awareness of environmental sounds.

Comment

The pure tone thresholds of the Oulu subjects with a multichannel cochlear implant in sound field at frequencies of 0.5, 1, 2 and 4 kHz were in line with the pooled results of the nationwide survey (5.1, I), being somewhat more favourable than the hearing level reported by Rihkanen (1988) for Finnish single-channel cochlear implant users. These subjects also performed somewhat better in the closed-set discrimination of phoneme quantity in words than the single-channel cochlear implant users, who had been listening to a corresponding word list (mean 96% at one month after switch-on in the Oulu sample vs. 85% after a minimum of 11 months of implant use in that of Rihkanen [1988]). The shorter duration of profound HI in the Oulu subjects (mean 6.7 years vs. 19 years in Rihkanen [1988]) may also have contributed to their better performance. The duration of profound HI has been found to be a strong indicator of performance after implantation (Shallop et al. 1992, Blamey et al. 1996, van Dijk et al. 1999).

The present open-set sentence recognition results are in agreement with those for subjects using mainly the same devices and coding strategies, i.e. Med-El implants with the CIS strategy and Nucleus implants with the SPEAK strategy (Helms et al. 1997, Kessler et al. 1997, Gstöttner et al. 1998, Hamzavi et al. 2000). The present results also confirm the beneficial effects of a short duration of profound HI and of residual hearing before implantation (see 4.2, Table 11, p 57) (Kessler et al. 1997, Kiefer et al. 1998, van Dijk et al. 1999), which are to be seen especially in the steep improvement in sentence recognition during the initial six months after switch-on (Fig. 8).

The level of difficulty of the test may also have contributed to the high sentence recognition scores achieved here, as also reported by Kiefer et al. (1996), for example. The grammar and topics covered in the sentence test used here were quite simple (as is the case in most sentence tests, cf. Bench et al. 1979, Owens et al. 1985b), and the scores might have been slightly lower with a more difficult battery of sentences (Kiefer et al. 1996). The present subjects scored better on sentence recognition than those described by Cohen et al. (1993), Cohen et al. (1997), Proops et al. (1999) and Parkinson et al. (1998), who were using Nucleus22 implants with feature-extraction strategies, which indicates that somewhat more favourable results can be achieved with the newer, more advanced devices and strategies. In view of the language differences, the differences in the difficulty of the tests used and the differences in the subjects' backgrounds, any comparisons between the results must be approached with caution, however.

The present open-set word recognition results are in line with the pooled results of the nationwide survey (5.1, I), which supports the earlier view that the Oulu sample may be considered representative of Finnish-speaking adult cochlear implant users. Furthermore, a tendency for higher word recognition scores was seen relative to the results for the Germanic languages (Cohen et al. 1993, Cohen et al. 1997, Helms et al. 1997, Kessler et al. 1997, Gstöttner et al. 1998, Kiefer et al. 1998, Parkinson et al. 1998, van Dijk et al. 1999, Waltzman et al. 1999, Hamzavi et al. 2000). The predominance of bisyllabic words in the test may have had a beneficial effect, as may be seen from studies of the word recognition abilities of Spanish adult cochlear implant users, for example (Manrique et al. 1998).

Page 78: Speech perception and auditory performance in hearing

76

Phoneme recognition in open-set nonsense syllables turned out to be the most demanding listening task, as expected, because it minimized the contextual redundancy usually arising from meaningful words or sentences (Dubno & Dirks 1982, Butts et al. 1987). The mean scores were in line with those for word and phoneme recognition in the Germanic languages as assessed with monosyllabic words (Cohen et al. 1993, Helms et al. 1997, Kessler et al. 1997, Gstöttner et al. 1998, Parkinson et al. 1998, van Dijk et al. 1999, Hamzavi et al. 2000). The results clearly indicate the differences in test difficulty.

The auditory performance of the Oulu subjects (assessed with CAP) was quite consistent with the results of the nationwide survey, showing that adult multichannel cochlear implant users are able to reach higher categories of auditory performance within a shorter time interval than children with cochlear implants, as reported by Archbold et al. (1995).

5.3 Phoneme recognition and confusions with a multichannel cochlear implant (III, IV)

5.3.1 Vowel recognition and confusions (III)

Since the overall percentage of correct responses in the recognition of phonemes reveals very little about the recognition of individual vowels and consonants, the subjects' replies in the nonsense syllable test were also coded for phoneme errors segment by segment (confusions, omissions and additions; III, IV). To avoid bias in this coding, the answers that formed real words were scored separately as words, since they were often bisyllabic words (e.g., <mittää>, /mittææ/, 'nothing', and <aamu>, /ɑːmu/, 'morning', for the stimulus [k ]). Subject no. 14 was excluded from all the analyses because of a distinct perseveration phenomenon in the nonsense syllable test (see 4.2, p 56). The syllable type factor was ignored in the analyses in order to concentrate on the recognition of individual vowels and consonants and related confusions. The combining of single and double vowels in the analysis of vowel recognition was supported by the absence of essential qualitative differences between them (Lehtonen 1970, Iivonen & Laukkanen 1993). Syllable and phoneme omissions were combined to form a category 'no response' (nr).

The overall vowel recognition scores for the Oulu subjects are presented in Fig. 11. The subjects with a HA scored 19% in overall vowel recognition before implantation (95% CI 16 to 21%, n=12) and 68% six months after switch-on (95% CI 66 to 70%, n=17). Their overall vowel recognition performance seemed to level off at 18 months (mean 80%, 95% CI 78 to 82%, n=12), although the possibility of a continued improvement cannot be ruled out without a longer follow-up.



Fig. 11. Vowel recognition before implantation without HA (wo HA) and with HA (for HA users: w HA), and scores four days (4d) and 1, 3, 6, 9, 12, 18 and 24 months after switch-on of the implant (N=19).

The rates of recognition of individual vowels during the two-year follow-up period are shown in Table 14. A clear improvement in the recognition of all the individual vowels occurred during the initial 12 months, and the scores continued to improve over the entire two-year period. The vowels [æ], [u], [i], [o] and [ɑ] were the easiest to recognize at two years, while the front vowels [y], [e] and [ø] were the most difficult over the entire follow-up period.

Vowel confusion percentages during the two-year follow-up are shown in Fig. 12. The subjects confused the vowels in a very variable manner during the first three months, and a considerable number of stimuli were not perceived at all, but from that time onwards it was clear that the front vowels [y], [e] and [ø], which were the most difficult to recognize, were also most often confused with others. The vowel [ø] was confused particularly with /æ/ at three months (32% of cases), [y] with /i/ (26%) and [e] with /æ/ (21%). By six months the pattern of confusions had become more consistent, but the front vowels [y], [e] and [ø] were still the most difficult to recognize even after twelve months, [y] being particularly often confused with /i/ and /e/, [e] particularly with /æ/ and /ø/, and [ø] particularly with /æ/ and /e/. After two years of implant use the same vowels were still causing the most difficulties, even though the recognition scores for most of the individual vowels were very high.

Table 14. Recognition scores for the vowels in percentages, with 95% confidence intervals (CI) for proportions, without HA (wo HA) and with HA (for HA users: w HA) before implantation, and scores during the follow-up after switch-on of the implant. Scores are given for each of the eight Finnish vowels at the assessment sessions wo HA, w HA, 4 days, 1 mo., 3 mos., 6 mos., 9 mos., 12 mos., 18 mos. and 24 mos.

Fig. 12. Confusions between vowels, vowel stimuli not responded to at all (nr) and stimuli recognized as words (word), in percentages, with HA (for HA users: w HA) before implantation, and during the follow-up after switch-on of the implant.

The vowels that were the most difficult to recognize and the three most prevalent confusion patterns for these at nine months after switching on the implant are plotted on an acoustic vowel chart in Fig. 13. As shown, the front vowels [i], [y], [e] and [ø], which gained the lowest recognition scores at nine months, were clearly the most easily confused, each with the closest vowel having a higher F1, and in two cases ([y] and [ø]) with a higher F1 and F2. The front vowels are the ones with the smallest differences in formant frequencies in Finnish. With two years of listening experience, the front vowels [y], [ø] and [e] remained the most difficult to distinguish from their closest neighbours with a similar F1 or F2 (Fig. 14).
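The "closest neighbour with a higher F1" pattern can be made concrete with a small distance computation in the F1/F2 plane. The formant values below are rough, illustrative textbook figures for an adult male voice, not measurements from the test stimuli, so the specific neighbours found are only a sketch of the idea:

```python
import math

# Rough, illustrative F1/F2 values (Hz) for the eight Finnish vowels of an
# adult male voice -- ballpark figures, NOT measurements from the study.
formants = {
    "i": (300, 2300), "y": (300, 1850), "e": (450, 2050), "ø": (450, 1600),
    "æ": (700, 1700), "ɑ": (650, 1050), "o": (450, 850),  "u": (300, 700),
}

def nearest_higher_f1(vowel):
    """Closest vowel (Euclidean distance in the F1/F2 plane) among those
    with a strictly higher F1 -- the direction in which the confusions
    predominantly went. Returns None if no vowel has a higher F1."""
    f1, f2 = formants[vowel]
    candidates = [v for v, (g1, _) in formants.items() if g1 > f1]
    if not candidates:
        return None
    return min(candidates,
               key=lambda v: math.hypot(formants[v][0] - f1,
                                        formants[v][1] - f2))

for v in ("i", "y", "e", "ø"):
    print(v, "->", nearest_higher_f1(v))
```

With these approximate values the higher-F1 neighbours of the difficult front vowels fall among /e/ and /æ/, in line with the direction of the confusions described above.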

Fig. 13. The four vowels that were the most difficult to recognize at nine months after switch-on of the implant, and the three most prevalent confusion patterns for these vowels, plotted on an acoustic vowel chart (N=12).


Fig. 14. The three vowels that were the most difficult to recognize two years after switch-on of the implant, and the three most prevalent confusion patterns for these vowels, plotted on an acoustic vowel chart (N=12). Note the high recognition score for [i].


Comment

The overall vowel recognition results are quite in line with those achieved elsewhere with recent multichannel cochlear implants and coding strategies (CIS, SPEAK; Whitford et al. 1995, Skinner et al. 1996, Helms et al. 1997, Parkinson et al. 1998, Pelizzone et al. 1999), but it should be remembered that they were achieved with an open-set recognition test involving no other closed-set restrictions than the Finnish phoneme system itself, in contrast to the closed-set tests used widely in the reports from other countries. The subjects actually reached a very high level in vowel recognition with their multichannel cochlear implants, as nonsense syllable tests have been found to be highly reliable and demanding when testing the phoneme perception of subjects with HI (Dubno & Dirks 1982). The fact that the stimuli were recorded by a single male speaker may have slightly enhanced the results, as has also been reported by Loizou et al. (1998).

In contrast to several earlier reports on overall vowel recognition by cochlear implant users (Doyle et al. 1995, Whitford et al. 1995, Skinner et al. 1996, Helms et al. 1997, Parkinson et al. 1998, Pelizzone et al. 1999), the present results showed a continuous improvement in the recognition of individual vowels. The fact that the front vowels ([y], [ø] and [e]) were the most difficult for the cochlear implant users to recognize during the follow-up period could not have been revealed by an analysis of overall vowel recognition.

The vowel confusion results are in general quite consistent with previous reports on confusions between the acoustically and perceptually closest vowels (Tyler et al. 1992, Skinner et al. 1996, van Wieringen & Wouters 1999), and are especially in line with those of van Wieringen & Wouters (1999), who found that the front and central vowels, which are acoustically close to each other (/y/, / /, /i/ and / /), were the most difficult to recognize.

The present results revealed clear patterns in the confusions, however. Although most of them were related to the acoustically and perceptually closest vowels, as has been reported earlier (Tyler et al. 1992, Skinner et al. 1996, van Wieringen & Wouters 1999), it was evident here that the predominant erroneous responses involved the vowel with the next higher F1, and in a number of cases the next higher F1 and F2. Starting from three months after switch-on, a situation involving two vowels with either F1 or F2 at roughly the same frequencies would be resolved mainly in favour of the vowel with the next higher F1 or F2. Furthermore, the confusion patterns were quite consistent, which supports the claim that subjects who perform better are able to utilize more spectral information in vowel recognition than those who perform poorly (van Wieringen & Wouters 1999), since 11 of the 12 subjects assessed at 24 months after switch-on may be considered good performers in terms of their ability to perceive sentences and words (5.2, II).

As the vowel stimuli were produced naturally, the consonant contexts in which the vowels occurred reflect natural syllabic contexts of the Finnish language. Alveolar consonants have been found to yield higher F2 transitions, whereas labial consonants have been found to yield lower ones, for example (Iivonen & Laukkanen 1993). To the author's knowledge, no systematic study of the effect of these transitions on the perception of Finnish vowels by normal-hearing or hearing-impaired listeners has been reported. These natural variations in vowel formant transitions were not controlled synthetically in the present case, but were naturally embedded in the stimuli. Hence, the vowel recognition results may reflect the effect of the natural contextual variation on vowel perception better than those obtained with a test employing a controlled, single consonant context. Vowel recognition has been found to be easier when both the dynamic cues given by vowel transitions and the static cues obtained from the steady-state character of the vowel are available (Kirk et al. 1992).

There have been few Finnish studies on vowel perception in listeners with normal or impaired hearing, and the present report has yielded the first results on this aspect of the performance of Finnish cochlear implant users. In some experiments, filtering in the frequency range 0.25–4 kHz, sloping by 24 dB SPL/octave (simulating HI sloping to high frequencies), has been found to obliterate part of the F2 band and distort the spectrum, causing confusion between front and back vowels (between /i/, /y/ and /u/ and between /e/ and /o/) in Finnish bisyllabic words, for example (Kiukaanniemi & Määttä 1980). Confusions between front and back vowels were rare or absent in the tests performed on the present experienced cochlear implant users, however. While HI, particularly SNHI, reduces the acoustic information at higher frequencies, cochlear implants provide acoustic information across a wider frequency range.

The present results support previous claims regarding the existence of vowel confusions, since the acoustic information may be indistinguishable (Harnsberger et al. 2001), and they also support the earlier observation of partial adaptation to electrical stimulation (Dorman et al. 1997a, Shannon et al. 1998, Fu & Shannon 1999a, Rosen et al. 1999) and of the effect of reduced formant structure perception on vowel recognition (Harnsberger et al. 2001).

5.3.2 Consonant recognition and confusions (IV)

The subjects with a HA scored 17% (95% CI 14 to 19%, n=12) on overall consonant recognition before implantation (Fig. 15). Six months after switch-on, the subjects scored 58% in consonant recognition (95% CI 56 to 61%, n=17). A steady improvement was seen during the whole follow-up period, and the score after two years was 71% (95% CI 68 to 73%, n=12).
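The confidence intervals reported throughout (e.g. 58%, 95% CI 56 to 61%) are intervals for binomial proportions. The exact formula is not restated in this section, so the sketch below uses the normal-approximation (Wald) interval as a plausible stand-in; the function name and the counts are illustrative:

```python
import math

def proportion_ci(correct: int, total: int, z: float = 1.96):
    """Approximate 95% confidence interval for a recognition score.

    Normal-approximation (Wald) interval for a proportion -- an
    assumption, since the study does not state which CI method it used.
    Returns (score, lower, upper) rounded to whole percentages.
    """
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)  # half-width of the interval
    lo = max(0.0, p - half)
    hi = min(1.0, p + half)
    return round(100 * p), round(100 * lo), round(100 * hi)

# e.g. 580 correct consonant identifications out of 1000 presented items
print(proportion_ci(580, 1000))
```

With pooled counts of this magnitude the interval comes out a few percentage points wide, comparable to the intervals quoted in the text.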


Fig. 15. Consonant recognition without HA (wo HA) and with HA (for HA users: w HA) before implantation, and scores four days (4d) and 1, 3, 6, 9, 12, 18 and 24 months after switch-on of the implant (N=19).

The scores for classification of the manner of articulation of the consonants are shown in Table 15. The trill (mean score 89%, 95% CI 84 to 93%), the fricatives (mean score 67%, 95% CI 61 to 73%) and the stops (mean score 65%, 95% CI 61 to 69%) were the three easiest manners of articulation to classify three months after switch-on, while the lateral remained the category with the poorest classification after two years (score 50%, 95% CI 39 to 62%, n=12). Manner of articulation was perceived well during the follow-up, however, and the subjects made only a few confusions in this respect, more frequently among the resonants (nasals, lateral and semivowels) than between the voiceless stops and resonants.

The places of articulation of consonants produced in the dental and alveolar regions were most often correctly classified during the 24-month follow-up period, the score being 80% (95% CI 76 to 83%) two years after switch-on (Table 16). The labial place of articulation was clearly the most difficult to recognize, and the labials were often confused with dentals and alveolars. Two years after switch-on the subjects scored 53% (95% CI 46 to 60%) in the classification of the labial place of articulation.
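The manner- and place-classification scores credit a response that lands in the right class even when the individual consonant is wrong. A minimal sketch of that scoring, with the manner classes as defined in Table 15 and hypothetical stimulus–response pairs:

```python
# Manner-of-articulation classes as defined in Table 15.
MANNER = {
    "stop": {"p", "t", "k"}, "fricative": {"s", "h"}, "nasal": {"m", "n"},
    "lateral": {"l"}, "trill": {"r"}, "semivowel": {"j", "v"},
}

def manner_of(consonant):
    """Manner class a consonant belongs to."""
    return next(m for m, members in MANNER.items() if consonant in members)

def manner_score(pairs):
    """Percentage of (stimulus, response) pairs whose response falls in
    the same manner class as the stimulus, even if the exact consonant
    is wrong -- e.g. [p] heard as [t] still counts as a classified stop."""
    hits = sum(manner_of(s) == manner_of(r) for s, r in pairs)
    return 100 * hits / len(pairs)

# Five hypothetical trials: four manner hits (p->t, t->t, k->p, m->n)
# and one manner miss (l->n, lateral heard as nasal).
pairs = [("p", "t"), ("t", "t"), ("k", "p"), ("m", "n"), ("l", "n")]
print(manner_score(pairs))
```

Replacing `MANNER` with the place classes of Table 16 (labials /p m v/; dentals and alveolars /t s n l r/; the remaining group) gives the place-classification scores in the same way.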


Table 15. Recognition scores for manner of articulation, with 95% confidence intervals for proportions without (wo HA) and with a HA (w HA) before implantation, and during the follow-up after switch-on of the implant.

Manner of articulation: Stops (/p t k/), Fricatives (/s h/), Nasals (/m n/), Lateral (/l/), Trill (/r/), Semivowels (/j v/)

wo HA 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0)

w HA 20 (17–24) 12 (7–17) 20 (15–24) 10 (4–17) 42 (35–50) 31 (25–38)

4 days 48 (44–52) 48 (41–56) 29 (24–34) 20 (12–28) 59 (52–65) 34 (27–40)

1 mo. 58 (53–62) 61 (53–68) 45 (39–51) 19 (11–27) 77 (71–83) 51 (44–58)

3 mos. 65 (61–69) 67 (61–73) 57 (52–62) 23 (15–31) 89 (84–93) 62 (56–68)

6 mos. 75 (71–78) 78 (73–84) 71 (66–76) 38 (29–47) 94 (92–97) 71 (65–77)

9 mos. 79 (75–83) 75 (68–82) 71 (66–77) 36 (25–47) 94 (91–98) 76 (69–82)

12 mos. 86 (83–89) 77 (70–83) 78 (73–83) 46 (36–57) 98 (97–100) 82 (76–87)

18 mos. 88 (85–91) 79 (72–85) 87 (83–91) 54 (43–66) 100 (100–100) 85 (79–90)

24 mos. 93 (90–95) 80 (73–86) 90 (86–94) 50 (39–62) 100 (100–100) 86 (80–91)

Table 16. Recognition scores for place of articulation, with 95% confidence intervals for proportions without a HA (wo HA) and with a HA (w HA) before implantation, and during the follow-up after switch-on of the implant.

Place of articulation: Labials (/p m v/), Dentals and alveolars (/t s n l r/), Palatals, velars and glottals (/j k h/)

wo HA 0 (0–0) 0 (0–0) 0 (0–0)

w HA 16 (11–21) 21 (18–24) 19 (15–22)

4 days 14 (10–19) 46 (43–49) 29 (25–33)

1 mo. 25 (19–31) 59 (56–63) 40 (36–45)

3 mos. 31 (25–36) 65 (62–68) 52 (48–56)

6 mos. 35 (30–41) 71 (68–74) 62 (58–66)

9 mos. 34 (27–41) 73 (70–77) 59 (54–64)

12 mos. 42 (36–49) 75 (72–78) 68 (64–73)

18 mos. 48 (41–55) 77 (74–80) 72 (67–76)

24 mos. 53 (46–60) 80 (76–83) 75 (71–80)


Analysis of the recognition of individual consonants (Table 17) revealed that the trill [r] and the fricative [s] were the ones that were most often recognized correctly, this being the case as early as four days after the initial switch-on, and their recognition had reached a high level by six months (94% and 96%, respectively). A clear improvement occurred in the recognition of the other consonants throughout the follow-up period of 24 months, the consonants [h], [m], [l] and [v] remaining the most difficult to recognize.

There were also clear differences in the recognition of individual consonants and in related confusions within the same manner category, even though the manner of articulation was classified correctly (Fig. 16). The stops were mostly confused with each other, but also with the fricative /s/ or with the nasals /m/ and /n/. The stops [k] and [t] were somewhat easier to recognize than [p], for example. Thus the score for [k] at nine months was 50% (95% CI 42 to 58%), that for [t] 55% (95% CI 48 to 63%) and that for [p] only 31% (95% CI 20 to 41%), the difference between [p] and the pair [k] and [t] being statistically significant at the 0.05 level. A slight, but not significant, difference in the recognition scores for these three stops was seen even after two years of listening experience with a cochlear implant. The fricative [h] was also very difficult to recognize and was often confused with /s/, with the nasals and with the semivowels. Furthermore, the lateral [l] was often confused with the vowels and with the nasals.

The results also revealed some unidirectional response patterns. For example, while the nasals were mostly confused with each other and with the lateral and the semivowels, the nasal [m] was more often falsely identified as /n/ than [n] was as /m/ (Fig. 16). A similar pattern was seen in the semivowels, in that while they were confused either with each other or with the nasals, the semivowel [v] was more often confused with the nasals and with /j/ than the semivowel [j] was with /v/ or with the nasals.

Table 17. Recognition scores (in percentages) for the consonants, with 95% confidence intervals for proportions without a HA (wo HA) and with a HA (for HA users: w HA) before implantation, and during the follow-up after switch-on of the implant.

Session: /p/, /t/, /k/, /s/, /h/, /m/, /n/, /l/, /r/, /j/, /v/

wo HA 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0) 0 (0–0)
w HA 6 (1–12) 8 (4–12) 14 (9–19) 12 (4–19) 6 (1–12) 9 (3–15) 11 (6–16) 10 (4–17) 42 (35–50) 28 (21–35) 18 (6–30)
4 days 15 (8–22) 26 (21–32) 26 (20–32) 70 (61–79) 7 (2–13) 6 (1–11) 27 (21–33) 20 (12–28) 59 (52–65) 34 (27–41) 17 (6–27)
1 mo. 27 (18–37) 37 (31–44) 33 (27–40) 82 (74–90) 16 (8–23) 13 (6–20) 41 (34–48) 19 (11–27) 77 (71–83) 52 (44–60) 19 (7–31)
3 mos. 32 (23–40) 42 (36–48) 44 (38–50) 86 (80–93) 27 (19–35) 15 (8–22) 50 (44–56) 23 (15–31) 89 (84–93) 64 (57–70) 26 (14–38)
6 mos. 40 (31–49) 43 (37–49) 54 (48–60) 96 (93–100) 33 (25–42) 21 (14–29) 63 (57–69) 38 (29–47) 94 (92–97) 75 (69–81) 32 (19–44)
9 mos. 31 (20–41) 55 (48–63) 50 (42–58) 98 (93–101) 23 (14–33) 21 (12–30) 64 (57–72) 36 (25–47) 94 (91–98) 80 (74–87) 39 (23–55)
12 mos. 45 (35–56) 51 (44–58) 65 (58–72) 100 (100–100) 31 (21–41) 27 (18–37) 69 (62–75) 46 (36–57) 98 (97–100) 83 (77–89) 41 (26–55)
18 mos. 47 (36–59) 51 (43–58) 65 (58–72) 99 (96–101) 33 (22–44) 35 (24–46) 73 (67–81) 54 (43–66) 100 (100–100) 89 (83–94) 44 (28–61)
24 mos. 53 (41–64) 61 (53–68) 68 (61–75) 99 (96–101) 40 (29–52) 40 (29–52) 76 (70–83) 50 (39–62) 100 (100–100) 95 (91–99) 36 (20–52)

Fig. 16. Confusions between individual consonants, consonant stimuli not recognized or not responded to at all (nr), and stimuli recognized as words (word), in percentages, with HA (for HA users: w HA) before implantation, and during the follow-up after switch-on of the implant. To be continued.

Fig. 16. Continued.


Comment

The present results with respect to overall consonant recognition, even though achieved with an open-set recognition test, are quite consistent with others reported recently for multichannel cochlear implants implementing the CIS and SPEAK strategies (Whitford et al. 1995, Helms et al. 1997, Parkinson et al. 1998, Pelizzone et al. 1999, Skinner et al. 1999). This indicates that the subjects had quite a good ability to recognize consonants, since nonsense syllable tests have been found to be demanding for subjects with impaired hearing (Dubno & Dirks 1982).

The results are also quite in line with previous reports regarding the Germanic and Romance languages in the fact that the manner of articulation was easier to classify than the place (Doyle et al. 1995, Pelizzone et al. 1999, Skinner et al. 1999, van Wieringen & Wouters 1999), and they also show that while the manner of articulation may be correctly classified, the place of articulation within the manner category may not be, which supports previous findings that most errors occur between the places of articulation (Pelizzone et al. 1999, Skinner et al. 1999, van Wieringen & Wouters 1999). Additionally, the present findings are consistent with previous findings that confusions are more prevalent among resonants than between stops and resonants (Dorman et al. 1990, Pelizzone et al. 1999, Skinner et al. 1999, van Wieringen & Wouters 1999).

Some differences in consonant recognition patterns were found, however. The present results revealed prospective changes in the recognition of individual consonants during a two-year period, and also clear differences in the recognition patterns for consonants within the same category. There were consonants within the manner categories that were consistently better recognized than others, those recognized better being ones with either the spectral energy distribution of the consonant or the locus of the formant transitions at higher frequencies. Specifically, the consonants with velar, palatal and alveolar transitions (high F2, 1.5–2 kHz) were somewhat better recognized than those with labial transitions (low F2, 1.2–1.4 kHz).

The results also pointed to some unidirectional confusion patterns. The consonants were confused to a greater extent with their phonetically closest neighbours with a higher spectral energy distribution or a higher F2 transition. For example, the fricative [h] was more often confused with /s/ than [s] was with /h/, the nasal [m] was more often confused with /n/ than [n] was with /m/, and the semivowel [v] was more frequently confused with /j/ than [j] was with /v/. The prospective improvement in the recognition of individual consonants and the discrepancies in the recognition of individual consonants within the different manners and places of articulation have not been reported before (see 2.3.2, p. 43; Whitford et al. 1995, Helms et al. 1997, Parkinson et al. 1998, Pelizzone et al. 1999, Skinner et al. 1999, van Wieringen & Wouters 1999).

The effects of the ambient language on the confusion patterns should not be forgotten, since the Finnish phoneme system has a sparse obstruent and fricative inventory. There are only four stops in the core consonant system of Finnish, which yields limited confusion alternatives with the voiced stops, for example. Confusions between voiced and voiceless stops have been reported in the Germanic and Romance languages (Pelizzone et al. 1999, Skinner et al. 1999, van Wieringen & Wouters 1999), but these were extremely rare in the present material, and the same is true of confusions among fricatives (Pelizzone et al. 1999, Skinner et al. 1999, van Wieringen & Wouters 1999). There is only one real fricative in the Finnish phoneme inventory (the spectral structure of [h] being similar to that of a voiceless vowel), which also minimizes the confusion alternatives within this manner category and clearly contributes to the high percentage of correct recognitions of [s]. The Finnish trill [r] was likewise very seldom confused with any other consonant, as was also the case in Dutch (van Wieringen & Wouters 1999), for example.

Few reports have been published on consonant recognition by Finnish hearing-impaired listeners, and the present results are the first to apply to Finnish cochlear implant users (see also 5.3.1, p. 83). Filtering at frequencies of 0.25–4 kHz sloping by 24 dB SPL/octave (simulating HI sloping to high frequencies) has been found to yield better recognition for the resonants than for the stops or fricatives in normal-hearing listeners (in Finnish bisyllabic words; Kiukaanniemi & Määttä 1980), but the present results pointed to quite the opposite recognition patterns in adult multichannel cochlear implant users (see above), which may need to be taken into account in the rehabilitation of such people in Finland.

As the stimuli were produced naturally, the context in which the consonants occurred reflects the natural syllabic contexts of Finnish. This means that the present results may reflect the effect of natural contextual variation on consonant perception better than results obtained with tests employing a controlled phoneme context (see also 5.3.1, p. 82). To the author's knowledge, no systematic study has been made of the perception of Finnish consonants, and it is clear that further research is needed on the contextual effects on consonant recognition.

The present finding that consonants were more difficult to recognize than vowels is in general agreement with many previous observations (Whitford et al. 1995, Skinner et al. 1996, Helms et al. 1997, Parkinson et al. 1998, Pelizzone et al. 1999, Skinner et al. 1999, van Wieringen & Wouters 1999). The results regarding vowel and consonant recognition imply that multichannel cochlear implant users confuse the phonemes more in the direction of a spectral energy distribution at higher frequencies than in that of a spectral energy distribution at lower frequencies.

The results support previous observations on the difficulties of categorizing and labelling phonemes with only a minimal difference in spectral energy distribution (Boothroyd et al. 1996, Dorman et al. 1997a, Fu & Shannon 1999a, Rosen et al. 1999, Shannon et al. 1998).

5.4 Association between auditory performance and speech perception (V)

Recognition scores were initially calculated for sentences, words and syllables in the Oulu sample (5.2, II), and for the recognition of the proportions of vowels and consonants (5.3, III, IV). In order to analyse the association between auditory performance and speech perception, the baseline of the best functional level was taken to be the level achieved before cochlear implantation (with a HA in the case of the HA users). The assessment intervals were determined by the sessions held for CAP. The aim was to estimate the change in auditory performance in everyday life and in the sentence, word, syllable, vowel and consonant recognition scores, and to smooth the random error between the subjects. The improvement in performance may best be described by using a curve for each of the measures separately, and the aim was to find a curve that would give the best fit to the scores. The SAS procedure Generalized Linear Model (GLM) was used to estimate the average level in CAP and in the sentence, word, syllable, vowel and consonant recognition scores during the follow-up. To facilitate the analysis, the CAP values (0–7) were uniformly scaled to percentages, even though the CAP items do not follow a uniform scale. Because of the prospective repeated measures design, the measured values could not be considered independent, and an analysis of repeated measurements was used. The best fit was found with the 3rd-degree polynomial model for repeated measurements. The 95% confidence intervals (CI) were also estimated.
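The modelling step can be sketched concretely. The scores below are hypothetical placeholders shaped roughly like a follow-up curve (not the study's data), and `np.polyfit` is only a simple least-squares stand-in for the repeated-measures GLM, since it ignores the within-subject correlation the full model accounts for:

```python
import numpy as np

# Follow-up times in months (4 days ~ 0.13 mo.) and hypothetical mean
# word-recognition scores (%) -- illustrative placeholders only.
months = np.array([0.13, 1, 3, 6, 9, 12, 18, 24])
scores = np.array([30.0, 43, 51, 60, 64, 68, 72, 75])

# CAP categories (0-7) scaled uniformly to percentages, as in the text,
# so that all measures share one axis (hypothetical category values).
cap_raw = np.array([2, 4, 5, 5, 6, 6, 6, 7])
cap_pct = 100 * cap_raw / 7

# 3rd-degree polynomial fit, mirroring the model that fitted best.
coefs = np.polyfit(months, scores, deg=3)
fitted = np.polyval(coefs, months)
print(np.round(fitted, 1))
```

The fitted curve for each measure, together with its confidence band, is what Fig. 17 summarizes.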

The associations between CAP and the speech perception scores for sentence, word, syllable, vowel and consonant recognition were analysed using Spearman's rank-order correlation coefficients (rs), and those between the speech perception results for sentence, word, syllable, vowel and consonant recognition using Pearson's product-moment correlation coefficients (r).
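Both coefficients can be computed without assuming any particular statistics package; the sketch below hand-rolls them (Spearman's rs is simply Pearson's r applied to average ranks, which handles the ties in an ordinal scale such as CAP) on hypothetical paired scores, not the study's data:

```python
import math

def pearson(x, y):
    """Pearson's product-moment correlation coefficient r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rank-order coefficient: Pearson's r on average ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # extend over a run of tied values and assign the average rank
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical paired observations for illustration (not the study data):
cap       = [2, 3, 4, 5, 5, 6, 6, 7]         # ordinal CAP categories
sentences = [10, 35, 55, 70, 72, 80, 83, 90] # sentence recognition, %

print(round(spearman(cap, sentences), 2))
```

Spearman's coefficient is the appropriate choice against CAP because CAP is an ordinal category scale, whereas the percentage-scored tests can be compared with each other using Pearson's r.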

The estimated average performances in CAP and in sentence, word, syllable, vowel and consonant recognition during the follow-up are shown in Fig. 17. The majority of the subjects were able to understand conversation without visual cues after three months, which represents a fairly good auditory performance in everyday life (5.2). At that point the estimated average level of performance was 69% for sentences (95% CI 59 to 80%, n=18), 51% for words (95% CI 43 to 59%, n=17), 27% for nonsense syllables (95% CI 21 to 33%, n=18), 59% for vowels (95% CI 51 to 66%, n=18) and 44% for consonants (95% CI 38 to 50%, n=18).

After six months, 14 of the 19 subjects could use a telephone in everyday life (5.2). At that time the estimated average sentence recognition was 77% (95% CI 66 to 89%, n=16, Fig. 17), word recognition 60% (95% CI 51 to 69%, n=17), vowel recognition 69% (95% CI 62 to 76%, n=17) and consonant recognition 53% (95% CI 48 to 59%, n=17), whereas the level of nonsense syllable recognition was only 37% (95% CI 31 to 43%, n=17), indicating difficulty in that area.


Fig. 17. Adult cochlear implant users' estimated average level of performance in CAP and in sentence, word, nonsense syllable, vowel and consonant recognition during the follow-up. See 5.2, Table 13, for the categories of auditory performance.

Spearman's rank-order correlation coefficients for the relations between CAP and the sentence, word, syllable, vowel and consonant recognition results are shown in Table 18. All the coefficients were high (rs ≥ 0.81, p < 0.0001), showing that the formal speech perception tests correlated significantly with a measure indicating auditory performance in everyday life. The inter-correlations between the speech perception results were also high (r ≥ 0.79, p < 0.0001).


Table 18. Overall Spearman's rank-order correlation coefficients (rs) for the relations between the categories of auditory performance (CAP) and sentence, word, syllable, vowel and consonant recognition, and overall Pearson's product-moment correlation coefficients (r) for the relations among the sentence, word, syllable, vowel and consonant recognition results.

Sentence recognition, Word recognition, Syllable recognition, Vowel recognition, Consonant recognition

CAP (rs) 0.81*** 0.83*** 0.85*** 0.83*** 0.83***

Sentence recognition (r) 0.88*** 0.79*** 0.85*** 0.89***

Word recognition (r) 0.87*** 0.87*** 0.88***

Syllable recognition (r) 0.90*** 0.91***

Vowel recognition (r) 0.92***

*** p < 0.0001.

Analysis of the associations between CAP and the formal speech perception results at the different assessment sessions during the two-year follow-up revealed differences between the formal tests used (Table 19). The word recognition scores before implantation showed the highest and most significant correlation with CAP, while the sentence recognition results and the nonsense syllable results both showed a modest correlation with auditory performance in everyday life. The lowest correlation was seen between CAP and syllable recognition before implantation (rs = 0.54, df = 15, p < 0.05). The highest correlations one month after switching on the device were found between CAP and sentence and word recognition, but a high correlation was also seen between CAP and the nonsense syllable test even at this stage. All the formal tests showed a modest correlation with CAP one year after switching on the implant.

Table 19. Spearman's rank-order correlation coefficients (rs) for the relations between the categories of auditory performance (CAP) and the sentence, word, syllable, vowel and consonant recognition results before implantation and 1, 3, 6, 12 and 24 months afterwards.

Categories of auditory performance

Test level: Before implant, 1 mo., 3 mos., 6 mos., 12 mos., 24 mos.

Sentence recognition 0.66*** 0.83*** 0.65** 0.63** 0.62* 0.51

Word recognition 0.78*** 0.80*** 0.74*** 0.69** 0.61* 0.48

Syllable recognition 0.54* 0.76** 0.72*** 0.73** 0.61* 0.48

Vowel recognition 0.64** 0.82*** 0.73*** 0.72*** 0.46 0.48

Consonant recognition 0.64** 0.91*** 0.68** 0.61** 0.61** 0.48

* p < 0.05, ** p < 0.01, *** p < 0.001.


Comment

To the best of the author's knowledge, the association between auditory performance in everyday life and formal speech perception results has not been studied before in adult cochlear implant users. Likewise, no comparable report can be found on the associations between the different speech perception measures. A close association was seen here between formal speech perception tests and auditory performance in everyday life (Table 18), in line with earlier results showing a positive relationship between speech perception and self-assessed disability/handicap in hearing-impaired adults (Noble & Atherley 1970, Kramer et al. 1996). Both CAP and the formal speech perception tests showed a continuous improvement after implantation, which inevitably contributes to the quite close positive correlation between the measures. The high and statistically highly significant correlations between CAP and the speech perception results indicate that the formal tests used here were good indicators of auditory performance in everyday life.

The statistical modelling revealed a definite ceiling effect in the average estimated performance of the adult cochlear implant users, both in CAP and in the sentence recognition results, while performance in word and phoneme recognition showed continuous improvement over the whole two-year period. Deeper analysis of the correlations during the two-year follow-up also showed 1) that the correlation between CAP and the formal speech perception results was highest during the first year after switch-on of the implant, and 2) that the correlation between CAP and the word, syllable and phoneme recognition results was highest after only 3 months. Of the speech perception tests with different levels of difficulty, i.e. the sentence test, word test and nonsense syllable test, the highest association with CAP during the follow-up was achieved by sentence recognition at one month. The ceiling effect in the estimated average level of sentence recognition implies that the sentence recognition test used here cannot be used as the only measure of auditory performance in everyday life, but it does seem to be most responsive during the first month.

The nonsense syllable test seemed to be difficult, especially before implantation, which may point to possible unresponsiveness in measuring the performance of subjects with very limited auditory capabilities (Owens et al. 1985b). It was particularly useful for detecting improvement in performance during the two-year follow-up, however, and its correlation with CAP (like that between CAP and vowel and consonant recognition) was high during the first six months, indicating its value as another measure of auditory performance in everyday life. The present results imply that a wide test battery is needed, since tests with different levels of difficulty are responsive at different stages during the follow-up.

The close association between CAP and the formal speech perception results during the initial six months after switch-on of the implant indicates that the changes in performance in everyday life due to improved speech perception could be detected with CAP, especially during the initial stage. The ceiling effect in CAP and its modest correlations with the speech perception results, especially after 12 months of implant use, may imply a possible unresponsiveness in the case of adults with over one year of listening experience with an implant. The present results imply that the modified form of CAP may be used as a functional outcome measure during the initial stage after adult cochlear implantation, but more demanding categories, e.g. the understanding of conversation in a small group of people or in a noisy environment, may be needed to assess auditory performance in everyday life after several years of listening experience.

Although CAP is not an instrument measuring the elements of disability/handicap (activity limitation/participation restriction) in everyday life, but rather an instrument for assessing auditory performance, it may be considered to indicate disability in everyday communication situations. It is encouraging that the present results regarding auditory performance in everyday life (indicating an aspect of disability) and the formal speech perception results (representing measures of disability in a laboratory setting) show a high and statistically highly significant correlation. The results provide some guidelines for the prediction of everyday functioning from formal speech perception results, e.g. persons using a telephone with a known speaker 12 months after the implant had been switched on scored an average of 75% on word recognition.


6 General discussion

6.1 Validity of the present study

This study yields the first results on hearing level, auditory speech perception and auditory performance in everyday life applying to adult Finnish-speaking cochlear implant users. The speech perception of such listeners has been widely studied in the Germanic and Romance languages, but due to language differences, the results may not be directly applicable to Finnish. The only comparable work done previously in Finland concerned hearing levels, closed-set auditory speech discrimination, auditory-visual speech perception and subjective benefit in adults using a single-channel cochlear implant (Rihkanen 1988).

Study design

The present research was based on data compiled from several sources, which enabled hearing levels, word recognition and auditory performance after cochlear implantation to be investigated in a larger sample than could have been obtained from one university hospital. The nationwide survey employed data from four departments of otorhinolaryngology at university hospitals in Finland, covering altogether 67 adults. The data were collected both retrospectively (Helsinki, Oulu, Turku and Tampere) and prospectively (Oulu) up to April 2000, and had been compiled according to standard clinical routines (Helsinki, Turku, Tampere) or according to the systematic research plan (Oulu sample). Thus it has been possible to put forward here the first nationwide results on the benefits of multichannel cochlear implantation for adults with postlingual profound HI and to define the representativeness of a smaller sample, which is important for a small language area such as Finland.

The prospective Oulu sample (assessed in more detail) was composed of 20 subjects with severe or profound HI who were followed up at fixed intervals until September 1999 on a prospective repeated measures basis. The use of several measures, i.e. hearing level, sentence, word and phoneme perception and auditory performance in everyday life, together with the prospective study design, enabled versatile and more specific information to be gathered on speech perception performance. This was found to be important, as it proved capable of increasing and extending our knowledge of the ability of severely or profoundly postlingually hearing-impaired adults to perceive speech auditorily using only a multichannel cochlear implant.

The results of the Oulu sample (n=24) with respect to hearing level and word recognition did not differ appreciably from the pooled results of the nationwide survey (n=67), indicating that the Oulu sample may serve as a good representative of the auditory abilities of Finnish-speaking adult multichannel cochlear implant users.

Methods

Analysis of the determination of the pure tone thresholds in sound field with the implant revealed clear variations in clinical routines between the departments of otorhinolaryngology. These data provide important support for programming the implants and determining their settings, as they supply information on hearing thresholds with the implant at different frequencies, and this aspect should not be overlooked when working with cochlear implant users.

The speech perception of the subjects in the Oulu sample (N=20) was assessed by closed-set discrimination of phoneme quantity in words and open-set sentence, word and nonsense syllable tests before and after implantation. None of these tests has been validated for the assessment of auditory speech perception after multichannel cochlear implantation, but rather they were constructed to measure the performance of hearing-impaired subjects before rehabilitation. Since no validated tests exist in Finland for assessing the performance of adult cochlear implant users, the results gathered here may serve as basic information on the responsiveness and usability of speech perception tests with different degrees of difficulty for this purpose.

Closed-set discrimination of phoneme quantity in words was clearly the easiest of the speech perception tests, and is justified by the structure of the Finnish language, in which quantity performs a distinctive phonological function in the case of both vowels and consonants (see 2.1.3.1, p 23). A closed-set test with only two alternatives entails a chance element of 50%, however, reducing the reliability of the test. The test was removed from the present evaluation protocol when the subject reached a score of over 95%, a level of performance that could be considered optimal, in that a slight variation in individual scores could be associated with changes in alertness, etc. A closed-set discrimination test with only two alternatives is easy for the majority of adult multichannel cochlear implant users and hence quite unresponsive.
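As a worked illustration of the 50% chance element (not part of the study's protocol; the list length and scores here are hypothetical), an exact one-sided binomial test shows how far a two-alternative closed-set score must rise above chance before it can be distinguished from guessing:

```python
# Illustrative sketch only: exact one-sided binomial test of a
# two-alternative closed-set score against the 50% chance level.
from math import comb

def p_above_chance(correct: int, total: int, p_chance: float = 0.5) -> float:
    """P(X >= correct) for X ~ Binomial(total, p_chance)."""
    return sum(comb(total, k) * p_chance**k * (1 - p_chance)**(total - k)
               for k in range(correct, total + 1))

# A listener scoring 18/20 on a hypothetical quantity-discrimination list
# is well above chance (p ~ 0.0002), whereas 12/20 is not (p ~ 0.25).
p_high = p_above_chance(18, 20)
p_low = p_above_chance(12, 20)
```

This also illustrates why short two-alternative lists are unreliable: with only 20 items, scores up to about 70% are still compatible with guessing.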

The translated sentences of the Helen test represent the written and spoken phoneme and syllable structures of Finnish fairly well. The grammar and topics covered in the sentence test were quite simple, as is true of many of the sentence tests used in other languages (cf. Bench et al. 1979, Nilsson et al. 1994), e.g. sentence material initially constructed for the assessment of hearing-impaired children. The scores might have been slightly lower if a more difficult test had been used (cf. Kiefer et al. 1996), but no validated sentence test existed for Finnish at the time when this work began. The ceiling effect in the estimated average level of sentence recognition (see 5.4) implies that the test used in this study cannot be used as the only describer of auditory performance in everyday life, and it seems to be most responsive during the initial six months. It will be important in future to use several sentence tests of differing difficulty to assess speech perception at this level.

Two separate word lists (Palva 1952) were used for the Oulu sample to avoid the learning effect in word perception, but the restricted number of lists may still have affected the results. Unfortunately, there are few validated word perception tests available for Finnish. Phoneme recognition in open-set nonsense syllables was the most demanding listening task, as anticipated, because it minimized the contextual redundancy inherent in meaningful words or sentences. The nonsense syllable test clearly revealed the differences between the adult cochlear implant users in their ability to perceive manners and places of articulation and the individual phonemes within the manner and place categories. This knowledge may serve as a basis for planning auditory rehabilitation.

Changes in auditory performance in everyday life due to improved speech perception were assessed with a modified version of CAP, the results showing that this approach may usefully be taken as a measure of functional outcome after adult cochlear implantation. CAP provides a measure that is globally informative for professionals, the implant users themselves and health-care purchasers, as also discussed by Archbold and her colleagues (1998). A definite ceiling effect was evident in the average estimated performance of adult cochlear implant users in CAP, however (see 5.4), at a point where the word and phoneme recognition tests were still showing continued improvement. Since CAP was initially constructed for assessing the outcome of paediatric implantation, more demanding categories (e.g., understanding of conversation in a small group of people or in a noisy environment) may be needed to evaluate the performance of adult cochlear implant users in everyday life.

The statistical methods used here (the generalized linear model (GLM) 3rd-degree polynomial model for repeated measurements, Spearman's rank order correlation coefficients and Pearson's product moment correlation coefficients) enabled us to study the association between auditory performance in everyday life and the formal speech perception measures.
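The two correlation measures can be sketched as follows. This is an illustrative reimplementation with toy follow-up data, not the study's dataset or its actual GLM analysis; the variable names and values are hypothetical:

```python
import numpy as np

def average_ranks(x):
    """1-based ranks; tied values share the mean of their positions."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    r = np.empty(len(x))
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        r[order[i:j + 1]] = (i + j) / 2 + 1  # mean of tied positions
        i = j + 1
    return r

def pearson(a, b):
    """Pearson product moment correlation coefficient."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def spearman(a, b):
    """Spearman rank order correlation: Pearson applied to the ranks."""
    return pearson(average_ranks(a), average_ranks(b))

# Hypothetical follow-up data for one subject: CAP category (ordinal)
# and word recognition score (%) at fixed intervals (months).
months = np.array([0, 1, 3, 6, 12, 18, 24])
cap = np.array([1, 3, 4, 5, 6, 6, 7])
words = np.array([0.0, 20.0, 45.0, 60.0, 70.0, 72.0, 75.0])

rs = spearman(cap, words)  # ordinal CAP -> rank correlation is appropriate
# A 3rd-degree polynomial trend over time, loosely analogous to the
# polynomial repeated-measures model used in the study.
trend = np.polyval(np.polyfit(months, words, deg=3), months)
```

Spearman is used with CAP because CAP is an ordinal scale, whereas Pearson assumes interval-level data such as percentage scores.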

The close correlation between CAP and the speech perception results indicates that the formal tests used here are good describers of auditory performance in everyday life (indicating an aspect of disability/handicap, or activity limitation/participation restriction). Deeper analysis of this association during the two-year follow-up (see 5.4) also showed that the correlation varied during this time. Hence, the various speech perception tests measure performance after implantation at different stages during the follow-up. Initial improvement is described best by the easier tests, and improvement over a longer period by the more demanding tests.

The nonsense syllable test, for example, may be somewhat unresponsive for measuring the performance of subjects with very limited auditory capabilities (being difficult and showing the lowest correlation with CAP before implantation, see also Owens et al. 1985b), but it was particularly suitable for detecting continued improvement in speech perception during the two-year follow-up. The close correlation between CAP and nonsense syllable, vowel and consonant recognition during the evaluation period argues for the use of the nonsense syllable test as a further measure of auditory performance in everyday life with adult multichannel cochlear implant users, while the high inter-correlations between the speech perception tests indicate that the formal tests measure aspects of speech perception that are closely associated with each other.

The high correlations between CAP and the formal speech recognition results are encouraging, since they provide some guidelines for the prediction of performance in everyday life by means of formal speech perception results.

Test-retest reliability

It was considered important to study the reliability of the scores in a test session. This was done at 3 months (the initial learning period after switching on the implant) and at 24 months (a more stable period) by assessing the subjects' speech perception on two consecutive days with the sentence test and the nonsense syllable test. The reliability of both tests was quite good, since no statistically significant differences in the recognition scores were noted between the two consecutive days at either 3 or 24 months after switching on the implant. This implies that no clear learning effect due to the restricted number of lists or items on a list could be detected.

6.2 Implications for the rehabilitation system

It has been claimed that standardised, specific, diverse and uniform testing procedures would enhance the ability of researchers, clinicians and the cochlear implant users themselves to compare the results of speech perception tests and assess functional outcome (Danhauer et al. 1986, Luxford et al. 2001). Uniform testing procedures would also facilitate adequate interpretations of the outcome by researchers and clinicians.

Some suggestions have included open-set sentence and word recognition tests in the minimal battery for assessing auditory speech perception in postlingually deafened adult cochlear implant users, while simpler tests (such as closed-set suprasegmental and segmental tests) and vowel and consonant recognition tests would be excluded from the battery (Luxford et al. 2001). However, since the goal of the rehabilitation system is high-quality patient care, the quality of the evaluation system should not be sacrificed in the interests of time saving (Wiley et al. 1995). On the other hand, the removal of less responsive measures, e.g. closed-set tests, may in some cases allow sufficient time for the use of effective procedures that require more time.

The retrospective nature of the data collection revealed clear differences in the clinical routines of the departments of otorhinolaryngology regarding both assessment methods and assessment intervals. Based on these findings, systematic protocols are recommended for the clinical follow-up of adult multichannel cochlear implant users. Such protocols should include assessment of hearing level and speech recognition tests with different degrees of difficulty administered at fixed intervals, i.e. tests assessing sentence, word and phoneme perception, together with assessment of auditory performance in everyday life, in order to evaluate the short-term and long-term effects of implantation. Tests with different degrees of difficulty are important 1) for motivating the subjects themselves, 2) for motivating the staff at different levels of health care, 3) for programming the devices, and 4) for planning rehabilitation. Closed-set discrimination tests seem to be unresponsive with adult multichannel cochlear implant users (with postlingual HI), however, on account of the ceiling effect, and are not recommendable in the light of the present results.

Speech and language therapists need information not only on overall sentence and word perception after multichannel cochlear implantation, but also on the specific recognition and confusion patterns of individual phonemes. The present results provide some guidelines on phonemes that are especially difficult to perceive with a multichannel cochlear implant. The differences in phoneme recognition between adult users of these implants and the results of experiments simulating HI sloping to higher frequencies (Kiukaanniemi & Määttä 1980) are also obvious. Hence, data on individual subjects' performance in phoneme perception need to be available when planning systematic auditory rehabilitation.

The present research concerns only the auditory speech perception of Finnish-speaking adults with postlingual severe or profound HI receiving a multichannel cochlear implant. There is still a great deal of work to be done in the field of rehabilitation after implantation. The results with regard to phoneme perception and confusions may also provide some clues for rehabilitation in the case of children receiving multichannel cochlear implants, since the phonemes that cause difficulty for adult implant users may also do so for children.

6.3 Future research

The present research yields merely a set of basic results on the auditory speech perception of Finnish-speaking adult multichannel cochlear implant users, since no such data have been available before. Although the duration of profound HI and residual hearing before implantation have been found to be among the strongest predictors of speech perception performance after multichannel cochlear implantation (Kessler et al. 1997, Kiefer et al. 1998, van Dijk et al. 1999), all the present subjects achieved at least some open-set speech perception, indicating that a longer duration of profound HI should not be a decisive contraindication for cochlear implant candidacy. It is important, however, to counsel such subjects that improvement in open-set speech recognition can require a longer period of implant listening experience, and that the benefit achieved may be more limited. Also, the effect of the audio-visual communication mode should not be forgotten, as this is the main communication mode in most everyday life situations. Research into the above aspects will be needed in future to describe the improvement in performance achieved by Finnish-speaking adult cochlear implant users in more detail.

Many other factors that were not focused on in this study have been found to be associated with auditory performance after multichannel cochlear implantation. The data acquired by the prospective repeated measures approach enabled various factors associated with the programming and settings of the devices to be noted, which may increase our knowledge of device-related factors affecting speech perception. According to experimental studies with normal-hearing subjects, a simulated shift in the spectral envelope has been found to be devastating for phoneme identification (Dorman et al. 1997a, Fu & Shannon 1999c, Shannon et al. 1998). It would be interesting to know whether further analysis of the effects of the insertion depth of the electrode array, the cut-off frequencies of the individual channels, or the mismatch between the simulation and the original tonotopy would reveal any differences in phoneme recognition and confusion patterns between individual subjects.

Other areas deserving closer attention are the optimal number of active channels needed for speech perception and the effect of the stimulation rate in different devices. The number of active channels needed for optimal vowel and consonant recognition may vary (Dorman et al. 1997b, Fishman et al. 1997, Kiefer et al. 2000), and the stimulation rate can obviously be adjusted to improve the speech perception abilities of cochlear implant users (Kiefer et al. 2000, Loizou et al. 2000, Kiefer et al. 2001). Further research, preferably in an experimental design enabling systematic variation of the different strategies and the parameters of the programming options in the devices, is needed in order to study factors associated with the speech perception performance of Finnish multichannel cochlear implant users. Such results could provide more information on optimal programming of the implants and promote maximal individual benefit from the programming options contained in the devices for Finnish-speaking users.

As the stimuli used in the present tests were produced naturally, the contexts in which the consonants and vowels occurred reflected natural syllabic contexts in the Finnish language. The present results may therefore reflect the effect of natural contextual variation on phoneme perception better than results obtained with a controlled phoneme context (see 5.3.1 and 5.3.2). To the best of the author's knowledge, no systematic report exists on the effects of context on the perception of Finnish vowels and consonants. Further research into these effects will also be needed, not least to meet the needs of speech and language therapists planning auditory rehabilitation for cochlear implant users.

The auditory performance of cochlear implant users can be measured on several levels (e.g. impairment, activity limitation/participation restriction and HRQoL). The reported unresponsiveness of many generic inventories for detecting specific changes in performance and HRQoL after implantation (Carter & Hailey 1999, Krabbe et al. 2000, Karinen et al. 2001) emphasises the need for specific measures at the disability/handicap level (or activity limitation/participation restriction) as well. The present findings concerning the responsiveness of CAP indicate that it could be used as a domain-specific measure for assessing changes in performance associated with improved hearing, and even better with the slight modifications suggested here. Further research will also be needed to investigate the inter-user and intra-user reliability of CAP. Certainly factors not directly related to the measures used here (e.g. age, IQ and personality) also account for variability both in the results of the formal tests and in auditory performance in everyday life (Gatehouse 1991). These aspects were not assessed here, but they may be considered to have affected the auditory performance in particular.

Prior to cochlear implantation, 8 of the 14 HA users in the present series had achieved some degree of open-set speech recognition with a conventional HA, but they gained insufficient help from this as far as the requirements of their work and social life were concerned. Without multichannel cochlear implantation, the option might have been a disability pension, which had already been planned for three of them. Twelve of the subjects have been able to continue working after implantation, and this has also affected their individual HRQoL. It would be interesting to know what effects this could have on the cost-utility ratio for cochlear implantation. If resource allocation in health care were based on generic HRQoL measures that are unresponsive to the changes occurring after multichannel cochlear implantation, a bias could exist in the assessment of the outcome and benefits of such an intervention. Further research is needed to investigate both the responsiveness and utility of different measures and the utility (and cost-utility) of multichannel cochlear implants for subjects with severe or profound HI, both globally and in Finland.

Furthermore, the NIH Consensus Statement (Cochlear implants in adults and children 1995) extends the audiological indications for cochlear implantation to subjects with bilateral severe or profound HI, and discussion on indications for cochlear implantation candidacy is still in progress (Kiefer et al. 1998, Rubinstein et al. 1999). The present results suggest that the indications for cochlear implantation should certainly be reconsidered.

The present research revealed many interesting patterns of speech perception in postlingually severely or profoundly hearing-impaired adults after receiving a multichannel cochlear implant. These findings may increase our knowledge of speech perception patterns in general and carry implications for planning the rehabilitation of adult cochlear implant subjects. Cochlear implant users can be expected to benefit both from information and guidance on possible speech recognition difficulties and from systematic listening practice during the rehabilitation period. More prospective and focused research is needed in order to gain further insight into issues that are associated with speech perception performance.


7 Summary and conclusions

The aim of this work was to provide information on hearing levels, auditory speech perception and auditory performance in everyday life in adults with severe or profound postlingual HI having a multichannel cochlear implant. For this purpose the effects of the implant on hearing level and speech perception were evaluated in a large series of patients (nationwide survey, N=67), and hearing level, sentence, word, syllable and phoneme recognition abilities and auditory performance in everyday life were investigated in more detail in a small series according to a prospective repeated measures design over a two-year follow-up period (Oulu sample, N=20).

1. One year after switch-on of the implant, the median sound field hearing level of the adult cochlear implant users in the nationwide survey at frequencies of 0.5, 1, 2 and 4 kHz was comparable to the level of mild HI. The pure tone thresholds in sound field measured for the subjects in the Oulu sample were in line with the pooled results of the nationwide survey.

All the subjects in the nationwide survey achieved at least some open-set word recognition after implantation, as assessed with standardised, validated word recognition tests (mean 71%, 95% CI 61 to 81%). The lack of any appreciable differences between these pooled results and those for the Oulu sample indicates that the latter is fairly representative of the situation in the country as a whole.

2. Assessment of the auditory performance of the adult cochlear implant users in everyday life with CAP showed that 31 of the 66 subjects in the nationwide survey for whom data were available were able to recognize environmental sounds or some speech without speechreading before implantation, with a HA if they normally used one. The poorest performers (n=15) were not aware of environmental sounds, and 17 of the 66 subjects were aware of environmental sounds but were not able to respond to or recognize speech. Six months after switch-on, the majority of the subjects (40/48) were able to recognize some speech without speechreading, and 26 of them were able to use the telephone with a known speaker. After one year 31 of the 40 subjects were able to understand conversation without speechreading.

3. The greatest improvement in the sentence recognition ability of the subjects in the Oulu sample (followed up according to the prospective repeated measures design) took place during the initial six months, whereas the mean open-set word recognition scores showed a steady improvement during the two-year follow-up. Vowel and consonant recognition in nonsense syllables also improved steadily over the entire follow-up period. Fourteen out of 19 subjects were using a telephone in everyday life after six months, and 11 out of 12 were able to do so with a known speaker after two years of listening experience with the cochlear implant.

The estimated average sentence recognition score six months after switching on the implant was 77% (95% CI 66 to 89%, n=16), word recognition 60% (95% CI 51 to 69%, n=17), vowel recognition 69% (95% CI 62 to 76%, n=17) and consonant recognition 53% (95% CI 48 to 59%, n=17), whereas the level of entire nonsense syllable recognition was only 37% (95% CI 31 to 43%, n=17), indicating the difficulty of this test. The estimated average sentence recognition score after two years was 89% (95% CI 71 to 106%), word recognition 73% (95% CI 58 to 87%), syllable recognition 53% (95% CI 42 to 63%), vowel recognition 80% (95% CI 68 to 92%) and consonant recognition 67% (95% CI 57 to 76%).

4. The overall vowel recognition performance of the subjects in the Oulu sample seemed to level off at 18 months (mean 80%, 95% CI 78 to 82%, n=12), although a possible continued improvement in vowel recognition cannot be ruled out without a longer follow-up. Deeper analysis of the individual vowel recognition scores and confusions showed that the front vowels ([i], [y], [e] and [ø]), which were the most difficult to recognize, were also the ones that were most often confused with other vowels from three months of implant use onwards.

The Finnish front vowels, which gained the lowest recognition scores at nine months, were clearly confused most often with the closest vowel having a higher F1, and in several cases a higher F2 as well, [y] being particularly confused with /e/ and /i/, [e] with /æ/ and /ø/, and [ø] with /æ/ and /e/. The front vowels also showed the smallest differences in formant frequency distribution, and [y], [ø] and [e] remained the ones that were the most difficult to distinguish from the closest vowels with a similar F1 or F2 even after two years of listening experience. In general, the present results support earlier suggestions of partial adaptation to the electrical stimulation and of an effect of reduced formant structure perception on vowel recognition.

5. A steady improvement in overall consonant recognition was seen in the subjects belonging to the Oulu sample throughout the follow-up period, the score on consonant recognition at two years being 71% (95% CI 68 to 73%, n=12). Deeper analysis of individual consonant recognition patterns and confusions showed that the trill [r] (score 89%, 95% CI 84 to 93%), the fricatives [s h] (score 67%, 95% CI 61 to 73%) and the stops [p t k] (score 65%, 95% CI 61 to 69%) were among the three easiest manners of articulation to classify at three months, while the lateral [l] remained the consonant with the poorest identification after two years (score 50%, 95% CI 39 to 62%, n=12).

Analysis of the consonant confusions showed that the subjects made only a few confusions between the manners of articulation, and that confusions among the resonants (the nasals [m n], lateral [l] and semivowels [j ]) were more frequent than confusions between the voiceless stops ([p t k]) and the resonants. The labial place of articulation ([p m v]) was clearly the most difficult to classify throughout the 24-month follow-up, and the labials were often confused with the dentals and alveolars ([t s n l r]). Thus the subjects still scored only 53% (95% CI 46 to 60%) in classification of the labial place of articulation after two years' experience. There were also clear differences in the recognition of individual consonants within the same manner category and in confusions between them, even though the manner of articulation may have been classified correctly.

The results also revealed some unidirectional response patterns. While the nasals were mostly confused with each other and with the lateral and the semivowels, the nasal [m] was more often misidentified as /n/ than was [n] as /m/. A similar pattern was seen in the semivowels, in that while they were confused either with each other or with the nasals, the semivowel [ ] was more often misidentified as a nasal or as /j/ than was the semivowel [j] as a nasal or /v/.

Thus the consonants were evidently more difficult to recognize than the vowels, and the multichannel cochlear implant users tended to confuse the phonemes more in the direction of a spectral energy distribution at higher frequencies than in that of a distribution at lower frequencies. The results support previous observations that categorization and labelling of phonemes with only a minimal difference in spectral energy distribution remains difficult, but emphasize the possibility of confusion in the direction of phonemes having their spectral energy distribution at higher frequencies.

6. The Spearman correlation coefficients (rs > 0.81, p < 0.0001) between CAP and all the speech perception results were high, showing that the formal speech perception tests correlated significantly with a measure indicating auditory performance in everyday life. At the same time the inter-correlations between the speech perception results themselves were high (r > 0.79, p < 0.0001), indicating that the formal tests measured aspects of speech perception that were closely associated with each other.

Systematic prospective assessment is needed for the follow-up of cochlear implant users, and the present results point to the importance of formal speech perception tests with different levels of difficulty for detecting changes in speech perception performance in such subjects over longer periods of time. The systematic protocols should include speech recognition tests with different degrees of difficulty administered at fixed intervals, i.e. tests of sentence, word and phoneme perception and of auditory performance in everyday life, in order to assess the short-term and long-term effects of cochlear implantation on these parameters.

The present study also provides some guidelines on the phonemes that are difficult to perceive with a multichannel cochlear implant. Information on the specific recognition and confusion patterns of individual phonemes may be considered valuable for speech and language therapists when planning auditory rehabilitation.

References

Archbold S (1994) Implementing a paediatric cochlear implant programme: theory and practice. In: McCormick P, Archbold S & Sheppard S (eds) Cochlear implants for young children. Whurr, London, p 25–59.

Archbold S, Lutman ME & Marshall DH (1995) Categories of auditory performance. Ann Otol Rhinol Laryngol 104 (Suppl 166): 312–314.

Archbold S, Lutman ME & Nikolopoulos T (1998) Categories of auditory performance: inter-user reliability. Br J Audiol 32: 7–12.

Arnold P (1998) Is there still a consensus on impairment, disability and handicap in audiology? Br J Audiol 32: 265–271.

ASHA (1998) Joint Committee of the American Speech-Language-Hearing Association and the Council on Education of the Deaf. Hearing Loss: Terminology and classification. Position statement & technical report. Asha 40 (Suppl 18): 22–23.

Beck LB (2000) The role of outcomes data in health-care resource allocation. Ear & Hear 21: 89S–96S.

Bench J, Kowal Å & Bamford J (1979) The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children. Br J Audiol 13: 108–112.

Benkí JR (2001) Place of articulation and first formant transition pattern both affect perception of voicing in English. J Phonet 29: 1–22.

Berliner KI, Luxford WM & House WF (1985) Cochlear implants: 1981 to 1985. Am J Otol 6: 173–186.

Bess FH (2000) The role of generic health-related quality of life measures in establishing audiological rehabilitation outcomes. Ear & Hear 21: 74S–79S.

Bilger RC (1977) Evaluation of subjects presently fitted with implanted auditory prostheses. Ann Otorhinolaryngol 86 (Suppl 38): 1–176.

Bilger RC & Wang MD (1976) Consonant confusions in patients with sensorineural hearing loss. J Speech Hear Res 19: 718–748.

Billings KR & Kenna MA (1999) Causes of pediatric sensorineural hearing loss. Yesterday and today. Arch Otolaryngol Head Neck Surg 125: 517–521.

Binder JR, Frost JA, Hammeke TA, Bellgowan PS, Springer JA, Kaufman JN & Possing ET (2000) Human temporal lobe activation by speech and nonspeech sounds. Cerebral Cortex 10: 512–528.

Blamey P, Arndt P, Bergeron F, Bredberg G, Brimacombe J, Facer G, Larky J, Lindstrom B, Nedzelski J, Peterson A, Shipp D, Staller S & Whitford L (1996) Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants. Audiol & Neuro-Otol 1: 293–306.

Blamey PJ, Dowell RC, Brown AM, Clark GM & Seligman PM (1987) Vowel and consonant recognition of cochlear implant patients using formant-estimating speech processors. J Acoust Soc Am 82: 48–57.

Boothroyd A (1968) Developments in speech audiometry. Sound 2: 3–10.

Boothroyd A, Mulhearn B, Gong J & Ostroff J (1996) Effects of spectral smearing on phoneme and word recognition. J Acoust Soc Am 100: 1807–1818.

Bosman AJ (1992) Review of the speech audiometric tests. In: Kollmeier B (ed) Moderne Verfahren der Sprachaudiometrie. Median-Verlag von Killisch-Horn GmbH, Heidelberg, p 11–34.

Bosman AJ & Smoorenburg GF (1995) Intelligibility of Dutch CVC syllables and sentences for listeners with normal hearing and with three types of hearing impairment. Audiol 34: 260–284.

Brazier JE, Harper R, Jones NMB, O’Cathain A, Thomas KJ, Usherwood T & Westlake L (1992) Validating the SF-36 health survey questionnaire: New outcome measure for primary care. Br Med J 305: 160–164.

British Society of Audiology (1988) British Society of Audiology recommendation. Descriptors for pure tone audiograms, technical note. Br J Audiol 22: 123.

Brooks R, with the EuroQoL Group (1996) EuroQol: The current state of play. Health Policy 37: 53–72.

Brown AM, Dowell RC, Clark GM, Martin LFA & Pyman BC (1985) Selection of patients for multiple-channel cochlear implantation. In: Schindler RA & Merzenich MM (eds) Cochlear implants. Raven Press, New York, p 403–405.

Butts FM, Ruth RR & Schoeny ZG (1987) Nonsense syllable test (NST) results and hearing loss. Ear & Hear 8: 44–48.

Byrne D (1978) Selection of hearing aids for severely deaf children. Br J Audiol 12: 9–22.

Byrne D, Dillon H, Tran K, Arlinger S, Wilbraham K, Cox R, Hagerman B, Hétu R, Kei J, Lui C, Kiessling J, Kotby MN, Nasser NHA, El Kholy WAH, Nakanishi Y, Oyer H, Powell R, Stephens D, Meredith R, Sirimanna T, Tavartkiladze G, Frolenkov G, Westerman S & Ludvigsen C (1994) An international comparison of long-term average speech spectra. J Acoust Soc Am 96: 2108–2120.

Byrne D, Parkinson A & Newall P (1990) Hearing aid gain and frequency response requirements for the severely/profoundly hearing impaired. Ear & Hear 11: 40–49.

Carter R & Hailey D (1999) Economic evaluation of the cochlear implant. Int J Technol Assess Health Care 15: 520–530.

Cheng AK & Niparko JK (1999) Cost-utility of the cochlear implant in adults: A meta-analysis. Arch Otolaryngol Head Neck Surg 125: 1214–1218.

Clark GM, Tong YC, Martin LF & Busby PA (1981) A multiple channel cochlear implant: An evaluation using an open-set word test. Acta Otolaryngol 91: 173–175.

Cochlear Ltd (1999) Nucleus® Technical reference manual Z43470 Issue I. Australia: Author.

Cochlear implants (1988) NIH Cons Stat Online 1988 May 2–4, 7: 1–25.

Cochlear implants in adults and children (1995) NIH Cons Stat 1995 May 15–17, 13: 1–30.

Cohen NL, Waltzman SB & Fisher SG (1993) A prospective randomized study of cochlear implants. New Engl J Med 328: 233–237.

Cohen NL, Waltzman SB, Roland JT, Bromberg B, Cambron N, Gibbs L, Parkinson W & Snead C (1997) Results of speech processor upgrade in a population of veterans affairs cochlear implant recipients. Am J Otol 18: 462–465.

Cole RA & Scott B (1974) Toward a theory of speech perception. Psychol Review 81: 348–374.

Cruickshanks KJ, Wiley TL, Tweed TS, Klein BEK, Klein R, Mares-Perlman JA & Nondahl DM (1998) Prevalence of hearing loss in older adults in Beaver Dam, Wisconsin. The epidemiology of hearing loss study. Am J Epidemiol 148: 879–886.

Crystal D (1990) The Cambridge encyclopedia of language. Cambridge University Press, New York.

Davidson J, Hyde ML & Alberti PW (1989) Epidemiologic patterns in childhood hearing loss: A review. Int J Ped Otorhinolaryngol 17: 239–266.

Davis AC (1989) The prevalence of hearing impairment and reported hearing disability among adults in Great Britain. Int J Epidemiol 18: 911–917.

Davis AC (1995) Hearing in adults. The prevalence and distribution of hearing impairment and reported hearing disability in the MRC Institute of Hearing Research’s National Study of Hearing. Whurr Publishers Ltd., London.

De Filippo CL & Scott B (1978) A method for training and evaluating the reception of ongoing speech. J Acoust Soc Am 63: 1186–1192.

Dèmonet JF, Chollet F, Ramsay S, Cardebat D, Nespoulous JL, Wise R, Rascol A & Frackowiak R (1992) The anatomy of phonological and semantic processing in normal subjects. Brain 115: 1753–1768.

Danhauer JL, Garnett CM & Edgerton BJ (1985) Older persons’ performance on auditory, visual and auditory-visual presentations of the Edgerton and Danhauer Nonsense Syllable Test. Ear & Hear 6: 191–197.

Danhauer JL, Lucks LE & Abdala C (1986) A survey of speech and other auditory perception assessment materials used by cochlear implant centers. J Audit Res 26: 75–87.

Dillier N & Spillman T (1992) Deutsche Version der Minimal Auditory Capability (MAC)-Test-Batterie: Anwendungen bei Hörgeräte- und CI-Trägern mit und ohne Störlärm. In: Kollmeier B (ed) Moderne Verfahren der Sprachaudiometrie. Median-Verlag, Heidelberg, p 238–263.

Dorman MF, Loizou PC & Rainey D (1997a) Simulating the effect of cochlear-implant electrode insertion depth on speech understanding. J Acoust Soc Am 102: 2993–2996.

Dorman MF, Loizou PC & Rainey D (1997b) Speech intelligibility as a function of the number of channels of stimulation for signal processors using sine-wave and noise-band outputs. J Acoust Soc Am 102: 2403–2411.

Dorman MF, Soli S, Smith LM, McCandless G & Parkin J (1990) Acoustic cues for consonant identification by patients who use the Ineraid cochlear implant. J Acoust Soc Am 88: 2074–2079.

Doyle KJ, Mills D, Larky J, Kessler D, Luxford WM & Schindler RA (1995) Consonant perception by users of Nucleus and Clarion multichannel cochlear implants. Am J Otol 15: 676–681.

Dubno JR & Dirks DD (1982) Evaluation of hearing-impaired listeners using a nonsense-syllable test. J Speech Hear Res 25: 135–141.

Dubno JR, Dirks DD & Langhofer LR (1982) Evaluation of hearing-impaired listeners using a nonsense-syllable test: II Syllable recognition and consonant confusion patterns. J Speech Hear Res 25: 141–148.

Duijvestijn JA, Anteunis LJC, Hendriks JJT & Manni JJ (1999) Definition of hearing impairment and its effect on prevalence figures: A survey among senior citizens. Acta Otolaryngol (Stockh) 119: 420–423.

Eddington DK (1980) Speech discrimination in deaf subjects with cochlear implants. J Acoust Soc Am 68: 885–891.

Eddington DK, Dobelle WH, Brackman DE, Mladejovsky MG & Parkin JL (1978) Auditory prosthesis research with multiple channel intracochlear stimulation in man. Ann Otol Rhinol Laryngol 87 (Suppl 53): 5–38.

EN ISO 8253-2 (1998) Acoustics—Audiometric test methods—Part 2: Sound field audiometry with pure tone and narrow-band test signals (ISO 8253-2:1992). European Committee for Standardization, Brussels.

EU Work Group (1996) EU Work Group on Genetics of Hearing Impairment. In: Martini A (ed) European Commission Directorate, Biomedical and Health Research Programme Hereditary Deafness, Epidemiology and Clinical Research (HEAR), Infoletter 2.

Ewertsen HW & Birk-Nielsen H (1973) Social hearing handicap index: Social handicap in relation to hearing impairment. Audiol 12: 180–187.

Fant G (1967) Auditory patterns of speech. In: Wathen-Dunn W (ed) Models for the perception of speech and visual form. MIT Press, Cambridge, Mass., p 111–125.

Feeny D, Furlong W, Boyle M & Torrance GW (1995) Multi-attribute health status classification systems. Health Utilities Index. Pharmacoeconom 7: 490–502.

Fishman KE, Shannon RV & Slattery WH (1997) Speech recognition as a function of the number of electrodes used in the Speak cochlear implant speech processor. J Speech Lang Hear Res 40: 1201–1215.

Fletcher PC, Frith CD, Baker SC, Shallice T, Frackowiak RS & Dolan RJ (1995) The mind’s eye – precuneus activation in memory-related imagery. NeuroImage 2: 195–200.

Frattali CM (1998) Outcomes measurement: Definitions, dimensions and perspectives. In: Frattali CM (ed) Measuring outcomes in speech-language pathology. Thieme Press, New York, p 1–27.

Fourcin AJ, Rosen SM, Moore BCJ, Douek EE, Clark GP, Dodson H & Bannister LH (1979) External electrical stimulation of cochlea: Clinical, psychological, speech-perceptual and histological findings. Br J Audiol 13: 85–107.

Friesen LM, Shannon RV & Slattery WH III (1999) The effect of frequency allocation on phoneme recognition with the Nucleus 22 cochlear implant. Am J Otol 20: 729–734.

Fry DB, Abramson AS, Eimas PD & Liberman AM (1962) The identification and discrimination of synthetic vowels. Lang & Speech 5: 171–189.

Fu Q-J & Shannon RV (1999a) Effect of acoustic dynamic range on phoneme recognition in quiet and noise by cochlear implant users. J Acoust Soc Am 106: L65–L70.

Fu Q-J & Shannon RV (1999b) Effects of electrode configuration and frequency allocation on vowel recognition with the Nucleus-22 cochlear implant. Ear & Hear 20: 332–344.

Fu Q-J & Shannon RV (1999c) Effects of electrode location and spacing on phoneme recognition with the Nucleus-22 cochlear implant. Ear & Hear 20: 321–331.

Fu Q-J & Shannon RV (2000) Effect of stimulation rate on phoneme recognition by Nucleus-22 cochlear implant listeners. J Acoust Soc Am 107: 589–597.

Fujimura O (1967) On the second spectral peak of front vowels: A perceptual study of the role of the second and third formants. Lang & Speech 10: 181–193.

Galvin KL, Mavrias G, Moore A, Cowan RSC, Blamey PJ & Clark GM (1999) A comparison of Tactaid II+ and Tactaid 7 use by adults with a profound hearing impairment. Ear & Hear 20: 471–482.

Gardner MJ & Altman DG (eds) (1989) Statistics with confidence—confidence intervals and statistical guidelines. British Medical Journal, London.

Gatehouse S (1991) The role of non-auditory factors in measured and self-reported disability. Acta Otol (Suppl 476): 249–256.

Gatehouse S (1998) Speech tests as measures of outcome. Scand Audiol 27 (Suppl 49): 54–60.

Gatehouse S (1999) Glasgow Hearing Aid Benefit Profile: Derivation and validation of a client-centered outcome measure for hearing aid services. J Am Acad Audiol 10: 80–103.

Geier L, Fisher L, Barker M & Opie J (1999) The effect of long-term deafness on speech recognition in postlingually deafened adult Clarion® cochlear implant users. Ann Otol Rhinol Laryngol 108: 80–83.

Gelfand SA (1998) Optimizing the reliability of speech recognition scores. J Speech Lang Hear Res 41: 1088–1102.

Gilson BS, Gilson JS, Bergner M, Bobbit RA, Kressel S, Pollard WE & Vessalago M (1975) The Sickness Impact Profile: Development of an outcome measure of health care. Am J Publ Health 65: 1304–1310.

Giolas TG, Owens E, Lamb SH & Schubert ED (1979) Hearing performance inventory. J Speech Hear Disord 44: 169–195.

Giraud A-L, Truy E, Frackowiak RSJ, Grégoire M-C, Pujol J-F & Collet L (2000) Differential recruitment of the speech processing system in healthy subjects and rehabilitated cochlear implant patients. Brain 123: 1391–1402.

Giraud A-L, Price CJ, Graham JM & Frackowiak RSJ (2001) Functional plasticity of language-related brain areas after cochlear implantation. Brain 124: 1307–1316.

Gordon G, Feeny DH & Patrick DL (1993) Measuring Health-related Quality of Life. Ann Intern Med 118: 622–629.

Gorlin RJ, Toriello HV & Cohen MM Jr. (1995) Hereditary hearing loss and its syndromes. Oxford monographs on medical genetics no. 28. Oxford University Press, Oxford.

Greenwood DD (1961) Critical bandwidth and the frequency coordinates of the basilar membrane. J Acoust Soc Am 33: 1344–1356.

Greenwood DD (1990) A cochlear frequency-position function for several species—29 years later. J Acoust Soc Am 87: 2592–2605.

Griffiths JD (1967) Rhyming minimal contrasts: A simplified diagnostic articulation test. J Acoust Soc Am 42: 236–241.

Gstöttner WK, Hamzavi J & Baumgartner WD (1998) Speech discrimination scores of postlingually deaf adults implanted with the Combi 40 cochlear implant. Acta Otolaryngol (Stockh) 118: 640–645.

Guyatt GH, Feeny DH & Patrick DL (1993) Measuring health-related quality of life. Ann Intern Med 118: 622–629.

Hagerman B (1982) Sentences for testing speech intelligibility in noise. Scand Audiol 11: 79–87.

Hagerman B (1999) Minimum Nordic requirements for clinical testing of hearing aids. Nordiska samarbetsorganet för handikappfrågor (Nordic co-operation on disability). Working group for harmonization of requirements on aids for hearing-impaired persons, June 1998. Scand Audiol 28: 102–116.

Häkkinen K (1978) Eräistä suomen kielen äännerakenteen luonteenomaisista piirteistä ja niiden taustasta. Lisensiaatintyö. Suomalaisen ja yleisen kielitieteen osasto. Turun yliopisto, Turku.

Halle M, Hughes GW & Radley J-PA (1957) Acoustic properties of stop consonants. J Acoust Soc Am 29: 107–116.

Hamzavi JS, Baumgartner WD, Adunka O, Franz P & Gstöttner W (2000) Audiological performance with cochlear reimplantation from analogue single-channel implants to digital multi-channel devices. Audiol 39: 305–310.

Harnsberger JD, Svirsky MA, Kaiser AR, Pisoni DB, Wright R & Meyer TA (2001) Perceptual “vowel spaces” of cochlear implant users: Implications for the study of auditory adaptation to spectral shift. J Acoust Soc Am 109: 2135–2145.

Harris JP, Anderson JP & Novak R (1995) An outcomes study of cochlear implants in deaf patients. Arch Otolaryngol Head Neck Surg 121: 398–404.

Harris KS, Hoffman HS, Liberman AM, Delattre PC & Cooper FS (1958) Effect of third-formant transitions on the perception of the voiced stop consonants. J Acoust Soc Am 30: 122–126.

Haspiel GS & Bloomer RH (1961) Maximum auditory perception (MAP) word list. J Speech Hear Disord 26: 156–163.

Hary JM & Massaro DW (1982) Categorical results do not imply categorical perception. Percep & Psychoph 32: 409–418.

Helms J, Müller J, Schön F, Moser L, Arnold W, Janssen T, Ramsden R, von Ilberg C, Kiefer J, Pfenningdorf F, Gstöttner W, Baumgartner W, Ehrenberger K, Skarzynski H, Ribari O, Thumfart W, Stephan K, Mann W, Heinemann M, Zorowka P, Lippert KL, Zenner HP, Bohndord M, Huttenbrink K & Hochmair-Desoyer I (1997) Evaluation of performance with the Combi 40 cochlear implant in adults: A multicentric clinical study. Oto-Rhino-Laryngol 59: 23–35.

Helsinki University Central Hospital. Helsinki sentences, CD.

High WS, Fairbanks G & Glorig A (1964) Scale for self-assessment of hearing handicap. J Speech Hear Disord 29: 215–230.

Hinderink JB, Krabbe PFM & van den Broek P (2000) Development and application of a health-related quality-of-life instrument for adults with cochlear implants: The Nijmegen cochlear implant questionnaire. Otolaryngol Head Neck Surg 123: 756–765.

Hirsh IJ, Davis H, Silverman SR, Reynolds EG, Eldert E & Benson RW (1952) Development of materials for speech audiometry. J Speech Hear Disord 17: 321–337.

Hochmair-Desoyer IJ & Hochmair ES (1980) An eight-channel scala tympani electrode for auditory prostheses. IEEE Trans Biomed Eng BME-27: 44–50.

Hodges JR & Patterson K (1997) Semantic memory disorders. Trends Cogn Sci 1: 68–72.

Holden LK, Skinner MW & Holden TA (1997) Speech recognition with the Mpeak and Speak speech-coding strategies of the Nucleus cochlear implant. Otolaryngol Head Neck Surg 116: 163–167.

Holma T, Laitakari K, Sorri M & Winblad I (1997) New speech-in-noise test in different types of hearing impairment. Acta Otolaryngol (Stockh) (Suppl 529): 71–73.

Hunt SM, McKenna SP, McEwen J, Backett EM, Williams J & Papp E (1980) Quantitative approach to perceived health status: A validation study. J Epidemiol Comm Health 34: 281–286.

Hygge S, Rönnberg J, Larsby B & Arlinger S (1992) Normal-hearing and hearing-impaired subjects’ ability to just follow conversation in competing speech, reversed speech and noise backgrounds. J Speech Hear Res 35: 208–215.

Iivonen A & Laukkanen A-M (1993) Explanations for the qualitative variation of Finnish vowels. In: Iivonen A & Lehtihalmes M (eds) Studies in Logopedics and Phonetics 4. Publications of the Department of Phonetics, University of Helsinki, Series B: Phonetics, Logopedics and Speech Communication 5, p 29–54.

Ingram D (1989) First language acquisition: Method, description, and explanation. Cambridge University Press, New York.

ISO 389 (1991) Acoustics—Standard reference zero for the calibration of pure-tone air conduction audiometers. International Organization for Standardization, Geneva, Switzerland.

ISO 7029 (1984) Acoustics—Threshold of hearing by air conduction as a function of age and sex for otologically normal persons. International Organization for Standardization, Geneva, Switzerland.

ISO 8253-1 (1989) Acoustics—Audiometric test methods—Part 1: Basic pure tone air and bone conduction threshold audiometry. International Organization for Standardization, Geneva.

ISO 8253-3 (1996) Acoustics—Audiometric test methods—Part 3: Speech audiometry. International Organization for Standardization, Geneva.

Jakobson R, Fant CGM & Halle M (eds) (1967) Preliminaries to speech analysis: The distinctive features and their correlates. MIT Press, Cambridge, Mass.

Jauhiainen T (1974) An experimental study of the auditory perception of isolated bi-syllable Finnish words. Academic Dissertation, University of Helsinki, Helsinki.

Jerger S & Jerger J (1979) Quantifying auditory handicap: A new approach. Audiol 18: 225–237.

Jerger J, Speaks C & Tramell JL (1968) A new approach to speech audiometry. J Speech Hear Disord 33: 318–328.

Johnson JA, Ohinmaa A, Murti B, Sintonen H & Coons SJ (2000) Comparison of Finnish and U.S.-based visual analog scale valuations of the EQ-5D measure. Med Dec Making 20: 281–289.

Kalikow DN, Stevens KN & Elliott LL (1977) Development of a test of speech intelligibility in noise using sentence materials with controlled word predictability. J Acoust Soc Am 61: 1337–1351.

Karinen PJ, Sorri MJ, Välimaa TT, Huttunen KH & Löppönen HJ (2001) Cochlear implant patients and quality of life. Scand Audiol (Suppl 52): 48–50.

Karlsmose B, Lauritzen T & Parving A (1999) Prevalence of hearing impairment and subjective hearing problems in a rural Danish population aged 31–50 years. Br J Audiol 33: 395–402.

Karlsson F (1983) Suomen kielen äänne- ja muotorakenne. Werner Söderström, Helsinki.

Kessler DK (1999) The Clarion® Multi-strategy™ cochlear implant. Ann Otol Rhinol Laryngol 108 (Pt 2): 8–16.

Kessler DK, Osberger MJ & Boyle P (1997) Clarion® patient performance: An update on the adult and children’s clinical trials. Scand Audiol (Suppl 47): 45–49.

Kewley-Port D (1982) Measurement of formant transitions in naturally produced stop consonant-vowel syllables. J Acoust Soc Am 72: 379–389.

Kiefer J, Hohl S, Stürzebecher E, Pfennigdorff T & Gstöttner W (2001) Comparison of speech recognition with different speech coding strategies (SPEAK, CIS, and ACE) and their relationship to telemetric measures of compound action potentials in the Nucleus CI 24M cochlear implant system. Audiol 40: 32–42.

Kiefer J, Müller J, Pfenningdorff TH, Schön F, Helms J, von Ilberg C, Baumgartner W, Gstöttner W, Ehrenberger K, Arnold W, Stephan K, Thumfart W & Baur S (1996) Speech understanding in quiet and in noise with the CIS speech coding strategy (Med-El Combi40) compared to the Multipeak and Spectral Peak strategies (Nucleus). Oto-rhino-laryngol 58: 127–135.

Kiefer J, von Ilberg C, Reimer B, Knecht R, Gall V, Diller G, Sturzebercher E, Pfennigdorff T & Spelsberg A (1998) Results of cochlear implantation in patients with severe to profound hearing loss—Implications for patient selection. Audiol 37: 382–395.

Kiefer J, von Ilberg C, Rupprecht V, Hubner-Egner J & Knecht R (2000) Optimized speech understanding with the continuous interleaved sampling speech coding strategy in patients with cochlear implants: Effect of variations in stimulation rate and number of channels. Ann Otol Rhinol Laryngol 109: 1009–1020.

Kirk KI, Tye-Murray N & Hurtig RR (1992) The use of static and dynamic vowel cues by multichannel cochlear implant users. J Acoust Soc Am 91: 3487–3498.

Kiukaanniemi H (1980) Speech discrimination of patients with high frequency hearing loss. Acta Otolaryngol 89: 419–423.

Kiukaanniemi H & Määttä T (1980) Speech discrimination and hearing loss sloping to high frequencies. Scand Audiol 9: 235–242.

Kiukaanniemi H & Mattila P (1980) Long-term speech spectra. A computerized method of measurement and a comparative study of Finnish and English data. Scand Audiol 9: 67–72.

Klatt DH (1979) Speech perception: A model of acoustic-phonetic analysis and lexical access. J Phonet 7: 279–312.

Koivukangas P, Ohinmaa A & Koivukangas J (1995) Nottingham Health Profilen suomalainen versio. Stakes, Sosiaali- ja terveysalan tutkimus- ja kehittämiskeskus. Gummerus, Helsinki.

Kollmeier B & Wesselkamp M (1997) Development and evaluation of a German sentence test for objective and subjective speech intelligibility assessment. J Acoust Soc Am 102: 2412–2421.

Kompis M, Vischer M & Häusler R (1999) Performance of Compressed Analogue (CA) and Continuous Interleaved Sampling (CIS) coding strategies for cochlear implants in quiet and noise. Acta Otolaryngol (Stockh) 119: 659–664.

Kou BS, Shipp DB & Nedzelski JM (1994) Subjective benefits reported by adult Nucleus 22-channel cochlear implant users. J Otolaryngol 23: 8–14.

Krabbe PF, Hinderink JB & van den Broek P (2000) The effect of cochlear implant use in postlingually deaf adults. Int J Technol Ass Health Care 16: 864–873.

Kramer SE, Kapteyn TS, Festen JM & Tobi H (1995) Factors in subjective hearing disability. Audiol 34: 311–320.

Kramer SE, Kapteyn TS, Festen JM & Tobi H (1996) The relationships between self-reported hearing disability and measures of auditory disability. Audiol 35: 277–287.

Krause BJ, Schmidt D, Mottaghy FM, Taylor J, Halsband U, Herzog H, et al. (1998) The precuneus is a major player in a network of distributed brain regions in episodic memory retrieval. NeuroImage 7 (Pt 2): S828.

Kuhl PK (1981) Discrimination of speech by nonhuman animals: Basic auditory sensitivities conducive to the perception of speech-sound categories. J Acoust Soc Am 70: 340–349.

Kunst H, Marres H, van Camp G & Cremers C (1998) Non-syndromic autosomal dominant sensorineural hearing loss: A new field of research. Clin Otolaryngol 23: 9–17.

Königsmark BW & Gorlin RJ (1976) Genetic and metabolic deafness. WB Saunders Co., Philadelphia.

Laitakari K (1996) Speech recognition in noise: Development of a computerized test and preparation of test material. Scand Audiol 25: 29–34.

Lamoré PJJ, Verweij C & Brocaar MP (1985) Investigations of the residual hearing capacity of severely hearing-impaired and profoundly deaf subjects. Audiol 24: 343–361.

Lamoré PJJ, Verweij C & Brocaar MP (1990) Residual hearing capacity of severely hearing-impaired subjects. Acta Otolaryngol (Stockh) Suppl 469: 7–15.

Larsby B & Arlinger S (1994) Speech recognition and just-follow-conversation tasks for normal-hearing and hearing-impaired listeners with different maskers. Audiol 33: 165–176.

Lawson DT, Wilson BS, Zerbi M & Finley CC (1996) Speech processors for auditory prostheses. NIH contract N01-DC-5-2103. Third Quart Progr Rep, Feb 1, 1996, through April 30, 1996.

Lehtonen J (1970) Aspects of quantity in standard Finnish. Academic dissertation. Studia Philologica Jyväskyläensia VI. University of Jyväskylä, Jyväskylä.

Liberman AM (1957) Some results of research on speech perception. J Acoust Soc Am 29: 117–123.

Liberman AM, Delattre PC & Cooper FS (1958) Some cues for the distinction between voiced and voiceless stops in initial position. Lang & Speech 1: 153–167.

Liberman AM, Harris KS, Hoffman HS & Griffith BC (1957) The discrimination of speech sounds within and across phoneme boundaries. J Exp Psychol: Hum Percep & Perform 54: 358–368.

Liberman AM & Mattingly IG (1985) The motor theory of speech perception revised. Cognition 21: 1–36.

Liberman MC (1982) The cochlear frequency map for the cat: Labeling auditory-nerve fibers of known characteristic frequency. J Acoust Soc Am 72: 1441–1449.

Lidén G (1954) Speech audiometry: An experimental and clinical study with Swedish language material. Acta Oto-Laryngol (Stockh) (Suppl 114).

Lindblom B (1990) On the notion of “possible speech sound”. J Phonet 18: 135–152.

Linn MW & Linn BS (1984) Self evaluation of life function (SELF) scale: A short, comprehensive self-report of health for elderly adults. J Gerontol 39: 603–612.

Liu X & Xu L (1994) Nonsyndromic hearing loss. An analysis of audiograms. Ann Otol Rhinol Laryngol 103: 428–433.

Loizou PC (1998) Mimicking the human ear: An overview of signal-processing strategies for converting sound into electrical signals in cochlear implants. IEEE Sign Proc Mag: 101–130.

Loizou PC, Dorman MF & Powell V (1998) The recognition of vowels produced by men, women, boys and girls by cochlear implant patients using a six-channel CIS processor. J Acoust Soc Am 103: 1141–1149.

Loizou PC, Poroy O & Dorman M (2000) The effect of parametric variations of cochlear implant processors on speech understanding. J Acoust Soc Am 108: 790–802.

Lonka E (1993) Aikuinen huonokuuloinen ja huulioluvun oppiminen—huuliolukumenetelmän seurantatutkimus. Licentiate study in Logopedics, University of Helsinki.

Lucks Mendel L & Danhauer JL (1997) Historical review of speech perception assessment. In: Lucks Mendel L & Danhauer JL (eds) Audiologic evaluation and management and speech perception assessment. Singular Publishing Group Inc., San Diego, p 1–5.

Lucks Mendel L, Danhauer JL & Singh S (1999) Singular’s illustrated dictionary of audiology. Singular Publishing Group Inc., San Diego.

Ludvigsen C (1974) Construction and evaluation of an audio-visual test (the Helen-test). In: Birk Nielsen H & Kampp E (eds) Visual and audio-visual perception of speech. Scand Audiol (Suppl 4). Almqvist & Wiksell, Stockholm, p 67–82.

Ludvigsen C (1981) Operation Helen. Auditive, visual and audio-visual perception of speech. Stougaard Jensen, Copenhagen.

Luxford WM (2001) Ad Hoc Subcommittee of the Committee on Hearing and Equilibrium of the American Academy of Otolaryngology-Head and Neck Surgery. Minimum speech test battery for postlingually deafened adult cochlear implant patients. Otolaryngol-Head & Neck Surg 124: 125–126.

Lynch MP, Eilers RE & Pero PJ (1992) Open-set word identification by an adult with profound hearing impairment: Integration of touch, aided hearing, and speechreading. J Speech Hear Res 35: 443–448.

Määttä TK, Sorri MJ, Huttunen KH, Välimaa TT & Muhli AA (2001) On the construction of a Finnish audiometric sentence test. Scand Audiol 29 (Suppl 52): 171–173.

Maillet CJ, Tyler RS & Jordan HN (1995) Change in the quality of life of adult cochlear implant patients. Ann Otol Rhinol Laryngol 104 (Suppl 165): 31–48.

Mäki-Torkko EM, Lindholm PK, Väyrynen MEH, Leisti JT & Sorri MJ (1998) Epidemiology of moderate to profound childhood hearing impairments in Northern Finland. Any changes in ten years? Scand Audiol 27: 95–103.

Manrique M, Ramos A, Morera C, Sainz M, Algaba J & Cervera-Paz FJ (1998) Spanish study group on cochlear implants for persons with marginal benefit from acoustic amplification. Acta Otolaryngol (Stockh) 118: 635–639.

Martin JAM (1982) Aetiological factors relating to childhood deafness in the European community. Audiol 21: 149–158.

Martin FN, Armstrong TW & Champlin CA (1994) A survey of audiological practices in the United States. Am J Audiol 3: 20–26.

Massaro DW (1994) Psychological aspects of speech perception: Implications for research and theory. In: Gernsbacher MA (ed) Handbook of psycholinguistics. Academic Press Inc., San Diego, p 219–263.

Massaro DW & Hary JM (1984) Categorical results, categorical perception and hindsight. Percept Psychophys 35: 586–588.

McDermott HJ, McKay CM & Vandali AE (1992) A new portable sound processor for the University of Melbourne/Nucleus Limited multielectrode cochlear implant. J Acoust Soc Am 91: 3367–3371.

McDermott HJ, Dorkos VP, Dean MR & Ching TYC (1999) Improvements in speech perception with use of the AVR TranSonic frequency-transposing hearing aid. J Speech Lang Hear Res 42: 1323–1335.

McDermott HJ & Dean MR (2000) Speech perception with steeply sloping hearing loss: effects of frequency transposition. Br J Audiol 34: 353–361.

McKay CM & McDermott HJ (1993) Perceptual performance of cochlear implantees using the Spectral Maxima Sound Processor (SMSP) and the Mini Speech Processor (MSP). Ear & Hear 14: 350–367.

Miller GA & Nicely PE (1955) An analysis of perceptual confusions among some English consonants. J Acoust Soc Am 27: 338–352.

Miyoshi S, Shimizu S, Matsushima J-I & Ifukube T (1999) Proposal of a new method for narrowing and moving the stimulated region of cochlear implants: Animal experiment and numerical analysis. IEEE Trans Biomed Eng 46: 451–460.

Moore BCJ & Glasberg BR (1983) Suggested formulae for calculating auditory-filter bandwidths and excitation patterns. J Acoust Soc Am 74: 750–753.

Morrison AW (1993) Acquired sensorineural deafness. In: Ballantyne J, Martin MC & Martin A (eds) Deafness. Whurr Publishers Ltd, London, p 181–213.

Naito Y, Hirano S, Honjo I, Okazawa H, Ishizu K, Takahashi H, Fujiki N, Shiomi Y, Yonekura Y & Konishi J (1997) Sound-induced activation of auditory cortices in cochlear implant users with post- and prelingual deafness demonstrated by positron emission tomography. Acta Otolaryngol (Stockh) 117: 490–496.

Naumanen T, Määttä T & Huttunen K (1997) Finnish phonology and the construction of an auditory test battery for cochlear implant patients. 3rd European Conference on Audiology, 18–21 June 1997, Prague, Czech Republic. Book of Abstracts, p 273.

Nicolosi L, Harryman E & Kresheck J (1983) Terminology of communication disorders: Speech-language-hearing. 2nd ed. Williams & Wilkins, Baltimore.

Nilsson M, Soli SD & Sullivan JA (1994) Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am 95: 1085–1099.

Noble WG & Atherley GRC (1970) The hearing measurement scale: A questionnaire for the assessment of auditory disability. J Audit Res 10: 229–250.

Okazawa H, Naito Y, Yonekura Y, Sadato N, Hirano S, Nishizawa S, Magata Y, Ishizu K, Tamaki N, Honjo I & Konishi J (1996) Cochlear implant efficiency in pre- and postlingually deaf subjects. Brain 119: 1297–1306.

Otto SR, Shannon RV, Brackmann DE, Hitselberger WE, Staller S & Menapace C (1998) The multichannel auditory brain stem implant: performance in twenty patients. Otolaryngol Head Neck Surg 118: 291–303.

Otto SR, Brackmann DE, Hitselberger WE & Shannon RV (2001) Brainstem electronic implants for bilateral anacusis following surgical removal of cerebellopontine angle lesions. Otolaryngol Clin North Am 34: 485–499.

Owens E, Kessler D, Raggio M, Blazek B & Schubert E (1985a) MAC: A progress report. In: Schindler RA & Merzenich MM (eds) Cochlear implants. Raven Press, New York, p 499–503.

Owens E, Kessler D, Witte Raggio M & Schubert E (1985b) Analysis and revision of the Minimal Auditory Capabilities (MAC) battery. Ear & Hear 6: 280–290.

Palmer CS, Niparko JK, Wyatt R, Rothman M & de Lissovoy G (1999) A prospective study of the cost-utility of the multichannel cochlear implant. Arch Otolaryngol Head Neck Surg 125: 1221–1228.

Palva A (1965) Filtered speech audiometry: I Basic studies with Finnish speech towards the creation of a method for the diagnosis of central hearing disorders. Academic dissertation, University of Turku, Turku.


Palva T (1952) Finnish speech audiometry: Methods and clinical applications. Academic dissertation, University of Turku, Turku.

Palva T (1958) On speech audiometry. Spectrographic studies of single sounds and of test words. Acta Otolaryngol (Stockh) 49: 531.

Parkin JL (1990) The percutaneous pedestal in cochlear implantation. Ann Otol Rhinol Laryngol 99: 796–801.

Parkinson AJ, Parkinson WS, Tyler RS, Lowder MW & Gantz BJ (1998) Speech perception performance in experienced cochlear-implant patients receiving the SPEAK processing strategy in the Nucleus Spectra-22 cochlear implant. J Speech Lang Hear Res 41: 1073–1087.

Parving A (1993) Congenital hearing disability – epidemiology and identification. Int J Pediatr Otorhinolaryngol 27: 29–46.

Parving A & Christensen B (1993) Training and employment in hearing-impaired subjects at 20–35 years of age. Scand Audiol 22: 133–139.

Parving A & Newton V (1995) Editorial Guidelines for description of inherited hearing loss. J Audiol Med 4: ii–v.

Parving A & Stephens D (1997) Profound permanent hearing impairment in childhood: Causative factors in two European countries. Acta Otolaryngol (Stockh) 117: 158–160.

Paul PV & Quigley SP (1994) (eds) Language and deafness. Singular Publishing Group Inc., San Diego.

Pelizzone M, Cosendai G & Tinembart J (1999) Within-patient longitudinal speech reception measures with continuous interleaved sampling processors for Ineraid implanted subjects. Ear & Hear 20: 228–237.

Petersen SE, Fox PT, Posner MI, Mintun M & Raichle ME (1989) Positron emission tomographic studies of the processing of single words. J Cogn Neurosci 1: 153–170.

Peterson GE & Barney HL (1952) Control methods used in a study of the vowels. J Acoust Soc Am 24: 175–184.

Pfingst BE (1986) Stimulation and encoding strategies for cochlear prostheses. Otolaryngol Clin North Am 19: 219–235.

Picheny MA, Durlach NI & Braida LD (1985) Speaking clearly for the hard of hearing I: Intelligibility differences between clear and conversational speech. J Speech Hear Res 28: 96–103.

Picheny MA, Durlach NI & Braida LD (1986) Speaking clearly for the hard of hearing II: Acoustic characteristics of clear and conversational speech. J Speech Hear Res 29: 434–446.

Pisoni DB (1973) Auditory and phonetic memory codes in the discrimination of consonants and vowels. Percept Psychophys 13: 253–260.

Pisoni DB & Sawusch JR (1975) Some stages of processing in speech perception. In: Cohen A & Nooteboom SG (eds) Structure and process in speech perception: Proceedings of the Symposium on Dynamic Aspects of Speech Perception, Eindhoven, Netherlands, August 4–6, 1975, p 16–35.

Plomp R & Mimpen AM (1979) Improving the reliability of testing the speech reception threshold for sentences. Audiol 18: 43–52.

Pols LCW, van der Kamp LJTh & Plomp R (1969) Perceptual and physical space of vowel sounds. J Acoust Soc Am 46: 458–467.

Proops DW, Donaldson I, Cooper HR, Thomas J, Burrell SP, Stoddart RL, Moore A & Cheshire M (1999) Outcomes from adult implantation, the first 100 patients. J Laryngol & Otol 113: 5–13.

Quaranta A, Assennato G & Sallustio V (1996) Epidemiology of hearing problems among adults in Italy. Scand Audiol 25 (Suppl 42): 7–11.

Ranta E, Rita H & Kouki J (1989) Biometria: Tilastotiedettä ekologeille. 2. korjattu painos. Yliopistopaino, Helsinki.

Remez RE (1994) A guide to research on the perception of speech. In: Gernsbacher MA (ed) Handbook of psycholinguistics. Academic Press Inc., San Diego, p 145–172.

Rihkanen H (1988) Rehabilitation assessment of postlingually deaf adults using single-channel intracochlear implants or vibrotactile aids: A prospective clinical study. Academic dissertation, University of Helsinki, Helsinki.


Rihkanen H (1990) Subjective benefit of communication aids evaluated by postlingually deaf adults. Br J Audiol 24: 161–166.

Ringdahl A, Eriksson-Mangold M & Andersson G (1998) Psychometric evaluation of the Gothenburg Profile for measurement of experienced hearing disability and handicap: Applications with new hearing aid candidates and experienced hearing aid users. Br J Audiol 32: 375–385.

Ringdahl A & Grimby A (2000) Severe-profound hearing impairment and health-related quality of life among postlingually deafened Swedish adults. Scand Audiol 29: 266–275.

Rosen S, Faulkner A & Wilkinson L (1999) Adaptation by normal listeners to upward spectral shifts of speech: Implications for cochlear implants. J Acoust Soc Am 106: 3629–3636.

Rubinstein JT, Parkinson WS, Tyler RS & Gantz BJ (1999) Residual speech recognition and cochlear implant performance: Effects of implantation criteria. Am J Otol 20: 445–452.

Schindler RA & Kessler DK (1993) Clarion cochlear implant: Phase I investigational results. Am J Otol 14: 263–272.

Schow RL & Nerbonne MA (1982) Communication screening profile: Use with elderly clients. Ear & Hear 3: 135–147.

Seligman P & McDermott H (1995) Architecture of the Spectra 22 speech processor. Ann Otol Rhinol Laryngol 104 (Suppl 166): 139–141.

Shallop JK, Arndt PL & Turnacliff KA (1992) Expanded indications for cochlear implantation: Perceptual results in seven adults with residual hearing. J Speech Lang Path 16: 141–148.

Shannon RV, Zeng F-G & Wygonski J (1998) Speech recognition with altered spectral distribution of envelope cues. J Acoust Soc Am 104: 2467–2476.

Silman S & Silverman CA (1997) Auditory diagnosis. Principles and applications. Singular Publishing Group Inc, San Diego.

Simmons FB, Lusted HS & Myers T (1985) Selection criteria for implant candidates. In: Schindler RA & Merzenich MM (eds) Cochlear implants. Raven Press, New York, p 383–385.

Skinner MW, Fourakis MS, Holden TA, Holden LK & Demorest ME (1996) Identification of speech by cochlear implant recipients with the Multipeak (MPEAK) and Spectral Peak (SPEAK) speech coding strategies I. Vowels. Ear & Hear 17: 182–197.

Skinner MW, Fourakis MS, Holden TA, Holden LK & Demorest ME (1999) Identification of speech by cochlear implant recipients with the Multipeak (MPEAK) and Spectral Peak (SPEAK) speech coding strategies II. Consonants. Ear & Hear 20: 443–460.

Skinner MW, Holden LK & Holden TA (1995) Effect of frequency boundary assignment on speech recognition with the SPEAK speech-coding strategy. Ann Otol Rhinol Laryngol 104 (Suppl 166): 307–311.

Skinner MW, Holden LK, Holden TA, Dowell RC, Seligman PM, Brimacombe JA & Beiter AL (1991) Performance of postlinguistically deaf adults with the Wearable Speech Processor (WSP III) and Mini Speech Processor (MSP) of the Nucleus multi-electrode cochlear implant. Ear & Hear 12: 3–22.

Smith AW (2001) WHO activities for prevention of deafness and hearing impairment in children. Scand Audiol 30 (Suppl 53): 93–100.

Sorri M, Muhli A, Mäki-Torkko E & Huttunen K (2000) Unambiguous systems for describing audiogram configurations. J Audiol Med 9: 160–169.

Sovijärvi A (1938) Die gehaltenen, geflüsterten und gesungenen Vokale und Nasale der finnischen Sprache: Physiologisch-physikalische Lautanalysen. Ann Acad Scient Fenn B. Academic dissertation, University of Helsinki, Helsinki.

Sovijärvi A (1964) Tarkkuusmittauksia suomen yleiskielen /s/:n ja /ʃ/:n äänikirjoista. Suomen logopedis-foniatrisen yhdistyksen julkaisuja 1.

Stephens D & Hétu R (1991) Impairment, disability and handicap in audiology: Towards a consensus. Audiol 30: 185–200.

Stephens D & Kerr P (2000) Auditory disablements: An update. Audiol 39: 322–332.

Stevens KN (1980) Acoustic correlates of some phonetic categories. J Acoust Soc Am 68: 836–842.

Stevens KN (1989) On the quantal nature of speech. J Phonet 17: 3–45.


Stöbich B, Zierhofer CM & Hochmair ES (1999) Influence of automatic gain control parameter settings on speech understanding of cochlear implant users employing the continuous interleaved sampling strategy. Ear & Hear 20: 104–116.

Sulkala H & Karjalainen M (1992) Finnish descriptive grammars. Routledge, London.

Summerfield AQ & Marshall DH (1995) Cochlear implantation in the UK 1990–1994. Report by the MRC Institute of Hearing Research on the evaluation of the national cochlear implant programme. HMSO Publications Centre, London.

Suomi K (1980) Voicing in English and Finnish. A typological comparison with an interlanguage study of the two languages in contact. Publications of the Department of Finnish and General Linguistics of the University of Turku 10. University of Turku, Turku.

Suomi K (1996) Fonologian perusteita. Publications of the Department of Finnish, Saami and Logopedics 4. Oulu University Press, Oulu.

Tannahill JC (1979) The hearing handicap scale as a measure of hearing aid benefit. J Speech Hear Disord 44: 91–99.

Thornton A & Raffin MJM (1978) Speech discrimination scores modelled as a binomial variable. J Speech Hear Res 21: 507–518.

Tomek MS, Brown MR, Mani SR, Ramesh A, Srisailapathy CR, Coucke P, Zbar RI, Bell AM, McGuirt WT, Fukushima K, Willems PJ, Van Camp G & Smith RJ (1998) Localization of a gene for otosclerosis to chromosome 15q25-q26. Hum Mol Gen 7: 285–290.

Tyler RS, Preece JP, Lansing CR & Gantz BJ (1992) Natural vowel perception by patients with the Ineraid cochlear implant. Audiol 31: 228–239.

Tyler RS, Preece JP, Lansing CR, Otto SR & Gantz BJ (1986) Previous experience as a confounding factor in comparing cochlear-implant processing schemes. J Speech Hear Res 29: 282–287.

Uimonen S, Huttunen K, Jounio-Ervasti K & Sorri M (1999) Do we know the real need for hearing rehabilitation at the population level? Hearing impairments in the 5- to 75-year-old cross-sectional Finnish population. Br J Audiol 33: 53–59.

Uimonen S, Mäki-Torkko E, Jounio-Ervasti K & Sorri M (1997) Hearing in 55 to 75 year old people in Northern Finland—a comparison of two classifications of hearing impairment. Acta Otolaryngol (Stockh) Suppl 529: 69–70.

Vainio M (1996) Phoneme frequencies in Finnish text and speech. In: Iivonen A & Klippi A (eds) Studies in Logopedics and Phonetics 5. Publications of the Department of Phonetics, University of Helsinki, Series B. Phonetics, Logopedics and Speech Communication 6, p 181–194.

Van Camp G, Willems PJ & Smith RJ (1997) Nonsyndromic hearing impairment: unparalleled heterogeneity. Am J Hum Gen 60: 758–764.

van Dijk JE, Olphen AF, Langereis MC, Mens LHM, Brokx JPL & Smoorenburg GF (1999) Predictors of cochlear implant performance. Audiol 38: 109–116.

Van Den Bogaert K, Govaerts PJ, Schatteman I, Brown MR, Caethoven G, Offeciers FE, Somers T, Declau F, Coucke P, Van de Heyning P, Smith RJ & Van Camp G (2001) A second gene for otosclerosis, OTSC2, maps to chromosome 7q34-36. Am J Hum Gen 68: 495–500.

Van Wieringen A & Wouters J (1999) Natural vowel and consonant recognition by Laura cochlear implantees. Ear & Hear 20: 89–103.

Vartiainen E, Kemppainen P & Karjalainen S (1997) Prevalence and etiology of bilateral sensorineural hearing impairment in a Finnish childhood population. Int J Ped Otorhinolaryngol 41: 175–185.

Ventry IM & Weinstein BE (1982) The hearing handicap inventory for the elderly: A new tool. Ear & Hear 3: 128–134.

Waldstein RS & Boothroyd A (1995) Comparison of two multichannel tactile devices as supplements to speechreading in a postlingually deafened adult. Ear & Hear 16: 198–208.

Waltzman SB, Fisher SG, Niparko JK & Cohen NL (1995) Predictors of postoperative performance with cochlear implants. Ann Otol Rhinol Laryngol 104 (Suppl 165): 15–18.

Ware JE & Sherbourne CD (1992) The MOS 36-item short form health survey (SF-36). Med Care 30: 473–483.

Weisleder P & Hodgson W (1989) Evaluation of four Spanish word-recognition-ability lists. Ear & Hear 10: 387–393.


Whitford LA, Seligman PM, Everingham CE, Antognelli T, Skok MC, Hollow RD, Plant KL, Gerin ES, Staller SJ, McDermott HJ, Gibson WR & Clark GM (1995) Evaluation of the Nucleus Spectra 22 processor and new speech processing strategy (SPEAK) in postlinguistically deafened adults. Acta Otolaryngol (Stockh) 115: 629–637.

WHO (1980) International classification of impairments, disabilities and handicaps. World Health Organization, Geneva, Switzerland.

WHO (1991) Grades of hearing impairment. Hearing Network News 1, September.

WHO (2001) International classification of functioning, disability and health. World Health Organization, Geneva, Switzerland.

Wiik K (1965) Finnish and English vowels. A comparison with special reference to the learning problems met by native speakers of Finnish learning English. Academic dissertation. Annales Universitatis Turkuensis, Series B, 94. University of Turku, Turku.

Wiik K (1966) Finnish and English laterals. A comparison with special reference to the learning problems met by native speakers of Finnish learning English. Publications of the Phonetics Department of the University of Turku No. 1. University of Turku, Turku.

Wiley TL, Stoppenbach DT, Feldhake LJ, Moss KA & Thordardottir ET (1995) Audiologic practices: What is popular versus what is supported by evidence. Am J Audiol 4: 26–34.

Wilson B (1993) Signal processing. In: Tyler R (ed) Cochlear implants: Audiological foundations. Singular Publishing Group Inc., San Diego, p 35–85.

Wilson BS (1997) The future of cochlear implants. Br J Audiol 31: 205–225.

Wilson BS, Finley CC, Lawson DT, Wolford RD, Eddington DK & Rabinowitz WM (1991) Better speech recognition with cochlear implants. Nature 352: 236–238.

Wilson BS, Lawson DT & Zerbi M (1995) Advances in coding strategies for cochlear implants. Adv Otolaryngol Head Neck Surg 9: 105–129.

Wilson RH, Coley KE, Haenel JL & Browning KM (1976) Northwestern University auditory test no. 6: Normative and comparative intelligibility functions. J Am Audiol Soc 1: 221–228.

Zeng F-G & Galvin JJ III (1999) Amplitude mapping and phoneme recognition in cochlear implant listeners. Ear & Hear 20: 60–74.

Zierhofer CM, Hochmair-Desoyer IJ & Hochmair ES (1995) Electronic design of a cochlear implant for multichannel high-rate pulsatile stimulation strategies. IEEE Trans Rehab Eng 3: 112–116.

Zierhofer CM, Hochmair-Desoyer IJ & Hochmair ES (1997) The advanced Combi 40+ cochlear implant. Am J Otol 18: S37–38.

Ziese M, Stützel A, von Specht H, Begall K, Freigang B, Sroka S & Nopp P (2000) Speech understanding with the CIS and the n-of-m strategy in the Med-El Combi 40+ system. Oto-Rhino-Laryngol 62: 321–329.

Zimmerman-Phillips S & Murad C (1999) Programming features of the Clarion® multi-strategy™ cochlear implant. Ann Otol Rhinol Laryngol 108: 17–21.

Zubick HH, Irizarry LM, Rosen L, Feudo P Jr, Kelly JH & Strome M (1983) Development of speech-audiometric materials for native Spanish-speaking adults. Audiol 22: 88–102.

Zwicker E (1961) Subdivision of the audible frequency range into critical bands (Frequenzgruppen). J Acoust Soc Am 33: 248.

Zwicker E & Terhardt E (1980) Analytical expressions for critical-band rate and critical bandwidth as a function of frequency. J Acoust Soc Am 68: 1523–1525.


Appendices 1–4


Appendix 1 Questionnaire used at the departments of otorhinolaryngology

QUESTIONNAIRE

Items 1–4: Information on the subject

1. Age of the subject at the time of implantation: ________________________

2. Sex: ________________________

3. Aetiology of the HI:_______________________________________________________________________

4. Duration of profound HI before implantation:_______________________________________________________________________

Items 5–8: Data on implantation

5. Date of implantation: ____________________________________________________

6. Date of switch-on of the device:________________________________________

7. Implant: ______________________________________________________________

8. Processor and speech coding strategy: _______________________________________

_____________________________________________________________________

-----------------------------------------------------------------------------------------------------------

Items 9–21: Data on audiometry and speech audiometry before implantation

Please copy the results of a) the best audiometry and speech audiometry before implantation, and b) the last audiometry and speech audiometry before implantation. If the information cannot be found on the audiometry forms, please fill in the following items of the questionnaire.

A. Best audiometry and speech audiometry before implantation

9. Pure tone thresholds without a HA in the best audiometry before implantation:
_______________________________________________________________________


10. Maximum speech recognition score without a HA in the best speech audiometry before implantation:
_______________________________________________________________________

11. Pure tone thresholds in the sound field with a HA/HAs in the best audiogram before implantation:
________________________________________________________________________

12. Maximum speech recognition score with a HA/HAs in the best speech audiometry before implantation:
_______________________________________________________________________

13. Word recognition test used:___________________________________________

14. Date of the best audiogram available before implantation: ______________________

B. Last audiometry and speech audiometry before implantation

15. Pure tone thresholds without a HA in the latest audiometry before implantation:
______________________________________________________________________

16. Maximum speech recognition score without a HA in the last speech audiometry before implantation:
_______________________________________________________________________

17. Indicate whether the subject continued to use a HA/HAs until implantation:
______________________________________________________________________

18. Pure tone thresholds in the sound field with a HA/HAs in the last audiogram before implantation (if a HA was used):
_________________________________________________________________

19. Maximum speech recognition score with a HA/HAs in the last speech audiometry before implantation:
________________________________________________________________________

20. Word recognition test used:___________________________________________

21. Date of the last audiogram available before implantation: _____________________

------------------------------------------------------------------------------------------------------------


Items 22–28: Data on audiometry and speech audiometry after implantation

Please copy the results of the audiometry and speech audiometry measurements after implantation. If the following information cannot be found on the audiometry forms, please fill in the following items of the questionnaire.

22. Pure tone thresholds in the sound field with the implant in the latest audiogram since implantation:
_______________________________________________________________________

23. Maximum speech recognition score with the implant in the latest speech audiometry since implantation:
_______________________________________________________________________

24. Time since switching on the device (years and months): ________________

25. Speech coding strategy used:_________________________________________

26. Number of active channels in use: _________________________________________

27. Word recognition test used:___________________________________________

28. Name of person filling in the questionnaire: ______________________________

Thank you!

Enclosure Categories of auditory performance


THE CATEGORIES OF AUDITORY PERFORMANCE

Please, fill in the subject's maximal category of performance before implantation (with HA, if used before implantation). Please, also fill in the maximal category of performance the subject has reached with an implant, and also the time during which the category of performance in question has been reached (e.g., if the subject "recognizes some speech without speechreading" six months after the switch-on of the device, fill in a cross in the column in question).

Before implant | Category of performance | 1 mo. | 3 mos. | 6 mos. | 12 mos. | 24 mos. | 36 mos.

Use of telephone with a known speaker.
Understanding of conversation without speechreading.
Understanding of common phrases without speechreading.
Recognition of some speech without speechreading.
Identification of environmental sounds.
Response to simple words (e.g., go).
Awareness of environmental sounds.
No awareness of environmental sounds.

Time since the switch-on (years and months): _______________________________

Person filling in the categories: ____________________________________ Thank you!


Appendix 2 Bisyllabic minimal pairs used to assess the discrimination of phoneme quantity

Table 1. Test words with variation in vowel, stop, liquid and fricative quantities.

Vowel quantity:
<kato / kaato> [kɑto / kɑːto] (look / fall)
<raja / raaja> [rɑjɑ / rɑːjɑ] (border, limit / limb)
<sima / siima> [simɑ / siːmɑ] (mead / thread)
<tina / Tiina> [tinɑ / tiːnɑ] (tin / woman's name)
<puro / puuro> [puro / puːro] (stream / porridge)
<tuli / tuuli> [tuli / tuːli] (fire / wind)

Stop quantity:
<tuki / tukki> [tuki / tukːi] (support / log)
<pako / pakko> [pɑko / pɑkːo] (escape / compulsion)

Liquid quantity:
<käry / kärry> [kæry / kærːy] (smell of burning / trolley)
<hera / herra> [herɑ / herːɑ] (whey / gentleman)

Fricative quantity:
<kisa / kissa> [kisɑ / kisːɑ] (competition / cat)
<kasa / kassa> [kɑsɑ / kɑsːɑ] (pile / cash desk)


Appendix 3 The sentence test

Table 2. Sentences used to assess sentence recognition.

List 2
Mikä kuukausi tulee helmikuun jälkeen? (What month comes after February?)
Mikä luku edeltää lukua neljäkymmentäseitsemän? (What figure precedes the figure forty-seven?)
Minkä väristä on pumpuli? (What colour is cotton wool?)
Onko Liisa tytön vai pojan nimi? (Is Liisa the name of a girl or a boy?)
Mihin saksia käytetään? (What are scissors used for?)
Paljonko on kolme plus seitsemän? (What is three plus seven?)
Mikä on uuden vastakohta? (What is the opposite of new?)
Mikä luku tulee luvun kaksikymmentäkahdeksan jälkeen? (What figure follows the figure twenty-eight?)
Onko Elisa tytön vai pojan nimi? (Is Elisa the name of a girl or a boy?)

List 4
Mikä viikonpäivä on ylihuomenna? (What day is it the day after tomorrow?)
Mikä luku edeltää lukua kaksikymmentäviisi? (What figure precedes the figure twenty-five?)
Minkä värinen on paloauto? (What colour is a fire engine?)
Onko Niilo tytön vai pojan nimi? (Is Niilo the name of a girl or a boy?)
Mihin veistä käytetään? (What is a knife used for?)
Paljonko on kaksi plus kahdeksan? (What is two plus eight?)
Mikä on vihaisen vastakohta? (What is the opposite of angry?)
Mikä luku tulee luvun kaksikymmentäkaksi jälkeen? (What figure follows the figure twenty-two?)
Onko Ella tytön vai pojan nimi? (Is Ella the name of a girl or a boy?)

List 5
Mikä kuukausi tulee joulukuun jälkeen? (What month comes after December?)
Mikä luku edeltää lukua kolmekymmentäneljä? (What figure precedes the figure thirty-four?)
Minkä väristä on nurmikko? (What colour is grass?)
Onko Jyrki tytön vai pojan nimi? (Is Jyrki the name of a girl or a boy?)
Mihin kuppia käytetään? (What is a cup used for?)
Paljonko on kuusi plus kahdeksan? (What is six plus eight?)
Mikä on pienen vastakohta? (What is the opposite of small?)
Mikä luku tulee luvun kaksikymmentäseitsemän jälkeen? (What figure follows the figure twenty-seven?)
Onko Meeri tytön vai pojan nimi? (Is Meeri the name of a girl or a boy?)

List 6
Mikä kuukausi on ennen tammikuuta? (What month comes before January?)
Mikä luku edeltää lukua kolmekymmentäviisi? (What figure precedes the figure thirty-five?)
Minkä värinen on lipputanko? (What colour is a flagpole?)
Onko Marketta tytön vai pojan nimi? (Is Marketta the name of a girl or a boy?)
Mihin kynää käytetään? (What is a pencil used for?)
Paljonko on kolme plus kuusi? (What is three plus six?)
Mikä on kesän vastakohta? (What is the opposite of summer?)
Mikä luku tulee luvun kolmekymmentäkolme jälkeen? (What figure follows the figure thirty-three?)
Onko Maarit tytön vai pojan nimi? (Is Maarit the name of a girl or a boy?)

Continued on next page.


Table 2. Continued.

List 2
Mikä luku edeltää lukua kaksikymmentäkahdeksan? (What figure precedes the figure twenty-eight?)
Paljonko on kaksikymmentä jaettuna kahdella? (What is twenty divided by two?)
Mitä kieltä puhutaan Ranskassa? (What language is spoken in France?)
Mikä luku tulee luvun neljäkymmentäseitsemän jälkeen? (What figure follows the figure forty-seven?)
Onko Erkki tytön vai pojan nimi? (Is Erkki the name of a girl or a boy?)
Kumpi eläin on suurempi, koira vai hevonen? (Which animal is bigger, a dog or a horse?)
Mikä kuukausi tulee ennen joulukuuta? (What month precedes December?)

List 4
Mikä luku edeltää lukua kaksitoista? (What figure precedes the figure twelve?)
Paljonko on kuusitoista jaettuna kahdella? (What is sixteen divided by two?)
Mitä kieltä puhutaan Ruotsissa? (What language is spoken in Sweden?)
Mikä luku edeltää lukua kolmekymmentäkaksi? (What figure precedes the figure thirty-two?)
Onko Marjo tytön vai pojan nimi? (Is Marjo the name of a girl or a boy?)
Kumpi eläin on suurempi, kissa vai leijona? (Which animal is bigger, a cat or a lion?)
Mikä viikonpäivä oli eilen? (What day was it yesterday?)

List 5
Mikä luku edeltää lukua seitsemäntoista? (What figure precedes the figure seventeen?)
Paljonko on kuusi jaettuna kahdella? (What is six divided by two?)
Mitä kieltä puhutaan Hollannissa? (What language is spoken in the Netherlands?)
Mikä luku edeltää lukua kaksikymmentäkolme? (What figure precedes the figure twenty-three?)
Onko Risto tytön vai pojan nimi? (Is Risto the name of a girl or a boy?)
Kumpi eläin on suurempi, sika vai lehmä? (Which animal is bigger, a pig or a cow?)
Mikä viikonpäivä on tänään? (What day is it today?)

List 6
Mikä luku edeltää lukua kahdeksantoista? (What figure precedes the figure eighteen?)
Paljonko on kahdeksan jaettuna kahdella? (What is eight divided by two?)
Mitä kieltä puhutaan Suomessa? (What language is spoken in Finland?)
Mikä luku edeltää lukua kaksikymmentäyksi? (What figure precedes the figure twenty-one?)
Onko Niina tytön vai pojan nimi? (Is Niina the name of a girl or a boy?)
Kumpi eläin on suurempi, muurahainen vai elefantti? (Which animal is bigger, an ant or an elephant?)
Mikä viikonpäivä on ylihuomenna? (What day is it the day after tomorrow?)

Continued on next page.


Table 2. Continued.
List 2 List 4 List 5 List 6
Mikä on pyöreän vastakohta? (What is the opposite of round?)
Mikä on märän vastakohta? (What is the opposite of wet?)
Mikä on lämpimän vastakohta? (What is the opposite of warm?)
Mikä on vaalean vastakohta? (What is the opposite of light?)
Minkä väristä on liitu? (What colour is chalk?)
Minkä värinen on tomaatti? (What colour is a tomato?)
Minkä värinen on sitruuna? (What colour is a lemon?)
Minkä väristä on voi? (What colour is butter?)
Onko Ossi tytön vai pojan nimi? (Is Ossi the name of a girl or a boy?)
Onko Noora tytön vai pojan nimi? (Is Noora the name of a girl or a boy?)
Onko Pentti tytön vai pojan nimi? (Is Pentti the name of a girl or a boy?)
Onko Tiina tytön vai pojan nimi? (Is Tiina the name of a girl or a boy?)
Mihin lyhtyä käytetään? (What is a lantern used for?)
Mihin sahaa käytetään? (What is a saw used for?)
Mihin pölynimuria käytetään? (What is a vacuum cleaner used for?)
Mihin kassia käytetään? (What is a bag used for?)
Paljonko on kuusi plus yhdeksän? (What is six plus nine?)
Paljonko on viisi plus kuusi? (What is five plus six?)
Paljonko on viisi plus kaksi? (What is five plus two?)
Paljonko on viisi plus seitsemän? (What is five plus seven?)
Mikä on kuivan vastakohta? (What is the opposite of dry?)
Mikä on avoimen vastakohta? (What is the opposite of open?)
Mikä on kapean vastakohta? (What is the opposite of narrow?)
Mikä on pimeän vastakohta? (What is the opposite of dark?)
Mikä luku tulee luvun kuusikymmentäkahdeksan jälkeen? (What figure follows the figure sixty-eight?)
Mikä luku tulee luvun kahdeksankymmentäseitsemän jälkeen? (What figure follows the figure eighty-seven?)
Mikä luku tulee luvun neljäkymmentäkuusi jälkeen? (What figure follows the figure forty-six?)
Mikä luku tulee luvun neljäkymmentäkaksi jälkeen? (What figure follows the figure forty-two?)
Onko Hannu tytön vai pojan nimi? (Is Hannu the name of a girl or a boy?)
Onko Pekka tytön vai pojan nimi? (Is Pekka the name of a girl or a boy?)
Onko Kirsti tytön vai pojan nimi? (Is Kirsti the name of a girl or a boy?)
Onko Tommi tytön vai pojan nimi? (Is Tommi the name of a girl or a boy?)
Kuinka monta pyörää on autossa? (How many wheels does a car have?)
Kuinka monta kuukautta on vuodessa? (How many months are there in a year?)
Kuinka monta kulmaa on kolmiossa? (How many corners are there in a triangle?)
Kuinka monta väriä on Suomen lipussa? (How many colours are there in the Finnish flag?)
Continued on next page.


Table 2. Continued.
List 7 List 8 List 9 List 10
Mikä viikonpäivä on tänään? (What day is it today?)
Mikä viikonpäivä on ylihuomenna? (What day is it the day after tomorrow?)
Mikä viikonpäivä oli eilen? (What day was it yesterday?)
Mikä viikonpäivä oli toissapäivänä? (What day was it the day before yesterday?)
Mikä luku edeltää lukua neljäkymmentäkaksi? (What figure precedes the figure forty-two?)
Mikä luku edeltää lukua neljäkymmentäkuusi? (What figure precedes the figure forty-six?)
Mikä luku edeltää lukua kahdeksankymmentäseitsemän? (What figure precedes the figure eighty-seven?)
Mikä luku edeltää lukua kahdeksankymmentäkolme? (What figure precedes the figure eighty-three?)
Minkä värinen on jääkarhu? (What colour is a polar bear?)
Minkä väristä on kahvi? (What colour is coffee?)
Minkä värinen on mansikka? (What colour is a strawberry?)
Minkä väristä on maito? (What colour is milk?)
Onko Valtteri tytön vai pojan nimi? (Is Valtteri the name of a girl or a boy?)
Onko Anneli tytön vai pojan nimi? (Is Anneli the name of a girl or a boy?)
Onko Marja tytön vai pojan nimi? (Is Marja the name of a girl or a boy?)
Onko Anita tytön vai pojan nimi? (Is Anita the name of a girl or a boy?)
Mihin kitaraa käytetään? (What is a guitar used for?)
Mihin silmäneulaa käytetään? (What is a needle used for?)
Mihin kauhaa käytetään? (What is a scoop used for?)
Mihin tuolia käytetään? (What is a chair used for?)
Paljonko on kuusi plus kahdeksan? (What is six plus eight?)
Paljonko on seitsemän plus kahdeksan? (What is seven plus eight?)
Paljonko on kaksi plus yhdeksän? (What is two plus nine?)
Paljonko on kaksi plus kolme? (What is two plus three?)
Mikä on vanhan vastakohta? (What is the opposite of old?)
Mikä on pitkän vastakohta? (What is the opposite of long?)
Mikä on pahan vastakohta? (What is the opposite of bad?)
Mikä on happaman vastakohta? (What is the opposite of sour?)
Mikä luku tulee luvun kaksikymmentäyksi jälkeen? (What figure follows the figure twenty-one?)
Mikä luku tulee luvun kolmekymmentäkaksi jälkeen? (What figure follows the figure thirty-two?)
Mikä luku tulee luvun kaksikymmentäkolme jälkeen? (What figure follows the figure twenty-three?)
Mikä luku tulee luvun kolmekymmentäyksi jälkeen? (What figure follows the figure thirty-one?)
Onko Lotta tytön vai pojan nimi? (Is Lotta the name of a girl or a boy?)
Onko Riina tytön vai pojan nimi? (Is Riina the name of a girl or a boy?)
Onko Inkeri tytön vai pojan nimi? (Is Inkeri the name of a girl or a boy?)
Onko Eeva tytön vai pojan nimi? (Is Eeva the name of a girl or a boy?)
Continued on next page.


Table 2. Continued.
List 7 List 8 List 9 List 10
Mikä luku edeltää lukua yhdeksäntoista? (What figure precedes the figure nineteen?)
Mikä luku tulee luvun kaksitoista jälkeen? (What figure follows the figure twelve?)
Mikä luku tulee luvun kahdeksantoista jälkeen? (What figure follows the figure eighteen?)
Mikä luku edeltää lukua yksitoista? (What figure precedes the figure eleven?)
Paljonko on kymmenen jaettuna kahdella? (What is ten divided by two?)
Paljonko on kuusi jaettuna kahdella? (What is six divided by two?)
Paljonko on neljätoista jaettuna kahdella? (What is fourteen divided by two?)
Paljonko on kaksikymmentä jaettuna kahdella? (What is twenty divided by two?)
Mitä kieltä puhutaan Kiinassa? (What language is spoken in China?)
Mitä kieltä puhutaan Englannissa? (What language is spoken in England?)
Mitä kieltä puhutaan Puolassa? (What language is spoken in Poland?)
Mitä kieltä puhutaan Venäjällä? (What language is spoken in Russia?)
Mikä luku edeltää lukua kaksikymmentäseitsemän? (What figure precedes the figure twenty-seven?)
Mikä luku edeltää lukua kaksikymmentäkaksi? (What figure precedes the figure twenty-two?)
Mikä luku edeltää lukua kolmekymmentäkolme? (What figure precedes the figure thirty-three?)
Mikä luku edeltää lukua kolmekymmentäkuusi? (What figure precedes the figure thirty-six?)
Onko Tuomas tytön vai pojan nimi? (Is Tuomas the name of a girl or a boy?)
Onko Jenni tytön vai pojan nimi? (Is Jenni the name of a girl or a boy?)
Onko Veijo tytön vai pojan nimi? (Is Veijo the name of a girl or a boy?)
Onko Sonja tytön vai pojan nimi? (Is Sonja the name of a girl or a boy?)
Kumpi eläin on suurempi, joutsen vai tiainen? (Which animal is bigger, a swan or a tit?)
Kumpi eläin on suurempi, mato vai käärme? (Which animal is bigger, a worm or a snake?)
Kumpi eläin on suurempi, leijona vai elefantti? (Which animal is bigger, a lion or an elephant?)
Kumpi eläin on suurempi, varpunen vai papukaija? (Which animal is bigger, a sparrow or a parrot?)
Mikä kuukausi tulee tammikuun jälkeen? (What month follows January?)
Mikä kuukausi tulee joulukuun jälkeen? (What month follows December?)
Mikä viikonpäivä on tänään? (What day is it today?)
Mikä viikonpäivä on huomenna? (What day is it tomorrow?)
Mikä on lukitun vastakohta? (What is the opposite of locked?)
Mikä on ahkeran vastakohta? (What is the opposite of diligent?)
Mikä on siistin vastakohta? (What is the opposite of tidy?)
Mikä on kylmän vastakohta? (What is the opposite of cold?)
Continued on next page.


Table 2. Continued.
List 7 List 8 List 9 List 10
Minkä väristä on sokeri? (What colour is sugar?)
Minkä väristä on lumi? (What colour is snow?)
Minkä värinen on joutsen? (What colour is a swan?)
Minkä väristä on suklaa? (What colour is chocolate?)
Onko Tuula tytön vai pojan nimi? (Is Tuula the name of a girl or a boy?)
Onko Kerttu tytön vai pojan nimi? (Is Kerttu the name of a girl or a boy?)
Onko Vuokko tytön vai pojan nimi? (Is Vuokko the name of a girl or a boy?)
Onko Päivi tytön vai pojan nimi? (Is Päivi the name of a girl or a boy?)
Mihin lusikkaa käytetään? (What is a spoon used for?)
Mihin kivääriä käytetään? (What is a rifle used for?)
Mihin hammasharjaa käytetään? (What is a toothbrush used for?)
Mihin pensseliä käytetään? (What is a paintbrush used for?)
Paljonko on viisi plus kolme? (What is five plus three?)
Paljonko on kaksi plus neljä? (What is two plus four?)
Paljonko on kolme plus neljä? (What is three plus four?)
Paljonko on kuusi plus neljä? (What is six plus four?)
Mikä on suuren vastakohta? (What is the opposite of big?)
Mikä on huolimattoman vastakohta? (What is the opposite of careless?)
Mikä on painavan vastakohta? (What is the opposite of heavy?)
Mikä on korkean vastakohta? (What is the opposite of high?)
Mikä luku tulee luvun kolmekymmentäviisi jälkeen? (What figure follows the figure thirty-five?)
Mikä luku tulee luvun kolmekymmentäneljä jälkeen? (What figure follows the figure thirty-four?)
Mikä luku tulee luvun kaksikymmentäviisi jälkeen? (What figure follows the figure twenty-five?)
Mikä luku tulee luvun kaksikymmentäneljä jälkeen? (What figure follows the figure twenty-four?)
Onko Paavo tytön vai pojan nimi? (Is Paavo the name of a girl or a boy?)
Onko Paula tytön vai pojan nimi? (Is Paula the name of a girl or a boy?)
Onko Heikki tytön vai pojan nimi? (Is Heikki the name of a girl or a boy?)
Onko Turo tytön vai pojan nimi? (Is Turo the name of a girl or a boy?)
Kuinka monta sormea on ihmisellä? (How many fingers does a human being have?)
Kuinka monta sekuntia on minuutissa? (How many seconds are there in a minute?)
Kuinka monta päivää on viikossa? (How many days are there in a week?)
Kuinka monta puolta on lantissa? (How many sides does a coin have?)


Table 3. Syllables used to assess vowel and consonant recognition.
Nonsense syllable test
1. [k] 2. [t] 3. [nu] 4. [nø] 5. [jo] 6. [l] 7. [in] 8. [ko] 9. [ki] 10. [k]
11. [no] 12. [tu] 13. [ip] 14. [jæ] 15. [j] 16. [ke] 17. [t] 18. [ir] 19. [n] 20. [ki]
21. [ku] 22. [s] 23. [ry] 24. [ih] 25. [nu] 26. [r] 27. [kø] 28. [t] 29. [ro] 30. [ni]
31. [ik] 32. [jø] 33. [un] 34. [k] 35. [næ] 36. [p] 37. [uk] 38. [mu] 39. [je] 40. [s]
41. [j] 42. [li] 43. [r] 44. [tu] 45. [jy] 46. [l] 47. [p] 48. [i] 49. [hu] 50. [ji]
51. [m] 52. [it] 53. [ku] 54. [pu] 55. [m] 56. [tø] 57. [si] 58. [ri] 59. [um] 60. [h]
61. [us] 62. [te] 63. [n] 64. [ti] 65. [kæ] 66. [u] 67. [ru] 68. [tæ] 69. [il] 70. [ru]
71. [su] 72. [ut] 73. [r] 74. [ni] 75. [] 76. [ty] 77. [uh] 78. [is] 79. [tø] 80. [ji]
81. [ul] 82. [ky] 83. [ju] 84. [mi] 85. [n] 86. [pi] 87. [re] 88. [im] 89. [ti] 90. [lu]
91. [ri] 92. [ur] 93. [ju] 94. [h] 95. [ny] 96. [up] 97. [t] 98. [ræ] 99. [hi] 100. [ne]

Appendix 4 Nonsense syllable test