Assessing Interactive Oral Skills in EFL Contexts
Jason Beale

4.2 Bias for best

Testing language skills requires getting a representative sample of optimum performance. To 'bias for best' means to elicit a candidate's best performance on a test. A poorly designed or delivered test will not provide consistent results. This may be because confusing instructions favour some students over others, or because role play situations require specific knowledge or vocabulary that only some of the candidates possess. More generally, distracting or stressful conditions of assessment will clearly disadvantage some students over others in a way that is unrelated to language ability.

4.3 Marking

Applying descriptive assessment criteria to a candidate's oral performance requires making subjective (or impressionistic) judgements. This contrasts with objective marking, in which a quantitative marking scheme is mechanically applied to structured tasks such as multiple choice and sentence completion exercises.

A descriptive scale of oral performance, with clearly defined levels, can be combined with quantitative grades. Subjective judgements matching performance to such descriptors will then generate a quantitative grade score useful for ranking candidates. Analytic rating scales that describe specific language skills (see 2.5 above) can be graded differently to emphasize the relative importance of different skills. This is called 'weighting' the assessment criteria, and it needs to be based on a clear understanding of the stages of language development (construct validity) and the purpose of the assessment instrument (systemic validity). A graded analytic scale can then be combined with a global scale, for example as shown by McClean (1995) in her description of a negotiated grading scheme at a Japanese university.

Grading is very much dependent on the purpose of the test and the way this is reflected in the criteria.
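The arithmetic of weighting can be sketched as follows. This is a minimal illustration only: the criterion names, the weights, and the five-band scale are assumptions invented for the example, not details of McClean's negotiated scheme or of any particular rating instrument.

```python
# Illustrative weighting of analytic rating criteria.
# Criteria, weights, and the 0-5 band scale are hypothetical examples.

WEIGHTS = {            # relative importance of each analytic criterion
    "fluency": 0.3,    # weights sum to 1.0
    "accuracy": 0.2,
    "vocabulary": 0.2,
    "interaction": 0.3,
}

TOP_BAND = 5  # highest level on the (assumed) descriptive scale

def weighted_score(ratings):
    """Combine per-criterion band ratings (0-TOP_BAND) into a single
    quantitative score on a 0-100 scale, applying the weights."""
    raw = sum(WEIGHTS[criterion] * band for criterion, band in ratings.items())
    return round(raw / TOP_BAND * 100, 1)

# One candidate's analytic ratings, matched to descriptors by the rater:
candidate = {"fluency": 4, "accuracy": 3, "vocabulary": 4, "interaction": 5}
print(weighted_score(candidate))  # 82.0 for these ratings
```

Raising the weight on, say, interaction relative to accuracy shifts the final score toward interactive skill, which is how weighting encodes a view of what matters at a given stage of language development.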
An achievement test that is criterion referenced will judge candidates individually on their achievement of learning outcomes. Score distribution depends solely on learning success, and it is theoretically possible for all candidates to receive 100%. On the other hand, a test for selection purposes will need to separate candidates, making fine distinctions between their performances. This kind of comparative assessment is called norm referenced, and the scores are ideally distributed on a bell-shaped curve, so that most candidates are placed at the centre of the distribution.

Conclusion

An effective test of interactive oral skills is not a haphazard selection of tasks chosen at random. Instead, each assessment situation presents a set of practical demands that need to be specifically addressed. The principles of validity, reliability, practicality and bias for best provide basic guidelines for evaluating the effectiveness of a test instrument.

A theoretical model of oral skills is also necessary to structure what is fundamentally fleeting and changeable. At the same time, it needs to be remembered that human skills are highly dependent on a variety of internal and external factors that are independent of language ability per se. The art of testing involves minimising the influence of such extraneous factors and creating conditions under which all candidates can display their genuine abilities.

Bibliography

Canale, M. and M. Swain. 1980. Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics 1: 1-47.

Clankie, S. 1995. The SPEAK test of oral proficiency: A case study of incoming freshmen. In JALT Applied Materials: Language Testing in Japan, eds. J. D. Brown and S. O. Yamashita, 119-125. Tokyo: The Japan Association for Language Teaching.

Kent, H. 1998. The Australian Oxford Mini Dictionary. 2nd ed. Melbourne: Oxford University Press.

McClean, J. 1995. Negotiating a spoken-English scheme with Japanese university students. In JALT Applied Materials: Language Testing in Japan, eds. J. D. Brown and S. O. Yamashita, 119-125. Tokyo: The Japan Association for Language Teaching.

Nagata, H. 1995. Testing oral ability: ILR and ACTFL oral proficiency interviews. In JALT Applied Materials: Language Testing in Japan, eds. J. D. Brown and S. O. Yamashita, 119-125. Tokyo: The Japan Association for Language Teaching.

Nakamura, Y. 1995. Making speaking tests valid: Practical considerations in a classroom setting. In JALT Applied Materials: Language Testing in Japan, eds. J. D. Brown and S. O. Yamashita, 119-125. Tokyo: The Japan Association for Language Teaching.

Turner, J. 1998. Assessing speaking. Annual Review of Applied Linguistics 18: 192-207.

Underhill, N. 1987. Testing Spoken Language: A Handbook of Oral Testing Techniques. Cambridge: Cambridge University Press.

Weir, C. J. 1988. Communicative Language Testing with Special Reference to English as a Foreign Language. Exeter: University of Exeter.

Weir, C. J. 1993. Understanding and Developing Language Tests. New York: Prentice Hall.