
The Assessment of an L2 Listening Comprehension Construct: A Tentative Model for Test Specification and Development
Author(s): Patricia Dunkel, Grant Henning, Craig Chaudron
Source: The Modern Language Journal, Vol. 77, No. 2 (Summer, 1993), pp. 180-191
Published by: Blackwell Publishing on behalf of the National Federation of Modern Language Teachers Associations
Stable URL: http://www.jstor.org/stable/328942


The Assessment of an L2 Listening Comprehension Construct: A Tentative Model for Test Specification and Development

PATRICIA DUNKEL, Department of Speech Communication, Pennsylvania State University, University Park, PA 16802. E-Mail: FYB@PSUVM

GRANT HENNING, Department of Speech Communication, Pennsylvania State University, University Park, PA 16802. E-Mail: GHH1@PSUVM

CRAIG CHAUDRON, Department of ESL, University of Hawaii at Manoa, Honolulu, Hawaii 96822. E-Mail: T341280@uhccmvs

ALTHOUGH A GROWING BODY OF EVIDENCE suggests that tests of second/foreign language (L2) listening comprehension can, under certain conditions, demonstrate construct validity as being in some sense distinct from measures of other language skills (7; 10; 13; 15; 16), there exists neither uniform agreement on the components of listening comprehension and the factors that affect the success or failure of comprehension (12), nor general consensus on the best techniques for assessing that construct (8; 16; 17). However, when attempting to develop either traditional or more innovative types of tests (e.g., computer-adaptive tests) of L2 listening comprehension, the developers must first examine the nature of a listening comprehension construct and identify the critical aspects of listening comprehension assessment that need to be addressed. In this essay, the various aspects and components of listening comprehension assessment that need to be addressed when constructing a test of listening comprehension proficiency are proposed in a tentative framework/model which specifies the person, competence, text, and item domains and components of assessment. The specifications focus on identification of the factors that relate to the purpose, object, and agent of assessment.

    FOREIGN/SECOND LANGUAGE LISTENING COMPREHENSION: PROBLEMS OF DEFINITION AND OPERATIONALIZATION

In his 1971 synthesis of native language (NL) reading and listening comprehension research, Carroll (cited in 12) observed that much of the research on listening comprehension conducted in the 1950s and 60s seemed focused on "establishing 'listening ability as a valid objective for the educational program, without determining its nature and parameters in a precise manner'. . . . [He] bemoaned the fact that even in the seventh decade of the 20th century, 'there does not seem to exist any comprehensive theory of listening behavior in relation to language behavior in general or to other modes of language reception' (e.g., reading comprehension) . . ." (p. 432).

More than twenty years after Carroll's comprehensive synthesis of the central foci and basic quality of the listening research conducted during the first seven decades of this century, Witkin examined the state of the art of native language (NL) listening theory and research and pronounced it to be in a parlous state. According to Witkin, one of the chief problems facing the field of NL listening research is the lack of a generally agreed upon definition of listening. She notes that the vocabulary used to discuss NL listening is diffuse, with "some terms being on a highly abstract level, and some describing quite specific physiological or neurological processes" (p. 8).¹ Wolvin and Coakley have also expressed their concern about the disaccord concerning definitions and operationalizations of the construct of listening comprehension in the native language research.²

In an extension of Wolvin and Coakley's examination of the listening definitions given by sixteen communication scholars between 1925 and 1985, Glen analyzed an additional thirty-four definitions of listening appearing in speech communication scholarly books and instructional texts, and concurred that there indeed appears to be no universally accepted definition of the construct of native language listening. She contends that the problem of definition limits communication research in listening and lessens the chance of finding effective methods of training individuals to be effective listeners (and speakers) of their own native language, English. The problem also highlights the difficulty of generating "a universal conceptual definition of listening from which operational guidelines may be established . . ." (p. 29).³ The attempt to operationalize guidelines, particularly for purposes of assessing L2 listening comprehension, must proceed, however, despite the sometimes dismaying perplexity of the task. The collaborative effort described in this report is an attempt to advance understanding of the construct, particularly as that understanding relates to the development of tests of listening comprehension proficiency.

    PREREQUISITES FOR UNDERSTANDING THE NATURE OF A LISTENING COMPREHENSION CONSTRUCT

Prior to designing and utilizing a possible framework for the assessment of L2 listening comprehension proficiency, we must first attempt to understand and articulate the nature of the listening comprehension construct so that we have a clear understanding of what it is that we are attempting to assess. We need to do this so that we can make more accurate judgments as to whether or not we are succeeding in our assessment techniques. We need, therefore, to do the following: 1) to delimit a listening comprehension construct; 2) to elaborate the ordered components of listening comprehension; 3) to anchor elaborations of taxonomies of listening skills and tasks within appropriate models and theories of comprehension; 4) to validate models incorporating listening comprehension components; and 5) to orient models of listening comprehension to appropriate purposes for language learning, language use, and language assessment.

The Need to Delimit a Listening Comprehension Construct. At the outset of any test development project, it is important to delimit focus to the cognitive operation of comprehension, as distinct from earlier prerequisite operations such as orientation, attention, perception, and recognition, and from subsequent, more complex cognitive operations such as application, analysis, synthesis, and evaluation, to borrow a few of the terms employed in Bloom et al.'s familiar taxonomy. The earlier knowledge operations may be necessary but not sufficient to indicate comprehension. The later, more complex operations may go beyond comprehension to include elements of formal reasoning ability. Although we maintain that effective listening involves more than just comprehension, and that comprehension is required in other modalities in addition to the aural modality, for purposes of assessment of a listening comprehension construct it is appropriate initially to limit focus to necessary and sufficient elements of listening comprehension.

Anderson makes a convincing case that many test items constructed to assess comprehension do not in fact measure comprehension per se, and that for minimal evidence of comprehension to be present, the test item or task should require some transformation of the targeted information at a deep structural, semantic level. This requirement does not imply, however, that comprehension is not facilitative of successful performance with pre- or extra-comprehension tasks. Indeed, one of the major paradoxes in determining the valid content of assessment is the observation that items measuring trivial discrete precursor elements of a skill to be assessed appear to differentiate well between persons who do or do not manifest the actual skill (17). These narrow-spectrum, precursory-skill items may actually fail to tap the targeted comprehension construct directly, but because comprehension is so facilitative of successful performance with the comprehension-precursor items or because mastery of the precursory skills is so facilitative of comprehension, the items appear to satisfy the usual correlational criteria of valid measurement of comprehension. For example, because comprehenders are better at recognition of phonemes within a passage than noncomprehenders are, phoneme recognition items show high discrimination in the assessment of comprehension, and following the rejection of nondiscriminating items, comprehension can wrongly become redefined operationally in tests as phoneme recognition.

Thus we can allow our own correlational, factor-analytic tools to deceive us. The same problem can be noted with higher-order synthesis, application, and evaluation-type comprehension items. In this case, the ability to apply information that has been comprehended from a passage will correlate well with actual comprehension because someone who has comprehended should be better at applying what has been comprehended than someone who has not comprehended. Here again, correlational evidence could mislead an investigator into believing that comprehension and the ability to apply what has been comprehended are one and the same.
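
This correlational trap is easy to reproduce. The following minimal sketch (ours, not the authors'; all parameter values are illustrative assumptions) simulates a precursor skill that is merely facilitated by comprehension, and shows that items tapping only the precursor nonetheless correlate almost as strongly with latent comprehension as true comprehension items do:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                    # simulated examinees
comprehension = rng.normal(size=n)         # latent comprehension ability
# The precursor skill (e.g., phoneme recognition) is correlated with,
# but distinct from, comprehension:
precursor = 0.8 * comprehension + 0.6 * rng.normal(size=n)

# Hypothetical item-set scores: each trait measured with some noise.
comp_items = comprehension + 0.5 * rng.normal(size=n)
precursor_items = precursor + 0.5 * rng.normal(size=n)

# Both item sets discriminate well on latent comprehension, so the usual
# correlational criteria cannot show that the precursor items miss the
# targeted construct.
print(np.corrcoef(comp_items, comprehension)[0, 1])       # approx. 0.89
print(np.corrcoef(precursor_items, comprehension)[0, 1])  # approx. 0.71
```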

It seems vital to differentiate not only between actual comprehension and its attendant precursor and subsequent skills, but also to differentiate among broad facets within comprehension itself. The existence of a person/competence/ability facet and a text/task/difficulty facet within a listening comprehension construct appears axiomatic, although in defining the construct, the nature of the interplay between these facet domains is apparently critical. Thus these two broad facet domains (i.e., person/competence/ability versus text/task/difficulty) are like the two legs of a ladder: both are required, and both must be joined by appropriately spaced rungs if certain kinds of planned ascent are to become possible. Particular texts are appropriate for particular (though differing) persons, particular tasks are representative of particular (though diverse) competencies, and particular difficulty levels are identified with reference to particular (though various) ability levels.

In a measurement sense, unless some implicational or Guttman-type scale can be formed with monotonic increment of person ability and task difficulty in the same response matrix, whatever we choose to label listening comprehension would not qualify as a unitary measurement construct, and the reporting of unitary scores as a reflection of comparative performance would be misleading. Some empirical test of the integrity of the construct as defined by particular tasks and persons for particular assessment purposes should exist. Possibly, in practice, more than one listening comprehension assessment model may also exist, and these models may be distinguished by the purposes for assessment as reflected in the kinds of persons, competencies, texts, and tasks the models admit.
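
One such empirical test can be sketched concretely. The function below (a minimal sketch under our own assumptions, not a procedure prescribed by the article) orders a binary person-by-item response matrix by person ability and item difficulty and computes Guttman's coefficient of reproducibility; values near 1.0 (conventionally at least 0.90) suggest the implicational scalability that the unitary-construct claim requires:

```python
import numpy as np

def reproducibility(responses: np.ndarray) -> float:
    """responses: persons-by-items matrix of 0/1 scores."""
    # Order persons by ability (total score) and items by ease.
    persons = np.argsort(-responses.sum(axis=1))
    items = np.argsort(-responses.sum(axis=0))
    ordered = responses[np.ix_(persons, items)]
    # In a perfect Guttman scale each person passes exactly the k easiest items.
    ideal = np.zeros_like(ordered)
    for row, k in enumerate(ordered.sum(axis=1).astype(int)):
        ideal[row, :k] = 1
    errors = (ordered != ideal).sum()
    return 1 - errors / ordered.size

# Toy matrix: 4 persons by 3 items graded from easy to hard.
m = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0],
              [0, 0, 0]])
print(reproducibility(m))   # 1.0: perfectly scalable
```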

The Need to Elaborate Ordered Components of Listening Comprehension. A number of scholars have provided useful taxonomies of listening comprehension component skills or operations (24; 27; 28; 29; 30; 32; 34; 39). However, few of these valuable efforts have attempted to provide clear definitions or non-redundant orderings of components in any systematic graded hierarchy that has been shown empirically to correspond to task difficulty. Scientific definitions of constructs and their components, whether denotative or connotative, should contain sufficient information to distinguish the referent from nonreferents and should not contain information that would exclude any intended referent; that is, they should be neither unduly inclusive nor unduly exclusive (22). Also, hierarchical ordering of the components of a construct according to difficulty or some other organizing principle, in order to form implicational scales or to provide operational definitions of latent traits, is one of the primary goals of behavioral assessment.

Bloom and colleagues have provided one of the few taxonomies of educational objectives with demonstrated utility for the placement of comprehension within other hierarchically-ordered cognitive operations. But even that taxonomy, on empirical investigation, has been criticized because not all of the categories of the taxonomy (i.e., knowledge, comprehension, application, analysis, synthesis, and evaluation) appear to be arranged hierarchically in the order hypothesized (20; 25). Synthesis and evaluation appear to be the misordered categories, with some conflicting evidence suggesting that synthesis may precede comprehension and that evaluation may not constitute the highest or final level of the hierarchy. Clearly, however, the criticisms have not negated the general utility of this taxonomy.

The Proficiency Guidelines of the American Council on the Teaching of Foreign Languages (ACTFL), as well as the various articulations of the Interagency Language Roundtable/Educational Testing Service language skill level descriptions (23), provide hierarchies of global characterizations of performance at stages of language proficiency. While these latter efforts were not intended to constitute comprehensive models of language ability or language use, they do attempt to describe and order behaviors at incremental stages along a proficiency continuum. Most importantly, they have demonstrated utility for particular assessment purposes in particular situational contexts. The ACTFL guidelines have not escaped criticism for a number of perceived inadequacies (e.g., 4; 5; 11; 21), but none of these criticisms has negated the utility of the guidelines. The most frequent criticism of the guidelines has been that more empirical validational research is needed. This need has recently been addressed, in part, by Dandonoli and Henning.

The Need to Anchor Elaborations of Taxonomies of Listening Skills and Tasks within Appropriate Models and Theories of Comprehension. Many of the current L2 listening comprehension tests were constructed with little or no explicit reference to any particular model or theory of listening comprehension. This same criticism may be offered of some of the taxonomies of listening comprehension subskills. Given the often primitive status and tentative nature of such models and theories, this weakness is not entirely inexcusable. Still, component skills and tasks can be aggregated within some organizational framework, such as that suggested by Bloom and his colleagues' taxonomy, or that implicit in any of several cognitive processing models (19). The advantage of reference to appropriate theory lies in the hope that the interrelationships among the hypothesized components of the listening comprehension construct can be interpreted and explained.

McLaughlin has noted that theories are either deductive or inductive in kind, depending on whether the source of hypotheses is interim solutions (deductive) or empirical data (inductive). Also, theories are either micro or macro in nature, depending on the range of phenomena they are proposed to accommodate. McLaughlin further proposed that theories should be evaluated in terms of their definitional adequacy and their explanatory power. For purposes of listening comprehension assessment, an organizing theory should minimally provide adequate definition of component constructs and veridical explanation of how the components relate.

The Need to Validate Models Incorporating Listening Comprehension Components. Several levels of validation are implied in listening comprehension assessment, each of which entails recognition of the purpose for which the assessment is taking place. One level is the level of validation of the testing instrument itself. Traditional psychometric validational theory is beneficially invoked at this level. Another level concerns the validation of the performance model or guidelines on which the test is based. This validation usually entails gathering evidence of the psychological reality and utility of the model and its elements. Unfortunately, such models are less commonly evaluated in terms of their parsimony and utility than in terms of their psychological reality alone. Just as two-dimensional geographical maps have well-established utility despite their limitations as approximations of three-dimensional space, so a model of listening comprehension assessment may demonstrate utility even when it provides an imperfect approximation of psychological reality. If the test instruments routinely developed according to a given performance model demonstrate construct validity, then the model on which the test has been based may be judged to exhibit validity also, if only validity for the utilitarian purpose of guiding the generation of valid tests. At a still deeper level of concern, it is possible to consider validity of theories foundational to models that form the bases of tests.

In their validation study of the ACTFL proficiency guidelines, Dandonoli and Henning compared the functioning of tests developed according to those guidelines within each of the four general language skills of listening, speaking, reading, and writing, and across the two languages of English and French. While the researchers found evidence of convergent validity of experimental tests in every skill area and across both languages, only the tests developed to assess listening comprehension failed to exhibit discriminant validity in either language. One hypothesis offered to explain this outcome was that the ACTFL Guidelines may require clearer articulation of performance descriptors across the listening comprehension proficiency levels to provide a better basis for test development and assessment. An alternative hypothesis considered was that the Guidelines descriptors were satisfactory, but the listening comprehension tests themselves were poorly constructed. While this latter explanation is not entirely without merit, we consider it unlikely since the test exhibited high internal consistency, and the same results (i.e., high reliability, low discriminant validity) were replicated across both languages. This example suggests either that further validity evidence is still needed for listening comprehension tests developed according to such guidelines or that the listening guidelines themselves are not yet adequate.
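
The convergent/discriminant logic at issue can be made concrete in a few lines. This sketch (our illustration with simulated data; it is not the Dandonoli-Henning analysis) averages correlations between tests of the same trait measured by different methods against correlations between tests of different traits; a trait fails the discriminant test when the second average approaches the first:

```python
import numpy as np

def convergent_discriminant(scores):
    """scores: dict mapping (trait, method) -> array of examinee scores."""
    keys = list(scores)
    same_trait, cross_trait = [], []
    for i, a in enumerate(keys):
        for b in keys[i + 1:]:
            r = np.corrcoef(scores[a], scores[b])[0, 1]
            (same_trait if a[0] == b[0] else cross_trait).append(r)
    return np.mean(same_trait), np.mean(cross_trait)

# Simulated scores in which listening is dominated by general proficiency,
# so it correlates with reading about as strongly as with itself.
rng = np.random.default_rng(1)
g = rng.normal(size=200)   # shared general proficiency
tests = {("listening", m): g + rng.normal(size=200) for m in ("A", "B")}
tests |= {("reading", m): g + 0.5 * rng.normal(size=200) for m in ("A", "B")}
conv, disc = convergent_discriminant(tests)
print(f"same-trait r = {conv:.2f}, cross-trait r = {disc:.2f}")
```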

The Need to Orient Models to Appropriate Purposes for Language Learning, Use, and Assessment. We may safely conclude that there can and should be more than one model of listening comprehension assessment, but any model proposed should be judged in terms of its utility in addition to its psychological reality. Utility implies purpose. Just as there can be no judgment of test validity apart from a knowledge of the purpose for which the test is being applied, so there can be no final judgment of the adequacy of any model of listening comprehension assessment without an appreciation of the particular applications for which the model is designed. A given listening comprehension assessment model may need to be characterized by reference to purposive qualifiers, such as for assessment of English as a second language for classroom-context academic purposes at the tertiary stage in the physical sciences, for example; then the judgment of the adequacy of the model can be made partly by consideration of the extent to which the model is useful for the purpose for which it has been proposed.

The clarification of the purposes of assessment in each application was listed as the number one priority in language testing in the nineties at the most recent ACTFL Priorities Conference (18). Without a clear designation of the purpose of assessment, accurate judgments cannot really be offered concerning the validity of the tests, the appropriateness of the test specifications, or the adequacy of the underlying theoretical models.

    ASPECTS AND COMPONENTS OF LISTENING COMPREHENSION ASSESSMENT

A general framework for the consideration of listening comprehension assessment may be provided by the relationships displayed in Figure I.

This framework attempts to relate the various aspects and components of listening comprehension assessments already mentioned. To the extent that the definitions of the components are adequate for some assessment purposes and the framework accurately explains interrelationships among the components, the framework itself comprises a model or a theory of listening comprehension assessment. With appropriate changes in cognitive operations and other elements, the model could be adapted for applications beyond listening comprehension assessment.

According to this framework, the PURPOSE OF ASSESSMENT component (as distinct from, but not unconcerned with, the purpose of listening) dictates the form and function of all of the other elements. Accordingly, it has the most prominent starting position at the top of the figure. Each of the other elements in the figure has representative subcategories that will be mentioned later. TASKS are viewed as representing both text types and elements within texts, as well as the text meanings that are conveyed in these elements, and the particular test item types and sample items that are based on the texts. These constitute the key material to be comprehended. RESPONSE CATEGORY is viewed as extending from particular cognitive operations to include response modes, generic item formats, and specific item formats (in general, the notion of test specifications is included here). SCORING METHOD indicates the manner in which the level of performance is recorded and reported, either at the test or the item level, whether as number correct, transformed number correct, expert rating, level descriptor, or any other method. Just as persons obtain ability scores over tasks, so tasks can be assigned difficulty scores over persons. In assigning ability and difficulty levels, reference is made to LEVELING VARIABLES that serve to differentiate performance levels. COMPETENCE CATEGORY serves to denote the various native and acquired skills and knowledge required to perform the task, including memory of schemata. The specific behavior type and performance acts needed to do the task are considered to derive from the Person-Competence facet of this model. SAMPLE CHARACTERISTICS represent the intended examinees classified according to any appropriate subcategories such as intelligence, personality, experience, and background knowledge (see 36). The generic listening COGNITIVE OPERATION indicated in the center of the figure is, in our case, comprehension, but within comprehension are the component cognitive operations identification and interpretation. All of the elements of the model are embedded in a field of SOCIOCULTURAL CONTEXT, reflecting the fact that any assessment involves sociocultural value assumptions, whether explicit or implicit.
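
To make the framework's components concrete as test specifications, the record below is a minimal sketch of our own devising: the field names follow the framework's component labels, while the type choices and example values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ListeningTestSpec:
    purpose: str                 # purpose of assessment; dictates the rest
    text_type: str               # e.g., "academic lecture"
    text_element: str            # e.g., "connected discourse"
    cognitive_operation: str     # "identification" or "interpretation"
    response_category: str       # e.g., "answers question(s)"
    item_type: str               # e.g., "multiple-choice"
    scoring_method: str          # e.g., "binary"
    leveling_variables: dict = field(default_factory=dict)
    sample_characteristics: dict = field(default_factory=dict)

spec = ListeningTestSpec(
    purpose="ESL for classroom-context academic purposes, tertiary stage",
    text_type="academic lecture",
    text_element="connected discourse",
    cognitive_operation="interpretation",
    response_category="answers question(s)",
    item_type="multiple-choice",
    scoring_method="binary",
    leveling_variables={"speededness": "moderate",
                        "propositional density": "high"},
    sample_characteristics={"language background": "mixed",
                            "educational major fields": "physical sciences"},
)
```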

Text/Task/Difficulty Dimension. One view of listening comprehension entails close examination of the task demands. Earlier we suggested that text/task/difficulty comprises one of two major listening facet dimensions. Here we propose a series of categorical components extending from text types to scoring methods, as follows: 1) Text Type; 2) Text Element; 3) Cognitive Operation; 4) Response Category; 5) Item Type; 6) Sample Item; 7) Scoring Method.

Text Type. A variety of listening text types have been identified (e.g., 6; 9).⁴

FIGURE I
Tentative Model of L2 Listening Comprehension Assessment

[Figure: a diagram placing Purpose of Assessment at the top, above parallel Examinee and Task facets. The Examinee facet descends through Sample Characteristics, Person Domain, Behavior Type, Competence Category, Person Ability Level, Performance Act, and Person Scores; the Task facet descends through Text Type, Text Element, Response Category, Item Type, Task Difficulty Level, Sample Item, and Item Scores. Cognitive Operation, Leveling Variables, and Scoring Method are shared across the two facets, and the whole model is embedded in a field of Sociocultural Context.]

The following list is by no means exhaustive. We expect that there will be a broad range of text and task difficulty possible within each text type.

Academic lectures
Advertisements
Announcements
Conversations
  casual exchanges
  interviews
  panel discussions
  telephone conversations
Debates
Instructions/Directions
Jokes
Narrative stories
Newscasts
Poetic lyrics
Scripted dramatic pieces
Sermons
Songs
Speeches

These text types could be further analyzed by rhetorical function, or by transactional versus interactional language focus (see 31), or by levels of formality, depending on the purpose of assessment.

Text (and Task) Element. A category of text elements must be designated since it is not usually the text as a whole that is the focus of cognitive operations involved in assessment. For example, the text as a whole is not the object of comprehension, but rather the informational elements within the text, which may, of course, be related in hierarchical or other ways within the text structure. The following text elements are commonly listed in attempts to grade text difficulty. While an effort is frequently made to associate the capacity to comprehend the earlier lower-order or bottom-up elements with lower levels of proficiency and the later higher-order or top-down elements with higher proficiency, a great range in difficulty may be provided within examples of each text element. For example, words can be found with high levels of comprehension difficulty, while some connected discourse can be processed at the lowest levels of proficiency if the task is merely to identify the topic. These same text elements found in listening comprehension passages may also be found in listening comprehension test items that are associated with passages. In this way, they may be considered to be task elements as well as text elements.

Morphemes
Paralinguistic features (stress, intonation, tone)
Words and phrases
Simple utterances/syntax
Complex utterances/syntax
Connected discourse
Nonliteral implicature

The ACTFL Proficiency Guidelines for listening explicitly employ text elements as leveling devices to signify the comparative difficulty of a listening task or the comparative competence indicated by a listening performance. Those guidelines mention words, phrases, utterances, sentences, and connected discourse as selectively comprehensible at different stages of proficiency. Our position is that, while the size of an element is one of a variety of leveling variables that help to differentiate levels of comprehensibility, certain uncommon words will be incomprehensible at the highest proficiency levels, and some connected discourse will be comprehensible at the lowest proficiency levels. Text elements are here considered to be those informational units of the text that convey meanings. Inherent in the types and elements of texts are the semantic/propositional contents that must be accessed in order to understand the texts. In addition, certain contextual information and background knowledge allow the listener to infer other semantic and propositional meanings from the texts. For the purposes of listening comprehension assessment, the main categories of text meanings are:

Orientation meanings
  Persons and their relationships
  Elements of the stated or implied setting (location and time) of the text events
  Topic of the text

Detail meanings
  Simple lexical meanings
  Single propositional meanings in the text

Main ideas
  The derived principal proposition of a text

Implications
  Meanings derived from the listener's application of background knowledge and logic to the text (i.e., this would include the illocutionary force of the text, logical relations implied by the text, etc.)

These components of text meaning are accessed by means of cognitive operations (see below), according to the particular test item format and response mode, in what will be called the cognitive task. Thus there are cognitive tasks which can be called identifying detail, interpreting main idea, interpreting implications, identifying topic, etc.

Cognitive Operation. The following cognitive operations may be viewed as relevant in the assessment of listening ability. If the purpose of the assessment is to measure listening "comprehension," then it may be appropriate to concentrate on tasks that involve only that operation. This distinction is not obvious, since comprehension facilitates both lower and higher level operations and those lower and higher operations also facilitate comprehension, implying that performance on many non-comprehension tasks will be highly correlated with performance on true comprehension tasks.⁵

Orientation
Attention
Knowledge
  perception
  recognition
Comprehension
  identification
  interpretation
Application
Analysis
Synthesis
Evaluation

As noted earlier, no uniform empirical evidence exists that the final two categories (i.e., synthesis and evaluation) are appropriately positioned in this list. For purposes of test development, the two comprehension operations are identification and interpretation. After considerable deliberation, we view identification and interpretation as defensible, necessary, and sufficient examples of comprehension, whereas perception and recognition do not necessarily imply comprehension, and evaluation and other higher cognitive operations go beyond comprehension to require formal reasoning abilities.

Response Category. Response category is a rubric intended to permit the systematic listing of broad categories of ways of responding that exemplify any given targeted cognitive operation. Response categories may be classified according to a variety of features; however, the most important with respect to the measurement of listening comprehension is the distinction between a production response (i.e., oral or written) and a manual selection response (i.e., identifying from among a set of response options, whether by paper-and-pencil, keyboard, mouse, or other manipulation). The following response modes appropriate to the measurement of comprehension (identification and interpretation) are indicated with their possible productive or manual selection categories. We assume that any item requiring active generation of a response rather than selection of options will tend to be productive. This assumption does not necessarily mean that a set of manual selection options could not be constructed for such an item, but that the test developer would be constrained in the development of such options (and distractors) in a non-interactive (nonparticipatory) listening test environment.

Response Category          Production   Selection
The listener:
  Answers question(s)      often        often
  Defines                  often        seldom
  Describes                often        seldom
  Follows directions       often        often
  Paraphrases              often        seldom
  Selects picture          seldom       often
  Produces picture         often        seldom
  Translates               often        often

Other response categories, such as content-word recall and imitation, may prove highly useful as indirect methods of assessing comprehension, but they have been purposely excluded here for failing to meet sufficiency criteria as indicators of comprehension.

Item Type. Item type is used here in a restricted sense to denote generic characteristics of the item itself that may be employed within response categories. Some valid tests of listening comprehension may not be of the traditional "item-based" variety, but may instead call for holistic judgments of the quality of examinee performance. In these cases, every example of rated performance may still be viewed as an item, and every stimulus for eliciting comprehension performance as the prompt or stem of the item. Examples of item types include the following, whether in an oral or a written modality:

Selection          Production
Multiple-choice    Completion
True-false         Short answer
                   Composition

A full specification of item type involves details of item format. Item format includes information about response mode and item type as mentioned earlier, but it goes beyond those considerations to provide the exact specifications of item features. For example, item format specifications would prescribe the range of acceptable lengths of item stems (prompts) and response options, the number and kinds of multiple-choice distractors (if the item type is multiple-choice), content exclusion (e.g., taking care to eliminate disturbing and prejudicial material from the input or items), the precise form of the test instructions, the frequency interval of cloze deletions, the subject field of vocabulary employed, and so forth.

    Sample Item. Each of the specific item formats may have an infinite number of representative sample items. We do not propose to provide examples of every possible item format, but rather to point out that sample items comprise the most discrete representation of the task.

Scoring Method. The selection of a particular scoring method will be a function of the purpose of the assessment and the nature of the tasks and competencies that are being assessed. A few of the varieties of scoring method available are as follows: Binary (pass-fail, correct-incorrect); Partial credit; Rating scale; Degree of correctness; Profile; Holistic; Analytic; Raw score; Scaled score.
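
As a brief illustration of the first two methods at the item level, the functions below are a minimal sketch under our own assumptions (the names and the step-counting treatment of partial credit are ours, not the article's):

```python
def binary_score(response: str, key: str) -> int:
    """Binary (correct-incorrect) scoring of a single item."""
    return int(response.strip().lower() == key.strip().lower())

def partial_credit(steps_completed: list[bool]) -> float:
    """Partial-credit scoring: fraction of required response steps present."""
    return sum(steps_completed) / len(steps_completed)

print(binary_score("the library", "The library"))   # 1
print(partial_credit([True, True, False]))          # 0.666...
```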

Leveling Variables. The preceding variables may be viewed as categorical variables; thus, it may not be productive to grade text types at difficulty levels apart from reference to a further series of continuous variables that determine text difficulty and may accordingly be termed leveling variables. Each text or text element may be graded with regard to the degree of presence or absence of such leveling variables as: Content imagery; Contextual support; Cultural proximity; Length; Dialectal variation; Lexical difficulty; Organization; Participation (opportunity to modify input); Plausibility; Propositional density; Redundancy; Repetition; Social frequency; Signal-to-noise ratio (clarity); Speededness; Syntactic complexity; Topic familiarity; Video support (gestures, objects in field, etc.).

Close inspection of the ACTFL Proficiency Guidelines for listening will reveal that several of these leveling variables are explicitly employed therein to differentiate among levels of performance. In particular, the following leveling concepts appear in the guideline descriptors: frequency, length, clarity, simplicity, repetition, speed, familiarity, propositional density, sociocultural proximity. The use of particular leveling variables for assessment purposes would be signaled in the item format category (above) or in the behavior type category (below).

We propose here a specific organization of leveling variables to systematize several of these in a proficiency rating scale. When the purpose of assessment permits, we suggest that the main variables of contextual support, rate of speech, lexical complexity, and syntactic complexity could be selected in order to screen items and texts for specific listening comprehension proficiency levels. Each of these variables can be applied to different domains of listening competencies, whether linguistic, psycholinguistic, sociolinguistic, pragmatic, or strategic. Other leveling variables such as dialect/register, topic commonness, repetition, and clarity of speech signal may also be applied, but are probably most appropriate to listening tasks at an advanced level of proficiency, and sometimes involve higher level cognitive operations than those of comprehension per se (e.g., analysis, synthesis, evaluation). While all of the above-listed leveling variables are candidates for classifying difficulty of texts and tasks, it is usually desirable or necessary to limit recourse to only those leveling variables that can be shown to work effectively across a desired range of difficulty for a prescribed purpose. While we are suggesting a subgrouping of these leveling variables here, an empirical question remains as to which variables can be used most effectively for which purposes of assessment.
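
A screening rule of the kind proposed can be sketched as follows (entirely our construction: the ordinal 1-5 difficulty codings and ceiling values are illustrative assumptions, and only the four main variable names come from the text):

```python
MAIN_LEVELING = ("contextual support", "rate of speech",
                 "lexical complexity", "syntactic complexity")

def admissible(text_profile: dict[str, int],
               level_ceiling: dict[str, int]) -> bool:
    """A text is admitted for a proficiency level only if none of the main
    leveling variables exceeds that level's difficulty ceiling (codes 1-5)."""
    return all(text_profile[v] <= level_ceiling[v] for v in MAIN_LEVELING)

novice_ceiling = {"contextual support": 2, "rate of speech": 2,
                  "lexical complexity": 1, "syntactic complexity": 2}
candidate_text = {"contextual support": 1, "rate of speech": 2,
                  "lexical complexity": 1, "syntactic complexity": 2}
print(admissible(candidate_text, novice_ceiling))   # True
```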

    Person/Competence/Ability Dimension. The above suggested categorical domains of text and task type may be considered as having analogs within facets of person/competence/ability. The following series of categorical domains represents the second leg of the metaphorical ladder mentioned earlier. Cognitive operation (see above) and scoring method are categories shared across examinee and task domains as indicated in Figure I: 1) Sample Characteristics; 2) Person Domain; 3) Cognitive Operation; 4) Behavior Type; 5) Competence Category; 6) Performance Act; 7) Scoring Method.

Sample Characteristics. A person may qualify for inclusion in a variety of categorical domains that may have bearing on the choice of valid content and techniques of assessment of listening comprehension. Some of the conceivable relevant sample characteristics would include the following: Child-adult; Educated-noneducated; Educational major fields; Female-male; Language background; Actual or intended employment categories; Interest categories.

    Person Domain. Perhaps the person analog to text element can be termed person domain. Here we are concerned with the particular aspect of personhood that becomes the object or focus of assessment. The following person domains may be named: Cognitive; Affective; Physiological; Sensorial (sight, hearing, taste, touch).

While comprehension as a construct is primarily located within the cognitive domain, the designation of the listening modality, the selection of appropriate assessment tasks, and the articulation of particular assessment purposes may involve the other domains as well.

Cognitive Operation. (See above.)

Behavior Type. Analogous to response category in the text/task/difficulty facet is behavior type in the person/competence/ability facet. This category seeks to specify and prescribe the kind of behavior that would demonstrate the particular competencies being evaluated. Like the elaboration of response category, behavior type further systematizes the classes of behavior representing particular competencies, prescribing the range of behavior acceptable in the assessment as indicative of particular competencies. This facet further specifies the leveling variables that will serve in the assessment to delimit levels of ability. (For example, if a competence category is specified as the ability to distinguish between literal and implied meanings, a corresponding behavior type would be writing [or recognizing] both the literal and implied meanings.)

Competence Category. Analogous to item type in the text/task/difficulty facet is competence category in the person/competence/ability facet. Just as item types may have subcategories of elicited responses involving production and selection to evoke and distinguish targeted cognitive operations (see above), so also may competence category differentiate between skills and knowledge. Competence category differs from behavior type much as competence is said to differ from performance. The cognitive operation of listening comprehension involves many underlying abilities. Richards (32) lists thirty-three micro-skills of conversational listening and eighteen micro-skills of academic listening. While not all of these fall within the more selective rubric of comprehension as discussed here, some of them do. Two of those that do fit here are the following: 1) ability to detect meanings expressed in differing grammatical forms/sentence types, i.e., that a particular meaning may be expressed in different ways (related to the response category of paraphrasing); 2) ability to distinguish between literal and implied meanings (related to the cognitive operation of interpretation).

Still other competence categories for listening comprehension might include, to name but two, the following: 1) ability to identify main topics of conversations; 2) ability to interpret the desired objects of requests.

In general, competence categories represent abilities to perform cognitive operations on specific text elements and meanings. Each of these competence categories admits a full range of ability levels that might be indicated by the score on the test or test item.

Performance Act. This category refers to the actual performance act in the assessment that is intended to exemplify the behavior type being assessed. Performance act would correspond to "sample item" in the text/task/difficulty domain. In practice, a performance act is often a specific response to a particular test question.

    CONCLUDING CONSIDERATIONS

We have reviewed several issues related to the assessment of a listening comprehension construct, and we have proposed a tentative framework that models the interrelationships among the elements of an L2 listening comprehension assessment. We trust that this framework can have some utility in the preparation of L2 test specifications and with the development of tests of L2 listening comprehension. We also hope that in referencing such a framework, test developers will be able to devise testing instruments that exhibit still greater construct validity for particular assessment purposes. The model is, at present, general and tentative; it remains for us and other researchers to expand upon individual elements within and certain components of the model, and to test the model against reality. In other words, the following tasks remain to be accomplished:

* to expand the model by further identifying and elaborating the individual components set forth in the model (e.g., the purpose of assessment; the person-ability level; etc.);

* to map existing taxonomies of listener functions and listening skills (e.g., Richards' [32, 33] and Lund's taxonomies), as well as established descriptions of the levels of listening proficiency (e.g., the ACTFL Guidelines' general descriptions for listening), onto components of the model, as appropriate;

* to interrelate components of the model in a more detailed fashion (e.g., to specify the interrelationship between components such as competence category and performance act);

* to compare the predictive and discriminative power of a variety of potential leveling variables employed for a variety of assessment purposes (e.g., topic familiarity; propositional density);

* to devise tables of test specifications for clearly identifiable and specific purposes of assessment using an articulated version of the model;

* to use the model and appropriate tables of specifications to construct L2 listening comprehension tests and then to probe the validity of the resulting measures (i.e., the content, concurrent, predictive, and construct validity).

In short, researchers have to demonstrate the utility of the model with the construction of multi-faceted listening comprehension tests designed to specify with precision and clarity the specific purpose(s) of assessment, the object(s) of assessment, and the agent(s) of assessment. The need for testing the proposed model, not only for utility but also for approximation to reality, cannot be overlooked or overstated, for, as Agnew and Pyke point out, while theory construction and model building are vital, theory and model testing are equally critical. They exhort researchers to establish "observational checkpoints" that test theories constructed and models proposed against reality. We echo Agnew and Pyke's contention and exhortation: we both advocate the further elaboration of the model and call for testing the proposed model of L2 listening comprehension assessment against real-world situations and purposes.

    ACKNOWLEDGMENTS

The authors acknowledge the collegial and institutional support we received to further study the issue of interest. We began the research on model building as Collaborative Research Fellows at the National Foreign Language Center during the summer of 1990. We received additional support for the work from the US Department of Defense. Dennis Gouran, Head of the Department of Speech Communication at Pennsylvania State University, provided additional consultation and support for the research endeavor.

    NOTES

¹ Barber and Fitch Hauser (cited in Witkin) identified 315 variables used in studying NL listening comprehension, some of which were defined in broad terms (e.g., listening, memory, perception, and attention); some in more precise terms (e.g., selectivity, channel, feedback, and decoding); and others in highly specific terms (e.g., electrochemical impulses, auditory discrimination, and dichotic/diotic listening) (p. 9).

² Wolvin and Coakley found the following numerous and varied differences in the meaning of the term "listening." Researchers perceive NL listening to involve the hearer's "analyzing, concentrating, understanding, registering, converting meaning to the mind, engaging in further mental activity, responding, reacting, interpreting, relating to past experiences and future expectancies, assimilating, acting upon, selecting, receiving, apprehending, hearing, remembering, identifying, recognizing, comprehending, sensing, evaluating, emphasizing, and organizing" (p. 57). Many of the terms used by some researchers to describe listening are synonyms for expressions used by others. Much verbal confusion and overlap of meaning, as well as general disagreement concerning the psycholinguistic processes of listening, exist, according to Wolvin and Coakley.

³ A similar assertion could be made with respect to the construct of second/foreign language listening. Definitions range from the simple and reductive (e.g., "listening is the activity of paying attention to and trying to get meaning from something we hear" [38: p. 1]) to the more expansive and encompassing notion that listening needs to be defined in terms of the various types of listening: critical, global, intensive, interactional, transactional, recreational, and selective listening (35).

In addition to pointing out the "definition and operationalization" problem, Witkin identified several other problems endemic in theory building and research on NL listening comprehension: most research on listening is not based on theory; the extant research is often contradictory; and almost no studies have been done to replicate or verify previous research. The problematic state of research may be partially due to the fact that there exists "a serious question among scholars as to whether there is an 'art' to listening research, and whether indeed the processes can be observed and studied" (40: p. 7). This perception needs to be altered if we are to increase the quantity and quality of the empirical research base on listening, as well as the quality of listening training and assessment, according to Witkin.

⁴ Child's 1987 classification of texts establishes a graduated scale of text difficulty which is determined, at least in part, by the degree of information shared between the speaker and the listener: the orientation mode texts are those whose general purpose is to orient the listener (or reader) regarding "who or what is where, or what is happening or supposed to happen within a generally predetermined pattern, much of which is external to language" (p. 102); the instructive mode texts include forms such as "extended instructions on how to assemble objects or complicated directions to remote places; recounting of incidents in one's past; narration of historical events; certain kinds of material where a supposedly factual treatment is strongly influenced by political theory; and, in speaking situations, exchanges of facts and opinions, and questions which elicit these . . ." (p. 103); the evaluative mode texts include "editorials on and analyses of facts and events; apologia; certain kinds of belles-lettristic material, such as biography with some critical interpretation; and in verbal exchanges, extended outbursts (rhapsody, diatribe, etc.)" (p. 104); and the projective mode texts contain less shared information between the speaker (writer) and listener, and include forms such as philosophical discourse, certain kinds of technical papers, and "other forms of analysis or argumentation" (p. 104).

⁵ It should be apparent that this list is not intended to be an exact representation of any of the familiar taxonomies (e.g., Bloom and colleagues'). Thus, while there is considerable overlap with current expressions of Bloom and colleagues' 1984 taxonomy of educational objectives, we have taken the liberty of adding cognitive operations such as orientation and attention preceding knowledge on the hierarchy. In addition, we have defined knowledge and comprehension somewhat more restrictively. For example, we have included identification as an element of comprehension since, in our view, identification requires more than mere recognition, but includes an ability to label and classify that which has been recognized. We have not included extrapolation as an element of comprehension since, to the extent that extrapolation differs from interpretation, we believe it involves analysis and speculation of consequences on the basis of what was comprehended and thus falls more logically under the categories of application, analysis, and/or synthesis.

    BIBLIOGRAPHY

    1. Agnew, Neil & Sandra Pyke. The Science Game: An Introduction to Research in the Social Sciences. 4th ed. Englewood Cliffs, NJ: Prentice-Hall, 1987.

2. American Council on the Teaching of Foreign Languages. ACTFL Proficiency Guidelines. Hastings-on-Hudson, NY: ACTFL, 1986.

3. Anderson, Richard C. "How to Construct Achievement Tests to Assess Comprehension." Review of Educational Research 42 (1972): 145-70.

4. Bachman, Lyle F. "Problems in Examining the Validity of the ACTFL Oral Proficiency Interview." Studies in Second Language Acquisition 10 (1988): 149-64.

5. Bachman, Lyle F. & Sandra Savignon. "The Evaluation of Communicative Language Proficiency: A Critique of the ACTFL Oral Interview." Modern Language Journal 70 (1986): 380-90.

    6. Biber, Douglas. Variation Across Speech and Writing. New York: Cambridge Univ. Press, 1988.

    7. Buck, Gary. "Listening Comprehension Construct Validity and Trait Characteristics." Language Learning 42 (1992): 313-57.


    8. Carroll, John B. Learning from Verbal Discourse in Educational Media: A Review of the Literature. ETS Research Bulletin RB-71-61. Princeton, NJ: ETS, 1971.

9. Child, James. "Language Proficiency and the Typology of Texts." Defining and Developing Proficiency: Guidelines, Implementation, and Concepts. Ed. Heidi Byrnes & Michael Canale. Lincolnwood, IL: National Textbook & ACTFL, 1987: 97-106.

10. Dandonoli, Patricia & Grant Henning. "An Investigation of the Construct Validity of the ACTFL Proficiency Guidelines and Oral Interview Procedure." Foreign Language Annals 23 (1990): 11-22.

11. Douglas, Daniel. "Testing Listening Comprehension in the Context of the ACTFL Proficiency Guidelines." Studies in Second Language Acquisition 10 (1988): 245-60.

12. Dunkel, Patricia. "Listening in the Native and Second/Foreign Language: Toward an Integration of Research and Practice." TESOL Quarterly 25 (1991): 431-57.

13. Glen, Ethel. "A Content Analysis of Fifty Definitions of Listening." Journal of the International Listening Association 3 (1989): 21-31.

14. Hale, Gordon, Donald Rock & Thomas Jirele. Confirmatory Factor Analysis of the Test of English as a Foreign Language. TOEFL Research Report 32. Princeton, NJ: ETS, 1989.

15. Hauser, Margaret & Adele Hughes. "A Factor Analytic Study of Four Listening Tests." Journal of the International Listening Association 1 (1987): 129-47.

16. Henning, Grant. "Analysis of a Content-Word-Recall Approach to the Testing of Listening and Reading Comprehension." Comprehension-Based Second Language Teaching. Ed. Robert Courchene, Joan Glidden, Jennifer St. John & Christiane Thérien. Ottawa, Ontario: Univ. of Ottawa, 1992: 339-49.

17. Henning, Grant. A Study of the Effects of Variation of Short Term Memory Load, Reading Response Length, and Processing Hierarchy on TOEFL Listening Comprehension Item Performance. TOEFL Research Report 33. Princeton, NJ: ETS, 1991.

18. Henning, Grant. "Priority Issues in the Assessment of Communicative Language Abilities." Foreign Language Annals 23 (1990): 379-84.

19. Klatt, Dennis. "Review of Selected Models of Speech Perception." Lexical Representation and Process. Ed. William Marslen-Wilson. Cambridge, MA: MIT Press, 1989: 169-226.

20. Kropp, Russell & Howard Stoker. "The Construction and Validation of Tests of the Cognitive Processes Described in the Taxonomy of Educational Objectives." Cooperative Research Project 2117. Washington: US Office of Education, 1966 [ERIC Document ED 010 044].

21. Lantolf, James & William Frawley. "Proficiency: Understanding the Construct." Studies in Second Language Acquisition 10 (1988): 181-95.

    22. Lastrucci, Carlo L. The Scientific Approach: Basic Principles of the Scientific Method. Cambridge, MA: Schenkman, 1967.

23. Lowe, Pardee. "The Unassimilated History." Second Language Proficiency Assessment: Current Issues. Ed. Pardee Lowe & Charles Stansfield. Englewood Cliffs, NJ: Prentice-Hall, 1988: 11-51.

24. Lund, Randall. "A Taxonomy for Teaching Second Language Listening." Foreign Language Annals 23 (1990): 105-15.

25. Madaus, George, Elinor Woods & Ronald Nuttall. "A Causal Model Analysis of Bloom's Taxonomy." American Educational Research Journal 10 (1973): 253-62.

    26. McLaughlin, Barry. Theories of Second Language Learning. London: Arnold, 1987.

    27. Munby, John. Communicative Syllabus Design. Cambridge: Cambridge Univ. Press, 1978.

    28. Omaggio, Alice. Teaching Language in Context: Proficiency-Oriented Instruction. Boston: Heinle, 1986.

29. Petersen, Patricia. "A Synthesis of Methods for Interactive Listening." Teaching English as a Second or Foreign Language. 2nd ed. Ed. Marianne Celce-Murcia. New York: Newbury House, 1991: 106-22.

    30. Powers, Donald. "Academic Demands Related to Listening Skills." Language Testing 3 (1986): 1-38.

    31. Richards, Jack C. The Language Teaching Matrix. New York: Cambridge Univ. Press, 1990.

32. Richards, Jack C. The Context of Language Teaching. New York: Cambridge Univ. Press, 1985.

33. Richards, Jack C. "Listening Comprehension: Approach, Design, Procedure." TESOL Quarterly 17 (1983): 219-40.

    34. Rivers, Wilga. Teaching Foreign Language Skills. Chicago: Univ. of Chicago Press, 1968.

    35. Rost, Michael. Listening in Language Learning. New York: Longman, 1990.

    36. Skehan, Peter. Individual Differences in Second Language Learning. London: Arnold, 1989.

37. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. Ed. Benjamin Bloom. New York: Longman, 1984.

    38. Underwood, Mary. Teaching Listening. New York: Longman, 1989.

    39. Valette, Rebecca. Modern Language Testing: A Handbook. New York: Harcourt, Brace, 1967.

40. Witkin, Belle. "Listening Theory and Research: The State of the Art." Journal of the International Listening Association 4 (1990): 7-32.

    41. Wolvin, Andrew & Carolyn Coakley. Listening. 3rd ed. Dubuque, IA: Brown, 1988.
