

    TRIANGULATION IN EVALUATION

    Design and Analysis Issues

JENNIFER GREENE
    CHARLES McCLINTOCK

    Cornell University

More effective use of mixed-methods evaluation designs employing quantitative and qualitative methods requires clarification of important design and analysis issues. Design needs include assessments of the relative costs and benefits of alternative mixed-methods designs, which can be differentiated by the independence of the different methods and their sequential or concurrent implementation. The evaluation reported herein illustrates an independent, concurrent mixed-method design and highlights its significant triangulation benefits. Strategies for analyzing quantitative and qualitative results are further needed. Underlying this analysis challenge is the issue of cross-paradigm triangulation. A comment on this issue is provided, in conjunction with several triangulation analysis strategies.

Considerable debate has accompanied the emergence of qualitative methodology and the naturalistic paradigm of inquiry within the evaluation arena. The intensity and persistence of this debate attests to its importance for both the theory and practice of evaluation. Though originally focused on the relative merits of quantitative versus qualitative methods and of positivist versus naturalistic paradigms, the debate has shifted to questions about the complementarity of these alternative methods and the degree of cross-perspective integration possible. This shift signals a greater acceptance of the naturalistic perspective, or at least of qualitative methods, within the evaluation community. There is also an emerging consensus that inquiry methods themselves are not inherently linked to one or the other paradigm (Bednarz, 1983; Patton, 1980; Reichardt and Cook, 1979). Rather, "methods are neutral in the sense that a hammer is neutral to its use for building fine furniture or smashing ants, that is, they serve the purposes of the researcher" (Bednarz, 1983: 4).

    AUTHORS' NOTE: An earlier version of this article was presented at the Joint Meeting of the Evaluation Network and the Evaluation Research Society, San Francisco, 1984.

    EVALUATION REVIEW, Vol. 9 No. 5, October 1985, 523-545. © 1985 Sage Publications, Inc.

This consensus about neutrality of methods, along with the widespread acceptance of qualitative methods, has afforded the evaluation community a vastly increased repertoire of methodological tools and has renewed interest in the time-honored methodological strategy of triangulation (Denzin, 1978; Jick, 1983; Webb et al., 1966, 1980). Broadly defined, triangulation is "the multiple employment of sources of data, observers, methods, or theories" (Bednarz, 1983: 38) in investigations of the same phenomenon. Between-method triangulation is the use of two or more different methods to measure the same phenomenon. The goal of triangulating methods is to strengthen the validity of the overall findings through congruence and/or complementarity of the results from each method. Congruence here means similarity, consistency, or convergence of results, whereas complementarity refers to one set of results enriching, expanding upon, clarifying, or illustrating the other. Thus, the essence of the triangulation logic is that the methods represent independent assessments of the same phenomenon and contain offsetting kinds of bias and measurement error (Campbell and Fiske, 1959).

Despite widespread advocacy of mixed-method evaluation designs with triangulation of quantitative and qualitative methods, several major obstacles inhibit their use. First, there is insufficient guidance regarding the implementation of different mixed-methods designs, which leads to confusion about the comparative costs and benefits of design choices (Mark and Shotland, 1984). Similarly, there are too few examples of data analysis in mixed-methods research, either in terms of comparing or integrating results, and even fewer that meaningfully attend to the underlying issue of cross-paradigm triangulation. Both concerns, design and analysis, will be addressed in this review of a two-part evaluation of program development processes in an educational organization. Our focus in this discussion is on triangulation in mixed-method designs employing quantitative and qualitative methods that are linked to contrasting positivist and naturalistic paradigms, respectively.


    MIXED-METHODS EVALUATION DESIGNS

Mixed-method evaluation designs can be differentiated along two dimensions: (a) the degree of independence of the quantitative and qualitative data collection and analysis activities and (b) the degree to which the implementation of both methods is sequential and iterative versus concurrent. Sieber's (1973) often-cited discussion of "the integration of fieldwork and survey methods" in social research emphasizes the benefits accrued from the sequential, iterative use of both methods by a single (i.e., not independent) researcher or research team. Madey (1982) provides a similar discussion for evaluation contexts, specifically an evaluation of the federally funded educational State Capacity Building program. In both examples, the authors highlight the multiple benefits of interactive mixed-methods inquiry in terms of design, data collection, analysis, and interpretation of results. However, these benefits notwithstanding, a nonindependent, sequential mixed-method strategy loses the capacity for triangulation. In this strategy, the methods are deliberately interactive, not independent, and they are applied singly over time so that they may or may not be measuring the same phenomenon.

Trend (1979) and Knapp (1979) illustrate the concurrent use of survey and ethnographic methods by different members of a project team for large-scale evaluations of federally funded demonstration programs in the areas of low-income housing and experimental education, respectively. This strategy can be labelled "semi-independent," in that the quantitative and qualitative methods were implemented separately by different individuals, but these individuals had some degree of interaction and communication during the inquiry process as members of a common project team. Both authors highlight the tensions experienced between the quantitative and qualitative components of the evaluation. Trend focuses on tensions incurred in resolving highly conflicting results, whereas Knapp includes problem definition, evaluator role conflict, and policy relevance of results in his discussion of tensions. These examples suggest that a semi-independent, concurrent mixed-method design requires a significant increase in resources and invokes a variety of tensions. The lack of methodological independence in this strategy also limits its capacity for triangulation.


Examples of relatively independent and concurrent use of quantitative and qualitative methods to study the same phenomenon are more rare. (Note that an independent, sequential mixed-method strategy approaches existing practice in the profession at large, with different researchers/evaluators building on each other's work.) However, there is a clear need for multiple "competing" evaluations, smaller in scope than the single "blockbuster" study, differentiated by the designs and methods of separate project teams (Cronbach and associates, 1980) or by orientation to a single stakeholder group (Cohen, 1983; Weiss, 1983). The benefits cited for this strategy are substantial, including improved project manageability and increased opportunities for true triangulation.

This brief review underscores the need for comparative assessments of various mixed-method designs to help dispel current confusion about their interchangeability. These assessments should clarify the relative costs and benefits of different designs on such criteria as project management, validity, and utilization of results. As illustrated, different designs pose different logistical requirements, and the logic of triangulation requires not just multiple methods but their independent, concurrent implementation as well (McClintock and Greene, forthcoming).

    DATA ANALYSIS IN MIXED-METHODS DESIGNS

The literature is even more sparse regarding the actual processes of mixed-method data analysis. As suggested by Trend (1979: 83), "the tendency is to relegate one type of analysis or the other to a secondary role, according to the nature of the research and the predilections of the investigators." Jick (1983: 142) offered the following reflection on this issue:

It is a delicate exercise to decide whether or not results have converged. In theory, multiple confirmation of findings may appear routine. If there is congruence it presumably is apparent. In practice, though, there are few guidelines for systematically ordering eclectic data in order to determine congruence or validity.... Given the differing nature of multimethod results, the determination is likely to be subjective.

Yet subjectivity accompanies all data interpretation. In short, with the renewed interest in mixed-method designs comes the need for systematic strategies for jointly analyzing and triangulating the results of quantitative and qualitative methods.

The underlying issue here is the possibility of integrating the different paradigms guiding the different methods. That is, can mixed-method data analysis strategies achieve between-paradigm integration, or must one triangulate results within a single paradigm, relegating one set of data (either the quantitative or the qualitative) as subsidiary to the other? The essence of the debate on this issue is well captured by its participants, first the "yes" position, followed by the "no."

In fact, all of the attributes which are said to make up the paradigms are logically independent. Just as the methods are not logically linked to any of the paradigmatic attributes, the attributes themselves are not logically linked to each other.... There is nothing to stop the researcher, except perhaps tradition, from mixing and matching the attributes from the two paradigms to achieve that combination which is most appropriate for the research problem and setting at hand (Reichardt and Cook, 1979: 18).

    Cross-philosophy triangulation is not possible because of the necessity of subsuming one approach to another. There are conflicting requisites from the totality of the perspective, i.e., the location of causality and its derivatives regarding validity, reliability, the limits of social science, and its mission.... There have been calls for the selective use of parts of the "qualitative and quantitative paradigms" in the belief that the researcher can somehow stand outside a perspective when choosing the ways to conduct social research. I have argued that even for individuals who can see the differences of alternative perspectives it is not possible to simultaneously work within them because at certain points in the research process to adhere to the tenets of one is to violate those of the other. Put differently, the requirements of differing perspectives are at odds (Bednarz, 1983: 39, 41; emphasis in the original).

(Additional perspectives on this debate are found in Guba and Lincoln, 1981: 76-77; Ianni and Orr, 1979; Patton, 1980; and Smith, 1983a, 1983b.)

A TRIANGULATED EVALUATION OF PROGRAM DEVELOPMENT

To illustrate these mixed-method design and analysis issues, we will review a two-part evaluation of a structured program development process used by an adult and community education organization. The evaluation included both a mail questionnaire administered to a statewide sample and on-site, open-ended interviews conducted with purposively selected state and local staff. Both methods addressed the same phenomena, but each was designed and implemented by separate evaluation teams. The evaluation thus represents an independent, concurrent mixed-method design. The evaluation also illustrates several analysis strategies for comparing quantitative and qualitative results. Further, the deliberate linkage of the questionnaire with a positivist perspective and the interview with a naturalistic perspective allowed us to comment on the issue of cross-perspective triangulation.²

    TRIANGULATION DESIGN

The questionnaire and interview components were implemented concurrently during the Spring of 1984 by two separate evaluation teams who started with a common conceptual framework, kept each other informed of activities and progress during the study, but otherwise functioned independently.³ Each set of data was analyzed separately and summarized in separate reports, from which an integrated summary of major findings and recommendations was prepared and disseminated within the client organization. The questionnaire component was expected to identify specific changes needed in the program development process. The emphasis in the interview component was largely descriptive. In both components, data collection focused on the nature and role of information used in program development, including the degree to which current practices of information gathering, exchange, interpretation, and reporting met needs for program decision making and accountability.

More specifically, the study's conceptual framework focused on information needs in program development and was developed from literatures on evaluation utilization and organizational decision making. Utilization issues centered on the importance of identifying evaluation questions that are of priority interest to program stakeholders (Gold, 1981; Guba and Lincoln, 1981; Patton, 1978). Evaluation studies that address important stakeholder information needs are more likely to produce results perceived as useful and actually used. However, understanding these information needs, or more broadly, the role of information in program decision making, is complicated considerably by organizational and political factors (Lotto, 1983; Thompson and King, 1981; Weiss, 1975). For example, stakeholder information needs can be influenced by (a) the perceived and actual models of decision making operating in the organization (Allison, 1971); (b) the different uses made of information (e.g., instrumental, conceptual, and/or symbolic uses; Feldman and March, 1982; Leviton and Hughes, 1981); and (c) the degree of uncertainty surrounding program goals and which inputs and program processes would lead to goal attainment (Maynard-Moody and McClintock, 1981).⁴ These and other aspects of the organizational context were included to provide a broader understanding of the role of information in program development and decision making.

Further, the use of two different methods in the design reflects theoretical concerns about meaningful assessment of information needs. In a recent review of initial trials of the stakeholder approach to evaluation, Weiss (1983) questioned the assumption that decision makers can articulate in advance their information needs, given the instability and lack of predictability inherent in most organizational milieus. In the same review, the original architect of this approach observed that "effective assessment of stakeholder needs remains a serious concern" (Gold, 1983: 68). Evaluation theorists who have addressed this concern have consistently argued for such naturalistic methods as open-ended interviewing (Cronbach, 1982; Gold, 1983; Guba and Lincoln, 1981; Patton, 1978). The dual methodology allowed for a test of this argument and provided a basis for assessments of triangulation design and analysis procedures.

The questionnaire contained 28 sets of questions. Eighteen close-ended sets assessed perceived needs for various kinds of information (e.g., about clients, program resources, management, and outcomes), the perceived usefulness of various methods of gathering information for these same information needs, the perceived usefulness of existing reporting methods for program development and accountability purposes, attitudes toward and inservice needs regarding long-range planning and evaluation, and uncertainties about programs and program development processes. The 10 open-ended questions generally gave respondents space to elaborate or comment further on these same areas (e.g., kinds of information needed but not available). The questionnaire was mailed to selected populations of four stakeholder groups (county staff and volunteers, campus faculty, and statewide administrators). Usable questionnaires were returned by 233 respondents, representing a 70% response rate. Analyses were primarily descriptive, with many comparisons among the stakeholder groups.


The on-site, open-ended interviews focused on themes identified in an interview guide: specifically, respondents' perceptions of current program development processes, information uses and needs vis-à-vis their own program development responsibilities, and reasons for these needs or anticipated uses of this information. A total of 27 interviews was conducted with representatives of the same four stakeholder groups (10 county staff, 11 volunteers, 2 campus faculty, and 4 statewide administrators), all representing two counties that had been purposively selected with a set of sampling criteria. Interviews lasted 45 to 60 minutes and were conducted on-site (county or campus) by trained interviewers. Data were analyzed via an inductive, iterative content analysis (Greene et al., 1984).

Assumptive and methodological characteristics of the questionnaire and the interview are contrasted in Table 1. This table portrays the deliberate linkage of the two methods with their respective paradigms. That is, the positivist nature of the questionnaire component is reflected in its intent to derive prescriptions for change from a deductive analysis of responses on a predetermined set of specific variables. Criteria of technical rigor guided questionnaire development (e.g., minimum measurement error), data collection (e.g., maximum response rate), and analysis (e.g., statistical significance) toward a reductionistic prioritizing of major findings. In contrast, the naturalistic nature of the interview component is reflected in its intent to describe and understand inductively the domain of inquiry from the multiple perspectives of respondents. Criteria of relevance and emic meaning guided interview development (e.g., open-ended, unstructured), data collection (e.g., emergent, on-site), and analysis (e.g., inductive, thematic) toward an expansionistic, holistic description of patterns of meaning in context.

TABLE 1
    A Comparison of the Questionnaire and Interview Along Paradigmatic and Methodological Dimensionsᵃ

    a. Dimensions from Reichardt and Cook (1979) and Guba and Lincoln (1981). Not applicable to this study and thus excluded are dimensions relevant to issues of causality and to the design and implementation of a treatment.

    [The body of Table 1 is not reproduced legibly in this transcript.]

    TRIANGULATION ANALYSIS

    In this independent, concurrent mixed-method design, the triangulation analysis is conducted on separately written reports, not on the raw data. The client also wrote an integrated summary of findings and recommendations for change based on the two reports, though it was much less detailed than the analysis reported here. Our efforts to compare, analyze, and integrate the two reports focus on three levels: descriptive results, major findings, and recommendations for change.

Descriptive results. As shown in Table 2, the descriptive results are compared by constructing a matrix reflecting the organization and major substantive areas of each report. The column and row headings of the matrix represent the major section headings of the questionnaire and interview reports, respectively, and matrix entries represent examples of results from within one or both report sections. This strategy allows for comparisons of both specific results and broader patterns.
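As a rough illustration of this matrix strategy, the sketch below records cross-report results in cells indexed by report section. The section names, entries, and the helper function are hypothetical placeholders for exposition, not the actual headings, findings, or procedure of this study; the diagonal count at the end anticipates the "broader patterns" comparison discussed later.

```python
# Minimal sketch of a cross-method comparison matrix (hypothetical labels).
from collections import defaultdict

# Rows = interview report sections, columns = questionnaire report sections.
interview_sections = ["People in the organization",
                      "Program development process",
                      "Information uses and needs"]
questionnaire_sections = ["Respondent characteristics",
                          "Information needs",
                          "Reporting methods"]

# Each cell holds results that both reports speak to, tagged by whether
# they are congruent (similar) or complementary (enriching) across methods.
matrix = defaultdict(list)

def enter(interview_sec, questionnaire_sec, result, relation):
    """Record a cross-report result in the cell for this row/column pair."""
    assert relation in ("congruent", "complementary")
    matrix[(interview_sec, questionnaire_sec)].append((result, relation))

# Hypothetical entries echoing the examples given in the text.
enter("People in the organization", "Respondent characteristics",
      "long tenure of volunteers and staff", "congruent")
enter("People in the organization", "Respondent characteristics",
      "role changes vs. positive interpersonal perceptions", "complementary")
enter("Program development process", "Information needs",
      "informal support network vs. needed communication links", "complementary")

# Broader pattern: how many filled cells fall on the diagonal, i.e. where
# the two reports organize the same substantive area under parallel headings.
diagonal = sum(1 for (i, q) in matrix
               if interview_sections.index(i) == questionnaire_sections.index(q))
print(f"{diagonal} of {len(matrix)} filled cells lie on the diagonal")
```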

Specific results can be reviewed for between-method congruence and complementarity. For example, the first section of the questionnaire report presents results on respondent characteristics, and the first section of the interview report describes characteristics of the people involved in the organization. Both of these sections contain similar information on the longevity of volunteers and staff with the organization, an instance of congruent findings. This questionnaire report section also notes the role changes of many members during their tenure with the organization, whereas results in this interview report section further highlight the strong, positive interpersonal perceptions among organization members, an instance of complementary findings.

For this particular study, Table 2 illustrates a high degree of between-method congruence and complementarity of specific results. The matrix entries include several instances of similar findings from the two different methods, for example, the member longevity noted above and the lack of utility of information in existing reports for county-level program development efforts. There are also many instances of complementary findings that enrich and expand upon each other: from the interviews, a description of the informal network of ongoing communication and support that characterizes the program development process, and from the questionnaire, perceived needs for strengthening some communication linkages in this program development network.

This matrix display also allows for a review of broader patterns of between-method congruence and complementarity of results. Such patterns can be assessed by comparing the content and organization of the major sections of each report and by analyzing the pattern of matrix entries. In this particular study, these patterns reveal a strong parallelism in the two sets of results, providing additional convergent support for the overall validity of the findings. The report section headings show similarities, and most of the matrix entries fall along the diagonal (a pattern that would hold if the matrix included all descriptive results). Although part of this parallelism is attributable to the common purpose guiding both inquiries, very different organizational frameworks could have been expected and accepted from the two methods.

    [Table 2, the matrix of descriptive results from the questionnaire and interview reports, is not reproduced legibly in this transcript.]

Major findings. Despite this convergence of descriptive results, the major findings of the two reports bear little resemblance to one another either in substance or in form. (See Tables 3 and 4 for these questionnaire and interview findings, respectively, presented in condensed form.) Substantively, the questionnaire conclusions are prescriptive, whereas the interview summary themes remain descriptive. In form, the questionnaire conclusions are focused, selective, and specific (reductionist); the interview themes are broad and general (expansionist) and also incorporate contextual and affective information obtained during the interview process (tacit knowledge). Further, the questionnaire conclusions represent derived relationships among discrete variables (particularistic), whereas interview themes represent patterns of meaning in context (holistic). In short, the summary findings of each component are highly consistent with the purpose, assumptions, and characteristics of the differing methodologies used (refer to Table 1).

This within-method consistency was further pursued by analyzing the links between the descriptive results and major findings of each report. The results, illustrated in Tables 3 and 4, reveal the markedly different analytic processes guiding the two forms of inquiry. For the questionnaire, the general pattern is a one-to-one mapping of selected descriptive results on major findings. Overall, this pattern represents the analytic guidance provided by an a priori set of questions in the questionnaire component. For example, because the context-relevant descriptive results (for instance, on respondent characteristics and perceived strengths of programs) are not related to these a priori questions, they are not represented in the prescriptive conclusions. Further, each of the other sets of descriptive results contributes to only one conclusion, and all conclusions are based on only one set of results. This pattern well reflects the deductive, reductionistic, linear nature of the quantitative questionnaire data analysis.

[Tables 3 and 4, presenting the questionnaire conclusions and the interview summary themes in condensed form, are not reproduced legibly in this transcript.]

In contrast, the interview pattern is more like a web of interconnections, weaving the threads of the descriptive results into the fabric of major themes. Again, this pattern overall represents the fact that the interview analysis was guided only by broad domains of inquiry. More specifically, (a) nearly all of the interview descriptive results, contextual and substantive, are incorporated into the major thematic findings; (b) many of these results contribute to more than one theme; and (c) the summary themes clearly are based on multiple sets of results. This pattern well reflects the emergent, expansionistic, holistic nature of the qualitative interview data analysis.
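To make the contrast between these two linkage patterns concrete, one could tabulate how many result sets feed each summary finding and how often each result set is reused. The short sketch below uses hypothetical labels (not the study's actual conclusions, themes, or counts) purely to show how a one-to-one mapping and a web-like mapping differ when counted this way.

```python
# Sketch contrasting one-to-one vs. web-like linkage (hypothetical labels).
from collections import Counter

# Each entry maps a summary finding to the descriptive result sets it draws on.
questionnaire_links = {
    "conclusion 1": ["result set A"],
    "conclusion 2": ["result set B"],
    "conclusion 3": ["result set C"],
}
interview_links = {
    "theme 1": ["result set A", "result set B", "result set D"],
    "theme 2": ["result set B", "result set C"],
    "theme 3": ["result set A", "result set C", "result set D"],
}

def describe(links):
    """Return (avg result sets per finding, avg findings per result set)."""
    per_finding = [len(sources) for sources in links.values()]
    reuse = Counter(src for sources in links.values() for src in sources)
    return (sum(per_finding) / len(per_finding),
            sum(reuse.values()) / len(reuse))

print("questionnaire:", describe(questionnaire_links))  # (1.0, 1.0): linear, one-to-one
print("interview:", describe(interview_links))          # > 1 on both: web-like
```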

Moreover, we believe that this within-method consistency for both study components, revealed in the differing substance, form, and derivation of their summary findings, is largely attributable to the independence of the two efforts. In our view, this independence preserved the assumptive and methodological integrity of each component. But what are the implications of these different summary findings for between-method triangulation? For answers to this, we now turn to an analysis of recommendations for change.

Recommendations for change. The questionnaire data identify improvements needed in the program development process, whereas the interview data describe the complexities and details of this process as conducted in the two selected counties. These results are consistent with the client's expectation that the detail and depth of the interview findings would make questionnaire recommendations more meaningful and easier to interpret.

Implementation of the questionnaire-based recommendations by themselves is constrained by two limitations of the methodology, one obvious and the other less apparent. First, a mail questionnaire, even with pilot testing and open-ended questions, is limited in its capacity for representing details of description, nuances of meaning, and patterns of interaction, a limitation especially problematic for cross-sectional designs. With the addition of the contextual data from the interviews, this limitation is at least partially countered. As shown in Tables 2, 3, and 4, the interview data highlight the complexity of information sharing both inside and outside the organization. There are many formal actors in this process: county and state staff, volunteers, campus faculty, outside agency personnel, and community leaders. Interview analyses identify the strength of relations among individual actors, the network of interconnections among groups of actors, and the types of information exchange that occur. Interview data also portray the feelings and values of the various actors for each other's contributions and for the organization as a whole.

The second limitation of the mail questionnaire is, ironically, also one of its major strengths. With a mail questionnaire, it is possible to collect data from a larger cross section of the population for a given cost and, therefore, to attempt to make the recommendations for change representative of the respondent groups sampled. In an action-oriented study, however, it is often necessary to ask the kinds of specific, detailed questions that render the questionnaire too specialized for some respondent groups. This is the case in the present study, in which the response rate for the volunteers' stratum (53%) is much lower than that for the rest of the sample (86%). Thus, although the questionnaire is more representative of the organization on a statewide basis than is the interview, it systematically excludes respondents who feel marginal to the formalized aspects of program development represented in the questionnaire. The interview format more successfully captures the perceptions and understandings of all participants in the program development process. The interview's more integrated portrayal of this process thereby strengthens the final questionnaire-based recommendations for change by (a) reducing the likelihood that recommendations will be ignored by grounding them in an existing strong link or useful exchange within the network; (b) focusing recommendations on specific types of actors or groups that are connected to others by weak links; (c) incorporating into recommendations the perceptions and values that are important to the actors; and (d) reducing the risk of recommended new activities replacing useful existing ones.

What emerges from the results of the two components is a set of recommendations for change that has structure, substance, and strength. If one imagines the entire set of change recommendations as a tent, the questionnaire data help to determine the shape of the structure and the number and placement of the stakes. The interview findings provide the connecting ropes and the degree of tension or slack that must be applied to maintain the integrity of the structure under varying conditions. This imagery also aptly describes the successful process and product of complementary between-method triangulation, in that the results of one method (the interview in this case) serve primarily to complement, enrich, and thereby strengthen the implications of the other.

Summary. This effort at between-method triangulation of results reveals congruence and complementarity at the level of specific results, significant substantive and structural differences at the level of major findings that preclude meaningful integration, and complementarity again at the level of recommendations for change. This effort also illustrates several strategies for comparing and analyzing results from quantitative and qualitative methods. Given the deliberate linking of method with paradigm in this study, does this effort also constitute an example of cross-paradigm triangulation?

    We think not. Following Bednarz (1983), Guba and Lincoln (1981), and Smith (1983a, 1983b), we suggest that triangulation is possible only within paradigms, that any effort to compare or integrate findings from different methods requires the prior adoption of one paradigm or the other, even when, as was true in this study, the methods themselves are linked to and implemented within alternative paradigms.

The integrity of the method-paradigm linkage in the present study is illustrated by the differences in the major findings of each component. Each set of findings well represents the contrasting assumptions of the methodology used. Further, it is precisely these differences that thwart triangulation efforts at this level. More successful, in terms of congruence and complementarity of findings, are triangulation efforts at the levels of specific descriptive findings and discrete recommendations for change. This specificity and discreteness, however, reflect the particularistic, reductionist stances of the questionnaire paradigm, not the expansionist, holistic stances of the interview paradigm.

Further, in the client's written summary of recommendations for change, interview results were consciously allocated a secondary, supportive role (Trend, 1979), as consistent with the study objectives. This all argues that our triangulation effort was conducted not across paradigms, but rather from the perspective represented by the questionnaire. To reinforce this point, we imagined what this effort would have looked like if conducted from the perspective represented by the interview. Our speculations suggest that very little of any questionnaire data would fit with, make sense, or otherwise be of convergent or complementary value to the interview results.


    CONCLUSION

This article uses the results of a mixed-method evaluation to illustrate significant design and analysis issues related to integrating quantitative and qualitative methods, specifically between-method and cross-paradigm triangulation. The strong link between method and paradigm deliberately established in this study significantly facilitated the discussion.

    The mixed-method evaluation design involved the independent, concurrent implementation of a quantitative questionnaire and a qualitative interview guide, both investigating the same phenomena. The benefits of this mixed-method strategy appear to be twofold. First, unique to this strategy, the independence of the two study components preserved the assumptive and methodological integrity of each, thus maximizing the intended value of each set of results and avoiding the kinds of between-method tensions reported by Trend (1979) and Knapp (1979). Second, opportunities for triangulation of results were significantly aided by the independent and concurrent implementation of both components. As illustrated, between-method triangulation of results can enhance fulfillment of study objectives beyond that provided by a single method (though increased costs, notably evaluator time, must also be noted). This illustration also offered several specific strategies for triangulated analysis of quantitative and qualitative results.

However, even with the method-paradigm linkage, we have argued that the triangulation effort in this study was conducted from the perspective represented by the questionnaire and thus does not constitute an instance of cross-paradigm triangulation. Different epistemological origins and assumptions preclude the possibility or sensibility of cross-paradigm triangulation.

    NOTES

1. For simplicity, the terms "paradigm" and "perspective" will be used interchangeably, and the labels positivist and naturalistic will be used respectively to refer to (a) the traditional, dominant perspectives of logical positivism and postpositivism, realism, and experimentalism and (b) the emergent perspectives of idealism, phenomenology, symbolic interactionism, and ethnomethodology in the evaluation field. Differences among members of these two camps of perspectives will not be addressed (see Bednarz, 1983; Norris, 1983; Phillips, 1983; and Smith, 1983a, 1983b). The quantitative and qualitative labels will be reserved for types of methods and data.

2. For discussions of the substantive findings of the questionnaire and interview components of this study, see McClintock and Nocera (1984) and Greene (1984), respectively.

3. Three individuals were members of both teams.

    4. Other major influences on information needs are the cognitive processes and personality styles of the individual users of information. See Nisbett and Ross (1980) and Kilmann (1979) for two different approaches to understanding these individual difference factors.

5. In one of the counties, the interviews preceded the questionnaire, whereas the order was reversed in the other. The interview data from both counties were similar, and the questionnaire data consistent with responses statewide. Thus, the double assessment in these two counties did not seem to affect their results.

6. Both the questionnaire and the interviews also served instructional purposes, providing field experiences for graduate courses in evaluation methods.

    REFERENCES

ALLISON, G. T. (1971) Essence of Decision: Explaining the Cuban Missile Crisis. Boston: Little, Brown.

    BEDNARZ, D. (1983) "Quantity and quality in evaluation research: A divergent view." Revised version of paper presented at the Joint Meeting of the Evaluation Network and the Evaluation Research Society, Chicago.

    CAMPBELL, D. T. and D. W. FISKE (1959) "Convergent and discriminant validation by the multitrait-multimethod matrix." Psychological Bulletin 56: 81-106.

    COHEN, D. K. (1983) "Evaluation and reform," in A. S. Bryk (ed.) Stakeholder-Based Evaluation. Beverly Hills, CA: Sage.

    CRONBACH, L. J. (1982) Designing Evaluations of Educational and Social Programs. San Francisco: Jossey-Bass.

    and associates (1980) Toward Reform of Program Evaluation. San Francisco: Jossey-Bass.

    DENZIN, N. K. (1978) The Research Act: An Introduction to Sociological Methods. New York: McGraw-Hill.

FELDMAN, M. S. and J. G. MARCH (1982) "Information in organizations as signal and symbol." Administrative Science Quarterly 26: 171-186.

GOLD, N. (1983) "Stakeholders and program evaluation: Characterizations and reflections," in A. S. Bryk (ed.) Stakeholder-Based Evaluation. San Francisco: Jossey-Bass.

    (1981) The Stakeholder Process in Educational Program Evaluation. Washington, DC: National Institute of Education.


GREENE, J. C. (1984) "Toward enhancing evaluation use: organizational and methodological perspectives." Presented at the Joint Meeting of the Evaluation Network and the Evaluation Research Society, San Francisco.

    J. L. COMPTON, B. RUIZ, and H. SAPPINGTON (1984) "Successful strategies for implementing qualitative methods." Unpublished manuscript.

    GUBA, E. G. and Y. S. LINCOLN (1981) Effective Evaluation. San Francisco: Jossey-Bass.

    IANNI, F. A. and M. T. ORR (1979) "Toward a rapprochement of quantitative and qualitative methodologies," in T. D. Cook and C. S. Reichardt (eds.) Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage.

    JICK, T. D. (1983) "Mixing qualitative and quantitative methods: Triangulation in action," in J. Van Maanen (ed.) Qualitative Methodology. Beverly Hills, CA: Sage.

KILMANN, R. H. (1979) Social Systems Design. New York: North-Holland.

    KNAPP, M. S. (1979) "Ethnographic contributions to evaluation research," in T. D. Cook and C. S. Reichardt (eds.) Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage.

LEVITON, L. C. and E. F. HUGHES (1981) "Research on the utilization of evaluation: a review and synthesis." Evaluation Review 5: 525-548.

LOTTO, L. S. (1983) "Revisiting the role of organizational effectiveness in education evaluation." Educational Evaluation and Policy Analysis 5: 367-378.

    MADEY, D. L. (1982) "Some benefits of integrating qualitative and quantitative methods in program evaluation, with illustrations." Educational Evaluation and Policy Analysis 4: 223-236.

    MARK, M. M. and R. L. SHOTLAND (1984) "Problems in drawing inferences from multiple methodologies." Presented at the Joint Meeting of the Evaluation Network and the Evaluation Research Society, San Francisco.

    MAYNARD-MOODY, S. and C. McCLINTOCK (1981) "Square pegs in round holes: program evaluation and organizational uncertainty." Policy Studies Journal 9: 644-666.

    McCLINTOCK, C. and J. C. GREENE (forthcoming) "Triangulation in practice." Evaluation and Program Planning.

    (1984) "Conceptual and methodological considerations in assessing information needs for planning and evaluation." Program Evaluation Studies Paper Series 6, Department of Human Service Studies, Cornell University.

McCLINTOCK, C. and C. NOCERA (1984) "Information management in program planning and evaluation: a study of program development in Extension home economics." Program Evaluation Studies Paper Series 8, Department of Human Service Studies, Cornell University.

NISBETT, R. and L. ROSS (1980) Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice-Hall.

    NORRIS, S. P. (1983) "The inconsistencies at the foundation of construct validation theory," in E. R. House (ed.) Philosophy of Evaluation, New Directions for Program Evaluation 19. Beverly Hills, CA: Sage.

PATTON, M. Q. (1980) Qualitative Evaluation Methods. Beverly Hills, CA: Sage.

    (1978) Utilization-Focused Evaluation. Beverly Hills, CA: Sage.

    PHILLIPS, D. C. (1983) "After the wake: postpositivist educational thought." Educational Researcher 12: 4-12.


REICHARDT, C. S. and T. D. COOK (1979) "Beyond qualitative versus quantitative methods," in T. D. Cook and C. S. Reichardt (eds.) Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage.

SIEBER, S. D. (1973) "The integration of fieldwork and survey methods." American Journal of Sociology 78: 1335-1359.

SMITH, J. K. (1983a) "Quantitative versus qualitative research: An attempt to clarify the issue." Educational Researcher 12: 6-13.

    (1983b) "Quantitative versus interpretive: The problem of conducting social inquiry," in E. R. House (ed.) Philosophy of Evaluation, New Directions for Program Evaluation 19. Beverly Hills, CA: Sage.

    THOMPSON, B. and J. A. KING (1981) "Evaluation utilization: a literature review and research agenda." Presented at the Annual Meeting of the American Educational Research Association, Los Angeles.

    TREND, M. G. (1979) "On the reconciliation of qualitative and quantitative analyses: a case study," in T. D. Cook and C. S. Reichardt (eds.) Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage.

    WEBB, E. J., D. T. CAMPBELL, R. D. SCHWARTZ, and L. SECHREST (1980) Nonreactive Measures in Social Sciences. Chicago: Rand McNally.

    (1966) Unobtrusive Measures: Nonreactive Research in the Social Sciences. Chicago: Rand McNally.

    WEISS, C. H. (1983) "Toward the future of stakeholder approaches in evaluation," in A. S. Bryk (ed.) Stakeholder-Based Evaluation. San Francisco: Jossey-Bass.

    (1975) "Evaluation research in the political context," in E. L. Streuning andM. Guttentag (eds.) Handbook of Evaluation Research. Beverly Hills, CA: Sage.

Jennifer Greene is Assistant Professor of Human Service Studies at Cornell University. Her extensive field experience in educational and other human service program evaluation has led to research interests in the areas of evaluation methodology and information utilization. Her current research efforts focus on the balance between user-responsiveness and technical quality in evaluation via case study investigation of the stakeholder approach to evaluation.

Charles McClintock is Assistant Dean for Educational Programs and Policy in the College of Human Ecology at Cornell University. He is also Associate Professor in the Department of Human Service Studies, where his teaching and research interests focus on evaluation methods and information management in organizations.
