Developing guidelines to enhance the evaluation of overseas development projects

Diane McDonald*

64 Falconer Street, North Fitzroy, Victoria, 3068, Australia

Received 1 April 1998; accepted 1 August 1998

Abstract

This paper identifies some of the key factors which should be considered by those engaged in the evaluation of overseas development projects, in order that the evaluation process leads to the empowerment of local stakeholders. An extensive review of prior research undertaken by the author has led to the development of seven broad 'guidelines' relating to: stakeholder participation; a focus on education; appropriate methodology; feedback and utilisation; enhancement of local capacity; partnership; and cross-cultural teaming. Rather than being definitive, the proposed list indicates significant aspects which could be of assistance to those working in the field. Although based on a wealth of experience accumulated by prominent evaluators, these 'guidelines' need to be further extended by other practitioners involved in the evaluation of international development cooperation activities in various project settings and cultural contexts. In so doing, particular attention should be given to seeking, and listening to, the views of indigenous stakeholders. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Cross-cultural evaluation; Evaluation of overseas development projects; Empowerment evaluation

1. The evaluation of overseas development projects: a brief overview

Overseas development assistance, as we know it today, really commenced after World War 2. Since that time, and particularly since the mid-1950s, there has been an enormous growth in the provision of aid and technical assistance to Third World countries, using funds donated by the so-called 'developed world'. Over the years, many changes have taken place in the design and implementation of overseas development projects and these have been well documented by practitioners and academics interested in this field (Binnendijk, 1989; Rebien, 1996; Stokke, 1991).

Along with the expansion of the development interventions themselves, many donor agencies put increased effort into assessing whether or not the aid dollar was well spent. Originally, evaluations were undertaken with a primary focus on accountability and assessing the achievement of project objectives. However, by the 1980s there was growing concern that despite an increasing amount of evaluation activity, there was little overall improvement in the project work undertaken.

Evaluation and Program Planning 22 (1999) 163–174

0149-7189/99/$ - see front matter © 1999 Elsevier Science Ltd. All rights reserved.

PII: S0149-7189(99)00009-9

* Tel.: +61-03-94817397; fax: +61-03-94194280.

E-mail address: [email protected] (D. McDonald)

Some of those well experienced in this area believed that although evaluation potentially had an important role to play in enhancing development assistance, there were major shortcomings in many studies conducted up until that time (Binnendijk, 1989). The following decade saw increased efforts to enhance the standard of evaluation and to develop tools and criteria by which its calibre could be assessed. Greater focus was given to the importance of developing more collaborative approaches in the design and conduct of project evaluations. In 1982, the Development Assistance Committee of the Organisation for Economic Cooperation and Development, concerned about poor performance in the field, established the Expert Group on Aid Evaluation 'to strengthen members' exchange of information and experience; to contribute to improving the effectiveness of aid by drawing on the lessons learned from evaluation; and to seek ways of supporting developing countries' own evaluation capacities' (Binnendijk, 1989).

However, despite general improvements in program evaluation itself, the evaluation of overseas development projects was considered to have 'lagged behind' (Ginsberg, 1988). Overall, endeavours made during the 1980s and early 1990s to improve the effectiveness of the evaluation of aid and development programs proved disappointing. The findings of an extensive review of the impact of aid documented in the Cassen Report (1986) showed that although 'aid worked', aid agencies had failed to apply the lessons learned in planning new activities (Forss, Cracknell & Samset, 1994). Furthermore, despite considerable rhetoric about the need to increase the participation of host country personnel in the design and conduct of evaluations, practice in this area had been poor (Rebien, 1996; Stokke, 1991). By the mid-1990s, international agreement about guidelines to improve the evaluation of overseas development projects remained elusive (Picciotto, 1995).

One major criticism raised by several who promoted a more participatory approach was that evaluators had paid insufficient attention to cross-cultural issues when designing and implementing their studies (Cuthbert, 1985; Duncan, 1985; Ginsberg, 1988; Klitgaard, 1995; Marsden & Oakley, 1990; Merryfield, 1985; Russen, Wentling & Zuloaga, 1995; Westwood & Brous, 1993). Research stressed the importance of taking cross-cultural issues into consideration in order to increase 'an evaluator's ability to provide reliable and useful information' (Merryfield, 1985). However, by the late 1980s there was a noticeable shortage of inquiry into this specific type of evaluation, particularly whether or not the theories and methods used by Western evaluators were 'automatically . . . transferable to the development context' (Ginsberg, 1988). Progressively, evaluators conducting evaluations within a cross-cultural context began to grapple with some of the important issues raised (Kumar, 1989; Slaughter, 1991; Williams, 1991).

In addition, in recent years, evaluators pursuing more participatory and collaborative approaches highlighted the need for the evaluation process to enhance the self-determination of people within the project community, i.e. 'to take action to achieve influence over the organizations and institutions which affect their lives and the communities in which they live' (Whitmore, 1988). They proposed certain conditions for evaluation and particular techniques that could be adopted in order to lead to the empowerment of the disenfranchised. During the past decade, an increasing number of those involved in the evaluation of overseas development projects (Feuerstein, 1986; Kalyalya, 1988; Rebien, 1996; Rugh, 1986; Thompson, 1990; Van Sant, 1989; Westwood et al., 1993) have considered empowerment to be an important outcome for which practitioners should strive.

2. A review of literature concerning the evaluation of overseas development projects (ODP)

2.1. Outline of the research undertaken

But what is the best way to design and implement a cross-cultural evaluation of an overseas development project, in order to empower local stakeholders? This paper aims to provide some insights to help answer this question. It attempts to identify some 'broad guidelines' which may be of some assistance to those working in this area.

In order to determine the key elements that could usefully be considered in the planning and implementation of ODP evaluation, an extensive review of relevant literature was undertaken in three main areas: the evaluation of overseas development projects; cross-cultural evaluation; and empowerment evaluation. Suitable research documents were identified via a broad-ranging electronic literature search, using the ERIC, Social Science, Current Contents and Australian Educational Index databases. Some 100 articles were examined.

From this material, some of the important 'ingredients' which contribute to an effective cross-cultural evaluation of development assistance activities were noted. These elements were then synthesised to develop a list of 'broad guidelines' providing some baseline ideas which an evaluator might consider when designing an evaluation in this area. However, it is important to stress from the outset that the resultant 'criteria' do not provide a definitive list. Rather, they should be seen as 'emerging' and should be interpreted 'liberally', leaving the evaluator free to decide on the applicability of each one within a particular cultural context.

The following review summarises some of the main problems associated with conventional approaches to evaluation and highlights what researchers have said about possible ways forward.

2.2. Problems with the conventional approach

There is widespread agreement amongst evaluators of overseas development projects that conventional types of evaluation have been largely instigated by donor agencies in response to their own information needs. As noted by Marsden and Oakley, these evaluations generally used 'externally determined' indicators, little understood by the project community. Despite other overt expressions of purpose, such studies focused principally on accountability and control (Bamberger, 1991; Rebien, 1996; Stokke, 1991; Van Sant, 1989), and were too narrowly concerned with assessing whether or not the project objectives had been attained (Binnendijk, 1989; Marsden & Oakley, 1990). Furthermore, Bamberger, in reviewing evaluations undertaken of World Bank projects, concluded that even where representatives from host country governments were involved in an evaluation, they tended to 'respond more to the information needs of the donor agencies than to the needs of project management' (Bamberger, 1991).

In addition, many evaluation designs were too sophisticated, costly and inappropriate within the local context (Binnendijk, 1989). Too much emphasis was placed on the collection of quantitative data, with little flexibility in the methods used. Other problems were that evaluation efforts were generally short in duration and allocated few resources, thus limiting the amount (and quality) of the data collected (Berlage & Stokke, 1992).

Another criticism has been that evaluators have, in the main, been foreigners, often with skills in the 'project substance' rather than evaluation expertise (Thompson, 1989; Van Sant, 1989). In general, they lacked knowledge of the local social, political and cultural context in which the project was located. Kalyalya bemoaned the fact that too great a focus on engaging the experts from the so-called 'developed world' maintained the exclusion of host country personnel, thereby limiting opportunities for local people to learn and to share their own insights via the evaluation process.

Despite changes in rhetoric in recent years, donor agencies using the conventional approach placed insufficient emphasis on building local capacity in project management and evaluation (Snyder & Doan, 1995). This concern was well articulated by Snyder and Doan (1995) in the following terms:

While the donor community has given strong encouragement to host country governments to demonstrate greater concern for the performance of their programs and agencies, their track record of building institutions capable of evaluating the causes and consequences of that performance is strikingly weak (Snyder & Doan, 1995).

In their meta-evaluation of 177 USAID projects, they found that despite a recommendation to encourage the participation of host country personnel in the evaluation process, this happened in only 40% of cases.

Practitioners have also indicated that where the donor agency had made an effort to involve host country project stakeholders, foreign governments often showed little enthusiasm or commitment to the evaluation. A shortage of trained and experienced local evaluation personnel, a lack of understanding of the purpose of the evaluation and a fear of 'hidden agendas' compounded further the difficulties associated with involvement at the local level. Involving beneficiaries in a collaborative evaluation process remained one of the biggest challenges facing those engaged in foreign assistance projects (Thompson, 1991).

Moreover, evaluations undertaken using conventional methods failed, all too often, to lead to improvements in program management and implementation, either of the existing project or in terms of future ones. Amongst others, it was argued that insufficient support for the involvement of the project community throughout the evaluation contributed to disinterest in the application of its findings (Binnendijk, 1989). A major concern was one of timeliness, for often decisions about future projects were made before the evaluation data was made known (Bamberger, 1991). Lack of appropriate feedback to the decision-makers within the project community was one of the major impediments to effective evaluation (Cracknell, 1996; Stokke, 1991; Van Sant, 1989). To emphasise problems inherent in this area, Cracknell (1991) quoted from a government policy document:

Evaluators should not show copies (i.e. of evaluation reports to officials in developing countries) . . . If a developing country has supplied personnel to work closely with the evaluation team, the content of an early draft may be agreed with them, but not the final draft or final report.

2.3. The purpose of the evaluation of overseas development projects

From the literature selected, four key objectives of ODP evaluation have been identified.

Several evaluators consider that one of the important goals of an evaluation is to provide reliable and valid information to assist in decision making (Rebien, 1996; Rugh, 1994; Van Sant, 1989). Such information, according to Rugh (1994) in his work with NGOs, needs to be useful, a view supported by both Patton (1985b) and Stokke (1991).

Strong support is also given for strengthening the focus of an evaluation to ensure that it provides an educational opportunity for all stakeholders (Cracknell, 1996; Marsden & Oakley, 1990; Picciotto, 1995; Rugh, 1994; Swantz, 1992; Van Sant, 1989; Vargas, 1991).

A third important aim, identified by these writers and many others such as Snyder and Doan (1995) and Kalyalya, is to develop the capacity of host country personnel, including members of the project community, to evaluate their own activities.

Although implied by many writers within the context of their discussions about participation, Marsden and Oakley put the strongest case for the inclusion of a fourth objective, that being to build a meaningful and equitable partnership. For them:

Partnership . . . is a relationship . . . which goes beyond the mind and the intellect, and enters the heart and the emotions . . . (It) requires a sharing of visions, dreams, hopes, fears, aspirations and frustrations among members of the project constituencies (Marsden & Oakley, 1990).

They go on to warn, however, that real partnership must be developed over time:

To attempt a sudden, short-duration, quick-fix partnership during a limited-purpose social development evaluation exercise will be both unrealistic and difficult . . . (It) is possible only when there has been some history of sharing and understanding across constituencies (Marsden & Oakley, 1990).

2.4. How can evaluations be made more effective?

There is no shortage of suggestions regarding ways to improve the evaluation of development assistance projects. Pleasingly, there is a fair degree of consensus about the way in which this should be achieved. Many writers elaborate on the importance of making changes in certain areas, but in a collection of articles based on the proceedings of an international conference conducted by aid agencies in 1989, Marsden and Oakley provide the most comprehensive coverage of key factors that can assist effective evaluation in this field. These comprise the need to:

- ensure participation of stakeholder groups throughout the evaluation;
- address the issues of relevance to stakeholders;
- look beyond purely assessing the achievement of objectives to include unexpected outcomes;
- orient the evaluation towards enhancing learning for the entire project community;
- develop appropriate methodologies that provide relevant, timely and accurate information;
- build evaluation activities into on-going project activities;
- use strategies to enhance the capabilities of the project community to undertake future evaluations;
- assess the impact of the project within the broader social, political and cultural context;
- ensure appropriate feedback to all stakeholders throughout the evaluation process.

They argue that in order to integrate these principles into an evaluation, it is necessary for aid and development agencies to cultivate quite different attitudes, such as 'a fundamental re-alignment of the relationship between donor agency and beneficiary'; longer term contact with the project community; and redefining the role of the evaluator as facilitator, mediator and catalyst (Marsden & Oakley, 1990). They suggest several project-level indicators that could be reviewed in order to assess whether or not an evaluation has enhanced local capacity and equality in partnership, including: 'evidence of shared decision making (and) leadership'; 'signs of solidarity and cohesion'; 'commitment to the project . . . goals and activities'; 'improve(d) . . . technical and managerial competence'; and 'capacity for self-reflection . . . critical analysis' and 'action' (Marsden & Oakley, 1990).

2.5. How do evaluations conducted by non-government organisations (NGOs) compare with those undertaken by other donors?

While the larger bilateral (government-to-government aid and development providers) and multilateral (international aid and development providers) agencies are often criticised for their lack of contact with the local community, their colleagues in the non-government sector are, according to Riddell (1992), noted for their affinity with their indigenous project partners. Evidence indicates that whereas aid ministries are, as Cracknell (1996) puts it, under great pressure 'to give priority in their evaluation work to the accountability objective', NGOs engaged in the evaluation of development projects have a tendency to focus on self-evaluation and learning. It could therefore be surmised that NGOs generally have had a better record than their government counterparts in the conduct of appropriate evaluation activities.

However, Cracknell warned that although many donors favour the allocation of funds to NGOs:

. . . there is growing concern as to whether the NGOs are in fact as efficient in . . . achieving their developmental objectives, as they are generally assumed to be, and it may well be in the interests of the NGOs themselves to improve their evaluation practices . . . (Cracknell, 1996).


A list of difficulties associated with NGO evaluation of development projects assembled by Kalyalya closely mirrors the problems faced by other donor agencies (Kalyalya, 1988). These include: foreign experts, unfamiliar with the local language and culture, 'visited . . . projects for only a few days, talking mostly with the leaders and looking mainly at physical aspects'; evaluation reports, rarely seen by project participants, viewed issues mainly from the donor perspective; qualified local evaluators were not involved in the evaluation; and project participants learned little about how to improve their activities (Kalyalya, 1988). A number of other issues were raised by Brown as peculiar to the evaluation of NGO projects, including problems associated with the lack of clear 'boundaries of responsibility' within a project, and the tendency for NGOs to be too fluid in their definition of project objectives (Brown, 1991).

Within a recent collection of articles edited by Stokke (b), Marcussen, comparing the performances of aid and development NGOs with their multilateral colleagues, found that the 'comparative advantage' of NGOs was 'more a myth than a reality' (Marcussen, 1996). Specifically, he concluded that there were no significant differences between the participatory approaches of NGOs and multilateral agencies; that NGOs produced poorer quality project documents; and that both types of organisation experienced difficulties in the area of sustainability of project activities (Marcussen, 1996).

The findings of a subsequent major review of 240 NGO projects in 26 developing countries, conducted on behalf of the OECD Development Assistance Committee Expert Group on Evaluation, showed that . . . 'there is still a lack of firm and reliable evidence on the impact of NGO development projects and programmes' (Riddell, Krise, Kyllonen, Ojanpera & Vielajus, 1997). While the study found that some NGOs and the smaller community-based organisations were experimenting with new methods, there was 'a paucity of evidence of participation' (Riddell et al., 1997).

2.6. Methods to use in evaluation of overseas development projects

Many studies, including that undertaken by Berlage and Stokke, strongly recommend that if evaluators are serious about wanting to collect reliable and usable information, they need to give careful consideration to methodology. The applicability of Western methods to evaluate overseas development projects has been questioned by many working in the field, including Rist (1995) and Merryfield (1985). Several writers, amongst them Ginsberg (1988), have called for the development of new techniques more appropriate to the local context. Considerable support has been given by evaluators including Rist (1995), Knox and Hughes (1994), Salmen (1989) and Campbell (1979) for multiple methods to be used, incorporating both quantitative and qualitative modes. For Marsden and Oakley these methods should be: participatory; regular; 'based upon the notion of self-evaluation'; able to 'merge easily . . . with on-going activities'; in keeping with the capabilities and availability of the project community; and systematic and sensitive to cost (p. 82). Above all, they argue, the methods chosen 'must place project participants centre stage' (p. 91). This approach can be likened to that of Guba and Lincoln, who stress the need for 'the claims, concerns, and issues of stakeholders (to) serve as organizational foci (the basis for determining what information is needed)' (Guba & Lincoln, 1989).

Based on the broad experience of evaluators involved in conducting evaluations of overseas development projects (including indigenous evaluation personnel), Marsden and Oakley specify that effective evaluations should contain six main stages (Marsden & Oakley, 1990). These are to: 'discuss, explore and agree terms of reference' for the evaluation with stakeholder groups; 'clarify . . . expected outcomes'; determine responsibilities; 'select appropriate methods' and explore new ones during the evaluation process; 'encourage open debate' of 'provisional conclusions'; and report findings to all parties (Marsden & Oakley, 1990).

2.7. Outcomes of this approach

Researchers have clearly articulated what they consider to be the major outcomes resulting from re-focusing ODP evaluation as specified in the above discussion. Writing of the benefits of the participatory approach, some studies conclude that increased stakeholder involvement and access to feedback enhances learning and leads to greater commitment to the utilisation of the findings (Lawrence, 1989; Patton, 1985b; Rebien, 1996; Van Sant, 1989). Supporting this conclusion, Forss et al. propose that 'the road to utility passes through organizational learning' (Forss, Cracknell & Samset, 1994).

A second outcome, noted by several writers including Salmen, is that involvement of the project community in the evaluation enhances ownership of the project, increasing its chance of success (Salmen, 1989). By encouraging the project community to find its own solutions to project improvement, Kalyalya observed great improvement in national capacity to plan more effective strategies which then lead to sustained development.

For several of those involved in ODP evaluation, training local people in evaluation techniques and undertaking activities geared towards institutional capacity building leads to the achievement of a third major outcome: empowerment (Kalyalya, 1988; Rebien, 1996; Rist, 1995; Vargas, 1991). The ideas expressed by Rebien (1996) regarding the importance of giving local stakeholders greater access to information in order to increase their sense of power, and the need to encourage them to create their own solutions leading to social change, are shared by these other writers.

A further, and arguably the most significant, outcome of a more participatory approach to evaluation relates to increased solidarity between the developed and developing worlds and the building of meaningful partnership (Marsden & Oakley, 1990; Vargas, 1991).

2.8. The role of the evaluator

As indicated previously, in order that an evaluation leads to the above outcomes, the role of the external, objective evaluator needs to be re-conceptualised. Patton, for one, emphasised the importance of distinguishing 'what evaluators do' from 'what evaluation is' (Patton, 1994). Marsden and Oakley identify several desirable qualities, including being able to: 'combine knowledge of local cultures with knowledge of evaluation techniques'; 'generate trust' and empathise with target groups; 'recognise the constraints established by the cultural context'; commit themselves fully and cope 'with difficult or unfamiliar local conditions'; 'be sensitive to local power structures'; and collect information from 'non-powerful' sources in the project community (Marsden & Oakley, 1990). Added to this should be an ability to support the project community in utilising the findings and spreading the lessons learned.

More recently, Cracknell identifies an important change in the role of the evaluator, from 'disinterested observer to . . . moderator . . . negotiator . . . (and) agent of change' (Cracknell, 1991). Other roles identified earlier include those of facilitator, trainer and catalyst. In specifying the most important function of an evaluator, Marsden and Oakley state categorically that it must be 'to facilitate mutual understanding, reflection and learning across constituencies' involved in the activity (Marsden & Oakley, 1990). Thompson agrees, seeing the evaluator as a collaborator who 'must deal with the separate and often conflicting agendas' of the major stakeholder groups (Thompson, 1991). Extending this notion, Mathison sees the role of the evaluator as that of 'an involved collaborating participant in the evaluation', something which she considers can be achieved only by developing a 'long term relationship' with the host organisation (Mathison, 1994). Following on from his concern about the importance of feedback, Van Sant (1989) stresses that another important quality is the ability of the evaluator to tailor the presentation of the findings to various stakeholder groups.

Some researchers have argued that the role of the evaluator should be 'that of a neutral and detached observer' (Knox & Hughes, 1994). However, others question whether this is possible, or even desirable (Marsden & Oakley, 1990). In his article about participatory evaluation in NGOs, Taylor argues that 'evaluation necessarily involves a large element of subjective judgement, for the personal values of those engaged in evaluating are always a part of the evaluation process itself' (Taylor, 1991). Lincoln puts the case even more strongly:

Only by abandoning the senseless commitment to what we now think of as objectivity can we reattain the state of being a learning community (Lincoln, 1995).

2.9. Obstacles to the evaluation of overseas development projects

Despite all good intentions and extensive planning, the evaluator is often confronted by unexpected obstacles. Cuthbert (1985) identified a long list of logistical and situation-specific constraints that inhibited the timely completion of her research in the Caribbean. Similarly, Ginsberg (1988) reported some of the unexpected difficulties of working in a foreign country, such as lack of resources and materials easily obtained in the West. As mentioned earlier, evaluators also face certain difficulties in trying to involve local stakeholders in the evaluation process. In addition to the problems already noted, Rebien (1996) raised a concern about the possibility that in some instances, efforts to encourage participation could actually result in manipulation.

Another concern is the potential for bias. Concerned about the legitimacy of some of the methodological instruments used in the evaluation of social development projects, Brown raises the possibility of local project personnel merely `mirror(ing) the attitudes of the change agents' (Brown, 1991). Other researchers have put forward strong arguments about the need to devise new, appropriate, evaluative methodologies that will help to counter accusations of bias (Marsden & Oakley, 1990; Salmen, 1989; Vargas, 1991). In response to issues raised about the lack of objectivity of local evaluators, several writers, including Riddell, suggest that the use of triangulation would help to avoid any prejudiced findings (Riddell, 1992). The engagement of an evaluation team comprising both local and external personnel could greatly assist in this regard (Slaughter, 1991).

D. McDonald / Evaluation and Program Planning 22 (1999) 163–174

A further obstacle to effective evaluation leading to use is identified by Picciotto (1995). Concluding on a sobering note, he cautions that despite the relevance of the findings, there is no guarantee that they will be taken up by project staff. This view is supported by Kumar, who warns evaluators to be aware that the results of their studies `are often conceptualized differently by the development community' (Kumar, 1995). Similarly, Ginsberg stresses the importance of the evaluator being conscious of `the political ramifications . . . (associated with the) dissemination of evaluation results' (Ginsberg, 1988).

In summary, evaluators of development projects are faced with many important challenges which will need to be overcome if they are to succeed in their task. Not least of these is a point raised by Salmen, that educated professionals (both external and local) might find it difficult to admit to having `something to learn from the uneducated' (Salmen, 1989).

3. The cross-cultural evaluation of overseas development projects

3.1. Additional challenges confronting the evaluator

The above review focuses on ODP evaluations in general. But what new issues need to be considered in light of the cross-cultural context in which most of these evaluations take place? For Patton, cross-cultural evaluation is `a two-way educational process aimed at mutual understanding' (Patton, 1985a). Evaluators assigned this task face complex challenges due to cultural differences between themselves and people of the host country in which the evaluation takes place. Amongst these challenges are differences in: world view; beliefs and values; ethics; language, communication and styles of interaction; social relationships; attitude towards time; infrastructure; and political sensitivities (Merryfield, 1985).

Based on her experience, Cuthbert found that `unfamiliarity with a specific cultural context (made) it much more difficult for the perceptions of outside evaluators to reflect the reality' (Cuthbert, 1985). Further, she believed that important information could be lost in trying to interpret across cultures. According to other researchers, a number of additional difficulties confront the cross-cultural evaluator (Marsden & Oakley, 1990; Merryfield, 1985). In addition to the inappropriateness of some Western evaluation techniques, they note issues such as knowing which variables exist within a given cultural context and the fact that regional cultural variations could complicate the evaluation task further. Another important matter enunciated by Rebien (1996) is that in some instances the concept of participation itself might not be in accordance with the local culture.

3.2. Possible ways forward

Evaluators involved in ODP evaluation have suggested several ways in which these studies could be improved by taking culture into account. Essentially these suggestions fall into two main areas: one is the use of appropriate methodologies; the other, the engagement of suitable personnel.

Based on his experience as a Peace Corps volunteer, Russon suggested that cross-cultural evaluators should adopt a more naturalistic approach which assumes `multiple, intangible realities' (Russon, 1995). The benefits of presenting evaluative information to host country participants in a qualitative mode have been noted by several writers. Cuthbert found this approach fitted with `the strong oral and narrative traditions of such cultures' (Cuthbert, 1985). Qualitative case study reports were found to be an effective way to present evaluation findings (Garaway, 1996). This view is supported by Russon (1995), whose experience showed that reports presented in case study mode were generally more appropriate than even simple quantitative presentations. One exception mentioned by Russon, Wentling and Zuloaga (1995) was where the target audience had a background in quantitative measures. But even so, this latter research suggests the possible `intuitive appeal' of the qualitative data, which could lead to action (Russon et al., 1995).

As with the evaluation of development projects in general, methods used in cross-cultural evaluations should be multiple, using both qualitative and quantitative techniques. In addition, evaluators such as Cuthbert (1985) and Merryfield (1985) suggest other important features. These include: flexibility, particularly in terms of time and attitudes; involving the various cultural groups within the target community throughout the evaluation process; and the development of culturally sensitive instruments for data collection. A further point mentioned by Russon (1995) is the need to adapt methods of reporting to suit the local cultural context.

In addition, various evaluators provide helpful insights into the skills and personal qualities required of the `successful' cross-cultural evaluator. Besides the general skills and qualities identified earlier as important for all evaluators, Seefeldt (1985) found certain interpersonal attributes indispensable for working effectively in a cross-cultural situation. In particular he felt that `tolerance for ambiguity, patience, adaptiveness, capacity for tacit learning and courtesy (were) especially necessary'. For Fetterman (1984), who worked extensively in the area of ethnographic research, another essential trait was the capacity to deal with ethical dilemmas. Based on his comprehensive work with international development organisations, Duncan (1985) adds to this list the need for the cross-cultural evaluator to integrate into the local `development culture'. For the evaluator coming from a very different life experience, developing an understanding of how the target audience views `development' can be quite a challenge.

Several writers have documented the benefits of evaluation teams comprising both foreign personnel and members of the host community (Chow, Murray & Charoula, 1996; Cuthbert, 1985; Duncan, 1985; McAlpine, 1992; Merryfield, 1985; Slaughter, 1991; Westwood & Brous, 1993). Such teams can strengthen the evaluation process by helping to overcome some of the problems faced by the sole external evaluator. The team approach provides, according to Merryfield, a `diversity of perspectives and interests' which enhances the effectiveness of the study (Merryfield, 1985).

McAlpine and Slaughter highlight some of the significant contributions made by what they respectively call the `bilingual researcher' and the `cultural expert', someone who is most likely to be a member of the host community (McAlpine, 1992; Slaughter, 1991). According to Slaughter, this person contributes local knowledge and skills and an ability to present views persuasively to the project community, and can establish credibility and receptivity of the evaluation amongst local people. She considers that the possibility of bias or partisanship on the part of the local evaluator can be overcome by triangulation and by ensuring the representation of different viewpoints in evaluation reports, concluding that overall, `the advantages in terms of cultural sensitivity and a fair hearing for minority cultural groups far outweighs any disadvantages' (Slaughter, 1991). The idea that there were potential benefits for the evaluator in incorporating cross-cultural triangulation into the research design was also promoted by Campbell (1979).

In their review of issues facing evaluators who are part of a multicultural team, Chow et al. (1996) identified several important personal attributes for team members, including: a respect for difference; an openness to learn from the world views of other team members; an ability to overcome the tendency to stereotype individuals of various backgrounds; and an awareness of his/her own biased attitudes, beliefs and feelings and how they might affect others.

Ideally then, an evaluation should be undertaken by a cross-cultural evaluation team which includes at least one member of the host community, preferably someone working closely with the development project itself. However, if it is impossible to locate a suitable local candidate, and no external evaluator familiar with the host culture is available, then, according to Russon et al. (1995), adequate training about the cultural context should be provided to any foreigner engaged in the evaluation. Some writers have noted that one of the most significant difficulties faced by external cross-cultural evaluators has been an inability to speak the local language (Ginsberg, 1988). However, while he agrees that language skills are helpful, Duncan (1985) argued that they are not essential. Along a similar line, Smith and Robbins contend that building `rapport with potential informants is more a function of time spent on site and of interpersonal skills than it is of cultural identity' (Smith & Robbins, 1984).

4. Other issues to consider in order that an evaluation leads to the empowerment of local stakeholders

As noted earlier, in recent years evaluators engaged in the review of overseas development projects have given increased emphasis to the importance of empowering local stakeholders through the evaluation process. According to Fetterman, one of the leading proponents of this approach, people are no longer `tolerant of the limited role of the outside expert . . . Participation, collaboration, and empowerment are becoming requirements in many community-based evaluations, not recommendations' (Fetterman, 1996).

Many researchers who promote what has become known as `empowerment evaluation', including Fetterman, Dugan, Mithaug, Preskill, Papineau and Kiely, Elkington and Owen, Hurworth and Whitmore, consider the desired outcome of this approach to be increased self-determination and/or liberation of program participants (Dugan, 1996; Elkington & Owen, 1996; Fetterman, 1994a; Fetterman, 1994b; Fetterman, 1995; Fetterman, 1996; Hurworth, 1995; Mithaug, 1996; Papineau & Kiely, 1996; Preskill, 1996; Whitmore, 1988; Whitmore, 1990). For Fetterman, this form of evaluation is of particular importance for the disenfranchised, `to ensure that their voices are heard and their real problems are addressed' (Fetterman, 1996). Quoting Gilley (1996), Elkington and Owen noted that `the benefits of participating in empowering processes . . . include(d) increased feelings of self-worth . . . ; improved skills; improved relationships; greater sense of one's rights; and improved information and knowledge' (Elkington & Owen, 1996).

It is not, however, the intention of the current paper to provide a detailed analysis of empowerment evaluation. Rather, it is to identify some of the more important considerations that should be taken into account by those conducting a cross-cultural evaluation, in order that it leads to the increased empowerment of host country project participants. It should be noted that prior to the 1990s the term `empowerment evaluation' was not commonly used; however, some researchers, writing about other forms of evaluation, used terminology which today would be equated with an empowering approach. For example, Brunner and Guzman considered the aim of participatory evaluation was to empower dominated groups to build `a just and egalitarian society' (Brunner & Guzman, 1989).

However, while it can be likened to other evaluation approaches such as participatory, collaborative and action-research, investigation generally shows that empowerment evaluation extends these evaluative forms into a more emancipatory dimension. In the eyes of Fetterman, the project community should not only participate in the evaluation process, but control it, and in so doing `empower themselves' (Fetterman, 1996). In a similar vein, both Usher (1995) and Rugh (1986) promoted the practice of self-evaluation. In a recent article concerning the role of evaluation in creating social change, Vanderplaat contended that evaluation strategies designed to lead to empowerment were not only educative, but should `construct knowledge for communicative and emancipatory interests' (Vanderplaat, 1995). The earlier research of Whitmore (1988) concluded that for participation to lead to increased utilisation, stakeholders needed to be involved in empowering ways. For her, an empowering evaluation process should not only enable program participants to understand and critically appraise their situation within a broader social, political and cultural context, but also encourage them to take action resulting in personal and social change. This approach, of education leading to action, was strongly promoted by Freire (1972), a powerful advocate for the liberation of the oppressed.

A key focus for empowerment evaluators is the collection of data from `non-powerful' sources in the project community. This approach involves the application of `techniques to encourage improvements stemming from grass-roots' decisions' (Kalyalya, 1988). An evaluator subscribing to an empowerment approach ensures that information from the different stakeholder groups is shared within the project community, so that they feel empowered in the process. Therefore, evaluators who desire empowerment as an outcome need to act as trainers, catalysts, enablers, negotiators, interpreters, advocates and, as noted by Preskill, `educators of learning and change' (Preskill, 1996).

One form of evaluation that is particularly pertinent to the current study is that embodied within Participatory Rural Appraisal (PRA), which evolved from the Third World around the beginning of the 1990s. According to Chambers, PRA, which is now widely practised in many parts of the developing world, enables `local people to share, enhance and analyze their knowledge of life and conditions, to plan and act' (Chambers, 1994). Using participatory and empowering techniques, PRA is increasingly being utilised in the evaluation of programs directed towards the marginalised rural poor. It emphasises the vital role to be played by program beneficiaries in the planning and conduct of the evaluation, and in sharing their knowledge and experience with outsiders. For this to be effective, PRA demands that these outsiders, including representatives of donor agencies and external evaluators themselves, `step off their pedestals, sit down, `hand over the stick', and listen and learn' (Chambers, 1994). In his article, Chambers provided evidence that in PRA evaluators take on a further, more humble, yet arguably more emancipated role: that of `learners and trainees' (Chambers, 1994).

In considering such approaches, there is evidence to show that certain conditions need to prevail for the successful conduct of an empowerment evaluation. Fetterman stated that overall, the atmosphere should be `honest, self-critical, trusting, and supportive' and `conducive to sharing successes and failures'. Program participants, who should control the evaluation process, should have the `latitude to experiment, taking both risks and responsibility for their actions' (Fetterman, 1996). They should also be `clear about their values and share a common vision for a better future' (Brunner & Guzman, 1989).

Further conditions, applying to the agency conducting the study, have been specified by Elkington and Owen (1996). These include: a commitment by management to enhancing staff evaluation skills; agreement that learning is a legitimate purpose for evaluation; giving `key stakeholders the licence to act'; openness to considering `broader socio-political factors and value positions' and a willingness to examine internal attitudes and practices; openness to self-reflection in order to change and improve; and a `commitment to social justice and social action' (Elkington & Owen, 1996).

5. Possible guidelines emerging from the literature

From this review it is clear that evaluators have progressed considerably in defining practices to improve ODP evaluation and in identifying some of the difficulties which they face in undertaking this task. It is now opportune to endeavour to draw out the key factors that an evaluator could usefully consider when planning and implementing a cross-cultural evaluation of an overseas development project, in order that it leads to the empowerment of host country stakeholders. Seven main components have been identified. These comprise:

- Participation (Lawrence, 1989; Marsden & Oakley, 1990; Rebien, 1996; Thompson, 1991) – An evaluation needs to be participatory, involving all stakeholder groups. In particular, it should address the needs of project participants and beneficiaries, involving extensive discussions with members of the various cultural and sub-cultural groups in the community. The evaluation process ought to be driven by program participants, who need to be engaged at all stages. The evaluator needs to provide methodological guidance and support to assist them in this task.

- Education leading to action (Cracknell, 1996; Rugh, 1986; Vargas, 1991; Whitmore, 1990) – The purpose of the evaluation should extend beyond accountability and an assessment of the achievement of objectives, towards an educational focus for all stakeholders. The evaluation process needs to: enable program participants and beneficiaries to learn more about the project; assess critically its impact within the broader social, political and cultural context; and take action leading to personal and social change.

- Appropriate methodology (Chambers, 1994; Marsden & Oakley, 1990; Merryfield, 1985; Rist, 1995) – Methods included in the evaluation ought to be appropriate to the situation and sensitive to the local cultural context and protocols. Data collection instruments, ideally built into project activities, need to develop the understanding and commitment of the project community for conducting its own ongoing evaluation activities. Efforts should be made to collect information from all cultural and sub-cultural groups within the target community. Evaluation strategies ought to recognise the need to develop a learning community, and to enhance self-determination. The emphasis needs to be on flexibility and the collection of systematic, relevant, timely and valid data, using multiple techniques which include both qualitative and quantitative instruments, giving greater prominence to the former.

- Maximisation of feedback and promotion of utilisation (Binnendijk, 1989; Cracknell, 1996; Stokke, 1991; Van Sant, 1989) – The evaluation should provide regular feedback to stakeholder groups throughout the evaluation process. The information provided needs to be: timely; useful for decision-making purposes; and presented in a suitable form, giving consideration to language and cultural context. This will enhance participant ownership of the evaluation process and encourage use of the findings for program improvement and sustainability.

- Enhancement of local capacity (Fetterman, 1994a; Fetterman, 1994b; Fetterman, 1995; Kalyalya, 1988; Rebien, 1996; Rugh, 1994) – The evaluation process should increase the capabilities of local stakeholders to manage and evaluate their own programs. This includes enhancing their self-efficacy, leading to self-determination. Members of the various cultural groups within the project community need to be encouraged and supported to develop skills to enhance their independence, in particular in terms of: expressing their needs; establishing goals; and developing a plan of action to achieve them.

- Building of partnership (Marsden & Oakley, 1990; Vargas, 1991) – The evaluation should aim to build an equitable partnership between project constituents. This involves developing a mutual understanding of each other's world view, including cultural differences and the way in which real development can be achieved. It requires long-term contact between the donor and the project community, leading to solidarity and mutual support.

- Developing and resourcing cross-cultural evaluation teams (Chow et al., 1996; McAlpine, 1992; Slaughter, 1991; Westwood & Brous, 1993) – Engaging evaluation teams comprising both external personnel and people from the host community can: enhance processes of data collection and analysis; encourage local ownership of the evaluation; and build indigenous capabilities. At the same time, this approach can enable foreign evaluation personnel to gain greater insights into the local cultural context. However, to do this effectively, adequate time and resources need to be allocated to the team to design and implement the evaluation process and follow-up activities. Particular technical skills and personal qualities are required of team members.

6. Conclusion

The purpose of this study was to try to identify the best way to design and implement a cross-cultural evaluation of an overseas development project. An effort has been made to discover factors which need to be included in order that the evaluation process leads to the empowerment of local stakeholders, in particular project staff and beneficiaries. To do this, a wide range of literature was reviewed, resulting in the development of seven broad `guidelines' which could usefully be considered by those working in the field.

However, these `guidelines' should not be seen as definitive. Rather, they provide an indication of some of the more important aspects that a development agency and/or a cross-cultural evaluator should consider in planning and implementing a study of this nature. While these `criteria' are based on a wealth of experience already accumulated by many prominent evaluators, the usefulness and comprehensiveness of the proposed list now needs to be further tested and extended by other practitioners who are engaged in the evaluation of overseas development projects in a wide range of different project settings and cross-cultural contexts. In so doing, an effort should be made to seek input from members of the local project community, as well as other stakeholder groups. Openness to exploring how such `guidelines' might be incorporated into future evaluative activities can only strengthen ongoing evaluation endeavour.

References

Bamberger, M. (1991). The politics of evaluation in developing

countries. Evaluation and Program Planning, 14, 325±339.

Berlage, L., & Stokke, O. (Eds.) (1992). Evaluating development as-

sistance: approaches and methods. London: Frank Cass.

Binnendijk, A. L. (1989). Donor agency experience with the monitor-

ing and evaluation of development projects. Evaluation Review,

13(3), 206±222.

Brown, D. (1991). Methodological considerations in the evaluation

of social development programmesÐan alternative approach.

Community Development Journal, 26(4), 259±265.

Brunner, I., & Guzman, A. (1989). Participatory evaluation: a tool

to assess projects and empower people. New Directions For

Program Evaluation, 42, 9±17.

Campbell, D. T. (1979). Degrees of freedom and the case study. In:

T. D. Cook, & C. S. Reichardt, . Qualitative and quantitative

methods in evaluation research (pp. 49±67). Beverley Hills: Sage

Publications.

Cassen, R. Associates (1986). Does Aid Work?. Su�olk: Clarendon

Press.

Chambers, R. (1994). Participatory rural appraisal (PRA): chal-

lenges, potentials and paradigm. World development, 22(10),

1437±1454.

Chow J, Murray S, Charoula A. (1996). International and

Multicultural Teaming: A Kaleidoscope of Kolors. Paper pre-

sented at the Annual Conference of the American Evaluation

Association: Atlanta, Georgia 1996.

Cracknell, B. E. (1991). The evaluation policy and performance of

Britain. In: O. Stokke, . Evaluating Development Assistance:

Policies and Performance. London: Frank Cass.

Cracknell, B. E. (1996). Evaluating development aid: strengths and

weaknesses. In: Evaluation.

Cuthbert, M. (1985). Evaluation encounters in third world settings: a

Caribbean perspective. New Directions for Program Evaluation,

25, 29±35.

Dugan, M. A. (1996). Participatory and empowerment evaluation:

lessons learned in training and technical assistance. In:

D. M. Fetterman, S. J. Kaftarian, & A. Wandersman, .

Empowerment Evaluation: Knowledge and Tools for Self-

Assessment and Accountability (pp. 277±303). Thousand Oaks,

CA: Sage Publications.

Duncan, R. L. (1985). Re¯ections of a development advisor. New

Directions for Program Evaluation, 25, 37±45.

Elkington D, Owen JM. (1996). Roles for an Evaluation Agency in

Encouraging Stakeholder Participation. Paper presented at the

Annual Conference of the American Evaluation Association:

Atlanta 1996.

Fetterman, D. M. (1984). Guilty knowledge, dirty hands, and other

ethical dilemmas: the hazards of contact research. In:

D. M. Fetterman, Ethnography in Educational Evaluation (pp.

211±236). Beverly Hills, CA: Sage Publications.

Fetterman, D. M. (1994a). Steps of empowerment evaluation: from

California to Capetown. Evaluation and Program Planning, 17(3),

305±313.

Fetterman, D. M. (1994b). Empowerment evaluation (Presidential

address). Evaluation Practice, 15(1), 1±15.

Fetterman, D. M. (1995). In response. Evaluation Practice, 16(2),

179±199.

Fetterman, D. M. (1996). Empowerment evaluation: an introduction

to theory and practice. In: D. M. Fetterman, S. J. Kaftarian, &

A. Wandersman, . Empowerment Evaluation: Knowledge and

Tools for Self-Assessment and Accountability (pp. 3±46).

Thousand Oaks, CA: Sage Publications.

Feuerstein, M-T. (1986). Partners in Evaluation: Evaluating

Development and Community Programmes with Participants.

London: Macmillan Press.

Forss, K., Cracknell, B., & Samset, K. (1994). Can evaluation help

an organization to learn?. Evaluation Review, 18(5), 574±591.

Freire, P. (1972). Pedagogy of the Oppressed. Middlesex, England:

Penguin Books.

Garaway, G. (1996). The case-study modelÐan organizational strat-

egy for cross-cultural evaluation. Evaluation, 2(2), 201±211.

Gilley Cited in D. Elkington & J.M. Owen. (1996). Roles for an

Evaluation Agency in Encouraging Stakeholder Participation.

Paper presented at the Annual Conference of the American

Evaluation Association, Atlanta 1996.

Ginsberg, P. E. (1988). Evaluation in cross-cultural perspective.

Evaluation and Program Planning, 11, 189±195.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth Generation Evaluation.

Newbury Park, CA: Sage Publications.

Hurworth R.E. (1995). An Empowerment Model to Foster

Evaluation Utilization: The Active Seniors Project. Paper pre-

sented at the International Evaluation Conference, Vancouver

1995.

Kalyalya, D. (Ed.) (1988). Aid and Development in Southern Africa.

New Jersey: Africa World Press.

Klitgaard, R. (1995). Including culture in evaluation research. New

Directions for Program Evaluation, 67, 135±146.

Knox, C., & Hughes, J. (1994). Policy evaluation in community

development: some methodological considerations. Community

Development Journal, 29(3), 239±250.

Kumar, K. (1989). Conducting village/community interviews. New

Directions for Program Evaluation, 42, 75±83.

Kumar, K. (1995). Measuring the performance of agricultural and

rural development programs. New Directions for Program

Evaluation, 67, 81±91.

Lawrence, J. E. S. (1989). Engaging recipients in development evalu-

ation: the stakeholder approach. Evaluation Review, 13(3), 243±

256.

Lincoln, Y. S. (1995). Emerging criteria for quality in qualitative and

interpretive research. Qualitative Inquiry, 1(3), 275±289.

Marcussen, H. S. (1996). Comparative advantages of NGOs: myths

and realities. In: O. Stokke, . Foreign Aid Towards the Year 2000:

Experiences and Challenges. London: Frank Cass.

Marsden, D., & Oakley, P. (Eds.) (1990). Evaluating Social

Development Projects. Oxford: OXFAM.

Mathison, S. (1994). Rethinking the evaluator role: partnership

between organizations and evaluators. Evaluation and Program

Planning, 17(3), 299±304.

McAlpine, L. (1992). Cross-cultural instructional design: using the

cultural expert to formatively evaluate process and product.

Educational and Training Technology International, 29(4), 310±

315.

Merry®eld, M. M. (1985). The challenge of cross-cultural evaluation:

some views from the ®eld. New Directions for Program

Evaluation, 25, 3±17.

Mithaug, D. E. (1996). Fairness, liberty and empowerment evalu-

ation. In: D. M. Fetterman, S. J. Kaftarian, & A. Wandersman, .

Empowerment Evaluation: Knowledge and Tools for Self-

Assessment and Accountability (pp. 234±255). Thousand Oaks,

CA: Sage.

Papineau, D., & Kiely, M. (1996). Participatory evaluation in a com-

D. McDonald / Evaluation and Program Planning 22 (1999) 163±174 173

munity organization: fostering stakeholder empowerment and

utilization. Evaluation and Program Planning, 19(1), 79±93.

Patton, M. Q. (Ed.) (1985a). Cross-cultural nongeneralizations

(special issue). New Directions for Program Evaluation, 25, 93±96.

Patton, M. Q. (1985b). Utilization-Focused Evaluation. Thousand

Oaks, CA: Sage Publications.

Patton, M. Q. (1994). Developmental evaluation. Evaluation

Practice, 15(3), 311±319.

Piciotto, R. (1995). Introduction: evaluation and development. New

Directions for Program Evaluation, 67, 13±23.

Preskill, H. (1996). From Evaluation to Evaluative Inquiry for

Organizational Learning. Paper presented at the Annual

Conference of the American Evaluation Association, Atlanta

1996.

Rebien, C. C. (1996). Participatory evaluation of development assist-

ance: dealing with power and facilitative learning. Evaluation,

2(2), 151±171.

Riddell, R. C. (1992). Evaluating UK NGO projects in developing

countries aimed at alleviating poverty. In: L. Berlage, &

O. Stokke, . Evaluating Development Assistance: Approaches and

Methods (pp. 121±146). London: Frank Cass.

Riddell, R. C., Krise, S-E., Kyllonen, T., Ojanpera, S., & Vielajus, J-

L. (1997). Searching for Impact and Methods: NGO Evaluation

Synthesis Study. Finland: FINNIDA.

Rist, R. (1995). Postscript: development questions and evaluation

answers. New Directions for Program Evaluation, 67, 167±174.

Rugh, J. (1986). Self Evaluation: Ideas for Participatory Evaluation of

Rural Community Development Projects. Oklahoma: World

Neighbours Inc.

Rugh J. (1998). Can Participatory Evaluation Meet the Needs of all

Stakeholders?: A Case Study: Evaluating the World Neighbours

West Africa Program. Paper [condensed version] presented to the

Annual Conference of the American Evaluation Association,

Boston 1994.

Russon, C. (1995). The in¯uence of culture on evaluation. Evaluation

Journal of Australasia, 7(1), 44±49.

Russon, C., Wentling, T., & Zuloaga, A. (1995). The persuasive

impact of two evaluation reports on agricultural extension admin-

istrators from two countries. Evaluation Review, 19(4), 374±388.

Salmen, L. F. (1989). Bene®ciary assessment: improving the design

and implementation of development projects. Evaluation Review,

13(3), 273±291.

Seefeldt, F. M. (1985). Cultural considerations for evaluation con-

sulting in the Egyptian context. New Directions for Program

Evaluation, 25, 69±78.

Slaughter, H. B. (1991). The participation of cultural informants on bilingual and cross-cultural teams. Evaluation Practice, 12(2), 149–157.

Smith, A. G., & Robbins, A. E. (1984). Multimethod policy research: a case study of structure and flexibility. In D. M. Fetterman (Ed.), Ethnography in Educational Evaluation (pp. 115–132). Beverly Hills: Sage.

Snyder, M. M., & Doan, P. L. (1995). Who participates in the evaluation of development aid? Evaluation Practice, 16(2), 141–152.

Stokke, O. (Ed.) (1991). Evaluating Development Assistance: Policies and Performance. London: Frank Cass.

Stokke, O. (Ed.) (1996). Foreign Aid Towards the Year 2000: Experiences and Challenges. London: Frank Cass.

Swantz, M-L. (1992). Participatory research and the evaluation of the effects of aid for women. In L. Berlage & O. Stokke (Eds.), Evaluating Development Assistance: Approaches and Methods (pp. 104–119). London: Frank Cass.

Taylor, L. (1991). Participatory evaluation with non-government organisations (NGOs): some experiences of informal and exploratory methodologies. Community Development Journal, 26(1), 8–13.

Thompson, R. J. (1990). Evaluators as change agents: the case of a foreign assistance project in Morocco. Evaluation and Program Planning, 13, 379–388.

Thompson, R. J. (1991). Facilitating commitment, consensus, credibility, and visibility through collaborative foreign assistance project evaluation. Evaluation and Program Planning, 14, 341–350.

Usher, C. L. (1995). Improving evaluability through self-evaluation. Evaluation Practice, 16(1), 59–68.

Vanderplaat, M. (1995). Beyond technique: issues in evaluating for empowerment. Evaluation, 1(1), 81–96.

Van Sant, J. (1989). Qualitative analysis in development evaluations. Evaluation Review, 13(3), 257–271.

Vargas, L. (1991). Reflections on methodology of evaluation. Community Development Journal, 26(4), 266–270.

Westwood, J., & Brous, D. (1993). Cross-cultural evaluation: lessons from experience. Evaluation Journal of Australasia, 5(1), 43–48.

Whitmore, E. (1988). Evaluation as Empowerment and the Evaluator as Enabler. ERIC Document Reproduction Service No. ED294-877, Nova Scotia, Canada, 1988.

Whitmore, E. (1990). Empowerment in program evaluation: a case example. Canadian Social Work Review, 7(2), 215–229.

Williams, D. D. (1991). Lessons learned: introducing accreditation and program evaluation to a teachers college in Indonesia. Evaluation Practice, 12(1), 23–31.
