
Problematizing Participation
A Critical Review of Approaches to Participation in Evaluation Theory

AMANDA GREGORY
Hull University Business School, UK

It is widely accepted that evaluation is a social process which implies the need for a participatory approach. In this article, it will be argued that the blanket use of the term ‘participation’ has masked the heterogeneity evident in its realization in practice. This article highlights a lack of transparency in participatory methods in evaluation by, first of all, critically discussing Rebien’s (1996) definition of a set of criteria for distinguishing participative projects from interventions that are non-participative or have a low level of participation. Rebien’s work is important because it not only discusses participation from a practical perspective, but also from a methodological point of view, advancing the argument that Guba and Lincoln’s (1989) Fourth Generation Evaluation is an appropriate methodology for supporting participation. Fourth Generation Evaluation will be described and critiqued with reference to Oakley’s (1991) obstacles to participation. In the light of this critique, the argument will be advanced, through an examination of Patton’s Utilization-Focused Evaluation (1986, 1997) and Pawson and Tilley’s Realistic Evaluation (1997), that the notion of participation is ill-understood and is an important problem across a range of methodologies in evaluation. Guidance on how best to realize a participatory approach in practice will then be sought through examination of an approach to evaluation which has emerged from the systems field (Taket and White, 1995, 1996, 1997). Consequently, it will be argued that the problem of participation can only be approached through an understanding of power and its realization in practices that prohibit or promote participation.

KEYWORDS: critique; evaluation; decision-making; participation; planning; stakeholder.

Introduction

Participation is widely believed to be a good thing. Acceptance of the value of participation is evident in domains or sites of action as varied as information systems development (see for example the work of Mumford, 1993, 1995, 1996) and management science (see for example the work of Friend, 1989, 1993; Friend and Hickling, 1987) as well as in evaluation theory and practice. Indeed Biggs (1995), recognizing a general trend towards the development and use of participatory methods, refers to the ‘participatory orthodoxy’.

[Evaluation, Vol 6(2): 179–199. Copyright © 2000 SAGE Publications (London, Thousand Oaks and New Delhi).]

Four key arguments are typically advanced to support the notion of participation in decision-making, design and planning (Flynn, 1992):

• Ethics – everyone has the right to command their own destiny.
• Expediency – people who are not involved in decision-making may revoke or subvert decisions made by others.
• Expert knowledge – certain decisions require expert knowledge and, to ensure that knowledge is brought to bear on a decision, the experts themselves should be involved in the decision-making process.
• Motivating force – participation in the decision-making process ensures that people are aware of the rationale for the decision and are more likely to want to see it implemented efficiently and effectively.

Whilst the arguments in favour of participatory approaches are persuasive, Dudley (1993) questions their value in practice: ‘Community participation may have won the war of words but, beyond rhetoric, its success is less evident’ (p. 7).

This lack of evident success, it is argued here, is a result of a general failure to make the nature of participation, and what it is to achieve, explicit. This article represents an attempt to critically examine the extent to which evaluation methodologies are guilty of this.

In the next section an attempt will be made to elucidate the concept of participation by reference to work on criteria for distinguishing different degrees and types of participation.

Rebien’s Continuum and Criteria

Rebien (1996) initiates an important discussion on the nature of participation in evaluation by defining a set of criteria for distinguishing participatory projects from those that are non-participatory or have a low level of participation.

Rebien suggests that, in practice, all evaluations are participatory; what varies is merely the extent. This notion of a spectrum of degrees of participation is also recognized by Feuerstein (1986), who defines four categories of participation: ‘studying specimens’; ‘refusing to share results’; ‘locking up the expertise’; and ‘real partnership in development’.

Whilst Rebien does not refer to Feuerstein’s work, it illustrates well his arguments that there are different degrees of participation and that consistency in approach cannot be assumed across evaluations. Rebien goes on to advance a continuum ranging from low to high participation and suggests that, in order to distinguish those projects which are ‘truly’ participatory from those which are not, a threshold needs to be set. Three criteria have to be met to pass this threshold:

1. Stakeholders must have an active role as subjects in the evaluation process, that is they identify information needs and design the terms of reference, rather than merely having passive roles as objects and mere sources of data.


2. As it is practically impossible to actively include all stakeholders in the evaluation process, at least the representatives of beneficiaries, project field staff, project management and the donor should participate.

3. Stakeholders should participate in at least three stages of the evaluation process – designing the terms of reference, interpreting data and using evaluation information.

The idea that merely three simple criteria can be applied to determine whether an evaluation is truly participative is an appealing one, but some critical analysis is warranted to determine whether they are adequate, given the complexity inherent in the concept of participation.

Rebien’s Criteria Critically Analysed

In this section each of the criteria specified by Rebien as being a key determinant of whether an approach may be classified as participative will be analysed. Throughout, reference will be made to both micro and macro issues – from, for example, individual motivation and ability through to sociopolitical structures and processes (following Dachler and Wilpert, 1978) – which might be affected as a result of employing the suggested criteria.

1. Stakeholders must have an active role as subjects in the evaluation process, that is they identify information needs and design the terms of reference, rather than have a merely passive role as objects or mere sources of data. Rebien recognizes that all too often participants are viewed as passive subjects as a consequence of the existence of an unhealthy power imbalance between the evaluator and the participants. He tries to redress this imbalance in power by stipulating that, for a project to be classed as participatory, stakeholders must take a more active role as subjects rather than objects. Promotion of active participation may be based on an appreciation of participants’ roles as experts on their own lives: ‘Theoretically, experience among stakeholders is thus available, which in an evaluation situation surpasses that of an external evaluation team visiting the project for a short period’ (Rebien, 1996: 164). It is argued here, however, that Rebien’s suggestion is rather more difficult to enact in practice than might be imagined, due to the operation of forces that discourage stakeholders from being active in the evaluation.

Rahman (1993) also notes that in an intervention the facilitator, like the evaluator, is placed in a privileged position because s/he often brings with her/him resources. Even those evaluators/facilitators who do not bring material resources, but are able to show ‘the way forward’ by getting people organized and taking collective initiatives, possess something that other people do not have. Evaluators/facilitators also tend to have status because of their association and contacts with formal organizations, such as non-governmental organizations and universities. This puts the facilitator/evaluator in a position of importance and tends to create a relation of dependence. Such dependence is a force against the realization of people’s own initiatives and, consequently, is anti-participatory. Rahman also recognizes that this tendency may arise from more insidious sources: ‘The presumption of superior wisdom of middle-class educated activists working with the people is often explicit. Often it is inherent, arising from the ego of the educated’ (Rahman, 1993: 154).

Rahman argues that this tendency towards dependence must be dealt with at the level of methodology, and he focuses on the process of animation as emerging from participatory research and action research. Animation refers to the process of empowering people to regard themselves as the principal actors in their lives, in order for them to unlock states of mental dependence and apathy and to exercise their creative potential in social situations. Animation implies a process of learning through participation, during which control of the process by the facilitator is relinquished to the subjects. Rebien, it appears, fails to consider that active participants may challenge the way in which the evaluation is organized, and hence participation may be considered as a means to ensure the acceptance of the evaluation. The important distinction between participation as a means and as an end is made by Oakley (1991). Participation as a means implies that it is to be used to achieve some predetermined goal or objective and is a short-term exercise. Oakley contrasts participation as a means with participation as an end, which implies an ‘unfolding process’ that serves to develop and strengthen the capabilities of the participants to engage in initiatives.

Thus, as regards Rebien’s first criterion, it is concluded that the notion of offering participants active roles is weak, does not recognize ingrained tendencies towards dependence on external sources of expertise and, to make it work in reality, requires recourse to methodological support, such as that offered by Rahman’s process of animation.

2. As it is practically impossible to actively include all stakeholders in the evaluation process, at least the representatives of beneficiaries, project field staff, project management and the donor should participate. Involving all stakeholders in an evaluation is an ideal and, in recognition of this, it is suggested that representatives of key parties only should take part in the process. Jayaratna (1994), in reference to Mumford’s (1993) ETHICS, argues that representation can only be a participative process if the representatives are elected and the involvement is extended by the representatives to their constituent members, i.e. everyone can share in the decision-making. If representatives are not elected and they do not extend involvement to their constituent members, then restricting involvement to a few representatives is likely to mean that the views of a range of affected parties are overlooked. However, some omission is inevitable, and Churchman (1979) refers to an inevitable lack of comprehensiveness on the basis of our inability to include future and past generations. Recognizing the fact that our investigations are inevitably partial, Ulrich (1985) proposes an imperative to critically reflect on the lack of comprehensiveness. Hence, when using representatives, we need to be able to justify who is included and who is not. This justification may be on the basis, as Jayaratna suggests, of an election (assuming that we can rely on the election process as being both equitable and involving all stakeholders).

It should be recognized that resorting to the use of representatives may be at the cost of the deep involvement of all but a few representatives of stakeholder parties and, as a result, sources of external expertise necessary to the decision-making process may be excluded. Spaul (1997) suggests that participatory planning models should include procedures for affected and involved parties to be briefed by sources of external expertise, to question such expertise, call for supplementary information and so on. Further, the views of the few included as representatives in the evaluation may be over-emphasized and consequently taken to represent the views of all stakeholders and used to justify the findings of the evaluation.

Therefore, as regards Rebien’s second criterion, it is surmised that restricting participants in the evaluation process to representatives may lead to the exclusion of important stakeholder groups and sources of expertise. Further, this process of representation essentially treats participation as a means for defending the evaluation process and ensuring the acceptance of its findings.

3. Stakeholders should participate in at least three stages of the evaluation process: designing terms of reference, interpreting data and using evaluation information. In concentrating the participatory process on stakeholder involvement in functional tasks and the passing on of practical skills, it may be argued that Rebien privileges facilitation, that is the passing on of basic management and technical skills. Tilakaratna (1987) is opposed to this type of approach on the basis that ‘in the promotion of such skills, the attempt should be to assist the people to critically adapt and selectively absorb the knowledge from outside rather than a mechanical transfer of knowledge which alienates people’ (p. 35). When participants’ involvement is restricted to only a few stages of the project, as Rebien suggests, they are effectively prevented from gaining an appreciation of the methodology in full and a mechanical transfer of knowledge is inevitable. Furthermore, even if participants’ involvement is increased to encompass the full process with the aim of engendering a transfer of knowledge to stakeholders, it may be argued that what is actually being transferred is not knowledge of methodology use, but rather the methodology users’ knowledge and skills. Without recourse to the original literature and other sources of expertise about the evaluation methodologies, what participants receive is likely to be an appreciation of the approach skewed by the evaluator’s biases and misunderstandings. Jayaratna (1994) makes a similar point with respect to Mumford’s ETHICS methodology in stating that, whilst ETHICS aims to promote transfer of knowledge to the design group, what is actually being transferred is not knowledge of methodology use but rather the methodology users’ own knowledge and skills.

With regard to the third criterion defined by Rebien, it is concluded that the restriction of participants’ involvement to only a few stages of the methodology effectively prevents participants from gaining an understanding of the methodology in full. The participants’ understanding of the methodology is also constrained by the facilitator/evaluator’s own knowledge, and this effectively acts against participants’ ability to contribute to the process and to carry out future evaluation for themselves.


Summary

On the basis of the above critical discussion of Rebien’s criteria for distinguishing participatory methodologies, it is argued here that they are insufficiently defined to be operationalized in practice and, indeed, may promote practices which actually have a negative impact on participation, rather than a positive one. However, in advancing a set of criteria, Rebien initiates an important debate on what a participatory approach should look like and advances the argument that Guba and Lincoln’s (1989) Fourth Generation Evaluation is an appropriate methodology for supporting participation. We will now examine the participatory intent within Fourth Generation Evaluation through the use of Oakley’s (1991) obstacles to participation as a framework to structure the analysis. Oakley recognizes that, as the practice of participation does not occur in a vacuum, it is susceptible to a whole range of influences that may form obstacles to participation. In this article, reference will be made to three of these obstacles, which are categorized as:

• Structural – the political environment within a particular context may prohibit participation and may reinforce the restriction of policymaking to a few individuals.
• Administrative – centralized administrative and planning procedures often constitute major barriers to participation. Administrators are often reluctant to relinquish control over information necessary for effective decision-making.
• Social – there is often a deeply ingrained culture of dependence on experts and leaders within communities. Lack of experience of decision-making, apathy and simple lack of time often render involved parties incapable of effective participation in decisions that affect them.

Each of the barriers will be employed in an analysis of the extent to which Fourth Generation Evaluation promotes a participatory approach to evaluation.

Guba and Lincoln’s Fourth Generation Evaluation

Guba and Lincoln (1989) have clearly articulated the ontological, epistemological and methodological orientation of Fourth Generation Evaluation. This approach may alternatively be referred to as ‘responsive constructivist’ evaluation. It is responsive in that it allows, following Stake (1975), the parameters and boundaries for the evaluation to be determined through an interactive, negotiated process involving stakeholders, on the basis that realities are social constructions.

As regards the realization in practice of responsive constructivist evaluation, Guba and Lincoln outline a 12-stage process; in summary, the major steps in this dialectic process are: the identification and involvement of stakeholders; the surfacing of claims, concerns and issues; and, finally, consensus-oriented negotiation. Whilst Guba and Lincoln claim their approach is ‘more informed and sophisticated than previous constructions have been’ (1989: 22), in the next section we shall take a critical look at this methodology and its ability to address barriers to participation in order to realize its participatory intent.


Critique of the Fourth Generation Approach to Evaluation

Structural barrier Any evaluation which seeks to promote participation must be seen to be a political act. This is recognized by Dudley (1993) in stating, ‘true participation is about power, and the exercise of power is politics’ (p. 160). If participation is to be regarded as an end in itself, then it must be regarded as an explicitly political act on behalf of those involved.

Guba and Lincoln (1989) acknowledge the existence of political forces which serve to restrict participation in the evaluation process, and seek to restrict their influence through the specification of a set of conditions for a successful dialectic process. Whilst Guba and Lincoln seek to bind participants to these conditions and maintain that all parties negotiate their rights, they allow the client to ‘claim a legal need to retain veto power over decisions’ and, in so doing, privilege the rights of the client, which effectively supports a structural barrier (Laughlin and Broadbent, 1996). Guba and Lincoln recognize that unscrupulous clients could agree to the conditions while not enacting them in practice, and it is recommended that the evaluator be constantly on the alert for this type of undesirable behaviour. An example of such behaviour might be the advancement of persuasive arguments for the exclusion of parties perceived as possessing insufficient knowledge or sophistication to engage in a meaningful discourse. Guba and Lincoln seek to overcome this by stipulating that the evaluator is morally obliged to bring parties which do not possess sufficient knowledge or sophistication to the minimal level required for full participation. Of course, defining the ‘minimal level required’ is itself problematic, and putting the onus on evaluators to provide whatever may be required to facilitate respondents’ participation – training or role-playing, for example – hardly circumvents this problem. Guba and Lincoln go on to propose that the evaluator may act as an advocate, or introduce third parties to act as advocates, in order to redress imbalances in knowledge between stakeholding parties. This introduces a set of new problems to the evaluation process, since their consideration of the role of advocates is underspecified.

Whilst Fourth Generation Evaluation may appear to take seriously the impact of power relations, and does seek to alert users of the methodology to the need to consider and involve stakeholders, it does not go on to discuss in the necessary detail the range of values, motivations, prejudices, etc. which will emerge and need to be effectively managed in the participatory environment. Indeed, it is taken for granted that the potential exists to overcome divergence between interested parties through the development of an agreement at a higher level of understanding which would surmount petty grievances and personal interests. As Laughlin and Broadbent state, ‘GL have a very real belief that extra information and/or an “increase in sophistication to deal with information” . . . primarily through either exposure to other views and/or through discourse concerning these views, will resolve most disagreements even if founded on value differences’ (1996: 436). Indeed, they go on to claim that the Fourth Generation Evaluation approach, with its essentially solipsist world-view, accepts the possibility of the ‘social engineering of the mind maps of individuals’ (p. 437). Laughlin and Broadbent claim that the Fourth Generation approach adopts a rather naive view of the processes that lead to the changing of values in accommodating others’ views, and does not give credence to the idea that fundamental conflicts may arise which are not easily resolved and which give rise to political activity detrimental to the interests of the organization and its stakeholders. If it is the case that the issues raised by Fourth Generation Evaluation are easily resolved, perhaps this is a consequence of the evaluators/facilitators artificially restricting the boundaries of the evaluation by focusing on issues considered legitimate for discussion, and not seeking to probe those that are not, in order that the evaluation will elicit more easily reconcilable opinions from stakeholders.

Furthermore, in terms of agreeing actions in response to an agreed consensus, Laughlin and Broadbent (1996), drawing on the work of Habermas, accuse Guba and Lincoln of falsely assuming that actions simply fall out of a consensus about claims, concerns and issues. Laughlin and Broadbent advise that ‘action alternatives need to be debated by the discursive parties just as rigorously following similar discursive rules as in the initial understanding process’ (p. 445).

Administrative barrier It is recognized by Guba and Lincoln (1989) that the free flow of information between stakeholders, and competence in decision-making, is an ideal which is rarely found in reality. Indeed, such a situation is commonly referred to as an ideal speech situation, following the work of Habermas (1984, 1990), and is unlikely to sit easily with the common practice of gatekeeping and employing devices to ensure control of information in organizations. Guba and Lincoln state, ‘Formal gatekeepers have authority and informal gatekeepers have influence, but either has the power to support or hinder an evaluation’ (1989: 198). Indeed, they advise the evaluator of the endemic nature of gatekeeping practices and recommend that each gatekeeper may require a particular quid pro quo to ensure their cooperation with the evaluation. Whilst this recommendation may serve to improve the flow of information between involved parties in the short term and in the service of the evaluation process, in the longer term it may serve to legitimate disruptive practices and perpetuate imbalances in knowledge. So, whilst Guba and Lincoln recognize the problem of gatekeeping, their suggested solution is aimed more at treating the symptoms than at tackling the root cause of the problem.

Social barrier Guba and Lincoln (1989) give explicit consideration to social, political and cultural factors in suggesting, ‘Every evaluation context inevitably has its own unique mix of such factors; knowledge about them is absolutely essential if the evaluation is to be a success’ (p. 200). The evaluator must gain knowledge of the social context and its conventions through intensive involvement with that context. Guba and Lincoln therefore suggest several ways to deal with social, political and cultural factors. First, they suggest prior ethnography, that is the evaluator actually living in the context as a participant observer before the evaluation takes place. An alternative is to engage ‘local informants’ who are members of the community and who teach evaluators about, and sensitize them to, social, political and cultural factors. Guba and Lincoln do, however, recognize that local informants are placed in a privileged position which can affect the evaluation process and, indeed, that they may have reasons for being willing to serve in the role of informants which are not in the best interests of the evaluation. The issue of the most appropriate and efficient means for the evaluator to become acquainted with the local social context is a difficult one, and Guba and Lincoln only hint at its extremely complex and problematic nature.

The evaluator’s knowledge of the social context and subject of the evaluation may in itself be problematic. Laughlin and Broadbent (1996) suggest that, as the evaluator is the only person who has access to sufficient data to formulate a well-rounded view of the situation, this ensures his or her position of supremacy vis-à-vis other stakeholding parties. Thus, whilst Guba and Lincoln are determined that the position of the evaluator should not be privileged, on the basis of the methodology of Fourth Generation Evaluation this appears rather inevitable.

A further factor which should be addressed in an evaluation is the issue of social heterogeneity. Guba and Lincoln make efforts to deal with the problem of heterogeneity by suggesting that it is the duty of the evaluator to make every effort to uncover stakeholder audiences and, once uncovered, to take their points of view into account. Although the evaluator should be vigilant to the possibility of additional stakeholding groups emerging throughout the evaluation, according to Guba and Lincoln several criteria are useful in deciding when to leave the first cycle of stakeholder identification:

    One criterion is that of redundancy; when succeeding respondents add almost no new information it may be time to stop. Another criterion is that of consensus; when consensus is reached on a joint construction, there is little point in adding further respondents. (Guba and Lincoln, 1989: 207)

Guba and Lincoln recognize that the process of inclusion may be restricted as a result of practical constraints such as project resources. Consequently, they go on to advance the notion of relative stake to identify those parties who should be included in the evaluation process. They realize that each stakeholding group is likely to place its own stake higher than that of others but propose that, following their belief that a consensus can be achieved about most if not all issues, relative stake can be determined by negotiation between stakeholding parties. Laughlin and Broadbent (1996) regard the notion of relative stake as problematic and see it as a means to make ‘the exclusion of third or fourth level beneficiaries appear rationally and ethically justifiable’ (p. 439).

It may be argued that Guba and Lincoln’s recognition of the need both to differentiate levels of relative stake and to exclude those parties having a lesser stake is indicative of a whole host of problems which relate to the framing of the evaluation as a ‘project’. Whilst Guba and Lincoln assert that evaluation is a continuous process, they nevertheless reinforce its project nature by advocating that the prudent evaluator should seek a written contract. Whilst they recognize that, as a consequence of the emergent nature of the design process, it is not possible to identify milestone events, they do counsel that there is a need to develop realistic estimates of events, resources and likely products. The imposition of a boundary for the project through the drawing up of a contract by the external evaluator/facilitator (necessitating associated processes of restriction of focus and disengagement) serves to constrain the project for funding or other practical reasons. The process of framing the evaluation as a project involves forcing people, activities, events and issues into predefined categories and schedules: this is done in order to simplify the subject and/or context so that external parties can understand it, and the process only makes sense when viewed with this objective in mind.

Summary

On the basis of the above analysis, it is argued that the most explicitly participative evaluation methodology in evaluation theory includes some rather naive assumptions in its approach which are unlikely to promote participatory practices. This finding is significant if one considers that the question of how to engender participation is fundamentally important to those evaluation methodologies that embody a subjectivist epistemology.

In the next section the argument will be advanced that participation is an issue of importance not only to subjectivist types of evaluation but also across a range of evaluation methodologies. In establishing this argument we will look at two popular forms of evaluation. The first is Patton’s Utilization-Focused Evaluation, which advocates a pragmatic approach; the second is Pawson and Tilley’s Realistic Evaluation which, as the name suggests, is firmly grounded in the realist school of thought. It is important to make it clear that this exploration will be strictly limited to explicating the extent to which these methodologies are based on participatory practices and assessing their effectiveness in realizing their participatory intent.

Patton’s Utilization-Focused Evaluation

Utilization-Focused Evaluation has a history of over 20 years and Patton (1997) claims that throughout this time ‘the central challenge to professional practice remains – doing evaluations that are useful and actually used’ (italics in original, p. xiv). Given the popularity of this approach, evidenced through the demand for Patton’s writings and teaching, it would seem important to subject this approach to critical analysis. First, a summary description of the Utilization-Focused Evaluation process will be provided.

The first stage of the evaluation process is the identification of intended users of the evaluation, who are brought together, preferably organized into an evaluation task force, to work with the evaluator and partake in decision-making about the evaluation. The evaluator and intended users commit to plans for the intended uses of the evaluation and decide the focus of the evaluation. Options are reviewed and, in the light of political and ethical considerations, a range of approaches is selected. The evaluator then helps intended users focus on deciding whether, given expected uses, the evaluation is worth doing, and to what extent and in what ways intended users are committed to the intended use.

Given that the evaluator and intended users decide to proceed with the evaluation, the next stage of the process is based on a wide-ranging discussion that embraces decisions about methods, measurement and design. A range of methodological options is considered with particular emphasis on issues of validity, reliability and utility. Further, the discussion focuses on questions of methodological appropriateness, believability of the data, understandability, accuracy, balance, practicality, propriety and cost. Prime consideration is given to utility.

The next stage of the process is data collection and analysis, and following this intended users actively participate in interpreting findings, forming judgements and making recommendations. In the light of the actual findings, previously agreed plans for their use are implemented in practice. Finally, decisions about public dissemination of the evaluation report are enacted.

Critique of the Utilization-Focused Approach to Evaluation

It is not the primary aim of Patton’s Utilization-Focused Evaluation to promote participatory practices; the prime aim of this type of evaluation is to provide decision-makers, or intended users, with the information they deem necessary.

Patton (1997) recognizes the need for a participatory approach and advises that what a participatory evaluation means must be defined in each evaluation context. Patton advances a set of Principles of Participatory Evaluation (p. 100) as a basis for evaluators working with primary intended users to decide what principles they wish to adopt. Whilst the idea of a set of principles is initially impressive, it should be remembered that they are mere options and have the standing of a wish list rather than of clear practical guidelines for the evaluator. Patton seems to recognize the importance of a participatory approach, but whether the methodology of Utilization-Focused Evaluation supports this in anything more than a superficial way is the critical question that is now addressed.

Patton’s perspective on participation in the evaluation process is a pragmatic one in that he is advocating that the evaluator ‘Find strategically located people who are enthusiastic, committed, competent and interested’ (1997: 54). However, Patton goes on to argue that ‘Participants and collaborators can be staff and/or programme participants (e.g. clients, students, community members, etc.). Sometimes administrators, funders and others also participate, but the usual connotation is that the primary participants are “lower down” in the hierarchy. Participatory evaluation is bottom up’ (p. 98). So, Patton argues that those lower down in the hierarchy are strategically located, but does this equate to being in a position of strategic influence?

Further, Patton is adamant that the composition and size of the evaluation task force should be limited for practical reasons. For example, it is stated that not all stakeholders can or should participate, though an attempt should be made to represent all major stakeholder groups with ‘fairness and a healthy regard for pluralism being guiding lights on this regard’ (1997: 354). It has already been established that the notion of representation is a weak one, and Patton’s lack of attention to these matters from a methodological point of view is a concern.

Another assumption of Utilization-Focused Evaluation is that all those identified as intended users, and thus expected to participate in the evaluation, are motivated to be involved and have the ability to support that involvement. So, whilst on the positive side, Patton does not discriminate or exclude on grounds of ability; on the negative side, neither does he make it incumbent on the evaluator to ensure that all intended users are able to be involved. In fact, Patton does not consider ability to be an impediment at all and he assumes that everyone is able to learn simply through exposure to the evaluation process: ‘In the process of participating in an evaluation, participants are exposed to and have the opportunity to learn the logic of evaluation and the discipline of evaluation reasoning’ (1997: 97). Whilst Patton’s ideas about methodological transparency and capacity building aspirations are worthy, one wonders to what extent intended users actually participate in the evaluation tasks or whether the Utilization-Focused approach in practice is perhaps more expert-driven than Patton would care to admit. This point is particularly pertinent as Patton’s writings are illustrated with tales of his use of Utilization-Focused Evaluation in the community domain, where the involvement of affected parties, methodological transparency and capacity building through the passing on of methods are widely respected principles. Indeed, whilst Patton documents projects in which community members were actively engaged in the evaluation process, his methodology does not logically support that participation. Adopting the espoused philosophy of Utilization-Focused Evaluation would lead the evaluator to prefer working with decision-makers such as government officials, who have the immediate power to impact on action, rather than community members who are relatively powerless. We are left to assume that the participation that occurred in the projects that Patton cites as examples occurred as a result of his own ethical stance concerning who should be included and his expertise in facilitating participation – clearly this introduces the threat of a participation which is limited in extent and quality by the abilities of the facilitator(s).

Pawson and Tilley’s Realistic Evaluation

The aim of Realistic Evaluation (Pawson and Tilley, 1997) is to find out why a programme works, for whom, and in what circumstances. It is based on a generative theory of causation according to which a programme offers ‘chances which may (or may not) be triggered into action via the subject’s capacity to make choices’ (p. 38). Clarke (1998) claims that what distinguishes Pawson and Tilley’s approach is ‘the way they enlist a realist approach to scientific explanation and causality in order to construct a conceptual framework to enable explanatory propositions to be derived from programme theories’ (p. 332).

A set of concepts has been established for describing the operation of social systems, and Pawson and Tilley (1997) draw on five of these concepts in attempting to describe and explain the operation of programme systems: embeddedness, mechanisms, context, regularities and change.

• Embeddedness – realists believe that all human actions only make sense because they contain in-built assumptions about a wider set of social rules and institutions. Embeddedness implies consideration of programme structure, the people involved and their wider lives and expectations, cultural, social and economic circumstances, etc.

• Mechanisms – the notion of mechanisms promotes understanding about not just whether a programme works but also what it is about a programme which makes it work.

• Context – this not only embraces the spatial, geographical and institutional location but also the dominant social rules, norms, values and interrelationships operating which affect the efficacy of programme mechanisms. In other words, the relationship between mechanisms and their effects is contingent upon context.

• Regularities – evaluation involves the identification and explication of some regularity or pattern. Regularities are the product of some mechanism operating in a context.

• Change – social programmes seek to bring about change. It is the evaluator’s job to discern mechanisms for change triggered by a programme and how they deal with countervailing social processes.

So, Realistic Evaluation involves the generation of propositions/hypotheses about how mechanisms operate in contexts to produce outcomes. Following this, programmes are broken down to enable identification of:

• what it is about the measure which might produce change;
• which individuals, subgroups and locations might benefit most readily from the programme; and
• which social and cultural resources are necessary to sustain the changes.

The evaluation then turns to observations and to the methods of data collection and analysis needed to test the propositions/hypotheses. Pawson and Tilley regard themselves as ‘whole-heartedly pluralists when it comes to the choice of method’ (1997: 85) and embrace a whole array of techniques (both quantitative and qualitative) to ensure that the choice of method is appropriate to the proposition/hypothesis. Pawson and Tilley then go on to look for confirmation of the existence of regularities, through the comparison of expected outcomes with actual performance.

Critique of Realistic Evaluation

Given the realist assumptions which support this type of evaluation, it is not explicitly participatory. However, whilst Pawson and Tilley (1997) provide clear methodological guidelines for conducting Realistic Evaluation, by not making participation problematic an uncritical form of limited participation is implicitly advanced. In the case of Utilization-Focused Evaluation limited participation served to promote decision-makers’ (intended users’) goals, whereas with Realistic Evaluation the focus is on knowledge generation.

Realistic Evaluation is very much based on the separation of the roles of practitioner/researcher and subject, and Pawson and Tilley reveal that Realistic Evaluation is based on a limited form of participation in stating:

The best sources of knowledge of the inner workings of a program lie, very often, with practitioners who have ‘seen it all before’ . . . In general, realist investigation will not only rely on rather broad hypotheses culled from the background literature . . . but will also incorporate the ‘folk wisdom’ of practitioners. (1997: 107)

As a consequence of restricting the generation of hypotheses to background literature and practitioners’ knowledge, it may be argued that Pawson and Tilley critically undermine the research process by limiting the type of knowledge which it is based upon. It is useful at this point to refer to Reason (1994), who distinguishes three types of knowledge:

• experiential knowledge acquired through direct encounter with persons, places or things;

• practical knowledge gained through practice, knowing how to do something, demonstrated in a skill or competence; and

• propositional knowledge expressed in statements and theories.

Reason argues that ‘In research on persons the propositional knowledge stated in the research conclusions needs to be grounded in the experiential and practical knowledge of the subjects in the inquiry’ (1994: 42). He recognizes that where propositions and conclusions are generated by a practitioner/researcher who is not involved in the experience being researched (without consultation with the subjects of the research), as is the case with Realistic Evaluation, then the findings reflect the experience of neither the practitioner/researcher nor the subject.

Heron (1996) also reflects on the knowledge brought to bear in the research process, in claiming that:

If the researchers are not subjects of their own research, they generate conclusions that are not properly grounded either in their own or in their subjects’ personal experience, as in traditional quantitative research; or they try to ground them exclusively in their subjects’ embodied experience, as in traditional qualitative research. (p. 21)

In restricting participation in the generation of hypotheses and analysis of outcomes to practitioners/researchers, it appears that the findings of Realistic Evaluation are grounded neither in the practitioners’/researchers’ own nor in their subjects’ personal experience. Consequently, the knowledge produced by Realistic Evaluation is critically restricted as a result of the methodology’s failure to embrace the range of knowledge held by people except as represented in the literature and/or as interpreted by practitioners/researchers.

Summary

It has now been established that participation is critical across a range of evaluation methodologies and that the evaluation literature is lacking in guidance on how to conduct participatory evaluations. Attention will now turn to the systems literature for guidance. The systems discipline takes seriously the need to adopt a holistic approach, the notion of irreconcilable differences, heterogeneity and how system elements impact on one another. Given these considerations, an evaluation approach informed by systems theory should be able to address the issue of participation; this assumption is assessed in the next section.

Taket and White: Working with Heterogeneity

Taket and White (1995, 1996, 1997) advocate a form of evaluation based on what they refer to as ‘pragmatic pluralism’, in recognition of the high degree of heterogeneity (differences between the people involved, differences between groups, and instability in the environment) evident in actual evaluation situations.

In seeking to address extant social heterogeneity, Taket and White argue for pluralism in the:

• use of methods/techniques;
• role(s) of the evaluators;
• modes of representation employed;
• use of different rationalities;
• nature of the client.

Taket and White regard evaluation as a social process that is focused around the participants and their needs, rather than a purely scientific one which is ‘remote from everyday life’ and methodologically driven.

Critique of the Pragmatic Pluralist Approach to Evaluation

Taket and White (1997) acknowledge the idea that an intervention must be shaped as a project by making reference to the notion of boundaries. This shaping of the intervention as a project, as considered earlier, seriously affects the extent and quality of participation and, unfortunately, they fail to adequately discuss the implications of this shaping process for the evaluation process, preferring to seek solace in the underspecified notion of ‘leaky and changeable boundaries’ (p. 109). Given that Taket and White seek to reject theoretical criticisms of their approach on the grounds that it is essentially practical, it may be argued that their consideration of the implications of shaping the evaluation as a project, which logically implies the selection of boundaries (an issue which is essentially practical in nature), is given superficial treatment.

Indeed, whilst Taket and White advance a persuasive description of participatory evaluation and recommend triangulation, they fail to outline their own methodology for conducting this type of evaluation and instead merely make reference to Participative Rapid Appraisal. Taket and White seek to justify their approach by arguing for a particular notion of praxis in which ‘theory in itself is downplayed and instead the emphasis is on collective theorizing for evaluation, and theorizing as a means of critical reflection on evaluation’ (1997: 103). However, Taket and White appear to fail to recognize that they are advancing this argument from an elitist position since, as experts, they have the choice to downplay theory, and this is a privilege not afforded to others involved in the evaluation. This downplaying of theory/expertise is exhibited in their accounts of evaluation practice in Belize and London, where it is stated that their approach involves the ‘judicious mix and match’ (1997: 105) of methods. The complementary use of methods is popular, especially in systems theory (see for example Flood and Jackson, 1991; Mingers and Brocklesby, 1996), but it is dependent on an awareness of the range of methods available for use in a given situation. Such an understanding of available methods and how they may be used in combination is likely to be held only by the evaluator/facilitator and not the participants. Further, evidence in support of the argument that Taket and White’s pragmatic pluralist approach actually promotes an expert-driven form of evaluation rather than a participatory one is supplied in their description of their work as involving ‘ongoing critical reflection on the part of the evaluator(s), plus intermittent group critical review’ (1997: 105). On the surface at least this would appear to indicate a rather restricted form of participation, which raises the question of why the process is informed only by the evaluator’s ongoing critical review and not the participants’.

Taket and White, like Guba and Lincoln, acknowledge the need to respect and incorporate the values of a wide range of stakeholders but, unlike Guba and Lincoln, they recognize that oppositional, even irreconcilable, viewpoints may be expressed, which may necessitate consideration of power relations between groups of stakeholders. It is considered important to explore the implications of power relations for the choice of ways of working, so that the participation of marginalized groups is facilitated. Indeed, Taket and White are careful to consider the long-term effects of the evaluation and the need to reject methods of evaluation which may serve the short-term needs of the project but have negative effects on levels of participation in decision-making processes in the longer term. Taket and White talk about ‘facilitating the interruption of racism, sexism, classism, heterosexism, ableism and the oppression of other groups’ (1997: 104), but one must question what they mean by this. At a surface level it appears persuasive rhetoric, but they fail to provide an adequate account of how this is enacted in practice.

In conclusion, Taket and White highlight a number of critical issues in attempting to advance a participatory approach to evaluation. However, whilst they superficially acknowledge the influence of power on participation, their recommendations for dealing with power are not well developed. Gergen (1995) sees a reticence to deal with power as endemic across the social sciences, with important exceptions such as work emanating from the Marxist and critical school traditions. The nettle of power will now be grasped.

Power and Participation

Power is really the great unmentionable in evaluation theory. Indeed, a rather indirect approach has been purposefully employed in this article; we have approached it through the back door in talking about participation and barriers. Whilst we have now reached the point where the issue of power must be tackled, it is beyond the scope of this article to present a thorough exposition on power. Rather, the aim here is to illustrate the argument that only through an appreciation of power can the problem of participation be addressed.

Foucault (1976) proposed two models of power. With the juridical model:

Power is taken to be a right, which one is able to possess like a commodity, and which one can in consequence transfer or alienate, either wholly or partially, through a legal act or through some act that establishes right, such as takes place through cession or contract. (p. 88)

Foucault contrasts the juridical model of power with the microphysics of power, and he regarded the two models of power as being inextricably linked. Of the microphysics of power he states:

Power must be analysed as something which circulates, or rather as something which only functions in the form of a chain. It is never localised here or there, never in anybody’s hands, never appropriated as a commodity or piece of wealth. Power is employed through a net-like organization. And not only do individuals circulate between its threads; they are always in the position of simultaneously undergoing and exercising this power. (p. 98)

Complementary to Foucault’s discourse on power is Probyn’s (1990) suggestion that emancipation (and, it is suggested here, participation) is an ongoing process for which we are each responsible. Emancipation demands:

. . . that we be continually vigilant to the necessity of bringing to the light the submerged conditions that silence others and the other of ourselves. The subaltern’s situation is not that of the exotic to be saved. Rather, her position is ‘naturalized’ and reinscribed over and over again through the practices of locale and location. In order for her to ask questions, the ground constructed by these practices must be rearranged. (p. 186)

How can this ‘rearrangement’ be supported at the level of methodology? Are there any simple techniques that can be incorporated into our existing evaluation methodologies? Logically the answer is an emphatic no: according to Ashby’s Law of Requisite Variety, a complex problem requires a complex solution. But participatory methodologies must be simple if they are to be transparent and capable of being passed on to the people, so that they can be employed without a facilitator’s assistance. As a starting point I would like to recommend the use of Ledwith’s (1997) Sites of Oppression matrix as a good way of getting evaluators and stakeholders to look at how the processes of power operate.

The matrix is based on Freirean pedagogy and was originally devised as a means of identifying sites of patriarchal oppression; consequently, adaptation might be necessary according to the context of use. The matrix (see Figure 1) ‘illustrates the potential ways in which oppression overlays and interlinks’ (Ledwith, 1997: 55), with the vertical axis locating elements of oppression within six categories (paid work, unpaid work, culture, sexuality, violence and the state) and the horizontal axis focusing awareness on the way in which oppression operates at different, reinforcing and interdependent levels (personal, community, national and global). According to Ledwith this differentiation between levels not only ‘facilitates a deeper analysis, but in its turn it leads to levels of action which take consciousness beyond the parameters of the community; from the personal to the political and to the global’ (p. 55).

Ledwith goes on to discuss how the matrix is used in practice. The first stage involves the production of codifications (for example, photographs, drawings, stories, etc.) which capture familiar scenes. A process of decoding at the individual level is then undertaken which involves discussion of such questions as: What do you see in the photograph? What are they doing? What do you think they are feeling? Why is it happening? The focus of the decoding process gradually shifts from the individual to the community level and then on to national and global levels. Through the decoding process, the focus also tends to move away from what is happening in the codification to why it is happening and how it relates to real life. Through reflection on real-life problems, the causal elements specified along the vertical axis of the matrix are brought into consideration. In rooting the discussion in reality and the probing of complex causal links, Ledwith claims that there is a shift from ‘naïve consciousness to critical consciousness’ (1997: 56) which serves to address the causes of oppression.

It is not difficult to see how the evaluator working with stakeholders might employ Ledwith’s matrix to structure an investigation of how power operates within the context of an evaluation. The kind of analysis afforded by the matrix would clearly benefit those of a pluralist orientation since it would serve to clarify who should be involved in the evaluation, what barriers exist to prevent participation, and how those barriers might be removed. Those approaching an evaluation with a more realist bent would gain an appreciation of how mechanisms/structures work at different levels, and this analysis, if undertaken with the ‘subjects’ of the evaluation, would not merely be based on their propositional knowledge, but would also draw on the experiential and practical knowledge of the subjects.

Figure 1. Ledwith’s Sites of Oppression Matrix. Source: Ledwith (1997: 57)

Conclusions

Effective participation in evaluation is problematic and this article has focused on a lack of transparency in participatory methods in evaluation. First, Rebien’s definition of a set of criteria for distinguishing participative projects from interventions which are non-participative or have a low level of participation was critically discussed. Rebien’s argument that Guba and Lincoln’s Fourth Generation Evaluation is an appropriate methodology for supporting participation was then taken up, and this methodology was described and critiqued with reference to Oakley’s obstacles to participation. Following this, the argument that participation is an important problem across a range of evaluation methodologies was established through consideration of Patton’s Utilization-Focused Evaluation and Pawson and Tilley’s Realistic Evaluation. The extent to which these approaches are based on participatory assumptions and the quality of the participatory processes promoted were critically discussed. Consequently, reference was made to an approach to evaluation emanating from the systems discipline as a source of guidance on how to promote effective participation. Significantly, this approach revealed power to be a critical determinant of the level and effectiveness of participation. On the basis of this, a summary review was made of power and it was suggested that a simple technique such as Ledwith’s Sites of Oppression matrix might enhance the ability of a range of evaluation methodologies to realize a participatory approach by ensuring that participation is explicitly considered rather than ignored or implicitly assumed.

Participation in evaluation is an important issue; this article has sought to highlight both how this issue has been neglected and the endemic nature of that neglect, and to suggest a way forward.

ReferencesBiggs, S. D. (1995) ‘Participatory Technology Development: Reflections on Current Advo-

cacy and Past Technology Development’, Participatory Technology DevelopmentWorkshop on ‘The Limits of Participation’, Intermediate Technology.

Churchman, C. W. (1979) The Systems Approach. New York: Delta.Clarke, A. (1998) ‘Review of Realistic Evaluation’, British Journal of Sociology 49: 331.Dachler, H. P. and B. Wilpert (1978) ‘Conceptual Dimensions and Boundaries of Partici-

pation in Organizations: A Critical Evaluation’, Administrative Science Quarterly 23: 1.Dudley, E. (1993) The Critical Villager: Beyond Community Participation. London: Rout-

ledge.

Gregory: Problematizing Participation

197

05gregory (ds) 25/4/00 10:16 am Page 197

Feuerstein, M-T. (1986) Partners in Evaluation: Evaluating Development and CommunityProgrammes with Participants. London: Macmillan.

Flood, R. L. and M. C. Jackson (1991) Creative Problem Solving. Total Systems Inter-vention. Chichester: John Wiley and Sons.

Flynn, D. J. (1992) Information Systems Requirements: Determination and Analysis.London: McGraw-Hill.

Foucault, M. (1976) ‘Two Lectures’, in C. Gordon (ed.) Michel Foucault: Power/Know-ledge. London: Harvester Wheatsheaf.

Friend, J. K. (1989) ‘The Strategic Choice Approach’, in J. Rosenhead (ed.) RationalAnalysis for a Problematic World. Chichester: John Wiley and Sons.

Friend, J. K. (1993) ‘The Strategic Choice Approach in Environmental Policy Making’,The Environmental Professional 15: 164.

Friend, J. K. and A. Hickling (1987) Planning Under Pressure. The Strategic ChoiceApproach. Oxford: Butterworth Heinemann.

Gergen, K. J. (1995) ‘Relational Theory and the Discourses of Power’, in D.-M. Hosking,H. P. Dachler and K. J. Gergen (eds) Management and Organization: Relational Alterna-tive to Individualism. Aldershot: Avebury.

Guba, E. G. and Y. S. Lincoln (1989) Fourth Generation Evaluation. London: Sage.

Habermas, J. (1984) The Theory of Communicative Action: Vol. I, Reason and the Rationalization of Society; Vol. II, The Critique of Functionalist Reason. Cambridge: Polity Press.

Habermas, J. (1990) Moral Consciousness and Communicative Action, trans. C. Lenhardt and S. Weber Nicholsen. Cambridge: Polity Press.

Heron, J. (1996) Co-operative Inquiry: Research into the Human Condition. London: Sage.

Jayaratna, N. (1994) Understanding and Evaluating Methodologies: NIMSAD, A Systemic Framework. London: McGraw-Hill.

Laughlin, R. and J. Broadbent (1996) ‘Redesigning Fourth Generation Evaluation: An Evaluation Model for the Public-Sector Reforms in the UK?’, Evaluation 2(4): 431–52.

Ledwith, M. (1997) Participating in Transformation: Towards a Working Model of Community Empowerment. Birmingham: Venture Press.

Mingers, J. and J. Brocklesby (1996) ‘Multimethodology: Towards a Framework for Critical Pluralism’, Systemist 18(3): 101.

Mumford, E. (1993) Designing Human Systems for Health Care: The ETHICS Method. Stockton Heath: Eight Associates.

Mumford, E. (1995) Effective Systems Design and Requirements Analysis: The ETHICS Approach. London: Macmillan Press.

Mumford, E. (1996) Systems Design: Ethical Tools for Ethical Change. London: Macmillan Press.

Oakley, P. (1991) Projects with People. Geneva: International Labour Organization.

Patton, M. Q. (1986) Utilization-Focused Evaluation, 2nd edn. London: Sage.

Patton, M. Q. (1997) Utilization-Focused Evaluation, 3rd edn. London: Sage.

Pawson, R. and N. Tilley (1997) Realistic Evaluation. London: Sage.

Probyn, E. (1990) ‘Travels in the Postmodern: Making Sense of the Local’, in L. J. Nicholson (ed.) Feminism/Postmodernism. London: Routledge.

Rahman, M. A. (1993) People’s Self-Development. London: Zed Books.

Reason, P. (1994) ‘Human Inquiry as Discipline and Practice’, in P. Reason (ed.) Participation in Human Inquiry: Research with People. London: Sage.

Rebien, C. C. (1996) ‘Participatory Evaluation of Development Assistance: Dealing with Power and Facilitative Learning’, Evaluation 2(2): 151–72.

Spaul, M. (1997) ‘Exploring “Our Common Future”. Generalised Interests and Specialised Discourses’, in Systems for Sustainability: Proceedings of the 5th International Conference of the UK Systems Society. London: Plenum Press.

Stake, R. E. (1975) Evaluating the Arts in Education. Columbus, OH: Merrill.

Taket, A. and L. White (1995) ‘Working with Heterogeneity: A Pluralist Strategy for Evaluation’, in Critical Issues in Systems Theory and Practice, Proceedings of the 4th International Conference of the UK Systems Society. London: Plenum.

Taket, A. and L. White (1996) ‘Pragmatic Pluralism – An Explication’, Systems Practice 9(6): 571.

Taket, A. and L. White (1997) ‘Working with Heterogeneity: A Pluralist Strategy for Evaluation’, Systems Research and Behavioral Science 14(2): 101.

Tilakaratna, S. (1987) The Animator in Participatory Rural Development (Concept and Practice). Geneva: ILO.

Ulrich, W. (1985) ‘The Way of Inquiring Systems: Review of Churchman’s “The Design of Inquiring Systems”’, Journal of the Operational Research Society 36: 873.

AMANDA GREGORY, PhD, is a lecturer at Hull University Business School, where she teaches on the Doctor of Business Administration Programme. She is also the Deputy Editor of the journal Systems Research and Behavioral Science. Please address correspondence to: Hull University Business School, Hull, United Kingdom HU6 7RX. [email: [email protected]]
