Program Evaluation for Organizations under CAPC
(Community Action Program for Children)
INTRODUCTORY MANUAL
Centre de recherche sur les services communautaires
Université Laval
Faculté des sciences sociales
Association des centres jeunesse du Québec
PROGRAM EVALUATION FOR ORGANIZATIONS UNDER THE COMMUNITY ACTION PROGRAM FOR
CHILDREN (CAPC)
INTRODUCTORY MANUAL
Caroline Tard
Hector Ouellet
André Beaudoin
with the collaboration of Patricia Dumas
MARCH 1998
Secretarial Services: Jocelyne Gallant
English Translation: Translation Bureau
Public Works and Government Services Canada
The masculine used herein refers to both genders.
ISBN: 2-89497-015-3
These documents were produced as part of a contract with CLSC Les Blés d'Or evaluating Health Canada's Community Action Program for Children (CAPC).
The viewpoints herein are those of the authors and do not necessarily reflect the official policy of Health Canada or the Province of Quebec.
Table of Contents

Foreword
Introduction

Section 1  Injecting Some Realism
    All You Have To Do Is Show Us That Your Project Works
    The Many Questions That Can Be Answered by Evaluation
    Summary

Section 2  Defining Evaluation
    Evaluation: A Word With Many Meanings
    Evaluation Is Essential to Action
    Evaluation Is Not Control
    Two Critically Important Perspectives
    Summary

Section 3  Different Types of Evaluation
    A Process Related to the Target of Evaluation and the Steps in the Activity
    Summary

Section 4  Needs Evaluation and Feasibility Evaluation
    Summary

Section 5  Process or Implementation Evaluation (Formative Evaluation)
    Clarifying Objectives
    Making Components Measurable
    Summary

Section 6  Effect Evaluation (Summative Evaluation)
    A Twofold Process
    The Scope of Effect Evaluation
    Summary

Section 7  Impact Evaluation
    Summary

Section 8  Two Complementary Approaches: Quantitative and Qualitative
    The Formal or Quantitative Approach
    The Qualitative Approach
    Summary

Summary of Sections 3 to 8

Section 9  For a Successful Evaluation of Outcomes
    Is the Program Evaluable?
    The Evaluation Context
    Evaluate the Program, Not the People
    Summary

In Conclusion

Appendix  Evaluation in Evolution

Glossary

Further Reading on Evaluation
Foreword
A speaker who was to talk on the topic of planning, and who wanted to lead the listeners to have realistic expectations about the potential of the planned action, began his presentation with the following:

"When Christopher Columbus discovered America, he didn't know where he was going. He didn't know how to get there. He didn't know how long the trip would take or what resources would be needed along the way. Once he landed, he didn't know where he was. And yet he discovered America."
To which someone interested in the field of evaluation could have responded:

"At least Columbus had an objective: finding India. Having an objective enabled Columbus to decide to attempt the trip and to pull together the required resources. The few navigational instruments he had (in evaluation terms, these are referred to as evaluation tools) indicated that he was heading westward and on course. Despite their primitiveness, his instruments allowed him to calculate the time elapsed since departure and provided a rough estimation of the distance traveled. Furthermore, these instruments (in this case, his own observations of the results, combined with a comparison between what he expected to discover and what he actually saw) indicated to him that either he hadn't reached India or that his description of India was incorrect."
This is what evaluation is all about: a means for
stakeholders and practitioners to guide their actions.
The Community Action Program for Children (CAPC)
evaluation team set a number of objectives for itself,
including the publication of manuals to provide groups
working under the CAPC framework with support that
would correspond as closely as possible to their needs and
real potential for carrying out evaluation activities. Three
manuals have been produced.1 The first presents and
clarifies the main ideas, topics, and notions relating to the
concept of evaluation. This is the Introductory Manual.
The second manual aims at providing tools to organizations that want to evaluate their performance in order to enhance the quality of the programs that they deliver. It provides concrete examples that illustrate
different ways of using evaluation methods and techniques. It also shows how to handle a certain number of
ethical and methodological challenges often encountered
in carrying out evaluations. The title of the second manual
is Evaluation Tools for Enhancing Program Quality.
The third manual, entitled Presentation of Evaluation
Guides, offers a fairly detailed presentation of the contents
of certain "evaluation guides" that have been developed
These three manuals (in English and French versions) may be obtained from the Centre de recherche sur les services communautaires (community services research centre), whose address is provided on the back cover.
and are available. We have focused on highlighting the
aspects of these guides that seem the most relevant or
useful to organizations.
We would like to express our appreciation to the
individuals who, in one way or another, have helped in
preparing these manuals. We are particularly grateful to
the members of the CAPC advisory committee for program
evaluation, whose names and affiliations are listed below.
André Beaudoin - CRSC, Université Laval (Quebec)
Lyne Champoux - CRSC, Université Laval (Quebec)
Martine Cinq-Mars - Regroupements des projets PACE des Laurentides (Montreal)
Richard Cloutier - CRSC, Université Laval (Quebec)
Danielle Couture / Lucie Lafrance - Mauricie/Bois-Francs Regional Board (Trois-Rivières)
Anne Dubé - La Débrouille (Rimouski)
Richard Foy - Le Pignon Bleu (Québec)
Michel Gaussiran - CAPC (Montreal)
Florence Isabelle - CLSC Seigneurie Beauharnois (Valleyfield)
Gisèle Laramée - ENAP (Québec)
Hector Ouellet - CRSC, Université Laval (Québec)
We sincerely hope that these manuals will be of use and that the organizations for which they have been published will find them of value in their efforts to develop services offered to children and parents in their communities.
Introduction
This first manual presents and clarifies the main ideas,
topics, and notions relating to the concept of
evaluation.
It offers an overview of evaluation, including:
- a preliminary discussion about the difficulties in
undertaking a program or intervention evaluation;
- clarification concerning the nature of evaluation;
- a description of the different types of evaluations;
- a synthesis of requirements for evaluation to be feasible.
This manual concludes with an appendix outlining the
history of evaluation and a short glossary that contains the
more common terms pertaining to the subject.
Section 1 Injecting Some Realism
All You Have to Do Is Show Us That Your Project Works
How many times have organizations had to deal with
this kind of remark from a program sponsor? And
furthermore, how many times have these same
organizations really accepted the requirement as being
normal? "After all, why would the Minister or Regional
Board provide the funding I've requested if I can't show
them that I'm productive? "
Demonstrating the effectiveness of an intervention, pro-
ject, or program may appear, at first glance, to be obvious
and normal, therefore possible, if not relatively easy. Expe-
rience has shown, however, that the reality is sometimes
quite different and taking on the commitment of demons-
trating effectiveness can be a frustrating experience.
It must be understood, moreover, that evaluating effectiveness is only a part of what is meant by evaluation: it covers much more territory than simple effectiveness.
Evaluation is just as appropriate before an intervention as
after. When it precedes action, evaluation touches on
issues underlying program implementation. It helps show
how the project has been designed to respond to needs and
to reach the target population. When occurring after an
intervention, evaluation reveals what the program (its
activities) has achieved, what it has produced (its
outcomes), and program effects or results (its effectiveness
and impact).
Normally, it is assumed that, if a project or intervention
has been effective, the situation will have significantly
changed for the better, that is, in the way targeted by the
project. Consequently, the direction and scope of the
changes are considered as being the best measures of a
project's or intervention's effectiveness. In other words,
there is a direct relationship between the magnitude of the
observed changes and the effectiveness of the project or
intervention. On the other hand, we consider that a project or intervention has been ineffective if the observed changes are small or nil. Even worse, a project or intervention is deemed harmful if the observed changes are completely contrary to the targeted direction.
But chances are that these arguments are incomplete, if not
outright mistaken. Let's analyze them using some
examples.
La Politique de la santé et du bien-être, published by the
Government of Quebec in 1992, lists the following among
the 19 objectives set for the next ten years:
"OBJECTIVE 1: By 2002, reduce the number of instances of sexual abuse, violence, and neglect directed at children, and attenuate the related consequences. "
Around 2002, when we are getting ready to determine the
scope of the phenomenon of sexual abuse, violence, and
neglect affecting children:
- if, in 2002, the number of instances of sexual abuse,
violence, and neglect directed at children has
significantly decreased from 1992, should we conclude
that the activities carried out from 1992 to 2002 were
effective?
- if, in 2002, the number of instances of sexual abuse,
violence, and neglect directed at children remains the
same as in 1992, should we conclude that the activities
carried out from 1992 to 2002 were ineffective?
- if, in 2002, the number of instances of sexual abuse,
violence, and neglect directed at children has increased
from 1992, should we conclude that the activities carried
out from 1992 to 2002 were harmful?
The answer to these three questions should probably be
"No one knows!" The rationale underlying this response
is basically that the phenomena of sexual abuse, violence,
and neglect directed at children are highly complex. The
factors influencing them are many and, still today, very
poorly understood. The same can be said about how these
factors impact on behaviour that results in abuse, violence,
and neglect. Lastly, many of the factors that could be
involved in a decrease or increase in the number of
instances of abuse, violence, or neglect may not lend
themselves to direct intervention or, at least, may not
respond to an intervention program of limited duration.
Under such conditions, what hope is there of demon-
strating that the activities carried out were effective in
reducing the number of instances of abuse, violence, and
neglect? To do so, it would have been necessary to:
A- identify a single factor or a very limited number of
factors that, if not present, would reduce the number
of instances; and
B- remove these factors or cancel their effects.
But this could quite easily not be the case here.
Consequently:
- it is entirely possible that the activities carried out from
1992 to 2002 to reduce the incidence of abuse, violence,
and neglect were very effective, even if the number of
instances remained unchanged in 2002. It may be that
the activities eliminated or inhibited certain abuse
factors, while others came into play,1 offsetting the
interventions carried out and resulting in an
unchanged rate of abuse, violence, and neglect. The
claim could be made that the rate might have been
higher were it not for the activities carried out.
- a case could be made against the effectiveness of the
activities, despite a drop in instances of abuse, violence,
and neglect over the period. This would be the case if
several factors disappeared on their own or were
canceled out by others, while the activities targeting a
reduction may have had no effect whatsoever.
It is possible, for example, that the stress related to job loss could lead to abusive or violent behaviour if the individuals do not have social support. It is conceivable that, if a specific program to combat this type of behaviour were launched during a period of increased layoffs, the rates of abuse, violence, and neglect could increase in spite of the program's effectiveness.
It is the nature of social programs that those who
implement them typically have a twofold effect. As would
be expected, workers strive to attenuate the problem at
hand. But in doing so, they call attention to the existence
and extent of the problem. This can easily create the
impression that the incidence of the problem has increased
since they first drew attention to its existence.
We can illustrate this phenomenon using child sexual
abuse, once again, as an example. As people working in the
field analyze, call attention to, and publicly denounce the
existence and extent of this problem, we can be left with
the impression that the problem is becoming more serious
as the number of people who succeed in bringing it into
the public eye increases.
Conjugal violence presents a similar picture. The number
of cases of violence seems to grow as the number of groups
and people working on the problem increases. This
impression could be linked to the fact that these
individuals and groups often play both screening and
intervention roles.
What can we conclude from these remarks? First of all,
that demonstrating the effectiveness of a program, project,
or intervention must be undertaken with caution.
Program sponsors must be aware of this and ensure, before
requiring it, that the organization has the necessary
resources and tools.
The Many Questions That Can Be Answered by Evaluation
Since evaluation cannot be focused solely on measuring
the effectiveness of a program, what can it be used for? It
can provide answers to a multitude of questions relating to
developing the program, providing a framework for its
implementation, and taking stock of the activities carried
out. The following are examples of applicable questions.
A- Developing the Program
- What is the nature of the targeted problem?
- Is there a need for the program or service in the
community?
- What are the types and quantities of resources
required by the project? How can the resources be
obtained?
- What objectives should be set?
- Who would use this type of service?
- What is the typical profile for these clients?
- Do these objectives correspond to what we want to
promote?
B- Providing a Framework
- What is the profile of the individuals participating in
the program? Does this profile correspond to the
expected profile? If not, how do they differ and why?
- What activities are being carried out under the
program?
- How much time are participants putting into it?
- In which activities do they generally participate?
- What is their level of satisfaction?
- Which of the program's activities do they most
appreciate?
- How many have stayed with the program to the end?
- How do they differ from those who left the program
early?
- How can the withdrawals be accounted for?
C- Taking Stock of Activities
- To what extent did the program reach the targeted
clients?
- To what extent did the participants modify their
attitudes and behaviour?
- What efforts did the staff put into program activities?
- What were the consequences of the program in the
community?
- And so on.
Answering these questions is just as much an evaluation
activity as trying to measure the effectiveness of a program
or intervention. In addition, organizations find that this
kind of evaluation often has the advantage of being easier
to deal with.
SUMMARY
Evaluation doesn't only concern the effectiveness of an
activity. It can also be used in developing the activity,
providing a framework for it, and taking stock of the
activities carried out.
Section 2 Defining Evaluation
What is evaluation? That's the first question to ask
yourself in order to understand the process
described in this manual.
The common sense answer is that evaluation is making a
judgment based on certain criteria. Whenever you want to
judge an activity and are able to state the rationale for your
judgment (that is, the criteria on which the judgment is
based), you are evaluating. We all make judgments when
making budget decisions, implementing an activity, or
taking action. We decide to do one thing rather than
another, after making an evaluation based on what we
consider the most relevant criteria.
Evaluation: A Word With Many Meanings
In English, the word evaluate has a number of meanings.
Sometimes, it can be used to mean to estimate or assess
something; in other words, to set an approximation of its
worth or value. For example, you can estimate the size of a
crowd to be about 10,000 without having an exact number.
Our CLSCs (Local Community Service Centers) talk about
intake-evaluation-referral to describe the initial contact
with the client that the practitioner uses to direct the client
to the appropriate resources, depending on the needs
expressed by the client. In this context, we can say that the
practitioner evaluates in order to act.
The process described in this manual is evaluating the
activities of organizations. In this context, the word
evaluation has a more precise meaning: judging an
activity, its progress, and its results based on certain
criteria. In other words, evaluation is checking that the
organization's action is based on needs. It's drawing the
relationship between what the organization wanted to do
and what it actually accomplished. It's putting the
organization's philosophy and its real operations on the
table for scrutiny. It's measuring the effect, results, and
impact of the organization's interventions.
Evaluation Is Essential to Action
Evaluation provides organizations with the opportunity of
taking a critical look at their activities in order to improve
them.2
It allows program staff and participants alike to deepen
their understanding of the activity in which they are
involved and to have a clearer grasp of its progress, scope,
and effects. The goal is to learn from the activity and to
better understand it in order to enhance performance.
Assessing what you do improves your understanding of it
and gives you the means to maximize the targeted
outcomes. It can also lead to immediate or future decision-
making. Evaluation is therefore a way of giving yourself a
right of review with an eye toward change. Evaluation
For more information on this concept, we invite you to read Ricardo Zuniga: Zuniga, Ricardo (1994). L'évaluation dans l'action. Collection INTERVENIR. Les Presses de l'Université de Montréal, 200 pp.; and Zuniga, Ricardo (1994). Planifier et évaluer l'action sociale. Collection INTERVENIR. Les Presses de l'Université de Montréal, 225 pp.
implies a certain degree of risk, however, since taking a
close look at what you're doing often seems threatening.
Evaluation can contribute to understanding the
organization's action in different ways:
- it helps identify and clarify the steps in the program, its
organization, and process with respect to achieving the
targeted outcomes and changes;
- it provides for the organization of available information
relating to the activity, its planning, process, and
outcomes;
- it allows criteria to be set, which everyone is aware of
and on which they can base their own judgments about
the program;
- it can facilitate communication between participants,
program personnel, and decision-makers at various
levels. During the evaluation process, participants are
called on to express their opinions about the program,
which is useful in adjusting its components. In this way,
program workers can avail themselves of systematic
information on the effectiveness of their actions, which
makes it easier for them to give an accounting of,
explain, and justify their program. On the decision-
making side, it provides access to information on how
the program is working and the factors affecting its
functioning and outcomes.
Evaluation Is Not Control
Too often, evaluation is seen as an obligation that is
imposed by and applied at the request of an outside agency
(most often the sponsor). Evaluation must not be viewed
as administrative control imposed from outside on the
organization or program staff.
It's not a question of being assessed, that is, being the object
of evaluation, as if the evaluation were outside the
activity, personnel, and participants in the program or
organization. To the contrary, evaluation is an integral
part of a collective undertaking. It must be seen as a form
of learning, a means of worker development, and
assurance of improvement. This means seeing yourself as
a participating subject in the evaluation. In other words,
adopting the process as your own, participating in it,
learning lessons from it, and making sure that the
organization benefits from the experience.
Two Critically Important Perspectives
Although the evaluation of an organization's activity or
intervention program can be viewed from many different
perspectives, two are especially important. They can be
summarized by the questions below.
1) Is the activity being carried out as planned? This
involves examining how the program works, how the
program components contribute to attaining the objectives, practitioner/participant involvement, and so on.
Example Let's suppose that there's a program that has the goal of preventing children from adopting violent behaviour. This program is premised on the idea that promoting "non-violent" toys can prevent violent behaviour in children. The program objective could be establishing a toy lending library that would provide community children access to a selection of non-violent toys. The program analysis could consist in examining the kind and number of toys made available to children; the profiles of children who come to the toy lending library or who use the toys; and the times of use. Attention could also be given to determining the factors that facilitate or impede library use.
2) To what degree is the program producing the targeted
results? This means examining the effects of the activity
on the participants and formulating a judgment by
comparing the results with the program objectives, based
on clear, objective criteria established beforehand.
Example If we were to evaluate the same program from the perspective of the effects of the activity, we would look to see if the targeted objectives had been achieved. If one of the program objectives were to provide training and specific tools to parents to help their children, then it would be necessary to evaluate to what extent the objective had been achieved. In other words, were the parents better prepared after the program than before? Changes in the parents are assessed with well-defined criteria that allow the measurement of newly acquired knowledge, attitudes developed, and behavioural changes.
From whatever viewpoint the evaluation is made, the
quality of information gathered is very important because
it provides the basis for judging the activity or program,
and for decision-making. The information must therefore
be adequate, reliable, and valid.
Reliable information means information that faithfully
reflects reality. In other words, anyone repeating the
process would come to the same conclusion. Despite all of
the evaluator's efforts to gather reliable information, it
may fall short of the mark. This may be attributable to
respondent mood, fatigue, or motivation. Sometimes,
however, it may be due to the evaluator, who may also tire or lose focus.
A lack of reliability can also be linked to the setting where
the information is collected or because the questions were
poorly formulated during the interview or on the written
questionnaire. Furthermore, errors can occur when the
information is processed. These potential sources of error
need to be taken into account and steps taken to counter
them.
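The idea that reliable information is information anyone could reproduce can be checked empirically. A minimal sketch (not from the manual; the scores below are invented) of a test-retest check: the same questionnaire is administered twice to the same respondents, and a correlation near 1 between the two sets of scores suggests the instrument yields stable information.

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented questionnaire scores: same six respondents, measured twice.
first_pass  = [12, 15, 9, 20, 17, 11]   # scores at time 1
second_pass = [13, 14, 10, 19, 18, 11]  # same respondents, two weeks later

print(round(pearson_r(first_pass, second_pass), 2))  # → 0.97
```

A correlation this close to 1 suggests the measurements are stable; a low value would point to one of the sources of error discussed here (respondent state, evaluator fatigue, the setting, or poorly formulated questions).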
The information must also be valid, which means that it
reflects what the evaluator wants to know. To illustrate, if
the evaluator wants to determine if a program increases
parental self-esteem, he must ensure that the instrument
used does a good job of measuring self-esteem and not, for
example, the ability of participants to express their
emotions.
SUMMARY
The evaluation must respond to the organization's needs.
Above all, it must provide for summarizing the knowledge
gained, better defining the objectives, examining
failures, and improving the quality of the activity. It must
serve to identify and make known the necessity, quality,
and relevance of the organization.
Evaluation is:
- understanding the foundation on which an activity is undertaken;
- outlining the progress of an activity;
- observing the effects of the activity by gathering and analyzing reliable, valid information;
- judging the effects of the activity by comparing them with the objectives according to precise, explicit criteria;
- better understanding the activity;
- balancing outcomes and costs;
- finding ways of enhancing the activity in order to bring it more into conformity with the targeted goals.
Section 3 Different Types of Evaluation
There are a number of types of evaluation. Each type
targets different goals and is carried out differently and
at different times during the program. Accordingly, the
choice of the evaluation method depends on the stage of
the program and what you want to evaluate.
A Process Related to the Target of Evaluation and the Steps in the Activity
What do we want to evaluate? That's the first question
that has to be asked and which determines the overall
evaluation process. Indeed, this question provides for
defining the target of the evaluation. Evaluation can focus
on different targets, such as the need for an activity and its
planning as well as on the process, the effects, or the
impacts of the activity. The evaluation of each of these
targets applies to a different stage of the activity.
You might want, therefore, to evaluate if a specific
problem situation needs to be addressed. This is needs
evaluation.
Example If different indicators in a community lead us to suspect that mothers under the age of 20 have no support resources, we can check the validity of our impression and, if relevant, implement an advocacy or referral system. Before undertaking this activity, however, we have to evaluate the real needs of young mothers, determine for which situations it would be realistic to implement a support program for them, and set the orientation that the program should have to respond to the problem or observed needs.
Obviously, this kind of process should be carried out in the
initial stage of the activity, when the problem and need for
action are being recognized. This is likewise the planning
stage, when feasibility is assessed. In other words, when
you determine the scope of the program and whether you
have the resources needed to carry it through.
You may also want to assess program implementation and
performance on an ongoing basis to ensure that it is on
track, that the targeted clientele (young mothers) is being
reached, and that the activities and operations relating to
the objectives are being achieved. This is referred to as a
process, implementation, or formative evaluation.
Formative evaluation can help in solving problems that
crop up, making any necessary adjustments, and fine-tuning
the program while it is running. It should be
evident that this kind of evaluation must be undertaken
when the program is implemented and delivered.
The end results of the program can be assessed, which is
referred to as summative evaluation. In this case, the focus
is on evaluating program effectiveness, which means
determining to what extent the targeted objectives and
outcomes have been achieved. In other words, the
evaluation focuses on whether the program caused the
targeted changes in the participants and if it had secondary
effects, beneficial or otherwise. You can also evaluate
whether the program warrants continuation and compare
its effectiveness to other programs. Although this kind of
evaluation is carried out in-process or once the activity has
been completed, it must be planned for from program
implementation because a baseline must be established to
compare the situation prior to and after a certain number
of activities have been carried out.
Lastly, you may want to evaluate program spin-offs in the
community. This is termed impact evaluation. In this case,
the focus is on determining the effects of the program on
participants, as members of the community, and on the
community as a whole. An attempt is made to define the
state of the community after the program and to examine
the predicted and unexpected effects produced by the
program. This type of study can furthermore be used to
compare the program's effects to those of existing programs
with similar objectives. Organizations, however, rarely
have the resources necessary to carry out program impact
evaluations.
The table below summarizes the different types of
evaluations and correlates them to specific program stages.
SUMMARY
Stage of Action => Type of Evaluation

• Awareness that something has to be done
• Action planning
  => Needs evaluation and Feasibility evaluation

• Program implementation and delivery
• Progress of action
  => Process evaluation or formative evaluation

• Program results for participants
  => Effect evaluation or summative evaluation

• Measurement of spin-offs for the community
  => Impact evaluation
Section 4
Needs Evaluation and Feasibility Evaluation
Needs evaluation consists in gathering information in order to identify the situation, objectives, and
conditions to be fulfilled in planning an activity or
program. It normally occurs in the first stage, that is,
during planning. Any organization that wants to
implement a new program should begin here. It ensures
that the proposed intervention responds to a real problem
and that the expected solutions correspond to a clearly
identified, specific need.
Three aspects are examined in evaluating needs: the
problem situation, the targeted objectives, and the
conditions that must be met.
Needs evaluation consists in gathering the information
relevant to the problem situation and to the needs of the
population with respect to services. The program's
objectives are determined based on this information and
program orientation is set. It is important to ensure that all
the stakeholder groups agree on program objectives. Lastly,
an attempt is made to determine to what extent the
program being planned responds to participant needs.
Even though our definition puts needs evaluation as a
preliminary to program implementation, it can be useful
in determining if a program still addresses the situation in
the community. In this context, needs evaluation can be
carried out while the program is ongoing. Needs
evaluation can therefore provide insight into why a
program no longer produces the expected outcomes in a
given setting.
A feasibility assessment, on the other hand, attempts to
determine if the program is realistic and relevant. It looks
at whether the organization has the necessary human,
material, and financial resources to carry out the program.
It establishes which of the possible objectives appear the
most readily achievable given the available resources and
possible constraints. The scope of the program is also set.
SUMMARY
NEEDS ASSESSMENT

OBJECTIVE: Identify the situation, the objectives, and the conditions to be fulfilled in planning an activity or program.

Questions to Ask Yourself

The Situation:
• What is the problem situation?
• How do clients perceive their needs?

Objectives:
• Which activities should be stressed in the activity to be carried out?
• Do the other stakeholder groups in the community agree on the target objectives?
• What direction should the program take?

Conditions:
• To what extent does the program address the identified needs?
• Does the program still address the expressed needs?
• Why does a program fail to yield the targeted outcomes or produce unexpected results in a given setting?
FEASIBILITY ASSESSMENT
OBJECTIVE:
Examine the conditions
governing program implementation.
• Does the organization have the human, material, and financial resources required to carry out the activity?
• What will be the program's scope?
• Is the community ready or inclined to accept the activities targeted under the program?
Section 5 Process or Implementation Evaluation
(Formative Evaluation)
Process, implementation, or formative evaluation
basically allows the organization to determine if it is
achieving what it planned to and how, and, if there is a
discrepancy between the planned and actual activities, to
account for it.
This kind of evaluation is a means of intervention
feedback for making ongoing adjustments. It implies
continuing questioning by the main stakeholders. As such,
formative evaluation can begin at the start of program
implementation and continue on to its end.
Formative evaluation attempts to identify how the activity
is progressing towards achieving its objectives. The main
question to ask yourself is "Does the project correspond to
what we wanted to do at the outset?" This requires the
gathering of descriptive data about the participants, staff,
and program.
You need to look at how program activities are
functioning and consider the impact of services on
participants. The latter includes finding out if services are
delivered in a consistent manner, determining the nature
of participant involvement and participation throughout
the program, and defining the quality of interaction
between the practitioners and participants.
Example In order to determine the program's components, data must be gathered about the sociodemographic characteristics of the participants (age, sex, income, and employment status), about their places of residence, about the reasons that drew them to the program, and about the services that they received prior to joining the program. The number of workers, their level and type of training, and their experience must be known. Consideration must also be given to the nature, contents, and frequency of activities; the geographic and physical access to facilities where the activities occur; how the service is delivered; the duration of the service; referral sources to the service and other types of services; and so on. To find out how the activity is functioning, you must specifically analyze how the program is actually carried out. In this respect, participant flow through the program, interactions between participants and program staff, and participant involvement in activities could be defined.
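The descriptive data listed in the example above can be summarized quite simply once gathered. The following is a minimal sketch (not from the manual; the records, field names, and values are invented for illustration) of tabulating participant characteristics and referral sources for a process evaluation:

```python
from collections import Counter

# Illustrative participant records; every field and value is invented.
participants = [
    {"age": 24, "sex": "F", "employment": "unemployed", "referral": "CLSC"},
    {"age": 31, "sex": "F", "employment": "part-time",  "referral": "word of mouth"},
    {"age": 19, "sex": "F", "employment": "unemployed", "referral": "CLSC"},
    {"age": 27, "sex": "M", "employment": "full-time",  "referral": "CLSC"},
]

# Count how participants were referred to the program.
referral_counts = Counter(p["referral"] for p in participants)

# Average age of participants.
mean_age = sum(p["age"] for p in participants) / len(participants)

print(referral_counts.most_common(1))  # most frequent referral source
print(mean_age)                        # 25.25
```

Even a simple tally like this helps answer the formative question of who the program is actually reaching.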
Clarifying Objectives
When attempting to gain a clearer understanding of how
well a program is working, you need to establish its
objectives and ensure that they are clear and definite.
Program objectives are often vague or too general.
Example Let's suppose that an intervention program for parents and children aged 0-5 has the objective of preventing neglect of and violence to young people in a Local Community Service Center's (CLSC) territory. Carrying out a formative evaluation of the program would require specifying, clarifying, and making the objectives measurable. What are the program's specific objectives? Offering respite to parents? Improving parenting skills? Once the objectives have been clarified, you need to determine if the program's activities effectively achieve them. Are the awareness workshops on the needs of the very young in line with the specific objectives? What are the workshop contents? What can be achieved with workshops? With what staff? What is the quality of parental participation in the workshops? Is home support provided in a consistent manner and when parents need it the most? Is this support appropriate for the category of parents targeted by the program?
In the context of formative evaluation, defining the
objectives to be achieved by the program allows you to
design the program so that it can be assessed in terms of
effects and outcomes. (For more on this topic, see Section 9
"Is the Program Evaluable")
Making Components Measurable
Obviously, in order to be able to describe and understand what is happening when an activity is implemented and is ongoing, you have to define its elements in the most precise and measurable terms. This is referred to as operationally defining the variables.
Example To measure level of education, you could count the number of years of schooling or ask for the last degree earned or the field of study. To measure the amount of time spent in a program, you could calculate the number of weeks during which participants took part in activities. To measure the frequency of contacts between participants and program staff, you could calculate the number of phone calls, interviews, or meetings they have had.
To illustrate, if you want to find out if parents are
committed to action when they take part in activities, ask
yourself questions such as these:
- Do they arrive on time and stay till the end?
- Are they attentive?
- Do they ask questions?
- Do they do the suggested activities?
- Do they talk about the topic under discussion or about
other things?
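Indicators like the ones above can be turned into an operational measure directly. A minimal sketch (the indicator names, scoring rule, and session data below are illustrative assumptions, not part of the manual): each indicator is recorded as observed or not for a session, and engagement is the share of indicators observed.

```python
# Illustrative operational definition of parent engagement:
# each indicator is recorded True/False per session, and the
# engagement score is the fraction of indicators observed.
INDICATORS = [
    "arrived_on_time", "stayed_to_end", "was_attentive",
    "asked_questions", "did_suggested_activities", "stayed_on_topic",
]

def engagement_score(observations):
    """Fraction of indicators observed in one session (0.0 to 1.0)."""
    return sum(observations[name] for name in INDICATORS) / len(INDICATORS)

# One invented session record:
session = {
    "arrived_on_time": True, "stayed_to_end": True, "was_attentive": True,
    "asked_questions": False, "did_suggested_activities": True,
    "stayed_on_topic": False,
}

print(engagement_score(session))  # 4 of 6 indicators observed, about 0.67
```

The point is not the particular scoring rule but that, once defined this way, the same observation grid can be applied consistently across sessions and practitioners.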
By measuring the various program components in this
way, you can come up with a sound description of it. But
you have to go beyond simply describing the program
and compare the information with what you intended
to do at the outset.
Reference points are therefore introduced to evaluate program quality. These criteria will vary from one situation to the next, and even from one sector of activity to another, because they are dependent on the specific conditions that apply to the project. Generally, these criteria are linked to program objectives, community needs, cost of services, and participant satisfaction.
SUMMARY
PROCESS OR IMPLEMENTATION EVALUATION
(FORMATIVE EVALUATION)

OBJECTIVE: Determine whether the program is being carried out as planned and account for any discrepancies between the planned and actual activities.

Questions to Ask Yourself

Program Objectives:
• What are the objectives?
• How do the program activities lead to achievement of the objectives?
• How can achievement be measured?

Program Operation:
• What are the program's characteristics (staff, activities, administration, materials)?
• How does the program run or operate?
• What allows the program to best reach the target population and to best use all the resources at its disposal or, if applicable, what prevents it from doing so?
• What problems were encountered during the program and how were they resolved?
• What adjustments are needed to better achieve the objectives?
• What do participants experience in the program?
• Is the program being carried out as planned?

Observations about the Program:
• Is the program or aspects of it better suited to certain types of participants?
• What are the strong and weak points in program operation?
• Which activities or combinations of activities best correspond to each objective?
• What variations are there in the application of the program from one practitioner to the next?
Section 6
Effect Evaluation
(Summative Evaluation)
Summative evaluation of the activity makes it possible
to judge the effects, outcomes, or results of an intervention program. There is often confusion about the
meaning of the word "result". An activity generated by a
program is not a result. For example, the fact that 10
workshops involving more than 64 parents were held
during the course of a parenting-skills program is not a
program result but an activity. An outcome, result, or
effect is rather what is observed in the participants or
the community with respect to the achievement of a
program objective. This type of evaluation aims mainly at
determining if participation in the program produced
changes in the participants or the community. Planning
for summative evaluation must start as soon as the
program is implemented since, in most cases, it requires
baseline data for comparison with program end results.
A Twofold Process
Assessing program outcomes is a twofold process.
- It consists in demonstrating that a difference has occurred in a participant from the moment he entered the program to a point when he had carried out a certain number of program activities. This is referred to as change measurement.
- It consists in ensuring that the change that has occurred is due to the program and not to a range of other circumstances. This is what's known as change attribution.
Summative evaluation is often complex, requiring the
involvement of resources skilled in evaluation.
Example Let's take the example of infant stimulation workshops for children aged 3 to 5. If you want to evaluate the effects of the workshops, that is, demonstrate that they contributed to the children's development, it's not enough to state that the children generally benefited. You have to be able to specify the changes and attribute them to the program, not to a series of elements normally encountered at this stage of child development.
Let's take a look now at how changes can be measured and
what strategies can be used to attribute the changes to the
program.
a) Change Measurement
Measuring change in a participant starts by gathering
information about elements that have a direct link to
program objectives. For example, you might want to
measure the parents' knowledge about child development,
attempt to define the kind of social-support network the
family has, or try to determine the child's level of
development.
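The pretest/posttest logic of change measurement can be sketched in a few lines. This is a minimal illustration (participant labels and scores are invented, and the instrument is left abstract): the same measure is taken at program entry and after a certain number of activities, and the difference is the measured change.

```python
# Illustrative change measurement: the same instrument is scored at
# program entry (pretest) and after the activities (posttest).
# All participant labels and scores are invented.
pretest  = {"participant_1": 10, "participant_2": 14, "participant_3": 8}
posttest = {"participant_1": 15, "participant_2": 16, "participant_3": 13}

# Per-participant change: posttest score minus pretest score.
change = {p: posttest[p] - pretest[p] for p in pretest}

# Average change across participants.
mean_change = sum(change.values()) / len(change)

print(change)       # individual changes
print(mean_change)  # (5 + 2 + 5) / 3 = 4.0
```

Note that this only measures change; whether the change can be attributed to the program is the separate question addressed below.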
Two approaches can be used in gathering this information.
The first is based on using standardized instruments that
have been validated. When administered and analyzed
according to standards of scientific research, these
instruments provide pertinent information. And, since
they result in a score, they can be used for statistical
analysis. For example, a participant's scores can be
compared when the instrument is applied at two different
times. The scores of different participants can also be
compared. This approach comes under what is termed
quantitative research. The items below are examples of
such instruments.
- Self-esteem rating scale;
- Test on knowledge about child development;
- Child psychomotor development scale;
- Social-support network rating scale;
- Parenting skill confidence rating scale.
The second approach is based on a research process in
which the items measured are less clearly defined. For
example, you could try to determine the parents'
knowledge by asking them to talk about their child's needs
or by observing their participation in certain activities with
their child. This approach has the advantage of bringing to
light unexpected aspects. It also provides a more general
picture of individuals. On the other hand, it has specific
requirements to ensure that the information gathered is
reliable, that is, it reflects reality.
Example If you ask someone how they behave in a certain situation, they can easily provide responses that don't correspond with what they really do. They often give what they consider "the right answer". Similarly, the question itself can suggest a response. To illustrate, asking "Do you feel qualified to take care of your child when he is having a fit of anger?" is quite different from "Tell me what happened the last time your child had a fit of anger."
This approach, which comes under qualitative research, is
often preferred because it relies less on a standardized
process. In order to yield pertinent, useful information,
however, it must be carefully planned and carried out
according to the same strict scientific standards as
quantitative research.
b) Different Strategies for Attributing Change to the
Program
Even if changes are measured very carefully, a number of
elements can lead us down the wrong path and make us
erroneously attribute the changes to the program. This is
referred to as threats to internal validity. Examples of them
are maturation (natural changes that occur with time as
part of the normal development of children 0 to 5 years of
age participating in the infant stimulation workshops) or
events that have nothing to do with the program but
which occur at the same time. Instability or changes in the
procedures used (measurements, samples, and so on) for
measuring change may also be at work.
There are a number of strategies that can be used to
counter these problems and measure changes that actually
derive from the program.
Strategy 1: Comparison With an Identical Group That Did Not Participate in the Program
In order to ensure that the infant stimulation workshops
actually affected the children aged 3 to 5 that participated in
them, we can compare them to a group that didn't attend
the workshops. The second group is referred to as a control
group, comparison group, or reference group. If changes
are observed in the children that took part in the
workshops but not in those in the control group, then we
could assume that the changes are due to the program and
not to other causes, such as normal development.
In order to compare the two groups, however, you need to
ensure that they are really similar (share the same
characteristics) before the start of the program. The most
elaborate model, often referred to as the experimental or
control-group model, consists in randomly separating the
children into two groups before the start of the program.
The first group takes part in the workshops, while the
second does not (reference group). It should be obvious
that the second group cannot participate actively in the
program until the evaluation has been carried out. The
children can, of course, be invited to take part in the
workshops at a later date.
This evaluation strategy can give rise to ethical questions
in certain circumstances. Organizations delivering
assistance programs often find it difficult to apply this
strategy because it implies that a portion of the participants
eligible for assistance will either be deprived of it or receive
it later. It can nevertheless be used when there are more
participants than places in the program. In such cases, the
strategy can be used and the control group could take part
in a second running of the program.
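The random split at the heart of this strategy can be sketched in a few lines of Python. Everything below is invented for illustration (the group size, the scores, and the measurement scale are assumptions, not data from this manual):

```python
import random

# Hypothetical list of children eligible for the workshops (names invented).
children = [f"child_{i}" for i in range(20)]

# Randomly assign half to the workshops and half to the control group.
random.seed(42)  # fixed seed so the example is reproducible
random.shuffle(children)
workshop_group = children[:10]
control_group = children[10:]

# Invented before/after scores on some development scale; a real
# evaluation would use a validated measurement instrument.
before = {child: 50 for child in children}
after = {child: 58 if child in workshop_group else 51 for child in children}

def average_change(group):
    """Average after-minus-before change for a group of children."""
    return sum(after[c] - before[c] for c in group) / len(group)

# A markedly larger change in the workshop group than in the control
# group is what lets us attribute the change to the program.
print(average_change(workshop_group))  # 8.0
print(average_change(control_group))   # 1.0
</imports>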
Strategy 2: Comparison With a Very Similar Group
If no randomly selected control group can be established as
described above, program participants can be compared to
another group of children who are quite similar but who
have not taken part in the program. For example, you
could select a group of children aged 3 to 5 whose parents
share the same characteristics. This is known as the
quasi-experimental model.
Since the two groups are considered similar, they can be
compared. If changes are observed in the children that took
part in the workshops but not in the control group, then
the changes may be due to the program.
Strategy 3: The Same Group Before and After the Program
If no control group can be set up, then the comparison can
be based on the situation of the group of children before
they start the workshops and after they finish, or even after
they complete various stages of the program.
This model lets you determine the children's progress
from the beginning to the end of the program, although it
cannot confirm that the observed changes were brought
about through their participation in the program. This
model is referred to as the non-experimental model.
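Numerically, this before-and-after comparison amounts to computing each child's change and averaging it. A minimal sketch with invented scores (the numbers and the scale are assumptions for illustration only):

```python
# Invented before/after scores for the same group of five children.
before = [42, 38, 45, 40, 44]
after = [50, 47, 49, 46, 53]

# Each child's progress from the start to the end of the program.
changes = [a - b for b, a in zip(before, after)]
mean_change = sum(changes) / len(changes)

print(changes)      # [8, 9, 4, 6, 9]
print(mean_change)  # 7.2
```

As the text notes, without a control group this average progress cannot be attributed to the program alone: maturation or outside events may explain part of it.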
Strategy 4: The Group Starting the Program and the Group Completing the Program
Another strategy can sometimes be applied if you were
unable to gather the data about the children at the outset or
decided not to do so. In this case, the information is
gathered once the children have finished the program and
then compared to that of a group that is getting ready to
start it. While this doesn't provide a profile of the
children's progress in the program, if the two groups are
similar enough for comparison and if there are differences
between them, then the changes are probably due to the
effects of the program.
Strategy 5: The Group After the Program
If you cannot opt for strategy 4, you can still gather the data
on the group of children after they have finished the
program. Since there is no reference point, every effort
must be made to use means or tools that will provide
information that is as valid as possible. Subjective
opinions (such as participant's satisfaction with the
activity) should be avoided; strive for information that
would indicate, looking back, whether there has actually
been a change in participant attitude or behaviour.
Although this is far from being the best strategy, some
measurement of the situation of program participants is
better than none.
Strategy 6: Evaluation of Individual Effects
The preceding strategies are based on the use of identical
measurements of a certain number of people. The basic
idea is to compare the scores of different groups
(participants/control group) or scores obtained at different
times (before/after). You can, however, get indications
about the effects of a program by looking at the specific
results produced in each participant. This strategy is based
on the premise that program results can differ widely from
one participant to the next in nature and extent. The
evaluation would therefore focus on documenting what
changes the program brought about in the participant. The
following information is needed for this strategy:
- a true portrait of the individual upon entering the
program;
- the nature of his participation in the program;
- a true portrait of the individual upon finishing the
program.
This strategy, based on case studies, uses qualitative
methodology. If there is only scant information about the
participant when he enters the program, a retrospective
study could be appropriate. This procedure consists in
retracing the participant's passage through the program
with him mainly to find answers to these questions: What
brought him to participate in the program? What was his
situation at that time? What did he experience during the
program? What is his opinion of the program? What is his
current situation?
Obviously, as in any evaluation process, the utmost rigor is
necessary to prevent biased opinions. To illustrate, asking a
participant "How was the program beneficial? " or "How
did the program improve your situation? " suggests that
the program was useful. Neutral questions are more
appropriate: "What is your opinion of the program?"
Furthermore, if someone directly connected with the
program asks the questions, the participant might be
expected to praise it. This last strategy therefore poses some
specific difficulties for organizations.
The Scope of Effect Evaluation
Assessing the effects of an activity should enable us to
respond to the following questions:
- Did the program achieve its objectives to a significant
extent?
- What is the relationship between program outcomes
and costs?
- Does the program warrant continuation?
Assessing the effects allows a judgment to be made
regarding the program's effectiveness, which is its ability to
attain the targeted objectives. The answer to the first
question leads to a second: "Are the effects adequate given
the costs of the program?" This aspect deals with program
efficiency, which is an evaluation of the results in relation
to the costs.
This puts us in a position to judge if the program warrants
being continued or extended. Program effectiveness can
also be compared to that of other programs. Another
interesting aspect is to attempt to find differences in the
outcomes according to participant type or community.
SUMMARY
EFFECT EVALUATION (SUMMATIVE EVALUATION)

OBJECTIVE: Determine to what extent a program achieves its objectives and if it should be continued or extended.

Questions to Ask Yourself

Program Objectives:
• What program objectives have been achieved?
• Is the program effective: do the methods used enable it to achieve the objectives?
• What are the outcomes of the various program components?
• Is the program efficient: what is the relationship between program outcomes and costs?
• Does the program lead to outcomes other than those targeted?

Program Continuation:
• Does the program warrant being continued or extended?
• Is the program effective when compared to other programs?
• Are there differences in the outcomes depending on the type of participant or community?
Section 7 Impact Evaluation
Assessment of an activity's impact widens the scope of
the effect evaluation. This type of assessment consists
in determining to what extent a program achieves its goals,
that is, to what point the various effects observed enable us
to conclude that the set goals have actually been achieved.
Impact evaluation also lets us highlight the program
effects for the entire population over a longer period. It
entails, therefore, looking not only at the effects on
participants targeted by the program, but also the
consequences for the community and even society. This
evaluation further allows us to identify unexpected
program effects, whether positive or negative.
The program's contribution to achieving these objectives
for the whole of society is likewise considered. Moreover,
the program's cost-effectiveness and cost-benefit analyses
can be included for a wider purview, that is, from the
standpoint of society. Lastly, impact evaluation enables us
to compare the program's effects to those of other
programs with similar objectives. As a general rule,
organizations rarely undertake impact evaluations since
this goes well beyond their mandates.
SUMMARY
IMPACT EVALUATION

OBJECTIVE: Measure program impacts on participants and the community at large.

Questions to Ask Yourself

Achievement of Program Objectives:
• To what extent do the observed effects allow us to conclude that the set objectives have actually been achieved?
• What are the effects on the overall population over a relatively longer time?

Other Program Impacts:
• Were there other outcomes produced?
• Did the program have unexpected or undesirable outcomes?
• What were the program's impacts on the overall community?
• What did the program contribute to achieving the objectives in society as a whole?
• What are the cost-effectiveness and cost-benefit ratios with respect to society?
• Can the program's effects be compared to those of other programs with similar objectives?
Section 8 Two Complementary Approaches:
Quantitative and Qualitative
Two approaches or methods can be used in evaluating
an activity: the formal or quantitative approach and
the qualitative approach.
The Formal or Quantitative Approach
The formal approach is based on the rationale underlying
the program, that is, its formally stated goals and
objectives.
This approach aims at determining if the program
objectives have been achieved and attempts to establish
relationships between program activities and changes in
individuals who have participated in the program. This
approach starts with hypotheses that are checked out using
different research methods.
^ Example Two hypotheses could be put forward in evaluating whether the objectives of a support program for children aged 6-12 years with learning disabilities and their parents have been achieved: 1) children taking part in the program exhibit more changes than those who do not, and 2) children whose parents take part in training activities and in the support group exhibit more changes than children whose parents do not participate in the activities offered.
The formal approach focuses on statistical and quantitative
methods in gathering and analyzing data pertaining to the
program under study.
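As a small illustration of the kind of quantifiable information this approach works with, here is a sketch computing a few typical indicators from invented attendance records (the names, session counts, and numbers are all assumptions):

```python
# Invented attendance records for a hypothetical 12-session program.
sessions_offered = 12
sessions_attended = {"Alice": 10, "Ben": 6, "Chloe": 12, "Dana": 8}

# Typical quantitative indicators: counts, averages, participation rates.
n_participants = len(sessions_attended)
average_attendance = sum(sessions_attended.values()) / n_participants
participation_rate = average_attendance / sessions_offered

print(n_participants)                   # 4
print(average_attendance)               # 9.0
print(round(participation_rate * 100))  # 75 (percent)
```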
The Qualitative Approach
Instead of working from the formal objectives, the
qualitative approach aims at looking at the program in an
overall manner, especially how the program is actually
carried out. Its main focus is on activities carried out
through the program and the way things are done.
In addition to enabling us to develop a detailed description
of the program or certain activities, the qualitative
approach can lead to a judgment about the program based
on evaluation criteria.
^ Example In the case of the support program for children aged 6-12 years with learning disabilities, the qualitative approach would enable us to cull the important aspects regarding the program, its operation, its various activities, its participants, and its practitioners. These elements, which form the core of the program, would then be analyzed based on evaluation criteria.
The qualitative approach uses methods such as interviews,
written documentation, and observation. In gathering
qualitative data, the evaluator strives to understand the
richness of the experience of the people participating in the
program.
SUMMARY

Formal or Quantitative Approach
Characteristics:
• Based on the rationale underlying the program.
• Puts forward hypotheses about the achievement of objectives and uses quantitative research methods to check them.
Method Used:
• Quantitative and statistical methods with quantifiable information: counts, averages, percentages, proportions, participation rates, contact frequency, scores on different scales, etc.

Qualitative Approach
Characteristics:
• Approaches the program as a whole and as carried out.
• Makes a detailed description of the program or one of its parts according to evaluation criteria.
• Based on observations, brings out important aspects and analyzes them based on evaluation criteria.
Method Used:
• Method uses qualitative information: detailed data on activities, events, and observed interactions or behaviour. The information can be gathered through interviews, observation, or use of written program documents.
Summary of Sections 3 to 8
Each type of evaluation is different in its goals, how it is applied, and when it is applied.
- Needs evaluation occurs in the planning stage and consists in gathering the information required to identify the problem situation, the goals to be achieved, and the conditions that must be fulfilled to set up a program. Feasibility evaluation enables us to determine if the intervention is realistic and relevant.
- Process or formative evaluation focuses on the implementation or operation of the intervention. Based on relevant evaluation criteria, it enables us to determine if the means used for the program allow for the objectives to be reached easily.
- Evaluation of the effects or summative evaluation enables us to judge the intervention program's outcomes. Different research strategies can be used to show that the changes in participants are actually due to program participation. This type of evaluation must be planned for at the start of the program since, in most instances, it involves baseline (before and after) comparison.
- Impact evaluation allows us to look at the degree to which a program achieves its objectives and how it affects the community in the longer term, not just the participants.
Two complementary approaches can be used in carrying out an evaluation: the formal approach and the qualitative approach.
Section 9 For a Successful Evaluation of Outcomes
At this point, the question that we could ask ourselves
is "Is it always possible to determine the effects or
outcomes of an intervention or certain activities under
one of our programs?"
The answer depends to a great extent on how the program
is formulated and the conditions under which evaluation
is carried out. For an evaluation to be successful, a certain
number of conditions must be met, relating as much to the
formulation of the program and its objectives as to the
evaluation context. Successful evaluation rests on 1) the
program being evaluable and 2) the right context.
Is the Program Evaluable?
This is the starting point for the evaluation: if the program
is not evaluable, then there's no evaluation. For a program
to be evaluable, it must meet a number of prerequisites.
- The program must be clearly stated.
If the program is not clearly defined and stated, then it
won't be possible to conclude that the outcomes are
actually due to program intervention. Therefore, the
program statement must be adequately structured and
complete, its elements must be well defined, and the
program must really be achievable according to the
established plan.
- The program must have precise objectives.
Program objectives should always be clear since they guide
all action taken. Objective clarity and accuracy are essential,
however, if you want to carry out a summative evaluation
(effects). In fact, summative evaluation uses program
objectives as the basis for its evaluation criteria, so they
must be clearly stated. The general objectives, which
clearly stated. The general objectives, which represent the
overall program intent, and the specific objectives, which
guide intervention, must be clearly defined.
^ Example The practitioners in an assistance program for children exposed to family violence³ have identified four general objectives for participating children:
- to "break the secret" of abuse in the families;
- to learn to protect themselves;
- to experience the group as a positive and safe environment;
- to strengthen their self-esteem.
Since these objectives are very broad, they can be broken
down into specific objectives aimed at bringing about
concrete changes among participants. Each activity carried
out in group meetings should lead to definite changes in
participants' attitudes and feelings. Taken together, the
expected changes should allow the four general objectives to be
attained. Thus, the first objective of "breaking the secret"
is made up of three specific objectives translated into group
activities:
³ See Einat Peled and Diane Davis (1995): Groupwork with Children of Battered Women: A Practitioner's Manual. Interpersonal Violence: The Practice Series. Sage Publications, 225 pp.
- Defining violence => Abuse is not okay and it's not my fault.
- Feeling education => It's okay to feel and express feelings.
- Sharing personal experiences => I am not the only one.
Specific program objectives should always be formulated
in terms of results, which means that they should contain
the following elements:
1) A subject, which designates the target of the
intervention.
The subject answers the question Who?
^ Example The participants in a program are the targets of the intervention.
2) An action verb, which describes the expected outcome of
the program.
The action verb must convey an observable behaviour,
which means that the behaviour is evident and not subject
to a multitude of interpretations.
^ Example Avoid verbs whose range of meaning is too wide and which lend themselves to a number of interpretations, such as "understand," "know," "trust," and "be aware of." To avoid all ambiguity, use verbs that express concrete, directly observable behaviour, such as "identify," "distinguish," "select," "resolve," "provide," "deliver," "increase," "reduce," and "prevent."
3) The effect or anticipated result
The effect or anticipated result (the What? component) is
the product of the activity. It must immediately follow the
action verb.
^ Example After noticing young people hanging out in the street and hallways during lunchtime, an organization decided to serve low-cost hot lunches in a CLSC basement to students from a city high school. One of the specific objectives of the project could be:
"The participants will have a safe place to meet."
(SUBJECT: the participants; ACTION VERB: will have; EFFECT/ANTICIPATED RESULT: a safe place to meet)
These three criteria are essential to formulating an
evaluable objective. Two other optional but very useful
criteria can be added to specify the evaluation context.
4) Conditions for achievement
In other words: How, where, or when? This criterion
aims at determining the conditions under which the effect
or anticipated result should appear.
^ Example We could formulate the following objective about hot meals for high school students:
"By taking part in sports activities organized in the basement after lunch, the students will get involved in worthwhile leisure activities."
(CONDITION FOR ACHIEVEMENT: by taking part in sports activities organized in the basement after lunch; SUBJECT: the students; ACTION VERB: will get involved in; EFFECT/ANTICIPATED RESULT: worthwhile leisure activities)
5) Performance criteria
A minimum performance criterion is set, which, in other
words, answers the question To what extent?
^ Example We could formulate the following objective about student lunches:
"By the end of the first semester, the students will take part in the activities offered on a regular basis, which will be at least four days out of five."
(CONDITION FOR ACHIEVEMENT: by the end of the first semester; SUBJECT: the students; ACTION VERB: will take part in; EFFECT/ANTICIPATED RESULT: the activities offered; PERFORMANCE CRITERION: on a regular basis, at least four days out of five)
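The five criteria above can be thought of as a checklist. As a sketch, an objective could be represented as a simple record and checked for the three essential components (the field names and the check below are inventions for illustration, not part of the manual's method):

```python
# The lunch-program objective above, broken into the five components.
objective = {
    "subject": "the students",
    "action_verb": "will take part in",
    "effect": "the activities offered",
    "condition": "by the end of the first semester",
    "performance_criterion": "at least four days out of five",
}

# Subject, action verb, and effect are essential; the last two are optional.
ESSENTIAL = ("subject", "action_verb", "effect")

def is_evaluable(obj):
    """True only if every essential component is present and non-empty."""
    return all(obj.get(field) for field in ESSENTIAL)

print(is_evaluable(objective))                       # True
print(is_evaluable({"subject": "the participants"})) # False
```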
An objective that has been formulated to be evaluable has
three qualities:
• It is clear: the objective provides indications about what
must be sought out as an observable consequence of the
program.
• It is specific: the objective indicates that once achieved,
something will be different and that the difference will
be visible.
• It is measurable: indicators can be found that enable us to
establish the difference between the starting and ending
situations, which means that we can show that the
difference exists or does not. While it may appear to be
difficult to make an objective measurable, it can be done
when the objective has been formulated according to the
first two qualities: clarity and specificity.
- There is a logical link between the program and its
objectives and anticipated effects.
It is also essential to determine if the relationship
between the program and its anticipated effects is logical
enough to justify evaluating the program.
^ Example
A project has the objectives of promoting self-help among parents, reducing social isolation, and offering parental support. It would be illogical to attempt to evaluate this project on the basis of another objective that has no logical link to the program.
When program objectives are too vague or poorly defined,
they sometimes need to be restated. As mentioned in
Section 5, formative evaluation enables us to more clearly
define the program and thereby formulate objectives that
can be assessed in terms of results.
The Evaluation Context
The decision to evaluate a project or program must also
take into account the context. The organization must base
its decision on adequate knowledge of:
• the motives driving the evaluation: Who has requested the evaluation and why? Is it at the request of a funding
agency or does it come from the program staff? Is the
evaluation centered on enhancement of the quality of
the activity?
The organization must identify the interests, whether
hidden or expressed, underlying the request for an
evaluation.
• the timing of the evaluation: Is this the right time to
carry out the evaluation? Have you allowed enough
time for the program to be phased in, implemented, and
produce outcomes? Evaluation won't yield any useful
information if you try to evaluate the program before it
has had time to produce any results.
• the probable use of the evaluation: If evaluation is used to bring out the need for change, just how far is the
organization prepared to go? Is the evaluation to serve
the interests of the organization? Is it really
worthwhile?
• the direction the evaluation will take: How will the evaluation be interpreted by the various stakeholders in
the social context of the time?
• the real chances of the evaluation being carried out: Can the program evaluation be carried out? What
constraints could the evaluation impose on workers,
participants, and coordinators? Does the organization
have the human resources needed to carry out the
evaluation alone or should the task be given to an
outside agency? Are there research methods available
for what you want to evaluate? What financial
resources can be committed to the evaluation?
• the political, legal, economic, and ethical constraints:
Will any of these constraints place limits on the
evaluation? What can actually be accomplished given
the budgetary limits? Or the political situation? Or the
legal framework pertaining to program interventions?
Are there safeguards in place to respect the individuals
affected by the evaluation?
• the cooperation of workers and coordinators in the evaluation: Do the workers and coordinators agree on
the need for evaluation? Will they cooperate in the
evaluation? Their attitude is essential to the evaluation
context. Without their participation, the chances of the
evaluation being successful are practically nil.
Evaluate the Program, Not the People
Since evaluation puts program workers on the firing line,
they may feel reticent or apprehensive about any kind of
evaluation. You cannot dismiss or minimize their feelings.
These feelings may stem from a variety of causes. Program
workers may not see the necessity of spending money on
evaluation that could be invested in intervention. Or they
may feel that evaluation is not very useful or relevant.
They can be uncomfortable with someone interfering with
their work; they may fear that the evaluation will call their
competence into question or put their jobs in jeopardy.
They may also fear that the evaluation will result in
changes to their roles or duties.
It is essential therefore to involve practitioners so that they
feel that they are stakeholders, support the evaluation
project, and collaborate actively in it. You must not lose
sight of the concept that evaluation targets the program,
not the people. The staff isn't being evaluated; the action taken is.
A number of conditions are important to ensure that the
organization's personnel feel that they are active players
in the evaluation, not the subject of it. Among these are:
• group agreement on the need for the evaluation;
• negotiation of an agreement between the various stakeholders on the elements of the evaluation: goals,
information needs, program effectiveness criteria,
breakdown of resources for the evaluation, respective
evaluation tasks, ways of using the results, and means
for implementing program change;
• agreement on procedures to resolve conflicts that may
occur between the evaluation and the program itself;
• frequent communication, sharing of information, and
encouraging those involved to express their positive
and negative reactions to the evaluation;
• respect, on the part of the evaluators, for group
members and their values;
• use of the evaluation as a tool for staff development and program improvement.
In short, you have to make the evaluation an integral part of the organization's collective mission.
SUMMARY
A certain number of conditions must be fulfilled if the evaluation is to be successful.
• The program must be evaluable:
  - it must be clearly stated;
  - the objectives must be well-defined;
  - there must be a logical link between the program and its objectives or the anticipated program outcomes.
• The evaluation context must be propitious:
  - account must be taken of the motives for the evaluation, the situation to be assessed, and the timing of the evaluation. The use of the evaluation findings and the direction the evaluation will take given the social context must also be considered. Attention should also be paid to the potential for carrying out the evaluation and to the various constraints that could impede the evaluation.
  - Lastly, it is essential that the various people involved, especially the program workers, feel that they are real stakeholders in the evaluation and take an active part in it.
In Conclusion
A serious evaluation of an organization's activity and
enhancement of the quality of its interventions
cannot be limited to "evaluation meetings" at year-end in
which participants give their individual opinions of how
the year went. This kind of undertaking is rather risky
since it is based on subjective impressions. Depending on
the mood at the time, the judgment may be that the work
quality was better (or worse) and the program more
successful (or less) than they actually were. Without
reference data, no meaningful evaluation is possible.
As we have seen above, activity evaluation is a process
that involves a specific, rigorous approach, a systematic
examination of program operation and components, and
precise analysis of program effects and outcomes. This
process implies appropriate questioning in relation to the
program or project context and the type of evaluation to be
carried out. In any event, evaluation must be based on
valid, reliable information and enable us to judge the
implementation, evolution, or effects of an action based on
explicit, objective criteria.
Nevertheless, you need to keep in mind that even the
most objective evaluation possible is there to shed light on
how to improve an activity by highlighting where results
are produced and where problems lie.
The evaluation process cannot take the place of the other
actors, whether they are coordinators, practitioners,
workers, volunteers, or even the participants. To illustrate,
when faced with a program's evaluation results, the
decision-makers must still make their decisions guided by
their own judgment.
For organizations, however, the effort is worthwhile.
Although evaluation may carry risks, it also furnishes
organizations with significant ammunition to
demonstrate that the action taken is relevant to the setting
and responds to the needs of the community.
But we shouldn't fool ourselves: evaluation is inevitable.
There's no way to get around it in the current context in
which government is looking to control spending and the
activities it implements. Given the scarcity of resources,
government must decide to keep some programs but cut
others.
Evaluation is also a way to show the public that adequate
measures have been taken to ensure quality. In addition, it
can showcase how money is being used and where the real
successes lie. So, why not make the most of it?
Social intervention is not exempt from scientific
rationality. An organization cannot win recognition by
simply having good intentions and doing good work. It
has to prove it.
The development of information technology, which now
makes it easier to process the large quantities of data that
evaluation requires, constitutes an additional means for
demonstrating the pertinence of the action undertaken by
the organization.
Generally, organizations openly recognize the need to
evaluate their action. But the challenge facing evaluation
is how to develop it so that it respects the specific nature of
organizations and their authenticity, both as pertains to the
subject of the evaluation and the method used. This
should be feasible if the actors develop a common
definition of the target of the evaluation based on concrete
objectives; evaluation use; and precise, known, accepted,
and shared criteria.
Appendix: Evaluation in Evolution
Evaluation in the social sphere has evolved, as has the
rest of society. It has developed from the clash of ideas
on social issues.
The evaluation undertaken by researchers from
organizations that develop CAPC programs is termed a
fourth-generation evaluation. It puts forward a process
entailing cooperation between the various partners
involved in the evaluation from a constructivist
perspective in which choices are open rather than defined
beforehand. This type of evaluation follows a path of
exploration → discovery → induction. In other words, it
attempts to understand the program as its workers and
participants perceive it, and not just according to categories
predetermined by researchers. This evaluation also applies
a combination of approaches and methodologies rather
than following a single model. It enables us to use both a
quantitative/statistical approach and in-depth interviews,
for example, in order to grasp the situation.
While the road to fourth-generation evaluation has been
long, we will limit ourselves to sketching it here. It is
important to understand the phases in its development in
order to grasp its nature.
The First Wave: The Toddler Stage
Evaluation remained in its infancy until the end of the
fifties. Its development was helped along by a number of
major events, such as the two World Wars. It was also
linked to the development of industrial productivity. At
the same time, the fields of psychology and professional
counseling were developing many kinds of tests for
evaluating individuals.
The Second Wave: Large-scale Evaluation
Under the Kennedy and Johnson administrations in the
sixties, a wide variety of social interventions were
implemented in the United States, signaling the
introduction of large-scale social programs. Many of these
programs, such as the War on Poverty, targeted the
advancement of poor and Black Americans and their
integration into American society.
This was fertile ground for evaluation. The designers of
these huge social programs needed information about the
way their activities were progressing in order to
successfully review the programs. The agencies funding
the programs also wanted information about program
effects that could guide them in making decisions and
setting policy.
These needs for large-scale, accurate information favoured
what has been termed positivist methods, inspired by the
natural sciences: standardized data collection; large
samples; and quantitative, systematic, and technical data.
The goal was to prove unequivocally the success or failure
of a program and to establish, beyond a shadow of a doubt,
cause-effect relationships between programs and their
outcomes. In this context, the individual heading up the
research (the evaluator) serves as expert and judge.
The second wave of evaluation (initially based on goals
but later on decisions, results, and problems) provided
decision-makers with standardized and technically
rigorous information on predetermined processes and
results. Nevertheless, it was criticized for being often
superficial and failing to account for setting-related
variations within a program.
The Third Wave: Evaluation More Sensitive to Communities
While quantitative evaluation from the second wave
continued to dominate during the seventies, more and
more attempts were made (especially towards the end of
the decade) to pull program workers and clients into
evaluation. There was progressive movement to make
evaluation sensitive to the characteristics of local
communities; it was becoming increasingly clear that
nonquantitative/nonstatistical methods were needed to
appreciate how programs ran and the effects they
produced. The need was seen to let the people living the
experience talk about it; qualitative methods, inspired by
fields such as ethnology, were needed. The evaluator's role
also shifted from that of expert to stakeholder in a
qualitative-oriented approach.
Unfortunately, this model lacked appropriate and credible
means for gathering or generalizing research results from
different communities. This represented a serious
shortcoming when it came to making decisions about
program continuation and funding.
Nevertheless, qualitative methods continued to gain
ground and were increasingly seen as a means for taking
into account the viewpoint of the individuals involved in
the action: staff, clients, and program participants.
The Fourth Wave: A Mixed, Integrated Approach
Fourth-generation evaluation grew out of this evolution
of ideas and changing times.
It was becoming more and more apparent that indicators
such as program costs, rate of use of services, and outcome
rating scales were not fully adequate to account for,
explain, and justify the real reasons underlying an
intervention, because they couldn't bring out all the real
problems experienced. There was recognition that
administrative norms and professional standards were not
always enough to ensure program relevance or quality.
The need to get beyond the debate over quantitative versus
qualitative and over inclusion or exclusion of the
evaluator in the evaluation had become apparent. It was
time to break the mind-sets that made quantitative and
qualitative approaches incompatible, and work towards a
mixed, integrated approach to evaluation. It was also felt
that the actors (the individuals who implement the
action and those who receive it) had to be involved in
the evaluation process. This is what has been termed
constructivist evaluation because, although it is based on
the perception of individuals (that is, subjective data), it
uses a valid, scientific process. This type of evaluation is a
process of program construction or outcome definition that corresponds to the values of the various organizations involved in the evaluation, including the evaluator.
People's perception of events (subjective data) has long
been used in the field of social research. For example, it's
the main method used in conducting opinion polls. Its
current use in evaluating program outcomes and their
impacts on participants is, however, novel.
^ Example
CAPC Evaluation
The evaluation applied to programs under CAPC is fourth
generation in many regards. Throughout, it seeks out the
points of view of the various actors so that the evaluation
reflects their perception of the program and their relation
to it. The approach is also mixed, integrating the analysis of
statistical/quantitative and qualitative data. In one
component of the evaluation, organization workers are
called on to fill out questionnaires about project
implementation and functioning as experienced locally.
This component attempts to describe projects and to
determine to what extent they reach the target clientele. It
also provides for analyzing program implementation,
operation, and partnership between the community
organizations taking part in the project.
Furthermore, a wider, more macroscopic analysis of the
organizational relations between CAPC projects, regional
authorities, and the like could be used to indicate how
resources are allocated to the various projects in the region
and to look at the changes that these projects have brought
about in the regional or local services structure.
Moreover, this evaluation process includes a qualitative
component that complements and enriches the
information yielded by statistical data analysis. For the
purposes of this part, some twenty projects were selected
from across Quebec according to different criteria relating
to their clienteles or specific problems. In-depth interviews
were conducted with the project heads, workers,
participants, and partners from community organizations,
which allowed expression of different points of view and
provided a portrait of the implementation and functioning
of these projects as seen through the eyes of the various
actors. Moreover, to gain a more tangible understanding of
how the projects were run, the researchers directly
observed interventions, workshops, and the like on-site.
Analysis of qualitative and subjective data does not follow
a plan that has been completely organized and thought-out
by researchers in advance. There are no predetermined
choices or hypotheses to be checked; researchers interview,
listen, and observe. Based on what they hear and observe,
they can get at the reality of the project.
Integrating the various aspects enables us to draw a
complete portrait of projects by combining the recognition
and comprehension of the differences and particularities
that characterize the various settings with quantitative
data. This twin approach enables us to portray a more
realistic image of projects from the viewpoints of each of
the actors and by leaving choices open instead of
predefined.
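Purely as an illustration of this twin approach, the sketch below shows one way quantitative indicators and coded qualitative themes might be merged into a single portrait of each project. Every project name, count, and theme here is invented for the example; it is not data from the CAPC evaluation.

```python
# Toy sketch: merging quantitative indicators with coded qualitative
# themes to build one descriptive portrait per project.
# All names, counts, and themes below are invented examples.

quantitative = {
    "Project A": {"participants": 120, "sessions_held": 34},
    "Project B": {"participants": 45, "sessions_held": 50},
}

# Themes coded from (hypothetical) interviews and on-site observation.
qualitative_themes = {
    "Project A": ["strong parent self-help network", "recruitment difficulties"],
    "Project B": ["close ties with community partners"],
}

def project_portrait(name):
    """Combine both data sources into a single descriptive summary."""
    stats = quantitative[name]
    themes = qualitative_themes.get(name, [])
    lines = [f"{name}: {stats['participants']} participants over "
             f"{stats['sessions_held']} sessions"]
    lines += [f"  theme: {t}" for t in themes]
    return "\n".join(lines)

print(project_portrait("Project A"))
```

The point of the sketch is only that neither source replaces the other: the counts say how much happened, while the coded themes say how it was experienced.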
This type of evaluation translates into a cooperative
process involving the various actors (practitioners, project
workers, participants, community partners, and
researchers) to produce an evaluation that is an integral
part of organizational action.
SUMMARY
Fourth-generation evaluation offers mixed and integrated approaches that combine statistical/quantitative methods with qualitative methods. It focuses attention on the quality and validity of the research process, while striving to develop a true picture of the program as seen by program workers and participants instead of according to categories determined beforehand by researchers.
Glossary
Deduction
A process of reasoning in which conclusions can be drawn from preliminary assumptions (premises). It's a means of predicting using theories.

Effectiveness evaluation
Provides information on the usefulness of methods used in a program to achieve objectives. It provides for making the link between the results achieved and the objectives set by the organization and between the services offered and client needs.
Efficiency evaluation
Provides for balancing program or intervention results with costs (human, material, and financial resources).

Formative evaluation
Evaluation of the process, implementation, and functioning of a program. Provides for determining if the existing means facilitate achieving objectives. Makes it possible to fine-tune the program in progress and can contribute to clarifying program objectives. This method is also a powerful tool in educating and training participants.
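The glossary's notion of efficiency (balancing results with costs) can be made concrete with a minimal calculation of cost per unit of outcome. This is only one possible expression of efficiency, and the dollar and family figures below are invented for the example.

```python
# Illustrative sketch: efficiency as cost per unit of outcome.
# The figures are invented; a real evaluation would use program data.

def cost_per_outcome(total_cost, outcomes_achieved):
    """Cost (in dollars) of producing one unit of outcome."""
    if outcomes_achieved == 0:
        raise ValueError("no outcomes: efficiency is undefined")
    return total_cost / outcomes_achieved

# e.g. a hypothetical program that spent $40,000 and reached 160 families
print(cost_per_outcome(40_000, 160))  # 250.0 dollars per family reached
```

Comparing such ratios across program variants is one simple way of weighing results against the human, material, and financial resources consumed.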
General objectives
Statement of a result targeted by the program.

Impact evaluation
Provides for looking at a program's secondary effects, whether anticipated or not, and at a program's impact on the community, not just the participants.

Induction
Form of reasoning that proceeds from effects to the cause, that is, using empirical data to find meaning and consistency.

Needs evaluation
Used to estimate the type, intensity, and extent of problems in a community and to identify the population's needs with respect to services in order to develop new programs or interventions.

Qualitative methods
Set of approaches (case studies, participant observation) and techniques (document analysis, interviews) aimed at interpreting the activity, intervention, or program from the point of view of the actors. Uses the judgment and opinions of the actors.
Quantitative methods
Provides for collecting a quantity of objective (measurable) data, such as statistics, based on a predetermined model.

Reliability
Quality of a technique or measuring instrument to produce similar results when used on different occasions or by different people.

Specific objectives
Definite goals or aims serving as guides in achieving changes within a target population. A specific objective must be measurable and verifiable; it should also be revised and adjusted regularly.

Summative evaluation
Evaluation of the effects of an intervention or program. Resembles a balance sheet or comparison of the situations before and after a program or certain activities. Allows assessment of a program's performance in terms of effectiveness and efficiency.

Validity
Capacity of a technique or instrument to measure what it is actually supposed to measure.
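To make the reliability definition concrete: one common way of checking that an instrument "produces similar results when used on different occasions" is test-retest correlation between two administrations of the same questionnaire. The sketch below is illustrative only; the scores are invented and `pearson_r` is a helper written for this example.

```python
# Hedged sketch: test-retest reliability estimated as the correlation
# between two administrations of the same instrument.
# Scores are invented; a real study would use collected data.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two lists of scores."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 14]   # first administration
time2 = [13, 14, 10, 19, 15]  # same respondents, some weeks later
print(round(pearson_r(time1, time2), 2))  # 0.98: similar results both times
```

A high correlation suggests reliability, but says nothing about validity: an instrument can give the same wrong answer every time, which is why the two qualities are defined separately.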
Further Reading on Evaluation
Alter, C. and W. Evens (1990). Evaluating Your Practice. A guide to Self-Assessment. New York, Springer Publishing Co.
Anderson, S.B. and S. Ball (1978). The Profession and Practice of Program Evaluation. San Francisco, Jossey-Bass Publ.
Beaudry, J. and B. Gauthier (1992). "L'évaluation de programme", in B. Gauthier (ed.), Recherche sociale - De la problématique à la collecte des données, Presses de l'Université du Québec, pp. 425-252.
Bingham, R.D. and C.L. Felbinger (1989). Evaluation in Practice: A Methodological Approach. New York, Longman.
Blais, R. (1985). Collaborer à l'évaluation de programme: Oui, mais à quelles conditions? Laboratoire de Recherche en Écologie Humaine et Sociale, Université du Québec à Montréal.
Brinkerhoff, R.O. et al. (1983). Program Evaluation: A Practitioner's Guide for Trainers and Educators. Boston, Kluwer-Nijhoff Publishing.
Chambers, D.E., K. Wedel and M.K. Rodwell (1992). Evaluating Social Programs. Toronto, Allyn and Bacon.
Cronbach, L.J. (1982). Designing Evaluations of Educational and Social Programs. San Francisco, Jossey-Bass Publ.
Daniel, C. (1992). L'évaluation d'un organisme social et communautaire. Guide pour la réalisation d'un bilan annuel. Centre de formation populaire.
Guba, E.G. and Y.S. Lincoln (1981). Effective Evaluation : Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. San Francisco, Jossey-Bass Publ.
Guba, E.G. and Y.S. Lincoln (1989). Fourth Generation Evaluation. Newbury Park, Sage Publications.
Hébert, Y.M. (1986). "Naturalistic Evaluation in Practice: A Case Study", in D.D. Williams (ed.), Naturalistic Evaluation, New Directions for Program Evaluation. San Francisco, Jossey-Bass Publ.
Kettner, P.M., R.M. Moroney and L.L. Martin (1990). Designing and Managing Programs. An Effectiveness-Based Approach. Newbury Park, Sage Publications.
Lecomte, R. and L. Rutman (1982) (editors). Introduction aux méthodes de recherche évaluative. Université de Carleton, Ottawa. Distributed by Les Presses de l'Université Laval.
Lefrançois, R. (1991). Dictionnaire de la recherche scientifique. Les éditions Nemésis.
Légaré, J. and A. Demers (1993) (editors). L'évaluation sociale: savoirs, éthique, méthodes. 59th Congress of ACSALF held in Sherbrooke in May 1991. Éditions du Méridien.
Love, A J. (editor). (1991). Evaluation Methods Sourcebook. Ottawa, Canadian Evaluation Society.
Marceau, R., D. Otis and P. Simard (1992). "La planification d'une évaluation de programme", in R. Parenteau (éd.), Management public, Presses de l'Université du Québec, pp. 445-479.
Marceau, R., P. Simard and D. Otis (1992). "La gestion de l'évaluation de programme au Gouvernement du Québec", in R. Parenteau (éd.), Management public, Presses de l'Université du Québec, pp. 481-501.
Marsden, D. and P. Oakley (1990). Evaluating Social Development Projects. Oxfam Development Guidelines.
Mayer, R. and F. Ouellet (1991). "La recherche évaluative", in Méthodologie de recherche pour les intervenants sociaux, Gaëtan Morin éditeur, pp. 233- 267. (A second revised edition is being prepared).
Meyers, W.R. (1981). The Evaluation Enterprise. San Francisco, Jossey-Bass Publ.
Morris, L. Lyons and C. Taylor Fitz-Gibbon (1988). Evaluator's Handbook. Center for the Study of Evaluation, University of California, Los Angeles. Sage Publications, 2nd edition.
Pardeck, J.T. (1996). Social Work Practice. An Ecological Approach. Westport, Connecticut, London, Auburn House.
Peled, E. and D. Davis (1995). Groupwork with Children of Battered Women: A Practitioner's Manual. Interpersonal Violence: The Practice Series. Sage Publications.
Pietrzak, J., M. Ramler, T. Renner, L. Ford and N. Gilbert (1990). Practical Program Evaluation. Sage Sourcebooks for the Human Services Series 9. Sage Publications.
Rossi, P.H. (1982). Standards for Evaluation Practice. San Francisco, Jossey-Bass Publ.
Rossi, P.H. and H.E. Freeman (1993). Evaluation: A Systematic Approach. 5th edition. Newbury Park, Sage Publications.
Saint-Jacques, M.-C., F. Ouellet and J. Lindsay (1994). L'alliance de l'évaluation et de la pratique en service social des groupes. Cahiers du service social des groupes, n° VIII, École de service social, Université Laval.
Seidl, F.W. (1995). "Program Evaluation" in Encyclopedia of Social Work, 19th edition, Washington, NASW.
Von Schoenberg, B. (1985). Les points de vue des clients et des citoyens: leur place dans l'évaluation des programmes. Série études et analyses n° 19. Service de l'Évaluation des programmes de services sociaux, Direction de l'évaluation des programmes, ministère des Affaires sociales, Gouvernement du Québec.
Weiss, H.B. and F.H. Jacobs (editors) (1988). Evaluating Family Programs. New York, Aldine de Gruyter.
Zuniga, R. (1994). L'évaluation dans l'action. Collection INTERVENIR. Les Presses de l'Université de Montréal.
Zuniga, R. (1994). Planifier et évaluer l'action sociale. Collection INTERVENIR. Les Presses de l'Université de Montréal.
To order:
Centre de recherche sur les services communautaires, Université Laval, Bureau 2446, Pavillon Charles-De Koninck, Québec (Quebec) G1K 7P4
This series, "Program Evaluation for Organizations under CAPC (Community Action Program for Children)", comprises three volumes:
1. Introductory Manual
2. Evaluation Tools for Enhancing Program Quality
3. Presentation of Evaluation Guides
Telephone: (418) 656-2674
Fax: (418) 656-7787
E-mail: crsc@crsc.ulaval.ca
W3: http://www.crsc.ulaval.ca
The production of these three documents has been subsidized by the Community Action Program for Children, Health Canada, with the agreement of the Province of Quebec.